Comparing three UV wavelengths for pre‐exposing Gafchromic EBT2 and EBT3 films
Gafchromic films are used for X-ray dose measurements during diagnostic examinations and have begun to be used for three-dimensional X-ray dose measurements in computed tomography, taking advantage of their high-resolution characteristics. However, the problem of unevenness in Gafchromic film active layers needs to be resolved. Double exposures using X-rays are performed in therapeutic radiology, although this is difficult for a diagnostic examination because of the heel effect. Thus, it has been suggested that ultraviolet (UV) radiation be used as a substitute for X-rays. However, the appropriate UV wavelength has not been determined, so we conducted this study to determine one. UV peak wavelengths of 365 nm (UV-A), 310 nm (UV-B), and 245 nm (UV-C) were used to irradiate EBT2 and EBT3 films. Films were irradiated at each UV wavelength for 5, 15, 30, and 60 min, and irradiation was then repeated every 60 min up to 360 min. Gafchromic films were scanned after every irradiation using a flatbed scanner. Images were split into RGB channels, and the red images were analyzed using ImageJ, version 1.44, image analysis software. A region of interest (ROI) one-half inch in diameter was placed in the center of the subtracted Gafchromic film images, and UV irradiation times were plotted against mean pixel values. There were reactions on the front and back of Gafchromic EBT3 and on the back of Gafchromic EBT2 with UV-A and UV-B, whereas UV-C produced only minimal reactions on either side of Gafchromic EBT2 and EBT3. The UV-A and UV-B wavelengths should be used. PACS number(s): 87.53 Bn
I. INTRODUCTION
Gafchromic films are used for X-ray dose measurements in diagnostic radiology, including computed tomography (CT) and interventional radiology (IVR), as well as for quality control (QC) and quality assurance (QA). For CT dose measurements, three-dimensional dosimetry methods have been developed, such as a sheet-roll phantom (1) or a half-cylindrical phantom with Gafchromic films. (2) Gafchromic film uniformity is an important consideration for these high-resolution measurements. Gafchromic film nonuniformity errors that arise because of the unevenness of active layer thicknesses can affect the measured doses. In the diagnostic X-ray range, film density changes only slightly with exposure. Thus, signal data for X-ray exposures are easily affected by these nonuniformity errors.
A double-exposure technique is used to reduce these nonuniformity errors. (3) However, it is difficult to provide homogeneous X-ray exposure over a wide area, such as a 14 × 17 inch format. It is known that Gafchromic films react to certain ultraviolet (UV) rays, (4) and Gafchromic films have been used to measure UV doses using this reaction. (5) In general, UV irradiation of Gafchromic films was originally considered taboo because it introduces density noise for X-ray measurements. (6) However, nonuniformity errors can be revealed by using UV irradiation as a substitute for X-rays in a double-exposure technique: UV light can be irradiated uniformly over a wide area, and nonuniformity errors can then be reduced by a subtraction method. In a preliminary study, we performed UV (360 nm) exposure (0.018 mW/cm²) as a double-exposure technique for Gafchromic EBT and obtained good results. (7) Because unevenness in the thickness of the active layer can cause true data to appear as noise, UV rays were irradiated uniformly as a double-exposure technique with the aim of removing nonuniformity errors. An effect was confirmed for homogeneity improvement by UV irradiation when using Gafchromic EBT, (7) although the effects were unknown for Gafchromic EBT2 and EBT3. In these films, a yellow dye is included in the active layer to reduce nonuniformity errors. A study has shown that the uniformity of Gafchromic EBT2 with a single red channel in a double-exposure technique (pre-irradiation technique) is equal to that of a triple-channel method. (8) Thus, UV could be used for pre-exposure to improve uniformity. It is still necessary to confirm the reaction to UV rays so that we can determine whether this yellow dye together with UV exposure enhances the reduction in nonuniformity errors. However, there is a problem with UV irradiation of Gafchromic films: the sensitivity of the Gafchromic film active layer to UV rays needs to be clarified. Because UV wavelengths range from 200 to 400 nm, the most suitable or best-adapted wavelength needs to be determined. Therefore, the aim of this study was to find the suitable UV band (A, B, or C) to which the Gafchromic EBT2 and EBT3 active layers are most reactive.
In general, UV rays are divided into UV-A, UV-B, and UV-C, depending on the UV wavelength. (9) UV-A wavelengths are from 315 to 400 nm, UV-B wavelengths are from 280 to 315 nm, and UV-C wavelengths are from 100 to 280 nm. (9) In this study, we used UV-A of 365 nm, UV-B of 310 nm, and UV-C of 245 nm, and the sensitivities of Gafchromic films were compared.
II. MATERIALS AND METHODS
A. Gafchromic films and UV lamps
Peak wavelengths of 365 nm (UV-A), 310 nm (UV-B), and 245 nm (UV-C) of UV light were used to expose two different Gafchromic films. The UV lamp that generated a peak wavelength of 365 nm was a black light, NEC FL10SBL (10 W) (NEC Lighting Ltd., Tokyo, Japan). The UV lamp that generated a peak wavelength of 310 nm was a UV-B chemical lamp, FL-10E (10 W) (Kyokko Denki Co. Ltd., Tokyo, Japan). The sterilization lamp that generated the 245 nm peak wavelength was a Toshiba GL10SBL (10 W) (Toshiba Lighting & Technology Corp., Kanagawa, Japan). Two different Gafchromic films (both front and back sides) were used in this study: Gafchromic EBT2 (Lot # 02171403) and Gafchromic EBT3 (Lot # 04011401) (Ashland Inc., Covington, KY). EBT2 and EBT3 are transmission-type films; however, reflection-mode scanning was performed because its nonuniformity errors were small. (10) The front side of a Gafchromic EBT2 film is UV protected, but the back side is not. (11) Gafchromic EBT3 is not a UV-protected film. (12)
B. UV exposure
Our experiments were conducted at night so that there was no interference from solar UV rays. The fluorescent lamps in the room were exchanged for UV-cutting lamps. Cutting of the Gafchromic films was performed under an incandescent lamp. Because UV rays are harmful to the human body, (13,14) the area surrounding the exposure setup (the 245 nm germicidal lamp, the 310 nm chemical lamp, and the 365 nm black light) was shielded with acrylic plates (Comoglas CG UV40 P, 0.3 cm thickness, Lot # 140406C B) (Kuraray Co. Ltd., Tokyo, Japan). Each Gafchromic film was cut to 9 cm (long axis) by 4.5 cm (short axis). Both sides of Gafchromic EBT2 and EBT3 were placed on an acrylic board 3 mm thick (Fig. 1). Because each film was irradiated with each type of UV light, three sets were prepared.
Frontal and lateral view arrangements for UV exposure to Gafchromic films are shown in Fig. 2. The distance between a UV source surface and an optical detector surface was 72 cm. A UV meter probe was placed in the center of an exposure area and UV light strength was measured. UV ray strength was measured using a UVR-300 with a UD-360 probe (365 nm) and a UD-250 (245 nm) probe (Topcon Technohouse Corp., Tokyo, Japan). UV rays of 310 nm were measured by a UV Light Meter UV-340A (Mother Tool Corp., Nagano, Japan).
C. Image scanning
Each UV source was used for exposures of 5, 15, 30, and 60 min, and exposure was then repeated every 60 min up to 360 min. After each exposure, scanning was done with a flatbed scanner (Epson ES-10000G, Seiko Epson Corp., Nagano, Japan) and images were acquired using Adobe Photoshop CS2 (Adobe Systems Inc., San Jose, CA). Scanning was done at 48 bits and 100 dpi resolution in RGB mode with a PPC film (CR-PP686) (3M Co., St. Paul, MN), using a liquid-crystal protective film (LCD-230W) (Sanwa Supply Inc., Okayama, Japan) to prevent moiré artifacts (Newton's rings). (15) The Gafchromic films were always scanned in the same direction (portrait). The films were placed near the center of the scanner, reproducibly in the same position each time, to avoid scanner nonuniformity errors, as shown in Fig. 1. (16) Preparation and scanning of the Gafchromic films were done in a room in which the temperature was kept within 21°C to 25°C.
D. Image analysis
Only the red channel was used for image analysis. For each scanned image, the pixel-value data of the film before UV exposure were subtracted to identify the change in pixel values produced by UV irradiation.
A region of interest (ROI) one-half inch in diameter was placed at the center of a Gafchromic film and the mean ± standard deviation (SD) pixel value was measured (Fig. 1). Graphs were prepared with UV exposure times vs. pixel values, from which the UV light that provided the most efficient density change was chosen. Digitized image data were analyzed with ImageJ, version 1.44 (National Institutes of Health, Bethesda, MD), image analysis software for Macintosh.
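The analysis pipeline above lends itself to scripting. The following is a minimal sketch of the same red-channel subtraction and ROI measurement; the study itself used ImageJ 1.44, so the Python tooling (NumPy, tifffile) and the file names here are illustrative assumptions, not the authors' workflow.

```python
# Minimal sketch of the red-channel subtraction analysis.
# The original study used ImageJ 1.44; this Python equivalent and the
# file names are hypothetical illustrations only.
import numpy as np
import tifffile  # reads 48-bit (16-bit-per-channel) RGB TIFF scans

DPI = 100              # scan resolution stated in the paper
ROI_DIAMETER_IN = 0.5  # one-half inch circular ROI

def red_channel(path: str) -> np.ndarray:
    """Return the red channel of an RGB scan as float64."""
    return tifffile.imread(path)[..., 0].astype(np.float64)

def roi_mean_sd(image: np.ndarray) -> tuple[float, float]:
    """Mean and SD of pixel values inside a centered circular ROI."""
    h, w = image.shape
    radius_px = ROI_DIAMETER_IN * DPI / 2.0
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2 <= radius_px ** 2
    return float(image[mask].mean()), float(image[mask].std())

# Subtracting the pre-exposure scan leaves only the UV-induced change,
# cancelling fixed scanner and film-placement nonuniformities.
difference = red_channel("scan_uv_360min.tif") - red_channel("scan_unexposed.tif")
mean, sd = roi_mean_sd(difference)
print(f"ROI mean pixel-value change: {mean:.2f} ± {sd:.2f}")
```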
III. RESULTS
UV-A, UV-B, and UV-C were used to irradiate the front side of Gafchromic EBT2. When UV-A was used for exposure for 360 min, the mean ± SD pixel values were highest, at 1,328.99 ± 133.15. The mean pixel values with UV-B and UV-C were 856.11 ± 146.59 and 516.77 ± 112.76, respectively. These changes were too small to be regarded as reactions. That is, on the front side of Gafchromic EBT2, no reaction was detected over 360 min even with UV-A light at 0.074 mW/cm² (1,604.88 mJ/cm²), because of the UV-protective layer (Fig. 3). Next, UV was used to irradiate the back side of Gafchromic EBT2. The mean ± SD pixel values after exposure with UV-A, UV-B, and UV-C for 360 min were 9,226.63 ± 182.04, 9,219.77 ± 153.32, and 1,177.51 ± 162.77, respectively. The reactions due to UV-A and UV-B were high, which provided effective results, whereas this was not the case with UV-C (Fig. 4). Based on these results, UV exposure of Gafchromic EBT2 was effectively achieved by irradiating the back side of this film, which is not UV protected, with UV-A or UV-B.
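As a consistency check on the quoted fluence (our own arithmetic, not a figure from the paper), the time-integrated UV dose is intensity multiplied by time:
$0.074\ \mathrm{mW/cm^2} \times 360\ \mathrm{min} \times 60\ \mathrm{s/min} \approx 1{,}598\ \mathrm{mJ/cm^2},$
which agrees with the stated 1,604.88 mJ/cm² once the intensity is taken at its unrounded value (≈ 0.0743 mW/cm²).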
For Gafchromic EBT3, high reactions were obtained on both sides with UV-A and UV-B; after 360 min the mean pixel values were 8,222.01 (front) and 8,290.66 (back) with UV-A, and 8,071.71 (front) and 8,198.06 (back) with UV-B. Comparing the types of UV rays, there were reactions on the front and back sides of Gafchromic EBT3 and on the back side of Gafchromic EBT2 with UV-A. There were also reactions on the front and back sides of Gafchromic EBT3 and on the back side of Gafchromic EBT2 with UV-B. However, UV-C resulted in minimal reactions on either side of Gafchromic EBT2 and EBT3. From these results, we considered that UV-A or UV-B could be used to effectively irradiate Gafchromic EBT2 and EBT3.
IV. DISCUSSION
A. Film reactions
We found different results when UV exposure was applied to the front or the back side of Gafchromic EBT2. Because UV protection is present on the front side of this film, reactions were low for 360 min of exposure to any type of UV source. However, when exposure was applied to the back side, very high reactions were generated. The mean pixel values were 1,328.995 on the front side and 9,226.631 on the back side when this film was irradiated with UV-A for 360 min; the value on the back side was 6.94 times higher. The mean pixel value was 856.111 on the front side and 9,219.774 on the back side with UV-B: 10.77 times higher on the back side.
When Gafchromic EBT3 was irradiated for 360 min with UV-A and UV-B, high reactions were generated that were nearly equal on the front and the back sides, with mean pixel values of 8,222.005 (front) and 8,290.660 (back) with UV-A, and 8,071.713 (front) and 8,198.057 (back) with UV-B. When both sides of Gafchromic EBT3 were compared with the back side of Gafchromic EBT2, the back side of Gafchromic EBT2 showed a higher reaction. As an index of the exposure strength, the exposure time was fixed and the exposure dose was measured.
B. UV protection layer
The front of Gafchromic EBT2 is UV-ray protected; therefore, there were minimal reactions. However, UV-A resulted in greater reactions than those with UV-B and UV-C. The pixel values were 1,328.995 ± 133.151, 856.111 ± 146.588, and 516.766 ± 112.762 after exposure for 360 min with UV-A, UV-B, and UV-C, respectively. That is, a Gafchromic film with a UV ray protection layer can be used by increasing the exposure strength.
C. UV protection for humans
The wavelengths of UV-A are from 315 to 400 nm, those of UV-B are from 280 to 315 nm, and those of UV-C are from 200 to 280 nm. UV rays may affect the human body. To prevent this, a UV exposure box that blocks UV rays was made from acrylic boards. There was no leakage of UV rays to the outside of this box.
D. Yellow dye in Gafchromic EBT2
One purpose of the yellow dye in Gafchromic EBT2 and EBT3 films is that variations in active layer thickness result in density changes, and the dye is used for nonuniformity corrections. Radiation dose information is provided by the red channel, and information on the degree of uniformity is provided by the blue channel; this information is used for nonuniformity corrections. (16,17) However, the uniformity of Gafchromic EBT2 in a single red channel using a double-exposure technique (pre-exposure technique) was equal to that of a triple-channel method. (8) Therefore, the thickness unevenness of the active layer can be brought out by uniformly irradiating with UV rays, as with the UV exposure of Gafchromic EBT film. (7)
E. UV strength
Because we irradiated at a distance of 72 cm using a 10 W fluorescent tube, a relatively long exposure time was necessary to obtain an increase in density. Some increase in density occurred even with exposure for 5 min, and because a difference was observed in comparison with nonexposure dose data, a reaction due to UV rays could be detected. However, the most suitable exposure dose for EBT2 and EBT3 films remains to be determined. With UV lamps of higher exposure strength, exposure could be completed within a short time. Thus, it is necessary to determine the most suitable exposure dose or exposure time.
In this study, the sensitivity of Gafchromic film for UV rays was measured; the most suitable UV ray strength should be evaluated in future studies.
F. Scanning
Irregularities peculiar to individual scanners may occur during image scanning. In addition, irregularities may arise because a liquid-crystal protective film and PPC film are spread between the glass surface of the scanner and the Gafchromic film to reduce moiré (Newton's rings). However, these irregularities can be reduced by subtraction, so that only the changes produced by UV exposure are expressed.
G. UV wavelengths
Because different UV-ray fluorescent tube types were used in this study, the wavelengths emitted had certain peaks and widths. Our results were best in terms of sensitivity with a UV lamp with a UV-A wavelength of 365 nm. However, UV-A wavelengths vary in the range of 315-400 nm and it could not be judged whether the UV rays with the UV lamp we used were the most efficient. It will be necessary to investigate this using a lamp that emits UV rays with wavelengths over a smaller range. It will also be necessary to conduct experiments using a UV ray apparatus (e.g., LED) that generates a specific wavelength to identify the required wavelength.
H. Optical density increases
Optical density continues to increase for a certain time after exposure. (16) Because this study was an evaluation of nonuniformity, the density increase after exposure was not considered.
V. CONCLUSIONS
The UV rays to which Gafchromic EBT3 reacted most were UV-A and UV-B, and both sides of this film provided equal results: both the front and back sides reacted to UV-A and UV-B. For Gafchromic EBT2, the reactions of the two sides were different. When the UV-protected front side of Gafchromic EBT2 was exposed using a 10 W UV tube, it exhibited only a slight reaction. Based on these results, UV exposure brings out nonuniformities, complementing the correction provided by the yellow dye. Uniform UV exposure over the large area of a Gafchromic film could be used as a substitute for a double-exposure technique with X-rays. Thus, the precision of X-ray dose measurements with Gafchromic films would be improved.
Null Geometry and the Penrose Conjecture
In this paper, we survey recent progress on the Null Penrose Conjecture, including a proof of the conjecture for smooth null cones that are foliated by doubly convex spheres.
Sir Roger Penrose argued in 1973 [12] that the total mass of a spacetime containing black hole horizons with combined total area |Σ| should be at least $\sqrt{|\Sigma|/16\pi}$. On the one hand, this conjecture is important for physics and our understanding of black holes. On the other hand, Penrose's physical arguments lead to a fascinating conjecture about the geometry of hypersurfaces in spacetimes. For spacelike (Riemannian) slices with zero second fundamental form the conjecture is known as the Riemannian Penrose Inequality and was first proved by Huisken-Ilmanen [8] (for one black hole) and then by the first author [3], using two different geometric flow techniques. This paper concerns a formulation of the conjecture for certain null hypersurfaces in spacetimes called the Null Penrose Conjecture (NPC). Over the last ten years, there has been a great deal of progress [1,5,9,10,11,14] on the NPC, culminating in a proof of the conjecture in a fair amount of generality for smooth null cones [13]. One surprising fact is that these null hypersurfaces, under physically inspired curvature conditions on the spacetime, have monotonic quantities, including cross-sectional area [6], notions of energy [9,10,11,14], and, as we'll see with Theorem 1 below, a new notion of mass [13].
The Null Geometry of Light
The theory of General Relativity emerges from Albert Einstein's beautiful idea that matter in a physical system curves the intertwining fabric of both space and time. A spacetime is a four dimensional manifold with a metric of signature (3,1), meaning that the metric on each tangent plane is isometric to the Minkowski spacetime $\mathbb{R}^4_1 := (\mathbb{R}^4,\ -dt^2 + dx^2 + dy^2 + dz^2)$. The minus sign in the metric implies the existence of null vectors, vectors with zero length (such as (1,1,0,0)), even though the vectors themselves are not zero. A null hypersurface is a codimension one submanifold of a spacetime whose three dimensional tangent planes are null in one dimension (and hence positive definite in the two other dimensions). For example, if we let $r = \sqrt{x^2 + y^2 + z^2}$ in the Minkowski spacetime, any translation of the downward cone $\Lambda := \{t = -r\}$, as depicted in Figure 1, is a null hypersurface.
Null geometry is counterintuitive in a number of ways. Since one dimension has zero length, null hypersurfaces have zero volume. Furthermore, since the metric is not invertible, the Riemann curvature tensor of the null hypersurface is not well defined. Also, the vector which is perpendicular to a null hypersurface must also be tangent (hence null). This normal-tangent duality complicates the notion of what the second fundamental form of a null hypersurface should be defined to be (the classical tool for analyzing the "shape" of substructures). Fortunately, for a 'conical' null hypersurface $\Omega \cong S^2 \times \mathbb{R}$ called a null cone, where $S^2$ accounts for the two positive dimensions, Ω can be studied vicariously through the geometry of its spherical cross-sections, including their Gauss and Codazzi equations. For our sacrifice in intuition to this normal-tangent duality we do gain some advantages. A direct consequence of a normal vector L being also tangent is that all curves along L must be geodesics. Thus, a null hypersurface can be thought of as a collection of 'light rays' in the framework of General Relativity. Imagine standing on some 2-sphere $\Sigma_0$ in a spacetime, for example the surface of the Earth, and collecting all light rays 'hitting the surface' at a particular point in time. The resulting set constructs (or recovers) a null cone Ω, reducing the usually complicated system of PDEs associated to flows in spacetimes to an analysis of ODEs. From standard uniqueness of ODEs, any two normal null flows of surfaces off of $\Sigma_0$ must result in two foliations of the same null cone. Moreover, from standard existence of ODEs, wherever there is a smooth spacelike 2-sphere $\Sigma_0$ there will be a null cone off of it, at least in a neighborhood of the sphere.
The Expanding Null Cone
The thermodynamics of black holes rests upon Hawking's area theorem [6] which states that in spacetimes satisfying the Dominant Energy Condition (or DEC) the area of a cross section of a black hole event horizon is non-decreasing. Similarly in our context, we will briefly explore how the DEC, which is a local curvature constraint modelling non-negative energy, ensures that null cones reaching null infinity can only be foliated by spheres of increasing area. To do so, we first need a quantity also needed to state our main result, Theorem 1.
Definition 1. For a spacelike 2-sphere Σ, the expansion is given by $\sigma := \langle -\vec H, L\rangle$, where $\vec H$ is the mean curvature vector, a normal vector measuring the mean extrinsic curvature of Σ.
For a null flow off of Σ along L, σ comes from $\frac{d}{ds}\, dA = \sigma\, dA$, where dA is the area form (i.e., an 'element of area') on Σ.
Here the DEC comes into play, by way of the Raychaudhuri equation (see [6]), ensuring $d\sigma/ds \le 0$ along all geodesics. Hence, the only way to get standard asymptotics at infinity, which by any reasonable notion should have expanding 2-spheres (i.e., "$\sigma(\infty) > 0$"), necessarily restricts our choice of $\Sigma_0$ to have strictly positive expansion. In fact, $\sigma > 0$ on all of Ω enforces that all foliations must have expanding area.
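To spell out the last step (an elementary consequence of Definition 1 and the flow equation above): integrating the linear ODE for the area element along each null generator gives
$dA(s) = dA(0)\, e^{\int_0^s \sigma\, ds'},$
so wherever $\sigma > 0$, the area element, and hence the area of every leaf of every foliation, is strictly increasing.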
With the null flow vector L having zero length we also forfeit our intuitive notion of measuring the speed of a flow. However, with an expanding null cone Ω, σ offers a convenient replacement.
Null cones that reach null infinity also have fundamental physical significance beyond the monotonicity of cross-sectional area. In isolated physical systems, such as a cluster of stars or a black hole, we expect the curvature induced by localized matter to settle "far away" back to flat Minkowski spacetime. In our context, Mars and Soria [10] introduced the notion of an asymptotically flat null cone, whereby the geometry of Ω approaches that of a downward cone of Minkowski at null infinity in a suitable sense. These asymptotically flat null cones give us the ability to measure the total energy and mass of a system, which we'll need in order to state the NPC.
Total Energy and Our Main Example
In 1968, Hawking [7] published a new mechanism aimed at capturing the amount of energy in a given region using the curvature of its boundary Σ.
Definition 2. The Hawking Energy is given by
$E_H(\Sigma) := \sqrt{\frac{|\Sigma|}{16\pi}}\Big(1 - \frac{1}{16\pi}\int_\Sigma \langle \vec H, \vec H\rangle\, dA\Big). \qquad (1)$
For example, for any cross-section of the downward cone $\Sigma \hookrightarrow \Lambda := \{t = -r\}$ in Minkowski, the Gauss equation identifies a beautifully simple relationship between the intrinsic and extrinsic curvature, $K = \frac{1}{4}\langle \vec H, \vec H\rangle$ (see [13]), where K is the Gauss curvature of Σ. From the Gauss-Bonnet Theorem, $\int_\Sigma \langle\vec H,\vec H\rangle\, dA = 4\int_\Sigma K\, dA = 16\pi$, and we therefore conclude that $E_H(\Sigma) = 0$. Thus, all cross-sections, no matter how squiggly, envelop matter content of vanishing energy, as expected of a flat vacuum.
For another, and our main example of a null cone, we go to the one-parameter family of Schwarzschild spacetimes characterized by the metric
$g = -\Big(1-\frac{2M}{r}\Big)dt^2 + \Big(1-\frac{2M}{r}\Big)^{-1}dr^2 + r^2\big(d\vartheta^2 + \sin^2\vartheta\, d\varphi^2\big).$
These spacetimes model an isolated black hole with the parameter M representing total mass. Note that the coordinate r has been chosen so that each sphere of fixed (t, r) is a round sphere of area $4\pi r^2$. Also, when M = 0 we recover exactly the Minkowski metric in spherical coordinates. The reader may have noticed the singularities r = 0 and r = 2M. The singularity at r = 0 is a curvature singularity called the black hole singularity, giving rise to the isolated black hole. We see this black hole is isolated from the fact that the metric approaches the Minkowski metric for large values of r. On the other hand, the singularity at r = 2M is superficial, and can be removed with a change of coordinates. To show this we introduce the ingoing null coordinate v, with $dv = dt + (1-\frac{2M}{r})^{-1}dr$, in which the metric takes the form
$g = -\Big(1-\frac{2M}{r}\Big)dv^2 + 2\,dv\,dr + r^2\big(d\vartheta^2 + \sin^2\vartheta\, d\varphi^2\big). \qquad (2)$
In the transition from M = 0 to M > 0, the downward cones of Minkowski transition to their spherically symmetric counterparts in Schwarzschild, referred to as the standard null cones. In Minkowski, $\Lambda = \{v = 0\}$ for $v = t + r$; from (2) we see the analogous three dimensional slice $\Omega_S := \{v = v_0\}$ (i.e., dv = 0) inherits the metric $r^2(d\vartheta^2 + \sin^2\vartheta\, d\varphi^2)$, assigning positive lengths only to vectors along the two spherical coordinates $(\vartheta, \varphi)$, not r. These coordinates $(r, \vartheta, \varphi) \in \mathbb{R} \times S^2$ identify points on the standard null cone $\Omega_S$.
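One can verify directly that this coordinate change removes the superficial singularity (a standard computation, using the form of v given above): substituting $dv = dt + (1 - \frac{2M}{r})^{-1}dr$ gives
$-\Big(1-\frac{2M}{r}\Big)dv^2 + 2\,dv\,dr = -\Big(1-\frac{2M}{r}\Big)dt^2 + \Big(1-\frac{2M}{r}\Big)^{-1}dr^2,$
recovering the static form of the metric for $r > 2M$, while the left-hand side remains perfectly regular at $r = 2M$.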
The r-coordinate curves are exactly the geodesics that rule $\Omega_S$, allowing us to identify any cross-section by simply specifying r as a function on $S^2$. Similar to Minkowski, the Gauss equation once again simplifies to an intriguingly simple expression for a cross-section $\Sigma := \{r = \omega(\vartheta,\varphi)\} \hookrightarrow \Omega_S$ ([13]):
$K = \frac{1}{4}\langle \vec H, \vec H\rangle + \frac{2M}{\omega^3}. \qquad (3)$
Σ also inherits the simple metric $\gamma = \omega^2\mathring\gamma$ from $\Omega_S$, where $\mathring\gamma$ is the standard round metric on a sphere. So from Gauss-Bonnet,
$E_H(\Sigma) = \sqrt{\frac{1}{16\pi}\int_{S^2}\omega^2\, dS}\;\cdot\;\frac{1}{4\pi}\int_{S^2}\frac{2M}{\omega}\, dS,$
for dS the area form on a round sphere. Some fascinating observations follow from Jensen's inequality. We deduce that $E_H \ge M$, which the reader may recognize as the special relativistic notion that the energy of a particle is always bounded below by its mass. Jensen's inequality also ensures that equality is reached only if $\omega = r_0$, corresponding to the t-slice intersections with $\Omega_S$ (see Figure 4). Moreover, one can show ([10]) that Σ is a round sphere if and only if
$\omega(\vartheta,\varphi) = \frac{r_0}{1 - \vec v\cdot\vec n(\vartheta,\varphi)}$
for some $r_0$ and $\vec v$ inside the unit ball $\bar B^3 \subset \mathbb{R}^3$, with $\vec n(\vartheta,\varphi)$ the unit position vector, giving the energy $E_H(r_0, \vec v) = \frac{M}{\sqrt{1 - |\vec v|^2}}$. This is precisely the observed energy of a particle of mass M traveling at velocity $\vec v$ relative to its observer (with the speed of light set to c = 1).
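To make the Jensen step explicit, here is a sketch of the standard argument behind $E_H \ge M$, assuming the expression for $E_H(\Sigma)$ displayed above. Hölder's inequality on the round sphere (with exponents 3 and 3/2) gives
$4\pi = \int_{S^2} \omega^{2/3}\,\omega^{-2/3}\, dS \le \Big(\int_{S^2}\omega^{2}\, dS\Big)^{1/3}\Big(\int_{S^2}\omega^{-1}\, dS\Big)^{2/3},$
so that
$E_H(\Sigma) = \sqrt{\frac{1}{16\pi}\int_{S^2}\omega^{2}\, dS}\;\cdot\;\frac{M}{2\pi}\int_{S^2}\omega^{-1}\, dS \ \ge\ \frac{M}{2\pi}\cdot\frac{(4\pi)^{3/2}}{4\sqrt{\pi}} = M,$
with equality precisely when ω is constant.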
Using Schwarzschild as an example we can also motivate the notion of total energy and mass for an asymptotically flat null cone Ω. We start by bringing to the attention of the reader that the intrinsic geometry of $\Omega_S$ is identical to that of the downward cone in Minkowski ([14]). This is evident from (2), since the only component 'giving rise' to the Schwarzschild geometry (beyond Minkowski) is $dv^2$, which vanishes for the induced metric on $\Omega_S$. So instead, we may actually account for all round spheres of $\Omega_S$ by intersecting the downward cone of Minkowski by Euclidean hyperplanes (see Figure 1). On the one hand, any family of parallel hyperplanes in Minkowski are given as fixed time slices inside a reference frame (coordinates $(\bar t, \bar x, \bar y, \bar z)$) of an observer traveling at velocity $\vec v$. On the other hand, the ambient geometry of Schwarzschild settles to that of Minkowski, inheriting such characteristics asymptotically. Therefore, aided by Figure 4, we imagine that an asymptotically round foliation of $\Omega_S$ is induced by an asymptotically Euclidean slicing of the spacetime (for which $E_H$ has a verified correlation to total energy [8]). We conclude that an observer 'at infinity' approximates our black hole to a particle (similarly to α of Figure 1) of total energy $\frac{M}{\sqrt{1-|\vec v|^2}}$.
In the general setting, considering an asymptotically flat null cone Ω, it follows that $E_H$ approaches a measure of total energy along an asymptotically round foliation, called a Bondi Energy, $E_B$ (see [11]). Returning to Schwarzschild, we verify that the standard null cones are asymptotically flat.
Black Holes and the Null Penrose Conjecture
Upon further inspection of (2), the reader may have noticed yet another null cone given by the slice $H := \{r = 2M\}$ (see Figure 2). With induced metric $4M^2(d\vartheta^2 + \sin^2\vartheta\, d\varphi^2)$, positive lengths are once again only assigned to vectors along the spherical coordinates, not v. In contrast to $\Omega_S$, any cross-section $\Sigma \hookrightarrow H$ has metric $\gamma = 4M^2\mathring\gamma$. Thus, all cross-sections exhibit the exact same area $16\pi M^2$. In other words, on any cross-section of H, light rays emitted perpendicularly off of the surface remain trapped. Since no material particles travel faster than the speed of light, H indicates the hypersurface from which there is 'no return' upon entering, or the event horizon. As a result of this trapping, the 2-sphere $H \cap \Omega_S$ is the unique cross-section of $\Omega_S$ satisfying $\langle \vec H, \vec H\rangle = 0$, namely with null mean curvature.
Previous Work
Given a spacelike 2-sphere Σ with metric γ, every point on its surface has two positive dimensions in the available four of spacetime spent on tangent vectors. As a result, we can combine the remaining negative and positive dimensions to form a normal null basis $\{L, \underline L\}$. For a cross-section of a null cone Ω we choose L as the normal-tangent to Ω (for example, see Figure 3). This finally allows us to introduce some final data for our main theorem below.
Definition 5. For a smooth spacelike 2-sphere Σ of second fundamental form II and null basis $\{L, \underline L\}$ such that $\langle L, \underline L\rangle = 2$, we define
$\chi^- := \langle -\mathrm{II}, \underline L/\sigma\rangle, \qquad \zeta(V) := \tfrac{1}{2}\langle D_V L, \underline L\rangle = \tau(V) + V\log\sigma,$
where ζ is the connection 1-form (V a tangent vector field of Σ).
In his PhD thesis, Johannes Sauter ([14]) showed for the special class of shear-free null cones (i.e., satisfying $\chi^- = \frac{1}{2}\gamma$) inside vacuum spacetimes, one is able to solve a system of ODEs to yield explicitly the geometry of Ω. This then enables a direct analysis of $E_H$ at null infinity that allowed Sauter to prove the NPC. An observation of Christodoulou (see [14]) also shows that $E_H$ is monotonically increasing along foliations in vacuum if either the mass aspect function $\mu := K - \frac{1}{4}\langle\vec H,\vec H\rangle - \nabla\cdot\zeta$ or the expansion σ remain constant functions on each cross-section. For a black hole horizon $\Sigma_0$, we see from (1) that $E_H(\Sigma_0) = \sqrt{|\Sigma_0|/16\pi}$, making this observation particularly interesting. If one is successful in interpolating $E_H$ from the horizon to the Bondi mass $m_B$ along one of these flows, the NPC would follow for vacuum spacetimes: $\sqrt{|\Sigma_0|/16\pi} = E_H(\Sigma_0) \le \lim_{s\to\infty} m(\Sigma_s) = m_B$. Sauter was able to show for small perturbations of Ω off of the shear-free condition, one obtains global existence of either of these flows and that $E_H$ converges. Unfortunately, one is unable to conclude that the foliating 2-spheres even become round asymptotically, let alone $E_H$ approaching $m_B$. In fact, Bergqvist ([2]) noticed this exact difficulty had been overlooked in an earlier work of Ludvigsen and Vickers ([9]) towards proving the weak NPC, namely $\sqrt{|\Sigma_0|/16\pi} \le E_B$. In 2015, Alexakis ([1]) was able to prove the NPC for vacuum perturbations of the black hole exterior in Schwarzschild spacetime by successfully using the latter of the two flows in Sauter's thesis. Alexakis was once again afforded an explicit analysis of $E_H$ at null infinity. Work by Mars and Soria ([10]) followed soon afterwards in identifying the asymptotically flat condition on Ω to maintain an explicit limit of $\lim_{s\to\infty} E_H(\Sigma_s)$ along geodesic foliations. In 2016 ([11]), those authors constructed a new functional on 2-spheres and showed for a special foliation $\{\Sigma_\lambda\}$ off of the horizon $\Sigma_0$, called geodesic asymptotically Bondi (or GAB), that $\sqrt{|\Sigma_0|/16\pi} \le \lim_{\lambda\to\infty} E_H(\Sigma_\lambda) < \infty$. Thus, for GAB foliations that approach round spheres, this reproduces the weak NPC of Bergqvist ([2]) and Ludvigsen and Vickers ([9]). Unfortunately, as in the aforementioned work of Sauter, Bergqvist, Ludvigsen, and Vickers, there is no guarantee of asymptotic roundness.
Mass Not Energy
These difficulties may very well be symptomatic of the fact that an energy is particularly susceptible to the plethora of ways boosts can develop along any given flow. We expect an infinitesimal null flow of Σ to gain energy due to an influx of matter, analogous to the addition of 4-velocities in Figure 5: $E_3 = E_1 + E_2$. However, with energy being a frame-dependent measurement and no way to discern a reference frame, we are left at the mercy of distortions along the flow. Without knowing, our measurements could experience either an artificial increase in energy $P \to P'$ or decrease $P' \to P$, as depicted in Figure 5. Geometrically, this manifests along the flow in a (local) 'tilting' of Σ (recall Figure 4). From the formula of $E_H(\Sigma)$ for $\Sigma \hookrightarrow \Omega_S$, Jensen's inequality indicates the existence of many flows with increasing $E_H$, yet only t-slice intersections produce the Bondi mass. Not only is this flow highly specialized, it dictates strong restrictions on our initial choice of Σ from which to begin the flow.
Figure 5: Propagation vs Boosts
This is not a problem, however, if appealing instead to mass rather than energy, since boosts leave mass invariant: $M^2 = E^2 - |\vec P|^2 = (E')^2 - |\vec P'|^2 = (M')^2$. Moreover, by virtue of the Lorentzian triangle inequality (provided all vectors are timelike and either all future or all past-pointing), along any given flow the mass should always increase: $M_3 = |(E_1 + E_2, \vec P_1 + \vec P_2)| \ge |(E_1, \vec P_1)| + |(E_2, \vec P_2)| = M_1 + M_2$.
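A toy numerical check of both facts (our own example, with c = 1 and 4-momenta written $(E, \vec P)$): take $P_1 = (5, 3, 0, 0)$ and $P_2 = (5, 4, 0, 0)$, so $M_1 = \sqrt{25 - 9} = 4$ and $M_2 = \sqrt{25 - 16} = 3$. Their sum $P_1 + P_2 = (10, 7, 0, 0)$ has $M_3 = \sqrt{100 - 49} = \sqrt{51} \approx 7.14 \ge M_1 + M_2 = 7$, while any boost of $P_1$ changes $E_1$ and $\vec P_1$ separately but leaves $M_1 = 4$ fixed.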
One may hope therefore, by appealing to a notion of mass instead of energy, a larger class of flows will arise exhibiting monotonicity of mass and physically meaningful asymptotics.
Recent Progress from a new Quasi-local Mass
In search of a mass we return to our favorite model spacetime, the Schwarzschild spacetime with null cones $\Omega_S$. With the tantalizingly simple expression (3), a natural first guess at extracting the black hole mass M is to take
$\tilde m(\Sigma) := \frac{1}{2}\Big(\frac{1}{4\pi}\int_\Sigma \big(K - \tfrac{1}{4}\langle\vec H,\vec H\rangle\big)^{2/3}\, dA\Big)^{3/2}.$
The reason being, irrespective of the cross-section $\Sigma \hookrightarrow \Omega_S$, $\tilde m(\Sigma) = M$ as desired. Amazingly, Jensen's inequality also ensures that $\tilde m \le E_H$ whenever the integrand is non-negative, by Gauss-Bonnet. Unfortunately, upon an analysis of the propagation of this mass on a general null cone, no clear monotonicity properties arise and we're left needing to modify it at the very least. However, on a return to our drawing board $\Omega_S$, one finds that all cross-sections also satisfy $\zeta = d\log\sigma$, equivalently $\tau = 0$, a sufficient condition [15]. Inspired by this, the second author (see [13]) put forward the modified geometric flux ρ and mass m(Σ):
$\rho := K - \tfrac{1}{4}\langle\vec H,\vec H\rangle - \nabla\cdot\tau, \qquad m(\Sigma) := \frac{1}{2}\Big(\frac{1}{4\pi}\int_\Sigma \rho^{2/3}\, dA\Big)^{3/2}.$
The first thing we observe is that the previously desired properties, $m(\Sigma) = M$ for $\Sigma \hookrightarrow \Omega_S$ and $m \le E_H$ if $\rho \ge 0$, are maintained (the second property now following from both the Gauss-Bonnet and the Divergence Theorems). Even better, from a nine-page calculation followed by three different 'integrations by parts', most terms combine to ensure this mass function is non-decreasing in great generality, as summarized by our main theorem. Call a foliation $\{\Sigma_s\}$ doubly convex if each leaf satisfies
$\rho \ge 0 \qquad (4)$
and
$\tfrac{1}{4}\langle\vec H,\vec H\rangle - \tfrac{1}{3}\Delta\log\rho \ \ge\ 0. \qquad (5)$
Theorem 1 ([13]) then states that, in a spacetime satisfying the DEC, the mass $m(\Sigma_s)$ is non-decreasing along any doubly convex foliation of a null cone.
So how likely is a foliation to be doubly convex? Well, in the case of a cross-section of $\Omega_S$ in Schwarzschild, we know (4) holds trivially from (3). It also follows ([13]) that
$\tfrac{1}{4}\langle\vec H,\vec H\rangle - \tfrac{1}{3}\Delta\log\rho = \frac{1}{\omega^2}\Big(1 - \frac{2M}{\omega}\Big).$
We conclude that all foliations of $\Omega_S$ in the black hole exterior ($\omega \ge 2M$) satisfy (4) and (5). A natural question follows as to whether these conditions are physically motivated for more general asymptotically flat null cones.
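For concreteness, here is the quick check of the claim $m(\Sigma) = M$ on $\Omega_S$, using the formulas above: since τ = 0 there, (3) gives $\rho = 2M/\omega^3$, and with $dA = \omega^2\, dS$,
$\frac{1}{4\pi}\int_\Sigma \rho^{2/3}\, dA = \frac{1}{4\pi}\int_{S^2}\Big(\frac{2M}{\omega^{3}}\Big)^{2/3}\omega^{2}\, dS = (2M)^{2/3}, \qquad m(\Sigma) = \frac{1}{2}\big((2M)^{2/3}\big)^{3/2} = M,$
for every cross-section ω, which is exactly the cross-section independence used above.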
Having found that this mass functional exhibits somewhat generic monotonicity, our next concern is asymptotic convergence and whether we obtain a physically significant quantity. From the fact that $m \le E_H$, we see that any doubly convex foliation $\{\Sigma_s\}$ approaching a geodesic foliation of an asymptotically flat null cone Ω yields a converging mass, since $E_H(\Sigma_s)$ converges ([10]). However, this convergence is an indirect observation insufficient for a direct analysis of the limit. With a fairly standard strengthening of the decay conditions on the geometry of Ω, called strong flux decay (see [13]), we're afforded an explicit limit for $\lim_{s\to\infty} m(\Sigma_s)$. Amazingly, this limit is independent of any choice of asymptotically geodesic foliation (as in Schwarzschild), and we conclude with a proof of a Null Penrose Conjecture. Moreover, in the case that $\langle\vec H,\vec H\rangle|_{\Sigma_0} = 0$ (i.e., $\Sigma_0$ is a horizon) we prove the NPC. Furthermore, if we have the case of equality for the NPC and $\{\Sigma_s\}$ is a strict doubly convex foliation, then $\Omega = \Omega_S$.
The first part of the theorem follows from the fact that $m \le E_H$ and that the limit $\lim_{s\to\infty} m(\Sigma_s)$ (being independent of the flow) must therefore bound all Bondi energies from below, hence also $m_B$. In the second part, if $\langle\vec H,\vec H\rangle = 0$ along with (5), then it's a consequence of the Maximum Principle for elliptic PDE that ρ must be constant on $\Sigma_0$, warranting equality in Jensen's inequality. As a result, $m(\Sigma_0) = E_H(\Sigma_0) = \sqrt{|\Sigma_0|/16\pi}$. For the case of equality we refer the reader to [13].
Open Problems
An interesting condition resulting in a doubly convex foliation is that ρ(s) be constant on each of the leaves of a foliation (i.e., on each $\Sigma_s$). As a result, this foliation satisfies $m(\Sigma_s) = E_H(\Sigma_s)$, representing a 'rest-frame' flow given that energy equals mass. Studying the existence of this flow is of great interest. We also invite the reader to recall the dependence of ζ and σ on the null basis $\{L, \underline L\}$. An analogous construction of data under a 'role reversal' between L and $\underline L$ would result in a new flux function $\underline\rho$ on our surface Σ. We say a surface Σ is time-flat ([4]) whenever $\rho = \underline\rho$. The existence of 'time-flat' surfaces within asymptotically flat null cones is particularly interesting, as they serve as pivots where a flow can 'flip direction' without causing a discontinuous jump in mass (since $m = \underline m$).
Figure 6: Bouncing
This observation is of particular significance in the search for more general foliations that inherit the successes of Theorems 1 and 2. Another objective is to weaken the underlying smoothness assumption associated with null cones in Theorem 2, possibly toward broadening its validity to include focal points or multiple black hole horizons.
These open questions are important not only for understanding the physics of black holes and mass in general relativity, but also for expanding our knowledge of null geometry. Since null geometry is not particularly intuitive, physical motivations like these are very useful for providing fascinating conjectures to pursue.
The Influence of Social Exclusion Types on Individuals' Willingness to Word-of-Mouth Recommendation
As the pace of modern life accelerates, social exclusion occurs more and more frequently in interpersonal interactions. The type of social exclusion can lead to different psychological needs in individuals and, thus, affects the tendency toward word-of-mouth (WOM) recommendation. There are three experiments in this research. Experiment 1 explores the influence of social exclusion types on the willingness to make WOM recommendations. The result shows that being rejected increases individuals' willingness to make WOM recommendations, while being ignored decreases it. Experiment 2 explores the internal psychological mechanism of the influence of social exclusion types on WOM recommendation behavior and demonstrates the mediating role of psychological needs (affiliative-focused needs; power/provocation need). In Experiment 3, the moderating effect of product attributes (scarcity/popularity) on the main effect is analyzed. This research is the first to explore the influence of social exclusion types on individuals' willingness to make WOM recommendations, which enriches the research on social exclusion in the field of WOM recommendation.
INTRODUCTION
With the rapid development of modern social networks and communication technology, individuals can exchange their experiences and feelings anytime and anywhere and disseminate information about products and services (Raacke and Bonds-Raacke, 2008). This kind of communication among consumers is word-of-mouth (WOM) recommendation, which enriches interpersonal communication and strengthens information sharing among individuals (Ritson and Elliott, 1999). It is also an important component of social interpersonal communication. The WOM recommendation has become a new way of social communication, with the terms "Anli" (a Chinese Internet buzzword meaning recommendation), "Zhongcao" (same meaning as "Anli"), and "Bacao" (in contrast with "Zhongcao," referring to carrying out the purchase) appearing in interpersonal communication. However, with the change of pace and style of modern life, people are more and more likely to experience social exclusion in their interpersonal communication. Social exclusion is a very bad experience for individuals and an important social factor affecting individuals' psychology and behavior (Williams, 2001). So, how does social exclusion affect WOM recommendation?
Existing studies on the impact of social exclusion on WOM recommendation present conflicting conclusions. Some studies have shown that a key function of WOM recommendation is to strengthen social relations and to alleviate the adverse experience brought by social exclusion (Berger, 2014). Therefore, social exclusion should increase individuals' willingness to make WOM recommendations (Berger, 2014; Kumar and Kaushal, 2021). Sinha and Lu (2019) found that individuals who experienced social exclusion improved their brand attitude and willingness to make WOM recommendations. Additionally, WOM recommendation reduces interpersonal distance through communication and sharing and makes up for the lack of the sense of belonging caused by social exclusion (Berger, 2014). However, other studies have found that socially excluded individuals are not friendly and even tend to take aggressive actions to cope with exclusion (Chow et al., 2008). Consequently, they should be less willing to make WOM recommendations to others. Moreover, social exclusion leads to social withdrawal, triggering the desire for solitude and avoidance of communication and contact with others (Ren et al., 2016, 2021), and, hence, makes WOM recommendations less likely (Twenge et al., 2007). How, then, does social exclusion affect WOM recommendations? Previous studies have reached conflicting conclusions, which makes it difficult to draw a consistent answer from them.
Therefore, based on psychological needs theory (Williams, 2009), the current research takes social exclusion types (being rejected/being ignored) as its entry point to analyze the influence of social exclusion types on individuals' willingness to make WOM recommendations. It provides a new perspective for integrating the contradictory viewpoints of previous studies and explains when social exclusion promotes WOM recommendation, which makes up for the limitations of previous research and expands relevant research in the fields of social exclusion and WOM recommendation.
There are three experiments in this research. Experiment 1 shows that the type of social exclusion (being rejected/being ignored) can effectively affect individuals' willingness to make WOM recommendations. Social exclusion is a phenomenon in which an individual is rejected, isolated, or ignored by others or groups (Berger, 2014). Being rejected means receiving explicit negative feedback, while being ignored means receiving no feedback and is implicit (Molden et al., 2009). Being rejected increases an individual's willingness to make WOM recommendations, while being ignored decreases it. Experiment 2 explores the mediating role of individuals' psychological needs (affiliative-focused needs/power-provocation need) in the relationship between social exclusion types and individuals' willingness to make WOM recommendations and verifies the theoretical logic of the main effect. Experiment 3 examines the moderating effect of product attributes (scarcity/popularity) on the main effect and further clarifies the boundary conditions. When the product attribute is popular, being ignored reduces the individual's willingness to make WOM recommendations, and being rejected increases it. When the product attribute is scarce, the type of social exclusion does not significantly affect individuals' willingness to recommend through WOM.
Social Exclusion
With the change of modern lifestyle, social exclusion is increasingly common in everyday life, such as being isolated in chatting, being rejected in job hunting, and being ignored by friends or lovers (Berger, 2014). Social exclusion is a phenomenon in which an individual is rejected, isolated, or ignored by others or groups (Berger, 2014). Existing studies have found that social exclusion leads to two completely different behavioral responses. Some studies argue that social exclusion increases prosocial behavior: social exclusion increases individuals' willingness to cooperate with others to satisfy the need for relations (Williams, 2009) and increases their charitable donation behavior (Lee and Shrum, 2012). However, other studies suggest that social exclusion increases antisocial behavior. For example, social exclusion reduces individuals' willingness to donate and the amount of their donations (Lee and Park, 2019), increases their unethical behavior (Kouchaki and Wareham, 2015), and reduces their willingness to help others (Twenge et al., 2007). Molden et al. (2009) distinguished different types of social exclusion (being rejected/being ignored). Different types of social exclusion have something in common, that is, they all involve being excluded by specific people or groups. Nevertheless, there are also differences between them. Being rejected means receiving negative feedback and is explicit, while being ignored means having no feedback and is implicit (Molden et al., 2009). Specifically, being rejected means that individuals receive clear feedback about their bad situation in a relationship or group, and, thus, experience active rejection. Being ignored refers to the fact that individuals receive only hints of lacking social relations and, thus, are passively ignored (Leary, 2005; Williams, 2007). Studies have shown that different types of social exclusion produce different outcomes. Sinha and Lu (2019) found that being rejected led to the formation of low-level mental structure and activation of a specific thinking mode, thus causing individuals to prefer tangible (visual) compensation. On the other hand, being ignored led to the formation of higher-level mental structure and activation of an abstract thinking mode, and in this situation, individuals preferred intangible (verbal) compensation. Furthermore, Lee and Shrum (2012) suggested that being rejected increased individuals' donation behavior, while being ignored increased individuals' conspicuous consumption behavior.
The Influence of Social Exclusion Types on Psychological Needs
Social exclusion can threaten four fundamental needs: the need to belong, the need to maintain reasonably high self-esteem, the need to perceive control over one's social environment, and the need to feel recognized for existing and being worthy of attention (Williams, 2009). Belonging and self-esteem form an inclusionary need cluster, such that following social exclusion, individuals behave in ways that either remind themselves of their social connections or improve their chances of belonging (Williams, 2009).
Control and existence form a power and provocation need cluster, such that when these needs rise to the top of individuals' priorities, they may engage in dominating others and forcing others to recognize their existence (Williams, 2009). Belonging and self-esteem threats may motivate individuals to please others; control and meaningful existence threats might motivate aggressive and provocative responses (Williams, 2007).
Behavioral consequences appear to be split into two general categories: affiliative-focused needs (belonging and self-esteem) and power/provocation needs (control and recognition). If affiliative-focused needs are mostly thwarted, then, ostracized individuals will seek to fortify these needs by thinking, feeling, and behaving in a relatively prosocial manner (Bernstein et al., 2010). People generally have a desire to form and maintain positive interpersonal relationships (Baumeister and Leary, 1995). The affiliative-focused needs encourage individuals to pursue interpersonal relationships more actively and strive to maintain a good public image, and increase their willingness to cooperate in groups (Williams, 2001). Once the power/provocation needs are mostly thwarted, ostracized individuals will attempt to fortify these needs, which, in many instances, may result in controlling, provocative, and even antisocial responses (Warburton et al., 2006;Williams, 2007). Individuals lacking the sense of power/provocation tend to choose unique products to highlight their sense of existence (Wan et al., 2014). Hence, different types of social exclusion threaten different psychological needs, and then, lead to different behaviors of individuals (Lee and Shrum, 2012;Wesselmann et al., 2015).
This research proposes that social exclusion types (being rejected/being ignored) affect the psychological needs of individuals. Specifically, being rejected threatens individuals' sense of belonging, and the lack of the sense of belonging activates affiliative-focused needs. Being rejected means that an individual receives clear feedback about his or her bad status in social relations (Leary, 2005). That is to say, being rejected denies an individual's qualification as a member of a group. An individual's inability to possess in-group membership means being excluded from the group, which threatens his or her sense of belonging to the group (Baumeister and Leary, 1995). When the sense of belonging of an individual is threatened, he or she will have a strong desire to rebuild connections with others, for example, by participating in group activities more actively, or even purchasing group-exclusive items to seek recognition (Wan et al., 2014). When an individual's sense of belonging is threatened, his or her dominant need is the affiliative-focused need (Lee and Shrum, 2012). Thus, being rejected threatens individuals' sense of belonging, thereby activating affiliative-focused needs.
However, being ignored threatens individuals' sense of meaningful existence, thereby activating the power/provocation need. Specifically, unlike explicit rejection, being ignored is unilateral, and individuals who are ignored do not know the reason (Williams, 2009). Being ignored is considered the death of social significance (Williams, 2007; Hales, 2018), which threatens individuals' belief in their meaningful existence (Williams, 2007). That is, when individuals' meaningful existence is threatened, they will feel unimportant and unnoticed, which makes them feel insignificant. Hence, the absence of meaningful existence enhances the power/provocation need to obtain attention (Warburton and Williams, 2005). Moreover, gaining attention can restore social visibility and confirm the existence of individuals, and seeking uniqueness is an important way to gain attention and restore the power/provocation need (Lee and Shrum, 2012). When individuals' meaningful existence is threatened, their dominant need is the power/provocation need. Therefore, being ignored threatens individuals' sense of meaningful existence, thereby activating the power/provocation need.
The Influence of Social Exclusion Types on Willingness to WOM Recommendation
The WOM recommendation is informal communication between consumers about the ownership and usefulness of a certain product, as well as the services provided by the seller of the product (Westbrook, 1987). WOM is an important component of social interpersonal communication. It enriches the content of interpersonal communication, strengthens information sharing between individuals, and enhances the connection between each other. The WOM recommendation has five social functions: impression management, emotion regulation, information acquisition, persuasion, and social bonding (Berger, 2014). Individuals can achieve self-promotion and identity display through WOM recommendations (Packard and Wooten, 2013). As a result, people often make WOM recommendations to others to show that they are "professional" and, at the same time, to project a good image of being helpful. The emotion regulation function of WOM recommendation allows individuals to express and channel their emotions through sharing to reduce maladjusted feelings when they experience adversity (Dichter, 1966). The information acquisition function of WOM recommendation is reflected in individuals obtaining relevant information about products or things they are interested in through WOM recommendation (Sweeney et al., 2012; Berger, 2014). The persuasion function of WOM recommendation is reflected in sales and social situations, where salespeople persuade customers to buy, or friends persuade each other to buy a certain product or carry out a joint activity (Roskos-Ewoldsen, 1997). Through WOM recommendation, individuals communicate with others via emotional exchange, impression management, information acquisition, and persuasion to strengthen common ground and finally establish and consolidate connections with others (Berger, 2014). Thus, WOM recommendation is an indispensable component of social relationships. In addition, social exclusion is becoming more and more common in relationships. Therefore, this research takes the type of social exclusion as its starting point to analyze the influence of social exclusion types (being rejected/being ignored) on individuals' willingness to make WOM recommendations, to enrich the relevant research on social exclusion in the social field.
In this research, the type of social exclusion (being rejected/being ignored) is expected to influence individuals' psychological needs (affiliative-focused needs, power/provocation need), thus prompting WOM recommendation behavior (Lee and Shrum, 2012). Specifically, when individuals are rejected, their sense of social belonging is threatened and their affiliative-focused needs are enhanced, and, thus, their willingness to make WOM recommendations increases. Being rejected means that individuals are explicitly excluded from the group, resulting in a loss of social belonging. When individuals' sense of belonging is threatened, they will try to recover it by rebuilding social relations (Lee and Shrum, 2012), and WOM recommendation is an important way to do so. Existing studies have shown that WOM recommendations can promote social connection between people (Sun et al., 2006; Berger, 2014). WOM recommendations can establish contact with strangers and maintain contact with acquaintances as well (Chen, 2017). Similarly, this kind of recommendation involves information sharing between individuals, which gives different individuals common ground and strengthens the social bonds between them (Ritson and Elliott, 1999). Prior research also suggests that talking about popular ads gives teenagers a common topic of conversation, which, in itself, is a social currency, allowing them to integrate with their peers and show that they belong to a group (Ritson and Elliott, 1999). Thus, individuals who are rejected can increase their WOM recommendations to rebuild social relations and restore affiliative-focused needs.
However, when individuals are ignored, their sense of meaningful existence is threatened, thus enhancing their power/provocation need and reducing their willingness to make WOM recommendations. Specifically, when individuals are ignored, they cannot communicate directly with the group that ignores them, let alone determine the reason for being ignored (Warburton and Williams, 2005; Williams, 2009). At this point, individuals tend to seek attention by highlighting their uniqueness to restore their sense of meaningful existence and meet the power/provocation need (Wan et al., 2014). Uniqueness refers to the tendency to define oneself as distinct from the members of one's reference group (Bloch, 1995). The need for uniqueness leads to a high desire to own unique products (Simonson and Nowlis, 2000). However, once the public becomes aware that these products are already purchased, owned, or used, the novelty products eventually lose their uniqueness (Granovetter and Soong, 1986). Thus, the need for uniqueness drives individuals to fear becoming like others, and sharing information with others can reduce one's uniqueness (Ritson and Elliott, 1999). Consequently, ignored individuals are less inclined to engage in WOM recommendations due to activation of the power/provocation need.
H1: Social exclusion types (being rejected/being ignored) can significantly affect individuals' willingness to make WOM recommendations. Being rejected leads to a significantly higher willingness to make WOM recommendations than being ignored.
H2: Individuals' psychological needs (affiliative-focused needs, power/provocation need) mediate the relationship between social exclusion types (being rejected/being ignored) and willingness to make WOM recommendations.
The Moderating Effect of Product Attributes: Scarcity and Popularity
Scarcity and popularity are common cues of product attributes. Scarcity cues are mostly used for limited-edition products, while popularity cues are mostly used for popular products (Hélène et al., 2013). Scarce products are in limited supply and are generally unique, while popular products ship in large numbers and are very common (Gierl et al., 2008). Product attributes carry strong symbolic significance, can meet different psychological needs, and are important factors affecting individuals' behavior and decision-making (Snyder and Fromkin, 1980; Çelen and Kariv, 2004).
Product attributes (scarcity/popularity) convey different social symbolic meanings (Robinson et al., 2016). Scarce products signal individual uniqueness, and individuals can stand out from the crowd by purchasing them (Parker and Lehmann, 2011); individuals with strong uniqueness needs therefore prefer scarce products (Fromkin, 1970). Popular products, by contrast, imply group relationships. The psychology behind popularity is that individuals desire to become part of a group and establish relationships with others in the group by consuming the same products (Jeong and Kwon, 2012). Accordingly, when individuals imitate others by purchasing popular products, they feel more acceptable to others (Ha et al., 2016).
In this context, product attributes (scarcity/popularity) can moderate the relationship between social exclusion types (being rejected/being ignored) and WOM recommendation behavior. Specifically, when a product is popular, rejected individuals can better establish and maintain contact with others by recommending it, thereby satisfying their affiliative-focused needs (Leibenstein, 1950); under this condition, rejected individuals are more likely to make WOM recommendations. For ignored individuals, however, being ignored activates the power/provocation need, and recommending a popular product would highlight how common their taste is and further erode their uniqueness (Granovetter and Soong, 1986), which the power/provocation need drives them to preserve. Hence, ignored individuals are not inclined to make WOM recommendations. When a product is scarce, its audience is small and its uniqueness is not easily accepted by social groups at large. It is difficult to gain common ground or build group relationships by recommending such products, so the social risk of recommending them by WOM is high (DeSarbo et al., 2002). Hence, rejected individuals are less inclined to recommend such products by WOM. For ignored individuals, owning scarce products is an important way to satisfy their need for uniqueness (Parker and Lehmann, 2011); if others owned the same scarce products, their uniqueness would be threatened. Therefore, to satisfy their power/provocation need, ignored individuals are also not inclined to recommend scarce products by WOM.
H3: Product attributes (scarcity/popularity) significantly moderate the relationship between social exclusion types and WOM recommendations. When the product is scarce, rejected individuals have affiliative-focused needs and are less likely to recommend by WOM. Meanwhile, neglected individuals are also not inclined to recommend by WOM for scarce products to satisfy their power/provocation need. When the product is popular, rejected individuals are more likely to recommend it by WOM to satisfy the affiliative-focused needs, while neglected individuals have power/provocation needs and are less likely to recommend through WOM.
Study 1
The purpose of experiment 1 is to test that social exclusion types (being rejected/being ignored) can significantly affect individuals' willingness to WOM recommendations.
Participants and Design
Based on the conventions adopted by Cohen (1977) (effect size f = 0.25 and expected power = 0.80), a required sample size of at least 159 was determined with G*Power 3.1. Experiment 1 therefore recruited 180 participants, mainly students from a university, including undergraduates, postgraduates, and doctoral students. The final sample was N = 171 (M age = 21.41, SD age = 2.56, age range: 17-29, 52.78% female). Participants were randomly assigned to three experimental conditions (being ignored, being rejected, or control; n being rejected = 57, n being ignored = 56, n control = 58).
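These a priori power calculations can be reproduced in code. The following is a minimal sketch using Python's statsmodels package rather than G*Power 3.1 (an assumed substitution; the routines implement the same noncentral F and t computations). It also covers the analogous calculations reported for Studies 2 and 3 below, and the results agree closely with the reported thresholds of 159, 128, and 179.

```python
# A priori power analyses for Studies 1-3 (alpha = .05 assumed throughout).
from statsmodels.stats.power import FTestAnovaPower, TTestIndPower

# Study 1: one-way ANOVA, 3 groups, f = 0.25, power = .80
n1 = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                   power=0.80, k_groups=3)
print(f"Study 1 total N: {n1:.1f}")        # ~158 (reported: at least 159)

# Study 2: two-group t-test, d = 0.5, power = .80
n2 = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Study 2 N per group: {n2:.1f}")    # ~64 per group, ~128 total

# Study 3: 2x2 between-subjects design treated as 4 cells, f = 0.25
n3 = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                   power=0.80, k_groups=4)
print(f"Study 3 total N: {n3:.1f}")        # ~178 (reported: at least 179)
```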
Stimuli and Procedure
Participants were told that the experiment aimed at developing a psychological counseling technique for college students. They were randomly assigned to the being ignored, being rejected, or control condition. The researchers used recalling and writing tasks to manipulate social exclusion types (Molden et al., 2009). Participants in the social exclusion (being ignored/being rejected) groups were asked to recall an incident in which they had been ignored or rejected and then write it down within 5 min. In the being ignored group, participants were asked to "write down a moment when you felt strongly ignored in some way. At that time, you were obviously ignored, but no one actually said they didn't want or like you." In the being rejected group, participants were asked to "write down a moment when you felt strongly rejected in some way. At that time, you were obviously rejected and clearly told that you were not accepted because they did not want or like you." In the control group, participants were asked to recall and write down life events from when they drove or walked to the supermarket. The researcher checked the written content: content consistent with the recall task was coded "0" and inconsistent content was coded "1." All participants were then shown a picture of a task reward (a T-shirt) and asked to complete a series of questionnaires, including their willingness to recommend the item by WOM: "If you owned this product, would you recommend this item to your friends?" (7-point scale, 1: Very Unwilling; 7: Very Willing) (Cheema and Kaikati, 2010), the degree of being ignored and being rejected that they perceived (7-point scale, 1: Very Low; 7: Very High) (Molden et al., 2009), and filler items such as personal interests and hobbies and comments on T-shirts. Finally, they reported whether their willingness to make WOM recommendations was based on past shopping experience and guessed the purpose of the experiment.
Manipulation Check
All participants reported content consistent with the task, 9 participants' willingness to recommend depended on past shopping experience, and no participant guessed the real purpose of the experiment. There was a significant difference in the feeling of being rejected among the three groups (F = 117.57, p < 0.001, ES = 0.58). Participants in the being rejected group felt significantly more rejected than those in the control group (M being rejected = 5.37, SD = 1.05; M control = 3.17, SD = 0.80; t = 13.51, df = 168, p < 0.001, d = 2.36) and the being ignored group (M being ignored = 3.23, SD = 0.74; t = 13.03, df = 168, p < 0.001, d = 2.36). The ignored and control groups showed no significant difference in feeling rejected (t = 0.37, df = 168, p = 0.715, d = 0.08). There was a significant difference in the feeling of being ignored among the three groups (F = 201.97, p < 0.001, ES = 0.71). Participants in the being ignored group felt significantly more ignored than those in the being rejected group (M being ignored = 5.54, SD = 0.81; M being rejected = 3.16, SD = 0.59; t = 16.95, df = 168, p < 0.001, d = 3.36) and the control group (M control = 3.03, SD = 0.82; t = 17.90, df = 168, p < 0.001, d = 3.08). The rejected and control groups showed no significant difference in feeling ignored (t = 0.89, df = 168, p = 0.376, d = 0.18). These results show that the manipulation was effective for the majority of participants.
Willingness to WOM Recommendation
There was a significant difference in willingness to make WOM recommendations among the three groups (F = 58.86, p < 0.001, ES = 0.41). Willingness in the being rejected group was significantly higher than in the control group (M being rejected = 5.25, SD = 0.83; M control = 4.12, SD = 0.94; t = 6.65, df = 168, p < 0.001, d = 1.27) and the being ignored group (M being ignored = 3.41, SD = 0.95; t = 10.75, df = 168, p < 0.001, d = 2.06). Willingness in the being ignored group was, in turn, significantly lower than in the control group (t = 4.18, df = 168, p < 0.001, d = 0.75). These results verify hypothesis 1, as shown in Supplementary Figure 1: social exclusion types (being rejected/being ignored) significantly affect willingness to make WOM recommendations, with being rejected increasing it and being ignored reducing it.
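The analysis pipeline of Study 1 can be illustrated with a minimal sketch. The arrays below are hypothetical data generated to mimic the reported means and SDs; note that the paper's pairwise t statistics (df = 168) suggest a pooled ANOVA error term, whereas plain two-sample t-tests are shown here for simplicity.

```python
# One-way ANOVA across the three conditions, then pairwise comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rejected = rng.normal(5.25, 0.83, 57)   # hypothetical ratings matching
ignored  = rng.normal(3.41, 0.95, 56)   # the reported means and SDs
control  = rng.normal(4.12, 0.94, 58)

f_stat, p_val = stats.f_oneway(rejected, ignored, control)
print(f"F = {f_stat:.2f}, p = {p_val:.2e}")

# Pairwise follow-ups (a multiplicity correction would normally be applied)
pairs = [("rejected vs control", rejected, control),
         ("rejected vs ignored", rejected, ignored),
         ("ignored vs control", ignored, control)]
for name, a, b in pairs:
    t, p = stats.ttest_ind(a, b)
    print(f"{name}: t = {t:.2f}, p = {p:.2e}")
```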
Study 2
The purpose of experiment 2 is to test H2: that psychological needs (affiliative-focused needs; power/provocation need) mediate the relationship between social exclusion types (being rejected/being ignored) and willingness to make WOM recommendations.
Participants and Design
Based on the conventions adopted by Cohen (1977) (effect size d = 0.5 and expected power = 0.80), a required sample size of at least 128 was determined with G*Power 3.1. Experiment 2 therefore recruited 140 participants, mainly students from a university, including undergraduates, postgraduates, and doctoral students. The final sample was N = 133 (M age = 21.99, SD age = 2.10, age range: 18-31, 45.11% female). Participants were randomly assigned to two experimental conditions (being ignored, being rejected; n being rejected = 66, n being ignored = 67).
Stimuli and Procedure
Experiment 2 used the social rejection paradigm of Molden et al. (2009). Participants were told that the experiment was about online social interaction and that they would discuss two randomly selected topics with two other participants in an online chat room. In fact, the other two "participants" were confederates controlled by the researchers. During the experiment, real participants received email prompts and always spoke first. The two confederates cooperated to reject or ignore the real participant using preset dialogue. In the being ignored condition, whatever real participants sent was ignored by the confederates, who conversed only with each other. In the being rejected condition, whatever real participants sent was met with negation and refutation from both confederates. After the conversation had lasted about 10 min, all participants received a message indicating that the conversation task was over. To control psychological distance, all participants were asked to complete the subsequent rating tasks as bystanders. Participants then completed a series of questionnaires on their psychological needs (affiliative-focused needs; power/provocation need; 7-point scale) (Williams, 2009) and their perceived degree of being ignored and rejected (7-point scale, 1: Very Low; 7: Very High) (Molden et al., 2009). They were then told they would be given a hat as a gift (presented as a picture) and reported their willingness to recommend the item by WOM: "Would you recommend this hat to your friends?" (7-point scale, 1: Very Unwilling; 7: Very Willing) (Cheema and Kaikati, 2010), along with other items. The emotion dimension scale of Hagtvedt (2011) measured their emotional state. Finally, participants reported their psychological distance (1: Very Close; 7: Very Far) (Niu et al., 2010), whether their willingness to make WOM recommendations was based on past shopping experience, and their guess about the purpose of the experiment.
Manipulation Check
Seven participants' willingness to recommend depended on past shopping experience, and no participant guessed the real purpose of the experiment. There was no significant difference in emotional state between the two groups (M being ignored = 2.79, SD = 0.84; M being rejected = 2.80, SD = 0.83; t = 0.08, df = 131, p = 0.934, d = 0.01), nor in psychological distance (M being ignored = 5.18, SD = 0.85; M being rejected = 5.21, SD = 0.89; t = 0.22, df = 131, p = 0.827, d = 0.03). Participants in the being rejected group felt significantly more rejected than those in the being ignored group (M being rejected = 5.44, SD = 0.95; M being ignored = 3.67, SD = 0.88; t = 11.17, df = 131, p < 0.001, d = 1.93). Participants in the being ignored group felt significantly more ignored than those in the being rejected group (M being ignored = 5.16, SD = 0.91; M being rejected = 3.24, SD = 0.90; t = 12.24, df = 131, p < 0.001, d = 2.12). These results show that the manipulation was effective for the majority of participants.
Psychological Needs
Participants in the being rejected group reported significantly higher affiliative-focused needs than those in the being ignored group (M being rejected = 5.22, SD = 1.01; M being ignored = 3.61, SD = 0.92; t = 9.58, df = 131, p < 0.001, d = 1.67). Participants in the being ignored group reported a significantly higher power/provocation need than those in the being rejected group (M being ignored = 5.42, SD = 0.88; M being rejected = 3.89, SD = 1.05; t = 9.15, df = 131, p < 0.001, d = 1.58). Thus, the power/provocation need was significantly higher in the being ignored group, while the affiliative-focused need was significantly higher in the being rejected group.
The Willingness to WOM Recommendation
There was a significant difference in the willingness to WOM recommendation between the two groups. The willingness to WOM recommendation in the being ignored group was significantly lower than that in the being rejected group (M being ignored = 3.64, SD = 0.90, M being rejected = 5.23, SD = 1.06, t = 9.29, df = 131, p < 0.001, d = 1.62), again verifying the main effect of the research.
The Analysis of the Mediating Effect
To further test the relationships among social exclusion types, psychological needs (affiliative-focused needs or power/provocation need), and WOM recommendation, this research analyzed the mediating effect of psychological needs. A bootstrapping analysis (PROCESS Model 4; Hayes, 2013; 10,000 bootstrap resamples) was used. The results showed that affiliative-focused needs mediated the influence of social exclusion types on willingness to make WOM recommendations (β = 1.60, 95% CI [1.26, 1.94]). The power/provocation need also mediated this influence (β = 1.54, 95% CI [1.21, 1.89]). See Supplementary Figure 2 for details. Experiment 2 thus verified H2: psychological needs (affiliative-focused needs or power/provocation need) mediate the relationship between social exclusion types and willingness to make WOM recommendations, supporting the theoretical logic of the main effect.
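The percentile-bootstrap logic behind PROCESS Model 4 can be sketched in a few lines. This is a minimal sketch, not the PROCESS implementation; the variable names and simulated data are hypothetical placeholders (x is the exclusion-type dummy, 0 = ignored, 1 = rejected; m a psychological-need score; y the WOM-willingness rating).

```python
# Percentile-bootstrap estimate of the indirect effect a*b (Model 4 logic).
import numpy as np

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # a-path: m ~ x
    X = np.column_stack([np.ones_like(x), x, m])  # y ~ 1 + x + m
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    b = coef[2]                                   # b-path: effect of m on y
    return a * b

def bootstrap_ci(x, m, y, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)               # resample with replacement
        est[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(est, [2.5, 97.5])

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 133).astype(float)
m = 3.6 + 1.6 * x + rng.normal(0, 1, 133)         # simulated a-path
y = 3.6 + 0.4 * x + 0.6 * m + rng.normal(0, 1, 133)
print("indirect effect:", round(indirect_effect(x, m, y), 3))
print("95% CI:", bootstrap_ci(x, m, y, n_boot=2000))
```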
Study 3
The purpose of experiment 3 is to test H3: the moderating effect of product attributes (scarcity/popularity) on the relationship between social exclusion and individuals' willingness to make WOM recommendations.
Participants and Design
Based on the conventions adopted by Cohen (1977) (effect size f = 0.25 and expected power = 0.80), a required sample size of at least 179 was determined with G*Power 3.1. Experiment 3 therefore recruited 200 participants. The final sample was N = 185 (M age = 21.74, SD age = 2.26, age range: 18-29, 41.62% female). Participants were randomly assigned to a 2 (social exclusion type: being ignored/being rejected) × 2 (product attribute: scarcity/popularity) between-subjects design (n reject popularity = 47, n reject scarcity = 47, n ignore popularity = 46, n ignore scarcity = 45).
Stimuli and Procedure
Experiment 3 adopted the product-attribute manipulation of Wu and Lee (2016). All participants were prompted to buy coffee cups from online retailers. The scarce product was described as "Product A is an annual special limited edition," and the popular product was described as "75% of consumers bought this product B after viewing this site." The researcher recruited 60 participants online for a pre-test (M age = 22.55, SD age = 2.48, age range: 20-29, 58.33% female). Participants were randomly assigned to the scarcity or popularity group and shown the corresponding product, then rated the product attribute (7-point scale, 1: Very Scarce; 7: Very Popular) (Wu and Lee, 2016). Participants in the scarcity group rated the product attribute significantly lower than those in the popularity group (M scarcity = 3.13, SD = 0.82; M popularity = 5.77, SD = 0.90; t = 11.87, df = 58, p < 0.001, d = 3.07), verifying the effectiveness of the manipulation.
The manipulation of social exclusion types (being ignored/being rejected) was similar to experiment 2. To control psychological distance, all participants were asked to complete the subsequent rating tasks as bystanders. Participants were then told that they would receive a coffee mug as a gift (presented as a picture). Participants in the scarcity group were told that the mug was a special annual limited edition, while participants in the popularity group were told that 75% of consumers bought this mug after viewing the site. After that, all participants completed a series of questionnaires, including the perceived degree of being ignored and being rejected (7-point scale, 1: Very Low; 7: Very High) (Molden et al., 2009), their psychological needs (affiliative-focused needs; power/provocation need; 7-point scale) (Williams, 2009), and their willingness to recommend the coffee cup by WOM: "Would you recommend this cup to others?" (7-point scale, 1: Very Unwilling; 7: Very Willing) (Cheema and Kaikati, 2010), along with other items. Finally, participants rated the product attribute (7-point scale, 1: Very Scarce; 7: Very Popular) (Wu and Lee, 2016) and completed the emotional-state measure. They also reported their psychological distance, whether their willingness to make WOM recommendations was based on past shopping experience, and their guess about the purpose of the experiment.
Manipulation Check
The willingness to recommend of 15 participants depended on past shopping experience, and no participant guessed the real purpose of the experiment. There was no significant difference in emotional state between the two groups (M being ignored = 2.89, SD = 0.78; M being rejected = 2.79, SD = 0.73; t = 0.93, df = 183, p = 0.356, d = 0.13), nor in psychological distance (M being ignored = 5.26, SD = 0.99; M being rejected = 5.06, SD = 1.12; t = 1.28, df = 183, p = 0.20, d = 0.19). Participants in the being rejected group felt significantly more rejected than those in the being ignored group (M being rejected = 5.43, SD = 0.97; M being ignored = 3.38, SD = 0.96; t = 14.38, df = 183, p < 0.001, d = 2.12). Participants in the being ignored group felt significantly more ignored than those in the being rejected group (M being ignored = 5.46, SD = 0.97; M being rejected = 3.44, SD = 1.00; t = 13.98, df = 183, p < 0.001, d = 2.05). In addition, participants in the scarcity group rated the product attribute significantly lower than those in the popularity group (M scarcity = 3.27, SD = 0.92; M popularity = 5.55, SD = 0.89; t = 17.15, df = 183, p < 0.001, d = 2.52). These results show that the manipulation was effective for the majority of participants.
Psychological Needs
The psychological needs of the two groups differed significantly. Participants in the being rejected group reported a significantly higher affiliative-focused need than those in the being ignored group (M being rejected = 5.59, SD = 0.87; M being ignored = 3.65, SD = 0.91; t = 14.83, df = 183, p < 0.001, d = 2.18). Participants in the being ignored group reported a significantly higher power/provocation need than those in the being rejected group (M being ignored = 5.55, SD = 0.87; M being rejected = 3.52, SD = 0.97; t = 15.00, df = 183, p < 0.001, d = 2.20). Thus, the power/provocation need dominated in the being ignored group, while the affiliative-focused need dominated in the being rejected group.
Willingness to WOM Recommendation
The results showed that the interaction between social exclusion types and product attributes significantly affected willingness to make WOM recommendations (F = 88.44, p < 0.001, ES = 0.25). When the product was popular, participants in the being rejected group were significantly more likely to recommend the coffee cup by WOM than those in the being ignored group (M being rejected = 5.36, SD = 0.94; M being ignored = 3.33, SD = 0.82; t = 11.12, df = 91, p < 0.001, d = 2.30). However, when the product was scarce, the two groups did not differ significantly in willingness to make WOM recommendations (M being rejected = 3.17, SD = 0.94; M being ignored = 3.53, SD = 0.94; t = 1.85, df = 90, p = 0.068, d = 0.38).
Moderated mediation analysis: The data were submitted to a moderated mediation analysis (PROCESS macro, Model 15, with 10,000 bootstrap resamples; see Hayes, 2013). The independent variable (X) was a dummy variable for the two experimental conditions (being ignored/being rejected). The moderator (V) was a dummy variable for the two product-attribute conditions (scarcity/popularity). The dependent variable (Y) was willingness to make WOM recommendations, and the mediator (M) was psychological needs (affiliative-focused needs or power/provocation need).
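Before turning to the results, the structure of Model 15 can be sketched: the moderator enters the outcome model (y ~ x + m + v + m:v + x:v), so the indirect effect a·(b1 + b3·V) is conditional on the product attribute V. This is a minimal sketch, not the PROCESS implementation; the variable names and demo data are hypothetical, and a percentile bootstrap around it would proceed exactly as in the Study 2 sketch above.

```python
# Conditional indirect effect, Model 15 logic (V: 0 = scarce, 1 = popular).
import numpy as np

def conditional_indirect(x, m, v, y):
    a = np.polyfit(x, m, 1)[0]                    # a-path: m ~ x
    X = np.column_stack([np.ones_like(x), x, m, v, m * v, x * v])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    b1, b3 = coef[2], coef[4]                     # m and m:v coefficients
    return {"scarce (V=0)": a * b1, "popular (V=1)": a * (b1 + b3)}

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 185).astype(float)         # exclusion-type dummy
v = rng.integers(0, 2, 185).astype(float)         # product-attribute dummy
m = 3.6 + 1.9 * x + rng.normal(0, 1, 185)
y = 3.3 + (0.1 + 0.9 * v) * m + 0.2 * x * v + rng.normal(0, 1, 185)
print(conditional_indirect(x, m, v, y))
```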
When the mediator was affiliative-focused needs, the moderating effect of product attributes was significant (β = 1.94, 95% CI [1.34, 2.64]). Specifically, under the popular-product condition, the indirect effect of social exclusion type on willingness to make WOM recommendations was significant (β = 1.98, 95% CI [1.71, 2.26]); under the scarce-product condition, no difference appeared between the two groups (β = 0.04, 95% CI [−0.56, 0.61]). See Supplementary Figure 3 for details.
When the mediator was the power/provocation need, the moderating effect of product attributes was also significant (β = 1.94, 95% CI [1.31, 2.64]). Under the popular-product condition, the indirect effect of social exclusion type on willingness to make WOM recommendations was significant (β = 1.94, 95% CI [1.68, 2.20]); under the scarce-product condition, no difference appeared between the two groups (β = 0.003, 95% CI [−0.62, 0.62]). See Supplementary Figure 3 for details. Experiment 3 thus supported H3: product attributes (scarcity/popularity) moderate the relationship between social exclusion types and individuals' willingness to make WOM recommendations, and only when the product is popular do social exclusion types effectively affect that willingness.
Conclusion
This research comprised three experiments. Experiment 1 showed that social exclusion types (being rejected/being ignored) effectively affect individuals' willingness to make WOM recommendations: being ignored reduces it, while being rejected increases it, verifying the main effect. Experiment 2 demonstrated the mediating role of psychological needs (affiliative-focused needs; power/provocation need) between social exclusion types and willingness to make WOM recommendations, supporting the theoretical logic of the main effect and establishing a complete internal mechanism model: being rejected (being ignored) activates individuals' affiliative-focused needs (power/provocation need) and thus increases (decreases) their willingness to make WOM recommendations. Experiment 3 clarified the moderating effect of product attributes (scarcity/popularity) on the main effect: for popular products, social exclusion types effectively affect individuals' willingness to make WOM recommendations, whereas for scarce products they do not.
Theoretical Contributions
The theoretical contributions of this research are mainly reflected in the following aspects. First, it enriches research in the field of social exclusion. Existing studies present conflicting conclusions. Some argue that WOM recommendation strengthens social relationships and compensates for the lack of belonging caused by social exclusion, so that social exclusion promotes WOM recommendations (Berger, 2014). Others suggest that social exclusion leads to antisocial or withdrawal behaviors and less willingness to make WOM recommendations (Twenge et al., 2007; Chow et al., 2008). How, then, does social exclusion affect WOM? Previous studies have not reached a consistent conclusion. By taking social exclusion types as its entry point, the current research explores the impact of social exclusion types (being rejected/being ignored) on willingness to make WOM recommendations, providing a new perspective that integrates the contradictory viewpoints of previous studies. This deepens theoretical development and expands research at the intersection of social exclusion and WOM recommendation.
Second, based on psychological needs theory, this research clarifies the mediating mechanism through which social exclusion types (being rejected/being ignored) influence WOM recommendations. Different types of social exclusion threaten different psychological needs (affiliative-focused needs, power/provocation need), resulting in different behavioral outcomes. This research analyzes how social exclusion types affect individuals' WOM recommendations through these psychological needs: the threat of being rejected (being ignored) to the sense of belonging (meaningful existence) activates individuals' affiliative-focused needs (power/provocation need) and thereby increases (reduces) their WOM recommendations. Altogether, this research constructs a complete internal mechanism model, enriching research in the field of WOM recommendation.
Third, this research introduces product attributes (scarcity/popularity), examines their moderating effect on the relationship between social exclusion types and WOM recommendation, and thereby expands the literature on product attributes. Previous studies on product attributes (scarcity/popularity) focused on the influence of scarcity and popularity on consumers and markets (Wu and Lee, 2016; Shi et al., 2020); few explored their impact on consumers' WOM recommendations. Taking consumer behavior as the research context, this research identifies individuals' preferences regarding product attributes under social exclusion and clearly defines the moderating role of product attributes. Because of rejected individuals' affiliative-focused needs and ignored individuals' power/provocation needs, neither is inclined to recommend scarce products by WOM. When the product is popular, however, rejected individuals can meet their affiliative-focused needs through it and are more likely to recommend it by WOM, while ignored individuals, driven by the power/provocation need, remain unlikely to do so. This research thus demonstrates the moderating effect of product attributes, establishes clear boundary conditions for the main effect, and constructs a clear framework in both the theoretical and applied fields. Practically, it helps match socially excluded individuals with appropriate products: for rejected individuals, recommending popular products is more appropriate; for ignored individuals, WOM recommendations should be used carefully to avoid causing psychological discomfort.
Future Research
This research is the first to explain the influence of social exclusion types (being rejected/being ignored) on individuals' willingness to make WOM recommendations from the perspective of psychological needs theory. Future research can explore whether other mediating variables affect this relationship. Studies have shown that when suffering social exclusion, individuals who are eager to rebuild social relations increase their spending on subordinate services (Baumeister and Leary, 1995) and become more obedient to group opinions. However, this research examines only individuals' production of WOM recommendations and does not address the influence of social exclusion on individuals' acceptance of WOM recommendations. Future research can therefore focus on the influence of social exclusion on WOM acceptance behavior and its internal mechanism, which echoes the much-discussed topic of "payola." Moreover, social exclusion can happen at any time and place, and exclusion in different temporal and spatial contexts may affect WOM recommendations differently; subsequent research can explore such contextual differences. Finally, this research examined only scarcity/popularity as product attributes; many other categories exist, such as public/private products and hedonic/utilitarian products, and other moderating variables can be explored in future research.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of Research Center for Psychological and Health Sciences, China University of Geosciences. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
FW was responsible for the logical reasoning of the research topic. WL was responsible for experimental materials and data. GC was responsible for collecting literature. All authors contributed to the article and approved the submitted version.
The Ground State Calculations for Some Nuclei by Mesonic Potential of Nucleon-Nucleon Interaction
The nucleon-nucleon (NN) interaction has physical characteristics set by the nucleon and meson degrees of freedom. The main purpose of this work is to calculate the ground-state energies of the deuteron (2H) and the alpha particle (4He) through two-body interactions with the exchange of mesons (pi, sigma, omega) mediating between two nucleons. This paper investigates the NN interaction based on the quasi-relativistic decoupled Dirac equation and a self-consistent Hartree-Fock formulation. We construct a one-boson-exchange potential (OBEP) model, where each nucleon is treated as a Dirac particle and acts as a source of pseudoscalar, scalar, and vector fields. The potential in the present work is derived analytically with two static meson functions, the single-particle-energy-dependent (SPED) and generalized Yukawa (GY) functions; the parameters used in the meson functions (mass, coupling constant, and cut-off) are published values. The theoretical results are compared to other theoretical models and the corresponding experimental data, showing that the SPED function gives better agreement than the GY function for the considered nuclei.
I. INTRODUCTION
One of the aims of nuclear structure theory is to derive ground-state properties. Such properties are related to the constituents of matter, represented in elementary particle physics with their characteristics (electric charge, mass, spin, ...) and the ways each particle interacts with others [1]. Yukawa (1934) introduced the assumption that some sort of field causes the attraction between proton and neutron. This field is quantized, characterized by a short-range force, and carried by a particle with a mass of about 300 electron masses, the Yukawa particle (meson). Meson is a Greek word meaning intermediate, an apt description for the particle that transmits the nuclear force between hadrons; mesons can participate in weak, strong, and electromagnetic interactions and can carry a net electric charge. All mesons are unstable, with lifetimes up to hundredths of microseconds. Each meson is characterized by quantum numbers: the principal number (n = 0, 1, ...), the orbital angular momentum (l = n − 1, indicating the orbiting of quarks around each other), the magnetic number (m = −l, ..., l), and the spin (s = 0 for the singlet state, 1 for the triplet state). These quantum numbers can be illustrated using the nuclear shell model [2].
The interaction of each nucleon with all other nucleons generates an average potential field in which each nucleon moves. The Pauli exclusion principle governs the occupation of orbital quantum states in the shell model and requires that, under meson exchange between two nucleons, the wave function be a totally antisymmetric product wave function. The nucleon interactions form a position-dependent potential called the nuclear mean field. Mean-field potentials can be calculated for electrons or for nuclei; the calculation methods are very similar, but the interactions are different. The NN interaction is a fundamental problem in nuclear physics. Its description has met with varying success, ranging from empirical pictures fitted to experimental data to microscopic derivations from the bare NN potential. Thus, there is no unique NN potential to serve as the starting point [3-5].
Nuclear models provide a microscopic description that includes the elementary interaction between nucleons. The original attempts to find a fundamental theory of nuclear forces [6-10] were not very successful; the reason for their failure was the pion dynamics, which is constrained by chiral symmetry [11,12]. Quantum field-theoretic methods were later incorporated into the construction of the potential, as in the Partovi-Lomon model [13] and the Stony Brook [14], Paris [15], Nijmegen [16], and Bonn [12] group potentials. The successful theoretical models were based on the OBEP, including one-meson and multi-meson exchange plus short-range phenomenology. Inside a nucleus, nucleons move quasi-independently of one another, which underlies the concept of the nuclear mean field (NMF); this picture relies on the Hartree-Fock interaction. A microscopic description of the nucleon and meson degrees of freedom must rest on relativistic quantum field theory to include the full spin structure of the medium, associated with the fermion field of the Dirac equation [17], and the bound-state energies. The latter are found by solving the Dirac equation, which yields the ground-state energies of nuclei once the NMF potential is calculated within the Dirac-Hartree-Fock scheme [18].
It is known that the NN interaction can be divided into three parts: the long-range part at r ≥ 2 fm, originating from pseudoscalar mesons; the medium-range part at 1 fm ≤ r ≤ 2 fm, which comes mainly from the exchange of the scalar meson (σ, a fictitious scalar meson responsible for attraction); and the short-range part at r ≤ 1 fm, from vector-meson (ρ, ω, ...) exchange. Many models have been built to obtain such an NN potential [19-25]. In these developments the dominant part of the interaction is central, with a strong repulsion at short range (r ≤ 0.7 fm) and attraction at intermediate range (> 1 fm). A cancellation of the major static effects of the vector and scalar mesons maintains the stability of the nucleus [26]. A variety of pseudoscalar, scalar, and vector mesons are now established, and the advance of OBEP models of the NN interaction lies not only in the reduction of free parameters but also in accuracy and in fits to experimental data [12,27]. The development of quantum field theory and the boson-field Lagrangian by Heisenberg, Pauli, Dirac, and Rosenfeld around 1930 allowed Yukawa (1935) to let the meson field carry the interaction. Yukawa first proposed the coupling of a scalar meson field; Proca (1936) extended this to vector fields, and Kemmer (1938) embraced pseudoscalar, axial-vector, and antisymmetric-tensor couplings. To date, there have been many modifications of the vector-scalar combinations as well as of the pseudoscalar and pseudovector mesons.
The present work develops a model for determining the ground states of the deuteron (2H) and helium (4He) based on the NN interaction, with an OBEP involving the exchange of a pseudoscalar meson (π), a scalar meson (σ), and a vector meson (ω). The potential is derived analytically with two static meson functions and relies on the Dirac-Hartree-Fock equation. We then compare the theoretical results with others and with the corresponding experimental data. The paper is arranged as follows. Section II explains the theoretical analysis in detail, with three subsections: Subsection A describes how the model uses the Hartree-Fock equation with the Dirac Hamiltonian and how it treats the wave functions; Subsection B presents the mathematical treatment of each term in the Hamiltonian; Subsection C presents the potential of our model. Section III presents the results for the potential and ground-state energies of the selected nuclei. Section IV concludes.
II. THEORETICAL ANALYSIS
Several models determine the structure and properties of the nucleus through the strong force between nucleons [28,29]. The states of the nucleus are bound by this strong force, and the nature of the nucleon interactions can be described by treating them as a two-body problem. The general wave equation used in such models has the form
$$\hat{H}\,\Psi = E\,\Psi, \qquad (1)$$
where $\hat{H}$ is the general Hamiltonian operator and $E$ is the eigenenergy. We study the interaction as a two-body problem via the OBEP between two fermions (nucleons), so the convenient representation of the energy is the relativistic Dirac form.
Thus, the interaction of the nuclear system can be described by the Dirac Hamiltonian, which includes the full fermion interaction and is given by [30-32]
$$\hat{H} = \sum_i \left( c\,\boldsymbol{\alpha}_i\cdot\mathbf{p}_i + \beta_i m_i c^2 \right) + \sum_{i<j} V_{ij} = \hat{T} + \sum_{i<j} V_{ij}, \qquad (2)$$
where $I$ is the unit matrix, $\boldsymbol{\alpha}$ and $\beta$ are $4\times4$ Dirac matrices, $m_i$ is the nucleon mass, $\mathbf{p}_i$ is the momentum of the system, $\hat{T}$ is the kinetic-energy operator, and $V_{ij}$ is the potential energy between fermion pairs; three- and many-body interactions are ignored in the present work. The total kinetic energy of the nucleons equals the total energy minus the rest-mass energy [33],
$$\hat{T} = E - Mc^2, \qquad (3)$$
where $M = A m_i$, with $A$ the number of nucleons, and $E$ is the total relativistic energy,
$$E = \sqrt{p^2 c^2 + M^2 c^4}. \qquad (4)$$
The kinetic energy can be decomposed into two contributions, the relative-space contribution $\hat{T}_r$ and the center-of-mass contribution $\hat{T}_{cm}$ [34,35],
$$\hat{T} = \hat{T}_r + \hat{T}_{cm}. \qquad (5)$$
The second term of Eq. (5), the center-of-mass contribution, can be neglected [36]. Applying the binomial theorem to $E$ and substituting into Eq. (3), the relativistic kinetic energy $\hat{T}$ takes the form
$$\hat{T} = \sum_{i<j}\left( \frac{p_{ij}^2}{2m} - \frac{p_{ij}^4}{8 m^3 c^2} + \cdots \right), \qquad (6)$$
where $\mathbf{p}_{ij} = \tfrac{1}{2}(\mathbf{p}_i - \mathbf{p}_j)$ is the relative momentum of the two-nucleon system. Substituting Eq. (6) into Eq. (2) leads to the effective nuclear Hamiltonian operator, Eq. (7).
In Hartree-Fock theory, we seek the best single state, given by the lowest energy expectation value of this Hamiltonian.
A. Variational and Modified Hartree-Fock Wave function
One can ensure the antisymmetry of the fermion wave functions with the aid of the Slater determinant and the Hartree product, giving the convenient form for calculating the ground-state energy [33]. The wave function of the nucleus thus becomes
$$\Psi(r) = \frac{1}{\sqrt{A!}}\,\det\left| \psi_i(\mathbf{r}_j) \right|, \qquad (8)$$
where $\psi_i$ is the nucleon wave function, which can be expanded as
$$\psi_i = \sum_\alpha C_{i\alpha}\, F_\alpha(\mathbf{r}_i), \qquad (9)$$
where $C_{i\alpha}$ is the oscillator constant and $F_\alpha(\mathbf{r}_i)$ is the oscillator wave function, which has two components, a radial component $\Phi_\alpha$ and a spin component $\chi_\alpha$,
$$F_\alpha(\mathbf{r}) = \begin{pmatrix} \Phi_\alpha \\ \chi_\alpha \end{pmatrix}. \qquad (10)$$
The two components are related by [17,37]
$$\chi_\alpha = \frac{c\,\boldsymbol{\sigma}\cdot\mathbf{p}}{\varepsilon - v + 2mc^2}\,\Phi_\alpha \approx \frac{\boldsymbol{\sigma}\cdot\mathbf{p}}{2mc}\left(1 - \frac{\varepsilon - v}{2mc^2}\right)\Phi_\alpha, \qquad (11,12)$$
where $\varepsilon$ is the external (single-particle) energy, $v$ is the potential energy, and $\boldsymbol{\sigma}$ are the Pauli matrices. The principle of antisymmetry of the wave function is not honored by the Hartree method; according to Slater and Fock independently, the accurate picture for calculating the ground-state energy is the Hartree-Fock approximation.
Since we are dealing with the ground state, the factor $(\varepsilon - v)/c^2$ makes the second term very small, and it can be neglected.
The wave functions for two nucleons $i$ and $j$ enter as the bra part $\langle \phi_\alpha(\mathbf{r}_i)\phi_\gamma(\mathbf{r}_j)|$ and ket part $|\phi_\beta(\mathbf{r}_i)\phi_\delta(\mathbf{r}_j)\rangle$, since the matrix element needs two wave functions on each side, with
$$\phi_\alpha = \sum (l_\alpha s_\alpha m_{l\alpha} m_{s\alpha}|j_\alpha M_\alpha)\; \phi_{n_\alpha l_\alpha m_{l\alpha}}\; \chi^{1/2}_{m_{s\alpha}}\; \hat{P}_{T_\alpha},$$
where $(l_\alpha s_\alpha m_{l\alpha} m_{s\alpha}|j_\alpha M_\alpha)$ is the Clebsch-Gordan coefficient, $\chi^{1/2}_{m_{s\alpha}}$ is the spin function, and $\hat{P}_{T_\alpha}$ is the isotopic-spin function. The two wave functions depend on $\mathbf{r}_i$ and $\mathbf{r}_j$ and can be merged into a single function by transforming to relative and center-of-mass coordinates (see Appendix A for details). One then has
$$\phi_\alpha(\mathbf{r}_i)\,\phi_\gamma(\mathbf{r}_j) = \sum_{NLnl} \langle n_\alpha l_\alpha n_\gamma l_\gamma | NLnl \rangle\; \phi_{NLnl}(\mathbf{r},\mathbf{R}),$$
where $\langle n_\alpha l_\alpha n_\gamma l_\gamma | NLnl \rangle$ is the Talmi-Moshinsky bracket and $\phi_{nlm}(\mathbf{r}) = R_{nl} Y_{lm}$, with radial function $R_{nl}$ and spherical harmonics $Y_{lm}$; the same treatment applies to the ket part. The spin-function bracket is $\langle \chi_{Sm_s}(i,j)|\chi_{Sm_s}(i,j)\rangle = 1$ and the isotopic one is $\langle P_T(i,j)|P_T(i,j)\rangle = 1$. This formula is convenient for two-body interactions, as in the deuteron; for the helium nucleus the sum over nucleon pairs $\sum_{i<j=1}^{4}$ must be added. The bracket of the spherical functions equals one, since $\vartheta, \varphi$ are not affected here, but the distance $r$ is. The radial wave function is that of an oscillator, expanded in a complete basis of associated Laguerre functions (the Rayleigh-Ritz-Galerkin method [38]),
$$R_{nl}(r) = \sqrt{\frac{2\,n!}{b^3\,\Gamma(n+l+\tfrac{3}{2})}} \left(\frac{r}{b}\right)^{l} e^{-r^2/2b^2}\, L_n^{l+\frac{1}{2}}\!\left(\frac{r^2}{b^2}\right),$$
where $l$ is the angular momentum, $L_n^{l+\frac{1}{2}}$ is the associated Laguerre polynomial [39], and the length parameter is $b = \hbar c/\sqrt{mc^2\,\hbar\omega}$, with $m$ the mass of the considered particles (nucleons) and $\omega$ the oscillator frequency. The simplest shell model sets the overall size of the nucleus through this parameter, which is related to the nucleon number density, or equivalently $\hbar\omega = 45A^{-1/3} - 25A^{-2/3}$ (MeV) for the equilibrium density of an even-even nucleus with mass number $A$ [40].
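As a numerical illustration, the following sketch evaluates the A-dependent oscillator frequency and length parameter quoted above, together with the normalized radial function. The average nucleon rest energy of 938.92 MeV is an assumed input, and the normalization is the standard one for the harmonic-oscillator basis.

```python
# Harmonic-oscillator basis: hbar*omega(A), length parameter b, and the
# normalized radial function R_nl(r) with the associated Laguerre polynomial.
import numpy as np
from scipy.special import genlaguerre, gammaln

HBARC = 197.327   # MeV fm
MNC2 = 938.92     # average nucleon rest energy in MeV (assumed input)

def hbar_omega(A):
    return 45.0 * A ** (-1 / 3) - 25.0 * A ** (-2 / 3)   # MeV

def length_parameter(A):
    return HBARC / np.sqrt(MNC2 * hbar_omega(A))          # fm

def radial_wf(n, l, r, b):
    # Standard normalization; n = 0, 1, ... counts radial nodes.
    x = (r / b) ** 2
    norm = np.sqrt(2.0 / b ** 3) * np.exp(
        0.5 * (gammaln(n + 1) - gammaln(n + l + 1.5)))
    return norm * (r / b) ** l * np.exp(-x / 2) * genlaguerre(n, l + 0.5)(x)

for A in (2, 4):
    b = length_parameter(A)
    print(f"A={A}: hbar*omega = {hbar_omega(A):.2f} MeV, b = {b:.2f} fm, "
          f"R_00(1 fm) = {radial_wf(0, 0, 1.0, b):.3f} fm^-3/2")
```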
B. The handling of the kinetic energy term
Using Eqs. (9), (10), and (7), we obtain the relativistic modified Hartree-Fock equations by applying the Lagrange-multiplier method to seek the minimum of the energy expression. Differentiating Eq. (16) with respect to $C^{*}_{i\alpha}$, the conjugate of the oscillator constant, yields the Hartree-Fock equations; the first bracket is treated as $H_1$. Taking into account the Dirac matrices [41,42], and ignoring, for both simplicity and to avoid the fourth power of the momentum, the term $p^4/8m^3c^2$, the explicit kinetic term vanishes.
C. The construction of the potential through One Boson Exchange
After the treatment of the kinetic energy, the two-body Hamiltonian becomes Eq. (21), in which the first term on the right-hand side is the remainder resulting from the treatment of the kinetic term in the Dirac equation, as in Eq. (6), and $V_{ij}(r)$ is the two-body interaction potential. The relativistic form of the one-meson-exchange potential between two nucleons $(i,j)$ is based on the degrees of freedom associated with the three mesons: pseudoscalar, scalar, and vector. The Dirac representation of the meson functions is used so that the Dirac matrices correspond to Pauli spin matrices [30,43]. Substituting Eq. (23) and Eq. (22) into Eq. (21) gives $V_\pi$, $V_\sigma$, and $V_\omega$. We add the π meson as a pseudoscalar to the previous two mesons because it ties the mesons to the nucleus, being the longest-range one; we seek the stability of the nucleus, and the exchange of the pion increases it. The attractive behavior is represented by the scalar (σ) meson and the repulsive behavior by the vector (ω) meson, so the physics of the nucleon potential is maintained. The Fock exchange between the two spatial wave functions after the interaction is introduced in the potential, indicated by the exchange symbol on the right (ket) part.
According to the relation between $\phi$ and $\chi$ in Eq. (12), and defining the momenta of the nucleons $(i,j)$ as [33,44]
$$\mathbf{p}_i = \mathbf{p}_r + \tfrac{1}{2}\mathbf{p}_R, \qquad \mathbf{p}_j = -\mathbf{p}_r + \tfrac{1}{2}\mathbf{p}_R, \qquad \mathbf{p}_i' = \mathbf{p}_i,\ \mathbf{p}_j' = \mathbf{p}_j,$$
we substitute these relations into Eq. (B.1) and apply several useful identities [45]. We use two static functions for the meson degrees of freedom in the NN interaction, the generalized Yukawa (GY) and the single-particle-energy-dependent (SPED) functions, with $k = \pi, \sigma, \omega$; these forms are used to carry out our Hartree-Fock (HF) calculations. The first (GY) function [30] is built on the Yukawa form,
$$J_k(r) = \frac{g_k^2}{4\pi}\,\frac{e^{-\mu_k r}}{r},$$
modified by a form factor controlled by the cut-off, where $g_k^2$ is the meson-nucleon coupling constant, $\lambda_k$ is a parameter related to the structure function of the form factor, and $\mu_k = m_k c/\hbar$ is the inverse range of the $k$-meson, set by the meson mass. The second (SPED) function has the form given in [33]. The details leading to the final expression, Eq. (28), are explained in Appendices B and C; it involves the total spin operator $\mathbf{S}$ and the meson function $J(r)$. To simplify the solution, we assume nucleons of equal mass, so the reduced mass is $\mu = m_1 m_2/(m_1 + m_2)$ and the center-of-mass mass is $M = m_1 + m_2$.
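To make the scales concrete, the sketch below evaluates a bare Yukawa-type central term for the σ and ω exchanges. The actual GY and SPED functions, including the cut-off λ entering through the form factor, follow the cited references and are not reproduced here; the couplings used below are illustrative values only, and the overall signs encode the assumed attractive/repulsive roles of σ and ω.

```python
# Bare Yukawa-type central term in MeV (illustrative parameters only).
import numpy as np

HBARC = 197.327  # MeV fm

def yukawa(r, g2_over_4pi, meson_mass_mev, sign=+1):
    """sign = -1 for the attractive sigma, +1 for the repulsive omega."""
    mu = meson_mass_mev / HBARC        # inverse range in fm^-1
    return sign * g2_over_4pi * HBARC * np.exp(-mu * r) / r

r = np.linspace(0.3, 3.0, 7)           # fm
v = yukawa(r, 8.0, 550.0, -1) + yukawa(r, 20.0, 782.6, +1)
print(np.round(v, 1))                  # repulsive core, attractive tail
```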
We substitute $(\mathbf{S}\cdot\hat{\mathbf{n}})$ from [46]. The determination of the energy eigenvalues requires the diagonalization of the Hamiltonian matrix, whose elements are calculated with the functions of Eq. (7). Eq. (28) shows that our model yields satisfactory results for the S state with the meson functions $J_\pi$, $J_\sigma$, $J_\omega$. The value of a meson function depends on the distance $r$, whose integration window is determined as follows (see the numerical sketch after this list):
• For the repulsive meson exchange (ω), the lower value of $r$ is taken as the hadron radius, 0.5 fm, and the upper value is calculated from
$$R = \frac{\hbar}{\mu c}, \qquad (29)$$
where $R$ is the range of the meson and $\mu$ its mass.
• For the attractive meson exchanges (π, σ), the upper limit of the previous case is taken as the lower limit, and the upper limit is again determined using Eq. (29).
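A quick back-of-the-envelope check of Eq. (29) with standard meson masses gives the scale of these integration windows (the exact window prescription follows the text above; the masses below are standard values).

```python
# Meson ranges R = hbar*c / (m c^2) from Eq. (29), standard masses in MeV.
HBARC = 197.327  # MeV fm
for name, mass in (("pi", 138.0), ("sigma", 550.0), ("omega", 782.6)):
    print(f"{name:5s}: R = {HBARC / mass:.2f} fm")
# pi: ~1.43 fm, sigma: ~0.36 fm, omega: ~0.25 fm
```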
In the present work, we have applied our model to calculate the ground states of the 2H and 4He nuclei (A = 2 and A = 4, respectively). The two and four nucleons are placed in the 1S1/2 state, labeled nXj with n = 1, j = l + s, and X the state. Table 1 lists the parameter sets I and II used for the (π, σ, ω) mesons, comprising the mass µ, the coupling constant g, and the cut-off parameter λ. To quantify the agreement between the calculated results and the experimental data [47], we define the ratio R,
$$R = \frac{E_{\mathrm{theor}}}{E_{\mathrm{exp}}},$$
where $E_{\mathrm{theor}}$ is the calculated ground-state energy and $E_{\mathrm{exp}}$ the experimental one. We can also determine the binding energy per nucleon for the studied nuclei as [48]
$$E_A = \frac{E_{\mathrm{g.s.}}}{A},$$
with mass number $A$ and total ground-state energy $E_{\mathrm{g.s.}}$.
III. RESULTS AND DISCUSSION
Table 1 lists the parameter sets used for the (π, σ, ω) mesons; sets I and II include the meson mass, the coupling constant g, and the cut-off parameter λ. The potential was used to calculate the ground-state energies of the 2H and 4He nuclei, and the results are listed in Tables 2 and 3 in comparison with the experimental data. The ratio between the present work and experiment is estimated for both cases, i.e., for the potentials extracted from the GY and SPED functions.

We examined the potential of Eq. (28) for the ground-state energies of 2H and 4He using the two static meson functions (GY, SPED) with the two parameter sets of Table 1, for (σ, ω) exchange and for (π, σ, ω) exchange. The potential for the different cases is plotted in Figs. 1-8, for the two parameter sets and both meson-exchange functions. We therefore categorize the results into two groups: set I with (GY, SPED) calculated within (σ, ω), and likewise for set II, for both nuclei (2H, 4He).

All cases are calculated again within (π, σ, ω), i.e., with the third exchange meson π added, which acts attractively at large r. Figs. 1-2 and 3-4 show categories I and II for two-meson exchange, and Figs. 5-6 and 7-8 show categories I and II for three-meson exchange. Fig. 1 (left panel) shows the potential obtained with the GY meson function, in which the repulsive contribution of the ω meson appears out to quite large distances, while the attractive part is barely visible because the depth of the potential is very small; the attractive part begins at r ≈ 2.0 fm, close to the diameter of the 2H nucleus, and ends at r ≈ 0.7 fm. Fig. 1 (right panel) is calculated with the SPED meson function; here a significant attractive potential begins at r ≈ 1.1 fm and ends at r ≈ 0.25 fm. Fig. 2 shows the same behavior for the 4He nucleus with different potential values; the crossover point between the attractive and repulsive parts is similar to that in Fig. 1, and the starting point of the attractive part is controlled by the diameter of the nucleus.

It can be noticed that the depth of the attractive potential extracted with the SPED function is greater than that from the GY function for both nuclei (Figs. 1, 2). With set II, however, the two functions behave similarly in both crossover point and depth; the difference between the two nuclei remains in the values of the attractive potential, and the ω meson dominates, giving a much larger repulsive potential while the effect of σ is damped (Figs. 3, 4). The effect of the third exchanged meson (π) is shown in Figs. 5-6 for set I and Figs. 7-8 for set II. For set I (Figs. 5, 6) the potential behaves as before for two-meson exchange, with the attractive part increasing very slowly; the depth of the attractive potential increases significantly, the π meson extends beyond r ≈ 1.5 fm, and the crossover point differs from the two-meson-exchange value obtained with the SPED function. For set II (Figs. 7, 8), both GY and SPED improve in crossover point and depth relative to two-meson exchange, with SPED still performing better than GY. Tables 2 and 3 list the results for the two parameter sets, the number of exchanged mesons, and the two meson functions for the selected nuclei. When the ratio tends to unity, the calculated ground-state energies are close to the experimental data. For the 2H nucleus with two mesons, the best theoretical value comes from the SPED function with parameter set I; with three mesons, the SPED value with parameter set II is the most accurate. It is evident from Tables 2 and 3 that the ground-state energy is close to the data for the SPED function with both sets. The 4He nucleus behaves slightly differently: with two mesons, the SPED value for set I is better than the GY value, while with three mesons the SPED function with set II is best (Table 3). The ratio confirms that the SPED results are better than the GY ones and that our attempt to include more than two mesons in the OBEP analytically improves the results. The ratio improves further for the more massive nucleus, which is encouraging for our potential. We conclude that the model is well defined and compatible with the data, and even with other models [58,59]. The deuteron ground-state energy differs only slightly from the numerical data of [51,60], a good sign for our analytically constructed potential. The calculated binding energy per nucleon supports the idea of the OBEP with three mesons in the SPED case and gives satisfactory values for the deuteron and helium nuclei compared with experiment.
IV. CONCLUSION
In the framework of a quasi-relativistic formulation, the meson-exchange potential yields a potential with few parameters for calculating the ground states of the light nuclei deuteron and helium, using two-meson (σ, ω) and three-meson (π, σ, ω) exchange. In addition, a self-consistent treatment of the semi-relativistic nucleon wave function in the nuclear state proved to be of great importance in the calculations. The difference in the masses of the σ and ω mesons does not seriously change the main aspects of the relativistic or semi-relativistic interaction, providing an average potential from the cancellation between the repulsive meson (ω) and the attractive meson (σ), in conjunction with a weak long-range effect (π). This work, using the OBEP in the Dirac-Hartree-Fock equation, is closely related to other recent approaches based on different formalisms, which tend to support this direction. The ground-state energies of the 2H and 4He nuclei are successfully determined, which encourages extending the approach to more massive nuclei. The nuclear properties emerge clearly in our attempt to include more than two mesons in the description of the NN interaction through our potential. The SPED function gives both better shapes for the potential and better values for the energies. We hope that our potential will serve as a basis for the NN interaction over different energy ranges in future work.
V. CONFLICT OF INTEREST
The authors declare that there is no conflict of interests regarding the publication of this paper.
Appendix A: Wave function with the Clebsch-Gordan coefficient
The wave functions for two nucleons i and j have a form with Clebsch-Gordon coefficients.
Where(l) is the orbital angular momentum, s γ is the spin, the total angular momentum j α = l α +s α , j γ = l γ +s γ , M α = m lα + m sα in which m lα is the projection of orbital quantum number ,m sα is the projection of spin quantum number , M γ = m lγ + m sγ andP Tα is the function of isotopic spin. The two wave functions are not connected and depend on r i ,r j so, the two wave functions need to be connected φ α (r i )φ γ (r j )| = m lα ms α m lγ ms γ λµ (l α s α m lα m sα |j α M α )(l γ s γ m lγ m sγ |j γ M γ ) (l α l γ m lα m lγ |λµ) φ nαlαm lα (r i )φ nγ lγ m lγ (r j )| χ 1/2 ms α χ 1/2 ms γ | P TαPTγ | (A.2) With λ = l α + l γ and µ = m lα + m lγ , we can change the special coordinates for each wave functions to become one wave, that depends on relative mass and center of mass.
where $\langle n_\alpha l_\alpha\, n_\gamma l_\gamma | N L\, n l \rangle$ is the Talmi-Moshinsky bracket, NL labels the total center-of-mass quantum numbers and nl the total relative ones. The wave function φ_{NLnl}(r, R) can be split into relative and center-of-mass parts, where L is the total orbital quantum number of the center of mass, l the total orbital quantum number in relative coordinates, and S = s_i + s_j the total spin. The spin functions and the isospin functions have to be coupled in an analogous way.
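As a small numerical aid for the angular-momentum coupling in Eq. (A.2), the following sketch evaluates Clebsch-Gordan coefficients with sympy, taking the coupling of two spin-1/2 nucleons as an example; the values printed are the standard textbook ones.

```python
from sympy import Rational
from sympy.physics.quantum.cg import CG

half = Rational(1, 2)

# CG(j1, m1, j2, m2, j3, m3) is the coefficient <j1 m1; j2 m2 | j3 m3>.
# Two spin-1/2 nucleons with m1 = +1/2, m2 = -1/2 coupled to S = 1, M = 0:
print(CG(half, half, half, -half, 1, 0).doit())   # -> sqrt(2)/2

# Completeness over the coupled basis (S = 0 and S = 1): squares sum to 1.
total = sum(CG(half, half, half, -half, j, 0).doit()**2 for j in (0, 1))
print(total)                                      # -> 1
```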
Appendix B: The Derivation of OBEP through the exchange of two mesons
According to the relation between φ and χ in Eq. (12), one obtains Eq. (B.1). Defining the momenta of the two nucleons (i, j) as $p_i = \tfrac{1}{2}p_R + p_r$, $p_j = \tfrac{1}{2}p_R - p_r$, with $p_i = p'_i$, $p_j = p'_j$ [34, 44], and substituting these relations into Eq. (B.1): since J(r) depends on the relative distance r and not on R, it commutes easily with the center-of-mass operators. Writing $p_r = p$ and using $(\sigma_i \cdot p_r)(\sigma_i \cdot p_r) = p_r^2$, we obtain the two-meson exchange potential. We will also apply some important relations [61].
Including these relations in the potential equation yields the final form of the OBEP.
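The Pauli-matrix relations invoked above can be checked numerically; the following sketch verifies the identity (σ·a)(σ·b) = (a·b) 1 + i σ·(a×b), of which (σ_i·p_r)(σ_i·p_r) = p_r² is the special case a = b.

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

# (sigma.a)(sigma.b) = (a.b) I + i sigma.(a x b)
lhs = np.einsum('i,ijk->jk', a, sigma) @ np.einsum('i,ijk->jk', b, sigma)
rhs = np.dot(a, b) * np.eye(2) + 1j * np.einsum('i,ijk->jk', np.cross(a, b), sigma)
print(np.allclose(lhs, rhs))  # True; with a = b = p this gives (sigma.p)^2 = p^2
```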
Aging-Relevant Metabolite Itaconate Inhibits Inflammatory Bone Loss
Progressive bone loss during aging makes osteoporosis one of the most common and life-impacting conditions in geriatric populations. Bone homeostasis is maintained through persistent remodeling mediated by bone-forming osteoblasts and bone-resorbing osteoclasts. Inflammaging, a condition characterized by increased pro-inflammatory markers in the blood and other tissues during aging, has been reported to be associated with skeletal stem/progenitor cell dysfunction, which results in impaired bone formation. However, the role of age-related inflammation and metabolites in the regulation of osteoclasts remains largely unknown. In the present study, we observed a dichotomous phenotype of the anti-inflammatory metabolite itaconate in response to inflammaging: itaconate is upregulated in macrophages during aging but is less responsive to RANKL stimulation in aged macrophages. We confirmed the inhibitory effect of itaconate on osteoclast differentiation and activation, and further verified the rescuing role of itaconate in a lipopolysaccharide-induced inflammatory bone loss animal model. Our findings reveal that itaconate is a crucial regulatory metabolite during inflammaging that inhibits osteoclasts to maintain bone homeostasis.
INTRODUCTION
Osteoporosis is a common age-related skeletal disorder characterized by decreased bone mineral density and changes in bone micro-architecture. These changes lead to a decline in bone strength and contribute to bone fractures, which can be life-threatening for geriatric populations (1). Bone is an active tissue that undergoes persistent remodeling. The maintenance of bone homeostasis relies on the balance between osteoblast-mediated bone formation and osteoclast-mediated bone resorption (2). Osteoblasts derive from skeletal stem/progenitor cells, whereas osteoclasts derive from hematopoietic monocyte/macrophage lineages (3,4). Due to age-related loss of stemness and/or changes in the microenvironment, the development and activity of osteoblasts and osteoclasts are largely remodeled during aging, leading to progressive bone loss and the onset of osteoporosis (5,6).
The immune system serves as the most important protection of an organism, resisting foreign pathogens and eliminating injured or senescent autologous cells to maintain bodily homeostasis. Both the innate and adaptive immune systems undergo remarkable changes during aging. The most outstanding feature of innate immune system change is low-grade stimulation at the basal level, together with immune incompetence when a specific reaction is needed (7,8). Inflammaging, an emerging concept describing chronic, sterile, low-grade inflammation during aging, has been shown to contribute to the pathogenesis of various age-related diseases (9,10). In the skeletal system, aging-induced circulating pro-inflammatory factors impair bone regeneration via a decreased number of skeletal stem/progenitor cells and reduced osteogenic function (6). The role of inflammaging in regulating bone-resorbing osteoclasts remains largely unknown.
Macrophages lie at the frontline of the immune response and play crucial roles in both the innate and adaptive immune systems. Accumulation of pro-inflammatory macrophages and the release of cytokines in tissues contribute significantly to the inflammaging process (11,12). Metabolic reprogramming of the tricarboxylic acid cycle from oxidative phosphorylation to glycolysis is a hallmark of macrophage activation, generating endogenous metabolites that regulate the inflammatory response (13,14). Itaconate has been identified as one of the most highly induced metabolites during macrophage activation, playing an inhibitory role in inflammation (15,16). Bone-resorbing osteoclasts are derived from monocyte/macrophage lineages and governed by pro-inflammatory cytokines (3). Itaconate may therefore be involved in regulating bone homeostasis, especially bone resorption, during aging.
Here, we investigated the metabolic remodeling of macrophages during aging, provided insights into the regulatory role of macrophage derived metabolite itaconate in osteoclastogenesis and lipopolysaccharides (LPS) induced inflammatory bone loss.
Osteoclast Differentiation and Function
Bone marrow derived macrophages (BMMs) were isolated as previously described (17). Briefly, femurs and tibias from C57BL/6 mice were carefully dissected and rinsed in cold PBS before the bone marrow was flushed out with α-MEM medium containing 30 ng/ml recombinant mouse M-CSF at day 0. After being left in a 10-cm dish for 16 h in a 37°C incubator supplied with 5% CO2, supernatants with unattached cells were transferred to a new 10-cm dish at day 1 and cultured for 2 more days to collect attached BMMs before a medium change at day 3. BMMs were harvested with trypsin and replated for in vitro experiments at day 4. BMMs isolated from 2-month-old mice were used as young BMMs, and BMMs isolated from 2-year-old mice were used as aged BMMs for the in vitro experiments in this study.
For osteoclast differentiation, BMMs were digested and plated in 96-well plates at a density of 20,000 cells per well; complete α-MEM medium containing 30 ng/ml recombinant mouse M-CSF and 100 ng/ml recombinant mouse RANKL was applied to the cells, with daily medium changes until the end point of each experiment. RAW264.7 cells were cultured in complete DMEM medium containing 75 ng/ml RANKL for differentiation.
A TRAP staining kit was purchased from Sigma, and staining was performed following the manufacturer's protocol after PFA fixation to identify mature osteoclasts. Multinuclear (≥3 nuclei) cells with positive TRAP staining were considered mature osteoclasts.
F-actin ring and pit formation assays were performed to analyze the function of mature osteoclasts as previously described (17). BMMs were plated on collagen-coated plates for primary differentiation in α-MEM medium containing 30 ng/ml M-CSF and 100 ng/ml RANKL for 6 days before being digested with collagenase and replated in Corning osteo assay strip wells. Cells were cultured in α-MEM medium containing M-CSF and RANKL with the indicated treatments for 3 more days before being fixed for immunofluorescent staining and pit analysis. F-actin tracker green (Invitrogen) was applied to the wells and incubated at 25°C for 1 h after 4% PFA fixation and subsequent 0.1% Triton X-100 permeabilization. Five PBS washes were performed to remove nonspecific binding of the actin tracker, followed by 5 min of DAPI staining to label the nuclei. F-actin rings were imaged with a Nikon microscope, and the total number per well was counted regardless of size. The wells were then bleached to remove the cells for pit analysis by quantifying the resorbed area in each well.
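A hypothetical sketch of the pit-quantification step described above: threshold a grayscale image of the bleached well and report the fraction of resorbed area. The file name and the assumption that pits appear darker than the intact mineral surface are illustrative, not part of the original protocol.

```python
import numpy as np
from skimage import io, filters

# Load a grayscale image of a bleached osteo assay well ('well.png' is a
# placeholder file name) and segment it with Otsu's threshold.
img = io.imread('well.png', as_gray=True)
thresh = filters.threshold_otsu(img)

# Assumption: resorbed pits appear darker than the remaining mineral surface.
resorbed = img < thresh
print(f"resorbed area: {100 * resorbed.mean():.1f}%")
```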
Quantitative Real-Time PCR and Western Blotting
Cells for quantitative real-time PCR analysis were lysed with Trizol Reagent purchased from Invitrogen, and RNA was extracted following the manufacturer's protocol. The extracted RNA was then reverse-transcribed with the RevertAid First Strand cDNA Synthesis Kit (Thermo Scientific, Waltham, MA, USA). cDNA was used for real-time qPCR with the KAPA SYBR FAST qPCR Kit Master Mix (Kapa Biosystems, Hallandale, FL, USA). The following primers were used in real-time PCR to detect the indicated genes: GCGAACGCTGCCACTCA and ATCCAGGCTTGGAAGGTC for mouse IRG1; CTCCCACTCTTCCACCTTCG and TTGCTGTAGCCGTATTCATT for mouse GAPDH; GAAGAAGACTCACCAGAAGCAG and TCCAGGTTATGGGCAGAGATT for mouse CTSK; TCCTTGCAATGTGGATGTTT and CGTCCTTGAAGA0AATGCAGA for mouse MMP9; TCTTCCGAGTTCACATCCC and GACAGCACCATCTTCTTCC for mouse NFATc1; GATGCCAGCGACAAGAGGTT and CATACCAGGGGATGTTGCGAA for mouse TRAP. The relative mRNA levels of target genes were calculated by the 2^(−ΔΔCT) method, with GAPDH as the internal control, and normalized to the control group (18).
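A minimal sketch of the 2^(−ΔΔCT) calculation cited above, with made-up Ct values for one target gene; GAPDH is the internal control and the first group serves as the reference.

```python
import numpy as np

# Hypothetical Ct values (rows: control, treated; columns: replicates).
ct_target = np.array([[24.1, 24.3, 24.0],
                      [22.0, 21.8, 22.3]])
ct_gapdh  = np.array([[18.0, 18.1, 17.9],
                      [18.1, 18.0, 18.2]])

d_ct  = ct_target - ct_gapdh        # dCt: normalize to the GAPDH control
dd_ct = d_ct - d_ct[0].mean()       # ddCt: normalize to the control group
rel_expr = 2.0 ** (-dd_ct)          # relative expression (fold change)

print(rel_expr.mean(axis=1))        # control ~1, treated > 1 in this example
```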
Cells for western blotting were lysed in RIPA reagent from Invitrogen supplemented with protease and phosphatase inhibitors. Cell lysates were sonicated and spun at 12,000 g for 30 min at 4°C to remove cell debris, then boiled with SDS sample buffer and used for western blotting following CST's protocol. The antibodies listed above were used to detect the target proteins.
Quantification of protein level was performed with Gel-Pro Analyzer software with at least 3 biological repeats.
Animals and LPS Induced Inflammatory Bone Loss Model
The use of animals and the design of the animal experiments were approved by the Institutional Animal Care and Use Committee of Tongji Hospital. All animals were supplied by the University Laboratory Animal Center. An LPS-induced inflammatory bone loss model was used for the in vivo experiments. Twenty-four 8-week-old male C57BL/6 mice were randomly assigned to four groups with n = 6 in each group: Sham+Veh, Sham+DI, LPS+Veh and LPS+DI. No blinding was used during the experiment. LPS at 5 mg/kg (LPS groups) or an equal amount of PBS (Sham groups) was injected subcutaneously over the middle cranial suture on days 1 and 4; 30 mg/kg dimethyl itaconate (DI groups) or an equivalent volume of PBS (Veh groups) was given daily from day 2 to day 11 by intraperitoneal injection. The percentage of resorption area was calculated from reconstructed CT scans, and the TRAP-positive area per section in paraffin slides was used to evaluate osteoclast activity.
Micro-Computed Tomography (µCT) Imaging
The skulls of mice from the animal experiments were scanned with a vivaCT40 µCT instrument (Scanco Medical, Bassersdorf, Switzerland); bone signals were acquired at 100 kV and 98 µA, with a resolution of 10.5 µm. Three-dimensional images were reconstructed and analyzed with the built-in software.
RNA-Sequencing of BMMs
BMMs were cultured in α-MEM in the absence of FBS for 12 h, then switched to complete medium supplied with RANKL (100 ng/mL) and DI (20 µM) or vehicle for 24 hours. Cells for RNA sequencing were lysed with Trizol Reagent and submitted to Shanghai Biotechnology Corporation for downstream processing. Briefly, total RNA was extracted and its RIN number checked to verify RNA integrity. Qualified total RNA was further purified with RNAClean XP and the RNase-Free DNase Set. Libraries were prepared with the VAHTS mRNA-seq v2 Library Prep Kit for Illumina and then sequenced with a paired-end protocol; the average amount of raw base data was 6 Gb.
Statistical Analysis
All in vitro experiments were performed independently with at least three biological replicates, and the results are presented as means ± standard error of the mean (SEM). The animal study was performed under the ARRIVE guidelines. The sample size, randomization, blinding, outcome measures and statistical methods are described in the section "Animals and LPS Induced Inflammatory Bone Loss Model". One-way ANOVA followed by Tukey post hoc corrections was used for multigroup comparisons. A two-tailed Student's t test was used for comparisons between two groups. In all analyses, P < 0.05 was taken to indicate statistical significance.
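For illustration, the statistical workflow described above can be sketched as follows with simulated data; the group names mirror the four animal groups, but all values are invented.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {"Sham+Veh": rng.normal(10, 2, 6), "Sham+DI": rng.normal(10, 2, 6),
          "LPS+Veh":  rng.normal(18, 2, 6), "LPS+DI":  rng.normal(12, 2, 6)}

# One-way ANOVA across the four groups.
f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={f:.2f}, p={p:.4f}")

# Tukey post hoc correction for all pairwise comparisons.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Two-tailed Student's t test for a two-group comparison.
t, p2 = stats.ttest_ind(groups["LPS+Veh"], groups["LPS+DI"])
print(f"t test LPS+Veh vs LPS+DI: t={t:.2f}, p={p2:.4f}")
```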
Aging Associated Osteoclast Activation and Metabolism Remodeling
Osteoporosis is an age-related bone disorder that can result from decreased bone formation and/or increased bone resorption. In this study, we focused on the osteoclast and its precursor, the macrophage, during aging. In agreement with previous reports (19, 20), we observed an age-related increase in osteoclastogenesis activity between BMMs isolated from 2-month-old and 2-year-old C57BL/6 male mice in our in vitro experiments, indicated by faster osteoclast formation and a greater number of osteoclasts formed (Figures 1A, B).
Aging has been shown to play an important role both in skewing the osteoclast precursor pool and in promoting pro-inflammatory cytokine release in bone marrow niches (20). In response to inflammation, macrophages can also initiate metabolic reprogramming of the tricarboxylic acid cycle from oxidative phosphorylation to glycolysis to generate an anti-inflammatory function. To confirm the metabolic remodeling of macrophages during aging, we first tested the IRG1/itaconate metabolic pathway. Itaconate is produced during metabolic remodeling, catalyzed by the IRG1-encoded enzyme cis-aconitate decarboxylase, and the IRG1 gene expression level has been shown to be an indicator of the itaconate level (21). We compared IRG1 expression between BMMs isolated from young and aged mice and observed a significantly increased IRG1 expression level in aged BMMs, indicating a metabolic remodeling status (Figure 1C).
We further wondered whether similar metabolic remodeling happens during RANKL-induced osteoclastogenesis. We treated primary BMMs as well as the macrophage cell line RAW264.7 with RANKL for different time periods. The IRG1 gene expression level increased rapidly to a peak after 12 h of RANKL treatment and then gradually decreased to the basal level after 48 h (Figures 1D, E). Interestingly, when comparing the response of IRG1 expression to RANKL stimulation between young and aged BMMs, we noticed that young BMMs generated higher IRG1 expression levels at all time points tested (Figure 1F).
Together, these results suggested that BMMs undergo metabolic remodeling both during aging and during RANKL-induced osteoclastogenesis, but that the IRG1/itaconate metabolic pathway is more responsive in young BMMs. We therefore suspected that the different basal and stimulated levels of the IRG1/itaconate pathway in young and aged BMMs may lead to distinct osteoclast phenotypes. Next, we tested whether the metabolic remodeling product itaconate plays a role in regulating osteoclastogenesis and osteoclast activity.
Itaconate Inhibits RANKL Induced Osteoclast Differentiation and Activation
To exclude an impact of itaconate on BMM proliferation, we performed cell proliferation and cytotoxicity assays with Cell Counting Kit 8 (CCK8). Membrane-permeable dimethyl itaconate (DI) was used in this study, as it has been shown to be the most potent itaconate derivative that stabilizes the anti-inflammatory transcription factor NRF2 and inhibits IκBζ and pro-interleukin (IL)-1β induction, as well as IL-6, IL-10 and interferon-β secretion (22-24). In the presence of DI at concentrations up to 50 µM, no reduction in cell counts was observed in BMMs (Figure 1G).
Several studies have indicated that exogenous DI is not converted directly to endogenous itaconate; however, exogenous DI can increase endogenous itaconate accumulation in LPS-stimulated macrophages (22, 25, 26). To test whether exogenous DI can regulate the biosynthesis of endogenous itaconate, we performed qPCR for IRG1, the gene encoding cis-aconitate decarboxylase, the enzyme responsible for itaconate biosynthesis. In resting BMMs, IRG1 expression was not significantly affected by DI treatment, in line with previous studies. Interestingly, upon RANKL stimulation, IRG1 expression was significantly downregulated by DI treatment in both young and aged BMMs (Figures 1H, I). It therefore seems unlikely that DI increases the endogenous biosynthesis of itaconate through IRG1 expression in this case. Whether exogenous DI occupies the endogenous itaconate degradation pathway and thereby prevents the degradation of endogenous itaconate remains unknown, but this could potentially explain the itaconate accumulation upon stimulation in the absence of any effect on itaconate biosynthesis.
To investigate whether the exogenous addition of itaconate could regulate osteoclast formation, we treated both the macrophage cell line RAW264.7 and primary BMMs with itaconate during RANKL-induced osteoclast differentiation. TRAP staining showed that itaconate treatment significantly inhibited osteoclast differentiation in a dose-dependent manner in both RAW264.7 cells and BMMs (Figures 2A-D). At a concentration of 20 µM, osteoclast formation was completely blocked in the RAW264.7 cell line. The inhibitory effect of itaconate on osteoclast formation was further verified by qPCR, which showed that the mRNA levels of both the osteoclast master transcription factor NFATc1 and the osteoclast marker genes CTSK, MMP9 and TRAP were downregulated in the presence of itaconate, following the same trend as the TRAP staining (Figures 2E-H).
To further study the potential effect of itaconate on osteoclast function, F-actin ring staining and bone resorption pit formation analysis were performed on mature osteoclasts. BMMs were induced with RANKL to form mature osteoclasts, which were then plated into Corning osteo assay strip wells and treated with different concentrations of itaconate. F-actin ring staining showed that the formation of the F-actin ring was impaired by itaconate treatment at 5 µM (Figures 3A, B). Quantification of the resorption pits showed that itaconate had a dose-dependent inhibitory effect on osteoclast resorption activity (Figures 3C, D).
These in vitro results revealed that the aging- and RANKL-induced metabolite itaconate serves as a negative regulator of osteoclast differentiation and resorption activity. We next tested whether itaconate could prevent inflammatory bone loss in vivo.
Itaconate Attenuates LPS Induced Inflammatory Bone Loss In Vivo
Lipopolysaccharide (LPS) produced by bacteria has been identified as a key mediator of chronic inflammation and has been widely used as an inducer in inflammatory bone loss models (27). We tested whether itaconate treatment could rescue LPS-induced inflammatory bone loss in the mouse model. LPS was given subcutaneously over the middle cranial suture, and itaconate was given by intraperitoneal injection. µCT scanning and reconstructed images of skulls were used to evaluate the severity of bone loss. We observed significant bone resorption along the middle cranial suture in skulls from the LPS+Veh group, whereas DI treatment effectively rescued the LPS-induced bone resorption in the LPS+DI group (Figure 4A). Quantification of the percentage of resorption area showed that systemic administration of itaconate protected against LPS-induced inflammatory bone loss in vivo (Figure 4B). The inhibitory effect of itaconate on inflammatory bone loss was further confirmed by TRAP staining of sectioned skull slides, which showed that DI treatment decreased the percentage of TRAP+ area and the osteoclast formation induced by LPS (Figures 4C, D).
Potential Mechanisms of Inhibitory Effect of Itaconate on Osteoclast
Itaconate has been reported as an anti-inflammatory metabolite in macrophages, acting by activating Nrf2 and inhibiting COX2 expression (24, 28). To explore the mechanism underlying itaconate-regulated BMMs during osteoclastogenesis, two key signaling pathways related to osteoclast differentiation were assessed by western blot. The results showed that itaconate inhibits activation of the NF-κB and MAPK pathways by inhibiting phosphorylation of IKKs, IκBα, P65, JNK and P38, but not of ERK (Figures 5A-C).
RNA sequencing of DI- or vehicle-treated BMMs during osteoclastogenesis was performed to reveal potential mechanisms. Differential expression analysis highlighted a significant alteration of the gene profile upon DI treatment, as shown by the volcano plot (Figure 5D). From the volcano plot, we selected 6 differentially expressed genes that might be involved in skeletal homeostasis and present them in a heatmap: 5 downregulated genes (Akap6, Trim30c, Ocstamp, Scin and Phex) and 1 upregulated gene (Fstl3) (Figure 5E). The expression of these representative genes was then verified by qPCR (Figure 5F). KEGG classification enrichment analysis of the differential genes indicated that genes related to signal transduction, the immune system and metabolism were enriched after DI treatment (Figure 5G). KEGG pathway enrichment analysis highlighted that the metabolism of xenobiotics by cytochrome P450, glutathione metabolism and drug metabolism-cytochrome P450 pathways were heavily involved (Figure 5H).
DISCUSSION
In the present study, we focused on the metabolic remodeling of osteoclast precursor macrophages during aging and on the impact of these changes on osteoclast differentiation and bone homeostasis. We observed that itaconate accumulates in aged macrophages, indicating an increased basal inflammation level. Itaconate is responsive to RANKL stimulation during osteoclastogenesis in both bone marrow derived primary macrophages and RAW264.7 cells; interestingly, this response is significantly impaired in aged macrophages. The inhibitory role of itaconate in osteoclast differentiation and activation was confirmed in vitro, and the rescue of LPS-induced inflammatory bone loss by itaconate was verified in vivo. Mechanistically, itaconate treatment impaired NF-κB and MAPK signaling, and itaconate-induced differentially expressed genes were enriched in the KEGG classifications of signal transduction and the immune system. The IRG1/itaconate metabolic pathway plays a central role in regulating macrophage activity and links metabolism to immunity (21, 24). Our data revealed a novel IRG1/itaconate expression pattern in young and aged macrophages, which indicates that this metabolic pathway is also involved in the aging process. On the one hand, the higher basal level of IRG1/itaconate observed in aged macrophages may be associated with the increased basal level of inflammation during aging, also known as inflammaging (9, 29). On the other hand, the less responsive IRG1/itaconate metabolic pathway upon stimulation in aged macrophages could be associated with age-related immunosenescence (30, 31). Our findings reveal the dichotomous immune status of aged macrophages. However, this phenomenon could also be due to the heterogeneity of aged macrophages, which has been observed during efferocytosis (32). We therefore anticipate that further characterization of aged macrophages by single-cell sequencing will facilitate the understanding of age-related macrophage remodeling.
Recent studies have identified two osteoclast subtypes in bone marrow: vessel-associated osteoclasts (VAOs), which are highly associated with bone endothelial cells and regulate endochondral ossification, and the classical bone-associated osteoclasts (BAOs) (33). Osteoclasts are derived from the macrophage/monocyte lineage in bone marrow. The bone marrow provides supportive microenvironments for hematopoiesis, osteogenesis and angiogenesis, and the interactions of these fundamental processes maintain skeletal homeostasis. Upon aging, inflammation and other stress factors, remodeling of the bone marrow niches can lead to impaired or imbalanced skeletal homeostasis (34-36). A shift of the predominant osteoclast subtype from VAOs in early developing bones to BAOs in aging bones has been observed (33). In our study, we observed distinct osteoclastogenesis potentials of young and aged bone marrow macrophages; however, we did not verify the subtype and activity of these osteoclasts. It is possible that young and aged BMMs tend to differentiate into distinct osteoclast subtypes and exhibit different activities in terms of resorbing cartilage or bone matrix. Metabolic remodeling has been reported to be essential for osteoclast formation and activity (37, 38). Our data showed that the IRG1/itaconate metabolic pathway is involved in osteoclast formation and activity, supported by the upregulated IRG1 expression during osteoclastogenesis. We further tested the impact on osteoclasts of itaconate, the metabolite produced by the IRG1-encoded enzyme cis-aconitate decarboxylase. Interestingly, itaconate exhibited an inhibitory effect on osteoclast differentiation and bone resorption, and the in vivo data showed that itaconate could rescue LPS-induced inflammatory bone loss.
Given the inhibitory role of itaconate in regulating osteoclasts, we propose that the increased osteoclastogenesis activity of aged macrophages could be due to an impaired IRG1/itaconate response to chronic inflammation.
Dimethyl itaconate (DI) was used in this study to explore the role of endogenous itaconate in regulating osteoclasts, as itaconic acid is considered a negatively charged polar metabolite with poor cell permeability. However, this derivative may not recapitulate exactly the same effects as endogenous itaconate. In a recent comparative study evaluating unmodified itaconate and a panel of commonly used itaconate derivatives, the authors showed that neither dimethyl itaconate (DI), 4-octyl itaconate (4OI) nor 4-monoethyl itaconate (4EI) is converted to intracellular itaconate, whereas exogenous unmodified itaconic acid can actually enter macrophages. Moreover, dimethyl itaconate and 4-octyl itaconate induce a stronger electrophilic stress response, which correlates with their immunosuppressive phenotype (22). The similarities and differences between dimethyl itaconate and endogenous itaconate in terms of regulatory mechanism need to be addressed further. To better establish the role of endogenous itaconate, unmodified itaconate might be a better exogenous form to use in future studies. While other studies have reported metabolic remodeling during osteoclast differentiation (37, 39), to our knowledge this is one of the first studies to show that the metabolite itaconate is involved in inflammaging of the skeletal system and plays a negative role in the regulation of osteoclast formation and activity.
DATA AVAILABILITY STATEMENT
The sequencing data presented in the study are deposited in the GEO database; the assigned GEO accession number is GSE206723.
ETHICS STATEMENT
The animal study was reviewed and approved by Institutional Animal Care and Use Committee of Tongji Hospital.
AUTHOR CONTRIBUTIONS
YW, SL, and LZ conceived the study, performed most of the experiments, analyzed the results, and prepared the manuscript. PC and JL performed experiments. FG, JX, WZ, and AC supervised this study. All authors contributed to the article and approved the submitted version.
FUNDING
This project was supported by Grant Number 81902262 and 81672168 from the National Natural Science Foundation of China.
Spectral Representation of Correlation Functions in two-dimensional Quantum Field Theories
The non-perturbative mapping between different Quantum Field Theories and other features of two-dimensional massive integrable models are discussed by using the Form Factor approach. The computation of ultraviolet data associated to the massive regime is illustrated by taking as an example the scattering theory of the Ising Model with boundary.
Introduction
Many two-dimensional integrable statistical models with a finite correlation length can be elegantly discussed in terms of relativistic particles in bootstrap interaction [1]. In this formulation, the key object is the elastic S-matrix that describes the scattering processes of the massive excitations in Minkowski space. Once we know the exact S-matrix of the model under analysis and the corresponding spectrum, we may proceed further and compute several quantities of theoretical interest, among them the central charge and the critical exponents of the conformal field theory arising in the ultraviolet regime [2]. The aim of this talk is to discuss some features of the structure of integrable QFT in terms of the properties of their correlation functions. The most promising method for the computation of the correlation functions turns out to be the Form Factor approach, as originally proposed in [3,4]. In the following I will try to point out the reasons for the successful application of this approach, together with several interesting properties that come out as by-products of its theoretical formulation. As a significant example of the computation of ultraviolet data in terms of a resummation of the form factors, we will consider the exact critical exponents of the energy and disorder operators in the Ising model with a boundary. The scattering theory for such a system has been recently proposed in [19].
Computation of Correlation Functions
To fully appreciate the bootstrap approach to the computation of the correlation functions, let us discuss the most common difficulties that arise in the perturbative method. Let

$$\mathcal{A} = \mathcal{A}_0 + \mathcal{A}_{min}$$

be the action of the theory, where $\mathcal{A}_0$ corresponds to a solvable QFT (e.g. a free theory, a CFT, etc.) whereas $\mathcal{A}_{min}$ defines the interacting part. For the perturbative definition of the Green functions we have the formal expression

$$G(x_1, \dots, x_n) = \sum_{k=0}^{\infty} G^{(k)}(x_1, \dots, x_n),$$

where $G^{(k)}$ is the k-th perturbative term. This expression usually suffers from several drawbacks:

• We may face, for instance, the renormalization problem, i.e. the presence of infinities that have to be handled correctly. In this case, we need to express the final computation in terms of physical quantities by means of the renormalization procedure, which is usually a painful task.
• Even assuming we were able to carry through the renormalization program, the resulting expression may have a low rate of convergence, if it converges at all! In the light of these considerations, a more efficient way to compute correlation functions is needed. This is provided by spectral representation methods, i.e. by the possibility of expressing the correlation functions as an infinite series over multiparticle intermediate states. For instance, in a QFT with a self-conjugate particle, the two-point function of an operator O(x) in real Euclidean space is given by

$$\langle O(x)\, O(0)\rangle = \sum_{n=0}^{\infty} \int \frac{d\beta_1 \cdots d\beta_n}{n!\,(2\pi)^n}\, |F_n(\beta_1, \dots, \beta_n)|^2\, \exp\Big(-mr \sum_{i=1}^{n} \cosh\beta_i\Big), \tag{2.3}$$

where r denotes the radial distance, i.e. $r = \sqrt{x_0^2 + x_1^2}$, and β the rapidity variable. Similar expressions are obtained for higher-point correlators. The functions

$$F_n(\beta_1, \dots, \beta_n) = \langle 0| O(0) |\beta_1, \dots, \beta_n\rangle$$

are the so-called form factors. Since the spectral representations are based only on the completeness of the asymptotic states, they are general expressions valid for any QFT. For integrable models, however, they become quite effective because the exact computation of the form factors reduces to finding a solution of a finite set of functional equations, as we discuss below. Other advantages of the spectral representation method for the correlation functions may be summarized as follows: 1. It deals with physical quantities, i.e. there is no need for renormalization, and the coupling-constant dependence is taken into account at all orders by a closed expression for the form factors.
2. The representation (2.3), and the analogous expressions for other correlators, has a very fast rate of convergence for all values of the scaling variable (mr). This is expected for large values of (mr), which are dominated by the lowest massive state, but the surprising result is that in many cases the series is saturated by the lowest terms also for small values of (mr). This is due to a threshold suppression phenomenon [7], which we present below.
3. The two-point correlation functions (2.3) have the form of a grand-canonical partition function of a one-dimensional fictitious gas, with the rapidities β_i playing the role of coordinates and with a coordinate-dependent activity. This observation is extremely useful for recovering ultraviolet data of the theory, such as the anomalous dimensions of the fields, in terms of massive quantities [5,6].
Let us now discuss the main properties of the form factors for 2-D integrable massive field theories, which are the crucial quantities entering the spectral representation of the correlation functions. If not explicitly stated otherwise, we consider for simplicity the case of a theory with only one self-conjugate particle. For local scalar operators O(x), relativistic invariance implies that the form factors F_n are functions of the rapidity differences β_ij. Except for the poles corresponding to one-particle bound states in the various sub-channels, we expect the form factors F_n to be analytic inside the strip 0 < Im β_ij < 2π.
The form factors of a hermitian local scalar operator O(x) satisfy a set of equations, known as Watson's equations, which for integrable systems assume a particularly simple form [3,4]:

$$F_n(\beta_1, \dots, \beta_i, \beta_{i+1}, \dots, \beta_n) = S(\beta_i - \beta_{i+1})\, F_n(\beta_1, \dots, \beta_{i+1}, \beta_i, \dots, \beta_n),$$
$$F_n(\beta_1 + 2\pi i, \beta_2, \dots, \beta_n) = F_n(\beta_2, \dots, \beta_n, \beta_1). \tag{2.8}$$

In the case n = 2, eqs. (2.8) reduce to

$$F_2(\beta) = S(\beta)\, F_2(-\beta), \qquad F_2(i\pi - \beta) = F_2(i\pi + \beta). \tag{2.9}$$

It has been shown in [4] that eqs. (2.8), together with eqs. (2.12) and (2.14) below, can be regarded as a system of axioms that defines the whole local operator content of the theory. The general solution of Watson's equations can always be brought into the form

$$F_n(\beta_1, \dots, \beta_n) = K_n(\beta_1, \dots, \beta_n) \prod_{i<j} F_{min}(\beta_{ij}), \tag{2.10}$$

where $F_{min}(\beta)$ satisfies (2.9), is analytic in 0 ≤ Im β ≤ π, has no zeros in 0 < Im β < π, and converges to a constant value for large values of β. These requirements determine this function uniquely, up to a normalization. The remaining factors $K_n$ then satisfy Watson's equations with $S_2 = 1$, which implies that they are completely symmetric, 2πi-periodic functions of the β_i. They must contain all the physical poles expected in the form factor under consideration and must have the correct asymptotic behaviour at large β_i. Both requirements depend on the nature of the theory and on the operator O.
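As a concrete check of eqs. (2.9), the following sketch verifies them numerically in the simplest case S(β) = −1 (the thermal Ising model), using the known two-particle form factor of the energy operator, F_2(β) ∝ −i sinh(β/2); the normalization is omitted.

```python
import numpy as np

S = lambda beta: -1.0                       # Ising S-matrix
F2 = lambda beta: -1j * np.sinh(beta / 2)   # energy-operator form factor

beta = 0.7 + 0.2j  # arbitrary point in the analyticity strip
eq1 = np.isclose(F2(beta), S(beta) * F2(-beta))              # F2(b) = S(b) F2(-b)
eq2 = np.isclose(F2(1j*np.pi - beta), F2(1j*np.pi + beta))   # crossing relation
print(eq1, eq2)  # True True
```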
Notice that one condition on the asymptotic behaviour of the form factors is dictated by relativistic invariance. In fact, for a scalar operator a simultaneous shift of the rapidity variables gives

$$F_n(\beta_1 + \Lambda, \dots, \beta_n + \Lambda) = F_n(\beta_1, \dots, \beta_n). \tag{2.11}$$

Secondly, in order to have a power-law bounded ultraviolet behaviour of the two-point function of the operator O(x) (the case we consider here), we have to require that the form factors behave asymptotically at most as exp(kβ_i) in the limit β_i → ∞, with k a constant independent of i. This means that, once we extract from $K_n$ the denominator that gives rise to the poles, the remaining part has to be a symmetric function of the variables $x_i \equiv e^{\beta_i}$ with a finite number of terms, i.e. a symmetric polynomial in the x_i. The pole structure of the form factors induces a set of recursive equations for the F_n which are of fundamental importance for their explicit determination. As functions of the rapidity differences β_ij, the form factors F_n possess two kinds of simple poles.
The first kind of singularity (which does not depend on whether or not the model possesses bound states) arises from kinematical poles located at β_ij = iπ. These are related to the one-particle pole in a subchannel of three-particle states which, in turn, corresponds to a crossing process of the elastic S-matrix. The corresponding residues are computed by LSZ reduction [4] and give rise to a recursive equation between the n-particle and the (n+2)-particle form factors:

$$-i \lim_{\tilde\beta \to \beta} (\tilde\beta - \beta)\, F_{n+2}(\tilde\beta + i\pi, \beta, \beta_1, \dots, \beta_n) = \Big(1 - \prod_{i=1}^{n} S(\beta - \beta_i)\Big)\, F_n(\beta_1, \dots, \beta_n). \tag{2.12}$$

The second type of poles in the F_n only arises when bound states are present in the model. These poles are located at the values of β_ij in the physical strip which correspond to the resonance angles. Let $\beta_{ij} = i u^k_{ij}$ be one such pole, associated with the bound state A_k in the channel A_i × A_j. For the S-matrix we have

$$-i \lim_{\beta \to i u^k_{ij}} (\beta - i u^k_{ij})\, S_{ij}(\beta) = \left(\Gamma^k_{ij}\right)^2, \tag{2.13}$$

where $\Gamma^k_{ij}$ is the three-particle vertex on mass-shell. The corresponding residue for the F_n is given by [4]

$$-i \lim_{\epsilon \to 0} \epsilon\, F_{n+1}(\beta + i\bar u^{\,j}_{ik} - \epsilon,\; \beta - i\bar u^{\,i}_{jk} + \epsilon,\; \beta_1, \dots, \beta_{n-1}) = \Gamma^k_{ij}\, F_n(\beta, \beta_1, \dots, \beta_{n-1}), \tag{2.14}$$

where $\bar u^{\,c}_{ab} \equiv \pi - u^c_{ab}$. This equation then establishes a recursive structure between the (n+1)- and n-particle form factors. Important properties of the form factors are pointed out by the following observations: 1. Notice that the functional and recursive equations satisfied by the form factors do not refer to any specific operator of the theory! This opens the possibility of classifying the operator content of a massive QFT by computing the independent solutions of these equations and associating them with the corresponding operators, as suggested and investigated in [6,10,11]. The structure of the local operators in integrable QFT has been analysed from other points of view in [12,13,14].
2. The computation of the form factors depends only on the S-matrix. This implies that if an S-matrix S(λ) interpolates between two (or several) scattering matrices of different QFTs as the parameter λ is varied, there should be a corresponding mapping between the operator contents of the QFTs encountered along the flow. This correspondence may be difficult to establish at the perturbative level, and therefore it relies entirely on the non-perturbative effects encoded in the exact S-matrix. One of the most striking examples of this observation is the correspondence between the Sinh-Gordon model (which is Z_2 invariant) and the Bullough-Dodd model (which displays no symmetry at the perturbative level) for particular values of the coupling constants of these models [16]. Another example of the non-perturbative mapping of the operator content of two QFTs has been established for the Sinh-Gordon and Ising models [15].
3. As follows from eq. (2.9), if S(0) = −1 the two-particle form factor necessarily vanishes at threshold,

$$F_2(\beta = 0) = 0. \tag{2.15}$$

This observation is quite important since it makes it possible to understand the fast rate of convergence of the spectral series also at short-distance scales, as shown in several significant examples discussed in the literature [5,7,8,9]. In fact, the correlation functions are saturated by the first matrix elements and, for any practical purpose, their computation requires relatively little analytic work. The argument goes as follows [7]. Let us consider the two-point function in momentum space,

$$G(p^2) = \int_0^{\infty} ds\, \frac{\sigma(s)}{s + p^2}.$$

The spectral function σ(s) receives a new contribution each time it passes through a threshold. If the matrix elements |F_N|² were constants, the discontinuity of σ(s) at the threshold s = (Nm)² would be given by the N-particle phase-space factor, which in two dimensions is singular at threshold. However, as a consequence of eqs. (2.10) and (2.15), the spectral function has a much softer behaviour at the thresholds, and therefore the values of the correlation functions are saturated by the first terms of the spectral representations even at large values of the momenta.
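The threshold suppression can be seen explicitly in a numerical evaluation of the two-particle term of (2.3) for the Ising energy operator, which couples only to two-particle states; normalization conventions for the n! and 2π factors vary in the literature, so the sketch below should be read up to an overall constant.

```python
import numpy as np
from scipy.integrate import dblquad

# Two-particle term of the spectral series, with F2(b1, b2) = -i sinh((b1-b2)/2).
# Since S(0) = -1, |F2|^2 vanishes at equal rapidities: threshold suppression.
def integrand(b1, b2, mr):
    return (np.sinh((b1 - b2) / 2) ** 2
            * np.exp(-mr * (np.cosh(b1) + np.cosh(b2))))

def c2(mr):
    val, _ = dblquad(integrand, -10, 10, -10, 10, args=(mr,))
    return val / (2 * (2 * np.pi) ** 2)   # 1/n! with n = 2 and d beta/(2 pi)

for mr in (0.5, 1.0, 2.0):
    print(mr, c2(mr))
```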
One-point Functions in the Ising Model with Boundary
A relevant aspect of a QFT is the interpolation between its infrared and ultraviolet regimes. In particular, it is extremely important to relate the most significant parameters associated with the scaling behaviour in the ultraviolet regime to those that characterize the infrared properties. The example we choose to illustrate this relationship is the Ising model with a boundary. The conformal field theory describing the scaling behaviour at the fixed point of this model has been discussed in [17,18], whereas the breaking of conformal invariance due to the presence of a finite correlation length has recently been formulated in [19]. To compute the scaling dimensions in the presence of the boundary, it is sufficient to consider the one-point function of the energy operator, $\epsilon_0(t) = \langle 0| E(y,t) |B\rangle$, and the one-point function of the disorder operator, $\mu_0(t) = \langle 0| \mu(y,t) |B\rangle$, where $|B\rangle$ is the boundary state (see below). To fix the notation, t is the distance of the operators from the boundary, whereas y is their coordinate parallel to it; by translation invariance, these one-point functions depend only on t. In the high-temperature phase (the only one discussed here), these operators share the important property of coupling only to an even number of particles, which we may consider as massive Majorana fermions described by annihilation and creation operators A(β) and A†(β). The mass of the fermion field is linearly related to the deviation from the critical temperature, m = 2π(T − T_c). The important quantity we need for our computation is the boundary state $|B\rangle$ for the fixed and free boundary conditions. It has been determined in [19] and takes the squeezed-state form

$$|B\rangle = \exp\left[\int_0^{\infty} \frac{d\beta}{2\pi}\, K(\beta)\, A^{\dagger}(-\beta)\, A^{\dagger}(\beta)\right] |0\rangle ,$$

with a kernel K(β) depending on the boundary condition. The computation of the one-point functions can be done by using the form factors determined in [3,5,6]. The energy operator couples only to the two-particle state; a simple integration then gives its one-point function (3.5) in terms of the Bessel functions K_0 and K_1, and in the ultraviolet limit (mt → 0) we recover the critical exponent x = 1 of the energy operator and the universal amplitudes determined in [18]. Concerning the one-point function of the disorder operator µ(y, t), it couples to all states with an even number of particles. Using the form factors determined in [3,5,6] and a simple algebraic identity, the relevant expression can be written in the form (3.6).
The one-point function of µ(y, t) can then be expressed as a Fredholm determinant,

$$\mu_0(t) = \det\left(1 - z_\pm W_\pm\right), \tag{3.7}$$

where $z_\pm = \pm 1/2\pi$ and $W_\pm$ is the kernel of a symmetric linear integral operator,

$$W_\pm(\beta_i, \beta_j \mid t) = \frac{E_\pm(\beta_i, mt)\, E_\pm(\beta_j, mt)}{\cosh\beta_i + \cosh\beta_j}\,, \qquad E_\pm(\beta, mt) = \frac{e^{-mt\cosh\beta}}{\cosh\beta \pm 1}\,. \tag{3.8}$$

The plus sign of the above quantities refers to the fixed boundary condition, the minus sign to the free one. In both cases, $\mu_0(t)$ may be written in terms of the eigenvalues $\lambda_i$ of the integral operator and their multiplicities $a_i$ as

$$\mu_0(t) = \prod_i \left(1 - z_\pm \lambda_i\right)^{a_i}. \tag{3.9}$$

As long as (mt) is finite, the kernel is square integrable, and therefore all results valid for bounded symmetric operators apply (see, for instance, [20]). However, when (mt) → 0, the operator becomes unbounded. This mathematical problem has been studied in the literature [21]: the eigenvalues become dense in the interval (0, ∞) according to the distribution

$$\lambda(p) = \frac{2\pi}{\cosh \pi p}\,, \tag{3.10}$$

whereas, from Mercer's theorem, their multiplicity grows logarithmically as $a_i \sim \frac{1}{\pi} \ln \frac{1}{mt}$. The critical exponents of the disorder operator for the fixed and free boundary conditions are therefore given by

$$x(z_\pm) = -\frac{1}{\pi}\int_0^{\infty} dp\, \ln\!\left(1 - \frac{2\pi z_\pm}{\cosh p}\right) = -\frac{1}{8} + \frac{1}{2\pi^2}\,\arccos^2(-2\pi z_\pm). \tag{3.11}$$

Substituting the values of $z_\pm$, we recover the results obtained in [18], i.e. x = 3/8 for the fixed boundary condition and x = −1/8 for the free one.
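Eq. (3.11) can be checked directly by numerical integration; the sketch below reproduces x = 3/8 and x = −1/8 for the two boundary conditions.

```python
import numpy as np
from scipy.integrate import quad

def x_of_z(z):
    # -(1/pi) * integral_0^inf ln(1 - 2*pi*z / cosh p) dp, as in Eq. (3.11).
    val, _ = quad(lambda p: np.log(1 - 2 * np.pi * z / np.cosh(p)), 0, 60, limit=200)
    return -val / np.pi

for label, z in (("fixed", 1 / (2 * np.pi)), ("free", -1 / (2 * np.pi))):
    closed = -1 / 8 + np.arccos(-2 * np.pi * z) ** 2 / (2 * np.pi ** 2)
    print(label, x_of_z(z), closed)   # -> 3/8 (fixed) and -1/8 (free)
```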
The Effect of Labor Migration on Farmers' Cultivated Land Quality Protection
Since the reform and opening up, a large proportion of the Chinese rural labor force has transferred to urban and non-agricultural industries. Rural labor transfer not only changes the allocation of household labor between the agricultural and non-agricultural sectors but also affects the utilization of other agricultural production factors. Based on data from 818 households in three counties in northern Jiangsu province, this paper analyzes the impact of labor migration on farmers' adoption of cultivated land quality protection (CLQP) behaviors. The survey results showed that farmers' awareness of CLQP is still very weak, and the proportion of farmers adopting measures such as subsoiling, straw application, cover crops and green manures, and the complementary use of organic fertilizers is still relatively low. The empirical results showed that perennial out-migration for work constrains households' protective inputs into soil conservation, whereas local part-time farming promotes such inputs. The results also showed that farmer characteristics, farming conditions and the external environment significantly affect farmers' adoption of soil conservation practices. Based on these conclusions, the paper puts forward corresponding policy implications.
Introduction
The quantity and quality of cultivated land are closely related to national food security [1]. For a very large country with a population of 1.4 billion people, stabilizing grain production and ensuring food security are of paramount importance to the Chinese national economy and people's livelihood. Chinese agricultural production has long-standing problems such as excessive application of fertilizers, insufficient circulation of organic matter and unreasonable farming methods, which have caused a degradation of cultivated land quality. The traditional agricultural production mode also has a negative impact on the environment and ecosystem functions, and its contradiction with the ecological environment is increasingly prominent. The key to resolving this contradiction is to promote, technically, the coordinated development of intensive agriculture and sustainable agriculture [2]. Moreover, since the opening up in 1978, the rapid progress of urbanization and industrialization, and the urban expansion that has re-purposed what were formerly high-quality farmlands, have prominently conflicted with cultivated land quality protection (CLQP) [3].
Farmers, as the basic decision-making units, are the main subjects of the rural economy and the organizers of cultivated land use and protection [4,5], especially with regard to raising its quality [6]. At present, farmers' awareness of CLQP is still very weak in China, and the proportion of farmers adopting CLQP measures still lags far behind that in developed countries [7-9].
Beyond China's late start in farmland quality protection, does the exodus of the rural labor force also restrict CLQP? Previous studies provide different views. Some scholars believe that labor migration prevents farmers from adopting cultivated land quality protection measures [10,11], arguing that labor migration reduces the share of agricultural income in household income, making farmers less concerned about productivity and the sustainable use of cultivated land [12]. Other scholars believe that off-farm employment promotes farmers' adoption of conservation agriculture techniques [13,14]: because conservation agriculture techniques usually save labor input, the existence of off-farm employment opportunities raises the opportunity cost of agricultural labor, thereby helping to induce farmers to adopt labor-saving technology. In addition, some scholars believe that the negative impact of labor out-migration for work falls mainly on farmers with mostly non-agricultural income; for farmers whose income is mainly agricultural, labor out-migration promotes the adoption of sustainable agricultural technology [15]. In fact, China's rural labor force is characterized by "selective transfer" [16]. For example, in 2017, among the rural migrant labor force (excluding relocation of the whole family), 54% chose to work in other places instead of engaging in family agricultural production, while the remaining 46% chose to work in local non-agricultural industry while also attending to family agricultural production [17].
Measures of CLQP refer to all the methods that can maintain or improve the quality of cultivated land, including straw application, the complementary use of organic fertilizers, cover crop and green manure utilization, soil testing and formulated fertilization, crop rotation and other measures to improve poor cultivated land. They also include measures to improve the soil's capacity for moisture and fertility conservation, such as ditch renovation and subsoiling [18]. These CLQP techniques are resource-conserving, environmentally non-degrading, technically appropriate, and economically and socially acceptable, characteristics consistent with those of sustainable agriculture as proposed by the Food and Agriculture Organization of the United Nations (FAO) [18]. Scholars have analyzed farmers' CLQP behaviors with a variety of econometric models. Some have used logit models [19] and probit models [20] to analyze the decision-making mechanism behind farmers' choice of whether to adopt farmland quality protection measures. Others have used ordered probit models [21], the Poisson model [8] and the Heckman sample selection model [22] to analyze the factors influencing farmers' adoption of CLQP measures. However, these studies have several main deficiencies: first, the description of CLQP behavior is relatively simple, either taking a single measure as an example or lumping all measures together; second, they assume that the probability of different measures being adopted by farmers is the same; third, they ignore the potential correlation between different measures, which not only biases the estimation results but can also lead to the promotion of sub-optimal agricultural technologies. In view of this, the Mvprobit model was selected in this paper to analyze the influence of labor migration on farmers' CLQP behaviors while avoiding the above deficiencies of previous studies.
Whether the out-migrating labor force can still attend to agricultural production will inevitably lead to differences in farmers' decisions on whether to adopt farmland quality protection measures. Based on questionnaire survey data from farmers in the northern Jiangsu region, multivariate probit modeling was used to analyze the effect of labor migration on cultivated land quality protection, and a two-stage instrumental variable method was used to test the robustness of the results. The results provide new empirical evidence on the impact of migrant labor on the adoption of conservation tillage technology and an effective decision-making basis for the promotion and popularization of CLQP measures.
Sample and Data
The data used in this paper come from a questionnaire survey of rural households conducted by our research group in northern Jiangsu province from January to February 2016. The survey sample was distributed across three counties: Lianshui, Shuyang and Xiangshui. The survey collected information on farmers' production management behavior, labor migration, cultivated land utilization, and costs and benefits in 2015. The survey used a two-round sampling method. The first round used non-probability sampling to identify the survey sites: Huaian, Suqian and Yancheng were selected from the five cities of northern Jiangsu, with one county selected in each city. Among the thirty villages surveyed, twelve belonged to Lianshui county, ten to Shuyang county and eight to Xiangshui county. Random sampling was used in the second round to determine the specific respondents in each village: thirty farmers (excluding large grain growers, family farms and other new agricultural management entities) were selected randomly as survey subjects.
The survey adopted a household survey method of face-to-face conversation between investigator and farmer to complete the questionnaire. As a result, 887 questionnaires were completed. After invalid samples were removed, 818 samples remained for inclusion in this paper, an effective rate of 92.2%.
Model Specification
Assume that farmers decide whether to adopt CLQP measures based on the principle of utility maximization: if the utility of adopting CLQP measures is greater than the utility generated by not adopting them, farmers will choose to adopt them, and vice versa. The impact of labor migration on farmers' adoption of CLQP can then be defined as:

$$S^{*}_{ij} = \alpha_{0j} + \alpha_{1j} O_{ij} + \alpha_{2j} Z_{ij} + \mu_{ij}, \qquad S_{ij} = \begin{cases} 1, & S^{*}_{ij} > 0 \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$

Let $S^{*}_{ij}$, a latent variable, capture the ith farmer adopting the jth (j = 1, 2, ..., J) CLQP measure. $O_{ij}$ is a vector of observable variables that describe household labor migration, and $Z_{ij}$ is a vector of control variables that might influence the farmer's adoption of CLQP measures. $\alpha_{0j}, \alpha_{1j}, \alpha_{2j}$ are the parameters to be estimated, $\mu_{ij}$ is an error term, and $S_{ij}$ is a dichotomous variable observed as 1 if the farmer adopts the CLQP measure and 0 otherwise.
The Mvprobit model allows a given farmer to adopt multiple CLQP measures. The $\mu_{ij}$ (j = 1, 2, ..., J) follow a multivariate normal distribution with conditional mean zero, and the covariance matrix Ω can be represented as

$$\Omega = \begin{pmatrix} 1 & \rho_{12} & \cdots & \rho_{1J} \\ \rho_{21} & 1 & \cdots & \rho_{2J} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{J1} & \rho_{J2} & \cdots & 1 \end{pmatrix} \qquad (2)$$

In Equation (2), $\rho_{kj} = \rho_{jk}$ (k = 1, 2, ..., J), where $\rho_{kj}$ (k ≠ j) is the correlation coefficient of the error terms $\mu_{ik}$ and $\mu_{ij}$ in Equation (1). If $\rho_{kj} > 0$ and significant, there is complementarity between adopting measures k and j of CLQP; if $\rho_{kj} < 0$ and significant, there is substitution between adopting measures k and j [23,24].
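The latent-variable structure of Eqs. (1)-(2) can be illustrated with a small simulation; all coefficients and correlations below are invented for illustration, and actual estimation of an Mvprobit model requires simulated maximum likelihood (e.g., Stata's mvprobit routine) rather than this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 818  # sample size as in the survey

# Two migration indicators: perennial out-migration, local part-time work.
migration = rng.integers(0, 2, size=(n, 2))

# Invented coefficients (rows: subsoiling, straw application, organic fertilizer).
alpha = np.array([[-0.4, 0.3],
                  [-0.5, 0.2],
                  [-0.3, 0.4]])

# Correlated error terms: positive rho implies complementary adoption decisions.
Omega = np.array([[1.0, 0.4, 0.3],
                  [0.4, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
mu = rng.multivariate_normal(np.zeros(3), Omega, size=n)

latent = migration @ alpha.T + mu       # S*_ij
adopt = (latent > 0).astype(int)        # S_ij = 1 if S*_ij > 0
print("simulated adoption rates:", adopt.mean(axis=0))
```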
Variable and Data Description
Based on the policy documents of the former Ministry of Agriculture and related findings [9,25], this paper focuses on farmers' adoption of CLQP measures consisting of subsoiling, straw application, cover crops and green manures, and the complementary use of organic fertilizers. To reveal the differential impact of different modes of out-migration for work, this paper defines two variables: perennial out-migration for work, and local out-migration combining work and agriculture. The former means that the transferred labor force works away from home all year round and is no longer engaged in family agricultural production; the latter means that the transferred labor force is employed in local non-agricultural industry while still attending to family agricultural production.
According to the available research results, we considered the following control variables: farmer characteristics, farming conditions and the external environment. Farmer characteristics include the household head's age and education, family size and agricultural income. Many participation and adoption studies have confirmed the role of household head or decision-maker characteristics, such as age and education, in the participation decision [26,27]. By general consensus, acceptance of CLQP measures by farmers with younger household heads is higher than that of older farmers, but the role of age is ambiguous, because age as a proxy for experience may be offset by a greater reluctance to try new things, including new technologies or government-sponsored programs [19]. Higher age also means a deeper understanding of the negative effects of cultivated land quality degradation, so the likelihood of adopting CLQP measures may be higher [11]. Education is not only related to the ability to obtain and process information but is also often conducive to implementing knowledge-intensive conservation and sustainable agricultural technologies [28]. We expect education to increase farmers' ability to process information [27] and to implement new farming techniques and apply them in production practice [29,30]. At the same time, farmers with higher education levels are more likely than those with lower education levels to obtain off-farm employment opportunities, reducing the incentive to adopt measures to protect the quality of cultivated land [9]. The adoption of CLQP measures also depends to a certain extent on whether farmers have enough labor: farmers with more labor are more likely to adopt CLQP measures [11]. The proportion of agricultural income may also affect farmers' CLQP: the higher the ratio of agricultural income to household income, the more dependent farmers are on agricultural operation and the more likely they are to adopt CLQP measures [31]. Farming conditions include the scale of cultivated land, the quality of cultivated land and the convenience of irrigation. Large-scale cultivated land not only makes it easy to form economies of scale, enabling farmers to maintain or improve the fertility of cultivated land to increase crop yields, but is also conducive to the use of machinery, reducing the difficulty of adopting measures to protect the quality of cultivated land [20,27]. The higher the farmer's evaluation of cultivated land quality, the more likely the protection of cultivated land fertility is to be neglected; previous studies have found that a lack of awareness of cultivated land quality degradation has a significant negative impact on farmers' adoption of CLQP measures [9]. The convenience of irrigation can also affect farmers' CLQP behavior: Wollni et al. found that timely and effective irrigation is conducive to improving cultivated land value and promotes farmers' adoption of CLQP measures [21]. The external environment includes the availability of agricultural machinery services, agricultural technical guidance and participation in cooperatives. The availability of tillage machinery affects farmers' CLQP behavior because farmers mainly adopt some CLQP measures by purchasing agricultural machinery services; if they cannot employ protective tillage machinery, they may abandon these measures [22].
Agricultural technical guidance can enhance farmers' awareness of cultivated land quality protection and is conducive to the popularization of CLQP measures [32]. Participating in cooperatives is beneficial for saving farmers the purchasing cost of agricultural materials (such as organic fertilizer and green manure seeds) and agricultural machinery services (subsoiling and straw application both rely on machinery). It can also accelerate the diffusion and promotion of cultivated land quality protection experience, thus affecting farmers' CLQP behavior [9]. The definitions of the variables used in this study are shown in Table 1.
Table 1. Definition of variables.
Outcome Variables
Subsoiling: whether mechanized subsoiling has been carried out in the last three years (from Table 1).
Farmers in the surveyed areas mainly adopted three kinds of measures to protect the quality of cultivated land: Subsoiling, straw application and the complementary use of organic fertilizers. In view of this, the samples that planted green manures were incorporated into the complementary use of organic fertilizers to facilitate the model analysis. The statistical results showed that the farmers who adopted subsoiling, straw application and increased application of organic fertilizer accounted for 38.9%, 35.6% and 16.4% of the total sample, respectively. The overall level was not high, indicating that encouraging farmers to participate in farmland quality protection is still very difficult. Table 2 shows the unconditional probability and conditional probability of farmers adopting different CLQP measures. It shows that the adoption of any one of the CLQP measures (subsoiling, straw application or the complementary use of organic fertilizers) promotes the adoption of the other two. Thus, they are likely to be interdependent, and it is therefore reasonable to use the Mvprobit model for empirical analysis.
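Unconditional and conditional adoption probabilities of the kind reported in Table 2 can be computed directly from household-level 0/1 adoption indicators. The following is a minimal pandas sketch; the data and column names are illustrative placeholders, not the survey data.

```python
import pandas as pd

# Hypothetical 0/1 adoption indicators per household (illustrative only).
df = pd.DataFrame({
    "subsoiling":         [1, 0, 1, 0, 1, 0, 0, 1],
    "straw_application":  [1, 0, 1, 0, 0, 0, 1, 1],
    "organic_fertilizer": [0, 0, 1, 0, 0, 0, 0, 1],
})

for target in df.columns:
    unconditional = df[target].mean()
    for other in df.columns:
        if other == target:
            continue
        # Probability of adopting `target` among households that adopted `other`.
        conditional = df.loc[df[other] == 1, target].mean()
        print(f"P({target}=1) = {unconditional:.2f}; "
              f"P({target}=1 | {other}=1) = {conditional:.2f}")
```

If the conditional probabilities systematically exceed the unconditional ones, as in Table 2, the three adoption decisions are positively interdependent, which motivates a joint multivariate probit model rather than three separate probits.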
Among the samples, 605 households had a labor force that left for work, accounting for approximately 74.0% of households. Three hundred nineteen rural households had perennial migrant workers, and 397 had local part-time workers. It is worth noting that 95 rural households were led by workers who chose to work either in other areas or in local areas. The cross-relationship between the different patterns of labor out-migration for work and CLQP behavior is shown in Table 3. These results suggest that out-migration for work had an inhibitory effect on farmers' CLQP behavior, whereas local part-time employment had a promoting effect. However, whether this is the case still needs to be tested by rigorous econometric models. Results of the descriptive analysis of the model control variables are shown in Table 4. As seen from Table 4, the average age of the heads of households interviewed was approximately 56 years, and the average years of schooling was approximately 9, which is equivalent to a middle school education level. The average household size was approximately 5, and the ratio of agricultural income to household income was mainly concentrated in the range of 40-60%. The average scale of cultivated land was approximately 7.5 mu, and the subjective evaluation of cultivated land quality was mainly medium. Approximately 65% of the surveyed farmers said that their cultivated land could not be irrigated in a timely and adequate manner, which had a negative impact on crop growth. Among the farmers interviewed, only 20% said that their demands for agricultural machinery services could be met in time, approximately 15% said that they had received technical guidance related to CLQP, and approximately 23% had joined cooperatives. In addition, compared with farmers who did not adopt CLQP measures, farmers who adopted CLQP measures had more years of schooling, more sources of income, larger-scale farms, poorer quality of cultivated land and higher availability of agricultural machinery services; their rates of receiving agricultural technical guidance and participating in cooperatives were also relatively high. Notes: The value in brackets is the standard deviation; *** significant at 1% level; ** significant at 5% level; * significant at 10% level.
Theory
China underwent rapid urbanization following the reform and opening-up policies initiated in 1978. The urbanization rate, measured by urban population, increased from 17.92% in 1978 to 57.35% in 2016 [33]. While rural-urban labor migration made up for the shortage of labor in urban and industrial development, the environment of agricultural development underwent profound changes, mainly reflected in decreasing labor input in agricultural production, continuously rising agricultural labor costs and the easing of liquidity constraints.
Labor migration has fundamentally shaped economic development in destination areas. It is also one of the strongest forces affecting rural household change within the sending communities [34]. Relevant research shows that labor migration may influence whether farmers adopt cultivated land quality protection measures through five channels [35,36]. First, labor migration reduces the quantity and quality of the household agricultural labor force, which is likely to cause a shortage in the agricultural labor force, thus hindering the adoption of labor-biased measures to protect the quality of cultivated land (such as the application of organic fertilizer). In incomplete rural labor markets, this effect is more pronounced [37]. Second, labor migration contributes to raising the income level of rural households and relaxing the liquidity constraints faced by the remaining household members [34]. This enables rural households to invest in the protection of cultivated land quality, for example, by purchasing commodity organic fertilizer and agricultural machinery services (subsoiling, straw application and the incorporation of green manure crops into the soil all rely on agricultural machinery) [38]. Third, labor migration changes the household income structure. Farmers are becoming less dependent on agriculture, and the importance of farm income for households is declining. As a result, farmers no longer expect to increase their income through agriculture, and extensive operation gradually becomes a rational choice [12]. Fourth, labor migration can alleviate the negative impact of risk shocks, such as natural disasters and disease, on agricultural production; it thus performs certain agricultural insurance functions that enhance the ability of farmers to resist risk shocks and promote the adoption of measures to protect the quality of cultivated land [39]. Fifth, labor migration can ease the credit constraints of farm households. The increase in income level and the diversification of income sources brought about by labor migration increase the probability of peasant households obtaining formal credit [40]. Labor migration also plays a promoting role in building a weak-tie network, dominated by business, interest and friendship relations, through which more informal credit sources can be obtained [41]. Previous studies have confirmed that easing credit constraints has a promoting effect on cultivated land quality protection [9]. The overall effect of labor migration on protecting cultivated land quality depends on the competition between these positive and negative effects. When migrant workers are no longer engaged in agricultural production, farmers' focus of employment begins to deviate from agricultural production. The negative effect of labor migration may take over, which is reflected in a reduced probability of adopting measures to protect the quality of cultivated land [15]. In contrast, when migrant workers can still give consideration to agricultural production, the focus of farmers' employment is not fundamentally changed. Thus, the positive effect of an out-migrating labor force is likely to take the lead, resulting in an increased likelihood of farmers' adoption of farmland quality protection measures.
Results from the Mvprobit Model
The Mvprobit model was estimated by simulated maximum likelihood, with the number of random draws set to 30, slightly larger than the arithmetic square root of the number of household observations, which is required to obtain robust regression results [42]. According to a likelihood ratio test, the null hypothesis that the correlation coefficients between the error terms equal zero was rejected at the 1% level, indicating that farmers' decisions regarding the different CLQP measures were not independent of each other. Specifically, the correlation coefficients of the error terms (ρ_DM, ρ_DO, ρ_MO) were all greater than zero and significant at the 1% level, indicating that the unobserved factors in Equation (1) had effects of the same sign on farmers' adoption of subsoiling, straw application and the complementary use of organic fertilizers, which means that the three measures were complementary to each other. The model fit was good. Table 5 reports the estimated coefficients of the variables and their marginal effects.
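Simulated maximum likelihood for a multivariate probit rests on simulating the joint outcome probabilities, typically with the GHK (Geweke-Hajivassiliou-Keane) simulator; the 30 random draws mentioned above are the number of GHK draws per observation. The following numpy sketch of the GHK probability simulator is ours and only illustrates the mechanics; the function name and the example correlation matrix are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def ghk_probability(x_beta, Sigma, y, n_draws=30, seed=0):
    """GHK simulator for P(Y = y) in a multivariate probit with latent
    y*_m = x_beta[m] + e_m, e ~ N(0, Sigma), and observed y_m = 1{y*_m > 0}."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    s = 2 * y - 1                                # +1 if adopted, -1 if not
    mu = s * np.asarray(x_beta, dtype=float)     # sign-adjusted means
    D = np.diag(s.astype(float))
    L = np.linalg.cholesky(D @ Sigma @ D)        # Cholesky of sign-adjusted covariance
    prob = np.ones(n_draws)
    eta = np.zeros((n_draws, len(y)))
    for m in range(len(y)):
        # Lower truncation point for the m-th standard normal component.
        lower = (-mu[m] - eta[:, :m] @ L[m, :m]) / L[m, m]
        p_m = 1.0 - norm.cdf(lower)
        prob *= p_m
        # Draw eta_m from N(0,1) truncated to (lower, inf) via the inverse CDF.
        eta[:, m] = norm.ppf(norm.cdf(lower) + rng.uniform(size=n_draws) * p_m)
    return prob.mean()

Sigma = np.array([[1.0, 0.4, 0.3],
                  [0.4, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
print(ghk_probability([0.2, -0.1, 0.3], Sigma, y=[1, 1, 0]))
```

Each household's simulated probability enters the log-likelihood, which is then maximized over the coefficients and the correlation matrix; more draws reduce simulation noise at a linear cost in computation.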
Consistent with previous expectations, the results showed that perennial out-migration for work had a significant negative impact on farmers' CLQP behavior, while local part-time farming had a significant positive impact. In terms of marginal effects, perennial out-migration for work decreased the probability of farmers adopting subsoiling, straw application and the complementary use of organic fertilizers by 6.5%, 9.0% and 5.0%, respectively. Local part-time farming increased the probability of farmers adopting subsoiling, straw application and the complementary use of organic fertilizers by 12.9%, 12.7% and 11.5%, respectively. According to the survey, the transferred labor force typically consists of rural household heads who go out for work. The quality of the labor force left behind for farming is generally not high. These individuals lack knowledge of new agricultural technologies and pay little attention to the sustainable use of cultivated land. Instead, they focus on the current output of cultivated land, blindly pursuing high input and high output, and have little enthusiasm for the protection of cultivated land quality. This is related to the reconfiguration of the household labor force between agricultural and non-agricultural sectors. The lack of an agricultural labor force also causes farmers to neglect or abandon the management of cultivated land [11]. The survey also found that, compared with rural households whose head goes out for work all year long, both the agricultural and non-agricultural income of local part-time farmers occupy an important position in the family income structure. Such rural households still attach higher importance to agricultural production activities. In a context in which a large amount of labor input is replaced by machinery in current agricultural production, local part-time employment enables farmers to improve the substitution of capital for labor through capital accumulation, thus promoting the adoption of capital-biased CLQP measures (such as subsoiling and straw application) [15]. Moreover, under the complementary effect of different types of CLQP measures, this will further promote the adoption of labor-biased CLQP measures (such as the complementary use of organic fertilizers). Notes: The value in brackets is the standard deviation; *** significant at 1% level; ** significant at 5% level; * significant at 10% level.
Among the control variables, education, family size, agricultural income, cultivated land scale, cultivated land quality, irrigation convenience, agricultural machinery services, agricultural technical guidance and joining a cooperative had a significant influence on farmers' cultivated land quality protection behavior, and the direction of this influence was consistent with expectations.
Robustness Examination
In the above model setting, there may be an endogeneity problem in the out-migration variables: Some unobservable characteristics of farmers may affect not only the decision of family members to out-migrate for work but also the adoption of CLQP measures, resulting in an omitted-variable problem [15]. To eliminate the endogeneity caused by omitted variables, this paper augments the Mvprobit model above with a two-stage instrumental variable method [43]. In the first stage, considering that perennial out-migration and local part-time farming take values of 0 or 1 and that there may be a substitution relationship between them, the Biprobit model was selected to regress the factors influencing the migration decisions of the household labor force. The purpose was to estimate the generalized error terms of the perennial out-migration equation and the local part-time farming equation. In the second stage, the generalized error terms obtained from the Biprobit estimation were added to the Mvprobit model as new explanatory variables for parameter estimation, in order to solve or alleviate the endogeneity.
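This is a control-function strategy: first-stage generalized (score) residuals are added as regressors in the outcome equations. As a simplified sketch of the idea, the snippet below uses a single univariate probit first stage in place of the paper's Biprobit, on simulated data; all variable names and coefficients are illustrative assumptions, not the paper's data or specification.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 800
z = rng.uniform(0, 1, n)                  # excluded instrument for migration
x = rng.normal(size=n)                    # an exogenous control
d = (0.5 + 2.0 * z + 0.5 * x + rng.normal(size=n) > 1.5).astype(int)  # migration dummy
y = (0.3 - 0.8 * d + 0.4 * x + rng.normal(size=n) > 0.0).astype(int)  # CLQP adoption

# Stage 1: probit of the endogenous migration dummy on instrument and controls.
X1 = sm.add_constant(np.column_stack([z, x]))
stage1 = sm.Probit(d, X1).fit(disp=0)
xb = X1 @ stage1.params                   # first-stage linear index

# Generalized (score) residual of the first-stage probit.
gen_resid = norm.pdf(xb) * (d - norm.cdf(xb)) / (norm.cdf(xb) * (1.0 - norm.cdf(xb)))

# Stage 2: adoption probit with the generalized residual as an extra regressor;
# a significant residual coefficient signals endogeneity of the migration dummy.
X2 = sm.add_constant(np.column_stack([d, x, gen_resid]))
stage2 = sm.Probit(y, X2).fit(disp=0)
print(stage2.summary())
```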
It should be noted that the explanatory variables for the Biprobit model included the characteristics of farmers, farming conditions, the external environment, regional fixed effects and the instrumental variable for labor migration mentioned above. The instrumental variable selected in this paper was the proportion of the transferred labor force out of the total number of workers in the village. Through the function of social relation networks, this variable has a strong correlation with household labor migration decisions [44]. The estimated results of the Biprobit model also confirmed that the influence of this variable on farmers' migration decisions was significant at the 1% level, meeting the relevance requirement for instrumental variables. Apart from its influence on the rural labor force going out for work, there is no other plausible channel through which this variable could affect the CLQP behavior of farmers, so it can meet the exogeneity requirement. In addition, the correlation coefficient of the error terms of the Biprobit model was significant and negative, which means that there is indeed an inverse relationship between perennial out-migration and local part-time farming; this also indicates that it is reasonable to use the Biprobit model in the first stage of estimation. Table 6 shows the main results of the second-stage estimation. Perennial out-migration had a significant negative impact on the adoption of CLQP measures, such as subsoiling, straw application and the complementary use of organic fertilizers, while the impact of local part-time farming was significant and positive, confirming the robustness of the previous research conclusions. The results of the model also show that, after considering the endogeneity of the key variables, the marginal effects of both perennial out-migration and local part-time farming on the adoption of CLQP measures by farmers increased significantly, indicating that labor migration is an important factor affecting farmers' CLQP behavior. Notes: The value in brackets is the standard deviation; *** Significant at 1% level; ** significant at 5% level.
Interpreting the Results within the Context of the Literature
In accordance with results from the extant literature, farmer characteristics, farming conditions and the external environment all had an impact on CLQP. Farmers with more years of education were more inclined to adopt subsoiling and straw application. Li and Yang found that education promoted farmers' adoption of sustainable agricultural technologies related to the protection of cultivated land quality [9,22]. The larger the household was, the higher the probability of adopting straw application and organic fertilizer application.
Previous studies have pointed out that farmers' decisions regarding CLQP depend on the family's labor endowment or the labor consumption of CLQP measures [25]. Increasing the application of organic fertilizer requires more labor. Similarly, returning straw to the field can increase damage caused by weeds and pests, which also ultimately requires more labor. A higher proportion of agricultural income is conducive to the adoption of CLQP measures by farmers, which is consistent with the conclusion drawn by other researchers [20]. This is because measures to protect the quality of cultivated land can increase crop yields, thus encouraging farmers to adopt them [45]. Large scales of cultivated land and convenient irrigation can encourage farmers to adopt CLQP measures. In terms of marginal effects, the positive effect of convenient irrigation on the probability of adopting CLQP measures is at least equivalent to an expansion of farmers' cultivated land by 10 mu, indicating the importance of cultivated land irrigation infrastructure for sustainable agricultural development. The higher the subjective evaluation of cultivated land quality is, the lower the probability that farmers will adopt CLQP measures. Previous studies have found that the lack of a correct and comprehensive understanding of cultivated land quality degradation is an important factor keeping farmers from investing in CLQP [11].
The availability of agricultural machinery services has a positive impact on farmers' CLQP behavior, especially on subsoiling and straw application: it increased the adoption probability of these two measures by 16.0% and 10.4%, respectively, and this effect is significant at the 1% level. Experience with agricultural technical guidance had a positive and significant influence on farmers' CLQP measures. Farmers who participate in agricultural technical guidance are more aware of the benefits of CLQP, so they are more likely to adopt CLQP measures than farmers who have not received such guidance. Joining cooperatives had a significant positive effect on adopting CLQP measures, which is consistent with the conclusions of Yang and Xie [8,9]. However, the enthusiasm of farmers in the surveyed areas for participating in cooperatives was generally low. One of the most important reasons is that agricultural income has been replaced by non-agricultural income as the main part of household income, so the motivation to participate in cooperatives has declined.
Implications for CLQP
Based on the above results, there are a number of implications: (1) In the process of popularizing CLQP technology, the relevant departments should focus on the actual subjects of agricultural production. (2) For rural households whose main labor force works in other areas, it is necessary to comply with their non-agricultural intentions and help those who intend to stay in cities and towns to settle down. (3) In promoting CLQP, efforts should also be made to improve cultivated land irrigation infrastructure, actively promote the development of a socialized agricultural machinery service market, continue to strengthen technical guidance on CLQP, and give full consideration to the demonstration and publicity functions of cooperatives.
Limitations and Future Research Directions
Chinese society is a relational society, and farmers naturally form their own social networks and culture through long-term relationships. Cultural and social factors influence farmers' decisions through interaction, reciprocity, learning and trust. This paper considers the influence of farmers' characteristics, farming conditions and the external environment, but gives little consideration to these cultural and social aspects. In addition, the definition of CLQP behavior in this paper did not include measures such as trench repair, crop rotation, soil testing and formulated fertilization. While some enlightening conclusions have been drawn from the empirical analysis of subsoiling, straw application and the complementary use of organic fertilizers, it is not clear whether these conclusions still hold for other types of CLQP measures. Beyond the CLQP measures considered here, sustainable agriculture practices such as water-saving irrigation and the processing and utilization of crop straw are also important measures worth examining in future research. Moreover, the impacts of different types of CLQP measures on farmers' production costs, output benefits and the ecological environment also need to be investigated and evaluated in depth. These are important questions for follow-up research.
Conclusions
Labor migration not only changes the allocation of the rural household labor force between agricultural and non-agricultural employment sectors but also affects the utilization of other agricultural production factors. Based on survey data from rural households in northern Jiangsu Province, this paper analyzed the influence of different modes of labor migration on farmers' CLQP behavior. The results showed that, in the surveyed areas, the adoption rates of CLQP measures such as subsoiling, straw application, cover crop and green manure utilization and the complementary use of organic fertilizers were still relatively low. The empirical results showed that perennial out-migration inhibits farmers' investment in CLQP, while local part-time farming promotes it.
In view of this, this paper holds that when the transferred labor force engages in perennial out-migration, agricultural income typically loses its dominant position in the family income structure; moreover, the labor force left behind for farming is of low quality and does not care much about the sustainable use of cultivated land. When the transferred labor force engages in local part-time farming, agricultural income still occupies an important position in the family income structure. Through capital accumulation, such households can promote the adoption of capital-biased CLQP measures, and, through the complementary effect of different CLQP measures, the adoption of labor-biased CLQP measures is promoted as well. The empirical results also show that education, family size, agricultural income, cultivated land scale, cultivated land quality, irrigation convenience, agricultural machinery services, agricultural technical guidance, participation in a cooperative and other factors affect farmers' CLQP behavior.
Immunogenicity and safety of a combined DTPa-IPV/Hib vaccine administered as a three-dose primary vaccination course and a booster dose in healthy children in Russia: a phase III, non-randomized, open-label study
ABSTRACT We assessed the immunogenicity and safety of the combined diphtheria-tetanus-acellular pertussis-inactivated poliovirus/Haemophilus influenzae type b vaccine (DTPa-IPV/Hib) in children in the Russian Federation, aiming to support the registration of the vaccine in Russia. In this phase 3, non-randomized, open-label study (NCT02858440), healthy children received three primary doses at 3, 4.5, and 6 months of age (N = 235) and a booster dose at 18 months of age (N = 225). Seroprotection rates against diphtheria, tetanus, Hib, and poliovirus 1–3, seropositivity rates against pertussis antigens, and antibody geometric mean concentrations/titers for all antigens were evaluated one month post-primary and post-booster vaccination. Solicited local and general adverse events (AEs) were collected during a 4-day period and unsolicited AEs during a 31-day period post-vaccination. Serious AEs were recorded throughout the study. Post-primary vaccination, all infants were seroprotected against diphtheria, tetanus, and poliovirus 1 and 2, 99.3% against poliovirus 3, and 98.4% against Hib. At least 98.9% of participants were seropositive for the three pertussis antigens. Post-booster vaccination, all toddlers were seroprotected/seropositive against all vaccine components. The most frequent local and general solicited AEs were redness, reported for 52.6% and 44.9% of children, and irritability, reported for 64.7% and 39.1% of children, post-primary and post-booster vaccination, respectively. Unsolicited AEs were reported for 20.4% (post-primary) and 5.8% of children (post-booster vaccination). Most AEs were mild or moderate in intensity. Six serious AEs were reported in three (0.4%) children; none were fatal or assessed as vaccination-related. DTPa-IPV/Hib proved immunogenic and well tolerated in the Russian pediatric population.
Introduction
Although routine infant vaccination significantly decreased the morbidity and mortality associated with previously common childhood infectious diseases, including diphtheria, tetanus, pertussis, Haemophilus influenzae type b (Hib) and poliomyelitis, the disease burden remains substantial, affecting populations worldwide. [1][2][3][4][5][6] Crowded routine childhood immunization schedules might deter parents and providers from complying with recommendations, which can result in decreased vaccine coverage and, ultimately, disease outbreaks. 7 Introducing combination vaccines to replace complex immunization schedules has several benefits, such as ease of storage, simplified administration, fewer injections, increased patient and health care acceptance, higher rates of compliance with vaccination schedules, improved coverage rates, reduced shipping and administration costs, reduced confusion over labeling in the medical office, and a reduced number of visits. [8][9][10][11] A pentavalent diphtheria-tetanus-acellular pertussis-inactivated polio and Hib conjugate vaccine (DTPa-IPV/Hib; Infanrix-IPV/Hib, GSK) has been widely used in several countries across the world since its first licensure in 1997. The vaccine was shown to be immunogenic, with an acceptable safety profile, when administered as primary and/or booster vaccination according to different schedules. [12][13][14][15][16][17] In the Russian Federation, a 3-dose primary vaccination schedule (with doses administered at 3, 4.5, and 6 months of age) and a booster dose at 18 months of age are currently recommended against the following diseases: diphtheria, tetanus, pertussis, poliomyelitis and diseases caused by Hib. 18,19 DTPa-IPV/Hib combines all of these antigens in one single formulation, and its use can therefore complement the current standard of care for hepatitis B immunization in Russian children.
The aim of the present study was to evaluate the immunogenicity and safety of the combined DTPa-IPV/Hib vaccine when administered as a 3-dose primary vaccination course at 3, 4.5, and 6 months of age and as a booster dose at 18 months of age in healthy children according to the Russian immunization schedule to support the registration of the combination vaccine in this country.
Study design and participants
This phase 3, single group, non-randomized, open-label study was conducted in five centers in the Russian Federation between September 2016 and November 2018. Healthy infants, born full-term, aged 3-4 months (90-120 days) at the time of the first vaccination, and for whom written informed consent was obtained from their parents/adoptive parents, were enrolled in the study. Infants were not eligible if they had received immunosuppressant or immune-modifying drugs or previous DTP, poliovirus or Hib vaccination. A full list of exclusion criteria is provided in the Supplementary text.
Participants received four doses of combined DTPa-IPV /Hib as a 3-dose primary vaccination course at 3, 4.5 and 6 months of age and a booster dose at 18 months of age. At each vaccination, a 0.5 mL dose was administered intramuscularly in the upper side of the thigh.
An internet-based central randomization system was used to allocate treatment numbers by dose and to track enrollment in the study. Laboratory personnel performing sample testing were blinded to the treatment allocation.
The study was conducted in compliance with the Declaration of Helsinki, the International Conference on Harmonization Guideline for Good Clinical Practice, and all applicable local regulations. The study protocol and informed consent were reviewed and approved by Independent Ethics Committees/Institutional Review Boards at each center. The trial is registered at http://www.clinicaltrials.gov (NCT02858440) and the full protocol is available at http://www.gsk-clinicalstudyregister.com/study/4677.
Study objectives
The primary objective was to evaluate the immune responses to the vaccine components in terms of seroprotection rates for diphtheria, tetanus, Hib, and poliovirus serotypes 1-3 antigens, and in terms of seropositivity rates for pertussis antigens in infants one month after the third dose of the primary vaccination.
Secondary objectives were to assess the immune responses to the vaccine components in terms of seroprotection rates for diphtheria, tetanus, Hib, and poliovirus serotypes 1-3 antigens, and in terms of seropositivity rates for pertussis antigens in toddlers one month after the booster vaccination; antibody concentrations or titers against diphtheria, tetanus, Hib, poliovirus types 1-3, and pertussis antigens in children one month after both primary and booster vaccinations; as well as to evaluate vaccine safety and reactogenicity.
Immunogenicity assessments
Blood samples (3.5 mL) were collected one month after the third dose of the primary vaccination and one month after the booster vaccination. Antibodies against diphtheria, 20 tetanus, 21 Hib PRP, and pertussis components 22,23 were measured using standard in-house enzyme-linked immunosorbent assays. 96-well microplates coated with the corresponding purified antigen were incubated with dilutions of serum samples, controls, and standard. Microplates were washed, and mouse horseradish peroxidase (HRP)-conjugated anti-human IgG monoclonal antibodies (diphtheria, tetanus, pertussis) or goat HRP-conjugated anti-human Ig polyclonal antibodies (Hib) were added. Enzyme activity was revealed spectrophotometrically using tetramethylbenzidine. Concentrations were calculated from the reference standard curve using a four-parameter logistic fitting algorithm and expressed in IU/mL (diphtheria, tetanus, pertussis) or microgram (µg)/mL (Hib). Assay cutoffs (equal to the lower limit of precision and linearity) were 0.057 IU/mL (diphtheria), 0.043 IU/mL (tetanus), 0.066 µg/mL (anti-PRP), 2.693 IU/mL (PT), 2.046 IU/mL (FHA), and 2.187 IU/mL (PRN). Antibodies against poliovirus 1-3 antigens were measured by a standard in-house neutralizing antibody assay adapted from the WHO Guidelines for WHO/EPI Collaborative Studies on Poliomyelitis. 24 All analyses were performed at the Clinical Laboratory Sciences (GSK, Rixensart or Wavre, Belgium) applying validated laboratory tests.
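Four-parameter logistic (4PL) standard-curve fitting of this kind can be reproduced with standard tools; the sketch below uses scipy with entirely made-up optical-density readings (not study data) to fit a standard curve and read a sample concentration off it.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic curve: a/d are the lower/upper asymptotes,
    c the inflection point (EC50), b the slope factor."""
    return a + (d - a) / (1.0 + (x / c) ** b)

# Illustrative standard-curve points: concentration vs. optical density.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
od   = np.array([0.05, 0.09, 0.22, 0.55, 1.10, 1.60, 1.85])

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 0.5, 2.0], maxfev=10000)

def conc_from_od(y, a, b, c, d):
    """Invert the fitted 4PL curve to interpolate a sample concentration."""
    return c * ((d - a) / (y - a) - 1.0) ** (1.0 / b)

print(conc_from_od(0.8, *params))  # concentration of a sample with OD 0.8
```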
Seroprotection was defined as antibody concentrations ≥0.1 IU/mL for diphtheria and tetanus, ≥0.15 µg/mL (indicative of short-term protection) and ≥1.0 µg/mL (indicative of long-term protection) for PRP, and antibody titers ≥8 ED50 for poliovirus types 1-3 (titers are expressed as the reciprocal of the dilution resulting in 50% inhibition; a titer ≥1:8 was considered seroprotective). 25-27 A generally accepted correlate of protection for Bordetella pertussis has not yet been established, since not only PT antibodies play an important role: other antibodies, such as those against FHA and PRN, as well as cellular immune responses, also seem to contribute to protection. 26,27 In this study, participants with anti-PT, anti-FHA, and anti-PRN antibody concentrations above the assay cutoffs were considered seropositive.
Antibody geometric mean concentrations (GMCs) and geometric mean titers (GMTs), and seroprotection/seropositivity rates were calculated one month following both primary and booster vaccination.
Safety and reactogenicity assessments
Participants were observed for at least 30 minutes following the administration of the study vaccine for any immediate reactions. Solicited local (injection site pain, redness, swelling) and general (drowsiness, fever, irritability, loss of appetite) adverse events (AEs) occurring within the 4-day (days 0-3) period and unsolicited AEs occurring within the 31-day (days 0-30) period after each vaccine dose administration were recorded on diary cards by the participants' parents/adoptive parents. All solicited local AEs were considered as related to vaccination. The causality of other AEs was assessed by the investigator. The intensity of all AEs was evaluated on a 3-grade scale from mild to severe. Severe (grade 3) AEs were defined as crying when a limb is moved (for pain), diameter >20 mm (for redness and swelling), axillary temperature >39.0°C (for fever), preventing normal everyday activities (for irritability and drowsiness), and as not eating at all (for loss of appetite). Large injection site reactions (swelling with a diameter >50 mm, noticeable diffuse swelling or noticeable increase of limb circumference) were recorded for up to 4 days (days 0-3) after the booster vaccination. Related and medically-attended AEs were also recorded. Serious AEs (SAEs) were collected during the entire study period. All unsolicited AEs and SAEs were classified using the Medical Dictionary for Regulatory Activities (MedDRA) Primary System Organ Class and Preferred Terms. 28
Statistical analyses
A sample size of 200 evaluable infants was requested by the local regulatory authorities; assuming a 15% drop-out rate, a total of approximately 235 infants were to be enrolled in the study.
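The enrollment target follows from inflating the required evaluable sample by the assumed drop-out rate; a one-line check of the arithmetic:

```python
# 200 evaluable infants required; inflate by the assumed 15% drop-out rate.
print(round(200 / (1 - 0.15)))  # -> 235 infants to enroll
```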
For each vaccination course (primary and booster), immunogenicity analyses were performed on the according-to-protocol (ATP) cohort for immunogenicity, which included all vaccinated participants who met all eligibility criteria, complied with the protocol, and for whom assay results were available post-vaccination for at least one study vaccine antigen. Seroprotection rates, seropositivity rates, and antibody GMCs and GMTs were calculated with 95% confidence intervals (CIs) one month after the primary and booster vaccinations. GMC/GMT calculations were performed by taking the anti-log of the mean of the log10-transformed concentrations/titers. Antibody concentrations/titers below the assay cutoff were given an arbitrary value of half the cutoff.
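The GMC/GMT rule just described is straightforward to implement; here is a short Python sketch with half-cutoff imputation and, as our own assumption (the study does not state the CI method for GMCs), a t-based interval on the log10 scale. The input values are invented for illustration.

```python
import numpy as np
from scipy import stats

def gmc_with_ci(concentrations, cutoff, alpha=0.05):
    """Geometric mean concentration; values below the assay cutoff are
    replaced by half the cutoff, as in the study. CI is t-based on log10."""
    c = np.asarray(concentrations, dtype=float)
    c = np.where(c < cutoff, cutoff / 2.0, c)
    logs = np.log10(c)
    half_width = stats.t.ppf(1 - alpha / 2, len(logs) - 1) * stats.sem(logs)
    m = logs.mean()
    return 10 ** m, 10 ** (m - half_width), 10 ** (m + half_width)

# Illustrative anti-diphtheria concentrations (IU/mL); assay cutoff 0.057 IU/mL.
gmc, lo, hi = gmc_with_ci([0.9, 1.4, 0.03, 2.2, 0.6], cutoff=0.057)
print(f"GMC = {gmc:.3f} IU/mL (95% CI: {lo:.3f}-{hi:.3f})")
```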
For each vaccination course, safety analyses were performed on the total vaccinated cohort, which included all participants who received at least one dose of the study vaccine. The percentage of infants with at least one solicited (local and general) and unsolicited AE was calculated after each vaccine dose and overall per participant, with exact 95% CIs.
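"Exact 95% CIs" for binomial percentages of this kind are conventionally Clopper-Pearson intervals; a minimal sketch (the choice of Clopper-Pearson and the example counts are our assumptions, not a statement of the study's software):

```python
from scipy.stats import beta

def exact_ci(k, n, alpha=0.05):
    """Clopper-Pearson exact confidence interval for a binomial proportion k/n."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# E.g., an AE observed in 48 of 235 infants (20.4%).
print(exact_ci(48, 235))
```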
Demographics
In total, 235 children were enrolled and vaccinated with three primary doses; 225 of them received the booster dose ( Figure 1). The ATP cohort of the primary and booster vaccination courses included 183 and 190 participants, respectively. Reasons for exclusion from the ATP cohort, as well as reasons for withdrawal from the study are presented in Figure 1. The mean age was 14.1 weeks at the receipt of the first primary dose and 17.7 months at booster dose. All participants were of European heritage (Table 1). Vaccination against HBV was documented for 162 out of the 235 children, 115 of them received the vaccine at the same time as the study vaccine.
Immunogenicity
One month after the third dose of primary vaccination, all infants had antibody levels above the seroprotective threshold for diphtheria, tetanus, and poliovirus types 1 and 2, and 99.3% of participants (all but one) were seroprotected against poliovirus type 3 ( Table 2). Against Hib, 179 (98.4%) participants had anti-PRP antibody concentrations ≥0.15 µg/mL. At least 98.9% of participants were seropositive for each of the three pertussis antigens (Table 3).
One month after the booster vaccination, all infants had seroprotective antibody levels against diphtheria, tetanus, and poliovirus types 1-3. All participants had anti-PRP antibody concentrations ≥1.0 µg/mL and were seropositive for each of the PT, FHA and PRN antigens.
Between primary and booster vaccinations, considerable increases in antibody concentrations and titers were observed for all vaccine antigens (Tables 2, 3).
Safety and reactogenicity
The most commonly reported local AE was redness after both primary (52.6%) and booster (44.9%) vaccinations. Irritability was the most frequent general AE, reported for 64.7% of children following the primary and 39.1% following the booster vaccination; any fever was recorded for 22.8% and 11.6% of children, respectively ( Figure 2). The most common local AE of grade 3 intensity was pain, reported by 3 (1.3%) infants after the primary and 4 (1.8%) toddlers after the booster vaccination. Grade 3 irritability was reported by 19 (8.2%) participants following the primary and 11 (4.9%) participants following the booster vaccination. One child experienced grade 3 fever after the booster dose, that was considered vaccination-related ( Figure 2).
One child experienced a large swelling reaction the day after the booster vaccination, with a maximum diameter of 80 mm. The swelling resolved within seven days.
During the 30-day post-vaccination period in the primary phase, at least one unsolicited AE was reported for 48 (20.4%) infants; 2 (0.9%) infants experienced an unsolicited AE considered vaccination-related (agitation and erythema) ( Table 4). One (0.4%) infant experienced a grade 3 unsolicited AE (rhinitis) considered unrelated to vaccination by the investigator. During the 30-day post-booster period, unsolicited AEs were recorded for 13 (5.8%) children; for one child (0.4%), one event (nightmare) was assessed by the investigator to be vaccination-related. No grade 3 unsolicited AEs were reported in the 30-day period after the booster dose. Medically attended AEs were recorded for 23 (9.8%) participants after the primary and 6 (2.7%) participants after booster vaccination (Table 4).
A total of six SAEs were reported for three (1.3%) infants: one infant experienced gastric infection, one infant experienced anal fistula and proctitis, and one infant experienced a circulatory collapse, congenital heart disease, and patent ductus arteriosus. All SAEs were considered by the investigator as unrelated to vaccination and all infants recovered by study end. No fatalities were reported during the study.
Discussion
The combined DTPa-IPV/Hib vaccine induced robust immune responses to all vaccine antigens after three primary doses and a booster dose and had an acceptable safety profile when administered to healthy Russian children before their second year of life. A booster effect on antibody concentrations was observed for all vaccine antigens.
All participants achieved seroprotective levels against diphtheria and tetanus one month after the primary and booster vaccinations. This is in line with data from previous studies conducted in healthy children in Asia and Europe. 15,29,30 When infants received DTPa-IPV and Hib vaccines, administered separately or combined as a single injection, according to a 2-4-6-months primary schedule followed by a booster dose at 16-19 months of age, seroprotection rates for diphtheria and tetanus were 100% one month after the primary series, declined in time, but returned to 100% one month after the booster dose. 13 In another study evaluating the immune responses of DTPa-IPV/Hib co-administered with a rotavirus vaccine, 97.3% of participants were seroprotected against diphtheria and 100% against tetanus after three vaccine doses administered at 3, 4 and 5 months of age. 17
(Figure 1. Participant flow: 235 children were vaccinated in the primary course, 230 completed the primary vaccination course, 225 received the booster dose, and 223 completed the study; the ATP cohorts for immunogenicity comprised 183 (primary) and 190 (booster) participants. Exclusions from the ATP cohorts were due to forbidden vaccine or medication administration, protocol violations, non-compliance with the vaccination or blood sampling schedules, and missing serological data; withdrawals were due to moving from the study area and loss to follow-up.)
In the present study, 98.9% of participants were seropositive for PT and 99.4% for the FHA and PRN antigens one month after the primary vaccination. Similar results have been observed in trials conducted in European infants, comparing the immune responses to pertussis vaccination following vaccination with either an HBV-containing hexavalent combination (DTPa-HBV-IPV/Hib) or the DTPa-IPV/Hib and HBV vaccines separately. 29,31 In several studies conducted in Asian infants, seropositivity rates of 100% were also reported for all three pertussis antigens following primary vaccination with DTPa-IPV/Hib according to different schedules. 14,15,30 In the current study, at one month after the booster dose, all participants were seropositive for all three pertussis antigens, in line with previous reports. 15,30 As different assays and seropositivity thresholds were used across studies, seropositivity rates cannot be directly compared. However, the data indicate the mounting of robust immune responses against pertussis antigens following vaccination with DTPa-IPV/Hib. Almost all children (98.4%) achieved anti-PRP antibody levels ≥0.15 µg/mL one month following the primary vaccination. While several studies reported seroprotective levels for 100% of study participants after completing the primary vaccination course, 17,29,30 in other reports seroprotection rates ranged between 96.4% and 98.7%. 13,15 One month post-booster dose, all study participants achieved seroprotective levels indicative of long-term protection (≥1.0 µg/mL) against Hib. In other studies, lower anti-PRP antibody levels were observed in children receiving the combined DTPa-IPV/Hib vaccine as compared with those who received the Hib vaccine as a separate injection. 13,14,16 Nevertheless, the lower anti-PRP antibody responses to combined DTP-Hib vaccines were previously shown not to be associated with an impaired function of the induced antibodies, nor with impaired immune memory against Hib. 32,33 The successful long-term Hib disease control achieved in Europe with a DTPa-HBV-IPV/Hib combination vaccine also confirms that these immunological findings have no clinical impact. 34 The immune responses observed in the current study against poliovirus types 1-3 one month following the primary vaccination were in line with previous reports. In Caucasian infants who completed a 3-dose primary schedule with either DTPa-HBV-IPV/Hib or DTPa-IPV/Hib + HBV vaccines, seroprotection was achieved by almost all participants; antibody levels ranged between 481.6-1590.3 ED50 (poliovirus 1), 350.6-1961.2 ED50 (poliovirus 2), and 1152.3-2425.5 ED50 (poliovirus 3). 29,35 The three primary doses of DTPa-IPV/Hib vaccine administered at 3, 4.5, and 6 months of age and the booster dose administered at 18 months of age were well tolerated, with an acceptable reactogenicity profile.
The frequencies of the observed solicited local and general AEs were similar to those reported in other studies, with redness and irritability being the most frequent solicited local and general AEs (of all grades), respectively, although the vaccination schedules differed across studies. 14,29,30,36 In line with previous reports, 15,37 the most commonly reported grade 3 local and general solicited AEs were pain and irritability, respectively. While the incidences of solicited local AEs remained similar, the incidence of solicited general AEs in the current study tended to be lower after the booster dose as compared with primary vaccination. The frequencies of AEs observed following booster vaccination were nevertheless similar to those reported from a phase 3 trial evaluating the safety of the booster dose of DTPa-IPV/Hib in Vietnamese toddlers. 12 Consistent with previous reports, 14,15 the occurrence of vaccine-related unsolicited AEs was not frequent. No SAEs that occurred during the study were considered related to the study vaccine. The observed 0.4% frequency of large swelling reactions is comparable with literature data. 36,37 The strengths of this study included the successful enrollment of a large number of children from 5 different sites distributed over the Russian Federation, the laboratory testing conducted in one central laboratory, and the use of validated laboratory tests. The open-label and non-randomized design might be one of the limitations of the study. The number of evaluable participants for the primary immunogenicity endpoint was not derived from a powered sample size computation but was based on the number of participants requested by the Russian Regulatory Authorities. The trial was conducted in one country, though the results could be generalized to other populations with similar disease prevalence and immunization practices. The lack of data on pre-primary and pre-booster immune responses did not allow the assessment of the fold-change in antibody levels from pre- to post-vaccination for the primary and booster immunization series. Co-administration of DTPa-IPV/Hib with an HBV vaccine was not envisaged in the study protocol; however, concomitant administration with HBV or any other vaccine as part of the national immunization schedule and as part of routine vaccination practice was allowed. Additionally, broad literature data exist to support the concomitant injection of DTPa-IPV/Hib and HBV vaccines 30,35,38,39 and many countries use these vaccines as standard of care.
The coverage of pediatric DTP immunization in the Russian Federation has remained high for more than a decade, with 97% of children receiving the third DTP dose. 40,41 However, resurgences of childhood diseases still occur. Therefore, a high uptake of pediatric vaccinations remains paramount, and the administration of combination vaccines has been shown to improve vaccination compliance. 10 Moreover, in the Russian Federation, whole-cell pertussis vaccination predominates over acellular pertussis vaccines. 42 The use of an acellular pertussis-containing combination vaccine like DTPa-IPV/Hib might therefore contribute to improved vaccination acceptance, as this vaccine is less reactogenic than whole-cell pertussis vaccines, enhancing compliance and coverage. 43 In conclusion, the combined DTPa-IPV/Hib vaccine administered as a 3-dose primary vaccination at 3, 4.5 and 6 months of age and a booster dose at 18 months of age induced robust immune responses to all vaccine antigens and was well tolerated in healthy Russian infants. For the benefit of healthcare professionals, a summary contextualizing the results and relevance of this clinical research is displayed in the Focus on the Patient section (Figure 3).
Focus on the Patient
What is the context?
• Combination vaccines protect against several diseases in one single injection. They are therefore widely used in routine pediatric practice.
• The combination vaccine DTPa-IPV/Hib (Infanrix-IPV/Hib) is indicated for immunization against five childhood diseases: diphtheria, tetanus, pertussis, poliomyelitis, and diseases caused by Haemophilus influenzae type b (Hib).
• Three doses plus one booster dose against these diseases are currently recommended for infants in the Russian Federation.
What is new?
• We investigated the immunogenicity and safety of the combination vaccine in healthy Russian infants.
• This vaccine induced a robust immune response.
• After the first three doses we found that: all infants had protective antibody levels against diphtheria and tetanus; more than 98% of infants had protective antibody levels against poliomyelitis and Hib antigens; and more than 98% of infants were seropositive for the three pertussis antigens.
• All toddlers had protective or positive antibody concentrations for all 5 diseases after the booster dose.
• There are no safety concerns and the vaccine is well tolerated.
What is the impact?
• This study shows that this combination vaccine has an acceptable immunogenicity and safety profile and could thus replace standalone vaccines, reducing the number of injections needed to follow the immunization recommendations of the Russian Federation.
Author contributions
SOK, MS, and IO were involved in the conception and design of the study; VR, WJ, and AG collected the data; MS, VR, IO, and WJ performed the study; VR, NB, and WJ contributed materials/analysis/reagent tools; and NM, SOK, VR, WJ, LA, and DF were involved in data analysis and interpretation. All authors had full access to the data, revised the work critically, approved the final version to be published, and take full accountability for all aspects of the work.
Disclosure of potential conflicts of interest
MS, NB, DF, LA, SOK, NM, and WJ are employed by the GSK group of companies. WJ, LA, and MS hold restricted shares in the GSK group of companies as part of their employee remuneration. VR, IO, and AG have nothing to declare.
Funding
This work was supported by GlaxoSmithKline Biologicals SA, including all costs associated with the development and publication of this manuscript.
Data sharing statement
The results summary for this study (GSK study number 116194, NCT02858440) is available on the GSK Clinical Study Register and can be accessed at www.gsk-clinicalstudyregister.com.
For interventional studies that evaluate our medicines, anonymized patient-level data will be made available to independent researchers, subject to review by an independent panel, at www.clinicalstudydatare quest.com within six months of publication.
To protect the privacy of patients and individuals involved in our studies, GSK does not publicly disclose patient-level data.
On the two-loop divergences in 6D, ${\cal N}=(1,1)$ SYM theory
We continue studying $6D, {\cal N}=(1,1)$ supersymmetric Yang-Mills (SYM) theory in the ${\cal N}=(1,0)$ harmonic superspace formulation. Using the superfield background field method, we explore the two-loop divergences of the effective action in the gauge multiplet sector. It is explicitly demonstrated that, among the four two-loop background-field-dependent supergraphs contributing to the effective action, only one diverges off shell. It is also shown that the divergences are proportional to the superfield classical equations of motion and hence vanish on shell. In addition, we analyze the possible structure of the two-loop divergences on a general gauge and hypermultiplet background.
Introduction
Supersymmetric field theories in diverse dimensions, especially those exhibiting the maximally extended supersymmetry, display very interesting quantum properties. For example, divergences in such theories sometimes unexpectedly vanish. In some cases such miracles are caused by a hidden supersymmetry of the theory. This refers, e.g., to 4D, N = 4 SYM theory, where all possible divergent diagrams cancel each other due to the maximally extended rigid N = 4 supersymmetry [1][2][3][4]. A consistent derivation of the 4D, N = 2 non-renormalization theorem was given in [5] in the N = 2 harmonic superspace formulation [6][7][8], which is the most adequate approach to 4D, N = 2 supersymmetric gauge theories. Another very interesting example of miraculous divergence cancelation is provided by N = 8 supergravity, which is the maximally extended supergravity theory in four dimensions. At present, it is believed that this theory is finite up to at least seven loops, see [9] and references therein, although the possible all-loop ultraviolet finiteness is also discussed (see, e.g., [10,11]).
Similarly to 4D (super)gravity theories, the degree of divergence in higher-dimensional gauge theories increases with the number of loops. One can expect that supersymmetry, especially the maximally extended supersymmetry, is capable of improving the ultraviolet behavior of such theories. This is the basic reason for the interest in investigating UV divergences of higher-dimensional supersymmetric gauge theories. They have actually been studied for a long time, see, e.g., [12][13][14][15][16][17][18][19][20][21][22][23]. In this paper we concentrate on the 6D rigid N = (1, 1) SYM theory. This theory is in many aspects similar to 4D, N = 4 SYM theory, and one can expect some similarity in the structure of divergences in the two theories. However, they differ essentially in the UV domain. In contrast to N = 4 SYM theory, which is finite to all loops, its 6D counterpart is non-renormalizable by power counting. Nevertheless, the extended supersymmetry leads to the finiteness of the theory up to two loops, at least on mass shell [15][16][17][24]. Modern methods of computing scattering amplitudes [24] demonstrate that UV divergences in 6D, N = (1, 1) SYM theory should start from the three-loop level (see also [12][13][14]).
In our previous works [26][27][28][29][30][31] we studied the UV properties of 6D, N = (1, 0) and N = (1, 1) theories in the 6D harmonic superspace formulation. In particular, it was found that 6D, N = (1, 1) theory is off-shell finite in the one-loop approximation in the Feynman gauge, although divergences are still present in non-minimal gauges [31] (they vanish on shell). The two-loop divergences in the hypermultiplet two-point Green functions were also shown to vanish off shell [28]. However, the complete two-loop calculation in the harmonic superspace approach has not been done so far. In the present paper we continue the study of 6D, N = (1, 1) SYM theory at two loops. We argue that it is not finite off shell in the Feynman gauge in the two-loop approximation, although the divergences vanish on shell. Our consideration is limited to the gauge superfield sector and does not involve the background hypermultiplet. However, the result is still applicable to the other sectors of the model due to the implicit N = (0, 1) supersymmetry. Indeed, we formulate the model in terms of the interacting N = (1, 0) harmonic gauge multiplet and a hypermultiplet in the adjoint representation of the gauge group. The action of the model is manifestly invariant under N = (1, 0) supersymmetry by construction. The additional N = (0, 1) supersymmetry is implicit and is present only if the hypermultiplet belongs to the adjoint representation of the gauge group. Note that, although N = (1, 0) theories are in general plagued by anomalies [32][33][34][35], N = (1, 1) SYM theory is not anomalous.
The letter is organized as follows. In section 2 we recall 6D, N = (1, 1) SYM theory in N = (1, 0) harmonic superspace. Section 3 is devoted to a brief account of the effective action in the gauge multiplet sector. The effective action is formulated within the background harmonic superfield method, which allows us to perform the calculations in a manifestly gauge invariant and N = (1, 0) supersymmetric manner. In section 4 we analyze the structure of possible two-loop contributions to the effective action and calculate all divergent terms in this approximation. In section 5 we discuss the possible structure of the two-loop divergences when the background hypermultiplet is taken into account. In the final section 6 we summarize the results.
The six-dimensional maximally extended N = (1, 1) supersymmetric gauge theory can be formulated in N = (1, 0) harmonic superspace. In this framework it amounts to N = (1, 0) supersymmetric gauge theory coupled to the hypermultiplet $q^+$ in the adjoint representation of the gauge group. All the necessary notations and conventions are collected in our previous papers, see, e.g., [26,27]. Here we recall only the basic concepts.
The coordinates of 6D, N = (1, 0) harmonic superspace are denoted as $(z, u) = (x^M, \theta^a_i, u^{\pm i})$, where $x^M$, $M = 0, \ldots, 5$, are the 6D Minkowski space-time coordinates, $\theta^a_i$, $a = 1, \ldots, 4$, $i = 1, 2$, are Grassmann variables, and $u^\pm_i$, $u^{+i} u^-_i = 1$, are the harmonic variables [36,37]. For the analytic coordinates we use the notation $\zeta$, and the antisymmetric 6D Weyl $\gamma$-matrices are used, $(\gamma^M)_{ab} = -(\gamma^M)_{ba}$, $(\tilde\gamma^M)^{ab} = \frac{1}{2}\varepsilon^{abcd}(\gamma^M)_{cd}$, with the totally antisymmetric tensor $\varepsilon^{abcd}$. By definition, the analytic superfields are annihilated by the spinor covariant derivative $D^+_a = u^+_i D^i_a$ and in the analytic basis (where the $D^+_a$ are "short") are defined on the analytic harmonic superspace $(\zeta, u^{\pm i})$. Also we will need the covariant derivative $D^-_a = u^-_i D^i_a$ and the harmonic derivatives $D^{\pm\pm}$, $D^0$, which in the central basis read $D^{\pm\pm} = u^{\pm i}\frac{\partial}{\partial u^{\mp i}}$, $D^0 = u^{+i}\frac{\partial}{\partial u^{+i}} - u^{-i}\frac{\partial}{\partial u^{-i}}$. The spinor and harmonic derivatives satisfy the algebra $\{D^+_a, D^-_b\} = 2i\,\partial_{ab}$, $[D^{++}, D^{--}] = D^0$, $[D^{\pm\pm}, D^\pm_a] = 0$, $[D^{\pm\pm}, D^\mp_a] = D^\pm_a$. The full harmonic and the analytic superspace integration measures are defined as $d^{14}z = d^6x\, d^8\theta$ and $d\zeta^{(-4)} = d^6x_A\, du\, (D^-)^4$, respectively. In the harmonic superspace formalism the gauge field is a component of the analytic gauge superfield $V^{++}$. A necessary ingredient is also a non-analytic harmonic connection $V^{--}$, obtained as a solution of the harmonic zero-curvature condition [8]
$$D^{++} V^{--} - D^{--} V^{++} + i\,[V^{++}, V^{--}] = 0\,. \qquad (2.6)$$
Using these superfields one can construct the gauge covariant harmonic derivatives $\nabla^{\pm\pm} = D^{\pm\pm} + iV^{\pm\pm}$. The superfield $V^{--}$ is also used to define the spinor and vector connections in the gauge-covariant derivatives. In the $\lambda$-frame we have [38]
$$\nabla^+_a = D^+_a\,, \qquad \nabla^-_a = D^-_a + i{\cal A}^-_a\,, \qquad (2.7)$$
where $\nabla_{ab} = \frac{1}{2}(\gamma^M)_{ab}\nabla_M$ and $\nabla_M = \partial_M - iA_M$, with the superfield connections ${\cal A}^-_a$ and $A_M$ expressed through $V^{--}$ (2.8). The covariant derivatives (2.7) satisfy the algebra $\{\nabla^+_a, \nabla^-_b\} = 2i\,\nabla_{ab}$, together with the (anti)commutators involving the superfield strength (2.9). The superfield $W^{\pm a}$ is the superfield strength of the gauge multiplet; in particular, $W^{+a} = -\frac{i}{6}\,\varepsilon^{abcd} D^+_b D^+_c D^+_d V^{--}$ (2.10). Also we define the analytic superfield [22] $F^{++} \equiv (D^+)^4 V^{--}$, which satisfies the harmonic constraint $\nabla^{++} F^{++} = 0$ following from (2.6) and the analyticity of $V^{++}$.
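For illustration, the zero-curvature condition (2.6) can be solved perturbatively for $V^{--}$ in terms of $V^{++}$ (see [8]); to first order in $V^{++}$ the solution reads
$$V^{--}(z, u) = \int du_1\, \frac{V^{++}(z, u_1)}{(u^+ u^+_1)^2} + O\big((V^{++})^2\big)\,,$$
and in the Abelian case this first term is the exact solution. The higher-order terms involve products of $V^{++}$ taken at different harmonics, divided by chains of harmonic factors $(u^+_k u^+_{k+1})$; we do not fix the coefficient conventions of these higher terms here.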
The classical action of 6D, N = (1, 1) SYM theory in the harmonic superspace formulation, eq. (2.11) below, consists of a non-polynomial series in V^{++} and the minimally coupled hypermultiplet term; here (u^+_n u^+_1)^{−1}, . . . are the harmonic distributions defined in [8] and A = 1, 2 is a Pauli-Gürsey SU(2) group index. The superfields V^{++} and q^+_A take values in the adjoint representation of the gauge group, i.e., V^{++} = V^{++I} t^I and q^+_A = q^{+I}_A t^I, where the t^I are the gauge algebra generators subject to the normalization condition tr(t^I t^J) = δ^{IJ}/2. The action involves the negative-dimension coupling constant f, [f] = m^{−1}, and the covariant harmonic derivative ∇^{++} = D^{++} + iV^{++}. The classical equations of motion of the theory, eq. (2.13), are obtained by varying (2.11) with respect to V^{++} and q^+_A. The action (2.11) is invariant under the manifest N = (1, 0) supersymmetry and an additional hidden N = (0, 1) supersymmetry. The hidden supersymmetry transformations (2.14) mix the gauge and hypermultiplet superfields with each other [22]. As a result, the action (2.11) is invariant under 6D, N = (1, 1) supersymmetry. Certainly, it is also invariant under the superfield gauge transformation parameterized by a real analytic superfield λ.
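For the reader's convenience we reproduce the schematic form of the action (2.11); the normalization of each term below follows the conventions of [26,27] only up to inessential numerical factors:

S_0 = (1/f^2) { Σ_{n=2}^{∞} ((−i)^n / n) tr ∫ d^{14}z du_1 . . . du_n [V^{++}(z, u_1) . . . V^{++}(z, u_n)] / [(u^+_1 u^+_2)(u^+_2 u^+_3) . . . (u^+_n u^+_1)] − tr ∫ dζ^{(−4)} du q^+_A ∇^{++} q^{+A} }.   (2.11)

The first, non-polynomial term is the Zupnik form of the pure gauge action, while the second term describes the minimal coupling of the adjoint hypermultiplet to the gauge multiplet.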
Effective action
When quantizing gauge theories, it is convenient to use the background field method, which allows one to construct a manifestly gauge invariant effective action. For 6D, N = (1, 0) SYM theory in the harmonic superspace formulation this method was worked out in [25][26][27]. In many aspects it is similar to that for 4D, N = 2 supersymmetric gauge theories [39,40] (see also the review [41]).
Following the background field method, we split the superfield V^{++} into the sum of the "background" superfield V^{++} and the "quantum" one v^{++}. Then we expand the action in a power series in the quantum superfields and obtain a theory of the superfields v^{++}, q^+ in the background of the classical superfield V^{++}, which is treated as a functional argument of the effective action. Our aim is to study the two-loop contributions to the effective action in the gauge superfield sector. To this end, it is sufficient to assume that the hypermultiplet is purely quantum.
Using the results of refs. [25][26][27], the general expression for the effective action can be written as a functional integral over the quantum superfields, in which the covariant d'Alembertian operator is defined in (3.3) and η^{MN} is the 6D Minkowski metric with the mostly negative signature. The total action, S_total = S_0 + S_gf + S_FP + S_NK, includes the gauge-fixing term (3.4) corresponding to the Feynman gauge, the action (3.5) for the fermionic Faddeev-Popov ghosts b and c, as well as the action for the bosonic real analytic Nielsen-Kallosh ghost ϕ.
The gauge-fixing action (3.4) depends on the background field V^{++} through the background gauge bridge superfield, in close analogy with 4D, N = 2 SYM theory.
The calculation of the effective action is carried out in the framework of the loop expansion. In the one-loop approximation the quantum corrections to the classical action are determined by the quadratic part of the action S total . After integration over quantum superfields this quadratic part produces the one-loop contribution Γ (1) to the effective action. The contributions coming from the Faddeev-Popov ghosts, the Nielsen-Kallosh ghost, and the quantum hypermultiplet contain divergences. However, for N = (1, 1) theory they cancel each other since in this case the hypermultiplet lies in the adjoint representation of the gauge group, see refs. [25][26][27] for details. This implies that the theory under consideration is off-shell finite in the one-loop approximation.
In this paper we will investigate the two-loop divergences. Before starting the calculations it is instructive to discuss the structure of propagators and vertices. That part of the total action S_total which is quadratic in the quantum superfields defines the (background-superfield dependent) propagators of these superfields, which are similar to those of 4D, N = 2 theory [8,39,42]. In comparison with the 4D, N = 2 case, the covariant d'Alembertian operator entering these propagators has a different form and is given by (3.3).
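Schematically (up to overall numerical factors, in close analogy with the 4D, N = 2 expressions of [8,39,42]), the Feynman-gauge propagators read

G^{(2,2)}(1|2) ∼ (1/□̂) (D^+_1)^4 δ^{14}(z_1 − z_2) δ^{(2,−2)}(u_1, u_2),   G^{(1,1)}(1|2) ∼ (1/□̂) (D^+_1)^4 (D^+_2)^4 δ^{14}(z_1 − z_2) / (u^+_1 u^+_2)^3,

where □̂ denotes the covariant d'Alembertian (3.3); the Faddeev-Popov ghost propagator G^{(0,0)} is related to the hypermultiplet Green function G^{(1,1)} by eq. (3.9).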
For calculating the two-loop quantum corrections we will need vertices which are cubic and quartic in quantum superfields. In the theory under consideration there are several types of such vertices.
The first type includes the cubic and quartic self-interactions of the gauge superfield, eqs. (3.10) and (3.11), described by the corresponding terms in the classical action (2.11). The interaction (3.12) of the gauge multiplet with the hypermultiplet can also be found from the classical action (2.11). Finally, the action (3.5) gives rise to the interaction (3.13) of the gauge multiplet with the Faddeev-Popov ghosts, where f^{IJK} are the structure constants of the gauge group.
Off-shell two-loop divergences
Using the power counting of [25] one can show that the only possible two-loop divergent contribution in the gauge superfield sector has the universal structure (4.1), with a constant a which diverges after removing the regularization. Below we calculate the constant a in the modified minimal subtraction (MS) scheme for the N = (1, 1) SYM theory under consideration, off shell.
In the process of calculation we do not assume any restrictions on the background gauge multiplet and perform the analysis in a manifestly gauge invariant form. In the two-loop approximation there are Feynman supergraphs of two different topologies, which we will call the 'Θ' and '∞' topologies. The graphs of the 'Θ' topology are generated by the cubic interactions; in the N = (1, 1) theory under consideration these are given by eqs. (3.10), (3.12), and (3.13). The graphs of the '∞' topology contain a vertex corresponding to the quartic interaction, which is given by eq. (3.11).
It is convenient to consider separately the diagrams containing only the gauge propagators G^{(2,2)}. They are presented in Fig. 1, where the gauge propagators are depicted by wavy lines. It is also expedient to consider the superdiagrams involving the hypermultiplet and ghost propagators together. They are presented in Fig. 2. The hypermultiplet propagators G^{(1,1)} are denoted by solid lines, and the Faddeev-Popov ghost propagators G^{(0,0)} by dashed lines. In addition, we take into account that the theory under consideration is finite at one loop. Therefore, there is no need to renormalize the one-loop subgraphs in the two-loop supergraphs.
The analytic expression corresponding to the diagram Γ_I (of the '∞' topology) presented in Fig. 1 is given by eq. (4.2).
This expression involves two Green functions G^{(2,2)} in the coincident-θ limit. According to eq. (3.7), each expression for G^{(2,2)} contains a harmonic δ-function. Due to these δ-functions the first term in the curly brackets will contain a singularity, since (u^+_1 u^+_2) and (u^+_3 u^+_4) in the denominator vanish. To avoid this problem, one should use the 'longer form' of the gauge superfield Green function G^{(2,2)} [8,42]. Next, in the first term of the expression (4.2) we should annihilate the Grassmannian delta-functions δ^8(θ_1 − θ_2)|_{θ_2→θ_1}. This gives the factor (u^+_1 u^+_2)^4 (u^+_3 u^+_4)^4 in the numerator, canceling the singular terms in the denominator. We then consider the part of the resulting expression depending on u_1 and u_2. The only possible divergent contributions could appear from the terms containing D^{−−} inside ∇^{−−}, and such terms indeed survive in the corresponding harmonic integral. Therefore, the first term in eq. (4.2) diverges, and evaluating the remaining momentum integral (in Euclidean space after the Wick rotation) yields its divergent part. The second term in eq. (4.2) does not contain harmonic singularities; when the long form of the gauge propagator is used, the resulting expressions vanish. Therefore, the contribution of this term vanishes.
The analytic expression for the two-loop diagram Γ_II (of the 'Θ' topology) presented in Fig. 1 is constructed using the cubic gauge superfield vertex (3.10) and has the form (4.10). As the next steps, we substitute the explicit expression for the Green function G^{(2,2)} and integrate by parts with respect to one of the (D^+)^4 factors. It is also possible to calculate the harmonic integrals over u_4, u_5, u_6 using the corresponding delta-functions coming from the propagators. After integrating over θ_2 using the Grassmannian delta-function, we are left with the coincident limit θ_2 → θ_1 in the two remaining delta-functions. In order to annihilate these Grassmannian delta-functions in the coincident θ-point limit we need four (D^±)^4 factors; however, we have only three. The remaining (D^−)^4 factor should be obtained from the expansion of the inverse covariant d'Alembertian, but in this case we produce an extra factor (∂^2)^4 in the denominator, so that the overall momentum degree of the denominator becomes 6 + 8 = 14. Taking into account the presence of the integrations d^6k d^6q, which supply twelve powers of momentum, we conclude that the resulting integral is convergent. Therefore, the superdiagram considered can produce only finite contributions to the effective action.

Now let us demonstrate that in 6D, N = (1, 1) theory the last two contributions, Γ_III and Γ_IV depicted in Fig. 2, cancel each other. The arguments are basically analogous to those used for 4D, N = 4 SYM theory in [43]. First, we note that the vertex (3.13) contains the background-dependent covariant harmonic derivative ∇^{++}, which acts on the ghost field b. After integrating by parts with respect to this derivative, the latter will act on the ghost propagator G^{(0,0)}, which is related to the hypermultiplet Green function G^{(1,1)} by eq. (3.9). Due to this relation, the analytic expression for the sum of the two contributions Γ_III and Γ_IV presented in Fig. 2 takes the form (4.12). As pointed out in [43], the identity 1 + (u^+_1 u^−_2)(u^−_1 u^+_2) = (u^+_1 u^+_2)(u^−_1 u^−_2) allows one to transform the contribution (4.12) to a form proportional to (u^−_1 u^−_2) δ^{(2,−2)}(u_1, u_2), which vanishes due to the useful property of the harmonic delta-function (u^−_1 u^−_2) δ^{(2,−2)}(u_1, u_2) = 0 [8]. Thus, these two diagrams cancel each other. Obviously, this cancelation takes place only in the case of N = (1, 1) theory, when the hypermultiplet is in the adjoint representation of the gauge group. In a general 6D, N = (1, 0) SYM theory the diagrams in Fig. 2 enter with different group factors, which prohibits the cancelation.
Thus, we see that the only divergent contribution comes from the '∞' superdiagram, and the divergent part of the two-loop effective action in the gauge superfield sector is given by the expression found above. Making use of a harmonic identity, integrating by parts with respect to the derivative D^{++}_1, and taking into account that F^{++}_τ is independent of the harmonic variables, we can bring this expression to a simpler form. Moreover, in the dimensional regularization scheme the relevant momentum integral is given by (4.17), so that in the MS-scheme we obtain (4.18). Thus, the divergent part of the two-loop effective action can finally be written in the form (4.19), and the constant a appearing in eq. (4.1) is identified as (4.20). An interesting peculiarity of the two-loop divergences obtained is that they contain only the leading two-loop pole 1/ε², while the sub-leading pole 1/ε is absent. We believe that the reason for this may be the hidden N = (0, 1) supersymmetry and the absence of off-shell one-loop divergences in the theory under consideration. The result obtained matches the statement of ref. [22] that the candidate two-loop counterterms in N = (1, 1) SYM theory vanish on the mass shell, provided they are required to be N = (1, 0) off-shell supersymmetric and gauge invariant. More details on this point are given in the next section.
Hypermultiplet dependence of the two-loop divergences
In the previous section we have calculated the two-loop divergences in the gauge multiplet sector, where the background hypermultiplet q^+ is absent. Now we discuss the possible structure of the two-loop divergences in the case when the background hypermultiplet is taken into account. Of course, the hypermultiplet-dependent contribution to the two-loop divergences can be obtained by straightforward quantum computations of the two-loop effective action. However, the general form of such divergences can in principle be described without direct calculations, starting from the expression (4.19) and assuming the invariance of the effective action under the hidden N = (0, 1) supersymmetry. Taking into account the result (4.19), one might expect that including the on-shell background hypermultiplet will merely lead to the replacement of F^{++} in (4.19) by the total classical equation of motion (2.13) for the background gauge multiplet coupled to the hypermultiplet.
As was proved in [22,44], only the classical action (2.11) is N = (1, 1) supersymmetric off shell, while in any other N = (1, 1) invariant the hidden N = (0, 1) supersymmetry can hold only on shell. Therefore we will assume here that the hypermultiplet satisfies the classical equations of motion (5.1), where q^−_A := ∇^{−−} q^+_A. In this case the N = (0, 1) supersymmetry transformation (2.14) of the non-analytic gauge potential takes the form (5.2). Let us now rewrite the expression (4.1) in the central basis as the action (5.3); here we made use of the definition (3.3) of the covariant d'Alembertian and integrated by parts with respect to the harmonic derivative ∇^{−−}. The coefficient a is given by eq. (4.20). Our aim is to find the appropriate terms which should be added to the action (5.3) to ensure invariance under the hidden N = (0, 1) supersymmetry transformations. First, we rewrite the N = (0, 1) transformation (2.14) in the form (5.4). One can then see that a suitable generalization (5.5) of the action (5.3), for the background hypermultiplet satisfying (5.1), transforms under (5.4) into an expression proportional to the gauge superfield equation of motion, and so is invariant modulo E^{++} = 0. The action (5.5) can be rewritten, up to a total harmonic derivative, in a more convenient form. Passing to the analytic basis, we finally obtain (5.8). We see that the two-loop divergences vanish on the total mass shell (2.13), as expected.
Finally, we note that the superficial degree of divergence in N = (1, 0) SYM theory was calculated in [25] and has the form ω = 2L − N_q − (1/2) N_D, where L is the number of loops in the supergraph, N_q is the number of external hypermultiplet lines, and N_D is the number of spinor derivatives acting on the external lines. Divergent contributions correspond to the case ω ≥ 0. Hence, at L = 2 the number of external hypermultiplet lines should be N_q ≤ 4. Possible divergent contributions in the gauge superfield sector at two loops have the universal structure (4.1). The number of external hypermultiplet lines should be even to secure gauge invariance. Hence the possible hypermultiplet-dependent divergent contributions have two or four external hypermultiplet lines. Taking into account this reasoning and N = (1, 0) supersymmetry, we obtain the expression (5.10) for the two-loop divergences: it consists of the structure (4.1) together with hypermultiplet-dependent structures linear and bilinear in the commutator [q^{+B}, q^+_B], plus terms proportional to the hypermultiplet equations of motion, where the constant a is given by (4.20) and c_1, c_2 are arbitrary dimensionless numerical coefficients which can be fixed only within quantum field theoretical computations of the effective action. Comparing (5.10) with (5.5), we observe that the role of the hidden N = (0, 1) supersymmetry is just to relate the unknown constants c_1 and c_2 to the original constant a. Indeed, the requirement of invariance of the expression (5.10) under the N = (0, 1) supersymmetry yields the same expression (5.8).
Summary
In the present paper we have studied two-loop divergent contributions to the effective action for 6D, N = (1, 1) SYM theory formulated in N = (1, 0) harmonic superspace. In this approach it amounts to the model (2.11) of the minimally coupled N = (1, 0) gauge multiplet and the hypermultiplet, both in the adjoint representation of the gauge group. The classical action of the model is invariant under an additional N = (0, 1) supersymmetry, so that it actually describes N = (1, 1) SYM theory.
In the papers [26,41] we have demonstrated by explicit calculations that, in the minimal gauge, N = (1, 1) SYM theory in six dimensions is one-loop finite off shell. In the present paper, using the superfield background field method, we have calculated the divergent part of the two-loop effective action in the gauge multiplet sector. The corresponding background-field-dependent supergraphs determining the effective action are given in Fig. 1 and Fig. 2. It was shown that the divergences of the supergraphs Γ_III and Γ_IV in Fig. 2 cancel each other due to the hidden N = (0, 1) supersymmetry. The supergraph Γ_II in Fig. 1 is finite. The total divergence is due only to the supergraph Γ_I in Fig. 1. The corresponding divergent contribution to the two-loop effective action is proportional to the classical equation of motion. This means that the theory is not off-shell finite at two loops in the gauge multiplet sector even in the Feynman gauge, while the divergences vanish on shell in this sector. Nevertheless, it is worth pointing out that the two-loop divergences in the theory under consideration are 'softer' in some sense than in a general quantum field theory setting. The divergent part of the two-loop effective action (4.19) contains only the leading two-loop pole 1/ε², the sub-leading pole 1/ε being absent. This peculiarity could be attributed to the hidden N = (0, 1) supersymmetry. Also, we have analyzed, on the grounds of gauge invariance, power counting, the explicit N = (1, 0) supersymmetry and the hidden N = (0, 1) supersymmetry, the possible structure of the two-loop divergences for N = (1, 1) super Yang-Mills theory in an arbitrary gauge and hypermultiplet background. It was shown that such divergences vanish on the total equations of motion (2.13) and contain an arbitrary dimensionless numerical coefficient. To fix this coefficient, one must carry out direct quantum field theoretical calculations. Thus, obviously, the most urgent problem for further study is to calculate the two-loop divergences in the general background field setting, including not only the background gauge multiplet but the background hypermultiplet as well. We hope to confirm our assertion that the total two-loop divergences involve the complete classical equation of motion.
Another interesting problem is to calculate the two-loop divergences for the general N = (1, 0) SYM theory without hidden N = (0, 1) sector. We plan to perform the detailed calculation of the two-loop divergent contributions for the general N = (1, 0) gauge theory in a forthcoming work.
Reporting and methodological quality of studies that use Mendelian randomisation in UK Biobank: a meta-epidemiological study
Objectives To identify whether Mendelian randomisation (MR) studies are appropriately conducted and reported in enough detail for other researchers to accurately replicate and interpret them. Design Cross-sectional meta-epidemiological study. Data sources Web of Science, EMBASE, PubMed and PsycINFO were searched on 15 July 2022 for literature. Eligibility criteria Full research articles that conducted an MR analysis exclusively using individual-level UK Biobank data to obtain a causal estimate of the exposure–outcome relationship (for no more than ten exposures or outcomes). Methods and analysis Data were extracted using a 25-item checklist relating to reporting and methodological quality (based on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE)-MR reporting guidelines and the guidelines for performing MR investigations). Article characteristics, such as 2021 Journal Impact Factor, publication year, journal word limit/recommendation, whether the MR analysis was the primary analysis, open access status and whether reporting guidelines were followed, were also extracted. Descriptive statistics were calculated for each item, and whether article characteristics predicted overall article completeness was investigated with linear regression. Results 116 articles were included in this review. The proportion of articles which reported complete information/adequate methodology ranged from 3% to 100% across the different items. Palindromic variants, variant replication, missing data, associations of the instrumental variable with the exposure or outcome and bias introduced by two-sample methods used on a single sample were often not completely addressed (<11%). There was no clear evidence that article characteristics predicted overall completeness except for primary analysis status. Conclusions The results identify areas in which the reporting and conducting of MR studies needs to be improved and also suggest researchers do not make use of supplementary materials to sufficiently report secondary analyses. Future research should focus on the quality of code and analyses, attempt direct replications and investigate the impact of the STROBE-MR specifically. Study registration https://osf.io/nwrdj
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Mendelian randomisation (MR) is becoming more widely used each year and, while recently created reporting and methodological guidelines exist, little is known about the reporting or methodological quality of published MR articles. ⇒ Previous systematic reviews suggest reporting quality is poor in certain areas, but these reviews focus on a narrow range of reporting outcomes and articles.
WHAT THIS STUDY ADDS
⇒ This study found that, across a sample of 116 MR articles from all fields, and across a broad range of data-extraction items related to reporting and methodological quality (partly based on the Strengthening the Reporting of Observational Studies in Epidemiology-MR reporting guidelines), quality varied greatly and was particularly poor for several items. ⇒ This study also found that factors such as Journal Impact Factor, year of publication and journal word limit/recommendations did not clearly predict overall article completeness.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ This study highlights that the reporting and methodological quality of MR articles needs to be improved as, without methodologically sound analyses and transparent reporting, results cannot be adequately interpreted, and therefore the impact of the research and the benefit gained from public funding is reduced.
Introduction
Mendelian randomisation (MR) is a method of causal inference that uses genetic variation as an instrumental variable (IV) for an exposure to estimate a causal effect. In principle (ie, under certain assumptions), this estimate is free from confounding, including reverse causation. 1 It is still a relatively new technique and, due to advances in genetic research, it is becoming more popular and widely used. 2 Large cohort studies, such as the UK Biobank (UKB), which contains genetic, health and lifestyle data for around half a million people, have made it relatively easy to perform powerful MR analyses quickly. 3 Furthermore, platforms such as MR-BASE 4 have made it quick and easy to conduct two-sample MR (ie, where the IV-exposure and IV-outcome associations come from different samples) 5 with publicly available genome-wide association study (GWAS) summary data. While reporting guidelines such as the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE)-MR 6 have recently been developed, it is not currently known whether MR studies across the whole field report their analyses appropriately and report them in enough detail for others to accurately replicate and interpret them. Currently, while some tools for assessing the risk of bias in MR studies exist, none of these have been tested or validated for general use. 7 Also, due to the extra information on genetic variants and the differing assumptions, tools for assessing bias in conventional public health research are not appropriate. A previous review on MR-Base studies found that 44% of studies provided sufficient detail on the first core assumption (ie, the genetic instrument is causally associated with the exposure), 31% on the second (ie, the genetic instrument shares no common cause with the outcome), 89% on the third (ie, the genetic instrument only has a causal effect on the outcome via the exposure) and 32% on assumptions of falsification tests. 8 Another previous systematic review found that only 44% of MR studies discussed the plausibility of the core assumptions, and 14% gave insufficient detail of the statistical analysis. 9 However, this review only looked at articles pre-2014, before UKB became available and before MR became popular and widely used in epidemiology. Furthermore, while this review looked at a broad spread of MR methodologies, it only assessed the articles on these two points, meaning its focus was rather narrow. Another review assessed the reporting quality of MR articles (up to 2017) on cancer outcomes and found around half the articles included (40%-69%) did not report subject characteristics, did not conduct power calculations, did not describe the core MR assumptions and did not exclude variants that failed certain quality control criteria. 10 This review assessed more recent articles and assessed these articles in more detail but focused on the narrow topic of cancer MR studies.
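For illustration (a generic textbook formula, not drawn from the reviewed articles): with a single genetic variant Z, exposure X and outcome Y, the basic MR causal estimate is the Wald ratio,

β̂_IV = β̂_ZY / β̂_ZX,

where β̂_ZX is the estimated IV-exposure association and β̂_ZY the estimated IV-outcome association; multi-variant analyses typically combine such ratios, for example by inverse-variance weighting.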
The aim of our study is to assess whether published articles that conduct MR analyses using individual-level data from the UKB cohort use appropriate analyses and report enough details to allow accurate interpretation and replication and whether this varies across article type. The findings will highlight which specific characteristics of MR analyses are omitted. As MR is a rapidly expanding field, it is vital we make sure the research is being accurately conducted and reported so it can be adequately interpreted and replicated. This will lead to work in the field being more robust, which in turn will lead to a reduction in wasted resources and an increase in impact.
Methods
The guidelines for reporting meta-epidemiological methodology research 11 were used when writing this manuscript. We do not provide measures of interrater reliability as conflicts were often resolved by a consensus being reached between two or three reviewers. This study was preregistered on the Open Science Framework and the protocol can be found at https://doi.org/10.17605/OSF.IO/NWRDJ.
Article search and eligibility criteria
We searched four databases (Web of Science, EMBASE, PubMed and PsycINFO) for articles, which contained "UK Biobank", "UKB" or "UKBiobank" and "Mendelian randomisation" or "Mendelian randomization" on 15 July 2022 (online supplemental table S1).
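In Boolean form, the search therefore had the following shape (the exact syntax and field tags varied by database and are given in online supplemental table S1):

    ("UK Biobank" OR "UKB" OR "UKBiobank") AND ("Mendelian randomisation" OR "Mendelian randomization")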
We only included peer-reviewed, English-language (due to feasibility), original research articles, which were non-retracted and were open-access or fully accessible via our institution, which conducted MR analysis. We excluded:
► Articles where there were more than 10 independent exposures and/or outcomes (eg, phenome-wide association studies). This was due to feasibility.
► Articles which did not obtain a causal estimate of the exposure on the outcome.
► Articles which did not use solely individual-level data from UKB to obtain information on the IV-exposure and IV-outcome relationship when calculating the causal estimate (ie, studies which pooled the data with other cohorts or conducted two-sample MR with publicly available summary-level GWAS data).
► Articles for which eligibility was unclear.
This study includes articles which apply two-sample MR methods on UKB data either by utilising a split-sample approach or using the same UKB sample to estimate the IV-exposure and IV-outcome associations.
Article eligibility was assessed by one reviewer for title and abstract screening. For an initial batch of 64 papers (published before 4 November 2022 and found in the initial search which took place at the beginning of the project) full texts were screened by two reviewers independently, with a third reviewer resolving any conflicts when a consensus could not be reached. Articles were full text screened by a single reviewer in the second batch (articles found in the updated search conducted during the peer-review process, as requested by the editor).
Data extraction
Data extraction was carried out by two independent reviewers for each article in batch 1, with any conflicts being resolved by a third reviewer when a consensus could not be reached. Data were extracted by a single reviewer in batch 2. No reviewer reviewed their own article. Articles were reviewed using a 25-item checklist (see the Code Ocean repository at https://doi.org/10.24433/CO.4457049.v4 for the full list 12 ). On each item, the reviewer answered either 'yes', 'partially' or 'no', with 'unclear' or 'NA' also being allowed responses for specific items.
Both the STROBE-MR guidelines 6 and the 'Guidelines for performing Mendelian Randomisation investigations' 13 were used to create the data-extraction items. Each item is based on an item or items from the STROBE-MR, while the 'Guidelines for performing Mendelian Randomisation investigations' were used to finalise the wording of each question to make sure it covers appropriate
analytical practices as well as reporting practices. To measure how reporting quality differs across journals and article types, the 2021 Journal Impact Factor, journal word limit/recommendation, year of publication, whether the analysis was the primary analysis (with articles for which the MR analysis is the joint primary analysis coded as 'partially'), open access status (with 'free access' articles being coded as 'partially') and whether the authors followed reporting guidelines (with the following of the STROBE-MR being coded as 'yes' and other less relevant reporting guidelines being coded as a 'partially'), were also extracted.
First, a project protocol including initial data-extraction items was created. Then a pilot of ten articles in batch 1 was conducted to finalise the data-extraction items. During batch 1 data-extraction, the wording of some items was altered slightly to remove ambiguity in certain cases, while maintaining the intended meaning. We extracted information on the reporting of assumptions, design, variables of interest, the sample/data, the IV, MR estimator, the addressing of bias, the results and software and code.
Statistical analysis
The percentage of articles which obtained a 'yes', 'partially' and 'no' (also 'unclear' where relevant) on each item of interest was calculated. Univariable linear regression was then used to investigate potential associations between the completeness of articles (average percentage across items for each article, with 'yes' as 100%, 'partially' as 50% and 'no' or 'unclear' as 0%) and the year of publication, 2021 Journal Impact Factor (logged), word limit/recommendation (both the number of words and whether a word limit/recommendation was present or not; articles from journals with page or character limits, which were large, were classed as having no word limit), whether the MR analysis was the primary analysis (answers of 'yes' were coded as 1, 'partially' as 0.5 and 'no' as 0), whether the article was open access ('yes' coded as 1 and 'partially' or 'no' coded as 0) and whether the articles followed reporting guidelines ('yes' and 'partially' coded as 1 and 'no' coded as 0). Due to limited variation in both open access statement and relevant reporting guideline use (as most articles were published before the creation of the STROBE-MR), we collapsed certain categories together. As we did not specify these variables would be handled this way in our preregistration, these analyses should be viewed as exploratory. The analysis for completeness on year was rerun adjusting for batch as a sensitivity analysis, because batch 2 articles were published after batch 1 articles but were also reviewed by a single reviewer rather than two and reviewed at a later date (which could bias the association between article completeness and year of publication). All analyses were carried out in R V.4.1.0 and are available along with the data at https://doi.org/10.24433/CO.4457049.v4. 14
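As a minimal sketch of this approach (with hypothetical column names; the actual analysis code is available in the Code Ocean repository), the completeness score and one univariable model could be computed in R as follows:

    # df: one row per article; item columns scored "yes", "partially" or "no"/"unclear"
    score <- function(x) ifelse(x == "yes", 100, ifelse(x == "partially", 50, 0))
    item_cols <- grep("^item_", names(df), value = TRUE)       # hypothetical item column names
    df$completeness <- rowMeans(sapply(df[item_cols], score))  # mean % across the 25 items

    # univariable linear regression of completeness on (logged) 2021 Journal Impact Factor
    fit <- lm(completeness ~ log(impact_factor_2021), data = df)
    confint(fit)                                               # 95% CIs for the coefficients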
Results
A total of 116 articles were included in the final sample (see figure 1 for a flow chart of article exclusion). The articles excluded
at full text screening, the articles included in the review and the data extracted for each article can be seen at the Code Ocean repository (https://doi.org/10.24433/CO.4457049.v4). Overall, the mean article completeness was 55% (SD = 9%). Percentages for each item can be seen in figure 2 and at Code Ocean (https://doi.org/10.24433/CO.4457049.v4).
Reporting the assumptions of MR
Of the 116 articles included in this review, only 13% (15) of articles completely reported the three core assumptions of MR, while 35% (41) partially reported them (ie, reported them incorrectly) and 52% (60) did not outline all three assumptions/did not outline them at all. While these assumptions have been previously detailed in a number of articles and reporting them is not necessary for replication, they are important for the interpretation of the results and will not be common knowledge to those who do not conduct MR themselves. Often, they are reported incorrectly, with the second assumption being reported as 'no association between the IV and confounders of the exposure-outcome relationship', rather than 'the IV shares no common cause with the outcome'. The distinction between these assumptions is that the former is a specific form of pleiotropy that is already covered by the third assumption, whereas the latter covers important and otherwise unmentioned issues such as population stratification and dynastic effects. 15

Reporting the design
Articles regularly did not clearly report whether the study was a one-sample or two-sample MR design. While 53% (61) did report this accurately, 15% (17) only partially reported this (ie, implied the design in reference to it not being the alternative) and 33% (38) did not. This information is not only trivial to provide but helps to clearly communicate how the analysis was conducted (and therefore how it can be replicated) without having to infer this from more complicated details. It is also important to understanding the biases the results may be subject to and, thus, it is vital for the interpretation of the results.
Reporting the variables of interest
As would be expected, 100% of articles completely reported which exposure-outcome relationship was being assessed. However, only 27% (31) of articles completely reported which UKB phenotypes were used vs 73% (85) which partially reported this (ie, did not provide the UKB field IDs). UKB has many similar and closely related variables, and without the IDs it is difficult to be certain which exact variable has been used by researchers. Reporting of field IDs can remove this ambiguity; 92% (107) of articles clearly reported how these variables were handled in enough detail to replicate the analysis and interpret the results, vs 8% (9) which only partially reported this.
Reporting the sample
For information on the eligibility criteria and subsample size, 84% (98) of articles completely reported this information, while 15% (17) only partially reported this and 1% (1) did not. As sample size and exclusion criteria are vital to report in all fields of science, this high rate is unsurprising. In contrast, only 15% (17) of articles completely reported information on the genetic data (ie, UKB microarrays, exclusion of variants and imputation information), while 47% (54) partially reported this information and 39% (45) did not report it at all. As these processes were mostly conducted by UKB centrally, most researchers may feel there is no need to report them. However, this information aids interpretation for those unfamiliar with UKB.
Reporting IV information
Of the 116 articles included in the study, 65% (75) completely reported the genetic variants and weights used to construct the IV, while 9% (10) partially reported this and 27% (31) did not. 40% (46) reported that variants were identified in a different sample from that used in the analysis, or externally weighted; 10% (12) reported only that variants were identified in the same sample and unweighted, or identified and weighted in a larger sample which includes the sample used in the MR analysis; 7% (8) reported that variants were identified and weighted from the same sample used in the MR analysis; and 43% (50) did not report enough detail to assess this. Whether the variants used had been independently replicated had lower reporting quality across articles; 9% (10) of articles reported that variants were independently replicated, while 8% (9) reported they were partially replicated (eg, replicated in a partially overlapping sample), 13% (15) used unreplicated variants and 71% (82) did not report this. While using variants which were identified in the same sample, or which were not replicated, can introduce bias, it is sometimes unavoidable. However, it is vital that this is outlined in the article as a possible source of bias and its potential impact is discussed. For the 100 articles for which information on proxies and palindromic variants was relevant (ie, which used variants for the IV that were not identified solely in UKB participants), 36% (36) of articles clearly reported proxy information (ie, whether all variants were present in UKB and, if not, whether these variants were excluded or proxied), 2% (2) partially reported this and 62% (62) did not. For the reporting of palindromic variants, 4% (4) clearly reported whether they were present and how they were handled if so, 2% (2) partially reported this and 94% (94) did not. Most researchers will feel that if palindromic or proxy variants were not present, they do not need to be mentioned. However, this creates ambiguity, especially when many variants are being used. This information can always be included in supplementary materials so as not to disrupt the flow of the manuscript.
Reporting analysis methods
Of the 116 articles, 86% (100) clearly reported the MR estimator used and the covariates adjusted for, while 13% (15) partially reported this and 1% (1) did not. While this rate of reporting is relatively high compared with the other items included in this review, this is fairly low considering how vital to replication and interpretation this information is.
Addressing bias
Whether missing data could have biased the results was clearly addressed by 3% (4) of articles, while 2% (2) partially addressed this and 95% (110) did not (these articles may still have presented the percentages of missing data but did not comment on the impact of this issue). Of the 96 articles which conducted multiple testing, 17% (16) addressed this (ie, corrected for this bias or explained why no correction was needed), 2% (2) partially addressed this and 81% (78) did not. Many MR studies which investigate multiple exposures or outcomes will look at related, correlated variables, and in these cases correcting for multiple testing will be too conservative. In these cases it may be preferable not to correct for this, but this decision needs to be justified, which it often was not. For the 113 articles for which it was applicable (ie, more than one variant was used), 27% (31) addressed heterogeneity of the individual variant MR estimates (ie, reported heterogeneity or used a method which excluded outliers), while 19% (22) partially did and 53% (60) did not. Reporting of assessments of instrument strength (ie, the F-statistic or the R²/variance explained) was better, with 65% (75) of articles completely reporting this, 9% (11) partially reporting this and 26% (30) not reporting this. Furthermore, the conducting and reporting of sensitivity analysis (any analysis to assess whether violations of assumptions are biasing the results) was common across articles, with 98% (114) completely reporting this and 2% (2) not. However, sensitivity analysis is a broad category of analyses, and this does not mean the analyses were conducted well or were the best suited sensitivity analyses for the study in question. Finally, of the 29 articles which used two-sample MR methods on one-sample data, only 10% (3) of articles addressed this potential source of bias, while 10% (3) partially addressed this and 79% (23) did not. Previous evidence suggests that sample overlap may not have large effects in terms of biasing the results for some two-sample MR methods; 16 however, this should still be addressed by the authors.
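For instance, instrument strength is often summarised via the approximate F-statistic derived from the variance explained; a minimal sketch in R (with purely illustrative input values, not taken from the reviewed articles):

    # Approximate F-statistic for a genetic instrument from the variance explained (R^2),
    # sample size n and number of variants k: F = ((n - k - 1) / k) * (R^2 / (1 - R^2))
    f_stat <- function(r2, n, k) ((n - k - 1) / k) * (r2 / (1 - r2))
    f_stat(r2 = 0.02, n = 350000, k = 100)  # illustrative values only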
Reporting results
Descriptive statistics were completely reported by 74% (86), partially reported by 5% (8) and not reported by 21% (24) of articles. Reporting rates were far worse for the IV-exposure and IV-outcome associations which were completely reported by 5% (6), partially reported (ie, either only one of the two reported or reported for each individual variant) by 34% (39) and not reported by 61% (71) of articles. For reporting of the causal estimate reporting rates were (as would be expected) far better, with 79% (92) of articles completely reporting this and 21% (24) partially reporting this (ie, not making the units clear, not reporting an interpretable scale, or not giving a measure of uncertainty). Finally, 68% (79) of articles visualised the results in a figure, with 3% (3) only partially doing this (ie, the figure was not the best choice for visualisation or was difficult to interpret) and 29% (34) not visualising the results at all.
Providing software and code
Of the articles, 15% (17) clearly reported the statistical programmes, packages and versions of each used for the statistical analysis, 70% (81) partially reported this (missing versions or packages) and 16% (18) did not. If complete code is provided, then omissions of packages used and version numbers would matter little; however, only 11% (13) of articles provided complete code, 6% (7) provided partial code and 83% (96) did not provide any code. The provision of complete and readable code is vital to improving replicability and would cover most of the other items in this review.
Effects of article characteristics
There was no clear evidence that 2021 Journal Impact Factor, word limit/recommendation (either word number or whether a limit/recommendation was present) or year of publication (unadjusted or adjusted for batch) predicted the percentage of article completeness across items. There was evidence that primary analysis status ('yes' = 59% (69), 'partially' = 20% (23) and 'no' = 21% (24)) predicted an increase in completeness (mean difference in completeness percentage (95% CI), with primary analysis coded as 1, joint-primary coded as 0.5 and secondary analysis coded as 0 = 6% (2%, 10%)). While it might be expected that articles which did an eligible MR analysis as a joint primary analysis or secondary analysis would report this analysis less completely than those which conducted MR as the sole primary analysis, the ability to report methods and results in supplementary materials means researchers are always able to report analyses and results fully. This finding implies that researchers do not make proper use of supplementary materials and do not report all they should in these. Exploratory analyses for whether open access status ('yes' = 82% (95), 'partially' = 9% (11) and 'no' = 9% (10)) or the use of reporting guidelines ('yes' = 3% (4), 'partially' = 8% (9) and 'no' = 89% (103)) predicted article completeness showed no clear evidence. Regression results can be seen in figure 3 and at the Code Ocean repository (https://doi.org/10.24433/CO.4457049.v4).
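A sketch of how such a predictor could be coded and modelled in R (hypothetical variable names; see the Code Ocean repository for the actual code):

    # Code primary-analysis status as 1 / 0.5 / 0 and regress completeness on it
    df$primary_num <- c(yes = 1, partially = 0.5, no = 0)[df$primary_status]
    fit <- lm(completeness ~ primary_num, data = df)
    cbind(estimate = coef(fit), confint(fit))  # point estimates with 95% CIs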
Discussion
This study shows that the quality of analyses and reporting varied greatly across items and was worst for aspects relating to palindromic variants, variant replication, missing data, associations of the IV with the exposure/outcome and bias introduced by twosample methods used on a single sample. Article completeness was not clearly predicted by 2021 Journal Impact Factor, word limit/recommendation or year of publication, but was predicted by primary analysis status. Exploratory analyses on whether open access status or the use of reporting guidelines predicted article completeness found no clear evidence of associations.
The results of this study roughly align with those of previous reviews. A previous study found that 44% of MR articles discussed the plausibility of the core assumptions 9 and another found that 31%-89% of studies gave sufficient information on the validity of the core assumptions; 8 while our study did not look at this specifically, we found that 48% outlined the core assumptions (although mostly incorrectly), 27% completely addressed variant heterogeneity (relevant to the third core assumption), and 65% completely addressed the strength of the IV (ie, the first assumption). However, the previous studies did not investigate the reporting of variant heterogeneity, unlike our study, and one found lower reporting of instrument strength than we did (30%-34%). This previous study also identified that 14% of studies gave insufficient detail of the statistical analysis (ie, information to accurately replicate, including the CIs of the results), while we found that 14% did not give complete information about the MR estimator and covariates and 21% did not provide the causal estimate on an interpretable scale with measures of uncertainty. The other study (on MR-BASE analyses) showed similarly high rates of defining the exposure/outcome relationship being tested and defining the variables of interest (92%-97%), similarly middling rates of describing how variants were identified (68%) and similarly low rates of describing data harmonisation (14%-25%) and providing code (8%). They also found no evidence that reporting quality changed over time, supporting our findings. However, they found lower rates of reporting the causal estimate on an interpretable scale than we did (52% vs 79%). This study also reviewed articles with more than 10 exposures/outcomes (but stratified the results based on this) while we did not, and only rated articles on whether they did or did not provide information, while we also recorded whether information was partially provided. Another review (on oncology MR studies) found that 49% of articles reported subject characteristics and 48% did not describe the core MR assumptions. We found higher rates of reporting of subject characteristics (84%) and similar rates of not reporting the core assumptions (52%) in a broader sample of MR articles. It is important to note that no articles from the current review were included in the previous reviews.
Strengths of this study are the comprehensiveness of the review items and that articles from the first batch (55% of the overall sample) were full-text screened and reviewed by two reviewers with a third resolving conflicts, reducing the impact of bias and human error. This also allowed the data-extraction items to be refined and a consensus to be agreed between the reviewers on how each item was answered. One limitation, however, is that due to time constraints, articles from the second batch were screened and reviewed by the primary author only. As this was conducted after the screening and reviewing of batch 1 was completed, and thus a consensus on how each item should be answered had already been agreed by the authors, this is unlikely to have a large effect on the data other than an increased risk of error. However, the mean article completeness of batch 2 was similar to the mean for each second reviewer and all were within one SD of each other. A further limitation is that overall article completeness is an arbitrary measure which assumes each item is equally
important. While that is untrue, any weighting applied to this measure would be subjective and therefore just as arbitrary, and as the measure is only intended to identify predictors of article quality, we believe it is suitable to construct it in this manner. Also, while this review includes a large number of MR articles across all fields, it focuses on a very specific subset: those which used individual-level UKB data only to obtain a causal estimate for no more than 10 exposures or outcomes. To include articles outside of this subset would have made the review too large, but it is possible that article completeness differs drastically outside this subset. Further, several methods articles are included in the review. Many would argue that if an analysis is just being done to demonstrate a method it does not need to be reported in detail. We acknowledge there is some merit to this argument but feel best practice is always to report any analyses in a reproducible manner. Finally, several items could have been coded in more detail if the scope of the project were not already so large (ie, the quality of the sensitivity analysis or code provided could have been extracted). The replicability of articles would also be better assessed by carrying out full replication attempts. These are two areas which should be followed up in future research, if only on a small subsample of articles. It would also be worthwhile to investigate whether the adoption of the STROBE-MR, preregistration, 17 sharing materials 14 or peer review 18 improves article completeness; however, more time needs to pass since the creation of the STROBE-MR for this to be feasibly investigated.
Conclusions
The findings of this study highlight areas of poor conduct and reporting in MR research which need to be improved to increase replicability and impact. Only primary analysis status clearly predicted article completeness, implying that researchers do not sufficiently use supplementary materials to report secondary analyses and results. Increased analysis and reporting quality in the field is vital for improving our ability to replicate and accurately interpret findings, increasing the impact of research and making better use of public money. Future research should focus on the quality of certain aspects such as code or sensitivity analysis, as well as attempting direct replications, and should investigate the impact of the STROBE-MR specifically once it is more widely used.
Contributors MJG is the primary investigator and drafted the protocols, while FS, RCR and MRM commented on and edited these. MJG carried out title/abstract screening. MJG, FS and RCR carried out full-text screening and the data-extraction pilot. Final data extraction was carried out by MJG, FS, AC and RCR. The analysis was then carried out and the manuscript was written by MJG with guidance from JNK, RCR and MRM. The corresponding author (MJG) attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. The corresponding author (MJG) also accepts full responsibility for the finished work and the conduct of the study, had access to the data, and controlled the decision to publish.
Systematic Review: The Impact of Hospital Accreditation on Nurses' Perceptions of Quality of Care?
Introduction: Accreditation is the recognition of the quality of services that have met the National Hospital Accreditation Standards. Hospital accreditation covers patient safety goals, patient-focused service standards, hospital management standards, national programs and the integration of health education into hospital services. What is the impact of hospital accreditation on the quality of care, especially nursing services, according to the perceptions and attitudes of nurses in hospitals? Therefore, it is necessary to identify the impact of applying hospital accreditation in accordance with nurses' perceptions and attitudes towards
INTRODUCTION
Accreditation is defined as public recognition, by the National Health Accreditation Commission, of a health organization's achievement of accreditation standards, through assessment by external and independent surveyors appointed by the ministry of health (Reisi, Raeissi, Sokhanvar, & Kakemam, 2019). Developing countries often use hospital accreditation to ensure the quality and safety of patient care. However, the adoption of hospital accreditation standards is considered very demanding in the field of health services. In addition, the empirical literature on the benefits of accreditation is scarce; one of the first empirical interrupted time series analyses was designed to examine the impact of accreditation on measures of hospital quality (El-jardali, Jamal, Dimassi, Ammar, & Tchaghchaghian, 2008). The National Hospital Accreditation Standard is a method for viewing and assessing the quality of health care organizations using external surveyors and published standards. This is often compared to an internal review process in which members of the organization develop their own methods and standards to assess quality. There is little evidence available to verify which of the two forms of review has an impact on clinical outcomes and patient care. The accreditation process focuses more on risk management and patient safety than previous approaches to ensuring compliance with standards (Griffith, 2018). Patient safety is important to consider in health care. Thus, health institutions include various programs to monitor their services, including patient safety procedures. Accreditation is one such program: an internationally and nationally recognized evaluation process used to assess, promote, and ensure quality, efficient patient care and patient safety (Top & Tekingündüz, 2015).
Accreditation is increasingly being applied as a tool for governments to regulate and guarantee service quality and patient safety (Alswat et al., 2017). Hospitals must implement, develop and evaluate an effective, sustainable quality assessment and performance improvement program, based on hospital information systems (SIM RS). Hospital management must ensure that the program improves the quality and safety of patient care in hospital services and involves all hospital departments and services (Bahrami, Chalak, Montazeralfaraj, & Dehghani Tafti, 2014). Improving market orientation and patient safety has become a major concern for nursing management. For that reason, nurses must build a climate of quality improvement and patient safety, which is the key to improving nursing quality (Nomura, Pruinelli, Da Silva, Lucena, & Almeida, 2018).
Health care facilities are always associated with increased patient safety risks. In the past, the functions of risk management and quality improvement often operated separately in health service organizations, and the individuals responsible for each function had different reporting paths. Nurses' perceptions of quality improvement and patient safety are important in health care organizations. In addition, some hospitals have committed that patients will receive care with guaranteed safety. High-quality care is very important both to protect the institution's financial assets and to avoid losses, so it is essential to have risk management plans. In this regard, risk management and patient safety professionals work in close working relationships (Greenfield, Pawsey, & Braithwaite, 2011).
The nurse is responsible for ensuring that the patient receives safe care and that no harm occurs. Nurses have an important role in patient safety and in reducing medical errors. In the past few decades, patient safety has become a high-priority health system problem because of the high potential for adverse events in health facilities, which reflects the challenge of a weak patient safety culture in hospitals. Therefore, this problem must be integrated into all policy-making and managerial initiatives in our health system as a top priority (Zyoud et al., 2019). Nurses are the largest group of healthcare providers offering direct patient care. The purpose of this study was to describe nurses' perceptions of the patient safety culture after hospital accreditation and the correlation between nurses' perceptions of hospital accreditation and improvement in the quality of nursing services.
MATERIALS AND METHODS
This systematic review was carried out following the steps of the PRISMA Statement (Moher et al., 2009): (1) formulating research questions; (2) choosing relevant research terms and formulating search phrases in consultation with information specialists in the health sciences; (3) planning a search strategy; (4) agreeing on inclusion and exclusion criteria; (5) conducting systematic searches in electronic databases; (6) selecting appropriate research articles; and (7) conducting quality assessments of the studies chosen for review.

Search Strategy and Inclusion Criteria

Systematic searches were collected from ProQuest (347,327 journals), Scopus (418 journals), and ScienceDirect (592 journals). Searches were performed using combinations of Boolean terms and quoted phrases. To qualify for this review, articles had to use a relevant design (most used a cross-sectional method; one article used multiple regression analysis, one used an intervention study, one used a descriptive research design, and some used a literature review design) and to answer the research questions. Articles that were not relevant to the research questions, or not related to nurses' perceptions of the patient safety culture in the hospital accreditation era, were excluded from this study (as shown in Figure 1). In the screening process, we obtained 15 full-text articles that matched the search keywords, met the inclusion criteria, and suited the purpose of this review.
Finally, fifteen (n = 15) articles were selected through this process. To sharpen the selection and reach the aims of this literature review, we limited the articles to various electronic databases from various countries with diverse nurse characteristics and varied hospital policies related to the patient safety culture. The aim was to obtain the latest information on the topic of nurses' perceptions of the patient safety culture in the era of hospital accreditation because, in this era, it cannot be denied that in all regions of the world hospital accreditation is one of the requirements in rating the quality of hospital services. In all of these studies, the sample was nurses, in line with the purpose of this article, and we examined whether there were significant changes in nurses' perceptions of the patient safety culture in the accreditation era, because patient safety is a target of assessment for obtaining accredited hospital status (as shown in Table 1).
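To make the screening flow concrete, the short Python sketch below tallies a PRISMA-style deduplication and screening pass; the records, field names, and counts are hypothetical illustrations rather than this review's actual dataset.

# Minimal PRISMA-style screening tally (illustrative only; hypothetical
# records, not the review's actual data pipeline).

records = [
    # (title, source_database, relevant_to_question, full_text_available)
    ("Accreditation and safety culture A", "ProQuest", True, True),
    ("Accreditation and safety culture A", "Scopus", True, True),  # cross-database duplicate
    ("Hospital finance reform", "ScienceDirect", False, True),
    ("Nurses' perceptions post-accreditation", "Scopus", True, False),
]

# Step 1: remove duplicates found across databases (keyed by title here).
unique = {title: (src, relevant, full) for title, src, relevant, full in records}
print(f"Identified: {len(records)}; after deduplication: {len(unique)}")

# Step 2: screen on relevance to the research questions, then on full-text availability.
relevant = {t: v for t, v in unique.items() if v[1]}
included = {t: v for t, v in relevant.items() if v[2]}
print(f"Relevant: {len(relevant)}; full-text articles included: {len(included)}")

The same two-stage logic, deduplicate first and then apply the inclusion and exclusion criteria, underlies the counts reported in Figure 1.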
RESULTS
Based on the research subjects, the respondents comprised 5,311 nurses, 12,112 surveys drawn from patient records, and 30 healthcare staff and hospital directors contributing 638 respondents. Based on study location, all of the studies were conducted in hospitals. Based on research design, the quantitative studies comprised 9 cross-sectional, 2 descriptive-correlational, 2 comparative, and 2 retrospective designs. For data sources, a questionnaire was the instrument used in thirteen of the articles, and an intervention was conducted in two articles. We identified several instruments used to measure nurses' perceptions of the outcomes of hospital accreditation for improving quality of care.
A review of the fifteen articles found that nurses perceived an improvement in quality of care in hospitals as an outcome of accreditation. Regarding the benefits of hospital accreditation (Reisi et al., 2019), from nurses' perspectives, hospital managers can facilitate accreditation program implementation through financial support, strengthening of commitment and accountability, solving human resource issues, stopping overlapping current programs, and revising the accreditation program. Nurses also perceived that accreditation standards should be revised regularly to ensure that they comply with the most recent worldwide standards and remain relevant (El-jardali et al., 2008). One impact of hospital accreditation is an improved level of patient safety. Nurses' perceptions of patient safety appear to be essential, and nurses are important for improving the patient safety culture in health care organizations. Accreditation produced an overall statistically significant improvement in the perception of the patient safety culture (Griffith, 2018; Nomura et al., 2018). Participants' perception of the safety climate was positively correlated with their attitude toward medication error reporting, and this correlation strengthened following completion of the program (Alswat et al., 2017). Nurses perceived that JCI accreditation may help health systems enhance the awareness and ability to prevent MAEs and achieve successful quality improvements (Bahrami et al., 2014). Hospital accreditation has a positive impact on quality results, especially the quality of care provided to patients and patient satisfaction (Greenfield et al., 2011). Nurses perceived that hospital accreditation had a significant impact on hospitals' IC infrastructure and performance, and that accreditation helps teams review their work and improve their ideas about what they have done: it clarifies things for staff, gives a sense of direction, is constructive, and offers ideas for improvement. Nurses felt comfortable and confident approaching the surveyors but were pressured by time to move on; much more time for discussion is necessary, as the overall timetable is rushed and often compromised (Sekimoto et al., 2008; Yildiz & Kaya, 2014). Interventions that help hospital administrators and managers provide more supportive leadership may facilitate safety climate improvement. Hospitals and their units should develop friendlier, more intimate working environments that remove nurses' fear of penalties, and administrators and managers should support nurses who report their own errors (Zyoud et al., 2019). Nurses also perceived that hospital accreditation brought improvement in nursing documentation: educational interventions performed by nurses led to a positive change that improved nursing documentation and, consequently, better care practices (Al-Awa et al., 2012).
DISCUSSION
Nurses perceive increased teamwork and productivity as an outcome of hospital accreditation. There needs to be reward, recognition, leadership, and commitment to hospital accreditation so that it can enhance cooperation in improving patient safety (Reisi et al., 2019). One impact of hospital accreditation is improved patient safety. Accreditation has an overall advantage over certification in clinical leadership and review, and some of these results represent a statistically significant improvement in the perception of the patient safety culture. Where both systems, active on their own, show a positive relationship with quality management, the effect in combination seems to be bigger and more significant. The determination and evaluation of the patient safety culture level in hospitals should be viewed as a continuous process, because a need for continuous improvement in the hospital patient safety culture exists. To improve the patient safety level, nurses' perceptions of patient safety appear to be essential: nurses are important for the improvement of the patient safety culture in health care organizations. Moreover, some hospitals have recognized that providing care with guaranteed safety is essential (Griffith, 2018). Nurses' perceptions of the patient safety culture are influenced by many factors, including internal nurse factors and external or environmental factors. Internal factors are competency and position in the organization, where competence comprises age, level of education, and clinical experience. External factors that influence nurses' perceptions of the patient safety culture include leadership, hospital policy, teamwork, management support, open communication, promotion, and appreciation. Hospital accreditation had a significant impact on performance, and accreditation helps teams review their work and improve their ideas (Sekimoto et al., 2008). Nurses who perceived that they had enough staff and resources to provide quality nursing care, had good nurse-doctor and collegial relationships, and saw a visible and competent nursing leadership presence rated the safety of patient care in their workplace highly; all of these factors are strongly related to that assessment. The perception of having enough staff and resources might not be consistent with the actual staff-to-patient ratio, but it seems to be an important factor in how nurses see patient safety in their hospital ward or unit. It is a fact that nurses are the largest community in a hospital, accredited or not, and the main point is that nurses are the spearhead of hospital services who interact most frequently with patients. Thus, the patient safety culture must be the commitment of the nurses. In accreditation, patient safety is one of the objectives assessed and is the main prerequisite among the various requirements hospitals must meet. There is a significant increase in nurses' perceptions of the patient safety culture during and after hospital accreditation. Hospital accreditation has a positive impact on quality, especially the quality of care provided to patients and patient satisfaction (Sekimoto et al., 2008). When hospital accreditation gives nurses a safety culture rooted in their awareness, it becomes a further asset in the process of improving the quality of hospital services. Nurses, as part of the professional caregivers for patients and the group devoting the most energy in hospitals to managing patients comprehensively, will continue to grow in awareness of improving quality and patient safety in hospitals.
CONCLUSION
The study results show that hospital accreditation shapes nurses' perceptions of the quality of care. Quality of care is one impact of hospital accreditation, and nurses in accredited hospitals perceive a higher level of quality of health services. Compared with their counterparts in non-accredited hospitals, nurses' perceptions of the patient safety culture are influenced by leadership, commitment, support for strategic quality planning, education and training, and, additionally, by quality management and data use. In accredited hospitals, the perception of the safety climate was positively correlated with nurses' attitude toward medication error reporting. Strengthening the safety climate in the workplace is an important step towards improving patient safety. At the level of the patient safety competency dimensions, teamwork and communication are significantly related to the perceived safety climate. Therefore, increasing nurses' safety competencies, with an emphasis on teamwork and effective communication, can contribute to building a strong safety culture. Because the reviewed studies relied on cross-sectional and correlational designs, well-designed studies, such as qualitative ones, should be conducted to evaluate more objectively the phenomenon of nurses' perceptions of the impact of hospital accreditation on quality of care.
Paradoxes, nurses’ roles and Medical Assistance in Dying: A grounded theory
Background In June 2016, the Parliament of Canada passed federal legislation allowing eligible adults to request Medical Assistance in Dying (MAID). Since its implementation, a degree of hesitancy likely exists among some healthcare providers because the law is inconsistent with their personal beliefs and values. It is imperative to explore how nurses in Quebec experience the shift from accompanying palliative clients through "a natural death" to participating in "a premeditated death." Research question/aim/objectives This study aims to explore how Quebec nurses personally and professionally face the new practice of MAID and their role evolution. Research design A grounded theory design was used. Participants and research context We recruited 37 nurses who participated in or coordinated at least one MAID. Semi-structured interviews and focus groups were conducted and audiotaped. Data collection and analysis followed Strauss and Corbin's steps. Ethical considerations Ethics approval was received from the investigator's affiliated university. Participants were informed regarding the research goal, signed a written consent, and were assigned pseudonyms. Findings/results Results show that nurses experienced a wide range of paradoxes during MAID, centering around the following eight elements: 1) confrontation about death, 2) choice, 3) time of death, 4) emotional load, 5) the new Bill, 6) relationship with the person, 7) communication skills, and 8) healthcare setting. The shifting of views and values in this new role is presented through the contradiction of opposites. Conclusions A better understanding of the paradoxes experienced by nurses involved with MAID paves the way for the development of interventions.
Introduction
Medical assistance in dying (MAID) is a special form of palliative care and has been associated with several ethical dilemmas, resulting in substantial media attention. According to the Commission on End-of-Life Care, 1 a total of 2,268 people received MAID in Quebec in 2020, compared with 494 in 2016. In Canada, a total of 21,589 MAID deaths 2 have occurred since legislative enactment. Although this number seems marginal, the number of people seeking MAID continues to increase as the practice becomes better known.
MAID is an end-of-life care option available since December 2015 in Quebec. 3 In Quebec, MAID is defined as "care consisting of the administration of medication or substances by a physician to a person at the end of life, at the request of the person, with the aim of relieving his or her suffering by causing death." 3 Assisted suicide is prohibited by law in Quebec, which is why we focus solely on MAID. MAID represents a historic change in Canadian 4 and Quebec society. Although physicians play a crucial role in MAID administration, the nurse's role is intertwined in the larger responsibility of "providing palliative care," as stated in article 36 of the Nurses Act. 5 Nurses are widely involved in care before, during, and after MAID across a number of settings, including hospitals, nursing homes, palliative homes, and the community. The portrait of palliative care in Quebec is being transformed, and this carries an important ethical issue for nurses, since they are pillars of palliative care.
People in palliative care who meet the six criteria can now decide how and when they want to die. This choice initiates a paradigm shift for healthcare workers and, for some, this change is difficult and still taboo. What was once considered "killing" is now legally recognized as "caring." As this practice is still fairly new and infrequent, it is imperative to study the experience of nurses who provide this care. The impact of MAID on nursing has been examined in a number of emerging studies; however, few of them were conducted in the province of Quebec, which has different legislation and practice. Thus, using grounded theory as a method, we aimed to investigate how French-speaking nurses in Quebec perceived their role in accompanying clients and their families through MAID care.
Research question/aim/objectives
The goal of this study is to explore the nursing role evolution in the accompaniment of clients and families seeking MAID.
To do so, the objectives are as follows: 1. to explore the nursing role evolution in accompanying clients and families through the various stages (before, during, and after) of MAID; 2. to describe the needs of nurses accompanying clients and families through MAID; and 3. to describe the paradoxes experienced by nurses in accompanying clients and families through MAID.
This article focuses on objective 3. Results from objectives 1 and 2 will be presented in another article.
Background
The adoption of the Canadian law on MAID in 2016 means that it is a recent topic with emergent research and literature on the subject. The literature appears to be more concerned with the concepts of euthanasia and assisted suicide, which are distinct themes from MAID. Relevant studies indicate that MAID is causing shifts in how palliative and end-of-life care is being perceived by nurses, with effects at ethical, 6-8 professional, 9-12 and personal levels. 4,13,14 This extension of the nursing scope of practice is often met with moral sensemaking, with some nurses expressing distress. 9,10 Mitchell 15 defines living paradox as a "rhythmic shift in perspectives, where awareness arises from experiencing the contradiction of opposites in the daily relationship of value priorities while traveling toward the not-yet." Indeed, in choosing the values to participate in MAID care, nurses struggled with deciding to affirm certain values while simultaneously disconfirming others. The paradox is the moment when nurses struggle to decide. It is the process of changing perspectives that leads to the clarification of values and may lead to an ethical dilemma. 15 It differs from an ethical dilemma, which refers to a "situation when nurses are unable to solve the ethical problems they face, which includes difficulties in balancing interests, allocating resources, and establishing interpersonal relationships." 16 Fundamentally, the cause of a dilemma in nursing ethics is the gap between existing nursing practice and the ideal nursing goal. 17

Research design

Design. A qualitative methodology of grounded theory design appeared preferable, as the study's phenomenon of interest is both a social interaction and a process. More specifically, the method of Corbin and Strauss 18 was the methodological foundation, since the authors are positioned in a pragmatist and constructivist paradigm. Derived from symbolic interactionism, grounded theory maintains that the reality of the world is subjective and that the interpretation of situations is influenced by social interactions. 18 Nurses, therefore, can manifest their behaviors as a result of their interpretation of the world. In this way, their actions and treatment choices toward MAID are based on the meaning they give to it. This meaning emerges through situational interpersonal interactions and an intersubjective reality based on shared language symbols. 19 For example, the term "medical assistance in dying" may be a social symbol. For some nurses, it may represent the end of suffering, while for others it signifies the end of care and, thus, an ethical conflict emerges. Grounded theory, through symbols drawn from social interactions, provides a better understanding of how nurses define and explain the process of care, as it reflects a social process. Rooted in the empirical world, grounded theory proposes to take the form of an exploration, careful examination, and constant return to the concreteness of phenomena as experienced by nurses to interpret their reality as caregivers. This approach provides relevant information, increases understanding, and offers a sound guide for action. 18,20

Ethical considerations

Following research ethics board approval received from the investigator's affiliated site (# CER-19-255-07.02), participants were recruited through social media postings and through the university's communication department in order to reach nurses.
Finally, the snowball sampling technique allowed us to recruit participants in various regions of Quebec. After receiving these explanations, all participants were free to decline or to sign the consent form and participate in the interview.
Participants and research context
Between June 2019 and March 2020, we recruited French-speaking Quebec nurses who had experience accompanying clients through MAID. They had to be Registered Nurses practicing in Quebec and must have participated in MAID to some extent. In the province of Quebec, nurse practitioners were not licensed to practice MAID at the time the study was conducted, which is why they were excluded.
By conducting theoretical sampling that targets nurses who had different experiences with MAID, the nursing role was explored and clarified. Data collection continued until theoretical saturation was reached. 18 We used four data collection tools: 1) a sociodemographic and personal characteristic questionnaire, 2) a semi-structured interview, 21 3) a journal, 20 and 4) memos. 20 The sociodemographic questionnaire gathered information regarding nurses' age, marital status, ward, education, previous experience with MAID, and their support network. The qualitative semi-structured interviews were recorded in digital audio format, took place in the participants' homes or any other place chosen by the participants, and lasted between 45 and 90 minutes. Most of them were conducted face-to-face and two were done virtually.
The semi-structured interview is based on a guide developed by a clinical expert, a content expert, and an expert in grounded theory. The draft interview guide addresses themes such as experience, perceptions of MAID, and the evolution of the nursing practice responsibility. Questions include, but are not limited to, the following: "How did you become involved in MAID?" and "How do you perceive MAID?" Throughout the interview, inductive hypotheses were verified with the participants and the questions evolved along with the understanding of the phenomenon, to be ultimately framed as follows: "Tell me how your perception of MAID has evolved from before and after having participated in it?" If necessary, clarification questions such as "What do you mean by…?" and "Can you explain a little more…?" were asked to get a better understanding of what the participants' comments meant. This interview guide remained open-ended and was modified depending on the theoretical saturation of the interviews conducted.
Researchers also kept a journal in which they noted their preconceptions through memos in order to be sensitive to empirical data. 20 Memos are the written records of the researchers' methodological reflections, which are created as the analysis evolves and are noted at any point in the research process. 20 These memos also reflected researchers' reflexivity as well as their field observations, such as a description of the locations and the non-verbal observations of the participants.
Analysis
Data was analyzed concomitantly with data collection to respect the grounded theory principle of circularity. 22-24 This circularity allowed the researchers to constantly compare by going back and forth between the data, the results of the analysis, and the memos. The discussions with the research team were very productive and helped to generate new ideas or hypotheses. The latter led to changes in the interview guide to support the theory being developed. 25,26 This study aimed to discover new ideas and not to validate a theory, so the emerging data was not inserted into an existing analysis grid derived from a theoretical model or literature review.
Each interview was transcribed by undergraduate and graduate student research assistants. Subsequently, QSR NVivo 12 software was used to aid in data entry and organization of transcripts, memos, and extracts from scientific articles, facilitating data management. Data was analyzed through open, axial, and selective coding. 24 Open coding is the first step in which the researcher reads the transcripts and labels (codes) to each important concept present in each line and paragraph. Next, the data is grouped together to create categories in axial coding. Finally, in selective coding, the researcher finds an overarching category that captures the essence of the data.
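As a loose illustration only, not the authors' actual software workflow, the three coding stages can be pictured as progressively grouped structures; the excerpts, codes, and categories in the Python sketch below are hypothetical stand-ins echoing the kinds of quotations reported later.

# Sketch of open -> axial -> selective coding as nested structures
# (hypothetical codes and categories, for illustration only).

open_codes = {
    "P7: 'I helped someone get out of that'": ["ending suffering"],
    "P29: 'I had an obsession about the time'": ["scheduled death", "discomfort"],
    "P15: 'We give the lethal dose in fact'": ["taking a life"],
}

# Axial coding: group related open codes into categories.
axial = {
    "confrontation about death": ["ending suffering", "taking a life"],
    "time of death": ["scheduled death", "discomfort"],
}

# Selective coding: one overarching category capturing the essence of the data.
core_category = "paradoxes experienced by nurses during MAID"

# Trace which excerpts support each category through their open codes.
for category, codes in axial.items():
    support = [q for q, cs in open_codes.items() if any(c in codes for c in cs)]
    print(f"{core_category} <- {category}: {len(support)} supporting excerpt(s)")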
Participants' sociodemographic profile
Thirty-seven nurses contacted the researchers following publicity on social media and consented to participate. The demographic characteristics of 36 participants are shown in Table 1; one participant did not fill out the form. Nurses were between 22 and 64 years of age with a mean of 36 years, 26 had a university degree, and 27 worked full-time. They worked in different clinical settings: 1) 5 in a general community setting, 2) 6 in a community setting specialized in palliative care, 3) 11 in a palliative care unit, 4) 9 on a medicine ward, 5) 2 in a geriatric unit, and 6) 3 in a long-term care facility. Twenty-one had more than 10 years of experience in nursing, and 19 had more than 5 years in palliative care specifically. Finally, 25 of them needed psychological assistance after participating in MAID, and 22 took part in MAID four times or fewer.
Paradoxes during MAID care
We found that study participants had all experienced substantial paradoxes during MAID. From our qualitative analysis of the interviews, we identified eight elements causing paradoxes. These eight elements are represented in Figure 1 and are as follows: 1) confrontation about death, 2) choice, 3) time of death, 4) emotional load, 5) new Bill, 6) relationship with the person, 7) communication skills, and 8) healthcare setting.
Confrontation about death. The first paradox centers around death itself. MAID forced the nurses interviewed to be confronted with the subject of death. Nurses felt that they were providing a patient with unique care through MAID; however, they also felt that they were taking someone's life, as MAID is not a natural death. Nurses are trained to save lives. At first glance, therefore, MAID goes against what they were trained to do, as this participant testified: "We were trained to save life at all costs and, here, we take life away. We are doing the opposite of what we were taught. If the person is not well, they code, we will massage, and we will do everything to bring them back. This is the opposite. We'll do everything we can to make sure they leave with dignity." (P36) "We give the lethal dose in fact." (P15) "Well, that's another big question, a Pandora's box, and now that's a lot of ethical dilemmas about death." (P30)
To rationalize their decision to participate in MAID, several nurses explain that they contribute to ending the suffering, as explained in detail by participants 17 and 31: "That first experience troubled me. Not because of the experience itself, but because of all the previous events that could have ended much better if MAID had been available before. I have images of people I have accompanied in painful end-of-life situations." (P17) "We do MAID, it is not painful. There is pain, but there is much less pain and suffering compared to someone slowly dying for two or three days with pulmonary complications." (P31) A third participant concurred, stating, "I helped someone get out of that. I don't like to see someone suffer, I felt privileged to be able to accompany my patient. It was very intimate. I got to know his fears, to learn how he lived, to get to know his family and to accompany them from A to Z. I didn't say to myself: 'Oh my god, I killed someone!'" (P7) As illustrated through these nurses' testimonials, MAID creates a paradox and sometimes even an ethical dilemma since it is not a standard death, yet nurses rationalized their actions by stating that they are ending a patient's suffering by providing this care.
Choice. The second paradox arising from nurses' experience with MAID was related to a patient's choice in seeking MAID. Nurses could either respect it and see MAID as palliative care they are providing, or claim conscientious objection and refuse to participate in it. Nurses who did not experience any distress related to a patient's decision to pursue MAID were reassured in knowing that they had respected the person's wishes and that MAID is legalized care. As the following participants explained, these nurses did not judge and respected the person's choice to end their suffering: "I find that it is part of our care as much as anything else. Since they are in palliative care, we relieve their physical and psychological suffering with medication and psychotherapy. MAID, I find, is also a kind of care in the sense that it's what the patient wants. He has no quality of life, has physical or moral suffering, and I can help him with that." (P12) "It's care that you have to get familiar with and it's new." (P16) "It's care. It's a choice. It's a treatment. So, for me, that's what it is: care that relieves the person according to the decision that they have made." (P7, P20, P23) "Who am I to say it's right or wrong? We can't judge that." (P21, P36) "I was there to accompany them in what they want for the end of their lives. It's not for me to decide." (P27) In contrast, nurses who were not comfortable participating in MAID had the option of refusing by stating a conscientious objection. This objection was primarily related to religious beliefs or beliefs related to life and death. Participants, moreover, recommended not participating in MAID if they felt discomfort, stating that "I don't think it's the care you should give if you're not comfortable, because it's going to be felt by the family." (P8)

Time of death. The third paradox centers around the time of death. The scheduled and planned moment of death caused a lot of discomfort for many nurses because it is not as natural. As participant 29 said: "I had an obsession about the time and the moment. If I was obsessed with it, maybe the patient was too! There's something scary about it because there's a time. All day long, everyone reminds us: 'Don't forget that at such and such a time, there is the MAID. At such and such a time, yes, the doctor is going to be there. Did we remind him? At such and such a time, he must be there!' I find this unbearable. I had a patient who wrote down every day and the hours left." (P29) Contrastingly, participants also felt that MAID enabled people to regain control over their lives by deciding how and when they would die, so as not to let the disease take everything away from them. "The lady had been given the chance to say goodbye, to decide how it would happen, where it would happen, and with her little dog by her side." (P15) "I remember my patient said to me, 'I've controlled my whole life, why can't I control my death?' People want to control until the end. We're in an era where we control everything. This lady wanted to control the time and date of her death." (P7) "A patient said to me, 'I can't do anything about cancer, but yes, I'm going to decide when I'm going to go.'" (P24)

Emotional load. The fourth paradox concerns the emotional load, dividing nurses in their perceptions of MAID. On the one hand, nurses perceived MAID as a beautiful, soft, and worthy death. Some participants claimed that "It's so beautiful. It's wonderful to be able to help these people who are suffering so much" (P15).
"It's super sweet." (P34) "I think it's wonderful. They talk about dying with dignity in hospice; well, I've seen all kinds of different end of life scenarios and I think MAID is the most beautiful." (P12) On the other hand, others found it sad and difficult to overcome. "You get home and it's harder. You think about it. It's an emotional burden too." (P23) Some participants even mentioned it was a traumatizing event and did not wish to repeat it. "I kind of had the image of the lady all weekend in everything I did. It's like it's in us. It's like a post-traumatic shock that we experienced." (P17). The majority of the participants (69%) asked for psychological assistance from a peer or specialist after participating in the MAID program.
New bill. The fifth paradox centers around the fact that the Bill is still quite new. For some, this is seen positively. It is perceived as a new challenge with the ability to do good. As one participant stated, "I actually had a personal interest. I wanted to learn about it because I had a very close family member with multiple sclerosis. It started from a personal interest in understanding the process and, in the end, I find that it allows for real, human exchanges" (P4). Another participant felt that "It's okay to take part in MAID. It exists. We have access to that now. It comes with everything else in the evolution of care." (P24) For others, it is regarded as unknown and stressful, as evidenced by the following testimonials: "I was a little anxious because I didn't know what to expect." (P12) "The night before I was very stressed and I didn't sleep very well. I was not stressed about taking part in MAID, I was stressed about the procedure in general. I was thinking, 'Oh my God, what if this doesn't go well? What if I do something wrong?' It was the unknown that scared me a little bit." (P37)

Relationship. The sixth paradox stems from the nurse's relationship with the patient. Some nurses find that having a close relationship with the patient makes the experience more rewarding, as evidenced by the following testimonials: "The beauty for me is really in the approach, in being there for the person and their family. This is an aspect we learn in school that I feel we leave out of our practice far too often." (P36) "That's what it is all about for me, to form a special bond with these patients." (P26) On the contrary, some nurses find that having a close relationship with the patient makes providing MAID more difficult, as there is significant emotional burden and distress, and they consequently opt for a more distant relationship with the patient. One nurse reported, "I could have gone to see her every day, to see how she was doing and get attached to her, but I think I was more effective by keeping a certain distance." (P15)

Communication skills. The seventh paradox relates to communication skills. The better interpersonal skills a nurse has, the more competent they feel and the less distress they experience. The participants who felt competent with MAID stated that listening was essential: "Listening, a lot of listening and seeing how people feel. The patient, yes, is a priority, but the family is also around and must be taken care of as well." (P25) Additionally, gaining experience with MAID allows nurses to be more comfortable and less task-oriented: "When it's the sixth MAID you have on a unit, it's not the same as the first. It's less procedural." (P11) Other participants expressed not knowing what to say: "I felt like I had a lack of tools. From the moment you are asked, you don't know what to say." (P9) "Before MAID is provided, it's the anxiety of what the family is going through. Then, afterward, we are stuck with our dissatisfaction in knowing that maybe we could have offered more as a nurse." (P29) For those less comfortable with communication, being focused on the task and procedure during MAID served as a coping mechanism. "I don't know if it's a way to protect myself psychologically, but when I'm busy with nursing tasks, I'm fine with it." (P22)

Healthcare setting. Finally, the eighth paradox centers around the characteristics of the healthcare setting.
When nurses participating in MAID are released, meaning they only have one patient to take care of, and when they have mentorship and team support, providing MAID is a lot easier, as evidenced by the following testimonials: "On my unit, if there is a nurse tasked with MAID, they will only have that one patient because a patient who receives MAID needs a lot of attention. You really can't take care of other patients, that nurse is really dedicated." (P12) "It is well done in our department. The head of our department liberates us after MAID care so we don't have to continue to work with a head full of emotions." (P23) On the other hand, when they have many patients to care for, when they do not have support from their team and head nurse, and when information is not effectively communicated, MAID is stressful, as evidenced by the following testimonial: "It's very difficult emotionally. It takes an incredible amount of self-sacrifice. Right now, most nurses do it knowing that they must go on with their day. That's why it puts a damper on most of them." (P35)
Discussion
This research is the first Quebec grounded theory study to explore the nursing experience related to their role in the care of MAID. It provides an initial depiction of the paradoxes experienced by nurses supporting clients and their families through MAID in Quebec, Canada, where the practice is still new, and informational support and clinical guidelines are hard to find or non-existent. The discussion presents the eight paradoxes that emerged from the data that demonstrated "the opposing views of the same contradiction in the structuring of meaning." 15
Confrontation about death
The first paradox concerns death itself, which confronts nurses who are used to saving lives. This study aligns with Joolaee and Ho, 11 whose participants considered MAID a form of care for patients and a unique learning experience situated in a different paradigm than the education they had previously received. Our participants referred to the dignity of the patient in providing this care. 27 Kabigting 27 explains this paradox by quoting Parse, 28 who highlights the fact that it is through paradox that "the whole truth of a phenomenon is revealed." Therefore, dignity can only be truly known in light of a paradox, through our experiences of lack of dignity. Our participants were in favor of MAID because they had seen too many people suffer at the end of life, as in the research of Pesut and Thorne, 29 and they were advocates of the dignity of the person.
Choice
By studying how nurses develop their values, Sastrawan and Weller-Newton 30 found three reference points: a religious lens, a humanity perspective, and professionalism, "resulting in a unique combination of personal-professional values that comprise nurses' value system." Nurses' values, specifically in relation to MAID, were explored in this study, showing that our participants respect the patient's request, choice, and right to autonomy. Nurses' ethical conflicts are activated when someone disagrees with treatment decisions or questions the harm/benefit ratio of MAID. 31 These authors use the term "range of moral responses," rather than conscientious objection, to reflect the uncertainty about MAID that was characteristic of nurses in their studies. As observed in our study, no participants expressed conscientious objection to MAID, but some were uncertain about their feelings about it. 12 This process can lead to moral distress. 6 These authors also suggest expanding nurses' ethical approaches to conscience in care to create opportunities for nurses to inclusively discuss challenging ethical issues they encounter in practice. Failure to do so can cause stress related to issues that trouble their conscience, and burnout. 6
Time of death
Our results corroborate those of other research conducted to date indicating that nurses are glad to contribute to a dignified death, but the unnatural and planned aspect generates stress and paradoxes. 4,13,29,32 For the person requesting MAID, deciding the time of death is a way to regain control of their life. 29 This was reported as a positive aspect of MAID at the individual level in the study of Joolaee and Ho 11 and as a way to respect patients' autonomy over the illness. 33
Emotional load
On the one hand, nurses perceived MAID as a beautiful, gentle, and dignified death; on the other hand, others found it sad and difficult to overcome. Some nurses were surprised that MAID resulted in a peaceful death, 29 while others described their experience as post-traumatic shock. This is new information in the spectrum of the nurse's emotional burden that has not been found in other studies of moral distress. Moral distress is, according to Peter and Liaschenko, 34 a reaction to the constraints on a nurse's moral identity, relationships, and responsibilities that underlie a morally uninhabitable workplace, in which incoherent understandings and unsustainable practices are present. Most studies report that healthcare professionals express moral ambiguity and distress without describing the broad spectrum of emotions. 7,9,10

New Bill

The new Bill is seen positively by participants in this study. Our results align with other studies perceiving that the legislation allows patients a new end-of-life option that was not previously available. 11 It is perceived as a new challenge with the ability to do good and a new learning opportunity for HPCP. 11 With the new legislation comes the negative side of uncertainty, 12 as it is unknown 13 and stressful: nobody knows what to expect because few healthcare settings have established a protocol. 11
Relationship with the person
Nurses have a unique relationship with patients because of the intense and continuing nature of their interaction, and this unique relationship can be challenged in the context of MAID. 14 Some nurses find that having a close relationship with the patient makes the experience more rewarding; on the contrary, others find that a close relationship makes providing MAID more difficult, as there is significant emotional burden and distress, and they consequently opt for a more distant relationship with the patient. This intimacy/distance paradox was observed by Boroujeni and Mohammadi. 35 Although the sociocultural contexts of our two studies are different, the essence is the same. The length of the care and the connection, or bond, created with the client and their family greatly affected the nurses' feelings: "The more interdependent the nurse and the patient and/or the patient's family had been, the greater the impact of the patient's death on the nurse." Although it is in nurses' code of ethics, few studies have reported how the intimacy/distance paradox impacts nurses' care and feelings.
Communication skills
Having excellent technical capacity with requisite communication skills was described as essential. Nurses identified empathy, listening, engaging, and being comfortable with intense emotion as key elements of effective communication with patients and families. 4 The better interpersonal skills a nurse has, the more competent they feel and the less distress they experience. Additionally, gaining experience with MAID allows nurses to be more comfortable and less task-oriented. For those less comfortable with communication, focusing on the task and procedure during MAID served as a coping mechanism. It was also observed that nurses who practice in a more procedural, rather than an existential, manner will be less involved and experience less distress, as shown by Bellens and Debien. 13
Healthcare setting
As evidenced by Beuthin and Bruce, 4 the role of registered nurses in MAID varied dramatically across different settings and ranged from simply being involved in the technical aspect to orchestrating most of the communication, advocacy, and relational care. Many had to be leaders and create their role to include the following: providing information about MAID to patients and families, coordinating the MAID process, preparing equipment, ensuring IV access for medication delivery, coordinating and informing healthcare professionals related to the MAID procedure, documenting the care provided, supporting patients through the entire experience, and providing post-death care. 14 As in Pesut and Thorne's study, 12 nurses in this research were working within systems that differed greatly in their response to MAID. This patient-centered perspective meant that nurses prioritized a MAID-related request and/or provision over other duties, and that when the clinical setting and its leaders create a favorable context, for example by releasing nurses participating in MAID from other caseload and duties so that they have only one patient to care for, and by providing mentorship and team support, providing MAID is much easier. Participants described teamwork as essential to a successful MAID process and to mutual support. 12 Multidisciplinary MAID debriefs are a vital source of education and support, 11 yet healthcare professionals are often dissatisfied with the support their institution offers. Nurse leaders' role in MAID is to support nurses by ensuring they have the required knowledge to manage patients requesting the service, whether or not the nurse is directly involved in the MAID process. 36 The heterogeneity of the clinical settings and the diverse Quebec regions are the main strengths of the study, although the main limitation is the lack of ethnic diversity, which hinders transferability. Transferability of results is facilitated by the description of the research process, participant characteristics, and the context, as well as by the variation in sample composition when searching for theoretical saturation.
Conclusions
This grounded theory research is the first in Quebec to identify the wide range of paradoxes nurses face as they evolve to accompany clients and families seeking MAID. These center on: 1) confrontation about death, 2) choice, 3) time of death, 4) emotional load, 5) the new Bill, 6) relationship with the person, 7) communication skills, and 8) healthcare setting. The shifting of views and values in this new role is presented through the contradiction of opposites. This significant contribution also suggests that nurses may suffer from post-traumatic shock following this care when they are not clear about their values. It also demonstrates that much is needed in clinical settings to support nurses in overcoming the paradoxes of this new and unusual caring role. Finally, it reinforces the importance for healthcare managers of releasing nurses participating in MAID from their daily caseload, allowing them to provide dedicated one-to-one care to best accompany patients and their families through this experience, and of providing team debriefs post-MAID, although this is costly. This study paves the way for the development of supportive and educational interventions.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article.
Antibody attributes, Fc receptor expression, gestation and maternal SARS-CoV-2 infection modulate HSV IgG placental transfer
Summary Antibody-dependent cellular cytotoxicity (ADCC) is associated with protection against neonatal herpes. We hypothesized that placental transfer of ADCC-mediating herpes simplex virus (HSV) immunoglobulin G (IgG) is influenced by antigenic target, function, glycans, gestational age, and maternal severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Maternal and cord blood were collected from HSV-seropositive (HSV+) mothers pre-COVID and HSV+/SARS-CoV-2+ mothers during the pandemic. Transfer of HSV neutralizing IgG was significantly lower in preterm versus term dyads (transfer ratio [TR] 0.84 vs. 2.44) whereas the TR of ADCC-mediating IgG was <1.0 in both term and preterm pre-COVID dyads. Anti-glycoprotein D IgG, which had only neutralizing activity, and anti-glycoprotein B (gB) IgG, which displayed neutralizing and ADCC activity, exhibited different relative affinities for the neonatal Fc receptor (FcRn) and expressed different glycans. The transfer of ADCC-mediating IgG increased significantly in term SARS-CoV-2+ dyads. This was associated with greater placental colocalization of FcRn with FcγRIIIa. These findings have implications for strategies to prevent neonatal herpes.
INTRODUCTION
Active or passive immunization during pregnancy provides an important opportunity to improve maternal health and reduce neonatal morbidity and mortality from infectious diseases. 1,2,4-6 Immunizations may be most effective when administered early in the third trimester, as placental syncytiotrophoblast expression of the neonatal Fc receptor (FcRn), which plays the dominant role in immunoglobulin G (IgG) transfer, increases after 28 weeks of gestation. 7-10 For example, while the Asn297 N-linked glycosylation site of Fc is not in the FcRn binding site, glycans at this site affect the conformation and modify the affinity for the FcRn. 11,12,14,15 The antigenic target of IgG also affects placental transfer, presumably because it may affect which subclass is elicited, glycan modifications, and the conformation of Fc upon fragment antigen binding (Fab).
In 2018, it was estimated that there were approximately 14,000 annual (4,000 HSV-1 and 10,000 HSV-2) cases of neonatal herpes, although the incidence of neonatal HSV-1, which has emerged as the more common cause of primary genital infections in the United States and other countries, is increasing. 17-19 Risk factors for neonatal HSV disease include primary or first-episode maternal infection in the third trimester, preterm birth, maternal age less than 21 years, and invasive monitoring, which may disrupt the protective epithelial barrier. 17,18 The increased risk for neonatal disease following primary compared to reactivating HSV may reflect higher maternal HSV viral loads as well as limited placental transfer of HSV-specific IgG. 20 The increased risk with preterm birth may also be linked to reduced transfer of HSV-specific IgG, although this has not been well studied. 21 The function of the transferred Abs may also impact the clinical outcome. Neonates who acquired higher levels of Abs that mediate antibody-dependent cellular cytotoxicity (ADCC) from their mothers were more likely to have disease limited to the skin, whereas those with low levels of ADCC-mediating Abs were more likely to have disseminated disease after controlling for the nAb titer. 22 Importantly, primary HSV infection elicits a predominantly neutralizing response with little or no ADCC detected until at least 6 months following primary infection. 23 Maternal coinfections also may modulate placental transfer of pathogen-specific IgG by several mechanisms, including an increase in total maternal IgG levels and subsequent competition for FcRn, placental inflammation, and changes in placental expression of FcRn and FcγRIIIa, the receptor associated with ADCC responses. 24,25 For example, maternal malaria infection is associated with decreased transport of measles but not tetanus Abs, whereas maternal HIV is associated with decreased placental transfer of both measles and tetanus Abs. 13,26,27 In contrast, the transfer of influenza- and pertussis-specific Abs was preserved in pregnant women infected with SARS-CoV-2 during the third trimester. 25 While early recognition and treatment with acyclovir have reduced the mortality of neonatal HSV disease, morbidity remains high. Thus, there is an urgent need to develop safe and effective vaccines and/or monoclonal Abs to prevent maternal infection, boost maternal antibody titers and subsequent placental transfer, and/or treat infected neonates. There is little data on the determinants of placental transfer of HSV-specific IgG. The current study, therefore, was designed to compare the quantity, function (neutralization, complement component C1q binding, or FcγRIIIa activation as a biomarker of ADCC 21,28-32), and antigenic targets of HSV-specific IgG in mother-cord blood dyads who delivered at term compared to preterm and to evaluate whether maternal SARS-CoV-2 coinfection during pregnancy impacted the transfer by comparing maternal HSV-seropositive (HSV+) pre-COVID and maternal HSV+/SARS-CoV-2-coinfected (HSV+/SARS-CoV-2+) dyads.
Demographics and clinical characteristics of study participants
Maternal and cord blood were obtained from HSV+ mothers and their newborns who delivered between October 2018 and December 2019 (pre-COVID, n = 41) and from HSV+/SARS-CoV-2+ mothers (defined by a positive maternal SARS-CoV-2-specific nasopharyngeal PCR test at enrollment) and their newborns who delivered between April and December 2020 (n = 36) (Table 1). To evaluate the impact of preterm delivery on placental transfer of IgG, both groups were dichotomized at 37 weeks of gestation into term and preterm cohorts. There were no significant differences in maternal demographics, although more pre-COVID preterm mothers had preeclampsia (p < 0.01, chi-square). In addition, maternal blood was obtained earlier relative to delivery in the pre-COVID term group compared to the other groups (p < 0.001, Kruskal-Wallis with Dunn's multiple comparison test), and therefore this variable was included in linear regression models as further described in the following. There was a statistically significant difference in the distribution of maternal COVID-19 disease severity comparing the term vs. preterm groups: 17 (60.7%) term and 1 (12.5%) preterm mothers were asymptomatic and were tested as part of infection control policy; 8 (28.6%) term and 3 (37.5%) preterm were classified with mild disease; 3 (10.7%) term and 2 (25%) preterm were classified with moderate disease; and 2 (25%) preterm mothers had severe disease (p < 0.05, chi-square). The onset and duration of SARS-CoV-2 infection are unknown, but all participants had detectable plasma SARS-CoV-2 immunoglobulin M (IgM), immunoglobulin A (IgA), and IgG (Figure S1). None of the mothers had received a COVID-19 vaccine, and none of the newborns were diagnosed with SARS-CoV-2 infection or neonatal herpes. The majority of pregnant participants were HSV-1+ (n = 61) or HSV-1 and HSV-2 dually seropositive (n = 11), and thus assays were conducted using HSV-1-infected cell lysates or recombinant HSV-1 glycoproteins as antigenic targets. However, as HSV-1 and HSV-2 Abs cross-react, HSV-2-seropositive-only participants (n = 9) were included.
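As a rough illustration of the severity comparison just described, the sketch below runs a chi-square test on the reported term vs. preterm counts; it assumes a Python environment with scipy available, and a plain chi-square on cells this small is only an approximation of whatever exact procedure the authors applied.

# Chi-square comparison of COVID-19 severity distributions (term vs. preterm),
# using the counts reported in the text; scipy is an assumed dependency.
from scipy.stats import chi2_contingency

# Rows: term (n = 28), preterm (n = 8).
# Columns: asymptomatic, mild, moderate, severe.
counts = [
    [17, 8, 3, 0],
    [1, 3, 2, 2],
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")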
Total and HSV-specific antibody response in mother-infant pre-COVID and SARS-CoV-2+ dyads

Because there was a difference in the timing of maternal blood collection relative to delivery in the pre-COVID term compared to the preterm and SARS-CoV-2+ cohorts, we first assessed whether timing was associated with maternal total IgG or HSV-specific IgG. There was no association (Figure S2, p = 0.31 and 0.34, respectively). However, the transfer of total IgG and IgG1 (but not IgG3) (Figures 1A-1C) and the transfer of HSV-binding IgG, IgG1, and IgG3 (Figures 1D-1F) were significantly impaired in preterm compared to term pre-COVID dyads, with transfer ratios (TRs) (the ratio of cord to maternal plasma IgG levels) < 1.0 (Table 2; Figures S3A-S3C). This resulted in significantly less total IgG and IgG1 and HSV-specific IgG, IgG1, and IgG3 in the cord blood of preterm vs. term infants (p < 0.001 for total IgG and p < 0.0001 for the others, Mann-Whitney U test). HSV-specific IgG2 and IgG4 were below the limit of detection in maternal and cord blood and thus were not included in the analyses. Maternal total IgG concentrations were higher in SARS-CoV-2+ term compared to pre-COVID term or preterm pregnancies (Figure 1A, p < 0.05, Kruskal-Wallis with Dunn's multiple comparison test). This was associated with TRs of total IgG < 1.0 in both the term and preterm SARS-CoV-2+ cohorts (Table 2). However, similar to the pre-COVID cohort, the TRs of HSV-specific IgG, IgG1, and IgG3 were all significantly lower in preterm versus term SARS-CoV-2+ dyads (p < 0.001 for HSV-specific IgG and IgG1, p < 0.01 for HSV-specific IgG3, Mann-Whitney U test, Table 2). Inclusion of the few HSV-2-seropositive-only participants, denoted by open symbols (Figure 1), did not impact the results.
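The transfer ratio and the term vs. preterm comparison can be made concrete with a short sketch; the IgG values below are invented for illustration, and numpy/scipy are assumptions of this sketch rather than the authors' stated toolchain.

# Transfer ratio (TR) per dyad and a Mann-Whitney U comparison of cohorts
# (hypothetical IgG levels in arbitrary units; illustrative only).
import numpy as np
from scipy.stats import mannwhitneyu

# Paired maternal and cord plasma IgG levels, one entry per dyad.
term_maternal    = np.array([10.1, 12.4, 9.8, 11.0])
term_cord        = np.array([13.5, 15.0, 11.2, 14.8])
preterm_maternal = np.array([10.5, 9.9, 12.0])
preterm_cord     = np.array([7.8, 6.5, 9.1])

# TR = cord level / maternal level; TR > 1.0 indicates efficient transfer.
tr_term = term_cord / term_maternal
tr_preterm = preterm_cord / preterm_maternal
print(f"Median TR, term: {np.median(tr_term):.2f}; preterm: {np.median(tr_preterm):.2f}")

# Nonparametric comparison of TRs between cohorts, as in the text.
stat, p = mannwhitneyu(tr_term, tr_preterm, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")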
Differences in transfer of neutralizing, C1q-binding, and FcgRIIIa-activating IgG in term versus preterm pre-COVID and SARS-CoV-2+ dyads
To explore whether there were differences in the function of placentally transferred HSV-specific IgG, we compared the neutralizing titer by plaque reduction assay (Figure 2A), C1q-binding activity by ELISA (Figure 2B), and fold activation of human FcgRIIIa (ADCC) (Figure 2C) in paired maternal and cord blood samples using HSV-infected cell lysate as the antigenic target. FcgRI- and FcgRIIa-activating HSV IgG were also measured, but no increase relative to background was observed.
There were no statistically significant differences in levels of functional Abs comparing mothers who delivered at term vs. preterm. However, the transfer efficiency differed. Neutralizing antibodies (nAbs) were efficiently transported (defined as TR > 1.0) only in term but not preterm dyads (TR 2.44 [1.5-3.3]; Figures 2A and S3D). C1q-binding Abs transferred with similar efficiency in term and preterm infants (TR 1.33 [1.0-2.1] and 1.2 [1.0-1.4], respectively; Figure 2B and Table 2). In contrast, FcgRIIIa-activating Abs were inefficiently transferred in both term and preterm dyads, which resulted in significantly lower levels of this functionality in cord compared to maternal plasma (p < 0.0001, Figure 2C). The TR was < 1.0 in both term and preterm dyads and was significantly lower in the preterm dyads (0.72 [0.6-0.8] vs. 0.50 [0.4-0.6], p < 0.01, Mann-Whitney U test, Table 2).
Transfer of anti-glycoprotein D (gD)- and anti-glycoprotein B (gB)-specific Abs
The difference in the TR of functionally distinct Abs could reflect their antigenic targets, IgG subclasses, glycans, and/or FcRn affinity. To explore these possibilities, we first compared the transfer of anti-gD and anti-gB Abs using recombinant proteins and plasma from the pre-COVID (Figures 3A and 3C, respectively) and SARS-CoV-2+ (Figures 3B and 3D, respectively) term dyads. These two envelope glycoproteins were selected because both are major targets of nAbs and gB has recently been identified as a target of ADCC responses.23,33 Both anti-gD and anti-gB IgG were efficiently transported (TR > 1.0, Table 2), but the anti-gD Abs were almost exclusively IgG1 (Figures 3A and 3B) whereas the anti-gB Abs comprised both IgG1 and IgG3 subclasses (Figures 3C and 3D). There were no differences in the TR of anti-gD IgG or IgG1 comparing pre-COVID and SARS-CoV-2+ dyads, but the TR of anti-gB IgG was significantly greater in the SARS-CoV-2+ compared to pre-COVID dyads (p < 0.05, Mann-Whitney U test) and trended higher for the anti-gB IgG1 and IgG3 subclasses (p = 0.06 and p = 0.09, Mann-Whitney U test, respectively) (Table 2). These differences are consistent with the higher TR of ADCC-mediating IgG in the SARS-CoV-2+ compared to pre-COVID cohort.
Functionality of antigen- and subclass-enriched Abs in pre-COVID dyads
To further dissect the differences in function of gD- and gB-specific IgG1 and IgG3, we enriched plasma from maternal (n = 4) and cord (n = 3) term pre-COVID samples for anti-glycoprotein-specific IgG using Protein L columns followed by gD and gB lectin columns. The anti-gD-enriched fraction recognized gD (but not gB) on western blots with recombinant proteins or HSV-infected cell lysate as antigens, whereas the anti-gB-enriched fraction recognized gB but not gD (Figure S4A). The final flow-through, which was depleted of both anti-gD and anti-gB IgG (gD-gB-), recognized several other bands within the HSV-infected lysate but did not recognize recombinant gD or gB protein. The anti-gD IgG had significantly more neutralizing activity compared to either the anti-gB or the gD-gB- fraction (p < 0.001 and p < 0.0001, respectively, ANOVA with Tukey's multiple comparison) but had little or no FcgRIIIa-activating activity (Figures 4A and 4B). In contrast, the anti-gB and gD-gB- fractions exhibited both functions and had significantly more ADCC-mediating activity compared to the anti-gD-enriched fraction (p < 0.05, ANOVA with Tukey's multiple comparisons test). Detection of FcgRIIIa-activating Abs in the gD-gB- fraction is consistent with studies showing that gB is only one of several targets of ADCC-mediating IgG, although the other antigenic targets have not been identified.33 Given that IgG1 crosses the placenta more efficiently than IgG3 and nAbs cross more efficiently than FcgRIIIa-activating Abs, we hypothesized that the nAbs would be primarily IgG1 whereas the FcgRIIIa-activating Abs might be predominantly IgG3. To test this, we enriched for IgG1 by passing the Protein L column eluent over a Protein A column, which binds IgG1 (as well as IgG2 and IgG4) more efficiently than IgG3;34 the enrichment was assessed by ELISA with anti-IgG as the capture antigen and subclass-specific secondary Abs (Figure S4B). Refuting our hypothesis, both the neutralizing and the FcgRIIIa-activating Abs were primarily IgG1 and not IgG3 (Figures 4C and 4D). To confirm this, the anti-gB-enriched IgG (n = 3 pools, each comprising plasma from 2 participants) was applied to a Protein A column; the FcgRIIIa activity mapped to the IgG1-enriched fraction (Protein A eluent) (Figure 4E).
The finding that both neutralizing and FcgRIIIa-inducing activity were contained within the IgG1 fractions suggested that the differences in TR might reflect differences in FcRn affinity, which in turn may be associated with differences in IgG glycan modifications. To assess this, we quantified the apparent affinity for the FcRn. The IgG1 fraction had significantly greater apparent affinity (1/KD) compared to IgG3 (p < 0.01, Mann-Whitney U test), and the anti-gD-enriched fraction had significantly greater apparent affinity for the FcRn than the anti-gB-enriched fraction (p < 0.0001, Mann-Whitney U test) (Figure 4F).
Differential expression of glycans in anti-gD- and anti-gB-enriched maternal IgG
MALDI-TOF mass spectrometry analysis of IgG isolated from the anti-gD- and anti-gB-enriched fractions (term pre-COVID maternal plasma, n = 5 each) showed an increased abundance of terminally sialylated glycans in the anti-gD- compared to anti-gB-enriched fractions (p < 0.01, paired t test); this modification, when expressed on the Fc region, is associated with FcRn affinity.7,13,15,35,36 Conversely, the anti-gB-enriched IgG displayed an increased abundance of fucosylated, galactosylated, N-acetylglucosamine (GlcNAc), and bisecting GlcNAc glycans compared to anti-gD (all p < 0.01, paired t test, Figure 5A and Table S1).37,38 Additional glycan analyses were performed with anti-gD- and anti-gB-enriched IgG from the SARS-CoV-2+ cohort (n = 5 maternal term samples each). Although the trends were similar, no significant differences in the abundance of different glycans were observed comparing the anti-gD- and anti-gB-enriched samples (Figure 5B and Table S1). Moreover, little or no GlcNAc and bisecting GlcNAc glycans were detected in the SARS-CoV-2+ anti-gD- or anti-gB-enriched samples. These findings suggest that differences in glycans alone do not explain the increased TR of FcgRIIIa-activating IgG in the term SARS-CoV-2+ cohort.
Increased colocalization of FcRn and FcgRIIIa on placentas from SARS-CoV-2-positive mothers compared to SARS-CoV-2-negative mothers
To determine if placental expression of FcRn or FcgRIIIa contributed to the increased transfer of ADCC-mediating IgG, placental tissue from SARS-CoV-2+ and contemporaneous SARS-CoV-2 PCR-negative (SARS-CoV-2-) deliveries (n = 8 term and n = 5 preterm each) was stained with Abs to placental alkaline phosphatase (PLAP), FcRn, and FcgRIIIa, and the expression and colocalization were compared (Figure 6). As expected, there was significantly more staining of FcRn in term compared to preterm placental tissue independent of SARS-CoV-2 coinfection (p < 0.01 for SARS-CoV-2- and p < 0.05 for SARS-CoV-2+, Kruskal-Wallis with Dunn's multiple comparison test, Figure 6B). There was no difference in the relative amount of FcgRIIIa staining, but FcRn-FcgRIIIa colocalization was greatest in the term SARS-CoV-2+ placental tissue and was significantly greater than that in term and preterm SARS-CoV-2- tissue (p < 0.05 and p < 0.01, Kruskal-Wallis with Dunn's multiple comparison test, respectively, Figures 6C and 6D).
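The colocalization itself was quantified in ImageJ (see the Figure 6 legend). As a rough illustration of one common pixel-wise approach, the sketch below computes a Manders-style overlap coefficient on thresholded channels; the thresholds, preprocessing, and choice of coefficient are assumptions for demonstration, not the authors' exact ImageJ pipeline:

# Illustrative Manders-style colocalization between two channels.
# NOT the authors' ImageJ workflow; thresholds are assumed for demonstration.
import numpy as np

def manders_overlap(ch1, ch2, thr1, thr2):
    """Fraction of above-threshold ch1 signal overlapping above-threshold ch2."""
    mask1 = ch1 > thr1
    mask2 = ch2 > thr2
    if ch1[mask1].sum() == 0:
        return 0.0
    return ch1[mask1 & mask2].sum() / ch1[mask1].sum()

rng = np.random.default_rng(0)
fcrn = rng.random((256, 256))    # stand-in for the FcRn channel
fcgr3a = rng.random((256, 256))  # stand-in for the FcgRIIIa channel
print(manders_overlap(fcrn, fcgr3a, thr1=0.5, thr2=0.5))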
Multivariable analyses confirm association of gestational age and SARS-CoV-2 coinfection with transfer of neutralizing and FcgRIIIa-activating IgG, respectively
Multivariable linear regression models were constructed to identify factors associated with the transfer of HSV-specific IgG, neutralizing, or FcgRIIIa-activating Abs. Variables included maternal age, neonatal sex, gestational age (dichotomized as term vs. preterm), COVID-19 infection, timing of maternal blood collection, maternal total IgG, and HSV-specific IgG. Gestational age and maternal HSV-specific IgG levels were associated with the transfer of HSV-specific IgG (both p < 0.0001). Gestational age was also positively associated with the transfer of nAbs (p < 0.03). In contrast, COVID-19 infection and newborn female sex were significantly associated with the transfer of FcgRIIIa-activating Abs. However, the interaction between neonatal sex and COVID-19 was significant (p < 0.05), and when this was included in the model, only COVID-19 status retained a significant association (p < 0.05, Table 3). Moreover, when only female newborns were included in the model, COVID-19 infection was positively associated with FcgRIIIa-activating Ab transfer (p < 0.05), but the association was not significant when only males were included (Table S2). We further assessed the potential association between the timing of maternal blood collection and TRs. There was no significant association between maternal blood collection time (relative to delivery) and the TR of HSV-specific IgG (rho = 0.15, p = 0.18), nAbs (rho = 0.29, p = 0.35), or ADCC-mediating Abs (rho = -0.11, p = 0.32) (Figure S5).
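For readers who wish to reproduce this type of model, the sketch below shows a minimal multivariable linear regression with a neonatal sex x COVID-19 interaction term using the statsmodels formula API; the data frame and column names are hypothetical stand-ins for the study variables:

# Minimal sketch of a multivariable linear regression with a sex x COVID-19
# interaction. The data frame and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "tr_fcgr3a":    [0.6, 0.9, 1.3, 0.5, 1.2, 0.7, 1.1, 0.8, 0.6, 1.4],  # TR
    "covid":        [0, 0, 1, 0, 1, 0, 1, 0, 1, 1],   # SARS-CoV-2+ at delivery
    "female":       [1, 0, 1, 0, 1, 1, 0, 0, 1, 1],   # newborn sex
    "preterm":      [0, 1, 0, 1, 0, 0, 0, 1, 1, 0],
    "maternal_age": [28, 31, 25, 34, 29, 33, 27, 30, 26, 32],
})

# "covid * female" expands to covid + female + covid:female (the interaction)
model = smf.ols("tr_fcgr3a ~ covid * female + preterm + maternal_age",
                data=df).fit()
print(model.summary())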
DISCUSSION
Results of these studies demonstrate that preterm vs. term gestation, antibody function, IgG glycans, and placental expression and colocalization of FcRn and FcgRIIIa contribute to the efficiency of HSV-specific IgG placental transfer. The transfer of nAbs, which are the dominant response to acute HSV infection,39 is greater than that of FcgRIIIa-activating and C1q-binding Abs in term pre-COVID dyads, as reflected by the differences in TRs. The relatively inefficient transfer (TR < 1.0) of FcgRIIIa-activating, ADCC-mediating IgG in both term and preterm neonates (in the absence of SARS-CoV-2 coinfection) may have important clinical implications since, as suggested by a prior clinical study, ADCC likely plays a more important role in controlling viral dissemination.22 The importance of ADCC in protecting neonates has also been suggested in mouse studies, which demonstrated that immunization of female mice with a vaccine that elicits a predominant ADCC response protected their pups from subsequent viral challenge, whereas nAbs elicited by HSV infection or a gD subunit vaccine were less protective.32,40 Preterm infants may be particularly vulnerable to disease because the transfer of both neutralizing and ADCC-mediating IgG is compromised (TR < 1.0). The importance of gestational age in modifying the transfer of neutralizing IgG (but not FcgRIIIa-activating IgG) was confirmed in the multivariable linear regression model, where gestational age was the only significantly associated factor. This is likely a consequence of decreased expression of the FcRn in preterm placentas, which we documented by immunohistochemistry, as well as the shorter time available for Abs to cross.
Differences in IgG subclass do not explain the differential placental transfer of neutralizing compared to FcgRIIIa-activating Abs, as both were predominantly IgG1; the anti-gD IgG were restricted to the IgG1 subclass, and while anti-gB included both IgG1 and IgG3, all the ADCC-mediating activity mapped to IgG1. However, the antigenic targets and glycans differed, and we speculate that this may have contributed to the variance in transfer efficiency. The anti-gD Abs had essentially only neutralizing activity, had greater apparent affinity (1/KD) for FcRn, and expressed glycans that, when expressed on the Fc region, are associated with increased FcRn affinity. In contrast, the anti-gB IgG exhibited both neutralizing and FcgRIIIa-activating functions and expressed relatively more glycans which, when expressed on the Fc region, are associated with greater FcgRIIIa affinity. Although we separated the anti-gB IgG1 from IgG3 to confirm that the FcgRIIIa-activating function mapped to IgG1, we were unable to separate the neutralizing from the FcgRIIIa-activating anti-gB IgG1 for glycan studies. Moreover, we conducted the glycan studies with total IgG and thus could not distinguish Fc from Fab glycans. While most studies that associate N-linked glycans with FcR affinity focus only on Fc glycans, Fab domain glycans can also modulate placental transfer efficiency.41-44
SARS-CoV-2 coinfection was associated with a significant increase in the transfer of FcgRIIIa-activating Abs and a concomitant small decrease in the transfer of nAbs. This increase was significant comparing the term SARS-CoV-2+ dyads to all the other groups. This finding was unanticipated, as infections including SARS-CoV-2, HIV, and malaria, for example, are more often associated with decreased IgG transport.13,25,27 The proposed mechanisms for impaired transport in the setting of maternal coinfections include hypergammaglobulinemia (IgG > 15 g/L) with associated saturation of placental FcRn, alterations in glycans, and inflammatory changes in the placenta.13,25 While we did observe a modest increase in total IgG in the SARS-CoV-2+ mothers, this increase was not sufficient to saturate receptors or decrease the TR of HSV-specific IgG. There were also differences in glycans, with a decrease in GlcNAc and bisecting GlcNAc glycans in the anti-gB-enriched SARS-CoV-2+ compared to pre-COVID dyads. However, this difference is unlikely to contribute to the observed increase in transfer of ADCC-mediating IgG. Rather, we speculate that the increased colocalization of FcRn and FcgRIIIa observed in the immunohistochemistry studies of placental tissue obtained from SARS-CoV-2+ compared to SARS-CoV-2- term deliveries facilitated the transfer of ADCC-mediating IgG. This finding is consistent with another recent study that also documented increased FcRn/FcgRIIIa colocalization, which could favor the transfer of IgG with increased affinity for the FcgRIIIa.25 How SARS-CoV-2 infection modifies placental architecture and whether similar changes occur in response to other active infections require future study. Notably, there was a significant interaction between female neonatal sex and COVID-19 infection, resulting in increased transfer of FcgRIIIa-activating IgG in female SARS-CoV-2+ dyads. Relatively greater placental transfer of maternal IgG in female versus male newborns was previously reported in a SARS-CoV-2+ cohort and was linked in part to reduced maternal IgG levels.45 However, we did not observe differences in maternal HSV-specific IgG when comparing male versus female pregnancies, and there were no sex-based differences in placental FcRn/FcgRIIIa expression, although the sample size is small. Thus, the underlying mechanisms for this association require future investigation.
The observation that HSV-specific FcgRIIIa-activating Abs (in the absence of SARS-CoV-2 coinfection) are transmitted inefficiently (TR < 1.0) contrasts with findings showing selective transfer of FcgRIIIa-activating Abs targeting pertussis, influenza, and respiratory syncytial virus (RSV) antigens (TR > 1.0).15 The differences may be linked to differential glycan expression, as the anti-gB IgG expressed significantly less galactosylated glycan whereas the anti-RSV and anti-influenza IgG in the prior study expressed increased levels of galactosylated glycans, which are associated with FcRn and FcgRIIIa binding.14,15
In summary, the results of this study on placental transfer of HSV-specific IgG have several potential clinical implications. The relatively inefficient transfer of ADCC-mediating Abs (in the absence of SARS-CoV-2 coinfection), combined with the prior observation that only low levels of ADCC-mediating Abs are elicited in response to primary HSV infection,23 may contribute to the increased risk of neonatal disease associated with primary maternal infection. If ADCC is important for preventing neonatal herpes dissemination,22 vaccines that elicit high-titer ADCC responses and monoclonal Abs with ADCC-mediating activity, including those that target gB,33 should be prioritized. The monoclonal Abs should be engineered to express glycans that promote both FcRn binding and FcgRIIIa activation to optimize placental transfer. Future studies should also focus on understanding the mechanisms that promote FcRn and FcgRIIIa colocalization as potential strategies to promote antibody transfer.
Limitations of the study
There are several limitations to this study, including the absence of any HSV transmission and the recruitment of the pre-COVID and SARS-CoV-2+ cohorts during different time periods. However, it would have been difficult to identify SARS-CoV-2-uninfected pregnancies during the first wave of the pandemic. Another limitation is that the time between maternal blood collection and delivery differed for the pre-COVID term and the other groups. Almost all the pregnant women recruited earlier in the third trimester delivered at term. However, the multivariable models, as well as the lack of any significant association (Spearman correlation) between the timing of maternal blood collection and maternal antibody titers or TRs, strongly suggest that this difference did not impact the results. In addition, we evaluated Abs directed against only two viral antigens, gB and gD. While these accounted for the majority of nAbs, the anti-gD- and anti-gB-depleted fraction retained substantial ADCC-mediating activity. Identification of the targets and placental transfer efficiency of these potentially functionally important Abs requires future study. In addition, the glycan studies were conducted only with maternal samples and not with isolated Fc fragments. Glycans expressed by Fab and Fc may contribute to antibody function, conformation, affinity for the FcRn, and placental transfer. Future, more detailed analyses of isolated Fc glycans as well as glycans in the cord blood are needed to establish the link between glycans and placental transfer.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
Functional antibody assays
Neutralization of HSV-1(B3x1.1), a clinical isolate, was assessed by plaque reduction assay. Serial 2-fold dilutions of heat-inactivated plasma were incubated with HSV-1(B3x1.1) (75-100 plaque-forming units) for 1 hour before inoculating Vero cells. Plaques were counted after 48 hours, and the neutralization titer was defined as the plasma dilution that yielded a 50% reduction in viral plaque numbers relative to cells treated with the virus-only control (Figure S3).39 C1q-binding antibodies were quantified by ELISA. Plates were coated overnight with HSV-1(B3x1.1)-infected Vero cell lysate or uninfected lysate.39 Plates were blocked and then incubated with serial dilutions of heat-inactivated plasma samples, followed by a 2-hour incubation with 1 µg/ml of human C1q complement (Complement Technology); bound C1q was then quantified with 1 µg/ml of anti-human C1q-HRP-conjugated antibody (Complement Technology). The dilution that yielded a 50% reduction in optical densitometry units (odu), after subtracting the odu for uninfected cell lysate (1:20), was used to report the data. FcgRIIIa activation, a biomarker for ADCC, was assayed using the ADCC FcgRIIIa (human) Reporter Bioassay (cat #: G7015, Promega, Madison, WI) with a 1:5 dilution of plasma. Fold induction was calculated relative to luciferase activity in the absence of plasma after subtracting the background for uninfected cells. Similar bioassays were conducted to measure FcgRI- and FcgRIIa-activating HSV IgG (cat #: GA1341 and G9901, Promega, Madison, WI). The relative affinity of IgG for the FcRn was measured using the Lumit FcRn Binding Immunoassay (cat #: W1151, Promega). Human IgG1 labeled with LargeBiT (Tracer-LgBiT) was used as the tracer and was incubated with C-terminally biotinylated human FcRn bound to Streptavidin-SmBiT (hFcRn-Biotin-SA-SmBiT), which yields a maximal luminescence signal (SpectraMax M5, Molecular Devices, CA). Test samples were added (in duplicate), and the decrease in luminescent signal, reflecting competition with the Tracer-LgBiT for binding to the FcRn, was determined. Results are presented as 1/relative KD.
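For the 50%-endpoint readouts (neutralization and C1q binding), the endpoint dilution can be obtained by interpolating between the two dilutions that bracket 50% inhibition. A minimal sketch with hypothetical plaque counts follows; log-linear interpolation is an assumption, as the source does not state the interpolation method:

# Minimal sketch: interpolate the plasma dilution giving 50% plaque reduction.
# Plaque counts are hypothetical; 'virus_only' is the no-plasma control.
import numpy as np

virus_only = 90.0                                  # mean plaques, virus-only wells
dilutions = np.array([20, 40, 80, 160, 320, 640])  # reciprocal plasma dilutions
plaques = np.array([5, 12, 30, 55, 75, 85])        # mean plaques per dilution

inhibition = 1.0 - plaques / virus_only            # fraction of plaques prevented
# Interpolate the reciprocal dilution at 50% inhibition on a log2 dilution scale
# (np.interp needs an increasing x-array, hence the reversed views)
titer = 2 ** np.interp(0.5, inhibition[::-1], np.log2(dilutions)[::-1])
print(f"50% neutralization titer ~ 1:{titer:.0f}")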
Enrichment for anti-gD, anti-gB, IgG1, or IgG3 antibodies
Individual or pooled plasma samples (2 ml/sample) were applied to a Protein L column (cat #: 89963, Thermo Fisher, Waltham, MA) per the manufacturer's guidelines, incubated for 1 h at room temperature, and washed 3 times with 2 ml of phosphate-buffered saline (PBS, pH 7.0), and Ig was eluted from the column with 1 ml of 0.1 M glycine (pH 2-3) (cat #: 21004, Thermo Fisher). The flow-through and eluent were neutralized with 1 M Tris-HCl (pH 8) (cat #: 15568025, Thermo Fisher). The buffer was exchanged with PBS, and the samples were concentrated using a 30,000 molecular-weight-cutoff protein concentrator (cat #: 88522, Thermo Fisher) and resuspended in 1 ml total volume. The Ig-enriched samples were then incubated with a lectin-gD or lectin-gB agarose column for 1 hour; bound Ig was eluted with 0.1 M glycine, neutralized to pH 7 with 1 M Tris-HCl, and buffer exchanged and concentrated as above. Alternatively, the Protein L eluent was applied to a Protein A column (cat #: 20356, Thermo Fisher), which binds human IgG1, IgG2, and IgG4 but not IgG3. A subset of samples was sequentially incubated with a gD or gB lectin agarose column followed by a Protein A column to enrich for IgG1- or IgG3-specific anti-gD and anti-gB. After concentration, the total protein was quantified by NanoDrop, and all samples were diluted to a final concentration of 0.5 mg/ml for functional assays.
Western blots
Western blots were prepared with 5 µg of recombinant gD-1 or gB-1 protein produced in HEK293 cells29 or 10 µg of HSV-1-infected or uninfected Vero cell lysate per lane. Proteins were separated by SDS-PAGE, transferred to nitrocellulose, and immunoblotted with 10 µg/ml of the enriched IgG samples in blocking buffer overnight, followed by anti-human IgG-HRP (1:500) (cat #: 1721033, Bio-Rad, Hercules, CA). Blots were scanned using a ChemiDoc imaging system equipped with GelDOC2000 software.
Glycan analysis
The IgG enriched for anti-gB and anti-gD antibodies underwent deglycosylation by incubating overnight with PNGaseF (cat #: NS99010, N-zyme Scientific, Doylestown, PA) in PBS at 37°C. The deglycosylated IgG samples were then precipitated using cold ethanol, and the supernatant containing the released native N-glycans was collected and dried by vacuum centrifugation. To desalt the samples, they were resuspended in 0.1% trifluoroacetic acid and loaded onto graphite spin columns containing porous graphitized carbon (PGC) (cat #: 60106407, Thermo Fisher). The native N-glycans were washed with 0.1% trifluoroacetic acid and eluted from the graphite spin column with 25% acetonitrile/0.1% trifluoroacetic acid. Subsequently, the eluted N-glycans were dried by vacuum centrifugation. Permethylation of the native N-glycans was performed by combining the samples with iodomethane (cat #: 289566, Sigma-Aldrich, St. Louis, MO) in anhydrous dimethyl sulfoxide (cat #: 276855, Sigma-Aldrich). The mixture was then loaded onto spin columns containing sodium hydroxide beads (cat #: 367176, Sigma-Aldrich). Purification of the resulting permethylated N-glycans was achieved through liquid-liquid extraction using a 1:1 chloroform/methanol mixture. The chloroform layer, which contained the permethylated N-glycans, was collected and dried under nitrogen gas. The dried permethylated N-glycans were resuspended in 50% methanol. A 1 µl sample was mixed with super-DHB matrix (cat #: 63542, Sigma-Aldrich) at a 1:1 ratio and spotted onto an MTP 384 well-polished steel target plate (Bruker Daltonics, Billerica, MA). Analysis of the samples was performed by MALDI-TOF with the Ultraflextreme mass spectrometer (Bruker Daltonics). The acquisition software was FlexControl 3.4, and the raw data were processed using FlexAnalysis 4.0 software. The resulting mass spectra were converted to peak lists. To determine the percent composition of each N-glycan, the raw abundance of each N-glycan was divided by the sum of the abundances of all N-glycans.
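The normalization in the final step reduces to dividing each glycan's raw abundance by the summed abundance of all N-glycans; a minimal sketch with hypothetical peak intensities:

# Percent composition of each N-glycan = raw abundance / sum of all abundances.
# Peak intensities below are hypothetical placeholders.
peaks = {"sialylated": 1200.0, "galactosylated": 3400.0,
         "fucosylated": 5100.0, "bisecting_GlcNAc": 800.0}
total = sum(peaks.values())
percent = {name: 100.0 * value / total for name, value in peaks.items()}
print(percent)  # e.g. fucosylated ~ 48.6% of total glycan abundance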
Figure 1. Maternal and cord blood total and HSV-specific IgG in pre-COVID and SARS-CoV-2+ dyads. Total IgG (A), IgG1 (B), and IgG3 (C) and HSV-specific IgG (D), IgG1 (E), and IgG3 (F) were measured in maternal and cord blood for term and preterm pre-COVID and SARS-CoV-2+ dyads. For the HSV-specific Abs, the results are shown as optical densitometry units (odu) at 1:12,500 plasma dilution for IgG and at 1:2,500 plasma dilution for IgG1 and IgG3. Each sample was tested in duplicate, and each dyad is color-coded; open symbols indicate dyads where the mother was only HSV-2 seropositive. Maternal and cord blood antibody levels were compared using the Wilcoxon matched-pairs signed rank test, and term vs. preterm maternal or cord levels were compared using the Mann-Whitney U test (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001). The bar shows mean ± standard deviation for the group.
Figure 2. Differences in transfer of neutralizing, C1q-binding, and FcgRIIIa-activating IgG in term versus preterm pre-COVID versus SARS-CoV-2+ dyads. (A) HSV-specific neutralization titers were determined by plaque reduction assay for term and preterm pre-COVID and SARS-CoV-2+ dyads, and results are presented as the plasma dilution that inhibited 50% of viral infection relative to cells infected in the absence of plasma. (B) HSV-specific C1q-binding Abs were measured by ELISA using HSV-1-infected and uninfected cellular lysate. Results are presented as optical densitometry units (odu) at 1:20 plasma dilution after subtracting the odu response to uninfected cellular lysate. (C) FcgRIIIa activation was measured using the human FcgRIIIa ADCC Reporter Bioassay with HSV-1-infected cells and a 1:5 plasma dilution. Results are presented as fold induction relative to the no-plasma control. Each sample was tested in duplicate, and each dyad is color-coded; open symbols indicate dyads where the mother was only HSV-2 seropositive. Maternal and cord blood antibody levels were compared using the Wilcoxon matched-pairs signed rank test, and term vs. preterm maternal or cord levels were compared using the Mann-Whitney U test (*p < 0.05, ***p < 0.001, ****p < 0.0001). The bar shows mean ± standard deviation for the group.
Figure 3. Transfer of anti-glycoprotein D- and anti-glycoprotein B-specific Abs. The relative concentrations of anti-gD- (A and B) and anti-gB- (C and D) specific IgG, IgG1, and IgG3 were quantified in term pre-COVID (A and C) and SARS-CoV-2+ (B and D) dyads by ELISA with recombinant proteins as antigens in paired samples from term dyads. Results are presented as odu at 1:12,500 plasma dilution for IgG and at 1:2,500 plasma dilution for IgG1 and IgG3. Each symbol is the mean of duplicates, and each dyad is color-coded; open symbols indicate dyads from HSV-2-seropositive-only mothers (Wilcoxon matched-pairs signed rank test, ***p < 0.001, ****p < 0.0001). The bar shows mean ± standard deviation for the group.
Figure 4. Functionality of antigen- and subclass-enriched Abs in pre-COVID dyads. Maternal (n = 4, circles) and cord (n = 3, triangles) term pre-COVID plasma was enriched for anti-gD and anti-gB or for IgG1 and IgG3 subclasses and then assayed for neutralization activity (A and C) or FcgRIIIa activation (B and D) using 0.5 mg/mL of each enriched sample. Each sample was tested in duplicate. Neutralization is presented as the percent reduction in viral plaque formation relative to control wells. (E) Additional pooled samples (n = 3 pools from 2 participants each) were sequentially passaged over the gB lectin followed by Protein A columns and assayed using the ADCC reporter assay at 0.5 mg/mL. (F) The relative FcRn-binding affinity of enriched samples was measured using the Lumit FcRn binding assay, and results are shown as relative apparent affinity (1/KD [M]). The bars indicate mean ± standard deviation. Differences between anti-gD, anti-gB, and gD-gB- neutralization and FcgRIIIa activation were compared using ANOVA with Tukey's multiple comparison (A and B), and differences between IgG1 and IgG3 or anti-gD and anti-gB in C-F were compared by Mann-Whitney U test (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001).
Figure 5. Differential expression of glycans in anti-glycoprotein D- and anti-glycoprotein B-enriched maternal IgG. The relative glycan compositions of anti-glycoprotein D- and anti-glycoprotein B-enriched plasma were quantified by mass spectrometry in samples from pre-COVID (A) and SARS-CoV-2+ (B) mothers who delivered at term (n = 5 each). The relative abundance of glycans expressing sialic acid, galactose, fucose, GlcNAc, and bisecting GlcNAc was calculated. Each symbol is the mean of duplicates, and each sample is color-coded. The bars indicate mean ± standard deviation for the group, and the abundance of glycans in the anti-gD- versus anti-gB-enriched samples was compared by paired t test (**p < 0.01).
Figure 6. Increased colocalization of FcRn and FcgRIIIa on placentas from SARS-CoV-2-positive mothers compared to SARS-CoV-2-negative mothers. Immunohistochemistry was performed on placental tissues obtained from SARS-CoV-2-positive and -negative mothers (n = 8 each for term and n = 5 each for preterm) and stained for placental alkaline phosphatase (PLAP), FcRn, and FcgRIIIa. (A) Representative slides from one term and one preterm SARS-CoV-2-positive and -negative placenta each are shown. The intensities of FcRn (B) and FcgRIIIa (C) staining and their colocalization (D) were quantified using ImageJ (FcRn intensities were compared using Kruskal-Wallis with Dunn's multiple comparison test, *p < 0.05, **p < 0.01).
Table 3. Variables associated with transfer ratios of HSV-specific IgG, neutralizing, or FcgRIIIa-activating Abs using linear regression models. (a) Measure of interaction between neonatal sex and COVID-19 status.
|
In situ strategy for biomedical target localization via nanogold nucleation and secondary growth
Immunocytochemistry visualizes the exact spatial location of target molecules. The most common strategy for ultrastructural immunocytochemistry is the conjugation of nanogold particles to antibodies as probes. However, conventional nanogold labelling requires time-consuming nanogold probe preparation and ultrathin sectioning of cell/tissue samples. Here, we introduce an in situ strategy involving nanogold nucleation in immunoenzymatic products on universal paraffin/cryostat sections and provide unique insight into nanogold development under hot-humid air conditions. Nanogold particles were specifically localized on kidney podocytes to target synaptopodin. Transmission electron microscopy revealed secondary growth and self-assembly that could be experimentally controlled by bovine serum albumin stabilization and phosphate-buffered saline acceleration. Valuable retrospective nanogold labelling for gastric H+/K+-ATPase was achieved on vintage immunoenzymatic deposits after a long lapse of 15 years (i.e., 15-year-old deposits). The present in situ nanogold labelling is anticipated to fill the gap between light and electron microscopy to correlate cell/tissue structure and function. Sawaguchi et al. introduce a novel immunocytochemistry method that utilizes in situ gold nanoparticle labelling in immunoenzymatic products on universal paraffin/cryostat sections. The method is helpful not only in overcoming limitations of conventional nanogold labelling methods but also in bridging the gap between light and electron microscopy.
One of the major goals of morphological biology is to identify correlations between cell/tissue structures and functions. Visualization of the exact location of targeting molecules presents unique spatial information that no other biomedical analysis can provide. This process is called immunocytochemistry, a combination of immunochemistry and cell morphology, which uses specific antibodies against the target molecules of interest. In the last decade, many cell biologists have used immunocytochemistry, which has the benefits of improved fluorescence light microscopy, such as two-photon excitation microscopy; however, owing to the limited spatial resolution of light microscopy, electron microscopy is often needed to address important questions 1 . To close the gap between light and electron microscopy, researchers have recently used correlative light and electron microscopy (CLEM) to determine the orientation of complex cell/tissue architectures by light microscopy and identify ultrastructural correlations through electron microscopy 2 .
Nanogold particles with diameters of 1-100 nm are also known as colloidal gold when dispersed in water. In 1857, Faraday first provided a scientific description of the optical properties of nanometer-scale gold synthesized by reducing chloroauric acid solution 3 . Since then, various experimental methods have been developed for the synthesis of nanogold particles based on Turkevich's pioneering study 4 in 1951, which identified the nucleation and growth process in the synthesis of colloidal gold. Recently, nanogold particles have attracted increasing interest because of their potential in nanotechnology, based on their unique size and three-dimensional structure 5,6 .
In 1959, Singer 7 first reported an antibody tagged with the electron-dense protein ferritin for immunocytochemistry because the electron-scattering ability of fluorescent probes is insufficient for electron microscopy. Since the report by Faulk and Taylor in 1971 8 , the general strategy for electron microscopic localization is the conjugation of nanogold "probes" to antibodies 9,10 . The nanogold probes are easily distinguished on the labeled cell/tissue structures and can be counted for quantitative analysis of labeling intensity. However, conventional methods require the time-consuming preparation of nanogold probes 11,12 and their conjugation to antibodies 9,10 . A variety of nanogold-conjugated antibodies are commercially available to decrease preparation time, but ultrathin sectioning of the cell/tissue samples still requires time and skill for post-embedding nanogold probe labeling.
In a previous study 13 , we introduced an informative three-dimensional survey of complex cell/tissue architectures using low-vacuum scanning electron microscopy (SEM) accompanied by CLEM imaging. This study aimed to develop its application in immunocytochemical localization, which remains challenging owing to difficulties in achieving labeling intensity with large particles. There is a trade-off relation between large particles and labeling intensity in post-embedding nanogold probe labeling 14 , but large particles (>30 nm) are indispensable for visualization under low-vacuum SEM. In this context, we introduce a new strategy involving in situ nanogold nucleation in immunoenzymatic 3,3'-diaminobenzidine tetrahydrochloride (DAB) products on universal paraffin/cryostat sections and provide novel insight into nanogold development in hot-humid air conditions for ultrastructural localization.
Results and discussion
Practical in situ nanogold labeling. The proposed procedure employs the catalytic properties of an enzyme (horseradish peroxidase; HRP) that yields a red/brown-colored reaction product from DAB 15 . Compared with immunofluorescence, the enzyme system 16 is advantageous in that the resultant precipitates remain permanent and visible under a standard bright-field light microscope. Fig. 1 illustrates the flow diagram from the light microscopic survey to the correlative ultrastructural localization of the target molecules (in this experiment, synaptopodin 17 , expressed in the processes of podocytes in the kidney glomerulus). After the light microscopic survey and preselection, the sections were treated with 0.01% tetrachloroauric acid (HAuCl4) for 10 min, leading to nanogold particle "nucleation" (Fig. 1a). Then, the sections were exposed to hot-humid air in a humid chamber at 37°C for 9-15 h to accomplish the "secondary growth" of the nucleated nanogold particles around the target molecules.
For electron microscopic observation, one section was placed on the wide specimen stage of the low-vacuum SEM (Fig. 1b), which was suitable for the whole-section survey at the centimeter scale (larger than the limited millimeter scale for conventional electron microscopy). Low-vacuum SEM allows backscattered electron imaging of non-conductive biological samples because the negative charge accumulations on the non-conductive materials can be neutralized by the positive ions in residual gas molecules 18 . As shown in the representative electron micrographs, nanogold particles were specifically localized on the process of podocytes in the kidney glomerulus, consistent with the preselected light microscopic findings. The underlying fine structures of the podocytes and their processes, which were not observable with conventional light microscopy, were clearly observed at higher magnification.
Fortuitous nanogold development in a summer. The present in situ nanogold labeling was fortuitously developed on a laboratory bench, independent of previous studies. At the beginning of this study, we initially attempted to "enhance" the contrast of DAB deposition sites with 0.1% HAuCl 4 solution to convert the color signal into an electron-dense compound for electron microscopy 19,20 . It has been reported that exposure to gold chloride intensifies the electron density of DAB deposition sites 19 . In our early trials, however, the enhanced signals were indistinguishable from the enhanced electron emission, known as the edge effect, generated along the sectioned cell/tissue edges (Fig. 2a). The disappointing section was left on our laboratory bench over a hot-humid summer weekend. As a result, the section color unexpectedly changed from yellow to purple, indicating the development of nanogold particles 3 . Further experiments demonstrated a constant color change (Fig. 2b) and indicated nanogold labeling (Fig. 2c) accompanied by undesirable nonspecific particles owing to an unrefined preliminary protocol.
Historically, protocols for newly synthesized nanogold particles have been developed by extensive trial-and-error strategies rather than directed design [4][5][6] . In this study, the nanogold labeling efficiency and specificity were optimized by coordinating the HAuCl4 concentration, treatment time, and subsequent hot-humid incubation. First, the optimal combination was determined in the range of 0.002-0.02% HAuCl4 and 3-20 min, with hot-humid development fixed at 37°C for 12 h (Fig. 2d-g). Next, the optimal hot-humid development time was identified in the range of 9-15 h for the standardized treatment with 0.01% HAuCl4 for 10 min (Fig. 3). Finally, the optimal temperature was determined to be 37°C for hot-humid development by the elimination of unsatisfactory development at 18°C and morphological destruction at 60°C (Fig. 3). Indispensably, the cell/tissue structures must remain visible under ideal nanogold labeling to enable exact localization.
In situ nanogold nucleation on the target site. Transmission electron microscopy (TEM) was applied to precisely localize the nanogold particles and define their morphological features at high resolution. The nanogold particles developed in situ were specifically localized in the processes of podocytes (Fig. 4a), consistent with the original report by conventional immunogold labeling 17 . The underlying process of nanogold particle formation can be explained by LaMer's classic theory 21 and its modifications, which describe the concept of burst nucleation as the first step in a phase transition. Consequently, the nucleated particles act as "seeds" for secondary growth induced by a combination of monomer addition, aggregation, and coalescence.
Fig. 2 (legend, continued): Optical color change in the cell/tissue section after hot-humid incubation at 37°C for 12 h. c Low-vacuum SEM images of the color-changed section. Note the intense nanogold labeling on the DAB deposition sites (in Box I) and the nonspecific particles over the uriniferous tube (in Box II). d-g All sections were preset to be incubated in hot-humid conditions at 37°C for 12 h. d, e Nanogold labeling was insufficient after treatment with 0.002% HAuCl4 for 20 min (d) and 0.01% HAuCl4 for 3 min compared with the standard treatment with 0.01% HAuCl4 for 10 min (e). Intense labeling was obtained with 0.01% HAuCl4 for 20 min, but it obscured the underlying fine structure of the podocyte process. f Appropriate labeling was obtained with 0.02% HAuCl4 for 3 min, but a short elongation to 10 min caused nonspecific particles on the uriniferous tube. g DAB alone as a negative control.
In this study, the primary step of nanogold particle "nucleation" was confirmed among the immunoenzymatic DAB products for the target molecules after treatment with 0.01% HAuCl4 alone for 10 min (Fig. 4b). The speculated reactivity between HAuCl4 and DAB was verified by dot blotting on filter paper 20 (Fig. 4c) and by dripping 0.05% HAuCl4 into 0.02% DAB aqueous solution, which produced brown grains in a microtube. In immunohistochemical staining, DAB is known to be oxidized by hydrogen peroxide (H2O2) in the presence of HRP, forming a brown deposit that marks the location of HRP for light microscopy. Intensification of DAB deposition sites has been reported with various heavy-metal ions as well as gold chloride 19,20,[22][23][24][25] . The crucial binding ability between gold chloride and the immunochemical reaction product of DAB has been indicated by energy-dispersive X-ray (EDX) analysis 25 . Interestingly, oxidative polymerization of DAB on a gold electrode has been reported in an electrochemical study on the preparation of polymer-film-coated electrodes 26 . However, further analysis is needed because detailed knowledge of DAB polymerization and the chemical characteristics of the resultant deposits in immunohistochemistry is lacking.
Secondary growth in hot-humid air conditions. An advanced series of TEM images demonstrated the secondary growth of nanogold particles via seed-mediated "monomer addition", "aggregation", and "coalescence" in accordance with LaMer's theory 4,5,21,27 (Fig. 4d). The secondary growth resulted in a significant increase in the average diameter (Fig. 4e) accompanied by a broadening of the size distribution (Fig. 4f). Concerning the gold source for monomer addition, elemental analysis indicated that the sections were equally stained with HAuCl4 regardless of the DAB deposition sites (Fig. 5a). It could be assumed that gold monomers were supplied from this overall staining to the nanogold particles by liquid diffusion, because the microscope slides were moistened after the hot-humid incubation. In fact, secondary growth failed during incubation in dry air, which might lack the dissolved gold monomers, and during waterdrop incubation, which might dilute the dissolved gold monomers (Fig. 5b).
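The size quantification reported in Figures 4e and 4f (n = 500 particles/group, mean ± SD, Student's t test) can be sketched as follows; the simulated diameters are placeholders, not measured values:

# Sketch of the particle-size comparison: mean diameter and size distribution
# before vs. after hot-humid incubation, compared by Student's t test.
# Diameters are simulated placeholders, not measured data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
d_nucleated = rng.normal(5.0, 1.0, 500)    # nm, assumed scale after HAuCl4 only
d_grown = rng.normal(40.0, 12.0, 500)      # nm, assumed scale after 12 h growth

t, p = ttest_ind(d_nucleated, d_grown)
print(f"mean {d_nucleated.mean():.1f} vs {d_grown.mean():.1f} nm "
      f"(SD {d_nucleated.std():.1f} vs {d_grown.std():.1f}), p = {p:.2e}")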
Further TEM revealed the changes in nanogold configurations from spherical to multibranched polynuclear assemblies (alternatively referred to as gold nanostars 28 and nanoflowers 29 ) and the associated merged lines 30 (Fig. 6a). High-resolution scanning transmission electron microscopy (HR-STEM) demonstrated representative bright-field and annular dark-field images of intense nanogold labeling on the processes of podocytes (Fig. 6b). Higher magnification clearly shows the details of multiple crystalline structures in the attachment region (Fig. 6c) and lattice fringes with an interplanar spacing of 0.24 nm corresponding to Au (111) planes, accompanied by fast Fourier transform (FFT) patterns (Fig. 6d). Recently, studies of aggregating motion have been advanced by in situ liquid-cell TEM nanotechnologies that enable the "real-time" observation of aggregating nanogold particle motion in a cluster [31][32][33][34] and the manipulation of individual particles by an electron beam 35 . It will be of great interest to apply such forefront technologies to elucidate the precise mechanism of secondary growth in hot-humid air conditions, which achieves large-sized, intense labeling in a short time.
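The 0.24 nm fringe spacing is consistent with face-centred cubic gold: using the standard interplanar-spacing formula and the textbook Au lattice constant (a ≈ 0.408 nm),

\[
d_{hkl} = \frac{a}{\sqrt{h^{2}+k^{2}+l^{2}}}, \qquad
d_{111} = \frac{0.408\ \text{nm}}{\sqrt{3}} \approx 0.236\ \text{nm} \approx 0.24\ \text{nm}.
\]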
Influence of pH on nanogold particle size. Nanogold particles are unstable and tend to aggregate at short interparticle distances, a phenomenon that has attracted increasing attention in the field of colloid science. Notably, secondary growth was reproduced on a dot blotting spot by hot-humid incubation at 37°C for 12 h (Fig. 6e), providing a new filter paper assay for this study. In this protocol, the cell/tissue sections were treated with H2O2 as usual in enzyme-based immunocytochemistry. It could be speculated that residual H2O2 readily reduced HAuCl4 and accelerated secondary growth [36][37][38][39] . However, this speculation was contradicted by the nucleation and secondary growth observed in H2O2-free blotting spots.
Fig. 3 Coordination of the duration and temperature of hot-humid incubation. All sections were preset to be treated with 0.01% HAuCl4 for 10 min. Note the immature growth of nanogold particles after hot-humid incubation at 37°C for 6 h. Satisfactory labeling was observed after 12 h, but elongation to 24 h caused nonspecific particles on the uriniferous tube. No labeling developed in the humid chamber at 18°C for 24 h. Incubation at 60°C accelerated nanogold development, but the underlying structure of the podocyte process was significantly damaged.
It is well known that the chemical properties of HAuCl4 solution depend on its pH 26,39-41 . Primary screening by the filter paper model assay showed a decline in the purplish tone in proportion to increasing pH value (Fig. 7a). Correspondingly, low-vacuum SEM (Fig. 7b) demonstrated the reduction in labeling intensity on the sections. Further TEM observations (Fig. 7c) and quantitative analysis (Fig. 7d, e) clarified the reduction in secondary growth, which could be explained by the stabilization of nanogold particles at higher pH, consistent with previous reports [40][41][42] .
Experimental control of aggregation. Nanogold particles can be stabilized when macromolecules are adsorbed on the surface, creating a mechanical barrier against aggregation. Practically, bovine serum albumin (BSA) is added to solutions of antibody-conjugated nanogold particles as a "capping" agent in conventional methods 10,12 . A filter paper assay showed a decline in the purplish tone after pretreatment with 2% BSA for 10 min prior to hot-humid incubation, indicating suppression of the secondary growth (Fig. 8a). Correspondingly, stabilization was proven on cell/tissue sections (Fig. 8b, c, e, f). HR-STEM revealed multiple crystalline structures within the spherical particles (Fig. 8c), implicating polynuclear coalescence under suppression.
Having achieved stabilization, we next examined destabilization by electrolytes, which induce aggregation by reducing electrostatic repulsion 43 . Indeed, pretreatment with phosphate-buffered saline (PBS) accelerated the growth and aggregation of nanogold particles, leading to massive polynuclear configurations (Fig. 8d, e, g). This series of experiments and findings will play an important role in verifying theoretical models of the nanogold particle development process.
Potential in applied immunocytochemistry. Fig. 9a shows a diagram of the nanogold particle labeling originating from the in situ nucleation of immunoenzymatic DAB products, followed by secondary growth and aggregation under hot-humid air conditions. This in situ nanogold labeling employed the catalytic properties of an enzyme that yields semipermanent precipitates on the target molecules. By taking advantage of this process, we achieved retrospective nanogold labeling of gastric H+/K+-ATPase on vintage DAB deposits after a long lapse of 15 years 44 (i.e., 15-year-old DAB deposits, Fig. 9b; cf. Fig. 3C in ref. 44 ). The paraffin and cryostat blocks of cell/tissue samples are semipermanent and show potential for retrospective investigations through the reappraisal of archived samples. A pathological application showed intense labeling of platelet glycoprotein IIb/IIIa preserved in an archived specimen of rabbit arterial thrombosis 45 (Fig. 9c). Moreover, this method can be combined with in situ hybridization, which enables the ultrastructural localization of a defined sequence of DNA or RNA on cell/tissue sections 46 (Fig. 10).
Fig. 4 TEM analyses of nanogold nucleation and secondary growth. a Ultrathin TEM images of the in situ nanogold labeling of synaptopodin treated with 0.01% HAuCl4 for 10 min followed by hot-humid incubation at 37°C for 12 h. Note the specific labeling on the processes of podocytes. b Nucleated nanogold particles after treatment with 0.01% HAuCl4 for 10 min. c Visualization of the chemical reaction between HAuCl4 and DAB on dot blots and in vitro. ABC avidin-biotin complex, BSA bovine serum albumin, PBS phosphate-buffered saline, DAB diaminobenzidine, DW distilled water. d Time-lapse growth and changes from spherical to multibranched shapes. e, f Quantitative comparison of the average diameter (e) and the size distribution (f) of the nanogold particles. n = 500 particles/group. All data are expressed as mean ± standard deviation (SD). Statistical significance was assessed by Student's t test.
Fig. 5 Elemental analysis of Au distribution in the HAuCl4-treated sections and conditioning for secondary growth. a A clear Au peak emerged after treatment with 0.01% HAuCl4 for 10 min compared with the negative control of DAB alone. The Au peak levels were irrelevant to DAB deposition and were equally observed in the glomerulus (G2) and uriniferous tube (U2). Note the unchanged Au peak pattern at G3 and U3 after hot-humid incubation at 37°C for 12 h. G glomerulus, U uriniferous tube. b Conditioning for secondary growth. Distinct nanogold particles developed in high-humidity air, but not in dried air or waterdrops, when incubated at 37°C for 12 h.
Recent advances in biomedical research, such as the production of regenerative organs from induced pluripotent stem cells and the morphological changes induced by CRISPR/Cas9-mediated genome editing, require the ultrastructural localization of particular molecules to correlate cell/tissue structure and function. The present in situ nanogold labeling is user-friendly with basic paraffin/cryostat sections and is highly anticipated to create a new approach in applied biomedical immunocytochemistry, bridging the gap between light and electron microscopy.
Methods
Preparation of rat kidney cryostat sections. Male Wistar rats (Kyudo, Kumamoto, Japan) at 10 weeks of age were deeply anesthetized and then perfused with 4% paraformaldehyde in 0.1 M phosphate buffer (pH 7.4) from the left ventricle of the heart. The kidney was excised and further fixed by immersion in the above fixative for 2 h at room temperature (RT). After the organs were washed in running tap water for 2 h, they were cryoprotected in a graded series of 5%, 10%, and 20% sucrose in PBS for 2 h and in a 1:1 (v/v) mixture of 20% sucrose:OCT compound (Sakura Finetek, Tokyo, Japan) at 4°C overnight. The samples were then embedded in OCT compound and cut into sections (10 µm in thickness) with a cryostat (CM3050 S, Leica Microsystems, Wetzlar, Germany).
Immunocytochemistry of synaptopodin. The cryostat sections of rat kidney were rinsed in PBS and then boiled in 10 mM citrate buffer, pH 6.0, for 10 min to retrieve the antigenic sites. After cooling to RT, the sections were incubated in methanol containing 0.3% H2O2 for 30 min to block endogenous peroxidase activity. After several rinses in PBS, the sections were incubated in 5% normal horse serum (NHS)/1% BSA in PBS for 10 min to block nonspecific binding and then incubated with a mouse monoclonal antibody against synaptopodin (clone G1D4; Progen Biotechnik, Heidelberg, Germany; diluted 1:50 with 5% NHS/1% BSA in PBS) at 4°C overnight. As controls, the procedure was performed without the primary antibody.
Fig. 6 The secondary growth and aggregation of nanogold particles. a Characteristic nanogold assemblies after hot-humid incubation at 37°C for 12 h. a-I A representative branched feature of the nanogold particles. Note the attached portion (arrow) and merged line (arrowhead) within the assembly. a-II Heterogeneous light and shade patterns are shown in each nanogold particle. Arrowheads indicate the stripe pattern at regular intervals. a-III Multibranched assembly. a-IV A multibranched assembly surrounding a hole. b-d HR-STEM images. Representative bright-field and annular dark-field images of intense nanogold labeling on the processes of podocytes (b). Higher magnification clearly shows the details of the multiple crystalline structures in the attachment region (c) and lattice fringes (d) with interplanar spacing and FFT patterns. e A dot blotting spot exhibited a color change similar to that of cell/tissue sections after hot-humid air incubation at 37°C for 12 h. The low-vacuum SEM micrographs show secondary growth on the filter paper with a representative "star-like" nanogold particle (inset).
Light microscopic survey. For the light microscopic survey of the DAB deposition sites, the sections were rinsed in a graded series of 80%, 90%, and 100% ethanol for dehydration. After clearing in xylene, the sections were mounted in Malinol (Muto Pure Chemicals, Tokyo, Japan) and covered with a NEO Micro cover glass (size 24 × 50 mm, thickness no. 1 = 0.13-0.17 mm; Matsunami Glass, Osaka, Japan). The DAB deposition sites were surveyed under a light microscope (BX51, Olympus, Tokyo, Japan) equipped with a digital camera (DP73, Olympus).
Nanogold particle development. Following the light microscopic survey, the microscope slides were incubated in xylene for 18-48 h to remove the coverslips by dissolving the hardened mounting medium. Then, the sections were rehydrated through a series of 100%, 90%, and 70% ethanol (5 min each). After several rinses in distilled water (DW; at least three times, 5 min each), the sections were treated with a 0.01% aqueous solution of hydrogen tetrachloroaurate(III) tetrahydrate (HAuCl4·4H2O, Nakarai Tesque, Kyoto, Japan) for 10 min at RT. After being rinsed several times in DW and dried, the sections were incubated in a humid chamber (equipped with DW on the floor; humidity 94-96%) at 37°C for 12 h and then dried with a blower. As controls, other sections were incubated in dried air (humidity 25-30%) or in waterdrops.
Low-vacuum SEM. The microscope slides were placed on the wide stage of the specimen holder using adhesive conductive tape and then examined in a low-vacuum SEM (TM4000Plus or TM3030Plus, Hitachi High-Tech, Tokyo, Japan) operating at 15 kV. Elemental analysis was performed using an EDX detector fitted to the low-vacuum SEM.
Fig. 7 Influence of pH on nanogold size. a Primary screening by the filter paper model assay. Note the decline in the purplish tone in proportion to the increasing pH value of the HAuCl4 solutions. b Low-vacuum SEM. Note the satisfactory labeling at pH 2.8 and the indistinguishable intensities at pH 5.0 and pH 7.0 after hot-humid incubation at 37°C for 12 h. c Ultrathin TEM images of the nanogold particles before and after hot-humid incubation at 37°C for 12 h. d, e Quantitative comparison of the average diameter (d) and size distribution (e) of the nanogold particles. n = 500 particles/group. All data are expressed as mean ± standard deviation (SD). Statistical significance was assessed by Student's t test.
Characterization of the developed nanogold particles. The nanogold particles were morphologically characterized using transmission electron microscopy. For this characterization, the sections were dehydrated in a graded series of 80%, 90%, and 100% ethanol and embedded in epoxy resin on microscope slides using a capsule-supporting ring 47 . Subsequently, the polymerized resin block was removed from the microscope slide by rapid cooling in liquid nitrogen vapor. Ultrathin sections (60-80 nm in thickness) were cut and observed without heavy-metal staining using TEM (HT7700, Hitachi High-Tech, Tokyo, Japan) operating at 80 kV or HR-STEM (HD-2700A, Hitachi High-Tech, Tokyo, Japan) operating at 200 kV.
Fig. 8 Inhibition and acceleration of secondary growth after pretreatment with BSA and PBS. a Filter paper model assay for nanogold stabilization. Note the decline in the color change induced by pretreatment with 2% BSA for 10 min prior to hot-humid incubation. b Ultrathin TEM images of the nanogold particles after pretreatment with 2% BSA for 10 min prior to hot-humid incubation. Note that the secondary growth remained in a spherical form rather than a polynuclear configuration. c HR-STEM images of lattice fringes with interplanar spacing and FFT patterns. Note the multiple crystalline structures in the spherical particle after hot-humid incubation. d Pretreatment with PBS for 10 min accelerated growth and aggregation into polynuclear configurations. e Quantitative comparison of the average diameter. Because of the emergence of nonspecific particles, the average diameter was not applicable for 12 h of incubation after PBS pretreatment. NT not treated, N/A not applicable. f Size distribution of the nanogold particles after pretreatment with 2% BSA for 10 min prior to hot-humid incubation at 37°C for 0, 6, and 12 h. g Size distribution after pretreatment with PBS for 10 min prior to hot-humid incubation at 37°C for 0 and 6 h. Because of the emergence of nonspecific particles, the size distribution was not applicable for 12 h of incubation after PBS pretreatment. n = 500 particles/group. All data are expressed as mean ± standard deviation (SD). Statistical significance was assessed by Student's t test.
The resultant brown dots of 0.05% DAB/DW and 0.05% HAuCl4/DW were clipped out and mounted on microscope slides using adhesive conductive tape. The development of nanogold particles was examined in a humid chamber at 37°C for 12 h, as described above for the cell/tissue sections. The influence of pH on nanogold particle development was examined by adjusting the initial pH value of 0.01% HAuCl4/DW (pH 2.8) to pH 5.0, pH 7.0, and pH 9.0 with 0.2 M K2CO3. The BSA stabilization test was performed with 1% or 2% BSA/DW by blotting over the brown dots of 0.05% DAB/DW and 0.05% HAuCl4/DW prior to incubation in the humid chamber.
Application to archived immunocytochemical specimens. Archived paraffin sections of isolated gastric mucosa were obtained from the immunocytochemical study by Sawaguchi et al. 44 (cf. Fig. 3C in ref. 44 ). In brief, a piece of isolated gastric mucosa was cryofixed in a liquid isopentane/propane mixture cooled by liquid nitrogen. Freeze substitution was carried out in 0.1% glutaraldehyde in acetone at −80°C for 16 h, and the specimen was then gradually warmed to RT. After the specimens were washed with ethanol, they were embedded in paraffin. Paraffin sections (5 µm in thickness) were deparaffinized, rehydrated, and incubated in methanol containing 0.3% H2O2 for 30 min. After several rinses in PBS, the sections were incubated in 5% NHS/1% BSA in PBS for 10 min to block nonspecific binding and then incubated with a mouse monoclonal antibody, 2B6, against H+/K+-ATPase (MBL, Nagoya, Japan; 2 µg/ml diluted with 5% NHS/1% BSA in PBS) at 4°C overnight. After PBS washes, the sections were incubated with biotinylated horse anti-mouse IgG at RT for 40 min, followed by washing with PBS. Then, the sections were incubated in an ABC kit for 30 min. The samples were washed again with PBS, and the peroxidase reaction was developed as described above.
In addition to the original light microscopy, after a lapse of 15 years from the initial preparation, the microscope slides were incubated in xylene for 48 h to remove the coverslips. The sections were then rehydrated through a series of 100%, 90%, and 70% ethanol. After several rinses in DW, nanogold particles were developed as described above.
Application to pathological immunocytochemistry. Paraffin sections of rabbit arterial thrombosis were obtained from a disturbed blood flow model 45 . In brief, the left femoral arteries of male Japanese white rabbits (Kyudo Corp, Kumamoto, Japan) weighing 2.5-3.0 kg were damaged by inserting a 2.5 mm (diameter) × 9 mm (length) angioplasty balloon catheter (Boston Scientific, Galway, Ireland) into the femoral artery via the carotid artery. Three weeks later, the damaged femoral arteries were constricted to reduce the flow volume. The rabbits were then intravenously injected with heparin (500 U/kg); 15, 30, and 180 min thereafter, they were euthanized with an overdose of pentobarbital (60 mg/kg, intravenous) to evaluate thrombus formation. The animals were perfused with 4% paraformaldehyde, and the femoral artery was embedded in paraffin and sectioned (3 µm in thickness).

Fig. 9 Illustration of in situ nanogold labeling and biomedical applications. a Proposed illustration of the in situ nanogold labeling. Nanogold particles nucleated from HAuCl4 in situ among the DAB deposition sites. The secondary growth and aggregation that occur under hot-humid conditions are sufficient for low-vacuum SEM observation. b Rat gastric parietal cell. Retrospective nanogold labeling of H+/K+-ATPase on a valuable archived specimen after a lapse of 15 years (i.e., 15-year-old DAB depositions). c Pathological application to rabbit arterial thrombosis. Note the intense labeling of glycoprotein IIb/IIIa expressed on the platelets surrounding the leukocyte (Leu). The cell nuclei were stained with hematoxylin for light microscopy.
After deparaffinization, the sections were incubated in methanol containing 0.3% H2O2 for 30 min to block endogenous peroxidase activity. After several rinses in PBS, the sections were incubated in Protein Block (X0909; Agilent, Santa Clara, CA, USA) for 10 min to block nonspecific binding and then incubated with a sheep polyclonal antibody against platelet glycoprotein IIb/IIIa (Affinity Biologicals, Inc., Hamilton, CA, USA; diluted 1:500 with Antibody Diluent purchased from Agilent) at 4°C overnight. The procedure was performed without primary antibodies as a control. After the sections were washed with PBS, they were incubated with biotinylated donkey anti-sheep IgG (Jackson ImmunoResearch Laboratories, West Grove, PA, USA; diluted 1:1000 with Antibody Diluent) at RT for 30 min, followed by washing with PBS. Then, the sections were incubated in a ready-to-use solution of peroxidase-labeled streptavidin (Nichirei Biosciences, Tokyo, Japan) for 5 min. After the samples were washed with PBS, the peroxidase reaction was developed as described above. The cell nuclei were stained with Mayer's hematoxylin for 3 min and exposed to running tap water for at least 3 h to develop the color.
Application to in situ hybridization. Paraffin sections (5 µm in thickness) of the ovary from C57BL/6 J female mice, 8-12 weeks of age, were prepared by chemical fixation with 4% paraformaldehyde in PBS at RT for 24 h. After deparaffinization, the sections were treated with 0.2 N HCl and digested with proteinase K. After postfixation with 4% paraformaldehyde in PBS for 5 min, the sections were immersed in 2 mg/ml glycine in PBS for 30 min and kept in hybridization medium, 40% deionized formamide in 4× standard saline citrate (SSC) (1× SSC = 0.15 M sodium chloride and 0.015 M sodium citrate, pH 7.0), until hybridization. Hybridization was carried out at 37°C overnight with digoxigenin-labeled oligo-DNAs for 28S rRNA (2173-2206) dissolved in hybridization medium 46 . After repeated washings with 2× SSC, the sections were incubated with HRP-labeled sheep anti-digoxigenin antibody (Roche Diagnostics, Mannheim, Germany). After the samples were washed with PBS, the peroxidase reaction was developed as described above.
Impact of Chemotherapy-Related Hyperglycemia on Prognosis of Child Acute Lymphocytic Leukemia
Introduction
Acute lymphoblastic leukemia (ALL) is the malignant tumor with the highest incidence in childhood. Although risk-stratified chemotherapy has significantly improved the prognosis, with marked gains in event-free survival (EFS) and overall survival (OS), recurrence and disease progression, as well as drug toxicity-related mortality, remain important problems (Hunger et al., 2012). Recent studies have found that hyperglycemia during critical illness in children is one of the risk factors affecting prognosis (Devos and Preiser, 2004; Faustino and Apkon, 2005), and a similar conclusion has been confirmed in adult malignancies (Ali et al., 2007); hyperglycemia may directly affect cell growth and induce drug resistance in tumor cells (Feng et al., 2011).
The occurrence of hyperglycemia during induction chemotherapy is an independent risk factor for early recurrence and high mortality in adult ALL patients; compared with non-hyperglycemic patients, the risks were 1.57 times and 1.71 times higher, respectively (Weiser et al., 2004).
Because L-asparaginase (L-asp) and glucocorticoids, the drugs commonly used during childhood ALL induction chemotherapy, affect the production, release, and function of insulin, chemotherapy-related hyperglycemia is a common complication of childhood ALL, with a reported incidence of 4% to 20%, or even as high as 56% (Belgaumi et al., 2003; Sonabend et al., 2008). Reports on the impact of hyperglycemia during induction chemotherapy on the prognosis of childhood ALL are currently few, and their conclusions are inconsistent. This study compares the rates of complete remission, 5-year relapse-free survival, and 5-year overall survival between hyperglycemic and non-hyperglycemic ALL children during induction chemotherapy, aiming to analyze the relationship between hyperglycemia and the prognosis of childhood ALL.
Study population
The clinical data of 159 children with newly diagnosed ALL treated in the Department of Pediatrics, Sun Yat-sen Memorial Hospital, from June 2008 to May 2012, were retrospectively analyzed, excluding those with confirmed hyperglycemia before chemotherapy or a family history of diabetes, as well as those who did not complete induction chemotherapy. All patients were treated according to the protocol of the Guangzhou Child ALL 2008 chemotherapy collaboration committee, namely the VDLD program: vincristine (1.5 mg/m2/d) + daunorubicin (30 mg/m2/d) + L-asp (5000 IU/m2/d, q3d, 8 doses in total) + steroid (prednisone 60 mg/m2/d, d1-7, oral administration; dexamethasone 6 mg/m2/d, d8-28, tapered gradually after oral administration). Follow-up was performed until January 2014, with a median follow-up of 3.2 years (range, 0.08 to 5.6 years).
Data Collection
The following clinical data were recorded: age at initial diagnosis, gender, risk stratification, blood glucose values during induction therapy, insulin treatment, diagnosis date, complete remission date, recurrence date, and date of death or last follow-up.
Definition of chemotherapy-related hyperglycemia: fasting plasma glucose ≥126 mg/dl and/or random blood glucose ≥200 mg/dl on two or more occasions during L-asp- and dexamethasone-containing induction chemotherapy. Every child underwent blood glucose testing more than twice, and the children were divided into the hyperglycemia group and the euglycemia group according to their blood glucose values.
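A minimal sketch of this grouping rule as code follows; the function name and the (is_fasting, value) data layout are hypothetical and only restate the definition above:

```python
# Illustrative grouping rule from the definition above; names and data
# layout are hypothetical, not taken from the study's records.
from typing import Iterable, Tuple

def is_chemo_related_hyperglycemia(measurements: Iterable[Tuple[bool, float]]) -> bool:
    """measurements: (is_fasting, glucose in mg/dl) pairs taken during
    L-asp- and dexamethasone-containing induction chemotherapy."""
    abnormal = sum(
        1 for is_fasting, mg_dl in measurements
        if (is_fasting and mg_dl >= 126.0) or (not is_fasting and mg_dl >= 200.0)
    )
    return abnormal >= 2  # two or more abnormal values define the hyperglycemia group

# Example: one fasting value of 130 mg/dl and one random value of 210 mg/dl
print(is_chemo_related_hyperglycemia([(True, 130.0), (False, 210.0)]))  # True
```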
Insulin treatment: patients with random blood glucose ≥200 mg/dl received intensive insulin therapy (short-acting insulin RI + intermediate-acting insulin NPH), while those with random blood glucose between 126 mg/dl and 200 mg/dl received intensive insulin therapy, short-acting insulin alone, or diet control. The treatment program was individualized, with a blood glucose control target of random blood glucose <110-126 mg/dl (7-8 mmol/l).
Risk stratification: according to the protocol of the Guangzhou Child ALL 2008 chemotherapy collaboration committee, patients were stratified into standard risk (SR), intermediate risk (IR), and high risk (HR). SR required all of the following: good response to prednisone on day 7, with day-8 peripheral blood blast cells <1.0×10^9/L; age ≥1 year and <6 years; WBC <20×10^9/L; bone marrow M1 (blasts <5%) or M2 (blasts 5% to 25%) on day 15 of induction chemotherapy; and bone marrow M1 on day 33 of induction chemotherapy. IR required all of the following: good prednisone response, with day-8 peripheral blood blast cells <1.0×10^9/L; age <1 year or ≥6 years; WBC ≥20×10^9/L; bone marrow M1 or M2 on day 15; and bone marrow M1 on day 33; or, alternatively, meeting the SR criteria except for bone marrow M3 (blasts >25%) on day 15, with bone marrow M1 on day 33. HR required at least one of the following: non-SR status with bone marrow M3 on day 15; poor prednisone response, with day-8 peripheral blood blast cells ≥1.0×10^9/L; bone marrow M2 or M3 on day 33; or the presence of t(9;22) (BCR/ABL) or t(4;11) (MLL/AF4).
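One possible encoding of this decision rule is sketched below; the Patient structure, field names, and the reading of the ambiguous "non-SR with day-15 M3" clause are assumptions for illustration, not part of the protocol:

```python
# Illustrative encoding of the SR/IR/HR rule paraphrased above;
# the Patient structure and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    good_prednisone_response: bool  # day-8 peripheral blasts < 1.0e9/L
    age_years: float
    wbc_e9_per_l: float
    marrow_d15: str                 # "M1" (<5% blasts), "M2" (5-25%), "M3" (>25%)
    marrow_d33: str
    t9_22_or_t4_11: bool            # t(9;22) BCR/ABL or t(4;11) MLL/AF4

def risk_group(p: Patient) -> str:
    hr = (not p.good_prednisone_response         # poor prednisone response
          or p.marrow_d33 in ("M2", "M3")        # day-33 marrow M2/M3
          or p.t9_22_or_t4_11)                   # adverse translocation
    sr_core = (p.good_prednisone_response
               and 1 <= p.age_years < 6
               and p.wbc_e9_per_l < 20
               and p.marrow_d33 == "M1")
    if not hr and not sr_core and p.marrow_d15 == "M3":
        hr = True  # "non-SR with day-15 M3" (assumed reading of the clause)
    if hr:
        return "HR"
    if sr_core and p.marrow_d15 in ("M1", "M2"):
        return "SR"
    return "IR"

print(risk_group(Patient(True, 4.0, 10.0, "M1", "M1", False)))  # SR
```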
Definition of prognostic indices: relapse of ALL includes bone marrow relapse and central nervous system relapse. CR was defined as bone marrow blast cells <5% with restoration of normal hematopoietic function. RFS was defined as the time from diagnosis to relapse, death, or final follow-up. OS was defined as the time from diagnosis to death or final follow-up.
Statistical analyses: all data were analyzed using SPSS 17.0 software. The 5-year overall survival and relapse-free survival rates were estimated with the Kaplan-Meier method, and the log-rank test was used to compare the survival curves of the hyperglycemia and euglycemia groups. Risk factors for hyperglycemia were analyzed with the χ2 test, and the CR rates of the two groups were compared with Fisher's exact test. Non-normally distributed measurement data were expressed as medians, and count data were compared with the χ2 test, with P<0.05 considered statistically significant.
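The study used SPSS 17.0; a rough Python analogue of the same survival comparison, with synthetic follow-up data standing in for the patient records, could look like this (only the CR counts in the Fisher test are taken from the paper):

```python
# Sketch of the survival comparison with synthetic data; the study
# itself used SPSS 17.0, so this is only an analogue.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)
# Hypothetical follow-up times (years) and event flags (True = relapse/death)
t_hyper = rng.exponential(6.0, 38)
e_hyper = rng.random(38) < 0.4
t_eu = rng.exponential(10.0, 121)
e_eu = rng.random(121) < 0.15

kmf = KaplanMeierFitter()
kmf.fit(t_hyper, event_observed=e_hyper, label="hyperglycemia")
print(kmf.survival_function_.tail(1))

res = logrank_test(t_hyper, t_eu, event_observed_A=e_hyper, event_observed_B=e_eu)
print(f"log-rank p = {res.p_value:.4f}")

# Fisher's exact test on the reported CR table: 33/38 vs 115/121
_, p_cr = fisher_exact([[33, 38 - 33], [115, 121 - 115]])
print(f"CR comparison p = {p_cr:.3f}")  # compare with the reported P = 0.134
```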
High-risk factor analysis of chemotherapy-related hyperglycemia
The median age at initial diagnosis was 4.7 years (range, 1.1 to 16.8 years). Among the 159 children, 38 (23.90%) developed chemotherapy-related hyperglycemia and formed the hyperglycemia group; in 10 of them (6.29% of the cohort), the glucose level was ≥200 mg/dl, and no case of ketoacidosis occurred. The remaining 121 patients (76.1%) did not develop hyperglycemia and formed the euglycemia group. Of the 38 cases in the hyperglycemia group, 16 (42%) received insulin treatment. The hyperglycemia rate in the older age group (≥10 years old) was higher than that in the younger group (43.33% vs 19.38%), a statistically significant difference (P=0.009), and the incidence in the intermediate- and high-risk groups was higher than that in the standard-risk group (26.81% vs 4.76%, P=0.028), while the association with gender was not statistically significant (P=0.056).
CR at the end of induction chemotherapy: among the 38 cases in the hyperglycemia group, 33 achieved CR (86.8%), while among the 121 cases in the euglycemia group, 115 achieved CR (95%); the difference between the two groups was not significant (P=0.134). The analysis of risk factors and remission status for hyperglycemia during induction chemotherapy is shown in Table 1.
Survival analysis
Follow-up continued until January 2014. Among the 159 patients included in the analysis, 21 relapsed (relapse rate, 13.21%) and 11 died (including 4 deaths after relapse); among the 38 cases in the hyperglycemia group, 11 relapsed (relapse rate, 28.95%) and 6 died (including 2 deaths after relapse).
The Kaplan-Meier method was used to compare the cumulative 5-year survival and relapse-free rates of the two groups, and the results are shown in Table 2 and Figures 1 and 2. The cumulative 5-year overall survival rate of the hyperglycemia group was 83.8±6.0%, significantly lower than that of the euglycemia group (94.9±2.4%), P=0.014; similarly, the cumulative 5-year relapse-free rate of the hyperglycemia group was 62.9±8.7%, significantly lower than that of the euglycemia group (80.2±9.1%), P<0.001.
These results suggest that the prognosis of the hyperglycemia group was poorer, with higher recurrence and mortality rates than in children without chemotherapy-related hyperglycemia.
Discussion
The cure rate of childhood ALL has improved significantly, but 25%-30% of patients still relapse, and post-relapse treatment remains the bottleneck in improving the overall prognosis of childhood ALL; removing high-risk factors that lead to relapse might reduce recurrence. Clinical data from adult ALL indicate that hyperglycemia during induction remission is an independent risk factor for relapse and high mortality (Weiser et al., 2004). The impact of hyperglycemia during induction remission on the prognosis of childhood ALL remains unclear. The results of this study suggest that hyperglycemia during induction remission is associated with a poor prognosis in childhood ALL: the cumulative 5-year relapse-free survival and overall survival rates of the hyperglycemia group were significantly lower than those of the euglycemia group.
In this study, a total of 159 children were included, and the incidence of hyperglycemia during induction chemotherapy was 23.9%; the proportion with blood glucose ≥200 mg/dl on two or more occasions was 6.29% (10/159), similar to previous reports (4% to 20%) but significantly lower than the report of Rona (58%), in which the proportion with blood glucose ≥200 mg/dl on two or more occasions was as high as 34%, possibly because of a higher proportion of obese children and more frequent postprandial glucose testing (Sonabend et al., 2009). The combination of L-asp and glucocorticoids, as well as disease stress, may be the main causes of hyperglycemia (Vu et al., 2012). Whether the type of glucocorticoid (prednisone or dexamethasone) affects the incidence of hyperglycemia is still controversial, and leukemia itself may also affect glucose metabolism, manifesting as elevated baseline glycosylated hemoglobin, insulin resistance, or insulin receptor abnormalities (Roberson et al., 2008; Sonabend et al., 2009; Spinola-Castro et al., 2009).
In this study, the median age of the newly diagnosed ALL children was 4.6 years, and the incidence of hyperglycemia among children aged ≥10 years was significantly higher than in the younger age group (43.33% vs 19.23%, P=0.008); the incidences in the intermediate- and high-risk groups were also significantly higher than in the standard-risk group (22.53% vs 5.33%, P=0.017). A number of studies have confirmed that age >10 years at initial diagnosis is the predilection age for hyperglycemia during induction remission in childhood ALL and is also a risk factor for ketoacidosis, with a higher incidence in the hyperglycemia group (Roberson et al., 2008; Lowas et al., 2009; Roberson et al., 2009; Sonabend et al., 2009; Spinola-Castro et al., 2009); it has accordingly become an index of poor prognosis in a number of collaborative groups.
In this study, the occurrence of hyperglycemia during induction chemotherapy did not significantly affect the CR rate (86.8% vs 95% in the euglycemia group, P=0.134), but the prognosis of the hyperglycemia group was poorer: the 5-year overall survival rate was significantly lower than that of the euglycemia group (83.1±6.3% vs 94.2±2.9%, P=0.014), as was the 5-year relapse-free rate (64.1±8.9% vs 88.6±3.8%, P<0.001). Several adult studies have shown that hyperglycemia predicts higher mortality; a mean fasting blood glucose >112.5 mg/dl significantly increases the mortality of cancer patients (Bochicchio et al., 2010; Seshasai et al., 2011), and hyperglycemia during induction chemotherapy in adult ALL patients indicates a poor prognosis, with risks of early recurrence and mortality 1.57 times and 1.71 times those of euglycemic patients, respectively, and with shorter median CR duration (24 months vs 52 months, P=0.001) and median survival time (29 months vs 88 months, P<0.001) (Weiser et al., 2004). To date, three research institutions have reported on the relationship between hyperglycemia during induction remission and childhood ALL, with differing results. The results of Sonabend were similar to our conclusions: when the blood glucose of ALL children was greater than 200 mg/dl, the 5-year recurrence-free rate (68%±6.7% vs 85%±3.6%, P=0.025) and overall survival (74%±6.1% vs 96%±1.9%, P<0.0001) were significantly reduced compared with other ALL children, and the risk of death was 6.2 times higher, so a blood glucose level ≥200 mg/dl was an independent predictor of survival in ALL children (Sonabend et al., 2009). The study of Roberson, however, did not find a relationship between hyperglycemia during induction chemotherapy and poor prognosis in ALL children; the overall survival rates and cumulative recurrence rates of the two groups did not differ significantly (Roberson et al., 2009). The sample size of Spinola was too small (12 patients, with 16 episodes of hyperglycemia detected) to confirm a relationship between hyperglycemia and poor prognosis, although the authors considered it important to evaluate changes in blood glucose during ALL induction chemotherapy (Spinola-Castro et al., 2009). The reason for the difference between the conclusions of Sonabend and Roberson is still unclear; it might lie in the different compositions of the study populations (age, risk level, immune status, infection, and delayed effects of chemotherapy), different degrees of hyperglycemia (incidence 34% vs 16%), different salvage methods after relapse (whether stem cell transplantation was performed), and other factors.
The conventional view was that hyperglycemia mainly reduces patients' immunity and increases the chance of infection (2.1 to 2.5 times that of the euglycemia group), delays chemotherapy, and decreases the clearance of minimal residual leukemia, thereby affecting the effectiveness of ALL treatment (Weiser et al., 2004; Sonabend et al., 2008). It is now considered that the relationship between abnormal glucose metabolism and poor cancer prognosis is multifactorial: cancer cells can downregulate P53 to affect the stability of glucose metabolism, while hyperglycemia can provide more energy for tumor cell growth via the glycolytic pathway (Yeung et al., 2008); in type 2 diabetes patients with insulin resistance, hyperinsulinemia and high insulin-like growth factor (IGF) levels downregulate P53 via the AKT signaling pathway. Insulin has somatomedin-like properties, and recent experiments have found that high insulin and glucose concentrations can promote the growth of a variety of tumor cells (pancreatic cancer, breast cancer, hepatocellular carcinoma, and ALL primary cells and cell lines) via both independent and synergistic mechanisms, while insulin may reduce tumor cell apoptosis and induce drug resistance (Brown et al., 2008; Feng et al., 2011; Pan et al., 2012).
Currently, induction chemotherapy-induced hyperglycemia is still managed clinically with insulin. A 2009 study of critically ill patients found that the mortality of patients with a blood glucose control target of 81-108 mg/dl (intensive insulin therapy) was significantly higher than that of patients whose blood glucose was controlled at 180 mg/dl or a slightly lower level (conventional insulin therapy) (OR 1.14, 95% CI 1.02-1.28, P=0.02). A prospective clinical study of adult ALL and lymphoma also found that, during intensive insulin therapy for hyperglycemia with poor prognosis, an insulin/C-peptide ratio >0.175 indicated decreased overall survival (P=0.0016) and relapse-free survival (P=0.0002), as well as shortened CR duration (P=0.0042) (Vu et al., 2012). The characteristics of glucose metabolism during induction remission in childhood ALL have not been reported, and the impact and dose-effect relationship of insulin therapy on the prognosis of childhood ALL are not clear; thus, it cannot be ruled out that different intensities of insulin therapy exerted some influence on the research findings. Sonabend mentioned only 56 cases of hyperglycemia (glucose level ≥200 mg/dl), of whom 16 (28.6%) were treated with insulin, while Roberson stressed that insulin therapy was given only for a short period and should be stopped once the glucose level was <200 mg/dl and did not rise again (including in patients with ketoacidosis); however, there were no fixed guidelines for insulin use, and neither Sonabend nor Roberson reported the detailed intensity of insulin therapy. In this study, 16 of the 38 hyperglycemic patients were treated with insulin, a higher proportion (42%), and the blood glucose control target of <110-126 mg/dl also indicates a higher intensity of insulin therapy, which might be associated with the poor prognosis of the hyperglycemic children.
The limitations of this study lie in its retrospective design, so the impact of insulin therapy intensity on the prognosis of hyperglycemic ALL children could not be assessed; the differences between the hyperglycemic and euglycemic children in severe infection rates and chemotherapy delays were not evaluated; and the sample of hyperglycemic children was small (38 cases).
The results of this study show that the prognosis of ALL children who developed hyperglycemia during induction chemotherapy was poorer than that of children with euglycemia, with significantly reduced cumulative 5-year overall survival and relapse-free rates. Therefore, in clinical practice, blood glucose should be monitored during induction chemotherapy, especially in populations at high risk of hyperglycemia, and hyperglycemia should be actively prevented. The proportion of patients given insulin in this study (42%) and the relatively high treatment intensity could not be excluded as factors associated with the poor prognosis, which requires prospective, randomized controlled studies for further confirmation. Effective prevention and treatment of hyperglycemia may help improve the overall prognosis of children with ALL.
Figure 1. KM Survival Curves of the 5-Year Overall Survival Rates of the 2 Groups (Log-Rank Test)
Impact Fatigue Life of Adhesively Bonded Composite-Steel Joints Enhanced with the Bi-Adhesive Technique
One of the most common loading conditions that bonded joints experience in service is repeated impact. Despite the destructive effects of impact fatigue, the behavior of metal-composite bonded joints subjected to repeated impact loads has rarely been studied in the literature. It is therefore important both to pay attention to this phenomenon and to find solutions that improve the impact fatigue life of bonded composite-metal components. Accordingly, in this study, the bi-adhesive technique is proposed to improve the durability of composite-metal single-lap joints (SLJs) under impact fatigue loading conditions. The J-N (energy-life) method is used to analyze the experimental data obtained. The impact fatigue behavior of single-adhesive metal-to-composite joints was analyzed experimentally based on the J-N method and numerically using the finite element method (FEM). By using two adhesives along a single overlap, the impact fatigue life of joints between dissimilar composite and metal adherends was also analyzed experimentally. The results show that the bi-adhesive technique can significantly improve the impact fatigue life of the tested joints. It was also found that the optimum length ratio of the adhesives (the length covered by the ductile adhesive relative to the total overlap length) is a function of the stiffness of the joint and is more pronounced for less stiff bonded joints. A linear elastic numerical analysis was also conducted to evaluate the stress state along the bondline of the bonded joints. The results show that the compressive peel stress generated at the boundary of the two adhesives can be a possible reason behind the different results observed.
Introduction
In recent decades, the use of carbon fiber reinforced polymers (CFRP) in modern industrial structures such as airframes and vehicle bodies has increased significantly. Composite structures offer advantages over metal structures in terms of high strength, fatigue resistance, and low weight. Because of the importance of composites in industry, the use of adhesives to join metals to composites is inevitable. On the other hand, compared to conventional joining methods, adhesive bonding is one of the most important techniques for joining dissimilar materials (including composite materials), since adhesives offer advantages such as more even stress distribution, higher fatigue strength, energy absorption, etc. Numerous studies have dealt with the mechanical performance of bonded composite structures from different points of view [1]. One of the critical topics considered by the authors is the strength of bonded composite joints subjected to high strain rates, including impact loads. Chen et al. [2] found a 75% reduction in energy absorption for composite joints subjected to high strain rate experiments compared to quasi-static loads, due to the lower elongation at failure. Huang et al. [3] numerically and experimentally investigated the low-energy bending impact damage of bonded single lap joints (SLJs) with similar (composite/composite) and dissimilar (composite/steel) substrates. They found that dissimilar joints are more susceptible to low-energy bending impact damage than similar joints. The effects of a wide range of strain rates on the impact strength of epoxy adhesives were experimentally studied by Houjou et al. [4]. In another study, Chung and Kwak [5] evaluated the shear impact strength of bonded joints by conducting impact experiments using a piezoelectric force sensor. They proposed a test methodology that allows the bond strength and shock absorption of the tested joints to be characterized simultaneously.
The effects of high strain rate and impact loading on the performance of bonded assemblies have been explored numerically [6,7] and experimentally [8][9][10] in several studies. Borges et al. [7] developed a numerical approach to analyze the influence of the loading rate on the mechanical response of adhesive joints using cohesive zone modeling. In this method, the mechanical properties of the adhesive are defined as a function of the loading rate. They proved that by reducing the strain rate, the fracture stress is reduced for the tested materials. Ramezani et al. [11] numerically and experimentally investigated the effects of loading rate on the failure load and failure mechanism of single lap joints made of composites and hybrid composites. The results showed that the additional layers could significantly reduce the local stresses and delamination in composites subjected to both static and high loading speeds. According to their results, the mechanical response of the adhesive layer directly affects the failure mechanisms. Due to the viscoelastic response of adhesive materials, changing the strain rate can significantly influence the stiffness, strength, and ductility of the joints.
During service, joints are often subjected to several unforeseen low-energy impacts. These loads are not capable of individually causing joint failure, but they have the potential to cause cumulative damage to the adhesive layer [12]. Despite extensive studies on the impact strength of adhesive joints, there is little published work on the effects of repeated low-energy impacts on the degradation of adhesive mechanical properties. Experimental results have shown that low-energy cyclic impacts are more destructive than normal fatigue loads [13]. The literature shows that the fatigue endurance limit of structures under impact fatigue is far lower than under normal fatigue in bonded joints. By conducting a wide range of experiments, Jalali et al. [13] showed that under impact fatigue, the endurance limit of bonded steel joints is less than 8% of the joint's impact strength, while under normal fatigue loading, the fatigue limit is around 40-50% of the static strength of the joint [14,15]. The residual static strength of adhesive joints subjected to pre-impact loads was also experimentally obtained by Kemiklioglu et al. [16]. Their results showed that the residual strength is a direct function of the number of impact cycles. According to the experimental data [13], two different damage mechanisms were observed for joints subjected to impact fatigue and quasi-static loading. For quasi-static loading, the damage usually starts at the bondline ends, and the crack then propagates along the bondline. For joints subjected to low-energy fatigue impact loading, the damage starts in the middle of the overlap. This is due to the stress waves propagating through the overlap and creating a higher stress level in the middle of the joint. The fractography analysis conducted by Jalali et al. [13] showed two different regions at the fracture surface of SLJs after impact fatigue failure. The first area corresponds to damage initiation and accumulation in the center of the overlap, and the second area corresponds to rapid crack growth taking place at the joint ends before the final failure of the joints.
Despite the more uniform stress distribution in bonded joints compared to other mechanical joining techniques, the load transfer capacity of adhesives is significantly reduced by the local stress concentration at the overlap ends, especially for SLJs with large overlap sizes [17,18]. The rotation of the substrates due to the bending moment of the applied loads leads to a local peel stress concentration at the ends of the bondline. This uneven stress distribution along the overlap can significantly affect the durability of the joints. Authors have presented various methods, such as tapering the joint parts or using adhesive fillets at the overlap ends, to reduce these edge effects. However, one of the most effective techniques for reducing stress concentration is the bi-adhesive technique, where two adhesives with different properties (one more brittle and one more ductile) are applied on a single overlap [18]. The use of a ductile adhesive at the ends of the bonded joint reduces the shear and peel stresses, leading to a more uniform stress distribution along the bonded area and a higher load-bearing capacity of the joint [18]. However, it should be noted that the properties of the adhesives change with the loading rate, so at very high strain rates, the ductile adhesive may no longer show a ductile response [7,19]. Consequently, the damping capability of the ductile adhesive decreases at higher strain rates [20]. Accordingly, it is important to understand the behavior of bi-adhesive joints under impact loads, especially for dissimilar adhesive joints, which are common in lightweight, load-bearing industrial structures. The bi-adhesive technique was considered by Akhavan-Safar et al. [20] to improve the impact fatigue life of similar joints (joints with similar adherends). They experimentally investigated the impact fatigue life of bi-adhesive joints as a function of the impact energy, using similar steel adherends to find length ratio effects on the endurance limit of joints subjected to repeated impact loads. The results showed that the bi-adhesive technique could significantly increase the impact strength of the tested joints. It was also shown that the ductile adhesive creates a second local stress concentration along the bondline, which symmetrically reduces the stress level along the bondline.
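The edge-relief argument above can be illustrated with the classical Volkersen shear-lag model; this is a textbook single-adhesive estimate, not the bi-adhesive layout or the FE model used in this study, and all dimensions and moduli in the sketch are assumed for illustration:

```python
# Classical Volkersen shear-lag estimate for a single-lap joint with a
# single adhesive; illustrative parameters only, not the joints tested here.
import numpy as np

def volkersen_shear(P, b, l, G_a, t_a, E1, t1, E2, t2, n=401):
    """Adhesive shear stress tau(x), x measured from the overlap center."""
    w = np.sqrt(G_a / t_a * (1.0 / (E1 * t1) + 1.0 / (E2 * t2)))
    x = np.linspace(-l / 2, l / 2, n)
    sym = np.cosh(w * x) / np.sinh(w * l / 2)
    asym = ((E1 * t1 - E2 * t2) / (E1 * t1 + E2 * t2)
            * np.sinh(w * x) / np.cosh(w * l / 2))
    return x, P * w / (2 * b) * (sym + asym)

# Steel (210 GPa) / CFRP (60 GPa) adherends, 3 mm thick, 50 mm overlap,
# 20 mm width, 0.5 mm bondline, 5 kN load; all values assumed.
for G_a, label in ((1.0e9, "stiff/brittle adhesive"), (0.2e9, "compliant/ductile adhesive")):
    x, tau = volkersen_shear(P=5e3, b=0.020, l=0.050, G_a=G_a, t_a=0.5e-3,
                             E1=210e9, t1=3e-3, E2=60e9, t2=3e-3)
    print(f"{label}: peak shear = {tau.max()/1e6:.0f} MPa, "
          f"mean = {tau.mean()/1e6:.0f} MPa")
# A lower adhesive shear modulus flattens the distribution and cuts the
# end peaks, which is the rationale for placing the ductile adhesive at
# the overlap ends in the bi-adhesive layout.
```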
As shown in the previous studies [18,21], the fracture strength can be significantly improved by using the bi-adhesive technique when the joints are exposed to impact and quasi-static loading conditions. However, as mentioned in a recent review article [18], despite the extensive work on bi-adhesive joints subjected to quasi-static loading conditions, the use of bi-adhesive for impact loads has rarely been considered in the literature. Moreover, these few studies are limited to impact strength and not impact fatigue. They also mainly consider joints with similar substrates, while dissimilar joints, which are common in practice, have received less attention.
Accordingly, the aim of the current study is to further explore this topic by considering the application of the bi-adhesive technique in joints with dissimilar CFRP-Steel substrates subjected to repeated low-energy impacts. The results have been analyzed based on the J-N method, where the applied impact energy (J) is plotted against the fatigue life (N) on a semi-logarithmic scale. The results are compared with similar adherends bonded joints subjected to impact fatigue. A numerical analysis is also included to show the shape of the stress waves through the tested adhesive joint. The numerical modeling also helps to find the possible relation between the adhesives' length ratio, substrate material, and the peel stress, as a critical stress component, along the overlap.
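Since the J-N representation is central to the analysis, a minimal sketch of building such a curve (impact energy J versus cycles to failure N on a semi-logarithmic scale, with a linear fit in log10 N) is given below; the data points are invented for illustration:

```python
# Minimal J-N (energy-life) curve sketch; the data points are invented,
# not the measured lives of the tested joints.
import numpy as np
import matplotlib.pyplot as plt

J = np.array([15.0, 12.5, 10.0, 7.5, 5.0])  # impact energy per blow (N.m)
N = np.array([40, 110, 350, 1200, 5200])    # cycles to failure (hypothetical)

# Fit J = a*log10(N) + c, the usual straight line on a semi-log plot
a, c = np.polyfit(np.log10(N), J, 1)
print(f"J = {a:.2f} * log10(N) + {c:.2f}")

plt.semilogx(N, J, "o", label="tests (synthetic)")
n_fit = np.logspace(np.log10(N.min()), np.log10(N.max()), 100)
plt.semilogx(n_fit, a * np.log10(n_fit) + c, "-", label="linear fit in log10(N)")
plt.xlabel("Impact cycles to failure, N")
plt.ylabel("Impact energy, J (N.m)")
plt.legend()
plt.show()
```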
Material
In order to manufacture dissimilar joints, two different substrate materials were used. The metal substrates were cut from steel plates with a thickness of 3 mm. Ten layers of woven carbon fiber were used to manufacture the composite adherends, which were produced by the hand layup method. LY 5052 resin was used as the matrix, and the fabricated composites were cured in a hot press for 24 h at 23 °C, followed by a 4 h post-curing process at 100 °C. Table 1 gives the mechanical properties of the steel and composite materials considered for the numerical study [20,22].
SLJ Geometry
In order to manufacture bi-adhesive joints, two adhesives with different properties were used: a ductile adhesive with a maximum tensile elongation of 33% and a brittle adhesive with a maximum tensile elongation of 1.4%.
As already discussed by Akhavan-Safar et al. [18], the optimum E ratio (the ratio of the elastic moduli of the two adhesives) is affected by several geometrical parameters such as adhesive thickness, total overlap length, adherend elastic moduli, etc. A wide range of E ratios has been considered by researchers, from 0.003 to 0.9 [18]. Accordingly, in this study, MEGA-POX 330 was used as the ductile adhesive, and DO-GHOLO (Ghaffari Co., Iran) was considered as the brittle adhesive, with an E ratio of 0.48. In order to achieve this E ratio, the resin-to-hardener weight ratio was set to 0.66 for MEGA-POX 330 and 1.25 for the brittle adhesive (DO-GHOLO). The joints were cured at room temperature for one week. Bulk specimens were manufactured based on ASTM D638 and tested in a previous study; the results are shown in Table 2. Figure 1 shows the dimensions of the tested joints, where a 3 mm thick steel plate and a 3 mm thick CFRP composite, each 100 mm long, were used as adherends to manufacture SLJs with a width of 20 mm. As discussed in [18], the length ratio (d) in bi-adhesive joints is a key factor controlling the mechanical behavior and strength of SLJs. The optimal d-value, which gives the highest fracture load, is a function of the joint geometry, the mechanical properties of the adhesives, and the substrate stiffness [18]. In practice, joints with different overlaps are used, which can be shorter or often longer than the joint used in the current study. In general, however, the advantages of the bi-adhesive technique are more pronounced for longer overlaps, where higher peel stress is generated at the bonding ends, limiting the strength of the joint. Consequently, a 50 mm overlap length, a typical lab-scale overlap size for SLJs, was considered to be representative of real joints with similar overlap sizes that can benefit from the bi-adhesive technique and, more importantly, to help readers compare the current results more easily with those previously published in [20,21], where the same overlap length was used. However, further studies are needed to analyze size effects on the mechanical behavior of bi-adhesive joints.
Holes were made at the ends of the adherends so that the specimens could be mounted on the drop-weight impact fatigue test machine and loaded through a pin placed in these holes.
Adhesive thickness plays a key role in joint strength [23]. Numerical studies have shown that increasing the adhesive thickness in bi-adhesive SLJs decreases the concentration of shear and peel stresses along the lap length [24] (compared to a joint with a single brittle adhesive where a higher value of peel stress is observed along the lap length [21]). Consequently, increasing the adhesive thickness causes a more uniform load transfer through the adhesive layer, resulting in an increase in the strength of bi-adhesive joints. However, similar to SLJs, there is an optimal adhesive thickness at which maximum strength is achieved [21]. Based on experimental data, the influence of adhesive thickness on ultimate load is more pronounced for joints with a lower E [25] and/or a lower aspect ratio [21]. The optimum adhesive layer thickness for bi-adhesive joints is generally assumed to be in the range of 0.4 to 0.5 mm [18]. For the tested joints, the adhesive thickness was set to 0.5 mm.
One of the most important parameters that can significantly change the strength of bi-adhesive joints is the adhesive length ratio (d). If the total overlap is defined as L, and the total length of the overlap occupied by the ductile adhesive is 2L1 (L1 at each overlap end), then the length ratio is defined as d = L1/L. Accordingly, the length of the overlap covered by the brittle adhesive is L2 = L − 2L1. Previous studies [18,20,21] have shown that the optimum length ratio for bi-adhesive joints is between 0.1 and 0.3, depending on various parameters such as the stiffness ratio of the two adhesives. However, based on the definition of the length ratio, the maximum possible length ratio is 0.5, which corresponds to joints where the ductile adhesive covers the entire overlap. Accordingly, in this study, two adhesive length ratios, 0.1 and 0.2, were considered. Joints were also manufactured with only the stiffer (brittle) adhesive, and these results were used as a reference.
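To make the geometry concrete, a small helper (hypothetical, merely restating the definitions above) computes the ductile and brittle segment lengths for the overlap and length ratios used here:

```python
# Restates the length-ratio definitions above; purely illustrative.
def bi_adhesive_layout(L_mm: float, d: float):
    """Return (L1, L2): ductile length at EACH end and brittle mid-length."""
    if not 0.0 <= d <= 0.5:
        raise ValueError("length ratio d = L1/L must lie in [0, 0.5]")
    L1 = d * L_mm          # ductile adhesive at each overlap end
    L2 = L_mm - 2.0 * L1   # brittle adhesive in the middle
    return L1, L2

for d in (0.1, 0.2):       # the two ratios tested in this study
    L1, L2 = bi_adhesive_layout(50.0, d)   # 50 mm overlap
    print(f"d = {d}: ductile {L1:.0f} mm per end, brittle {L2:.0f} mm in the middle")
```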
Manufacturing
Preparing the surface of the adherends by removing contamination, dust, oxide, etc., is an essential step in manufacturing adhesive joints. Good surface treatment can significantly improve the quality of the adhesion between the adherend and the surface. One of the most common techniques used for preparing the surface of the steel is sandblasting. Accordingly, the surface of steel adherends was sandblasted, followed by acetone cleaning.
As recommended by various authors [26], smooth sandpaper followed by acetone cleaning was used to prepare the bonding surface of the composite adherends.
One of the important steps in manufacturing bi-adhesive joints is controlling the adhesive length ratio. The ductile and brittle adhesives usually have different viscosities, and it is often difficult to control the length ratio of the adhesives without a physical boundary between the two. Authors have already considered several techniques for controlling the length ratio. Using a physical boundary not only avoids mixing of the two adhesives but also controls the adhesive thickness. In the current research, aluminum wires of 0.5 mm diameter (the same as the adhesive thickness) were placed at the boundary of the two adhesives to avoid mixing during the curing process and to control the length ratio. The wire can thus also be seen as a useful technique for controlling the adhesive thickness. It should be noted that the presence of a wire, as a common length-ratio control technique, can affect the overall behavior of the joints, but without a physical barrier it is difficult to control the length ratio of the adhesives; since wires were present in all the tested joints, their effect on the results was ignored. To manufacture the joints, the ductile adhesive was first applied at both ends, and the brittle adhesive was then applied in the middle of the overlap. To keep the bonded adherends aligned during the curing process, a specific mold, shown in Figure 2, was employed. The bonded joints were cured at room conditions for 7 days.
It should be noted that the idea behind the bi-adhesive technique is to improve the strength of stiff joints bonded with a brittle adhesive [18]. As already shown experimentally in [21], when the adhesive length ratio (d) is increased above the optimal value (at which the lap is mostly or completely covered by the ductile adhesive), the length of brittle adhesive remaining in the middle of the lap is no longer sufficient to withstand the transmitted load, which of course is not an ideal condition in practice. It should also be noted that in real applications, joints often experience a wide range of strain rates, from quasi-static to impact. Consequently, a single low-stiffness ductile adhesive along the lap cannot withstand the service loads, especially at low strain rates. Accordingly, in the current study, joints with a single ductile adhesive were not analyzed.
Test Procedure
For quasi-static tests, the joints were tested under tensile loading using a universal tensile testing machine. The load was measured using the machine load cell, and the displacement is the crosshead displacement measured by the test machine. In addition, to align the loading line with the bondline, end tabs were bonded to both ends of the joints (as shown in Figure 1) before testing the specimens. The rate of displacement in the quasi-static tests was set to 1 mm/min.
For the impact fatigue analysis, similar (steel/steel) and dissimilar (composite/steel) adhesive joints were subjected to various levels of low-energy impacts, and the corresponding lives were analyzed. Then, by using the bi-adhesive technique, the impact fatigue life of the same joints was improved; the rate of improvement was analyzed experimentally. In order to construct the J-N curves, joints were tested at different impact energy levels. To apply the cyclic low-energy impact fatigue loads, an in-house drop-weight low-energy impact test device with a maximum capacity of 50 N.m, already designed and employed in previous studies [13,20], was used in the current research (see Figure 3). The impactor weight was set to 5 kg, and by changing the impactor height, the joints were subjected to different levels of impact energy (5 to 15 N.m). Due to the large mass used (5 kg) and the low impact energies considered, the mass was dropped from a very low height, resulting in a low impact velocity. Although a small rebound occurs under these conditions, the influence of the rebound speed was considered negligible for all tested joints. Figure 3 shows the device used to apply the impact fatigue loads.
Impact loads were repeated until joint failure. Three samples were tested for each condition. The impact energy level was defined by the height of the impactor that was controlled using a vertical guide. In order to minimize friction between the impactor and the guide shafts, lubricated ball bearings were used. Accordingly, the effect of friction was considered negligible. The ball bearings can also align the impactor during the test. As also mentioned before, pins were used to mount the joints in the impact test machine. The impactor applies the impact loads to the joints through the pins. The energy absorbed by joints is the sum of the energy applied until the failure of SLJs. Consequently, it is simply calculated from the energy of each impact multiplied by the number of impact cycles to the failure of the joint.
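As a quick arithmetic check on the test setup described above, the drop height for a given impact energy follows from E = m·g·h, and the total absorbed energy is the per-impact energy times the cycles to failure; the mass and energy range below are as stated, while the cycle count is hypothetical:

```python
# Drop-height and absorbed-energy arithmetic for the rig described above.
g = 9.81   # m/s^2
m = 5.0    # impactor mass (kg), as stated

for E in (5.0, 10.0, 15.0):                 # impact energies used (N.m)
    h = E / (m * g)                         # required drop height, from E = m*g*h
    v = (2.0 * g * h) ** 0.5                # impact velocity, from v = sqrt(2*g*h)
    print(f"E = {E:4.1f} N.m -> h = {h*1000:5.0f} mm, v = {v:.2f} m/s")

# Total energy absorbed to failure = energy per impact x cycles to failure
E_per_impact = 5.0   # N.m
N_failure = 1200     # hypothetical cycle count
print(f"absorbed energy = {E_per_impact * N_failure:.0f} N.m")
```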
Figure 3. The device used to apply the impact loads (already developed and used by the authors [13]).
Finite Element Analysis (FEA)
The stress distribution across the bondline in a bi-adhesive joint is significantly influenced by the presence of a ductile adhesive at the bonding ends. In order to obtain better insight into the stress state and to find a possible relationship between the stress distribution and the experimental data obtained, a simple linear elastic FEA was performed for joints with different substrates and different adhesive length ratios. The same elastic properties as given in the previous section were used. The linear FE analysis only gives an idea of the ratio between the stresses in the two adhesives and of the stress distribution.
However, considerations of failure modes and failure mechanisms are not possible with this analysis. A 2D analysis was performed using Abaqus. Eight elements were created along the adhesive thickness; Figure 4 shows the meshed FE model. The samples were meshed with four-node bilinear plane-strain quadrilateral elements with reduced integration. The aluminum wires used in the experiments to separate the adhesives were also considered in the numerical simulation, as shown in Figure 4. A displacement of 0.5 mm was applied to one end while the other end of the joint was clamped. The peel stress along the mid-plane of the adhesive layer was analyzed in this study.
Static Tests
One of the key factors in manufacturing bi-adhesive joints is to find the optimum length ratio. It is obvious that the optimum length ratio depends on the joint geometry and the stiffness of the adherends. Figure 5 shows the typical behavior of various joints with different substrates and different adhesive length ratios under static loading conditions. First, it was experimentally shown that the bi-adhesive technique could significantly improve the static strength of the joints. The static strength test results also show that the optimal length ratio is influenced by the assembly configuration. Based on the results, the length ratio that leads to the maximum strength is different for dissimilar and similar SLJs. In joints with similar adherends, the best static strength was obtained for a length ratio of 0.2, while the best improvements for composite-steel joints were related to a length ratio of 0.1. As presented in Figure 5, the effects of length ratio are more significant in joints with similar adherends than in dissimilar ones. It should be noted that the displacement shown in Figure 5 is the displacement of the crosshead and does not accurately reflect the displacement that the bondline experienced during the test.
For joints with a single brittle adhesive, a significant stress concentration is observed at the lap ends, and failure is mainly driven by crack initiation at the ends of the bondline. In bi-adhesive joints, where part of the brittle adhesive is replaced by a ductile adhesive at both ends of the overlap, not only does the stress concentration at the lap ends disappear (due to the plastic deformation of the ductile adhesive), but the brittle adhesive at its ends also experiences compressive (negative) peel stresses, which is one of the main reasons behind the significant improvement in the static strength of bonded bi-adhesive joints compared to single-adhesive SLJs. Figure 6 shows the distribution of the peel stress along the single- and bi-adhesive layer in dissimilar (Figure 6a) and similar (Figure 6b) adherend SLJs subjected to 0.5 mm elongation. Due to the different properties of the substrates, a non-symmetric stress distribution is observed along the overlap (Figure 6a), while for similar joints, the stress distribution is symmetric (Figure 6b). However, for the analyzed joints, the brittle adhesive experiences high peel stress at the bonding ends (around 60-70 MPa under the linear elastic assumption), while the stress at the tip of the brittle adhesive reduces to compressive loads, where the normal stress applied to the brittle adhesive is no longer a peeling stress and varies between 0 and −17 MPa.

In the case of dissimilar joints, the symmetry of the stress along the overlap is lost due to the different stiffness of the substrates. Therefore, the edge effects are more critical at the edge with the lower-stiffness substrate. Lower stiffness causes more rotation of the substrate during the test, resulting in greater peel stress in the adhesive layer. It is obvious that failure starts from these critical points. Another failure mechanism is delamination, which is a common problem in composite laminates and has been studied by several authors [27][28][29].
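As a hedged post-processing sketch, the code below builds a synthetic peel-stress profile with the qualitative shape just described (tensile peaks of roughly 60-70 MPa at the lap ends and compressive dips near the brittle-adhesive tips) and extracts the quantities of interest; the profile shape and the 25 mm overlap length are illustrative assumptions, not the actual FEA output.

```python
# Synthetic mid-plane peel-stress profile mimicking Figure 6b (illustrative only).
import numpy as np

x = np.linspace(0.0, 25.0, 201)  # position along the overlap, mm (assumed length)
peel = (65.0 * np.exp(-(x / 1.5) ** 2)              # tensile peak at the left lap end
        + 65.0 * np.exp(-((x - 25.0) / 1.5) ** 2)   # tensile peak at the right lap end
        - 17.0 * np.exp(-((x - 5.0) / 1.0) ** 2)    # compressive dip at one adhesive tip
        - 17.0 * np.exp(-((x - 20.0) / 1.0) ** 2))  # compressive dip at the other tip

i_min = peel.argmin()
print(f"peak peel stress at the lap ends: {peel.max():.1f} MPa")          # ~60-70 MPa
print(f"most compressive stress: {peel.min():.1f} MPa at x = {x[i_min]:.1f} mm")
```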
Cyclic Impact Analysis
One of the most destructive loads that joints can be subjected to in service is cyclic impact, which can lead to unpredictable catastrophic failure. Although the energy applied during each impact cycle is much less than the impact strength of the joints, significant damage occurs because the stress waves propagating through the adhesive layer accumulate damage during impact fatigue. An increasing number of impacts leads to a gradual deterioration of the mechanical properties of the adhesive on the one hand and an accumulation of microcracks/damage on the other. Accordingly, the ability of the adhesive to dampen the stress waves is significantly reduced after a certain number of impact cycles [13].
J-N Behavior
Designing a durable adhesive bond is the most important end goal when designing bonded assemblies. A key factor in achieving this is to make the stress more uniform along the bondline. As previously discussed, the bi-adhesive technique, or more generally the graded-adhesive technique, is one solution to achieve this goal. The length ratio of the adhesives, as well as the stiffness of the substrate, plays a key role in reducing local stresses in SLJs. In order to investigate the impact of these parameters on the impact fatigue life of different SLJs, cyclic impact tests were performed at three different impact energy levels. To analyze the impact fatigue behavior of the joints, a J-N methodology is used, in which the impact fatigue life of the joints is plotted against the applied impact energy per cycle in a semi-logarithmic plot.
Using a Basquin-type equation, a logarithmic trend line was fitted to the experimental data to estimate the cyclic life of the joints at lower impact energy levels, corresponding to higher impact fatigue lives. Figure 7 shows the J-N behavior of the joints for different length ratios in the tested dissimilar steel-to-CFRP bonded joints and also for similar steel-to-steel bonded configurations.
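A minimal sketch of this fitting step, assuming a Basquin-type power law J = A·N^b between the impact energy per cycle J and the cycles to failure N; the data points are hypothetical placeholders, since the measured lives are shown only graphically in Figure 7.

```python
# Fit J = A * N**b by linear least squares in log-log space (Basquin-type law).
import numpy as np

J = np.array([15.0, 10.0, 5.0])     # impact energy per cycle, N.m (tested levels)
N = np.array([40.0, 163.0, 900.0])  # cycles to failure (hypothetical values)

b, logA = np.polyfit(np.log(N), np.log(J), 1)  # slope b, intercept log(A)
A = np.exp(logA)
print(f"J = {A:.1f} * N^{b:.3f}")

# Extrapolate the expected life at a lower impact energy level
J_low = 4.0  # N.m
N_pred = (J_low / A) ** (1.0 / b)
print(f"predicted cycles to failure at {J_low} N.m: {N_pred:.0f}")
```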
The bi-adhesive technique was able to significantly improve the impact fatigue life of the tested samples. It was found that for dissimilar joints, increasing the length ratio from 0.1 to 0.2 reduced the impact fatigue life, while for similar steel joints, the length ratio of 0.2 resulted in the highest impact fatigue lives. A possible reason behind this difference is the different peel stress distributions that the bondline experiences in different joints. Based on the numerical analysis results shown in Figure 6, in CFRP/steel joints, the brittle adhesive at its end experiences a higher compressive load for a 0.1 length ratio than for a 0.2 length ratio. On the other hand, for steel/steel joints, the normal compressive stresses applied to the ends of the brittle adhesive are higher for joints with a length ratio of 0.2 (see Figure 6b). However, under all conditions tested, an impressive increase in impact fatigue life was observed for bi-adhesive joints compared to single rigid-adhesive joints.
As can be clearly seen in Figure 7, no impact fatigue endurance limit can be determined for the tested joints. The same trend was noted in previous studies [13]. However, further research is still needed at very low impact energy levels to investigate the fatigue strength of joints subjected to very high numbers of impact cycles. Based on the results shown in Figure 7, any possible endurance limit would be much lower than the impact strength of the joints (less than 4 N.m for the tested joints). The endurance limit in impact fatigue is also much lower than in normal fatigue [15]. Thus, it can be said that the impact fatigue loading regime has a greater potential to cause premature failure of bonded structures. The experiments proved that increasing the impact energy significantly decreased the impact life of the joints. As expected, it was found that joint life is not a linear function of impact energy; for example, a 50% reduction in impact energy increases fatigue life by more than 70%.
The curves fitted to the experimental data points (Figure 7) also showed that the effects of length ratio become more pronounced with increasing impact energy. On the other hand, it should be noted that the optimal length ratio is also a function of the ratio of the Young's moduli of the ductile and brittle adhesives [18,21]. Due to the viscoelastic behavior of the adhesives, increasing the strain rate decreases their ductility, resulting in a different E-ratio and consequently a different optimal length ratio. As already shown in [20], the ductile adhesive is more sensitive to the loading speed than the brittle adhesive for the tested materials; therefore, the E-ratio would increase as the loading rate increased. Accordingly, the optimum length ratio for quasi-static testing is not necessarily the optimum ratio for high strain rate and impact loading conditions. Not only the loading speed but also the rigidity of the substrate changes the optimal length ratio. Using different materials as substrates further complicates the problem due to the non-symmetrical distribution of the peel stress along the bondline; a nonuniform stress distribution along the bondline also changes the optimum length ratio. Based on the results obtained, the best length ratio for dissimilar joints is 0.1 (among the tested conditions), while for similar steel/steel SLJs it is 0.2. Another reason behind this difference is the different peel stress state at the boundary of the two adhesives, discussed earlier.
However, analyzing the sensitivity of each parameter (substrate stiffness, dissimilarity of the bonded substrates, and loading rate) requires further experiments in which all these parameters are varied at multiple levels.
Total Energy Absorption
In this analysis, the total energy absorption is defined as the summation of the energy absorbed during the impact loads until failure. The energy applied at each impact is calculated based on the falling height and the weight of the impactor. According to the results, the bi-adhesive technique shows an impressive increase in the energy absorption capacity of the tested dissimilar joints (as shown in Figure 8). The impact energy absorption for joints subjected to 10 N.m impact loads increases from 40 N.m for a length ratio of 0 (single brittle-adhesive joints) to 1630 and 870 N.m for bi-adhesive joints with length ratios of 0.1 and 0.2, respectively. This means that, besides the increase in static strength, the bi-adhesive technique increased the impact energy absorption capacity by a factor of around 41, which is a significant improvement in the damping capacity of the tested CFRP/steel bonded joints.
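The quoted improvement factors follow directly from the reported absorption values:

```python
# Improvement factors computed from the absorption values reported in the text
# for joints cycled at 10 N.m impacts.
single_brittle = 40.0    # N.m, length ratio 0
bi_adhesive_01 = 1630.0  # N.m, length ratio 0.1
bi_adhesive_02 = 870.0   # N.m, length ratio 0.2

print(f"improvement, length ratio 0.1: x{bi_adhesive_01 / single_brittle:.0f}")  # ~41
print(f"improvement, length ratio 0.2: x{bi_adhesive_02 / single_brittle:.0f}")  # ~22
```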
Energy absorption is a function of the number of impact cycles and the energy level of each impact. The results show that for joints subjected to an impact load of 10 N.m, the total energy absorption is higher than for joints subjected to fatigue impact loads of 5 and 15 N.m. The total energy absorption is not only influenced by the applied impact load but is also affected by the adhesive length ratio in bi-adhesive joints. Increasing the impact energy from 5 to 10 N.m decreases the total energy absorbed for joints with a length ratio of 0.2, while the opposite holds for a length ratio of 0.1. Therefore, to achieve an optimum condition with maximum cyclic impact energy absorption, the interaction between the impact load level and the length ratio of the adhesives should be analyzed.
Fracture Surface Analysis
One of the key tools for finding the root cause of failure is fractography. As discussed in [13], for similar steel substrates bonded with a single stiff/brittle adhesive subjected to impact fatigue loading, failure begins in the middle of the overlap. It was found that at lower energy levels, the cracks are more concentrated in the center of the bondline, while at higher impact energies, the cracks are more widely spaced along the overlap. It is also clear that increasing the number of impacts propagates the damage and weakens the joint. Failure occurs when the damaged area reaches a critical level where the remaining impact strength of the joint is less than the applied impact fatigue load. Although a similar failure initiation and propagation can be assumed for CFRP/steel joints with a single brittle adhesive, the failure mechanism in dissimilar CFRP-steel SLJs with two different adhesives used along the lap is much more complex and, as shown in Figure 9, involves a combination of delamination in the CFRP part, zones of interfacial failure between the adhesive and the substrates, and a region of cohesive failure. Since two different adhesives with different behavior were used along the overlap, a combination of ductile damage and brittle fracture is to be expected before failure of the joint. On the other hand, impact fatigue cycles can also produce local damage and microcracks. The accumulation and growth of the damage zones finally led to the final failure of the tested bi-adhesive connections. Local delamination was also observed in the composite substrates. Although impact fatigue produces distinct zones along the overlap for single-adhesive SLJs (see [13] for more details), these distinct zones were not clearly observed on the surface of the bi-adhesive joints. Due to the stress wave distribution along the bonded area [20], a fatigue damage zone is expected in the area with the brittle adhesive, while the ductile adhesive is usually less sensitive to the applied low-energy impact loads due to its higher damping capacity. As shown in Figure 9, the fracture process of the bi-adhesive SLJs is a combination of cohesive failure through the adhesive layer, adhesive failure, and local CFRP delamination.
Micronutrients Improve Growth and Development of HLB-Affected Citrus Trees in Florida
Enhanced nutritional programs (ENPs) have improved citrus trees’ growth and development in the era of Huanglongbing (HLB). Studies conducted with variable rates of manganese (Mn) and iron (Fe) on young HLB-affected citrus trees showed that applying double the standard recommendation increased growth and biomass accumulation. Since HLB is believed to cause deficiency symptoms of micronutrients in citrus trees, it is critical to ensure their optimal levels in the leaves. This could be achieved by soil application of either a Mn rate of 8.9 to 11.5 kg ha−1 as MnSO4 (31%) for young HLB-affected ‘Valencia’ (Citrus sinensis (L.) Osbeck) citrus trees or an Fe rate of 9.6 to 11.8 kg ha−1 as ferrous sulfate heptahydrate (20%) for ‘Bingo’ (Citrus reticulata, Blanco) citrus trees. Maintaining optimal levels of these micronutrients may enable citrus trees to carry out photosynthetic activities to ensure growth and development. It may also help the tree in the regulation of various physiological processes as part of the antioxidant enzyme Mn-superoxide dismutase (Mn-SOD). Micronutrient manipulation through variable rates of fertilizer application to influence nutrient availability is an important mitigating factor for HLB-affected citrus trees and an integral component of citrus production in Florida.
Introduction
During the last two decades, the total citrus production in the United States (US) has declined significantly [1][2][3]. Florida, the second largest citrus producer in the nation, has recorded the greatest reduction of more than 80%, from 13.5 million tons in 1998 to about 2.6 million tons in 2021 [4]. Despite this reduction, citrus remains one of the leading tree crops produced in Florida, contributing about $1.3 billion annually to the state's economy [4]. Citrus production decline has been largely ascribed to huanglongbing (HLB) or citrus greening disease that was first reported around 2005 [5,6]. Other challenges, such as hurricane incidence, disease and pest infestation, prolonged water scarcity, and market competition, have also contributed to the decline in production [6][7][8][9][10][11]. However, HLB has been the major cause of yield reduction, poor juice quality, and smaller fruit sizes in recent years [1,12].
HLB, which means "yellow shoot disease" in Chinese, is believed to be caused by multiple groups of phloem-limited bacteria belonging to the genus Candidatus Liberibacter, principally Candidatus Liberibacter asiaticus (CLas) [6,13,14]. HLB is phloem-limited because the bacteria propagate in the phloem of the tree, where the translocation of minerals takes place [14]. HLB is spread from tree to tree by an insect vector called Diaphorina citri Kuwayama (Asian citrus psyllid, ACP) [5,6,14]. The ACP completes its life cycle, which consists of eggs, nymphs, and the adult stage, on new growth or on shoot tips [6,14]. Its mode of transmission is by feeding and injecting the bacteria into the phloem of the tree [5,14]. HLB was first found in China in the 19th century and has spread to most parts of the world, thus threatening the global citrus industry [15,16]. After the disease was reported in 2005, HLB was detected in parts of the US such as Georgia, Louisiana, South Carolina, Texas, and California [6,16,17].
The disease interrupts the physiological functions of citrus trees, including translocation of mineral nutrients from one part of the plant to another, and produces symptoms such as yellow shoots, blotchy-mottled leaves, and branch dieback that affect the metabolism and growth of the tree [15,18]. When a citrus tree is HLB-infected, there is a decline in roots and fibrous root density, leading to a reduction in nutrient and water uptake, which in turn leads to nutrient deficiency symptoms and a decrease in yield [7,8,19]. Citrus trees affected by HLB usually show deficiencies in manganese (Mn), zinc (Zn), phosphorus (P), calcium (Ca), magnesium (Mg), iron (Fe), and boron (B), and also require a minimum amount of water [2,[19][20][21]. Therefore, application rates of micronutrients in general need to be readjusted to ensure their optimal levels in plants to enable growth and development. In addition, considering that the citrus nutrition management guidelines by the University of Florida Institute of Food and Agricultural Sciences (UF/IFAS) were developed prior to HLB in Florida, it is worthwhile to evaluate the rates of application for micronutrients to determine optimal thresholds, in a balanced fashion, that are therapeutic to tree health [2,20,22].
Currently, HLB has no cure; however, the management programs adopted for HLB include intensive chemical control of the ACP, aggressive removal of HLB-affected trees, enhanced nutritional programs (ENPs), and planting disease-free nursery rootstocks [23]. The frustration of managing citrus trees affected by HLB, along with adverse weather conditions, is influencing the decisions of some producers to either change crop type or use the land for a nonagricultural activity, contributing to the rapid decline of citrus production in Florida. The purpose of this review was to give an overview of how HLB has affected citrus nutrition, specifically micronutrient dynamics in citrus trees, and to assess research on the use of micronutrients as therapies for HLB-affected trees.
By far, ENPs appear to mitigate HLB and help manage citrus trees in the era of the disease [2,18,20,22,24]. The nutritional level of plants and their defense mechanisms can be highly interrelated, as some studies have shown the benefit of micronutrients for both the health and the natural defense of citrus crops in response to the action of diverse types of pathogens [2,18,20,25]. As mentioned earlier, there are many factors that could limit citrus fruit yields, including diseases and hurricanes [3,7,8,10,19]. However, inadequate nutrition, especially during the critical growth period of citrus trees, may not only reduce yield but also produce fruits with poor juice and size quality [1]. According to Morgan et al. (2016), the interaction between HLB-affected trees and nutrient uptake can vary, resulting in different nutrient concentrations in plant tissues depending on the mobility of that nutrient [3,18].
For the moment, mitigating HLB and keeping citrus trees healthy will require more than one approach until a cure is found [9,19]. Some citrus growers are now combining customized fertilization with pest control practices to keep trees productive [14,[26][27][28][29]. This review highlights efforts made by researchers to mitigate the negative impacts of HLB and to keep citrus trees productive, with an emphasis on micronutrients. Since there has not been any known review about how ENPs could be used as a therapy for HLB in Florida, this review may identify research in the area of citrus nutrition and gaps in knowledge to improve citrus production in the era of HLB.
Resistant or Tolerant Rootstock
Rootstocks have played a crucial role in helping to maintain citrus trees' productivity in the era of HLB [30]. Although no rootstock resistant to HLB has been found, some rootstocks have been observed to provide more tolerance to the disease than others [31]. A study on 15 different rootstocks revealed that Volkamer lemon and US 897 rootstocks have shown a level of tolerance that might enable young trees to withstand the damaging effects of HLB [31]. Since it has been shown that greater than 40% of the citrus root is damaged before HLB symptoms are observed in above-ground tissues [7,32], a rootstock that is tolerant to HLB would make a significant difference as far as the growth and development of the tree is concerned [31]. The capability of some rootstocks to adapt well to different soil conditions makes them more tolerant to HLB than others [30,31]. However, rootstock performance may depend on several factors, including the drainage system of the soil, pH, and salinity, among others [30]. Soil pH, specifically, plays an important role in the absorption of water and nutrients and in root growth [33]. So far, no rootstock is resistant to HLB; however, rootstocks may show some form of tolerance to HLB if they are able to withstand the adverse conditions stated above [31].
Use of Protective Covers
Another tool that has been key to providing young citrus trees (1-4 years old) a lifeline against infestation by ACP is the individual protective cover (IPC) [12]. These are screens that fit over each tree to prevent feeding by the ACP for as long as the prevention method lasts [34]. The IPC is said to be one of the most effective strategies to prevent the incidence of ACP, which in turn keeps citrus trees free from HLB [12]. The IPC is made of either high-density polyethylene or polyvinyl with a mesh size smaller than the ACP [12,30]. The use of IPCs has been only partially adopted by commercial citrus growers because the method can be expensive and may increase production costs [30]. Therefore, it is economically applicable for fresh fruit producers, who typically have relatively high returns [12,30]. The IPC has successfully excluded ACP from contact with the leaves [12].
Chemical Control of ACP
Intensive chemical control programs against ACP have been deemed necessary to combat HLB by growers in Brazil and Florida [35]. The use of insecticides is traditionally among the earliest and most widely adopted strategies used to control insect infestations in crop production and minimize the spread of plant disease [30,35]. Applying insecticides at critical flushing periods can significantly reduce populations of ACP; therefore, routine applications of insecticides in Florida citrus have been recommended to control ACP [13]. For insecticide application to control ACP, timing is crucial because the active chemical may work on the adult, the nymph, or both [13,30]. Fenpropathrin, for instance, kills ACP adults within minutes, even before they can acquire and transmit the disease [30,35]. Therefore, it is critical to know at which stage to apply the insecticide for effective results [13]. For the control of ACP, insecticides such as imidacloprid, Chlorpyrifos 4E, fenpropathrin, Dimethoate 400, Endosulfan, and Malathion, among others, have been labeled for use in citrus [13,30,35]. However, frequent use of one class of insecticides and application of higher doses have caused the ACP to develop strategies that detoxify the active ingredient, rendering the chemical harmless to the ACP [30]. This has made most insecticides ineffective for the control of ACP [30,35]. Some investigators have confirmed resistance of ACP to insecticides, for example, in the US, Brazil, and Vietnam [36][37][38]. There have been reports on the use of natural predators such as lady beetles (Coleoptera: Coccinellidae), syrphid flies (Diptera: Syrphidae), lacewings (Neuroptera: Chrysopidae, Hemerobiidae), and spiders (Araneae) [16,23,36]. However, not much has been reported on the extent to which these predators reduce ACP infestation to be considered biological control agents [36].
Use of Enhanced Nutritional Program(s)
Multiple studies support the use of ENPs to mitigate the damaging effects of HLB [18,22,24,[39][40][41][42]. When citrus trees are infected by HLB, they show signs of micronutrient deficiencies (for example, zinc, manganese, and iron). This may be because severe HLB causes phloem plugging, limiting nutrient translocation within the plant [14,25], and damages more than 40% of the fibrous roots [7,32], which may, in turn, inhibit nutrient absorption. Considering the importance of micronutrients such as Mn and Fe in photosynthesis, and of Mn as a component of Mn superoxide dismutase (Mn-SOD) in the plant's defense against stress, a deficiency of Mn may be damaging to the overall development of the tree [43]. Some researchers in Florida found enhanced foliar micronutrient application to increase yield compared to the standard micronutrient application, although, in their study, it was not cost-effective [2,40]. Researchers also indicated that Mn applied as sulfate increased yield when compared to other macro- and micronutrients applied [18].
What Is Known
Studies have shown that ENPs are an effective way to mitigate HLB, since the affected trees exhibit nutritional imbalances [3,26]. The supply of Zn, Mn, or Cu through the leaves and soil tends to help HLB-affected trees avoid the negative impact of the disease [44]. Researchers evaluated the effects of Zn, Mn, and Cu on the physiological growth of HLB-affected and healthy control trees [44]. They observed that the citrus trees infected with CLas generally had lower dry-weight biomass, irrespective of the applied treatment, when compared with the healthy trees [44]. These results agreed with those of other researchers in Florida, who observed a significantly reduced dry-weight biomass for 2-year-old HLB-affected sweet orange trees that were subjected to variable Mn rate applications [22]. In 2016 and 2017, a study conducted to evaluate the interaction of HLB and foliar application of Cu on the growth and nutrient acquisition of sweet oranges also showed reduced dry-weight biomass [20]. It seems that, once CLas infects the tree, growth is retarded to some extent, and this may be because of an interruption of metabolism due to starch accumulation in the phloem [5].
In 2019, researchers conducted a study on the therapeutic effects of Mn and other micronutrients on HLB-affected sweet oranges and observed that applying four times (4×) the standard recommended rate was therapeutic for the 8-10-year-old HLB-affected trees [42]. They reported that the citrus trees subjected to 4× the standard recommendation had a high cycle threshold (Ct) relative to the other treatments [42]. However, a study conducted by other researchers on the growth and development of 1-3-year-old HLB-affected sweet orange and mandarin trees subjected to variable rates of Mn and Fe, respectively, observed trunk and height increases with rates equivalent to double (2×) the standard recommendation [22,24]. In these same studies, no impact of the micronutrients on Ct values was observed [22,24]. This suggests that there is some variability in the micronutrient rates needed for better growth and in their relationship with Ct values and the age of the trees.
In some cases, foliar Mn application at 3× the standard rate showed a 45% yield increase when compared with the unsprayed control [3,18]. However, researchers observed a 25% yield reduction when trees were subjected to 6× the standard recommendation, with an increase in canopy size at the expense of yield [18]. This suggests that modifying the traditional micronutrient recommendations in Florida seems to be appropriate for sweet orange trees affected by HLB [2,3,26]. A follow-up study on the latter observed that the applied micronutrients showed significant variation in seasonal root growth [2]. For example, there was a reduction in fine root length and density following the application of 3× the standard application rates for micronutrients [2]. Moreover, other researchers observed that foliar application of Fe2+ can restore the growth of citrus trees affected by greening [45]. In their study, the HLB-affected trees subjected to Fe2+ foliar application showed faster growth than the untreated control [45].
From the information gathered, it appears that HLB-affected trees require more micronutrients than what is traditionally recommended for citrus production in Florida to achieve optimum nutrition. It has also been observed that foliar application of micronutrients to supplement soil-applied nutrients is able to correct deficiencies in HLB-affected trees [22,24,39,42]. Because HLB can impact yield through poor growth and development caused by reduced root biomass and nutritional deficiencies [6][7][8][19], it is important that growers supply optimum nutrition while the tree is still young (between 1 and 4 years old). For this reason, knowledge of how much micronutrient the young tree requires may be necessary for growth and biomass accumulation [22,24]. Two recent studies provided the optimal ranges of Mn and Fe at which growth and biomass are at a maximum (Figures 1 and 2) [22,24]. Thus, if citrus growers in Florida apply Mn and Fe at these rates, the tree may have the capability to produce and hold fruits, and this might improve the overall yield.
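As illustrative arithmetic only (assuming the quoted rates refer to elemental Mn and Fe, and that the percentages are the products' elemental contents), the corresponding amounts of fertilizer product can be back-calculated as the elemental rate divided by the elemental fraction:

```python
# Hypothetical conversion from elemental rate to product rate (kg/ha).
def product_rate(elemental_rate_kg_ha, elemental_fraction):
    """kg/ha of fertilizer product needed to supply the target elemental rate."""
    return elemental_rate_kg_ha / elemental_fraction

print(f"MnSO4 (31% Mn), 8.9-11.5 kg Mn/ha: "
      f"{product_rate(8.9, 0.31):.0f}-{product_rate(11.5, 0.31):.0f} kg product/ha")
print(f"FeSO4.7H2O (20% Fe), 9.6-11.8 kg Fe/ha: "
      f"{product_rate(9.6, 0.20):.0f}-{product_rate(11.8, 0.20):.0f} kg product/ha")
```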
Enhanced Nutritional Program for Citrus Production
Enhanced nutritional programs (ENPs) are slow-or controlled-release, liquid or dry soluble granular fertilizers that contain all or most essential macronutrients and micronutrients to provide the citrus trees with readily available nutrients throughout the production season to mitigate the debilitating impacts of HLB. There are three major criteria that qualify a mineral element to be considered an essential plant nutrient [46,47]. These include (1) a given plant must be unable to complete its life cycle in the absence of the mineral element, (2) the function of the element must not be replaceable by another mineral element, and (3) the element must be directly involved in plant metabolism, for example, as a cofactor of an enzyme [46]. This means that all essential mineral elements for a citrus tree are deemed important, and if one of them is deficient, it can limit the growth potential of the tree. Similar to that of many other higher plants, citrus trees require all essential nutrients in their right proportion [1].
The goals of optimal nutrient management are to (1) ensure that plants have optimal levels of essential nutrients for growth and development throughout all critical growth stages, (2) guarantee an adequate supply of all essential nutrients either through plant roots or leaves, and (3) ensure that soil physical and chemical properties favor nutrient absorption by plant roots [1,46]. It is well understood that, by the time deficiency symptoms are observed on the leaves, a growing plant may have already lost part of its potential. Therefore, it is the goal of any nutrient management program to test plant leaf tissue to ensure that the levels of all essential nutrients are optimized.
Why Micronutrients Matter for HLB-Affected Trees
Manganese is an essential element for plants, intervening in several metabolic processes, mainly in photosynthesis and as an antioxidant enzyme cofactor [46,48]. The reduced form of Mn (Mn2+) is the only metal form available to plants. It is taken up through an active transport system in epidermal root cells and transported into the plant as the divalent cation Mn2+ [46,49,50]. According to past research, Mn has a profound influence on three physiological (metabolic) functions: (i) photosynthesis, particularly electron transport in photosystem II and chloroplast structure, (ii) N metabolism, especially the sequential reduction of nitrate, and (iii) the synthesis of aromatic ring compounds as precursors for aromatic amino acids, hormones (auxins), phenols, and lignin [50,51]. The concentration of Mn in the soil may be controlled by chemical complexes formed by Mn2+ at low or high pH [52]. At higher soil pH (up to about pH 8), Mn2+ is auto-oxidized to MnO2, Mn2O3, and Mn3O4, which are not normally available to plants [49,52,53]. Manganese is an important oligoelement involved in the regulation of many different physiological processes, as well as part of the antioxidant enzyme Mn-SOD [54]. Manganese deficiency greatly affects photosynthesis; however, visual symptoms occur only when plant growth is severely depressed [55]. Deficiency symptoms are observed in newly emerged leaves because the low phloem mobility of Mn prevents remobilization of Mn from older to younger leaves [55]. In addition, Mn deficiency causes reductions in lignin concentrations in plant roots [52,55]. Research has revealed that Mn deficiency in citrus may significantly reduce yield and fruit color, and the fruit may become smaller and softer than normal [1].
Iron (Fe) is a transitional element characterized by the relative ease with which it may change its oxidation state and by its ability to form complexes with different ligands [46]. This variability is essential in biological redox systems [46]. Iron as a micronutrient is required by most plants in small quantities. It is well known for its role in metabolic processes such as deoxyribonucleic acid (DNA) synthesis, photosynthesis, and respiration [56]. It is also a constituent of many electron carriers and enzymes and is therefore important in plant metabolism [56]. The presence of Fe in iron-containing heme proteins makes its levels in the plant critical for the electron transfer chain, e.g., in cytochromes [57]. Cytochromes are found in the electron transfer systems of chloroplasts and mitochondria [46,57]. Other heme enzymes are catalase and peroxidases [46]. It is reported that, under conditions of Fe deficiency, the activity of both types of enzymes declines [46].
Although Fe is abundant in the soil, it is mostly in a complex form, and plants absorb Fe by an active process, expending energy to reduce Fe3+ to Fe2+ and make it available for absorption in the rhizosphere [57,58]. Plant iron absorption also depends on soil pH and redox potential [58]. At lower pH, Fe is readily available to plants; however, in aerobic soil conditions and high-pH soils, Fe is in the form of insoluble ferric oxides [46,58]. Since HLB weakens the tree's immune system [1,3] and contributes to the loss of more than 40% of the fibrous root system [32], it is a concern that affected trees may not exert enough energy to absorb the required Fe, affecting the rate of Fe absorption [7,32,46,58]. It is therefore critical to provide an adequate amount of Fe in a form that is readily available in the rhizosphere to increase its chances of being absorbed [1,24].
Iron deficiency is characterized by chlorosis in young leaves, which is associated not only with the decline of chlorophyll and β-carotene but also with changes in the expression and assembly of other components of the photosynthetic apparatus [46,58]. Due to the low solubility of the oxidized ferric form in an aerobic environment, Fe in the soil is mostly not available to plants [59]. When the plant is deficient in Fe, the ferredoxin content is decreased to a similar extent as the chlorophyll content, and the fall in ferredoxin level is associated with a lower nitrate reductase activity [46,60].
Low pH and high moisture conditions could trigger Fe toxicity and may be a serious problem for the growth and development of citrus [1,46,58]. Even though this condition is predominantly observed in waterlogged soils and in the event of heavy rainfall or excess irrigation [46], other researchers have reported that the iron-catalyzed formation of oxygen free radicals in the chloroplasts can cause Fe toxicity under dryland conditions [61].
What Could Be Done in the Future?
The USDA-NASS projected about a 32% decline from the previous year's total production (2.6 million tons, 2021-2022) for the 2022-2023 citrus production season [4]. This projection was made in September 2022, before Hurricane Ian hit Florida; thus, the actual production could be much lower [62,63]. Pathogens can alter the nutrition of citrus trees in diverse ways that are reflected in the symptoms of the disease. Some pathogens may immobilize nutrients in the rhizosphere or in infected tissues and interfere with translocation and utilization efficiency [64]. However, mineral nutrients confer crop resistance and tolerance to diseases [62]. Nutrient manipulation through fertilization, or modification of the soil environment to influence nutrient availability, may be a useful cultural control for plant disease and an integral component of production agriculture [62,64]. In general, it is expected that citrus demonstrates some symptomology from pathogen attack or disease infection, especially when deficient in one or more micronutrients, since most micronutrients intervene in the activity of chemical processes (redox processes, ROS production, etc.) or enzyme biosynthesis [65,66].
This review focused on the impact of essential micronutrients on mitigating HLB to promote citrus tree growth and development, which in turn might improve yield. It is expected that micronutrient fertilization for HLB-affected citrus trees will be updated in view of recent study findings, to ensure that tree productivity is improved. It may be ideal to investigate the impact of variable rate application of other micronutrients such as Zn, Cu, and B on HLB-affected citrus trees. The latter, among other previous information provided in this review, might be a critical addition to citrus nutrient management guidelines. Another way to mitigate the effects of HLB is to launch site-specific research into other methods of fertilizer applications to maximize nutrient use efficiency. This could be either a single application method, such as fertigation only, or a combination of fertigation supplemented with either foliar or granular application to ensure better tree growth and development.
Asthma and Keratoconus: An analysis of the risk factors association with the severity of keratoconus
Background: A cross-sectional study was undertaken in Australia to explore a wide range of risk factors associated with keratoconus. A questionnaire addressing age, gender, educational background, ocular and medical history, and smoking and alcohol consumption was collected, a physical examination comprising anthropometric measurements was performed, and an eye examination was undertaken. The associations between a range of risk factors and keratoconus were determined using univariate and multivariable linear regression analyses. Subjects with asthma showed a steeper corneal curvature on tomographic assessment in a
keratoconus cohort recruited in Australia. Our study has reported asthma as the only risk factor found to be significantly associated with keratoconus. The results of this study allow us to better understand the aetiology of keratoconus, and such knowledge could be useful to instigate systemic management of patients to slow or prevent keratoconus.

Background

Keratoconus (KC) typically presents as a bilateral, asymmetric condition characterised by progressive corneal thinning resulting in corneal protrusion, irregular astigmatism and decreased vision. The prevalence of KC ranges from 1:2000 (reported in 1986) [36] to 1:375 (reported in 2016) [16].
Glasses and contact lenses are the main treatment options for the mild and moderate stages of KC. In severe KC cases, the central cornea becomes extremely thin and irregular, and corneal transplantation surgery may be required to restore vision. KC is the commonest indication for corneal transplantation in Australia, accounting for 31% of grafts [21]. The aetiology of KC is likely multifactorial, reflecting the interplay of a range of genetic and environmental factors [12], but may also be influenced by mechanical trauma such as eye rubbing [4,15].
There is no medical treatment to halt the progression of KC apart from stiffening of the cornea by corneal collagen crosslinking (CXL) [10]. While diagnosing the end stage of KC for corneal transplantation is relatively straightforward, diagnosing and detecting the earlier stages, when most vision can be retained, still presents challenges. Despite KC being a relatively common corneal disease, there is a lack of large, prospective clinical studies of the disease. Indeed, the USA-based Collaborative Longitudinal Evaluation of Keratoconus (CLEK) study [5,46,49,53,54,55] and the UK-based Dundee University Scottish Keratoconus Study (DUSKS) [50] are the only two longitudinal prospective studies evaluating KC. These two studies looked into the characteristics and clinical data that might influence KC progression, but they did not include assessment of all known environmental risk factors. Other studies investigating risk factors in KC had methodological limitations, including: retrospective study design [1,51], small sample size [19,23,33,51], postal questionnaires [34], the absence of ocular biometric measures [6,7,8,28,30,45], relatively weak associations [33], and the lack of a comprehensive environmental risk factor questionnaire [6,7,8,28,30]. There is a growing need to better understand the aetiology of the condition, not only due to the recent increase in case rates but also due to the costs associated with the diagnosis and management of keratoconus, which represent a significant economic burden to the patient as well as to society [9].
Methods
This prospective study recruited patients of both genders over a wide age range and combined corneal tomography with a detailed questionnaire (covering all currently known major risk factors) in order to assess the association of these factors with KC. The results will allow us to better understand the aetiology of KC and also assist in early diagnosis so as to provide early disease management and ultimately arrest progression of this disease without the need for corneal transplantation.
The subjects for this study were recruited from public clinics at the Royal Victorian Eye and Ear Hospital (RVEEH), private ophthalmologists' rooms, optometry clinics or consenting general public with KC. The study protocol was approved by the RVEEH Human Research and Ethics Committee (Project # 10/954H). This protocol followed the tenets of the Declaration of Helsinki and all privacy requirements were met. A patient information sheet, consent form, privacy statement and patient rights were provided to all individuals participating in the study.
All KC patients were required to complete a study questionnaire and a clinical and physical examination, which included the anthropometric measurements of height and weight. Inclusion and exclusion criteria have been described elsewhere [39,40,41]. In brief, KC was diagnosed on the basis of the presence of one or more of the following: an irregular cornea (as determined by distortion of keratometric mires and/or corneal topography/tomography imaging), scissoring of the retinoscopic reflex, and demonstration of at least one biomicroscopic sign, including Vogt's striae, Fleischer's ring, or corneal thinning and/or scarring typical of KC. To assess the association of risk factors with disease, we obtained corneal curvature data from the four-map selectable display of the Pentacam tomography system (Oculus, Wetzlar, Germany).
The study questionnaire collected the following information: age, gender, educational background, birth history, childhood infections, time spent undertaking near work, intermediate work and outdoor activities, duration of keratoconus, ocular and medical history, smoking and alcohol consumption habits, information on the subject's general health status and current medication, and eye rubbing. Responses to questions on a number of disorders and diseases related to common medical conditions, including clinically diagnosed asthma, eczema, diabetes, hypertension, rheumatoid arthritis, migraine, allergy, connective tissue disorders, and sleep disorders, were recorded. The specific type, date of onset, and medications and/or other treatments were recorded.
Biometric Measurements
All individuals had their height (H) measured using a wall-mounted measuring scale (Livingstone International, Rosebery, Australia) and their weight (W) measured using calibrated electronic scales (Livingstone International, Rosebery, Australia). The two variables were then entered into the universally recognised formula BMI = W [kg] / H [m]², which allowed determination of the participant's body mass index (BMI).
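A minimal sketch of this computation:

```python
# BMI = W [kg] / H [m]^2, as used in the study.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(f"BMI for 70 kg, 1.75 m: {bmi(70.0, 1.75):.1f}")  # 22.9
```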
Statistical Analysis
All statistical analyses were conducted using Stata version 14.0 (Stata Corp, College Station, TX).
Normality of the variables was examined using boxplots and the Kolmogorov-Smirnov and Shapiro-Wilk tests. Continuous variables (age and BMI) are presented as median (interquartile range [IQR]) for skewed distributions and mean (standard deviation [SD]) for normal distributions, whereas categorical variables are presented as absolute (n) and relative frequencies (%). Mean corneal curvature of each eye was calculated automatically by Pentacam imaging as the mean value of central radial curvatures (Avg Km).
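The study used Stata; as a hedged Python analogue of this screening step (the data below are simulated placeholders, and the Kolmogorov-Smirnov check standardizes the data first, which is only a rough screen when parameters are estimated from the sample):

```python
# Normality screening and summary-statistic choice, mirroring the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
avg_km = rng.normal(48.0, 4.0, 251)  # simulated steepest Avg Km values, D

_, p_sw = stats.shapiro(avg_km)                       # Shapiro-Wilk
_, p_ks = stats.kstest(stats.zscore(avg_km), "norm")  # Kolmogorov-Smirnov

if min(p_sw, p_ks) > 0.05:  # approximately normal: report mean (SD)
    print(f"mean (SD): {avg_km.mean():.1f} ({avg_km.std(ddof=1):.1f})")
else:                       # skewed: report median (IQR)
    q1, med, q3 = np.percentile(avg_km, [25, 50, 75])
    print(f"median (IQR): {med:.1f} ({q1:.1f}-{q3:.1f})")
```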
Univariate regression analysis was performed to explore the association of a number of risk factors with KC. As all the subjects had asymmetric, bilateral KC, the steepest Avg Km of either the right or left eye was used to identify risk factor associations. In the second step, multivariable linear regression analysis was performed on all parameters that showed p < 0.1 significance with KC in the univariate analysis. All continuous variables were examined for correlations and multicollinearity using the Pearson product-moment correlation. A two-tailed p-value < 0.05 was considered statistically significant.
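A hedged sketch of the two-step regression strategy (again a Python analogue of the Stata analysis, with simulated placeholder data; the factor names are examples, not the full questionnaire):

```python
# Univariate screening at p < 0.1, then a multivariable model on retained factors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 251
factors = {
    "asthma":  rng.integers(0, 2, n).astype(float),  # binary risk factor
    "bmi":     rng.normal(25.0, 4.0, n),
    "smoking": rng.integers(0, 2, n).astype(float),
}
# Simulated outcome: steepest Avg Km (D) with an assumed asthma effect of 2.2 D
avg_km = 48.0 + 2.2 * factors["asthma"] + rng.normal(0.0, 4.0, n)

kept = []
for name, x in factors.items():
    p = sm.OLS(avg_km, sm.add_constant(x)).fit().pvalues[1]
    if p < 0.1:
        kept.append(name)

X = sm.add_constant(np.column_stack([factors[k] for k in kept]))
fit = sm.OLS(avg_km, X).fit()
print("retained factors:", kept)
print(fit.params.round(2), fit.pvalues.round(3))
```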
General Characteristics
The study included a total of 260 KC subjects, of whom 159 (61.2%) were male. Of the 260 subjects, 251 (96.5%) completed the general and medical questionnaire. The mean age of the recruited subjects who completed all aspects of the study was 35.5 (SD = 14.8) years. The majority of the subjects, 171 (68.2%), were European, while 37 (14.6%) were Asian (primarily South and South-East Asian) and the remaining 43 (17.2%) were of other ethnicity (including Middle Eastern, African and mixed) (Table 1). A significant number of subjects, 196 (78.1%), had completed at least secondary education, of whom 153 (61%) had also either attended or completed further study at a tertiary higher education institute, technical and further education, or a university degree.
Risk factor assessment
As a first step, univariate regression analysis was undertaken to identify associations at p < 0.1.
Associations with increased risk of KC were identified with higher BMI (p = 0.075), belonging to another ethnic group (p = 0.09), smoking cigarettes (p = 0.078), and having arthritis or asthma (p = 0.05 and p = 0.03, respectively). Diabetes was associated with a protective effect for KC (p = 0.08), as was eczema (p =
Discussion
We identified, for the first time, a statistically significant positive association between asthma and increased severity of KC. Subjects presenting with asthma had a 2.2 D steeper corneal curvature compared to subjects without asthma.
Sabiston, in 1966, first described a positive association between asthma and an increased risk of KC [37], and multiple other studies have strengthened the argument for a role of asthma in KC. In a New Zealand study, it was noted that 38% of 673 KC individuals presented with asthma compared to 18% in the general population [34], and in the Collaborative Longitudinal Evaluation of Keratoconus (CLEK) study, asthma was reported at 14.9% [53]. In the Dundee University Scottish Keratoconus (DUSK) study in Dundee, KC patients with asthma were diagnosed up to 3.1 years earlier than those without asthma [50]. In addition, in an Israeli study of 662,644 adolescents, an increased risk of asthma in 807 KC patients (OR 1.5, 95% CI 1.3-1.6) was reported [31]. Similarly, an association was reported in a Danish nationwide registry study (OR 2.21, 95% CI 1.91-2.55) [3] as well as in a United States Nationwide Health Care Claims Database analysis (OR 1.31, 95% CI 1.17-1.47) [51]. However, no studies have assessed the effect of asthma on the severity of KC.
Our finding adds to the growing evidence that asthma not only appears to be associated with KC but also appears to increase the severity of the condition. This is an important finding, as it is currently estimated that approximately 300 million people in the world have asthma, and prevalence in Australia is reported as the highest in the world at 21% for clinical asthma [47].
Asthma & KC in children
Asthma is an inflammatory disease of the small airways of the lung and has been reported to be caused by a combination of genetic predisposition and environmental factors [11]. These environmental factors likely include indoor and outdoor allergens, tobacco smoke, chemical irritants and air pollution, and likely prompt an allergic reaction or airway irritation. Asthma is a common chronic disease that affects people of all ages in all parts of the world. The International Study of Asthma and Allergies in Childhood reported that, globally, the age distribution of the burden of asthma peaks at age 10-14 years and that asthma is a cause of substantial burden of disease, including both premature death and reduced quality of life, in people of all ages [13].
The onset of KC is usually in the teens to early adulthood, but there are reports of increasing prevalence in children as young as 4 years of age [38]. Paediatric KC has also been shown to be more aggressive than adult KC [22]. Our current finding that asthma has an association with increased severity of KC is thus important in the examination and systemic management of paediatric KC subjects. While KC in adults has been studied extensively, the disease in the paediatric population has not. Visual impairment in paediatric patients may affect social and educational development, thus negatively impacting their quality of life. Our recent findings show that the quality of life in KC patients is lower than that in patients with later-onset eye diseases such as age-related macular degeneration or diabetic retinopathy [39], highlighting the significant long-term morbidity associated with KC. It is therefore necessary to consider asthma as a major contributing risk factor in paediatric KC cases, which may lead to a paediatric-specific therapeutic algorithm.
Shared genetic path for Asthma and KC
A large number of genetic loci have been implicated in asthma through the use of linkage, genome-wide association studies (GWAS) and animal studies. There is some evidence to support a possible shared genetic contribution between asthma and KC. A previous GWAS conducted on asthma in an Australian population identified a significant association with the single nucleotide polymorphism (SNP) rs4129267 (odds ratio 1.09, p = 2.4 × 10⁻⁸) in the interleukin-6 receptor (IL6R) gene [14]. The ligand for this receptor is the interleukin 6 (IL6) gene. IL6 is known to be a potent pleiotropic cytokine that regulates cell growth and differentiation and plays an important role in the immune response [20]. Shetty et al. recently reported up-regulation of IL6 mRNA in tears of KC patients [44]. Thus, it could be postulated that increased IL6 levels lead to enhanced up-regulation of the immune response through binding with its receptor (IL6R), followed by subsequent activation of the JAK-STAT pathway. While increased IL6 mRNA expression has been detected in tears of KC patients, this does not necessarily confirm IL6 as a causative agent in KC. Interestingly, the cornea is one of the tissues of the eye exhibiting the highest expression of IL6R, whereas IL6 has a low expression in this tissue (Ocular Tissue Database, University of Iowa). Ultimately, a direct comparison of gene expression or protein levels will be required in KC compared to non-KC corneal tissue to substantiate these findings. Interestingly, the convergence of immune genes and pathways from different diseases has recently been demonstrated in a meta-analysis of GWAS in ten paediatric autoimmune diseases, where such signalling pathways are a common denominator [29].
Other risk factors
In addition to asthma, we also assessed several other commonly reported risk factors which have not been assessed together in KC, including age, gender, educational background, birth history, childhood infections, time spent undertaking near work, intermediate work and outdoors, duration of keratoconus, ocular and medical history, smoking and alcohol consumption habits, systemic conditions and eye rubbing. Several of these were significant at the univariate level (BMI, ethnicity, cigarette smoking, arthritis, diabetes and eczema) and suggestive at the multivariate level (eczema, arthritis and diabetes, with p-values of 0.07, 0.09 and 0.1, respectively). Interestingly, arthritis showed a positive association with KC, whereas diabetes and eczema were associated with a protective effect for KC. The significant "protective effect" of diabetes against KC [2,24,25,26,27,32,35,42] has been previously described in the literature, and researchers have investigated the biochemical properties of the cornea to explain this effect [17,43,48]. We report findings on time spent on near work, intermediate work and outdoor activities for the first time. As KC typically affects teens to early adults, we were expecting a positive association with near work-related activities, as seen in myopia [18]. However, no such trend was evident, suggesting that KC is driven more by pathophysiology than by these environmental conditions.
Strengths & Limitations
The main strength of the current study is that it included a wide range of risk factors whose association with KC could be assessed. To the best of our knowledge, there are no previous studies which have looked at the association of hypertension or migraine with KC, and there are very few studies that assess potential risk factors such as cigarette smoking and alcohol consumption. The design of this study has addressed several methodological issues found in previous studies on KC, such as small sample size [23], postal questionnaires [34], data from medical records [52], the absence of ocular biometric measures [6,7,8,28,30,45] and the lack of a comprehensive environmental risk factor questionnaire [6,7,8,28,30]. Findings from this study are therefore envisaged to lead to substantial progress in our understanding of the aetiology of KC and in classifying it as a quasi-inflammatory condition.
The limitation of the study is the lack of a clinically quantified severity risk factor questionnaire for some of the collected measures. Data collected on eczema and asthma did not record severity, number of episodes or length of time presenting with these indicators. It would have been interesting to find out more details about these two conditions, including severity of asthma and current treatments, as our results and the literature suggest an association of these with KC and with severity of KC.
Conclusions
In conclusion, this study describes the assessment of a range of risk factors in a large KC cohort recruited in Australia. Our study has reported asthma as the only risk factor found to be significantly associated with KC following multivariate analysis. In addition, as multifactorial disorders, keratoconus and asthma may share some overlapping risk factors, such as shared genes or common environmental factors. Future studies need to be undertaken to confirm our findings, and GWAS should be undertaken to uncover the genetic associations with KC in an unbiased manner, which will help in elucidating potential gene pathways involved in KC.
Significance
Our results show that asthmatic patients tend to have more severe KC, and thus close monitoring for disease progression would be advised, as treatments such as corneal cross-linking can be performed to stabilise the disease, which may reduce the need for future corneal transplantation.

Ethics approval and consent to participate

The study protocol was approved by the Royal Victorian Eye and Ear Hospital Human Research and Ethics Committee (Project # 10/954H). This protocol followed the tenets of the Declaration of Helsinki and all privacy requirements were met. A patient information sheet, consent form, privacy statement and patient rights were provided to all individuals participating in the study.
Consent for publication
Not applicable
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare that they have no competing interests.
A Broadband Asymmetrical Doherty Power Amplifier With Optimized Continuous Mode Harmonic Impedances
This article presents a design methodology for an asymmetrical Doherty Power Amplifier (DPA) which achieves a high average efficiency at back off across its operating bandwidth. It is shown that by combining continuous modes with post matching techniques, it is possible to achieve excellent efficiency performance whilst maintaining broadband operation. Analysis is provided on how the knee effect can reduce the effective potential efficiency of Class J and Continuous Inverse Class F modes. An optimum combination of 2nd and 3rd harmonic impedances is then proposed for the carrier PA which can minimise this knee effect impact on efficiency performance. Also, it is shown how the drain supply can be used to improve the bandwidth over which an intrinsic optimum load can be maintained. Based on this analysis, a simple iterative design procedure is then presented which can be directly implemented with standard RF design tools. This design procedure is then verified in the design and manufacture of a prototype DPA using the Wolfspeed CG2H40010F GaN HEMT. The realised PA operates between 2.1 and 3.2 GHz with a peak power output of between 43.9 and 44.5 dBm. The PA achieves a high average drain efficiency of 64.7% at 8-9 dB of back off. The DPA has been tested with and without digital pre-distortion (DPD) by considering a 60 MHz LTE OFDM signal with 9 dB peak-to-average power ratio (PAPR). When DPD is enabled, the presented DPA achieves a drain efficiency of between 52.1% and 64.3% with an adjacent channel power ratio (ACPR) of between −42.2 and −44.1 dB over the bandwidth of 2.1-3.2 GHz.
I. INTRODUCTION
Wireless mobile communication standards are ever evolving, looking to provide higher data rates to users. To achieve these data rates, more complex modulation schemes are being used along with larger bandwidths and more frequency bands. The complex modulation schemes used tend to feature waveforms with high Peak-to-Average Power Ratios (PAPRs). These bandwidths and complex waveforms put extra demand on the performance requirements of Radio Frequency Power Amplifiers (RF PAs). Firstly, it is often difficult to match a transistor device within a PA to a 50 Ω system over a significant bandwidth while maintaining sufficient efficiency performance, due mostly to the quality factor of the matching circuits used. Secondly, high PAPR waveforms cause the efficiency of the PA to degrade due to the PA operating in output power back-off (BO). This has led to various techniques which try to improve efficiency at BO, including Sequential Power Amplifiers (SPAs) or load modulated amplifiers such as Doherty Power Amplifiers (DPAs) and Load Modulated Balanced Amplifiers (LMBAs). These load modulation techniques operate on the same fundamental principle, which is to change the impedance presented to the transistor device at BO to minimise DC losses and increase efficiency.
One promising technique is a DPA, though with the traditional implementation there can be bandwidth limitations. Traditional DPAs feature impedance transformations using quarter wave transmission lines, which are inherently limited in the bandwidth in which they can transform a given load. Also, traditional DPAs tend to be limited to 6 dB in BO. Therefore, to overcome this, the peaking and carrier amplifiers within the DPA need to be designed to deliver different power levels at Peak Output Power (POP), to then create asymmetrical operation and a high BO level which approaches modern PAPRs, which are typically in the range of 9 to 14 dB. To overcome the bandwidth issue there have been various techniques proposed which aim to use other ways of providing impedance transformation, where [1] gives a good review of these techniques. A promising technique to improve bandwidth is known as Post Matching (PM), which improves bandwidth by replacing the impedance inverter and impedance transformer within a traditional Doherty. Device parasitics are typically absorbed in the impedance inverter whilst an impedance transformer known as the Post Matching Network (PMN) is implemented as a higher order transformer to improve bandwidth. There are plenty of examples of PM in [2], [3], [4], [5], [6], [7], [8], with high BO operation achieved in [5], [6], [7], [8].
To improve the efficiency and bandwidth performance of the DPA, Continuous Modes (CM) can be introduced into the design. These modes can theoretically achieve high efficiency over a large bandwidth. There are multiple examples of implementing CM in a DPA within the literature in [9], [10], [11], [12], [13], [14], [15], [16], [17], where CM have been combined with the PM technique in [12], [13], [14], [15], [16], [18]. All of these examples, however, are limited to 6 dB in BO. In the particular cases of [12] and [16], there is some resistive loading of the harmonics, around 5-10 Ω, which can reduce overall efficiency. There is one example where a CM DPA achieves greater than 6 dB of BO in [17]. This approach achieves a good average drain efficiency of 57.2% at 9-10 dB of BO with a 36% fractional bandwidth.
When examining the use of CMs further, Class J is used in [12], [13], [14], [15], [16]. In [17], Continuous Class F (CCF) mode is used, where it can be difficult to achieve the required 3rd harmonic open circuit over the reported fractional bandwidth of 36%. In some instances the PA could therefore be operating closer to Class J, where there is a likelihood that the 3rd harmonic is terminated within the short region of the Smith chart over parts of the bandwidth. As will be demonstrated in this article when analysing CMs under the influence of a real device model, the use of the Class J mode with alpha (α) values approaching zero can greatly reduce efficiency. Also, with CMs it is often thought that the 3rd harmonic termination has little effect on efficiency performance [13], [19], and can therefore be ignored to ease the complexity of design. This is usually the case when careful consideration is made in the design to avoid knee clipping, such as the introduction of clipping contours in [20] and [21]. In this article it is found that considerations can be made on the 3rd harmonic impedance, which can then play a role in the achievable efficiency with the particular transistor model used. This has been touched upon in [21], where the 3rd harmonic impedance is offset to affect the 2nd harmonic clipping contours. In other words, there are potentially some optimum 3rd harmonic impedances which can help avoid knee clipping from occurring, and in turn improve performance, which is further corroborated in this article.
To improve on the state of the art for CM DPAs, a solution is proposed to improve BO efficiency whilst maintaining a high level of fractional bandwidth. The main novelty behind this solution is the use of the combination of Class J and Continuous Inverse Class F (CCF−1) modes for the carrier within the DPA, which avoids using Class J with α values close to zero, as seen in previous works. It is shown in Section II-A that efficiency performance can be impacted by the knee region with a real device for Class J at these α values. Also, as stated, there is typically a misconception that the 3rd harmonic has little effect on efficiency. As will be demonstrated in Section II-B, by controlling the 3rd harmonic impedance the PA can be driven further into saturation without knee clipping occurring, allowing for high efficiency to be achieved. As stated, in previous works in the literature a high intrinsic target impedance at BO is often used, which could cause bandwidth performance loss with some DPAs. The supply voltage is therefore proposed as a design variable in Section II-C, in an effort to reduce the quality factor of the required match for the carrier PA and to allow for an intrinsic load closer to the target load. A stable intrinsic load is important in a DPA, as any variance can either cause a PA to compress early and produce less power, or move further away from compression and therefore lose efficiency. Typically within a CM DPA the harmonics are matched by the PMN, though for the first time in this article a harmonic impedance buffer close to the transistor device is used to appropriately terminate the 2nd harmonic, alleviating the issues in resistive loading of harmonics seen in previous work, but also allowing for more control to achieve a particular set of harmonic impedances. The solution proposed by this article is also simple in its implementation and can remove many of the complexities due to asymmetrical load modulation, and therefore the extended analysis needed to describe the interaction between both carrier and peaking amplifiers. To simplify the design and maintain high bandwidth and efficiency performance, both carrier and peaking amplifiers are designed in a simple step-by-step manner. This process provides a superior first-run design which can then be tuned for performance within the simulator. Through the novel combination of modes used, along with optimised 3rd harmonic impedance terminations implemented through the use of a harmonic impedance buffer, a DPA with high average BO efficiency has been realised. To the best of the author's knowledge, the proposed CM-DPA shows the highest average drain efficiency seen in the literature at BO levels greater than 8 dB, as also depicted in Fig. 1.
II. THEORETICAL ANALYSIS

A. CONTINUOUS MODE ANALYSIS FOR OPTIMUM EFFICIENCY
Continuous Modes (CM) can be used to design a highly efficient broadband RF PA. These modes introduce a parameter α which allows for an extension of conventional modes, e.g. Class B or F. This α parameter allows for different combinations of reactive fundamental and harmonic terminations whilst still having similar efficiency performance to the more conventional underlying mode. When different PA modes were introduced, and then also their extension into CM, the waveforms of these modes often assume that there is an allowable full voltage swing which can reach zero to minimise overlap with the current waveform. Within a device such as a HEMT, however, there is a knee voltage which needs to be sustained across the drain-source junction so that a given amount of drain current can flow. The impact of the knee effect on the CCF mode has been studied previously in [22], where drain efficiency was shown to drop to around 70% for α values approaching zero, instead of the theoretical 90%. Here, some analysis is provided on the Class J and CCF−1 modes, which extends the initial analysis provided by the author in [23].
Theoretically, Class J should be able to achieve the same drain efficiency as Class B of around 78% across all α values, whilst for CCF−1 efficiency is closer to 90%. The fundamental and second harmonic impedances for Class J are Z_{f0} = R_opt(1 + jα) and Z_{2f0} = −j(3π/8)αR_opt, with −1 ≤ α ≤ 1, and the corresponding fundamental and second harmonic admittances for CCF−1 take the analogous continuous-mode form in terms of the same design parameter α. To examine these modes more practically under the influence of a real transistor device, and therefore examine any knee region effects on efficiency performance, a harmonic balance simulation is run with the ideal impedances of both modes presented at the intrinsic plane of the Wolfspeed CGH40010F large signal model. In all cases a perfect 3rd harmonic short is provided, with simulations run at a carrier frequency of 2 GHz. The resulting drain efficiency for varying 2nd harmonic phase of both modes for different intrinsic load values is shown by Fig. 2. With Class J, as the 2nd harmonic phase approaches a short, i.e. a phase of ±180°, the efficiency of this mode drops to between 55-60%. At either extreme of 2nd harmonic phase for Class J, however, at a phase of around ±90°, this efficiency increases up to 79-82%. Efficiency for CCF−1 remains relatively high across all values of 2nd harmonic phase and intrinsic load values, achieving between 79 and 88%.
The voltage and current waveforms for different α values for Class J are further examined, as shown in Fig. 3. These waveforms show the mechanism behind Class J, where for increasing values of α, more 2nd harmonic voltage is induced to narrow the peak of the voltage, offsetting any increased overlap between current and voltage due to some fundamental imaginary impedance. However, this same mechanism also stops the voltage from falling below the knee threshold, through some voltage trough boosting and a phase shift in where the voltage minimum occurs. This 2nd harmonic can therefore change the amount of knee clipping occurring, which impacts the amount of 3rd harmonic current generation, shown by the broadening of the current waveform in Fig. 3. This broadening causes more overlap with VDS and therefore causes the lower efficiency for α values approaching zero for Class J, seen previously. In theory, the current waveform is typically assumed to be a half-rectified sinusoid with only even harmonic content [22]. However, as demonstrated here, 3rd harmonic content within the current waveform can impact efficiency. To then examine the knee effect theoretically, a new definition of the current waveform is needed. However, this becomes very complex, as the amount of 3rd harmonic current has a multitude of dependencies, including input drive level, the phase at which the voltage drops below the knee threshold, and the impedance presented to the 3rd harmonic. To therefore arrive at a solution where high efficiency is maintained across a broad frequency range, a simple practical approach was taken in [23], where CCF−1 and Class J were combined at particular α values to maintain high average efficiency. This approach is extended in this article through an examination of the impact the 3rd harmonic impedance has on achievable efficiency.
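To illustrate the mechanism numerically, the short sketch below evaluates one textbook form of the Class J waveforms, v(θ) = V_DC(1 − cos θ)(1 + α sin θ) with a half-rectified sinusoidal current, against a hypothetical knee threshold; the supply, current and knee values are assumptions for illustration, not taken from the CGH40010F model.

```python
# Sketch of ideal Class J waveforms, used to show how the knee region
# interacts with the zero-grazing voltage. V_KNEE is a hypothetical threshold.
import numpy as np

V_DC, I_MAX, V_KNEE = 28.0, 1.0, 3.0      # assumed supply, peak current, knee
theta = np.linspace(-np.pi, np.pi, 4001)

# Half-rectified sinusoidal drain current, conduction centred on theta = 0,
# where the ideal Class J voltage grazes zero.
i = np.maximum(0.0, I_MAX * np.cos(theta))

for alpha in (0.0, 0.5, 1.0):
    v = V_DC * (1 - np.cos(theta)) * (1 + alpha * np.sin(theta))
    below_knee = np.mean(v < V_KNEE)   # fraction of the cycle below the knee
    overlap = np.mean(v * i)           # average v*i product: dissipation proxy
    print(f"alpha={alpha:3.1f}: {below_knee:5.1%} below knee, "
          f"overlap={overlap:4.2f} W")
```

In a real device, any part of the cycle where v falls below the knee collapses the drain current, which is the origin of the 3rd harmonic current content and the efficiency loss discussed above.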
B. THIRD HARMONIC IMPACT ON EFFICIENCY
In theory, for both CCF−1 and Class J modes, a perfect 3rd harmonic short is usually provided. In reality, however, providing this ideal short over a wide frequency band can be difficult. The 3rd harmonic impedance is therefore likely to vary when considering a broadband PA. As demonstrated previously, the amount of 3rd harmonic content in the current waveform can impact efficiency. However, this was only studied for the case when the 3rd harmonic is perfectly shorted. For that reason, the impact the 3rd harmonic impedance has on efficiency needs to be examined. This is achieved through another harmonic balance simulation with the ideal impedances of both modes presented at the intrinsic plane of the Wolfspeed CGH40010F large signal model. This time, however, for each combination of fundamental and 2nd harmonic impedance, the 3rd harmonic is swept fully around the edge of the Smith chart at the intrinsic plane.
The impact of any variance of 3rd harmonic impedance is first examined on the amount of 3rd harmonic content within the current waveform, along with the maximum achieved efficiency for a range of 2nd harmonic impedances, shown in Fig. 4. Observing the 3rd harmonic content gives an indication of the amount of knee clipping occurring. It is shown that by terminating the 3rd harmonic with some imaginary impedance between −30 and −10 Ω, a lower amount of 3rd harmonic current is generated, along with a higher achieved efficiency. This is further demonstrated through plotting intrinsic efficiency contours for the 3rd harmonic impedance in Fig. 5. Again, these load pull contours show that improved efficiency can be achieved by offsetting the 3rd harmonic impedance away from the traditional short. However, the considerable significance of all of these observations is the feasibility of being able to achieve this particular combination of 2nd and 3rd harmonic impedances in practice, due to the natural rotation of the 2nd harmonic into the 3rd harmonic design space.
To therefore confirm these observations over a frequency range of 2.1-3.2GHz, this combination of modes with the 3rd harmonic impedance offset are then verified with a harmonic balance simulation, where for each frequency a different CM α value is chosen and the phase of the 3rd harmonic reflection coefficient is varied between −130 • to −152 • .The impedances presented over this frequency range and the resulting efficiency and power performance is shown in Fig. 6.These results show the benefits of using these harmonic impedances for efficiency performance.Therefore, the carrier amplifier within the DPA in this article is designed to these optimised CM harmonic impedances in an effort to achieve high BO efficiency.
C. DRAIN SUPPLY AS A DESIGN PARAMETER IN DPAS
When designing a DPA for asymmetrical operation, the amount of power the carrier and peaking PAs provide becomes unequal, and they therefore require separate designs. This means that careful considerations must be made on the optimum load which each PA is matched to, and the supply voltages used for each PA. However, as will be demonstrated here, a consideration also needs to be made on how the supply voltage affects the quality factor of the required matching impedance. This is of particular importance when looking to maintain asymmetrical operation over a wide bandwidth. This can be achieved by means of the relationships between output power, optimum load and supply voltage. The average power generated by the PA can be calculated with the integral of the drain voltage and current over one RF cycle, namely

P_avg = (ω0 / 2π) ∫₀^{2π/ω0} v_DS(t) i_DS(t) dt, (5)

where ω0 = 2πf0 is the angular frequency of the RF carrier. By solving this integral, the optimum load in relation to voltage and power can be represented as

R_opt = (V_supply − V_knee)² / (2 P_avg). (6)

By rearranging this optimum load equation, the required supply voltage for a given load and output power can be obtained as follows:

V_supply = √(2 R_opt P_avg) + V_knee. (7)

These equations will be employed in the design of the DPA in Section III, where a specific optimum load will be specified based on a particular output power level for the carrier and peaking amplifiers. Based on this optimum load the required supply voltage can then be determined.
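As a minimal numerical check of Eqs. (6) and (7), the sketch below applies them to the carrier and peaking power levels quoted later in Section III, treating the knee voltage as negligible (the real design would use the device's knee voltage); it reproduces the supply and load values chosen there to within rounding.

```python
# Sketch applying Eqs. (6) and (7); V_knee is neglected here for simplicity.
from math import sqrt

def v_supply(r_opt, p_avg, v_knee=0.0):
    """Eq. (7): supply voltage required for a given optimum load and power."""
    return sqrt(2 * r_opt * p_avg) + v_knee

def r_opt(v_sup, p_avg, v_knee=0.0):
    """Eq. (6): optimum load for a given supply voltage and average power."""
    return (v_sup - v_knee) ** 2 / (2 * p_avg)

print(v_supply(40, 3.8))   # carrier at BO:  ~17.4 V (18 V is chosen)
print(r_opt(18, 10))       # carrier at POP: ~16.2 ohm (quoted as 15 ohm)
print(v_supply(30, 20))    # peaking at POP: ~34.6 V (34 V is chosen)
```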
Packaged devices tend to have a significant amount of parasitics between the intrinsic plane and the external package plane. When considering different intrinsic load values, these parasitics cause a large impedance transformation at the transistor package plane. It is therefore important to understand the impact that these package parasitics will have on the quality factor of a match and the ability to maintain a particular load over a given bandwidth. In previous works such as [17], the optimum load proposed at BO is greater than 200 Ω, which can greatly increase the quality factor of the match required. This means it can be difficult to maintain the target load over the full frequency band of the PA, either causing early compression of the carrier PA, or no compression at all. This can be inferred from measured efficiency and gain plots, where at particular frequencies there is no gain compression at BO along with low efficiency, indicating a lower intrinsic load than required at that particular BO power level. This typically occurs at the extremes of the PA's operating frequency band, with evidence of this non-optimal BO intrinsic load seen in other reported CM PM DPAs [13], [14], [15], [16].
Here, the quality factor of the required match is examined based on different supply voltages. This is done by calculating the required optimum loads at different supply voltages for a peak output power of 10 W, with a back-off power level of 3.5 W, which are both typical of the DPA designed for this article. The package parasitics are then embedded to observe the required package impedance needed between 2.1-3.2 GHz to achieve the intrinsic optimum load. The required intrinsic and package impedances for different supply voltages at the power levels stated previously are plotted in Fig. 7, along with the Class B loadline for each power value in each case. It is shown that by reducing the supply voltage, and in turn reducing the required intrinsic load value, the quality factor of the required package impedance transformation can be reduced. As stated previously, R_opt values at BO in the literature have been seen to range anywhere from 100 to 200 Ω, therefore potentially reducing the ability to maintain a given intrinsic load. Any variance in load can either cause early compression with less output power, or less compression and therefore a loss in efficiency. Another issue with an unbalanced intrinsic load is the difference in peaking gate voltage needed to allow the carrier PA to go into compression, which could become a varying quantity with frequency. In this article a lower supply voltage is used for the carrier PA to better maintain a given intrinsic load, in an effort to improve efficiency performance.
D. ASYMMETRICAL CM DPA DESIGN PROCESS
The block diagram for the asymmetrical Post Matching (PM) technique used in this article is shown by Fig. 8. PM is a well-known technique for improving bandwidth, achieved through the replacement of the impedance inverter and transformer within a traditional DPA. Impedance inversion for the carrier amplifier is implemented by absorbing device parasitics along with an offset line to ensure impedance inversion at the Load Modulation Point (LMP). Impedance transformation is then achieved through the Post Matching Network (PMN), which is used to provide an impedance Z_L at the LMP. As demonstrated previously in [24] and [15], keeping the carrier and peaking networks to the minimum lengths of 90° and 180°, respectively, can ensure the best possible bandwidth performance. To meet these phase criteria for both carrier and peaking networks, an offset line of impedance Z_P is added to both branches, also shown by Fig. 8. The same impedance is used for both lines to avoid a discontinuity in line width, making conversion from schematic modelling into EM simulations easier. As in [6] and [8], a component termed β is defined as the current ratio of the carrier and peaking branches leading into the LMP. In this article this is approximated by the ratio of power being delivered by the carrier and peaking PAs.
The main novelty of the PM approach taken in this article is the use of the optimised harmonic impedances defined previously for both PAs. For harmonic matching, impedance buffers are used as in [25] and later in [26] with the design of a CCF PA, where in this article only the 2nd harmonic is matched. This ensures that the 2nd harmonic is properly loaded close to the device to ensure optimum efficiency performance, whilst also minimising the electrical length added between the device and LMP, which can also greatly impact bandwidth performance. As also demonstrated by Fig. 8, a combination of Class J and CCF−1 is used for the carrier PA, whilst Class J is used for the peaking amplifier. A higher supply voltage will be used for the peaking PA, where using this class of operation within a given range of α values can still generate good efficiency, but also limit the peak intrinsic voltage from overreaching the breakdown voltage. Another novelty is a simplified design process for the design of a PM DPA, which is summarised below. With PM it can be complex to analytically solve the required impedances of all matching networks, such as in the co-design proposed by [5]. It is also difficult to achieve the required phase delay needed in the carrier circuit for impedance inversion over any significant bandwidth. This can result in imperfect impedance inversion with frequency, where low-order impedance inverters are used in [3] to alleviate this issue. This therefore makes maintaining any specific optimum impedance difficult over the bandwidth. For that reason the power ratio becomes a frequency-dependent quantity. However, various circuit parameters around the different carrier and peaking circuits rely heavily on the current ratio β. It can therefore become a complex problem to find these impedances analytically. In this article a simple design flow is proposed which can deliver a wideband DPA with high BO drain efficiency. This process will be further expanded on with the design of a DPA combining continuous modes with the post matching technique in Section III. The summarised steps are as follows: 1) Overall circuit dimensions are calculated, particularly the maximum power delivery of the carrier and peaking PAs, such that the PMN Z_L is defined and Z_P can be calculated.
2) The next step is to add an impedance buffer for harmonic matching to the carrier amplifier, along with the required delay line to achieve the necessary 90° for impedance inversion when connecting into the LMP and PMN (a transmission-line sketch of this inversion follows this list).
3) A load pull is then conducted on the carrier amplifier circuit, including the harmonic match and impedance inversion, to find the optimum frequency-dependent impedance the PMN needs to present to the carrier amplifier for the best efficiency performance at BO.
4) Once the carrier amplifier and PMN have been designed, the impedance which is presented to the peaking amplifier at the LMP is calculated. This is based on an estimated frequency-dependent current ratio taken from the simulated current from the carrier amplifier at the LMP. This impedance is then used to design the peaking amplifier for best efficiency performance.
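As a sketch of the inversion in step 2, the input impedance of a lossless line of characteristic impedance Z0 and electrical length θ can be evaluated directly; at θ = 90° it reduces to the classical inversion Z_in = Z0²/Z_L, and away from the center frequency the inversion degrades, which is why the carrier network is kept to the minimum 90°. The Z0 and Z_L values below are the 15.6 Ω offset-line impedance and 10 Ω PMN load used later in Section III.

```python
# Impedance inversion through a lossless transmission line (design step 2).
import numpy as np

def z_in(z0, z_load, theta_deg):
    """Input impedance of a lossless line terminated in z_load."""
    t = np.deg2rad(theta_deg)
    return z0 * (z_load + 1j * z0 * np.tan(t)) / (z0 + 1j * z_load * np.tan(t))

Z0, ZL = 15.6, 10.0            # offset-line impedance and LMP load (Section III)
for f_rel in (0.8, 1.0, 1.2):  # fractional frequency relative to band center
    print(f_rel, z_in(Z0, ZL, 90 * f_rel))
# At f_rel = 1.0 the line inverts exactly: Z0**2/ZL = 24.3 ohm. At the band
# edges the inverted impedance picks up reactance, illustrating the inherent
# bandwidth limit of quarter-wave inversion noted in the introduction.
```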
III. DESIGN PROCEDURE FOR A POST MATCHING CONTINUOUS MODE DOHERTY PA

A. CIRCUIT SPECIFICATIONS AND DIMENSIONING
Before proceeding with an accurate design, the overall DPA design specifications, i.e. output power, current and supply voltages, are determined. For this design, the DPA is designed to provide a peak power of 45 dBm with an efficiency peak at 9 dB BO. For peak power, the total power is split between 10 W for the carrier and 20 W for the peaking, resulting in a total of 30 W. For the carrier, the CG2H40010F is chosen, as 10 W is needed at POP. The datasheet of the CG2H40010F specifies that it can deliver a saturated power of 17 W [27], where a slightly higher supply voltage can be used to achieve the required 20 W. To avoid any additional parasitics associated with a larger device, and therefore the reduction in bandwidth performance that could cause, the CG2H40010F device is also chosen for the peaking PA. Based on the defined split of power at POP between the carrier and peaking PAs, the power ratio β is determined to be 1.8. At BO, the carrier amplifier needs to deliver 3.8 W (35.8 dBm). Using these defined metrics, the various impedance, drain supply and power values can be calculated for both amplifiers, shown by Fig. 9.
For the PMN, a Z_L of 10 Ω was chosen, which is close to the optimum package impedance of a load pull of Class J impedances. With this Z_L value and using the current ratio β = 1.8, at POP the impedance the carrier sees looking into the LMP becomes 27.7 Ω whilst the peaking amplifier sees 15.6 Ω. Based on these calculated impedances, and the power defined previously at POP, the carrier and the peaking will provide peak currents of 0.9 and 1.6 A, respectively, to the LMP. As mentioned previously, reducing the optimum load needed at BO can help reduce the quality factor of the match needed. An intrinsic R_opt of 40 Ω is therefore chosen, which provides room for any later modulation into lower resistance values at peak power without drawing more current than the transistor can supply. Taking this R_opt value and the power at BO, a drain supply of 18 V is calculated. Based on this supply voltage and a peak delivery of 10 W for the carrier, the intrinsic R_opt will be modulated to 15 Ω. For the peaking amplifier a drain supply of 34 V is chosen, which along with the peak power of 20 W results in an R_opt of 30 Ω.
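The dimensioning above follows from Z_L and the power split alone, as the short sketch below reproduces; the quoted 0.9 A carrier current reflects the rounded β, while the exact arithmetic gives approximately 0.85 A.

```python
# Sketch reproducing the circuit dimensioning above from Z_L, the power split
# and the current ratio beta (approximated by the peaking/carrier power ratio).
P_CARRIER, P_PEAKING = 10.0, 20.0        # W at peak output power
Z_L = 10.0                               # ohm, load presented by the PMN
beta = 1.8                               # current ratio defined in Fig. 8

z_lmp_carrier = Z_L * (1 + beta)         # ~28 ohm seen by the carrier branch
z_lmp_peaking = Z_L * (1 + 1 / beta)     # ~15.6 ohm seen by the peaking branch
print(z_lmp_carrier, z_lmp_peaking)

# Peak RF currents into the LMP follow from P = Z * I**2 / 2 in each branch.
i_carrier = (2 * P_CARRIER / z_lmp_carrier) ** 0.5   # ~0.85 A (quoted 0.9 A)
i_peaking = (2 * P_PEAKING / z_lmp_peaking) ** 0.5   # ~1.6 A
print(i_carrier, i_peaking)
```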
B. CARRIER AND PMN DESIGN
Once all circuit values have been calculated, the next step is the co-design of the carrier amplifier along with the PMN, with the first process being to match the 2nd harmonic. As demonstrated in this article, it can be beneficial to combine CCF−1 and Class J modes. The harmonic response between 2.1-3.2 GHz proposed earlier in this article gives the target impedances for high efficiency. These impedances are achieved by adding an impedance buffer with a combined 180° shorted stub and a 90° open stub. A series line is tuned, along with the lengths of the stubs, to provide the appropriate 2nd harmonic response. The stubs used and the resulting harmonic response are shown in Fig. 10. The 3rd harmonic response once the carrier PA is loaded by the PMN is also shown to fall within the region needed for high efficiency. The next step is then to add a delay line to achieve the necessary 90° for impedance inversion when looking into the LMP and PMN. This line has a characteristic impedance of 15.6 Ω, which matches the impedance the peaking amplifier sees looking into the LMP. This allows for continuity in the width of the two lines combining at the LMP. The length is then tuned until 90° is achieved at the center frequency, as shown by Fig. 10. The next step is to then perform a load pull to find the impedances the PMN needs to match to for the best BO efficiency performance. Before doing so, an approximation of the peaking amplifier is made with a 180° stub, thus avoiding the need to design the peaking amplifier first, but still allowing the effect it has on the intrinsic impedance to be observed during a load pull. The carrier output matching network along with the peaking network approximation is shown by OMN C in Fig. 10(a). The resulting efficiency contours from a load pull of the OMN C circuit are shown in Fig. 10(b). The matching tool in ADS is then used to design a PMN filter to match these load pull contours to 50 Ω. The resulting effect on the carrier intrinsic impedance once the PMN is included is seen on a Smith chart in Fig. 10(b), along with the impedance being presented by the PMN. Finally, the effect the PMN has on the intrinsic resistance is shown in Fig. 10(c). This shows that the PMN has helped match the intrinsic impedance closer to the desired intrinsic R_opt of 40 Ω. The resulting intrinsic waveforms of the carrier PA at different frequencies, once connected with the peaking PA in a harmonic balance simulation, are shown by Fig. 11(a).
C. PEAKING DESIGN & COMBINATION
The final step in the design process is to design the peaking amplifier. Similar to before with the carrier, the 2nd harmonic is first matched with a combination of a 180° shorted stub and a 90° open stub. A series line is tuned, along with the lengths of the stubs, to provide the appropriate 2nd harmonic response. The aim was to maintain the second harmonic within the Class J mode, where CCF−1 can cause a much higher voltage, which could potentially exceed the breakdown voltage with the high supply voltage being used for the peaking. The stubs used and the resulting harmonic response are shown in Fig. 12. The next step is to then match the peaking amplifier to the load presented by the PMN at the LMP. To do this, the frequency-dependent impedance presented by the PMN can be derived by calculating the frequency-dependent power ratio. This is achieved by measuring the current delivered by the carrier amplifier into the LMP in simulations, along with assuming the current delivered by the peaking PA into the LMP to be 1.6 A, as calculated previously. The resulting calculated impedance presented by the PMN to the peaking PA is shown on a Smith chart in Fig. 12(b), based on the calculated power ratio, as shown by Fig. 12(c). The same figures show the simulated power ratio and the PMN impedance presented to the peaking PA from a harmonic balance simulation with the combination of the peaking and carrier PAs into a DPA. This shows good agreement with the calculated values, demonstrating that the design method has good predictability of the load to which the peaking PA should be designed once the carrier PA has been designed.
Once the PMN impedance has been calculated, the peaking OMN is designed to match an intrinsic impedance of 30 Ω to this PMN load. This OMN also features the addition of a delay line to achieve the necessary 180° to provide as much of an open circuit as possible to the carrier amplifier. This line has a characteristic impedance of 15.6 Ω, which matches as closely as possible the impedance the peaking amplifier sees looking into the LMP, such that it has minimal effect on the match. The length is then tuned until 180° is achieved at the center frequency, as shown by Fig. 12(c). The final step is to then combine both carrier and peaking amplifiers into a DPA with a single harmonic balance simulation. The inputs of both amplifiers are matched and a Wilkinson divider is used to split the input power between both amplifiers. A delay line is added to the carrier input match to account for the extra electrical length in the peaking amplifier. This delay line is tuned until the carrier and peaking currents align in phase at the LMP. The layout is then converted to microstrip components and then electromagnetically simulated, where the output match is optimised for output power, BO drain efficiency and peak drain efficiency. The final electromagnetically simulated layout is shown by Fig. 13(a). The drain efficiency resulting from a harmonic balance simulation is then shown by Fig. 14. The simulated BO drain efficiency varies between 55 and 74.8% with an average of 66%. The drain efficiency at POP varies between 68 and 80.4% with an average of 72.3%.
IV. EXPERIMENTAL RESULTS
The fabricated PA is shown in Fig. 13(b); its size is 83 mm × 103 mm, using Rogers RT/duroid 5880 with 0.51 mm thickness. The PA is mounted on an aluminium heat sink to dissipate heat during operation. During measurements the carrier amplifier is supplied with a drain voltage of 18 V and biased with a quiescent current I_q = 40 mA. The peaking amplifier is supplied with a drain voltage of 32 V and a gate voltage of −6.4 V, which was experimentally found to ensure good BO performance. Both continuous-wave (CW) and modulated signals were used to characterise the PA, and the results are shown in this section. Digital Pre-Distortion (DPD) was also applied with modulated signals to demonstrate the ability to linearise the PA with a conventional DPD.
A. CW MEASUREMENTS
A CW measurement setup was built with an RF signal generator (Keysight N5183B) and a 35 W driver (Empower 1178BBM58CGM) connected to the PA input. The PA input power is sensed with a coupler (Mini-Circuits ZUDC20-0283) and a power meter (Rohde & Schwarz NRP-Z18), while the PA output power is directly measured with another power meter (Rohde & Schwarz NRP-Z23) with an embedded attenuator. Scalar calibration of the test setup was performed at frequency steps of 50 MHz and power steps of 1 dB. This calibration was performed with power steps to ensure any variation in the driver amplifier was accounted for. Fig. 14(a) shows the comparison of simulated and measured drain efficiency at peak and at BO power with respect to frequency. The PA operates between 2.1 and 3.2 GHz with an average peak drain efficiency of 73.9% across the bandwidth, with a maximum achieved of 84.9%. The measured average efficiency achieved at between 8-9 dB BO across the bandwidth of the PA was found to be 64.7%. Fig. 15 shows power sweeps of gain and drain efficiency vs. output power across the bandwidth of the PA at 50 MHz intervals.
The PA achieves a maximum power output of between 43.9 and 44.5 dBm and a small signal gain of between 9.5 and 11.6 dB. These are also shown by Fig. 14(b). There is some difference in the simulated and measured efficiency performance, which can likely be attributed to the large signal model only being validated for a supply of 28 V, as seen in Wolfspeed's documentation [27].
B. MODULATED SIGNAL MEASUREMENTS
After CW validation, the next step was to test the PA with a complex modulated signal such as Long Term Evolution (LTE) downlink signals, which typically present a high PAPR. The presented DPA is designed to be highly efficient and therefore presents a non-linear gain characteristic, and so DPD is beneficial to achieve linear operation. For this, a 60 MHz carrier-aggregated signal is generated in Matlab with a combination of three 20 MHz channels. The total PAPR of the signal was 9 dB, and the same 60 MHz signal is then used to test the PA at three different frequencies (2.2, 2.6, and 3 GHz). The DPD is implemented with a simple Generalized Memory Polynomial (GMP) and has a polynomial order of 11, a memory order of 3, and symmetrical lead and lag memory orders of 3.
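For illustration, the sketch below identifies a plain memory-polynomial predistorter, i.e. the GMP without its lead/lag cross terms, kept short for clarity, using the quoted nonlinearity order of 11 and memory depth of 3 with indirect learning; the signal arrays and the cubic "PA" are hypothetical stand-ins for the measured data.

```python
# Minimal memory-polynomial DPD identification via indirect learning. This is
# the GMP described above without its lead/lag cross terms, for brevity.
import numpy as np

K, M = 11, 3  # nonlinearity order and memory depth quoted above

def mp_basis(u, K, M):
    """Regression columns u(n-m) * |u(n-m)|**(k-1), odd k = 1..K, m = 0..M."""
    cols = []
    for m in range(M + 1):
        um = np.roll(u, m)  # wrap-around at the block edge, ignored here
        for k in range(1, K + 1, 2):
            cols.append(um * np.abs(um) ** (k - 1))
    return np.column_stack(cols)

def identify_dpd(x, y):
    """Fit a postdistorter mapping the PA output y back to its input x."""
    coefs, *_ = np.linalg.lstsq(mp_basis(y, K, M), x, rcond=None)
    return coefs

rng = np.random.default_rng(0)
x = 0.2 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
y = x - 0.05 * x * np.abs(x) ** 2        # hypothetical mildly compressive PA
coefs = identify_dpd(x, y)
x_predistorted = mp_basis(x, K, M) @ coefs  # drive signal after DPD
```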
Three different plots were used to demonstrate the PA's linearity before and after DPD: Amplitude Modulation/Amplitude Modulation (AM/AM), AM/Phase Modulation (AM/PM) and power density spectrum plots. The measurement results are shown in Fig. 16. Here, the AM/AM and AM/PM plots are shown by (a) and the power density spectrum plots by (b). When DPD is not used, an Adjacent Channel Power Ratio (ACPR) of between 28.4 and 29.4 dBc was measured. Once linearised, the PA achieved an ACPR of between −42.2 and −44.1 dBc.
C. PERFORMANCE COMPARISON WITH STATE OF THE ART
The performance of the presented DPA is compared with other published high-BO PAs in Table 1, which covers the state-of-the-art literature. As can be seen from Table 1, the proposed combination of continuous modes and the post matching technique shows its benefits in considerable efficiency performance of 73.8% at saturation and of 64.7% at 8-9 dB BO, whilst this DPA is also capable of maintaining broadband performance with a fractional bandwidth of 42%. To the best of the author's knowledge, the proposed CM-DPA shows the highest average drain efficiency at BO currently in the literature. Some works such as [28] and [29] have been able to achieve even higher fractional bandwidths of between 48-57%, though their average BO drain efficiencies can be more than 10% lower, i.e. 49 to 52%, in comparison to the presented DPA. The use of continuous modes, and in particular the specific combination of Class J and CCF−1 modes, has allowed this high BO drain efficiency to be achieved. Additionally, by exploiting the drain supply to change optimum load values, much higher performance has been achieved over a wider bandwidth.
V. CONCLUSION
This article has presented a simple design methodology for an asymmetrical DPA using continuous modes and the post matching technique to achieve excellent BO performance whilst maintaining broadband operation. Analysis was provided on how the knee effect can reduce the effective performance of Class J and Continuous Inverse Class F modes. An optimum combination of 2nd and 3rd harmonic impedances was proposed for the carrier PA which can minimise this knee effect impact on efficiency performance. Also, it was shown how the drain supply can be used to improve the bandwidth over which an intrinsic optimum load can be maintained. The presented design procedure was verified in the design and manufacture of a prototype DPA using the Wolfspeed CG2H40010F GaN HEMT. The realised PA operates between 2.1-3.2 GHz with a peak power output of between 43.9 and 44.5 dBm. The PA achieved a high average drain efficiency of 64.7% at 8-9 dB of BO. When tested with a 60 MHz LTE modulated signal with 9 dB PAPR, a drain efficiency of between 52.1% and 64.3% was achieved. Once digital predistortion was applied, the ACPR was reduced to between −42.2 and −44.1 dB.
FIGURE 1. State of the art BO average efficiency vs. fractional bandwidth, with values calculated from data read from graphs. All amplifiers referenced feature BO levels greater than 8 dB.
FIGURE 2. Maximum achievable efficiency based on the variation of 2nd harmonic phase, covering both Class J and CCF−1 optimum impedances for different R_opt values. Results are obtained by presenting optimum package impedances to the Wolfspeed CGH40010F large signal model with a harmonic balance simulation at 2 GHz.
FIGURE 3. Impact of varying α for the Class J mode on the resulting intrinsic current and voltage waveforms, with a look at how the 2nd harmonic voltage gives the overall intrinsic voltage a trough boost to minimise knee clipping. This results in a current waveform with a narrower peak due to less 3rd harmonic content from reduced knee clipping at higher α values.
FIGURE 4. Effect of varying the 3rd harmonic impedance between ±30 Ω on the 3rd harmonic content of the current waveform and the maximum achieved efficiency.
FIGURE 5. 3rd harmonic intrinsic efficiency contours based on the plotted combination of 2nd harmonic impedances of Class J and CCF−1 at 2 GHz. Any 3rd harmonic impedance bound by the contour line guarantees an efficiency equal to or greater than that of the contour line, for all plotted 2nd harmonic impedances.
FIGURE 6. Proposed combination of Class J and CCF−1 modes between a fundamental frequency of 2.1 to 3.2 GHz, along with a 3rd harmonic impedance offset for improved efficiency. These target impedances are validated with a harmonic balance simulation, with the results for drain efficiency and power plotted against frequency.
FIGURE 7. Required impedance transformation at the package plane, Z_p, and intrinsic plane, Z_int, for different supply voltages for a power output of 3.5 W at BO (red) and 10 W at POP (brown). It is shown that by reducing the supply voltage, and in turn reducing the required intrinsic load value, the quality factor of the required package impedance transformation can be reduced.
FIGURE 8. Overall schematic of the PM technique used, along with impedance buffers for 2nd harmonic loading and target 2nd harmonic impedances for both carrier and peaking PAs. The carrier branch is highlighted in orange and the peaking branch in purple.
FIGURE 9. Post matching diagram of calculated impedances, supply voltages, power and currents. The two impedance cases correspond to peak output and back-off power.
FIGURE 10. (a) Carrier harmonic stub and inversion circuit, with the peaking approximation in the square box, and PM circuit. (b) Smith chart of the fundamental, 2nd and 3rd harmonic response, including load modulation between BO and POP, along with load-pull contours of efficiency greater than 75% for the PMN impedance. (c) Carrier OMN length in degrees and intrinsic resistance with and without the PMN.
FIGURE 11. (a) Carrier amplifier intrinsic waveforms at BO and POP, with the loadline waveform shown to the left and VDS and IDS shown to the right. (b) Peaking amplifier intrinsic waveforms at POP, with the loadline waveform shown to the left and VDS and IDS shown to the right.
FIGURE 12. (a) Peaking matching circuit with harmonic stubs, matching and delay line. (b) Smith chart of the fundamental and 2nd harmonic response along with the PMN impedance presented to the peaking PA. (c) Peaking OMN length and calculated and measured power ratio β.
FIGURE 13. (a) Layout of the PA prototype with component values and microstrip widths/lengths annotated on it. (b) Photo of the fabricated PA.
FIGURE 14. (a) Measured and simulated CW peak and BO drain efficiency performance vs. frequency. (b) Measured and simulated CW maximum power output at −1 dB compression and small signal gain vs. frequency.
FIGURE 16. AM-AM and AM-PM (a) and output spectrum (b) of the tested PA with and without DPD with a 60-MHz LTE signal at 2.2, 2.6, and 3 GHz.

TABLE 1.
Decoding Early Childhood Caries: A Comprehensive Review Navigating the Impact of Evolving Dietary Trends in Preschoolers
This comprehensive review delves into the intricate relationship between evolving dietary trends in preschoolers and the prevalence of early childhood caries (ECC). The investigation meticulously analyzes ECC epidemiology, etiology, and preventive strategies. The review unveils the multifaceted nature of ECC, highlighting microbial, dietary, and environmental factors contributing to its development. Significantly, the study explores the global prevalence of ECC and its substantial implications for the overall health, nutrition, and development of preschool-aged children. The implications for public health and policy are deliberated, advocating for targeted interventions and collaborative efforts among healthcare professionals, policymakers, educators, and parents. The conclusion presents a compelling call to action, urging collective engagement to mitigate the impact of ECC and prioritize the well-being of preschoolers. This review offers valuable insights for healthcare professionals, policymakers, educators, and parents to inform evidence-based strategies for addressing ECC and promoting early childhood oral health.
Introduction And Background
Early childhood caries (ECC), also known as baby bottle tooth decay or nursing caries, manifests through decayed, missing, or filled tooth surfaces in any primary tooth of a child under six. ECC has emerged as a prevalent and concerning issue on a global scale, ranking among the most common chronic childhood diseases. Its ramifications transcend oral health, impacting a child's overall well-being, nutrition, and developmental trajectory. Acknowledging the seriousness of ECC is paramount for devising effective strategies to combat this health challenge [1]. Defined as dental issues such as decayed, missing, or filled tooth surfaces in the primary teeth of children aged six and below, ECC demands attention due to its widespread prevalence and potential long-term repercussions [2]. A systematic review of studies conducted in India reveals an alarming prevalence of ECC, ranging from 46.9% to 49.6%. This statistic implies that nearly one in every two children in India is affected by ECC [3]. Notably, Andhra Pradesh exhibited the highest prevalence of ECC at 63%, while Sikkim reported the lowest prevalence at 41.92% [4]. Furthermore, the prevalence of dental caries in the Indian population aged between three and 75 years was 54.16% [5]. However, comprehensive data on the global prevalence of ECC remains unavailable.
The primary objective of this exhaustive review is to delve into the intricate relationship between evolving dietary trends in preschoolers and the prevalence of ECC. By scrutinizing the current knowledge in this domain, it endeavors to comprehensively understand the myriad factors contributing to ECC and pinpoint effective preventive measures. Encompassing a wide-ranging exploration, the review delves into the epidemiology, etiology, and preventive strategies associated with ECC in preschool-aged children. Key objectives include scrutinizing global prevalence and demographic factors influencing ECC, investigating microbial, dietary, and environmental contributors, analyzing evolving dietary trends and their impact on oral health, evaluating the correlation between dietary patterns and changes in oral microbiota, exploring existing preventive measures, and delineating challenges while proposing future directions for research and public health initiatives.
Review

Epidemiology of ECC
Recent studies have underscored the global public health challenge posed by ECC, affecting nearly half of preschool children worldwide [6]. The prevalence of ECC exhibits considerable variation across different countries and populations, with certain regions reporting alarmingly high rates of up to 74.3% among three- to five-year-olds [7]. Notably, ECC disproportionately affects socially disadvantaged populations, with prevalence rates soaring to as high as 85% in some disadvantaged groups [8]. Multiple risk factors contribute to the prevalence of ECC, including feeding and dietary practices, oral hygiene habits, socioeconomic status, and parental attitudes [8]. These findings emphasize the imperative of implementing effective prevention strategies, particularly in areas with high prevalence rates and among vulnerable populations. By addressing these risk factors and tailoring interventions to the specific needs of affected communities, strides can be made toward reducing the burden of ECC and promoting oral health equity.
Demographic Factors Influencing ECC
Several demographic factors have emerged as significant influencers of the prevalence of ECC. Extensive research indicates that sociodemographic variables such as parental education, household income, and maternal psychosocial factors directly impact ECC [9][10][11]. Furthermore, a study conducted in India revealed that unique risk factors for ECC encompassed the family's socioeconomic and educational status, the mother's or caregiver's oral hygiene practices, and demographic characteristics [12]. Moreover, findings from a separate study indicated that individuals of mixed race and white ethnicity exhibited increased and decreased risks for ECC compared to individuals of black ethnicity, respectively [11]. These insights underscore the critical importance of considering a diverse array of demographic factors in comprehending and addressing the prevalence of ECC across various populations. By acknowledging and addressing these demographic nuances, tailored interventions can be developed to mitigate the burden of ECC within specific communities effectively.
Socioeconomic Impact
Socioeconomic factors have a profound influence on the occurrence of ECC. Children hailing from low-income households and those lacking access to a consistent medical home face heightened susceptibility to dental caries [13]. Notably, ECC disproportionately affects socially disadvantaged populations, with prevalence rates soaring to as high as 85% within certain marginalized groups [14]. Extensive research underscores parental socioeconomic status, educational attainment, household income, and employment status as predisposing factors for ECC [15,16].
Furthermore, a comprehensive study examining the nexus between ECC and poverty in low- and middle-income nations corroborates poverty as a substantial risk factor for ECC [15]. These findings underscore the imperative for deploying effective prevention strategies adept at addressing the socioeconomic determinants associated with ECC. By targeting interventions that specifically address these socioeconomic barriers, strides can be made toward mitigating the burden of ECC, particularly within vulnerable and marginalized populations.
Microbial Factors
Role of Streptococcus mutans: S. mutans is pivotal in the onset of dental caries, particularly ECC. Distinguished for its robust acidogenic and aciduric properties, S. mutans significantly contributes to enamel demineralization [17]. While S. mutans is a principal pathogenic agent in dental caries, its presence may exhibit variability, even in children afflicted with severe ECC (SECC). This suggests the potential involvement of other closely associated microbial species in caries progression [17]. Detection of S. mutans on tooth surfaces is a robust indicator of cavity development, especially in young children [18]. Furthermore, S. mutans has been observed to engage in symbiotic interactions with other microorganisms, such as Candida albicans, influencing their cariogenic potential and augmenting the ECC process [19]. Hence, although S. mutans remains a pivotal factor in caries initiation, its abundance may not singularly predict caries development, with other microbial entities likely contributing significantly to the decay process [17,19].
Other contributing bacteria: While the etiology of ECC is multifaceted, microbial factors emerge as significant determinants. While S. mutans stands out as a primary pathogenic bacterium in dental caries, other bacterial species also assume critical roles in the progression of caries. These encompass various Streptococcus species, including Streptococcus sanguinis, low-pH non-S. mutans streptococci, and Atopobium spp., alongside Veillonella spp., Actinomyces spp., Bifidobacterium spp., and Lactobacillus fermentum [17,20,21]. Notably, Lactobacilli, in particular, are closely associated with lesion advancement [21]. The dysbiotic state of oral microflora, predominantly instigated by a sugar-rich dietary regimen, is the primary impetus behind ECC [22]. Consequently, while S. mutans remains a significant factor in caries evolution, its prevalence alone may not be a solitary predictor, with various other microbial species likely exerting vital roles in tooth decay [17,20,21].
Dietary Factors
Sugar consumption trends: Recent studies highlight sugar consumption as a pivotal factor in the development of dental caries among children. The habitual intake of foods and beverages rich in free sugars emerges as a primary catalyst for the onset of dental caries, elevating the risk of ECC [23]. A study on Chinese children aged two to five years revealed associations between ECC and SECC with dietary imbalances, high grain consumption, and limited food variety [24]. Furthermore, household sugar purchases at three years correlate with family sugar consumption, subsequently impacting the incidence of permanent dentition caries [25]. Thus, advocating for reduced free sugar intake and a balanced, diverse diet emerges as a fundamental strategy for ECC prevention [23].
Impact of modern dietary habits on ECC: The influence of contemporary dietary patterns on ECC is profound. Infant dietary practices, particularly the consumption of sugary beverages, have been linked to the occurrence of ECC among preschoolers [26]. Notably, a study involving Chinese children aged two to five years revealed that heightened food diversity correlated with reduced caries prevalence [27]. Additionally, frequent consumption of simple carbohydrates, primarily dietary sugars, significantly escalates the risk of dental caries [28]. Moreover, feeding and dietary habits, such as continual sipping or grazing on sugary foods and beverages, have been identified as critical contributors to ECC [12,29]. These findings underscore the importance of promoting healthy, varied dietary habits to safeguard against ECC.
Environmental Factors
Fluoride exposure: Excessive fluoride exposure poses various health risks. Some of the dangers associated with overexposure to fluoride include dental fluorosis, skeletal fluorosis, cardiac insufficiency, reproductive issues, thyroid dysfunction, joint and bone conditions, and neurological problems [30]. Dental fluorosis, characterized by enamel discoloration, stems from elevated fluoride concentrations during childhood. Conversely, skeletal fluorosis manifests as a bone disease, causing discomfort and bone damage. Furthermore, heightened fluoride exposure has been associated with potential health hazards such as decreased fertility, early puberty onset in girls, and neurological disorders, including a potential link to attention deficit hyperactivity disorder [30]. Effective control of fluoride exposure is imperative to mitigate these adverse health effects.
Socioeconomic and cultural influences: Socioeconomic and cultural determinants significantly impact the prevalence of ECC. Research indicates that children from lower socioeconomic backgrounds are at higher risk of ECC development [9,16,31]. Environmental factors such as culture, lifestyle, and dietary patterns also greatly influence caries susceptibility or resilience [31]. ECC risk factors include inadequate nutrition, suboptimal oral hygiene practices, limited dental care access, and maternal education levels [16]. Additionally, a study investigating nutritional factors associated with ECC underscored the heightened risk posed by increased free sugar consumption [24]. Therefore, addressing socioeconomic and cultural factors and promoting healthy dietary habits and oral hygiene practices is paramount to ECC prevention. Figure 1 shows the etiological factors of dental caries.
Overview of Contemporary Preschooler Diets
Parents significantly influence their children's dietary habits, serving as primary role models whose behaviors and eating habits children often mirror [32]. This parental influence underscores the importance of fostering healthy eating practices within the family. Various social, physical, and intraindividual factors shape children's eating behaviors, encompassing the family environment, peer influences, and individual preferences [32]. These multifaceted influences necessitate a comprehensive approach to promoting nutritious dietary habits among children. Public health interventions advocate for nutrition education in preschool settings, with a particular emphasis on increasing the consumption of fruits and vegetables [33]. These initiatives aim to cultivate lifelong healthy eating habits by instilling nutritional awareness from an early age. Pediatricians frequently encounter children adhering to various special diets, including vegetarianism, macrobiotics, and exclusion diets for food allergies [34]. Understanding the implications of these dietary patterns is essential for providing tailored medical guidance and support.
The availability of healthy food options, such as fruits, vegetables, and whole grains, alongside the prevalence of fast food and sugary beverages, significantly influences children's dietary choices [35]. Efforts to improve food accessibility and promote healthier alternatives are critical for shaping positive dietary patterns. Emotional eating, characterized by the use of food to assuage emotions or seek comfort, has been linked to increased BMI in young children [35]. Addressing emotional eating behaviors is integral to fostering a healthy relationship with food from an early age. Feeding practices within the family dynamic, including pressure to eat, restrictive feeding behaviors, and monitoring practices, exert notable impacts on children's eating behaviors and food preferences [35]. Recognizing and addressing these practices can facilitate cultivating healthy dietary habits in preschoolers, offering enduring benefits for their health and well-being.
Influence of Processed Foods and Sugary Snacks
Childhood obesity and overweight are often linked to the consumption of sugar-sweetened beverages (SSBs) and ultra-processed foods (UPFs) [36]. The excessive intake of UPFs is known to induce metabolic changes in children and adolescents, further exacerbating the risk of developing overweight or obesity [37]. Processed foods, lacking essential vitamins, minerals, and fiber, frequently contribute to nutritional deficiencies in children [38]. This deficiency arises from the inherent nature of processed foods, which strip away essential nutrients during manufacturing processes.
Moreover, the high sugar and unhealthy fat content prevalent in processed foods can result in excessive calorie intake, leading to weight gain and obesity among children [38]. The disproportionate intake of these calorie-dense foods can disrupt energy balance, consequently fostering unhealthy weight gain. Furthermore, certain artificial additives commonly present in processed foods have been implicated in causing hyperactivity and behavioral disturbances while also potentially impacting neurological function [38]. These additives may exert adverse effects on some individuals, prompting concerns regarding their widespread usage in food products. The appealing taste profile of processed foods, often enhanced by elevated salt, sugar, and fat levels, can lead to addictive consumption patterns [38]. This addictive nature perpetuates overconsumption, thereby exacerbating the risk of obesity and associated health complications.
Additionally, the frequent consumption of sugary snacks and beverages contributes to dental decay and other oral health issues among children [39]. To counteract the adverse effects of processed foods and sugary snacks on preschoolers, it is imperative to foster healthier eating habits and provide alternative, nutritious options [38]. This entails incorporating fresh fruits and vegetables, whole grains, lean proteins, and healthy fats into their diets, ensuring a balanced nutritional intake. Furthermore, parents and caregivers play a pivotal role in modeling healthy eating behaviors and restricting the availability of processed foods within the home environment [36]. By promoting these strategies, efforts can be made to mitigate the negative impact of processed foods on the health and well-being of preschool-aged children.
Beverages and Their Impact on Oral Health
The consumption of sugary drinks can significantly impact oral health. When ingested, the sugars in these beverages fuel bacteria in the mouth, triggering the production of acid that can harm the teeth by causing cavities or erosion. To minimize tooth exposure to the acid produced by bacteria, it is advisable to consume sweetened beverages in one sitting rather than sipping them over an extended period. Moreover, if juice is provided to children, it is recommended to have them drink it only with meals and to offer water in a sippy cup for consumption throughout the day. Fluoridated tap water and milk are heralded as superior alternatives for dental health, as they aid in protecting teeth against cavities and maintaining their strength [40]. Research has underscored the association between consuming SSBs and an elevated risk of dental caries and erosion. These findings underscore the detrimental effects of SSBs on oral health outcomes, mainly dental caries and erosion [41]. In contrast, as per the American Dental Association, water is lauded as the optimal beverage for oral health and overall wellness. Additionally, milk is touted as a beneficial option for teeth, as it can safeguard tooth enamel, provide essential vitamins and calcium, and mitigate tooth decay [42].
Cultural and Regional Variations in Dietary Patterns
Cultural and regional variations in dietary patterns are influenced by various factors, such as religious beliefs, food availability, affordability, accessibility, and cultural background. Studies have shown that dietary patterns vary across different regions and cultures, which can impact health outcomes and nutritional status [43][44][45]. For instance, a study on young Polish females found that the region's affluence is strongly reflected in dietary behaviors, with higher adherence to traditional Polish dietary patterns in less affluent regions [45]. Similarly, a study on Swiss participants found statistically significant differences across language regions, with participants in the French- and Italian-speaking regions scoring higher than those in the German-speaking region [46]. These findings highlight the importance of understanding cultural and regional variations in dietary patterns to promote healthy eating habits and prevent diet-related diseases.
Changes in Microbial Composition
Research has delved into the ramifications of evolving dietary trends on the oral microbiota, shedding light on their profound impact. The rapid escalation in carbohydrate consumption, mainly sucrose, has disrupted the evolved equilibrium between the oral microbiota and dental health, rendering dental caries the most prevalent chronic ailment globally [47]. From the advent of agriculture to the Industrial Revolution, dietary shifts have substantially and swiftly escalated carbohydrate intake, unsettling the homeostasis of the oral microbiome and dental well-being [47]. Moreover, studies underscore that diet serves as a vital nutritional source for the oral microbiota while concurrently exerting selective pressure, favoring the survival and propagation of specific organisms. This selective pressure can precipitate pathological alterations in the oral microbiota [48]. Furthermore, research has elucidated that dietary interventions can influence the oral microbiome at the genetic-strain level, impacting the host's immune response and metabolic profile [49]. Consequently, it is evident that evolving dietary trends profoundly influence the composition and equilibrium of the oral microbiota, with far-reaching implications for both oral and systemic health.
Relationship Between Diet and Bacterial Virulence
The interplay between diet and bacterial virulence is intricate and can yield diverse and sometimes contradictory outcomes. Modifying the host's diet has the potential to either suppress or exacerbate disease severity and the proliferation of pathogens [50]. For instance, epidemiological investigations have highlighted a correlation between diet and the risk of gastric cancer, particularly concerning Helicobacter pylori infection [51]. Moreover, pathogens operate within dynamic nutritional microenvironments within the host, and the host's diet can influence microbial virulence, a phenomenon termed "nutritional virulence" [52]. Consequently, the impact of diet on bacterial virulence encompasses many factors, including the host's immune response, pathogen adaptability, and the diversity and functional capacity of the microbial community [53].
The Role of Diet in Biofilm Formation
Dietary sugars have been identified as critical regulators of bacterial-fungal interactions in saliva, impacting interkingdom biofilm formation on tooth surfaces. Research indicates that sucrose and starch facilitate the coexistence of bacteria and fungi, fostering heightened biofilm accumulation and acid production, potentially contributing to ECC development [54]. Moreover, biofilm formation in the food industry constitutes a multifaceted process wherein the quantity and composition of nutrients, including dietary components, influence biofilm development [55]. Furthermore, investigations into the impact of diet on oxidative stress and inflammation induced by bacterial biofilms in the oral cavity underscore the role of specific dietary patterns in shaping biofilm induction and the proliferation of both pathogenic and beneficial bacteria [56]. These findings underscore the significant role of diet in modulating biofilm formation and its potential ramifications for both oral and systemic health.
Importance of Early Oral Health Education
Early oral health education is pivotal in fostering positive oral health habits and averting dental caries in children. Research underscores the necessity of commencing health education at an early age to monitor growth and stave off potential pathologies [57]. It is worth noting that subpar oral health can translate to absenteeism from school and diminished academic performance among children [58]. Regular preventive dental checkups with oral health professionals facilitate the dissemination of age-appropriate anticipatory guidance to parents and caregivers [59]. Many countries have implemented oral health education programs within school settings, recognizing early childhood as a critical phase for optimal oral health [57]. The overarching aim of oral health education is to enhance knowledge, which fosters the adoption of favorable oral health behaviors conducive to improved oral well-being [60]. Furthermore, various preventive measures and interventions have been identified for ECC prevention. These include regulating fermentable carbohydrate intake, avoiding bottle or sippy cup usage before bedtime, ensuring adequate fluoride exposure, scheduling regular preventive dental appointments, disseminating oral health education, implementing screening strategies, and fostering interprofessional collaboration [59]. By employing these multifaceted approaches, efforts can be concerted toward mitigating the burden of ECC and promoting enduring oral health among children.
Promoting Healthy Dietary Habits
Promoting healthy dietary habits encompasses a range of strategies and approaches advocated by various reputable sources. The CDC advocates the "Reflect, Replace, Reinforce" method, which entails introspection on eating habits, the substitution of unhealthy choices with healthier alternatives, and the reinforcement of new habits [61]. Additionally, the CDC recommends meal planning, eating well-balanced meals, and exercising patience when transitioning to new eating patterns [61]. Furthermore, the WHO stresses the importance of fostering healthy nutrition throughout all stages of life and establishing a conducive food environment through collaborative efforts involving multiple sectors and stakeholders, including government bodies, the public, and the private sector [62]. The WHO also recommends decisive actions by policymakers, such as harmonizing trade, food systems, and agricultural policies, to encourage healthy dietary practices [61]. Moreover, resources like the Early Childhood Learning and Knowledge Center (ECLKC) and KidsHealth offer practical guidance for families, including serving a diverse array of nutritious foods and snacks, curbing the intake of sugary beverages, and involving children in the food selection process [63,64]. These resources underscore the significance of adopting a holistic approach to promoting healthy dietary habits, encompassing individual reflection, environmental support, and policy-level interventions.
Role of Fluoride in Prevention
Fluoride is a cornerstone in preventing dental caries among children [65]. Its mechanisms include bolstering dental mineralization and bone density, exhibiting bactericidal effects on cariogenic bacteria, and retarding demineralization while fostering enamel remineralization within dental plaque [66]. Fluoride varnish emerges as a well-established tool for preventing ECC, renowned for its ease of application and favorable tolerance by children [67,68]. Other modalities for fluoride administration in caries prevention encompass fluoride toothpaste, prescription fluoride supplements, and fluoride mouth rinses [65,69]. Nonetheless, while fluoride's efficacy is widely acknowledged, evidence regarding the effects of various fluoride concentrations remains somewhat limited, necessitating consideration of the risk of dental fluorosis across different fluoride concentrations [66].
Dental Sealants and Their Efficacy
Dental sealants, thin coatings applied to the chewing surfaces of the back teeth (molars), protect against cavities [70]. Proven effective, they are instrumental in preventing and halting pit-and-fissure occlusal caries lesions in both primary and permanent molars among children [71]. Additionally, sealants can impede the progression of noncavitated occlusal caries lesions [72]. Research indicates resin sealants offer a preventive fraction of up to 61% over five years [72]. These sealants can be categorized into three types: glass ionomer, resin-modified glass ionomer, and resin-based sealants, with the latter preferred for their superior retention and effective caries prevention [72]. Studies demonstrate that dental sealants can reduce the incidence of dental caries by up to ninefold [73], making them a cost-effective intervention, particularly for children at high risk of developing cavities [73]. Sealants can prevent 80% of cavities in the back teeth over two years, a significant statistic given that nearly nine out of 10 cavities occur in these teeth [70]. Despite their efficacy, the utilization of sealants remains suboptimal, with less than half of children and adolescents benefiting from this preventive measure [70]. Closing this gap in sealant utilization could significantly enhance oral health outcomes among young populations.
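The "preventive fraction" cited above has a simple arithmetic meaning: the share of caries incidence averted in sealed teeth relative to unsealed controls. The Python sketch below is purely illustrative; the function name and the incidence figures are hypothetical values chosen so the arithmetic reproduces a 61% preventive fraction, and are not data from the cited trials.

```python
def preventive_fraction(incidence_control: float, incidence_sealed: float) -> float:
    """Proportion of caries incidence averted in the sealed group,
    relative to the unsealed (control) group."""
    return (incidence_control - incidence_sealed) / incidence_control

# Hypothetical illustration: if 40% of unsealed molars developed occlusal
# caries over five years versus 15.6% of sealed molars, the preventive
# fraction would be (0.40 - 0.156) / 0.40 = 0.61, i.e., the ~61% cited above.
print(f"{preventive_fraction(0.40, 0.156):.0%}")  # -> 61%
```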
Lack of Awareness and Education
Several factors contribute to the prevalence of ECC, necessitating multifaceted approaches for prevention and intervention. Firstly, a notable lack of awareness persists among parents and caregivers regarding the significance of early oral health and the potential repercussions of ECC [2]. This deficiency in awareness often translates into inadequate oral hygiene practices and inappropriate feeding habits, further exacerbating the risk of ECC development [2]. Secondly, there is a pressing need for more comprehensive education on oral health and ECC prevention targeted at parents, caregivers, and healthcare professionals [1]. This educational endeavor should emphasize the importance of oral hygiene, dietary habits, and the detrimental effects of improper feeding practices [1]. Moreover, socioeconomic and educational factors significantly influence ECC prevalence, with socially disadvantaged populations and children from low-income families being disproportionately affected [2]. Addressing these socioeconomic and educational determinants emerges as pivotal in mitigating the prevalence of ECC. Additionally, enhancing integration between medical and dental healthcare systems is imperative for delivering preventive services and fostering interdisciplinary approaches to oral health promotion [74]. Cultural factors also play a pivotal role, with parents' education levels, stress levels, oral health beliefs, attitudes, and cultural backgrounds closely intertwined with ECC and dental caries [75]. Addressing these cultural nuances holds promise for improving oral health outcomes among children. To tackle these challenges comprehensively, concerted efforts should be made to augment awareness and education on oral health and ECC prevention, enhance access to dental care, and promote interdisciplinary approaches to oral health promotion [1,2,74]. This endeavor encompasses developing and implementing evidence-based prevention strategies, integrating oral health education into school and community settings, and enhancing access to dental services for socially disadvantaged populations.
Barriers to Accessing Dental Care
The hurdles encountered in tackling ECC encompass obstacles to accessing dental care, which can significantly impact children's health and overall well-being. Untreated cavities in children can lead to pain, infections, and difficulties with essential activities such as eating, speaking, playing, and learning [76]. Hence, access to dental care emerges as a critical component in preventing and managing ECC. Research underscores the significance of harnessing technology to reach vulnerable families and assist them in cultivating positive oral health behaviors aimed at averting tooth decay in young children [76]. Furthermore, factors including excessive sugar consumption, poor oral hygiene practices, inadequate fluoride exposure, and enamel defects play substantial roles in ECC development, underscoring the necessity for preventive measures and enhanced access to dental care [2]. Effectively addressing disparities in early childhood dental caries mandates integration between medical and dental healthcare systems to provide preventive services within primary healthcare settings [74]. Moreover, parental challenges in implementing oral health practices are influenced by various factors such as education, stress levels, health beliefs, attitudes, and cultural considerations [75]. These findings underscore the paramount importance of surmounting barriers to dental care access and implementing efficacious strategies for preventing and managing ECC.
Cultural and Socioeconomic Challenges
Addressing ECC poses challenges influenced by diverse cultural and socioeconomic factors. Recent research has underscored the profound impact of factors such as excessive sugar consumption, low maternal education levels, and varying socioeconomic statuses on the susceptibility to dental caries among children in low- and middle-income countries [77]. Additionally, ECC risk is closely linked to factors like feeding practices, dietary habits, oral hygiene routines, and limited access to dental care, particularly among socially disadvantaged populations [2,12]. Furthermore, it has been highlighted that families significantly influence the dissemination of health-related information regarding oral health. Thus, interventions targeting individual, familial, and communal levels effectively address ECC [1]. Moreover, a comprehensive review of oral health policies across different regions has underscored the necessity for holistic strategies to alleviate the ECC burden while considering the cultural and socioeconomic determinants of the disease [78]. These findings underscore the imperative of tailoring interventions to the diverse cultural and socioeconomic contexts inherent in preventing and managing ECC. By addressing these multifaceted factors, efforts to combat ECC can be rendered more effective and inclusive, ultimately promoting improved oral health outcomes among children. Cultural and socioeconomic challenges in ECC are shown in Figure 2.
Conclusions
This comprehensive review has revealed significant insights into the complex dynamics of ECC. Exploring the intricate relationship between evolving dietary trends in preschoolers and ECC prevalence has shed light on the multifaceted nature of this oral health concern. The findings underscore the global prevalence of ECC and its multifactorial origins, encompassing microbial, dietary, and environmental influences. The implications for public health and policy are substantial, emphasizing the urgent need for targeted interventions and preventive measures at both community and policy levels. The review advocates for a collaborative approach involving healthcare professionals, policymakers, educators, and parents to formulate and implement effective strategies. Furthermore, the call to action extends to healthcare professionals for knowledge dissemination and policy advocacy, educators integrating oral health education into curricula, and parents actively participating in their children's oral health practices. By fostering such collaborative efforts, we can work toward a future where ECC is minimized and the well-being of preschoolers is prioritized both dentally and holistically. This call to action serves as an invitation to unite in the collective pursuit of a healthier and brighter future for the youngest members of our communities.
FIGURE 2: Cultural and socioeconomic challenges in ECC. ECC, early childhood caries. Image credit: Kanika S. Dhull
Subunits of the mechano-electrical transduction channel, Tmc1/2b, require Tmie to localize in zebrafish sensory hair cells
Mutations in transmembrane inner ear (TMIE) cause deafness in humans; previous studies suggest involvement in the mechano-electrical transduction (MET) complex in sensory hair cells, but TMIE’s precise role is unclear. In tmie zebrafish mutants, we observed that GFP-tagged Tmc1 and Tmc2b, which are subunits of the MET channel, fail to target to the hair bundle. In contrast, overexpression of Tmie strongly enhances the targeting of Tmc1-GFP and Tmc2b-GFP to stereocilia. To identify the motifs of Tmie underlying the regulation of the Tmcs, we systematically deleted or replaced peptide segments. We then assessed localization and functional rescue of each mutated/chimeric form of Tmie in tmie mutants. We determined that the first putative helix was dispensable and identified a novel critical region of Tmie, the extracellular region and transmembrane domain, which is required for both mechanosensitivity and Tmc2b-GFP expression in bundles. Collectively, our results suggest that Tmie’s role in sensory hair cells is to target and stabilize Tmc channel subunits to the site of MET.
Introduction
The auditory and vestibular systems detect mechanical stimuli such as sound, gravity, and acceleration. These two systems share a sensory cell type called hair cells. The somas of hair cells are embedded in the sensory epithelium and extend villi-like processes from their apex into the surrounding fluid. The shorter of these, the stereocilia, are arranged in a staircase-like pattern adjacent to a single primary cilium known as a kinocilium. Long protein linkages tether neighboring cilia together. Deflection of the kinocilium along the excitatory axis tugs the interconnected stereocilia, which move as a single unit called the hair bundle [1]. When tension is placed on the upper-most linkages known as tip links, the force is thought to open mechanosensitive channels at the distal end of the shorter stereocilia [2,3]. These channels pass current, depolarizing the cell and permitting electrical output to the brain via the eighth cranial nerve. The conversion of a mechanical stimulus into an electrical signal is known as mechano-electrical transduction (MET) [4]. Aside from the channel, several other proteins converge at the base of the tip links to form a molecular complex that gates the channel. While a handful of essential members of this MET complex have been identified, we do not fully understand how all of these proteins interact. It is also not known how the MET channel is localized to and maintained at the stereocilia tips.
To characterize the molecular underpinnings of MET and the underlying cause of pathology in human patients, it is essential to examine the individual components of the transduction complex in a comprehensive fashion. Thus far, only a few proteins have been designated as members of the MET complex, and more than one protein may comprise the channel subunits. Because the MET channel has yet to be reconstituted in a heterologous system, the identity of the pore-forming subunits of the channel remains uncertain. However, several studies in recent years have revealed strong candidates for the pore-forming subunits: the Transmembrane Channel-like (TMC) proteins TMC1 and TMC2. Mutations in TMC1 cause human deafness [5], and double knock-outs of mouse Tmc1/2 result in the loss of MET currents [6][7][8]. In zebrafish, Tmc2a and Tmc2b are required for MET in hair cells of the lateral line organ [9], and overexpression of a fragment of Tmc2a generates a dominant negative effect on hair-cell mechanosensitivity, suggestive of direct interference with the channel [10]. The TMCs localize to the tips of stereocilia, the site of MET, in mice and zebrafish [3,6,8,9,11,12]. In TMC2 knockout mice, permeation properties of the MET channel are altered [13]. Likewise, a point mutation in mouse Tmc1 results in altered channel properties, suggesting direct changes to the pore [7,14]. A recent paper used a cysteine modification assay to demonstrate that modification of certain residues lining the predicted pore of TMC1 results in changes to channel conductance [15]. Furthermore, the authors demonstrated that adding a MET channel blocker during exposure to the cysteine modifier prevented modification, implying that the protected residues line the inner pore and are thus inaccessible when the channel is blocked. Together, this body of evidence indicates that the TMCs are essential subunits of the MET channel, and are likely to at least partially constitute the pore.
Another key component of the complex is Protocadherin-15 (PCDH15), which comprises the lower end of the tip link [16,17] and interacts with the TMCs [8,10]. A fourth membrane protein, Lipoma HMGIC fusion partner-like 5 (LHFPL5, formerly called TMHS), interacts with PCDH15 and is critical for localizing PCDH15 to the site of MET [18,19]. LHFPL5 is also required to properly localize TMC1 in mouse cochlear hair cells [8]. However, loss of LHFPL5 in cochlear hair cells does not completely abolish MET currents, and currents can be rescued by overexpression of PCDH15 [19]. This evidence suggests that LHFPL5 is not essential but rather acts as an accessory protein. Another TMC1/2 interacting partner is Calcium and integrin binding protein 2 (CIB2), which is a cytosolic protein that is localized in stereocilia and required for MET in cochlear hair cells [20].
A sixth essential member of the MET complex is the transmembrane inner ear (TMIE) protein. Loss of TMIE results in deafness in fish, mice and humans [21][22][23][24][25][26]. A recent study suggested that TMIE is required for mechanosensitivity in cochlear hair cells of mice [27]. These authors showed that despite normal morphology of the inner ear, hair cells lacking TMIE fail to label with aminoglycosides or FM 1-43, both of which are known to permeate the MET channel [28,29]. TMIE was first localized to the stereocilia of hair cells [30,31], and then to the stereocilia tips where MET occurs [26]. Zhao et al. [26] further demonstrated that loss of TMIE ablates MET currents, that TMIE interacts with both LHFPL5 and the CD2 isoform of PCDH15, and that interfering with the TMIE-CD2 interaction alters MET. They proposed that TMIE could be a force-coupler between the tip link and channel. However, the CD2 isoform of PCDH15 is only essential in cochlear hair cells and not vestibular hair cells [32]. Zebrafish do not possess the CD2 isoform [10,33], and yet they still require Tmie for hair-cell function [22]. These findings raised the tantalizing possibility that Tmie might have an additional role in MET that is independent from the tip links. Here, we present an alternative role for Tmie in hair-cell function.
We first confirmed that mechanosensitivity is absent in a previously reported zebrafish mutant of tmie, ru1000 [22], and demonstrated that this defect is rescued by transgenic Tmie-GFP. The localization of Tmie-GFP is maintained in the absence of other transduction components, suggesting that Tmie traffics independently to hair bundles. Unexpectedly, GFP-tagged Tmcs fail to localize to the hair bundle in tmie mutants, and overexpression of Tmie leads to a corresponding increase in bundle expression of Tmc1 and Tmc2b. To determine which regions of Tmie are involved in regulating the Tmcs, we performed a domain analysis of Tmie by expressing mutated or chimeric transgenes of tmie in tmie ru1000 larvae, and made three key discoveries: (i) Tmie can function without its putative first transmembrane domain, (ii) the remaining helix (2TM) and adjacent regions are responsible for Tmie's function in hair cells, and (iii) dysfunctional versions of Tmie have reduced efficacy in localizing the Tmcs, supporting the conclusion that impaired MET is due to reduction of Tmc protein in hair bundles. Our evidence suggests that Tmie's role in the MET complex is to promote localization of Tmc1/2 to the site of MET in zebrafish sensory hair cells.
Gross morphology is normal in tmie ru1000 mutant zebrafish
The literature on TMIE's role in sensory hair cells is somewhat contradictory. Earlier studies proposed a developmental role for TMIE [21][22][23], while later studies evidenced a role in MET [26,27]. To begin our analysis and attempt to clarify the issue in zebrafish, we examined live tmie ru1000 larvae at 5-7 days post-fertilization (dpf) using confocal microscopy. The ru1000 allele harbors a nonsense mutation leading to an N-terminal truncation, L25X [22]. We observed that mature hair cells of tmie ru1000 larvae were grossly normal compared to wild type siblings in both the inner ear lateral crista and the lateral line organ, an organ specific to fish and amphibians (Fig 1A). We detected a slight thinning of the mutant hair bundles, as revealed using a transgene, Actin-GFP [34]. The underlying reason for the observed bundle thinning is not known; however, we note that thin bundles have been observed in other zebrafish MET mutants such as those carrying mutations in ap1b1 [35] and tomt [11]. Both genes have been previously implicated in protein trafficking in hair cells, with tomt having a specific role in targeting Tmc1/2 proteins to the hair bundle [11,35]. We conclude that morphology is grossly normal in tmie-deficient zebrafish, suggesting that tmie does not have a developmental role in hair bundle formation. Our findings are consistent with the grossly normal hair-bundle morphology observed in Tmie-/- mice [26].

Fig 1. Zebrafish tmie ru1000 mutants: Phenotype and functional rescue by Tmie-GFP. All confocal images are of live, anesthetized larvae. (A) Hair cells in the lateral-line neuromasts (7 dpf) and inner ear cristae (5 dpf) from wild type and tmie ru1000 larvae. A transgene (Actin-GFP) was used to visualize stereocilia bundles. (B) Sample traces from an auditory evoked behavior response (AEBR) assay, performed on 6 dpf larvae over the course of 3 minutes. Pure tone stimuli are indicated by asterisks. Peaks represent pixel changes due to larval movements (magenta indicates positive response). (C) Quantification of AEBR displayed as box-and-whiskers plot; significance determined by two-tailed unpaired t-test with Welch's correction. (D) […]
Tmie-deficient zebrafish are deaf due to a defect in hair-cell mechanosensitivity
Next, we used an assay for the auditory evoked behavior response (AEBR) to quantify hearing loss in tmie ru1000 mutants. We exposed 6 dpf larvae to a pure tone stimulus (157 dB, 1000 Hz, 100 ms) once every 15 seconds for three minutes and recorded their startle responses (sample traces in Fig 1B). Larvae deficient in tmie appeared to be profoundly deaf, with little to no response compared to wild type siblings (Fig 1B and 1C). We then determined basal (unevoked) hair-cell activity of tmie ru1000 larvae using FM 1-43 or FM 4-64. Both are vital dyes that permeate open MET channels, making them useful for detecting the presence of active MET channels in hair cells [28,29,34]. A 30-second bath application of FM dye readily labels hair cells of the lateral line organ, which are arranged in superficial clusters called neuromasts. We briefly exposed wild type and tmie ru1000 larvae to FM dye and then imaged the neuromasts (Fig 1D). Consistent with previous findings [22,23,27], tmie ru1000 neuromasts have a severe reduction in FM labeling, suggesting that these hair cells have a MET defect. To characterize mechanically evoked responses of hair cells, we recorded extracellular potentials, or microphonics (Fig 1H). Using a piezo actuator, we applied a 200 Hz sine wave stimulus to 3 dpf larvae while simultaneously recording voltage responses from hair cells of the inner ear. In agreement with results from our FM dye assay and with microphonic recordings previously reported [22], microphonics are absent in tmie ru1000 larvae (Fig 1H, gray trace).
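For readers unfamiliar with AEBR scoring, the sketch below shows one plausible way to score startle responses from a pixel-change trace like those in Fig 1B: count the stimuli followed by a supra-threshold movement within a short response window. This is a minimal illustration under stated assumptions, not the authors' actual pipeline; the frame rate, threshold rule, window length, and function names are all hypothetical.

```python
import numpy as np

def score_aebr(trace, stim_frames, fs=100.0, window_s=1.0, thresh=None):
    """Fraction of stimuli followed by a startle response.

    trace       : 1-D array of frame-to-frame pixel changes (motion signal)
    stim_frames : frame indices at which the pure-tone stimulus was delivered
    fs          : assumed frame rate in Hz
    window_s    : post-stimulus window in which movement counts as a response
    thresh      : motion threshold; defaults to trace mean + 5 SD
    """
    trace = np.asarray(trace, dtype=float)
    if thresh is None:
        thresh = trace.mean() + 5.0 * trace.std()
    win = int(window_s * fs)
    hits = sum(trace[f:f + win].max() > thresh for f in stim_frames)
    return hits / len(stim_frames)

# Toy example: 12 tones, one every 15 s, over 3 minutes at 100 fps.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 18000)     # baseline motion noise
stims = np.arange(0, 18000, 1500)       # stimulus onsets
trace[stims[:10] + 20] += 50.0          # simulated startles after 10 of 12 tones
print(score_aebr(trace, stims))         # -> ~0.83
```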
Transgenic tmie-GFP rescues the functional defect in tmie ru1000 mutants
To rescue mechanosensitivity in tmie ru1000 larvae, we generated a construct expressing Tmie tagged with GFP on its C-terminus, then expressed this transgene using a hair cell-specific promoter, myosin 6b (myo6b). Our Tmie-GFP rescued the FM labeling in tmie ru1000 hair cells (Fig 1E, quantified in Fig 1F). Tmie-GFP also restored microphonic potentials to wild-type levels (Fig 1H, orange trace). In a stable line with a single transgene insertion, we observed that Tmie-GFP expression varies among hair cells, even within the same patch of neuroepithelium (lateral crista, S1A Fig). Immature hair cells, which can be identified by their shorter stereocilia and kinocilia (S1A Fig, bracket and arrow, respectively), consistently show a bright and diffuse pattern of labeling. This high expression level in immature bundles is characteristic of transgenes expressed using the myo6b promoter, which drives expression more strongly in young hair cells [18,34]. In mature hair cells, expression patterns of Tmie-GFP are variable. At high expression levels, Tmie-GFP signal is enriched in the bundle in a broader pattern (S1B Fig). At reduced levels, the signal appears to be concentrated at the beveled edge of the hair bundle (S1C Fig). At very low levels, we can observe puncta along the stereocilia staircase, consistent with localization at stereocilia tips (S1D Fig). We suspect that the diffuse "bundle fill" pattern is due to overexpression, and that lower levels of Tmie-GFP recapitulate the endogenous pattern of localization at the site of MET, as previously observed in mice [26].
Tmie-GFP is capable of trafficking without other members of the MET complex
Having confirmed that our exogenously expressed Tmie-GFP is functional, we used this transgene to probe Tmie's role in the MET complex. First, we characterized Tmie's interactions with other MET proteins in vivo by expressing transgenic Tmie-GFP in mutant pcdh15a, lhfpl5a, and tomt larvae (Fig 2). Because a triple knock-out of the zebrafish tmc genes is not available, we used tomt mutants as a proxy for tmc-deficient fish based on recent reports of defective Tmc bundle localization in tomt-deficient fish and mice [11,36]. As in wild type bundles (Fig 2A), we detected Tmie-GFP signal in each of these MET mutants (Fig 2B-2D), even in splayed hair bundles (Fig 2B and 2C, arrowheads). While normal localization of Tmie in the MET mutants could be in part due to overexpression, the presence of Tmie-GFP in stereocilia suggests that Tmie does not absolutely require any individual MET protein for entry into the hair bundle.
Tmc1-GFP and Tmc2b-GFP fail to localize to stereocilia without Tmie
We next wanted to test whether loss of Tmie affects the integrity of the MET complex. To confirm the presence of tip links, we examined TEM images from 5 dpf wild type and tmie ru1000 larvae, n = 3 each (Fig 3A). Of 67 wild type sections examined, we observed 22 tip links, 23 insertion plaques, and 36 examples of tenting. Of 87 tmie ru1000 sections examined, we observed 27 tip links, 26 insertion plaques, and 39 examples of tenting. We then used an antibody against the tip-link protein Pcdh15a and observed punctate expression along stereocilia in tmie ru1000 larvae (Fig 3B). Finally, we stably expressed GFP-tagged Pcdh15aCD3 and its trafficking partner, Lhfpl5a, in tmie ru1000 larvae. Stable expression of transgenes in zebrafish is achieved through random genomic insertion of a plasmid containing promoter and gene. To ascertain comparable expression of a given transgene across larvae, we used transgenic lines with 50% transmission, indicative of a single insertion event. Siblings produced in the same clutch of eggs were used as controls throughout this study. In 6 dpf tmie ru1000 larvae, we observed punctate expression of transgenic Pcdh15aCD3-GFP (Fig 3C) and GFP-Lhfpl5a (Fig 3D) along stereocilia tips, similar to the pattern obtained with antibody labeling of Pcdh15a. These three assays confirmed that tip links were intact in tmie ru1000 larvae, which agrees with our in vivo data (Fig 1A) and previous results in Tmie-/- mice [26].

Fig 2. Tmie-GFP is present in the hair bundles of MET mutants. Confocal images of the bundle region in hair cells of the inner ear lateral cristae in 6 dpf larvae. Larvae expressing transgenic Tmie-GFP in the genetic backgrounds of wild type (A), and homozygous mutants for the tip link protein Pcdh15a (B, pcdh15a psi7), the accessory protein Lhfpl5a (C, lhfpl5a tm290d), and the Golgi-localized protein Tomt (D, tomt tk256c). Tomt-deficient fish lack Tmc expression in hair cell bundles [11], presumably mimicking the condition of a triple Tmc knockout. Arrowheads indicate splayed hair bundles. n = 8 each genotype. Scale bar is 5 μm. https://doi.org/10.1371/journal.pgen.1007635.g002
To test for the presence of the Tmc proteins in tmie ru1000 stereocilia, we again used stable GFP-tagged transgenic lines with single insertions: Tg(myo6b:tmc1-GFP) and Tg(myo6b:tmc2b-GFP) [11]. Unfortunately, we were unable to successfully express a tmc2a transgene. The Tmc1-GFP signal was very dim and only reliably visualized in a subset of the tall and accessible hair bundles of the lateral cristae, where we detected severely reduced GFP fluorescence in the stereocilia of tmie ru1000 hair cells as compared to wild type siblings (Fig 3E). The Tmc2b-GFP signal was more robust and we detected it in the hair bundles of the lateral cristae (Fig 3G), anterior maculae (Fig 3I), and lateral line organ (Fig 3J). We imaged anterior maculae at 2 dpf, when they are closer to the surface of the fish; the GFP signal was too faint in the posterior maculae, which are located in a deeper, medial position next to the brain. In all of these hair cell types, we observed a severe reduction in Tmc2b-GFP fluorescence in the hair bundles of tmie ru1000 larvae. Although Tmc2b was previously reported to have differential effects in lateral line neuromasts [9], we did not observe a difference in Tmc2b-GFP expression among head or trunk neuromasts, likely because the myo6b promoter drives expression in all hair cells. In the lateral cristae, mature tmie ru1000 hair cells expressing Tmc2b-GFP often displayed fluorescence within the apical soma near the cuticular plate, suggesting a trafficking defect (Fig 3G, arrows; position of cuticular plate denoted in Fig 1G). We quantified the loss of Tmc1-GFP (Fig 3F) and Tmc2b-GFP (Fig 3H) from the hair bundle region of lateral cristae and found a striking and consistent reduction in tmie mutants. Loss of Tmc1/2b could be a result of disruption of the MET complex, but we previously showed that localization of transgenic Tmc1-GFP and Tmc2b-GFP is normal in pcdh15a mutants [11]. The aforementioned study and our experiments demonstrated that mislocalization of Tmc1/2b is not a hallmark of all MET mutants, and thus their mislocalization in tmie mutants is a specific effect.

Fig 3 (caption fragment). […] significance determined by two-tailed unpaired t-test with Welch's correction, p = 0.0002. (G) Images of the lateral cristae in 4 dpf larvae expressing Tmc2b-GFP. The arrow points to the cuticular plate/apical soma region, just below the ROI. (H) Plot of the integrated density of Tmc2b-GFP fluorescence in the ROI, expressed in arbitrary units. Statistical significance determined by two-tailed unpaired t-test with Welch's correction, p = 0.0005. (I) Representative images of anterior maculae in 2 dpf larvae expressing Tmc2b-GFP. We examined n = 14 wild type and n = 13 tmie ru1000 maculae. (J) Representative images of lateral line neuromasts in 4 dpf larvae expressing Tmc2b-GFP. We examined n = 18 wild type and n = 20 tmie ru1000 neuromasts. All statistics are mean ± SD. Scale bar in A is 50 nm, in B-D 2 μm, in E-J 5 μm. https://doi.org/10.1371/journal.pgen.1007635.g003
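The quantification above (and in the Fig 3 panels) reduces to summing background-corrected pixel intensities within a hair-bundle ROI, i.e., an integrated density. The numpy sketch below illustrates the idea; the ROI geometry, background handling, and toy intensities are assumptions for demonstration, not the authors' exact analysis.

```python
import numpy as np

def integrated_density(image, roi_mask, background=None):
    """Integrated density (sum of intensities) of an image within an ROI.

    image      : 2-D array of pixel intensities (e.g., a confocal plane
                 or maximum-intensity projection)
    roi_mask   : boolean array of the same shape; True inside the bundle ROI
    background : optional per-pixel background estimate to subtract first
    """
    img = image.astype(float)
    if background is not None:
        img = img - background
    return float(img[roi_mask].sum())

# Toy wild-type vs. mutant comparison on synthetic 64x64 images.
yy, xx = np.ogrid[:64, :64]
roi = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2   # circular bundle ROI
bg = np.full((64, 64), 10.0)                      # flat background estimate
img_wt, img_mut = bg.copy(), bg.copy()
img_wt[roi] += 200.0                              # bright bundle signal
img_mut[roi] += 20.0                              # residual signal in "mutant"
print(integrated_density(img_wt, roi, background=bg))   # large value
print(integrated_density(img_mut, roi, background=bg))  # ~10x smaller
```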
Overexpression of Tmie increases bundle localization of Tmc1-GFP and Tmc2b-GFP
We hypothesized that if the loss of Tmie reduces Tmc localization in the hair bundle, then overexpression of Tmie would have the opposite effect. To test the consequence of overexpression of Tmie on Tmc localization, we created a second construct of tmie coupled with p2A-NLS(mCherry). The p2A linker is a self-cleaving peptide, which leads to translation of equimolar amounts of Tmie and NLS(mCherry). Hence, mCherry expression in the nucleus denotes Tmie expression in the cell (Fig 4B and 4D, lower panels). We generated a stable tmie ru1000 fish line carrying the tmie-p2A-NLS(mCherry) transgene driven by the myo6b promoter. Semi-quantitative PCR revealed that the myo6b promoter produces higher transcript levels of tmie than in non-transgenic siblings (Fig 4A). We then crossed tmie ru1000 fish carrying Tg(myo6b:tmie-p2A-NLS(mCherry)) to tmie ru1000 fish carrying either Tg(myo6b:tmc1-GFP) or Tg(myo6b:tmc2b-GFP). In the lateral cristae, we observed that overexpression of Tmie led to a robust increase in bundle expression of both Tmc1-GFP (Fig 4B) and Tmc2b-GFP (Fig 4D). We quantified GFP fluorescence in the hair bundle region of tmie ru1000 larvae and found that, compared to wild type siblings expressing only one of the tmc-GFP transgenes, co-overexpression with Tmie increased bundle expression of Tmc1-GFP by 2.4-fold (Fig 4C) and Tmc2b-GFP by 2.5-fold (Fig 4E). Combined with the finding that Tmc expression is lost in hair bundles lacking Tmie, our data suggest that Tmie positively regulates Tmc localization to the hair bundle.
We questioned whether Tmie was affecting the level of translation of tmc1/2b transcripts, but examination of GFP fluorescence in the soma region of the lateral crista revealed no difference between tmie ru1000 and sibling larvae expressing the Tmc2b-GFP transgene (S2A Fig). These results indicate that Tmie is unlikely to affect translation of tmc transcripts, reinforcing our hypothesis that Tmie regulates Tmc bundle localization.
Transgenes can effectively determine protein functional capacity
To gain a better understanding of Tmie's role in regulating the Tmcs, we characterized a new allele of tmie, t26171, which was isolated in a forward genetics screen for balance and hearing defects in zebrafish larvae. Sequencing revealed that tmie t26171 fish carry an A→G mutation in the splice acceptor of the final exon of tmie, which leads to use of a nearby cryptic splice acceptor (S3A Fig, DNA, cDNA). Use of the cryptic acceptor causes a frameshift that terminates the protein after amino acid 139 (A140X), thus removing a significant portion of the cytoplasmic C-terminus (S3A Fig, Protein). Homozygous mutant larvae exhibit severe auditory and vestibular deficits, being insensitive to acoustic stimuli and unable to maintain balance (S3A Fig, Balance). FM 4-64 labeling of tmie t26171 mutant hair cells suggests that the effect of the mutation is similar to the ru1000 mutation (S3B Fig, quantified in S3D Fig). This finding implicates the C-terminal tail, a previously uncharacterized region, in Tmie's role in MET. However, when we overexpressed a near-mimic of the predicted protein product of tmie t26171 (1-138-GFP) using the myo6b promoter, we observed full rescue of FM labeling defects in tmie ru1000 larvae (S3C Fig, quantified in S3D Fig), as well as behavioral rescue of balance and acoustic sensitivity (n = 19). These results revealed that when expressed at higher levels, loss of residues 139-231 does not have a significant impact on Tmie's ability to function.
This paradoxical finding highlighted an important advantage of the use of transgenes over traditional mutants when identifying domains that are fundamentally essential to the function of a protein. There are myriad reasons why a genomic mutation may lead to dysfunction, including reduced transcription or translation, protein misfolding and degradation, or mistrafficking. In cases where a mutated protein retains partial efficacy, exogenous expression may overcome these deficiencies by producing proteins at higher levels. This overexpression can reveal domains that are truly essential or non-essential to protein function, as seen with the differential rescue results in the tmie t26171 mutant and its transgene mimic (S3D Fig). Moreover, the use of transgenes enabled us to carry out a comprehensive structure/function analysis of Tmie. To this end, we systematically deleted or replaced regions of tmie to generate 13 unique tmie constructs (Fig 5A), and then expressed these constructs in hair cells of the tmie ru1000 mutants.

Fig 4. (A) Semi-quantitative PCR of tmie cDNA using primers within exon 4 to detect transcripts from both transgene and the endogenous gene. We generated cDNA from RNA extracted from 5 dpf whole larvae. Larvae carrying Tg(myo6b:tmie-p2A-NLS(mCherry)) had 6.5-fold more tmie transcript than wild type non-transgenic siblings. Products are from 40 cycles. Less PCR reaction (2.5x less) was loaded for gapdh to avoid saturation. (B) Confocal images of the lateral cristae in 4 dpf tmie ru1000 larvae coexpressing two transgenes, tmc1-GFP (upper panel) and tmie-p2A-NLS(mCherry) (lower panel). The p2A linker is a self-cleaving peptide that results in equimolar translation of Tmie and nuclear mCherry. (C) Plot of the integrated density of Tmc1-GFP fluorescence/crista in the ROI, expressed in arbitrary units; significance determined by one-way ANOVA. (D-E) Same as B-C except using tmc2b-GFP instead of tmc1-GFP. All statistics are mean ± SD. Scale bars are 5 μm. https://doi.org/10.1371/journal.pgen.1007635.g004
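The semi-quantitative comparison in Fig 4A amounts to a densitometric fold change normalized to a gapdh loading control, with a correction for the smaller volume of gapdh reaction loaded on the gel. A minimal sketch follows; the band intensities are hypothetical numbers chosen only so the arithmetic reproduces the reported 6.5-fold value.

```python
def fold_change(tmie_tg, gapdh_tg, tmie_wt, gapdh_wt, gapdh_loading=2.5):
    """Fold change of a tmie band between transgenic and wild-type lanes,
    normalized to gapdh. Because 2.5x less gapdh reaction was loaded,
    its band intensity is scaled back up before normalization."""
    norm_tg = tmie_tg / (gapdh_tg * gapdh_loading)
    norm_wt = tmie_wt / (gapdh_wt * gapdh_loading)
    return norm_tg / norm_wt

# Hypothetical densitometry values (arbitrary units):
print(fold_change(tmie_tg=1300.0, gapdh_tg=400.0,
                  tmie_wt=200.0, gapdh_wt=400.0))  # -> 6.5
```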
Earlier studies in zebrafish and mice proposed that Tmie undergoes cleavage, resulting in a single-pass mature protein [22,37]. To test this hypothesis, we generated the SP44-231 construct of Tmie, which replaced the N-terminus with a known signal peptide (SP) from a zebrafish Glutamate receptor protein (Gria2a). The unrelated signal peptide serves to preserve the predicted membrane topology of Tmie. We also generated a similar construct that begins at amino acid 63, where the sequence of Tmie becomes highly conserved (SP63-231). Three of the constructs contained internal deletions (Δ63-73; Δ97-113; Δ114-138). In three more constructs, we replaced part of or the entire second transmembrane helix (2TM) with a dissimilar helix from the CD8 glycoprotein (CD8; CD8-2TM; 2TM-CD8). We included our mimic of the zebrafish tmie t26171 mutant, which truncates the cytoplasmic C-terminus (1-138). Further manipulating the C-terminus, we made a construct that mimics the truncation seen in the mouse sr J mutant (1-113). In mice, this truncation recapitulates the phenotype seen in a complete deletion of Tmie [21]. In addition, the Δ114-138 construct deletes the internal region of tmie that differentiates constructs 1-113 and 1-138. We included an alternate splice isoform of tmie with a different final exon, altering the C-terminal sequence (Tmie-short). This isoform is found only in zebrafish [22] and its function has not been explored. Finally, we expressed a fragment of the C-terminus that is lost in our zebrafish tmie t26171 mutant (139-231).
Subcellular localization of mutated or chimeric Tmie reveals domains required for self-localization to the bundle
To determine the subcellular localization of the transgenic tmie constructs, we inserted the coding sequence of each construct into a plasmid containing the myo6b promoter for expression, including a C-terminal GFP tag for visualization. These plasmids were then individually co-injected into tmie ru1000 eggs with transposase mRNA to generate mosaic expression of the constructs in a subset of hair cells. At 4-6 days post injection, we imaged hair cells expressing each transgene (Fig 5B). To quantify the enrichment in the bundle versus the soma, we measured the integrated density of GFP fluorescence in a small central area of mature bundles (Fig 5C, black oval) and separately in the plasma membrane or soma-enriched compartments (Fig 5C, magenta oval). Correcting for area, we then divided the bundle values by the total values (bundle/(bundle + soma)) and expressed this as a ratio (Fig 5D). Values closer to 1 are bundle-enriched, while values closer to 0 are soma-enriched. We excluded two constructs from this analysis: CD8-GFP, because it was detected only in immature bundles (Fig 5B, CD8), and […]. Localization fell into three broad categories: bundle-enriched, soma-enriched, and equally distributed. Most of the fusion proteins were bundle-enriched, similar to full-length Tmie-GFP expression (Fig 5B and 5D). Three constructs were trafficked to the bundle but also expressed strongly in the soma (SP63-231, CD8-2TM, and 1-138). This result suggests that the deleted regions in these constructs have some role in designating Tmie as a bundle-localized protein. Also of note, the full replacement of the 2TM helix (CD8) was unable to maintain stable expression in mature bundles (S4A Fig), while […] had no effect. Only two constructs that included the second TM were soma-enriched (Tmie-short and 1-113), suggesting an inability to traffic to the bundle. These two constructs were thus excluded from further analyses. Since three of our constructs that manipulated the C-terminus showed impaired bundle targeting, we expressed a fragment of the C-terminus (139-231-GFP) to determine if it contained a bundle targeting signal. Expression of this fragment was restricted to the soma and kinocilium with little to no bundle expression (S4C Fig), and showed no functional rescue of MET activity (S4D Fig). Together, our results suggest that no single motif but rather multiple regions of Tmie contribute to its bundle localization.

[Fig 5 caption, partial] […] In the CD8, CD8-2TM, and 2TM-CD8 constructs, all or part of the 2TM is replaced by the helix from the CD8 glycoprotein (in yellow). Tmie-short is a fish-specific isoform of Tmie that contains an alternate final exon (in orange). Dotted lines represent internal deletions. (B) Representative confocal images of each construct expressed as a GFP-tagged transgene in hair cells of 4-6 dpf tmie ru1000 larvae. Expression is mosaic due to random genomic insertion into subsets of progenitor cells after single-cell injection. All images are of cells in the inner ear cristae. Scale bar is 5 μm. (C) The localization of each GFP fusion protein was determined by measuring the fluorescence/area in the bundle (b) and soma (s), and then calculating b/(b+s). (D) Enrichment in the hair bundle is displayed as a ratio for each construct, with 1 being completely bundle-enriched and 0 being completely soma-enriched. https://doi.org/10.1371/journal.pgen.1007635.g005
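The enrichment metric above reduces to simple arithmetic on two ROI measurements. As a minimal sketch of that calculation (the function name and the example numbers are ours, not from the study), the area-corrected b/(b+s) ratio could be computed as:

```python
def enrichment_ratio(bundle_intdens, bundle_area, soma_intdens, soma_area):
    """Area-corrected bundle-enrichment ratio b / (b + s).

    Inputs are integrated densities (summed pixel values) and areas of
    the bundle and soma ROIs; values near 1 indicate bundle enrichment,
    values near 0 indicate soma enrichment.
    """
    b = bundle_intdens / bundle_area  # mean fluorescence density, bundle ROI
    s = soma_intdens / soma_area      # mean fluorescence density, soma ROI
    return b / (b + s)

# Hypothetical measurements for one hair cell:
print(enrichment_ratio(5.2e4, 40.0, 1.3e4, 160.0))  # ~0.94, bundle-enriched
```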
FM labeling identifies functional regions in the second transmembrane domain and adjacent residues of Tmie
To identify regions of Tmie involved in the mechanosensitivity of hair cells, we measured the functionality of the nine tmie constructs that yielded hair-bundle expression. As in Fig 1F, we generated stable lines of each transgenic construct and quantified fluorescence in lateral line neuromasts after exposure to FM 4-64 (Fig 6). We used larvae at 6 dpf, a later stage that maximizes the number of hair cells in each neuromast. In all but one case (CD8-2TM-GFP), these fish lines contained single transgene insertions to equalize expression of the tmie construct within a clutch. In the case of CD8-2TM-GFP, we used larvae from a founder that transmitted the transgene to >10% of offspring with consistently bright expression.
Of nine constructs examined, four generated wild type levels of FM fluorescence in tmie ru1000 neuromasts (Tmie, SP44-231, Δ114-138, and 1-138; Fig 6A and 6B). Two constructs (Δ97-113 and Δ63-73) did not rescue above mutant levels of FM 4-64. While residues 63-73 have not been characterized, the Δ97-113 result is consistent with the findings of previous publications in humans and mice, showing that mutations in this intracellular region impair hearing and hair cell function [24,26]. Three constructs were capable of producing partial rescue (SP63-231, CD8-2TM, and 2TM-CD8). Each one of the five dysfunctional constructs altered part of a contiguous region of Tmie: the 2TM and adjacent domains. These results highlight this region of Tmie as vital for function. To determine whether any of the constructs also produce a dominant effect on hair-cell function, we compared FM label in wild type larvae with or without the individual transgenic tmie constructs (Fig 6D). Expression of GFP-tagged SP63-231 or Δ63-73, which yielded impaired rescue in tmie ru1000 larvae, caused reduced FM label in transgenic wild type cells (Fig 6C and 6D). Interestingly, both dominant negative constructs delete parts of the extracellular region of Tmie.

[Fig 6 caption, partial] […] Fig 1G) with DIC + GFP fluorescence. The right image is a maximum projection of the 7 sections in the soma region (magenta bracket in Fig 1G) showing FM 4-64 fluorescence. (A) Representative images of neuromasts in tmie ru1000 larvae, each stably expressing an individual tmie construct. FM fluorescence was normalized to wild type non-transgenic larvae generated with the Tmie-GFP line. (B) Box-and-whiskers plot of the integrated density of FM fluorescence/cell for each tmie construct. We normalized values to the average of wild type siblings for each construct. (C) Representative images of neuromasts in wild type larvae with or without transgene. FM fluorescence was normalized to wild type non-transgenic larvae of the Tmie-GFP line. (D) Box-and-whiskers plot of the integrated density of FM fluorescence/cell in wild type neuromasts with and without tmie transgene. We normalized values to the average of wild type siblings for each construct. Significance determined within each clutch by one-way ANOVA, n ≥ 9, **p < 0.01, ***p < 0.001, ****p < 0.0001. Scale bars are 10 μm. https://doi.org/10.1371/journal.pgen.1007635.g006
Recordings of mechanically evoked responses confirm that the second transmembrane domain and adjacent regions are required for normal hair-cell function
Bath-applied FM dye demonstrates the presence of permeable MET channels, but does not reveal any changes in mechanically evoked responses in hair cells. Therefore, we also recorded microphonics of mutant larvae expressing individual tmie transgenes.
Reduced hair-cell counts have been observed in neuromasts of MET mutants at 6 dpf but not at 2 dpf [11]; however, the amplitude of microphonics increases with age and cell counts [38]. As a compromise, we used 3 dpf larvae and recorded from the inner ear, where there is a larger population of hair cells. This earlier time point additionally allowed us to determine MET activity near the onset of mechanotransduction to rule out indirect or progressive effects of Tmie loss. We inserted a recording pipette into the inner ear cavity and pressed a glass probe against the head (Fig 7A). Using a piezo actuator to drive the probe, we delivered a step stimulus at increasing driver voltages while recording traces in current clamp (Fig 7B). For each transgenic tmie line, we measured the amplitude of the response at the onset of the stimulus (Fig 7C-7I). We limited our analysis to the lines expressing constructs that failed to fully rescue FM labeling (Fig 7E-7I). As positive controls, we used the full-length tmie-GFP line (Fig 7C) and also included the SP44-231-GFP line (Fig 7D), expressing the cleavage product mimic. Both control constructs fully rescued the responses in tmie ru1000 larvae. Consistent with a reduction in labeling with FM dye, we found that the microphonic responses were strongly or severely reduced in tmie ru1000 larvae expressing the GFP-tagged constructs SP63-231, Δ63-73, CD8-2TM, 2TM-CD8 or Δ97-113 (Fig 7E-7I). We again saw dominant negative effects in wild type larvae expressing transgenic SP63-231-GFP or Δ63-73-GFP (Fig 7E and 7F, blue traces).

[Fig 7 caption, partial] […] Plots of the mean amplitude of the response peak ± SD as a function of the stimulus intensity of the driver voltage. We used the stimulation protocol explained in B to obtain responses from larvae expressing one of our transgenic tmie constructs, as labeled. Statistical significance was determined by two-way ANOVA comparing all groups to wild type non-transgenic siblings, n ≥ 5, *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Regions of Tmie that mediate hair-cell mechanosensitivity are also required for localizing Tmc2b-GFP
After identifying functional regions of Tmie, we asked whether these regions are involved in regulating Tmc localization. To answer this question, we quantified hair bundle expression of transgenic Tmc2b-GFP in hair cells of tmie ru1000 mutant larvae stably co-expressing individual transgenic tmie constructs (Fig 8H). As in Fig 4, we tagged our tmie constructs with p2A-NLS(mCherry) so that Tmc2b-GFP expression in the hair bundles could be imaged separately. Because we did not generate stable transgenic lines for all constructs, we were concerned that variable levels of Tmie protein expression among siblings, particularly low levels, might confound the experiment. However, examination of our lowest expressing p2A-NLS(mCherry) constructs (Tmie, CD8-2TM, and Δ97-113) revealed no correlation between the mCherry and GFP signals; higher levels of nuclear mCherry signal did not correlate with higher bundle expression of GFP-tagged Tmc protein (S5A Fig). Additionally, the Tmie-p2A-NLS(mCherry) line used in S5A Fig was also used for semi-qPCR as well as quantification of Tmc1-GFP signal. This low level of mCherry signal (visualized in Fig 4B, lower panel) corresponded to a 6.5-fold increase in tmie transcript (Fig 4A) and resulted in a 2.4-fold increase in Tmc1-GFP (Fig 4C), comparable to the 2.5-fold increase of Tmc2b-GFP (Fig 4E) seen in a different Tmie-p2A-NLS(mCherry) line with brighter mCherry signal (visualized in Fig 4D, lower panel). Taken together, these results indicate that even very low-expressing transgenes produce saturating levels of Tmie protein. We confirmed the lack of correlation between mCherry and Tmc2b-GFP signals with our other tmie constructs (S5B Fig). We then proceeded to examine bundle expression of Tmc2b-GFP when co-expressed with the positive control SP44-231 and the five tmie constructs that yielded impaired mechanosensitivity.
As an additional positive control, we also included analysis of the deletion Δ114-138, a construct with full functional rescue and normal localization. As in Figs 3 and 4, we used the taller and more accessible bundles of the lateral crista for quantification. Four constructs showed full rescue of Tmc2b-GFP levels in the bundles of tmie ru1000 larvae. The SP44-231 cleavage mimic produced highly variable levels, with examples in the wild type range and others increasing Tmc2b-GFP expression well above wild type levels (Fig 8B and 8H). The higher levels of Tmc2b-GFP achieved with SP44-231 are comparable to the signal increase caused by overexpression of full-length Tmie (Figs 4D, 4E and 8A, right panel). We suspect that the exogenous Gria2a signal peptide leads to variable processing of SP44-231 and thus contributes to this variability in Tmc2b-GFP fluorescence. In tmie ru1000 larvae expressing the positive control construct Δ114-138, we observed values of Tmc2b-GFP fluorescence in the wild type range, as expected (Fig 8H, Δ114-138). Surprisingly, constructs SP63-231 (Fig 8C) and 2TM-CD8 (Fig 8F) also gave rise to wild type levels of Tmc2b-GFP (Fig 8H). When we recorded microphonics in these larvae, we found that co-expression of Tmc2b-GFP with either SP63-231 (S6A Fig) or 2TM-CD8 (S6C Fig) resulted in better functional rescue of tmie ru1000 than when either tmie construct was expressed without Tmc2b-GFP (Fig 7E and 7H). We also determined that microphonic potentials correlated with the levels of Tmc2b-GFP in the bundles of tmie ru1000 larvae co-expressing 2TM-CD8, r = 0.879, p = 0.0018 (S6D Fig). The same analysis of SP63-231 showed a positive trend, r = 0.722, yet was statistically non-significant (p = 0.1682), which may be due to the small sample size (S6B Fig). These results suggest that in the SP63-231 and 2TM-CD8 lines, functional rescue is Tmc dose-dependent.
Of the three constructs that produced little to no functional rescue, CD8-2TM (Fig 8E) and Δ97-113 (Fig 8G) had severely reduced levels of Tmc2b-GFP in hair bundles (Fig 8H). In tmie ru1000 larvae expressing the Δ63-73 construct, there was severely reduced but still faintly detectable Tmc2b-GFP signal (Fig 8D). As with the functional rescue experiments, this difference was not statistically significant (Fig 8H). Interestingly, the bulk of this signal was observed in immature bundles (Fig 8D, arrows, and S7 Fig), but there was some detectable Tmc2b-GFP signal in mature bundles (S7 Fig). To generalize our findings to Tmc1, we examined Tmc1-GFP localization in tmie ru1000 larvae co-expressing the null-function construct CD8-2TM (S8 Fig). As with Tmc2b-GFP, we detected no bundle expression of Tmc1-GFP. Overall, these results suggest that the level of functional rescue by the tmie constructs is correlated with the amount of Tmc1/2b present in the hair bundle.

[Fig 8 caption, partial] […] Δ114-138 constructs; for the SP63-231 construct, we used a mix of larvae with stable insertions or F1 offspring. (A) Sibling wild type, tmie ru1000, and tmie ru1000 expressing transgenic Tmie. For the quantification in H, Tmc2b-GFP fluorescence was measured within the ROI (right panel, black line). (B-G) Images of lateral cristae from tmie ru1000 larvae expressing individual tmie constructs tagged with p2A-NLS(mCherry), as labeled. The arrows in D point to Tmc2b-GFP in immature hair bundles. (H) Plot of the integrated density of Tmc2b-GFP fluorescence in the ROI, comparing tmie ru1000 larvae expressing a tmie construct (magenta) to wild type (black) and tmie ru1000 (gray) siblings not expressing a tmie construct. We normalized values to the average of wild type siblings for each construct. Significance for SP44-231, SP63-231, and 2TM-CD8 was determined by the Kruskal-Wallis test, for all other tmie constructs by one-way ANOVA, n ≥ 6, ***p < 0.001, ****p < 0.0001. Scale bars are 10 μm. https://doi.org/10.1371/journal.pgen.1007635.g008
Discussion
TMIE was first identified as a deafness gene in mice and humans [21,24]. The predicted product is a relatively small membrane protein (231 amino acids) containing a highly conserved region within and around the second hydrophobic domain. Previous studies established that TMIE is required for MET in hair cells [22,26,27] and is an integral member of the complex [26]. How TMIE contributes to the function of the MET complex was not clear. Our comprehensive structure-function analysis of Tmie revealed that the functional capacity of the various tmie mutant constructs is determined by their efficacy in localizing Tmc2b-GFP to the hair bundle, as summarized in Fig 9A and modeled in Fig 9B. These findings unveil a hitherto unexpected role for Tmie in promoting the localization of the channel subunits Tmc1 and Tmc2b to the site of MET. Our study broadens our understanding of the assembly of the MET complex and points to a pivotal role of Tmie in this process.
A previous report on the ru1000 mutant suggested that Tmie's role in zebrafish hair cells was developmental, with mutant lateral line hair cells showing stunted kinocilia and an absence of tip links [22]. In our hands, we did not observe any gross morphological defects in vivo or at the ultrastructural level in the inner ear, and the localization pattern and levels of Pcdh15a and Lhfpl5a were unaffected in tmie ru1000 larvae. This observation is consistent with intact hair-bundle morphology. In contrast, splayed stereocilia are a dominant feature of hair cells missing their tip links, such as those in pcdh15a or lhfpl5a mutants [18,33]. In TMIE-deficient mice, hair cell morphology is also grossly normal up to P7 [26,27]. In agreement with a previous study in mice [26], our results indicate that tmie ru1000 mutants are profoundly deaf due to ablation of MET in hair cells. We were able to fully rescue this deficit in zebrafish ru1000 mutants by exogenous expression of a GFP-tagged transgene of tmie. Exogenous expression gave rise to variable levels of Tmie-GFP in hair bundles, with lower levels revealing a punctate pattern expected for a member of the MET complex, and higher expression levels leading to expression throughout the stereocilia. Excess Tmie-GFP did not appear to cause adverse effects in hair cells, which is consistent with a previous study in the circler mouse mutant [39].
Trafficking patterns of Tmie and Tmcs suggest trafficking 'cliques' of MET proteins
There are still many open questions regarding the sequence of events for assembly of the MET complex. One question is whether assembly occurs before or after ascension of stereocilia. Previous reports demonstrate that bundle targeting of Pcdh15a relies upon the presence of Lhfpl5a [18,19,40]. Likewise, LHFPL5 requires PCDH15 for localization at stereociliary tips [19,40]. Here, we demonstrate that Tmc1 and 2b require Tmie for targeting to the bundle, while Pcdh15a and Lhfpl5a are unaffected in tmie mutants. Collectively, these findings suggest that Pcdh15a-Lhfpl5a traffic together in one unit, and Tmie-Tmcs traffic in a separate unit. This conclusion supports the idea that the MET complex proteins traffic to stereocilia in multiple groups and then assemble the full MET complex at the site of transduction.
Another interesting finding from our data is that Tmie may be the exception to the rule of co-dependent transport to the hair bundle because it retains normal trafficking patterns in the absence of individual MET proteins (Pcdh15a, Lhfpl5a, or Tmc1/2a/2b). As our experiment used an overexpressed Tmie transgene, measurement of endogenous levels of Tmie in other MET mutants would be an important follow-up experiment for future studies.
Tmie has distinct regions required for localization and function
Using our transgenic tmie constructs, we identified specific regions of Tmie that are required for trafficking or function (Fig 9). Despite reduced bundle targeting, a truncated version of Tmie containing amino acids 1-138 showed full functional rescue, though only when it was overexpressed as a transgene (S3 Fig). We speculate that higher levels of expression enabled this truncated version to overcome inefficient trafficking. Conversely, despite normal localization to the bundle, expression of the Δ97-113 construct did not rescue function at all, even when it was likewise overexpressed. These results demonstrate that Tmie's functional role is separate from its ability to target to the bundle.
In total, three constructs resulted in reduced targeting of Tmie to the bundle, namely SP63-231, CD8-2TM, and the 1-138 construct, the last of which truncates the C-terminus. As the predicted topology of Tmie places the C-terminus on the cytoplasmic side of a vesicle, this topology would expose amino acids 97-231 for recognition by trafficking machinery.
Alteration of the C-terminus results in partial or full mistargeting of Tmie. We suspected a potential bundle targeting signal in amino acids 139-231. However, when we expressed this peptide (139-231-GFP), fluorescence was diffuse throughout the cell, including the kinocilium and nucleus. These results suggest that while the C-terminus is integral to bundle targeting, there are other regions of Tmie that are required for proper stereocilia localization. These regions include a portion of the external peptide sequence and residues in the 2TM domain of Tmie (Fig 9B, blue outline). Reduced efficiency in bundle targeting may be due to impaired interactions with externally localized components of the MET complex or partial misfolding. This idea is supported by the finding that expression of constructs SP63-231 and 2TM-CD8 leads to impaired rescue of function in tmie ru1000 larvae, while the partially mislocalized truncation construct (1-138) showed full functional rescue when overexpressed as a transgene.
Tmie promotes the levels of Tmc1/2b in the hair bundle
The regulatory role of Tmie with respect to the Tmcs is strongly supported by the strikingly different effects of loss of Tmie versus overexpression of Tmie. When Tmie is absent from the bundles, so are Tmc1 and Tmc2b; when Tmie is overexpressed, the bundle levels of Tmc1 and Tmc2b are boosted as well. These results disagree with a previous finding in mice showing that Myc-TMC2 is present in hair bundles of TMIE-deficient cochlear hair cells [26]. This discrepancy may be due to different methods of expression, different hair cell types, or species-specific effects. Zhao et al. used a cytomegalovirus promoter to drive high levels of expression of Myc-TMC2 in an in vitro explant of cochlear tissue. Vestibular hair cells were not characterized, and the effects of TMIE deficiency on the localization of TMC1, which is the dominant TMC protein in mature cochlear hair cells, were not reported in their study. Further investigation is warranted to determine if the Tmie-Tmc relationship uncovered by our experiments is a conserved feature or is potentially dependent on the type of hair cell, as MET components may vary among different cell types.
One important question is whether Tmie and the Tmcs can physically interact to form a complex that is transported to the hair bundle. A direct interaction of the mouse TMC1/2 and TMIE proteins was not detected in a heterologous system [26]. However, our in vivo analysis suggests a strong dependency of the Tmcs on Tmie. There may be an indirect interaction, or an as-yet-undetected direct interaction between the Tmcs and Tmie, perhaps missed by previous experiments because the native hair cell environment is essential for the interaction to occur. When mammalian TMCs are expressed in heterologous cell types, they mainly populate inner membrane compartments such as the endoplasmic reticulum [8,10,11,15,26]. Successful folding and trafficking of the TMCs to the plasma membrane may require specialized trafficking components in the hair-cell secretory pathway. One example of such a factor is the Golgi-enriched Tomt protein, which is essential for Tmc1/2 trafficking in hair cells [11,36]; there are likely to be other components as well.
Experimental support for cleavage of Tmie's first hydrophobic region
While the membrane topology of Tmie has not been biochemically determined, Phobius software predicts an N-terminal signal peptide in mouse and human TMIE, and a transmembrane helix in zebrafish Tmie [41]. Interestingly, the orthologues in C. elegans and D. melanogaster do not contain this first hydrophobic region of Tmie. Upon removal of this region from zebrafish Tmie (construct SP44-231), we observed a localization pattern identical to full-length Tmie and full functional rescue of tmie-deficient fish. In addition, expressing the SP44-231 construct in tmie ru1000 larvae rescues Tmc2b-GFP bundle expression to wild type levels or higher. To our knowledge, these results are the first in vivo evidence that vertebrate Tmie can function without the first hydrophobic domain. Our study supports the notion that Tmie undergoes cleavage, resulting in a single-pass membrane protein that functions in the MET complex (Fig 9B).
The 2TM and intracellular neighboring domain may mediate integration within the MET complex
Only two of our Tmie constructs displayed dominant negative effects in wild type larvae (SP63-231 and Δ63-73), suggesting successful integration into the MET complex and competition or interference with endogenous Tmie. Both of these constructs delete unique parts of the extracellular region of Tmie, and have little to no rescue of mechanosensitivity in tmie ru1000 mutants. The transmembrane chimeras in our study also yield impaired rescue but do not appear to affect the function of endogenous Tmie in wild-type hair cells. These data suggest that the entire 2TM domain is required to produce the dominant negative effect on endogenous Tmie. Combined with the finding that replacement of the 2TM with an unrelated helix causes instability of Tmie in mature hair cells, we propose that the 2TM is essential for integration of Tmie into the MET complex.
One construct does not fit this hypothesis: Δ97-113, which contains the full 2TM and has no functional rescue, but does not have a dominant negative effect. However, this region contains arginine residues that have previously been implicated in human deafness [24,[42][43][44] and that in mice were linked to interactions with PCDH15-CD2 [26]. It is possible that the cytoplasmic region adjacent to the 2TM is also required for full integration into the MET complex, perhaps through interactions unrelated to the Tmcs. Interestingly, one of the mouse mutations, R93W, resulted in loss of TMIE localization at the site of MET in cochlear hair cells. In contrast to these findings, when we remove this entire intracellular region from zebrafish Tmie, it is still capable of targeting to hair bundles. This result may reflect different targeting motifs for different hair-cell types or species differences in recognition sequences for trafficking machinery.
Tmie stabilizes Tmc expression at the site of MET
When co-expressed with Tmc2b-GFP, our Tmie constructs reveal a strong link between function and Tmc bundle expression. In addition to defects in targeting Tmcs to the hair bundle (constructs Δ97-113 and CD8-2TM), our data also suggest a role for Tmie in maintaining the levels of Tmc2b in stereocilia. We previously reported that trafficking and stability/maintenance are two distinct events that can be separated experimentally by examining transgene expression in immature vs mature bundles [18]. The only region of Tmie with a clear effect on maintenance of bundle Tmc signal was amino acids 63-73. Loss of these residues resulted in Tmc2b-GFP signal in immature hair cells, suggesting proper trafficking, but a large reduction in mature cells, suggesting poor maintenance in the MET complex. Based on our data, we conclude that the first half of the transmembrane domain and the intracellular residues 97-113 are required for targeting the Tmc subunits to the site of MET (Fig 9B, yellow fill), while the extracellular residues 63-73 stabilize Tmc expression in the MET complex (Fig 9B, orange fill). Since co-expression of Tmc2b-GFP can overcome the functional deficits in constructs SP63-231 and 2TM-CD8, we propose that residues 44-62 and the second half of the 2TM are important but not absolutely essential to regulating Tmc bundle expression. This finding reinforces the significance of our data obtained with the constructs Δ63-73, CD8-2TM, and Δ97-113, which still fail to rescue Tmc2b-GFP levels. In addition, we demonstrated that CD8-2TM also does not rescue expression of Tmc1-GFP, suggesting that similar mechanisms are employed for trafficking of both the Tmc1 and Tmc2 members of the Tmc superfamily.
In sum, through a systematic in vivo analysis of tmie via transgenic expression, we identified new functional domains of Tmie. We demonstrated a strong link between Tmie's function and Tmc1/2b expression in the bundle. Evidence continues to mount that the Tmcs are pore-forming subunits of the MET channel, and our results implicate Tmie in promoting and maintaining the localization of the MET channel. The precise mechanism underlying Tmie's regulation of the Tmcs awaits further investigation.
Ethics statement
Animal research was in compliance with guidelines from the Institutional Animal Care and Use Committee at Oregon Health and Science University (protocol # IP00000100).
Zebrafish husbandry
We maintained zebrafish (Danio rerio, txid7955) at 28˚C and bred according to standard conditions. In this study, we used the following zebrafish mutant lines: tmie ru1000 [22], tmie t26171 , pcdh15a psi7 [10], lhfpl5a tm290d [45], and tomt tk256c [11]. We maintained all zebrafish lines in a Tübingen or Top long fin wild type background. We examined larvae at 3-7 days post-fertilization (dpf), of undifferentiated sex. For experiments involving single transgenes, non-transgenic heterozygotes were crossed to transgenic fish in the homozygous or heterozygous mutant background. We genotyped larvae by PCR and subsequent digestion or DNA sequencing, or by behavior phenotype prior to experiments, as detailed for each experiment below. Primers are listed in Table 1.
Transgenic constructs

The 5' entry vector contained the promoter for the myosin 6b gene, which drives expression only in hair cells. All tmie transgenic constructs were subcloned into the middle entry vector using PCR or bridging PCR and confirmed by sequencing. The primers for each vector are listed in Table 1. For GFP-tagging, we used a 3' entry vector with a flexible linker (GHGTGSTGSGSS) followed by mEGFP. For NLS(mCherry) experiments, a p2A self-cleaving peptide (GSGATNFSLLKQAGDVEENPGP) was interposed between the tmie construct and the NLS(mCherry). This causes translation of a fusion protein that is subsequently cleaved into the two final proteins. The 2TM helix replacements from residues 21-43 result in the following chimeric helices: CD8 (YIWAPLAGTCGVLLLSLVITLYC), CD8-2TM (YIWAPLAGTCGILAIIITLCCIF), and 2TM-CD8 (LWQVVGIFSMFVLLLSLVITLYC).
Multisite Gateway LR reactions [48,49] were performed to generate constructs of the form pDest(-6myo6b:[…]). To generate transgenic fish, plasmid DNA and tol2 transposase mRNA were co-injected into single-cell fertilized eggs, as previously described (Kwan et al., 2007). For each construct, 200+ eggs from an incross of tmie ru1000 heterozygotes were injected. To obtain stable transgenic lines, >24 larvae with strong marker expression were raised as potential founders. For each GFP-tagged transgene, at least two founder lines were generated and examined for visible bundle expression. To equalize expression of each transgene within a clutch, for each tmie construct we isolated a line with a transgene transmission rate of 50%, indicating a single transgene insertion. The CD8-2TM-GFP construct was the exception; we instead identified a single adult with mosaic transmission of the transgene (transmitted to >10% of offspring) and used this line in FM and microphonics experiments. Imaging during the FM experiment confirmed that CD8-2TM-GFP was consistently highly expressed (Fig 6A, CD8-2TM). For NLS(mCherry) experiments, injected fish were raised to adulthood and genotyped to identify tmie ru1000 heterozygotes and homozygotes. We identified founders for each construct and then crossed these founders to tmie ru1000 heterozygotes carrying Tg(myo6b:tmc2b-GFP). This generated offspring that expressed both transgenes in the tmie ru1000 mutant background, and we used these larvae for experiments. For SP44-231, SP63-231, CD8-2TM, and full-length tmie in the Tmc1-GFP background, stable transgenic lines were generated from the founder before experiments were carried out.
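The use of a 50% transmission rate as evidence of a single insertion follows from Mendelian segregation: a founder heterozygous for n unlinked insertions transmits at least one copy to 1 − 0.5^n of its offspring. A small illustrative check of this expectation (ours, not from the paper):

```python
# Fraction of offspring inheriting at least one transgene copy from a
# founder heterozygous for n independently segregating insertions.
for n in (1, 2, 3):
    print(f"{n} insertion(s): {1 - 0.5 ** n:.3f}")
# 1 -> 0.500 (the ~50% criterion), 2 -> 0.750, 3 -> 0.875
```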
Microscopy
We anesthetized live larvae with E3 plus 0.03% 3-amino benzoic acid ethylester (MESAB; Western Chemical) and mounted them in 1.5% low-melting-point agarose (Sigma-Aldrich cas. # 39346-81-1), with the exception of the morphology images from Figs 1A and 7A, in which larvae were pinned with glass rods and imaged in E3 or extracellular solution containing MESAB. We captured the image in Fig 7A at room temperature using a Hamamatsu digital camera (C11440, ORCA-flash2.8), MetaMorph Advanced NX software, and an upright Leica DMLFS microscope. We used differential interference contrast (DIC) with a Leica HC PL Fluotar 10x/0.3 lens. For all imaging except Fig 7A, images were captured at room temperature using an Axiocam MrM camera, Zeiss Zen software, and an upright Zeiss LSM700 laser-scanning confocal microscope. We used DIC with one of two water-immersion lenses: Plan Apochromat 40x/1.0 DIC, or Acroplan 63x/0.95 W. Laser power and gain were unique for each fluorophore to prevent photobleaching. We averaged 2x or 4x for each image, consistent within each experiment. The Tmc1-GFP and Tmc2b-GFP transgenes are very dim, and high laser power (4%) and gain (1100) were necessary to detect signal in wild types. At these settings, autofluorescence from other wavelengths can falsely enhance the emission peak at 488 nm. To reduce detection of autofluorescence, we simultaneously collected light on a second channel with an emission peak at 640 nm.
Transmission electron microscopy (TEM)
We sorted 5 dpf zebrafish larvae by behavior (tap sensitivity and balance), then fixed them overnight at 4˚C in PBS containing fresh 1% EM-grade formaldehyde (Electron Microscopy Sciences, Hatfield, PA) and fresh 2% EM-grade glutaraldehyde (Tousimis Research Corporation, Cat # 1060A). For further fixation and contrast, we incubated larvae for 10 min on ice with 1% osmium tetroxide (Electron Microscopy Sciences, Hatfield, PA), followed by 1 hr on ice in 1% uranyl acetate (Electron Microscopy Sciences, Hatfield, PA). We dehydrated larvae in a graded series of EM-grade acetone (Electron Microscopy Sciences, Hatfield, PA), then embedded in Embed-812 (Electron Microscopy Sciences, Hatfield, PA). We collected thin sections on PELCO 200 mesh nickel grids (Ted Pella, Redding, CA), and stained with 4% uranyl acetate and Reynolds lead citrate. We collected electron microscopy images on an FEI Tecnai 12 BioTWIN transmission electron microscope (ThermoFisher Scientific, Hillsboro, OR) operated at an 80 kV accelerating voltage.
Auditory evoked behavioral response (AEBR)
Experiments were conducted as previously described [50]. Briefly, 6 dpf larvae were placed in six central wells of a 96-well microplate mounted on an audio speaker. Pure tones were played every 15 s for 3 min (twelve 100 ms stimuli at 1 kHz, sound pressure level 157 dB, denoted by asterisks in Fig 1B). Responses were recorded in the dark inside a Zebrabox monitoring system (ViewPoint Life Sciences). Peaks represent pixel changes from larval movement. A response was considered positive if it occurred within two seconds after the stimulus and surpassed the threshold to be considered evoked rather than spontaneous (Fig 1B, green indicates movement detected, magenta indicates threshold surpassed). For each larva, we used the best response rate out of three trials. Response was quantified by dividing the number of positive responses by the total stimuli (12) and converting to a percent. If a larva moved within two seconds before a stimulus, that stimulus was dropped from the trial data set (i.e. the number of total stimuli would become 11). Each data point on the graph in Fig 1C is the percent response of an individual larva. We used a two-tailed unpaired t-test with Welch's correction to determine significance, ****p < 0.0001. Wild type and mutant larvae were genotyped by FM 1-43 labeling.
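The scoring rule above (drop any stimulus preceded by movement, then take positives over the remaining stimuli) can be written compactly; this sketch uses hypothetical trial data and naming of our own:

```python
def aebr_percent_response(responses, moved_before):
    """Percent response for one AEBR trial.

    responses / moved_before: per-stimulus booleans -- whether an evoked
    response was scored, and whether the larva moved in the 2 s before
    the stimulus (which drops that stimulus from the denominator).
    """
    kept = [r for r, m in zip(responses, moved_before) if not m]
    return 100.0 * sum(kept) / len(kept)

# 12 stimuli; the larva moved before stimulus 5, so 11 remain.
resp = [1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0]
moved = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(aebr_percent_response(resp, moved))  # 6/11 -> ~54.5%
```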
Immunofluorescent staining
We used an anti-Pcdh15a monoclonal antibody directed against amino acids 1-324 [10] as described previously [18]. In brief, we sorted wild type and ru1000 larvae at 5 dpf by behavior (tap response and balance). We then fixed 8 larvae per 2 ml microtube in Phosphate Buffered Saline + 0.01% Tween-20 (PBST) + 4% paraformaldehyde, rotating overnight on a nutator at 4˚C. We washed with PBST 3x for 10 min each, then permeabilized with PBS + 0.5% Triton X-100 on a shaking table (50 rpm) for 1 hour, then at 4˚C overnight without shaking. We blocked with PBS + 1% DMSO + 5% goat serum + 1% Bovine Serum Albumin (BSA, Sigma-Aldrich Lot # SLSF5374V) for a minimum of 2 hours at room temperature on a shaking table (50 rpm). We applied the mouse anti-1C4 Pcdh15a antibody at 1:200 in blocking buffer overnight on the nutator at 4˚C. We washed with PBST 3x for 15 minutes each on the shaking table (50 rpm). We applied blocking buffer + secondary antibody, 546-conjugated goat anti-mouse IgG at 1:500 concentration (Life Technologies), and also included phalloidin-488 at 1:100 to visualize actin filaments in hair bundles, on a shaking table (50 rpm) for 4-5 hours in the dark at room temperature. We washed with PBST 3x for 10 minutes each and stored the larvae at 4˚C before imaging.
cDNA generation by Reverse Transcription Polymerase Chain Reaction (RT-PCR) and semi-quantitative PCR
For S3 Fig and Fig 4A, we extracted total RNA using the RNeasy mini kit (Qiagen). Larvae were homogenized using a 25 gauge syringe (Becton Dickinson, ref # 309626). To reverse transcribe cDNA we used the RNA to cDNA EcoDry Premix (Clontech, Cat # 639549). We then performed PCR on the cDNA using High Fidelity Phusion polymerase (New England Biolabs, Cat # M0530). To amplify the short isoform of Tmie (Tmie-short) and the t26171 allele, we sorted 30 wild type and 30 t26171 larvae by behavior (tap sensitivity and balance defect) at 5 dpf and used the pooled cDNA as template for the PCR reactions. Primers are listed in Table 1. Both transcripts were verified by DNA sequencing.
For Fig 4A, we sorted non-transgenic wild type (heterozygote) and tmie ru1000 (homozygote) larvae by behavior at 5 dpf; the transgenic pool contained a mix of tmie ru1000 heterozygotes and homozygotes (no behavior difference because the full-length tmie transgene rescued the phenotype). The tmie transgene was a single insert with low mCherry expression, the same line used for the data in Fig 4B and 4C, and S5 Fig, Tmie. For each genotype, n = 20 larvae were homogenized. We performed PCR at 30, 35, and 40 cycles. We ran products on a 2% gel and loaded 25 μl of tmie PCR product; we loaded only 10 μl of gapdh product to avoid saturating the bands. We quantified the 40 cycle bands in ImageJ using gapdh levels to normalize tmie levels. Primers are listed in Table 1; both tmie and gapdh produced 98 bp amplicons. The primers for tmie amplified a region within exon 4 in order to detect transcripts from both the transgene and endogenous tmie.
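The normalization step is plain arithmetic on band intensities; as an illustration (the intensity numbers are invented placeholders, chosen only to reproduce the reported ~6.5-fold difference):

```python
# ImageJ integrated densities of the 40-cycle bands (hypothetical values).
tmie  = {"wt": 1.0e4, "tg": 5.85e4}  # tmie amplicon, per genotype pool
gapdh = {"wt": 2.0e4, "tg": 1.8e4}   # loading control

norm = {g: tmie[g] / gapdh[g] for g in tmie}  # gapdh-normalized tmie levels
print(f"fold vs wild type: {norm['tg'] / norm['wt']:.1f}")  # -> 6.5
```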
FM 1-43 and FM 4-64 labeling
Larvae were briefly exposed to E3 containing either 3 μM N-(3-Triethylammoniumpropyl)-4-(4-(Dibutylamino)styryl)Pyridinium Dibromide (FM 1-43, Life Technologies) or 3 μM of the red-shifted N-(3-triethylammoniumpropyl)-4-(6-(4-(diethylamino)phenyl)hexatrienyl)pyridinium dibromide (FM 4-64; Invitrogen). After exposure for 25-30 seconds, larvae were washed 3x in E3 and neuromasts were imaged from the top down. Neuromasts were chosen based upon their orientation, with bundles pointing upward preferred. Typically, three neuromasts were examined per larva, from the head or trunk depending on the best angle; we never observed differences in posterior/anterior labeling. Laser power was adjusted for each experiment to avoid saturation of pixels but was consistent within a clutch. FM levels were quantified in ImageJ [51] as described previously [10]. In brief, maximum projections of each neuromast were generated using seven optical sections, beginning at the cuticular plate and moving down through the soma (magenta bracket, Fig 1G). We then measured the integrated density of the channel with an emission peak at 640 nm for FM 4-64, and at 488 nm for FM 1-43. This integrated density value was divided by the number of cells, thus converting each neuromast into a single plot point of integrated density per cell (IntDens/cell). Statistical analyses were always performed between direct siblings. For Fig 6, individual values were divided by the mean of the sibling wild type neuromasts in order to display the data as a percent of wild type, making it easier to compare across groups. Statistical significance was determined within an individual clutch using one-way ANOVA. We PCR-genotyped all larvae from the Tmie-GFP line; after confirming results, for all other tmie construct experiments, we continued to genotype transgenic larvae by PCR but genotyped non-transgenic wild type and tmie ru1000 larvae by behavior and expected FM labeling patterns.
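A minimal sketch of the IntDens/cell quantification described above, assuming the image stack is already loaded as a NumPy array (the function name and structure are ours, not the authors' ImageJ workflow):

```python
import numpy as np

def fm_intdens_per_cell(stack, start_z, n_cells, n_sections=7):
    """One IntDens/cell data point for a neuromast.

    stack: 3D array (z, y, x) of the FM channel. A maximum projection of
    n_sections slices starting at the cuticular plate is summed
    (integrated density) and divided by the hair-cell count.
    """
    projection = stack[start_z:start_z + n_sections].max(axis=0)
    return projection.sum() / n_cells

# To display as percent of wild type within a clutch:
# percent_wt = 100 * value / np.mean(wild_type_sibling_values)
```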
Microphonics
Larvae at 3 dpf were anesthetized in extracellular solution (140 mM NaCl, 2 mM KCl, 2 mM CaCl2, 1 mM MgCl2, and 10 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES); pH 7.4) containing 0.02% 3-amino benzoic acid ethylester (MESAB; Western Chemical). Two glass fibers straddled the yolk to pin the larvae against a perpendicular cross-fiber. Recording pipettes were pulled from borosilicate glass with filament (O.D. 1.5 mm, I.D. 0.86 mm, 10 cm length; Sutter, item # BF150-86-10, fire polished). Using the Sutter Puller (model P-97), we pulled the pipettes into a long shank with a resistance of 10-20 MΩ. We then used a Sutter Beveler with impedance meter (model BV-10C) to bevel the edges of the recording pipettes to a resistance of 3-6 MΩ. We pulled a second pipette to a long shank and fire polished it to a closed bulb, and then attached this rod to a piezo actuator (shielded with tin foil). The rod was then pressed to the front of the head behind the lower eye, level with the otoliths in the ear of interest, to hold the head in place while the recording pipette was advanced until it pierced the inner ear cover. Although it has been demonstrated that the size of the response is unchanged by entry point [52], we maintained a consistent entry point dorsal to the anterior crista and lateral to the posterior crista (see Fig 7A). After the recording pipette was situated, the piezo pipette was then moved back to a position in light contact with the head. We drove the piezo with a High Power Amplifier (piezosystem jena, System ENT/ENV, serial # E18605), and recorded responses in current clamp mode with a patch-clamp amplifier (HEKA, EPC 10 usb double, serial # 550089). Each stimulus was of 20 ms duration, with 20 ms pre- and post-stimulus periods. We used either a sine wave or a voltage step and recorded at 20 kHz, collecting 200 traces per experiment. In Fig 1H, we used a 200 Hz sine wave at 10 V, based on reports that 200 Hz elicited the strongest response [38]. In Fig 7, we used multiple step stimuli at varying voltages (2 V, 3 V, 4 V, 5 V, 6 V, and 10 V). The piezo signal was low-pass filtered at 500 Hz using the Low-Pass Bessel Filter 8 Pole (Warner Instruments). Microphonic potential responses were amplified 1000x and filtered between 0.1-3000 Hz by the Brownlee Precision Instrumentation Amplifier (Model 440). We used Igor Pro for analysis. We averaged each set of 200 traces to generate one trace response per fish, then measured the baseline-to-peak amplitude. These amplitudes were used to generate the graphs in Fig 7. Statistical significance was determined by two-way ANOVA comparing all groups to wild type non-transgenic siblings. We used PCR to genotype larvae.
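The trace analysis amounts to averaging the 200 repetitions and taking a baseline-to-peak measurement; a minimal NumPy equivalent of that Igor Pro step (parameter names are ours) might look like:

```python
import numpy as np

def microphonic_amplitude(traces, fs=20_000, pre_stim_ms=20.0):
    """Baseline-to-peak microphonic amplitude for one larva.

    traces: array (n_traces, n_samples), e.g. 200 current-clamp sweeps
    sampled at 20 kHz with a 20 ms pre-stimulus period. Sweeps are
    averaged, the pre-stimulus mean serves as baseline, and the largest
    deflection after stimulus onset is returned.
    """
    avg = traces.mean(axis=0)
    onset = int(pre_stim_ms * 1e-3 * fs)  # sample index of stimulus onset
    baseline = avg[:onset].mean()
    return np.abs(avg[onset:] - baseline).max()
```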
Quantification of fluorescence signal in the ROI
Using ImageJ, maximum projections of each crista were generated for analysis (5 sections per stack for Tmc1-GFP in Fig 3D and […]). In the ROI, we quantified the integrated density of the channel with an emission peak at 480 nm for GFP-tagged constructs, and with an emission peak at 640 nm for mCherry. For GFP-tagged constructs, this was repeated in the region above the bundles containing only inner ear fluid and the kinocilia in order to subtract background fluorescence. Background for nuclear mCherry was measured from the soma region between the bundle and nuclei. Each lateral crista generated one data point in all quantification graphs. In some cases, we saw single cells that appeared to have a GFP fill, probably due to clipping of the GFP tag. We excluded these cells from analyses, since they falsely increased the signal. Likewise, for quantification of soma GFP, we excluded cristae with immature hair cells that highly expressed the myo6b-driven transgene. Due to the 3D nature of the mound-shaped cristae, it was difficult to completely exclude the apical soma region, leading the bundle signals to average above zero in tmie ru1000 expressing either Tmc1-GFP or Tmc2b-GFP. For determination of significance, we used the Kruskal-Wallis test for quantification of Tmc2b-GFP bundle signal in the background of co-expressed SP44-231, SP63-231, and 2TM-CD8; all other quantifications with three or more groups used one-way ANOVA. For comparison of two groups, we used two-tailed unpaired t-tests with Welch's correction. We PCR-genotyped all larvae from Fig 3B-3J; when introducing co-expression of tmie constructs, we continued to PCR-genotype larvae containing the tmie transgene but genotyped non-tmie-transgene wild type and tmie ru1000 larvae by behavior.
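The background correction described above can be read as scaling the background ROI to the signal ROI before subtracting; a small sketch under that assumption (function and variable names are ours):

```python
def background_corrected_intdens(roi_intdens, roi_area, bg_intdens, bg_area):
    """Background-corrected integrated density for one crista ROI.

    The background ROI (inner-ear fluid/kinocilium region for GFP, or
    the soma region between bundle and nuclei for nuclear mCherry) is
    converted to a per-pixel value and scaled to the signal ROI area.
    """
    bg_per_pixel = bg_intdens / bg_area
    return roi_intdens - bg_per_pixel * roi_area
```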
Statistical power
In experiments subjected to quantitative analysis, we used G*Power [53] to determine the sample size required. For microphonics, we used the strongest driver stimulus setting (10 V) in these determinations.

[Supplementary figure caption, fragment] Plot of the integrated density of Tmc2b-GFP fluorescence in the ROI of lateral cristae from 4 dpf larvae. The ROI in A and D is the soma region, in B and E is the whole hair cell, and in C and F is a subtraction of whole cell fluorescence minus soma fluorescence to roughly determine the relative contribution of bundle signal. Significance was determined by two-tailed unpaired t-test with Welch's correction, **p < 0.01, ****p < 0.0001. (TIF)
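For readers without G*Power, an analogous a priori sample-size calculation can be done in Python. The sketch below solves for a per-group n with a two-sample t-test at 80% power; the effect size and test family are illustrative choices of ours, not the authors' exact settings:

```python
# Rough stand-in for a G*Power a priori sample-size calculation.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.5,  # hypothetical Cohen's d
    alpha=0.05,
    power=0.80,
)
print(round(n_per_group))  # minimum animals per group under these assumptions
```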
S3 Fig. Differential effects on function with a genomic mutation and a transgene mimic.
(A) Data for a novel mutant allele of tmie, t26171. DNA: Chromatograms of the DNA sequence of tmie in wild type (above) and tmie t26171 (below) showing the genomic region where the mutation occurs. An adenine is mutated to a guanine in the splice acceptor (black box, above) of the final exon of tmie, exon 4. The dashed black box below indicates the mutated original splice acceptor site. Use of a cryptic splice acceptor (black box, below) 8 nucleotides downstream causes a frameshift and an early stop codon (*). cDNA: Chromatogram of the DNA sequence from RT-PCR of tmie t26171 larvae bridging exons 3 and 4. Protein: The predicted protein products, shown here as a two-pass transmembrane protein. The wild type protein has many charged residues (positive in light gray, negative in dark gray) that are lost in tmie t26171. Balance: Photos of wild type and tmie t26171 larvae, taken with a hand-held Canon camera. The arrow points to a larva that is upside-down, displaying a classic vestibular phenotype. (B) Top-down view of a representative neuromast after exposure to FM 4-64, imaged using confocal microscopy. The first panel is a single plane through the soma region, while the second panel is a maximum projection of 7 panels through the soma region, beginning at the cuticular plate (as denoted by the magenta bracket in Fig 1G). (C) Same as (B) except that the first panel shows the bundle region so that 1-138-GFP can be visualized in bundles (as depicted by the dashed green line, Fig 1G). The transgene is driven by the myo6b promoter. (D) Plot of the integrated density of FM fluorescence per cell. We normalized values to the average of wild type siblings. Displayed wild type and tmie ru1000 data are from siblings of Tg(1-138-GFP); tmie ru1000, and are the same values reported in Fig 6. Data for tmie t26171 are from a separate experiment. Statistical significance was determined by one-way ANOVA, ****p < 0.0001. Scale bar is 10 μm. (TIF)

S4 Fig. Expression pattern and functional rescue by tmie constructs CD8 and 139-231. All images were captured using confocal microscopy. (A) Stereocilia of a neuromast viewed from above. The same neuromast was imaged at 4 dpf and 6 dpf. In hair cells expressing CD8-GFP, signal was initially detected in immature bundles, but this expression was only detectable in the soma by 6 dpf as the cells matured (n = 10 cells). […] Significance was determined by one-way ANOVA, n ≥ 7, ****p < 0.0001. (TIF)

S1 Data. All data used for quantifications in this study. (XLSX)

[Acknowledgments, fragment] […] microscopy and to Jim Hudspeth for the tmie ru1000 fish line. We also thank Eliot Smith for feedback on the manuscript, and Leah Snyder and Lisa Hiyashi for laboratory support.
Ni nanocatalysts supported on mesoporous Al2O3–CeO2 for CO2 methanation at low temperature
The selectivity and activity of a nickel catalyst for the hydrogenation of carbon dioxide to form methane at low temperatures could be enhanced by mesoporous Al2O3–CeO2 synthesized through a one-pot sol–gel method. The performances of the as-prepared Ni/Al2O3–CeO2 catalysts exceeded those of their single-Al2O3 counterpart, giving a carbon dioxide conversion of 78% with 100% selectivity for methane during 100 h of testing, without any deactivation, at the low temperature of 320 °C. The influence of CeO2 doping on the structure of the catalysts, the interactions between the mesoporous support and nickel species, and the reduction behaviors of Ni2+ ions were investigated in detail. In this work, the addition of CeO2 to the composites increased the oxygen vacancies and active metallic nickel sites, and also decreased the size of the nickel particles, thus significantly improving the low-temperature catalytic activity and selectivity.
Introduction
Natural gas is a potential clean fuel as well as an important feedstock used to produce other key industrial chemicals. The process of carbon dioxide (CO2) hydrogenation to produce methane (CH4) is a promising route for recycling CO2 captured from the combustion of fossil fuels.1-4 CO2 methanation, also known as the Sabatier reaction (4H2 + CO2 → CH4 + 2H2O, ΔH(298 K) = −165 kJ mol⁻¹), is exothermic and thermodynamically favoured at low temperatures, but there are significant kinetic barriers,5 and thus it still remains a big challenge to develop a catalyst with both excellent catalytic activity and selectivity at low temperatures. Great efforts have been made to study metal-supported catalysts for the hydrogenation of CO2 to CH4. Compared with expensive noble metals (Rh, Ru, Pd) and other common transition metals (Fe, Co),6-11 Ni-based composites have so far been the most extensively used in CO2 hydrogenation to CH4 because of their low cost, excellent catalytic activities and selectivity.12-16 However, the sintering of Ni nanoparticles at relatively high reaction temperatures and the deposition of carbon lead to rapid deactivation during the reaction process.17 Therefore, it is desirable to explore novel nanocatalysts which are highly efficient, mechanically resistant, chemically and physically stable, and resistant to sintering.
Many strategies have been proposed to alleviate the fast deactivation and low CH4 selectivity of catalysts, such as the modification of catalytic supports, the addition of structural or electronic promoters, and adjustments to the preparation routes for the catalysts.18-23 Among these, the modification of supports has drawn much attention because changes to the metal-support interactions affect the reactivity and bonding of chemisorbed molecules as well. For instance, MgO and ZrO2 were investigated for their capacities to improve the catalytic activity and selectivity of heterogeneous catalysts.24-27 In general, CeO2 has acted as an electronic and structural promoter to enhance the performance of Ni-based catalysts by reinforcing the thermal stability, and improving the exchange of oxygen species as well as the uniform distribution of metals over the catalyst.28-30 Here, we describe a Ni-modified catalyst loaded on an Al2O3-CeO2 support prepared through a one-pot sol-gel method, and demonstrate its activity for the hydrogenation of CO2. The increased quantity of active nickel sites, combined with the oxygen vacancies of the composite support promoted by CeO2, leads to an excellent performance in the hydrogenation of CO2 to form CH4. Although there are some reports in the literature regarding CeO2-based composites for CO2 methanation, most of these have used CeO2 as a separate carrier or promoter, and very few reports have focused on the Ce species as both a promoter and a carrier at the same time. The low content of Ce and the poor interaction between CeO2 and Al2O3 in composites prepared in previous studies have led to inferior catalytic performance.31-33 In this study, we have developed a mild method to construct Al2O3-CeO2 composites, namely a one-step sol-gel method. These Al2O3-CeO2 composites have high redox activity, high numbers of oxygen vacancies, resistance to sintering and excellent thermal stability. Our Al2O3-CeO2-1.0-supported Ni catalyst exhibited 100% selectivity for CH4 with 78% CO2 conversion in a 100 h test at the low temperature of 320 °C.

Catalyst preparation

[…] (25 mL) was added dropwise to the above solution with stirring until gels formed. After being aged at room temperature for 48 h, the gels were solvent-exchanged five times with EtOH to remove impurities, and dried at 80 °C for two days. The white product Ni/Al2O3-CeO2-1.0 (Al/Ce mole ratio = 1) was obtained by calcining the powder at 500 °C for 7 h. In the composites, the total molar amount of Al and Ce was fixed at 0.04 mol, and the Ni content was maintained at 10 wt%. The resulting porous 10Ni/Al2O3-CeO2 catalysts were named Ni/Al2O3-CeO2-x (x = 10, 5.0, 2.5, and 1.0), where x represents the Al/Ce ratio. Single Ni/Al2O3 catalysts were prepared for comparison using the same process but without adding the Ce source.
Material characterizations
X-ray Photoelectron Spectroscopy (XPS) was conducted on an ESCALAB250Xi XPS spectrometer (Thermo Fisher Scientific), and the binding energies of all photoelectron peaks were calibrated using C 1s spectra (binding energy at 284.8 eV). Powder X-ray Diffraction (PXRD) characterization was performed on a Smartlab diffractometer (Rigaku) with filtered Cu Kα radiation (λ = 1.5405 Å). N2 adsorption and desorption isotherms were recorded on an Autosorb iQ2 analyzer (Quantachrome) in a liquid nitrogen bath at 77 K. H2 temperature-programmed reduction (H2-TPR) was conducted using an Altamira AMI 200-R-HP unit with a thermal conductivity detector (TCD). Thermogravimetric analysis (TGA) was performed on a DTG-60 thermal gravimetric analyser (Shimadzu) in an air atmosphere. All prepared catalysts were stored in an inert glovebox (O2 < 0.1 ppm, H2O < 0.1 ppm, Mikrouna) before use and characterization.
Catalytic tests
CO2 methanation was performed in a continuous fixed-bed reactor consisting of a stainless steel tube with a length of 330 mm and an inner diameter of 12 mm, at normal pressure and various temperatures. Briefly, 0.5 g of catalyst was mixed with an equivalent weight of quartz sand (40-70 mesh) and reduced in situ under pure H2 with a gas hourly space velocity (GHSV) of 2000 mL g⁻¹ h⁻¹ at 500 °C for 9 hours before the catalytic test. Then the instrument was cooled to 160 °C and a mixed stream of CO2 and H2 (volumetric ratio H2/CO2 = 4) was introduced into the gas circuit as the feedstock. The gases in the outflow were analysed using an online gas chromatograph (Fuli 9790II). CO2 conversion (X_CO2) and CH4 selectivity (S_CH4) were determined by the following equations:

X_CO2 (%) = (W_CO2,in − W_CO2,out) / W_CO2,in × 100

S_CH4 (%) = W_CH4,out / (W_CH4,out + W_CO,out) × 100

where W_CO2,in denotes the moles of CO2 in the feedstock, and W_CO2,out, W_CO,out, and W_CH4,out denote the carbon moles of CO2, CO and CH4 at the reactor outflow, respectively.
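Under the definitions above, conversion and selectivity are straightforward carbon-balance ratios; a minimal sketch (variable names and the example outlet composition are ours):

```python
def co2_conversion(w_co2_in, w_co2_out):
    """X_CO2 (%) from inlet and outlet CO2 molar flows."""
    return 100.0 * (w_co2_in - w_co2_out) / w_co2_in

def ch4_selectivity(w_ch4_out, w_co_out):
    """S_CH4 (%) on a carbon basis among CO2-derived products."""
    return 100.0 * w_ch4_out / (w_ch4_out + w_co_out)

# Hypothetical outlet composition reproducing the headline result:
print(co2_conversion(1.00, 0.22))  # 78.0 % conversion
print(ch4_selectivity(0.78, 0.0))  # 100.0 % selectivity
```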
Characterization of the catalysts
The X-ray powder diffraction (PXRD) patterns of fresh Ni/Al2O3-CeO2-x […]. The textural properties of the catalysts with different molar ratios of Al/Ce were evaluated by N2 sorption isotherms at 77 K (Fig. 2). Similar to the single Ni-Al2O3 sample, all the Ni/Al2O3-CeO2 composites displayed type IV isotherms with large hysteresis loops, demonstrating the existence of mesopores in these composite supports. The pristine mesoporous Al2O3-modified Ni catalyst had a high specific surface area of 291 m² g⁻¹ with an average pore volume of 0.35 cm³ g⁻¹. As the Ce loading in the Ni/Al2O3-CeO2-x composites was increased, the region of the hysteresis loops tended to decrease (Fig. 2A); at the same time, the average pore diameters widened gradually in the range 3.66 to 4.35 nm (Fig. 2B), indicating that the Ce and Ni species might occupy the small pores of Al2O3 or cause partial collapse of the pristine structures. The basic data on the catalysts are listed in Table S1.† The prepared Ni/Al2O3-CeO2-x samples exhibited obvious decreases in pore volumes and specific surface areas, but an increase in pore diameters in comparison to the Ni/Al2O3 catalyst. These results may be ascribed to the covering of the Al2O3 surface by Ce or Ni species, or the aggregation of NiO species.
Effect of Ce content
H2-TPR was used to probe the reduction behaviour of the Ni/Al2O3-CeO2-x catalysts (Fig. 3), and the Ni/Al2O3 catalyst was also measured for comparison. The Ce content had a remarkable effect on the reduction of Ni2+ ions in the composites. With increasing amounts of Ce in the Ni/Al2O3-CeO2 composites, the reduction peaks of the NiO species shifted gradually to lower temperatures (from 624 °C for Ni/Al2O3 to 504 °C for Ni/Al2O3-CeO2-1.0). This indicates that doping cerium into the support weakened the strong interaction between Al2O3 and NiO through the formation of the Al2O3-CeO2 composite, which was beneficial for the reduction of NiO. The reduction peak at around 624 °C observed for the Ni/Al2O3 sample can be ascribed to the intense interaction between the Al2O3 support and the NiO species. For the Ce-doped samples, the reduction peaks of Ni/Al2O3-CeO2-x shifted steadily to lower temperature with increasing Ce loading, and the H2 consumption also increased, which can be attributed to the Ce3+/Ce4+ couple creating both bulk and surface oxygen vacancies. 35 The inferior reduction behaviour of the Ni-based species on mesoporous Al2O3 led to inadequate numbers of active metallic Ni sites in comparison with the Ni catalysts supported on the Al2O3-CeO2 composites. The best reduction behaviour of the NiO species in this study was obtained for the catalyst with an Al/Ce molar ratio of 1.0.
The valence state of Ni, the interactions between NiO species and the Al2O3-CeO2 support, and the surface chemical environment of the fresh Ni/Al2O3-CeO2-x catalysts were further revealed by XPS (Fig. 4). The Ni 2p3/2 binding energy at 855.9 eV is characteristic of Ni2+, and no obvious peak shift could be detected for Ni/Al2O3-CeO2-x regardless of the Al/Ce ratio (Fig. 4A), demonstrating that the Ce content did not change the chemical environment of the NiO species dispersed on the surface of the composites. Combining this observation with the PXRD analysis, we infer that the nickel existed on the catalyst surface mainly as highly dispersed NiO species. However, the intensity of the Ni2+ peak strengthened with increasing Ce content, indicating the generation of vast numbers of active sites on the support surface in the reduction process, which was further confirmed by XPS of the spent catalysts (Fig. 4B). The XPS peak at around 529.9 eV corresponded to lattice oxygen (OL) on the surface of CeO2 or Al2O3 (Fig. 4C and D), and another peak at around 530.9 eV was assigned to adsorbed oxygen (OA) on the surface. 36 Detailed information on the OA to OL ratio for the fresh composites, based on the OL and OA area percentages, is summarized in Table S2.† The OA and OL peaks of the Ni-supported Al2O3-CeO2 catalysts were located in the ranges 531.2-531.9 eV and 529.6-531.0 eV, respectively. With increasing Ce content, the peaks shifted gradually to lower binding energies, which may be ascribed to the ever-increasing numbers of oxygen vacancies on the surface of the Al2O3-CeO2 catalysts at higher Ce content, thus contributing to the adsorption and conversion of CO2 by the catalysts. 37
Catalytic performance
The catalytic hydrogenation of CO2 to CH4 over the Ni/Al2O3-CeO2-x catalysts was performed in a fixed-bed reactor (GHSV of 6000 mL g−1 h−1, H2/CO2 = 4.0, atmospheric pressure, temperature varied from 150 to 450 °C). As the reaction temperature was progressively increased, the CO2 conversion first increased for all catalysts, reached a peak value at an optimum reaction temperature, and then started to decrease (Fig. 5A). The catalytic activity declined when the temperature was increased further to 350 °C, which can be ascribed to the endothermic reverse reaction becoming favoured at higher temperatures. The CeO2-modified catalysts displayed obviously higher CH4 selectivity than the single Ni/Al2O3 sample (Fig. 5B). The amount of CeO2 in the Al2O3-CeO2 composite had a critical impact on the catalytic performance, especially at lower temperature. The Ni/Al2O3 catalyst without CeO2 displayed a low CO2 conversion of only 9.8% at the low reaction temperature of 250 °C, but when we introduced trace amounts of Ce species into the catalyst (Ni/Al2O3-CeO2-10), the conversion of CO2 increased sharply to 42.9%. It was noteworthy that the CO2 conversion increased step-by-step as the CeO2 loading was raised in the Ni-modified Al2O3-CeO2 samples. Apparently, with an increase of CeO2 content, a lower temperature was sufficient to reach the same CO2 conversion level; the excellent catalytic performance of 78% CO2 conversion with 100% CH4 selectivity was obtained for the Ni/Al2O3-CeO2-1.0 composite catalyst at the relatively low temperature of 320 °C.
In order to investigate the effect of the Ce species on the long-term durability of the catalyst, stability measurements were performed (Fig. 6). Indeed, the loading of CeO2 in the composite had a significant influence on the catalytic performance in CO2 methanation. The Ni/Al2O3-CeO2-1.0 sample exhibited 78% CO2 conversion with almost 100% CH4 selectivity throughout the 100 h test, demonstrating excellent long-term stability and selectivity, and showing that CeO2 doping significantly improved the long-term stability of Ni/Al2O3 catalysts. Evidently, the introduction of CeO2 promoted catalytic stability and activity at the same time, which may be attributed to the generation of oxygen vacancies on the surface of the support and the increased metallic nickel surface area, as evidenced by XPS (Fig. 4C and D). On the one hand, Ni species provided active sites for activating molecular CO2 and facilitated the formation of atomic hydrogen by dissociating H2 on the Ni-based catalyst. On the other hand, the surface oxygen vacancies resulted in the formation of carbon species, which could react with the atomic hydrogen on the catalyst surface to form CH4. The structural stability of the Ni/Al2O3-CeO2-1.0 catalyst was further confirmed by PXRD after the long-term reaction; no obvious change could be observed in comparison with the fresh catalyst (Fig. S1†).
Conclusions
In summary, Ni-modified mesoporous Al2O3-CeO2 composite catalysts containing various amounts of CeO2 were synthesized through a one-pot sol-gel route and used for CO2 conversion to CH4 at low reaction temperatures. The mesoporous Ni/Al2O3-CeO2 catalysts displayed excellent CH4 selectivity and CO2 conversion in comparison with the single Ni-modified Al2O3 catalyst. The uniform distribution of Ni species, combined with the increased surface oxygen vacancies resulting from CeO2 loading on the support, made excellent catalytic activity and CH4 selectivity possible at lower temperatures. The Ni/Al2O3-CeO2-1.0 catalyst displayed impressive catalytic properties of 78% CO2 conversion with 100% CH4 selectivity at 320 °C; this performance was retained without any decay during 100 h of testing.
Conflicts of interest
There are no conflicts to declare.
Climate Change and the Professional Obligation to Socialize Physicians and Trainees into an Environmentally Sustainable Medical Culture
On entering medical school, all trainees take on the mantle of professionalism [1]. Professionalism, the collection of behaviors doctors engage in to communicate their fiduciary responsibility to their patients, is central to the trust at the core of the doctor-patient relationship [2]. Physicians in training learn these professional behaviors, as well as the values that govern them, primarily through observing and emulating their mentors, a process referred to as professional socialization [3]. The behaviors and beliefs that physicians adopt while undergoing professional socialization need to be constantly reexamined to ensure that medicine remains true to its fiduciary role. With this goal in mind, we wish to address medicine's contributions to climate change and the environmentally unsustainable behaviors that physicians are socialized to accept and adopt.
Climate change has been identified as the number one public health concern of the twenty-first century [4], and yet, as of 2013, the US health care system was responsible for 10% of US greenhouse gas emissions, 12% of acid rain production, 10% of smog formation, 1% of stratospheric ozone depletion, and 1-2% of other toxic emissions [5]. These effects on the environment contribute to an estimated loss of 614,000 disability-adjusted life years (DALYs) annually [6], a number comparable to the DALYs incurred by the patients who die annually from medical errors in the US health care system [7]. Climate change negatively impacts nearly all aspects of health [8]. In mental health alone, temperature fluctuations have been correlated with increased prevalence of a number of psychiatric disorders [9], and increased rates of trauma and posttraumatic stress disorder from more frequent and severe natural disasters contribute to increased rates of comorbid substance use and domestic violence [10,11]. Medicine's contribution to this significant morbidity and mortality is incommensurate with its obligation to do no harm to its patients.
Psychiatry has a unique role to play in addressing medicine's current environmentally unsustainable culture because, first, a number of psychological factors make addressing medicine's carbon footprint difficult for physicians and, second, psychiatry has already started taking a leading role among the medical specialties in addressing its professional carbon footprint. Therefore, in this editorial, we briefly summarize current contributors to the US health care system's large carbon footprint, reflect on social and psychological factors that may support a medical culture that has not prioritized environmental sustainability, consider how this professional culture may endanger the doctor-patient relationship, and discuss how actions in medicine and specifically psychiatry can be adjusted to socialize medical personnel into a more environmentally sustainable practice that can also sustain the integrity of its fiduciary obligations. This discussion builds upon what Academic Psychiatry has previously published to call the psychiatric community to action in addressing climate change as a profession [12][13][14][15].
Contributors to the US Health Care System's Carbon Footprint
Many factors contribute to the carbon footprint of the medical profession, though the largest are systemic, arising from the hospital sector (39%) and the development and distribution of prescription medications (14%) [16]. Modernizing hospital care has seemingly become synonymous with generating more trash. For example, over the past two decades, doctors in many nations have moved away from using multi-use medical devices to using single-use disposable devices in efforts to maximize efficiency and minimize health risks.
In one study conducted in Istanbul, disposable items contributed to a 330% increase in hospital waste production between 2000 and 2017, from approximately 0.43 kg of waste/bed-day to 1.68 kg of waste/bed-day [17]. In China and many Western medical systems, hospital waste can be as high as 3-4 kg of waste/bed-day [18]. Some hospital executives are beginning to make concerted efforts to reduce their waste production and overall carbon footprints [19], though these efforts are still in their infancy.
The activities of pharmaceutical companies generate about 55% more greenhouse gas emissions than the entire automotive sector [20]. Efforts are underway within the pharmaceutical industry to be more environmentally sustainable, including using "green chemistry" to produce fewer environmentally hazardous byproducts, reducing packaging waste, and improving the transportation efficiency of their products [21]. So far these efforts have led to modest improvements in the overall carbon footprint of the pharmaceutical industry [22], though, like hospital systems, there is much room for further improvement.
While much of medicine's large carbon footprint is determined by large systems of care, individual decisions by physicians also have profound effects. For example, in 2018, 25% of the UK population was prescribed a psychotropic medication [23], yet a follow-up review estimated that at least 10% of total prescriptions were unnecessary (e.g., prescriptions were not clinically indicated, there were more effective nonpharmacological alternatives, or prescriptions were redundant) [24]. The rationale for polypharmacy in psychiatry is multifaceted and has historically been difficult to address [25]; however, given the sheer number of psychotropics prescribed and the large carbon footprint generated by producing them, efforts to reevaluate psychiatrists' current prescribing practices could significantly minimize this waste and the carbon footprint it generates [26].
Professional travel also merits consideration. Traveling to and from the annual meeting of the American Psychiatric Association (APA) produces 1.2-1.6 metric tons of CO2 per person [27], which is roughly equivalent to the per capita annual carbon footprint recommended by the Intergovernmental Panel on Climate Change to prevent worst-case scenarios of global warming by the end of this century [28]. Similarly, medical students applying to psychiatry residency before the COVID-19 pandemic produced on average 5.4 metric tons of CO2 per person traveling to and from their interviews, nearly 4 times their recommended annual footprint [29]. Though telepsychiatry is still in its infancy and requires continued investigation, it too should be considered as a means of reducing psychiatry's carbon footprint [30]. A growing literature in other specialties has documented that the carbon footprints from travel for medical appointments can be substantial, ranging from 0.70 to 372 kg of CO2 equivalents per consultation [31]. To put these numbers in perspective, travel to and from all US ambulatory care visits in 2018 alone [32] generated a carbon footprint that was at least 30 times larger than the total carbon footprint produced by travel for the 2018 APA Annual Meeting [27]. Travel associated with medical training and patient care has historically been considered unavoidable, though COVID-19 pandemic-related travel restrictions that necessitated virtual meetings, interviews, and televisits have called the necessity of these carbon footprints into question.
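The comparisons in the paragraph above are simple arithmetic, which the short Python sketch below makes explicit. The ambulatory-visit count and the APA attendance figure are assumptions introduced here for illustration; the per-person and per-visit footprints come from the text.

# Back-of-envelope sketch of the travel-footprint comparisons above.
# Figures marked "assumed" are illustrative, not taken from the cited studies.
apa_low, apa_high = 1.2, 1.6            # t CO2 per APA attendee (from the text)
apa_t_each = (apa_low + apa_high) / 2   # midpoint used for the comparison
interviews_t = 5.4                      # t CO2 per residency applicant (from the text)
annual_budget_t = 1.5                   # assumed IPCC-aligned per capita budget

print(f"Interview travel vs annual budget: {interviews_t / annual_budget_t:.1f}x")
# -> 3.6x, i.e. "nearly 4 times"

visits, kg_per_visit = 860e6, 0.70      # assumed US ambulatory visits (2018); low-end kg CO2e/visit
apa_attendees = 10_000                  # assumed attendance figure
ratio = (visits * kg_per_visit / 1000) / (apa_attendees * apa_t_each)
print(f"Ambulatory travel / APA meeting travel ~ {ratio:.0f}x")
# -> ~43x, consistent with the "at least 30 times" claim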
Factors in Medicine That Have Contributed to Environmentally Unsustainable Health Care
There are multiple reasons why physicians have historically contributed so substantially to climate change. Perhaps the biggest is that many providers are still unaware of the significant environmental impacts of their practice [26]. In a sample of over 400 international members of the American Thoracic Society, 80% identified that climate change was relevant to patient care, yet nearly half reported lacking knowledge about how to address climate change with their patients, and only 30% were aware of what their hospitals were doing to address their carbon footprints [33]. Even for those who are knowledgeable about this topic, cultural forces within medicine, such as an emphasis on time efficiency, run counter to sustainability. In the same international survey, 45% of the responding physicians cited lack of time for why they did not address climate change in their clinical practice [33].
Other psychological factors may also contribute to physicians' significant contributions to climate change. For example, some medical providers, much like the general population, feel powerless to make an appreciable impact on climate change [34]. Some have also been socialized to believe that providing good health care comes at the cost of being less environmentally conscious [34], and those caught in this dialectic may manage this double bind and resultant moral distress through repression and denial [35]. Other physicians cope with their outsized carbon footprints through rationalizing that their good work as healers and helpers provides a "moral offset" that outweighs their contributions to climate change [36]. However, these means of coping with the status quo are short-lived solutions that will become less tenable as climate change increasingly affects patients' health and as the public becomes more aware of medicine's impact on climate change.
How Medicine's Large Carbon Footprint May Endanger the Doctor-Patient Relationship
Core to the doctor-patient relationship is the mutual understanding that doctors have patients' best interests at heart. When this trust is broken, the doctor-patient relationship is fundamentally injured. As patients learn about all the ways that their physicians contribute to climate change and how climate change negatively impacts their health, they may lose faith in the doctor-patient relationship and disengage from the medical establishment. The relational tension currently observed between parents and their children who are anxious about climate change may offer some insights into what the relationship might come to look like for doctors and their patients. In a recent international study of 10,000 adolescents and young adults polled about climate anxiety, 59% of participants reported feeling very to extremely worried about climate change [37], and many expressed a deep sense of confusion, betrayal, and anger toward adults whom they perceive as not doing enough to protect them from an unsafe future. For some young people, this perception has led to a feeling of outright antagonism toward adults whom they feel are being emotionally neglectful and even abusive through their inaction to address climate change, and in others it has led to withdrawal from adults whom they see as unresponsive to their needs [38]. Parental inaction regarding climate change is contributing to children developing anxious-disorganized and anxious-avoidant attachment to their caretakers.
The deleterious impact of a disrupted doctor-patient relationship is not just theoretical. We have seen this mistrust manifest most recently on a national scale in patients' mistrust of vaccines during the COVID-19 pandemic [39]; however, in a number of other instances where patients have perceived their physicians' fiduciary responsibilities as compromised, this perception has contributed to negative clinical consequences [40]. Patients' trust in their physicians has been shown to be directly related to patients' willingness to follow treatment recommendations and seek care in a timely fashion [41,42]. In turn, following treatment recommendations and efficient access to care are directly associated with reduced health care costs and improved clinical outcomes [43,44]. Beyond the loss of good feeling between patient and physician, the erosion of the doctor-patient relationship fundamentally threatens the quality of care a patient is receptive to receiving from their doctor, much as a child is less able to receive comfort and guidance from a parent whom the child perceives as ineffectual and insecurely attached. While only a theoretical concern currently, the unintended ramifications of medicine's contributions to climate change on the doctor-patient relationship should be considered, and damage to the doctor-patient relationship could hopefully be minimized through early action as a profession to be more sustainable.
Socializing Medical Personnel into a More Environmentally Sustainable Medical Culture
There is no question that, first and foremost, medicine needs to pursue means of reducing its carbon footprint to cultivate a medical culture that is attentive to its effects on climate change. Instituting systems-level policies to enforce greener hospital practices will be essential to this end. Medical systems, particularly outside the USA, have become increasingly aware of their carbon footprints, and they have been working to find ways to reduce their contribution to climate change. For example, in the UK, the National Health Service has set the goal of delivering carbon-neutral health care as soon as possible, and it has already reduced the emissions associated with its delivery of care by 57% compared to 1990 levels [8]. Similar efforts have been made in Australia and Germany [8]. Clinicians in multiple specialties have also endeavored to find ways to reduce the individual carbon footprints of their medical procedures. For example, anesthesiologists in the UK have been advocating for the use of fewer disposable devices and have identified anesthetic agents with less damaging impacts on the environment [45]. Other organizations, such as the My Green Doctor Foundation [46], offer guidance and educational materials to physicians on how they can reduce their professional carbon footprints, and payment for carbon offsets (i.e., donations to companies for carbon sequestration and development of sustainable energy) is being used to minimize the net impact of professional travel on climate change [47]. The US government has recently established the goal of reducing the carbon footprint of the US health care system by 50% by 2030, and grants and agencies have been put in place to help support these efforts [48].
Psychiatry has taken a leading role in addressing its carbon footprint as a medical specialty. In 2017, the APA published a position statement affirming its commitment to "mitigate the adverse health and mental health effects of climate change" [49]. In 2019, it divested from companies with significant assets in fossil fuels [50], and in 2021, it established a presidential taskforce on social determinants of health, including environmental health, and created a committee on climate change and mental health to start providing recommendations for how, among other goals, it can reduce its carbon footprint. In accordance with this goal, the APA committee proposed an action paper (Malinas P, Wortzel JR, Haase E, Lee J, Fleming J: Toward making the carbon footprint of the APA Annual Meeting carbon neutral) that was passed by the APA Assembly in November 2021 (Item 2021A2 12.O) to reduce the carbon footprint of the APA Annual Meeting by at least 50% by the year 2030.
However, efforts to reduce medicine's carbon footprint may only be nominal if they are not matched by corresponding changes in the culture of medicine on the individual level. For example, efforts to address racism in medicine primarily through policy changes, such as implementing affirmative action in medical school admissions, have had only limited effectiveness because they have not addressed the implicit racism among individual medical personnel [51]. Similarly, medical professionals need to become educated about their many subtle, personal contributions to medicine's carbon footprint and identify the factors that perpetuate these practices. Educators across a variety of medical specialties [52,53], including psychiatry [54], are beginning to develop and integrate core learning objectives and classes pertaining to sustainable health care into preclinical and clinical training to help convey the importance of developing a more environmentally sustainable medical culture. In many cases, these curricular changes have been driven by medical trainees themselves who see the importance of learning this material [55]. There have also been efforts to start incorporating environmental sustainability into the quality-improvement programs that are already commonly built into many hospital systems and in which many medical trainees are actively involved [56]. Through these changes, the system-wide interventions to make medicine greener will hopefully be supported by a more widespread medical culture that sees the significance of supporting efforts to address the impacts of climate change on their patients and on the doctor-patient relationship.
Conclusion
The Intergovernmental Panel on Climate Change has determined that the world is at a "code red": a dramatic reduction in humanity's carbon footprint is needed within this decade to prevent worst-case scenarios of global warming [57]. This reduction is necessary to preserve both human health and the health of the planet and all of its organisms. It is unconscionable that doctors should contribute so significantly to this crisis. Currently, cultural principles and psychological forces conspire to keep medicine environmentally unsustainable. Through education, physicians can foster a medical culture that socializes trainees and current practitioners into a practice of medicine that is more sustainable and preserves the fiduciary responsibility at the heart of the doctor-patient relationship. Doctors all over the world are starting to consider ways in which they can decrease their institutional and personal carbon footprints, and this consideration will role-model for trainees the importance of this task. Academic institutions are also investing in how to explicitly educate medical trainees and personnel to be more sustainable in their practices to propagate a culture that appreciates and supports the institutional changes underway to make medicine greener. Psychiatry as a specialty has been particularly active in this space, though continued efforts on the systems and individual levels are needed to maintain this momentum.
We encourage our readers to investigate how they can reduce their professional carbon footprints and engage in educating their colleagues, trainees, and the general public about this topic. Organizations such as the Climate Psychiatry Alliance [58] are actively considering these issues and offer opportunities to get involved with this work. Resources like My Green Doctor [46] can be used to find ways of reducing personal carbon footprints. Reducing professional travel and strategic use of telepsychiatry can also dramatically reduce carbon production. Psychiatrists also have a particularly important role to play in helping their colleagues in the rest of medicine recognize and overcome the psychological and social barriers that contribute to the perpetuation of a medical culture that is environmentally unsustainable. If medicine can make this transition in modifying its professional culture to be greener and to meet its professional obligations, its actions will speak louder than its words, and we can hope that this transition will not only protect the integrity of the doctor-patient relationship but will also serve to inspire the rest of society to follow suit.
Declarations
Disclosures On behalf of all authors, the corresponding author states that there is no conflict of interest.
Keeping confidence: HIV and the criminal law from HIV service providers’ perspectives
We present qualitative research findings about how perceptions of criminal prosecutions for the transmission of HIV interact with the provision of high-quality HIV health and social care in England and Wales. Seven focus groups were undertaken with a total of 75 diverse professionals working in clinical and community-based services for people with HIV. Participants’ understanding of the law in this area was varied, with many knowing the basic requirements for a prosecution, yet lacking confidence in the best way to communicate key details with those using their service. Prosecutions for HIV transmission have influenced, and in some instances, disrupted the provision of HIV services, creating ambivalence and concern among many providers about their new role as providers of legal information. The way that participants approached the topic with service users was influenced by their personal views on individual and shared responsibility for health, their concerns about professional liability and their degree of trust in non-coercive health promotion approaches to managing public health. These findings reveal an underlying ambivalence among many providers about how they regard the interface between criminal law, coercion and public health. It is also apparent that in most HIV service environments, meaningful exploration of practical ethical issues is relatively rare. The data presented here will additionally be of use to managers and providers of HIV services in order that they can provide consistent and confident support and advice to people with HIV.
Introduction
The criminalisation of HIV transmission and exposure has increased globally over the past decade. This trend is most noticeable in high-income countries with concentrated HIV epidemics in Western Europe and North America, while over 20 African countries introduced HIV-specific laws criminalising HIV exposure and transmission in the decade to 2010 (Cameron & Reynolds, 2010). Existing social research offers insight into the overarching public health impact of criminalisation 1 on those who are most likely to be involved in HIV transmission and exposure (Adam, Elliott, Corriveau, Travers, & English, 2012;Burris, Beletsky, Burleson, Case, & Lazzarini, 2007;Dodds & Keogh, 2006;Dodds, Weatherburn et al., 2009;Galletly & Dickson-Gomez, 2009;Horvath, Weinmeyer, & Rosser, 2010;Mykhalovskiy, Betteridge, & McLay, 2010;UNAIDS, 2013;Weait, 2013). This body of work demonstrates that criminalisation has a limited capacity to support HIV precautionary behaviour, such as enabling people to use condoms or disclose their HIV status to a sexual partner, and on balance is likely to have a negative impact on public health goals.
Concern has also been raised about the extent to which criminal prosecutions for the transmission of HIV hamper trusting relationships between HIV service providers and service users (Lowbury & Kinghorn, 2006). Research undertaken in Canada has examined this relationship (Mykhalovskiy et al., 2010;Mykhalovskiy, 2011;O'Byrne & Gagnon, 2012), finding that service providers were often uncertain how to discuss criminalisation and that legal concerns eroded patients' trust in HIV and health services. The law also contributed to a tendency for many providers to frame their HIV prevention advice within a universal moral obligation to disclose known HIV infection in all settings, irrespective of the degree of transmission risk.
In England and Wales, the Offences Against the Person Act 1861 may be used to prosecute a person alleged to have intentionally (Section 18) or recklessly (Section 20) transmitted HIV (or any other serious sexually transmitted infection) to another person. 2 To date, the only convictions have been for reckless transmission, for which the maximum punishment is five years' imprisonment (ancillary orders regulating future disclosure and sexual behaviour are also available), and the number of successful prosecutions has been very low. In contrast to many other jurisdictions where research of this type has been undertaken, in England and Wales there is no liability where someone merely exposes another to the risk of transmission (unless there was a deliberate, but failed, attempt to transmit the virus). The prosecution has to prove beyond reasonable doubt that the defendant was the source of the complainant's infection (Bernard et al., 2007). Where the person to whom HIV has been transmitted consented in advance to the risk of transmission, there is no liability (Weait, 2005a, 2005b). For consent to provide a defence, it has to be based on the complainant's actual knowledge of the defendant's HIV infection at the time transmission occurred (Weait, 2005a). In almost all cases that knowledge will be based on the defendant's prior disclosure of status, but there is no independent legal obligation for people with diagnosed HIV to disclose their status prior to sex in England and Wales. However, numerous authors have described HIV status disclosure as a complex and context-specific set of practices, which can be explicit or implicit and understood in a variety of ways by (potential) sexual partners (Adam, Husbands, Murray, & Maxwell, 2008; Adam et al., 2014; Flowers, Duncan, & Frankis, 2000; Green & Sobo, 2000; Klitzman & Bayer, 2003; Marks & Crepaz, 2001; Sheon & Crosby, 2004; Zablotska et al., 2009).
Guidance for prosecutors, familiar to some but not all HIV service providers in this jurisdiction, advises that recklessness probably will not be established where a person has taken appropriate precautions, such as condom use (Crown Prosecution Service, 2011). Health care workers have a responsibility to fully advise a patient on ways to protect their partners from infection, including the use of condoms. In 2013, the British HIV Association released a briefing paper outlining the responsibilities and duties of health care staff with regard to the criminal law and transmission of HIV (Phillips & Poulton, 2013), which developed an earlier draft paper that had been in circulation since 2006 (Anderson et al., 2006). While outlining the circumstances in which a case of HIV non-disclosure could be referred to the police, the more recent guidance stresses:

Health care professionals have a central role to advise and support patients in decision making and to maintain confidentiality ( … ) There is individual and public interest in maintaining confidentiality; this may be outweighed in order to prevent serious harm to others. (Phillips & Poulton, 2013, p. 3)

It goes on to emphasise that no information should be released to the police unless patient consent has been established or there is a court order requiring this. However, such guidance is optional and not binding, and there has been no audit undertaken on the extent of its successful dissemination. Although some voluntary programmes of embedded professional development for specialist HIV registrars include a section on criminalisation, there is no training requirement on HIV and the criminal law for clinical staff. Furthermore, in the absence of an HIV sector-wide professional body, there is no equivalent guidance for those working in community-based settings, although a number of organisations have published their own resources about how to manage issues relating to criminalisation for people living with diagnosed HIV (e.g. Bernard & Carter, 2014; Terrence Higgins Trust and National AIDS Trust, 2010).
While taking account of this bio-socio-legal context, the project described here specifically explores the ways that criminal prosecutions for HIV transmission in England and Wales are handled and perceived by those who deliver health and social care services for people with HIV. Our starting point is the argument that normative critiques of HIV criminalisation should extend beyond examinations of the law's influence on sexual and preventive behaviours, in order to gain a better understanding of how the law both shapes and illuminates the broader social relations within which HIV is experienced (Mykhalovskiy, 2011). Essential convergences in sociological investigations of both crime and health care are rarely unified, leading Timmermans and Gabe (2002) to argue that the 'medico-legal' borderland where the law and the clinic directly interact offers an opportunity to develop a unified field of study that explores the convergent and divergent means through which these two traditional mechanisms of power can be better understood. They furthermore suggest that explorations of these interactions will help those involved to question the basis for their taken-for-granted norms and procedures, as the borderland is likely to be a site of frequent contestation and ambiguity. HIV criminalisation does not only force an intersection of the law and medicine; it also illuminates the divergent impulses embedded within medicine and public health, as well as diverse approaches to HIV prevention such as harm reduction as opposed to harm elimination. Indeed, we are only able to understand HIV criminalisation in the light of broader social relations and the frequently conflicting values paradigms that construct them.
As Bauman (1991) has described it, one of the key goals of modern society is to root out ambivalence in order to ensure that everything has its place, ultimately enabling a sense of control and peace to prevail. He points out that certainty is at the centre of the twin dreams of legislative and scientific reason. However, what the responses to HIV criminalisation have revealed, in those jurisdictions where they have been critically examined, is that just under the surface of the confident professional response to HIV lies a considerable mix of ambiguity and unease. This study examines the specific articulations of this disrupted 'order' among HIV professionals working in England and Wales.
Methods and sample
In 2012, seven focus groups were conducted in England and Wales (Dodds et al., 2013). Of the seven groups, four were undertaken with hospital-based staff (referred to as 'clinical service providers'/'clinicians') who routinely provide services to people with HIV in areas of contrasting higher and lower HIV prevalence. In the UK, treatment and monitoring of people with HIV are almost exclusively undertaken at specialist HIV and sexual health clinics. Three further focus groups undertaken with 'community service providers' or 'non-clinical providers' comprised professionals providing HIV services in the community.
Recruitment of the 75 participants was undertaken with the support of local key stakeholders, and a summary of workplace types and job roles is given in Table 1. Participants worked in 12 different HIV charities and four hospitals. In addition, two social workers were employed by local municipal authorities. (Note to Table 1: some participants ticked more than one workplace setting and more than one job role; 'other' job roles included dietician, pharmacist, clinical psychologist, director of services, peer support worker, team leader and student.)
Each focus group lasted about 90 min and was facilitated by two researchers. With the consent of participants, the discussions were digitally recorded and transcribed for thematic analysis (Braun & Clarke, 2006), assisted by the use of NVivo 10 software. Participants were asked to discuss their own and their service users' knowledge and perceptions of criminalisation, how and when criminalisation arises within diverse HIV service settings, who raises the issue, and the perceived impact of this issue on broader HIV care and support. Taking each of the substantive areas of the question guide as a starting point, members of the research team worked in pairs to list the emergent themes arising in the annotated transcripts for each group, undertaking constant comparison with the data until each list was exhausted, a method that utilises both inductive and deductive processes (Layder, 1998). The themes were then collated to ensure consistency, eliminating overlap prior to thematic coding of the data. Ethics approval was granted by the Research Ethics Committee of the London School of Hygiene & Tropical Medicine. Local research ethics approval was also obtained where necessary.
Results
This section offers a summary of the key findings arising from the thematic analysis, focusing on the following topics: how participants understood the law as it relates to criminal prosecutions for HIV transmission, how such understandings were transposed into practice and procedure in the workplace, and characterisations of responsibility and public health impact.
Understanding the law
Accurate understanding of and ability to communicate about the law are two important and distinct skills for those who inform service users about the criminal law or are expected to field questions on the topic. Many participants had a basic understanding of the conditions that could lead to a prosecution.
I think the important thing is that transmission actually has to take place. So it is not just about unsafe sex - it is about transmission essentially happening. Somebody has to become positive. (Clinical service provider)

However, many participants expressed confusion about the technical legal meaning of recklessness, and what a sufficient defence might be against such a charge. Arriving at a mutually agreed legal definition of reckless grievous bodily harm was far from straightforward, with many participants struggling to find accurate and concise means of distinguishing between common-sense uses of recklessness and this particular form of criminal liability.
So the way I understood it was, that the law is defined into an act and a mental state. And the original law applied to intention, which is to intentionally and to wilfully desire to do it. And it's kind of flowed out into recklessness, which is sort of omission, or by not caring, or not caring if you transmit. But not taking reasonable precaution, or by not telling people it's kind of involved wider of what my understanding about what the original law was meant to be? (Community service provider)

There were a number of instances where participants' understanding of the law was guided more by a sense of morality as it related to reckless behaviour than by a firm understanding of the legal principle articulated within the quote above. For instance, a few participants partly based their definition of recklessness on the number of partners with whom a person with HIV was having unprotected sex, which is irrelevant to liability.
Although there is technically no legal requirement to disclose HIV status in England and Wales, disclosure behaviours did emerge as a touchstone within focus group discussions. Building personal capacity, and locating opportunities to disclose one's HIV status to potential and current sexual partners were agreed to be important goals within HIV support and prevention services, alongside recognition of the many structural and social factors that need to be in place before safe disclosure is feasible (Adam et al., 2014;Smith, Rossetto, & Peterson, 2008). Participants were keen to point out that where it is safe to do so, disclosure is a key element in developing self-acceptance, building a culture where HIV is increasingly normalised, and contributing to the informed consent of risk for sexual partners. In order to avoid ambiguity (and any ambivalence with regard to their own responsibilities), some service providers advised people with HIV to disclose their HIV infection as the only means to avoid legal liability, therefore favouring a message of universal HIV status disclosure in place of tailored harm reduction.
That is where I go with my casework, that the only way you can be safe from the law is if you are completely honest with people, and telling any person you are having any sexual contact with about your status, because that's the only way that that person can fully consent and they can never have any comeback. (Community service provider)

This was by no means a unanimous viewpoint. Instead, when such ideas about the protective function of disclosure were introduced, they were frequently challenged by those who felt that where a person with HIV had used condoms and/or had maintained an undetectable viral load, and thereby considerably reduced the chances of passing on infection (Cohen et al., 2011), they could not and should not be criminally charged. 3 Within the groups, such exchanges were characterised by stark contrasts between those who focused primarily on the calculation of risk attached to particular protective behaviours (such as condom use and/or an undetectable viral load) and those who instead felt that there was a universal moral obligation to disclose, regardless of the risk that a particular sexual encounter might carry. These findings illuminate the way that participants' own comfort with harm reduction (as opposed to risk elimination) influenced how they advised people with HIV about avoiding criminal liability. They furthermore demonstrate that discourses about risk management are far from stable or uniform among this group of professionals.
Practice and procedure
There was considerable variation in each group about the extent to which criminal prosecution for HIV transmission arose within their work settings, and whether it had influenced regular practices such as record keeping or what they considered to be the limits of confidentiality. This is not surprising, given the diverse roles of participants and their different workplace cultures.
Many said that they personally avoided addressing the issue directly with service users, or minimised the detail that they tried to convey, because they lacked confidence in their capacity to talk knowledgeably about the law.
The law is so, kind of, not clear that it is very hard to clarify anything and we do have documentation we give out occasionally from the criminal … CPS [Crown Prosecution Service]. I would find it to be very hard to be very clear, honestly. It is very vague I think, how we talk. (Clinical service provider)

Some described how one or two colleagues were utilised as an 'in-house' ad hoc information resource on the law, and this was linked to discussions in all groups that highlighted participants' lack of access to qualified criminal legal advice. Very few mentioned or demonstrated awareness of the British HIV Association's position papers about the appropriate management of this issue in clinical settings described at the outset of this paper (Anderson et al., 2006; Phillips & Poulton, 2013). Perhaps it is not surprising, then, that communication practices varied considerably, depending on a participant's certainty in their own understanding of the law, as well as their perspective on the appropriateness of raising the topic in a particular consultation.
Most providers talked about the way that their ability to build a trusting relationship with service users was reliant upon a values-led approach that was neutral, 'user-led' and responsive to the particular needs of the individual in front of them for a short period of time. In each group, some described how they judged the best means of approaching the topic of criminalisation in this light. They acknowledged this was complex information to convey, which needed to be well-timed and appropriately tailored for each individual, although there was a clear pattern that emerged between clinical and community-based service providers.
In all of the HIV clinics where this research was undertaken, information about criminal prosecutions was provided to patients, and many clinicians described routine practices such as its inclusion as a topic to be covered on standardised checklists for new patients, to ensure it had been discussed. In contrast, those working in community organisations frequently waited until a service user raised the issue before discussing it, as most felt their primary function was helping to meet the immediate and tangible needs that their service user brought to the consultation (such as emotional support, health information, housing advice). However, there was a degree of professional decision-making employed in these settings as well, as some working in community settings said there were circumstances in which they might raise the prospect of a criminal prosecution where a service user with HIV reported that they engaged in higher risk behaviours.
In the main, service providers discouraged their service users from making criminal complaints (where this had arisen).
I cannot think of any case where someone came and said they wanted to prosecute and then actually walked away and still wanted to prosecute. As soon as you give information and emotional support, you find an immediate shift. Especially if you signpost them onto services. You see a change of mind very fast if you support them in the right way. (Community service provider)

Nonetheless, a few talked about the importance of supporting all service users, including those who had independently decided that they wanted to make a criminal complaint, and there were two further individuals who each described at least one occasion where they had asked a patient if they had considered contacting the police about their infection.
When it came to record keeping, it was mainly those working in clinical settings who tended to document as much detail given by the patient as possible (including sexual behaviour that carried a risk of HIV transmission to others), alongside detail about information and advice imparted by the clinician. Some said that rigorous standards of documentation were an essential component of medical practice, and their practice had not changed in the light of criminalisation. Many clinical participants said that good documentation would also protect any professional whose decisions or actions were scrutinised in the future, while others wanted to maintain records that may help to defend a patient, thereby keeping careful records of protective or precautionary behaviour, or disclosure whenever it was reported. Other clinicians described how their awareness of criminal prosecutions made them acutely aware of the need for even more rigorous documentation than they had undertaken in the past. Across the groups, it was often senior clinical participants who raised the role of the new patient checklist which (among other things) had helped to ensure that all new patients were informed about the criminal law, while at the same time protecting professional liability, as it provided a means of systematically recording that the topic had been covered with each individual patient. At the same time, nurses and other junior staff revealed that this 'solution' failed to address their lack of confidence regarding how to get this information across clearly and unproblematically.
In contrast, those working in community-based organisations tended to have been influenced by criminal prosecutions in the opposite direction. Some said they were now far more cautious about the content of service-user records, and their security, due to a renewed awareness that any records could be requested by the courts.
If I am working with a person who has high risk behaviour I do not document it in detail, just in case further down the line there is someone with a warrant. (Community service provider)

There were also, however, participants from community-based organisations who described how their record keeping might function as a means of supporting a service user if a criminal complaint was ever made against them. For instance, many would carefully record when disclosure to a sexual partner was reported, or when problems that had prevented disclosure (such as unequal or abusive relationships) were described, in case that might be important to someone's defence in a future criminal trial.
Concerns about records being seized by police in criminal investigations for HIV transmission prompted discussion with participants about confidentiality and how this was explained to service users. Most participants working in both clinical and community settings said that they took care to explain to service users how their data would be protected, while also mentioning that there were specific circumstances in which they may be required to release it to the police.
Responsibility and public health
Unlike some of the other topics described above, there were few clear patterns between the ways that clinical or non-clinical providers discussed responsibility and public health. Some participants argued that as long as a person with diagnosed HIV had full awareness of how to prevent HIV transmission and was aware of the potential consequences, they bore primary responsibility for taking precautions.
… if someone has a known infection, they should assume responsibility for themselves to keep up to date with the advice that has been given to them. (Clinical service provider)

However, this was a minority perspective in nearly all focus groups, with most participants arguing that allocation of responsibility was not uniform and that it needed to be understood within specific circumstances that can constrain precautionary behaviour. These participants focused on the social structures shaping the lives and experiences of people with HIV, such as pervasive social and economic inequality, power imbalance, HIV stigma and fears for safety and security.
The woman may not have the power to be able to truly consent to having sexual relationships. Plus, added on to that, she definitely doesn't have the power to be able to disclose. But she also, because of immigration and things like that, may not have the power to leave at that moment. So, I mean that is where recklessness becomes really … I mean, is it reckless behaviour if it is potentially lifesaving for her? (Community service provider)

Some took this point further, arguing that consensual sex implied a shared responsibility for taking precaution against possible infection.
Researcher: Thinking more generally, where do you feel responsibility for HIV transmission lies?
P1: Assuming it's consensual and not coerced. Then it's shared.
P2: As long as they know about condoms and safer sex, everyone should be responsible for their own sexual health. (Clinical service providers)

Uncertainties about professional responsibilities and ethical obligations of HIV service providers pervaded every focus group discussion. Sometimes these concerns arose from uncertainty about the extent of professional legal liability in such circumstances. Participants debated the extent to which service providers owed a primary responsibility to the service user in front of them, or whether there was also a similar obligation to protect the health of others who may be at risk of infection.
It's about duty of care as well. We have a duty of care to the index patient but not necessarily the person who might be infected. (Clinical service provider)

Certainly not all participants took the same view, and these discussions revealed that service providers wanted to better understand their own legal liabilities, and to clarify the extent to which professional ethical guidelines may consider duty of care as a rationale that enables them to consider a breach in confidentiality, rather than obliging them to do so, as evidenced in the following comment. Underlying these discussions was a pervasive sense of professionals feeling torn between duties to service users and to the broader health of the public. One participant described a case where a patient who was known to be abusive towards sexual partners had been named as a sexual contact.
It was a very uncomfortable position to be in, because I still didn't say, 'Are you going to take him to court?' or whatever. I would have happily listened and given them information if they wanted to, or if they had suggested it, but you know, you kind of have two hats on: you have got your clinical hat on, and your public health hat on. You do not want to be colluding with people like this guy … they are a minority, but they are potentially involved in transmission. (Clinical service provider)

There were also participants who made it clear that for them, such conflicts were rare, as they made sure not to allow their own moral positions to influence their dealings with service users.
If they are knowledgeable and consenting in some ways, to be honest, it is none of my business. (Clinical service provider)

These debates about the practice of public health ethics and professional ethics among HIV health and social care service providers appeared in many cases to be the first time that such discussions were widely aired between colleagues. Such issues tended to be approached with caution and hesitation in the focus group format, with those in more junior or inexperienced positions tending to defer to the more senior and confident voices in the room. This raises interesting questions about the practical governance and management of ethical discourse and practice in HIV service settings, to be followed up in the discussion below.
Despite the many approaches to criminalisation described above, no one, when directly asked what they thought prosecutions accomplished in public health terms, was able to describe a beneficial public health outcome. This is an interesting finding given that there were some participants who gave accounts of recommending or supporting the making of a criminal complaint, accompanied by a greater number who experienced a conflict between a duty of care to their service user and to those at risk of acquisition. Perhaps this is because such providers found that criminalisation helps to manage moral concerns about behaviour, by providing punishment for past transgressions, rather than any sense that they were actually likely to bring about wider public health gains.
In contrast, others felt that criminalisation brought only harmful consequences for their working environments, and by extension, for health outcomes among their service users.

I think it also affects the trust relationship between workers and service users, and clinicians and service users at times, sometimes in quite a negative way. You see quite a few people who have been damaged by the process. And it's a long bridge-building process to re-establish the trust in procedures. (Community service provider)

This comment is representative of the many concerns that participants raised about the ultimate impact of criminal prosecutions, which can lead to increased stigma, reduced trust between service users and providers, and traumatic consequences for those who get involved in such cases. Such findings help to clearly demonstrate that the outcomes of criminal justice can be directly at odds with the goals of public health and individual well-being. Members of one clinical team talked in detail about the detrimental mental health impact that involvement in a case had on one complainant:

It felt like we were going back to the day when she got the diagnosis, and we stayed there with her for about six months in terms of the infection and not being able to move on from how this happened to her. (Clinical service provider)

Reviewing this last set of findings as a whole, the vast majority of participants felt that if they were to overemphasise potential liability and criminal responsibility in discussions with people with HIV, this would have little benefit, and threatened to erode the trust and stability that had been so carefully maintained in order to enable vulnerable people to access their service. It was because of these concerns about potential damage that most participants said they would not want to directly support or provide evidence for the prosecution of cases. At the same time, many participants felt that they had inherited a responsibility for imparting accurate legal information to service users with HIV, even though this was not a part of their job description or training. Considerable anxiety about what constituted professional responsibility within this context was on display during the focus groups, arising from a widespread lack of confidence in conveying accurate legal information and advice while also striking a correct balance between informing and frightening service users. In such conditions, most participants expressed frustration about expectations within their professional roles and practices that felt at odds with their overriding professional responsibility to attend to the needs of the individual.
Discussion
The aim of this study was to better understand how the criminalisation of HIV transmission interacts with the provision of HIV services in England and Wales from the perspective of those delivering HIV treatment, care and support. In doing so, we also exposed some of the fragile and divergent interpretations of professional roles and responsibilities within and between institutional networks, as well as the values that underpin these. This discussion focuses on the sense of professional ambivalence that frequently comes to the fore in the face of HIV criminalisation, driven by the challenging nature of attending to public health values while working as a front line HIV service provider. We also consider a range of ways that participants sought to deal with this ambivalence in comparison with other research findings.
The majority of study participants described feeling caught between a clinical medical ethics of individual autonomy (grounded in human rights), and a public health ethics which emphasises the good of the collective (O'Neill, 2002). This tension has long existed, but is frequently obscured by a discourse of healthcare ethics which tends to be dominated by narratives of individual autonomy, developed to help manage the power dynamics in individual patient-doctor relationships (Gostin, 2003; Mann, 1997; O'Neill, 2002). Most study participants described the ways in which criminalisation had forced them to confront the divergent imperatives of individual autonomy, criminal justice and public health, often resulting in ambivalence about their professional values, which in turn had led to a deep unease around the entire topic.
In essence, we would argue that these service providers are confronting a core problem with aspirational public health discourse. In the main, key public health values are narrated (both in training and in available academic literature and policy documentation) for the benefit of public health officials and policy makers. Frontline providers of health, social and care services outside of the strict 'public health' sphere may find that they have difficulty translating such values into tangible, immediate decisions about advice and intervention at the individual level (Gostin, 2003; Jennings, 2003), with little sense of 'who has to do what for whom' (O'Neill, 2002, p. 8). Criminalisation appears to have nudged open pre-existing (yet routinely unacknowledged) fault lines in professionals' values and responsibilities frameworks. This dilemma is both produced and disrupted by the criminal law's entry into the field of HIV, given that no participants could locate any public health benefits arising from prosecutions for HIV transmission. At a practical level, some participants constructed solutions to help avoid or ameliorate such ambivalence, each being subject to varying degrees of acceptability among participants.
In the main, the concept of 'the responsible person with HIV' was largely undisturbed among service providers in their routine engagement with clients and patients, as most described people with HIV as being very cautious to minimise onward transmission risk. When outliers emerged (people with HIV who reported having unprotected intercourse with multiple partners over time) a small number of service providers in each focus group maintained order within their values systems by casting them as irresponsible. Such professionals had formulated a personal legal definition of recklessness as it related to HIV that would include any person with HIV whom that service provider had deemed to be irresponsible. The creation of a moral order of this type helped such participants avoid the values fault lines described above. This was most evident among those who described how they used the criminal law as a means of warning errant service users about the implications of their ongoing risky behaviour, and they described a feeling of empowerment or relief about being able to use the law as an externalised tool to help dispel their moral unease. In a few isolated cases, this went further, with service providers suggesting and supporting the pursuit of a criminal complaint. In our study, sharp discussions emerged between participants who maintained that successful HIV prevention was predicated upon harm reduction, and those who instead had started to promote universal HIV disclosure as both legally and morally safer. Just as Mykhalovskiy (2011) found, this application of the criminal law to HIV enables some professionals to take up the opportunity for moral entrepreneurship, thereby adopting a rule enforcer role (Becker, 1963).
Many providers wanted a simple universal set of HIV prevention messages to use with people with HIV, demonstrating that commitment to HIV harm reduction (boasting a veritable toolbox of behavioural choices for people with HIV4) is contested and patchy. Despite the currents of constant technological change which shape the landscape of HIV prevention, criminalisation's inherent focus on responsibility for HIV sero-status disclosure among people with HIV forces professionals to decisively position their personal and professional values to an extent that has not been demanded of them previously. Some demonstrated greater capacity to manage these demands than others.
A related response to professional ambivalence was clinical service providers' description of systematic note-keeping. In the clinical sphere, advice and discussion about criminalisation became incorporated into a pre-existing checklist and documentary working culture, which also served to guard against professional liability, again echoing Mykhalovskiy's (2011) findings. Others (particularly those working in community settings) described a response to criminal prosecutions that had taken them in the opposite direction, in that they had actively reduced the degree of detail in their notes in order to reduce the likelihood that these could be used in a criminal case. Reliance on managerial checklists has long been recognised as a key feature of allocating roles and responsibilities within large institutional systems (Smith, 2005). Such findings bring to mind Bowker and Leigh Star's notion of 'information infrastructures' (2000), which articulates the modern information society's response to the moral imperative for order and classification. Their work draws out the tension between the benefits of classification and the problems of rendering responses, motivations or actions invisible, such as some junior members of clinical staff in our study who often expressed unease about having to raise the issue of criminalisation, which was a new addition to the patient checklist. Their managers' confidence that the matter was being dealt with was predicated on the existence of the information infrastructure, rather than necessarily on knowledge about how these conversations were actually undertaken. On numerous occasions, the focus group participants pointed out that it was the first time they had ever been asked to explore these feelings of unease and professional confidence with colleagues.
When asked directly about the influence of criminalisation on working practices, a sizeable proportion of our participants actively rejected the idea that their practices had been unduly influenced by the criminal law. Therefore, in contrast to Mykhalovskiy's (2011) findings where there was a sense that having 'an eye to the law' was a pre-eminent and increasingly overriding concern in HIV service provision, a substantial number of our participants argued that correct data management and counselling practice proceeded without hindrance by such concerns. These assertions did not always correspond with specific discussions about record keeping. While our participants could not rule out self-censoring by service users, there was no evidence to suggest that the law had made providers afraid to talk with them about sex. The differences between our work and that undertaken earlier could be in part because key elements of criminal liability differ significantly between the UK and Canada. 5 Yet, in more recent research among HIV specialist nurses in Canada, Sanders (2015) reveals that changes to documentation practices in the light of criminalisation include a mixture of those who document more, and those who document less as a direct result of potential use of professionals' notes in criminal proceedings, similar to our findings.
Conclusion
Bauman (1991) argued that the key task of modernity is to make order of disorder, often by enforcement of a singular legislative logic. His understanding of the aversion to ambivalence in modern systems, underlined also by Smith (2005) and Bowker and Leigh Star (2000), helps in the critical examination of our findings, where we see a prevalent impulse to locate clean and clear mechanisms to reassert control. Indeed, we also found some opposition to this dominant tendency, voiced by those who saw practical and ethical value within the multiplicity of choice and the wide array of perspectives and outcomes that made up the diverse experience of people with HIV. In both responses, we can observe how professionals respond (and what tools they reach for) when their values systems are complex and at times contradictory. A few existed comfortably within that space of ambivalence, where they felt the pull of divergent considerations; most did not.
It is not that long since clinical training better enabled students to prepare for such circumstances. A longitudinal investigation of 1950s medical training in the US demonstrated that at that time, doctors were trained to 'sit' with uncertainty as a necessary component of their work and to develop the confidence to manage it as a part of their approach to patients (Fox, 1957). We can only wonder how many professionals already working in and entering the HIV sector in the current era (in both clinical and non-clinical settings) are adequately prepared to deal with the moral and professional complexities that inevitably emerge. Arguably, where such tensions have been papered over through routinisation or silence, HIV criminalisation has stripped back this veneer. If so, one of the key recommendations arising from this research is that staff teams will benefit from regular collective discussions about HIV and the criminal law, as well as other issues that pose moral or ethical dilemmas. Such discussions among staff might encourage greater consistency in approaches and communication styles, and the mixing of clinical and non-clinical service providers in such discussions may facilitate exchanges of views and approaches. The focus groups themselves appeared to offer a rare opportunity for staff teams as well as mixed professionals to collectively consider the issue.
Ultimately, the findings offer us insight into the various ways that criminalisation exposes key tensions for HIV service providers. Providers often feel at least partially responsible for HIV prevention and public health themselves, even if they recognise the responsibility and choice available to service users. This study has revealed the layers of complexity which criminalisation adds to the relationship between service providers and service users, both in clinical and community settings. It has generated unease, due to its disruption of a coherent set of professional ethics. In so doing, it has impacted on the ways in which the majority of HIV service providers, whose primary concern is the health and well-being of their service users, understand the scope and substance of their role. The exploration of such issues is something that is best tackled directly, and we hope that these findings will encourage practitioners to consider taking the time to talk with and listen to colleagues' contrasting standpoints and opinions. Ultimately, this should lead to greater transparency and coherence, not only in working practice related to HIV criminalisation, but also in underlying professional values.
Electronic Word of Mouth Analysis of Brand Attachment on MSME Products
This study aims to determine the effect of electronic word of mouth (eWOM) on brand attachment on MSME products. The technique used in this research was non-probability sampling with the purposive sampling method. The data collection technique was a questionnaire distributed to 197 respondents via a Google form. The data analysis used f-test analysis, t-test, and simple linear regression analysis. Based on the results of the t-test, the t-count for the electronic word of mouth variable was 5.491 with a significance level of 0.000, against a t-table value of 1.972; the significance of t for the brand attachment variable was below 0.05. Meanwhile, based on the f-test results, the calculated F value was 30.152 with a significance level of 0.000, against an F-table value of 3.89. The significance of 0.000 is less than 0.05 and the calculated F value > F table, showing that electronic word of mouth had a positive and significant effect on the brand attachment variable. The results of the study indicate that the presence of eWOM for a product can affect the brand attachment of an MSME product. The form of eWOM on social media can be seen when producers/sellers post product-related information on social media and followers respond (mention, comment, repost). At that time, the brand awareness of the product will increase.
INTRODUCTION
Social media is a means for Micro, Small, and Medium Enterprises (MSMEs) to market their products. MSMEs have been affected by the COVID-19 pandemic because physical distancing regulations to prevent the spread of the virus caused a decline in people's purchasing power for MSME products. To keep product sales high during the pandemic, MSME actors should sell their products through digital platforms, which are strongly attached to the era of globalization. In fact, however, many MSMEs have still not moved from conventional sales to digital methods, as conveyed directly by the Minister of Cooperatives and SMEs, Teten Masduki. He said that currently only about 13%, or 8 million MSMEs, have entered the digital ecosystem. Under the same conditions, digital sales in e-commerce increased by 26%, or reached 3.1 million transactions [1]. Through social media, MSMEs can form and build product brands that can be assessed by and become attached to consumers, establishing communication that is profitable for the MSME actors themselves. The combination of existing marketing factors with technological factors produces interactive marketing media that can create interactions between producers, consumers, and markets. The emotional attachment between product brands and consumers is called brand attachment. According to [2], brand attachment is a deep and strong emotional bond that connects one person to another across space and time. This theory explains that brand attachment does not have to be reciprocal. Meanwhile, [3] define brand attachment as the strength of the bond that connects the brand with a person. Therefore, based on the explanation above, brand attachment can help MSMEs to promote their products.
Improving brand attachment is important so that consumers can feel an emotional connection between MSME products and themselves, making those products stick in their minds. This will benefit existing MSME actors if they engage with their consumers. One way for MSME actors to form brand attachment is through word of mouth (WOM), that is, talk about products between consumers who are already tied to them and other consumers; with technological advances, consumers can share their WOM on social media, making it accessible to social media users. This conversation is called electronic word of mouth (eWOM). Through eWOM, consumers on social media will invite their followers to experience the MSME products that have been talked about on social media; indirectly, new potential consumers will try the products, feel an attachment to them, and then provide eWOM as well. According to [4], every consumer who has consumed a product will give his/her assessment of the product, and this cannot be denied because it comes from oneself. Then, if the consumer is satisfied or dissatisfied with the product, the consumer will give a review of the product to others. The existence of eWOM on social media will help form and increase brand attachment to products owned by MSME actors. In research conducted by [5] and [6], the results showed that there was an influence between the variables of brand awareness, brand image, brand satisfaction, brand trust, and brand attachment, so that eWOM can positively increase brand satisfaction, brand trust, and brand attachment. In addition, research conducted by [7] and [8], which focused on eWOM, found that the desire for social interaction, the desire for economic incentives, concern for other consumers, and the potential to enhance one's self-worth are the main factors leading to eWOM behavior.
In this study, the problem faced is that many MSME actors have still not switched to digital channels, so the formation of brand attachment with consumers through eWOM on social media is still minimal. In addition, research on the relationship between eWOM and the formation of brand attachment is still rarely found. In this study, the research done by [6] and [5] was elaborated together with [8] and [7], with some adjustments to suit the research object and the field conditions of this research. Therefore, the purpose of this study is to find out how eWOM on social media influences the brand attachment of MSME products that have gone online. Based on the above background, it is interesting to conduct a study with the title "e-WOM Analysis of Brand Attachment on MSME Products".
Brand Attachment
The concept of brand attachment developed from a psychological concept known as attachment theory, which was coined by [2]. The level of emotional attachment to an object can predict the nature of an individual's interaction with the object [2]. For example, individuals who are attached to someone are very likely to be committed and willing to sacrifice for that person [9]. [9] described the consumer-brand relationship as analogous to the individual-object relationship in attachment theory. They argued that consumers' emotional attachment to a brand can predict the consumer's commitment to the brand (e.g., brand loyalty) and their willingness to make financial sacrifices to obtain the brand.
Two important factors that conceptually represent brand attachment are brand-self connection and brand prominence. The brand-self connection is a cognitive and emotional relationship between the brand and the self. This connection is important to facilitate the fulfillment of utilitarian, experiential, and/or symbolic needs. Meanwhile, brand prominence is the extent to which positive feelings and memories about the object of attachment are perceived as top of mind [3]. Positive memories about the object of attachment (brand) will be more prominent for people who are very attached to the object of attachment than for consumers who show weak attachment.
Electronic Word of Mouth (eWOM)
According to [8], positive or negative statements made by potential and actual consumers who have used the products or services of a company, which can be accessed by many people and institutions via the internet, are called electronic word of mouth (eWOM). [10] mentioned that eWOM offers various ways to exchange information, which can be done confidentially or anonymously, and provides geographical and temporal freedom. eWOM also has a uniqueness that WOM does not have, one of which is that it is permanent [11].
Dimensions of eWOM
In their research, [8] proposed a number of dimensions of eWOM. In this study, only five dimensions were used, namely platform assistance, concern for others, expressing positive feelings, economic incentives, and helping the company.
The relationship between the eWOM variable and Brand Attachment
A positive perception of a product or service will stimulate positive memories so that it creates an emotional attachment to the product or service.
According to [11], when there is an exchange of information through eWOM, consumers will evaluate the product. In addition, positive eWOM can persuade potential customers and influence consumer perceptions of a product review or product recommendation by other customers. Consumers' emotional attachment to a brand can be used to predict consumer commitment to the brand and consumers' willingness to make financial sacrifices to obtain the brand.
METHODS
This study was conducted to ensure the reliability and validity of the previously determined measures. Both analyses were used to test whether the data obtained were valid and reliable so that they could be used for further research. Hypothesis testing was carried out using regression analysis with SPSS 25. The collected data were analyzed using a 5-point Likert rating scale, from strongly disagree to strongly agree, to obtain interval data and assign scores. This study involved 197 respondents.
The primary data for the research were collected using questionnaires, while secondary data were collected from online newspapers, literature, journals, books accessed via the internet, and other sources. The sample selected as respondents consisted of students in Bandung who know MSMEs; the sampling technique was non-probability sampling with the purposive sampling method, in which samples are selected provided that they meet certain criteria. [12] stated that a sample size greater than 30 and less than 500 is appropriate and reasonable for research in general.
A. Description of Respondents' Characteristics
The number of surveys analyzed further in this study was 197 respondents. According to the results of the questionnaire, 64.4% of respondents were female and 35.7% male. Based on age, the majority of respondents (87.7%) were aged 18 to 22 years, while those under 18 years made up 1.5%. The most used social media platform was Instagram with a share of 70.1%, followed by YouTube with 8.8%, Facebook and Twitter with 5.7%, and then WhatsApp and TikTok. The majority of respondents saw and talked about MSME products in several sectors, including culinary, fashion and clothing, beauty, crafts, and services.
B. Validity Test
The results of the validity test are shown in Table 1. Based on these results, the indicators for the eWOM variable, namely platform assistance, concern for others, expressing positive feelings, economic incentives, and helping the company, as well as the brand attachment variable, have significance values of 0.000 < 0.05 and are therefore declared valid.
C. Reliability Test
The results of the reliability test are shown in Table 2. The reliability test in this study was measured using Cronbach's alpha. Table 2 shows that all of the research instruments have a Cronbach's alpha coefficient of 0.737 > 0.6, which means they are reliable.
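To make the reliability calculation concrete, here is a minimal Python sketch of Cronbach's alpha computed from a respondent-by-item score matrix. The simulated 197 × 10 matrix is purely hypothetical and only illustrates the mechanics behind the 0.737 coefficient reported above.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (rows = respondents, columns = items).
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(197, 10))
print(f"alpha = {cronbach_alpha(demo):.3f}")  # compare against the 0.6 cut-off
```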
D. Normality Test
The result of the normality test is shown in Figure 2:
Based on Figure 2, the data being tested are normally distributed and meet the assumption of normality, because the points spread along the diagonal line and cluster around it.
E. Heteroscedasticity Test
The result of the heteroscedasticity test is shown in Figure 3:
Figure 3. Heteroscedasticity Test Results
Based on Figure 3 above, there is no heteroscedasticity in the data being tested: the points are scattered without a clear pattern, spreading both above and below 0 on the Y-axis.
F. Simple Linear Regression Test Results
The summary model of the simple linear regression test is shown in Table 3. Table 3 explains the magnitude of the correlation (R) value, which is 0.366. From the output, the coefficient of determination (R Square) is 0.134, which implies that the influence of the independent variable electronic word of mouth on the dependent variable brand attachment is 13.4%.
G. t-Test
The result of the t-test is shown in Figure 4. Based on the significance number of t in the table above, the t-count is 5.491 with a significance level of 0.000, while the t-table value is 1.972. The significance of 0.000 is less than 0.05 and the value of t-count > t-table. Thus, it can be concluded that the eWOM variable (X) has a direct significant effect on brand attachment (Y).
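As an illustration of the inferential steps above, the following sketch runs a simple linear regression with SciPy and derives the t statistic and its critical value; with a single predictor, the F statistic is simply the square of t. The score arrays are simulated placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Placeholder composite scores; in the study these come from the questionnaire.
rng = np.random.default_rng(1)
ewom = rng.normal(20, 4, size=197)                    # X: eWOM score
attachment = 0.4 * ewom + rng.normal(0, 4, size=197)  # Y: brand attachment

res = stats.linregress(ewom, attachment)   # simple linear regression Y = a + bX
t_count = res.slope / res.stderr           # t statistic for the slope
df = len(ewom) - 2
t_table = stats.t.ppf(0.975, df)           # two-tailed critical value, alpha = 0.05

print(f"b = {res.slope:.3f}, R^2 = {res.rvalue**2:.3f}")
print(f"t = {t_count:.3f} vs t_table = {t_table:.3f}, p = {res.pvalue:.4f}")
print(f"F = {t_count**2:.3f}")             # with one predictor, F = t^2
```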
H. F-Test
The F-test results are shown in Table 5:
I. Effect of eWOM on Brand Attachment
The t-test table shows that the t-count value (5.491) is greater than the t-table value (1.97220), which means that H0 is rejected and H1 is accepted. It can be concluded that electronic word of mouth (eWOM) has a positive effect on brand attachment.
The results of the study indicate that the presence of eWOM for a product can affect the brand attachment of an MSME product. The form of eWOM on social media can be seen when producers/sellers post product-related information on social media and followers respond (mention, comment, repost). At that time, the brand awareness of the product will increase. The same happens to the brand image of the product: it will look good or bad depending on the opinions of the product's followers on Instagram. Brand knowledge, consisting of brand awareness and brand image, is the main area of eWOM communication that occurs around products. When product followers respond to producer/seller posts and this process continues between fellow followers, many-to-many communication is formed. After brand awareness and brand image are established, brand satisfaction and brand trust follow. When followers receive the information they need about the product, and the producer/seller provides solutions and services with a fast and satisfying response, consumer brand satisfaction in the product is formed. When producers/sellers provide information about their products honestly and sincerely, and can recommend products that are suitable for and match consumers' needs, consumer brand trust in the product is formed. Once brand satisfaction and brand trust have been formed for the product on Instagram, the brand relationship leads to the formation of brand attachment. Brand attachment is formed because the producer/seller, through the Instagram account, can create interaction between followers and products. Giving rewards to followers through quizzes or challenges on Instagram can also form consumer brand attachment to products.
The results of this study are also consistent with previous research showing that social-media brand pages with trendiness information were effective in attracting consumers' attention and were deemed important in strengthening consumers' ability to recognize the brand [13]. [8] also stated that "consumers may be exposed to electronic WOM through websites, blogs, chatrooms or email". In addition, [8] defined eWOM as "any positive or negative statement made by potential, actual, or former customers about a product or company which is made available to a multitude of the people and institutes via the Internet". Thus, it can be said that eWOM is something that is integrated. Electronic word of mouth (eWOM) will have maximum effect if it is used in an integrated manner across social media, such as Twitter, Facebook, blogs, broadcast email, and BBM/WhatsApp chat. According to [10], eWOM is an important expression of consumer satisfaction with a brand and may have a critical impact on brand image and brand awareness. eWOM is showing signs that it will become even more important in the future as social networking applications spread more widely. It is also explained that much of the focus of eWOM research has been on blogs, customer review sites, social media, and web pages. Recent empirical research has also found that combining firm-initiated SMM activities with the management of user-generated content is an effective strategy for building brand knowledge as well as purchase intentions [14].
CONCLUSIONS
This study aimed to analyze the effect of social media eWOM, acting as the independent variable, on brand attachment as the dependent variable. It was found that the electronic word of mouth (eWOM) variable in social media had a positive and significant influence on brand attachment. This shows that the more eWOM on Instagram is received, the greater the influence on brand attachment.
Rapid Quantification of Myocardial Lipid Content in Humans Using Single Breath-Hold 1H MRS at 3 Tesla
A rapid, proton magnetic resonance spectroscopy method to evaluate human myocardial lipid levels in a single breath-hold at 3 T using a commercial whole-body system is presented. During a 10 s breath-hold, water unsuppressed and suppressed spectra were acquired by two phased array coils using a short-echo time spectroscopic stimulated echo (STEAM) sequence electrocardiogram-triggered to mid-diastole. Lipid-to-water ratios were obtained in the septum of 15 healthy volunteers, (0.46 ± 0.19)%. These results agreed well with ratios obtained from averaged spectra acquired in seven multiple breath-holds, (0.45 ± 0.20)%, providing increased signal-to-noise ratio but requiring longer acquisition times. Excellent correlation was found between the two methods (r = 0.94, P < 0.05). Reproducibility of 1H MRS for measuring myocardial lipid levels in a short breath-hold was acceptable in five repeated measurements within the same subject (coefficient of variation = 19%). Thus, single breath-hold proton spectroscopy allows reliable and quick quantification of myocardial lipids at 3 T.
Recent advances in cardiac proton magnetic resonance spectroscopy (1H MRS) have enabled the noninvasive study of myocardial lipid metabolism in humans (1-3), typically requiring a 4-10 min acquisition. Initial studies in humans have confirmed findings from animal research, suggesting that cardiac lipid levels may be considered as a potential biomarker for myocardial dysfunction (4-7). Therefore, a further characterization of the pathological role of myocardial lipid deposits may benefit from the application of a noninvasive tool such as 1H MRS.
The acquisition of high-quality cardiac proton spectra is technically demanding. The signal-to-noise ratio (SNR) for myocardial metabolites is low because of their low concentrations and due to the significant distance from the radiofrequency (RF) coils. Nowadays, clinical 3 T scanners with phased-array receive coils are increasingly common, with the advantages of increased SNR (8) and higher spectral resolution. However, high-field MR spectroscopy suffers from increased magnetic field inhomogeneities (9) that pose further challenges for the water signal suppression, which is essential to detect the weak metabolite signals. Moreover, cardiac and respiratory motion results in a changing voxel location and can, therefore, affect shimming and water suppression. Electrocardiogram gating is generally adequate for compensating cardiac motion. Various approaches have been proposed to reduce the influence of respiratory motion, including respiratory gating (1,2,10) and navigator gating with volume tracking (3,11,12). Despite the progress of cardiac 1H MRS as a research tool, only a few studies so far have examined cardiac metabolism in humans using this tool at 3 T (11-13).
Here, a single breath-hold cardiac-gated 1H MRS acquisition is proposed as a time-efficient approach to measure myocardial lipid levels at 3 T. Breath-holding is commonly used for respiratory motion compensation in thoracic MR imaging, with comfortable end-expiration breath-hold periods limited to 25 s for healthy volunteers and 9 s for patients (14,15). Hence, the aim of this study was to investigate whether cardiac lipid levels can be measured accurately within a single breath-hold using 1H MRS at 3 T, and to evaluate the applicability and reproducibility of this method as a routine clinical research tool.
Subjects
In 15 healthy volunteers (11 men, four women; mean age ± standard deviation (SD), 34 ± 10 years; age range, 22-58 years; body mass index (BMI), 19-30 kg/m²), myocardial lipid normalized to myocardial water content (myocardial lipid levels) was evaluated using 1H MRS. The volunteers were asked to fast overnight. None of the volunteers had a history of cardiovascular disease, diabetes, or any other chronic disease. The Research Ethics Committee at our institution approved the study protocol, and all participants gave informed consent.
1H MRS Technique
All studies were performed on a 3 T MR scanner (Tim Trio, Siemens Healthcare, Germany). The 1 H MRS sequence was based on a conventional STEAM sequence that was modified to achieve a short TE of 10 ms. Water suppression was achieved using a WET module (16). The standard prescan approach was used for adjusting the RF pulse scaling factor, which utilizes a subject dependent global calibration. Additionally, a calibration pulse sequence was implemented to evaluate the optimal water suppression pulse scaling factor. Cardiac gated spectra were acquired using different water correction scaling factors in a 17 s breath-hold and the one yielding the most effective water-suppression was chosen and applied in the following experiments.
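A minimal sketch of the calibration logic just described, assuming a hypothetical `acquire` interface to the scanner: one spectrum is acquired per candidate scaling factor, and the factor leaving the smallest residual water signal is kept.

```python
import numpy as np

def residual_water_amplitude(fid: np.ndarray) -> float:
    """Magnitude of the first time-domain point, dominated by residual water."""
    return float(np.abs(fid[0]))

def pick_ws_scaling(acquire, factors):
    """Return the candidate scaling factor giving the weakest residual water.

    `acquire(f)` stands in for a cardiac-gated acquisition with the
    water-suppression RF amplitude scaled by `f` (hypothetical interface).
    """
    residuals = [residual_water_amplitude(acquire(f)) for f in factors]
    return factors[int(np.argmin(residuals))]

# Toy stand-in: residual water grows with distance from a "true" optimum of 1.08.
acquire = lambda f: np.array([abs(f - 1.08)], dtype=complex)
print(pick_ws_scaling(acquire, [0.8, 0.9, 1.0, 1.1, 1.2]))  # -> 1.1
```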
During the single breath-hold acquisition, the modified STEAM sequence was repeated four times: a spectrum without water suppression was obtained first, with the frequency centred at the water frequency, followed by three consecutive water-suppressed measurements with the frequency set to 3 ppm (Fig. 1). The effective repetition time (TR) was cardiac gated and controlled by the pulse program to be at least 2 s, and the total spectroscopy acquisition time was less than 10 s. To determine whether or not the single breath-hold technique provided sufficient SNR for an accurate myocardial lipid assessment, it was compared with the multiple breath-hold method. This consisted of six breath-holds of about 16 s each: five breath-holds allowed for the acquisition of 35 nonaveraged water-suppressed spectra, and four nonaveraged water spectra were acquired with a minimum TR of 4 s in a separate breath-hold, setting the water suppression RF pulse power to zero. The total acquisition time was 5-7 min, including time for the volunteers to recover between breath-holds.
Validation of Single Breath-Hold Lipid Content Measurement
For the validation of the method, the volunteers were positioned supine; the body coil was used for transmission and the anterior and posterior phased-array body coils for receiving, resulting in 15 channels of data. Cardiac gated MR cine imaging was performed to acquire the standardized four-chamber and short axis views for appropriate voxel placement in the septum and for determination of the respective trigger delay required for mid-diastole (Fig. 2b,c). Shim adjustment was performed on a 3D field map obtained from a cardiac-gated gradient double echo acquisition within a single breath-hold (18-22 s). Myocardial 1H MRS data were obtained at end-expiration from a 22 × 12-19 × 32-36 mm (8-15 mL) voxel centered in the interventricular septum, far from epicardial fat (Fig. 2). All acquisitions were electrocardiogram-triggered to mid-diastole. Spectroscopy parameters for the custom STEAM sequence included a TE of 10 ms and a mixing time of 7 ms; 1024 points were acquired at a bandwidth of 2000 Hz. Effective repetition times of at least 4 and 2 s were chosen to approach complete relaxation of the water and lipid signals, respectively. Furthermore, the scan frequency was set at 4.7 ppm during water-unsuppressed acquisitions and at 3 ppm during water-suppressed acquisitions, to minimize the effects of the large chemical shift displacement (≈420 Hz, corresponding to 4.5 mm in the feet-to-head direction) between the lipid and water peaks at this field strength.

FIG. 1. Single breath-hold acquisition scheme: a water-unsuppressed scan (WS OFF) was followed by 3 water-suppressed scans (WS ON), with a 2-s delay in between acquisitions. Each scan was cardiac gated, and a trigger delay (TD = 635 ± 82 ms, mean ± SD, n = 15) was used to position each STEAM module at the same time in mid-diastole for both WS ON and WS OFF. The WET module (duration: 218 ms) was applied with the amplitude of the RF pulses set to 0 during the WS OFF scan.

FIG. 2. a: Illustration of the reproducibility of single breath-hold 1H MRS measurements compared with multiple breath-hold measurements. Within-subject reproducibility was demonstrated by five repeated spectra (S1-S5) acquired in the same session from a 22 × 12 × 32 mm voxel positioned in the interventricular septum of a healthy volunteer (male, BMI = 28 kg/m²), as shown in (b) and (c). Depiction of the myocardial lipid resonance, (-CH2)n, at 1.3 ppm obtained in a single breath-hold (3 averages, myocardial lipid content: 0.61%) showed sufficient SNR, as confirmed by an almost identical spectrum (M) acquired using multiple breath-hold 1H MRS (35 averages, myocardial lipid content: 0.64%). The spectra were scaled to the respective water amplitude.
To investigate the reproducibility of the single breath-hold method, this technique was repeated five times in each subject without repositioning the voxel. To assess the accuracy of the single breath-hold method, the multiple breath-hold method was applied at an identical voxel location. Imaging and prescan adjustments were repeated before the multiple breath-hold acquisitions. The total session time, comprising positioning of the patient, septum localization, shimming, water suppression factor adjustment, and acquisition of a single breath-hold spectrum, was on average 15 min. This time was increased by a further 5-7 min for the multiple breath-hold spectra acquisition.
Spectral Postprocessing
The algorithms for signal combination from individual coil elements (17) and averaging from different acquisitions were written in Matlab. The combination of the individual signals was performed using the amplitude and phase of the time-domain signal without water suppression to represent the weighting and phase correction factors required for weighted signal summation. The multiple 1H MRS acquisitions (i.e., three during single breath-hold measurements and 35 during multiple breath-hold measurements) were phase-corrected with the zero-order phase of the dominant peak in the spectra (typically the residual water peak or the lipid peak) before averaging (12). Furthermore, individual signals acquired within different breath-holds were frequency aligned prior to the summation.
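The weighted coil combination described above can be sketched as follows, assuming each channel's water-unsuppressed FID supplies its weight and phase-correction term; the array shapes and random data are illustrative only (the original implementation was in Matlab).

```python
import numpy as np

def combine_coils(fids_ws: np.ndarray, fids_water: np.ndarray) -> np.ndarray:
    """Weighted combination of multi-channel FIDs (channels x points).

    The first time-domain point of each channel's water-unsuppressed FID
    supplies the weight (its magnitude) and the phase-correction term.
    """
    ref = fids_water[:, 0]                   # complex reference per channel
    weights = np.abs(ref)
    phases = np.exp(-1j * np.angle(ref))     # remove per-channel phase offsets
    combined = (weights[:, None] * phases[:, None] * fids_ws).sum(axis=0)
    return combined / weights.sum()          # normalise the weighting

# Hypothetical data: 15 channels, 1024 complex points each.
rng = np.random.default_rng(2)
water = rng.normal(size=(15, 1024)) + 1j * rng.normal(size=(15, 1024))
suppressed = rng.normal(size=(15, 1024)) + 1j * rng.normal(size=(15, 1024))
spectrum = np.fft.fftshift(np.fft.fft(combine_coils(suppressed, water)))
```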
All spectral quantification was performed in the time domain, using the AMARES algorithm included in the jMRUI package (18). A total of six peaks were specified to fit the metabolite contributions in the water-suppressed spectra using a Lorentzian line shape, representing total creatine with the methylene group CH2 at 3.9 ppm, trimethylammonium compounds at 3.2 ppm, total creatine with the methyl group CH3 at 3.0 ppm (Cr), lipids at 2.2 ppm, lipids (-CH2) at 1.3 ppm, and lipids with the methyl group CH3 at 0.9 ppm. The prior knowledge applied was a restriction of linewidths to below 40 Hz for all resonances; only the global zero-order phase was fitted, while the first-order term was kept constant. The amplitude of the lipid resonance at 1.3 ppm was selected for myocardial lipid quantification. The water peak amplitude from the water-unsuppressed scans was used as an internal reference. The lipid content was calculated as a percentage relative to water: the amplitude of the lipid peak divided by the amplitude of the water peak, multiplied by 100. The SNR of the lipid peak was estimated by taking the ratio of the lipid time-domain signal intensity to the noise SD extracted from the final 100 points of the signal in the time domain. Also, an estimate of the SNR was obtained by dividing the Cramer-Rao standard deviation (CRSD) of the lipid peak, an indicator of the accuracy of the spectral quantification provided by the AMARES fitting algorithm, by the lipid peak amplitude and converting the result to a percentage (rCRSD%) (19).
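The quantities defined above reduce to simple ratios; this sketch assumes the AMARES fit has already returned the lipid amplitude and its CRSD, so all inputs are placeholders with arbitrary units.

```python
import numpy as np

def time_domain_snr(a_lipid: float, fid: np.ndarray) -> float:
    """SNR estimate: lipid time-domain amplitude over the noise SD of the tail."""
    noise_sd = np.std(fid[-100:].real)   # final 100 points assumed pure noise
    return float(a_lipid / noise_sd)

def lipid_content_percent(a_lipid: float, a_water: float) -> float:
    """Lipid amplitude at 1.3 ppm as a percentage of the water amplitude."""
    return 100.0 * a_lipid / a_water

def rcrsd_percent(crsd_lipid: float, a_lipid: float) -> float:
    """Relative Cramer-Rao SD of the lipid peak, in percent."""
    return 100.0 * crsd_lipid / a_lipid

# Placeholder fit outputs illustrating the ratios reported in this study.
print(lipid_content_percent(a_lipid=4.6, a_water=1000.0))  # -> 0.46 %
print(rcrsd_percent(crsd_lipid=0.42, a_lipid=4.6))         # -> ~9.1 %
```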
Statistical Analysis
Statistical analyses were performed using SPSS (version 16.0; Chicago, IL). Results were expressed as mean ± SD. To determine the reproducibility of the technique, the coefficient of variation (CV) of the five repeated myocardial lipid content measurements in a single breath-hold was calculated as CV = (within-subject SD)/(within-subject mean of the five myocardial lipid% measurements) × 100. The Pearson correlation coefficient (r) and linear regression analysis were used to examine the relationship between myocardial lipid content measured in a single breath-hold and in multiple breath-holds, and between myocardial lipid content and BMI. Bland-Altman analysis was performed to evaluate the agreement between myocardial lipid levels from single and multiple breath-hold measurements. A paired t-test was used to compare myocardial lipid content, CRSD% and linewidth values between the two acquisition approaches. A P value < 0.05 was considered statistically significant.
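Assuming the five repeated single breath-hold measurements per subject are arranged in a subjects-by-repeats array, the CV and Bland-Altman statistics described above reduce to a few lines; the numbers below are simulated, not the study data.

```python
import numpy as np

def within_subject_cv(repeats: np.ndarray) -> float:
    """Mean within-subject CV (%) for a (subjects x repeats) array."""
    cv = repeats.std(axis=1, ddof=1) / repeats.mean(axis=1) * 100
    return float(cv.mean())

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical lipid levels (%): 15 subjects, 5 repeated single breath-holds.
rng = np.random.default_rng(3)
single_repeats = rng.normal(0.46, 0.09, size=(15, 5))
print(f"CV = {within_subject_cv(single_repeats):.1f}%")

single = single_repeats[:, 0]
multi = single + rng.normal(0, 0.035, size=15)
print("bias, lower LoA, upper LoA:", bland_altman(single, multi))
```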
RESULTS
Representative spectra from the septum of the healthy volunteers using both single and multiple breath-hold methods are shown in Fig. 2a. The spectra showed a well-defined resonance corresponding to myocardial lipids at 1.3 ppm, relative to the residual water at 4.7 ppm. Other resonances, such as trimethylammonium at 3.2 ppm, creatine at 3 ppm, and other lipids at 0.9-2.5 ppm were also visible in the multiple breath-hold spectra.
Constructive averaging of the 35 STEAM spectra acquired using the multiple breath-hold method resulted in the expected increase in SNR and lower lipid rCRSD% compared with the single breath-hold method (SNR: 24 ± 14 vs. 6.8 ± 4.2, P < 0.05; rCRSD%: 4.4 ± 3.4 vs. 9.1 ± 5.9, P < 0.05; multi vs. single breath-hold). In the 35 STEAM spectra acquired using the multiple breath-hold method, the mean phase variation for all subjects was 26 ± 19 (mean ± SD, n = 15). The water full-width-at-half-maximum in the unsuppressed water spectra was 14.3 ± 3.2 Hz and 13.3 ± 2.4 Hz (P = 0.38) when using the single breath-hold method (one average) and the multiple breath-hold method (four averages), respectively.
The mean myocardial lipid content in healthy volunteers with both single and multiple breath-hold 1H MRS was (0.46 ± 0.19)% and (0.45 ± 0.20)%, respectively (P = 0.94). Within-subject reproducibility of myocardial lipid levels over repeated 1H MRS measurements in a single breath-hold showed a CV of 19%. Moreover, a strong correlation was confirmed between myocardial lipid levels obtained in a single breath-hold (randomly selected from the five repeated single breath-hold measurements) and multiple breath-holds (r = 0.94, P < 0.05, Fig. 3). Bland-Altman analysis (Fig. 4) showed an excellent agreement of the lipid levels measured in a single breath-hold and in multiple breath-holds, with a mean difference between both methods of (-0.02 ± 0.07)%. All the differences between the two measurement methods were contained within the 95% limits of agreement (from -0.13% to 0.11%).
For the BMI range represented in this study, the myocardial lipid levels increased linearly with BMI (Fig. 5a, single breath-hold: r = 0.71, P < 0.05; Fig. 5b, multiple breath-holds: r = 0.68, P < 0.05). Moreover, the difference between the slopes of the regression lines for the two methods was not significant (P = 0.98). The 15 volunteers were then divided into two groups: normal (BMI < 25 kg/m², four male, two female) and overweight (BMI ≥ 25 kg/m², seven male, two female). Myocardial lipid levels were significantly different between the two groups, independent of the method of measurement used.
DISCUSSION
This study has shown that myocardial lipid levels can be quantified during a short single breath-hold using cardiac triggered proton spectroscopy at 3 T. The mean lipid levels obtained using the single and multiple breath-hold methods agreed well. Water was chosen as the internal reference, as it is assumed that the water content remains constant in normal and pathological conditions, as shown by Bottomley and Weiss (20) in the dog model of myocardial infarction. Other metabolites, such as creatine, not only suffer from low SNR but may also vary in pathologies.
The spectroscopic quality, characterized by the linewidth of the unsuppressed water signal (14.3 ± 3.2 Hz), was good using the single breath-hold method and similar to values published at 1.5 T. van der Meer et al. (3) reported a myocardial water signal linewidth of 10.3 Hz using navigator gating and volume tracking for respiratory motion correction and 11.4 Hz without navigation. Another study obtained a 12 Hz myocardial water linewidth using a sequence gated to the respiratory cycle (21). Because linewidth is dependent on the T2 relaxation time and field homogeneity, the linewidth would be expected to increase as the field strength goes from 1.5 T to 3 T. The similar values were attributed to the good local shimming achieved at 3 T in this study, which used a cardiac gated 3D field map shimming method that controlled both the 1st and 2nd order shim coils. Furthermore, the CRSD was on average 9.1% of the myocardial lipid amplitude, demonstrating that single breath-hold acquisitions showed sufficient spectral quality (22) over the wide range of myocardial lipid levels observed (0.14-0.73%). This finding is also supported by the fact that an almost identical myocardial lipid range (0.14-0.79%) was observed using the multiple breath-hold acquisition, with intrinsically lower CRSD (4.4%).
The reproducibility of the single breath-hold method was validated by repeating single breath-hold lipid measurements at the same myocardial location for each subject. Although the volunteers were not repositioned in our reproducibility assessment, all the reference scans (i.e., scouting, frequency adjustment, RF-reference voltage measurement, shim current, and water suppression calibration) were invalidated and forced to be re-calibrated in between measurements. In our experience, the calibration procedure, rather than the patient positioning, represents the main source of variation for this kind of measurement (due to breath-hold variability). A CV of 19% was obtained. Szczepaniak et al. (1) showed a similar CV (17%) for spectroscopy measurements of myocardial lipid levels using a pressure belt for respiratory gating. Felblinger reported a CV of 13% using double triggering based on the electrocardiogram signal (10). Also, the findings in this study compared well with more recent results obtained at 1.5 T using navigator gating and volume tracking (3).
Volunteers were divided into two groups according to their BMI. The threshold of 25 kg/m² was based on the National Institutes of Health classification of overweight by BMI (23). Importantly, single breath-hold 1H MRS showed sufficient SNR for the assessment of cardiac lipid levels, even for low-, i.e., normal-BMI volunteers. Moreover, a statistically significant positive correlation between BMI and myocardial lipid levels was found, which was also reported in previous studies where other methods were used for spectroscopy acquisition (1,2,24).
This study demonstrated the time efficiency obtainable from single breath-hold MR acquisitions. In a 10 s breath-hold, both unsuppressed and suppressed water signals were acquired with sufficient resolution for myocardial lipid level quantification in healthy volunteers. Compared with multiple breath-hold averaging, the single breath-hold 1H MRS acquisition time decreased from 16 to 10 s, which allowed for a comfortable breath-hold and, more importantly, can be applied to patients, who can generally hold their breath only for shorter times compared to volunteers (15).
No T1 corrections were considered in the calculation of the metabolite ratios. The T1 of lipids was estimated to be between 300 and 400 ms (based on values reported for 1.5 T (25)). Hence, the TR of 2 s was adequate to ensure full relaxation. The T1 value for healthy myocardium was measured to be 1.2 s (26). Although the water signal was not fully relaxed at a TR of 4 s, the correction factor of 3% is within the accuracy of the method. Importantly, water was fully relaxed for the single breath-hold method. Furthermore, the lipid level values obtained in this study were in concordance with previously published data (3,5,13,27).
The spectroscopy voxel was larger (8-15 mL) compared with previous studies (2-9 mL) (1,3,13,20,24,28). However, contamination from ventricular blood can be excluded due to the dark blood properties of the STEAM sequence (20,28,29), which we confirmed in preliminary work by placing the voxel entirely in the blood pool (data not shown). Moreover, the spectroscopic volume was carefully positioned in the interventricular septal wall, avoiding areas of pericardial fat, confirmed by a single spectral peak at 1.3 ppm (1). The water-suppressed localized volume was based on the creatine resonance frequency (3 ppm) for possible assessment of the entire myocardial metabolic range, including creatine. For these experimental settings (≈3.4 kHz STEAM sinc refocusing pulses, length = 2.6 ms, and voxel volume = 8-15 mL), the creatine and lipid resonances had ≈82% of their corresponding voxels in common. A creatine peak was clearly visible in 12 of the 15 multiple breath-hold spectra acquired. However, creatine was not quantified in these spectra, as the aim of this study was to establish and validate the single breath-hold technique. Furthermore, single breath-hold cardiac spectra did not have sufficient SNR for reliable creatine quantification. Moreover, future single breath-hold experiments should be acquired at the lipid resonance.
Although only 15 volunteers were included in this study to investigate feasibility and to validate single breath-hold cardiac spectroscopy, the sample size was sufficient to achieve statistical significance, with single breath-hold and multiple breath-hold lipid levels correlating with a slope of unity. However, ¹H MRS was only performed in healthy volunteers with a BMI range of 19-30 kg/m². Hence, the performance of the technique specifically in obese patients requires further investigation. Notably, a 10 s breath-hold is well within the comfortable range for healthy volunteers (14) and also for patients (15). A breath-hold period of similar duration is routinely used for functional cardiac MR imaging in our institution. Importantly, the minimal time requirement of the proposed protocol, combined with the use of standard cardiac receive coils, allows this method to be added to any routine cardiac MR exam without a substantial increase in scan time.

[Figure caption: The error bars in panel (a) represent the SD of the five repeated scans using the single breath-hold method. Volunteers with BMI ≥ 25 kg/m² showed myocardial lipid levels significantly higher than volunteers with BMI < 25 kg/m² (P < 0.05). No significant difference was found between methods for both BMI groups.]
CONCLUSION
Single breath-hold 3 T ¹H MRS is a feasible and reproducible method for measuring cardiac lipid levels across a spectrum ranging from low to more elevated values in healthy volunteers. Additionally, it provides a quick tool for better characterizing the influence of cardiac lipid accumulation on cardiac function and for evaluating the effects of pharmacological or dietary interventions.
Stress of Conscience Questionnaire (SCQ): exploring dimensionality and psychometric properties at a tertiary hospital in Australia
This study explored the psychometric properties and dimensionality of the Stress of Conscience Questionnaire (SCQ) in a sample of health professionals from a tertiary-level Australian hospital. The SCQ, a measure of stress of conscience, is a recently developed nine-item instrument for assessing frequently encountered stressful situations in health care, and the degree to which they trouble the conscience of health professionals. This is relevant because stress of conscience has been associated with negative experiences such as job strain and/or burnout. The validity of the SCQ has not been explored beyond Scandinavian contexts. A cross-sectional study of 253 health professionals was undertaken in 2015. The analysis involved estimates of reliability, variability and dimensionality. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were used to explore dimensionality and theoretical model fit, respectively. A Cronbach's alpha of 0.84 showed internal consistency reliability. All individual items of the SCQ (N = 9) met the cut-off criteria for item-total correlations (> 0.3), indicating acceptable homogeneity. Adequate variability was confirmed for most of the items, with some items indicating floor or ceiling effects. EFA retained a single latent factor with adequate factor loadings for a unidimensional structure. When the two-factor model was compared to the one-factor model, the latter achieved better goodness of fit, supporting a one-factor model for the SCQ. The SCQ, as a unidimensional measure of stress of conscience, achieved adequate reliability and variability in this study. Due to the unidimensionality of the tool, summation into a total score can be a meaningful way forward to summarise and communicate results from future studies, enabling international comparisons. However, further exploration of the questionnaire in other cultures and clinical settings is recommended to explore the stability of the latent one-factor structure.
Background
The term 'stress of conscience' has emerged to conceptualise an existential dimension of stress that health professionals may develop from frequently encountered stressful situations in health care, perceived as leading to a troubled conscience [1][2][3][4]. Despite the heterogeneity of clinical settings, the generic sources of frequently encountered stressful situations across health care settings include perceived demanding workload, lack of support from leadership/management and staff conflict [3,4]. In such situations, health professionals perceived a gap between the reality of practice and their ideal practice, between structural demands and their own aspirations to provide the quality care they feel the person in need of care deserves [3][4][5][6]. Glasberg et al. [3] found that health professionals, reflecting on these stressful situations, often described punitive feelings of guilt, embarrassment and/or shame accompanying an experience of a troubled conscience. The extent to which they experienced a troubled conscience depended on individual appraisal of the stressful situation, which in turn is thought to be influenced by personal and professional ethical beliefs [1,[3][4][5]. Hence, Glasberg et al. [3] coined the term 'stress of conscience' to highlight and explore this existential dimension of workplace stress for health professionals.
The Stress of Conscience Questionnaire (SCQ) was developed to explore stress of conscience among health professionals [3]. The impetus for Glasberg and colleagues [3] to develop the SCQ came from their review of the literature, which identified a gap in tools for assessing the phenomenon associated with everyday stressful workplace situations in which health professionals perceived that their actions or inactions contradicted their conscience. In addition, ethical studies in health care linked failure to heed the voice of conscience with negative workplace outcomes such as health professionals distancing themselves from persons in need of care, experiencing burnout and ill-health, and staff attrition [7][8][9][10][11]. Based on these findings, Glasberg et al. [3] developed the SCQ and hypothesised that stress of conscience could be used as an early predictor of such negative workplace outcomes. This hypothesis received empirical support in recent Scandinavian studies which explored stress of conscience in a clinical setting using the SCQ [12][13][14][15]. Therein, high levels of stress of conscience positively correlated with ratings of burnout and job strain, while negatively correlating with job satisfaction [12,13,15]. Consequently, the SCQ could be a useful tool for detecting and understanding when health professionals feel stressed and are potentially on a detrimental path towards burnout and attrition. As such, the SCQ is worthy of further scrutiny beyond the Scandinavian context where the questionnaire has been validated.
The Stress of Conscience Questionnaire (SCQ) is a nine-item instrument for assessing stressful situations and the degree to which they cause a troubled conscience for health professionals [3]. The questionnaire first asks respondents to rate the frequency with which they perceive nine commonly occurring stressful situations to be present in their clinical setting, on a scale of 0 'Never' to 5 'Every day' [3,4]. Secondly, the questionnaire asks respondents to rate the extent to which each of these situations is perceived as leading to a troubled conscience, on a scale from 'not at all' to 'a very troubled conscience' [3,4]. The initial validation of the SCQ identified two latent factors: 'internal demands' (Factor I) of the workplace, and 'external demands and restrictions' (Factor II) from sociocultural and religious beliefs [3]. Although two factors were identified, most studies have elected to present and interpret the results of the SCQ as a total sum score of all items, ranging from 0 to 225, without the subscales [6,12,13]. Confidence in the utility of the subscales is yet to be established.
In terms of reliability, the SCQ was found to be reliable (Cronbach's α = 0.83) in a Swedish sample of hospital staff, as well as in samples of staff from municipal and community health care centres [2,3,16]. In terms of dimensionality, although the initial SCQ validation indicated two latent factors (Factor I = Internal Demands and Factor II = External Demands and Restrictions), there were high cross-loadings for Items 1, 3 and 8 on both factors [3]. The final factor structure included Item 1 'How often do you lack the time to provide the care that the patient needs?' in both published latent factor structures [3]. The re-validation by Ahlin et al. [4] also retained two latent factors, albeit different from the original theoretical interpretation, and a new interpretation was not provided. Instead, Ahlin et al. [4] suggested that the SCQ could be regarded as unidimensional after exclusion of Item 6 'Is your private life ever so demanding that you don't have the energy to devote yourself to your work as you would like?'. Furthermore, a study with a Finnish sample also retained two latent factors, which were inconsistent with the initial validation, and the factor outcomes were not theoretically interpreted [17]. It is plausible to conclude, from the studies above, that the dimensionality of the questionnaire is yet to be settled, which warrants further exploration of the questionnaire in other contexts. Indeed, Glasberg et al. [3] and Ahlin et al. [4] also recommended exploration of the SCQ in other clinical settings, professions and cultural contexts.
To conclude, it seems pertinent to further explore the psychometric properties and dimensionality of the SCQ within an Australian context to provide further scrutiny beyond Scandinavian contexts. Findings could provide data and confidence (if upheld) to collect and compare results from the SCQ scale internationally.
Aim of the study
This study aimed to explore the psychometric properties and dimensionality of the SCQ in a sample of health professionals from a tertiary-level hospital in Melbourne, Australia.
Study sample
The study was conducted in a sample of 253 nurses, medical doctors and allied health professionals across emergency, medical and surgical wards and a geriatric ward in a 560-bed Australian tertiary-level hospital. Administrative staff and other auxiliary staff members were excluded. A total of 500 questionnaires were distributed and 253 were returned (51%).
Stress of conscience questionnaire (SCQ)
The English version of the SCQ presented by Ahlin et al. [4] was used in this study. The questionnaire achieved a Cronbach's alpha of 0.83 for the nine-item total in a Swedish validation context [4]. The SCQ is composed of nine two-part items (Part A and Part B) measuring commonly occurring stressful situations present in the respondent's clinical setting and the extent to which these situations are perceived as leading to a troubled conscience [22]. Part A assesses the frequency of such situations on a six-point Likert scale ranging from 0 (never) to 5 (every day). Part B assesses the extent to which these situations are perceived as leading to a troubled conscience, on a visual analogue scale that runs from 0 (no, it does not trouble my conscience at all) to 5 (yes, it troubles my conscience greatly). The SCQ individual item score (index score) is obtained by multiplying the Part A and Part B ratings, giving a range of 0 to 25 points per item.
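As an illustration of the scoring rule described above, the snippet below computes per-item index scores and the total SCQ score for one respondent. The ratings are hypothetical.

```python
import numpy as np

# Hypothetical ratings for one respondent across the nine SCQ items:
# Part A = frequency (0 'never' to 5 'every day'),
# Part B = troubled conscience (0 'not at all' to 5 'greatly').
part_a = np.array([4, 2, 3, 1, 0, 2, 5, 1, 3])
part_b = np.array([3, 1, 4, 2, 0, 1, 3, 2, 2])

index_scores = part_a * part_b        # per-item index score, range 0-25
total_score = index_scores.sum()      # total SCQ score, range 0-225
print(index_scores, total_score)      # -> [12 2 12 2 0 2 15 2 6] 53
```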
Study procedure
The questionnaires, which included demographic data such as age, sex and experience, were delivered to the wards in a box that was stored in the nurse unit manager's office, together with a sealed return box. The questionnaires were handed out to the staff during ward handover. A participant information letter, which outlined the purpose of the study and guaranteed anonymity, accompanied each questionnaire. Participants were informed in the letter that consent was implied if they voluntarily completed and returned the questionnaire. All data were collected in October 2015 from voluntary participants.
Statistical analysis
Questionnaire variability was analysed in terms of floor and ceiling effects, with a cut-off of > 15% of responses on the minimum or maximum score for each item [18]. Internal consistency reliability was evaluated by Cronbach's alpha (> 0.7), item-total correlations (> 0.3) and inter-item correlations (0.2-0.4) [19,20]. Exploratory factor analysis (EFA) was used to investigate instrument dimensionality. The Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) and Bartlett's test of sphericity were analysed first to determine the suitability of the data for factor analysis; the respective cut-offs were > 0.6 and < 1.0, and statistical significance (p < 0.001) [21][22][23]. Confirmatory factor analysis (CFA) was used to examine the adequacy of the resulting factor model. To evaluate model fit, this study used a range of absolute and incremental model fit indices, including the ratio of chi-square to degrees of freedom (χ²/df), comparative fit index (CFI), adjusted goodness of fit index (AGFI), root mean square error of approximation (RMSEA), PCLOSE and the Akaike Information Criterion (AIC) [24][25][26]. The factor structure in this study was also compared to the two-factor model proposed by Glasberg et al. [3]. The Statistical Package for the Social Sciences (SPSS) and AMOS, Version 24.0, were used for statistical analysis of the data (SPSS, Chicago, IL, USA).
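For readers who want to reproduce the core reliability statistics outside SPSS, the following is a minimal Python sketch of Cronbach's alpha and corrected item-total correlations. The random matrix stands in for the study's 253 × 9 table of index scores and is purely a placeholder.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items
    (the > 0.3 cut-off used in this study)."""
    totals = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Placeholder 253 x 9 matrix of index scores (0-25); replace with real data.
rng = np.random.default_rng(0)
scores = rng.integers(0, 26, size=(253, 9)).astype(float)
print(cronbach_alpha(scores), corrected_item_total(scores).round(2))
```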
Ethical considerations
The study adhered to the principles of the Helsinki Declaration and the National Health and Medical Research Council's statement for the ethical conduct in human research. The study was approved by the Human Research Ethics Committee (LNR15, 299) to use implied informed consent, meaning that consent was obtained when participants returned a completed study questionnaire after reading the information letter outlining the process. The reasoning behind this was to protect participant anonymity, privacy and autonomy as far as possible: study questionnaires were distributed at ward level, ensuring that the decision to participate was made actively, individually and independently by those staff who completed and returned them. Informed consent was thus implied in their active, autonomous and anonymous decision to participate.
Sample characteristics
The sample consisted predominantly of registered nurses (n = 205, 81%) and of women (n = 217, 85.8%), as indicated in Table 1. The mean age was 32.9 years (SD = 10.0) and the average length of time working on the ward was 9.2 years (SD = 9.0). Employment status was divided almost equally between full-time (51.1%) and part-time/casual workers (48.9%). The specialty areas from which the sample was drawn are indicated in Table 1; item-level statistics are presented in Table 2.
Reliability
All individual items met the cut-off criterion for item-total correlations above 0.3, as shown in Table 2. Further evidence of satisfactory internal consistency reliability was a total Cronbach's alpha of 0.84, which was not increased by deleting any of the items. The results were also consistent whether the Part A and Part B questions were assessed for reliability separately or as combined index scores (Part A multiplied by Part B).
Dimensionality
Exploratory factor analysis (EFA) was conducted on the index scores (Part A multiplied by Part B) as proposed by Glasberg et al. [3]. All SCQ items had correlations within the recommended range of 0.30 to 0.70 with at least one other item. The Kaiser-Meyer-Olkin (KMO) measure was 0.84 and Bartlett's test of sphericity was significant (p < 0.001). The Kaiser criterion of an eigenvalue > 1, the Cattell scree test and parallel analysis all yielded a single latent factor, which explained 44% of the total variance. All items met the criterion of communalities exceeding 0.3 in the principal component analysis (PCA). The unrotated factor matrix loadings were greater than 0.55 (Table 2). When maximum likelihood extraction and principal axis factoring were performed, a single factor structure was also retained with adequate factor loadings.
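Parallel analysis, one of the retention criteria used here, can be sketched as follows: observed eigenvalues of the item correlation matrix are compared against eigenvalues from random data of the same shape (Horn's method). This is an illustrative implementation, not the procedure run by the authors' software.

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 1000, seed: int = 0):
    """Horn's parallel analysis: retain leading factors whose observed
    eigenvalues exceed the 95th percentile of eigenvalues from random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        rand = rng.standard_normal((n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.percentile(sims, 95, axis=0)
    n_retain = 0
    for o, t in zip(obs, threshold):   # count sequentially from the top
        if o > t:
            n_retain += 1
        else:
            break
    return n_retain, obs, threshold

# Usage with a placeholder 253 x 9 matrix of index scores:
# n_factors, observed, cutoff = parallel_analysis(index_scores)
```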
The single latent factor was compared with the two latent factors proposed by Glasberg et al. [3] (Table 3). Factor loadings for the one-factor model ranged from 0.40 to 0.77, as shown in Fig. 1. In contrast, for the two-factor model some fit indices were not adequate (CMIN/DF = 3.521; P-value = 0.000; CFI = 0.910; AGFI = 0.871; RMSEA = 0.100 and PCLOSE = 0.000). Factor loadings for the two-factor model ranged from 0.42 to 0.75, and the two latent factors were closely correlated, which suggests a lack of distinct factors, as displayed in Fig. 2. When the two-factor model was compared to the one-factor model, the latter received the lowest AIC score (AIC = 123.768) and its chi-square test was not significant (P-value = 0.146), which also supported the one-factor model as better fitting the data.
Discussion
Exploration of the psychometric properties and dimensionality of the Stress of Conscience Questionnaire (SCQ), based on a sample of health professionals working in a tertiary-level Australian hospital, indicated satisfactory reliability and variability estimates. The scale was also found to be unidimensional, as a single latent factor was confirmed. This suggests that SCQ results can be aggregated, interpreted and communicated as one summative score across all nine individual items, without the use of any subscales.
Although most SCQ items showed adequate variability, Items 1A and 3A had a ceiling effect and a few other items showed a floor effect, indicating that more than 15% of responses aggregated at the top or bottom scoring alternative. The limited variability among these items could be problematic if the SCQ is used to assess change over time or in pre- and post-intervention studies [24], and it remains unknown whether this is due to the data characteristics of this study or to shortcomings in the questionnaire. Further studies would be valuable to explore the variability of these items. The higher mean scores obtained for Item 1 'How often do you lack the time to provide the care that the patient needs?', Item 3 'Do you ever have to deal with incompatible demands?' and Item 7 'Does your work affect your private life?' were consistent with previous studies [12,13]. Although Ahlin et al. [4] suggested removing Item 6 'Is your private life ever so demanding that you don't have the energy to devote yourself to your work as you would like?', this study demonstrated that the item should be retained due to its adequate correlation with other items and a factor loading of 0.58.

The Cronbach's alpha of 0.84 and item-total correlations (ranging between 0.30 and 0.70) indicated that all items reliably measured a single underlying construct with acceptable homogeneity [24]. Reliability scores were stable both when Part A and Part B questions were treated separately and as index scores (Part A multiplied by Part B), as indicated in Table 2. The initial validation by Glasberg et al. [3] and subsequent revalidation by Ahlin et al. [4] also showed adequate internal consistency, indicating stability of the SCQ across different samples and settings.

This is the first study to explore and confirm unidimensionality of the SCQ in an English-speaking context, which adds further evidence and confidence for use of the Stress of Conscience Questionnaire. The EFA yielded a single latent factor, which explained 44% of the total variance. The factor loading of each item was greater than 0.55 on the first extracted factor, meaning that all items were indicators of the latent factor, stress of conscience. However, the studies by Glasberg et al. [3], Ahlin et al. [4] and Saarnio et al. [17] retained two latent factors. Although these studies produced two stable latent factors, there were high cross-loadings on both latent factors. The initial validation by Glasberg et al. [3] included Item 1 in both latent factor solutions. In addition, the rotated two-factor solution by Ahlin et al. [4] was inconsistent with the theoretical interpretations proposed by Glasberg et al. [3]. According to Ahlin et al. [4], all items except for Item 6 had higher loadings on the first factor (all > 0.48) compared with the second factor in the unrotated solution, which indicated a unidimensional structure. Ahlin et al. [4] concluded that this outcome could be a result of the index scores, which equalize stressors and troubled conscience (Part A multiplied by Part B). The factor structure in this study meets the criteria for unidimensionality, confirming that calculation of the arithmetic mean from the summation of all index scores provides meaningful data for the interpretation, comparison and communication of results.
This was reinforced by the CFA, which indicated that the one-factor structure had the better model fit (CMIN/DF = 1.340; P-value = 0.146; CFI = 0.990; AGFI = 0.948; RMSEA = 0.037). Although some model fit indices were also acceptable for the two-factor model, its latent factors were highly correlated, suggesting that they were not distinct. The one-factor model also received the lowest AIC score (AIC = 123.768) and its chi-square test was not significant (P-value = 0.146), indicating that this was the most parsimonious model for the data analysed [25,26]. Therefore, the results of this study indicate that the SCQ is best conceptualized as a unidimensional measure with a single latent factor.
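The AIC-based comparison follows the usual rule that the lower AIC indicates the more parsimonious model. A minimal sketch, with hypothetical log-likelihoods and parameter counts (the paper reports only the resulting AIC value for the one-factor model):

```python
def aic(log_likelihood: float, n_free_params: int) -> float:
    """Akaike Information Criterion: 2k - 2*ln(L); lower values are preferred."""
    return 2 * n_free_params - 2 * log_likelihood

# Hypothetical inputs for illustration only; the study reports AIC = 123.768
# for its one-factor model.
aic_one = aic(log_likelihood=-42.9, n_free_params=19)
aic_two = aic(log_likelihood=-42.5, n_free_params=20)
print("one-factor preferred:", aic_one < aic_two)
```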
There were some limitations of this study of importance to consider. The self-reported data may be liable to social desirability bias, and thus need cautious interpretation; however, the data collection process was anonymous to encourage participants to be truthful. The cross-sectional design and single-site location of the data also imply cautious interpretation of the findings, and further data from other contexts and countries are needed. Results from reliability and dimensionality testing likewise need to be interpreted with caution, as the criteria for assessing goodness of fit are relative rather than absolute [25]. The sample consisted mainly of female nurses, which may limit generalizability to other genders and professions. Finally, the stability of the single latent factor needs further assessment, as the factor retained by parallel analysis explained only 44% of the total variance.
Conclusion
The Stress of Conscience Questionnaire achieved satisfactory reliability and variability for assessing frequently encountered stressful situations and the degree to which individual health professionals experience a troubled conscience in their workplace. The factor structure in this study met the criteria for unidimensionality, suggesting that a simple sum score of items is a feasible and reliable way to quantify and explore this phenomenon across countries and different contexts. Highlighting and discussing ethical challenges is at the core of healthcare, and the SCQ can be a helpful tool for clinical managers in this process.
Coexistent Membranous Nephropathy with Doubly ANCA-Associated Crescentic Glomerulonephritis: A Case Report and Review of Literature
Introduction: Membranous nephropathy (MN) is the most common cause of the nephrotic syndrome in nondiabetic, Caucasian adults. Pauci-immune necrotizing and crescentic glomerulonephritis (PNCGN) typically presents with rapidly progressive glomerulonephritis. Coexistent MN and PNCGN is a rare occurrence. We report a case of both MPO- and PR3-ANCA-associated NCGN with MN that presented as rapidly progressive glomerulonephritis. Case presentation: A 46-year-old female presented with nausea and vomiting. On physical examination, the patient was afebrile and normotensive. Blood tests showed acute kidney injury and anemia. Urinalysis demonstrated numerous dysmorphic red blood cells with granular casts and nephrotic-range proteinuria. Further testing showed negative ANA, positive anti-dsDNA, PR3-ANCA and MPO-ANCA. Kidney biopsy revealed concurrent PNCGN with membranous nephropathy, and the diagnosis of concurrent ANCA-associated NCGN with membranous nephropathy was made. High-dose intravenous methylprednisolone was initiated. Unfortunately, the patient developed diffuse alveolar hemorrhage and underwent 6 cycles of plasmapheresis, intravenous cyclophosphamide and pulse-dose steroids, with transition to oral prednisone and mycophenolate. On follow-up, her disease appeared well suppressed without dialysis. Conclusion: Membranous nephropathy with PNCGN is a rare concurrent glomerulopathy, and even more rare with both MPO and PR3 positivity. The diagnosis of MN with PNCGN should be considered in patients who present with RPGN and nephrotic-range proteinuria.
Introduction
Membranous nephropathy (MN) is the most common cause of primary nephrotic syndrome in nondiabetic, Caucasian adults, accounting for more than one third of cases [1]. Most patients with MN have preserved renal function at the time of presentation. Renal failure usually develops gradually in patients with MN and is only rarely complicated by acute kidney injury. Pauci-immune necrotizing and crescentic glomerulonephritis (PNCGN) typically presents with rapidly progressive glomerulonephritis (RPGN). Coexistent MN and PNCGN is a rare occurrence. We report a case of both MPO- and PR3-ANCA-associated NCGN with membranous nephropathy that presented as rapidly progressive glomerulonephritis.
Case Presentation
A 46-year-old Caucasian woman presented to the emergency department with nausea, vomiting and weight loss. Her medical history was remarkable for gastroesophageal reflux disease (GERD), fibromyalgia and depression. She had noticed poor appetite and a 40-pound weight loss over 2 months. The patient denied any recent history of upper respiratory tract or skin infection. She was taking 4,800 mg of ibuprofen per day and no other medications. On physical examination, the patient was afebrile and normotensive with a blood pressure of 120/80 mmHg, and her urine output over 24 h was approximately 2 L of dark red urine. There were multiple tiny ischemic lesions on the distal parts of the fingers, more pronounced on both thumbs. No active synovitis was appreciated.
Laboratory testing showed normocytic anemia (HGB 8.6 g/dL, HCT 25.1%, MCV 89.7 fL, and MCH 28.5 pg), an elevated serum creatinine of 4.1 mg/dL (baseline 0.8 mg/dL), a serum albumin level of 3.0 g/dL, total cholesterol 95 mg/dL, and triglycerides 147 mg/dL. Microscopic urinalysis demonstrated numerous dysmorphic red blood cells and granular casts. She had nephrotic-range proteinuria with a urine protein to creatinine ratio of 3.68, and UPEP was notable only for albuminuria. Renal ultrasound disclosed no hydronephrosis, with normal-sized kidneys (10.7 cm on the right and 10.3 cm on the left). The possibility of rapidly progressive glomerulonephritis from systemic vasculitis was raised by the nephrology evaluation. Interestingly, further testing showed a negative ANA, highly positive anti-dsDNA of >45.0 IU/ml, positive PR3- and MPO-ANCA (both >8.0; Mayo Clinic laboratory), and low levels of C3 and C4 (60 and 11.5 mg/dL, respectively). Other laboratory tests including HBsAg, anti-HCV, anti-HIV, rheumatoid factor, anti-CCP antibody and anti-GBM were negative. Serum protein electrophoresis showed polyclonal gammopathy and low albumin. During the course of admission the patient developed palpable purpura. Dermatology consultation was requested, and the pathological findings of the skin biopsy were consistent with leukocytoclastic vasculitis. PNCGN was the most likely diagnosis at that point. However, since the anti-dsDNA was highly positive and the C3 and C4 levels were low, systemic lupus erythematosus (SLE) was also in the differential diagnosis.
A CT-guided renal biopsy was then performed (Figure 1); sampling for light microscopy included 29 glomeruli, four of which were globally sclerotic. The results showed focal segmental necrotizing and crescentic glomerulonephritis. Immunofluorescence revealed granular global capillary wall positivity of 1-2+ intensity for IgG, C3, kappa, and lambda. Electron microscopy revealed minute, stage 1, predominantly segmental subepithelial deposits as well as infrequent segmental mesangial deposits. These findings were consistent with focal segmental necrotizing and crescentic glomerulonephritis, pauci-immune type, with membranous glomerulopathy, segmental, stage 1.
The diagnosis of concurrent PNCGN (both PR3- and MPO-ANCA) with membranous nephropathy was made. High-dose intravenous methylprednisolone was initiated for 3 days, and cyclophosphamide was ordered after the biopsy result was reviewed. Unfortunately, the patient developed diffuse alveolar hemorrhage. The patient was intubated and then transferred to an outside hospital in order to receive plasmapheresis. She received hemodialysis and underwent 6 cycles of plasmapheresis, intravenous cyclophosphamide (CY) and pulse-dose steroids. She was transitioned to oral prednisone and mycophenolate mofetil. After hospital discharge, the patient was feeling well and asymptomatic. She denied leg swelling, shortness of breath, hemoptysis, cough or sputum production. On recent follow-up, 6 months after diagnosis, she continues to do well. Serum creatinine decreased to 1.9 mg/dL while proteinuria decreased to 2.1 g/24 h. Both PR3- and MPO-ANCA levels decreased, to 4.4 and 4.6, respectively (Mayo Clinic laboratory).
Discussion
Membranous nephropathy is characterized histologically by the formation of subepithelial immune complex deposits with resultant changes to the glomerular basement membrane (GBM), most notably GBM spike formation [2]. Fibrinoid necrosis and crescent formation are rarely seen in membranous nephropathy, except in those cases associated with systemic lupus erythematosus, corresponding to ISN/RPS lupus nephritis class III and V or IV and V [3,4], hepatitis B or C virus infection, and treatment with penicillamine, hydralazine and propylthiouracil [5][6][7][8]. In general, in the absence of evidence of SLE, findings of MN with necrosis and crescent formation should raise the possibility of two potential superimposed disease processes: anti-GBM disease and ANCA-associated NCGN [9]. In our case vignette, SLE was also in the differential; ten percent of SLE patients may have a negative ANA with positive anti-dsDNA. The dominant process on biopsy, as well as the clinical findings including pulmonary hemorrhage, support PNCGN as the prominent process. We cannot be sure she does not also have lupus (ISN/RPS 2004 class V).
Unlike patients with MN alone, whose disease often has an insidious course progressing to renal failure over many years, patients with superimposed crescentic GN generally have a more aggressive clinical course and may present with, or progress rapidly to, renal failure [10,11]. These patients may present with a rapidly progressive glomerulonephritis or develop a nephritic picture after initially presenting with a nephrotic syndrome.
Coexistent MN and PNCGN is a rare occurrence, with only 25 reported cases in the English literature in which clinical and pathologic findings are detailed [9,[12][13][14][15][16][17][18]. Thirteen patients had P-ANCA by indirect immunofluorescence (IIF) staining, seven of whom were tested with ELISA and found to have MPO-ANCA. Eight patients had C-ANCA by IIF. Two patients were tested with ELISA only and were found to have MPO-ANCA. Of the remaining two patients, one had both MPO- and PR3-ANCA and the other had an atypical ANCA. Our case presentation is the second reported case of both MPO- and PR3-ANCA-associated NCGN with membranous nephropathy.
Most patients with MN and PNCGN were diagnosed simultaneously at presentation, as in our case vignette. This contrasts with concurrent MN and anti-GBM disease, in which MN preceded the development of anti-GBM nephritis in close to 50% of reported cases [19,20]. The reason for concurrent MN and PNCGN is unclear. A report of all biopsies received between 2000 and 2008 at a high-volume renal pathology unit found 14 cases in which both MN and PNCGN were detected. Based on the expected incidences for each disease entity in this population, the authors concluded that the co-existence of MN and PNCGN was coincidental [9]. However, Hanamura et al. [21] recently reported that myeloperoxidase may form immune complexes and produce membranous nephropathy-like lesions in some cases of PNCGN. In our case, the biopsy was also notable for moderate to severe interstitial inflammation and extensive tubulitis, which likely relate to the ANCA seropositivity; taken together with the immunofluorescence and electron microscopy findings described above, these appear sufficient to establish coexistent MN and ANCA-associated NCGN.
Patients with both MN and PNCGN are likely to have heavier proteinuria than patients with PNCGN alone: all of the 25 reported cases had proteinuria [9,[12][13][14][15][16][17][18]. In a recent report, the mean 24-h urine protein for these patients was 6.5 g, compared with 1.7 to 2.5 g in patients with ANCA disease alone [22]. Hematuria was documented in all but one patient, who was oliguric. In our case presentation, the patient also had hematuria and nephrotic-range proteinuria, with a urine protein to creatinine ratio of 3.68.
Although the treatment of MN is often supportive, renal vasculitis characteristically responds well to treatment with cyclophosphamide (CY) and prednisone [11]. Among the 25 reported cases of MN and PNCGN, induction therapy consisted of prednisone and CY in 20 patients, prednisone and azathioprine in one patient, and prednisone alone in one patient. One patient received mycophenolate mofetil, and two patients also received plasmapheresis [9,[12][13][14][15][16][17][18].
Patients with MN and PNCGN appear to have worse outcomes than patients with MN alone, with 50% reaching the endpoints of death or ESRD [9]. Most of the reported cases needed dialysis, and in one case the lesion of membranous nephropathy with crescents recurred after renal transplantation.
In conclusion, although the association of PNCGN and MN is rare, it carries a more aggressive clinical course than MN alone. However, it is unclear whether this dual glomerulopathy does worse than PNCGN alone. The diagnosis of MN with PNCGN should be considered in patients who present with RPGN and nephrotic-range proteinuria.
Consent
The patient provided written consent for the publication of this case report and any accompanying images. A copy of the written consent is available for review by the journal's Editor-in-Chief.
H-Ras and phosphoinositide 3-kinase cooperate to induce alpha(1,3)-fucosyltransferase VII expression in Jurkat T cells.
The alpha(1,3)-fucosyltransferase FucT-VII is essential for the biosynthesis of selectin ligands, but the signaling pathways mediating FucT-VII induction in T cells and other lymphocytes are poorly understood. We have shown previously that sustained activation of Ras in Jurkat T cells induces FucT-VII transcription, which requires the Raf-MEK-ERK pathway. In this study we report that FucT-VII induction is specific to the H-Ras isoform. Jurkat T cells retrovirally transduced with constitutively active H-Ras but not N- or K-Ras up-regulated expression of FucT-VII. Pharmacological inhibition studies also revealed that phosphoinositide 3-kinase (PI3K) activity is required for H-Ras-mediated FucT-VII induction. However, the ability of H-Ras to selectively induce FucT-VII is not a function of the inability of the N- or K-Ras isoforms to activate Raf or PI3K pathways. The use of effector-loop domain mutants of H-Ras, which are impaired for their ability to interact selectively with individual effectors alone or in combination with active Raf, indicated that induction of FucT-VII requires the concomitant activation of at least three signaling pathways. These studies show that H-Ras mediates FucT-VII induction in Jurkat T cells via the activation of the Raf, PI3K, and a distinct, H-Ras-specific effector signaling pathway.
T cells, once activated after antigen encounter, migrate to extravascular compartments via a cascade-like process involving capture, rolling on the endothelial surface, arrest, and eventually transmigration (1). Initial recognition of the vessel wall is mediated by selectins, a family of carbohydrate-binding adhesion molecules that interact with specific glycoconjugates on the cell surface (2)(3)(4). The selectins (E-, P-, and L-selectin) are Type I transmembrane proteins containing a carbohydrate recognition domain (2). E- and P-selectins are induced on the endothelium during inflammation (5), whereas L-selectin is constitutively expressed by most types of circulating leukocytes and recognizes ligands on endothelial cells (2,3,6). Although the structure of the carbohydrate ligands of selectins is still not completely understood, considerable progress has been made in identifying the glycosyltransferases responsible for their biosynthesis (3).
One such enzyme, the α(1,3)-fucosyltransferase FucT-VII, is essential for the biosynthesis of all selectin ligands in all cells in which it has been examined, including monocytes, neutrophils, and other myeloid cells, as well as for the biosynthesis of E- and P-selectin ligands on activated T cells (7,8). Neutrophils from mice deficient in FucT-VII exhibit sharply reduced E- and P-selectin ligands, whereas activated T cells from these mice do not express detectable levels of selectin ligands (9,10). Although myeloid cells constitutively express L-selectin and functional ligands for E- and P-selectins, expression of selectin ligands in T cells is inducible and highly regulated (3). Naïve CD4+ T cells do not display selectin ligands due to the lack of expression of the α(1,3)-fucosyltransferase FucT-VII (11,12). Induction of FucT-VII in CD4+ T cells requires T cell activation, and expression levels are high in Th1 cells and substantially lower in Th2 cells generated in vitro (11)(12)(13)(14). Moreover, studies from our laboratory reveal that engagement of the T cell receptor (TCR) leads to the induction of FucT-VII, which is further enhanced by interleukin 12 and inhibited by interleukin 4 (12,15). However, TCR engagement results in the activation of a number of signaling molecules (16), and it is not well understood which are relevant for FucT-VII induction.
We have recently shown that enforced expression of constitutively active Ras in Jurkat T cells leads to the expression of FucT-VII, implicating Ras as a FucT-VII regulator (17). Ras proteins are small guanine nucleotide-binding proteins that function as molecular switches in signal transduction cascades, regulating cell proliferation, survival, and differentiation (18,19). Mammalian cells express four isoforms of Ras: H-Ras, N-Ras, K-Ras4A, and K-Ras4B, with K-Ras4A and -4B resulting from alternative splicing of the fourth exon of the K-Ras gene (18). K-Ras4B accounts for more than 90% of the total K-Ras and will be referred to as K-Ras. These Ras proteins are 85% homologous (18) and are identical up to the last 24 carboxyl-terminal amino acids (20). Despite their great structural and biochemical similarity, mounting evidence suggests that the Ras isoforms are not merely redundant. Ras isoforms exhibit cell-specific differences in their intrinsic transforming potential (21), their ability to be activated by guanine nucleotide exchange factors or deactivated by GTPase-activating factors (22)(23)(24), their ability to activate certain specific signal transduction pathways, such as the NF-κB pathway (25), and their ability to determine certain TCR signaling outcomes (26). Further evidence from knock-out mice attests to the distinct functions of the Ras isoforms; N-Ras- and H-Ras-deficient mice develop normally (27,28), and even double knock-outs are normal (29), but K-Ras-deficient mice die during embryonic development (30).
Gaining insight into the mechanism of Ras-induced FucT-VII expression is complicated, given the plethora and diverse functions of Ras effectors. Ras can stimulate the Raf serine/threonine kinases, subsequently activating the ERK mitogen-activated protein kinases (18), which have been shown to be essential for FucT-VII induction (17). Apart from Raf, other known effectors include the phosphoinositide 3-kinase (PI3K), protein kinase Cε, RIN1, AF6, MEKK1, and Nore1 (18,31). Ras also activates a family of guanine nucleotide exchange factors for the Ral small GTPases, the Ral GDP dissociation stimulators (RalGDS) (32)(33)(34), which, along with PI3K and Raf, are the best-defined Ras effectors (18).
In this study we show that induction of FucT-VII in Jurkat T cells exhibits isoform specificity, with only H-Ras, but not N- or K-Ras, able to induce FucT-VII expression. We demonstrate that PI3K, an important Ras downstream effector, is required for H-Ras-induced FucT-VII expression and that H-Ras mediates FucT-VII induction through the concomitant activation of at least three signaling pathways.
EXPERIMENTAL PROCEDURES
Cell Culture and Pharmacological Inhibitors-Jurkat cells were grown in RPMI 1640 with 10% fetal calf serum plus antibiotics. The PI3K inhibitor Ly294002 was added at the time of retroviral infection and was replenished every 24 h. Ly294002 was dissolved in dimethyl sulfoxide (Me2SO) and used at final concentrations ranging from 10 μM to 50 μM, as indicated. No effect on viability was observed at these concentrations, although the efficiency of retroviral infection, as assessed by GFP levels, progressively decreased.
Retroviral Infections-Production of recombinant retrovirus pseudotyped with the vesicular stomatitis virus glycoprotein G was performed as described previously (17). Briefly, cDNA encoding the different proteins was subcloned into murine stem cell virus (MSCV)-IRES-GFP (35) upstream of the internal ribosome entry site (IRES) and enhanced green fluorescent protein (GFP). This plasmid was cotransfected with the plasmid encoding vesicular stomatitis virus glycoprotein G under the control of the cytomegalovirus late promoter into GP293 cells (Clontech) using the combination of LipofectAMINE and Plus Reagent (Invitrogen). Supernatants containing recombinant retrovirus were recovered 48 h later and were either used immediately for retroviral infections or frozen at −80°C for later use. In some experiments, cells were coinfected with two retroviruses, the MSCV-IRES-GFP as above and a second virus containing a distinct cDNA and in which the GFP sequence was replaced by the cDNA encoding murine H-2Kk, a major histocompatibility class I antigen (36). Jurkat cells were spin-infected in the presence of 10 μg/ml Polybrene (Sigma) with 1 ml of retrovirus-containing supernatant, and FACS analysis was performed after 2 to 3 days of growth following the retroviral infection. The viral supernatants were titered to achieve the same infection efficiency within each experiment.
Flow Cytometry Analysis-We and others have previously shown that the HECA-452 monoclonal antibody recognizes FucT-VII reporter epitope(s) on Jurkat and other cells and that HECA-452 staining, mRNA levels, and enzyme activity correspond closely and quantitatively over a wide range (8,12). Retrovirally infected Jurkat cells were stained with previously determined optimal amounts of HECA-452 in FACS buffer (phosphate-buffered saline, 1% fetal calf serum, and 0.1% azide), washed with FACS buffer, and stained with cyanine 5 (Cy5)-conjugated goat anti-rat IgM (Jackson ImmunoResearch Laboratories, West Grove, PA) for 2- or 3-color experiments. Anti-H-2Kk-phycoerythrin (anti-H-2Kk-PE) was obtained from Pharmingen (BD Biosciences). Isotype-matched controls were included in all experiments and used to set gates. Data were acquired using the FACSCalibur, FACSort, or LSR II cytometers (BD Biosciences) and analyzed using the CellQuest Pro or FACSDiva software.
cDNA Constructs and Site-directed Mutagenesis-The constitutively active form of H-Ras (v-H-Ras) is the double point mutant G12R/A59T, which has been described previously (17). The H-Ras activating mutants G12V and G12R and the double point mutant G12V/A59T were generated using the QuikChange site-directed mutagenesis kit (Stratagene, La Jolla, CA). The H-Ras effector mutants T35S, E37G, and Y40C were also generated by site-directed mutagenesis using G12R/A59T H-Ras as the template. The activated Ras isoforms N-Ras and K-Ras4B (referred to as K-Ras) bear the G12V activating mutation and were a gift of Brian Van Ness, University of Minnesota. The Rac1 G12V and Rac2 Q61L constitutively active mutants were gifts of Gary Bokoch, Scripps Research Institute, La Jolla, CA. The RafBxB cDNA has been described previously (17). All constructs were verified by bidirectional sequencing and cloned into the retroviral vectors as described earlier.
H-Ras Specifically Induces FucT-VII-We have previously shown that retroviral transduction of the Jurkat T cell line with a constitutively active form of H-Ras leads to FucT-VII induction (17). To determine whether there is isoform specificity in Ras-induced expression of FucT-VII, we infected Jurkat T cells with retroviral constructs expressing the cDNA of activated forms of H-, N-, or K-Ras (K-Ras4B). Because the retroviral vector produces a bicistronic message encoding both GFP and the cloned cDNA, the amount of GFP fluorescence serves as a quantitative marker of Ras protein levels on a per-cell basis. The expression of FucT-VII in Jurkat T cells is quantitatively associated with cell surface expression of epitopes defined by the monoclonal antibody HECA-452 (8,37), allowing the use of this antibody as a reporter for FucT-VII expression on a per-cell basis. We analyzed by flow cytometry the percentage of retrovirally infected Jurkat cells that stained brightly with the HECA-452 antibody (Fig. 1A). As expected, H-Ras induced FucT-VII in a significant portion of the cells expressing the highest levels of GFP. In contrast, cells infected with K-Ras expressed FucT-VII at comparatively very low levels, although higher than the empty retrovirus. Furthermore, N-Ras-transduced cells stained with HECA-452 at close to empty-vector levels (Fig. 1B). These data show that FucT-VII induction in Jurkat T cells is H-Ras-specific.
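The per-cell gating logic described above can be illustrated schematically: select the GFP-brightest events (highest Ras expression) and ask what fraction exceed an HECA-452 positivity gate. The intensities and gate percentiles below are hypothetical stand-ins for real cytometry data and the isotype-control-derived gates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-cell fluorescence intensities for 10,000 events;
# real data would come from the cytometer's FCS files.
gfp = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)    # transduction marker
heca = rng.lognormal(mean=1.0, sigma=1.2, size=10_000)   # FucT-VII reporter

gfp_high = gfp > np.percentile(gfp, 90)   # brightest 10% = highest Ras levels
heca_gate = np.percentile(heca, 95)       # stand-in for isotype-control gate

pct_bright = 100 * np.mean(heca[gfp_high] > heca_gate)
print(f"{pct_bright:.1f}% of GFP-high cells are HECA-452 bright")
```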
Ras Isoform Specificity in FucT-VII Induction Is Not Due to Inability to Activate ERK1/2-Previous studies from our laboratory show that H-Ras induces FucT-VII expression in part via activation of the Raf-MEK-ERK pathway (17). To determine whether the ability of H-Ras to specifically induce FucT-VII is due to differential activation of the Raf-MEK-ERK pathway, we retrovirally transduced Jurkat T cells with retrovirus containing no cDNA, H-Ras, N-Ras, or K-Ras cDNA. The cells were lysed 2 days after the infection, and the lysates were subjected to SDS-PAGE and Western analysis using antibodies against ERK1/2 proteins as well as the phosphorylated ERK1/2 forms (phospho-ERK1/2) (Fig. 1C). Each of the Ras isoforms strongly activated the ERK1/2 proteins, whereas the empty vector control exhibited almost no ERK1/2 activation. These data show that the H-Ras isoform specificity in FucT-VII induction is not due to the inability of the N-or K-Ras isoforms to activate ERK1/2 in these cells.
The Ability of H-Ras to Induce FucT-VII Requires Activation but Is Not a Function of the Activating Mutation-The activated form of H-Ras (v-Ha-Ras) contains two activating point mutations, Gly-12 to Arg and Ala-59 to Thr (G12R/A59T). Both mutations activate H-Ras and other Ras family proteins by blocking their intrinsic GTPase activity (38,39). However, the N- and K-Ras isoforms contain only the activating mutation Gly-12 to Val (G12V), which also blocks GTPase activity (39). We, therefore, investigated whether the ability of H-Ras to induce FucT-VII is due to the specific activating mutation.
We generated the H-Ras single-point activating mutants 12R (with Thr-59 reverted to Ala; T59A) and 12V (T59A), as well as the wild-type form of H-Ras (12G, 59A), via site-directed mutagenesis. Jurkat cells were retrovirally transduced with empty vector constructs or constructs expressing H-Ras G12R/A59T, H-Ras G12R, H-Ras G12V, or wild-type H-Ras. The ability of the different Ras mutants to induce FucT-VII expression was determined by staining with the monoclonal antibody HECA-452. All the activation mutants were able to induce FucT-VII at the same, if not higher, levels as H-Ras 12R/59T (Fig. 2, A and B). Hence, the Ras isoform specificity observed is not due to the different mutations that activate the protein. Importantly, wild-type H-Ras failed to induce FucT-VII, indicating that overexpression of the protein per se is not sufficient for FucT-VII expression but, rather, that H-Ras activation is required.
PI3K Activity Is Required for Induction of FucT-VII by H-Ras-Previous studies from our laboratory show that although the Ras effector Raf is required for H-Ras-induced FucT-VII expression in Jurkat cells, it is not sufficient (17). To investigate the possible involvement of PI3K, another prominent downstream effector of Ras, we determined whether inhibition of PI3K activity abolished H-Ras induction of FucT-VII. Jurkat cells were retrovirally transduced with H-Ras or empty vector and were grown in the presence or absence of Ly294002, a highly specific pharmacological inhibitor of PI3K (40). At a concentration of 50 μM Ly294002, induction of FucT-VII in H-Ras-transduced cells was abolished, with HECA-452 staining levels equivalent to empty vector (Fig. 3, A and B). As before, a significant portion of the Jurkat cells infected with H-Ras and grown in the absence of Ly294002 stained with HECA-452 at high levels (Fig. 3A). Furthermore, Ly294002 inhibited FucT-VII induction by H-Ras in a dose-dependent manner over the range 10 to 50 μM (Fig. 3C). Similar results were obtained when wortmannin, a chemically distinct PI3K inhibitor, was used (data not shown). PI3K activity is, therefore, required for H-Ras-induced FucT-VII expression in Jurkat cells.
The Ras Isoform Specificity Is Not a Function of Differential PI3K Activation-We have shown that although the Ras effector Raf is required for FucT-VII induction (17), the observed Ras isoform specificity is not due to the ability or inability of the Ras isoforms to activate ERK1/2 (Fig. 1C). Because PI3K activity is also involved in the regulation of FucT-VII expression, we investigated whether the observed Ras isoform specificity is a function of a differential ability of the Ras isoforms to activate PI3K. As a measure of PI3K activity, we used the phosphorylation level of Akt, a major downstream effector of PI3K. Jurkat cells retrovirally transduced with H-, N-, or K-Ras, or empty vector control, were lysed 2 days after infection. The lysates were subjected to SDS-PAGE and Western blot analysis using antibodies against the active form of Akt phosphorylated at Ser-473 (phospho-Akt) and against total Akt proteins (Fig. 3D). Cells expressing each of the Ras isoforms displayed high levels of activated Akt. However, the empty-virus control exhibited equivalently high levels of activated Akt, a phenomenon that can be attributed to the fact that Jurkat cells are deficient in the expression of the lipid phosphatase and tensin homologue (PTEN), which results in strong constitutive activation of the PI3K pathway, including Akt (41). Our results indicate that the ability of H-Ras, but not the other isoforms, to induce FucT-VII is not attributable to differences in PI3K activation by the different Ras isoforms. Also, given that the levels of phosphorylated Akt in cells infected with the empty virus are equivalent to those of cells transduced with each Ras isoform, Ras is not responsible for PI3K activation in this system.
Induction of FucT-VII by H-Ras Requires the Activation of at Least Three Signaling Pathways-Ras proteins activate a plethora of downstream effectors, of which the best characterized are Raf, PI3K, and the RalGDS (18,(31)(32)(33). In an attempt to identify Ras effectors that participate in the regulation of FucT-VII, we employed mutated H-Ras proteins that are impaired in their ability to interact with specific subsets of effectors. The H-Ras mutants were generated via site-directed mutagenesis and carry, in addition to the activating double point mutations G12R/A59T, different point mutations in the effector-loop domain (residues 32-40), which is essential for interactions with most, if not all, of the known effector proteins. Replacement of Thr-35 with Ser (T35S) enables the selective activation of Raf (42) but abolishes interaction with RalGDS and PI3K (43). The mutant H-RasE37G can no longer bind Raf nor activate PI3K but retains binding to RalGDS (43). Last, H-RasY40C (Tyr-40 replaced with Cys) selectively binds PI3K and fails to interact with RalGDS or Raf (43). Jurkat cells were retrovirally transduced with empty vector, H-Ras, or the H-Ras effector-loop domain point mutants and were stained with HECA-452 (Fig. 4A). Although H-Ras-infected cells stained at high levels with HECA-452, cells infected with each of the different effector mutants did not (Fig. 4, A and B). In particular, H-RasT35S, which activates the Raf signaling pathway that we have shown to be essential for FucT-VII induction (17), failed to induce FucT-VII, consistent with previous results showing that expression of active Raf is not sufficient to up-regulate FucT-VII expression. As mentioned above, due to the PTEN deficiency in Jurkat cells, the PI3K signaling pathway is constitutively activated. Thus, cells infected with H-RasT35S have two signaling pathways activated, Raf and PI3K, both of which are essential for FucT-VII induction. However, cells expressing H-RasT35S did not up-regulate FucT-VII, indicating that activation of at least three signaling pathways is required for FucT-VII expression.
We further investigated whether different combinations of H-Ras effector-loop mutants and Ras effectors could induce FucT-VII. To achieve this goal, two retroviral vectors were employed, one encoding a bicistronic message for the cloned cDNA and GFP, and the other encoding the catalytic domain of c-Raf-1, RafBxB (17), and the mouse major histocompatibility complex class I H-2Kk gene (36). Jurkat cells were coinfected with the retroviruses and stained with the anti-H-2Kk monoclonal antibody conjugated to phycoerythrin and with HECA-452. Expression of the GFP and H-2Kk genes served as markers of the cells infected with the two different retroviruses, as analyzed via 3-color flow cytometry. Cells expressing both GFP and the H-2Kk antigen were analyzed for staining with HECA-452 as before (Fig. 5, A and B). Jurkat cells were infected with various combinations of retroviral vectors (Fig. 5B and Table I). Even cells coinfected with H-RasE37G and Raf, which activate the RalGDS and Raf signaling pathways, respectively, failed to induce FucT-VII. In these coinfected cells, due to the combination of PTEN deficiency and the retroviral infections, all three major Ras signaling pathways, Raf, PI3K, and RalGDS, were activated, but FucT-VII induction was still not observed. Furthermore, coexpression of Raf and H-RasY40C, which activate the PI3K and Raf pathways, also failed to induce FucT-VII, further confirming that although the Raf and PI3K signaling pathways are essential, their activation is not sufficient to induce FucT-VII expression. Finally, expression of the constitutively active forms of the Rho GTPases Rac1 and Rac2, which can interact with some Ras-induced pathways but also trigger a distinct set of effectors, did not lead to FucT-VII induction either alone or in combination with active Raf. Taken together, our results suggest that activation of at least three signaling pathways, Raf, PI3K, and a third unknown pathway, is essential for FucT-VII induction in Jurkat T cells.
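The combinatorial argument of this section can be summarized as a simple set-based model: each construct switches on a subset of pathways, Jurkat cells contribute constitutive PI3K activity (PTEN deficiency), and FucT-VII induction is observed only when Raf, PI3K, and an unidentified H-Ras-specific pathway (labeled "X" here) are all active. This is a sketch of the paper's reasoning, not an experimentally derived model; the pathway assignments follow the effector-loop mutant properties cited above.

```python
# Pathway subsets engaged by each construct (per the effector-loop literature);
# "X" denotes the unidentified H-Ras-specific effector pathway.
PATHWAYS = {
    "H-Ras":     {"Raf", "PI3K", "RalGDS", "X"},
    "H-RasT35S": {"Raf"},
    "H-RasE37G": {"RalGDS"},
    "H-RasY40C": {"PI3K"},
    "RafBxB":    {"Raf"},
}
PTEN_NULL_BASELINE = {"PI3K"}     # Jurkat cells: constitutive PI3K activity
REQUIRED = {"Raf", "PI3K", "X"}   # all three needed for FucT-VII induction

def induces_fuct7(*constructs: str) -> bool:
    """Union the pathways activated by the constructs plus the PTEN-null
    baseline, and check whether all required pathways are covered."""
    active = set(PTEN_NULL_BASELINE)
    for c in constructs:
        active |= PATHWAYS[c]
    return REQUIRED <= active

print(induces_fuct7("H-Ras"))                # True: all pathways engaged
print(induces_fuct7("H-RasT35S"))            # False: Raf + PI3K not enough
print(induces_fuct7("H-RasE37G", "RafBxB"))  # False: Raf + PI3K + RalGDS fail
```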
Our results further indicate that the third pathway must be H-Ras-specific, since both the Raf and the PI3K signaling cascades are activated in Jurkat cells expressing H-, N-, or K-Ras, whereas only H-Ras is able to induce FucT-VII expression.
DISCUSSION
Ras proteins are small guanine nucleotide-binding proteins that function as molecular switches in signal transduction cascades, regulating cell proliferation, survival, and differentiation (18, 19). Four Ras isoforms are expressed in mammalian cells, H-Ras, N-Ras, K-Ras4A, and K-Ras4B, with K-Ras4A and -4B resulting from alternative splicing of the K-Ras gene (18). Mounting evidence suggests that, despite their structural and biochemical similarities, the different isoforms have distinct functions. Knock-out mouse systems revealed that although N-Ras- and H-Ras-deficient mice develop and reproduce normally (27, 28), K-Ras-deficient mice die during embryonic development (30). Furthermore, the Ras isoforms exhibit distinct potential in their transforming ability (21), their activation or deactivation by guanine nucleotide exchange factors or GTPase-activating factors, respectively (22-24), and their ability to activate the NF-κB pathway (25) or affect TCR signaling outcomes (26). Here we show another example of the distinct functions of the Ras isoforms. Our data demonstrate that cells transduced with constitutively active H-Ras up-regulated expression of FucT-VII, whereas active K-Ras had only a minor effect on FucT-VII induction, and the effect of active N-Ras was equivalent to the empty vector controls. This distinct ability of the H-, N-, and K-Ras isoforms to induce FucT-VII expression represents yet another difference in the biological functions of the Ras isoforms.
Ras proteins stimulate a plethora of downstream effectors such as the Raf serine/threonine kinases, the PI3Ks, and the RalGDS family (18, 31-33). In previous studies, we demonstrated that H-Ras induces FucT-VII expression in part by activating the Raf-MEK-ERK pathway (17). We, therefore, examined whether differential activation of this pathway by the three isoforms is responsible for the specific ability of H-Ras to induce FucT-VII. Our results show that all three isoforms strongly activated this pathway, as evidenced by the elevated levels of phosphorylated ERK1/2, indicating that the inability of the N- and K-Ras isoforms to induce FucT-VII is not due to an inability to activate the ERK1/2 pathway in these cells. Rodriguez-Viciana et al. (44) also reported that the H-, N-, and K-Ras isoforms exhibited equal ability in activating ERKs. Taken together, our data suggest that H-Ras specifically induces FucT-VII expression and that this isoform specificity is not a function of differential ERK1/2 activation.
Because different activating point mutations were present in the Ras isoforms tested (G12R/A59T in H-Ras and G12V in N- and K-Ras), we also determined that the inability of N- and K-Ras to induce FucT-VII was not due to the different activating mutations. H-Ras proteins bearing only the G12R or G12V point mutation up-regulated the expression of FucT-VII as well as H-Ras G12R/A59T (v-H-Ras) did, indicating that the activating mutation could not account for the isoform specificity. Furthermore, induction of FucT-VII required active mutants of H-Ras and was not due to mere overexpression, because wild-type H-Ras (12R/59A) had no effect on FucT-VII expression. Collectively, these results show that the ability of H-Ras to induce FucT-VII in Jurkat cells requires activation but is not a function of the activating mutation.
Results from a previous study in our laboratory, in which Raf was required but not sufficient to induce FucT-VII expression (17), prompted the hypothesis that concomitant activation of two or more signaling pathways may be required for FucT-VII induction. Apart from Raf, another well characterized Ras effector is PI3K (45). Our data show that LY294002, a highly specific pharmacological inhibitor of PI3K (40), inhibited expression of FucT-VII in Jurkat cells transduced with the active form of H-Ras in a dose-dependent manner and completely blocked expression at the highest concentrations. The structurally unrelated PI3K inhibitor wortmannin (46) also inhibited FucT-VII expression (results not shown). We, therefore, examined whether the Ras isoform specificity of FucT-VII induction is a function of differential activation of PI3K by determining the levels of activated Akt, a major downstream effector of PI3K. However, cells transduced with the empty virus exhibited Akt phosphorylation levels equivalent to cells transduced with each Ras isoform, a phenomenon that can be attributed to the PTEN deficiency in Jurkat cells, which leads to strong constitutive activation of PI3K (41). Studies have shown that all three Ras isoforms are equally potent in activating the p110α and p110γ isoforms of the PI3K catalytic subunits (44). It is possible that in primary T cells Ras-induced activation of PI3K is required for FucT-VII induction, but in Jurkat cells PI3K activity levels are already so high that further activation by the Ras isoforms cannot be achieved. Regardless, our results indicate that PI3K activity is required for FucT-VII induction by H-Ras. Whether Akt or members of the Tec family of non-receptor tyrosine kinases, such as Itk (47), which are recruited to the plasma membrane as a result of PI3K activity or other pathways, actually mediate FucT-VII induction remains to be determined.
To gain insight into other Ras effectors that may mediate FucT-VII expression, we employed H-Ras effector-loop-domain mutants that have been characterized as impaired in their ability to recruit specific subsets of effectors. Our results show that activation of only Raf (by the H-Ras T35S mutant), RalGDS (E37G mutant), or PI3K (Y40C mutant) was not sufficient to induce FucT-VII expression, consistent with the results above showing a requirement for both Raf and PI3K. However, although both the Raf and PI3K signaling pathways are essential for FucT-VII induction, concomitant activation of these two pathways is not sufficient to induce FucT-VII. Activation of the Raf pathway, either by expressing active RafBxB or the H-Ras mutant T35S, failed to induce FucT-VII expression in Jurkat T cells, in which the PI3K pathway is constitutively active. Moreover, coexpression of Raf and the H-Ras mutant Y40C, which activates the PI3K pathway, also failed, further confirming that activation of more than two pathways is necessary for FucT-VII induction. In addition, activation of at least Raf, PI3K, and RalGDS by coexpressing Raf and the H-Ras E37G mutant also did not induce FucT-VII expression, suggesting that the third essential pathway is not mediated by RalGDS but, rather, by a distinct downstream effector. However, Ras proteins interact with a plethora of downstream effectors, and although the ability of Ras partial-loss-of-function mutants to activate Raf, PI3K, or RalGDS has been well characterized and has contributed to the analysis of Ras signaling pathways in a variety of systems (43, 44, 48), little is known about other effector interactions that are retained or abolished in these mutants. Our data indicate that this third Ras effector-signaling pathway is activated by H-Ras but not by at least some of the H-Ras effector-loop domain mutants. Furthermore, the inability of the N- and K-Ras isoforms to induce FucT-VII indicates that this third unknown effector is selectively activated only by H-Ras and, therefore, likely accounts for the observed H-Ras specificity of FucT-VII induction.
Recent studies focus on differences between Ras isoforms in localization to distinct plasma membrane microdomains to explain their distinct biological functions. The Ras isoforms differ in the last 25 carboxyl-terminal amino acids, the hypervariable region (49), which contains a CAAX motif necessary for posttranslational modifications that control localization to the inner surface of the plasma membrane (50). Further studies show H-Ras localizing in caveolae, lipid rafts, and disordered membrane, N-Ras in caveolin-positive and caveolin-negative domains, and K-Ras in disordered, non-raft plasma membrane (51, 52). Therefore, differential localization of the Ras isoforms may account for their ability or inability to induce FucT-VII expression, a possibility that will have to be investigated. Endomembrane signaling, initiating from the Golgi apparatus or the endoplasmic reticulum, has also been considered as an explanation for the isoform differences (26, 53). Ectopically expressed H-Ras in Jurkat cells localizes, apart from the plasma membrane, to the Golgi apparatus, where H-Ras-mediated TCR signaling requires phospholipase Cγ and RasGRP1 (54). FucT-VII regulation could be mediated by H-Ras activation on the Golgi apparatus, with the involvement of phospholipase Cγ and RasGRP1. Differences in posttranslational modifications among Ras isoforms, such as palmitoylation (18), affect not only subcellular localization, as mentioned before, but also interactions with certain regulators and/or effectors (55, 56). Thus, the specific ability of H-Ras to induce FucT-VII expression may be due to the distinct localization of H-Ras in microdomains of the plasma membrane or Golgi apparatus, where interactions take place with specific effectors that colocalize in those domains. Alternatively or in addition, the posttranslational modifications per se may be responsible for the H-Ras specificity by hindering or favoring interactions with specific regulators and effectors.
Based on our results, FucT-VII induction in Jurkat cells is specifically mediated by H-Ras via the activation of at least three signaling pathways (Fig. 6). Although all three Ras isoforms can activate the Raf-MEK1/2-ERK1/2 signaling cascade, which is required for FucT-VII expression, only H-Ras exhibits the ability to activate an additional unknown downstream effector essential for FucT-VII induction. Furthermore, PI3K activity is required for H-Ras induction of FucT-VII, but differential PI3K activation cannot account for the isoform specificity, at least in Jurkat T cells. Last, effector-loop domain point mutants alone or in combination with active Raf failed to induce FucT-VII, pointing to a requirement for three or more signaling pathways in FucT-VII regulation. Taken together, these results indicate that H-Ras induces FucT-VII expression via concomitant activation of Raf, PI3K, and a third, H-Ras-specific, signal transduction cascade.
Study on Hainan Tourism Development Strategy from the Perspective of Regional Tourism
At present, regional tourism has become a focus across all walks of life. It marks a new stage in China's tourism development and a profound change in development strategy. Regional tourism will lift China's tourism industry to a new level and open up new ground for the overall national strategy. Hainan was designated as the country's first demonstration province for creating regional tourism, tasked with exploring and demonstrating the approach. This is not only in keeping with the trend but also the inevitable result of the experience gained in constructing the Hainan International Tourism Island, and it carries overall strategic significance.
Introduction
Having entered a new period of strategic opportunity, more and more provinces and cities have put forward new strategic objectives such as building world-class tourist destinations, world-class tourist cities, or world-famous tourist cities. In 2016, the National Tourism Administration decided to launch national regional tourism demonstration zones, a decision of epoch-making significance for tourism development. Under this opportunity, Hainan was recognized as the country's first demonstration province for creating full regional tourism, exploring the approach and setting an example for the country. This is not only in keeping with the trend but also the inevitable result of the experience gained in constructing the Hainan International Tourism Island, and it carries overall strategic significance.
The Concept of Full Regional Tourism
So-called "full regional tourism" refers to the active integration of various sectors and departments, with urban residents taking part, so as to make full use of all of a destination's attractive elements and provide visitors with whole-process, full-range products that fully satisfy their experience requirements. The pursuit of full regional tourism no longer dwells on growth in tourist trips but on improving the quality of tourism: enhancing the contribution of tourism to quality of life and pursuing the new value and wealth that tourism creates for people.
Full regional tourism emphasizes the fusion of residents and tourists; the aim is to make a tourism destination feel more like home to residents and tourists alike, rather than a "theme park" for tourists in which people are merely actors. In the full regional tourism strategy, residents are the owners of this "home," and tourists are treated as part of it. A theme park can hold people only for a short stay; a home is a unique place that is cared for forever. Across the whole space of a tourist destination, the various industries are effectively integrated through appropriate means, making tourism the "catalyst" and "melting pot" of industrial integration in the regional space.
Development Status of Hainan Tourism Industry
In 2016, Hainan's tourism revenue reached 66,962 million yuan, with 60.24 million visitor arrivals, and the number of tourists received continues to grow. Tourism revenue ranked 12th in the country; taking factors such as population size into account, Hainan ranked 21st in per capita tourism income, around the national average. The following analyzes the tourism situation of Hainan Province from two aspects: the elements of the tourism industry and the province's business performance:
Factor Level of Hainan Tourism Industry
3.1.1. Tourist accommodation. Star-rated hotels in Hainan Province date from the 1990s. By 2014, the province had more than 3,450 hotels, of which 70 were operated at a five-star or five-star-equivalent standard, with 22 internationally renowned hotel management groups and 47 hotel brands present. In 2015, star-rated hotels nationwide posted a combined loss of 2.258 billion yuan, whereas Hainan's hotel industry earned a profit of 782 million yuan, ranking third in the country. The average room rate was 513.25 yuan per night, 39.85% higher than the national average of 367 yuan.
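The quoted premium over the national average can be checked directly from the two room rates; a one-line verification:

```python
hainan_rate = 513.25   # average nightly room rate in Hainan, yuan
national_rate = 367.0  # national average nightly rate, yuan

premium = (hainan_rate - national_rate) / national_rate
print(f"Premium over national average: {premium:.2%}")  # prints 39.85%
```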
Travel Shopping.
Driven by the duty-free shopping policy, the share of shopping in Hainan's tourism revenue rose from 14% in 2010 to 20.1% in 2016, and duty-free shopping revenue in 2015 was 47.8% higher than in 2010; yet in 2015 only about 10% of visitors to Hainan Island took part in duty-free shopping. In 2016, Hainan's duty-free sales totaled 4.624 billion yuan, only 7.64% of the 60.5 billion (US dollars) in global duty-free sales recorded in 2015. At present, Hainan's duty-free concessions are limited and not sufficiently attractive, while the duty-free policies of Beijing and Shanghai are gradually being liberalized.
Tourism and Catering Industry.
Hainan's tourism catering draws on Han, Li, Miao, Fujianese, Cantonese, and Southeast Asian flavors, with about 232 relatively well-known dishes and 137 kinds of snacks. Ingredients center on seafood and tropical products of high nutritional value, but the food offering is mainly low-end in structure, and service levels and operating efficiency are not high. Well-known outside food and beverage brands such as Ginkgo, Xiang e Qing, and Meal for All, along with Hunan, Sichuan, Jiangsu, and Cantonese cuisines, flourish everywhere in Hainan alongside local dishes, while characteristic Dan (Tanka), Li, and Miao ethnic foods and fresh seafood lack influential brands and enjoy little market recognition.
By the end of 2016, there were 405 travel agencies in Hainan. The number licensed for outbound business grew from 11 in 2009 to 41 in 2016, an average annual increase of 37%. Travel agencies are heavily concentrated in Haikou, which accounts for 64% of the province's total, more than twice as many as Sanya. Their business is dominated by the domestic market: in 2016 the number of domestic tourists received was 8.5 times that of other travel business, and domestic tourism accounted for as much as 82.7% of provincial travel agency profits. Travel agencies in Hainan play roughly twice the national average role in promoting inbound tourism: travel agency income accounts for 19% of regional tourism income, against only 8% nationally.
Tourism Status and Product Status of Hainan Province
3.2.1. Coastal leisure travel. At present, Hainan possesses nine bay holiday resorts, including Sanya Yalong Bay, the west coast of Haikou, and the Boao resort in Qionghai, and has gradually formed a trinity of leisure vacationing, duty-free shopping, and health tourism, building three national and international brand clusters: a coastal hotel zone with top international brands, world-class yacht leisure communities, and national health and fitness sanatoriums.
Countryside Tour.
As a new highlight of Hainan tourism, rural tourism covers a wide range of areas, including tropical village sightseeing, tropical modern agriculture sightseeing, and tropical traditional pastoral tourism. By the end of 2016, 152 rural tourism demonstration sites had been identified. A variety of themed rural tourism product systems have been established, including agritainment, leisure manors, "hundred miles, hundred villages," water-town fishing villages, and characteristic towns. In terms of operating models, the "rush inside" model, the Qionghai model, and the tourism-industry-town model are typical approaches to enriching communities through tourism. In addition, Boao town in Qionghai City, the permanent site of the Boao Forum for Asia, has increasingly developed tourism around the exhibition economy, and Boao has become a demonstration site for China's exhibition tourism. In 2015, the national strategy moved from a top-level strategic concept into a stage of pragmatic cooperation; Hainan should proceed from the central government's strategic positioning for international tourism island construction, base itself on the province as a whole, grasp the objective laws, and make the building of a China Tourism Special Zone the core of constructing the international tourism island.
Economic Environment Analysis
Judging from China's economic situation over the last two years, economic growth has achieved its targets but faces downward pressure. The economic structure has continued to improve: the tertiary sector, for example, grew by 8.1%, reaching 30.7 trillion yuan and contributing 51.6% to GDP growth. The contribution of consumption to economic growth was 51.2%, up 1.2 percentage points year on year, with new consumption hot spots emerging in retail sales of consumer goods, online retail, and information consumption.
In 2016, Hainan's per capita GDP reached 39,225 yuan, per capita disposable income of urban residents reached 25,487 yuan, and per capita net income of farmers reached 10,152 yuan. An industrial structure led by tourism has initially taken shape. The tourism economy is developing rapidly; tourism products, tourism services, and comprehensive reception capacity continue to improve, further strengthening tourism's support for the province's economy.
Analysis of Social Humanistic Environment
Hainan's tourism resources, characterized by a tropical climate, blue seas and sandy beaches, and high environmental quality, are unique as individual resources and, combined with the geographic advantage of a relatively independent tropical island, give Hainan the potential to be a distinctive tourist zone and an international tourism island.
Compared with internationally well-known tropical islands, Hainan's Li and Miao culture is scarce and highly distinctive, a vivid expression of Hainan's national culture. Amplifying Li and Miao culture, and building a tropical island identity with Li style through association and coordinated publicity, could become a key breakthrough, given that Hainan's marine island resources alone are not a strong differentiator internationally. According to the distribution of humanistic resources in Hainan, four cultural groups have formed: a northern historical folk-culture group represented by Volcano Village and the Hairui Tomb; an eastern fishing-culture group centered on Tanmen in Qionghai, facing the South China Sea; a central group flourishing around Wuzhishan (Five Fingers Group); and a southern culture group at the core. These four cultural groups overlap with the ecological zones in space, offering gradients and combinations of mountain, sea, culture, and countryside resources, which can help create differentiated products such as sea exploration, fishing towns, folk-custom towns, ancient villages, Danjia (Tanka) fishing, and Li and Miao villages.
Analysis of Science and Technology Environment
With the development of science and technology, "powering national rejuvenation through science and technology" has become an important national development strategy. Awareness of scientific and technological development is strong across domestic industries, social consciousness of relying on scientific and technological progress for economic development grows daily, and the national level of science and technology keeps improving. At the same time, the informatization, intelligence, and ecological level of tourism has also improved rapidly. Hainan has achieved much in combining science and technology with tourism, creating a good environment for tourism development, for example the "Smart Tour Hainan" comprehensive tourism cloud platform, the Hainan 3D tourism scenic area project, the Sunshine Wing mobile app and the "Immediately Tour Hainan" micro-business platform, the free visitor Wi-Fi project, and the construction of the Hainan tourism satellite account.
Value Problem
The construction of the Hainan International Tourism Island, the Maritime Silk Road, Hainan's "multiple plans in one" provincial reform, and other related policies, together with Hainan's tropical island, marine, and forest resources, provide the basis for Hainan's tourism development and the opportunities in its external environment, enabling Hainan to respond positively to environmental threats and opportunities.
With "Certain Opinions of the State Council on Promoting the Construction and Development of the Hainan International Tourism Island," the construction of the "international tourism island" was elevated to the level of national strategy, and Hainan Province became a focus of national policy. In the course of building the international tourism island, comprehensive rectification of the tourism market, the implementation of various preferential policies, and the reform of the tourism management system have achieved remarkable results in areas such as finance and insurance, visa exemption, yachts and cruises, and sports lotteries.
Inimitability Problem
Hainan's unique geographical location and resource advantages mean that other provinces and regions in China cannot obtain these resources and capacity conditions and face a cost disadvantage by comparison. However, Hainan's tourism resources are highly homogeneous with those of neighboring Southeast Asian countries, and, affected by price and other factors, there is a certain threat of substitutes.
Organization Problem
The policy support system of the Hainan International Tourism Island needs optimization: the policy system lacks top-level design, leaving the targets of some preferential policies unclear, and opening policies have not been effectively implemented. At the same time, the development and utilization of resources remains at a primary stage, with a low rate of converting tourism resources into products; Hainan, as an important national tourism pathfinder, needs continued effort in its pilot role. The supply of tourism talent in Hainan falls short, and the gap with market demand is relatively large: the overall quality and level of existing tourism practitioners is low, with college-educated staff accounting for 67% of the total and few professional and technical tourism personnel; the talent structure is unreasonable, lacking high-end tourism talent and foreign-language talent; and brain drain is a problem, since weak traditional service consciousness, trade restrictions, and the low social status and low wages of tourism practitioners leave many workers with low loyalty.
High quality resources occupied.
Tourism real estate has developed aggressively: the golden coastline and lake ecological zones have been occupied, with rows of buildings forming a "Great Wall along the sea" and a "Great Wall along the lakes," and parts of the land given over to golf courses. High-quality land, forest, and coastal resources are occupied by real estate, golf, and similar projects, while homogeneous products and repeated construction leave offerings lacking distinctive features. Limits on resource protection and utilization and on land-use approval are urgently needed.
Insufficient investment.
Hainan's per capita GDP, per capita income of urban residents, and per capita income of rural residents are all below the national average, and the province contains five national key poverty-alleviation areas: Wuzhishan ("Five Fingers Group") City, Lingao County, Baisha County, Baoting County, and Qiongzhong County. The economic base is weak, investment is insufficient, and development lacks stamina.
Opportunity Analysis
6.3.1. On the international level. The rise of the Asia-Pacific has brought tourism development opportunities: since 2013 the region has seen the world's fastest growth in tourism revenue, and by 2030 the Asia-Pacific is expected to become the center of world tourism. As an international tourism island, Hainan will become an important window for China to receive overseas tourists, with promising development prospects. On the domestic side, however, the Development and Reform Commission has officially approved the construction of the Guilin international tourist resort, preferential policies once particular to Hainan are gradually being extended to other coastal provinces and cities, and domestic tourism products are converging with Hainan's. Internationally, compared with islands such as the Maldives, Hawaii, and Bali, Hainan's tourism resources are costly, insufficiently unique, and lack cultural support, leaving Hainan at a disadvantage in international competition.
6.4.2. Supporting facilities lag behind. The tourism transportation system is imperfect and traffic does not flow freely; as an island, Hainan's transport capacity to and from the mainland is limited, the self-driving system is incomplete, and greenways have not yet formed a network. The tourism advisory service system is imperfect: a unified tourism information system has not been formed, and advisory functions need further promotion. Tourism-related industries are developing rapidly, but supporting industries lag behind and support is weak; the basic service system is imperfect, lacking a system for independent travelers, and standardized facilities and services have characteristic defects.
Policy not yet fully open.
Although Hainan enjoys an islands tax-rebate policy and visa-free entry for 26 countries, the rebate amounts still lag far behind international levels, there are few direct flights for visa-free visitors, and the stimulating effect of the visa policy is greatly reduced. Hainan needs to be granted further visa-free pilot rights and to promote the implementation of preferential policies.
Table: SWOT analysis of Hainan tourism
Strengths: Innate foundation--China's unique tropical resources and environment and a good foundation for development; Accumulated experience--more than 20 years of experience in developing the tourism industry; Preferential policy--as a special economic zone open to the outside world, Hainan enjoys a variety of preferential policies; Domestic popularity--high domestic brand awareness and a high degree of aspiration.
Weaknesses: Talent and service--inadequate human resources make the soft environment of tourism services weak; Resources occupied--high-quality land is dominated by homogeneous, single-purpose projects; Economic input--a weak economic base leads to insufficient investment in tourism development; International brand--marketing and publicity are poor, and the international tourism market knows Hainan little.
Opportunities: International level--world tourism is booming and the Asia-Pacific tourism sector is rising; National level--the international tourism island, the latest industrial policy opportunities, and the 21st-Century Maritime Silk Road; Hainan level--tourism-related industrial policy opportunities, the western development program, opening toward the South China Sea, and attention to tourism as a means of enriching the people.
Threats: Internal and external competition--tourism is rising in other domestic provinces, and Asia-Pacific island tourism competition is fierce; Supporting facilities lag--tourism is developing rapidly while supporting industries lag behind with poor support; Policy implementation--despite numerous policies, the intensity of policies that benefit the people is still inadequate.
It is clear from the table that the advantages of Hainan tourism are obvious, mainly reflected in its rare natural endowment as an objective advantage; the government's preferential policies toward Hainan tourism are also objective conditions for its development, while experience is a subjective advantage. By contrast, the disadvantages of Hainan's tourism development are largely subjective factors, such as talent and services, marketing and publicity, and resource use; of these, only economic investment is the biggest obstacle to development, and since economic development and tourism development are complementary, there is still room for a breakthrough. From the perspective of opportunities and challenges, facing the international tourism island and industrial policy opportunities, the development of Hainan's tourism industry requires the strong implementation of a "growth" strategy, based on sustainable development, with targeted strategic initiatives.
SWOT Analysis Conclusion
In short, seizing this rare opportunity while actively responding to the challenges should be the attitude of Hainan's tourism decision-makers. The SWOT combinations suggest four strategy sets:
Growth strategy (strengths-opportunities): internationalization; sustainability; innovation-driven development, daring to think and to try; deepened international cooperation in tourism.
Turnaround strategy (weaknesses-opportunities): an informatization strategy; full-region, full-time tourism; institutional innovation; enhancing tourism quality and international competitiveness.
Diversification strategy (strengths-threats): integrated development; differentiation; scale; vigorous development of the eight tourism industries.
Defensive strategy (weaknesses-threats): regional coordination; ecology; optimizing the organizational structure; declaring a tourism free trade zone.
Conclusion
The Hainan Provincial Tourism Bureau has responded positively to the national call: the development of full regional tourism is being treated as the planning and construction of Hainan as one great scenic area, so that tourism may shine enduringly. This article has comprehensively analyzed the advantages, disadvantages, opportunities, and threats facing the development of Hainan's tourism industry. The analysis is qualitative and lacks empirical support and the necessary data demonstration, which will be the focus of future studies.
The Outcomes of Cooperation of Kazakhstan and Turkey in the Field of Education
Official relations between Turkey and Kazakhstan were established in December 1990. The Turkish Minister of Culture, Namik Kemal Zeybek, and the Kazakh State Culture Committee signed an agreement on joint cultural work between the two countries in the education system, research projects, and the exchange of experts and scholars. This agreement restored cultural ties that had long been interrupted. The official visit of the head of the Kazakh State Culture Committee to Turkey on January 31, 1991, and the cooperation agreement signed by the Minister of Health on February 14 of that year strengthened cultural relations between the two countries. Even before Kazakhstan gained independence, the official visit of Turkish President Turgut Ozal to Kazakhstan on March 15 and the signing of the agreement "On the Relationship of the Kazakh Soviet Socialist Republic and the Republic of Turkey" strengthened the Kazakh-Turkish friendship and its further development.
Introduction
The official visit of the head of the Kazakh State Culture Committee to Turkey on January 31, 1991, and the cooperation agreement signed by the Minister of Health on February 14 of that year strengthened cultural relations between the two countries. Even before Kazakhstan gained independence, the official visit of Turkish President Turgut Ozal to Kazakhstan on March 15 and the signing of the agreement "On the Relationship of the Kazakh Soviet Socialist Republic and the Republic of Turkey" strengthened the Kazakh-Turkish friendship and its further development. This agreement opened the way for the opening of embassies and the expansion of cultural ties between the two countries.
Turgut Ozal's visit showed new faces of Kazakhstan to Turkish society. Turkish periodicals covered the future relationship between Kazakhstan and Turkey and the official invitation extended to the President of Kazakhstan, Nursultan Nazarbayev, to visit Turkey. At their meeting, Turgut Ozal and Nursultan Nazarbayev discussed the transition to a market economy and shared recommendations for strengthening a stable currency and ensuring decent investment conditions for Turkish entrepreneurs. In an interview, Turgut Ozal spoke of the further strengthening of relations between Turkey and Kazakhstan (Atatürik aytqan eken, 2010).
On the basis of the resolution of the Supreme Council of the Republic of Kazakhstan of January 15, 1992, the law "On Science and Scientific-Technical Policy of the Republic of Kazakhstan" was adopted. Within the framework of the law, Kazakhstani students had the opportunity to study at about 30 universities in Turkey, including high-level institutions such as "Bosphorus" (Boğaziçi) University in Istanbul and Middle East Technical University (METU) and "Bilkent" University in Ankara (Ergöbek, 2007).
Results and discussion
The education system in Turkey is Western-style. For the first four years, students study only the subjects of their main specialty; general subjects are not taught as they are in the Kazakhstani system. There is, of course, the advantage that students receive a full, thorough education in their field: from their orientation toward the profession, a student can master its skills, yet weak progress in general background knowledge and poor logical-thinking skills can complicate problem solving. In the Kazakhstani system, students are trained as fully rounded engineers and also study all the general subjects (for example, the history of Kazakhstan, philosophy, economics, mathematics, physical education, etc.).
In Turkey, undergraduate students must pass 8-10 subject examinations each semester. End-of-term tests and examinations are held mainly in written form, although oral examinations are also used. Students who are eager to study are provided with all the necessary educational conditions; for example, all students are provided with full accommodation and modern equipment.
For example, graduates of the famous Fatih University work in countries such as the United States and in Western Europe. Since 1992, in accordance with international agreements, Kazakh students studying in Turkish institutions of higher education have done so on Turkish government grants. According to data from 2009, 737 students were enrolled at 27 universities in Turkey, while 544 Turkish citizens were studying in institutions of higher education in Kazakhstan. Turkey also grants scholarships each year to 70 Mongolian students, including 10 ethnic Kazakh youngsters living in that country. Each year, 25 percent of the state budget is allocated to science and education. Students enroll in colleges and universities annually through the one-step examination organized by the Selection and Placement Center.
This center functions under the Council of Higher Education. Foreigners wishing to study in Turkey take the examinations held once a year by the Selection and Placement Center and are selected according to the results. Examination questions are available only in English and Turkish. Young people who have earned the right to be educated in Turkey are taught the Turkish language for one year at the Turkish Language Center under the Rectorate of the University of Ankara. According to the National Education Ministry, more than 3 million people per year receive education, and co-education courses encompass more than 9 million participants. The work conducted by TIKA, the Turkish Cooperation and Development Administration, is remarkable.
According to Ismail Tas, Secretary General of the "Eurasia Dialogue" platform, the organization is trying to promote cultural and economic ties between Kazakhstan and Turkey. There is also a rectors' association, the "Universities Unit," in Istanbul. Thanks to this organization, institutions and higher-education schools have been built with the support of Turkish entrepreneurs, and similar projects are planned for the coming years. Furthermore, the Turkish side has funded university buildings, dormitories, and other resources necessary for construction (Nazarbayev, 2000).
Nowadays more than 200 Kazakh students are studying economics, construction, journalism, medicine, and law in Turkish institutions of higher education. Most ethnic Kazakhs in Turkey, however, run their own traditional businesses, such as the production, sewing, and sale of leather goods. Since 1991, ties between the Kazakhs of Turkey and the Republic of Kazakhstan have been developing. One of the most important events for Kazakhs living in Turkey was the meeting held on March 28-29, 1997. There is no exclusion of the Kazakh community by the Turkish people, and the level of relations with local people, government, and administration is high.
In 2005, by decree of the President of the Republic of Kazakhstan "On the Improvement of the Work of State Management Bodies in the Economic Sphere," the Ministry of Science and Technology of the Republic of Kazakhstan was formed. The Ministry's tasks were as follows: − to carry out policy in the sphere of Kazakhstan's international scientific and technical development; − to coordinate the training of scientific and pedagogical personnel in Kazakhstan; − to organize international cooperation in Kazakhstan's scientific and technological sphere.
Turkey is close to Kazakhstan in cultural, religious, and linguistic terms. Negotiations between the two countries' Ministers of Education covered an assessment of current relations in higher education and solutions to several issues. In particular, as a result of the negotiations a Memorandum of Cooperation in the field of professional-technical education was signed. Under the memorandum, the Republic of Turkey agreed to share knowledge and experience in the field of science with the Kazakh side; Turkey's experience is of particular interest in goods production, textiles, tourism, and several technical and vocational training areas.
At a meeting with TUBITAK representatives led by its chairman, Professor Nuket Yetïş, the Minister of Education and Science of the Republic of Kazakhstan said that Kazakhstan was interested in working together with the Turkish side, including an invitation to participate in joint programs within the framework of the EU.
At a meeting with Erdogan Teziç, Chairman of the Council of Higher Education of the Republic of Turkey, ways to increase cooperation between the two countries' institutions of higher education were discussed. The sides highlighted the organization of scientific symposiums and other events between the two countries' universities. The Kazakhstani Minister expressed willingness to cooperate with a number of prestigious Turkish universities in accordance with European standards.
Currently, about 700 Kazakh students are studying in Turkey, while more than 500 Turkish students are studying in Kazakhstan. So far, about 900 Kazakhstani young people have graduated from Turkish universities. Chapter 10 of the "Education Act" No. 319 of the Republic of Kazakhstan of July 27, 2007, addresses international cooperation in the field of education and economic activity, as follows: 1. International cooperation in the field of education of the Republic of Kazakhstan is carried out on the basis of the legislation of the Republic of Kazakhstan and the international treaties of the Republic of Kazakhstan.
2. Educational organizations, in coordination with the competent authority in the field of education and in accordance with their own particular features, have the right to access foreign education, to establish direct links with scientific and cultural organizations and foundations, to conclude bilateral and multilateral cooperation agreements, to participate in international exchange programs for students, undergraduates, doctoral students, teachers, and staff, and to join international non-governmental organizations (associations). Military educational institutions, in accordance with international treaties and agreements, have the right to train foreign citizens as specialists. Educational organizations have the right to engage in foreign economic activity in the manner specified by their charters and the laws of the Republic of Kazakhstan.
3. The procedure for implementing international cooperation in the educational institutions of the Republic of Kazakhstan is established by the authorized body in the field of education.
4. The establishment of international and foreign educational institutions and (or) their branches in the Republic of Kazakhstan is carried out on the basis of international agreements or in accordance with decisions of the Government of the Republic of Kazakhstan.
5. Licensing, accreditation, and certification of international institutions and of other states' legal entities and educational institutions and their affiliates on the territory of the Republic of Kazakhstan, unless stipulated otherwise by international treaties ratified by the Republic of Kazakhstan, are carried out in accordance with the laws of the Republic of Kazakhstan (Tuimebayev, 2010).
In his address to the people of Kazakhstan on March 6, 2009, "Through Crisis to Renovation and Prosperity," the head of state set specific tasks for further modernization of the economy and for ensuring the country's development and employment in the post-crisis period. During this difficult period, the system of education and science developed thanks to the constant attention and support of the head of state. The education budget in 2009 comprised 702.0 billion tenge, an increase of 9.5% over 2008 (641.1 billion tenge). Out of the funds allocated for education, 20 thousand students are studying abroad. Taking these aspects into account, the quality of the country's education system and of its pedagogical personnel must be aligned with the strategy of innovative and industrial development (Tuimebayev, 2010).
On October 22, 2009, within the framework of the official visit of the President of the Republic of Kazakhstan, N. Nazarbayev, a cooperation agreement in the sphere of science and technology was signed between the Ministries of Education and Science of the Republic of Kazakhstan and Turkey. Under this agreement, the parties cooperate in conducting joint scientific research projects, exchanging scholars and experts, holding scientific conferences and symposiums, and exchanging scientific and technical information and documents. After the fields of cooperation were defined, a commission was established to prepare proceedings and programs for carrying out the agreement and to implement joint projects within the framework of the program "International Cooperation in the Field of Science for 2010-2012" (Sawdabaev, 1996).
Conclusion
As Zh. Tuimebayev pointed out, since 1992 more than 4 thousand Kazakhstanis have graduated from Turkish universities. As a token of friendship between the two states, and with the support of their heads of state, the Yassaui International Kazakh-Turkish University was opened and has become one of the prestigious universities. There are also the S. Demirel Private University and the Foreign Languages and Business University, as well as 30 Kazakh-Turkish lyceums and 9 secondary schools (Qalïev, 2010).
To strengthen scientific, educational, and cultural relations between Turkey and Kazakhstan, the two countries jointly run the Yassaui International Kazakh-Turkish University in the city of Turkestan. To build this university of the Turkic states, the Turkish government donated 86 million dollars between 1993 and 2001. Over the past 10 years, about 25 thousand people from Kazakhstan have completed internships in the Republic of Turkey.
Turkey's economic potential is greater than Kazakhstan's, it has its own path to the international market, and it enjoys a high reputation in the European Community. A model of economic reform adjusted to the laws of market relations will therefore play a significant role for Kazakhstan. Further development of relations in the economic, political, and cultural spheres contributes to strengthening the economic and social sectors of both countries. Joint Turkish programs in culture, education, ecology, medicine, peacekeeping, the military, and politics promote the mutual desire to strengthen regional economic alliances on the global stage.
Pregnancy-Associated Atypical Hemolytic Uremic Syndrome and Life-Long Kidney Failure
Atypical hemolytic uremic syndrome (aHUS) is a rare disorder characterized by a triad of thrombocytopenia, thrombotic microangiopathy, and acute renal failure. The pathogenesis of aHUS stems from mutations in the genes of the complement cascade. However, certain circumstances, including normal physiological conditions such as pregnancy, environmental factors, or triggers, can activate disease in genetically predisposed individuals and lead to aHUS. We present a case of a young female who presented with acute renal failure and was later diagnosed with aHUS. Potential triggers were investigated, and it is believed that pregnancy was associated with the development of aHUS in this genetically predisposed young female, leading to life-long kidney failure. This case highlights a devastating systemic disease triggered by a normal physiological phenomenon. Moreover, it reiterates the importance of early diagnosis and of proceeding promptly to treatment to prevent chronic renal failure. Further reporting of cases is warranted to monitor the incidence and improve prognostic outcomes in patients with aHUS.
Introduction
Atypical hemolytic uremic syndrome (aHUS) is an extremely rare disorder, with an estimated incidence of between 0.5 and 2 cases per million per year. It is characterized by a triad of thrombocytopenia, thrombotic microangiopathy, and acute renal failure. Although the pathogenesis of aHUS is associated with mutations in the genes of the regulatory proteins of the complement cascade, a genetic mutation by itself is not enough to cause aHUS [1]. Certain circumstances, environmental factors, or triggers can activate disease in genetically predisposed individuals and lead to aHUS. We present a case in which pregnancy triggered the activation of aHUS in a genetically predisposed young female and rapidly led to acute renal failure.
Case Presentation
An 18-year-old female presented to the emergency department with complaints of progressively worsening epistaxis, intermittent over one year, involving both nares and associated with the passage of clots. Her past medical history was significant for pre-eclampsia and preterm delivery via C-section at 26 weeks of gestation; she was two years post-partum at the time of presentation. A review of systems was positive for headaches and irregular menstrual cycles. Physical examination was remarkable for a blood pressure (BP) of 161/104 mmHg, marked conjunctival pallor, left-sided costovertebral angle tenderness, and a split S2. The rest of the physical examination was unremarkable.
Initial laboratory investigation revealed hemoglobin of 7.2 g/dl, hematocrit of 22.7%, and a platelet count of 163,000/μL. Chemistry was remarkable for a blood urea nitrogen (BUN) of 120 mg/dL, creatinine of 17.47 mg/dl (baseline creatinine two years earlier, during pregnancy, was 1.1 mg/dl), an estimated glomerular filtration rate of 3.2 mL/min, and phosphorus of 8.7 mg/dL. Coagulation parameters (activated partial thromboplastin time (aPTT), prothrombin time (PT)/international normalized ratio (INR)), reticulocyte count, haptoglobin, lactate dehydrogenase (LDH), and aspartate aminotransferase (AST)/alanine aminotransferase (ALT) were within normal ranges. Urinalysis was significant for 300 mg/dl protein, moderate blood, and trace ketones. A spot urine protein of 4,550 mg/gram corresponded roughly to 4.6 g/day of proteinuria. Cardiac markers and a urine toxicology screen were negative. EKG showed sinus rhythm, and chest X-ray revealed no acute pathology. The patient was admitted with the impression of hypertensive emergency in the setting of acute renal failure.
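The report does not state which estimating equation produced the quoted GFR; as one common choice, the sketch below applies the CKD-EPI 2009 creatinine equation (without the race term) to the patient's values. It returns a figure in the same severely reduced range as the reported 3.2 mL/min, though not identical, so treat it as illustrative only.

```python
def ckd_epi_2009(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimate GFR (mL/min/1.73 m^2) with the CKD-EPI 2009 creatinine equation."""
    kappa, alpha = (0.7, -0.329) if female else (0.9, -0.411)
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if female else egfr

# Values from the case: 18-year-old female, serum creatinine 17.47 mg/dl.
print(f"eGFR ~ {ckd_epi_2009(17.47, 18, female=True):.1f} mL/min/1.73 m^2")
```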
On the medical floor, the patient's BP was stabilized, and nephrology was consulted for evaluation of acute renal failure. Further laboratory workup was negative for autoimmune and vasculitic etiologies, i.e., anti-DNA, anti-smooth muscle antibody (ASMA), c-antineutrophil cytoplasmic antibodies (c-ANCA), p-ANCA, antiphospholipid antibodies, and lupus anticoagulant; HIV and hepatitis serologies were negative. No initial peripheral smear was available. C3 was mildly decreased at 66 mg/dL (reference range 81-157 mg/dL), with normal C4 levels. Kidney ultrasound showed increased renal cortical echogenicity. Considering the acuity of the renal failure, the patient underwent renal biopsy, which showed thrombotic microangiopathy, global and focal segmental glomerulosclerosis with collapsing features, low-grade immune complex-mediated glomerulonephritis, severe arterionephrosclerosis, and 90% interstitial fibrosis/tubular atrophy (Figures 1-2). The patient was started on hemodialysis during the admission, with no further acute complications during the hospital stay. She was discharged on three-times-weekly maintenance dialysis with hematology and nephrology follow-up for further care.
(Figures 1-2: renal biopsy images; the surviving caption fragment notes scant inflammatory cells.)
Subsequent workup with hematology was remarkable for schistocytes on peripheral smear, persistently low hemoglobin of ~10 g/dl, low platelets of ~120,000/μL, an elevated reticulocyte count of 2.87% (reference range 0.5%-1.7%), an ADAMTS13 activity level of 76% (reference range >61%), and a stool culture negative for Shiga toxin and enterohemorrhagic Escherichia coli O157:H7. Atypical HUS was suspected, and the patient's renal biopsy slides were sent for complement membrane attack complex (C5b-9) staining, which returned positive. Further genetic testing was remarkable for a negative prothrombin G20210A mutation, a negative factor V Leiden mutation, and a positive mutation in the CFHR1 gene. A diagnosis of aHUS was made based on the clinical history, renal biopsy findings, and genetic testing. She was started on eculizumab injections every two weeks while undergoing workup for a possible renal transplant.
Discussion
Hemolytic uremic syndrome (HUS) is a type of thrombotic microangiopathy consisting of intravascular hemolysis, thrombocytopenia, and thrombi in small vessels and capillaries, leading to organ damage, most commonly acute renal failure. It is commonly divided into two types based on likely etiology, typical HUS and atypical HUS.
Typical HUS, caused by Shiga toxin-producing Escherichia coli (STEC) infection, accounts for 90% of HUS cases [2]. The remaining 10%, known as atypical HUS, stems from genetic mutations in one of multiple genes (commonly C3, CFB, CFH, CFI, MCP (CD46), and CFH-CFHR) encoding proteins of the complement pathway [3]. However, as discussed above, a genetic mutation alone is not enough to produce aHUS; it only predisposes an individual to develop aHUS in the presence of triggers. Common triggers include acute infection, H1N1 influenza, varicella, underlying cancer, autoimmune disease, drugs, and pregnancy.
The initial presentation of atypical HUS can vary from vague symptoms of fatigue and malaise to dyspnea, bleeding, and high blood pressure. Patients with aHUS are more likely to develop serious, often life-long complications such as renal failure requiring hemodialysis if symptoms are not identified early in the course of presentation and treatment is delayed. An observational study in France of 214 patients with aHUS showed that 29% of children and 56% of adults progressed to end-stage renal disease (ESRD) or death within a year of follow-up [3]. Diagnosis is complicated and rests on a clinical history consistent with hemolytic anemia, thrombocytopenia, and kidney dysfunction in the presence of identified mutations in genes associated with aHUS, or when two or more members of the same family are affected by the disease at least six months apart. Other, more common thrombotic microangiopathies such as thrombotic thrombocytopenic purpura (TTP) and typical hemolytic uremic syndrome share similar presenting characteristics, making it critical for clinicians to include aHUS in their differentials when evaluating thrombotic microangiopathies. The absence of a history of diarrhea, negative stool cultures for Shiga toxin, hypertension, hematuria, and proteinuria should prompt the nephrologist toward a probable diagnosis of aHUS and warrant further workup with hematology. Upon diagnosis, treatment with supportive care and immunotherapy should begin promptly.
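The differential the authors walk through is often condensed into a first-pass triage: severely deficient ADAMTS13 activity (commonly <10%) points to TTP, Shiga toxin positivity to typical HUS, and the remainder, with supporting complement findings, toward aHUS. A schematic sketch of that logic follows; the thresholds are the commonly cited ones, not values taken from this paper, and a real workup also weighs complement genetics, biopsy, and clinical context.

```python
def triage_tma(adamts13_activity_pct: float, shiga_toxin_positive: bool) -> str:
    """Schematic first-pass triage of a thrombotic microangiopathy (TMA)."""
    if adamts13_activity_pct < 10:
        return "TTP likely (severe ADAMTS13 deficiency)"
    if shiga_toxin_positive:
        return "Typical (STEC) HUS likely"
    return "Consider aHUS: pursue complement studies and genetic testing"

# This patient's values: ADAMTS13 activity 76%, stool Shiga toxin negative.
print(triage_tma(76, shiga_toxin_positive=False))
```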
Pregnancy as a trigger accounts for 20% of all aHUS cases reported in women [4]. A systematic review of the literature found only 60 cases of pregnancy-triggered aHUS across 66 pregnancies. Fifty-four of the 60 cases (~90 percent) followed a common pattern: the women were often nulliparous (58%), the disease occurred post-partum (94%), and onset more often followed cesarean delivery (70%) [4]. Common preceding obstetric complications included pre-eclampsia, hemorrhage, and fetal death. Women with aHUS associated with a first pregnancy achieved higher rates of disease remission when treated with eculizumab than those who did not receive eculizumab (88% vs 57%, P=.02) [4].
Treatment modalities have included plasma exchange and eculizumab, a monoclonal antibody against complement protein C5, which the FDA approved for the management of aHUS in 2011 [5].
Multiple studies have demonstrated the high effectiveness of eculizumab. A case study in an adolescent with relapsing unclassified aHUS demonstrated temporary termination of the microangiopathic hemolytic activity upon initiation of eculizumab. However, continued renal damage from the preceding and subsequent aHUS activity ultimately led to ESRD in the same patient, suggesting that therapeutic success may depend on early initiation of eculizumab. The optimal duration of therapy is variable and remains to be determined [6].
Our case represents a similar presentation of a very rare entity, in which a female whose first pregnancy was complicated by pre-eclampsia was found to have acute renal failure secondary to atypical HUS. Her delayed presentation (lost to follow-up for two years) led to a delayed diagnosis and the lifelong complication of renal failure requiring hemodialysis. Clinical data and treatment experience remain limited for pregnant or post-partum females diagnosed with aHUS, as does knowledge of the timeline between initial presentation and the onset of aHUS and/or acute renal failure.
Conclusions
aHUS remains a rare disorder whose clinical symptoms overlap with many other similar disease processes, which poses a challenge to clinicians and delays diagnosis. This case highlights a devastating systemic disease triggered by the normal physiological phenomenon of pregnancy, leading to advanced morbidity. Moreover, it reiterates the importance of early diagnosis and prompt treatment to prevent chronic renal failure.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Magnesium Levels and Diastolic Blood Pressure (DBP) as a Vasospasm Prediction Metric in Patients With Aneurysmal Subarachnoid Hemorrhage (SAH)
Introduction: Vasospasm is a significant cause of morbidity and mortality in patients with aneurysmal subarachnoid hemorrhage (SAH). The purpose of this study is to evaluate a possible link between vasospasm in patients with aneurysmal SAH and magnesium and blood pressure levels.
Methods: Subjects were selected based on chart review of patients presenting to a comprehensive stroke center in Southern California with aneurysmal SAH. Twenty-seven were included based on the following criteria: patients greater than 18 years of age, aneurysmal SAH, clinically symptomatic vasospasms, and at least one diagnostic confirmation, either from a transcranial Doppler (TCD) or digital subtraction angiogram (DSA). The following exclusion criteria also applied: 1) incomplete documentation in the medical record; 2) patients <18 years of age; and 3) patients without TCD measurements.
Results: In an overall analysis of all patients with or without vasospasm, it was found that the presence of vasospasm was significantly correlated with diastolic blood pressures (DBPs) on the day of vasospasm with an r value of 0.418 and p<0.001. Average daily DBPs throughout hospital stay were also correlated with vasospasm with an r value of 0.455 and p<0.001. Changes in magnesium overall were also significantly related to left Lindegaard ratios with an r value of -0.201 and p value of 0.032. Lindegaard ratios were significantly correlated with age with r values of 0.510, p<0.001, and r=-0.482, p<0.001 for left and right, respectively. A change in magnesium was inversely correlated to the left Lindegaard ratio with an n of 31 and p value of 0.014 (r=-0.439) in patients with vasospasm. We also found a lower incidence of vasospasm in patients older than 65.
Conclusion: Monitoring magnesium and increases in DBP might be effective as a prophylactic adjunct method in patients with SAH in an effort to predict clinical vasospasm.
Introduction
Vasospasm is a significant cause of morbidity and mortality in patients with aneurysmal subarachnoid hemorrhage (SAH). Up to 40% of patients with SAH will develop vasospasm at some point in their care [1]. Predicting when a patient is in vasospasm is vital to the care of these patients, especially when clinical signs of vasospasm are not apparent. In order to determine whether a patient is in vasospasm, cerebral angiography and transcranial dopplers (TCDs) have traditionally been used to supplement the clinical exam [2]. However, the use of these technologies relies on availability of both a skilled operator and functioning instrumentation. In many clinical settings, one or more of those may not be readily available for rapid diagnosis. A surrogate determinant would be valuable in these cases.
Magnesium therapy in patients with aneurysmal SAH has been a very controversial topic. Studies have found variable outcomes from giving regular doses of magnesium sulfate to patients, attributed to a possible vasodilator effect decreasing the incidence of delayed cerebral ischemia caused by vasospasm [3,4]. Another study showed that magnesium could be as effective as nimodipine in reducing the incidence of delayed neurological deficits [5]. One major trial showed, however, that regular magnesium infusion did not provide clinical benefit [6]. The predictive value of the temporal association of magnesium levels has nevertheless not been well studied, and the link between magnesium levels and the incidence and severity of vasospasm has not been evaluated thoroughly. Which phenomenon occurs first has not been established. The body, in response to vasospasm, may be consuming high levels of magnesium for its vasodilatory effect, which could lead to low serum levels of magnesium [7]. Alternatively, magnesium levels may be low for an unrelated reason in SAH patients, and the lower magnesium concentration, contributing less to vasodilation, may itself raise the likelihood of vasospasm.
The purpose of this study is to evaluate a possible link between vasospasm in patients with aneurysmal SAH and magnesium and blood pressure levels. Our hypothesis is that patients with low magnesium levels and high diastolic blood pressures (DBPs) will be in vasospasm, as confirmed by TCD measurements.
Study Design
Subjects were selected based on chart review of patients presenting to a comprehensive stroke center in Southern California with aneurysmal SAH. Patient charts were selected based on ICD-10 code for aneurysmal SAH and reviewed for data.
Inclusion and Exclusion Criteria
Inclusion criteria were: patients greater than 18 years of age, aneurysmal SAH, clinically symptomatic vasospasms, and at least one diagnostic confirmation, either from a TCD or digital subtraction angiogram (DSA). The following exclusion criteria also applied: 1) incomplete documentation in the medical record; 2) patients <18 years of age; and 3) patients without TCD measurements.
Data Collection
A sample of 37 patients with aneurysmal SAH was identified. Of the 37, 27 were included based on the inclusion and exclusion criteria. All patients received the institutional treatments of hypervolemia, 2 g of magnesium treatment twice per day, statin, nimodipine, and medication to keep systolic blood pressure (SBP) less than 130 prior to surgery, with permissive hypertension after the aneurysm was secured. The data included age, gender, aneurysm location, daily magnesium levels, daily TCD and/or DSA values, daily vital signs including SBP and DBP, length of hospital and intensive care unit stay, and outcome at discharge.
Statistical Analysis
Statistical analysis was performed in SAS software for Windows. Correlations were assessed using Pearson coefficients; all statistical tests were two-sided, and a p-value of <0.05 was considered statistically significant.
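As an illustration of this kind of analysis, the sketch below reproduces a two-sided Pearson correlation test in Python with SciPy rather than SAS. The data are synthetic placeholders, not the study's values.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic placeholder data: daily DBP (mmHg) and a binary vasospasm flag,
# for illustration only -- not the study's dataset.
rng = np.random.default_rng(0)
dbp = rng.normal(80, 10, size=126)
vasospasm = (dbp + rng.normal(0, 10, size=126) > 85).astype(int)

r, p = pearsonr(dbp, vasospasm)  # two-sided by default
print(f"r = {r:.3f}, p = {p:.4g}")
```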
Results
A total of 37 patients were screened and identified with SAH on presentation to the medical center. Ten patients were excluded: seven were transferred to other facilities before any data were collected, two expired, and one did not undergo clinical testing for vasospasm. This left 27 patients with SAH who were followed and had at least one diagnostic test assessing for vasospasm. A total of 126 data points were obtained from these patients.
Patients' ages ranged from 24 to 82 years, with a median age of 53. There were 17 males and 10 females. A total of 33.3% (9/27) of qualifying patients had evidence of at least one instance of vasospasm determined by either TCD or DSA. The distribution of vasospasms by demographic factors is shown in Table 1: 35.3% (6/17) of males and 30% (3/10) of females experienced a vasospasm at least once during the hospital stay. The correlations between all the factors obtained were assessed. It was found that the presence of vasospasm was significantly correlated with DBP on the day of vasospasm, with an r value of 0.418 and p<0.001. Further, average daily DBPs throughout hospital stay were also correlated with vasospasm, with an r value of 0.455 and p<0.001. Changes in magnesium overall were also significantly related to left Lindegaard ratios, with an r value of -0.201 and p value of 0.032. Lindegaard ratios were significantly correlated with age, with r values of 0.510, p<0.001 and r=-0.482, p<0.001 for left and right, respectively.
Although no correlation was identified across all patients, a subgroup analysis of vasospasm demonstrated through TCDs found that a change in magnesium was inversely correlated to the left Lindegaard ratio, with an n of 31 and p value of 0.014 (r=-0.439). A similar relationship was not seen in patients without vasospasm, as there was no relationship between changes in magnesium and Lindegaard ratios (right or left), with p values of 0.129 and 0.623, respectively.
When utilizing t-testing to determine if there was a difference between the vasospasm and no-vasospasm groups, it was found that the DBP on day of vasospasm, average daily DBP, and DBP prior to the vasospasm event were significant, all with p<0.001.
We also found a lower incidence of vasospasm in patients older than 65. These data are highlighted in Figure 1.
Discussion
The results of this retrospective analysis of magnesium levels in a population of patients with SAH carry several implications for monitoring and for the prospect of identifying vasospasm. Although prior studies have reported mixed results on the effectiveness of magnesium as a treatment for vasospasm [5-7], our results reveal a temporal association that may be used to predict vasospasm. While patients who did not experience clinical vasospasm showed no relationship when magnesium levels trended downwards, patients who did experience vasospasm showed a significant, moderate association between decreasing magnesium levels and periods of active vasospasm. These findings point to an adjunct clinical laboratory test that can help with the diagnosis of cerebral vasospasm, potentially with temporal predictive capability. This is especially important in patients who have already demonstrated vasospasm, as additional laboratory values can be used to monitor for recurrence. High-dose magnesium as a prophylactic supplement to reduce the risk of poor outcomes in SAH has been explored in multiple studies and, together with close monitoring of magnesium levels, may warrant further investigation into protocols and guidelines for its routine dosing in patients at perceivably higher risk [8,9].
Our study did not demonstrate any significant association between those with vasospasm and SBP values. However, we did demonstrate a link between patients with vasospasm and elevated daily DBPs. Similarly, a retrospective analysis by Faust et al. demonstrated that increases in mean arterial pressure attributable to changes in DBP were significant predictors of vasospasm and poor outcome [10]. With a larger sample size and further studies, a more direct link can be investigated, and its use in treating patients with vasospasm can be broadened.
We also note a decrease in incidence of vasospasm in patients above the age of 65, as seen in Figure 1.
Although prior studies have shown that older age may either result in increased vasospasm or have no link to vasospasm rate [3], our study shows a decrease in incidence. We postulate that older patients have more atherosclerotic vessels, so these vessels would be stiffer. However, this would need to be further investigated with a detailed analysis of the changes in stiffness and size of patients via angiography. Future studies could focus on this aspect to determine whether there truly is a link between stiffer vessels due to age and atherosclerosis and decreased incidence of vasospasm.
A limitation of this analytical method, as with all Pearson coefficient analyses, is that a linear relationship is assumed: decreasing magnesium levels are taken to lead to a linear increase in the risk of vasospasm, whereas the relationship may well be nonlinear. Additionally, our results only demonstrate associations and not causation. There may be additional confounding variables within the pathophysiology of vasospasm that contribute to the findings encountered. It is well known that magnesium levels also affect calcium levels, with a possible role for parathyroid hormone as well [11]. SAH is also a complicated pathologic process requiring close hemodynamic management as well as control of body temperature, coagulopathies, and blood sugar levels [12]. As a result, there are many confounding variables worth exploring that could yet dwarf the significance of magnesium levels in the incidence of vasospasm.
A future direction would be to prospectively correlate more frequent serum magnesium measurements, with or without supplementation, with the occurrence of vasospasm. A prospective study is also needed to correlate more frequent changes in DBP with the occurrence of vasospasm. This could then lead to the development of a predictive and readily available screening test. Also, as stated earlier, a larger patient population needs to be studied to make even more distinct connections and allow us to improve the treatment of patients with vasospasm. In facilities that do not have TCD and do not want to subject patients to daily angiograms, tracking lower blood concentrations of magnesium when given as twice-daily supplementation, or tracking the DBP, may guide the neurointensivist toward the detection of continued vasospasm.
Conclusions
Monitoring magnesium might be effective as a prophylactic adjunct method in patients with SAH to predict clinical vasospasm. The moderate negative correlation demonstrates that a decrease in magnesium levels despite twice-daily supplementation, at least in combination with signs and symptoms of vasospasm, is associated with a higher severity of clinical vasospasm. In addition, patients in vasospasm had a correlation with higher DBP values. As always, patients should be carefully observed in an intensive care setting with close arterial blood pressure and ECG monitoring, as well as frequent measurements of magnesium levels, coagulation studies, blood sugar, and other routine labs and imaging as indicated.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Arrowhead Regional Medical Center issued approval 22-05. The information provided was reviewed and approved by the Institutional Review Board Member. No future action is required. Please note final Approval for use. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
The efficacy of hemostatic radiotherapy for bladder cancer-related hematuria in patients unfit for surgery
INTRODUCTION
The incidence of bladder tumors has been reported to be 10.1 per 100,000 male patients and 2.5 per 100,000 female patients (1). Urothelial carcinomas are primarily found in elderly patients. Two distinct groups of tumors occur: muscle invasive bladder tumors (MIBT) and non-muscle invasive bladder tumors (NMIBT). The management of these patients is very different, and the prognosis ranges from a minimal risk of local progression to a substantial risk of progression and metastasis for more advanced tumors (2). Surgery remains a key element of successful management for both types of tumors.
Elderly patients can become unfit for surgery. These patients must be included in a comprehensive, global care network which attempts to alleviate the patient's symptoms. This remains the main objective of support and palliative care (3).
External beam radiotherapy (EBRT) offers effective tumor control with minimal invasiveness, and symptom reduction for hemorrhage or pain. Unfortunately, the optimal schedule for palliative EBRT in patients with bladder tumors who are unfit for surgery remains a matter of debate (4).
The aim of our study was to evaluate the efficacy of palliative EBRT for gross hematuria, a distressing symptom for patients and their caregivers. Two different schedules were used, providing the more affected population with a hypofractionated regimen in order to preserve their quality of life.
MATERIALS AND METHODS
This retrospective, bi-center study was conducted on thirty-two patients treated between January 1993 and January 2009 in the Department of Urology at Rouen University Hospital, Rouen, France and the Becquerel Cancer Center in Rouen, France.
Forty consecutive patients were considered for enrollment. Eight patients were excluded from the study (six had prior pelvic radiation therapy, one had Crohn's disease, and one had von Willebrand disease). Thirty-two patients (Table-1) with bladder cancer were enrolled for palliative pelvic external beam radiation therapy for bladder cancer-related hematuria. All patients were considered unfit for surgical treatment due to age or medical comorbidities. The comorbidities involved are listed in Table-2.
Inclusion in this study was based on the following criteria:
1. Gross hematuria from advanced bladder cancer; all patients were considered grade 3 according to the Common Terminology Criteria for Adverse Events (CTCAE) 4.02 classification;
2. Histological proof of bladder cancer (NMIBT or MIBT);
3. Failure of local treatment (embolization, electrocoagulation, and intravesical instillation of hemostatic agents);
4. Unfit for radical surgical treatment (cystectomy).
The exclusion criteria of this study were:
• Contraindication to radiotherapy (e.g., digestive disease);
• Prior radiotherapy in the pelvic area;
• Coagulation disorder.
All patients were evaluated by an urologist and a radiation oncologist, and had previously undergone transurethral resection of the bladder (TURB). Bladder cancer was confirmed by pathological tests, based on the 2004 WHO classification, and classified into two groups: muscle invasive tumors (MIBT) and non-muscle invasive tumors (NMIBT).
Pathological tumor stage was assessed according to the 2002 modification of the TNM (UICC) classification.
The general health conditions before and after treatment were assessed based on the Eastern Cooperative Oncology Group performance status (ECOG PS; Table-3).
External radiotherapy was performed using high-energy photon therapy with four orthogonal beams. The more recent patients underwent a computed tomography (CT) dosimetry scan. The clinical target volume (CTV) was the bladder. Lymph nodes were not considered for treatment in this palliative setting. Beam definition was performed using a simulator or via bone localization, with no contrast agent. Additional margins were included to take into account bladder filling and patient movements.
Two protocols, A and B, were used, depending on the general health conditions of the patient, as follows: Protocol A (13 patients) delivered 30 Gy in 10 fractions over 2 weeks if the ECOG PS was less than or equal to 2. Patients in group B (19 patients) underwent a hypofractionated regimen of 20 Gy in 5 fractions over 1 week if they had an ECOG PS of more than 2.
The patients were evaluated 2 weeks and 6 months after the beginning of radiotherapy. The Common Terminology Criteria for Adverse Events, CTCAE version 4.02 (2009 version) (5), published by the US Department of Health and Human Services, was used to estimate the intensity of the hematuria.
The treatment was considered successful when the hematuria had completely stopped (grade 1). Relapse was defined as the presence of gross hematuria during the evaluation consultation or the need for other procedures to achieve hemostasis.
Table-2 - Major comorbidities of the included patients (number of patients):
Ischemic heart disease - 11
Chronic kidney disease - 10
Morbid obesity - 9
Chronic heart failure - 7
Sleep apnea requiring aid - 7
CKD requiring dialysis - 3
Cirrhosis (Child-Pugh A) - 2
Non-bladder metastatic cancer - 1
Blood transfusion, in the absence of gross hematuria, was not an exclusion criterion for our study. Statistical analysis was performed using XLSTAT V 7.5 software (Addinsoft, 2004). Two-sided p-values were obtained using Mann-Whitney non-parametric bilateral tests. Results were considered significant when the p-value was below 0.05.
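For readers who want to reproduce this kind of comparison, the sketch below runs a two-sided Mann-Whitney U test in Python with SciPy instead of XLSTAT. The two groups shown are synthetic placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Synthetic placeholder samples (e.g., ages in group A vs. group B);
# illustrative only, not the study's data.
group_a = [68, 71, 74, 75, 77, 79, 80]
group_b = [78, 81, 83, 84, 86, 88, 90]

stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # significant if p < 0.05
```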
RESULTS
Thirty-two patients were evaluated. The main results are shown in Table-5.
Group A was younger than group B (75.53 vs. 84 years, p = 0.006). The mean performance status (PS) based on ECOG criteria was 2.56 for the general cohort; group A had a lower PS than group B (1.38 vs. 3.37; p < 0.0001). The mean number of major comorbidities was 2.72 in the general cohort, with fewer comorbidities in group A than group B (1.92 vs. 3.26, p = 0.092).
Twenty-five patients had MIBT (78.2%) and 7 patients had NMIBT (21.8%). The majority of patients had tumor stage T2 (37.6%), and the most frequent tumor grade was grade 3 (90.6%). No significant differences between treatment groups were found concerning tumor stage or grade.
Malignancy grade 3 was found in 29 patients (90.6%): 13 group A patients and 16 group B patients had a grade 3 tumor (p = 0.139).
Sixteen patients (50%) had lymph node involvement (LNI), which was more frequent in group B than in group A (11 vs. 5, p = 0.288). Eleven patients (34%) had metastasis at inclusion, which was more frequent in group B than in group A (8 vs. 3, p = 0.201).
Thirteen patients (41%) underwent the standard EBRT regimen based on ECOG PS test results. Nineteen patients (59%) had more severe health conditions and were selected to undergo a hypofractionated EBRT regimen.
After two weeks of EBRT, twenty-two patients (69%) presented no hematuria. There appeared to be a better response to EBRT in the muscle invasive group than in the non-muscle invasive group (Figure-1), although no statistical significance was reached (72% hematuria-free vs. 57%, p = 0.469).
In PS-related subgroup analysis (Figure-2), seven group A patients had no hematuria after two weeks of RT (54%).In group B, fifteen patients (79%) were hematuria-free after two weeks of hypofractionated RT (p = 0.139).
There was no statistically significant difference in time to relapse between the subgroups, either by tumor stage (3.1 vs. 3.3 months, p > 0.05) or by PS (3.4 vs. 3.5 months, p > 0.05). After 2 years, 7 patients (23%) remained free of hematuria.
DISCUSSION
Bladder tumor management remains a routine procedure in clinical urology practice. The standard treatment for non-metastatic bladder tumors remains surgery (1,3,6).
Contraindication to general or epidural anesthesia, due to a severe general state or severe comorbidities, can hinder surgical therapies. For patients with urothelial bladder cancer who are unfit for surgery, radiotherapy associated with chemotherapy remains a possible alternative (3,6,7).
However, insufficient local tumor control has been previously reported using this management approach (8). Many patients unfit for surgery will suffer from ongoing tumor-related symptoms (pain, urgency, hematuria). Life expectancy is short (under 9 months) in this population (2,9). Some cases of severe hematuria, especially in patients with a cardiovascular history, can be life-threatening (10). Non-surgical hematuria management involves bladder irrigation, coagulation correction, and sometimes intravesical procoagulant agents or hypogastric arterial embolization. Other treatments have been reported, such as intravesical instillation of formalin, F2 prostaglandin, or alum, and hyperbaric oxygen therapy. However, results have been disappointing, and their use cannot be widely encouraged (11).
Patients with bladder cancer and no surgical option should benefit from support and palliative care, which is defined by the alleviation of symptoms without extending the life expectancy of the patient (12,13). Palliative treatment must strike a balance between efficacy, convenience, toxicity, and duration. The main objective of palliative irradiation remains to provide adequate symptomatic relief throughout a patient's anticipated life span while limiting the risk of both acute and late treatment-related complications (14).
Radiation therapy has a double effect on urothelial tumors. Shortly after RT, endothelial cell damage occurs, leading to small and medium vessel rupture or thrombosis. Longer-term effects include the disappearance of the microvascular network (15). Other effects that could lead to hemostasis include vasoconstriction and platelet aggregation through a decrease in endothelial nitric oxide synthase, an increase in interleukin 1 and TNF alpha through endothelial cell damage, and fibrinolysis inhibition due to inhibition of plasminogen activator (16-18).
Hypofractionation involves delivering the same dose as the standard radiation schedule in a lower number of sessions, using a larger daily fraction. This translates into higher long-term tissue toxicity. However, the population receiving support and palliative care usually has a short life expectancy and bothersome symptoms; therefore, short-term relief is a more pressing concern than late toxicity. The ease of execution, reduction of patient transportation, and rapid efficacy justify the use of hypofractionated radiotherapy for patients in palliative care (19). These facts suggest that the more severely affected patients should be good candidates for hypofractionated EBRT, since the impact on their quality of life would be smaller.
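To make the fractionation trade-off concrete, the hedged sketch below computes the biologically effective dose (BED) of the two schedules used in this study with the standard linear-quadratic formula, BED = n * d * (1 + d / (alpha/beta)). The alpha/beta values (10 Gy for tumor, 3 Gy for late-responding tissue) are common textbook assumptions, not values taken from this paper.

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose (linear-quadratic model):
    BED = n * d * (1 + d / (alpha/beta))."""
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1 + dose_per_fraction / alpha_beta)

schedules = [("Protocol A: 30 Gy / 10 fx", 10, 3.0),
             ("Protocol B: 20 Gy / 5 fx", 5, 4.0)]

for name, n, d in schedules:
    print(f"{name}: tumor BED10 = {bed(n, d, 10.0):.1f} Gy, "
          f"late-tissue BED3 = {bed(n, d, 3.0):.1f} Gy")
# Protocol A: tumor BED10 = 39.0 Gy, late-tissue BED3 = 60.0 Gy
# Protocol B: tumor BED10 = 28.0 Gy, late-tissue BED3 = 46.7 Gy
```

Under these assumptions, protocol B's larger fraction size raises late toxicity per gray delivered (its late-to-tumor BED ratio is higher), but its lower total dose keeps the absolute late-tissue BED below that of protocol A.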
Therefore, this therapeutic management should be proposed for populations with short-term limited life expectancy. In fact, Scholten et al. reported high rates of 5-year overall complications (33%) and severe complications (9%) (20).
According to guidelines, EBRT remains an option to reduce bleeding from various cancers. This palliative and hemostatic approach is mainly used due to its antitumoral effects and easy administration. Moreover, its cytotoxic effects allow for limitation or regression of an irradiated tumor, which can provide a decrease in tumor-specific symptoms (14,21).
The hemostatic effects of EBRT have been previously reported in patients affected by lung carcinomas. Brundage et al. have shown an 80% reduction in bleeding in non-small cell lung cancers. Hypofractionation has also been reported to be as effective as the standard procedure (22,23).
The efficacy of palliative EBRT has also been observed in gynecological cancers. Biswal et al. (24) reported 100% bleeding control within three fractions of EBRT. Kraiphibul et al. (25) also observed EBRT efficacy in uterine cervix cancers: 62.9% of cancer-related bleeding was controlled within three fractions and 97% within five fractions of radiotherapy.
Duchesne et al. (4) compared two hypofractionated EBRT schedules in a prospective trial, with no difference between a 35 Gy-10 fraction and a 21 Gy-3 fraction schedule concerning hematuria improvement (55% improvement vs. 50%, p = 0.198). Clinical evaluation was performed at 3 months. Symptoms such as nocturia, dysuria, and urgency were also improved after EBRT, with no significant difference between the two schedules.
Our retrospective study focused on the hemostatic effect of EBRT on bladder cancer. Other urinary symptoms were not taken into account in our data analysis. Limitations include the retrospective nature of this study and the limited patient cohort.
Two EBRT schedules were proposed: 30 Gy in 10 fractions (group A - 2 weeks) or 20 Gy in 5 fractions (group B - 1 week). ECOG PS defined which schedule would be assigned: more severe patients (ECOG > 2) underwent the hypofractionated group B procedure. Group B patients were therefore less subjected to transportation, with less impact on their quality of life, and presumed non-inferior efficacy (4,19,26).
As reported in the Duchesne et al. study, long-term follow-up in large cohorts is difficult to obtain given the patients' short-term life expectancy. Therefore, we considered the objective of hemostatic EBRT to be rapid efficacy with good tolerance, and an early 2-week medical evaluation was carried out. Clinical improvement and its sustainability were evaluated at the 6-month medical evaluation.
Efficacy at the 2-week evaluation was high: 22 patients (68.75%) showed no sign of gross hematuria (CTCAE grade 1). Rapid control of the bleeding, a major objective of patient care, was thus achieved. However, efficacy rates dropped at the 6-month follow-up evaluation: 69% of the patients had relapsed. This corroborates the findings of Duchesne et al., who reported approximately 60% of patients hematuria-free 3 months after EBRT with the 35 Gy or 21 Gy schedules (4). Two years after EBRT, only 23% of patients were hematuria-free.
Analysis by ECOG performance status and EBRT scheme suggested a benefit in terms of hematuria control in favor of hypofractionation. The results again did not reach statistical significance: hypofractionation (group B) had a 21% relapse rate after 2 weeks, whereas the standard EBRT scheme (group A) had a 46% relapse rate (p = 0.139).
The present data suggest that hemostatic EBRT can be an interesting option for patients unfit for surgery with hematuria related to bladder cancer. This therapeutic approach should be used for patients with poor health status and limited life expectancy, due to the limited sustainability of its effect. Patients with a life expectancy of more than 3 months should not be candidates for hemostatic EBRT. Practitioners should strive to improve the patient's general state of health in order to enable more aggressive, but more effective, alternatives.
A benefit was found in hypofractionation in terms of hematuria control, although its use should be considered only for patients with more severe general conditions. These patients were able to benefit most from rapid bleeding control with limited transportation and did not suffer from long-term tissue toxicity due to limited life expectancy. Maintaining quality of life should remain the foremost objective for the management of patients with bladder cancer who are not candidates for surgery.
Srinivasan et al. reported encouraging results with hypofractionated EBRT on advanced bladder cancer (26). Jose et al. reported a positive effect in 62% of the patients who underwent hypofractionated palliative EBRT, although significant bladder and digestive toxicity was observed one year after EBRT (27).
Further research is required to establish the higher efficacy of hypofractionated EBRT. Larger cohorts are needed to confirm the trend that we have reported in our retrospective series. This would be useful to define the place of hypofractionated EBRT as the standard therapy for bladder cancer-related hematuria in severely ill patients unfit for surgical management.
CONCLUSIONS
Palliative external beam radiation therapy is an effective short-term option for bladder cancer-related gross hematuria when patients are unfit for surgery. Hypofractionation appears to be an interesting option. However, this approach should be used only for patients with poor life expectancy, as its long-term toxicity is higher than that of standard schedules. Its efficacy compared to usual schedules requires further study.
The effectiveness of hemostatic EBRT seems to be of limited sustainability, limiting its use to situations in which other means of hemostasis have failed. Hemostatic EBRT cannot take the place of surgical options as a means of bleeding control, and it cannot offer long-term control for patients with a longer life expectancy.
Figure 1 - Results based on tumor stage (initial staging). The rate of relapse is shown in blue, green, and red.
Figure 2 - Results based on ECOG-PS, the Eastern Cooperative Oncology Group performance status (which determined the external beam radiotherapy scheme used). The rate of relapse is shown in blue, green, and red.
CT = Computed tomography
CTCAE = Common Terminology Criteria for Adverse Events
CTV = Clinical target volume
ECOG PS = Eastern Cooperative Oncology Group performance status
Gy = Gray (unit of radiation dose)
LNI = Lymph node involvement
MIBT = Muscle invasive bladder tumors
NMIBT = Non-muscle invasive bladder tumors
RT = Radiation therapy
TURB = Transurethral resection of the bladder
Table 4 - CTCAE (Common Terminology Criteria for Adverse Events) grading for hematuria. Definition: a disorder characterized by laboratory test results that indicate blood in the urine.
Carboxylic ester hydrolases from hyperthermophiles
Carboxylic ester hydrolyzing enzymes constitute a large group of enzymes that are able to catalyze the hydrolysis, synthesis, or transesterification of an ester bond. They can be found in all three domains of life, including the group of hyperthermophilic bacteria and archaea. Esterases from the latter group often exhibit a high intrinsic stability, which makes them of interest for various biotechnological applications. In this review, we aim to give an overview of all characterized carboxylic ester hydrolases from hyperthermophilic microorganisms and provide details on their substrate specificity, kinetics, optimal catalytic conditions, and stability. Approaches for the discovery of new carboxylic ester hydrolases are described. Special attention is given to the currently characterized hyperthermophilic enzymes with respect to their biochemical properties, 3D structure, and classification.
Introduction
The synthesis of specific products by enzymes is a fundamental aspect of modern biotechnology. This biocatalytic approach has several advantages over traditional chemical engineering, such as higher product purity, fewer waste products, lower energy consumption, and more selective reactions due to the high regio- and stereo-selectivity of enzymes (Rozzell 1999). One of the most exploited and industrially important groups of biocatalysts is the carboxylic ester hydrolases (EC 3.1.1.x) (Jaeger and Eggert 2002; Hasan et al. 2006).
Carboxylic ester hydrolases are ubiquitous enzymes, which have been identified in all domains of life (Bacteria, Archaea, and Eukaryotes) and in some viruses. In the presence of water, they catalyze the hydrolysis of an ester bond, resulting in the formation of an alcohol and a carboxylic acid. However, in an organic solvent, they can catalyze the reverse reaction or a transesterification reaction (Fig. 1) (Krishna and Karanth 2002). Most carboxylic ester hydrolases belong to the α/β-hydrolase family and share structural and functional characteristics, including a catalytic triad, an α/β-hydrolase fold, and co-factor independent activity. The catalytic triad is conserved and is usually composed of a nucleophilic serine in a GXSXG pentapeptide motif (where X is any residue) and an acidic residue (aspartate or glutamate) that is hydrogen bonded to a histidine residue (Heikinheimo et al. 1999; Jaeger et al. 1999; Nardini and Dijkstra 1999; Bornscheuer 2002).
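As a simple illustration of how such a sequence motif can be located, the Python sketch below scans a protein sequence for GXSXG with a regular expression. The toy sequence is hypothetical; real annotation pipelines rely on profile-based methods (e.g., InterPro) rather than a bare regex.

```python
import re

def find_gxsxg(seq: str):
    """Return (1-based position, match) for each GXSXG occurrence (X = any residue).
    A lookahead is used so that overlapping motifs are also reported."""
    return [(m.start() + 1, m.group(1)) for m in re.finditer(r"(?=(G.S.G))", seq)]

# Hypothetical toy sequence, not a real esterase:
print(find_gxsxg("MKLAVGHSLGGASA"))  # -> [(6, 'GHSLG')]
```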
There are two well-known groups within the family of carboxylic ester hydrolases: lipases and esterases. Esterases differ from lipases in that they show a preference for short-chain acyl esters (shorter than 10 carbon atoms) and are not active on substrates that form micelles (Chahinian et al. 2002). Other groups include, for instance, arylesterases and phospholipases. The physiological role of carboxylic ester hydrolases is often not known, but nevertheless, many have found applications in industry, among others in medical biotechnology, detergent production, organic synthesis, biodiesel production, flavor and aroma synthesis, and other food-related processes (Panda and Gowrishankar 2005; Salameh and Wiegel 2007a).
The use of enzymes in industrial processes also has its restrictions. Many processes are operated at elevated temperatures or in the presence of organic solvents. These conditions are detrimental to most enzymes, and therefore there is a growing demand for enzymes with improved stability. In this regard, enzymes from hyperthermophiles are especially promising candidates because they generally display a high intrinsic thermal and chemical stability (Gomes and Steiner 2004). In recent years, many new hyperthermophiles have been isolated, and the genomes of a rapidly increasing number have been completely sequenced. Hyperthermophiles have proven to be a good source of new enzymes (Atomi 2005; Egorova and Antranikian 2005; Unsworth et al. 2007), including many putative esterases and lipases.
At this moment, most esterases and lipases used in industry are from mesophiles, basically because they were the first to be identified and characterized. Esterases and lipases have only been isolated from a small number of hyperthermophiles (Table 1). An excellent review on thermostable carboxylesterases from hyperthermophiles appeared in 2004. However, since then, many new hyperthermophilic carboxylic ester hydrolases have been described. Therefore, in this review, we aim to present an overview of the currently characterized carboxylic ester hydrolases from hyperthermophiles. We will focus on the identification of new carboxylic ester hydrolases, the biochemical properties and 3D structures of characterized enzymes, and their classification. For details on the application of these enzymes, we refer to other reviews that cover this aspect extensively (Atomi 2005; Hasan et al. 2006; Salameh and Wiegel 2007a).
Hyperthermophiles
Hyperthermophiles are generally defined as micro-organisms that grow optimally at temperatures above 80°C (Stetter 1996). They have been isolated from both terrestrial and marine environments, such as sulfur-rich solfataras (pH ranging from slightly alkaline to extremely acidic), hot springs, oil-field waters, and hydrothermal vents at the ocean floor. Consequently, they show a broad physiological diversity, ranging from aerobic respirers to methanogens and saccharolytic heterotrophs (Stetter 1996;Vieille and Zeikus 2001). Hyperthermophiles can be found in both prokaryotic domains, viz. the Bacteria and the Archaea. In phylogenetic trees based on 16S rRNA, they occupy the shortest and deepest lineages, suggesting that they might be closely related to the common ancestor of all extant life (Stetter 2006). For this reason and because they are a potential source of new biocatalysts, the genomes of several hyperthermophiles have been completely sequenced (Table 1).
All biomolecules of hyperthermophiles must be stabilized against thermal denaturation. The simplest approach for DNA stabilization would be to increase the GC-content of the DNA. However, it has been established that the GC-content of hyperthermophiles does not correlate with their optimal growth temperatures (Table 1). Instead, other mechanisms are used to stabilize DNA, such as an increased intracellular electrolyte concentration, cationic DNA-binding proteins, and DNA supercoiling (Unsworth et al. 2007). Thus far, all completely sequenced hyperthermophiles have a reverse gyrase catalyzing positive supercoiling of their DNA. A reverse gyrase is, however, not a prerequisite for hyperthermophilic life, but it can be seen as a marker for growth at high temperatures.
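For reference, GC-content itself is a trivial quantity to compute, which is what makes the observed lack of correlation with growth temperature easy to test; below is a minimal Python sketch using a hypothetical fragment.

```python
def gc_content(dna: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    dna = dna.upper()
    return (dna.count("G") + dna.count("C")) / len(dna)

# Hypothetical fragment, for illustration only:
print(f"GC = {gc_content('ATGCGGCCATTAACGG'):.1%}")  # ~56%
```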
Proteins from hyperthermophiles have also been optimized for functioning at elevated temperatures. There is no single mechanism responsible for the stability of these hyperthermophilic proteins; rather, it can be attributed to multiple features. Features that contribute to the stability of hyperthermophilic enzymes include (a) changes in amino acid composition, such as a decrease in the thermolabile residues asparagine and cysteine, (b) increased hydrophobic interactions, (c) an increased number of ion pairs and salt bridge networks, (d) a reduction in the size of surface loops and of the solvent-exposed surface, and (e) increased intersubunit interactions and oligomeric state (Vieille and Zeikus 2001; Robinson-Rechavi et al. 2006; Unsworth et al. 2007). Besides these structural adaptations, proteins can also be stabilized by intracellular solutes, metabolites, and sugars (Santos and da Costa 2002).
Biomining for new enzymes
Fig. 1 - Reactions catalyzed by carboxylic ester hydrolases: (a) hydrolysis, (b) esterification, and (c) transesterification.
Traditionally, new biocatalysts were discovered by a cumbersome screening of a wide variety of organisms for the desired activity. A modern variant is the metagenomics approach, which involves the extraction of genomic DNA from environmental samples, its cloning into suitable expression vectors, and subsequent screening of the constructed libraries (Lorenz and Eck 2005). This approach has been successfully applied to isolate new biocatalysts, including carboxylic ester hydrolases from hyperthermophiles (Rhee et al. 2005; Tirawongsaroj et al. 2008). This approach can potentially result in unique enzymes (no sequence similarity), but obviously depends on functional expression. At present, with many complete genome sequences available, bioinformatics has become an important tool in the discovery of new biocatalysts. This is a high-throughput approach for the identification, and in silico functional analysis, of more or less related sequences encoding potential biocatalysts. Sequence similarity, based on sequence alignments and motif searches, is most commonly used for assigning a function to new proteins (Kwoun Kim et al. 2004). Many sequences in the available databases have already been annotated as putative esterases or lipases. However, even more carboxylic ester hydrolases can be identified when BLAST and motif searches, in combination with pair-wise comparison with sequences of known carboxylic ester hydrolases, are used. The advantage of this approach compared to traditional activity screening is the direct identification of new and diverse carboxylic ester hydrolases, which would otherwise not have been detected due to a low level of expression.
Such a bioinformatics approach has been successfully applied to identify new carboxylic ester hydrolase sequences in the completely sequenced genomes of several selected hyperthermophiles. In order to have as many candidates as possible, sequences that were assigned a different function, but did have the characteristics of carboxylic ester hydrolases, were also included, such as acylpeptide hydrolases. The results are given in Table 2. A typical strategy includes: BLAST-P searches (Altschul et al. 1997) using sequences of known carboxylic ester hydrolases as template; in parallel, searching InterPro (Hunter et al. 2009) for potential candidates. The resulting sequences can then be further analyzed (for conserved motifs and domains) using the NCBI Conserved Domain Search (Marchler-Bauer et al. 2009).
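A minimal sketch of the BLAST-P step in this strategy is shown below, using Biopython's NCBI web-BLAST interface; it assumes network access and Biopython installed, and the query sequence is a hypothetical placeholder, not one of the sequences analyzed here.

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Hypothetical placeholder query in FASTA format (not a real esterase):
query = ">query_candidate\nMKLAVGHSLGGASAVLAGAELGLFKRIAVEPV"

# Submit a protein-protein BLAST against the non-redundant database.
handle = NCBIWWW.qblast("blastp", "nr", query, expect=1e-10)
record = NCBIXML.read(handle)

# Report the top hits with their E-values.
for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    print(f"{alignment.title[:60]}  E = {hsp.expect:.1e}")
```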
Properties of characterized esterases
The first carboxylic ester hydrolase isolated and characterized from a hyperthermophile was a carboxylesterase from Sulfolobus acidocaldarius (Gorisch 1988, 1989). Since then, many new esterases have been characterized. At this moment, most carboxylic ester hydrolases described from hyperthermophiles are esterases, and only a few lipases have been reported (Table 3).
Substrate preference
Enzymes are classified and named according to the type of reaction they catalyze (Enzyme Commission). The carboxylic ester hydrolases catalyze the hydrolysis of carboxylic acid esters, but they can be further clustered into different groups based on their preferred substrate. Two well-known members of this family are the esterases and true lipases. The majority of the characterized hyperthermophilic carboxylic ester hydrolases are esterases.
Lipases have been described for many mesophiles, mainly microbial and fungal, and are exploited for biotechnological applications (Gupta et al. 2004; Hasan et al. 2006). However, until recently, no true lipase hydrolyzing long-chain fatty acid esters had been identified in hyperthermophiles. The first lipase was characterized from the archaeon A. fulgidus (Levisson et al., manuscript in preparation) (Table 3). This lipase shows maximal activity at a temperature of 95°C and has a half-life of 10 h at 80°C. It displays highest activity with p-nitrophenyl-decanoate (pNP-C10) and is capable of hydrolyzing triacylglycerol esters of butyrate (C4), octanoate (C8), palmitate (C16), and oleate (C18). Two lipases from the thermophile T. lipolytica, LipA and LipB, have been characterized and are very stable at high temperatures (Salameh and Wiegel 2007b). Both enzymes show maximal activity at 96°C and have the highest activity with the triacylglycerol ester trioleate and pNP-C12. LipA and LipB retained 50% of their activity after 6 and 2 h of incubation at 100°C, respectively, indicating that these two lipases are the most thermostable ones reported so far. Unfortunately, attempts to clone the two lipases were unsuccessful. A few mesophilic lipases may operate at temperatures above 80°C, but they usually have short half-lives. An exception is a mesophilic lipase isolated from a Pseudomonas sp., which showed a half-life of over 13 h at 90°C (Rathi et al. 2000). In comparison, the well-known lipase B from Candida antarctica (CALB, Novozym 435) has a half-life of only 2 h at 45°C (Suen et al. 2004).
Esterases have a preference for short to medium acyl-chain esters (Table 3). Several enzymes from hyperthermophiles have been tested for activity toward esters with various alcoholic moieties other than the standard pNP-esters or 4-methylumbelliferyl (4MU) esters (Fig. 2). The esterase from P. calidifontis displays activity toward different acetate esters and showed highest activity on isobutyl acetate (Hotta et al. 2002). Furthermore, it was able to hydrolyze sec- and tert-butyl acetate. At present, only few enzymes can catalyze the hydrolysis or the synthesis of tertiary esters. This is because known esterases and lipases cannot hydrolyze esters containing a bulky substituent near the ester carbonyl group.
Other esterases have been characterized for their ability to resolve mixtures of chiral esters. The kinetic resolution of the esterase Est3 from S. solfataricus P2 was investigated using (R,S)-ketoprofen methyl ester (Fig. 2). The enzyme hydrolyzed the (R)-ester of racemic ketoprofen methyl ester and showed an enantiomeric excess of 80% at a conversion of 20% in 32 h. In another study, the esterase Sso-Est1 from S. solfataricus P1 (Sehgal et al. 2001) was identified as a homolog of the mesophilic Bacillus subtilis ThaiI-8 esterase (CNP) (Quax and Broekhuizen 1994) and Candida rugosa lipase (CRL) (Lee et al. 2001), which are used for the chiral separation of racemic mixtures of 2-arylpropionic methyl esters. The enzyme was characterized biochemically for its ability to resolve mixtures of (R,S)-naproxen methyl ester under a variety of reaction environments (Sehgal and Kelly 2002, 2003). Sso-Est1 showed a specific reaction toward the (S)-naproxen ester under co-solvent reaction conditions, with an enantiomeric excess of ≥90%.
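To connect the two Est3 numbers quoted above, the enantioselectivity E of a kinetic resolution can be estimated from conversion and product ee with the classic Chen et al. (1982) relation; the sketch below applies it to those figures. Treat the result as an approximation, since the relation assumes an irreversible, uninhibited reaction.

```python
import math

def e_value(conversion: float, ee_product: float) -> float:
    """Enantioselectivity E from conversion c and product ee (Chen et al. 1982):
    E = ln[1 - c(1 + ee_p)] / ln[1 - c(1 - ee_p)]."""
    return (math.log(1 - conversion * (1 + ee_product))
            / math.log(1 - conversion * (1 - ee_product)))

# Est3 with (R,S)-ketoprofen methyl ester: 80% ee at 20% conversion.
print(f"E = {e_value(0.20, 0.80):.1f}")  # -> E = 10.9
```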
In addition to esterases and lipases, other ester hydrolase types have been identified in hyperthermophiles, including two phosphotriesterases and an arylesterase, found in S. acidocaldarius, S. solfataricus MT4, and S. solfataricus P1, respectively (Porzio et al. 2007; Park et al. 2008). The phosphotriesterases showed maximal activity on the organophosphate methyl-paraoxon (dimethyl p-nitrophenyl phosphate), and the arylesterase showed maximal activity on paraoxon (diethyl p-nitrophenyl phosphate) (Fig. 2; Table 3). Besides this phospho-esterase activity, esterase activity (on pNP-esters) was also observed for both enzyme types. Stable organophosphate-degrading enzymes are of great interest for the detoxification of chemical warfare agents and agricultural pesticides.
Stability against chemicals
Stability and activity in the presence of organic solvents and detergents are important properties of an enzyme if it is to be used as a biocatalyst in industry. Several hyperthermophilic carboxylic ester hydrolases have been tested. The esterase from P. calidifontis (Hotta et al. 2002) displays high stability in water-miscible organic solvents and exhibited activity in 50% solutions of DMSO, methanol, acetonitrile, ethanol, and 2-propanol. In addition, the enzyme retained almost full activity after 1 h of incubation in the presence of the above-mentioned organic solvents at a concentration of 80%. In comparison, the lipases from the mesophiles Pseudomonas sp. B11-1 (Choo et al. 1998) and Fusarium heterosporum (Shimada et al. 1993) were completely inactivated after incubation with acetonitrile. In addition to its stability against solvents, the Pyrobaculum enzyme also has a high thermal stability, with a half-life of approximately 1 h at 110°C (Table 3). The esterase from S. solfataricus P1 also displayed good stability against organic solvents, comparable to the enzyme from P. calidifontis. In addition, the addition of 5% non-ionic detergents, such as Tween 20, stabilized the Sulfolobus enzyme. Moreover, the enzyme retained 45 and 98% activity in the presence of 5% SDS and 8 M urea, respectively. The lipase from the mesophile Penicillium expansum shows a much lower stability against detergents or organic solvents (Stocklein et al. 1993). The esterase EstD from T. maritima does not display resistance to detergents, retaining only 0 and 43% activity in the presence of 1% (w/v) SDS and 1% (v/v) Tween 20, respectively. However, EstD does show good resistance against organic solvents, since it remained active in the presence of 10% (v/v) solvents, which is comparable to the esterase from P. calidifontis. The esterase Est3 from S. solfataricus P2 displayed good resistance against mild detergents; it retained 51 and 99% activity in the presence of 10% (w/v) Tween 60 and 10% (w/v) Tween 80, respectively, but displayed lower stability against organic solvents than the other three esterases described above.
Thermal stability
The most thermostable carboxylic ester hydrolase described to date is an esterase from P. furiosus (Ikeda and Clark 1998) (Table 3). It is extremely stable, with half-lives of 34 and 2 h at 100 and 120°C, respectively. The enzyme has optimal activity at a temperature of 100°C, which is in good agreement with the optimal growth temperature of Pyrococcus (100°C). The highest activity was obtained with the substrate MU-C2; however, some activity toward pNP-C18 was also detected, indicating a very broad substrate tolerance. Another very stable esterase was detected in crude extracts of P. abyssi (Cornec 1998). This enzyme has half-lives of 22 h and 13 min at 99 and 120°C, respectively. Maximal esterase activity was observed between 65 and 74°C; temperatures above 74°C were not investigated due to instability of the substrate. The enzyme is active on a broad range of substrates, capable of hydrolyzing triacylglycerol esters and aromatic esters, but is restricted to short acyl-chain esters of C2-C8, with an optimum for C5 fatty acid esters. Unfortunately, no sequence information has been reported for either Pyrococcus esterase. Most of the characterized carboxylic ester hydrolases from hyperthermophiles are optimally active at temperatures between 70 and 100°C (Table 3), which is often close to or above the host organism's optimal growth temperature. Furthermore, it is interesting to note that some carboxylic ester hydrolases, such as the esterase from S. shibatae (Huddleston et al. 1995) and the acetyl esterase from T. maritima (Levisson et al., manuscript in preparation), show, after heterologous expression in E. coli, a transient activation during stability incubations, indicating that they probably need a high temperature in order to fold properly. Compared to their mesophilic counterparts, they perform similar functions; however, due to intrinsic differences, hyperthermophilic enzymes are stable and can operate at higher temperatures. It is difficult to indicate exactly which factors contribute to this higher thermal stability since, as discussed before, many different factors are involved.
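Half-lives like those quoted above translate directly into residual activity if one assumes simple first-order (single-exponential) thermal inactivation; the sketch below makes that assumption explicit, and real inactivation curves can of course be multiphasic.

```python
import math

def residual_activity(t_hours: float, half_life_hours: float) -> float:
    """Fraction of activity remaining after time t, assuming first-order
    inactivation: A/A0 = exp(-k*t) with k = ln(2) / t_half."""
    k = math.log(2) / half_life_hours
    return math.exp(-k * t_hours)

# P. furiosus esterase, reported half-life of 34 h at 100 degrees C:
print(f"{residual_activity(10, 34):.0%} activity left after 10 h")  # -> 82%
```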
Structures
Most carboxylic ester hydrolases conform to a common structural organization: the α/β-hydrolase fold, which is also present in many other hydrolytic enzymes such as proteases, dehalogenases, peroxidases, and epoxide hydrolases (Ollis et al. 1992). The canonical α/β-hydrolase fold consists of an eight-stranded, mostly parallel β-sheet, with the second strand anti-parallel (Fig. 3). The parallel strands β3 to β8 are connected by helices, which pack on either side of the central β-sheet. The sheet is highly twisted and bent so that it forms a half-barrel. The active site contains the catalytic triad, usually consisting of the residues serine, aspartate, and histidine (Heikinheimo et al. 1999; Jaeger et al. 1999; Nardini and Dijkstra 1999). The substrate-binding site is located inside a pocket on top of the central β-sheet that is typical of this fold. The size and shape of the substrate-binding cleft have been related to substrate specificity (Pleiss et al. 1998). The 3D structures of several hyperthermophilic esterases have been solved (Table 3) (Fig. 3). The first reported structure of a hyperthermostable esterase was that of the esterase AFEST of A. fulgidus (PDB: 1JJI) (De Simone et al. 2001). AFEST belongs to the hormone-sensitive lipase (HSL) group of esterases and lipases. The structure was refined to 2.2 Å resolution and showed that AFEST has the typical α/β-hydrolase fold. The active site is shielded by a cap region composed of five α-helices. Access to the active site of many lipases and some esterases is shielded by a mobile lid, whose position (closed or open) determines whether the enzyme is in an inactive or active conformation. AFEST is an esterase that prefers pNP-C6 as a substrate and shows maximal activity at 80°C. It is stable at high temperatures, with a half-life of 1 h at 85°C (Manco et al. 2000b). A comparison of the AFEST structure with its mesophilic and thermophilic homologs, the brefeldin A esterase (BFAE) from Bacillus subtilis (PDB: 1JKM) (Wei et al. 1999) and EST2 from Alicyclobacillus acidocaldarius (PDB: 1EVQ) (De Simone et al. 2000), showed which structural features contribute to its thermal stability. The comparison revealed an increase in the number of intramolecular ion pairs, a reduction in loop extensions, and a reduced ratio of hydrophobic to charged surface residues (De Simone et al. 2001; Mandrich et al. 2004).
The structure of the esterase EstE1 was solved to 2.1 Å resolution (PDB: 2C7B) (Byun et al. 2007). This enzyme, which was isolated from a metagenomic library, also belongs to the HSL group and is closely related to AFEST. EstE1 has the canonical architecture of the α/β-hydrolase fold and also contains a cap domain, like other members of the HSL group (De Simone et al. 2001). It exhibits its highest esterase activity on short acyl chain esters of length C6 and has a half-life of 20 min at 90°C (Rhee et al. 2005). The thermal stability of EstE1 seems to be achieved mainly by its dimerization through hydrophobic interactions and ion-pair networks, both of which contribute to its stabilization. This strategy for thermostabilization differs from that of AFEST and shows that there is a variety of structural possibilities for acquiring stability.
The crystal structure of an acylpeptide hydrolase (apAPH) from the archaeon A. pernix was solved to 2.1 Å resolution (PDB: 1VE6) (Bartlam et al. 2004). Acylpeptide hydrolases are enzymes that catalyze the removal of an N-acetylated amino acid from blocked peptides. The enzyme shows an optimal temperature of 90°C for enzyme activity and is very stable at this temperature, with a half-life of over 160 h. It is active on a wide range of substrates, including p-nitroanilide (pNA) amino acids, peptides, and also pNP-esters with varying acyl-chain lengths, with an optimum for pNP-C6 (Gao et al. 2003). The structure of the acylpeptide hydrolase/esterase apAPH belongs to the prolyl oligopeptidase family (Bartlam et al. 2004). The structure comprises two domains: the N-terminal domain is a regular seven-bladed β-propeller, and the C-terminal domain has the canonical α/β-hydrolase fold that contains the catalytic triad consisting of a serine, aspartate, and histidine. It was shown that a single mutation (R526E) completely abolished the peptidase activity of this enzyme on Ac-Leu-p-nitroanilide, while its esterase activity on pNP-C8 was only halved. Any mutation at site 526 resulted in decreased peptidase activity due to the loss of the ability of R526 to bind the peptidase substrate, while most of the mutants had increased esterase activity due to a more hydrophobic environment of the active site. This result shows that enzymes can evolve to discriminate between substrates through only a single mutation.
The most recently elucidated structure belongs to an esterase, EstA, from T. maritima (PDB: 3DOH) (Levisson et al. 2009). The enzyme displayed optimal activity with short acyl chain esters at temperatures equal to or higher than 95°C. Its structure was solved to 2.6 Å resolution and revealed a classical α/β-hydrolase domain containing the typical catalytic triad. Surprisingly, the structure also revealed the presence of an N-terminal immunoglobulin (Ig)-like domain. The combination of these two domains is unprecedented among both mesophilic and hyperthermophilic esterases. The function of this Ig-like domain was investigated, and it was shown to play an important role in multimer formation and in the stability and activity of EstA.
A high-resolution structure of an enzyme leads to a better understanding of its reaction mechanism, how it interacts with other proteins, and what contributes to its stability, and it may provide a basis for enzyme optimization and drug design. Because it is nowadays relatively easy to set up crystallization trials using commercially available screens, and because current high-throughput crystallization projects are responsible for a large increase in the number of solved structures (Fox et al. 2008), it is expected that more structures of hyperthermophilic esterases will become available in the future.
Classification
Enzymes can be classified on the basis of their substrate preference, sequence homology, and structural similarity. Classification of enzymes based on sequence alignments provides an indication of the evolutionary relationship between enzymes. Still, structural similarity is preserved much longer than sequence similarity during evolution. On the other hand, sequence homology and structural similarity are not always correlated with the substrate preference of an enzyme. Altogether, the classification of enzymes is not straightforward.
Several classifications of esterases and lipases into distinct families have been proposed. In one such study, 53 bacterial esterases and lipases were classified into eight families based on their sequence similarity and some of their fundamental biological properties (Arpigny and Jaeger 1999). Many new esterases and lipases have since been identified, including several, such as EstD from T. maritima, that could not be grouped into one of these eight families. Therefore, new families for these enzymes have been proposed. Nevertheless, this early classification has provided a good basis for a more refined classification of the esterases and lipases. Most of the recent studies are based on sequence and structural similarity and are accessible through online databases. Some relevant databases are briefly discussed below: the Lipase Engineering Database (LED), the Microbial Esterase and Lipase Database (MELDB), the Carbohydrate Active Enzyme (CAZy) database, and the ESTHER database.
The LED (http://www.led.uni-stuttgart.de) combines information on the sequence, structure, and function of esterases, lipases, and related proteins sharing the same α/β-hydrolase fold (Pleiss et al. 2000; Fischer and Pleiss 2003). The database contains more than 800 prokaryotic and eukaryotic sequences, which have been grouped into families based on multi-sequence alignments. The functionally relevant residues of each family have been annotated. The database was developed as a tool for protein engineering. The LED will be updated in the forthcoming year (personal communication with Prof. Dr. Juergen Pleiss); the classification will not change, but the number of proteins and families will increase substantially.

MELDB (http://www.gem.re.kr/meldb) is a database that contains more than 800 microbial esterases and lipases (Kang et al. 2006). The sequences in MELDB have been clustered into groups according to their sequence similarities and are divided into true esterase and lipase clusters. The database was developed to identify conserved but as-yet-unknown functional domains/motifs and to relate these patterns to the biochemical properties of the enzymes. According to the authors, new enzymes from other completely sequenced microbial strains will be added on a regular basis.

CAZy (http://www.cazy.org) is a database that contains enzymes involved in the degradation, modification, or creation of glycosidic bonds (Cantarel et al. 2009). One class of activities in this database is the carbohydrate esterases (CE). These enzymes remove ester-based modifications from carbohydrates and have been clustered into 15 families, which were created based on experimentally characterized proteins and sequence similarity. The database is continuously updated based on the available literature and structural information.

The ESTHER database (http://bioweb.ensam.inra.fr/esther) contains more than 3500 sequences of enzymes belonging to the α/β-hydrolase fold (Hotelier et al. 2004). These enzymes have been clustered into families based on sequence alignments. This database is updated regularly and furthermore contains information about the biochemical, pharmacological, and structural properties of the enzymes.

Novel developments and future perspectives

In recent years, many new hyperthermophilic bacteria and archaea have been isolated. The genomes of several of these hyperthermophiles have been sequenced, and this number will increase rapidly in the future due to forthcoming sequencing projects [GOLD genomes online; (Liolios et al. 2008)]. This increase in sequence information will accelerate the identification of new carboxylic ester hydrolases with new properties. Hitherto, traditional screening has been used to identify new enzymes; however, bioinformatics and metagenome screening will contribute more and more to this identification process. A major drawback of metagenome screening is that, in order to function well, the genes of interest need to be functionally expressed in the heterologous screening host. Therefore, a new two-host fosmid system for the functional screening of (meta)genomic libraries from extreme thermophiles was recently developed (Angelov et al. 2009). This system allows the construction of large-insert fosmid libraries in E. coli and the transfer of the recombinant libraries to the extreme thermophile T. thermophilus.
This system was shown to support a higher level of functional gene expression and may be of value in the identification of new carboxylic ester hydrolases from hyperthermophiles. However, in addition to the identification of new carboxylic ester hydrolases, their characterization is also indispensable. The classification of esterases into families is an ongoing process, and many of the current databases are incomplete. A promising approach is the superfamily-based approach, which combines theoretical and experimental data and can reveal more information about a protein family (Folkertsma et al. 2004). A completely automatic program capable of constructing these superfamily systems is 3DM (Joosten et al. 2008). This program is able to create a new superfamily of the carboxylic ester hydrolases based on structural and sequence similarity. In addition, superfamily systems generated by 3DM have been proven to be powerful tools for understanding and predicting the rational modification of proteins (Leferink et al. 2009).
Many new protein structures are becoming available. These structures will provide a basis for modern methods of enzyme engineering, such as directed evolution and rational design, to broaden the applicability of these enzymes. In the past, these methods have been proven to enhance enzymes to meet specific demands, including increased stability, activity, and enantioselectivity (Dalby 2007). In the future, the identification of new esterases and the methods available to engineer them will provide tools to find thermostable esterases that are able to perform a vast array of reactions.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Hydraulic Potential Energy Model for Hydropower Operation in Mixed Reservoir Systems
The forecast-informed hydropower operation for mixed reservoir systems, which consist of parallel and cascade reservoirs, is of considerable importance in practice; however, this operation still lacks an analytical basis in theory. From the perspective of energy, this paper introduces the concept of "hydraulic potential energy" and mathematically derives the energy transformation formula for multi-reservoir hydropower operation. Based on this formula and the rolling single-period forecast, a maximum hydraulic potential energy model (E1 model) is proposed. The rigorous proofs presented demonstrate the deficiencies of the commonly considered objectives or principles of minimizing the outflow-induced energy cost, while the objective function of the E1 model is shown to be superior. If the constraints of power output and reservoir storage are nonbinding, the derived optimal spatial principle for hydropower operation is (1) to equalize the Relative Marginal Energy (RME) among reservoirs or (2), if this status is not feasible, to release water and generate hydropower first from the reservoirs that have the largest RME values. Considering the uncertainty of future inflow, the E1 model is extended to the two-stage hydraulic potential energy model (E2 model). A case study of a hypothetical mixed three-reservoir system demonstrates the superior performance of the E2 model compared with the conventional K-value principle and the minimum energy-cost model. This paper provides an innovative spatial principle and a practical model for the realization of forecast-informed hydropower operation, which contributes to the optimization of the large-scale energy market.
Introduction
With the decelerating construction of large-scale water storage facilities in developing and developed countries (MWR, 2013; WCD, 2000), the integrated operation of multiple reservoirs has become a growing concern for maintaining operational effectiveness and maximizing benefits (Labadie, 2004). Second in number only to dams built for irrigation purposes, hydropower dams are extremely common, comprising 20% of the large dams worldwide (Herschy, 2012). Furthermore, reservoir systems can be designed and operated primarily for hydropower generation in some mountainous regions (Zeng et al., 2013). Hydropower operation is a complex nonlinear problem due to the hydraulic head effects (i.e., the relationship between water level and reservoir storage/release) and the complementary effect (i.e., the release becomes more productive with increasing storage) (Cheng et al., 2008; Zeng et al., 2013). This operation is further complicated by the competition and cooperation among parallel reservoirs and the close hydraulic linkages between upstream and downstream reservoirs (Balibar, 2017; Labadie, 2004; Nathan, 1982).
Given the intricate nature of multi-reservoir hydropower operation, this issue has been discussed for decades by many researchers; see Yeh (1985), Labadie (2004), or de Queiroz (2016) for reviews. The majority of recent studies have concentrated on developing optimization methods and technical algorithms (Lee & Labadie, 2007; Lima et al., 2013; Liu et al., 2018). Deterministic optimization and stochastic optimization are the two basic techniques applied to formulate long-term guidelines or short-term decisions. Deterministic optimization is used to develop operation rules or make decisions in the sense that future inflows are represented by historical long-term observations, synthetically generated series, or assumed facts known in detail in advance (e.g., Cheng et al., 2008; Keppo, 2007). Stochastic optimization involves the realization of the operation under probabilistic descriptions of random streamflow processes, without the presumption of perfect foreknowledge of future inflows (e.g., Lee & Labadie, 2007; Maria & Mario, 1996). The objective functions of these optimization techniques are typically set as maximizing the (expected) energy production or the utilities derived from it, or minimizing the (expected) cost of satisfying the energy demand, over the time horizon (Liu et al., 2018; Sasireka & Neelakantan, 2017). To date, extensive optimization algorithms and applications have been developed to retrieve (near-)optimal solutions; see Ahmad et al. (2014) for a recent review. Despite intensive studies on the application of these optimization techniques, a considerable gap still exists between research and practice, especially in operations with rolling forecasts (Labadie, 2004). The main reasons for this discrepancy are as follows: (1) Many reservoir system operators are skeptical of the retrieved results because of the insufficient exposition of the physical mechanisms involved in the optimization. (2) To optimize the operation of large hydropower systems, mathematical manipulations must be performed that often incur a high computational cost (e.g., the curse of dimensionality); moreover, such manipulations cannot always guarantee the optimality of the solutions (e.g., convergence to a local optimum).
An alternative option for coping with such a complex large hydropower operation problem is to identify a better approach to supplying the energy with forecasting. A few theoretical studies have shown that investigating the properties of hydropower systems is fundamental to reforming optimization models, which helps to explore optimal rules and efficient algorithms (e.g., Zhao et al., 2014). Nicolaos and Jakob (1970), and later Turgeon (1980), simplified the hydropower system into one equivalent reservoir by aggregating the storage and inflow of the individual reservoirs in units of energy, which effectively surmounted the dimensionality problem in both long-term planning and short-term scheduling. Teegavarapu and Simonovic (2000) and Mo et al. (2013) proposed short-term operation models that minimize the system energy loss/consumption by water release. A real-world case study showed that, in general, the solution of maximizing the stored energy was the best short-term strategy (Xu et al., 2015). Accordingly, from the perspective of these energy-based targets, typical hydropower operation rules such as the K-value principle (Wang et al., 2014) and the storage effectiveness index rule (Lund, 1996; Lund & Guzman, 1999) were derived. The objective of maximizing the energy stored in reservoirs has long been regarded as equivalent to the target of minimizing the released energy (Piekutowski et al., 1993). Unfortunately, the energy principles used in these methods are derived from direct but empirical targets, which necessitates more elaborate theoretical analyses and discussions. Thus, it is imperative to formulate a systematic mathematical description for understanding the principles of mixed hydropower systems, which will facilitate the development of more efficient operation methods. To fill this gap, this paper aims to derive the fundamental spatial principles for hydropower operation in mixed reservoir systems and to propose new energy-based operation models that can be representative of both short-term (i.e., hourly or daily) and medium-term (i.e., weekly or monthly) operations. This paper is organized as follows: Section 2 introduces the concept of hydraulic potential energy and elaborates the internal relationship among energy forms. Section 3 describes the theoretical maximum hydraulic potential energy model (E1 model) and mathematically derives several important properties; in addition, the optimal spatial principle for mixed reservoir hydropower operation is described, followed by the proposal of the two-stage hydraulic potential energy model (E2 model) for practical use. Section 4 evaluates the operational performance of the practical E2 model using a hypothetical three-reservoir case study. Finally, the conclusions are drawn in Section 5.
Problem Formulation
For the remainder of this work, the following notations are used:
Notation 1. We denote by $S = \{1, 2, \dots, n\}$ the set of reservoir (and power plant) indices within a mixed n-reservoir system. For convenience, let n be the index of the most downstream reservoir.
Notation 2. We define a partial order "≤" over the set S: we say that $j \le i$ if either $j = i$ or the release from reservoir j can be recaptured by reservoir i.
Notation 3. Let $S_{\le i} := \{j \in S \mid j \le i\}$ be the set of reservoirs upstream of reservoir i (including reservoir i itself), $S_{\ge i} := \{j \in S \mid i \le j\}$ the set of reservoirs downstream of reservoir i (including reservoir i), and $S_{> i} := \{j \in S \mid i \le j \text{ and } j \ne i\}$ the set of reservoirs downstream of reservoir i (excluding reservoir i).
Notation 4. We denote by $S(i)$ the set of reservoirs that have a hydraulic connection with reservoir i, namely $S(i) := S_{\ge i} \cup S_{\le i}$.
Notation 5. We denote by $S_{u(i)}$ the set of upstream reservoirs immediately adjacent to reservoir i. Clearly, $S_{u(i)} \subseteq S_{< i}$.
Formulation of the Multi-Reservoir Hydropower System Operation
The operation of the multi-reservoir hydropower system must balance the release of multiple periods and multiple reservoirs to maximize the power generation. The release in a single period yields hydropower and affects the carryover storage of the reservoir and those of downstream reservoirs. Furthermore, the release from a single reservoir is not always sufficient for satisfying the system hydropower requirements. Thus, a joint operation is deemed essential.
In this paper, $V_t^i$ denotes the active storage above the reservoir dead storage $V_{\min}^i$. The system dynamics can be described by the water conservation equation

$$V_t^i = V_{t-1}^i + I_t^i + \sum_{j \in S_{u(i)}} (R_t^j + W_t^j) - R_t^i - W_t^i - e_t^i, \quad i \in S \qquad (1)$$

where $V_{t-1}^i$ and $V_t^i$ are the beginning and ending active storages of reservoir $i \in S = \{1, 2, \dots, n\}$ in time period t (a factor of m³/s·Δt is used to harmonize the measurement units, where Δt is the operation time interval); $I_t^i$ is the external (intermediate) inflow from rain and tributary streams that is expected to flow into reservoir i during period t (m³/s), as represented by forecasts; $R_t^i$ and $W_t^i$ are the power release (m³/s) and the non-power release (water spill, m³/s), respectively, so that the sum $(R_t^i + W_t^i)$ is the reservoir total outflow; $e_t^i$ is the evaporation, which is a function of the average reservoir surface area and is thus expressed as a function of the reservoir average storage; and $S_{u(i)}$ is the set of reservoirs immediately upstream of reservoir i (see Notation 5). This equation shows that the outflows from the upstream reservoirs become the inflows to the next downstream reservoirs.
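A minimal sketch of the balance in Equation 1, assuming all terms have already been converted to consistent volume units per period; the function and argument names are ours, not the authors':

```python
def water_balance(V_prev, inflow, R, W, evap, upstream_outflows):
    """Ending active storage of one reservoir for one period (Eq. 1):
    carryover plus forecast inflow plus the outflows recaptured from the
    immediately upstream reservoirs, minus power release, spill, and
    evaporation."""
    recaptured = sum(Rj + Wj for Rj, Wj in upstream_outflows)
    return V_prev + inflow + recaptured - R - W - evap

# Reservoir 2 recaptures the outflow (R, W) = (5.0, 1.0) of reservoir 1:
V2 = water_balance(V_prev=60.0, inflow=3.0, R=4.0, W=0.0, evap=0.2,
                   upstream_outflows=[(5.0, 1.0)])
```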
The reservoir hydraulic head varies with the reservoir storage and the tailrace water level (Bayón et al., 2009; Zhao et al., 2014). The hydraulic head of reservoir i at the end of period t and the average hydraulic head over period t are

$$H_t^i = Z^i(V_t^i) - z^i(R_t^i + W_t^i), \qquad \bar H_t^i = \tfrac{1}{2}\left(H_{t-1}^i + H_t^i\right) \qquad (2, 3)$$

where $Z^i(\cdot)$ and $z^i(\cdot)$ are the reservoir elevation-storage function and the tailrace elevation-outflow function, respectively. The power output of reservoir i during period t depends bilinearly on the hydraulic head and the power release,

$$P_t^i = \eta^i \bar H_t^i R_t^i \qquad (4)$$

where $\eta^i$ is a constant factor that represents the efficiency of power generation.
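The head and output relations can be sketched as below; `Z` and `z` stand for the fitted elevation-storage and tailrace elevation-outflow curves, which are placeholders here, and the arithmetic mean of the boundary heads as the period-average head follows our reconstruction of Equation 3 rather than a formula confirmed by the source:

```python
def head(Z, z, V_end, total_outflow):
    """Effective head at the end of a period (Eq. 2): forebay elevation
    Z(V) minus tailrace elevation z(R + W)."""
    return Z(V_end) - z(total_outflow)

def power_output(eta, H_start, H_end, R):
    """Bilinear output relation (Eq. 4) with a period-average head."""
    H_avg = 0.5 * (H_start + H_end)
    return eta * H_avg * R
```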
In the hydropower system, the following constraints must also be considered: the lower and upper limits on reservoir storage ($V_{\min}^i$, $V_{\max,t}^i$), power release ($R_{\min,t}^i$, $R_{\max,t}^i$), and power output ($P_{\min,t}^i$, $P_{\max,t}^i$); the reservoir system firm power output ($P_{\min,t}$); and the non-negative spill ($W_t^i$). These constraints can be written as

$$V_{\min}^i \le V_t^i \le V_{\max,t}^i \qquad (5)$$
$$R_{\min,t}^i \le R_t^i \le R_{\max,t}^i \qquad (6)$$
$$P_{\min,t}^i \le P_t^i \le P_{\max,t}^i \qquad (7)$$
$$\sum_{i \in S} P_t^i \ge P_{\min,t} \qquad (8)$$
$$W_t^i \ge 0 \qquad (9)$$
Equation 1 and Equations 5-9 are the basic constraint sets of the mixed reservoir hydropower system, in which $V_t^i$, $R_t^i$, and $W_t^i$, $i \in S$, are the decision variables of the current period. The following analyses and discussions are based upon these equations. For conciseness, from now on, we write the head as $H_t^i = H^i(V_t^i, R_t^i + W_t^i)$.
Definition of the Hydraulic Potential Energy
The above equations indicate that hydropower operation is clearly distinct from water supply operation, since the former involves a process of energy transfer in addition to water quantity. Nicolaos and Jakob (1970) first proposed the idea of the "potential energy" of a hydropower system by converting the reservoir water stores into their potential energy equivalents and formulating a one-dam model. Follow-up researchers further clarified the potential energy by estimating the hydropower produced by the complete depletion of the reservoir for given storages (Becker & Yeh, 1974; Secundino & Adriano, 1993; Terry et al., 1986). Nevertheless, the mathematical basis of this representation is not clear.
In this study, we propose a new representation as follows. According to the concept of gravitational potential energy in physics, the stored energy depends on the mass and on the centroid distance between the upstream and downstream water bodies. If there is only one reservoir with a storage $V_t$, the gravitational potential energy U is

$$U = \rho k g V_t (\bar z_t + H_{\min}) \qquad (10)$$

where ρ is the density; k is the storage unit conversion factor; g is the local gravitational field; $\bar z_t$ is the geometric centroid of the reservoir waterbody above the dead storage; and $H_{\min}$ is a constant value that represents the reservoir minimum hydraulic head.
The determination of the centroid is complicated, which renders further theoretical analysis difficult. To make the mathematical deduction clearer and more concise, this paper transforms the potential energy by substituting the effective hydraulic head ($H_t = h_t + H_{\min}$, where $h_t$ is the depth of the reservoir storage) for the centroid distance ($\bar z_t + H_{\min}$), and the power generation efficiency (η) for the constant term (ρkg). Since this simplified term no longer represents the strict potential energy but is a type of power output in the hydraulic system, we define it as the "hydraulic potential energy"

$$\phi_t = \eta V_t H_t \qquad (11)$$

This substitution is mathematically reasonable if the focus is on searching for the extreme values in hydropower operation, because for any given reservoir the centroid $\bar z_t$ is a monotonically increasing function of the effective hydraulic head, $\bar z_t = f(H_t)$. The transition function f depends on the shape of the reservoir waterbody, which can typically be obtained via regression based on field measurements (a special case is presented in Appendix I, in which the function f can be directly derived by approximating the shape of the waterbody as an inverted pyramidal frustum). Therefore, maximizing the hydraulic potential energy is mathematically identical to maximizing the gravitational potential energy.
Therefore, the hydraulic potential energy at the end of period t for the complete system is

$$\phi_t = \sum_{i \in S} V_t^i \sum_{j \in S_{\ge i}} \eta^j H_t^j \qquad (12)$$

where $S_{\ge i}$ is the set of reservoirs (power plants) downstream of reservoir i (including reservoir i; see Notation 3). The cumulative effective hydraulic head ($\sum_{j \in S_{\ge i}} \eta^j H_t^j$) is adopted because reservoirs in series allow downstream reservoirs to recapture and reuse the upstream release.
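Equation 12 can be evaluated directly once the downstream sets of Notation 3 are tabulated; a sketch with hypothetical numbers:

```python
def system_potential_energy(V, H, eta, downstream):
    """phi_t = sum_i V_i * sum_{j in S_{>=i}} eta_j * H_j (Eq. 12).
    downstream[i] is the set S_{>=i}: reservoir i itself plus every
    reservoir that can recapture water released from i."""
    return sum(V[i] * sum(eta[j] * H[j] for j in downstream[i]) for i in V)

# Two-reservoir cascade with reservoir 1 upstream of reservoir 2:
V, H, eta = {1: 40.0, 2: 60.0}, {1: 13.2, 2: 18.1}, {1: 8.5, 2: 8.5}
downstream = {1: {1, 2}, 2: {2}}
print(system_potential_energy(V, H, eta, downstream))
```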
It is natural to generalize the concept of the hydraulic potential energy to various contexts. We define the inflow-hydraulic potential energy as

$$\varphi_t^I = \sum_{i \in S} I_t^i \sum_{j \in S_{\ge i}} \eta^j \bar H_t^j \qquad (13)$$

which estimates the time-averaged hydraulic potential energy added to the reservoir system by the inflow over period t. Similarly, we define

$$\varphi_t^R = \sum_{i \in S} \eta^i \bar H_t^i R_t^i, \qquad \varphi_t^W = \sum_{i \in S} \eta^i \bar H_t^i W_t^i, \qquad \varphi_t^e = \sum_{i \in S} e_t^i \sum_{j \in S_{\ge i}} \eta^j H_{t-1}^j \qquad (14\text{-}16)$$

These terms estimate the hydraulic potential energy consumption, waste, and loss from the system through power release, non-power release (water spill), and evaporation, respectively. Note that the outflow $(R_t^i + W_t^i)$ from reservoir i can be further reused by the downstream reservoirs; thus, the energy losses through the release are related only to each reservoir itself. For brevity, we assume that both $e_t^i$ and $\varphi_t^e$ are functions of the reservoir initial storages. The variables of inflow, release, and hydraulic head are presented as period averages rather than as integrals over the time interval because (1) for short-term operation, the variance of the inflow/outflow can be overlooked due to the relatively short time step, and (2) for medium- or long-term operation, the integration of instantaneous values is overly complicated for further analysis, thereby requiring a simplification. This is a common manipulation when calculating hydropower production or energy losses (Feltenmark & Lindberg, 1997; Xu et al., 2015).
Subsequently, we define the system hydraulic potential energy increment as $\varphi_t^V$, which accounts for the impacts of the changing storages and hydraulic heads between the beginning and the end of period t (Equation 17). In particular, if the reservoir head function can be piecewise-linearized, Equation 17 reduces to a form in which the cumulative increment of the downstream hydraulic heads, $\sum_{j \in S_{\ge i}} \eta^j \Delta H_t^j$, appears explicitly.
Theorem 1. (Energy Transformation Formula) The following identity holds:

$$\phi_t = \phi_{t-1} + \varphi_t^I + \varphi_t^V - \varphi_t^R - \varphi_t^W - \varphi_t^e \qquad (18)$$

where the hydraulic potential energies are defined as in Equations 11-17.
Proof. A straightforward calculation starting from the water conservation equation (Equation 1) yields an identity (Equation 19) relating the storage and head terms of successive periods. Adding the term $\sum_{i \in S} V_t^i \sum_{j \in S_{\ge i}} \eta^j H_t^j + \sum_{i \in S} V_{t-1}^i \sum_{j \in S_{\ge i}} \eta^j H_{t-1}^j$ to both sides of Equation 19 recovers Equation 18. Hence, Theorem 1 is proved. █ This formula expresses the transformation relationship among the various hydraulic potential energy types in period t for mixed hydropower systems and is therefore referred to as the "energy transformation formula".
Hydraulic Potential Energy Model
The concept of the hydraulic potential energy offers a unified definition of both the stored and the lost energy. In this sense, Theorem 1 reflects the law of energy conversion and conservation among reservoirs and operation periods. According to the energy transformation formula, maximizing the hydraulic potential energy at the end of period t functionally realizes the objective of minimizing the energy losses by the outflow. Intuitively, maximizing the hydraulic potential energy additionally optimizes the allocation of the energy storage; consequently, more potential energy is retained for future operation (a rigorous proof of this statement is provided in Section 3.2).
Therefore, mathematically, the operational objective for mixed hydropower systems based on rolling forecasts is to maximize the system carryover hydraulic potential energy, subject to a variety of physical and economic constraints (hereinafter referred to as the "E1 model"):

$$\max_{X_t} \ \phi_t = \sum_{i \in S} V_t^i \sum_{j \in S_{\ge i}} \eta^j H_t^j \quad \text{subject to Equations 1 and 5-9} \qquad (20)$$
Here, $X_t = (V_t^i, R_t^i, W_t^i;\ i \in S)$ denotes the decision variable set to be determined at the beginning of period t. The constraints of the system include the water continuity constraints, the reservoir storage constraints corresponding to the physical limitations, the mandatory release constraints, the output capacity constraints reflecting the turbine limitations, and the minimum system power output requirement, as listed in Equation 1 and Equations 5-9, respectively. Note that the parameters in the constraints can vary across time periods and scenarios. For example, the minimum release can be enlarged in the high-flow season for environmental protection, and the minimum power output of the individual reservoirs can sometimes be raised to a firm output to meet power-supply reliability requirements.
This model is valid if reservoir systems are primarily used for hydropower generation or a hydropower and water supply integrated operation. The downstream water demands are assumed to be satisfied by the minimum water releases. The practical value of this research lies in the observation that many reservoir systems located in mountains and valleys are operated only for hydropower generation for a major part of the year (except on some days that involve flooding), such as in the case of the upper Yangtze River and upstream of the Lancang-Mekong River in Southwest China (Jiang et al., 2018;Zeng et al., 2013). Furthermore, a single-objective hydropower analysis is the basis of a multi-objective problem analysis. If all the single-objective problems are well understood and addressed, the multi-objective operation becomes easier. In other words, performing a single-objective analysis is essential and valuable for the realization of a multi-objective analysis, since the core of the latter is to ensure a balance among objectives. The proposed hydropower operation model can be merged into multi-objective optimization models to represent the hydropower aspects.
Remark 1. In the present study of the E1 model, the current-period forecast $I_t^i, i \in S$ is assumed to be known perfectly, while the inflows of subsequent periods are assumed to be unknown.
Properties of the E1 Model
In this section, we focus on the characteristics of the E1 model. More concretely, we prove that:
1. The optimal path of the E1 model decision variables depends only on the independent variables $V_t^i, i \in S$ (see Remark 2);
2. The optimal solution of the E1 model is a reasonable approximation to the optimal strategy of maintaining the overall potential energy (see Remark 3);
3. If the head function in Equation 2 can be approximated with concave functions, the optimal solution to the E1 model is unique (see Proposition 3).
Lemma 1. For any $i \in S$, the storage depth $h_t^i$ is a monotonically increasing function of $V_t^i$. Proof. See Appendix II. █ Similarly, this conclusion can be shown to hold for the head $H_t^i$.
Proposition 1. At the optimum of the E1 model, no water is spilled from any reservoir $i \in S$ unless (i) its storage constraint (Equation 5) and (ii) its power output constraint (Equation 7) are binding.

Proof. See Appendix III. █ Since the feasible region defined by the power-output limits is contained in that defined by the release limits, the power release and power output constraints in Equations 6-7 can be treated jointly. Suppose that, at the optimum, $W_t^{i_0,*} > 0$ while $P_t^{i_0,*} < P_{\max,t}^{i_0}$ for some $i_0 \in S$, and consider the perturbation that converts a small amount ε of spill into power release: $R_t^{i_0} = R_t^{i_0,*} + \varepsilon$ and $W_t^{i_0} = W_t^{i_0,*} - \varepsilon$. Since the reservoir storage and total outflow remain unchanged, the hydraulic head is still $H_t^{i_0,*}$, and the power output of reservoir $i_0$ increases by $\eta^{i_0} \varepsilon H_t^{i_0,*}$. Consequently, the power output burden of the other reservoirs, $\sum_{i \in S \setminus \{i_0\}} P_t^i$, is reduced by $\eta^{i_0} \varepsilon H_t^{i_0,*} > 0$; thus, the hydraulic potential energies of these reservoirs can be further enlarged. This contradicts the fact that $\phi_t^*$ is the optimal hydraulic potential energy, which proves Proposition 1. █ Proposition 1 implies that the optimization process of the E1 model does not lead to a water spill from any $i \in S$ until both the storage and power output constraints (Equations 5 and 7, respectively) are binding.
Remark 2. From the continuity constraint in Equation 1, it follows that the decision variables $V_t^i$, $R_t^i$, and $W_t^i$ are not independent; the relationship among them is expressed by Equation 21. Various combinations of $R_t^i$ and $W_t^i$ can be formed once the total outflow $R_t^i + W_t^i$ is specified. Following case (ii) in Proposition 1, for any $i \in S$, once the variables $V_t^k, k \in S_{\le i}$ have been fixed, the optimal path of the E1 model is to release water for power generation until $P_t^{i,*} = P_{\max,t}^i$, and only the extra water is spilled; $R_t^{i,*\mathrm{Path}}$ and $W_t^{i,*\mathrm{Path}}$ denote the corresponding values of this optimal path (Equation 22). According to this expression, the optimal path of the E1 model decision variables always depends solely on $V_t^i, i \in S$.

Proposition 2. If $X_t^*$ satisfies (i) $V_t^{i,*} < V_{\max,t}^{i}$ for any $i \in S$ and (ii) $P_t^{j,*} > P_{\min,t}^j$ for all $j \in S_{\ge i}$, the inequality in Equation 8 becomes an equality.

Proof. Suppose the contrary holds; that is, there exists an optimal solution with $\sum_{i \in S} P_t^{i,*} > P_{\min,t}$. We have proved that the optimal path of the decision variables of the E1 model relies exclusively on $V_t^i, i \in S$ (see Remark 2). Consider a perturbation in which reservoir $i_0$ stores slightly more water while the decisions for all other reservoirs remain unchanged. Recalling Equation 4, the power output of reservoir $i_0$ is proportional to the product of the release and the head, and $P_t^{i_0}$ depends largely on $R_t^{i_0}$ rather than on $H_t^{i_0}$. It then follows from Lemma 1 and condition (ii) in Proposition 1 that the perturbed solution remains feasible. In addition, the higher water storage in reservoir $i_0$ yields a larger hydraulic potential energy, so the system hydraulic potential energy increases. This contradicts the fact that $\phi_t^*$ is the optimal hydraulic potential energy. Therefore, our assumption does not hold, and Equation 8 must be an equality. Proposition 2 is thus proved. █

It follows from Theorem 1 that the objective function of the E1 model is $\phi_t = \phi_{t-1} + \varphi_t^I + \varphi_t^V - \varphi_t^R - \varphi_t^W - \varphi_t^e$. The optimal values of the other two variable sets $R_t^i, W_t^i, i \in S$ under a fixed $V_t^i, i \in S$ can be determined easily using Equations 21 and 22. Note that both $\varphi_t^e$ and $\phi_{t-1}$ are known quantities at the beginning of the decision because they are functions of the reservoir initial storages. The E1 model objective can therefore be rephrased as maximizing $\varphi_t^I + \varphi_t^V$ (Equation 23).

Remark 3. Combining Proposition 1 and Proposition 2 yields the following: if $W_t^{i,*} > 0$ for any $i \in S$ or $\sum_{i \in S} P_t^{i,*} > P_{\min,t}$, the reservoir must already be full or bound by the release/power-output constraints. In other words, if the reservoir systems are strictly regulated within the feasible regulatory regions, the spilled-water-hydraulic potential energy satisfies $\varphi_t^{W*} = 0$ and the power-release-hydraulic potential energy satisfies $\varphi_t^{R*} = P_{\min,t}$. Thus, the energy losses associated with the reservoir outflow of the optimal solution to the E1 model are minimal ($\varphi_t^{R*} + \varphi_t^{W*} = P_{\min,t}$). Furthermore, the active constraints (Equations 5 and 7) ensure that the hydraulic potential energy target does not diverge from the target of minimizing the energy cost of the outflow. Beyond this energy-cost target, the proposed E1 model additionally maximizes the sum of the inflow-hydraulic potential energy and the system energy increment ($\varphi_t^I + \varphi_t^V$ in Equation 23). If Remark 1 holds, we confidently conclude that the optimal solution to the proposed multi-reservoir hydropower operation model (the E1 model) is a reasonable approximation to the optimal strategy of maintaining the overall potential energy.
To further analyze the E1 model, some convexity properties are desirable. However, this model is not guaranteed to be a convex problem. As with the objective formula of the E1 model, the non-convexity of hydropower production has long been regarded as a key obstacle in hydropower problems (Feltenmark & Lindberg, 1997; Goor et al., 2011). Approximation methods, such as piecewise linear approximation (Borghetti et al., 2008) and convex function approximation (Zhao et al., 2014), are typically applied to bypass this difficulty and enhance the computational efficiency.
The convex function approximation can take a variety of forms; for example, it can be assumed that the head varies in a piecewise-linear manner with storage or outflow (Lima et al., 2013) or that the upstream or downstream level is constant (El-Hawary & Christensen, 1979). As a mathematical property of concave functions, the sum of a concave function and a linear function is concave. Under both of these conditions, it is easy to prove that the reservoir head functions are also concave functions of the decision variables. Following these methods, we denote the approximation to the head function defined in Equation 2 by concave functions. Subsequently, the following property is obtained.
Proposition 3. If the head functions $H^i(\cdot), i \in S$, can be approximated with concave functions, the E1 model is a convex optimization problem.
Recall that a convex problem is of the form $\max_{X_t} \varphi_t(X_t)$ s.t. $X_t \in C$, where C is a convex set and $\varphi_t$ is a concave function over C.
Proof. We first check the convexity of the constraint set. From Equation 21, for any $i \in S$, the release $R_t^i$ is a linear function of the storages $V_t^k, k \in S_{\le i}$; hence, as can be verified from its second-order partial derivatives, the power output $P_t^i$ is a concave function of the decision variable $X_t = (V_t^1, V_t^2, \dots, V_t^n)^T$. Given the negative linear correlation between $R_t^i$ and $V_t^i$, the constraint set defined by Equations 5-9 is convex. Now consider the objective. Since $H_t^i$ is a concave function of $X_t$, the non-negative combination $\sum_{j \in S_{\ge i}} \eta^j H_t^j$ is also concave, and thus the objective function $\sum_{i \in S} V_t^i \sum_{j \in S_{\ge i}} \eta^j H_t^j$ of the E1 model is a concave function of $X_t$.
To conclude, the E1 model is a convex optimization problem of maximizing a concave function over a convex set. █ As an important property of a convex problem, any local maximum of the E1 model is a global maximum. Since the objective function is strictly concave, the model has a unique optimal solution $X_t^*$ on compact sets.
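As a concreteness check, the toy script below solves a two-reservoir E1 instance with an off-the-shelf solver, using a concave square-root head fit; every number, the head fit, and the firm-output level are hypothetical, and the releases are recovered from the continuity equation as in Remark 2:

```python
import numpy as np
from scipy.optimize import minimize

# Toy cascade: reservoir 1 feeds reservoir 2; H_i(V) = a_i + b_i*sqrt(V).
a, b = np.array([10.0, 15.0]), np.array([0.5, 0.4])
eta = np.array([8.5, 8.5])
V0 = np.array([40.0, 60.0])   # initial active storages
I = np.array([5.0, 3.0])      # current-period forecast inflows
P_min = 500.0                 # firm system output for the period

def head(V):
    return a + b * np.sqrt(np.maximum(V, 0.0))

def releases(V):
    # Continuity (Eq. 1) with no spill or evaporation in this toy case.
    R1 = V0[0] + I[0] - V[0]
    R2 = V0[1] + I[1] + R1 - V[1]
    return np.array([R1, R2])

def neg_phi(V):
    H = head(V)
    # S_{>=1} = {1, 2}, S_{>=2} = {2}: downstream reuse of upstream water.
    return -(V[0] * (eta[0]*H[0] + eta[1]*H[1]) + V[1] * eta[1]*H[1])

constraints = [
    {"type": "ineq", "fun": releases},  # non-negative releases
    {"type": "ineq",                    # firm power output (Eq. 8)
     "fun": lambda V: eta @ (0.5*(head(V0) + head(V)) * releases(V)) - P_min},
]
res = minimize(neg_phi, x0=0.9 * V0, method="SLSQP",
               bounds=[(0.0, 80.0), (0.0, 120.0)], constraints=constraints)
print("carryover storages:", res.x, " phi_t:", -res.fun)
```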
Optimal Spatial Principle for Multi-Reservoir Hydropower Operation
The theoretical reservoir operation rule is useful for practical operations and for understanding the operation of multiple reservoir systems. This section derives the allocation rule for hydropower system operation.
Theorem 2. (Relative Marginal Energy principle) If both the power output and the storage capacity constraints are nonbinding, the necessary condition for optimal hydropower operation is to equalize the Relative Marginal Energy (RME) over the reservoirs (Equation 24), where $X_t$ denotes the decision variable of the E1 model. Proof. The E1 model (Equation 20) can be rewritten as a standard constrained maximization problem.
We convert this primal problem into the Lagrangian dual problem (Equation 26), subject to $\beta_j^i, \beta_0 \ge 0$, $i \in S$, $j \in \{1, 2, \dots, 5\}$. Provided that the objective and constraint functions are continuously differentiable, the Karush-Kuhn-Tucker (KKT) conditions yield the necessary condition for the optimal decision; namely, there exist $\alpha^i$ and $\beta_j^i, \beta_0 \ge 0$, $i \in S$, $j \in \{1, 2, \dots, 5\}$, satisfying Equation 27, where $\alpha^i$, $\beta_j^i$, and $\beta_0$ are the multipliers defined by the KKT conditions.

Because the KKT conditions are difficult to interpret physically owing to their complex form, we assume that, at the optimum, none of the storage and power release/output constraints are binding ($g_1^i(X_t) < 0$, $g_2^i(X_t) < 0$, $g_3^i(X_t) < 0$, $g_4^i(X_t) < 0$ for any $i \in S$). Consequently, $\beta_j^i = 0$ for any $i \in S$, $j \in \{1, 2, \dots, 5\}$.
By Proposition 1 and Proposition 2, the E1 model decision variables then satisfy the remaining KKT conditions, and the optimal spatial principle for mixed reservoir hydropower operation is to equalize a common term across the reservoirs. Substituting $g_0$ and $\varphi_t^R$ into this condition yields Equation 24. In particular, if the reservoir tail-water levels have a linear relationship with the outflow, the optimal principle above becomes explicitly solvable; the explicit expressions of $\varphi(X_t)$, $g_0(X_t)$, and $\beta_0$ are given in Appendix IV.

Equivalently, we find that, without binding power output and storage capacity constraints, the optimal spatial principle is equivalent to the equalization of the $\mathrm{MEI}(V_t^i)$ values. Recalling Proposition 3, we conclude that this MEI principle is globally necessary and sufficient on the condition that the tail-water rises linearly with the outflow.
This proves Theorem 2. █ It should be emphasized that only the MEI principle depends on the tail-water assumption; the RME principle requires no implicit assumption. Considering the duality principle, irrespective of the convexity of the primal maximization problem (the E1 model), the solution to the dual problem (Equation 26) always provides an upper bound on the solution of the primal problem. In other words, even if the tail-water assumption does not hold or the E1 model is a non-convex optimization problem, the KKT conditions in Equation 27 and the optimal RME principle in Equation 24 remain necessary for the determination of the optimal solution to the E1 model.
The denominator in $\mathrm{MEI}(V_t^i)$ is always positive, since the head function has been proven to be an increasing function of $V_t^i$ (see Lemma 1). The MEI values are not independent among reservoirs because of the cascade effect, and the relation $g_0(X_t) \ge 0$ must hold since the KKT multiplier satisfies $\beta_0 \ge 0$.
The seemingly complicated expressions of the RME and MEI are actually easy to understand. They are physically interpreted as the relative marginal energy effectiveness of the system power production and as the marginal system potential energy $\partial \phi_t / \partial V_t^i$ arising from a marginal change in reservoir storage $V_t^i$, respectively. Due to the independence of $V_t^i, i \in S$, the strategy of reducing $V_t^{i_0}$ of a reservoir $i_0 \in S$ corresponds to reducing the storage of only $i_0$ while those of all the other reservoirs, $V_t^i, i \in S \setminus \{i_0\}$, remain unchanged. In other words, the amount of water released from reservoir $i_0$ runs through all downstream reservoirs $j \in S_{\ge i_0}$ until it exits the system before the end of the time step. From the perspective of energy transformation, reservoir operators typically expect a higher hydropower energy production and a lower system hydraulic potential energy loss for every unit of release.
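Since the RME and MEI reduce to marginal quantities, they can be approximated numerically when the analytical expressions are unavailable; the sketch below ranks reservoirs by a finite-difference stand-in for the index (release first from the largest value, per Remark 4). The helper names and the supplied `marginal_power` callable are ours:

```python
def dphi_dV(phi, V, i, dv=1e-4):
    """Central-difference estimate of the marginal system potential energy
    d(phi_t)/dV_t^i with respect to the storage of reservoir i."""
    Vp, Vm = dict(V), dict(V)
    Vp[i] += dv
    Vm[i] -= dv
    return (phi(Vp) - phi(Vm)) / (2 * dv)

def rme_ranking(marginal_power, phi, V):
    """Release priority per Remark 4: rank reservoirs by the ratio of the
    marginal power production gained to the marginal system potential
    energy lost per unit of drawdown, largest first."""
    return sorted(V, key=lambda i: marginal_power(i) / dphi_dV(phi, V, i),
                  reverse=True)
```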
Remark 4. If the RME (or MEI) cannot be equalized, minimization of the dispersion of the RME (or MEI) becomes the optimal spatial principle. In this case, the strategy is to release water from the reservoirs that have the largest RME (or MEI) values and to restore water in the reverse order.
Remark 5. Consider a cascade reservoir system in which all reservoirs have the same characteristics and hydrological conditions; for instance, $a^i$ is assumed to be zero, and at the beginning of the time period $\partial H_t^i / \partial V_t^i$ and $V_t^i$ are the same for all $i \in S$. A straightforward calculation based on Theorem 2 (the MEI principle) shows that, under these identical conditions, a marginal storage change in reservoir i produces a larger relative marginal energy supply than that of the subsequent downstream reservoir i+1. Thus, we conclude that the E1 model has a tendency to release water from the upstream reservoirs first, whereas restoration occurs from the downstream reservoirs first, according to the $\mathrm{MEI}(V_t^i)$ values.
Theorem 2, along with Remark 4, indicates that the maximum hydraulic potential energy model tends to release more water from the reservoirs with a relatively higher hydropower productivity and a lower system potential energy loss (i.e., a larger RME or MEI). In particular, the observation in Remark 5 agrees with the operation rule identified in previous studies focusing on reservoirs in series (e.g., Lund & Guzman, 1999; Xu et al., 2015).
For parallel reservoirs, an illustrative example is provided. Consider a hypothetical two-parallel-reservoir system, as described in Table 1 and Figures 2(a)-2(b). Let the system firm power output be 50 MW. The average values of the 24-month inflows are 25 m³/s and 32 m³/s for the two reservoirs. Let the reservoir elevation-storage relationships be polynomial functions and the tailrace elevation-outflow relationships be linear functions. The decision is made to satisfy the stipulated energy demand based on the rolling single-period forecast (see Remark 1).
To evaluate the effectiveness of the MEI principle, two objectives, namely, (1) maximizing the system hydraulic potential energy (the E1 model, defined in Equation 20) and (2) minimizing the difference between the MEI values of the reservoirs (the MEI principle in Theorem 2), are compared. The traversing method is applied to calculate the optimal power generation combination.
The results demonstrate that the optimal solutions under the two objectives are exactly the same. The optimal operation processes and the corresponding marginal energy index values (Equation 25) for each reservoir are presented in Figures 2(c)-2(d). Within periods 3 to 5, the relation $\mathrm{MEI}(V_t^1) > \mathrm{MEI}(V_t^2)$ always holds, and the optimal operation is to release all water from Reservoir 1; the opposite trend is observed after period 6. In periods 2 and 6, the optimal operation is for the two reservoirs to collaborate in power generation and to equalize the MEI values, $\mathrm{MEI}(V_t^{1,*}) = \mathrm{MEI}(V_t^{2,*})$. These results are consistent with the implications of Remark 4 and Theorem 2. After period 20, the storage of Reservoir 1 reaches its upper boundary; the optimal operation is then to release the entire inflow from Reservoir 1, while the remaining power demand is fulfilled by Reservoir 2.
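The "traversing method" used for Figure 2 is plain enumeration over a release grid; a sketch, in which the grid resolution and the `phi_after`/`feasible` callables are ours:

```python
import itertools

def traverse(grid1, grid2, phi_after, feasible):
    """Enumerate release combinations (r1, r2) of the two parallel
    reservoirs and keep the feasible pair that maximizes the carryover
    hydraulic potential energy phi_after(r1, r2)."""
    best, best_phi = None, float("-inf")
    for r1, r2 in itertools.product(grid1, grid2):
        if feasible(r1, r2) and phi_after(r1, r2) > best_phi:
            best, best_phi = (r1, r2), phi_after(r1, r2)
    return best, best_phi
```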
Comparison With Typical Hydropower Operation Methods
The hydraulic potential energy model and the optimal spatial principle are presented from the mathematical perspective. This section compares the proposed model and principle with certain typical energy-based hydropower operation models and principles that have long been utilized in similar studies. These principles include the minimization of the energy-cost with power release and spill (e.g., Mo et al., 2013;Teegavarapu & Simonovic, 2000), the K-value principle (e.g., Wang et al., 2014), and the Storage Effectiveness Index (SEI) principle (Lund, 1996).
The minimum energy-cost model involves minimizing the cost of the energy release and the surrogate cost associated with water spill while satisfying the energy demand. The objective function based on rolling forecasts can be expressed as (Teegavarapu & Simonovic, 2000; Xu et al., 2015)

$$\min_{X_t} \ \sum_{i \in S} \left( \eta^i \bar H_t^i R_t^i + C^i W_t^i \right) \qquad (33)$$

where $C^i$ is a constant that represents the surrogate cost of each unit of spill from reservoir i. This model can be rewritten informally as $\min (\varphi_t^R + \varphi_t^W)$. As stated in Remark 3, the objective function of the E1 model, although it differs from this traditional objective function, eventually realizes the minimization of the energy losses by the outflow. That is, the (unique) optimal solution $X_t^* = (V_t^{i,*}, R_t^{i,*}, W_t^{i,*};\ i \in S)$ to the E1 model is also an optimal solution to $\min (\varphi_t^R + \varphi_t^W)$. In contrast, the optimal solutions to $\min (\varphi_t^R + \varphi_t^W)$ for n > 1 are not unique. For instance, consider n = 2: the optimality conditions yield only five equalities for six decision variables, so, considering the degrees of freedom, the optimal solutions are not unique. This finding suggests that conventional objectives that focus only on the outflow energy cost are not optimal by themselves; rather, they represent a necessary-but-not-sufficient condition for system energy maintenance. In contrast, the E1 model possesses significant advantages over the conventional models, at least in additionally optimizing the storage (or energy) allocation among the individual reservoirs in the system.
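For comparison, the conventional objective of Equation 33 is a one-line sum; the argument names are ours, with `C` the surrogate spill cost:

```python
def outflow_energy_cost(eta, H_avg, R, W, C):
    """Conventional objective (Eq. 33): energy consumed by power release
    plus a surrogate cost for spill, summed over the n reservoirs."""
    return sum(eta[i] * H_avg[i] * R[i] + C[i] * W[i] for i in range(len(R)))
```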
In terms of the operation principle, the K-value principle involves minimizing the impact on the hydropower revenue while avoiding the release of all upstream available inflows. The K-value estimates the percentage reduction in the hydropower head; it is defined for reservoirs in parallel and in series in Equations 34 and 35, respectively. The operation of hydropower reservoirs follows the rule that the reservoirs with a small K-value supply water first and store last (Wang et al., 2014).
where $CI_t^i$ is the cumulative intermediate inflow to reservoir i from the current period t to the end of a drawdown cycle (i.e., between October 1st of one year and March 31st of the next year) or a refill cycle (i.e., between April 1st and September 30th); $EV(CI_t^i)$ is the expected value of the cumulative inflow to reservoir i, represented by the means of historical statistics; and $F_{t-1}^i$ and $H_{t-1}^i$ are the surface area and hydraulic head of reservoir i at the beginning of period t.
The details of the SEI principle can be found in Lund (1996). The K-value and SEI principles share a similar operating mechanism. If the system natural inflows are not sufficient to satisfy the system power requirement in the drawdown season, these principles estimate, for each reservoir, the ratio of the energy loss caused by the extra release requirement to the power shortfall. The reservoirs with the lowest ratios are drawn down first. To illustrate the three methods, a system consisting of two parallel reservoirs is considered (Figure 3). One is a large reservoir (R1) with a mild elevation-storage relationship, and the other is a smaller reservoir (R2) with a substantially steeper elevation-storage relationship. The firm energy requirement for the current operation step is $P_{\min,t}$. Figure 3(a) illustrates the operation strategies of the K-value and SEI principles. In these strategies, all natural inflow is first released ($R_t^1$ and $R_t^2$), and the shortfall of firm hydropower production due to the insufficient inflow is then estimated as $\Delta P = P_{\min,t} - \sum_{i=1}^{2} P_{I,t}^i$, where $P_{I,t}^i$ is the power output produced by releasing the natural inflow. Since the additional release from R1 results in a substantially smaller head reduction than that from R2, eliminating the same power shortfall also leads to a smaller energy loss. Therefore, the additional release ($R_t^3$) is taken from R1. The final operation decision is to release $R_t^1 + R_t^3$ from R1 and $R_t^2$ from R2. In contrast to the above method, the proposed E1 model directly determines the optimal release decisions ($R_t^{1,*}$ and $R_t^{2,*}$) by maximizing the system hydraulic potential energy while meeting the power output target (see Figure 3(b)). Theoretically, the operation priority can also be judged by the value of $\mathrm{MEI}(V_t^i)$ at the beginning of and during the operation. Considering the large size difference between the two reservoirs, the strategy of satisfying the system power requirement entirely from R1 while maintaining R2 at a higher level substantially outperforms the simultaneous release from both reservoirs (the K-value and SEI operation strategies).
The model of minimizing the outflow energy cost, along with the K-value and SEI principles, implicitly assumes that the initial storage distribution among reservoirs is already optimal; only then do releasing all the inflow and minimizing the energy loss of the extra drawdown lead to the maximization of the system stored energy. However, this assumption is not necessarily correct, and the opposite can occur in some cases. Additional shortcomings of these commonly used methods are as follows: (1) The energy cost, along with the stored energy, lacks a formal definition and has not been extensively discussed. (2) The release quantity allocation among reservoirs is not prescribed if the K-values or SEIs are close. (3) Release priorities based only on the initial condition are insufficient, since changes in the water level within the operation period might change the priority. The proposed E1 model possesses the necessary characteristics for overcoming these limitations.
Two-Stage Hydraulic Potential Energy Model
The E1 model is constructed based only on the current-period forecast and not on the distant future inflow (see Remark 1), which involves high uncertainty. Although the optimal solution to the E1 model leads to a higher system hydraulic potential energy (see Remark 3), it considers neither the future risk of reservoir overdraft nor that of overflow. For instance, if the inflows of the next period are low, the upstream reservoirs may not have sufficient water to satisfy the minimum power output (overdraft), which would decrease the reliability of power generation in the long run; alternatively, if the following inflows are high, the downstream reservoirs may need to spill water because they are close to their storage capacity (overflow). This merits special attention in arid regions, in which the water stored within the system is reserved not merely for hydropower but also for the water supply needed to cope with potential future droughts. Therefore, a reasonable operation strategy needs to consider both the current-period energy and the associated risks for the subsequent periods, namely, the failure to satisfy the minimum output and the risk of water spillage. To reduce these risks, the upstream reservoirs should not release excessively (the likelihood of overdraft increases with the storage deficit) and the downstream reservoirs should not store excessively (the likelihood of overflow increases with the storage surplus). In other words, the hydraulic potential energy must be maximized not over the whole effective reservoir storage but over suitably restricted operation regions (Figure 4).
The upper boundary of the minimum-output guaranteed region, $U^i_{\min,t}$, can be determined via the empirical envelope curve method. The envelope curve method involves operating the reservoirs from the dead water level at the end of the drawdown season and proceeding backward from the upstream to the downstream reservoirs, releasing water according to the corresponding minimum power output of each time period, until the end of the refill season; subsequently, the typical reservoir storage paths under typical inflow combinations are obtained; finally, $U^i_{\min,t}$ is taken as the upper envelope of the storage paths of each reservoir.
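As a minimal illustration of the final envelope step (the backward simulation that generates the paths is omitted, and all storage values below are invented), the boundary is simply the elementwise maximum over the simulated scenario paths:

```python
import numpy as np

# Hypothetical storage paths of one reservoir under several typical inflow
# combinations (rows = scenarios, columns = periods of the refill season).
storage_paths = np.array([
    [2.1, 2.6, 3.4, 4.0, 4.3],   # scenario 1 (10^8 m^3, made-up values)
    [1.8, 2.9, 3.1, 4.4, 4.5],   # scenario 2
    [2.4, 2.5, 3.8, 3.9, 4.7],   # scenario 3
])
U_min = storage_paths.max(axis=0)   # elementwise upper envelope over scenarios
print(U_min)                         # -> [2.4 2.9 3.8 4.4 4.7]
```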
For the spill-control region, an excessive emphasis on lowering the reservoir level substantially reduces the reservoir energy storage. Furthermore, it is difficult to define a clear spill-control region for each reservoir, since the amount of water that should be spilled is interlinked among reservoirs during operation. Therefore, an implicit spill-control region can be determined by minimizing the cumulative loss of the spilled-water hydraulic potential energy in the current and next periods. It is therefore possible to incorporate the operation regions into the E1 model (Equation 20); via this approach, we obtain a modified multi-reservoir hydropower operation model (referred to as the "E2 model"). Let $S_I := \{i \in S \mid V^i_{t-1} \geq U^i_{\min,t} - V^i_{\min}, \text{ for any } t\}$ and $S_{II} := \{i \in S \mid 0 \leq V^i_{t-1} < U^i_{\min,t} - V^i_{\min}, \text{ for any } t\}$. Based on two-period rolling forecasts, the E2 model involves maximizing the carryover system hydraulic potential energy and minimizing the expected next-period energy losses from overflow or overdraft, subject to a variety of constraints in the upcoming two periods. In the formulation, $H^i_{\max,t}$ is the maximum power head (defined in Appendix III), and $M^i_t$ is a large penalty coefficient if the minimum-output constraint is violated for reservoir i during period t; otherwise, $M^i_t = 0$. This strategy forces a reservoir to release less water, and the minimum-output release occurs if the initial reservoir storage approaches the minimum-output guaranteed region, to avoid any future overdraft.
Remark 6. Recalling the proof and assumption in Proposition 19, the second term of the E2 model objective, namely, $-\sum_{i\in S_I} W^i_{t+1} H^i_{\max,t+1}$, is a linear function of the variables $V^i_t$, and the third term, namely, the sum over $S_{II}$, is a concave function. Hence, the two additional objectives do not affect the concavity of the hydraulic potential energy model. The E2 model also represents a convex optimization problem with a unique optimal solution.
The E2 model explores the maximum mathematical expectation of the energy storage by considering the future inflow uncertainty. This approach implicitly incorporates the spill-control region by balancing the hydraulic potential energy in period t against the potential spillage in period t+1, thereby securing the expected economic benefits while avoiding the expected risk of a water spill as much as possible.
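Schematically, and using our own shorthand rather than the paper's full formulation ($E_t$ denotes the system hydraulic potential energy and $\Delta^i_{t+1}$ is a placeholder for the expected overdraft measure of reservoir i), the E2 objective described above can be written as

$$\max_{\{R^i_t\}} \; E_t(\mathbf{V}_t) \;-\; \mathbb{E}\!\left[\sum_{i \in S_I} W^i_{t+1} H^i_{\max,t+1}\right] \;-\; \mathbb{E}\!\left[\sum_{i \in S_{II}} M^i_t\, \Delta^i_{t+1}\right],$$

where the second term penalizes the expected spill-energy losses and the third term penalizes the expected minimum-output violations.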
Case Study and Discussion
In this section, the proposed forecast-informed E2 model is applied to a hypothetical three-reservoir hydropower system. This case is based on real-world data collected from two reservoirs in Southwest China and one reservoir in Southeast China. Reservoirs with distinctive storage capacities and hydrological conditions were selected from power systems to better evaluate the adaptability of the E2 model. The results and discussion are subsequently presented.
Experimental Setting
The structure of the three-reservoir system under consideration is illustrated in Figure 5(a), and the reservoir elevation-storage curves and elevation-outflow curves are presented in Figure 5(b)-(d). The basic characteristic data regarding the hydropower system are given in Table 2.
The E2 model is applied to optimize the hydropower operation based on inflow forecasts. The historical inflow records of the three-reservoir system are utilized as the input for rolling forecasts. Due to the limited availability of the inflow data, the monthly step inflow data from the years 1991 to 2010 are employed for the medium-term operation test, while the daily step data from the years 2006 to 2009 are adopted for the short-term operation test. In the calculations, the rolling inflow forecast for the current operation period (per month or day) is assumed to be perfect, as prescribed in Section 3, whereas that for the next period contains no information other than the monthly mean values of the historical statistics. The numerical solution of the E2 model is calculated using the Particle Swarm Optimization algorithm (Kennedy & Eberhart, 1995). The operation decision is made at the beginning of each operation period, based on the inflow forecasts of the upcoming two periods. Apart from the E2 model, the same system with forecasting is solved again using the conventional K-value principle (Equations 34 and 35) and the minimum energy-cost model (Equation 33).
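A minimal sketch of such a particle swarm optimizer is given below (in Python; the swarm size, inertia and acceleration coefficients, and the stand-in objective are all illustrative choices of ours, not the settings used in the paper):

```python
import numpy as np

# Minimal particle swarm optimization sketch (Kennedy & Eberhart, 1995) of the
# kind used to solve the E2 model numerically. In the real application, f(p)
# would evaluate the negative two-stage energy objective for a candidate
# release vector p, with large penalty terms for violated constraints.
def pso(f, lo, hi, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                         # respect bounds
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy usage: two release decisions in [0, 100] m^3/s, quadratic stand-in cost.
best, val = pso(lambda q: (q[0] - 40) ** 2 + (q[1] - 25) ** 2,
                lo=np.array([0.0, 0.0]), hi=np.array([100.0, 100.0]))
print(best, val)
```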
For comparison, the ideal solution is further calculated using dynamic programming (DP), with the objective of maximizing the total hydropower generation over all operation periods.
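For intuition, a stripped-down single-reservoir version of this backward recursion is sketched below (in Python; the grid, inflows, and head-storage curve are invented, and spill and output bounds are omitted for brevity, whereas the real benchmark runs over the full three-reservoir state space):

```python
import numpy as np

# Backward DP over a discretized storage grid with perfect-foresight inflows;
# the value function accumulates hydropower over the remaining periods.
eta, rho, g, dt = 0.9, 1000.0, 9.81, 86400.0
grid = np.linspace(1e8, 5e8, 80)                 # discretized storage (m^3)
inflow = np.array([120.0, 90.0, 260.0, 310.0])   # known inflows (m^3/s)
head = lambda V: 0.4 * V**0.29                   # hypothetical head-storage curve

T = len(inflow)
value = np.zeros((T + 1, grid.size))             # terminal value = 0
policy = np.zeros((T, grid.size), dtype=int)     # index of best next node
for t in range(T - 1, -1, -1):
    for j, V in enumerate(grid):
        q = (V + inflow[t] * dt - grid) / dt      # release landing on each node
        energy = eta * rho * g * head(V) * q * dt # period energy, start-of-period head
        total = np.where(q >= 0.0, energy + value[t + 1], -np.inf)
        policy[t, j] = int(total.argmax())
        value[t, j] = total.max()
print(value[0].max() / 3.6e12, "GWh achievable from the best initial storage")
```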
The total number of operation periods is T, and the reservoir storage increments are set at 5.27×10⁶ and 1.73×10⁶ m³ for the medium- and short-term operation tests, respectively. The Open Multi-Processing (OpenMP) parallel programming interface (Hermanns, 2002) is used to increase the optimization efficiency. The ideal solution obtained via DP comprises the deterministic optimal trajectories under the perfect prediction of the long-term inflow sequences; consequently, this solution is regarded as the upper bound for all the operation results.

Medium-Term Operation Results

Figure 6 presents the medium-term reservoir water level regulation processes of the three energy-based operation methods (E2 model, K-value principle, and minimum energy-cost model) and the ideal solution obtained via DP. The corresponding system output processes are presented in Figure 7. All three reservoirs exhibit inter-annual variability: the reservoir storages are relatively low in the low-flow seasons and high in the high-flow seasons. Comparing the regulation processes of the three reservoirs, Reservoir 2 is nearly full or empty on most occasions, while Reservoirs 1 and 3 seldom reach their storage boundaries.
The results obtained from the E2 model (see the blue lines in Figures 6 and 7) exhibit an inclination to release more water from Reservoirs 1 and 3 while maintaining Reservoir 2 at a high level. By using a piecewise linear function to approximate the reservoir tailrace elevation-outflow relationship, the MEI values of the E2 model solutions are obtained and plotted, as shown in Figure 8. A clear gap persists between MEI$(V^{2,*}_t)$ and the other two MEI values. This phenomenon occurs because Reservoir 2 has a substantially smaller storage and its storage capacity constraint is always binding. According to Figure 7, the E2 model tends to satisfy the system firm output (620 MW) as much as possible, and it maintains the reservoirs at high water levels once the output target has been satisfied. Only when the expected next-period inflow is large or the reservoir is expected to overflow does the system release extra water relative to the output demand, thereby producing extra power. These two traits support the theoretical properties proved in Section 3.2.
The K-value principle (green lines in Figure 7) tends to release extra water, which often exceeds the demand. The main similarity between the K-value principle and the E2 model is that they both maintain a high level for Reservoir 2. This occurs because Reservoir 2 is located downstream of Reservoir 1 and has a substantially steeper elevation-storage relationship (see Figure 5(c)) and a smaller storage capacity, thereby demonstrating a higher power efficiency. However, a drawback of the K-value principle is that the reservoirs with small K-values are made to release water consistently, which leads to more rapid and frequent water level decreases in the large reservoirs. Ultimately, the system sometimes fails to satisfy the power requirement in later periods.
As analyzed in Section 3.4, the optimal solutions to the minimum energy-cost model are not unique. The optimal operation processes differ substantially among optimization runs, with the long-term mean system power output ranging from 679.5 to 697.6 MW. One of the superior solutions is adopted and represented by the orange lines. The operation process of Reservoir 3 is similar to that resulting from the K-value principle, whereas the overuse of Reservoir 2 is clearly distinct from the behavior of the E2 model and the K-value principle. The model of minimizing the energy-cost tends to release more water from the downstream reservoir than from the upstream reservoirs, and this finding accords with those of reported studies (Li et al., 2009).
To evaluate the efficiency and effectiveness of the four solutions described above, Table 3 compares several key indicators that reflect the operational reliability and economic benefits. The results demonstrate that the ideal solution obtained using DP, the E2 model solution, and the minimum energy-cost solution can guarantee the firm output; however, the K-value result exhibits failures in some dry years. The E2 model realizes satisfactory hydropower productivity and overall revenue. Compared with the minimum energy-cost solution, the power generation of the E2 solution is increased by 162.06 GWh, while the spill rate is decreased by 1.75%. The power generation of the K-value principle lies between that of the E2 model and that of the minimum energy-cost model. This is because the K-value principle distinguishes drawdown periods from refill periods: its performance is similar to that of the minimum energy-cost model in drawdown periods, whereas it approaches that of the E2 model in refill periods. In the results regarding the power contribution, only subtle differences are observed among the three reservoirs. On average, Reservoir 1 exhibits the largest proportion, followed by Reservoir 3 and Reservoir 2. Interestingly, compared with the regulation processes presented in Figure 6, the reservoirs that exhibit larger fluctuations generate less hydroelectricity. This aspect is attributable to the complementary effect between storage and release (Li et al., 2009). A one-unit release from a hydropower reservoir becomes less productive as the reservoir water level decreases. This suggests that the preservation of high water levels in certain reservoirs is necessary.
The marked differences between the ideal solution (DP framework) and the three forecast-informed solutions lie in the availability of long-term inflow forecasts. The former is based on the perfect prediction of long-term inflows and thus can best utilize all possibly spilled water in the future, thereby yielding the best solution, which corresponds to the maximum total revenue. In contrast, since perfect long-term prediction is not available in practice, water spillage is mostly inevitable in practical hydropower operation. Maintaining high levels to promote the unit-release hydropower productivity is a compromise strategy for short-term and medium-term hydropower operations. If operators want to increase the water utilization, thereby achieving higher power generation and less water spillage, extending the forecast lead time and improving the inflow forecast accuracy might be helpful.
Short-Term Operation Results
The daily inflow data of each reservoir were collected from 2006 to 2009 for the short-term operation test. In this period, approximately 67.2% of the annual total precipitation occurs in the refill season (May 1st to October 31st), and the daily mean inflows of the three reservoirs are 123.67, 101.06, and 393.14 m³/s. Table 4 compares the performances regarding hydropower output and reliability (the probability that the output is not lower than the minimum demand) of DP, the E2 model, and the minimum energy-cost model. The K-value principle is not compared, since it is primarily applied to medium-term operation or coupled with operation charts. On average, the system daily hydropower output from 2006 to 2009 always exceeds the minimum output (620 MW). In contrast to the substantial differences in the inflows between the refill and drawdown seasons, the system power generation in the refill season accounts for only 51.84% (minimum energy-cost result) to 53.38% (DP result) of the total. The E2 model outperforms the minimum energy-cost model overall. The system power generation of the E2 model is 6.8% higher than that of the other model, and the corresponding reservoirs provide a more reliable hydroelectricity supply. Relative to the E2 model, the failure frequency of the minimum energy-cost model increases from less than 0.5% to 12.6% for R1 and to 4.2% for R2.
Limitations and Discussion
This study is a single-objective analysis of multi-reservoir hydropower operation. Although many mountainous reservoirs (e.g., Jiang et al., 2018) are covered by this case, decisions pertaining to water resources management often involve tradeoffs among competing goals or objectives. In addition to hydropower, most reservoirs implement a multi-objective operation, which typically contributes to services such as water supply, irrigation, flood control, navigation, fisheries, and ecological maintenance. For these cases, the proposed model must be further investigated so that it can be merged into multi-objective optimization models to represent the hydropower aspects.
In a multi-objective operation, objective functions subject to a set of constraints may be weighted and then solved via multi-objective optimization techniques. These techniques can be classified into two main categories: (1) aggregation of the objectives under predefined objective priorities/weights, and (2) Pareto domination approaches if no preference information is available or considered (Burke & Silva, 2006; Olukanni et al., 2018). The simple expression of the hydraulic potential energy proposed in this paper renders the hydropower objective easily accessible in combination with the other operation objectives. Meanwhile, the observed characteristics and laws in hydropower operation can substantially facilitate the solution of multi-objective operation problems.
The second limitation of this study is that only two-period inflow forecasts are employed, and the current-period forecast is assumed to be perfect. This assumption seems to limit the application of the proposed models (the E1 and E2 models), since inherent forecast uncertainty is inevitable. Typically, a longer forecast horizon corresponds to more uncertain inflow information; thus, the forecast uncertainty may become the dominating factor that guides the reservoir operation decision (Zhao et al., 2012). Nevertheless, this issue can be addressed by improving the nowcast and short-term forecast information, or by maximizing the expected objective value. A more extensive analysis of this issue is left for future work.
Conclusions
This paper presents a neat mathematical structure and a forecast-informed operation model for mixed reservoir hydropower systems by introducing the concept of the hydraulic potential energy. The maximum hydraulic potential energy model (E1 model) is proposed. The major findings are summarized as follows. First, if the power output and storage constraints are nonbinding, the derived optimal spatial principle is (1) to equalize the Relative Marginal Energy (or the Marginal Energy Index if the tail-water levels vary linearly with the outflow) among reservoirs or, (2) if this is not feasible, to release water first from the reservoirs that have the largest RME (or MEI) values and to store water in the reverse order. Second, objectives that focus only on the outflow-energy-cost, and principles based on this concept, are not optimal but represent a necessary-but-not-sufficient condition for system energy maintenance. Third, the E1 model can realize the objective of minimizing the energy-cost together with the additional objectives that correspond to the maximization of the storage effect on the inflow hydraulic potential energy and the system energy increment; the latter quantity has rarely been clarified or studied. Fourth, by considering the long-term water and energy supplies of the system, the two-stage hydraulic potential energy model (E2 model) can enhance the hydropower supply reliability and water reserves. These findings have important implications for future operation practice. The RME (or MEI) reflects the marginal effectiveness of system power production relative to the hydraulic potential energy that arises from the change in the reservoir storage. The derived spatial principle can help explain and guide the release/storage ordering among reservoirs. The computational difficulty of locating the global optimal solution for a complex nonlinear optimization model remains substantial. With an extensive understanding of the spatial principle, it is also possible to develop more efficient optimization algorithms for multi-reservoir hydropower operation. Although the proposed E1 model is of limited utility in applications, it lays the foundation for hydropower system operation. The E1 model, in combination with its properties, clarifies the insufficiency of operation strategies or principles that are based on minimizing the energy-cost, thereby providing a reference for possible improvement. This finding is important because several established hydropower systems are based partly on the latter ideas. The optimality of the E1 model also guarantees the validity and reliability of the practical E2 model, which can potentially be applied to diverse basins and reservoir systems since the model is independent of site-specific data.
Enhanced performance of lead sulfide quantum dot-sensitized solar cells by controlling the thickness of metal halide perovskite shells
The metal halide perovskite CH3NH3PbI3 (MAP) can be applied as the shell layer of lead sulfide quantum dots (PbS QDs) for improving the solar power conversion efficiency. However, the basic physics of this PbS core/MAP shell QD system is still unclear and needs to be clarified to further improve the efficiency. Therefore, in this study, we investigate how the MAP shell thickness affects the device performance and the dynamics of charge carriers in PbS QD-sensitized solar cells. Covering the PbS QDs with MAP shell layers of an appropriate thickness, around 0.34 nm, greatly suppresses charge carrier recombination at surface defects along with improving carrier transport to the neighboring oxide and polymer layers. Therefore, this MAP shell thickness provides the highest open-circuit voltage, short-circuit current density, and fill factor for the solar cells. The overall power conversion efficiency of these solar cells reached about 4.1%, which is about six-fold higher than that of solar cells without MAP (about 0.7%). Additionally, use of the MAP shell layers can prevent oxidation of the PbS QDs and, therefore, slows the degradation of solar cell performance in air.
Introduction
Since the discovery of dye-sensitized solar cells (DSSC) by Grätzel et al. [1-3], their device performance has rapidly improved to a level suitable for practical applications. Quantum dots (QDs) have also been used as the sensitizer for solar cells because of their unique properties, such as bandgap tunability, high extinction coefficient, and low-cost solution processing [4-8]. However, QD-sensitized solar cells still have a power conversion efficiency (PCE) lower than the theoretical maximum value, which originates from defects acting as carrier traps and recombination centers at QD surfaces [9-14]. As for solar cells sensitized by lead sulfide (PbS) QDs, used in this study, surface defects are also a factor that strongly limits solar cell performance [15]. The presence of surface defects causes instability of QDs and degrades solar cells [16,17]. It is widely known that coating QDs with thin shell layers is very effective in passivating surface defects, which makes it possible to prevent carrier recombination and to enhance charge carrier extraction and stability for QD-sensitized solar cells [18-20].
We recently demonstrated that PbS QDs coated with shell layers of the metal halide perovskite CH3NH3PbI3 (MAP) can be directly formed inside a mesoporous TiO2 (mp-TiO2) layer and that PbS QD-sensitized solar cells with MAP shell layers have improved performance compared with those without MAP shell layers [17]. To sufficiently passivate surface defects and reduce carrier recombination for PbS QDs, an increase of the MAP shell thickness is necessary. However, a too-large MAP shell thickness is expected to impede carrier extraction from devices because of increased electrical resistance. Therefore, it is very important to optimize the MAP shell thickness. However, the optimization of MAP shell thickness for PbS QD-sensitized solar cells has not yet been performed. In this study, we investigated the effect of MAP shell thickness on the performance of PbS QD-sensitized solar cells. Additionally, we carried out time-resolved photoluminescence spectroscopy (TRPL), transient photovoltage (TPV) measurements, and X-ray photoemission spectroscopy (XPS) on PbS QDs with different MAP thicknesses for a better understanding of carrier recombination and extraction behaviors in solar cells. We found that use of a certain MAP shell thickness, around 0.34 nm, leads to a greatly improved power conversion efficiency of 4.1% because of reduced carrier recombination along with enhanced carrier extraction. The MAP shell layers provided another merit in that the stability of solar cells in air is improved by preventing the oxidation of PbS. These results should contribute toward the fabrication of PbS QD-sensitized solar cells with high efficiency and air stability.
Preparation of TiO2 films
On top of glass substrates coated with a fluorine-doped tin oxide (FTO) layer (Pilkington, TEC8), a compact TiO2 (c-TiO2) layer with a thickness of ca. 50 nm was prepared by a spray pyrolysis deposition method using a 20 mM solution of titanium diisopropoxide bis(acetylacetonate) (Sigma-Aldrich) in ethanol at 450 °C. The c-TiO2 layer plays a role in preventing the formation of shunting paths between electrodes. A paste containing anatase-phase TiO2 nanoparticles with an average diameter of ca. 50 nm was then screen-printed on the c-TiO2 layer to prepare an mp-TiO2 layer with a thickness of ca. 1 μm [21,22] and annealed at 500 °C for 1 h in air. The mp-TiO2 layer was subsequently submerged in a 20 mM TiCl4 solution at ambient temperature for 12 h, rinsed with distilled water, and then sintered at 450 °C for 15 min to enhance the contact between the TiO2 particles.
Synthesis of MAI
MAI was synthesized by reacting 30 mL of methylamine (40% in methanol, TCI) with 32.3 mL of hydroiodic acid (57 wt% in water, Aldrich) in a 250 mL round-bottom flask at 0 °C for 3 h with stirring. The precipitate was obtained by removing the solvents using a rotary evaporator at 50 °C. The obtained yellowish raw product (MAI) was washed with diethyl ether three times and finally purified by recrystallization from a mixed solvent of diethyl ether and ethanol. After filtration, the white MAI powder was dried at 60 °C in a vacuum oven for 24 h.
Film and device fabrication
Film fabrication was performed via a spin-assisted successive ionic-layer-adsorption reaction (S-SILAR) process. EDT molecules act to terminate the PbS QD surfaces for stabilization and to prevent overgrowth of the PbS QDs [11]. Next, MAP shell layers were formed on the PbS QD surfaces by a two-step method, which includes spin-coating a PbI2 solution, followed by spin-coating a methylammonium iodide (MAI) solution [23]. After baking the films at 70 °C, we observed a rapid change of the film color to dark brown, indicating the formation of the MAP structure (Figure S1), as reported in the literature [24]. To deposit PbS QDs on the mp-TiO2 layer by the S-SILAR method, we prepared 100 mL of a 5 mM PbI2 (Sigma-Aldrich) solution in N,N-dimethylformamide (DMF, Sigma-Aldrich), 100 mL of a 5 mM Na2S (Aldrich) solution in methanol (Burdick & Jackson)/deionized water (95:5 in volume), and 100 mL of an EDT/acetonitrile (1:99 in volume) solution. We successively spin-coated the PbI2, Na2S, and EDT solutions on the mp-TiO2 layer. The solution amount used for each spin-coating was 100 μL, and each spin-coating was run at 1500 rpm for 20 s. This spin-coating procedure was repeated 20 times to incorporate PbS QDs into the mp-TiO2 layer. To form MAP shell layers on the PbS QDs, the mp-TiO2 layer was first infiltrated with PbI2 by spin-coating 200 μL of a PbI2 solution in DMF at 4000 rpm for 30 s and drying at 70 °C for 30 min under an argon atmosphere. Then, 100 μL of the MAI solution (0.035 M in 2-propanol) was dropped over the layer and dried at 70 °C for 10 min [25].
On top of the core/shell QD layer, a hole transport layer was deposited by spin-coating a poly(3-hexylthiophene) (P3HT, Rieke Metals, >98% regioregularity) solution with a concentration of 15 mg mL⁻¹ in 1,2-dichlorobenzene at 2500 rpm for 60 s. The solution of PEDOT:PSS [poly(3,4-ethylenedioxythiophene) polystyrene sulfonate] (Clevios, Al4083) was diluted with double the amount of methanol and spin-coated on the P3HT layer at 2000 rpm for 30 s. To finalize the solar cells, an 80-nm-thick gold electrode layer was thermally evaporated on the PEDOT:PSS layer under a pressure of 5 × 10⁻⁶ Torr. To improve the electrical connections, we attached leads to both the FTO and Au electrodes using an ultrasonic soldering iron (USS-9200, MBR Electronics). The measured active area of the device was 0.16 cm².
Film and device characterization
The current density-voltage (J-V) characteristics were recorded utilizing a Class AAA (3A) solar simulator (Newport, 64023A) paired with a 450 W xenon lamp (Newport, 6279NS) and interfaced with a potentiostat (CH Instruments, CHI 600D). For all devices, the J-V profiles were obtained by limiting the active region using a metal mask that covers 0.096 cm². The illumination intensity was calibrated with a standard Si reference cell (Oriel, VLSI Standards), where 1 sun corresponds to 1000 W m⁻². EQE measurements in the wavelength range of 300-800 nm were conducted with a fully automated system, which included a 300 W xenon lamp (Newport, 66902) with a monochromator (Newport, Cornerstone 260) and a multimeter (Keithley, 2002). The EQE values in the longer-wavelength region from 500 to 1700 nm were measured using the same setup mentioned earlier, except that a 1000 W xenon lamp (Newport, 69935) was used. For the EQE measurements, light intensity calibration at each wavelength was conducted using either a Si photodetector or an InGaAs photodetector. TRPL decay curves were derived using an optical parametric oscillator laser (Spectra-Physics, basiScan), energized by an Nd:YAG laser (Spectra-Physics, INDI-40-10). The excitation pulse had a duration of 7 ns, with its energy modulated between 1.0 and 0.1 mJ per pulse via neutral density filters. Decay curves were captured at the emission peak of each specimen utilizing an NIR photomultiplier tube (Hamamatsu, H10330-75) coupled with a monochromator (Princeton Instruments, SP2300). The photomultiplier output was logged using a 500 MHz digital oscilloscope (Agilent, DSO X-3054A). A cut-off filter, situated ahead of the photomultiplier, eliminated any scattered excitation light. TPV decay assessments were carried out employing a 10 Hz ns laser (EKSPLA, NT342A-10) for small-perturbation illumination and a 150 W Xe lamp (Zolix) as the bias light source. The instrument was directly interfaced with a 500 MHz digital oscilloscope (Agilent DSO X-3054A), with its input impedance configured to 1 MΩ for an open-circuit condition. Neutral density filters modulated the bias light intensity for assorted V_OC values, and a considerably attenuated laser pulse at 550 nm generated a voltage transient not surpassing 20 mV. XPS evaluations of the chemical states were conducted using a Quantum 2000 system, utilizing an Al Kα X-ray source (1486.6 eV) and a hemispherical electron analyzer. The derived XPS data offered mean chemical details over a spatial expanse of 100 μm in diameter and a depth between 5 and 10 nm.
Results and discussion
The architecture of the PbS QD-sensitized solar cells fabricated in this study is glass substrate/fluorine-doped tin oxide (FTO)/compact TiO2 (c-TiO2)/sensitizer-incorporated mesoporous TiO2 (mp-TiO2)/poly(3-hexylthiophene-2,5-diyl) (P3HT)/poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS)/Au, where the sensitizer is PbS QDs covered with MAP shell layers, as shown in Fig. 1a. The energy-level diagram of these devices is shown in Fig. 1b. Electron-hole pairs generated in the PbS QDs under solar illumination are separated into free electrons and holes, which are transported into mp-TiO2 and P3HT, respectively. Electron and hole transport through the MAP shell layers appears efficient in view of the similar hole and electron transport levels of PbS and MAP. The MAP shell layers play an important role in reducing carrier recombination by passivating the surface defects of the PbS QDs, reducing back-transfer of electrons from mp-TiO2, and preventing a direct contact between mp-TiO2 and P3HT, which is a source of shunting paths.
The PbS core/MAP shell QDs used for the above devices were prepared using methods reported previously and illustrated in Fig. 1c [17,23]. PbS QDs were directly formed inside an mp-TiO2 layer by the repeated spin-coating of three solutions containing PbI2, Na2S, and 1,2-ethanedithiol (EDT), respectively, as described in detail in the experimental section [17]. To control the MAP shell thickness, different concentrations of the PbI2 solution were used for the two-step process while the MAI solution concentration was fixed. The PbI2 solution concentrations were 10, 30, 50, and 100 mM. As illustrated in Fig. 1d, larger MAP shell thicknesses are obtained when higher PbI2 solution concentrations are used, as discussed later.
PbS core/MAP shell QDs were removed from the mp-TiO2 films by scratching with a knife and suspended in anhydrous ethanol (Sigma-Aldrich) by ultrasonication. The QD suspension was dropped onto a grid and dried for transmission electron microscopy (TEM). Fig. 1e shows a TEM image of PbS core/MAP shell QDs, which were fabricated using the 50 mM PbI2 solution in the two-step process. The white and red arrows in this TEM image indicate the PbS core and the MAP shell layer, respectively. The average PbS core diameter was about 3.7 nm. The crystalline structures of the PbS core and the MAP shell were discussed in our previous paper [17].
We performed the same TEM measurements on the other PbS core/MAP shell QDs fabricated using the different PbI2 solution concentrations and analyzed their size distributions using the open-source software ImageJ [26,27]. The obtained QD size distributions are shown in Fig. 2. The average QD diameters were 4.00 nm for the 10 mM solution, 4.18 nm for the 30 mM solution, 4.38 nm for the 50 mM solution, and 4.60 nm for the 100 mM solution. Taking the PbS core diameter into account, the average MAP shell thicknesses were estimated to be 0.15 nm for the 10 mM solution, 0.24 nm for the 30 mM solution, 0.34 nm for the 50 mM solution, and 0.45 nm for the 100 mM solution. It is clear that use of the PbI2 solutions with the higher concentrations results in an increase of the MAP shell thickness. We hereafter use these average MAP thicknesses in the discussion.
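The shell thicknesses follow directly from the mean diameters via t_shell = (d_total − d_core)/2; a few lines of Python reproduce the quoted values:

```python
# Shell thickness from TEM mean diameters: t_shell = (d_total - d_core) / 2.
d_core = 3.7                      # mean PbS core diameter (nm)
for conc, d in [(10, 4.00), (30, 4.18), (50, 4.38), (100, 4.60)]:
    print(f"{conc:>3} mM PbI2: shell = {(d - d_core) / 2:.2f} nm")
# -> 0.15, 0.24, 0.34, 0.45 nm, matching the values quoted in the text
```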
Fig. 3 shows cross-sectional scanning electron microscope (SEM) images of the mp-TiO2 films with PbS core/MAP shell QDs. There were tiny pores throughout the mp-TiO2 films when the 10 (Fig. 3a), 30 (Fig. 3b), and 50 mM (Fig. 3c) PbI2 solutions were used to form the MAP shell layers. These pores could be filled with P3HT when a P3HT layer is spin-coated on top of the mp-TiO2 films. As for the mp-TiO2 films fabricated using the 100 mM PbI2 solution (Fig. 3d), the pores still exist near the c-TiO2 surface but were not clearly observed near the mp-TiO2 film surface. This could be because the pores are filled with excess MAP. In this case, P3HT molecules cannot reach the lower part of the mp-TiO2 films to fill the pores. Therefore, the holes generated under illumination cannot move properly to the hole transport layer of P3HT, probably lowering the solar cell performance [28-30].
We prepared three kinds of samples for UV-Vis absorption spectroscopy. The first sample was an mp-TiO2 film with PbS QDs fabricated using the S-SILAR method only (no MAP included). The second sample was an mp-TiO2 film with MAP fabricated using the two-step method only (no PbS QDs included). The third sample was an mp-TiO2 film with PbS core/MAP shell QDs fabricated using both the S-SILAR and two-step methods. To fabricate the second and third samples, the 50 mM PbI2 solution was used. Fig. 4a shows the absorption spectra of these three samples. The absorption wavelengths of PbS QDs depend on the QD diameter and are typically in the near-infrared region between 700 and 1400 nm [31]. Our PbS QDs without the MAP shell layers had an absorption peak at about 1020 nm. The PbS QD diameter estimated from this absorption wavelength using a method reported previously was 3.6-4.0 nm, which agrees with that estimated from the TEM images [10]. It is worth mentioning here that covering the PbS QDs with the MAP shell layers causes the absorption peak to redshift from 1020 to 1170 nm; this redshift may reflect the growth of the MAP shell on the QDs. For the MAP sample without the PbS QDs, the absorption originating from the MAP structure was located at about 780 nm. A similar peak was observed in the PbS core/MAP shell QD sample. In addition to the absorption peaks discussed earlier, these absorption spectra contained other peaks, which could be caused by optical interference effects.
External quantum efficiency (EQE) curves of PbS QD-sensitized solar cells in the absence and presence of the MAP shell layers are presented in Fig. 4b. The shapes of the two EQE curves were very similar, indicating that MAP does not contribute to solar power conversion in our QD systems, although metal halide perovskites are widely used as the light absorber of solar cells [23,31-33]. Introduction of the MAP shell layers into the PbS QDs led to an approximately two-fold increase of the EQE, the reason for which is discussed later.
Current density-voltage (J-V) characteristics of PbS QD-sensitized solar cells with the different MAP shell thicknesses were measured under AM 1.5G solar illumination at 100 mW cm⁻² and are shown in Fig. 4c. To increase the statistical significance of our efficiency data, 50 samples were fabricated and measured for each condition; the photovoltaic parameters, namely the short-circuit current density (J_SC), open-circuit voltage (V_OC), fill factor (FF), and PCE, obtained from the J-V characteristics are summarized in Table 1 and plotted as a function of MAP thickness in Figure S2. These parameters strongly depended upon the MAP thicknesses (the PbI2 solution concentrations) and became maximum at a MAP thickness of 0.34 nm; the PCE value was about 4.1%. This value was about six-fold higher than that for solar cells without MAP (about 0.7%). To understand the reason for the improved solar cell performance, we carried out TRPL on PbS QD samples because TRPL results can provide important information on the dynamics of photo-excited carriers [34]. For the TRPL measurements, we prepared mp-TiO2 films
with PbS core/MAP shell QDs using the same methods mentioned earlier. From the viewpoint of charge transfer, we can expect the generated holes to transfer, without trapping, from the PbS core to the P3HT hole conductor through the MAP shell. The TRPL decay curves measured at the PL peak wavelength (1020 or 1170 nm) are shown in Fig. 5a. Next, we fitted the TRPL decay curves using a tri-exponential decay function. The fast component (τ1) in the tri-exponential decay function is mainly attributed to charge extraction to the transport layers. The slower components (τ2 and τ3) are mainly due to trap-assisted recombination or Auger recombination [35,36].
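A hedged sketch of this fitting step is shown below (Python with SciPy; the synthetic decay, noise level, and initial guesses are our own stand-ins for measured data, and the amplitude-weighted definition of τ_avg follows the equation given with Table 2):

```python
import numpy as np
from scipy.optimize import curve_fit

# Tri-exponential TRPL model, f(t) = sum_i A_i * exp(-t / tau_i).
def tri_exp(t, A1, t1, A2, t2, A3, t3):
    return A1 * np.exp(-t / t1) + A2 * np.exp(-t / t2) + A3 * np.exp(-t / t3)

t = np.linspace(0, 5000, 500)                            # time axis (ns)
y = tri_exp(t, 0.6, 40.0, 0.3, 300.0, 0.1, 1500.0)       # stand-in decay
y += np.random.default_rng(1).normal(0, 0.002, t.size)   # measurement noise

p0 = [0.5, 50, 0.3, 400, 0.2, 2000]                      # initial guesses
popt, _ = curve_fit(tri_exp, t, y, p0=p0, maxfev=20000)
A, tau = popt[0::2], popt[1::2]
tau_avg = np.sum(A * tau) / np.sum(A)                    # amplitude-weighted
print(f"tau1..3 = {tau.round(1)} ns, tau_avg = {tau_avg:.1f} ns")
```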
The fitting parameters obtained here are summarized in Table 2. Then, the electron transport efficiency for each sample was calculated from the fitting parameters using a method reported previously [37]; the calculated results are shown in Fig. 5b. Since electron transport to mp-TiO2 competes with carrier recombination at surface defects, some of the electrons cannot reach mp-TiO2. Therefore, the electron transport efficiency calculated here represents the ratio of the number of electrons reaching mp-TiO2 to the total number of electrons generated by photoexcitation before recombination [34]. The TRPL decays became quicker and the electron transport efficiency increased when the MAP shell thickness was increased from 0 to 0.34 nm. This points to improved electron transport to mp-TiO2 through MAP, probably because of the passivation of surface defects by introducing the MAP shell layers for the PbS QDs. However, increasing the MAP shell thickness to 0.45 nm resulted in an increase of the PL lifetime and a decrease of the electron transport efficiency. This is because a too-large shell thickness makes carrier transport through MAP inefficient [38].
To gain further insight into the carrier dynamics, we used a TPV measurement method. TPV techniques allow direct monitoring of the bulk lifetime of photogenerated charge carriers, which defines the photovoltage of the solar cell. Consequently, TPV can be utilized to monitor charge carrier kinetics by identifying the charge carrier lifetime and determining the charge carrier density of states in the device under relevant conditions. The result can reflect the trapping of the photo-generated charge and the quality of contact between the different layers. PbS QD-sensitized solar cells with MAP shell layers revealed a longer recombination lifetime (τn) at every V_OC compared with that of the other solar cells (Fig. 5c). In addition, detailed voltage-variation data as a function of the measurement time are included in Figure S3 of the supplementary material. This provides evidence that hole and electron transport to the neighboring mp-TiO2 and P3HT layers is efficient and that recombination of electrons and holes is slower in solar cells with MAP shell layers. These results agree well with the TRPL results.
From the above-mentioned TRPL and TPV results, we can conclude that use of the MAP shell layers passivates the surface defects of the PbS QDs, reducing carrier recombination and increasing hole and electron transport to mp-TiO2 and P3HT. Therefore, the J_SC and FF values of the solar cells were increased by introducing the MAP shell layers for the PbS QDs. In addition to J_SC and FF, the V_OC values were increased as well, which is common in reported core/shell QD-sensitized solar cells. Yang et al. demonstrated that the splitting of quasi-Fermi levels caused by reducing surface defects, which lowers electron trapping, results in an increase of V_OC for core/shell QD-sensitized solar cells [20]. Speirs et al. introduced CdS shell layers for PbS QDs and found an increase of V_OC because of a band offset [19]. This Fermi-level splitting and band offset could be the possible sources of the increased V_OC in our devices. However, further studies are needed to clarify the detailed factors affecting V_OC.
PbS QDs covered with MAP shells are expected to have better air stability than uncovered PbS QDs. Therefore, we evaluated the air stability of PbS QD samples in the absence and presence of the MAP shell layers by XPS. We carried out XPS on the as-prepared QD samples. After storing these QD samples in air for one week, we measured their XPS profiles again. As can be seen in Fig. 6a and b, the MAP-covered and uncovered PbS QD samples exhibited similar Pb 4f doublet peaks at around 142.5 (4f5/2) and 137.5 (4f7/2) eV before and after the storage in air, which originate from Pb²⁺ ions of PbS or MAP. The uncovered PbS QDs, which were synthesized from PbI2 and Na2S, had no I 3d peak (Fig. 6c), indicating the effective removal of I⁻ ions from the samples during our S-SILAR method. On the other hand, the covered PbS QDs had a clear I 3d peak at about 619 eV (Fig. 6d), indicating the presence of MAP containing I⁻ ions. The I 3d peak, with a similar intensity and position, still existed even after the storage. This may mean that the MAP structure does not degrade significantly during the storage. Both the covered and uncovered PbS QDs had a S 2p peak before the storage, located at 160-162 eV, which can be assigned to S²⁻ ions of PbS. After the uncovered PbS QDs were stored in air for one week, a peak originating from PbSO3 appeared at about 163 eV (Fig. 6e), while the XPS data were unchanged for the covered PbS QDs (Fig. 6f). These results
provide clear evidence that the MAP shell layers can prevent oxidation from occurring in PbS QDs. In fact, our solar cells with the MAP shell layers demonstrated better air stability in comparison with those without the MAP layers (Figure S4).
Conclusions
Based on the experimental observations, we propose that there are a number of defect states on the surfaces of PbS QDs, which cause carrier recombination to occur and reduce carrier extraction from devices (carrier transport to neighboring layers), as illustrated in Fig. 7. Therefore, it is important to passivate these defect states by use of shell layers. In this study, we used MAP as the shell layer for PbS QDs and optimized the MAP shell thickness. We demonstrated that use of a MAP shell thickness around 0.34 nm effectively passivates the surface defects along with enhancing carrier transport, as proved by detailed TRPL, TPV, and XPS analyses, and therefore increased the PCE from 0.7 to 4.1%. Additionally, we obtained improved air stability of the solar cells by utilizing the MAP shell layers. These results will be valuable for establishing the basic physics of carrier dynamics in core/shell QD systems and for the fabrication of QD-sensitized solar cells with high efficiency and air stability.
Fig. 1. (a) Schematic illustration of a PbS core/MAP shell QD. (b) Energy-level diagram of QD-sensitized solar cells fabricated in this study. (c) Illustration of the S-SILAR and two-step methods for the fabrication of PbS core/MAP shell QDs. (d) MAP shell thickness can be controlled by the PbI2 solution concentration used for the two-step method. (e) TEM image of PbS core/MAP shell QDs with a diameter of 3.0-5.5 nm.
Fig. 2. Distributions of QD sizes calculated from TEM images.

Fig. 4. (a) Vis-NIR absorption spectra of PbS QD, MAP, and PbS core/MAP shell QD films on an FTO substrate prepared by the S-SILAR method. (b) EQE curves of PbS QD-sensitized solar cells with and without MAP shell layers of 0.34 nm. (c) J-V curves of PbS QD-sensitized solar cells with various MAP shell thicknesses.
Fig. 5. (a) TRPL decay curves of PbS QDs with different MAP shell thicknesses. (b) Plot of electron transport efficiency as a function of MAP shell thickness. (c) Recombination lifetime (τn)-V_OC plots for solar cells.
Table 2. Parameters obtained by fitting the TRPL decay curves shown in Fig. 5a with a tri-exponential decay function, $f(t) = \sum_i A_i e^{-t/\tau_i}$. The average lifetimes were calculated as $\tau_{avg} = \sum_i A_i \tau_i / \sum_i A_i$. The electron transport efficiencies (η), calculated based on a method reported in Ref. [37], are also shown in this table.
Table 1. Summary of the device performance of PbS QD-sensitized solar cells with different MAP shell thicknesses.
Psychological factors in oral mucosal and orofacial pain conditions
ABSTRACT The psychological aspects of chronic pain conditions represent a key component of the pain experience, and orofacial pain conditions are not an exception. In this review, we highlight how psychological factors affect some common oral mucosal and orofacial pain conditions (namely, oral lichen planus, recurrent aphthous stomatitis, burning mouth syndrome, and temporomandibular disorders), with emphasis on the significance of supplementing classical biomedical treatment modalities with appropriate psychological counseling to improve treatment outcomes in targeted patients. A literature search restricted to reports with the highest relevance to the selected mucosal and orofacial pain conditions was carried out to retrieve the data.
INTRODUCTION
The International Association for the Study of Pain (IASP) has defined pain as "an unpleasant, sensory, and emotional experience associated with actual or potential tissue damage, or described in terms of such damage." [1] This definition includes not only the sensory aspect of pain but also the emotional and interpretive or cognitive aspects. The IASP has also described chronic pain as pain lasting longer than 6 months. [2] Emotional factors are more significant in chronic than in acute pain and exert a significant influence that has to be recognized and addressed for effective management of chronic pain conditions, including orofacial pain, to take place. A literature review on individual aspects within these two extensively broad areas (psychology and oral medicine) requires restrictions to be applied to the searched literature. Accordingly, to limit the scope of this review, oral mucosal lesions were limited to oral lichen planus (OLP) and recurrent aphthous stomatitis (RAS), while orofacial pain conditions were limited to burning mouth syndrome (BMS) and temporomandibular joint disorders (TMD). The literature search was initially performed using the MEDLINE/PubMed databases with different combinations of "psychological factors," "psychiatric factors," "oral lichen planus," "recurrent aphthous stomatitis," "orofacial pain," and "orofacial pain classification" as key words. Exclusion criteria included articles on mucosal conditions or orofacial pain other than the aforementioned entities, as well as those published in languages other than English. No restrictions on article types or dates of publication were applied. In addition, individual articles retrieved manually from the reference lists of the relevant papers were also included in the study. Thereafter, the papers with the highest relevance to the review topic were selected with consideration of the total number of references allowed.
PSYCHOLOGICAL ASPECTS OF PAIN PERCEPTION
Proper pain assessment, and subsequent management, should take into consideration both the somatosensory input (nociception from the body tissues) and the psychosocial input (influence from the higher centers). Therefore, pain classification has been based on two levels or axes. [4] Axis I represents the physical factors that are responsible for the nociceptive input, while Axis II represents the psychological factors that influence the pain experience. Chronic pains, as opposed to acute ones, often have significant Axis II factors. Psychological intensification of chronic pain may proceed until the suffering is wholly disproportionate to the peripheral nociceptive input, as in somatization. Pain may lack an adequate source of input that is anatomically related to the site of pain, it may be felt in multiple and sometimes changeable locations, bilateral pain may become evident in the absence of bilateral sources of noxious input, and the complaint may display unusual or unexpected responses to therapy, which may further complicate the management. [3,4] The significant impact of psychological factors on orofacial pain conditions, including mucosal lesions, has been well established. [1,2,5-9] Psychopathological disorders have even been shown to be common among orofacial pain patients. [9] Furthermore, it has been postulated that persistent orofacial pain, as a manifestation of psychological factors in the presence or absence of organic pathology, may become a source of significant personal distress and life disruption. [10] A recent report found that as the levels of pain-related disability increase, the perception of psychological influence on pain initiation and aggravation also escalates. [11] Biobehavioral is a term that integrates the important roles biological factors play in governing human functioning with the influences of behavioral factors, including principles of learning, interpersonal processes, and techniques for self-change. [5] Biobehavioral factors may promote or prolong physical dysfunction, as well as thought processes and emotions that may become distorted as a result of this dysfunction.
These factors are as important to consider as the physical disease factors if the pain patient is to return to normal functioning, especially in the case of chronic pain.
Accordingly, the biobehavioral model and cognitive behavior therapy (CBT) approaches were introduced to establish effective and comprehensive management of chronic pain conditions. Biobehavioral interventions are designed to address both excitatory factors for pain (e.g., expectations, negative emotions, parafunctional behaviors) and inhibitory factors (e.g., confidence, relaxation, positive emotion). These tools are designed to provide patients with skills to understand and manage their pain experience. [5] When these approaches were applied in the management of orofacial pain conditions, significant positive results were reported, and hence it was recommended to utilize them in such conditions. [12-17] However, it appears that orofacial pain management is still largely dependent on biomedical interventions and lacks proper implementation of psychological interventions. [18,19]
OROFACIAL PAIN CLASSIFICATION
A convenient classification of orofacial pain can be based on etiologic factors [20] and would thus include categories such as dentoalveolar, mucosal, and musculoskeletal pain. Although dental pain is largely acute in nature, the majority of other orofacial pain conditions are chronic (e.g., mucosal conditions and musculoskeletal pain) and as such have a significant psychological component. The following discussion focuses on the psychological aspects of some of the most common mucosal and orofacial pain conditions, namely RAS, OLP, BMS, and TMD.
PSYCHOLOGICAL FACTORS IN COMMON MUCOSAL CONDITIONS, ORAL LICHEN PLANUS, AND RECURRENT APHTHOUS STOMATITIS
A relationship has been postulated between psychological factors and the occurrence and long-term course of some common oral mucosal conditions, namely OLP and RAS. [6] The two conditions are widely believed to be initiated and aggravated by many factors, including stress and anxiety. [21-23] Hence, terms such as psychosomatic diseases and stress-related oral ulcerations are frequently used in the literature to refer to such conditions. [6,24] Likewise, oral mucosal conditions are likely to cause significant levels of stress and anxiety in affected individuals. [25] Several studies found higher levels of stress, anxiety, depression, and mental disturbance among OLP patients as compared to non-OLP controls. [26,27] Furthermore, it was reported that more than 50% of studied OLP individuals were able to correlate the occurrence of stressful events with the time of onset or exacerbation of OLP. [28,29] Anxiety and mental stress may even drive the progression of the reticular pattern of OLP to erosive or ulcerative forms. [6] Stress alters the regulation of both the sympathetic and parasympathetic branches of the autonomic nervous system, with consequential alterations in the hypothalamic-pituitary-adrenal axis. These changes play pivotal roles in regulating immune surveillance mechanisms, including the production of cytokines that control the inflammatory process as well as events responsible for healing. [30] Accordingly, it seems plausible that a stressed patient is prone to immune-mediated conditions (e.g., OLP) due to a significant disturbance in psychobiologic balance.
Patients with persistent RAS often show elevated anxiety levels. [31,32] In a well-designed prospective study, [33] 160 RAS patients were followed up weekly by a telephone survey for up to 1 year, providing data on the occurrence of RAS episodes and details of any stressful events experienced during the previous week. Stressful life events were significantly associated with the onset of RAS episodes but not with their duration. Experiencing a stressful life event increased the incidence of an RAS episode by almost three times, and mental stressors had a larger effect than physical stressors on the occurrence of RAS episodes.
The mechanisms whereby stress may result in RAS episodes are not well understood. It has been suggested that increased levels of salivary cortisol, [34,35] or reactive oxygen species (a possible determinant of stress level in the individual) [36] in the saliva, may lead to the onset of lesions. Furthermore, stress may simply stimulate self-induced trauma thereby initiating an episode of RAS. As mentioned earlier, stress can affect different components of the immune system including the distribution, proliferation, and activity of inflammatory cells, phagocytosis, and production of cytokines and antibodies. [37]
PSYCHOLOGICAL FACTORS IN BURNING MOUTH SYNDROME AND TEMPOROMANDIBULAR JOINT DISORDERS
BMS is a chronic disorder characterized by a burning sensation or other dysesthesias, while the clinical appearance of the oral mucosa is within normal limits. The etiopathogenesis of BMS is not fully understood, although there is some evidence that a dysfunction in the central and/or peripheral nervous system plays an important causative role. [38] The prevalence of psychiatric disorders in BMS is high, but their actual role in the pathogenesis of BMS remains unclear. Several studies have reported a high frequency of psychiatric morbidity in BMS, with depression being the most prevalent disorder. [39,40] Interestingly, although BMS patients are subjected to elevated psychological stress, the onset of their symptoms is not necessarily directly associated with stressful life events. BMS patients may have a unique psychological profile with higher levels of depression, anxiety, hypochondria (excessive worry about having a serious illness), and cancerophobia. [38] Psychological factors are well-recognized risk factors for TMD. [41,42] Depression and sleep disturbances were shown to be significantly higher in TMD patients as compared to controls. [43] Psychosocial stressors can enhance TMD, possibly through increased corticosteroid levels, aggravation of parafunctional habits, or activation of the sympathetic nervous system. [41,42,44] The evaluation of psychological profiles among different subgroups of TMD patients has led to conflicting results. Whereas some studies found that significant psychological differences exist between patients with either muscle or jaw joint problems, [45] others found no differences between subgroups. [46] It was shown that chronic TMD patients have higher rates of depression and somatization as compared to acute TMD patients. [47] In a study that involved 1149 TMD patients from three highly specialized university-based centers, pain-related disability was found to be strongly related to depression levels and somatization, as well as to pain duration, at the 6-month follow-up. [48] In managing TMD, CBT is typically added to a program of standard treatment that includes the use of an intraoral appliance, medications, and a jaw rest program.
Results from clinical trials that included long-term follow-up data showed that CBT intervention causes a significant decrease in pain self-reports and pain interference in daily activities. [49] In one study, CBT alone was enough to relieve TMD symptoms within 2 months in 112 out of 134 patients who had pain and/or limited jaw movement, without any further treatment. [50]

CONCLUSION

Psychological factors are key players in the initiation and perpetuation of several oral mucosal and orofacial pain conditions. However, despite the evidence presented in the literature for such a relationship, it appears that these factors are still underestimated and psychological interventions are underutilized by many clinicians. There is a crucial need to familiarize clinicians with the psychological aspects of common orofacial pain conditions and to highlight the importance of psychological intervention, where applicable, to provide effective long-term pain management in affected patients.
Financial support and sponsorship
Nil.
Supply chain design to tackle coronavirus pandemic crisis by tourism management
The rapid growth of the COVID-19 pandemic across the world and the importance of controlling it in all regions have made managing this crisis a great challenge for all countries. In addition to imposing various monetary costs on countries, this pandemic has caused serious damage and many casualties. Proper control of this crisis will allow better medical services to be provided, and controlling travel and tourists during this crisis is also an effective factor. Hence, the proposed model aims to control the crisis by controlling the volume of incoming tourists to each city and region through closing that region's entry points, which reduces the number of inpatients. The proposed multi-objective model aims at minimizing total costs, minimizing the number of hospitalized tourist patients, and maximizing the number of hospitalized city patients. The Improved Multi-choice Goal Programming (IMCGP) method has been used to solve the multi-objective problem. The model is examined through a case study, and sensitivity analyses and managerial insights are also provided. According to the results obtained from the model and the case study, two medical centers with capacities of 300 and 700 should be opened if the entry points are not closed.
Introduction
Among natural disasters, infectious diseases are one of the leading causes of death in humans. There have been many pandemics in human history, such as polio, smallpox, cholera, and HIV, which have caused injury or death to many people [1]. The Spanish influenza pandemic of 1918-1919 is estimated to have killed 20 to 50 million people worldwide [2,3]. Likewise, the influenza A(H1N1) virus in 2009 spread over most countries, causing many deaths [4]. Many acute illnesses include respiratory infections, malaria, measles, and diarrhea; the infectious diseases common after natural disasters are closely related to the unsanitary conditions and malnutrition that affect the population [5].
The expansion of communication between communities has increased the speed of transmission of infectious diseases [6]. A pandemic is a condition in which the prevalence of a disease grows far beyond expectations; a pandemic can be controlled with proper management over a period of time [7]. A recent pandemic, COVID-19 (Corona Virus Disease 2019), has affected the entire world, causing extensive human and financial damage. The prevalence of coronavirus has increased the number of visits to medical centers (MCs) and has often caused problems for these centers [8].
One of the most important factors in the increasing expansion of COVID-19 is the lack of compliance with the relevant health and safety guidance. Unnecessary transit and travel are among the factors that sustain and expand COVID-19. By controlling travel, we can help reduce the spread of this pandemic and prevent contamination of clean areas [9]. Also, by closing the entry points of some areas, it is possible to prevent congestion in MCs and the degradation of medical services. By reducing the number of tourists entering a city, it is possible to prevent the spread of this virus and further infection in that area; tourists also threaten their places of origin when they return. Therefore, by enforcing laws prohibiting inter-regional traffic at appropriate times during a pandemic outbreak, this chain of disease transmission can be ended sooner and the responsiveness of MCs can be increased, reducing the workload and fatigue of medical staff. Also, if needed, mobile MCs that can be set up quickly can be built to cover more patients.
Due to the large number of patients and their rapid growth in pandemics, the problem of cost management and crisis control is essential. Therefore, given the importance of human health relative to financial issues, we must look for a way to manage this chain properly and accurately. Mathematical modeling is one such management method: by considering various conditions and parameters, it leads to a logical and accurate answer within a reasonable time [10]. In such a situation, decisions cannot be made based on single-objective models [11].
In this study, a supply chain is designed for MCs and patients during a pandemic, based on the case study of COVID-19.
This multi-objective model includes city patients, tourist patients, and existing and potential MCs, and aims to minimize total costs, minimize hospitalized tourist patients, and maximize hospitalized city patients to provide better services. Considering the objectives of the problem and the threat the COVID-19 pandemic poses to health and the environment, the economic, social, and environmental aspects of sustainability have been considered in this study. This leads to more and better medical services, which increases patient satisfaction, and better management of the chain helps to reduce the accumulation of environmental pollution.
Literature review
This section reviews previous studies on COVID-19 and on other similar pandemic diseases. These articles are summarized and compared, and the research gap is then explained to clarify the importance of the proposed problem.
COVID-19 pandemic and tourist management
As mentioned in the introduction, there have been several pandemics to date. Dowdy et al. [12] used decision tree models to manage the cost of infectious disease testing in India; Dasaklis et al. [1] analyzed research on epidemic control and its effects; and Winskill et al. [13] investigated, in terms of cost, factors related to malaria transmission in Africa, including indoor residual spraying, insecticide-treated nets, and seasonal malaria prevention chemotherapy. In the recent COVID-19 pandemic, studies have been conducted in various fields, and most of these studies involve statistical analysis.
The impact of COVID-19 on the treatment of diseases and on health has been presented in some articles. Zareie et al. [14] examined the impact of COVID-19 on people's health and the rate of casualties. Govindan et al. [15] proposed a decision support system for managing the demand on health systems during COVID-19, Ren et al. [16] investigated the choice of medication for COVID-19 patients with heart disease, Cusinato et al. [17] studied the impacts of COVID-19 on drug repurposing, Aggarwal et al. [18] predicted disease using decision support systems, and Ngoc Su et al. [19] investigated hospital human resources in Vietnam using statistical analyses.
This pandemic has also caused significant changes and impacts in the environment. Amankwah-Amoah [20] examined the environmental aspects of COVID-19 sustainability by analyzing influential factors in the aviation industry, and Saadat et al. [21] examined the extent to which environmental pollution was reduced during COVID-19. Moreover, Kargar et al. [22] investigated the waste management of medical centers, Vaka et al. [23] compared the factors affecting renewable energy in Malaysia with those in other Asian countries during the COVID-19 pandemic, and Zou et al. [24] developed a distribution system for supermarkets in a pandemic situation.
The recent pandemic has had a direct and significant impact on the tourism industry. With the reduction of travel and with travel bans, tourism faced many changes and challenges. Karim et al. [25] used reports from Malaysia during COVID-19 to trace changes in tourist and hospitalization patterns, investigating the problem with a conceptual methodology; their study aimed to predict future Malaysian tourism management using statistical analyses. Qiu et al. [26] estimated urban residents' willingness to pay for reducing the dangers of the pandemic arising from tourism, using triple-bounded dichotomous-choice contingent valuation; they sought to forecast the costs under determined scenarios by informing people. Kock et al. [27] studied the future of tourism and the effects of COVID-19 on travel, investigating the mental impacts of this pandemic on changes in the tourism industry to find a pattern for planning based on empirical findings. Gössling et al. [28] studied the impacts of the recent COVID-19 pandemic on tourism and traveling in the first days of its spread, using disease statistics to trace the decline of tourism in that period and reporting their observations. Higgins-Desbiolles [29] described the impacts of the pandemic on the tourism industry, which weakened the industry and the jobs related to it, and discussed the future of students and education in tourism. The tourism pattern is clearly changing due to the spread of COVID-19.
To the best of our knowledge, studies in the field of COVID-19 control are often statistical and do not employ a mathematical model. Aydin and Yurdakul [30] used machine learning to analyze the performance of countries during COVID-19, analyzing the data via clustering and decision tree methods. Chinazzi et al. [31] examined the impact of traffic restrictions in Wuhan, China, from January to February 2020 and found that prevalence was reduced by about 50 percent under 90 percent restrictions. Kraemer et al. [32] examined the impact of import restrictions in Wuhan, China; with official data from China, they found that the incidence decreased as imports from the city declined.
Research gap
With the spread of the COVID-19 pandemic around the world and its unforeseen consequences, managing this crisis and building a regular network to control the disease are essential. Mathematical modeling is one of the most accurate management methods, through which reliable results close to reality can be achieved and a better prediction of the future course of the crisis can be provided.
Previous research, shown in Table 1, is reviewed in Section 2.1 to reveal the existing research gaps. Most of these articles used statistical analysis to assess prevalence or to estimate potential casualties. In this study, a mathematical model for the prevalence of infectious diseases in each region is designed based on the COVID-19 case study. The problem simultaneously considers the multiple objectives of minimizing costs, minimizing hospitalized tourist patients, and maximizing hospitalized city patients; to our knowledge, no multi-objective mathematical model has yet been proposed in this context. Also, no model has so far considered controlling tourists and travelers entering a city while taking into account medical centers, patients, and tourists, and there is almost no mathematical model for tourist management even in non-medical fields. The proposed model helps to improve COVID-19 pandemic control by mathematically optimizing a multi-objective problem.
Problem statement
The presented multi-objective and multi-period model is designed to manage the COVID-19 crisis and provide better medical services to patients. The model includes a city that tourists enter from different entry points. City patients and tourist patients go to medical centers (MCs) for medical services. They receive services depending on the type of their disease, which is either acute, requiring hospitalization, or non-acute. Because some patients are hospitalized, the length of hospitalization is considered in the model.
In the proposed model, the first objective function minimizes total costs. The four terms of this objective function are described in (1a)-(1d). Term (1a) calculates the treatment cost of city patients in MCs, and term (1b) is the treatment cost of tourist patients in MCs. Term (1c) is the opening cost of new MCs, and the fourth term, (1d), is the cost of the lack of services for tourist patients. The second objective function, Eq. (2), minimizes the number of tourist patients hospitalized in MCs, to discourage further travel; this function helps manage tourists at the entry points and find the right time to impose a traffic ban. The third objective, Eq. (3), maximizes the number of city patients hospitalized in MCs, prioritizing city patients over tourist patients so that traffic restrictions can be imposed and treatment and service capacity increased. Constraints: Constraint (4) limits the capacity of MCs. Inequalities (5) and (6) balance the inpatients in MCs by considering the hospitalization period. Constraint (7) balances discharged patients against existing patients. Constraint (8) balances the hospitalization of tourist patients in the MCs. Constraints (9)-(14) capture the decisions related to opening new MCs. Constraint (15) guarantees that tourist patients do not exceed the maximum number of tourists at each entry point. Constraint (16) specifies the minimum available medical services in MCs. Constraint (17) ensures that if no city patient needs MC services, tourist patients can use them. Constraints (18) and (19) calculate the numbers of city patients and tourist patients according to their infection rate coefficients. Constraints (20) and (21) define the types of decision variables.
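To make this structure concrete, the following is a minimal sketch, not the authors' code, of how the skeleton of such a model could be set up with the open-source PuLP library; the set sizes, variable names, and cost values are illustrative assumptions, and only the cost objective (1a)-(1c) and the capacity constraint (4) are shown. The balance, service-level, and entry-point constraints (5)-(19) would be added in the same way.

```python
import pulp

MCS = ["mc1", "mc2"]                  # medical centers (hypothetical labels)
PTYPES = ["acute", "non_acute"]       # patient types p1, p2
PERIODS = range(15)                   # 15-day horizon, as in the case study

CAP = {"mc1": 300, "mc2": 700}        # illustrative capacities
TREAT_COST = {"acute": 50.0, "non_acute": 20.0}   # illustrative unit costs
OPEN_COST = 10_000.0                  # illustrative opening cost for a new MC

model = pulp.LpProblem("pandemic_mc_design", pulp.LpMinimize)

# x[m][p][t]: city patients of type p hospitalized at MC m in period t
x = pulp.LpVariable.dicts("city", (MCS, PTYPES, PERIODS), lowBound=0)
# z[m][p][t]: tourist patients of type p hospitalized at MC m in period t
z = pulp.LpVariable.dicts("tourist", (MCS, PTYPES, PERIODS), lowBound=0)
# y[m]: 1 if MC m is opened, cf. constraints (9)-(14)
y = pulp.LpVariable.dicts("open", MCS, cat="Binary")

# cost objective: treatment costs (1a)-(1b) plus opening costs (1c)
model += (pulp.lpSum(TREAT_COST[p] * (x[m][p][t] + z[m][p][t])
                     for m in MCS for p in PTYPES for t in PERIODS)
          + pulp.lpSum(OPEN_COST * y[m] for m in MCS))

# capacity constraint, cf. constraint (4)
for m in MCS:
    for t in PERIODS:
        model += pulp.lpSum(x[m][p][t] + z[m][p][t] for p in PTYPES) <= CAP[m]

model.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[model.status], pulp.value(model.objective))
```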
Solution approach
In disaster situations, decision-makers inevitably consider multiple criteria to avoid and control problems, and multi-objective problems therefore appear. In such problems, where several goals must be considered at the same time, how to analyze and solve them is very important. Therefore, due to the nature of the problem, we should look for a way to obtain a single-objective function that provides the best answer compared to other methods. There are many methods to transform a multi-objective problem into a single-objective one; in this research, the improved multi-choice goal programming (IMCGP) method has been used.
Goal programming (GP)
The goal programming (GP) method is one of the most important models of multi-objective planning. The method was proposed by Charnes and Cooper in 1961 [33] for systems that have multiple, conflicting goals. All proposed GP methods share a common structure, and all of them aim to minimize unfavorable deviations from the aspirations. Compared to linear programming, goal programming is able to consider several goals: it allows deviations from the goals and thus creates flexibility in the decision-making process. In other words, GP shows the way to move simultaneously towards several goals. Unlike linear programming, which maximizes or minimizes a single objective, GP minimizes the deviations between the intended goals and the actual results.
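As a generic illustration (standard textbook form, not a reproduction of the paper's equations), a weighted GP model with aspiration levels g_k can be written as follows, where d_k^+ and d_k^- are the over- and under-achievement deviations of objective f_k(x) from g_k, and F is the feasible set:

```latex
\begin{aligned}
\min \quad & \sum_{k} \left( w_k^{+} d_k^{+} + w_k^{-} d_k^{-} \right) \\
\text{s.t.} \quad & f_k(x) + d_k^{-} - d_k^{+} = g_k \quad \forall k, \\
& d_k^{+},\, d_k^{-} \ge 0 \quad \forall k, \qquad x \in F.
\end{aligned}
```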
Revised multi-choice goal programming (RMCGP)
GP sets an aspiration level for each objective function and solves the problem accordingly. With insufficient or inaccurate information, it may be difficult to determine aspiration levels. To address this, multi-choice goal programming (MCGP) was presented by Chang in 2007, allowing decision-makers to specify multiple goals (Chang, 2007). Chang then extended the MCGP model and proposed revised multi-choice goal programming (RMCGP), in which, instead of a single number per objective function, decision-makers can specify a range for each objective [34].
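For reference, one common statement of Chang's RMCGP for "more is better" goals, again an illustration rather than a quotation from [34], lets the model choose a continuous aspiration level y_k within the decision-maker's range:

```latex
\begin{aligned}
\min \quad & \sum_{k} \left[ w_k \left( d_k^{+} + d_k^{-} \right) + \lambda_k \left( e_k^{+} + e_k^{-} \right) \right] \\
\text{s.t.} \quad & f_k(x) - d_k^{+} + d_k^{-} = y_k \quad \forall k, \\
& y_k - e_k^{+} + e_k^{-} = g_{k,\max} \quad \forall k, \\
& g_{k,\min} \le y_k \le g_{k,\max} \quad \forall k, \\
& d_k^{\pm},\, e_k^{\pm} \ge 0, \qquad x \in F.
\end{aligned}
```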
Improved multi-choice goal programming (IMCGP)
Jadidi et al. [35] presented a model that combines the RMCGP method and GP with a priority function by considering a goal interval instead of a single goal. They argue that in some cases the value of an objective function may exceed the expected level, which should result in a penalty for the model; this is not considered in previous models [35]. Accordingly, because the probability of unforeseen and sudden events in a crisis is high, the IMCGP method has been used to solve the proposed problem. The proposed model includes three objectives: minimizing total costs, minimizing hospitalized tourist patients, and maximizing hospitalized city patients. The model obtained from the IMCGP method is shown in Eqs. (22) to (29). Eq. (22) is the new single-objective function, and Eqs. (23)-(29) are the new constraints added to the model.
Here, α_k is a zero-one continuous coefficient whose normalized distance shows how far the obtained objective value is from g_k^+, where g_k^+ represents the value of the kth objective function in the desired state and g_k^- its value in the undesirable state. The aspiration level [g_{k,min}, g_{k,max}] is determined by the decision-maker. In this proposed model, it is assumed that the upper bound of the aspiration level, g_{k,max}, is equal to g_k^+, while the lower bound, g_{k,min}, can be greater than or equal to g_k^-. The range [g_{k,min}, g_{k,max}] is divided into a more desirable part and a less desirable part. β_k represents the normalized distance of the kth objective value from g_{k,min}: if the obtained value of the kth objective function is greater than g_{k,min}, a penalty, taking a value between zero and one, is added to the model. y_k is a zero-one variable, W_k^α and W_k^β are the weights of each objective, and h_i(x) denotes the ith primary constraint of the model.
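A minimal numerical sketch of this scoring logic for a minimization objective is given below; the exact linearized constraints are Eqs. (23)-(29) of the source, and the values used here are invented purely for illustration.

```python
def imcgp_score(f, g_plus, g_minus, g_min, g_max, w_alpha, w_beta):
    """Score one minimization objective following the IMCGP idea above.

    alpha in [0, 1] is the normalized closeness of the attained value f
    to the desired value g_plus (f == g_plus gives alpha = 1);
    beta in [0, 1] is the normalized penalty incurred when f exceeds the
    aspiration lower bound g_min, growing toward 1 as f nears g_max.
    """
    alpha = max(0.0, min(1.0, (g_minus - f) / (g_minus - g_plus)))
    beta = max(0.0, min(1.0, (f - g_min) / (g_max - g_min)))
    return w_alpha * alpha - w_beta * beta

# illustrative check for a cost objective with goal 50,000,000
print(imcgp_score(f=52_000_000, g_plus=45_000_000, g_minus=70_000_000,
                  g_min=50_000_000, g_max=60_000_000,
                  w_alpha=0.3, w_beta=0.3))   # -> 0.156
```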
Case study
The presented model is based on a real case study of the COVID-19 crisis in Mazandaran, one of the most populous provinces of Iran, located in the north of the country. Mazandaran has a high number of patients and, because of its geography and climate, attracts a large number of tourists; it has important entry points to the province and heavy traffic. The population of Mazandaran is about 3 million people, and tourists enter the province from 3 main entry points.
In this model, tourists enter the area from 3 different entry points (s_1), (s_2), and (s_3). City patients and tourist patients go to MCs and receive services depending on the type of their disease, which is acute (p_1) or non-acute (p_2). Because some patients are hospitalized, the length of hospitalization is considered in the model.
There are 5 existing MCs of 2 types according to their capacity. MCs always need to have at least a certain amount of medical services available. If necessary, 3 new potential MCs can be established as field or temporary facilities. If there is no city patient, tourist patients can be admitted and there is no shortage of services; otherwise, by restricting entry points, more tourists should be prevented from entering the area. Each patient leaves the MC after undergoing a certain period of treatment. The time horizon considered in this study is 15 days. The key parameters are given in Section 5.1, and the results, sensitivity analyses, and managerial insights in Sections 5.2, 5.3, and 5.4, respectively.

Table 4. The maximum number of tourists at each entry point in the first time period (CW_st): s_1 = 5000, s_2 = 2800, s_3 = 3500.

Table 5. The capacity of each MC.
Input parameters
According to statistics from the Iran Ministry of Health and Medical Education, the infection rate coefficients of the city for patient type p are 0.0001 for acute and 0.0004 for non-acute patients [36]. The infection rate coefficients of tourist patients from each entry point are shown in Table 2, the patients in each MC by type in Table 3, and the maximum number of tourists at each entry point in the first time period in Table 4. Table 5 shows the capacity of each MC by type. The hospitalization period is 7 days for acute patients. The minimum available medical services for patients in the first time period is shown in Table 6. The city inpatients and the tourist inpatients in each MC in the first time period are presented in Tables 7 and 8, respectively.
Results
The presented model is solved on a computer with a Core i5 CPU and 8 GB of RAM in 27 min by Lingo17 software using the Branch and Bound (B&B) method. The IMCGP is used to transform the presented multi-objective model into a single-objective one. The weights of the objective functions are 0.3, 0.3, and 0.4, according to experts and decision-makers. The obtained values of each objective function and the value of IMCGP are given in Table 9. The total lack of services for tourist patients (TLS) is reported to support decisions about prohibition policies for closing the entry points. The number of opened MCs is also reported as New MCs in the tables. The goals are 50,000,000, 800, and 500,000 for the first to third objective functions, respectively. Some other key results are presented in Tables 10 to 12: the tourist patients from each entry point in the first time period, the number of tourist patients in the first time period, and the opened MCs, respectively.

Table 6. The minimum available medical services in the first time period (SB_pt): p_1 = 300, p_2 = 200.

Table 7. The city inpatients in the first time period. Table 8. The tourist inpatients in the first time period. Table 9. The obtained values of the objective functions. Table 12. The opened MCs.
Sensitivity analyses
Sensitivity analyses were performed on the parameters that most affect decision-making: the infection rate coefficient of the city and the weights of the objective functions in the solution method, presented in Tables 13 and 14.
Sensitivity analysis of infection rate coefficient of the city
The infection rate coefficient of the city is one of the most influential factors in decision-making for the COVID-19 crisis; changing this coefficient can affect the results of the model considerably. The coefficient can be controlled by applying entry-ban rules at the entry points: by reducing tourists and closing points, the infection rate can be contained. The values of the objective functions are compared in Fig. 1 and the total lack of services for tourists (TLS) is shown in Fig. 2. The results of the sensitivity analysis of this parameter are reported in Table 13. As shown in Table 13 and Figs. 1 and 2, decreasing the non-acute infection rate decreases the TLS and the number of new MCs and reduces the IMCGP value; overall, the whole chain improves. Therefore, it is necessary to control the entry points to reduce the infection rate.
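As a back-of-the-envelope sketch of why this coefficient dominates (using the case-study population of roughly 3 million and the relation in constraints (18)-(19) that new patients equal population times the infection rate coefficient; the swept rate values here are hypothetical):

```python
POPULATION = 3_000_000  # approximate population of Mazandaran

# sweep hypothetical non-acute infection rate coefficients
for rate in (0.0004, 0.0003, 0.0002, 0.0001):
    # constraints (18)-(19): patients = population * infection rate coefficient
    print(f"rate={rate}: ~{round(POPULATION * rate)} non-acute city patients")
```

Each halving of the coefficient halves the implied patient load, which is consistent with the observed drop in TLS and in the number of new MCs needed.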
Sensitivity analysis on weights of objective functions
The weight of an objective function in the solution method indicates the importance of that objective to the decision-maker, so changing the weights leads to significant changes in the results. Analyzing the sensitivity of this parameter and examining the resulting changes can therefore support better decisions. Table 14 demonstrates different weights for the objective functions and the key results. In state 5, increasing the importance of the first objective function increased the TLS and the number of new MCs. In state 2, decreasing the importance of the third objective function decreased the number of new MCs but increased the TLS.
Managerial insights
Given the prevalence of the COVID-19 pandemic and the importance of controlling this crisis for communities, and considering the results of the model and the sensitivity analyses, the following management suggestions are provided for regions affected by this crisis. In line with the objectives (minimizing total cost, minimizing hospitalized tourist patients, and maximizing hospitalized city patients), these suggestions can help decision-makers make better decisions and reduce potential losses.
In all decisions and planning, cost is an important and influential factor, and any increase in costs affects the other parts of the network. Therefore, by considering cost in the proposed model, mathematical planning can support the correct management of this crisis. According to the aforementioned data and results, two medical centers with capacities of 300 and 700, potential MCs 2 and 3, should be built to prevent a lack of medical services and to reduce injuries and casualties. These centers are also needed to guarantee the minimum available services.
Given the importance of controlling the entry points to cities and regions to prevent and reduce the prevalence of COVID-19, the importance of the second and third objective functions becomes apparent. When the number of tourist patients in the city increases and there is no capacity to care for and provide medical services to all city patients and tourist patients, the crisis should be managed by closing the entry points, which also helps reduce the infection rate in the city.
Conclusion
Given the impact of proper management on reducing losses and casualties in crises and the importance of COVID-19 pandemic control in the world, it is important to provide a comprehensive and regular program to control and improve the situation of this crisis. Mazandaran, Iran, was one of the main regions involved in this crisis and has been studied as the case study for the proposed model in this research.
The proposed multi-objective and multi-period model covers a city that tourists enter through entry points. Minimizing total cost, minimizing hospitalized tourist patients, and maximizing hospitalized city patients are the objective functions of the presented model. City patients and tourist patients go to MCs to get medical services and receive them depending on their type of disease, which is either acute or non-acute; the length of hospitalization is considered in the model. There are several types of MCs according to their capacity, and a minimum level of available medical services is required in the MCs, so new potential MCs can be opened when needed. If there is no city patient, tourist patients can be admitted and there is no shortage of services. By restricting entry points and applying route-ban policies, tourists can be prevented from entering the city.
The multi-objective model has been converted to a single-objective one and solved by the IMCGP method. To evaluate the proposed model, a case study is considered and the data and results are described. Sensitivity analyses were then performed on the most effective and sensitive parameters. Managerial suggestions are also provided to improve the COVID-19 crisis management process.
To expand the problem in the future, the uncertainty of some parameters can be considered in future research. For longer periods of time and larger areas and dimensions of the problem, meta-heuristic methods and some exact algorithms can be used. Other objectives can also be added to this problem and other variables such as transportation can be considered.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Mobile Outreach: An Innovative Program for Older Orthopedic Patients in Care Facilities
Introduction: The worldwide incidence of fragility fractures is increasing and the greatest burden is borne by the oldest population. Mobile Outreach, an innovative orthopedic-based program providing on-site musculoskeletal care for individuals in nursing care facilities, was implemented as part of our Geriatric Orthopaedic Trauma Program. The objectives of this report are to describe characteristics of patients cared for through Mobile Outreach and to report specific services provided. Program Description: Based at a nonprofit, private hospital that serves as the community's level 1 trauma center and teaching hospital, the Mobile Outreach Program is directed by an orthopedic surgeon with geriatric subspecialization and staffed by a full-time geriatric nurse practitioner. Patients receive care for musculoskeletal concerns and fracture assessments at their nursing care facilities by a Mobile Outreach care provider. Referral for care is from nursing care facilities or as scheduled postoperative follow-up. Results: In 2016, the program treated 458 patients (76% female) in the patients' care settings for a total of 689 visits. The mean age was 81 years (standard deviation = 14; range 25-107). Care of patients included nonoperative fracture care in 100 (22%), postoperative fracture follow-up in 149 (33%), injections for pain management in 184 (40%), and other orthopedic care in 25 (5%). Visits occurred at 88 facilities, mean 7 visits per site (range 1-57). Conclusions: Mobile Outreach was implemented to improve postoperative fracture care in elderly patients. The program also provides on-site nonoperative fracture care and care of frail elderly individuals with chronic musculoskeletal conditions. This report aims to establish the feasibility of a program focused on the provision of appropriate, coordinated care for older fracture patients in their care facility. Level of Evidence: Level V.
Introduction
The worldwide incidence of fragility fractures is increasing. 1,2 These injuries are often associated with significant loss of independence and increase in morbidity and mortality. [3][4][5][6] The greatest burden is borne by the oldest population who are not only at greatest risk for fracture but also have the least physiological reserve for recovery.
Approximately 20% to 50% of patients with hip fracture come from nursing homes. 7,8 Ninety percent of patients with hip fracture are discharged from the hospital to postacute care facilities. 9,10 Given the overwhelming number of patients with hip fracture who either temporarily or permanently reside in nursing care facilities, care must be well coordinated throughout the acute and postacute processes. 11,12 In addition, as bundled care reimbursement models are implemented, acute care hospitals and physicians must focus their attention on an integrated acute to postacute model to most efficiently affect fracture recovery. 13

The Mobile Outreach Program is an orthopedic-based program through which on-site musculoskeletal care for individuals in care facilities is provided. Orthopedic services rendered by Mobile Outreach include telephone consultation for initial injury assessment, on-site nonoperative fracture care, joint pain management, postoperative follow-up checks, and coordination of hospital direct-admit processes when injuries and patient circumstances warrant hospital admission and/or surgical treatment. Mobile Outreach is part of a comprehensive elder orthopedic care service, called the "Masters Orthopaedic Program," which began in 2003 at Regions Hospital in St. Paul, Minnesota. The program is founded upon evidence-based principles for geriatric orthopedic co-management for in-hospital elderly patients with fracture; comprehensive secondary fracture prevention and bone health counseling; and orthopedic service provision at nursing care facilities (Mobile Outreach; Figure 1).
The goal of Mobile Outreach is to provide patient-centered, compassionate care to older individuals who have sustained a fracture or musculoskeletal injury. Given that postoperative, in-clinic visits for elderly patients with fracture may not provide considerable value, that transportation for this cohort of patients may be difficult and costly, and that most musculoskeletal care can be provided in nursing care facilities (with the assistance of portable X-ray capabilities), 14 Mobile Outreach meets a widely unrecognized and unmet need. The objectives of this report are to detail an innovative care model not previously described in the orthopedic literature, to describe the characteristics of patients who were cared for through Mobile Outreach from January 1, 2016, through December 31, 2016, and to report the specific services provided during this time period.
Program Description Setting and Target Population
Regions Hospital is a nonprofit, 527-bed private hospital in the HealthPartners care network. It is a level 1 trauma center and teaching hospital affiliated with the University of Minnesota. The hospital is located in St. Paul, Minnesota, and is associated with 136 facilities (48 long-term care, 66 assisted living, 10 independent living, and 12 preferred transitional care units) in which HealthPartners Partnering Care Senior Services practitioners provide care. The hospital's area of operation includes the 7 counties surrounding St. Paul, Minnesota, as well as western Wisconsin. It serves an elderly population (65 years and over) of approximately 500,000 within a 50-mile radius.
Patient Engagement
Patients enter the Mobile Outreach care pathway in 1 of 2 ways. First, in-hospital patients with fracture are referred for orthopedic follow-up when they are both discharged to a skilled care facility and identified as vulnerable to challenges with postoperative visits. These challenges include long waits in clinic, consultation when family members might not be present, and the coordination and cost of transportation to and from clinic. Second, existing residents in nursing facilities are referred for musculoskeletal concerns and fracture assessments to Mobile Outreach by their primary care provider.
Program Model and Processes
Based at Regions Hospital, an orthopedic surgeon with geriatric subspecialization directs the program. An in-hospital multidisciplinary team (including representatives from anesthesiology, emergency medicine, hospital medicine, palliative care, perioperative and orthopedic nursing, nutrition, physical therapy, case management, and other medical providers) supports the overall program. In addition to the orthopedic director, resources dedicated to Mobile Outreach at the time of this report included a full-time, dedicated nurse practitioner with geriatric orthopedic specialization.
Mobile Outreach accomplishes its goals of providing the best care possible to frail elderly patients by focusing on 5 components:

24/7 phone consultations with care providers from nursing care facilities
On-site (in nursing facility) acute injury visits
Procedural visits (cortisone injections for arthritis, splint or cast management for fractures, etc)
Postoperative and postfracture treatment visits
Facility education and training

Phone consultations. If a geriatrician or primary care nurse practitioner identifies a concern regarding the musculoskeletal health of any patient in a nursing facility, that provider may initiate a call to Mobile Outreach through a dedicated pager. Pager coverage is 24 hours, 7 d/wk. If a resident in a nursing care facility falls, is observed with a musculoskeletal concern such as impaired use of an extremity, or is bearing the stigmata of injury in the form of bruising or swelling, a standard radiograph of the area in question is ordered by the facility staff. Radiographs are obtained on-site through a third-party provider offering portable imaging services. Radiographs are transferred electronically, in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner, to the Mobile Outreach team for review and consultation. When nonoperative intervention is indicated, management is initiated or directed by phone. If operative intervention is required, coordination begins immediately for the direct admission of the patient to Regions Hospital's orthopedic floor. Emergency department visits, where delays can be long, may thereby be bypassed. 15 Preoperative orders, necessary diagnostic testing, and even scheduling of the operating suite can occur before the patient arrives at the hospital (Figure 2).
Acute injury on-site visits. If a care facility resident sustains a musculoskeletal injury or fracture that can be well cared for within that facility (examples include distal radius fracture in a frail patient, minimally displaced proximal humerus fracture, or ankle fracture in a nonambulatory individual), the Mobile Outreach nurse practitioner travels to the facility to provide appropriate definitive care for the injury. Care may include cast, splint, or sling application. Appropriate follow-up and radiographic review can also be undertaken without the patient leaving the care facility. As above, this orthopedic care provision is necessarily supported by the availability of radiographs taken via portable equipment brought to the individual's room in the nursing facility or assisted living apartment. The potential stress and burden on the patient imposed by transportation to the hospital or clinic is thereby eliminated.
Procedural visits. Through the development of collaborative care relationships with primary care providers in many nursing care facilities, the nonemergent orthopedic needs of many nursing residents can also be met through Mobile Outreach. Many care facility residents have degenerative or acute conditions that may require orthopedic attention and intervention. Therefore, the Mobile Outreach nurse practitioner also schedules visits to facilities to provide care, such as cortisone injections for arthritic conditions, on-site splint, brace, or cast management, as well as appropriate osteoporosis or bone health consultation. [16][17][18]

Postoperative visits. As standard of care, orthopedic surgical patients are seen in clinic at 2, 6, and 12 weeks following their procedures. At these visits, wound checks and rehabilitation evaluations are undertaken. Radiographs are reviewed. The majority of older orthopedic trauma patients treated at Regions Hospital are discharged to a transitional care, skilled nursing, or nursing home facility. Return to clinic for these, often frail, individuals may cause physical, mental, and even financial hardship. Therefore, for those patients unable to easily return to clinic from a postacute care facility, the Mobile Outreach nurse practitioner travels to the care facility to complete postoperative evaluations. Once again, radiographs are preordered and obtained on-site prior to the Mobile Outreach provider visit. This arrangement frees the patient and his or her family from the burden of arranging transportation to the clinic, of waiting to be seen in clinic, and of being transported back to the nursing care facility.
Education and training. Treatment of Mobile Outreach patients within the various nursing care facilities results in an enhanced environment of shared learning and training. The nurse practitioner provides staff training on geriatric orthopedic concerns and best practices through small and large group educational presentations and hands-on skills labs.

Substantial direct and indirect benefits are generated by the Mobile Outreach nurse practitioner position and therefore, in our opinion, justify at least some organizational subsidy of such a program.
There are acknowledged inefficiencies when a single provider covers geographically diverse sites, such that the direct revenue generated by the position may not cover its full cost. However, there are many secondary and tertiary cost benefits realized across the organization that far outweigh the gap between position cost and revenue generated. As previously reported, 19 we estimate cost-of-care savings (in 2015 dollars) per encounter as shown in Table 1. Other cost benefits include the release of clinic slots for new patients and the utilization of advanced practice providers (APPs or NPs) to offset physician effort. A previously published financial analysis estimated an annual cost-of-care savings in 2015 of $197,283 for over 300 patients served through 530 encounters. 19 In addition to the financial resources to support such a program, an electronic medical record (EMR) system and access to portable X-rays are cornerstones of a Mobile Outreach Program. Within our program, the Mobile Outreach nurse practitioner utilizes the hospital EMR system (EPIC, Verona, Wisconsin) to input each referral request and to document all clinical encounters. In addition, electronic transfer and digital access through multiple portable radiographic providers allow the Mobile Outreach nurse practitioner to embed the X-rays taken at a patient's residence into the patient EMR and/or download them into the picture archiving and communication system (PACS).
Data Collection and Analysis
A retrospective review of the Mobile Outreach provider and patient records was undertaken for the time period of January 1, 2016, through December 31, 2016. The goals of the review were to determine the number of patient visits and procedures accomplished by a single Mobile Outreach nurse practitioner, the relevant patient demographics, the types of procedures and reasons for patient visits, and the number of facilities (or points of service) visited. Determination was also made whether the visit was the initial encounter with Mobile Outreach or if there were prior encounters for the same patient.
Patient demographics and relevant clinical characteristics were extracted from the EMR by manual review. Analyses and descriptive statistics were accomplished utilizing a simple Excel spreadsheet (Microsoft Inc, Redmond, Washington). The institutional review board was consulted for a determination that this quality reporting initiative did not constitute clinical research and therefore did not require ongoing institutional review board oversight and monitoring.
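For readers who prefer a scriptable equivalent of the spreadsheet analysis, a minimal pandas sketch is shown below; the column names and the four example rows are hypothetical stand-ins for the actual visit log.

```python
import pandas as pd

# hypothetical extract of the 2016 Mobile Outreach visit log
visits = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "age":        [84, 84, 91, 67],
    "visit_type": ["postop", "postop", "injection", "nonop_fracture"],
    "facility":   ["A", "A", "B", "C"],
})

patients = visits.drop_duplicates("patient_id")   # one row per patient
print("patients:", len(patients), "visits:", len(visits))
print("mean age:", patients["age"].mean(), "median age:", patients["age"].median())
print(visits["visit_type"].value_counts())        # reasons for visits
print(visits.groupby("facility").size())          # visits per point of service
```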
Results
Between January 1, 2016, and December 31, 2016, the Mobile Outreach program treated 458 patients (76% female) whose point of service was the patient's care setting (transitional care unit, assisted living, or skilled nursing facility). The mean and median ages of patients treated by Mobile Outreach were 81 years and 84 years, respectively (standard deviation = 14 years; range 25-107 years). Thirteen patients (3%) who received Mobile Outreach care were <50 years old. While this cohort would not be considered "geriatric," their needs (cognitive, medical, physical, and social challenges) were similar to those of the older population that the Mobile Outreach Program serves. Given these considerations, these patients were also accepted for Mobile Outreach care (Figure 3). The on-site services rendered included first-time visits for nonoperative fracture care in 100 (22%) patients, postoperative fracture follow-up care in 149 (33%) patients, injections for pain management of osteoarthritic conditions in 184 (40%) patients, and other orthopedic injury and postoperative care in 25 (5%) patients. One hundred forty-nine patients (33%) received more than one Mobile Outreach visit in the study period (range 2-7 visits), for a total of 689 care visits completed.
The 689 Mobile Outreach care visits occurred at 88 different care facilities with an average of 7 visits per location (range 1-57 visits per site) and included 6 visits to 4 patient homes. The various nursing facilities were located within a 30-mile radius of the hospital where the program is based. Of the 689 visits, 190 (28%) occurred at the "top 5" locations, all having at least 20 visits in 2016 (Figure 4).
Of the 458 patients, 361 (79%) were first-time Mobile Outreach patients during the reviewed 2016 time period, and 149 (33%) of the 458 were cared for on 2 or more occasions. The program had previously treated 97 (21%) of the patients prior to the 2016 study period. Of these 97 patients, 58 (60%) were provided cortisone injections for arthritic joint pain relief, 19 (20%) were seen for nonoperative fracture care and follow-up, 16 (16%) were evaluated for postoperative care (including 1 for total shoulder arthroplasty follow-up), and 4 (4%) were provided care for nonoperative, nonfracture musculoskeletal injuries.
An orthopedic provider from our practice initiated the Mobile Outreach visit in 209 (46%) patients, while the providers at the nursing facility contacted the Mobile Outreach nurse practitioner and requested a consultation in 240 (52%) patients. A Mobile Outreach visit was requested in 9 (2%) patients whose original care was provided for by an outside hospital, but whose follow-up care was requested of the Mobile Outreach program due to patient need or family/patient preferences.
Based upon the type of services rendered by the Mobile Outreach provider, and utilizing previously reported estimates for cost-of-care savings to the patient and/or payer, the total net estimated 2016 savings were $214,689 (Table 1). This included $69,600 in transportation savings and reduced service charges (for an APP/NP instead of a physician visit in clinic). Savings from visits avoided altogether, due to telephone consults with geriatrician providers on-site, totaled $12,925. Finally, avoidance of hospital and emergency room fees, due to on-site fracture care and direct hospital admission, totaled $132,174. Transportation savings alone comprised an estimated $55,580, or 26% of the total $214,689 in savings generated in 2016 (Table 1).
Of the 458 patients treated by the Mobile Outreach service in 2016, death had occurred in 14% (n = 66), according to EMR records at the time of this review.
Discussion
The aging of the population presents considerable challenges for the field of orthopedics. Fragility fracture rates are increasing. 1 Hip fractures, one of the most costly fractures in terms of morbidity and mortality, are expected to double in number in the next 50 years. 20,21 Most studies of elderly patients with fracture have focused attention on advances in perioperative surgical and medical care through integrated geriatric co-management models or fracture liaison services. 7,22,23 These geriatric orthopedic co-management and bone health strategies have been shown to decrease hospital length of stay, decrease surgical complications, improve in-hospital mortality, and facilitate appropriate osteoporosis management. 17,24 Little attention, however, has been focused on the impact of clinical interventions provided outside the hospital setting (and not specifically devoted to bone health), such as during the postacute phase of orthopedic care and beyond. 25 To our knowledge, on-site musculoskeletal care and consult services by geriatric orthopedic providers have not been previously described in any other published model.
An innovative model of orthopedic fracture care called Mobile Outreach has been successfully implemented in our health-care setting. The Mobile Outreach intervention strategy differs from usual orthopedic care and telehealth in orthopedics, 26 in that a geriatric nurse practitioner, also trained in orthopedic care, provides clinical, orthopedic focused care at the patient's place of residence-nursing home, assisted living, or skilled nursing facility. The preliminary results presented provide evidence of the feasibility and impact that such a program can have in the provision of orthopedic care to the residents of nursing care facilities.
In our urban setting, the Mobile Outreach provider clinic schedule allows for 6 to 8 patient encounters per day. In 2016, through our Mobile Outreach Program, we fielded orthopedic consultation calls from providers at nursing care facilities, saw postoperative patients, facilitated direct admission to the hospital, and accomplished on-site fracture care and joint injections for a total of 458 patients in 689 care visits. Either initial or follow-up fracture care was provided to over 250 patients. One hundred forty-nine of these individuals were postoperative patients. Older patients who had painful musculoskeletal injuries and conditions were spared the disruption of care transition and transport. They could stay in a familiar environment and keep their usual schedule. In other words, in a patient-centered manner, their orthopedic health-care needs were met.
New care models, such as the Mobile Outreach Program, have considerable relevance to orthopedists and especially to those who provide fracture care to elderly patients. The "Triple Aim" (a focus on patient satisfaction, quality of care, and cost) has become a greater focus for health-care organizations, providers, and patients. 17,27 Additionally, although the Center for Medicare and Medicaid Services mandate for the bundled payment of hip fracture care (surgical hip/femur fracture treatment) has been cancelled, efforts to coordinate and decrease the cost of hip fracture care are nearly inevitable. 28 Our report illustrates that the Mobile Outreach Program of orthopedic care for frail elderly patients at their place of residence is a feasible method of care delivery.
Although, in the primary care field, providing on-site medical care by geriatric nurse practitioners is widely accepted, 29 this approach is novel in the provision of orthopedic care. Also, more recently, Hospital-at-Home (HaH) programs have been trialed for certain limited diagnosis-related groups. [30][31][32] Although orthopedic care has not been described specifically, Federman et al reported that HaH bundled in-hospital and postacute care for 19 separate medical conditions resulted in an improvement in clinical outcomes and patient experience. 33 Through the Mobile Outreach program, orthopedic care rendered includes postoperative hip fracture care, which can specifically bring the most appropriate care to this vulnerable patient population. 34 This may reduce postsurgical emergency department visits, readmissions, morbidity, mortality, as well as overall medical care costs. Work is underway to evaluate these potential care process outcomes. This report did not aim to prove the efficacy of the Mobile Outreach service, but instead to provide a foundation to establish its feasibility and to summarize information which may be helpful in the planning of future clinical studies and/or programmatic planning. Further analysis is warranted to appropriately assess the full value of such a program in relationship to improved patient outcomes, cost of care, and patient experience.
Food Porn as Visual Narrative: Food Blogging and Identity Construction
Globalization has transformed the way modern society produces, prepares and consumes food. For the past twenty years, studies on the globalization of food have focused primarily on the effects of fast food industries on societies. The term 'McDonaldization' was coined by sociologist George Ritzer in his seminal work "The McDonaldization of Society" (100-7). A decade later, the topic remains the same but the focus has shifted to the East with the popularity of eastern cuisine, especially Japanese sushi in Western cultures (Carroll 451). Through the proliferation of fast food chains and sushi, new cultures of fusion food have emerged to diminish the social differences in eating. However, there has been a revival in class-based eating as in a recent phenomenon known as 'food porn.' Although it may sound outrageous, food porn is defined as 'food that is so sensationally out of bounds of what a food should be that it deserves to be considered pornographic' (McBride 38). This refers to the mouthwatering, highly-stylized images of food displayed in magazines, cook shows and social media that is meant to induce the desire to eat. A key feature of food porn is its fantasy-like, unattainable quality, as most home cooks can never reproduce the exact dish presented. As noted by critic Richard Magee in "Food Puritanism", ornamental cooking is only valued for its attractive surface appearance, but is fully divorced from its taste or nutritional aspect (26-38). It is about putting on a show; a culinary performance to stimulate the sight of the audience and whet their appetites. The line between fiction and reality is often blurred in such visual representations of food.
Certain food blogs such as Rasa Malaysia and Makansutra have gained status as national ambassadors of their respective countries.
The purpose of this paper is to examine the ways that self-identity and group identity are constructed by a Malaysian and a Singaporean food blogger through the visual narrative of food.
A semiotic reading is offered using the images of food as an "anchor" to crystallize identity.
With a focus on class-consciousness, I argue that food bloggers reproduce and reinforce ideologies promoting wealth, elegance, prestige, elitism and capitalist-consumerism. As a result, the identity of food bloggers is synonymous with a narrative of affluence. To achieve a performance of glamour, they employ certain food photography techniques that create the "postcard effect" to satisfy the pleasure of the tourist gaze. I will analyze the deliberate framing of the tourist gaze in food blogs as a form of aggressive image advertising to represent identity.
Finally, the social implications of food blogging will be discussed in detail. This study will lay the groundwork for future research on Southeast Asian food culture, thus contributing to the fields of sociology, linguistics and media studies.
Data collection
Due to the millions of food blogs on the World Wide Web, it is impossible to analyze all of them, so criteria were required for selection. A sample of one Malaysian and one Singaporean food blog was chosen for the case study: Bangsar Babe and Miss Tam Chiak. These two blogs have established a cult following and are featured in various publications such as newspapers, magazines and radio shows. Both bloggers are women in their late twenties who live in the city, have an independent career and enjoy eating and travelling for food. Bangsar Babe and Miss Tam Chiak were selected on the basis of i) variety: so as to represent a range of food types, recipes and food places; ii) relevancy: the two blogs are actively updated at least once a week to ensure currency of posts; and iii) consistency: both blogs are written in English and each food review is followed by accompanying photographs and food descriptions. Only blog entries from March to November 2013, covering a period of eight months, were analyzed for this study.
Data analysis
The overall layout and design of both blogs were first evaluated to ascertain the general atmosphere of the blogs. In particular, the blog headers were examined as a key section expressing the personality of the bloggers. Then images of food alongside their textual descriptions were decoded for their semiotic messages. Roland Barthes talked about "submitting the image to a spectral analysis of the messages it may contain" (32) in his agenda-setting work, "The Rhetoric of the Image." Using this approach, the connotative or ideological messages of food porn were analyzed by reading the signifiers arranged in a series of chronological photographs captured by the bloggers. Images of food functioned as an anchor to identity.
According to Barthes in The Responsibility of Forms, anchoring is "a means of control, it bears a responsibility, confronting the projective power of the figures, as to the use of the message" (29).
Food porn anchored the narrative of identity by crystallizing the ideology of the bloggers through visual metonymy. The metonymic images of food presented in bits and pieces throughout the blogs were read like comic strip narratives of the self. Fragments of these images were pieced together like a jigsaw puzzle to form the larger narrative representing the identity of the bloggers.
Identity became a collage of "postcard" snapshots depicting food that was assembled together to construct the dominant narrative of the individual blogger.
The strategies that were used to produce the "postcard" effect of food were discussed.
Using Laura Mulvey's theory of technology as visual pleasure and the notion of the tourist gaze, food porn was analyzed as a screen projecting attitudes towards places and people which shape the self-image of the bloggers in the eyes of the public. The pleasure of the tourist gaze was gratified through the vicarious experience of watching graphic simulations of food and eating without being physically there. The impact of such indirect participation from the tourist gaze was summarized.
Findings and Discussion
Layout and design
The sole author and mistress of Miss Tam Chiak is Maureen Ow, a Singaporean Chinese woman who began recording her gastronomic adventures in 2007. Her blog has a simple, basic layout with a plain white background which makes it easy to navigate for readers. A huge blog header displays rotating images of food which allows her visitors to feast their eyes on a variety of meals and drinks. These images are changed once every three days to keep things fresh. The title of her blog, "Miss Tam Chiak", is written in large font size 72 with a brief description that reads "Food. Travel. Photography." Everything in her blog is big, from the gigantic texts to the supersized pictures which occupy the center spread (see fig. 1). Images of decadent food eclipsing the written text have the effect of focusing the reader's eyes on the food and glossing over the words, which seem less important. The blog design has an overall effect of being modern and minimalist in appearance. In the biography section, Maureen describes herself as "tam chiak", the Hokkien adjective for "greedy", as the language reflects her identity as a Singaporean Chinese. Although the blog was originally written in Chinese, she has changed the language to English for the convenience of her readers who are not proficient in Chinese.
Maureen readily adapts her blog according to the needs of her audience and so negotiates her own identity through their feedback. Interestingly, she claims food blogging is a "serious hobby" and not a job. This creates the impression that she keeps a casual diary of what she eats on a daily basis. Furthermore, she distances herself from any food establishment by asserting that she is not a professional, which reinforces the idea of her being a casual food lover. In contrast, Bangsar Babe takes a more commercialized approach to blogging. The author is Tiong Sue Lyn, a Malaysian Chinese woman who is an ex-beauty pageant winner. Her blog is divided into three columns with multiple sections and links. The blog header shows a photo of her clad in a bikini beside the beach which rotates to display images of her dressed in branded clothing posing with handbags, shoes and accessories (see fig. 2). The motivation behind the blog header is to advertise her sponsors. Interestingly, the use of her body, which functions as a model figure to display branded apparel, suggests there is a homology between the human body and food. Both are consumer objects which serve as fashion statements, making fashion synonymous with food. There are also random advertisements placed across the site for food-related products. The title of the blog is derived from the author's place identity, as she has stayed in Bangsar since she was two years old. Originally a humble food diary in 2007, Bangsar Babe has expanded to include lifestyle posts. The intertwining of food porn, travel and fashion is made explicit in the blog. The author has also posted a series of images depicting her achievements in the media, including being crowned "Miss Popular" in Miss World Malaysia 2009. The layout and design of Miss Tam Chiak and Bangsar Babe express two very different personalities. While the former identifies with a common dialect spoken by the Chinese community in Singapore, the latter identifies with a glamorous, upscale part of Kuala Lumpur which she calls her hometown. The distinctive approaches that they take in their blogs have a specific influence on the way foods are visualized in their works.
Food porn
The whole concept of Miss Tam Chiak revolves around the blogger's amazing photography skills.
While her writings are brief and succinct, her food photos are bright, vivid and large so that they instantly grab the attention of the eyes. In a post entitled "The Seafood International Celebrates 30th Anniversary with 30-Course Dinner" dated 16 October 2013, thirty seafood dishes were compiled into a colourful mosaic of spectacular images (see fig. 3). This mosaic served as a visual summary of what Maureen ate in a restaurant; its images anchored her narrative of the dinner she enjoyed that night. She presented all thirty dishes in chronological order, beginning with the appetizers and entrée, proceeding to the main courses and ending with desserts and free-flowing wine. More impressively, a choice of fine wines and liquors was displayed in a glass cabinet stuffed with ornamental grapes (see fig. 4). Seafood and wine are a rich combination that projects a cultured, metropolitan and sophisticated gourmet. Wine is a signifier that represents the intellectual and romantic qualities of the Europeans.
Maureen arranged her photos to follow a metonymic structure and pattern. Her food reviews typically begin with a picture of the venue she is dining in, a posh-looking bar, a hotel or a restaurant, then a step-by-step visual account of each meal she has tasted, before ending with a dessert or beverage. Venues play an essential role in constructing a narrative of elegance in her pictures, as an upscale-looking place connotes refined taste. In a post entitled "Penang Buffet at Princess Terrace, Copthorne King's Hotel" dated 5 August 2013, the author went to a Singaporean hotel to eat hawker food from Penang. An image of the hotel's interior with carpet, tables and soft lighting was first shown to set the atmosphere. Under the camera, something as common as Penang hawker food was cast in a new, seductive light. Foods such as Char Kuey Teow, Rojak, Laksa and Teh Tarik received a sensual, provocative portrayal. The camera zoomed in on the chunks of succulent prawns on noodles, blurring out the background details and directing our gaze to the juicy meat. As a result, Penang hawker food becomes a lavish feast.
Even unhealthy fast food was glorified as something exotic. Hamburgers were shown oozing with eggs and dripping with cheese. The author described them as "the sauciest, juiciest, tastiest bestest burgers around, the crispiest, freshest, hottest, bestest fries around and the chilliest, thickest, creamiest, bestest milkshakes around" ("Tripple O's at Orchard Towers", 3 Oct 2013).
The over-the-top textual descriptions reinforced the message that only the blogger would have the privilege of eating the best burger in the world. It is special and unobtainable.
But the blogger's obsession with food goes beyond taste. There is a strong aesthetic appreciation of her meals and she goes to great lengths to display food artistically. In "Ezoca: Kaiseki Ryori", a collection of Japanese bowls was arranged on a tray like a modern art piece, while "Exquisite Dynasty Feast at Summer Palace, Regent Singapore" showed strips of abalone and mushrooms placed together to form a Chinese painting. This concern for beauty in food reflects the image of an individual who can afford to invest lots of time, energy and resources into pursuing culinary perfection. The author is constructing the narrative of a lady of leisure who spends her days consuming rare, exclusive foods in hotels and restaurants on an almost daily basis, judging by the frequency of her blog posts.
This identification with leisure extends to the domestic kitchen setting. Sometimes Maureen posts photos of her home cooking in her "Recipe" section. In "Recipe-Healthy Strawberry Parfaits" dated 8 October 2013, she showed a metonymic sequence of the process of making healthy parfaits. First, she displayed the fresh ingredients. Next, she posted a picture of cereals in a glass jar, followed by slices of strawberry embedded in yogurt. Finally, the finished parfait looked feminine in pink. Besides anchoring her gender identity through domesticity, Maureen used the "Recipe" section to identify her friends and relatives. The various images of her cooking dinner for Grandpa and baking birthday cakes for her friends merge to build a dominant narrative of a sweet, caring, filial, friendly, family-oriented and health-conscious Asian woman. Relationships with loved ones are explored through the medium of food: she lovingly prepares special dishes for friends and family. Her preoccupation with healthy eating also speaks volumes about her desire to take care of her body, and this promotes the ideology of healthy living, which is a concern of the educated class.
Unlike her Singaporean counterpart, Bangsar Babe is officially involved in the advertising and publishing industry. Sue Lyn's food photos contain a signature of her website, Bangsarbabe.com, for the purpose of trademark and copyright. Therefore, every image is labeled as intellectual property. Furthermore, each picture is uniquely attached to a caption which provides a textual description of it. As a brand ambassador, Sue Lyn is more likely to insert herself into the sequence of photographs when doing a food review. For example, in an entry called "King Cole Bar, the St. Regis Bali Resort" on 1 April 2013, the blogger was seen having tea in a luxury resort in Bali. The venue was exhibited with a caption that reads "Elegant and Sophisticated." The blogger herself is shown drinking a cup of tea in a white dress (see fig. 5).
The caption credits her sponsor for the clothes. The photo was geared towards displaying fashionable clothing that matched the ambience of the café. In another post, "Fine Dining at Kayuputi, St. Regis Bali" dated 24 April 2013, she is seen enjoying a glass of wine, then carrying a shopping bag while dressed in a matching frock and hat. The insertion of Sue Lyn's physical self into her food pictures creates the narrative of a consumer "package" that is all-inclusive. Dress, hat, shopping bag, café, wine and body are linked in a metonymic chain of signifiers. The ideology of a modern hedonistic lifestyle is crystallized through the image.
However, the author wrote about facing pressure from her audience to continue reviewing food. Responding to feedback from a disappointed reader, she decided to satisfy his demands by posting a series of food images in "Random food updates," 6 September 2013. The photos supposedly depict what she ate over the past two weeks: Mediterranean seafood, wine, dessert, kimchi, pasta and latte. The author was careful to select an assortment of hybrid, upscale food that represents a global tastebud. The performative aspect of her pictures reflects the blogger's identity as an idol who entertains her audience with food. Thus a celebrity-fan relationship is enacted between the blogger and her readers through this performance of glamour.
The Tourist Gaze
The gaze of the tourist is characterized by a simulated experience of visiting a place without engaging the self in the real political or cultural complexities of the place (Sadler and Haskins 195). In Bangsar Babe, the author maps out her foodie adventures according to location, compressing the realities of Malaysia as a country into a collage of postcard food images. The narrative of Malaysia as an exotic melting pot is firmly anchored in the visuals of hybrid food presented. Most importantly, she directs the tourist gaze to a unique aspect of Malaysia, which is street food culture. Sue Lyn often reveals obscure food stalls that are known only to the locals living in that area, therefore assuming the role of a native informant. No food is too humble to be blogged about. She goes so far as to introduce "mixed economy rice" found in the section of Kuala Lumpur called Pudu so that readers, put in the position of tourists, can experience honest, "authentic" Malaysian food by looking at her photos. The myth of authenticity is a driving force behind her blog. However, even in the midst of presenting street food, the sequence of food images is sometimes interrupted with pictures of the blogger posing in fashionable clothing.
Conversely, Miss Tam Chiak makes use of the author's advanced photographic skills to craft the narrative of Singapore as a modern, clean, progressive and attractive destination. She achieves this by letting her camera linger on images of clean streets, green scenery, posh bars, elegant hotels, modern infrastructure and orderly shopping malls. Only these positive aspects of the city are captured by her lens. The effect is similar to gazing at a pure and pristine Singapore.
All these pictures are included as part of her food reviews. The emphasis on buildings and venues in her food posts provides an opportunity to showcase the development of her city. For example, an entry headlined "Brunch at GRUB, Bishan Park" on 13 September 2013 narrates Maureen's experience of visiting a park for a meal. She displays photos of lush greenery, trees and flowers planted neatly (see fig. 7). This scenery anchors the message of an environment-friendly Singapore that preserves nature. Finally, she includes a picture of an adorable pet dog with clean, well-groomed fur standing in the park. This projects the myth of a warm, friendly, and animal-loving neighbourhood. The tourist gaze, directed by the camera lens, is able to experience her food journey in the park vicariously through the metonymic sequence of photos.
The blogger's photography actively constructs Singapore and creates a tourist-friendly narrative filled with a nationalist agenda (fig. 7, Miss Tam Chiak, 13 Sept. 2013).
Conclusion
The concept of the blog as a rotating billboard that displays identity has led to the birth of food porn as a form of aggressive self-image marketing. Bloggers compete to turn images of food and themselves into postcards so that they can attract the attention of consumers which, in turn, generates revenue. It does not matter if these images are genuine or faithful copies of the original. All that matters is that they need to be visually persuasive enough to instill the desire to consume. In the words of James Donald, "what matters about myth and magic is not their truth, but their effectiveness" (Visual Culture 79). To be effective, pictures of food may need to undergo a process of photo editing and other visual manipulations using technology. The relationship between the bloggers and food images, then, is that of a cult and myth. The former is an embodiment of consumerist-capitalist values. The latter justifies it as right and natural.
A number of implications have arisen from this study. Firstly, although globalization has been said to eradicate class differences in eating, the practice of food blogging has merely reintroduced and reinforced class-consciousness in the readers. Food porn is bourgeois art peddling the fantasy of high living to the masses. There is also the illusion of freedom of choice and the sense of empowerment from consuming such food images when all it boils down to is the idea of marketability. On the pretext of expressing their individuality, bloggers will only present the most visually appealing meals to the readers, thus limiting options and shaping a demand for these foods. Secondly, there are ethical and moral concerns regarding the role of food bloggers as figures of celebrity and authority with the power to influence the eating decisions of the public, especially the younger generations. As the self-appointed gatekeepers of gourmet culture, bloggers need to be held accountable for spreading information which gets passed on as fact. Finally, the distance between people and food consumption is growing wider and wider. Eating is no longer a spontaneous activity, but is done through the indirect, vicarious experience of watching graphic simulations of food. Modern society is being increasingly alienated by technologies of visual pleasure which erode the eating experience itself.
Considering the findings of this study, more research needs to be done in the future regarding the effects of absorbing food porn ideology over a certain length of time. A quantitative survey should be conducted to identify the types of netizens who are most likely to be food porn addicts and how much time they spend gazing at these images. Specifically, there is an urgent need to educate the public regarding the unconscious implications behind the seemingly innocuous practice of food blogging.
Modeling turbulent wave-front phase as a fractional Brownian motion: a new approach
This paper introduces a general and new formalism to model the turbulent wave-front phase using fractional Brownian motion processes. Moreover, it extends results to non-Kolmogorov turbulence. In particular, generalized expressions for the Strehl ratio and the angle-of-arrival variance are obtained. These are dependent on the dynamic state of the turbulence.
INTRODUCTION
Earth's turbulent atmosphere introduces spatial and temporal variations in the wave-front that lead to image degradation in optical systems. Astronomical telescopes, laser beam projection systems, and optical communication systems are limited by the presence of turbulence. In particular, the resolution of a ground-based telescope is notably modified. Generally, the telescope aperture is assumed to be smaller than the outer scale of the turbulence, so spatial frequencies of the turbulence with wavelength of the order of the aperture diameter, D, impart a random tilt on the incident wave-front. This wave-front tilt translates to simple image motion at the image plane. It is the dominant atmospheric aberration across the telescope pupil. Statistical characterization of the image motion is of paramount importance because of its implications on the design of adaptive optics systems.
In order to characterize temporally and spatially the statistics of the wave-front phase ϕ, several sensing methods have been used. 1 They use single, 2 double, 3,4 and multiple 5,6 (Shack-Hartmann) aperture sensors to measure the wave-front tilt. The centroid of the long-exposure images formed by each aperture is directly proportional to the slope of the wave-front across it.
As usual, the phase structure function is

$D_\phi(\rho, \rho') = \langle [\phi(\rho) - \phi(\rho')]^2 \rangle$,

where $\rho', \rho \in \mathbb{R}^2$, and $\langle \cdot \rangle$ stands for the average using some unknown probability distribution. 7 Whenever a fully developed Kolmogorov turbulence is present, under the small-perturbation and near-field approximations, the latter turns into the widely used

$D_\phi(\rho, \rho') = C^2_\phi \, (|\rho - \rho'| / r_0)^{5/3}$, (2)

where $r_0$ is the Fried parameter 9 linked to the spatial statistical properties of the refractive index, and $C^2_\phi$ is the phase structure constant, roughly near 6.88.
Interferometric measurements have corroborated the expression in Eq. (2). Many of these measurements have been made under the conditions mentioned above. But significant departures from the 5/3 exponent have been experimentally observed. 2,5,6,10,11 In particular, for near-to-the-ground measurements, exponents in the range (1, 5/3] have been determined experimentally. It is well-known that atmospheric turbulence is not always in its fully developed state, thus deviations from this simple model are likely: non-Kolmogorov turbulence. The phase structure function can then be generalized 6,12 to include these results as follows,

$D_\phi(\rho, \rho') = C^2_{\phi,\beta} \, (|\rho - \rho'| / r_{0,\beta})^{\beta - 2}$, (3)

where β is the exponent associated with the phase spectrum, $r_{0,\beta}$ is the generalized Fried parameter and $C^2_{\phi,\beta}$ is a constant maintaining consistency between the power spectrum and the structure function of phase fluctuations. If a Kolmogorov spectrum is chosen: β = 11/3, $C^2_{\phi,\beta} \approx 6.88$ and $r_{0,\beta} = r_0$; thus, we recover Eq. (2).
In order to model turbulence-degraded wave-fronts by Kolmogorov turbulence, Schwartz et al. 13 have suggested that these are fractal surfaces described by a fractional Brownian motion (fBm) process with Hurst parameter 5/6 and a fractal dimension equal to 13/6. Fractal properties are attributed to both the spatial and temporal behavior, and they are directly related through the Taylor hypothesis, or frozen turbulence approximation. The value of the Hurst parameter is in accord with the predictability of real stellar wave-front slopes. 14 Moreover, several algorithms for adaptive optics have been designed based on this statistical prediction. 13
FRACTIONAL BROWNIAN MOTION AND ITS ASSOCIATED NOISE
Usually, natural phenomena behaving randomly are labeled as noises. These noises are characterized through the estimation of their power spectrum, W(ν). [22][23][24] Empirically, an enormously wide range of these spectra have been observed to follow power-laws proportional to $|\nu|^{-\beta}$, for some exponent β. Better known as $1/f^\beta$-type noises, 25 they are classified according to the value of the exponent, e.g., Ref. [23, ch. 3].
Since its first formalizations in the early 1900s (independently modeled by L. Bachelier and A. Einstein), Brownian motion has caught the attention of physicists.
It is the most common representative of $1/f^2$-type noises; thenceforth, processes with such spectra are known as brown noises. On the other hand, the derivative of the Brownian motion is called white noise. The fact that it can only be defined as a distribution (in some probability space) is reflected in its tail-divergent power spectral distribution, i.e., β = 0. Afterwards, any process between these two, with power exponent 0 < β < 2, is referred to as a pink noise. The last category is for those processes with 2 < β < 3; they are considered black noises.
A (stochastic) process X(t) is self-similar with index H if, for any c > 0,

$X(ct) \stackrel{d}{=} c^H X(t)$,

that is, both processes are equal in distribution. The coloured noises are self-similar, with exponent H = (β − 1)/2, as the generalized Fourier transform of their spectra can show. 26 Also, this suggests the presence of a slowly decaying auto-correlation, and thus of memory.
Nevertheless, this colour classification is rather rough. Knowledge of the power spectra alone is insufficient to create stochastic processes modeling the randomness of quantities observed in the real world. As these random quantities tend to appear in dynamics equations, other properties are needed to give them sense, e.g.: the bimanual rhythmic coordination differential equation, 22 the Black-Scholes market equation, 27 the ray-optics equation, 28 etc.
Because of all the 'good' properties it endows, stationarity is desired. A process X is said to be (wide-sense) stationary if its mean and covariance are invariant under time shifts,

$E[X(t + \tau)] = E[X(t)], \qquad E[X(t + \tau) X(s + \tau)] = E[X(t) X(s)]$,

for any τ ∈ R. A self-similar process, however, cannot be stationary.
Since we have lost stationarity, and with it ergodicity, the Wiener-Khinchin theorem fails. On the other hand, the existence of stationary increments does not contradict the self-similar property.
A process with stationary increments is such that the probability law of its increments X(t + τ) − X(t) depends only on the lag τ. Natural phenomena exhibit, in general, non-gaussian behavior. Nevertheless, it is usual to ascribe these to a gaussian distribution since, in this way, they become analytically tractable, Ref. [23, p. 35]. Moreover, choosing this distribution leaves unaffected the memory properties described by the spectrum. That path will be followed here.
There is only one family of processes which are self-similar, with stationary increments, and gaussian: the fractional Brownian motions (fBm). 29 The normalized family of these gaussian processes, $B_H$, is the one with 30

$E[B_H(t) B_H(s)] = \tfrac{1}{2} \left( |t|^{2H} + |s|^{2H} - |t - s|^{2H} \right)$, (5)

for s, t ∈ R. Here E[·] refers to the average with gaussian probability density. The power exponent H is the Hurst parameter and its range is bounded. While the condition H > 0 guarantees their (mean-square) continuity, H < 1 avoids degeneracy. 26 Another more intuitive argument can be drawn: it is well-known these curves have fractal dimension equal to 2 − H. 31 Because they are embedded in the plane, H > 0.
On the other hand, continuous parameterized curves should have dimension greater than one, and thus H < 1.
These processes exhibit memory, as can be observed from Eq. (5), for any Hurst parameter but H = 1/2. In this case successive Brownian motion increments are as likely to have the same sign as the opposite, and thus there is no correlation.
Otherwise, it is the Brownian motion that splits the family of fBm processes in two.
When H > 1/2 the correlations of successive increments decay hyperbolically, and this sub-family of processes has long memory. Besides, consecutive increments tend to have the same sign; these processes are persistent. For H < 1/2, the correlations of the increments also decay, but exponentially, and this sub-family presents short memory. But since consecutive increments are more likely to have opposite signs, it is said that these are anti-persistent.
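The sign of the lag-one increment correlation cleanly separates these two sub-families, and it can be checked numerically. Below is a minimal sketch (added here for illustration, not part of the original paper) that samples a normalized fBm path directly from the covariance of Eq. (5) via a Cholesky factorization; the function name fbm_cholesky and all parameter values are illustrative choices.

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    """Sample a normalized fBm path on (0, T] from the covariance of Eq. (5):
    E[B_H(t) B_H(s)] = 0.5 * (|t|^2H + |s|^2H - |t - s|^2H)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (tt ** (2 * H) + ss ** (2 * H) - np.abs(tt - ss) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # small jitter for stability
    return t, L @ rng.standard_normal(n)

# The lag-1 increment correlation of fBm is 2^(2H-1) - 1: negative
# (anti-persistent) for H < 1/2, zero for Brownian motion, and positive
# (persistent) for H > 1/2, e.g. 2^(2/3) - 1 ~ 0.587 for H = 5/6.
for H in (0.3, 0.5, 5.0 / 6.0):
    _, b = fbm_cholesky(2000, H)
    inc = np.diff(b)
    rho1 = np.corrcoef(inc[:-1], inc[1:])[0, 1]
    print(f"H = {H:.2f}  lag-1 increment correlation = {rho1:+.3f}")
```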
Fractional Brownian motions are continuous but non-differentiable processes (in the usual sense), and only give spectral exponents between 1 and 3. Nevertheless, fBm processes can be generalized to allow derivatives. A simple dimensional inspection suggests that the latter should have spectral exponent equal to β = 2H − 1, thus covering the range −1 < β < 1.
Formally, continuous processes are not called noises since they can be integrated pathwise. That is, given any continuous process, its integral exists as a limit of area-approximating sums for any realization of the integrand. On the other hand, noises are not pathwise integrable: their integral is not the limit of area-approximating sums for any realization, i.e., there is no calculus in the classical sense.
The first construction of a Stochastic Calculus was made by Itô around 1940 for Brownian motions. Later, these results were extended to more general processes: semi-martingales 32 and infinite-dimensional Wiener (Brownian) processes. 33 The White Noise Analysis due to Hida, focused on the white noise rather than the Brownian motion as the fundamental entity, is of particular interest here.
As common sense suggests, the lack of conventional derivatives should be overcome through distributions. This is the basic idea underlying the white noise calculus.
The problem is thus to embed these distributions into the right probability space. Let φ be an element of the Schwartz space S(R) (the space of rapidly decreasing smooth real-valued functions), and let ω be any element of the dual S*(R). The white noise is then defined as the bilinear map W such that

$W(\varphi, \omega) = \langle \varphi, \omega \rangle$,

where ⟨·, ·⟩ is the bilinear pairing between S(R) and S*(R), and (·, ·) is the usual internal product in L²(R). The space S*(R) turns out to be a gaussian probability space and its elements ω the events. Moreover, the pairing coincides with the Itô integral, i.e., $\langle \varphi, \omega \rangle = \int_{\mathbb{R}} \varphi \, dB$, 34 and using its properties (Ref. [34, p. 15]) it is found that

$\langle \varphi, \omega \rangle = -\langle \varphi', B \rangle$;

that is, the white noise, as it was defined, is the derivative of the Brownian motion.
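This distributional identity can be sanity-checked on a simulated path. The sketch below is an illustration added here (not from the original paper): it compares the discretized Itô sum $\sum_i \varphi(t_i)\,\Delta B_i$ against $-\sum_i \varphi'(t_i) B(t_i)\,\Delta t$ for a Gaussian test function, with agreement up to discretization error.

```python
import numpy as np

# Check, on one Brownian path, that  sum phi(t_i) dB_i  ~  -sum phi'(t_i) B(t_i) dt,
# the discrete analogue of <phi, omega> = -<phi', B> for a test function in S(R).
rng = np.random.default_rng(1)
n, T = 200_000, 20.0
dt = T / n
t = np.linspace(-T / 2, T / 2, n, endpoint=False)
dB = rng.standard_normal(n) * np.sqrt(dt)
B = np.cumsum(dB)

phi = np.exp(-t ** 2)              # rapidly decreasing test function
dphi = -2.0 * t * np.exp(-t ** 2)  # its derivative

ito_sum = np.sum(phi * dB)          # discretized Ito integral
parts_sum = -np.sum(dphi * B) * dt  # discretized -<phi', B>
print(ito_sum, parts_sum)           # agree up to discretization error
```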
In the last decade different approaches have been given to extend the stochastic calculus to fBm. The range of persistent processes has been particularly fruitful, [35][36][37][38] not only because of its applications in practical problems but also for its regularity properties. Duncan et al. 39 successfully extended the white noise calculus to this range by means of a tool termed the Wick product. These ideas were recently picked up by Elliott and van der Hoek 27 who have given a complete calculus for all values of the Hurst parameter. A brief outlook based on their work is given in Appendix A.
WAVE-FRONT MODELING AND APPLICATIONS
Let ϕ be the phase difference between the average and perturbed wave-fronts. As was argued in the introduction, it is a realization of a fractal surface. Moreover, the small-perturbation and near-field approximations guarantee structure functions like Eqs. (2) or (3); that is, it has stationary increments. Now, as always, it is assumed that the process ϕ is gaussian; see for example Ref. [8, p. 293]. Its power spectrum is also observed to follow a power law; thus, it is self-similar. At least this is valid within the inertial range, which is limited by two characteristic scales: the outer and inner scales, $L_0$ and $l_0$ respectively.
Now, let $\tilde{B}_H$ be the isotropic fractional Brownian motion (ifBm). It is gaussian, self-similar and, under condition (B.3) (given in Appendix B), has stationary increments. Therefore, we can define the generalized phase difference as

$\phi(\rho) = C_\phi \, \tilde{B}_H(\rho / r_0)$, (7)

where $C_\phi$ is defined as in Eq. (2), and H = 5/6 in the Kolmogorov turbulence case.
Its structure function is

$D_\phi(\rho, \rho') = C^2_\phi \, E[(\tilde{B}_H(\rho / r_0) - \tilde{B}_H(\rho' / r_0))^2] \simeq C^2_\phi \, |(\rho - \rho') / r_0|^{2H}$, (8)

where the last step is made under the condition $|(\rho - \rho') / r_0|^{3/2} \ll 1$, which guarantees this process has stationary increments (see Appendix B). Observe that, since the Fried parameter can be interpreted as the diameter of the coherence area of the perturbed wave-front, the last restriction is compatible. As was stated earlier in the introduction, the structure function power exponent is restricted to the range (1, 5/3] for near-to-the-ground measurements. Therefore, the Hurst exponent is confined to 1/2 < H ≤ 5/6.
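As a numerical aside (not part of the original paper), the exponent 2H of Eq. (8) can be recovered from the empirical structure function of a sampled path. The sketch below reuses the hypothetical fbm_cholesky sampler shown earlier, taking a one-dimensional cut of the phase with $C_\phi = r_0 = 1$.

```python
import numpy as np

# Fit the log-log slope of the empirical structure function
# D(tau) = <[phi(t + tau) - phi(t)]^2> ~ tau^(2H), cf. Eq. (8).
t, phi = fbm_cholesky(4000, H=5.0 / 6.0, seed=3)  # sampler sketched above
lags = np.arange(1, 60)
D = np.array([np.mean((phi[k:] - phi[:-k]) ** 2) for k in lags])
slope = np.polyfit(np.log(lags), np.log(D), 1)[0]
print(f"fitted exponent = {slope:.2f}  (expected 2H = {2 * 5 / 6:.2f})")
```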
A. Strehl ratio
The Strehl ratio S quantifies the quality of beam propagation through the turbulent atmosphere. Considering a circular aperture of diameter D receiving an optical signal it was shown: 12

$S = \frac{16}{\pi} \int_0^1 dx \, x \, [\cos^{-1} x - x (1 - x^2)^{1/2}] \exp[-\tfrac{1}{2} D_w(D x)]$,

where $D_w(\rho)$ is the wave structure function. In the near-field approximation the wave structure function is replaced by the phase structure function. Using Eq. (8) this leads to

$S_H = \frac{16}{\pi} \int_0^1 dx \, x \, [\cos^{-1} x - x (1 - x^2)^{1/2}] \exp[-\tfrac{1}{2} C^2_\phi (D x / r_0)^{2H}]$.

In the case of Kolmogorov turbulence (H = 5/6) the well-known expression with exponent 5/3 is recovered. 41 Remember that, for small phase aberration $\sigma^2_\phi \ll 1$, the Strehl ratio can be expressed as a function of the phase variance: 42 $S \simeq \exp(-\sigma^2_\phi)$. This formula implies that the normalized intensity is independent of the nature of the aberration and is smaller than the ideal unity value by an amount proportional to the phase variance. Under the definition in Eq. (7), it is

$\sigma^2_\phi(\rho) = C^2_\phi \, |\rho / r_0|^{2H}$.

Therefore, the quality of beam propagation is set not only through the Fried parameter: the Hurst parameter is relevant as well.
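For orientation, the aperture-averaged integral reconstructed above can be evaluated numerically. This is an added sketch under the stated reconstruction (assuming $C^2_\phi = 6.88$), not a calculation from the original paper; it shows how S degrades with D/r₀ and how the Hurst parameter shifts the curve.

```python
import numpy as np
from scipy.integrate import quad

def strehl(d_over_r0, H, C2_phi=6.88):
    """Aperture-averaged Strehl ratio for D_phi(rho) = C2_phi (rho/r0)^(2H)."""
    f = lambda x: (x * (np.arccos(x) - x * np.sqrt(1.0 - x * x))
                   * np.exp(-0.5 * C2_phi * (d_over_r0 * x) ** (2.0 * H)))
    val, _ = quad(f, 0.0, 1.0)
    return 16.0 / np.pi * val  # normalization gives S -> 1 as D/r0 -> 0

for H in (0.6, 5.0 / 6.0):
    print([round(strehl(d, H), 4) for d in (0.1, 0.5, 1.0, 2.0, 5.0)])
```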
B. Angle-of-arrival variance
The path difference, or wavefront corrugation, of the wavefront surface from the average plane is simply

$z(\rho) = \frac{\lambda}{2\pi} \, \phi(\rho)$; (11)

as usual, λ is the wavelength. Light rays are normal to the wavefront surface within the framework of Geometric Optics. The angle-of-arrival at each normal plane is then the transverse derivative of z, and its variance can be written as a spectral integral over $W_\phi(\nu)$, the power spectrum of ϕ(ρ). It should be stressed that in this expression the Wiener-Khinchin theorem is applied; thence, the phase is modeled as a stationary random variable.
A divergent integral is obtained under the assumptions of Kolmogorov turbulence, small perturbation, and the near-field approximation. In order to make it summable, Roddier introduces high- and low-frequency cut-offs, $D^{-1}$ and $L_0^{-1}$ respectively. The result is a 'more realistic expression' where the aperture diameter and the turbulence outer scale are involved. Integrating it and considering $D \ll L_0$, a finite angle-of-arrival variance is obtained. A more precise relation was given by Tatarskȋ: 43

$\sigma^2_m \propto (D / r_0)^{5/3} (\lambda / D)^2$,

where the proportionality coefficient is in radians squared units. However, coefficients ranging from 0.342, 44,45 0.358, 3,4 to 0.365 46 have been given. It should be noted that these coefficients were obtained by using only phase differences, so it is necessary that the wave-front remain unchanged over the whole aperture. Then, the pupil size must be smaller than the inner scale of the atmospheric turbulence, i.e., $D < l_0$.
Now, according to Eqs. (7) and (11), we have that

$z(\rho) = C_z \, \tilde{B}_H(\rho / r_0)$,

where $C_z = \lambda C_\phi / 2\pi$. Therefore, the angle-of-arrival is given (up to the scaling by $C_z$ and $r_0$) by the fractional white noise $W_H$, as defined in Appendix A. The total variance of the angle-of-arrival, Eq. (15), follows from it. Let us calculate the fractional white noise variance using its chaos expansion, Eq. (A.5), and the Wick product in Eq. (A.7). Considering that statistically dependent variables are treated as if they were independent with respect to the average when Wick multiplied, $E[W_H \diamond W_H] = E[W_H] \cdot E[W_H] = 0 \cdot 0 = 0$; the last step holds because the noise is a zero-mean gaussian variable. Combining this with Eq. (15), the variance reduces to a sum that can be analytically calculated by employing the Fourier transform property of the Hermite functions; 47 for the intermediate steps one has to use the orthogonality and parity of the Hermite functions. First note from the resulting expression that the covariance is stationary. But if we set ρ = ρ′, the angle-of-arrival variance is divergent! Then let us follow Roddier's idea and introduce an adequate cut-off. Observe that for H equal to 5/6, it is $\sigma^2_{m,5/6} = 0.452552 \, \sigma^2_m$, where $\sigma^2_m$ is the variance obtained by Tatarskȋ. The cut-off à la Roddier notably reduces the value estimated by Tatarskȋ and others. As was pointed out earlier, the scales considered throughout this paper are above the inner scale: in particular, $D > l_0$. Then, the difference between these variances is plausible.
As we have seen, the removal of high frequencies is due to the finite size of the aperture. 48 In fact, since many scales are involved, this filtering must be introduced in order to smooth the noise out. Let us properly introduce this effect.
Define the smoothed fractional white noise as follows: given $\varphi_\rho(s) = \varphi(s - \rho)$, the smoothed noise $\langle \varphi_\rho, W_H \rangle$ is built up from the contributions of the white noise weighted by the function $\varphi_\rho$.
Therefore, the variance of the smoothed noise can be computed using $\varphi^*(\nu) = \varphi(-\nu)$. Finally, the generalized angle-of-arrival variance acquires the form of Eq. (19). Observe that the function φ is a distribution-like function: it must satisfy the normalization condition of Eq. (20). The natural choice for φ is the Fourier transform of a pupil with diameter D.
Its normalized version follows from the condition in Eq. (20).
Therefore, using Eq. (19) and the pupil filtering function, one finally obtains $\sigma^2_{m,5/6} = 1.04313 \, \sigma^2_m$. As we remove high frequencies the noise becomes more regular, and the wave-front variance approaches that of Tatarskȋ.
CONCLUSIONS
This paper introduces a stochastic process, the ifBm, to model the turbulent wave-front phase. Not only does it give the right structure function for non-Kolmogorov turbulence, but it also incorporates well-known statistical properties of the wave-front phase. Moreover, our model allows us to extend results for two relevant optical quantities: the Strehl ratio and the angle-of-arrival variance. The expressions for these quantities depend on the Hurst parameter, thus on the dynamic state of the turbulence. 11 Remember that this parameter is related to the site location where the measurements are made.
In particular, the expression obtained for the angle-of-arrival variance when H = 5/6, Eq. (21), is almost identical to the classical one found by Tatarskȋ when high frequencies are filtered out. Nonetheless, for a Hurst parameter different from the one above, a dependence on the wavelength appears. Up to now, it is unclear to us whether such a dependence exists for non-Kolmogorov turbulence, that is, whether the Fried parameter is independent of the Hurst parameter.
Using the formalism presented here, a wider range of power spectra can be studied, such as multifractal processes where the power exponent changes across frequency ranges.
Also, asymmetric power spectra give rise to self-affine surfaces: the phase ϕ is scaled differently depending on the chosen axis. Thus, two Hurst parameters can control this behavior, and this formalism is applicable again.
Finally, since phase distortions of a wave-front transform into amplitude distortions in the wave cross sections, a similar analysis should be possible for the amplitude.
APPENDIX A
The purpose of this appendix is not to give a complete exposition of the calculus developed by Elliott and van der Hoek, 27 but to introduce the tools used in this work.
First, let $M_H$ be an operator defined for any 0 < H < 1 such that

$\widehat{M_H \varphi}(\nu) = c_H |\nu|^{1/2 - H} \hat{\varphi}(\nu)$,

where the hat stands for the Fourier transform, $c^2_H = \Gamma(2H + 1) \sin \pi H$, and the function φ is defined as in Sec. 2. The generalized fractional white noise is

$W_H(\rho, \omega) = \sum_{n \geq 1} M_H \xi_n(\rho) \, \mathcal{H}_{\epsilon_n}(\omega)$.

Afterwards, the stochastic processes subject to the same probability space are defined through what is called the chaos expansion. Shortly, any stochastic process X can be written as the formal sum

$X(\omega) = \sum_\alpha c_\alpha \mathcal{H}_\alpha(\omega)$.

Here it is defined $\alpha! = \alpha_1! \alpha_2! \cdots \alpha_n!$, the factorial of the finite non-negative integer multi-index α, while $\mathcal{H}_\alpha(\omega) = \prod_{i=1}^n H_{\alpha_i}(\langle \xi_i, \omega \rangle)$ represents the stochastic component of the process, and it is built up through the Hermite functions $\xi_n$, with $H_n$ the Hermite polynomials. These functions form an orthogonal basis of $L^2(\mathbb{R})$.
Antifungal activity of alexidine dihydrochloride in a novel diabetic mouse model of dermatophytosis
Dermatophytosis is one of the most prevalent fungal infections and a major public health problem worldwide. Recent years have seen a change in the epidemiological patterns of infecting fungi, corresponding to an alarming rise in the prevalence of drug-recalcitrant dermatophyte infections. In patients with diabetes mellitus, dermatophytosis is more severe and recurrent. Evaluation of promising new antifungal drugs in the pipeline must be expanded to include dermatophytosis. To facilitate this effort, we established a clinically pertinent mouse model of dermatophyte infections, in which diabetic mice were infected with Trichophyton mentagrophytes on abraded skin. The diabetic mouse model was optimized as a simple and robust system for simulating dermatophytoses in diabetic patients. The outcome of infection was measured using clinical and mycological parameters. Infected mice with fungal lesions were treated with oral and topical formulations of terbinafine or topical administration of the FDA-approved and repurposed pan-antifungal drug alexidine dihydrochloride (AXD). In this model, AXD was found to be highly effective, with outcomes comparable to those of the standard of care drug terbinafine.
Introduction
Dermatophytes are a group of fungi that cause superficial infections limited to the stratum corneum of the epidermis or to the hair and nails. Trichophyton, Microsporum, and Epidermophyton are the most common causes of dermatophytosis (de Hoog et al., 2017). While infections caused by these fungi are rarely life-threatening, they cause considerable morbidity and cosmetic embarrassment, and impose a significant financial burden. Dermatophytes are the foremost cause of cutaneous mycoses worldwide, prevalent in 20%-25% (or ~2 billion) of the global population (Hay et al., 2014; White et al., 2014). The United States alone records over 5 million outpatient visits due to dermatophytosis, with an annual burden of almost one billion dollars in associated direct medical costs (Brown et al., 2012).
While dermatomycoses can affect immunocompetent individuals, patients with diabetes mellitus are particularly susceptible to this infection (García-Humbría et al., 2005; Muller et al., 2005). Poor glycemic control and obesity are the top reasons for the high rates of infection in diabetic patients (Venturini et al., 2011; Celestrino et al., 2021). Of particular concern is the changing clinicoepidemiological scenario of dermatophyte infections, especially in tropical parts of Asia and Africa (Rippon, 1985; Coulibaly et al., 2017; Adebiyi and Gugnani, 2020). For example, this disease has been deemed an "epidemic" in tropical countries such as India, which has the second-largest population of diabetics (Rajagopalan et al., 2018). Dermatophytosis in India is attributed to rampant and irrational use of over-the-counter antibiotics and corticosteroid drug combinations (Verma and Madhu, 2017; Poojary et al., 2019; Das et al., 2020). In particular, recent years have seen a worsening in disease severity, with multiple, larger circumscribed inflammatory skin lesions harboring an overabundance of fungal loads similar to a biofilm-like etiology (Poojary et al., 2019). Furthermore, there has been a perceptible epidemiological shift in species from the previously predominant anthropophilic T. rubrum to the T. mentagrophytes complex, a zoophilic fungus (Poojary et al., 2019; Adebiyi and Gugnani, 2020). Such a changing trend in etiology has paralleled the rate of increase in households harboring domestic pets, an important source of transmission (Segal and Elad, 2021). Making matters worse is the emergence of drug recalcitrance in these fungi, resulting in inevitable recurrences despite prolonged antifungal treatment (Martinez-Rossi et al., 2018; Khurana et al., 2019).
Topical and oral anti-dermatophytic drugs such as terbinafine and itraconazole have traditionally been the treatments of choice for ringworm infections (tinea corporis) caused by Trichophyton spp. (Hainer, 2003; Singh et al., 2020). However, acquired resistance to these antifungal compounds is a rapidly emerging problem in developing countries (Monod et al., 2021). Indeed, there is a valid need for the discovery of novel drugs with enhanced efficacy and safety profiles.
Animal models of dermatophytosis have proven invaluable in evaluating the efficacy of antifungal molecules and for mechanistic understanding of fungal pathogenesis. Although guinea pigs have been the most commonly used animals in studies of dermatophytosis due to their likeness to human skin (Saunte et al., 2008; Shimamura et al., 2012), their use suffers from several shortcomings, including the lack of knockout animals. To circumvent this limitation, studies have exploited the mouse model for experimental dermatophytosis to understand disease pathology and host response (Hay et al., 1988; Nakamura et al., 2012; Baltazar Lde et al., 2014). We and others have extensively reported on the advantages of using mice in fungal infections, including their affordability, relative ease of use, and amenability to induce various disease conditions such as immunosuppression or diabetes (Capilla et al., 2007; Sano et al., 2014; Uppuluri et al., 2018; Gebremariam et al., 2019; Gebremariam et al., 2021). Thus, in establishing a new animal model for dermatophytosis, we considered three main factors: diabetic predisposition for clinical relevance; the most prevalent zoonotic dermatophyte; and a model simple and affordable enough to use in research laboratories equipped for small rodents. Here, we optimized a diabetic mouse model of dermatophytosis, yielding a clinical picture akin to that observed in humans (Gebremariam et al., 2021). We further harnessed this simple, robust, and reproducible model to test the efficacy of a standard of care antifungal agent, terbinafine, and a new broad-spectrum antifungal molecule, alexidine dihydrochloride (AXD), recently discovered by our group (Mamouei et al., 2018). Our results show that the efficacy of AXD was analogous to that of terbinafine, resulting in complete clinical and mycological cure compared to infected untreated controls.
Assessment of progression of infection
The pathophysiology of diabetic mice infected with T. mentagrophytes (ATCC26323) was monitored over time. The overall success rate of infection in our studies was 100% among mice that became diabetic, based on clinical and mycological outcomes (Table 1; 17 of 20 mice were successfully infected). The mice that resisted infection turned out to be those that did not develop diabetes (<250 mg/dl urine glucose). The earliest signs of infection appeared between days 3 and 4 post-infection, when the skin of mice was visibly red and erythematous (Figure 1). The lesions gradually became worse by 7 to 13 days post-infection, and exhibited plaque-like erythema, edema, and hyperkeratosis. Fungal infection was confirmed by skin scraping followed by culturing, which revealed T. mentagrophytes in all animals on days 4, 7, and 13 (Table 1). Interestingly, while the hyperkeratotic lesions persisted up to day 17, the culture positivity rates were reduced to 60%, indicating that the infection was starting to resolve. Accordingly, by day 21, shedding of skin crusts led to a significant visible reduction in hyperkeratosis (p <0.01 versus other time points; Figure 1) and a further reduction in fungal growth to 40% (Table 1). The histopathology of uninfected skin sections from day 13 revealed a normal stratum corneum, while that of the infected skin displayed signs of inflammation characterized by acanthosis (thickening/hyperplasia of the epidermis), spongiosis (edema in the epidermis), and moderate cellular infiltration (Figures 2A, B). Supplementary Figure S1 shows the fungal filaments penetrating the hair follicles (arrow) and immune infiltration around the area of infection (star; also enlarged). Additionally, only the infected areas show acanthosis (two-sided arrows), while the uninfected loci of skin taper back to normal. The presence of marked dermal edema, acanthosis, and cellular infiltrates predominantly composed of mononuclear cells has been frequently elucidated in dermatophytosis (Hay et al., 1983; Nantel et al., 2002; Saunte et al., 2008).
Disease progression in the diabetic mouse model paralleled that of previous studies established in the immunocompetent guinea pig model. However, the presence of diabetes as a physiologically relevant predisposing factor induced far more severe lesions, peaking at days 7-10 versus days 10-15 in other published models (Saunte et al., 2008; Fontenelle et al., 2014). In fact, such erythematous and hyperkeratotic manifestations are frequently witnessed in diabetic patients or those receiving immunosuppressive corticosteroid treatment (Fraga-Silva et al., 2015; Poojary et al., 2019). In a parallel control arm, when immunocompetent mice were infected with the dermatophyte, they displayed significantly milder erythema and dryness (Supplementary Figure S2A), with only 10% of the mice culture positive and clearance of fungi by day 17 (data not shown).
Hyperglycemic mice have previously been used to investigate the host response to dermatophytosis (Venturini et al., 2011; Fraga-Silva et al., 2015; Almeida et al., 2017). However, these studies did not elaborate on the clinical picture of infection or the amenability of the model to evaluate drug efficacy. Besides, all these studies induced type I diabetes using the drug alloxan, which is toxic to the pancreas and can lead to severe debilitation and mortality in mice. Our studies used streptozotocin, which has a high inductive capacity, less toxicity, and more specificity for pancreatic beta cells than alloxan (Lenzen, 2008).
Figure 1 caption: Clinical picture of dermatophytosis by T. mentagrophytes on the skin of mice and efficacy of antifungal drugs. Skin on the back of diabetic mice was infected with 1 × 10⁷ conidia and the clinical picture of infection was monitored over time. Arrows indicate redness and edema. Infected skin at day 7 was also treated topically with AXD (20 µg) or terbinafine (1% topical, or oral gavage 75 mg/kg). Observe the complete clearance of hyperkeratosis and redness post-treatment with both drugs.
Testing efficacy of antifungal compounds in the diabetic mouse model of dermatophytosis
We also used the developed diabetic mouse model for preclinical evaluation of alexidine dihydrochloride (AXD), an FDA-approved repurposed molecule, recently identified by us to have a broad-spectrum activity against pathogenic fungi (Mamouei et al., 2018). In this study, the MIC80 of AXD in vitro was determined to be 0.32 µg/ml against T. mentagrophytes and 0.64 µg/ml versus T. rubrum, two of the most commonly isolated dermatophyte species from tinea infections (Supplementary Figure S2B). This dose of AXD has previously been shown by our laboratory to be effective against other human fungal pathogens, including Candida albicans and Cryptococcus neoformans (Mamouei et al., 2018). The MIC of terbinafine against T. mentagrophytes was 0.008 µg/ml, a concentration previously shown by us and others to be effective against the fungus.
Since the peak of infection in this model was between days 7 and 12, mice were treated with antifungal drugs for 6 days starting on day 7 of the infection. To facilitate the delivery of AXD onto the skin of mice, we followed a strategy previously reported by us, where AXD was incorporated into an in situ gelling formulation composed of 20% w/v P407 and 1% w/v Poloxamer 188. We and others have demonstrated that thermosensitive P407 hydrogels can be effectively used for the intra-oral and intra-vaginal application of nanoparticles without affecting their inherent properties and release (Date et al., 2012; Date et al., 2015; Chen et al., 2021; Liu et al., 2021). Furthermore, studies have shown that Poloxamer 407 thermosensitive hydrogel can also potentiate delivery and sustained release of antimicrobials for improved efficacy against microbial biofilms (Bernegossi et al., 2020; Lp et al., 2020; Liu et al., 2021).
Figure 2 caption: Histopathological analysis of skin. Skin biopsies obtained from uninfected, infected, or drug-treated mouse skin were fixed, sectioned, and stained with H&E plus PAS stain. (A) shows the structure of intact skin from the uninfected control; (B) exhibits infection and the presence of extensive hyphae on the epidermal layer; infection causes acanthosis (two-headed arrow) and spongiosis (arrow); (C, D) show complete clearance of hyphae from the skin and regeneration of the stratum corneum (although acanthosis is still observed). Scale bars as indicated in µm.
Hence, P407 thermosensitive gel containing AXD was deemed suitable
for topical use. Terbinafine (TER) was used as the positive control since it is the standard of care for the treatment of dermatophytosis (although resistance to terbinafine is emerging) (Babu et al., 2017; Nenoff et al., 2020).
Oral and topical applications of terbinafine as well as topical application of AXD yielded similar mycological and clinical clearance of infection. On day 7 post-treatment (i.e., day 13 post-infection), lesions were completely healed, with a striking reduction in infection post topical treatment with the two drugs (Figure 1). Besides the visual inspection, the efficacy of the two drugs was also confirmed by monitoring clinical and mycological outcomes. Whereas the infected and untreated mice demonstrated lesioned skin regions and culture positivity, mice treated with AXD and TER displayed significantly reduced erythema (>83%-91% efficacy; p <0.0001; Table 1) and a complete absence of fungal growth on culture, indicating 100% mycological efficacy post-treatment (Table 1). Histopathological analysis on day 13 of infected skin treated with topical terbinafine or AXD confirmed complete clearance of fungi from all mice (Figures 2C, D; Supplementary Figure S3). Even after scrutinizing multiple sections from skin specimens of several different mice, the worst-case scenario found was a single focus of very few residual hyphal cells (Supplementary Figure S2C). Considering that these mice were culture negative, it is possible that these filaments are inviable. Alternatively, these isolated hyphal strands may have escaped treatment and could therefore be considered "persister cells" that could reinitiate growth and cause a relapse of infection or drug recalcitrance. Despite treatment with the two drugs, hyperplasia and edema of the epidermis were not reversed (Figure 2D, double-sided arrow). Such inflammation parameters have also been observed previously in the guinea pig model of dermatophytosis. Future studies could be undertaken to investigate if, or when, the epidermis reverts to its normal architecture, and whether infection reappears once treatment is stopped. Such is often the case in human infections, where recurrence occurs weeks or even months after the course of treatment is completed.
Our results for terbinafine efficacy are in agreement with the findings of Ghannoum et al., who demonstrated that both topical and oral preparations of terbinafine are 90%-100% potent against T. mentagrophytes in a guinea pig model (Ghannoum et al., 2004; Saunte et al., 2008). Terbinafine has the propensity to efficiently bind keratinocytes, rapidly penetrate the stratum corneum, and persist in the skin at concentrations multifold higher than its MIC in vitro (Shear et al., 1991; Faergemann et al., 1993). This is the first study to evaluate the efficacy of AXD in an animal model of dermatophytosis. AXD, a member of the bisbiguanide class of antiseptics, has been noted as an anticancer drug lead because of its apoptotic activity in vitro and in vivo (Yip et al., 2006). Furthermore, this compound has been tested as an antiplaque agent and mouthwash with the potential to be used in endodontic treatment to eliminate biofilms (Lobene and Soparkar, 1973; Spolsky and Forsythe, 1977; Kim et al., 2013; Ruiz-Linares et al., 2017). Interestingly, in all these applications, AXD is potent at concentrations manifold higher than the MIC80 demonstrated in this study. This accentuates the potential of AXD as a stellar antidermatophytic drug. Indeed, our previous report highlighted the potential of AXD as a broad-spectrum antifungal drug with activities against biofilms and azole-resistant fungi, while exhibiting low mammalian-cell toxicity (Mamouei et al., 2018). As a next step in this series of investigations, it will be important to examine the pharmacokinetics and biodistribution of AXD on the skin.
In conclusion, we have presented a simple, physiologically relevant animal model that mimics infections in humans with T. mentagrophytes and harnessed this model to unravel the stellar activity of a novel molecule, AXD, against dermatophytosis. This diabetic mouse model can be applied for efficacy testing of new antifungals against dermatophytes, evaluation of diagnostic candidates, or to study the less understood host response to dermatophytoses in the background of hyperglycemia.
Fungal strain and growth conditions
The dermatophyte strain Trichophyton mentagrophytes ATCC 26323 was used throughout this study. This is a virulent clinical isolate from an aggressive ringworm infection in a patient in Vietnam (Hashimoto et al., 1972; Hashimoto and Blumenthal, 1977; Suh et al., 2018). Dermatophytes were subcultured from the primary Sabouraud agar plate (containing 0.4 g/L cycloheximide and 0.5 g/L chloramphenicol; SD+ agar) to oatmeal agar medium to induce conidiation. Plates were incubated at 35°C for 7 days or longer until colonies developed abundant spores. The conidial spores were then carefully collected by gently flushing 5 ml of phosphate buffered saline (pH 7.4) on top of the colonies and aspirating the suspension into a sterile collection tube. Fungal spores were enumerated using a hemocytometer.
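As a side note, hemocytometer counts convert to a concentration through the fixed chamber volume. The snippet below is an illustrative sketch of that standard arithmetic; the counts and dilution factor are hypothetical, not data from this study.

```python
# Standard Neubauer-chamber arithmetic: each large square holds 0.1 uL,
# so cells/ml = mean count per large square x dilution factor x 1e4.
def conidia_per_ml(counts, dilution_factor):
    mean_count = sum(counts) / len(counts)
    return mean_count * dilution_factor * 1e4

counts = [52, 47, 55, 50]  # hypothetical conidia counts in four large squares
print(f"{conidia_per_ml(counts, dilution_factor=10):.2e} conidia/ml")
```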
Antifungal agents
Terbinafine for oral gavage treatment was obtained from Novartis (Summit, NJ), and terbinafine 1% cream (Lamisil™) was obtained commercially. Alexidine dihydrochloride powder was obtained from Sigma (St. Louis, MO). Both drug powders were dissolved at a concentration of 1 mg/ml in 10% DMSO. For the preparation of the alexidine thermosensitive gel, 10 mg of AXD was dissolved in 10 ml of water using a vortex mixer and ultrasonic bath. Poloxamer 407 (2 g) and Poloxamer 188 (100 mg) were then dispersed into the AXD solution with the help of a vortex mixer, and the dispersion was stored overnight in the refrigerator to dissolve Poloxamer 407 and Poloxamer 188, leading to an in situ gelling formulation containing AXD. The P407 in situ gelling formulation containing AXD was stored in the refrigerator until further use.
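The amounts above follow directly from the stated % w/v targets (20% w/v P407 and 1% w/v Poloxamer 188 in a 10 ml batch). A small sketch of that arithmetic, added here only for illustration:

```python
# % w/v means grams of solute per 100 ml, so grams = (percent / 100) * volume_ml.
def grams_for_w_over_v(percent_w_v, volume_ml):
    return percent_w_v / 100.0 * volume_ml

batch_ml = 10.0
print("P407:", grams_for_w_over_v(20, batch_ml), "g")  # 2.0 g, as in the text
print("P188:", grams_for_w_over_v(1, batch_ml), "g")   # 0.1 g = 100 mg
print("AXD :", 1.0 * batch_ml, "mg")                   # 10 mg at 1 mg/ml
```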
Animal model
All animal-related study procedures were compliant with the Animal Welfare Act, the Guide for the Care and Use of Laboratory Animals, and the Office of Laboratory Animal Welfare, and were conducted under IACUC-approved protocol 31789-01 by The Lundquist Institute at Harbor-UCLA Medical Center. Male ICR mice (20 to 23 g) were rendered diabetic with a single intraperitoneal injection of 210 mg of streptozotocin/kg of body weight in 0.2 ml of citrate buffer 10 days prior to the fungal challenge, as we have previously described (Ibrahim et al., 2003). This dose of streptozotocin causes diabetes in 80 to 90% of the injected mice. Glycosuria and ketonuria were determined with Keto-Diastix reagent strips (Bayer, Elkhart, Ind.) 7 days after streptozotocin treatment. Consistent with the establishment of diabetic ketoacidosis (DKA), diabetic mice had a decrease in blood pH from 7.8 (normal for mice) to 7.3-7.2, associated with increased levels of urinary glucose (moderate increase of 250 mg/dl to a high level of >1,000 mg/dl) and urinary ketone bodies (moderate levels of 2 to 4 mg/dl to a high concentration of ≥5 mg/dl) as determined by Keto-Diastix strip testing. Mice were anesthetized by i.p. injection of 0.2 ml of a mixture of ketamine at 82.5 mg/kg (Phoenix, St. Joseph, MO) and xylazine at 6 mg/kg (Lloyd Laboratories, Shenandoah, IA). The sedated mice were kept on heat pads (Fisher Scientific) which were prewarmed to 37°C. The backs of the mice were shaved using an electric shaver. As reported previously (Ghannoum et al., 2004; Saunte et al., 2008), a 3 × 3 cm area on the shaved skin was scraped gently with sandpaper to disturb the epidermis and infected with 5 × 10⁷ cells/ml of T. mentagrophytes conidia. For infection, 50 µl of PBS containing the spores was applied and rubbed on the skin of mice using a pipette tip until the application dried on the skin. The skin of uninfected control mice received PBS alone. For drug treatment, 7-day-infected mice were treated with oral terbinafine (75 mg/kg), or 1% topical terbinafine or AXD topical thermosensitive gel (20 µg) was applied to the entire surface of the skin. Treatment was continued once daily for 6 days.
Evaluation of the outcomes of infection
For clinical assessments, the skin of 20 mice was visually monitored daily for erythema (E) and crusting or hyperkeratosis (H) at various time points post-infection (0 to 17 days). Owing to hyperglycemia, the infection was severe and reproducible. The clinical parameter of erythema was scored blindly as follows: none = 0, mild or spotty = 1, well defined = 2, inflamed = 3. Dryness and crusting were scored similarly, with mild dryness at 1 and hyperkeratosis at 3. Thus, the clinical score ranged from 0 (no infection) to 3 (worst outcome). These scores were used to compare the efficacy of the antifungal drugs. Percent efficacy was calculated as 100 − (T/C × 100), where T = the total score of the treatment group and C = the total score of the untreated control (infected) group; the total score is the average clinical score of the animals in the same group. One-way ANOVA or Student's t-test was used to analyze the data with GraphPad Prism software (p < 0.05 was considered significant).
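The percent-efficacy formula above reduces to a one-line computation. The sketch below, with hypothetical group scores chosen purely for illustration, shows how it behaves:

    # Percent efficacy = 100 - (T/C * 100), where T and C are the mean clinical
    # scores of the treatment and untreated control groups, respectively.
    def percent_efficacy(treated_scores, control_scores):
        T = sum(treated_scores) / len(treated_scores)
        C = sum(control_scores) / len(control_scores)
        return 100 - (T / C * 100)

    # hypothetical clinical scores on the 0-3 scale
    print(percent_efficacy([0, 1, 1, 0], [2, 3, 3, 2]))  # -> 80.0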
Mycological assessments were performed in a repeat set of experiments with 20 infected mice. The skin was scraped from parts of the infected lesion using a sterile scalpel, and the skin dust, as well as 10 uprooted hairs, was collected in an empty Petri dish. Specimens were cultured on SD+ plates, incubated at 35°C for 3 days, and fungal growth was monitored. The presence of even one fungal colony was considered culture-positive.
Histopathology
Skin samples were obtained from three animals per group on day 13 of the study. Skin (~1 cm^2) was excised using sterile scissors from sacrificed animals. Skin samples were fixed in zinc-buffered formalin, embedded in paraffin, sectioned, and stained with H&E and PAS for visualization of the epidermis and fungal morphology, respectively.
In vitro MIC testing
An antifungal susceptibility assay was performed following the Clinical and Laboratory Standards Institute (CLSI) guidelines, document M38-A2, for filamentous fungi (Wayne, 2008). Conidial spores were isolated as described above and used at a final density of 1 to 3 × 10^3 cells/ml for testing. The concentration range evaluated was 0.001-0.5 mg/ml for terbinafine and 0.02-20 mg/ml for AXD. The minimal inhibitory concentrations (MICs) were defined as the lowest concentrations that led to complete inhibition of observable growth of T. rubrum and T. mentagrophytes after 4 days.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The animal study was reviewed and approved by The Lundquist Institute at Harbor-UCLA Medical Center.
Author contributions
SN performed all studies. AD provided the thermogel formulations of AXD. AI contributed to the concept, troubleshooting, and editing of the manuscript. PU was responsible for the concept, design, experimentation, writing, and editing of the manuscript. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Funding
We would like to thank the following agencies for their financial support to carry out this project: the NIH NIAID R01AI141794 awarded to PU, the NIAID 1R01AI141202-01 awarded to AI, and the NIH NIGMS P20GM103466 awarded to AD.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcimb.2022.958497/full#supplementary-material

SUPPLEMENTARY FIGURE S1
Hyphal invasion of the epidermis: Extensive hyphal invasion of the stratum corneum (thick arrow) on day 13 of T. mentagrophytes infection, which causes acanthosis (two-headed arrow) and infiltration of inflammatory cells (star) at the site of infection (see inset for a zoomed-in view).
SUPPLEMENTARY FIGURE S3
Skin histology images post drug treatment: Zoomed-out (2× magnification) images of the skin surface area to visualize clearance of infection in TER- and AXD-treated mice. Scale bars = 500 µm.
Recent Results on N=2, 4 Supersymmetry with Lorentz Symmetry Violation
In this work, we propose the N=2 and N=4 supersymmetric extensions of the Lorentz-breaking Abelian Chern-Simons term. We formulate the question of Lorentz violation in 6 and 10 dimensions to obtain the bosonic sectors of the N=2 and N=4 supersymmetries, respectively. From this, we carry out an analysis in N=1, D=4 superspace and, in terms of N=1 superfields, we are able to write down the N=2 and N=4 supersymmetric extensions of the Lorentz-violating action term.
Introduction
The formulation of physical models for the fundamental interactions in the framework of quantum field theories for point-like objects is based on a number of principles, among which are Lorentz covariance and invariance under suitable gauge symmetries. However, mechanisms for the breakdown of these symmetries have been proposed and discussed in view of a number of phenomenological and experimental pieces of evidence [1,2,3,4,5]. Astrophysical observations indicate that Lorentz symmetry may be slightly violated in order to account for anisotropies. One may then consider a gauge theory in which Lorentz symmetry breaking is realized by means of a term in the action. A Chern-Simons-type term may be considered that exhibits a constant background four-vector, which maintains gauge invariance but breaks the Lorentz space-time symmetry [1].
In the context of supersymmetry (SUSY), the issue of Lorentz violation has been considered in the literature in different formulations: in ref. [6], supersymmetry is presented by introducing a suitable modification of its algebra; in refs. [7,8], the N = 1 SUSY version of the Chern-Simons term is achieved by means of the conventional superspace-superfield formalism; in ref. [9], the authors adopt the idea of Lorentz-breaking operators. More particularly, considering the importance of extended supersymmetries in connection with gauge theories, we propose in this work N = 2 and N = 4 extended supersymmetric generalizations of the Lorentz-breaking Chern-Simons term in a 4-dimensional Minkowski background. We start off with the Chern-Simons term in (1 + 5) and (1 + 9) space-time dimensions and adopt a particular dimensional reduction method, see [10], to obtain the bosonic sectors in D = (1 + 3) of the N = 2 and N = 4 supersymmetric models, respectively. This is possible because in N = 1, D = 6 and N = 1, D = 10 supersymmetries, the bosonic sector has the same number of degrees of freedom as the bosonic sector of N = 2, D = 4 and N = 4, D = 4 theories, respectively [11]. Once the bosonic sectors are identified, we adopt an N = 1, D = 4 superfield formulation to write down the gauge potential and the Lorentz-violating background supermultiplets, to finally set up their coupling in terms of N = 2 and N = 4 actions realized in N = 1 superspace. The result is projected out in component fields, and we end up with the complete actions that realize the extended supersymmetric version of the Abelian Chern-Simons Lorentz-violating term.
The N = 2 Lorentz-violating term
The N = 2 supersymmetric generalization of the Abelian Chern-Simons Lorentz-breaking term can be built up using the superfield formalism in an N = 1 superspace background with coordinates $(x^\mu, \theta^a, \bar{\theta}^{\dot{a}})$ [10]. We use the fact that the bosonic sector of N = 1 supersymmetry in D = 6 matches that of N = 2 in D = 4, and we carry out the dimensional reduction to D = 4. The D = 4 Chern-Simons term originally proposed in [1] is
$$\mathcal{L}_{CS} = \frac{1}{2}\,\epsilon^{\mu\nu\kappa\lambda}\, v_\mu A_\nu \partial_\kappa A_\lambda . \qquad (2.1)$$
For D = 6, we propose the Chern-Simons term in the form
$$\mathcal{L}_{CS}^{(6)} = \epsilon^{\hat{\mu}\hat{\nu}\hat{\kappa}\hat{\lambda}\hat{\rho}\hat{\sigma}}\, T_{\hat{\lambda}\hat{\rho}\hat{\sigma}}\, A_{\hat{\mu}} \partial_{\hat{\nu}} A_{\hat{\kappa}} , \qquad (2.2)$$
where $\hat{\mu} = \mu, 4, 5$. The gauge field has 6 components, so we redefine it as $A_{\hat{\mu}} \equiv (A_\mu; \phi_1; \phi_2)$. The background tensor $T_{\hat{\lambda}\hat{\rho}\hat{\sigma}}$ has 20 components, but we can redefine it as $T_{\hat{\lambda}\hat{\rho}\hat{\sigma}} \equiv (R_{\rho\sigma}; S_{\rho\sigma}; \partial_\mu v; \partial_\mu u)$. The fields $R_{\rho\sigma}$ and $S_{\rho\sigma}$ have 6 components each, and the remaining 8 components are redefined as 2 vectors that we write as gradients of the scalar fields $v$ and $u$. The number of independent components is thereby reduced to 14.
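As a quick check on the counting quoted above, a totally antisymmetric rank-3 tensor in $D = 6$ dimensions has
$$\binom{6}{3} = \frac{6!}{3!\,3!} = 20$$
independent components, in agreement with the 20 components attributed to $T_{\hat{\lambda}\hat{\rho}\hat{\sigma}}$.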
The dimensional reduction is carried out by assuming that the fields do not depend on the $x^4, x^5$ coordinates. It is clear that $\epsilon^{\hat{\mu}\hat{\nu}\hat{\kappa}\hat{\lambda}\hat{\rho}\hat{\sigma}} A_{\hat{\mu}} A_{\hat{\kappa}} \partial_{\hat{\nu}} T_{\hat{\lambda}\hat{\rho}\hat{\sigma}} = 0$, so, integrating by parts, we obtain the reduced Lagrangian (2.3). In order to supersymmetrize the Lagrangian (2.3) using the superspace formalism, we have to define some complex fields that can be accommodated in superfields; these bosonic fields are defined in (2.4). Notice that we have introduced the new real scalar fields $t$ and $w$, bosonic fields that do not appear in the bosonic Lagrangian (2.3). These fields are necessary in the supersymmetric version to maintain the same number of degrees of freedom in the bosonic and fermionic sectors, since the scalar superfields are defined with complex scalar fields. Each tensor field, $R_{\mu\nu}$ and $S_{\mu\nu}$, appears as the real part of a complex tensor field whose imaginary part is given in terms of its dual field, as seen in (2.4) and as can be found in [14]. The vector superfield $V$ that accommodates $A_\mu$ in the WZ gauge fulfills the reality constraint $V = V^\dagger$. The scalar superfield $\Phi$ accommodates $\phi$, with complex conjugate $\bar{\Phi}$; these superfields obey the chiral conditions $\bar{D}_{\dot{a}}\Phi = D_a\bar{\Phi} = 0$. The scalar superfields $S$ and $R$ accommodate $s$ and $r$, and their complex conjugate superfields $\bar{S}$ and $\bar{R}$ satisfy the same chiral conditions. The spinor superfields $\Sigma_a$ and $\Omega_a$ contain $R_{\mu\nu}$, $S_{\mu\nu}$ and their corresponding dual fields, and their complex conjugate superfields $\bar{\Sigma}_{\dot{a}}$ and $\bar{\Omega}_{\dot{a}}$ are also chiral: $\bar{D}_{\dot{b}}\Sigma_a = D_b\bar{\Sigma}_{\dot{a}} = \bar{D}_{\dot{b}}\Omega_a = D_b\bar{\Omega}_{\dot{a}} = 0$. We notice that two extra background complex scalar fields, $\rho$ and $\varphi$, have to be introduced to match the bosonic and fermionic degrees of freedom. We are now interested in building up the supersymmetric action. To that end, we take into consideration the canonical (mass) dimensions of the superfields; based on these dimensionalities, and by analyzing the bosonic Lagrangian (2.3), we propose the supersymmetric action $S_{br}$ of (2.11). We observe that the action (2.11) is manifestly invariant under N = 1 supersymmetry, with the component-field content of the N = 2 supersymmetry accommodated in the N = 1 superfields. Indeed, the action (2.11) displays a larger supersymmetry, N = 2, realized in terms of an N = 1 superspace formulation.
This Lagrangian in its component-field version is given in (2.12); in that complete component-field action, we can point out the pieces corresponding to the bosonic action (2.3). We notice that this Lagrangian describes the bosonic sector (2.3) and its superpartners. We recover here the N = 1 supersymmetrization of the Chern-Simons term presented in [7], where the first term is the same as that proposed in [1], considering the constant vector as the gradient of a scalar. Since the gradient vector is constant, we have $s = \alpha + \beta_\mu x^\mu$. We see in the Lagrangian the presence of the bosonic real scalar fields $t = s + s^*$ and $u = r + r^*$ and the complex scalar fields $\rho$ and $\varphi$, which do not appear in the bosonic Lagrangian (2.3). These scalar fields appear in the supersymmetric generalization in order to keep the bosonic and fermionic degrees of freedom in equal number. We point out that the bosonic fields $D, D^*, f, f^*, h, h^*, g$ and $g^*$ all play the role of auxiliary fields. The bosonic fields $s, s^*, R_{\mu\nu}, S_{\mu\nu}, \rho, \rho^*, \varphi, \varphi^*, r, r^*$ and the fermionic fields $\xi, \bar{\xi}, \tau, \bar{\tau}, F, \bar{F}, \chi, \bar{\chi}, G, \bar{G}, \zeta, \bar{\zeta}$ act as background fields, also responsible for the breaking of Lorentz invariance.
The N = 4 Lorentz-violating term
In very close analogy with the procedure adopted in the previous section, we succeed in writing down the N = 4 model by means of a reduction from 10 to 4 dimensions. For D = 10, we propose the Chern-Simons term in the form
$$\mathcal{L}_{CS}^{(10)} = \epsilon^{\hat{\mu}\hat{\nu}\hat{\kappa}\hat{\lambda}\hat{\rho}\hat{\sigma}\hat{\delta}\hat{\tau}\hat{\beta}\hat{\gamma}}\, T_{\hat{\lambda}\hat{\rho}\hat{\sigma}\hat{\delta}\hat{\tau}\hat{\beta}\hat{\gamma}}\, A_{\hat{\mu}} \partial_{\hat{\nu}} A_{\hat{\kappa}} , \qquad (3.1)$$
where $\hat{\mu} = \mu, 4, 5, 6, 7, 8, 9$ is the space-time index and $I, J = 1, 2, 3, 4, 5, 6$ is an internal index. The background tensor $T_{\hat{\lambda}\hat{\rho}\hat{\sigma}\hat{\delta}\hat{\tau}\hat{\beta}\hat{\gamma}}$ has 120 components, which we redefine in terms of four-dimensional fields. We consider that the fields do not depend on the $x^4, x^5, x^6, x^7, x^8, x^9$ coordinates. We then have 6 antisymmetric tensor fields $R^I_{\rho\sigma}$ with 6 components each, and 15 vectors written as gradients of 15 scalars labeled by the antisymmetric index pair $I, J$. Therefore, the number of independent components is reduced to 52.
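Again as a consistency check, a totally antisymmetric rank-7 tensor in $D = 10$ dimensions has
$$\binom{10}{7} = \binom{10}{3} = 120$$
independent components, matching the count quoted for the background tensor.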
Next, we redefine the gauge field as $A_{\hat{\mu}} \equiv (A_\mu; \phi^I)$, $I = 1, 2, 3, 4, 5, 6$, where the $\phi^I$ are real scalar fields. Observing that $\epsilon^{\hat{\mu}\hat{\nu}\hat{\kappa}\hat{\lambda}\hat{\rho}\hat{\sigma}\hat{\delta}\hat{\tau}\hat{\beta}\hat{\gamma}} A_{\hat{\mu}} A_{\hat{\kappa}} \partial_{\hat{\nu}} T_{\hat{\lambda}\hat{\rho}\hat{\sigma}\hat{\delta}\hat{\tau}\hat{\beta}\hat{\gamma}} = 0$, we obtain, integrating by parts, the reduced Lagrangian (3.2). This is the bosonic sector of the action term to be supersymmetrized. To this end, it is necessary to define new fields to act as partners inside the superfields. The procedure is similar to that of the previous section, but now there are internal indices. In terms of superfields, we have two sectors. Based on dimensional-analysis arguments for the bosonic sector, as was done for the N = 2 case, and noticing that some superfields now carry an internal symmetry index, we propose the N = 4 supersymmetric action (3.3). We can observe that the action (3.3) is invariant under N = 1 supersymmetry and possesses a larger symmetry, the N = 4 supersymmetry, as well.
The N = 4 Lagrangian in its component-field version contains terms through which we can ascertain the presence of the bosonic sector (3.2). We notice that this Lagrangian fairly accommodates the N = 4 bosonic sector (3.2). We recover here the N = 1 and N = 2 supersymmetrizations of the Chern-Simons term presented in ref. [7] and in (2.12), respectively. The N = 4 Lagrangian is similar to the N = 2 one, but now with an internal index on some of the fields. The fields $\beta^I, t, u^{IJ}$ and $\rho^{IJ}$, which do not appear in the bosonic Lagrangian (3.2), were introduced in order to keep the bosonic and fermionic degrees of freedom in equal number. The bosonic fields $D, D^*, f^I, f^{*I}, h, h^*, g^{IJ}$ and $g^{*IJ}$ work as auxiliary fields. The bosonic fields $s, s^*, R^I_{\mu\nu}, \rho^I, \rho^{*I}, r^{IJ}, r^{*IJ}$ and the fermionic fields $\xi, \bar{\xi}, \tau^I, \bar{\tau}^I, F^I, \bar{F}^I, \zeta^{IJ}, \bar{\zeta}^{IJ}$ act as background fields breaking Lorentz invariance.
Concluding remarks and comments
In the important context of the study of the gauge-invariant Lorentz-violating term formulated as a Chern-Simons action term, we propose here its N = 2 and N = 4 supersymmetric versions. This program can be carried out in a simple way with the help of a dimensional reduction method; here, we have chosen the method à la Scherk, but it would also be interesting to contemplate other possibilities, such as the procedures à la Legendre or à la Kaluza-Klein. With our reduction scheme, we could treat the extended supersymmetric version in terms of simple N = 1 superspace to supersymmetrize the Chern-Simons-like term, as proposed by Jackiw, written in terms of a constant background vector parametrized here as the gradient of the scalar function $\alpha + \beta_\mu x^\mu$, where $\alpha$ and $\beta_\mu$ are constants.
Another interesting point we should consider is the possibility, now that we have the full set of SUSY partners of the Lorentz-breaking vector, of expressing the central charges of the extended models whenever topologically non-trivial configurations are taken into account. This would allow us to impose bounds on the central charges in terms of the phenomenological constraints already imposed on the vector responsible for the breakdown of Lorentz covariance.
The Protective Effect of the Crosstalk between Zinc Hair Concentration and Lymphocyte Count—Preliminary Report
Background: An imbalance between pro- and anti-inflammatory mechanisms is implicated in the pathophysiology of atherosclerotic plaque. Coronary artery and carotid disease, despite sharing similar risk factors, develop separately. The aim of this study was to analyze possible mechanisms between trace element hair-scalp concentrations and whole blood counts that favor atherosclerotic plaque progression in certain locations. Methods: There were 65 patients (36 (55%) males and 29 (45%) females) with a median age of 68 (61-73) years enrolled in a prospective, preliminary, multicenter analysis. The study group was composed of 13 patients with stable coronary artery disease (CAD group) referred for surgical revascularization due to multivessel coronary disease, 34 patients with carotid artery disease (carotid group) admitted for a vascular procedure, and 18 patients in a control group (control group). Results: There was a significant difference between the CAD and carotid groups regarding lymphocyte counts (p = 0.004). The biochemical comparison between the coronary and carotid groups revealed significant differences regarding chromium (Cr) (p = 0.002), copper (Cu) (p < 0.001), and zinc (Zn) (p < 0.001) concentrations. Spearman rank order correlations between lymphocyte counts and trace elements were calculated in the analyzed groups, revealing a strong correlation with zinc (R = 0.733, p < 0.001) in the control group (non-CAD, non-carotid). Conclusion: Significant differences in hair-scalp concentrations related to atherosclerosis location were observed in our analysis. The interplay between zinc concentration and lymphocyte count may play a pivotal role in cardiovascular disease development.
Introduction
Among the various trace elements of significance, a modulatory role of zinc has been reported in immunological mechanisms [1]. It affects multiple aspects of immune system function, from cell development to the mediation of cell activation [2]. Zinc-dependent cytokine function and secretion indicate its role in innate and adaptive immunological responses [3]. Zinc is believed to be a balancing agent enabling a sufficiently strong but not overshooting immunological response [1]. Zalewski et al. [4] discussed the significance of Zn for endothelial function via nitric oxide (NO) generation. NO is crucial for the integrity and function of large and microvascular vessels. In in vitro models, a modulatory role of zinc for regulatory T cells and a significant reduction in interleukin expression were observed [5].
There is accumulating evidence that immunological activation is imperative for atherosclerosis development and progression [6] and that cardiovascular adverse events increase this threat [7]. An imbalance between pro- and anti-inflammatory mechanisms is implicated in the pathophysiology of atherosclerotic plaque [8]. Recent reports have highlighted this phenomenon as a potential target for the identification of new therapies [9]. Inflammatory activation was found to be related not only to chronic progression [10] but also to acute cardiovascular events [11,12] and was presented as an independent potential survival modulator [13]. Zernecke et al. [14] presented in their analysis a strong atheroprotective role of group 2 innate lymphoid cells. Immune system activation possesses contradictory properties, as it may play a fundamental role in different stages of plaque progression or activate protective mechanisms against atherosclerosis development. Thus, it is essential to have an in-depth understanding of the effectors of its initiation and progression.
Coronary artery and carotid artery diseases, despite sharing similar risk factors, develop independently [15-17]. Atherosclerotic plaques tend to occur in specific locations; however, the relationships or interactions between the various vascular systems remain unknown. It seems reasonable that some mechanisms exist that favor atherosclerotic plaque progression in certain locations. Importantly, the development of atherosclerosis is multifactorial, and inflammatory activation may be triggered by different agents.
The aim of our analysis was to find a possible relation between zinc concentrations and inflammatory indices that may vary between healthy controls and patients presenting with coronary or carotid disease.
Patients
There were 65 consecutive patients (36 (55%) males and 29 (45%) females) enrolled in a prospective analysis. The study group was composed of 13 patients with stable multivessel coronary artery disease (CAD group) referred for surgical revascularization, 34 patients with carotid artery disease (carotid group) admitted for a vascular procedure, and 18 patients in a control group (control group) presenting with normal coronary angiograms and carotid Doppler ultrasounds.
The coronary artery disease (CAD) group was composed of 11 men and 2 women with a median age of 70 (59-73) years, admitted to the Cardiac Surgery Department for surgical revascularization. There were 6 patients diagnosed with left main coronary artery disease, 5 patients with three-vessel disease, and two more patients referred for surgery due to two-vessel disease. They were characterized by the following co-morbidities: hypercholesterolemia (n = 13, 100%), arterial hypertension (n = 13, 100%), diabetes mellitus (n = 10, 77%), and atrial fibrillation (n = 5, 38%). On preoperative echocardiographic examination, the left ventricular ejection fraction was estimated at 55% (50-60%), with a left ventricular diastolic diameter of 55 (53-59) mm.
The control group was composed of 18 patients (10 men and 8 women) with a median age of 66 (61-71) years who were diagnosed with hypertension at the Internal Disease Department, with confirmed normal coronary angiography and carotid Doppler ultrasound. Arterial hypertension (n = 15, 83%), hypercholesterolemia (n = 12, 67%), and diabetes mellitus (n = 2, 11%) were diagnosed as co-morbidities. The detailed characteristics are presented in Table 1.
Hair-Scalp Trace Element and Blood Sample Measurements
On admission, hair and blood samples were collected, and demographic and clinical data were gathered. The obtained biochemical results were scrutinized together with the patients' characteristics.
All samples of peripheral blood were collected under equal conditions. The inflammatory system activation markers obtained from the blood analysis included neutrophil count, lymphocyte count, monocyte count, haemoglobin, and platelet count, together with the relevant indices: the aggregate index of systemic inflammation (AISI), neutrophil-to-lymphocyte ratio (NLR), monocyte-to-lymphocyte ratio (MLR), platelet-to-lymphocyte ratio (PLR), and systemic inflammatory index (SII). These, along with serum creatinine and lipid profiles, were immediately measured with a routine haematology analyzer (Sysmex Europe GmbH, Norderstedt, Germany). The glomerular filtration rate (GFR) was estimated by the simplified formula of the Modification of Diet in Renal Disease (MDRD).
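For reference, these composite indices are commonly defined as simple ratios and products of the blood counts. The sketch below uses the usual published definitions, which we assume here since the paper does not spell them out, with made-up counts:

    # Assumed standard definitions of the systemic inflammation indices;
    # counts are hypothetical, in 10^3 cells/uL.
    neut, lymph, mono, plt = 4.2, 1.8, 0.5, 250.0

    nlr  = neut / lymph               # neutrophil-to-lymphocyte ratio
    mlr  = mono / lymph               # monocyte-to-lymphocyte ratio
    plr  = plt / lymph                # platelet-to-lymphocyte ratio
    sii  = plt * neut / lymph         # systemic inflammatory index
    aisi = neut * mono * plt / lymph  # aggregate index of systemic inflammation

    print(f"NLR={nlr:.2f} MLR={mlr:.2f} PLR={plr:.1f} SII={sii:.1f} AISI={aisi:.1f}")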
Hair samples (0.5 g per patient) were cut in a similar manner from the scalp, just above the patient's neckline, with the use of titanium scissors. All the samples were stored in plastic containers and kept at room temperature until the analysis was performed.
Hair samples were routinely washed thoroughly, with stirring, in acetone, deionized water, 0.5% Triton X-100 solution, and deionized water. All the hair samples were thereafter dried and cut into smaller pieces. The next step involved sample digestion in a high-pressure closed microwave system (Ethos One, Milestone, Sorisole, Italy). Briefly, 200 mg of the prepared samples was accurately weighed into the microwave vessels, and then 3 mL of 65% HNO3 and 1 mL of 30% H2O2 were added. After that, samples were diluted to exactly 50 mL and were then ready for the measurement process. An inductively coupled plasma mass spectrometer (ICP-MS 7100x, Agilent, Santa Clara, CA, USA) was used for the detection of the following elements: chromium (Cr), manganese (Mn), copper (Cu), zinc (Zn), iron (Fe), lead (Pb), and cadmium (Cd).
The instrumental parameters were optimized using a tuning solution (Agilent). Spectral interferences were reduced by helium mode. Non-spectral and matrix interferences were reduced using an internal standard solution containing 10 µg/L Y and Tb, introduced in parallel with all analyzed solutions.
Analytical Figures of Merit
The validity of the analytical method was assessed by analyzing the certified reference material (CRM) NCS ZC 81002b Human Hair (Beijing, China). The CRMs were digested according to the same procedure as the hair samples. Validation parameters such as linearity, precision, limit of detection (LOD), and trueness were evaluated. The linearity of the calibration curve was expressed as the correlation coefficient (R), whose value was greater than 0.9996 for all analytes. The linear range of the calibration curve of the elements extended from the detection limit up to 100 µg/L. The LOD was defined as 3.3 s/b, where s is the standard deviation corresponding to 10 blank injections and b is the slope of the calibration graph. The LOD values were in the range of 0.006 µg/g for Cd to 10 µg/g for Ca. Precision was calculated as the coefficient of variation (CV) (%), which ranged from 1.5% to 3.4% for all elements. Trueness was evaluated by applying the certified reference material and expressed as recovery values (%), ranging from 94% to 107%.
Statistical Analysis
The Shapiro-Wilk test was applied to assess the normality of the distribution of variables. The t-test, Cochran-Cox test, Mann-Whitney test, or Fisher's exact test was used, where applicable, to compare variables between two groups. Spearman correlation analysis was used to describe the correlation between variables. Uni- and multivariable models were used to predict either coronary or carotid disease. Receiver operator characteristic (ROC) analysis of zinc hair-scalp concentration for atherosclerosis prediction was carried out. Statistical analysis was performed using Statistica 13 by TIBCO. p values < 0.05 were considered statistically significant.
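As a hedged illustration of the correlation step, the snippet below computes a Spearman rank correlation with SciPy on made-up zinc and lymphocyte values; it mirrors the analysis described above but is not the authors' actual Statistica workflow.

    # Spearman rank correlation between hair-scalp zinc and lymphocyte count
    from scipy.stats import spearmanr

    zinc  = [120, 150, 180, 200, 230, 260, 300]  # hypothetical hair Zn, mg/L
    lymph = [1.2, 1.5, 1.6, 1.9, 2.1, 2.4, 2.6]  # hypothetical counts, 10^3/uL

    rho, p = spearmanr(zinc, lymph)
    print(f"Spearman R = {rho:.3f}, p = {p:.4f}")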
Coronary Artery Group
There were no perioperative deaths in this group, and the median [IQR] hospitalization time after surgery was 8 (6-10) days. All procedures were performed as off-pump (beating heart) surgery. The mean number of performed grafts was 2.3 (2-2.5). Ten patients underwent total arterial revascularization, including eight with two internal mammary arteries and one with three arterial grafts (two mammary arteries and a left radial artery).
Laboratory Results
Laboratory analyses, including peripheral blood counts, were performed to compare all groups. There was a significant difference between the CAD and carotid groups regarding lymphocyte counts (p = 0.004), large unstained cell (LUC) counts (p = 0.003), and the NLR (p = 0.049). Since the neutrophil count differences were insignificant, the NLR differences were related to lymphocytes.
We noticed significant differences in hair trace element concentrations between the CAD and carotid groups regarding chromium (p = 0.002) and copper (p < 0.001). Thereafter, we focused on lymphocyte counts between the presented groups, as shown in Figure 1.
Trace Elements Results
The hair-scalp trace metal concentrations were analyzed in all groups. The biochemical comparison between the coronary and carotid groups revealed significant differences regarding chromium (p = 0.002), copper (p < 0.001), and zinc (p < 0.001) concentrations. In the control and CAD groups, trace element hair analysis presented significant differences related to zinc (p < 0.001). In the carotid group, compared with the control group, significantly different concentrations of chromium (p < 0.001), copper (p = 0.013), and zinc (p < 0.001) were found in hair samples, as presented in Table 2.
Correlation between Trace Elements and Lymphocytes
Spearman rank order correlations between lymphocyte counts and trace elements in the analyzed groups were calculated, as presented in Table 3, revealing a strong correlation with zinc (R = 0.733, p < 0.001) in the control group (non-CAD, non-carotid), as presented in Figure 2.
Multivariable Analysis
Uni- and multivariable analyses were performed for atherosclerosis prediction (either coronary or carotid disease). Demographic, clinical, and hair-scalp metal concentration data were taken into account in the analyses. Only trace elements showing significant differences between both groups were included in the analyses (Zn, Cu), as presented in Table 4. Hair-scalp zinc concentration was found to be significant (OR 1.03, 95% CI: 1.01-1.05, p = 0.001) in the multivariable model for disease prediction.
Receiver Operator Curve Analysis
Receiver operator curve (ROC) analysis for the prediction of atherosclerosis related to hair-scalp zinc concentration was performed. The whole study group (65 patients) was divided into two subgroups: the control group vs. the CAD + carotid group. The ROC curve analysis related to zinc concentration presented an AUC of 0.913, yielding a sensitivity of 88.2% and a specificity of 58.8%, as presented in Figure 3.
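The ROC step can be reproduced along the lines of the minimal scikit-learn sketch below, assuming binary labels (1 = CAD or carotid disease, 0 = control) and hair-scalp zinc as the score; the values are illustrative and not the study data.

    # ROC analysis of hair-scalp zinc as a predictor of atherosclerosis
    from sklearn.metrics import roc_auc_score, roc_curve

    y_true = [0, 0, 0, 1, 1, 1, 1, 1]                 # hypothetical group labels
    zinc   = [140, 160, 170, 90, 110, 220, 260, 300]  # hypothetical hair Zn, mg/L

    auc = roc_auc_score(y_true, zinc)
    fpr, tpr, thresholds = roc_curve(y_true, zinc)
    print(f"AUC = {auc:.3f}")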
Discussion
The results of our preliminary report indicate a strong correlation between hair-scalp zinc concentration and lymphocyte count in the control group. Zinc is a balancing agent that keeps the immune system in check. The main finding of this study relates to a possible guarding role of zinc in immunological responsiveness, reflected in the zinc-lymphocyte correlation.
Our analysis pointed out the role of zinc in the increased risk of either carotid or coronary disease occurrence in a multivariable model. The hair-scalp zinc concentrations were significant for atherosclerosis occurrence. Hair-scalp biochemical analysis was chosen as the source of trace element concentration analysis [17], though disparities between organs have also been reported [18]. The results of our analysis indicate a possible guardian role of bodily zinc concentrations (measured in hair-scalp samples) together with lymphocytes. As the control group (non-CAD and non-carotid) was characterized by moderate values of zinc, while the CAD and carotid groups presented low and high concentrations, respectively, we revealed a strong correlation with lymphocyte count. The presence of both mentioned factors may play a protective role against atherosclerosis development. This is the first study, to our best knowledge, that points out a possible supervisory mechanism between a trace element (zinc) and lymphocytes in atherosclerotic lesion formation. Our preliminary report suggests that any derangement in zinc body accumulation provokes the loss of a delicate balance between trace metal elements and inflammatory system components, which may promote atherosclerotic lesion progression. Our analysis did not focus on specific inflammatory reactions but only on lymphocyte count analysis and its relation to zinc concentration.
The correlation was lost when zinc disturbances were noticed in our analysis. Interestingly, low and high hair-scalp zinc concentrations were found in patients presenting with coronary and carotid artery disease, respectively. Despite non-significant differences in classical cardiovascular risk factors between the groups, culprit atherosclerotic lesions were found in different locations related to hair-scalp zinc concentrations. This phenomenon may be explained by two factors: the zinc concentration itself and a lost correlation between this trace element and lymphocytes.
The results of our preliminary report may indicate that low zinc concentrations combined with a lost correlation in the lymphocyte-zinc axis may characterize an increased risk of coronary artery disease development. The relationship between inflammatory activation and serum trace element concentrations in coronary artery disease was postulated in our previous analysis [19].
Our results are consistent with those of previous reports that suggested a beneficial effect of zinc supplementation on serum lipids and biomarkers of oxidative stress and inflammation in patients suffering from coronary artery disease [20]. In Banik et al.'s meta-analysis [21], the potential role of low levels of Zn in the pathogenesis of CAD was presented. The relation between reduced serum zinc ion concentrations and the occurrence of coronary heart disease was investigated and presented by Meng et al. [22]. Liu et al. [23] proposed zinc-α2-glycoprotein (ZAG), a novel serum adipokine, as a potential diagnostic marker for coronary artery disease. Decreased ZAG levels were independently associated with the presence of atherosclerotic epicardial lesions. Interestingly, in a recent study by Zang et al. [24], a possible relationship between low serum zinc concentrations and coronary artery disease risk, combined with worse survival rates, was suggested.
In our analysis, higher levels of hair-scalp zinc concentrations characterized patients with carotid disease. The results show that in coronary and carotid disease, zinc may play an important role in plaque location. In previous studies, higher zinc concentrations alongside fibrous aortic plaque were presented by Mendis [25]. Stadler et al. [26] suggested in their analysis that zinc in human atherosclerotic lesions binds to matrix components. Therefore, zinc may be associated with plaque stability by promoting accelerated calcification. A strong correlation between zinc fluorescence and atherosclerotic plaque content was presented by Kopriva et al. [27].
Multifaceted contributions of immune system compounds may drive atherosclerosis. The specific characteristics of immune cell dysregulation within atherosclerotic lesions are still poorly understood [28]. The role of lymphocytes in atherosclerosis is claimed to be related to various aspects of plaque formation. Although it is largely unknown how they migrate to the lesion sites, some studies suggest a regulating role of chemokines, their receptors, and L-selectins [29]. A suppressive role of zinc deficiency, with altered lymphocyte maturation, activation, apoptosis, and lymphopenia suggesting an inefficient immune response, has been described [30,31]. Zinc disturbances also affect T lymphocyte maturation and function, including the regulation of many enzymes related to oxidative stress [32]. Jeong et al. [33] presented the association between the copper-zinc ratio in hair and the NLR, indicating an oxidative burden in individuals predisposed to obesity-related comorbidities. The results of in vitro studies suggest possible therapeutic options for zinc supplementation in various dysregulated immune conditions [34]. Zinc excess impairs immune cell-specific functions [35,36] and, in human studies, was related to decreased high-density lipoprotein cholesterol and impaired immune reactions [37].
The results of our preliminary report present a strong relationship between atherosclerosis (either coronary or carotid disease) and zinc concentration, as presented in the receiver operator curve analysis. Further studies on zinc homeostasis are required, as it may play a pivotal role in lesion development. Based on these results, zinc could be taken into consideration for future cardiovascular risk modification protocols.
Study Limitation
This study was performed on a limited number of patients as a preliminary report. The analyzed lymphocyte counts did not indicate lymphocyte types or their activation status, which may play crucial roles in explaining the pathophysiological mechanisms. The multivariable model was built on data obtained from a relatively small group of patients, limiting the number of factors included in the analysis.
Conclusions
Significant differences in hair-scalp concentrations related to atherosclerosis location were presented in our analysis. The interplay between zinc concentration and lymphocytes may play a pivotal protective role against cardiovascular disease development. Zinc concentration disturbances followed by the loss of correlation between zinc and immune cells may help researchers understand the pathophysiology of atherosclerotic plaque location in the human organism.
Figure 1. Lymphocyte counts in the presented groups, including significant differences between the CAD and carotid groups (p = 0.004).
Figure 2. Correlation between zinc and lymphocyte count in the control group.
Figure 3. Receiver operator curve for the prediction of atherosclerosis (CAD or carotid) in relation to hair-scalp zinc concentration.
Table 2. Laboratory results and trace element hair-scalp concentrations [mg/L] compared between groups.
Table 3. Correlations between lymphocyte counts and hair-scalp trace elements.
Table 4. Uni- and multivariable analyses for atherosclerosis prediction.
A STUDY OF THE EFFECTS OF KNOWLEDGE MANAGEMENT ON THE MANAGEMENT OF MEDIA ORGANIZATIONS
Today, it is obvious that organizations can succeed only if they are led by sound decisions, and decisions cannot be sound without knowledge. Since media organizations face new needs, a diverse audience with new tastes, interests, and tendencies, and technological change, knowledge management seems necessary for such organizations to accommodate environmental conditions and strengthen themselves. The research method was descriptive correlational, and the study population included all employees of media organizations (radio, television, press, etc.) in the city of Tehran; 380 employees and managers were selected by convenience sampling and analyzed. The data collection tool in this study was a questionnaire. The data obtained through the questionnaires were analyzed using SPSS software with Pearson correlation and regression tests. In general, the results showed that the use of knowledge management, knowledge preservation, knowledge transfer, knowledge creation, and knowledge application has a positive and significant impact on the management of media organizations. In addition, regression test results confirmed the strength of this relationship at the 99 percent probability level.
PROBLEM STATEMENT
Organizations are always influenced by their environment through what are called "effective factors". These factors and variables are usually under little supervision. However, if an organization can identify and control the effective environmental factors and reduce their complexity, it can better ensure its survival. Today, the management of organizations can be successful only by considering the circumstances and requirements of the external and internal environments and adapting to change (Zomorodian, 2004). It seems that the most appropriate method to prevent deterioration and continue surviving is to increase information and the dissemination of knowledge among employees at different levels of the organization (Alvani, 1994), because knowledge is one of the most important and most valuable assets of any organization. Knowledge is a driving force for organizational growth. Today's era is the era of knowledge-based organizations. In order to achieve new knowledge resources, knowledge management pays attention to new theories, such as community-oriented knowledge management, which aims to tap massive resources of customer knowledge (Retna & Tee NG, 2011).
Most experts in knowledge management believe that it is a comprehensive management concept combining human, psychological, sociological, and technological dimensions. In fact, knowledge management allows organizations to distribute ideas, documents, and information. In the long term, knowledge management can create a unique culture in which knowledge is considered a continuous task that is constantly growing and changing. A more general definition of knowledge management was put forward by Snowden, based on a clear distinction between tacit and explicit knowledge; it consists of the identification, optimization, and active management of intellectual assets, stored in the form of explicit knowledge in artifacts such as books, or in the form of tacit knowledge in the minds of individuals or groups. The optimization of explicit knowledge is carried out through permanent access to knowledge artifacts, and the optimization of tacit knowledge is achieved by creating communities and groups to keep track of various kinds of knowledge (Snowden, 2000). Knowledge management significantly improves the reliability of decision-making processes and the quality of their results, and is used to determine the relationships between new information, known facts, learning, and system values. Knowledge management helps to facilitate the flow of knowledge and can lead to faster and more effective integration of customer knowledge (Retna & Tee NG, 2011). Knowledge management also contributes to transparency in the process of integrating knowledge in other groups, such as employees (Chang et al., 2010). Once customer relationship management is implemented, a knowledge management program can expand current knowledge in relation to the customer (Retna & Tee NG, 2011).
Knowledge management provides tools, processes, and databases for sharing knowledge with customers and employees. This enables organizations to realize the value of customer knowledge integration and, ultimately, to provide superior service to customers. Employees are therefore more willing to share knowledge when they can see the value derived from it (Murry, 2006). Tsong likewise holds that knowledge management is a process through which organizations employ their collected data (Tsong, 2009).
Knowledge management is a complex and dynamic issue, and its success requires a systematic approach that considers all the factors, components, and processes of knowledge management (Abtahi and Salavati, 2006). The implementation of knowledge management systems should connect individuals so that they are able to think together and spend time sharing information, views, and experiences that are useful for their company (Mladkova, 2012). Many organizations believe that knowledge is their most important asset, but in practice they are less loyal to this claim. One of the main reasons is that organizations do not know how to approach knowledge management; knowledge management models for this purpose are examined in this section. Various models have been shaped by the attitudes that experts have adopted in relation to knowledge management.
Knowledge management is the systematic process of searching, selecting, organizing, filtering, and displaying information in a way that helps employees understand the specific context of organizational improvement and gain a better understanding of their experiences. Knowledge management processes help organizations in problem solving, dynamic learning, strategic planning, and decision making, protect intellectual property from erosion and degradation, and lead to increased flexibility and greater organizational intelligence (Erabi and Mousavi, 2010). Knowledge management is a new foundation and organizational perspective that changes relationships between employees in an organization, focuses on the chain relationship between knowledge and action, and continuously improves organizational efficiency. Useful knowledge is that which leads to effective and flexible thinking (Toumi, 2002). To this end, it is important to identify sources of knowledge. Efficient knowledge management can achieve clear and tangible results from resources, develop a culture of knowledge sharing within the organization, and solve the issues of the day. The three overall knowledge management activities are information management, the qualitative movement, and the human movement or human factors (Prusak, 2001).
Today, it is obvious that organizations can succeed only if they are led by sound decisions, and decisions cannot be sound without knowledge. Since media organizations face new needs, a diverse audience with new tastes, interests, and tendencies, and technological change, knowledge management seems necessary for such organizations to accommodate environmental conditions and strengthen themselves. Salehi (2011), in an article entitled "The importance of knowledge management in media organizations", examined the status of knowledge management in radio and ways of improving its quality. He concluded that knowledge management was not in a good situation in radio and, in the end, gave recommendations for its improvement.
LITERATURE BACKGROUND
Najaf Beigi, R. (2009), in an article titled "Learning organization model in the Islamic Republic of Iran Broadcasting", came to the conclusion that IRIB is far from the effective situation of a learning organization, that employee performance in team learning and in changing mental models is more satisfactory than that of managers, and that the other features of the level of learning effort are the same in the two groups. A practical model and practical advice are proposed in order to reduce the distance to effective conditions and strengthen the required skills in IRIB, based on the analysis results and theoretical arguments. Hashemi (2011), in his thesis, examined the impact of the implementation of knowledge management on the economic effectiveness of media (a survey in IRIB). The findings confirmed the effect of knowledge management on economic effectiveness, and this effect was significant not only in terms of the development of knowledge. By comparing the results of this study with previous investigations, he found that all dimensions and variables used in these conceptual and analytical models are effective in increasing economic effectiveness.
Irandoust (2015), in a dissertation entitled "The proposed model of knowledge management for IRIB using the administrators, professors and experts in the IRIB and universities", proposed an applicable native model for this organization. In this regard, 15 managers and professors teaching in the field of knowledge management at the universities of IRIB, Tehran, and Allameh Tabatabaei, active in carrying out projects in this area and, in general, familiar with it, as well as managers at IRIB who work in some way with KM, were selected by theoretical sampling. Then, using in-depth interviews and grounded theory mechanisms, the interviews were gathered and the data were analyzed in three stages: open, axial, and selective coding. After analyzing the interviews, 226 concepts were extracted; these concepts were grouped into 73 sub-categories, 21 categories, and finally 3 axial categories. With open, axial, and selective coding, a three-dimensional model of knowledge management was created for IRIB. The dimensions of this model are "centers and knowledge resources, infrastructure requirements of knowledge management, and the knowledge management process". The first dimension refers to the knowledge centers in IRIB, whose knowledge resources are the main focus of the knowledge management process. The infrastructure requirements, in the form of ten categories, refer to the infrastructure necessary for the implementation of knowledge management in IRIB. Finally, we arrived at a process in IRIB, with 9 elements, that refers to the effectiveness of the knowledge flow in IRIB.
METHODOLOGY
The overall objective of this study was to investigate the role of knowledge management in the management of media organizations. The research method was descriptive correlational, and the study population included all employees of media organizations (radio, television, press, etc.) in the city of Tehran; 380 employees and managers were selected by convenience sampling and analyzed. The data collection tool in this study was a questionnaire. The data obtained through the questionnaires were analyzed using SPSS software with Pearson correlation and regression tests.
ANALYSIS OF THE RESEARCH FINDINGS
The main hypothesis: it seems that the implementation of knowledge management has a positive and significant effect on the management of media organizations. The Pearson test was used to determine the effect of knowledge management on the management of media organizations. Since the significance level of the test equals 0 and is less than 1%, knowledge management affects media organizations at the 99 percent confidence level. In addition, the correlation coefficient equals 0.765 and is positive, which shows that the effect is direct. According to the table above, the coefficient of determination shows that 58% of the changes in the management of media organizations are related to the knowledge management variable.
A regression test was used to evaluate the relationship between these two variables. Since the significance level of the test equals 0 and is less than 1%, we conclude that the strength of this effect is significant at the 99 percent probability level.
First secondary hypothesis: it seems that knowledge creation has a positive and significant effect on the management of media organizations. The Pearson test was used to determine the effect of knowledge creation on the management of media organizations. Since the significance level of the test equals 0 and is less than 1%, knowledge creation affects media organizations at the 99 percent confidence level. In addition, the correlation coefficient equals 0.776 and is positive, which shows that the effect is direct. According to the table above, the coefficient of determination shows that 60% of the changes in the management of media organizations are related to the knowledge creation variable.
A regression test was used to evaluate the relationship between these two variables. Since the significance level of the test equals 0 and is less than 1%, we conclude that the strength of this effect is significant at the 99 percent probability level.
Second secondary hypothesis: it seems that knowledge transfer has a positive and significant effect on the management of media organizations. The Pearson test was used to determine the effect of knowledge transfer on the management of media organizations. Since the significance level of the test equals 0 and is less than 1%, knowledge transfer affects media organizations at the 99 percent confidence level. In addition, the correlation coefficient equals 0.741 and is positive, which shows that the effect is direct. According to the table above, the coefficient of determination shows that 53% of the changes in the management of media organizations are related to the knowledge transfer variable. A regression test was used to evaluate the relationship between these two variables. Since the significance level of the test equals 0 and is less than 1%, we conclude that the strength of this effect is significant at the 99 percent probability level.
Third secondary hypothesis: it seems that knowledge preservation has a positive and significant effect on the management of media organizations. The Pearson test was used to determine the effect of knowledge preservation on the management of media organizations. Since the significance level of the test equals 0 and is less than 1%, knowledge preservation affects media organizations at the 99 percent confidence level. In addition, the correlation coefficient equals 0.741 and is positive, which shows that the effect is direct. According to the table above, the coefficient of determination shows that 55% of the changes in the management of media organizations are related to the knowledge preservation variable. A regression test was used to evaluate the relationship between these two variables. Since the significance level of the test equals 0 and is less than 1%, we conclude that the strength of this effect is significant at the 99 percent probability level.
Fourth secondary hypothesis: it seems that knowledge application has a positive and significant effect on the management of media organizations. The Pearson test was used to determine the effect of knowledge application on the management of media organizations. Since the significance level of the test equals 0 and is less than 1%, knowledge application affects media organizations at the 99 percent confidence level. In addition, the correlation coefficient equals 0.807 and is positive, which shows that the effect is direct. According to the table above, the coefficient of determination shows that 65% of the changes in the management of media organizations are related to the knowledge application variable. A regression test was used to evaluate the relationship between these two variables. Since the significance level of the test equals 0 and is less than 1%, we conclude that the strength of this effect is significant at the 99 percent probability level.
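As a hedged sketch of the kind of test reported above (the authors used SPSS; SciPy is substituted here), the snippet below computes a Pearson correlation, its p-value, and a simple linear regression on made-up questionnaire scores:

    # Pearson correlation and simple linear regression on illustrative scores
    from scipy.stats import pearsonr, linregress

    km_scores   = [3.1, 3.8, 2.5, 4.2, 3.6, 4.5, 2.9]  # hypothetical KM scores
    mgmt_scores = [3.0, 4.0, 2.7, 4.4, 3.5, 4.6, 3.1]  # hypothetical management scores

    r, p = pearsonr(km_scores, mgmt_scores)
    fit = linregress(km_scores, mgmt_scores)
    print(f"Pearson r = {r:.3f}, p = {p:.4f}, R^2 = {r**2:.3f}")
    print(f"slope = {fit.slope:.3f}, regression p = {fit.pvalue:.4f}")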
CONCLUSION AND RECOMMENDATIONS
In general, the results showed that the use of knowledge management, knowledge preservation, knowledge transfer, knowledge creation, and knowledge application has a positive and significant impact on the management of media organizations. In addition, regression test results confirmed the strength of this relationship at the 99 percent probability level. Today, the old ways of managing organizations cannot respond to changes in the surrounding environment, and uncertainty in organizational environments has increased due to the growing complexity and speed of developments. As a result, organizations need widespread knowledge and awareness of environmental factors so that they can adapt to environmental changes. Experts pay greater attention to knowledge management in order to solve the problems caused by environmental changes and new technologies and to gain competitive advantage. Knowledge-based organizations must be able to adapt to environmental conditions and strengthen their problem-solving ability.

Each individual should be encouraged to collect information so that knowledge management can be improved. All employees should be aware of the kinds of knowledge that may be useful for the organization, so that they can acquire such knowledge when they encounter it. Knowledge can be acquired through formal channels such as conferences, the internet, newspapers, and magazines, and through informal channels such as social gatherings, films, and other settings. Knowledge-based organizations should be creative in thinking and learning. Activities encouraging dynamic thinking and creative learning include encouraging creative and risk-taking endeavors and holding educational workshops. Employees should be taught knowledge preservation and knowledge retrieval; in fact, they should know what kinds of knowledge they need and the resources in which to store them. Employees need to know how to communicate with centers of knowledge and access information from around the world. Knowledge transfer within the organization must be maximized. Rotation and continuous changes in duties are highly effective ways to transfer knowledge within the organization. The preserved knowledge should be easily accessible for all tasks. Knowledge transfer should be considered a professional responsibility and a part of the job, and units and projects that carry out knowledge production should be supported.
Table 2: Summary of model
Table 3: Analysis of variance
Table 5: Summary of model
Table 6: Analysis of variance
Table 8: Summary of model
Table 9: Analysis of variance
Table 11: Summary of model
Table 12: Analysis of variance
Table 14: Summary of model
Table 15: Analysis of variance
The Pfaffian Technique : A ( 2 + 1 )-Dimensional Korteweg de Vries Equation
The (2 + 1)-dimensional Korteweg de Vries (KdV) equation, which was first derived by Boiti et al., has been studied by various distinct methods. It is known that this (2 + 1)-dimensional KdV equation has rich solutions, such as multi-soliton solutions and dromion solutions. In the present article, a unified representation of its N-soliton solution is given by means of the pfaffian. We show that this (2 + 1)-dimensional KdV equation reduces to nothing but the Plücker identity when its τ-function is given by a pfaffian.
Introduction
The solitary wave, so called because it often occurs as a single, localized entity, was first observed by J. Scott Russell on the Edinburgh-Glasgow Canal in 1834. It is known that many nonlinear evolution equations have soliton solutions, such as the Korteweg de Vries equation, the sine-Gordon equation, the nonlinear Schrödinger equation, the Kadomtsev-Petviashvili equation, and the Davey-Stewartson equation. In order to study the properties of nonlinear evolution equations, methods have been developed to derive solitary wave or soliton solutions. Some of the most important are the inverse scattering transformation (IST) method [1], the bilinear method [2]-[7], the symmetry reduction method [8], and the Bäcklund or Darboux transformation method [9]. Having soliton solutions is one of the basic integrability properties of nonlinear evolution equations.
In this paper, we are interested in a general expression for the N-soliton solution to the (2 + 1)-dimensional KdV equation, which was first derived by Boiti et al. using the idea of the weak Lax pair [10]. This system can also be obtained from the inner parameter-dependent symmetry constraint of the KP equation [11]. Recently, dromion solutions and some exact solutions have been studied by Lou and by Wazwaz, respectively [12]-[14]. However, a unified expression for its N-soliton solution has remained unknown.
In this article, we study the N-soliton solution to the (2 + 1)-dimensional KdV system (1). A compact form of the N-soliton solution to Equation (1) is obtained by means of the pfaffian technique, which is given in Section 2. Conclusions and further discussion are given in Section 3.
N-Soliton Solution to the (2 + 1)-Dimensional KdV Equation
Given a nonlinear evolution equation, if it has a 3-soliton solution, then the equation is very likely to have an N-soliton ($N \ge 3$) solution. The pfaffian technique is one of the methods that can help us determine whether an evolution equation has multisoliton solutions or not. In this section, we first review some properties of pfaffians.
Pfaffian
Pfaffians are antisymmetric functions of their independent variables, defined recursively by the expansion

$$(1,2,\ldots,2n) = \sum_{j=2}^{2n} (-1)^j (1,j)(2,3,\ldots,\hat{j},\ldots,2n),$$

where $\hat{j}$ denotes the absence of the letter $j$. For example, when $n = 2$, we have

$$(1,2,3,4) = (1,2)(3,4) - (1,3)(2,4) + (1,4)(2,3).$$

There are various kinds of pfaffian identities. In this article, we introduce only the so-called Plücker relation for pfaffians,

$$(a_1,a_2,a_3,a_4,1,\ldots,2n)(1,\ldots,2n) = (a_1,a_2,1,\ldots,2n)(a_3,a_4,1,\ldots,2n) - (a_1,a_3,1,\ldots,2n)(a_2,a_4,1,\ldots,2n) + (a_1,a_4,1,\ldots,2n)(a_2,a_3,1,\ldots,2n), \tag{2}$$

which we are going to use.
N-Soliton Solutions
The Hirota form of the (2 + 1)-dimensional KdV system (1) is

$$(D_y D_t + D_y D_x^3)\, f \cdot f = 0, \tag{3}$$

which is obtained by the dependent variable transformations

$$u = -2(\ln f)_{xy}, \qquad v = -2(\ln f)_{xx}. \tag{4}$$

Here the Hirota bilinear operator is defined by

$$D_x^n D_t^m\, a \cdot b = (\partial_x - \partial_{x'})^n (\partial_t - \partial_{t'})^m\, a(x,t)\, b(x',t')\Big|_{x'=x,\; t'=t},$$

with $n$ and $m$ arbitrary nonnegative integers.
In [14], the 3-soliton solution to the (2 + 1)-dimensional KdV system (3) was obtained via the perturbation method. It was claimed that the N-soliton solutions for $N \ge 4$ can also be obtained by the perturbation method, but an explicit expression for the multisoliton solution was not given.
Conclusion
In this article, a compact form of the multi-soliton solution to the (2 + 1)-dimensional KdV system has been given via the pfaffian technique. As one can see, the key point of the proof is to derive suitable expressions for the differential formulae of the pfaffian τ-function $f$. It is worth pointing out that the method used in this article differs from the one used in the proof for the BKP equation, for which the differential formulae of the pfaffian τ-function depend only on the "differential operators" $d_n$.
Substituting the derived differential formulae, together with (11), into the right-hand side of Equation (3), we obtain nothing but the Plücker relation for pfaffians (2); hence the pfaffian function (6) solves the (2 + 1)-dimensional KdV system (3). Note that in order to derive the differential formulae of the pfaffian function (6), we had to define an extra letter β besides the "differential operators" $d_n$. The multi-soliton solution to the nonlinear (2 + 1)-dimensional KdV system (1) can then be obtained by substituting the pfaffian function (6) into the dependent variable transformation (4) directly.
A Case Series Outlining the Relationship between Dolichoectasia and Ectrodactyly-Ectodermal Dysplasia-Clefting Syndrome

Cameron John Sabet*, University of Pennsylvania, USA

Ectrodactyly-Ectodermal Dysplasia-Clefting Syndrome (EEC) is a rare heritable condition marked by ectodermal dysplasia affecting the nails, teeth, sweat glands, and hair; eye and lacrimal duct malformations; midfacial hypoplasia; ectrodactyly (loss of fingers or toes); syndactyly; clinodactyly; auricular anomalies; short stature; orofacial cleft palates; genitourinary anomalies; malformations within the central nervous system (CNS) leading to mental impairment or hearing loss; nevocellular nevi; flat noses; and hypopigmentation [1,2]. The connection between dolichoectasia and EEC has not yet been outlined in a formal literature review or even a case series, so this article explores a series of case reports to better highlight the various etiologies, symptoms, and presentations of EEC as they relate to dolichoectasia.

The dilation and elongation of arteries in the cranial cavity is called cerebral arterial dolichoectasia. To date, the correlation between dolichoectasia and EEC has not been studied thoroughly, though one case report specifically mentioned the possibility of the two conditions being frequently paired without delving into the literature to verify this claim. EEC, and specifically the lobster claw deformity, occurs both in a familial inheritance pattern and sporadically.

One of the earliest articles suggesting a correlation between dolichoectasia and EEC described a 44-year-old black female who was born with bilateral lobster claw deformities of her feet but no other structural abnormalities. Her two siblings had experienced seizures, but had no structural abnormalities. This 44-year-old had very little body hair and presented with decayed teeth. For four years before her diagnosis, she had been treated for chronic renal failure and hypertension, and had had congestive heart failure many times, each episode requiring admission to the hospital. At birth, the patient presented with lobster claw deformities of both feet but no other congenital abnormalities. She later developed lancinating pain in her right mandible that was sensitive to movement, mainly chewing or talking. Eventually, this pain grew to the point that she could not eat or speak at all. Vertebrobasilar angiograms revealed dilation, elongation, and displacement of the basilar artery [3].

All of the case reports describing EEC patients, even those from before angiograms were incorporated into the diagnostic checklist for clinicians, agree on some form of mental divergence in the patients afflicted with EEC. For example, one report mentioned "moderate psychomotor" dysfunction, another patient was reported to have been placed in an institution for mentally divergent people, another presented with peripheral facial nerve palsy, and yet another case presented with an IQ score of 81 [4,5].

The presentation of EEC in relation to dolichoectasia is extremely rare, not only because each condition is already rare on its own, but also because EEC is usually attributed to familial or sporadic inheritance while dolichoectasia usually develops in the presence of longstanding hypertension [6]. We hope that this article better outlines the current literature surrounding the relationship between EEC and dolichoectasia, as well as highlights the lack of understanding that arises from the scarcity of reported cases.
Multiclass ECG Signal Analysis Using Global Average-Based 2-D Convolutional Neural Network Modeling
Cardiovascular diseases have been reported to be the leading cause of mortality across the globe. Among such diseases, Myocardial Infarction (MI), also known as "heart attack", is of main interest among researchers, as its early diagnosis can prevent life-threatening cardiac conditions and potentially save human lives. Analyzing the Electrocardiogram (ECG) can provide valuable diagnostic information to detect different types of cardiac arrhythmia. Real-time ECG monitoring systems with advanced machine learning methods provide information about health status in real-time and have improved the user's experience. However, advanced machine learning methods have put a burden on portable and wearable devices due to their high computing requirements. We present an improved, less complex Convolutional Neural Network (CNN)-based classifier model that identifies multiple arrhythmia types using the two-dimensional image of the ECG wave in real-time. The proposed model is presented as a three-layer ECG signal analysis model that can potentially be adopted in real-time portable and wearable monitoring devices. We have designed, implemented, and simulated the proposed CNN network using Matlab. We also present a hardware implementation of the proposed method to validate its adaptability to real-time wearable systems. The European ST-T database, recorded with the single lead L3, is used to validate the CNN classifier; the classifier achieved an accuracy of 99.23%, outperforming most existing solutions.
Background
Coronary Heart Disease (CHD), also known as Cardiovascular Disease (CVD), results from a lack of blood supply to the heart. CHD is associated with many different types of arrhythmia, which are generally defined as irregular, slow, or rapid heartbeats. Acute Myocardial Infarction (MI) can result in death, but fatalities usually depend on the severity of the arrhythmia. The rising mortality rates due to these heart diseases demand early diagnosis of cardiac conditions before they progress to acute MI and lead to death. The most common tool to diagnose different types of arrhythmia is the Electrocardiogram (ECG). Thorough analysis of the ECG has gained much attention among researchers aiming to accurately and effectively diagnose arrhythmia and critical cardiac conditions. Traditional diagnosis involves bedside ECG recording and the physician's presence to analyze and diagnose a condition. However, such methods are time consuming, and in the most severe cardiac cases, time is crucial in diagnosing the condition.
Real-time monitoring overcomes this shortcoming by sending the acquired ECG data from a body-attached portable or wearable device to a central location for automated diagnosis using advanced machine learning methods and algorithms. Real-time monitoring systems reduce the need for patients to travel to the clinic and for the physician to be present. They have improved the telehealth experience for most users and facilitate early diagnosis and treatment, helping to save lives. Early diagnosis can also save costs in the healthcare industry, as about 17.9% of the national cost is due to CVD [1]. ECG analysis and classification in real-time monitoring systems are carried out through multiple processes described by the stage-based model in Reference [2]. These processes can be grouped into three steps: data acquisition, preprocessing, and feature engineering and classification for machine learning approaches. One such approach is proposed in this study.
In this study, we aim to detect Normal (N) beats, ventricular ectopic (V) beats, and ST-segment changes that contribute towards the diagnosis of MI. We build on our prior work [3] and present a three-layer model for the classification of ECG into the three classes of Normal, ST-change, and V-change. The model is based on a Convolutional Neural Network (CNN) that works at layer three of the model and is trained on 2-D ECG images, each a snapshot of the ECG wave between two consecutive R-peaks. The ECG extracted within every two consecutive R-peaks is referred to as an image for the purpose of the machine learning experiments in this study. The proposed model is referred to as our 2-D CNN throughout this paper. This model can be used for real-time monitoring and integrated with portable and wearable devices for ECG acquisition and preprocessing. The images can be sent to a central location for classification by applying our proposed 2-D CNN. Devices like smartwatches and smartphones are now an integral part of our daily lives and can be leveraged as portable and wearable devices for real-time monitoring of health status. This encourages users to accept such a system as a diagnostic support tool for improving health and quality of life.
To evaluate our 2-D CNN classifier, we have acquired ECG signals from the publicly available European ST-T database (ESCDB) [4]. Though the standard 12-lead ECG and/or ECG collected from several leads provides the most effective results, recent research has shown that ECG data acquired from a single lead, as mostly used in portable real-time monitoring devices such as smartphones and smartwatches [5][6][7], has started to gain acceptance for detecting certain arrhythmia types and pathological rhythms such as atrial fibrillation. Consecutive recordings of multiple single-lead ECGs can also be effective for detecting ischemic diseases and MI [5]. A reduced number of leads has tremendous benefits in real-time monitoring systems, as fewer sensors are attached to the body surface, making the system more convenient for any user to employ. To this end, ESCDB is an appropriate database for evaluating algorithms and techniques in this context, as it contains single-lead ECG recordings. The CNN algorithm in this work is implemented in Matlab, and the preprocessing of ECG data is performed in Python 3.7.
Electrocardiography
Electrocardiography, although invented more than a century ago by the Dutch physiologist Willem Einthoven, remains the most useful and readily available investigation in the field of cardiology throughout the world. The Electrocardiogram (ECG) is the graph showing the recorded cyclic electrical activity of the heart, received through the electrodes of the ECG leads attached to the body surface, as shown in Figure 1. The electrical activity is generated by the cardiac tissues as small potentials, which are amplified by the electrocardiograph and recorded on graph paper as the ECG. The ECG is usually recorded with twelve leads, known as the conventional leads. The first set of six leads, called the limb leads, records the potential across the frontal plane. These six limb leads are further divided into three standard limb leads, also known as bipolar leads, and three unipolar limb leads, named AVR, AVL, and AVF. The second set of six conventional leads, called the precordial or chest leads, records the potential across the horizontal plane; these leads are named V1 to V6. The twelve conventional leads are oriented towards the heart in such a manner that, together, they reflect a two-dimensional view in two perpendicular planes, the frontal plane and the horizontal plane. A third dimension in the sagittal plane can be added, for example, by an esophageal ECG, or the measurement may be done using three orthogonal vectorcardiographic leads, as in Frank's orthogonal lead system containing seven electrodes [8]. However, with growing technology and computer vision techniques, the diagnosis of heart conditions from the ECG signal recorded with a single lead, or recorded sequentially from multiple single-lead ECGs, has started to gain clinical acceptance in recent years, especially using portable monitoring devices [5]. These advances can be further employed to significantly improve the real-time detection and automated diagnosis of different cardiac conditions such as MI, also known as "heart attack", and ischemic heart disease. The importance of electrocardiography is well recognized not only in the diagnosis of MI, but also in detecting conduction disturbances or abnormalities and various types of arrhythmia. The advent of modern medical therapy for unstable angina and acute MI, and the development of interventional cardiology, has increased the significance of electrocardiography in contemporary cardiology. The ECG waves and fiducial points are named in alphabetical order: the P-wave, the QRS-complex, and the T-wave, as shown in Figure 2. The R-peak is generally known as the focal point of an ECG beat, and the time interval between two consecutive R-peaks is called the R-R interval (also referred to as the RR interval). The amplitude, shape, and time interval of these waves provide significant information about the state of the heart. The QRS-complex reflects the ventricular depolarization generated by the positive wave when the impulse spreads towards the positive pole of the respective ECG lead [9,10]. MI usually affects the ventricles, and therefore QRS abnormalities are associated with ST-T abnormalities. ST-segment elevation is usually the earliest change in acute MI, and its early detection is of much significance from the medical treatment point of view.
Premature ventricular complexes (PVC), also known as ventricular ectopic (V) beats, are premature beats originating from slow ventricular activation and are reflected as a wide QRS-complex in the ECG. They may reflect underlying cardiac diseases, such as ischemic heart disease. According to the Association for the Advancement of Medical Instrumentation (AAMI), the non-life-threatening arrhythmia types can be categorized into five classes: non-ectopic or Normal (N), supraventricular ectopic (S), ventricular ectopic (V), fusion (F), and other/unknown (Q) [11]. The process of assigning different types of arrhythmia to the appropriate cardiac condition is called classification.
Three-Layer Process
Diagnosis of a cardiac condition requires proper detection of the arrhythmia with computer-aided diagnostic tools such as machine learning techniques. The detection process involves ECG signal acquisition, removal of noise by preprocessing, identification of the fiducial points in the ECG wave such as the QRS-complex and ST-segment, as shown in Figure 2, and classification into different types of arrhythmia. We present these steps as a structured model consisting of a three-layer process, shown in Figure 3. Layer 1 behaves as the sensor layer responsible for data acquisition. Layer 2 acts as a coordinator between the first and third layers and is responsible for any preprocessing. Layer 3 plays the role of a central location responsible for identifying and classifying the ECG signal. We explain the implementation of this three-layer model as follows. The very first step in any ECG analysis is to acquire the ECG signal. For the evaluation of algorithms and techniques, ECG signals are usually acquired from publicly available databases. We have acquired ECG signals from the ESCDB database at the first layer of this process and evaluated our method; the selection of datasets is discussed in Section 5.3. We have also evaluated our method in real-time with ECG signals acquired from the AD8232 sensor, discussed in Section 6.4. The second layer consists of ECG preprocessing, since the acquired ECG signal inherits embedded circuit noise and external power-line noise. It is crucial to filter these noise sources for better detection of the fiducial points, better classification, and fewer false negatives, which are more critical than false positives. We have used average filtering to denoise the ECG signal and to perform the different experiments in this study, discussed in Section 5.3. However, there are other methods to filter noise from ECG data, such as the ones presented in References [12][13][14][15][16]. Since our model uses 2-D images with a CNN, we need to convert the ECG signal from one dimension (1-D) to a two-dimensional (2-D) image at this preprocessing layer. This is achieved using Python by plotting the ECG signal between every two consecutive R-peaks and saving it as an image. Layer 3 takes these images as input to our proposed 2-D CNN, trains the network with automated feature engineering, and classifies each ECG image into one of three classes: "Normal", "ST-change", or "V-change". The proposed 2-D CNN model is discussed in detail in Section 5.4.
Motivation
Ever since the COVID-19 pandemic started, it has not only flooded hospitals with patients and limited the number of intakes, but has also changed the way treatment is provided. Patients now have to make an appointment, wait in the parking lot for a room to become available and be sanitized, and confront limited availability of physicians. This has introduced another hurdle to the early diagnosis and treatment of cardiac conditions. The rapid rise of COVID-19 has led many researchers to work in this area and develop real-time continuous monitoring systems that can identify these cardiac conditions at the user's residence. This motivated us towards this study as a contribution to the ongoing research on real-time ECG analysis and classification.
In recent years, numerous methods and techniques have been developed for real-time monitoring systems, but these are evaluated with ECG data recorded with multiple leads [17][18][19], available on public databases such as MIT-BIH (MITDB) [20]. Real-time monitoring systems depend heavily on portable and wearable devices such as body-attached sensors, patches, smartwatches, and smartphones. Such sensors and devices usually record and acquire the ECG data with a single lead to improve user experience, convenience, and acceptance. This is our second motivation for this study: to analyze ECG data recorded with a single lead for real-time monitoring. We have used the single-lead ECG data provided by the ESCDB database to evaluate our method and to support the reliability and accuracy of real-time diagnosis, detection, and monitoring.
Most advanced machine learning techniques have analyzed ECG data in one dimension (1-D) to classify different arrhythmia types. However, their models generally employ many layers in the form of a neural network or CNN structure. The computational complexity of such structures is discussed later in Section 7; it grows with the number of layers in the CNN structure, and highly complex designs do not necessarily reach fairly high and acceptable accuracy levels. CNNs and deep learning are best known for computer vision and image classification, so it is worth exploring the smart, automated feature engineering capability of a CNN to learn the ECG fiducial points and classify based on supervised learning. This motivated us even further to perform this study: taking the 1-D ECG signal, converting it into a two-dimensional (2-D) image, training and evaluating our proposed 2-D CNN model, and classifying the ECG into three classes.
Key Contributions
This study aims to contribute to the growing area of research in ECG analysis and detection of different arrhythmia types in real-time to prevent various cardiac conditions and to improve telehealth practices. In this paper, a less complex CNN structure is proposed that can be feasible for real-time ECG monitoring, particularly useful for detecting the onset of a heart attack, with extensive experiments and analysis performed on the ESCDB database. Our contributions to this area of research can be summarized as follows:
1. Present an overview of ECG and its significance in detecting different arrhythmia types and cardiac conditions.
2. Present a layer-based model for ECG analysis including acquisition, preprocessing, and classification processes, and summarize the components.
3. Present an optimized CNN network based on a global averaging technique to improve the classification accuracy significantly.
4. Present a detailed literature review of ECG analysis and classification algorithms using traditional and machine learning approaches for both offline simulations and real-time systems.
5. Discuss and present the proposed CNN architecture and summarize its components and parameters for our simulation results.
6. Present detailed results for three simulation experiments performed using our proposed model and a comparison with related work in this area.
7. Present a hardware implementation of our proposed model in accordance with the three-layer ECG analysis process.
8. Discuss ECG classification and outline applications for real-time monitoring systems, including portable and wearable devices and ECG sensor networks, for the adaptation of our proposed model.
Paper Organization
This paper is organized as follows. In Section 4, the detailed literature and related work are presented; this section is further divided into two subsections. In Section 4.1, we provide detailed related work on traditional approaches for ECG classification, and Section 4.2 describes related work on machine learning approaches for ECG classification. The proposed model is explained in Section 5 with four subsections, discussing ECG acquisition and preprocessing in Sections 5.1 and 5.2, respectively; dataset preparation for the experiments is described in Section 5.3, and the architecture of the proposed CNN model is explained in Section 5.4. In Section 6, we provide details of the experiments performed in this study, followed by four subsections. Sections 6.1-6.3 present details of the first, second, and third experiments, including results and graphs, respectively. In Section 6.4, we present a real-time monitoring implementation of our proposed model in hardware. In Section 7, the results of the experiments are discussed and performance evaluations are presented; research tools and applications of our proposed model are presented in Section 7.1. Section 8 concludes this paper and provides insights into future directions.
Related Work
Typically, the output of a system is based on a function applied to the input. Traditional algorithms work in scenarios where outputs are generated by applying predefined rules or functions to the inputs; these rules or functions remain fixed, and achieving the desired results requires manual observation and optimization. Machine learning algorithms, in contrast, generate an output, optimize the underlying function, and then regenerate an output close to the expected one in an automated fashion. Both traditional and machine learning algorithms have applications in clinical diagnosis. In this section, we present related work that has used these methods to analyze the ECG signal for the detection and classification of different arrhythmia types. Traditional approaches to ECG analysis generally refer to conventional signal processing techniques that employ various filters and time- and/or frequency-domain transforms, while machine learning-based ECG analysis techniques are relatively newer and use machine intelligence to learn trends in the data and make predictions.
ECG analysis and classification rely upon proper identification of fiducial points such as the ST-segment and QRS-complex. The process of fiducial point detection is called Feature Engineering (FE). Traditionally, FE is performed by a doctor manually observing the ECG graph, which results in a diagnosis. With the recent development of real-time monitoring systems, detection of fiducial points such as the QRS-complex and of arrhythmia is now an automated FE process performed by mathematical techniques such as the Pan-Tompkins algorithm and the Wavelet Transform (WT) [21,22], machine learning techniques such as CNN [23], and arrhythmia detectors based on Long Short-Term Memory (LSTM) [24] evaluated with sensor-acquired data. Fiducial point detection methods that have been evaluated with the ESCDB dataset include thresholding and windowing techniques [25,26], time-domain techniques for ST-segment detection [27][28][29], and position-based QRS detectors [30]. Other methods include WT [31][32][33], Discrete Wavelet Transforms (DWT) [34][35][36][37], windowing algorithms [38], and Finite Impulse Response (FIR)-based adaptive filters [39].
However, our proposed 2-D CNN model eliminates the need for FE, as it learns these features on the fly during the training cycle through its convolutions and feature maps. Therefore, we do not report performance metrics for the FE methods discussed above. One can refer to Reference [2], a recent survey paper on ECG signal analysis, for a further review of these FE methods and their performance as reported in the literature. In this study, we review traditional and machine learning approaches for the classification of ECG into different arrhythmia types and the detection of ST-segment changes and ischemia (i.e., MI). Furthermore, a comparison is provided between our proposed approach and others.
Traditional Approaches
Ischemia detection by analyzing the ST-segment deviation using an Isoelectric Energy Function (IEF) was introduced in Reference [40] and evaluated on the ESCDB database. In Reference [41], the authors presented a method that uses the Pan-Tompkins algorithm to detect ST-segment deviations, with a success rate of 97.03% and an error of 2.97% on the ESCDB database. ST-deviation (elevation or depression) based classification of arrhythmia into normal and abnormal classes was presented in Reference [42] and achieved a sensitivity of 98.2% and a positive predictive value (ppv) of 97.17% when evaluated with the ESCDB database. A time-frequency-based approach to classify MI was proposed in Reference [43] and achieved 94.23% accuracy, 95.72% sensitivity, and 98.15% specificity when evaluated with the ESCDB database. An ischemic beat classification with a Genetic Algorithm (GA) and Multicriteria Decision Analysis (MDA) was presented in Reference [44] and achieved 91% for both sensitivity and specificity. A rule-based method to classify ST morphology into normal and abnormal was introduced in Reference [38] and achieved 90.1% accuracy and 98.9% sensitivity when evaluated with ESCDB. Another method was introduced in Reference [45] to detect ischemia based on statistical features of the ST-segment deviation; it classified normal and abnormal beats with 97.71% sensitivity and 96.89% ppv on the ESCDB database.
Machine Learning Approaches
Using machine learning techniques, the authors of Reference [46] proposed Decision Tree (DT) and Random Under-Sampling (RUS) boosting-based techniques to detect ST-segment and T-wave anomalies in ECG from the same ESCDB database, with a sensitivity of 86%. In Reference [36], ST-deviation is detected by an ensemble classifier based on a backpropagation neural network; the deviation is obtained by subtracting the detected ST-segment from the isoelectric level of its beat, achieving a sensitivity of 90.75%. The basic Support Vector Machine (SVM) is a supervised learning model, generally known as a binary classifier, that groups data into two classes using separating hyperplanes. The SVM was proposed by Vapnik as an algorithm that extracts a function to classify unknown data [47] and mainly separates data into two classes based on supervised learning. This makes the SVM a strong classifier candidate for ECG signal classification into the two classes of normal and abnormal [48]. There are variations of the SVM, such as the Multiclass Support Vector Machine (MSVM) and the Complex Support Vector Machine (CSVM), that can be used to classify ECG arrhythmia types into multiple classes, as presented in Reference [49]. In Reference [35], the authors detected ST-segment episodes and changes and classified arrhythmia into six classes using an SVM. A Rule-Based Decision Tree (RBDT) approach to classify ischemic and arrhythmic beats into normal and abnormal was introduced as a fuzzy expert system by the authors of Reference [50]; rules are derived from the ST-segment value depending on the time between the R-peak and the start of the ST-segment slope. In Reference [51], the authors used an ensemble learning technique called Adaptive Boosting (AdaBoost), also known as meta-learning, to enhance binary classification efficiency in detecting abnormal beats from the ECG signal, evaluated on the three databases MITDB, QT [52], and ESCDB. Studies have shown that Artificial Neural Networks (ANN) are powerful data analysis tools; an ANN-based approach to detect ischemic episodes in the ECG was presented in Reference [53]. In Reference [54], the authors presented a Multi-Module Neural Network System (MMNNS) to classify S and V heartbeats, evaluated on the MITBIH and ESCDB databases. A Densely connected CNN (DenseNet)-based classifier that classifies four ECG patterns was presented in Reference [55] and evaluated on two databases, including ESCDB. Classification of the ST-segment into normal, depressed, and elevated levels using multiple features extracted with the Random Forest (RF) technique was achieved in Reference [56] with 86.9% accuracy, 85.18% sensitivity (ST normal), 87.35% sensitivity (ST depressed), and 88.06% sensitivity (ST elevated) on the ESCDB database. CNN is best known for computer vision applications and works well in image classification; two-dimensional (2-D) CNN models that classify arrhythmia types using the ECG signal converted to an image have been presented in References [57][58][59] and evaluated on the MITBIH database.
Proposed Model
Our proposed model works in four steps, as follows:
1. ECG data acquisition, explained in Section 5.1.
2. Preprocessing of the acquired data for denoising and conversion of the 1-D ECG signal to a 2-D image, explained in Section 5.2.
3. Organization of the data into multiple datasets, described in Section 5.3. The organized datasets are used to train the proposed CNN architecture to perform the experiments described in Section 6.
4. Training of the 2-D CNN model on the organized datasets, explained in Section 5.4.
Each of these steps of the proposed model is further explained in its respective section.
Data Acquisition
Automatic detection and classification of cardiac conditions such as ischemia and MI with advanced machine learning techniques requires evaluating these methods and algorithms for accuracy to avoid false positives and false negatives. For research and evaluation purposes, clinically pre-recorded ECG signals are publicly available to evaluate the efficiency and performance of such methods. In this study, we have used the ESCDB available on the PhysioNet data bank website, https://www.physionet.org. This database contains ECG waveforms recorded by a Holter machine, with recordings of 2 h per patient from 70 males and 8 females aged 30 to 84 years. Each patient had suspected or diagnosed myocardial ischemia. Each annotated recording contains ECG data collected with two ambulatory chest leads, Lead 3 (L3) and Lead 5 (L5), sampled at 250 Hz with a precision (smallest voltage step) of 5 µV. The annotations provide the beat type, gender, age, clinical outcome, electrolyte imbalance, and a summary of the pathology. This database was coordinated by the Institute of Clinical Physiology of the National Research Council (Pisa, Italy) and the Thoraxcenter of Erasmus University (Rotterdam, Netherlands). While this database provides annotations per beat, it is known to contain non-ischemic ST-segment changes due to drifts in the ST-segment deviation level or postural changes, leading to false positives; on the other hand, it may contain beats annotated as ischemic but with no ST-T change in nature, leading to false negatives. Finally, the definition of ischemia has been updated since this database was posted. Examples of the ECG waveforms of Lead 3 and Lead 5 from the ESCDB database are shown in Figure 4.
Preprocessing
In real-time ECG monitoring, once the ECG signal is acquired at Layer 1 of the ECG signal analysis process, it is sent to a coordinator at Layer 2 for further processing. This coordinator can be a Personal Digital Assistant (PDA), a smart app on a smartphone, or a microcontroller device that processes the ECG data. The second layer is mainly responsible for cleaning (denoising) the ECG signal and detecting the R-peaks, so that the ECG wave can be captured between every two consecutive R-peaks (the R-R interval) and transformed into a 2-D image for classification with our 2-D CNN model at Layer 3.
The running ECG signal is first cleaned of any noise sources (such as internal/embedded or external noise) that may have been introduced by the sensors or the location at the time of acquisition. Many methods are available to denoise the ECG signal, such as the State Space Recursive Least Square (SSRLS) adaptive filter [13], the Adaptive Notch Filter (ANF) [14], and the Fast Fourier Transform (FFT) [15]. However, for removing Gaussian noise, impulse noise, or salt-and-pepper noise from 1-D signals and/or 2-D images, linear filters such as average and weighted filters can be used. Weighted filters can reduce high-frequency components, but sharp details in the signal or image may be lost [60]. As this study proposes a method for real-time systems, the acquired ECG signal may be contaminated with power-line, power supply, or radio signal noise. Therefore, we have used a rather simple, less complex average window filter for denoising such noise sources [61]. A moving average window filter takes the average of the neighboring values while moving along the ECG signal; the number of neighboring values becomes the window size of the filter. It removes fluctuations and smooths out the noise by working as a low-pass filter. After many iterations, the optimized window size of 5 (N = 2) was found to provide the optimum trade-off between noise reduction and loss of signal detail, such as compromising the signal shape and/or the morphology of the fiducial points [60], to produce a clean ECG signal. This denoising filter is applied in real-time on the running ECG signal regardless of the position of the R-peaks and the other fiducial points, and is performed before transforming the 1-D ECG signal into a 2-D image representation. The moving average filter can be mathematically defined by Equation (1):

$$y[n] = \frac{1}{2N+1} \sum_{k=-N}^{N} x[n+k], \tag{1}$$

where $x$ is the input ECG signal, $y$ is the filtered output, and $2N+1$ is the window size (here $N = 2$, i.e., a window of 5 samples).
As the cleaned ECG signal between every two consecutive R-peaks is converted to a 2-D image in this preprocessing layer, the R-peaks must be detected accurately after denoising and before the signal-to-image conversion. Methods such as the Discrete Wavelet Transform (DWT) [62], windowing algorithms [63], the Empirical Mode Decomposition (EMD)-based RR detector algorithm [64], and selective decomposition [65] are available for R-peak detection in real-time monitoring systems. A complete cycle of the ECG can be detected based on discrete data, as presented in Reference [66]; adaptive thresholding and local maxima with search-back mechanisms can be used effectively to detect the R-peaks. However, for the purpose of simulation, we have used the R-peak annotations on the pre-recorded ECG signals. Using the Python 3.7 IDE, the "rdsamp" function of the WaveForm-DataBase (WFDB) package reads the record file that contains the cleaned ECG signal and uses the starting and ending sample numbers to plot the ECG signal in a two-dimensional image. These starting and ending sample numbers are the sample numbers of two consecutive R-peaks and are provided to the "rdsamp" function for each R-R interval to generate an image containing one complete cycle of the ECG wave. Figure 5 depicts an example of a noisy and a cleaned Lead 3 signal after the moving average denoising filter is applied; it can be observed that the quality of the signal has improved without compromising the characteristics of the ECG wave. The extracted, cleaned ECG wave between every two consecutive R-peaks is then converted into a 2-D image using the Python code and sent to Layer 3 for classification into one of the three classes. The conversion of the 1-D ECG signal to a 2-D ECG image can be thought of as a snapshot taken between every two consecutive R-peaks and stored as an image whose pixel intensity values altogether compose the shape of the ECG wave.
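As a concrete illustration of this preprocessing layer, the following Python sketch denoises a record with the moving average filter of Equation (1) and saves each R-R interval as a small image. It is a minimal sketch, not the authors' exact script: the record name, the PhysioNet directory name, the output directory, and the figure settings are assumptions, and the R-peak locations are read from the database annotations as described above.

```python
import os
import numpy as np
import wfdb                      # Python WaveForm-DataBase (WFDB) package
import matplotlib.pyplot as plt

def moving_average(x, N=2):
    """Centered moving average with window size 2N + 1 (Equation (1))."""
    window = 2 * N + 1
    return np.convolve(x, np.ones(window) / window, mode="same")

# 'e0103' is an ESCDB record; pn_dir='edb' fetches it from PhysioNet
# (directory name assumed; a local copy of the record also works).
signals, fields = wfdb.rdsamp("e0103", pn_dir="edb")
ecg = moving_average(signals[:, 0])          # denoise the first channel

# R-peak sample numbers are read from the database annotations here;
# a real-time system would use an R-peak detector instead.
ann = wfdb.rdann("e0103", "atr", pn_dir="edb")
r_peaks = ann.sample

# Plot each R-R interval and save it as a small grayscale-style image.
os.makedirs("images", exist_ok=True)
for i, (start, end) in enumerate(zip(r_peaks[:-1], r_peaks[1:])):
    fig = plt.figure(figsize=(1, 1), dpi=28)   # roughly 28 x 28 pixels
    plt.plot(ecg[start:end], color="black", linewidth=1)
    plt.axis("off")
    fig.savefig(f"images/beat_{i:05d}.png", dpi=28)
    plt.close(fig)
```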
Dataset Preparation
Initially, we selected thirty records from the ESCDB database and grouped them into four collections. We then selected images (ECG data between two consecutive R-peaks converted to 2-D images) from each collection and created four datasets, which we used in our experiments. These thirty records are organized into collections as follows. The first collection has fifteen records and consists of ECG signals recorded with Lead 3. The second collection has six records containing ECG signals recorded with Lead 5. The third collection has six records containing ECG signals recorded with Lead 3. The fourth collection has nine records containing ECG signals recorded with Lead 3.
Three datasets are created for intra-patient analysis [67] using collections 1, 2, and 3, named Dataset1 (DS1), Dataset2 (DS2), and Dataset3.1 (DS3.1). Additionally, Dataset3.2 (DS3.2) is created for inter-patient analysis [68,69], as follows. In the intra-patient division scheme, data extracted from one patient may appear in both the training and test sets of the machine/deep learning model, whereas in the inter-patient division scheme, data derived from one patient appears in only one set, either the training or the test set. DS1 consists of a total of 600 images, 300 images per class taken from the first collection. DS2 contains a total of 600 images, 300 images per class taken from the second collection. DS3.1 has a total of 900 images, 300 images per class taken from the third collection, and DS3.2 also has a total of 900 images, 300 images per class taken from the fourth collection. However, as DS3.2 is created for inter-patient analysis [67][68][69][70][71][72], it is further grouped into two datasets: DS3.2 TrainingSet, containing 200 images per class from six records for training the classifier, and DS3.2 TestingSet, containing 100 images per class from three records for validation of the classifier. These images are the ECG waves within two consecutive R-peaks processed at Layer 2 of our proposed three-layer ECG signal analysis process shown in Figure 3. Table 1 shows these database collections and the dataset selection; the record numbers and the number of images are also presented.
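The distinction between the two division schemes can be sketched as follows in Python. This is an illustrative sketch only; the record-to-image bookkeeping (a list of dicts holding image, label, and record ID) is an assumption, not the authors' code.

```python
import random

def intra_patient_split(samples, train_ratio=0.7, seed=0):
    """Images from any record may land in both sets (DS1, DS2, DS3.1)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def inter_patient_split(samples, train_records):
    """Each record contributes to exactly one set (DS3.2)."""
    train = [s for s in samples if s["record"] in train_records]
    test = [s for s in samples if s["record"] not in train_records]
    return train, test

# samples: list of dicts like {"image": ..., "label": "N", "record": "e0103"}
# train_records: the six training records of collection 4 (hypothetical IDs).
```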
CNN Architecture
A CNN-based model typically consists of an input layer, CNN kernel layers (also known as filters or feature maps), pooling layers, a fully connected layer, and an output layer. The size of each layer depends on the problem, and its optimization defines the efficiency of the model [73]. We present a 2-D CNN-based classifier model that performs automated feature engineering and learns the fiducial points using global averaging at the feature map level of the CNN. The proposed method integrates the feature engineering and classification capabilities, in contrast to traditional approaches that require pre-extracted features; this minimizes the complexity, time, and overhead of the diagnosis process. The CNN model consists of 7 layers, as shown in Figure 6, and uses the Adaptive Moment (ADAM) method for backpropagation [74]. ADAM is a stochastic optimizer that updates weights based on value and gradient only; it calculates the gradients for weight optimization during the 2-D CNN training. ADAM works well with CNNs and their variations, as it combines the advantages of Adaptive Gradient (AdaGrad) and Root Mean Square Propagation (RMSProp). The proposed CNN architecture takes an image as input with the optimized size of 28 × 28 pixels, a size found after several trials and iterations. The 2-D image need not be zero-padded to the duration of the longest R-R interval: the ECG data between two consecutive R-peaks of a long R-R interval (e.g., corresponding to bradycardia) is transformed to an image with the same fixed size of 28 × 28 pixels as an ECG with shorter R-R intervals, so the ECG morphologies simply appear more condensed or spread out. After the training process, the system learns to differentiate such ECG images from normal ones or other classes. It is important to note that larger image sizes require more neurons and make the system computationally inefficient, which in turn puts more computational burden on the training process. The "augmentedImageDatastore" function of Matlab is used to convert the input image to 28 × 28 pixels.
Convolution with the optimized kernel size of 5 × 5 is applied to the image, generating four feature maps. These feature maps are rectified by the Rectified Linear Unit (ReLU) activation function expressed in Equation (2):

$$f(x) = \max(0, x). \tag{2}$$

ReLU avoids the vanishing gradient problem and is less complex than the TanH and Sigmoid functions [75]. The rectified feature maps then go through the pooling layer. We have used global average pooling, which is known for object localization; this layer generates one average value per feature map, which becomes a regression problem for the next, fully connected layer. The fully connected layer's output is normalized to a probability distribution by the Softmax activation function, calculated by Equation (3):

$$P_i = \frac{e^{x_i}}{\sum_{j=1}^{N} e^{x_j}}, \tag{3}$$

where $P_i$ is the predicted probability of class $i$, $N$ is the total number of classes, and $x_i$ is the output calculated for class $i$ after the forward pass. Softmax generates predicted output values between 0 and 1 for each class.
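For readers who want to experiment with the architecture, the following Python/Keras sketch mirrors the layer sequence described above (input, convolution, ReLU, global average pooling, fully connected layer, softmax). The authors implemented their network in Matlab; this re-expression, including the TensorFlow/Keras API choice, is an assumption for illustration, not the original implementation.

```python
from tensorflow.keras import layers, models

# Approximate re-expression of the proposed 2-D CNN:
# 28x28 grayscale input -> 4 feature maps (5x5 kernels) -> ReLU ->
# global average pooling -> fully connected layer -> softmax over 3 classes.
model = models.Sequential([
    layers.Conv2D(filters=4, kernel_size=(5, 5), activation="relu",
                  input_shape=(28, 28, 1)),
    layers.GlobalAveragePooling2D(),          # one average per feature map
    layers.Dense(3, activation="softmax"),    # Normal, ST-change, V-change
])

# ADAM optimizer with cross-entropy loss, as described in the text.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```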
During the training process, the amount by which the weights are updated is referred to as the step size or learning rate. The learning rate is a configurable hyper-parameter used in the training of neural networks. The loss function (or error) is another important component of neural networks and quantifies the prediction error of the network; it is also referred to as the cost function, a measure of the discrepancy between the value the model predicts and the actual value. In a neural network, the cost (loss) function is minimized by updating the weight parameters along the gradient of the loss over several iterations until the validation loss (error) converges. In this work, the predicted and labeled values are used to calculate the error using the cross-entropy loss function expressed by Equation (4):

$$L = -\sum_{i=1}^{N} y_i \log(P_i), \tag{4}$$

where $y_i$ is the true (one-hot) label and $P_i$ is the predicted probability of class $i$ from Equation (3). The backpropagation process then optimizes the weights to minimize the error by calculating the gradients and updating the weights with the ADAM solver, as shown in Algorithm 1.
Algorithm 1: Adaptive Moment (ADAM)

Hyper-parameters: $\alpha > 0$ (learning rate), $\beta_1 \in [0,1)$ (first-moment decay rate), $\beta_2 \in [0,1)$ (second-moment decay rate), $\epsilon > 0$ (numerical stability term); $L$ is the loss/error calculated from the cross-entropy function (Equation (4)). At each training step $t$, with gradient $g_t = \nabla_W L(W_{t-1})$:

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t,$$

$$v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2,$$

$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t},$$

$$W_t = W_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.$$

The updated weights $W_t$ are returned after the final iteration. The details of the proposed architecture, including the CNN and its parameters, are summarized in Table 2. By designing a CNN with the structure of Figure 6 and the parameters listed in Table 2, the proposed 2-D CNN ECG classification approach of this paper can be reconstructed and the results presented in Section 6 reproduced.
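A minimal NumPy sketch of a single ADAM update, following the standard formulation of Algorithm 1, is shown below. The default hyper-parameter values are the ones commonly used in the literature, not values confirmed by this paper.

```python
import numpy as np

def adam_step(w, g, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update for weights w given gradient g at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate
    v = beta2 * v + (1 - beta2) * g**2       # second-moment estimate
    m_hat = m / (1 - beta1**t)               # bias correction
    v_hat = v / (1 - beta2**t)
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Usage: initialize m and v to zeros with the same shape as the weights.
w = np.zeros(4); m = np.zeros_like(w); v = np.zeros_like(w)
w, m, v = adam_step(w, g=np.array([0.1, -0.2, 0.05, 0.0]), m=m, v=v, t=1)
```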
Results
We performed three simulation experiments with variants of the CNN architecture based on our proposed 2-D CNN classifier. In this section, we present the results of each experiment, including the classified output for each class and the accuracy and loss graphs. The performance metrics are discussed in Section 7. Each of these experiments uses the Matlab augmented image datastore function to convert the ECG data between every two consecutive R-peaks into a 28 × 28-pixel grayscale image as input to the 2-D CNN from the appropriate dataset. Datasets DS1, DS2, and DS3.1 are based on the intra-patient division scheme and are further split into 70/30 training and testing sets with labels for training and validation of the CNN algorithm. The "splitEachLabel" function in Matlab is used, which takes a ratio (0.7 in our case) as an argument and creates two separate image datasets, one for training and the other for testing, for DS1, DS2, and DS3.1. Dataset DS3.2 is based on the inter-patient division scheme and is split into a training set (DS3.2 TrainingSet) and a testing set (DS3.2 TestingSet) with a ratio close to 70/30 (approximately 66.6/33.3). The records used to construct the DS3.2 TrainingSet are different from the records used for the DS3.2 TestingSet (as shown in Table 1), making the two sets independent collections of records. The training options and 2-D CNN parameters are shown in Table 2.
First Experiment
The first experiment classifies the ECG signal into two classes: Normal and Abnormal. This experiment uses dataset DS1 containing a total of 600 images from our database collection 1 as shown in Table 1. The 2-D CNN architecture/network shown in Figure 6 is trained on the training set derived from DS1. Validation is performed using the testing set, and Figure 7 shows the output of the forward pass of an ECG image classified as Normal or Abnormal after the validation is completed. Figure 8 shows the accuracy graph, and Figure 9 shows the loss function for the training progress.
Second Experiment
The second variant of the CNN architecture performs the second experiment, classifying the ECG signal images into the two classes of Normal and Abnormal using dataset DS2 from our database collection 2, shown in Table 1. DS2 has a total of 600 images recorded with Lead 5 and is further split into training and testing sets to train and validate the proposed network. Validation yields the classification results shown in Figure 10.
Third Experiment
The third experiment is performed for both intra-patient and inter-patient classification of the ECG signal into three classes, Normal (N), ischemic beat (ST-change), and V-change, using datasets DS3.1 and DS3.2 from our database collections 3 and 4, respectively, shown in Table 1. DS3.1 contains a total of 900 images, and our proposed 2-D CNN is trained on the training set from DS3.1. Figure 11 shows the classification results of three images classified as Normal, ST-change, and V-change based on the intra-patient division scheme. Figures 12 and 13 show the accuracy and loss progress graphs, respectively. We repeated the third experiment with DS3.2 to classify the ECG signal into the three classes under the inter-patient scheme. Figure 14 shows the classification results of three images classified as Normal, ST-change, and V-change based on the inter-patient scheme. The accuracy and loss progress graphs for the inter-patient scheme using DS3.2 are shown in Figure 15.
Fourth Experiment
We have also performed a fourth experiment with a hardware implementation, illustrated in Figure 16. This experiment follows the process of our proposed three-layer ECG signal analysis model presented in Figure 3 of Section 1.3. At Layer 1, ECG data is acquired with the AD8232 (Analog Devices, Inc., Norwood, MA, USA) ECG measurement board, directly attached to an Arduino Mega 2560 with wires. The AD8232 is an analog, single-lead, low-power integrated front-end heart monitor used for a variety of vital-signs monitoring applications; it is a 3-pin, lightweight, portable sensor from Analog Devices that operates on a 3.3 V DC supply and gives an analog output. Other sensors, such as the Zio Patch [76] and Shimmer [77], a Bluetooth (BT)-based wireless sensor, can also be used to acquire ECG. At Layer 2, the coordinator, the Arduino Mega microcontroller, receives the data for preprocessing. The ECG signal is sent to a smartphone for graphical representation using the IEEE 802.15.1 Bluetooth protocol and displayed with a smartphone app designed in the open-source Visual Studio Code IDE; the smart app is programmed in JavaScript using the "react" library and the "react-native" framework. Wireless IEEE 802.11x and Zigbee IEEE 802.15.4 [78] protocols can also be used to send the data from the controller to the smartphone app. The coordinator then sends the preprocessed data, in the form of images, to Layer 3 at a central location over an IEEE 802.11x-based wireless connection for classification by our trained 2-D CNN algorithm running on the Amazon Web Services (AWS) cloud. These images occupy bandwidth when traveling over networks such as wireless or the Global System for Mobile Communication (GSM); bandwidth requirements can cause latency and hinder a successful data transfer. Data compression can overcome this problem by reducing the overall packet size during the transfer. The data is not compressed between the coordinator and Layer 3 in this experiment for the sake of simplicity; however, lossless compression techniques such as Quad Level Vector (QLV) [79,80] and Huffman coding are available to address the bandwidth requirements. The purpose of the fourth experiment is to show that our proposed architecture can be adopted for real-time monitoring systems using portable and wearable devices.
Discussion
We performed multiple experiments, and in the first three simulation experiments, preprocessing was repeated to create augmented grayscale images of size 28 × 28 pixels. Images were shuffled at every epoch during training in each simulation experiment to achieve better training. Table 3 shows the optimized parameters used during the training process for the simulation experiments. The validation frequency is calculated from Equation (5).
Observing the results of Figure 12 shows that our 2-D CNN method achieved its best accuracy of 99.26% in detecting the three classes of Normal, ST-change, and V-change under the intra-patient scheme. In the second experiment, our method also improved the classification accuracy shown in Figure 10 when identifying the two classes of Normal and Abnormal, compared with other methods that attempted to classify the ECG signal into Normal and Abnormal classes. It can be concluded that our proposed 2-D CNN method outperformed other machine learning and traditional methods when tested on ESCDB. Following the notion that intra-patient division may result in a biased system [70], inter-patient division is recommended for cases where the classification module will be applied to new patient data [67][68][69][70][71][72]. Figure 14 shows the results of a preliminary experiment performed for the inter-patient scheme, which is yet to be explored in future work for adaptation in real-time monitoring systems.
We achieved better results with fewer layers in the CNN network structure, thereby yielding much lower complexity. The complexity of a CNN can be expressed in Big O notation [81] by Equation (6):

O( Σ_{l=1}^{d} n_{l−1} · s_l^2 · n_l · m_l^2 ) (6)

where d is the number of convolutional layers, n_l denotes the number of filters (the width) of the lth layer, and s_l and m_l are the spatial sizes of the filter and the output feature map, respectively. Table 4 shows the reported performance metrics, including F1-score (f1), success rate and positive predictive value (ppv), of the related work in comparison with our study. We calculated the accuracy (acc), sensitivity (sen), and specificity (spe) performance metrics based on the True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN) counts, using Equations (7)-(9), respectively, to evaluate our method:

acc = (TP + TN) / (TP + TN + FP + FN) (7)
sen = TP / (TP + FN) (8)
spe = TN / (TN + FP) (9)

These metrics are summarized in Table 5 and are additionally shown in the confusion matrices of Figure 17 for our simulation experiments.
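As a quick check of these definitions, the following is a minimal Python sketch (not the authors' code) that evaluates Equations (7)-(9) from raw confusion counts; the example counts are hypothetical:

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity per Equations (7)-(9)."""
    acc = (tp + tn) / (tp + tn + fp + fn)  # Eq. (7)
    sen = tp / (tp + fn)                   # Eq. (8)
    spe = tn / (tn + fp)                   # Eq. (9)
    return acc, sen, spe

# Example with hypothetical counts from a two-class confusion matrix
acc, sen, spe = classification_metrics(tp=950, tn=930, fp=70, fn=50)
print(f"acc={acc:.3f} sen={sen:.3f} spe={spe:.3f}")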
Research Tools and Applications
There are PC-based and hardware-based tools available to test and evaluate our model. As PC-based tools, we used Matlab for our simulation experiments, as well as for designing, training and evaluating our proposed 2-D CNN model. Other PC-based tools such as Python and Labview provide libraries that can be used in simulations. We used Python for database analysis and dataset preprocessing. The "R" tool, on the other hand, offers robust capabilities for analyzing datasets. System dynamics modeling can also be used to evaluate the effectiveness of our model, and of health monitoring tools in general, from a broader societal perspective [82,83]. As hardware-based tools, we used the AD8232 ECG sensor and the coordinator Arduino Mega 2560 for our fourth, hardware experiment, presented in Section 6.4 and shown in Figure 16. ECG sensors with clips, cup electrodes or patches can be used with our proposed model for ECG acquisition. Other hardware tools, such as System on Module (SoM) and System on Chip (SoC) devices and emulator boards such as the AM335X and NXP Nexperia 8550, can be used, in addition to Open Source HardWare (OSHW)-based emulators such as the Arduino, ADuCM361 and Duino Olimexino, to analyze and classify ECG with our proposed 2-D CNN classifier and three-layer process architecture. However, the implementation of real-time ECG monitoring systems will introduce some difficulties into the daily workflow of clinicians, especially when many patients are referred to the clinic with suspected pathology. Cloud servers should be structured to effectively prioritize the notifications sent to clinicians and patients. In addition, legal issues, regulatory standards, and security, privacy and confidentiality protocols play vastly important roles here as well. In particular, the systems should be highly accurate in ECG acquisition, processing and analysis so that suspected pathology is not missed, while keeping the warning threshold reasonable for the owner (user) of the monitoring device (e.g., a smartwatch). The ECG algorithms implemented on these systems should therefore keep false negatives as low as possible to avoid missing suspected cardiac rhythms. Clinical trials of such systems could further establish their performance, but are beyond the scope of this paper.
The applications of the proposed model presented in this study are not limited to ECG analysis but span a wide range, including telehealth and the electronics industry. Besides diagnosis of cardiac conditions in real time for adults, the model can also be used to monitor fetal ECG and detect abnormalities; a method in Reference [84] shows how to obtain a clean fetal ECG signal. Our proposed method is scalable and can be implemented on microcontroller-based devices such as the TI MSP430 and TMS320-6713, which can later be adopted and used with portable simulators such as the Fluke Prosim and TriSmed TSM3000B for research purposes. Our method can further be implemented and integrated within smart devices such as smartwatches and smartphones for real-time monitoring and diagnosis. This method can also be implemented in electronic circuits to monitor sinusoidal signals and detect abnormalities such as noise interference or intruder information tapping in the signal. Electronic signatures are usually an image of a person's handwritten signature; these could likewise be validated using our proposed method once it is trained on a signature dataset.
Conclusions and Future Work
ECG monitoring is vital for diagnosing any abnormality of the heart. Early detection and treatment of ischemia and MI can save lives. ST-segment changes are early signs of a heart attack and are classified with higher accuracy by our proposed method. Initial and timely diagnosis of cardiovascular diseases can drive the acceptance of a solution and plays an essential role in a patient's health status during an active cardiac condition. This study showed that we improved the diagnosis time by presenting a less complex system for real-time monitoring, validated with both simulation-based experiments (Experiments 1-3) and a hardware-based experiment (Experiment 4). We have introduced a three-layer process to analyze and classify ECG both in simulations and in real time, as presented in Experiment 1 for the Normal and Abnormal classes. We have proposed a 2-D image-based CNN classifier that distinguishes three classes, including ST-changes. We have reviewed the detailed literature on ECG classification based on both traditional and machine learning techniques and compared the reported performance evaluation metrics with those achieved by our approach. Multiple experiments were performed to evaluate our model; the best accuracy of 99.26% with an error of 0.0371 was achieved with the intra-patient division scheme, and an accuracy of 87.33% with an error of 0.2647 was achieved with the inter-patient division scheme, when evaluated on the ESCDB database. We presented the research tools used in this study and shed light on other PC-based and hardware-based tools available to the research community to further explore and improve ECG classification. We presented real-life applications of our proposed model in the health industry, the electronics industry and beyond. The need for feature engineering has been eliminated with our approach, since the CNN learns features automatically during the training process. Our proposed method has much lower complexity than others in this area of research, making our model feasible for implementation in real-time monitoring systems.
As a future direction, we plan to evaluate our model with multiple databases using the inter-patient division scheme and to classify more arrhythmia types, such as fusion and other unknown beats. In addition, we plan to collect new ECG images and create a dataset of our own to reflect real-life scenarios, and to further evaluate our method on the real-time system presented in this study. Moreover, we plan to convert the trained network from Matlab to C code for implementation in microcontroller-based systems and to test it on a portable, wearable device performing real-time monitoring and classification. Furthermore, we plan to develop an application, integrated with a microcontroller-based system, to monitor the health of a person's heart.

Funding:

This research was funded in part by the UB Partners CT Next Innovation Grant 2019-2020. The authors also acknowledge funds received from the University of Bridgeport to buy equipment to support this research.
Data Availability Statement:
Publicly available datasets were analyzed in this study. This data can be found here: https://physionet.org/content/edb.
Conflicts of Interest:
The authors declare no conflict of interest.
Characterization of the Boreal Summer Upwelling at the Northern Coast of the Gulf of Guinea Based on the PROPAO In Situ Measurements Network and Satellite Data
1 Université Félix Houphouët-Boigny de Cocody, UFR-SSMT, LAPA-MF, 22 BP 582 Abidjan 22, Cote d’Ivoire 2 Université Abomey-Calavi de Cotonou, 01 BP 526 Cotonou, Benin 3 Institut National Polytechnique Houphouët-Boigny de Yamoussoukro (INPH), Chercheur Associé au LAPA-MF de l’Université Félix Houphouët-Boigny de Cocody, BP 1093 Yamoussoukro, Cote d’Ivoire 4Centre de Recherche Océanologique d’Abidjan (CRO), 29, Rue des Pêcheurs, BPV 18 Abidjan, Cote d’Ivoire
Introduction
Coastal upwellings are characterized by seasonally low sea surface temperature (SST). They generally result from the response of the coastal ocean to alongshore winds, leading to the production of a relatively intense current with a small offshore and a large alongshore component [1]. This causes the pumping of cooler and nutrient-rich waters from the subsurface to the ocean surface. Upwelling areas are economically important even though the global area constituted by these regions is less than 1% of the global ocean [2]. Moreover, coastal upwellings have a great impact on local climate. In particular, the coastal ocean surface conditions in the Gulf of Guinea, situated in the northeastern equatorial Atlantic, influence the West African climate [3]. Understanding the ocean dynamics of this region is therefore of great interest: (i) firstly, because the Gulf of Guinea is the principal source of the water vapour which constitutes most of the precipitation on the continent. For example, Gu and Adler [4] linked the rainfall peak in May along the coastal area of the Gulf of Guinea to the seasonal forcing of the ocean. Eltahir and Gong [5] observed that the intensity of the West African monsoon depends on the meridional gradient of the moist static energy in the boundary layer between the ocean and the continent. (ii) Secondly, this tropical Atlantic area has the largest SST seasonal amplitude, of about 5-8 °C [6]. A coastal upwelling is observed each year along the northern coast of the Gulf of Guinea during the boreal winter and summer periods, that is, from January to February and from June to October, respectively, off Côte d'Ivoire and Ghana [7,8]. June and October correspond to transition periods where an upwelling could be observed. These months are characterized by a progressive fall of SST in June and a return of warm water in October, corresponding, respectively, to the beginning and the end of the upwelling. However, the end of the upwelling could sometimes be observed in mid-October [9]. Satellite SST data have been extensively used for global ocean monitoring. Such studies cover the Mauritania coastal zone [10], the Senegalese upwelling [11], and the northern coast of the Gulf of Guinea [9,12,13]. For example, Aman and Fofana [12,13] used SST data from the Météosat satellite to characterize the seasonal upwelling variability along the coast of Côte d'Ivoire. No study has been undertaken using in situ coastal data since these measurements stopped in the early 1990s. The last studies were those of Verstraete et al. [14], Arfi et al. [15], Picaut [16], and Colin [17]. These works showed that (i) the cooling is firstly induced by the large-scale structure of the Guinea Current off the continental shelf, which tilts the thermocline towards the coast, and is then amplified by the increase of the zonal wind component at the coast [17]; (ii) the study of the mean sea level reveals a wave with a 15-day period throughout the year, an oscillation that clearly appears during the upwelling season [16]; and (iii) an annual cycle is found in mean sea level with an amplitude of about 20 cm at Tema, Takoradi, and Abidjan [15].
This paper proposes to use two datasets derived, respectively, from satellite measurements and from new in situ SST records from onset sensors moored along the northern coast of the Gulf of Guinea. The study aims to test the ability of the SST derived from the onset sensors to characterize the northern coastal upwelling. This would allow these new in situ measurements along the northern coast of the Gulf of Guinea to be used in future work on fisheries and climate. For instance, Binet [18] showed that Sardinella aurita migrates towards the surface and the shore during the upwelling period. Ali et al. [9] highlighted the influence of the coastal upwelling on precipitation along the northern coast of the Gulf of Guinea. However, given the short time series available from the onset sensors (2005-2011), satellite measurements will also be used in this work. SST measurements with onset sensors are monitored by the Regional Program of Physical Oceanography in West Africa (PROPAO) and now by the Jeune Equipe Associée à l'IRD named Analyses Littorale, Océanique et Climatique au nord du Golfe de Guinée (JEAI ALOCGG). The objectives of this program are to study the coastline and coastal erosion, the ocean conditions in the Gulf of Guinea and tropical Atlantic, the regional climate, and the role of the Gulf of Guinea in regional conditions. This project provides an opportunity to create a regional databank which could contribute significantly to the understanding of the complex mechanisms of this upwelling.
Section 2 presents the onset sensor data and the satellite measurements. Section 3 outlines the validation of the onset sensor SST as a good candidate to study the northern coastal upwelling of the Gulf of Guinea. The upwelling characterization is also undertaken using its onset date, its end date, and its duration. Section 4 describes the variability of this phenomenon. The summary and conclusion are provided in the last section.
Data and Method
Two types of data are used in this study to characterize the coastal upwelling at the northern coast of the Gulf of Guinea: SST records from onset sensors and satellite SST measurements from the Tropical Rainfall Measuring Mission Microwave Imager (TRMM/TMI).
2.1. Data. The Regional Program of Physical Oceanography in West Africa (PROPAO) in situ measurement network is composed of several onset sensors moored along the coasts of Lagos (Nigeria), Cotonou (Benin), Takoradi (Ghana), and Sassandra (Côte d'Ivoire). The sensors were installed at Cotonou in 2005 and 3 years later (2008) at the other stations. They are autonomous TIDBIT sensors (model 5, version 2) with a temperature range from −5 °C to +37 °C and an accuracy of ±0.2 °C (Figure 1). The signatures of the coastal upwelling events are analyzed using the daily SST derived from the TRMM Microwave Imager (hereafter TMI) sensor [19] from May 1 to November 30, to fully integrate the upwelling season [7,8], during the period 1998-2011. TMI is a nine-channel passive microwave radiometer with operating frequencies ranging from 10.65 to 85.5 GHz which provides a nearly complete sampling between ±38° latitude. This product is available daily as a 3-day running average. Optimal interpolation is used to fill points with no data, thus yielding complete data coverage over the ocean. The lowest frequency channel of this radiometer yields SST images even in the presence of nonrain clouds, in contrast to infrared radiometers that require cloud-free conditions. The additional channel at 10.7 GHz allows measuring SST with good precision during the strong downpours of tropical areas [20]. Some studies [21][22][23] have shown the good quality of the TMI dataset, whose new version has been validated by comparison with moored buoy data [24]. In this study, the 14-year daily SST data are obtained from Remote Sensing Systems (ftp://ftp.discover-earth.org/sst/daily/tmi/) from 1998 to 2011 at 0.25° × 0.25° spatial resolution. TMI SST are then extracted at each grid point closest to the PROPAO stations (Cotonou (6.22°N-2.26°E), Takoradi (4.88°N-1.75°W), Abidjan (5.19°N-4.02°W), and Sassandra (4.95°N-6.08°W)) (Figure 2). Ali et al. [9] observed that the upwelling extends from the coast to 4°N, but it could extend beyond [8]. These offshore TMI grid points can then document the beginning date of the major upwelling (hereafter onset date), the vanishing date of the phenomenon (hereafter end date), and its duration during the boreal summer. The SST threshold off Côte d'Ivoire and Ghana is not the same as off Benin: this value ranges between 24 °C and 26 °C off Côte d'Ivoire and Ghana, while it is about 26 °C or 27 °C off Benin (data not shown).
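As an illustration of the grid-point extraction step, the following is a minimal Python sketch (not the authors' code); the array layout and the nearest_grid_sst helper are assumptions for demonstration:

import numpy as np

def nearest_grid_sst(sst, lats, lons, station_lat, station_lon):
    """Extract the SST time series at the grid point closest to a station.
    sst: array of shape (time, nlat, nlon); lats, lons: 1-D coordinate axes."""
    i = int(np.argmin(np.abs(lats - station_lat)))
    j = int(np.argmin(np.abs(lons - station_lon)))
    return sst[:, i, j]

# Example with synthetic data on a 0.25-degree grid around the Gulf of Guinea
lats = np.arange(0.125, 10.0, 0.25)
lons = np.arange(-10.125, 5.0, 0.25)
sst = 27.0 + np.random.randn(214, lats.size, lons.size)  # 214 days, May 1-Nov 30
cotonou = nearest_grid_sst(sst, lats, lons, 6.22, 2.26)
print(cotonou.shape)  # (214,)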
2.2. Method.
To avoid choosing an SST threshold for the upwelling characterization period, a cumul method, which is an adaptation of the method of Liebmann and Marengo [26], is used. It serves to find the major upwelling season onset date, its end date, and its duration during the boreal summer period. SST values are then averaged between the onset date and the end date for each year to study the upwelling variability. This allows the upwelling period to be characterized objectively from both datasets.
For a given year and within the corresponding upwelling period (from May 1 to November 30), the annual mean and standard deviation of SST are computed through (1) for each station. Then, daily standardized anomalies (instead of the simple anomalies used by Liebmann and Marengo [26]) are calculated to give the same statistical weight to all SST values and to eliminate seasonality. The initial value of the cumul (which is a dimensionless quantity) is taken as the standardized anomaly at May 1. The cumul at May 2 is the sum of the standardized anomalies at May 1 and May 2. The following cumuls are calculated as the sum of the standardized anomaly of the current day and those of all previous days; this method is summarized in (2):

C_t = Σ_{i=1}^{t} (SST_i − SST_mean) / σ_SST (2)

These calculations are applied for each year and for the daily climatology, which represents the mean SST of the same calendar days, leading to 214 daily mean values. The maximum (resp., minimum) of the cumul curve versus the date indicates the onset date (resp., end date) of the upwelling season. The duration of the phenomenon is obtained by differencing the Julian days of the end date and of the onset date. This allows the variability of the upwelling season at the northern coast of the Gulf of Guinea to be studied.

The good agreement between the results of both datasets is interesting as they come from different sources and from different measurement sites. It allows concluding that in situ SST data could be used to study the upwelling at the northern coast of the GG. It is therefore highly recommended to continue the acquisition of in situ SST measurements based on onset equipment, as (i) it appears very useful for coastal upwelling characterization and (ii) satellite data are not available near the coasts. However, given the shorter time series of the in situ SST, TMI SST will be used in the following sections.
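To illustrate the cumul method numerically, the following is a minimal Python sketch (not the authors' code) under the definitions above; the synthetic SST series is hypothetical:

import numpy as np

def upwelling_dates(sst):
    """sst: daily SST from May 1 to November 30 (214 values).
    Returns (onset_index, end_index, duration_in_days) from the cumul curve."""
    anomalies = (sst - sst.mean()) / sst.std()   # standardized anomalies, Eq. (1)
    cumul = np.cumsum(anomalies)                 # running sum, Eq. (2)
    onset = int(np.argmax(cumul))                # maximum -> onset date
    end = int(np.argmin(cumul))                  # minimum -> end date
    return onset, end, end - onset

# Example: a synthetic season with a cool episode in the middle
days = np.arange(214)
sst = 27.0 - 2.0 * ((days > 60) & (days < 165)) + 0.1 * np.random.randn(214)
onset, end, duration = upwelling_dates(sst)
print(onset, end, duration)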
Upwelling Characterization Using In Situ and TMI SST
Figure 5 shows the climatology of the daily cumuls from TMI SST during the 1998-2011 period for the four grid points nearest the PROPAO network stations. The maximum and the minimum of the cumul curve versus the date represent, respectively, the boreal summer upwelling onset date and end date. The SST climatological value which corresponds to the maximum of the curve is about 26 °C (data not shown). This result concurs with those of Arfi et al. [15] and Ali et al. [9]. This threshold allows an early detection of the cooling areas if SST is plotted spatially.
The boreal summer upwelling off Sassandra and Abidjan begins approximately on July 3 and ends on October 17, while it begins on June 28 and ends on October 13 off Takoradi and Cotonou. The upwelling at the four sites lasts 15 weeks. These results are consistent with those of several authors [7][8][9]16] who noted a similar duration of the upwelling season. The upwelling signal appears firstly off the Cotonou and Takoradi coasts, approximately one week before its westward extension to Abidjan and Sassandra [16]. This westward extension is associated with a semiannual oscillation [27]. Its origin could be further eastward, off the Nigerian or Cameroonian coast. A simple calculation of the propagation velocity of the phenomenon, for example, between the two capes (Takoradi: 1.75°W; 4.88°N and Sassandra: 6.08°W; 4.95°N) separated by 4.33° of longitude, gives 78 cm/s when considering a 1-week lag between the two maxima of their respective cumul curves. This velocity is consistent with the speed of a coastally trapped Kelvin wave travelling at 70 cm/s [16].

The curve of the upwelling onset date points out two main cases concerning the beginning of the phenomenon (Table 1). The first case shows that the upwelling initiates eastward at Cotonou and extends westward in 1999, 2000, 2001, 2002, 2007, and 2011. This case represents 43% of all years. This observation is consistent with Picaut [16], who showed that the signal of the boreal summer upwelling at the northern coast of the Gulf of Guinea mainly propagates from east to west. However, the causes of this propagation are poorly known. The westward extension of the upwelling may be slower in some years; for example, the upwelling extends from Cotonou to the other stations over 2 weeks during 2011. These remarks could be related to the initiation of a weaker upwelling (see Figure 8). The second case relates to the upwelling initiation at the Cape of Three Points close to Takoradi and/or at Cape Palmas close to Sassandra. The phenomenon can be initiated at one cape and then extend on both sides of it. This case occurs during 1998, 2003, 2005, 2008, 2009, and 2010 and represents 43% of all years. During 2004 and 2006 (14% of the 14 years studied), the upwelling appears simultaneously at both capes and then extends to the other stations. These last two situations are similar to the cape effects described by Marchal and Picaut [28]. They could be explained by the dynamical interaction between the Guinea Current and the Cape of Three Points, which induces a lower slope of the thermocline upstream of the cape and an accumulation of warm water downstream of it. This could also be amplified by the eastward orientation of the capes. These different cases show the complexity of the study of the upwelling at the northern coast of the Gulf of Guinea, for which several mechanisms may contribute to its initiation. These could be (i) local and remote actions of the wind, which involve equatorial and coastal Kelvin waves [29][30][31], (ii) the potential role of the Guinea Current and the cape effects [32,33], or (iii) the coastal transport and the Ekman pumping [34].
Boreal Summer Upwelling Variability at the Northern Coast of the Gulf of Guinea
Figure 7 shows the interannual evolution of the upwelling duration, calculated as the difference between the onset date and the end date (see Figure 6) of the phenomenon. The linear regression fit indicates nonsignificant negative trends at Cotonou and Takoradi and nonsignificant positive trends at Abidjan and Sassandra, showing a weak variation of the duration at all the stations. This figure also shows successive years of long or short duration of the phenomenon. The upwelling duration in the PROPAO station network has not changed, even though there are some years in which the season could be abnormally long or short. The 75th and 25th percentiles of the duration distribution are then computed. If a duration is higher (resp., lower) than the 75th (resp., 25th) percentile, the upwelling season is considered to be abnormally long (resp., short). Restricting the calculation to the 75th and 25th percentiles allows the selection of a small number of events for a more efficient analysis. The mean value of all durations higher than the 75th percentile is equal to 115 days for all stations; the mean value for the 25th percentile is equal to 102 days. In particular, the duration reaches 122 days at Sassandra and 111 days at Cotonou. The graphs indicate a nonuniformity of the duration of the upwelling season. For example, in 1999, the upwelling season is the longest at Cotonou and the shortest at Sassandra and Abidjan. In 2000, the longer duration is consistent with the results of Ali et al. [9].

Figure 8 presents the interannual evolution of the standardized SST anomaly during the selected upwelling period for each grid point nearest the PROPAO network stations. From the daily climatology during May 1 through November 30 of the 1998-2011 period, standardized anomalies are computed with the TMI datasets. Then, annual values are calculated by averaging all daily standardized anomalies over each upwelling period based on the curve of the cumul. The 75th and 25th percentiles of the standardized anomalies are plotted too. The mean warming trend for all stations is about 0.18 standardized anomalies per year (∼0.18/year). (Note that since standardized values are used, the trend value is dimensionless per year.) The correlation coefficient associated with the linear regression fit is significant at the 95% confidence level and is equal to 0.65, 0.78, 0.70, and 0.74, respectively, at Sassandra, Abidjan, Takoradi, and Cotonou. Abnormal cooling periods are observed at all stations from 1998 to 2001. This is consistent with Ali et al. [9], who noted higher values of their upwelling indexes for these particular years. From 2005 to 2011, significant warming occurs at all stations. In particular, the warming is abnormally high in 2008 and 2010. These oceanic conditions could be explained by a mechanism similar to the phenomenon observed in 1984 by Colin [8]. He noted that the Guinea Current was close to the continental shelf but weak in 1984. It could not ensure a significant upward motion of the thermocline and induce a strong upwelling. This situation lasted through the upwelling period from July to September. It contributed to amplifying positive SST anomalies, which resulted in a warming of the ocean surface. Figure 8 also shows nonuniformity of the warming or of the cooling. For example, significant cooling is noted from Cotonou to Abidjan in 2001, while nonsignificant warming is observed at Sassandra.
Aman et al. [35] indicated that the minimum of the sea level at São Tomé, which precedes the boreal summer upwelling, was the same in 2001 and 2002, with a longer duration in 2001 (∼2 months) than in 2002 (∼1 month). This situation is associated with strong negative sea level anomalies at Cotonou, Téma (close to Takoradi), and Abidjan, and positive sea level anomalies at San-Pédro (close to Sassandra). Verstraete and Park [36] and Park [37] noted similar observations during the upwelling season in 1992. The minimum of the sea level at São Tomé is the same in 1992 and 1993, but this minimum lasts longer in 1992 (∼2 months) than in 1993 (∼2 weeks). These two situations could explain the strong cooling at Cotonou, Takoradi, and Abidjan and the warming at Sassandra in 2001.
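For reference, the percentile-based classification of abnormally long and short seasons described above can be sketched as follows in Python (not the authors' code; the duration values in the example are hypothetical):

import numpy as np

def classify_seasons(durations):
    """Flag abnormally long/short upwelling seasons using the 75th/25th
    percentiles of the duration distribution, as done in this section."""
    durations = np.asarray(durations, dtype=float)
    p75 = np.percentile(durations, 75)
    p25 = np.percentile(durations, 25)
    long_years = durations > p75
    short_years = durations < p25
    return long_years, short_years, p75, p25

# Example with hypothetical yearly durations (days) for 1998-2011
durations = [110, 120, 118, 98, 105, 100, 95, 108, 112, 103, 99, 116, 101, 122]
long_y, short_y, p75, p25 = classify_seasons(durations)
print(p75, p25, long_y.sum(), short_y.sum())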
Conclusion
This study aims to characterize the boreal summer upwelling at the northern coast of the Gulf of Guinea during the 1998-2011 period. Two datasets are used, derived respectively from satellite measurements and from new in situ records from onset sensors moored along the northern littoral of the Gulf of Guinea, covering May 1 to November 30 of each year.
The ability of the in situ SST to characterize the upwelling is tested first. This study compares SST provided by the TRMM Microwave Imager (TMI) sensor [19] during 2005-2011 and in situ SST records derived from the PROPAO station network. An adaptation of the method of Liebmann and Marengo [26] is used to find the beginning (or onset) date of the upwelling season, its vanishing (or end) date, and its duration. The cumul values, a new variable produced by this method, remove the need for a fixed threshold which may not fit each station's proper characteristics (mean and variance). The results show a similar onset date, end date, and duration of the boreal summer upwelling even though both datasets are recorded from different sources. Moreover, a high correlation is found between the daily SST of both datasets. The coefficient of determination is 0.82 and is significant at the 95% confidence level. These observations indicate good agreement between both datasets. The use of the in situ SST data could contribute significantly to the characterization of the upwelling at the northern coast of the Gulf of Guinea.
Yearly calculations of the boreal summer upwelling onset date, end date, and duration are computed for four PROPAO network stations (Cotonou, Takoradi, Abidjan, and Sassandra). Since the in situ SST time series are shorter for these stations, TMI time series have been selected at point coordinates close to each station. The results show that the upwelling phenomenon generally starts eastward at Cotonou and Takoradi, mostly one week before its extension to Abidjan and Sassandra. Nonsignificant trends are found in the upwelling onset date and the end date, and there is nonuniformity in the duration. Moreover, the upwelling onset date reveals different areas of initiation, showing the complexity of this phenomenon.
Finally, the variability of the boreal summer upwelling for the four stations is studied using standardized SST anomalies. It shows a significant warming trend. Some years are found to be significantly warm or cold, and they are related to known physical processes.
This work has been undertaken in the framework of the Jeune Equipe Associée à l'IRD named Analyses Littorale, Océanique et Climatique au nord du Golfe de Guinée (JEAI ALOCGG). It aims to study the coastline and coastal erosion, the ocean conditions in the Gulf of Guinea and tropical Atlantic, the regional climate, and the role of the GG in regional conditions. Our results highlight the opportunity to use the new in situ SST measurements from onset sensors to characterize the boreal summer upwelling at the northern coast of the GG. It becomes urgent to continue the acquisition of SST data based on onset sensors along the coasts of the GG because of climate change issues, since satellite measurements do not reach the coast. This project represents an opportunity for the participating countries of the PROPAO network to create a regional databank in order to contribute significantly to the understanding of the complex mechanisms of this upwelling.
Figure 1: (a) An autonomous onset sensor TIDBIT (model 5, version 2) used to measure coastal SST data. (b) A view of sensor installation by technicians at the Sassandra site (Côte d'Ivoire) (Courtesy of PROPAO).
Figure 2: PROPAO network of sensors. The rectangle shows the oceanic zone. The stations (dots) and the nearest TMI grid points (triangles) are shown. The sensor position at Cotonou is circled.
Figure 3: Daily climatology of SST from May 1 to November 30 in 2005-2011 from the onset sensor at Cotonou and from TMI at a point close to the station. SST thresholds of 24 °C, 25 °C, 26 °C, 27 °C, and 28 °C are plotted.
Figure 4: (a) Climatology of the cumul of TMI SST at a grid point (2.625°E; 5.375°N) close to the station of Cotonou (see Figure 2) and of the in situ SST at this station (6.22°N-2.26°E) during 2005-2011. The Cotonou station is used because of its longer in situ SST time series compared to those of Sassandra and Takoradi, whose time series start in 2008. Both curves show similar boreal summer upwelling onset and end dates even though the data are recorded from different sources. The cumul values of in situ SST are higher than those of SST derived from TMI. (b) Linear regression fit between both daily SST datasets. The coefficient of determination is 0.82 and is significant at the 95% confidence level.
Figure 6: Interannual evolution of the upwelling onset date (black curve) and of the end date (red curve) versus the years for all stations. Linear regression fits are superimposed on the graphs. A nonsignificant trend of the upwelling onset date (late onset at Cotonou and Takoradi and early onset at Abidjan and Sassandra) is noted for all stations. A nonsignificant trend (based on the Student test) of the upwelling end date holds at Takoradi, Abidjan, and Sassandra, while an early end is noted at Cotonou. For example, the upwelling season lasts until November in 1999 for the Cotonou, Abidjan, and Sassandra stations, while it vanishes early in 2004 for all stations (mid-August at Cotonou; at the end of September at Takoradi, Abidjan, and Sassandra).
Figure 7: Interannual evolution of the duration of the upwelling season. Linear regression fit (dashed) and the 75th and 25th percentiles (horizontal lines) are superimposed.
Figure 8: Interannual evolution of standardized SST anomalies during each upwelling period. Linear regression fit (dashed) and the 75th and 25th percentiles (horizontal lines) are superimposed.
These sensors are calibrated at the Institut Français de Recherche et d'Exploitation de la Mer (IFREMER) in Brest (France) before first use. The calibration involves the correction of the measured hourly SST dataset by a 6-degree polynomial function for each sensor or for each event, which can be an immersion or a withdrawal. The corrected datasets are archived on the PROPAO website (http://www.nodc-benin.org/fr/propao/).
Table 1: Onset date as Julian day for the four grid points nearest the PROPAO network stations. Underlined dates show upwelling initiation eastward at Cotonou. Bold dates refer to initiation at one cape, while bold italic dates refer to initiation at both capes.
Transition Metal Aluminum Boride as a New Candidate for Ambient-Condition Electrochemical Ammonia Synthesis
Highlights

Molybdenum aluminum boride single crystals, as layered ternary borides, were applied for the first time to the electrochemical N2 reduction reaction under ambient conditions in alkaline media, displaying excellent electrocatalytic performance at low overpotential. Through the combination of the strong interaction between the Al/B bands and N orbitals and the special crystal structure exposing more active sites, a synergistic effect of the elements was verified to enhance the N2 reduction reaction and to suppress the hydrogen evolution reaction. Electronic supplementary material: The online version of this article (10.1007/s40820-020-0400-z) contains supplementary material, which is available to authorized users.
Introduction
Ammonia (NH3) is not only an important chemical in industrial production, including pharmaceuticals, synthetic fibers and fertilizer production, but also an energy conversion carrier, for example an ideal storage medium for hydrogen (H2) [1][2][3][4]. Additionally, it is the only currently known carbon-free energy carrier whose use releases no carbon dioxide (CO2) emissions [4]. However, as the most abundant molecule in the atmosphere, nitrogen (N2) is extremely difficult to convert into NH3 due to its strong bond energy, low polarizability and lack of a dipole moment [5]. At present, ammonia is mainly produced by the Haber-Bosch process at high temperature and pressure, which reduces N2 to NH3 in coal-based or natural gas-based ammonia plants [6]. However, the harsh reaction conditions and the use of natural gas as the hydrogen source lead to large energy consumption and serious greenhouse gas emissions [7][8][9][10]. Therefore, it is of great significance to design and develop a sustainable and environmentally benign approach for NH3 synthesis.
Recently, the electrocatalytic N2 reduction reaction (NRR) using aqueous electrolytes for synthesizing ammonia at ambient conditions has attracted intensive research interest [5,11,12]. Motivated by impressive advantages, including mild conditions that reduce the energy input and the carbon footprint, and simple reactor designs that avoid the complexity of ammonia production plants [12], electrocatalytic NRR under ambient conditions has achieved considerable progress since 2016 [11]. Until now, various nanomaterials have been reported as potential catalysts for NRR electrocatalysis, including noble metals (Au, Ru, Pd, Pt, Ag, etc.), transition metals (Fe, Ti, Mo, Cr, Co, etc.) and their oxides, carbides, nitrides and sulfides, metal-free materials (B, C, N, S, P, etc.), and their relevant composites. Development of these materials, coupled with effective strategies to improve the catalytic performance, including defect engineering, interface engineering, electrolyte manipulation and cell design, has been pursued with the goal of improving NH3 yield and Faradaic efficiency (FE) [13][14][15][16][17][18][19][20][21]. Despite this rapid progress, the research field is undoubtedly in its infancy and faces many problems. Firstly, most catalysts show a higher overpotential for the electrochemical NRR than for the hydrogen evolution reaction (HER) [22,23]. Therefore, most published research findings exhibited limited selectivity and activity in aqueous solutions due to strong HER competition [17,[24][25][26][27][28][29][30][31][32]. Secondly, previous reports indicated that nonaqueous solutions or hydrophobic catalysts could suppress the HER by limiting the proton concentration [33,34]. However, the lack of proton supply would also limit activity. Hence, electrocatalysts that selectively and efficiently reduce nitrogen to ammonia remain elusive. Thirdly, the amount of ammonia produced by the electrochemical NRR method is usually so small that it is difficult to attribute it solely to electrochemical nitrogen fixation and to exclude contamination [35]. There are various possible sources of ammonia: on the one hand, it can be present in air, human breath or ion-exchange membranes [25]; on the other hand, it can be generated from labile nitrogen-containing compounds (for example, nitrates, amines, nitrites and nitrogen oxides) that are typically present in the nitrogen gas stream [26], in the atmosphere or even in the catalyst itself [36]. Besides, N2 gas shows low solubility in water, so the amount of N2 actually involved in the NRR is very small. Additionally, the average catalyst loading is less than 1 mg cm−2, which limits the total current density to less than 10 mA cm−2 and the NRR partial current density to as low as ~0.1 mA cm−2 [11]. These limitations collectively result in the inferior yield and selectivity of ammonia in the electrocatalyzed NRR process.
An alternative strategy for achieving a high surface area uses "multicomponent" materials, in which different parts of the structure can behave as the active catalytic sites and as the inert HER-competitive sites. Such architectures allow the implementation of the active-site separation concept, which has been shown to be effective in a number of catalysts [37], for example, MAX phases and MXenes [38,39]. MAX phases, as nanolayered ternary compounds, comprise a large family of Mn+1AXn materials, where typically M is an early transition metal, A is an A-group element (for example, Al or Si), X is carbon or nitrogen, and n = 1-3 [40,41]. MXenes, a novel family of 2D metal carbides and nitrides, can be derived by chemically etching and exfoliating MAX phases [42].
Similarly, MAB phases, first named in 2015 [43], are structurally similar to MAX phases and have received increasing attention due to their combination of ceramic and metallic material properties: high flexural strength, compressive strength, oxidation resistance, metallic conductivity and high thermal conductivity [41,44]. Meanwhile, these MAB phases, as electrocatalytic materials, have attracted our attention due to the atomically layered crystal structure of the ternary compound. MoAlB crystallizes in the orthorhombic Cmcm space group and is arranged as slabs of trigonal prismatic Mo6B ceramics, corresponding to the orthorhombic β-MoB phase, interleaved with two metallic planes of Al atoms. Additionally, the two-dimensional derivatives of MAX phases, MXenes, have shown great promise for a large range of chemical processing applications, including hydrogen and oxygen evolution catalysts, electro-storage devices and environmental adsorbents [45][46][47], which inspires us to explore the possibility of similar electrochemical properties in MAB phases. For example, Ma et al. [48] designed a hybrid film of overlapped g-C3N4 and Ti3C2 (MXene) nanosheets as a highly efficient oxygen electrode. The hybrid film, through Ti-Nx interactions forming a porous free-standing film with a hydrophilic surface and a conductive framework, exhibits excellent performance in catalyzing the oxygen evolution reaction (OER). Further, because oxygen terminations on the basal plane provide catalytic active sites, Jiang et al. [49] reported a method to significantly improve the HER performance of Ti3C2 MXene by modifying the terminations on the basal plane. This has been confirmed for Fe2AlB2 and MoAlB as MAB phases, or their two-dimensional derivatives, which were found to play a part in oxygen and hydrogen evolution processes [37,44].
Herein, the behavior of MoAlB single crystals (SCs), as a new type of NRR catalyst based on the transition metal aluminum boride (MAB) phase family, is reported for the first time. In brief, the MoAlB SCs were supported on a free-standing copper foam (Cu foam) to make an electrode for electrocatalytic NRR in an alkaline electrolyte under ambient conditions. In the electrocatalytic NRR system, the competitive HER was largely suppressed because of the strong interactions between Al/B atoms and nitrogen atoms. In this work, the MoAlB SCs exhibited high NRR activity and selectivity at a low applied potential at room temperature and ambient pressure in a 0.1 M KOH electrolyte. The catalysts can be prepared at low cost owing to the high abundance and low price of the starting materials. These results are superior to most reported catalysts and distinguish MoAlB SCs as a promising catalyst for electrochemical NRR applications.
Electrocatalyst Synthesis
Bulk MoAlB powders were prepared using the following procedure. Mo, B and Al powders were mixed with a molar ratio of Mo:Al:B = 1:1.3:1. The powder mixture was cold pressed to 220 MPa in a 15-mm-diameter steel die. The pellet was placed in an alumina crucible and heated in a tube furnace under flowing argon to 1200 °C at 5 °C min−1 and held for 2 h before cooling to room temperature at 5 °C min−1. The reacted sample was crushed into < 45 µm powder, placed in a 12.7-mm-diameter graphite foil-lined graphite die, and hot pressed to further react intermediate MoB and Mo-Al phases. The die and sample were heated in a hot-press furnace under flowing argon to 1400 °C at 10 °C min−1 and held for 30 min. Pressure was applied gradually from 800 °C to a maximum of 50 MPa at 1400 °C. The surface of the solid MoAlB sample was ground to remove graphite, then mechanically crushed and sieved to < 45 µm particle size.
MoAlB single crystals (SCs) were prepared using a modification of a reported procedure [50]. As shown in Scheme 1, the samples were prepared by first synthesizing MoB powder by mixing Mo and B powders in a stoichiometric ratio (Mo:B = 1:1). The powder mixture was cold pressed to 220 MPa in a 15-mm-diameter steel die. The pellet was placed in an alumina crucible and heated in a tube furnace under flowing argon to 1200 °C at 5 °C min−1 and held for 2 h before cooling to room temperature at 5 °C min−1. The reacted MoB powder was crushed and mixed with Al powder at a molar ratio of MoB:Al = 1:1.3. The pellet was placed in an alumina crucible and heated in a tube furnace under flowing argon to 1000 °C at 5 °C min−1 and held for 15 h before cooling to room temperature at 5 °C min−1. The loosely sintered products were carefully crushed into powder with a mortar and pestle to obtain the MoAlB SCs. Finally, the cleaned copper foam was immersed in a MoAlB SC ink with the aid of a Nafion binder to obtain a MoAlB SC/Cu foam electrode. (Details of the electrode preparation can be found in Sect. 2.2.2.)
Preparation of the Electrode
Catalyst ink: 0.25 g of the catalyst material was suspended in 9 mL of deionized water, and 1 mL of 5 wt% Nafion® solution, which is predominantly water, was added. Hence, the ink contained 25 mg mL−1 of the catalyst material. The solution was then ultrasonicated for 1 h to break down any agglomerated particles and aggregates to as small a size as possible and to obtain a uniform suspension.
Pretreatment of electrode: The 1-cm2 copper foams were ultrasonicated in 0.1 M HCl, deionized water and acetone, successively. The electrodes were then dried in an oven. After the pretreatment, the electrodes were dipped in the above ink three times. After each dip, it was ensured that the 1-cm2 electrode was completely covered in the ink.
The coated electrode was then placed in an oven at 100 °C for 5 min, and the same coating process was repeated a further two times. In the end, the cleaned copper foam coated with the catalyst ink, with the aid of the Nafion binder, formed the working electrode.
Proton Exchange Membrane Pretreatment
A Nafion® 117 membrane was cut into small pieces and then treated successively with 3 wt% H2O2 aqueous solution, deionized water, 1 mol L−1 H2SO4 and deionized water, each for 1 h at 80 °C. Finally, the obtained membrane was repeatedly rinsed until a neutral pH was reached and then stored in deionized water.
Electrochemical Measurements
All electrochemical measurements were carried out on a CHI760e electrochemical workstation at 20 °C using a three-electrode system, with a Pt wire as the counter electrode, Ag/AgCl (3.5 M KCl) as the reference electrode and the modified copper foam as the working electrode. The gas-tight two-compartment electrochemical cell was separated by a piece of Nafion® 117 membrane at room temperature. N2 (99.99%) was introduced to the cathode compartment at 250 mL min−1, starting 30 min beforehand and continuing until the end of the reaction. All potentials in this work were converted to the reversible hydrogen electrode (RHE) scale based on the Nernst equation (E_RHE = E_Ag/AgCl + 0.059 × pH + 0.2046). The value of 0.2046 depends on the KCl concentration in the reference electrode. (Details of the detection of ammonia can be found in the Supporting Information (SI).)
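For reference, the potential conversion quoted above reduces to one line of code; a minimal Python sketch (the example values are hypothetical):

def to_rhe(e_ag_agcl_volts, ph, e0_ref=0.2046):
    """Convert a potential measured vs. Ag/AgCl (3.5 M KCl) to the RHE scale
    using the Nernst relation quoted above: E_RHE = E_Ag/AgCl + 0.059*pH + E0."""
    return e_ag_agcl_volts + 0.059 * ph + e0_ref

# Example: an applied potential of -1.02 V vs. Ag/AgCl in 0.1 M KOH (pH ~ 13)
print(round(to_rhe(-1.02, 13.0), 3))  # ~ -0.048 V vs. RHE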
Materials Characterization
The nano-/microstructure of as-prepared MoAlB SCs was examined by scanning electron microscopy (SEM).
Figures 1a and S7 show the rod-like MoAlB SCs formed at 1000 °C, which have an average length of approximately 10 μm. Furthermore, the morphology of the MoAlB SCs observed by SEM is consistent with that identified by transmission electron microscopy (TEM) in Fig. 1b. The scanning electron microscopy energy-dispersive X-ray spectroscopy (SEM-EDS) results are consistent with the expected elemental composition (Table S1). The high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) images in Fig. 1c, d focus on regions within a few micrometers of the crystal surface. The images were collected along the [001] and [010] crystallographic directions. Corresponding 2D structural models along the same zone axis, shown as colored insets in Fig. 1c, are compared to the contrast image and show good agreement. This supports the hypothesized formation of the layered ternary borides as a result of a stepwise intercalation of Al into MoB during the formation of MoAlB. Additionally, the contrast image clearly shows the atomic sequence of the crystal structures: an Al double layer is distributed between the two adjacent MoB layers. The double layers of bright dots correspond to the Mo atoms, while the gray dots in between correspond to the Al layers. The XRD pattern (Figure 2a) agrees with previous reports [43,51], verifying the quality of this synthesis process. Figure S11 shows the XPS spectrum of MoAlB SCs, with four peaks appearing at 74.0, 232.0, 288.1 and 531.5 eV, corresponding to the Al 2p, Mo 3d, C 1s and O 1s electrons, respectively. However, B 1s electrons were not detected in this XPS survey of the MoAlB SCs because B atoms are too light to be detected using this technique. Figure 2b provides a high-resolution XPS spectrum of the Mo 3d signal deconvoluted into two peaks located at 228.2 and 231.5 eV, which can be assigned to the Mo 3d5/2 and Mo 3d3/2 electrons of Mo in Mo-Al-B, respectively [52]; these are attributed to Mo atoms bound with Al and B atoms, respectively. Figure 2c shows the XPS spectrum of Al 2p, fit with two components: one for Al2O3 and the other for Mo-Al-B. The binding energy peak of Mo-Al-B in the MoAlB SCs is at 73.0 eV and coincides with the 73.0 eV peak obtained from our spectrum of elemental Al. The other peak at 74.9 eV corresponds to Al2O3. The presence of Al2O3 here is consistent with the analysis of the XRD data. Figure 2d shows the XPS spectrum of B 1s with only one strong peak at 188.5 eV. This indicates that no boron oxide was detected on the MoAlB SCs sample and that the boron was most likely fully reacted at this point. These collective data are indicative of the successful synthesis of MoAlB SCs.
Electrochemical Nitrogen Reduction
To evaluate the electrocatalytic NRR activity of MoAlB SCs under ambient conditions, electrochemical tests were performed in N2-saturated 0.1 M KOH electrolyte, including linear sweep voltammetry (LSV), chronoamperometry and impedance measurements. All tests were performed in a two-compartment cell separated by a proton-conductive cation exchange membrane (Nafion® 117), through which protons (H+) can reach the cathode and react with N2 to form ammonia over the catalyst. At first, the LSV curves for MoAlB SCs in Ar- and N2-saturated 0.1 M KOH solutions were measured to verify the source of ammonia (Fig. 3a). In the Ar-saturated solution, the increase in current density after −0.1 V versus RHE is caused by the HER, which competes with the NRR. In contrast, when the applied potential is more positive than −0.4 V versus RHE, a clear reduction in the current density for the N2-saturated solution is observed compared with that of the Ar-saturated solution. This provides evidence that the catalytic reduction of N2 to NH3 does in fact take place in this system. When the applied potential was set more negative than −0.4 V versus RHE, the current densities in the N2-saturated and Ar-saturated solutions were very close, likely because the HER dominates over the NRR in this regime. In addition, for further confirmation of successful ammonia synthesis, the corresponding NH3 concentrations were measured using the ammonia-selective electrode method for quantitative analysis of ammonia in the electrolyte after 1 h of electrolysis in the presence of continuous Ar or N2 bubbling (further details are provided in Fig. S13); the NH3 yield values are derived from these NH3 concentrations. In the Ar system, negligible ammonia was detected in the electrolyte, attributable to background signal interference. These results demonstrate that the N sources for ammonia synthesis are exclusively provided by the N2 feed gas, indicating that electrocatalytic N2 reduction can be realized by the as-prepared MoAlB SCs.
To evaluate the advantage of the single-crystal structure in the NRR process, two samples with the same composition but different structures, bulk MoAlB (polycrystalline) and MoAlB SCs, were synthesized by different methods. Compared to bulk MoAlB, as shown in Fig. 3b, the MoAlB SCs sample shows better electrochemical N2 reduction performance, which can be attributed to the uniform crystal orientation that dominantly exposes [010] facets [53], since the area of the [010] plane is much larger than that of the [100] and [001] planes [53]. Meanwhile, as shown in Fig. S9, bulk MoAlB has a polycrystalline structure and its particles are somewhat larger in size than the MoAlB SCs. Furthermore, it is known that particle size plays a key role in determining catalytic performance: firstly, the specific activity per metal atom generally increases with decreasing particle size [54], and secondly, smaller particles expose more catalytic sites. Thus, it can be hypothesized that the MoAlB SCs provide more active sites and a higher specific activity per metal atom, facilitating the NRR process. Figure 3c shows the LSV curves of pure Cu foam, MoAlB SCs/Cu foam, Al/Cu foam, B/Cu foam, Mo/Cu foam and MoB (1:1)/Cu foam for the electrocatalytic NRR. At all applied potentials, pure Cu foam had a much lower current density than the others, which can be attributed to the inert HER activity of pure Cu foam. In addition, as shown in Fig. 3d, comparing MoB samples at different ratios reveals that changing the relative amount of Mo and B has little influence on the overall activity. These results further confirm that only the MoAlB SCs within the Mo-Al-B system possess activity toward the electrocatalytic NRR. Therefore, it can be inferred that the Mo, Al and B elements, along with the unique structure of the MoAlB SCs, play a synergistic role in the electrocatalytic NRR. Figure 3e shows the Faradaic efficiencies (FEs) and ammonia yields of MoAlB SCs under various applied potentials ranging from 0.0 to −0.35 V versus RHE. The data in this figure were obtained with the ammonia-selective electrode method. As observed in Fig. 3e, the ammonia yields under the various applied potentials show no obvious difference. However, the FEs decrease gradually as the applied potential is shifted from 0.0 to −0.35 V. In fact, as shown in Fig. 3f, a remarkable increase in the current density is observed with increasing applied potential. A likely explanation is that, due to the dominance of the HER at higher overpotentials, the surface of the MoAlB SCs was mainly occupied by evolving hydrogen molecules, which would block the mass transfer of N2 to the surface of the MoAlB SCs. This limits the electrocatalytic NRR activity [55] and results in the decline of the FEs.
In addition, to confirm the reliability of the ammonia-selective electrode method for ammonia detection, it was compared with a colorimetric method using the indophenol blue reagent, which gave consistent results (Fig. 3g). The NH3 yield values determined by the colorimetric method are slightly higher than those determined by the ammonia-selective electrode method, possibly due to contaminants (metal residues, etc.) [56]; it has been reported that ammonia-selective electrode determination is not affected by such contaminants [57]. Because N2H4 is a possible by-product of the electrocatalytic N2 reduction process, a colorimetric method was also used to determine whether any N2H4 was produced. No N2H4 was detected in the electrolyte after 1 h of electrolysis in the presence of continuous N2 bubbling (Fig. 3h), indicating that the as-prepared MoAlB SCs electrode has good selectivity for the NRR.
In Fig. 3e, MoAlB SCs exhibit higher FEs at low overpotentials. Although the highest FE was 63.7% at 0.0 V versus RHE, this value has low credibility because of the large relative error resulting from a very low current density. Therefore, in all comparative experiments, an applied potential of −0.05 V versus RHE was determined to be the most appropriate and was used. The NH3 yield, normalized by the weight of the catalyst, and the FE of MoAlB SCs at −0.05 V versus RHE are 9.2 µg h−1 cm−2 mg−1cat. and 30.1%, respectively (Fig. 3e). As far as we know, the NH3 yields and FEs that the MoAlB SCs achieved at a low applied potential are comparable to those of recently reported NRR electrocatalysts (Table S2). In this work, an ultralow applied potential (−0.05 V versus RHE), close to the theoretical potential, is used for MoAlB SCs, making it one of the most active and selective electrocatalyst candidates for future NRR research at ambient conditions.
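For context, FE and mass-normalized yield in NRR studies are conventionally computed from the measured NH3 concentration and the passed charge. The following is a minimal Python sketch of that standard convention (not taken from this paper's Supporting Information; all example values are hypothetical):

F = 96485.0  # Faraday constant, C mol^-1

def nrr_metrics(c_nh3_ug_per_ml, volume_ml, time_h, charge_c, area_cm2, loading_mg):
    """Faradaic efficiency and yield rate for the 3-electron N2 -> NH3 reduction.
    c_nh3_ug_per_ml: measured NH3 concentration; charge_c: total charge passed (C)."""
    m_nh3_ug = c_nh3_ug_per_ml * volume_ml            # total NH3 mass (ug)
    n_nh3_mol = m_nh3_ug * 1e-6 / 17.03               # moles of NH3
    fe = 3.0 * F * n_nh3_mol / charge_c               # 3 electrons per NH3
    yield_rate = m_nh3_ug / (time_h * area_cm2 * loading_mg)  # ug h^-1 cm^-2 mg^-1
    return fe, yield_rate

# Example with hypothetical numbers: 0.05 ug/mL in 30 mL after 1 h, 0.1 C passed
fe, rate = nrr_metrics(0.05, 30.0, 1.0, 0.1, 1.0, 1.0)
print(round(fe * 100, 1), round(rate, 2))  # ~25.5 (%), 1.5 (ug h^-1 cm^-2 mg^-1)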
The stability of the MoAlB SCs for electrocatalytic N2 reduction was evaluated by consecutive recycling electrolysis at − 0.05 V versus RHE. The ammonia yield and current efficiency show no significant fluctuation over five consecutive cycles (Fig. 3i), indicating the high stability of MoAlB SCs for electrochemical N2 reduction. The stability was also assessed at a constant potential of − 0.05 V versus RHE for 10 h, during which the current density showed no obvious changes (Fig. S14), further indicating that MoAlB SCs can produce NH3 effectively over a long period. In addition, the morphology and elemental composition of the MoAlB SCs/Cu foam electrode before and after the NRR stability tests were characterized by SEM. The SEM images show a rod-like morphology with no obvious changes (Fig. S15). As shown in Fig. S15 and Table S3, apart from potassium appearing in the electrode after the stability tests, the element species and the corresponding atomic fractions in the EDS region-scan analysis are almost unchanged; the fluorine comes from the Nafion® solution and the potassium from the KOH electrolyte. These results confirm that this nanolayered ternary boride retains excellent chemical and structural stability during the NRR process.
Mechanistic Study
To probe the electrocatalytic NRR mechanism of MoAlB SCs under ambient conditions, electrochemical comparison tests were performed. First, as shown in Fig. 4a, pure Cu foam, Al/Cu foam, B/Cu foam, Mo/Cu foam and MoB (1:1)/Cu foam produce almost no detectable ammonia at a low applied potential (− 0.05 V versus RHE), confirming that they possess no electrocatalytic activity toward the NRR. In addition, MoAlB SCs and Al metal show higher FE values than the other specimens, which is attributed to suppression of the HER. This is also supported by electrochemical impedance spectroscopy (EIS, Fig. 4b). The electron-transfer resistance (Rt) at the electrode surface, derived from the semicircle domains of the impedance spectra, describes the interface properties of the electrode. The semicircle diameter of MoAlB SCs is much smaller than those of the control catalysts Mo and MoB (1:1), whereas the diameters for B, Al and pure Cu foam are much smaller than that of MoAlB SCs. On the one hand, the low contact and charge-transfer impedance of MoAlB SCs stems from its Al and B constituents; on the other hand, the poor reactivities of B, Al and pure Cu foam indicate that little charge transfer is involved in the reaction, consistent with the data in Fig. 4a. A previous report [58] verified that main-group metals (p metals) can exhibit much higher electrochemical NRR selectivity and activity than the intensively studied transition metals (d metals), owing to the stronger interactions between the p orbitals of the metal substrates and nitrogen adsorbates. Meanwhile, to the best of our knowledge, most metals with theoretically high electrocatalytic NRR activity are transition metals, which exhibit very poor selectivity because of strong HER competition [22,23]. It is therefore conceivable that catalysts containing aluminum and boron bind nitrogen more strongly than hydrogen and exhibit higher NRR selectivity; however, because their binding to nitrogen is so strong, the desorption of certain intermediates would be very slow, which could limit their NRR activity [58]. Because N2 adsorption is the critical step of the NRR process, the N2 adsorption behaviors of MoAlB SCs, Mo and MoB (1:1) were further evaluated by N2-TPD, as shown in Fig. S16. Two adsorbed-N2 peaks are observed for the as-prepared MoAlB SCs and MoB (1:1), but only one for Mo: the peak at about 150 °C, absent for Mo, corresponds to physical adsorption, while the peak at 340 °C, observed for all three, corresponds to chemisorbed N2. This result indicates that nitrogen vacancies can introduce chemical adsorption sites on the catalyst surface; because chemisorption is generally associated with activation, these sites will activate N2 for nitrogen fixation [59]. Thus, the higher concentration of nitrogen vacancies in MoAlB SCs provides more chemical adsorption sites, leading to higher NRR performance. We therefore propose that the Mo, Al and B elements in MoAlB SCs play a synergistic role in the electrocatalytic NRR process.
To differentiate the catalytic site among the three elements in the system, Figs. S17 and 4c show that MoAlB SCs exhibit a higher reduction current density and NH3 yield than a second MAB phase, Fe2AlB2, which is structurally very similar to MoAlB except that only one Al plane, rather than two, interleaves the trigonal prismatic slabs. This indicates that Mo, rather than Al or B, most likely plays the catalytic role in the electrochemical NRR. On the basis of the above discussion, the mechanism of electrochemical NRR on MoAlB SCs is depicted in Fig. 4d, with a synergistic effect involved in the reaction: N2 is first adsorbed and accumulated on the surface of the MoAlB SCs through the strong binding between N and Al or B; then, as H+ adsorbs and binds with N on the surface, Mo acts as the catalytic site and gradually reduces N2 to NH3.
Conclusions
In summary, MoAlB single crystals have been reported as a new candidate electrocatalyst for ambient-condition electrochemical ammonia synthesis and demonstrate a high level of activity toward the electrochemical NRR in alkaline electrolytes. The as-synthesized MoAlB SCs afforded an NH3 yield of 9.2 µg h−1 cm−2 mg−1cat. and a Faradaic efficiency of 30.1% at − 0.05 V versus RHE. As revealed by the spectroscopic studies and electrochemical NRR tests, the outstanding NRR activity of MoAlB SCs is attributed to the synergistic role of the Mo, Al and B atoms. Mechanistic studies further showed that MoAlB SCs owe their NRR activity and selectivity to strong N2 adsorption and to their ability to overcome the competing hydrogen evolution reaction at the reactive sites. The excellent catalytic performance and long-term stability of MoAlB SCs, combined with their convenient synthesis, suggest that this system can serve as a promising candidate for electrocatalytic NRR processes.
Electronic supplementary material
The online version of this article (https://doi.org/10.1007/s40820-020-0400-z) contains supplementary material, which is available to authorized users.
DISTAL FEMORAL FRACTURES FROM HIGH-ENERGY TRAUMA: A RETROSPECTIVE REVIEW OF COMPLICATION RATE AND RISK FACTORS
ABSTRACT Objective Determine the incidence of complications and their risk factors in high-energy distal femur fractures fixed with a lateral locked plate. Methods Forty-seven patients were included; 87.2% were male, and the average age was 38.9 years. The main radiographic parameters collected were the lateral distal femoral angle (LDFA), posterior distal femoral angle (PDFA), comminution length, plate length, screw working length, bone loss, medial contact after reduction, plate-bone contact, location of callus formation, and implant failure. The complications recorded were nonunion, implant failure, and infection. Results Complex C2 and C3 fractures accounted for 85.1% of cases, and open fractures for 63.8%. The mean LDFA and PDFA were 79.8 ± 4.0 and 79.3 ± 6.0, respectively. The average total, proximal, and distal working lengths were 133.3 ± 42.7, 60.4 ± 33.4, and 29.5 ± 21.8 mm, respectively. The infection rate was 29.8%, and the only risk factor was open fracture (p = 0.005). The nonunion rate was 19.1%, with longer working length (p = 0.035) and higher PDFA (p = 0.001) as risk factors. The site of callus formation also influenced pseudoarthrosis (p = 0.034). Conclusion High-energy distal femoral fractures have a higher incidence of pseudoarthrosis and infection. The risk factors for nonunion were greater working length, greater PDFA, and absence of callus formation on the medial and posterior sides; the risk factor for infection was an open fracture. Level of Evidence III; Retrospective Cohort Study.
INTRODUCTION
Distal femur fractures are common orthopedic problems affecting individuals across varied age groups, ranging from young patients with high-energy trauma to elderly patients with osteoporosis-associated injuries from lower-energy mechanisms such as simple falls. For both groups, surgical fixation is the treatment of choice. 1 The lateral locking plate (LLP) has become the standard method of fixation because of its biomechanical ability to resist varus collapse, its multiple fixation points in the short distal fragment, and its technical ease of implantation. 2,3 As this technique has been applied to fracture patterns ranging from low-energy to high-energy injuries, moderate rates of nonunion, infection, and implant failure have been reported. 4,5 The risk factors for complications after LLP fixation include patient-related factors (such as age, sex, habits, and comorbidities), fracture characteristics (such as type of fracture, comminution, bone loss, and soft tissue injury), and fixation-related factors (such as reduction, plate length, working length, and number of screws). 6 Factors associated with complications and failures should be determined separately according to the mechanism of trauma: both the patients and the fractures differ between low-energy and high-energy trauma, and the complication rates and risk factors most likely differ as well. The goal of this study was to examine a population of patients with high-energy distal femur fractures treated with LLP to determine the incidence of complications and their risk factors.
MATERIAL AND METHODS
This retrospective study was performed at the Instituto de Ortopedia e Traumatologia da Faculdade de Medicina da Universidade de São Paulo, an urban university-based level 1 trauma center, between 2012 and 2018. Data were collected through a retrospective chart review and review of existing radiographs. Ethical approval was provided by the Scientific and Ethical Committee of the University under protocol 2.827.192, and written informed consent was obtained from all patients. The inclusion criteria were: type A and type C distal femur fractures, open reduction and internal fixation with LLP, age > 18 years, high-energy trauma, no previous procedures in the knee, a minimum of 9 months of follow-up, complete radiographic examination, and signed informed consent. The exclusion criteria were low-energy fractures, periprosthetic fractures, type B distal femur fractures, intramedullary fixation, dual-plating fixation, contraindication to surgery or anesthesia, wound infection prior to internal fixation, pathologic fractures, and associated neurovascular injury. The following demographic data were collected: age, sex, mechanism of trauma, associated injuries, OTA/AO classification, 7 and Gustilo classification 8 for open fractures. The surgical technique followed established recommendations in the literature. 9,10 All patients were fixed with a stainless-steel LLP (DePuy Synthes, USA), and weight-bearing as tolerated was allowed during postoperative rehabilitation. The radiographic parameters evaluated were the quality of articular reduction, lateral distal femoral angle (LDFA), posterior distal femoral angle (PDFA), length of comminution, length of the plate, screw working length, number of proximal and distal screws, bone loss, medial contact after reduction, plate-bone contact, location of callus formation, and implant failure (Figure 1). The quality of reduction was classified binarily as anatomical or nonanatomical. Coronal plane alignment was measured using the LDFA: AP radiographs were used to measure the angle on the lateral side between the anatomical axis of the femoral shaft and the articular line. The PDFA measured the sagittal alignment on the lateral view as the angle between the femoral shaft and the line parallel to the articular line, with the Blumensaat line as a reference (Figure 1). The length of the plate was defined by the number of holes proximal to the articular cluster, and the total working length was defined as the distance spanning the fracture site between the two screws on each side closest to the fracture. 11 The proximal working length was defined as the distance between the fracture and the immediately proximal screw, and the distal working length as the distance between the fracture and the immediately distal screw (Figure 1); a short sketch of these geometric definitions follows this paragraph. The location of callus formation (anterior, posterior, medial, or lateral) was noted. Union was defined as the presence of a minimum of three of four bridging cortices on AP and lateral X-rays at 6 months; 12 failure to meet this minimum was recorded as nonunion. The following complications were recorded: implant failure, deep infection, nonunion, and reoperation. Mechanical implant failure was defined as any failure of the implant, including plate breakage, screw breakage, plate loosening, bending of the plate, and screw disengagement. 13
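Since the working-length definitions above are purely geometric, they reduce to a few comparisons; the sketch below assumes screw and fracture positions have already been digitized as coordinates (in mm) along the femoral axis, with the proximal direction toward smaller values. All names and numbers are hypothetical.

```python
# Working lengths from digitized positions along the femoral axis (mm).
# Fracture/comminution span = (f_prox, f_dist); screws = occupied screw holes.

def working_lengths(screws, f_prox, f_dist):
    proximal = [s for s in screws if s < f_prox]   # screws above the fracture
    distal = [s for s in screws if s > f_dist]     # screws below the fracture
    s_prox = max(proximal)    # proximal screw closest to the fracture
    s_dist = min(distal)      # distal screw closest to the fracture
    return {
        "total": s_dist - s_prox,        # span between the two nearest screws
        "proximal": f_prox - s_prox,     # fracture to nearest proximal screw
        "distal": s_dist - f_dist,       # fracture to nearest distal screw
    }

# Hypothetical example: comminution from 180 to 230 mm along the shaft.
print(working_lengths([40, 80, 120, 150, 260, 280, 300], 180.0, 230.0))
```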
Infection was defined according to the fracture-related infection criteria published by Metsemakers et al. in 2018. 14 Descriptive statistics included means and standard deviations for continuous variables and counts (percentages) for categorical variables. Statistical analysis of infection and nonunion was performed using the chi-square test or Fisher's exact test, and comparative analysis according to outcome used Student's t-test. Odds ratios were estimated with their respective 95% confidence intervals and adjusted in a multiple logistic regression model including the variables with a descriptive level below 0.10 (p < 0.10) in the bivariable analysis. Statistical analysis was performed using IBM SPSS software for Windows version 22.0, with a significance level of 5%.
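The two-stage analysis described above (bivariable screening at p < 0.10, then multiple logistic regression) was run in SPSS; a rough Python equivalent, using entirely fabricated data and hypothetical variable names, might look as follows.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
# Hypothetical cohort: binary outcome (infection) and two candidate factors.
df = pd.DataFrame({
    "infection": rng.integers(0, 2, 47),
    "open_fracture": rng.integers(0, 2, 47),
    "working_length": rng.normal(133, 43, 47),
})

# Bivariable screen: Fisher's exact test for the categorical factor.
table = pd.crosstab(df["open_fracture"], df["infection"]).to_numpy()
_, p = fisher_exact(table)
print(f"Fisher's exact p = {p:.3f}")

# Factors with p < 0.10 would then enter the multiple logistic regression.
X = sm.add_constant(df[["open_fracture", "working_length"]])
fit = sm.Logit(df["infection"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)        # adjusted odds ratios
ci = np.exp(fit.conf_int())             # 95% confidence intervals
print(pd.concat([odds_ratios, ci], axis=1))
```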
RESULTS
During the observation period (2012-2018), a total of 56 patients with high-energy distal femur fracture were treated with LLP. Nine patients were excluded from the study due to incomplete follow-up or radiographic control. Among the 47 included patients, 41 (87.2%) were men and six (12.8%) were women, with an overall average age of 38.9 ± 12.9 years.
The most frequent trauma mechanism was motorbike accidents in 27 (57.4%) cases, followed by motor vehicle accidents in nine (19.1%) cases, and falls from height and pedestrians struck by a car in four (8.5%) cases each. Associated injuries occurred in 31 (65.9%) cases (Table 1). According to the OTA/AO classification, 24 (51.1%) fractures were type 33C3, 16 (34.0%) were type 33C2, and the remaining seven (14.9%) were type A (Table 1). The average length of comminution was 50.1 ± 31.3 mm. Thirty (63.8%) fractures were open, of which 28 (80.0%) were Gustilo type 3A and two (6.7%) were Gustilo 3B (Table 1). Anatomical articular reduction was achieved in 35 (74.5%) patients. The plate length was 13 holes in 34 (72.3%) patients, 11 holes in two (4.3%), and nine holes in 11 (23.4%). The mean coronal alignment measured by the LDFA was 79.8° ± 4.0°, and the mean sagittal alignment measured by the PDFA was 79.3° ± 6.0°. The average total working length was 133.3 ± 42.7 mm; the proximal working length was 60.4 ± 33.4 mm and the distal working length 29.5 ± 21.8 mm. Further comparisons of the radiographic parameters with nonunion are given in Tables 2 and 3. Deep infection occurred in 14 (29.8%) patients, and the only statistically significant risk factor was open fracture (p = 0.005) (Table 4). The presence of associated injuries almost reached statistical significance as a risk factor (p = 0.055). None of the other patient characteristics had a significant effect on the postoperative infection rate (p > 0.05). Nonunion was noted in nine (19.1%) cases. Statistical analysis revealed a strong correlation between nonunion and a longer total working length (p = 0.035) and higher values of PDFA (p = 0.001); the likelihood of nonunion increased by 31% for each additional degree of PDFA (Table 5). The location of callus formation was also correlated with the development of nonunion (p = 0.034): medial callus formation was the location least associated with nonunion, followed by posterior callus formation (Table 4). There was no correlation between nonunion and length of comminution (p = 0.165), bone loss (p = 0.071), or medial contact after reduction (p = 0.138). Infection did not correlate with the development of nonunion (p > 0.05).
DISCUSSION
Distal femoral fractures have a bimodal distribution: high-energy trauma in young patients and low-energy trauma in elderly patients. 15 The systemic condition of the patients and the characteristics of the fracture are completely different between the two groups. In young patients, multiorgan injury (polytrauma) is the main systemic concern, followed by other associated orthopedic injuries; in elderly patients, the frail systemic condition, comorbidities, and polypharmacy are the main concerns. In young patients with high-energy injuries, fractures tend to be intra-articular, with more displacement, more comminution, and more severe soft tissue compromise, whereas in elderly patients fractures tend to be simple, non-comminuted, and extra-articular, and the main concern in fixation is bone quality. 16,17 Despite occurring in the same anatomical area, high- and low-energy fractures are two completely different types of injury. In our view, studies analyzing the risk of complications should separate high-energy from low-energy fractures, because the risks and consequences of the two differ. This may explain the wide range in the reported incidence of complications, with nonunion varying from 6% 18 to 38% 19 and infection from 3% 20 to 15%. 17 To our knowledge, this is the first study to include only high-energy fractures with a significant number of patients (n = 47) to determine the incidence of complications and their risk factors; in the review by Ebraheim et al., 19 the number of patients in the 19 included studies varied from 1 to 31. Similar to the literature, in our study the average age was 38.9 years, most patients were young, and there was a male predominance (87.2%). In contrast to the predominant cause of injury reported in the literature (motor vehicle accidents), the main cause of injury in our study was motorbike accidents (57.4%), reflecting the traffic characteristics of the city. Also in contrast to low-energy trauma, where an isolated injury is more common, associated injuries were present in 65.9% of our patients. Another characteristic of high-energy trauma is the fracture type, with more complex, comminuted, and articular involvement: in our series, 85.1% of fractures were C2 and C3 types. The nonunion rate was 19.1% (9/47 patients). The incidence of nonunion was strongly correlated with a longer working length (p = 0.035) and a higher PDFA (p = 0.001). Two factors influence the total working length: the extension of the comminution and the surgeon's decision on where to insert the screws closest to the fracture. Longer comminution leads to longer working lengths; however, with long plates, the surgeon can also increase the proximal working length by positioning the screws far from the fracture. We did not observe an influence of the extension of comminution on the nonunion rate (p = 0.165). However, the proximal working length was almost double the distal working length (60.4 mm vs. 29.5 mm), creating an imbalance in the total working length. One may consider decreasing the total working length by inserting the proximal screw closer to the fracture, thus decreasing the proximal working length. This aligns with what Peschiera et al. 21 termed unbalanced fixation as a risk factor for nonunion. In the study by Ricci et al., 11 a longer working length was an independent risk factor for nonunion.
Based on the results reported by Kiyono et al., 22 leaving one hole empty on either side of the fracture may decrease the incidence of nonunion in both simple and comminuted fractures. A higher PDFA also correlated positively with nonunion (p = 0.001), with each degree of increase raising the risk (p = 0.025). A high PDFA indicates failure to reduce the extension deformity caused by the gastrocnemius muscle, which creates a gap on the posterior side of the femur. The callus formation results showed that the two most important locations for callus formation to avoid nonunion were medial and posterior. During surgery, it is important to pay attention to the reduction in the sagittal plane, which is occasionally difficult because the external guide of the plate interferes with the C-arm image. In contrast to the findings of Karam et al. 23 and Ebraheim et al., 19 the presence or extension of comminution was not a risk factor for nonunion (p = 0.165) in our study. There was no correlation between bone loss and nonunion (p = 0.071), although in absolute numbers 50% of cases with bone loss developed nonunion (3/6). There was also no correlation with medial contact after reduction (p = 0.138), although almost 50% of nonunion cases lacked medial contact. Individual analysis of the nine nonunion cases showed that all were of the hypotrophic type, with little callus formation on the medial and posterior sides. The low rate of implant failure, despite the 19.1% nonunion rate, may be explained by the use of long plates (11- and 13-hole plates in 95.7%): the long plates and the long lever arm prevented plate pullout, in line with the recommendation of many authors to use long plates to avoid failure. 11,15,22 Long plates allow longer working lengths, but care should be taken, even with long plates, to keep the working length short. 23 The deep infection rate was 29.8% (14/47 patients), and the only predictive factor was open fracture (p = 0.005). In this study, which included only high-energy fractures, the incidence of open fracture was 63.8% (30/47), and 43.3% of these cases (13/30) developed deep infection. Despite initial care with abundant lavage and debridement and staged treatment with external fixation, the incidence of infection was high: the combination of severe soft tissue injury and fracture comminution puts this injury at high risk of infection when caused by high-energy trauma. Bai et al. 20 studied the incidence of infection in 665 distal femur fractures and found an infection rate of 3.6%; this low rate can be explained by the inclusion of low-energy fractures, which represented 30% of the cases and fewer than 20% of the infected cases, whereas the high-energy cases accounted for 83.3% of the infections and likewise had open fracture as a risk factor. This study has several limitations. It was retrospective; therefore, the final decision about the implant and its application was made by the operating surgeon and could not be controlled experimentally. The low number of patients may have influenced the results, and several patients who initially met the inclusion criteria were unable to complete the 9-month follow-up. Finally, any radiographic measurement may be inconsistent because of image magnification and imprecise measurement.
In conclusion, the incidence of complications is higher in high-energy distal femur fractures than in low-energy fractures. We found a strong correlation between nonunion and both the total working length of the fixation and an increased PDFA. Callus formation on the medial and posterior sides reduced the nonunion rate. The only risk factor for infection was open fracture.
N-(3,4-Dichlorophenyl)thiourea
In the title compound, C7H6Cl2N2S, the benzene ring and the mean plane of the thiourea fragment [—N—C(=S)—N] make a dihedral angle of 66.77 (3)°. Intermolecular N—H⋯S and N—H⋯Cl hydrogen bonds link the molecules into a three-dimensional network.
N-(3,4-Dichlorophenyl)thiourea
Hai-Bo Shi, Wei-Xiao Hu and Yan-Fang Lin

S1. Comment

Thiazoles and their derivatives are associated with various biological activities, such as antibacterial, antifungal and anti-inflammatory activities (Holla et al., 2003). The title compound, N-(3,4-dichlorophenyl)thiourea, (I), is an important intermediate in the synthesis of thiazoles and their derivatives. In this work, we present its crystal structure: in (I), the benzene ring and the mean plane of the thiourea fragment make a dihedral angle of 66.77 (3)°, and intermolecular N—H⋯S and N—H⋯Cl hydrogen bonds (Table 1) link the molecules into a three-dimensional network.
S2. Experimental
The title compound was obtained by refluxing 3,4-dichloroaniline (48.6 g, 0.3 mol), 36% aqueous HCl (30.4 g, 0.3 mol) and ammonium thiocyanate (22.8 g, 0.3 mol) in water for 7 h, after which a white precipitate formed and was filtered off. The solid was recrystallized from alcohol to give the pure product, which was then dissolved in THF; gradual evaporation of the solution at room temperature afforded single crystals of (I).
Special details
Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > 2σ(F²) is used only for calculating R-factors(gt) etc., and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
[Table: Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å²); columns x, y, z, Uiso*/Ueq. Data not reproduced here.]
Quantum simulation of energy transport with embedded Rydberg aggregates
We show that an array of ultracold Rydberg atoms embedded in a laser driven background gas can serve as an aggregate for simulating exciton dynamics and energy transport with a controlled environment. Spatial disorder and decoherence introduced by the interaction with the background gas atoms can be controlled by the laser parameters. This allows for an almost ideal realization of a Haken-Reineker-Strobl type model for energy transport. Physics can be monitored using the same mechanism that provides control over the environment. The degree of decoherence is traced back to information gained on the excitation location through the monitoring, turning the setup into an experimentally accessible model system for studying the effects of quantum measurements on the dynamics of a many-body quantum system.
Introduction: Excitation transport through dipole-dipole interactions [1,2] plays a prominent role in diverse physical settings, including photosynthesis [3,4], exciton transport through quantum-dot arrays [5], and molecular aggregates [6-8]. Of crucial importance is the competition between the fundamentally coherent transport mechanism and the coupling to the environment, which has been under intense scrutiny in the context of photosynthesis (e.g. [1,9-13]) and has recently experienced a resurgence of interest (e.g. [14-19]). Often, clean studies of excitation transport are impeded by the large number of degrees of freedom in these systems, for example strongly coupled vibrational modes [9,20]. Ultracold atoms prepared in highly excited Rydberg states exhibit dipolar state-changing interactions [21-26] similar to those found in organic molecules, but are considerably simpler to study. Due to their strong interactions and relative ease of control using lasers, Rydberg atoms have been proposed as quantum simulators for quantum spin models [27,28] and electron-phonon interactions [29]. Aggregates formed by networks of Rydberg atoms (Rydberg aggregates) [30,31] are also ideally suited to the study of dipolar energy transport in an experimentally accessible system, as recently demonstrated [32].
Here we study energy transport through a Rydberg aggregate embedded within an optically driven background gas that acts as a precisely controlled environment. This system extends the one recently used to observe diffusive excitation transport [32] by separating the aggregate degrees of freedom from those of the background gas. The background gas is rendered electromagnetically transparent for a probe beam. Only in the vicinity of the aggregate atoms do interactions disrupt this transparency, causing each aggregate atom to cast a shadow with a radius set by the interaction strength. We demonstrate parameters for which a larger absorption shadow is cast by the atom carrying an excitation, allowing us to infer its location. We show that the background gas simultaneously causes a back-action on the aggregate which can give rise to non-Gaussian disorder as well as site-dependent dephasing. The resulting excitation transfer dynamics can be described by a master equation similar to the one introduced by Haken-Reineker-Strobl (HRS) [33-35] to study the transition from coherent to incoherent transport.

FIG. 1. An assembly of several Rydberg atoms in a state |s⟩ (large blue) and one in a state |p⟩ (large orange) is linearly arranged with spacing d in a background atomic gas (shades of green). These background atoms are then addressed with an EIT scheme (right panel), providing detection signals within radii R_c,s/p around each aggregate atom.
The experimental realization of a controllable HRS-type model will benefit the study of excitation transport in open systems, be it in semiconductors or in light harvesting; for the latter, extensions to exciton-vibrational coupling and non-Markovian environments may be required [9,20,30,36,37]. Finally, we show how decoherence in this system is intimately linked to the information obtained by the background gas acting as a quantum measurement device. In particular, despite strong aggregate-background interactions, decoherence vanishes if the background atoms do not allow one to infer the location of the excitation.
Scheme and model: The system we propose consists of a chain of N Rydberg atoms with spacing d, forming the aggregate sketched in Fig. 1. Such an arrangement can be created by exciting Rydberg states from a trapped ultracold atomic gas using tightly focused laser beams [38,39], or by pulsed or chirped excitation in the dipole blockade regime [40,41], which gives rise to spatially correlated Rydberg excitation patterns [42-51]. N − 1 atoms are initially prepared in the state |s⟩ = |νs⟩ with principal quantum number ν and angular momentum l = 0, while a single atom is excited to the state |p⟩ = |νp⟩ with angular momentum l = 1. This |p⟩ excitation can then migrate through the aggregate via resonant dipole-dipole exchange interactions [22,23]. In addition, the aggregate is immersed in a gas of M background atoms, initially prepared in the electronic ground state |g⟩, whose positions could be random or arranged in a regular fashion. These atoms are coupled by two laser fields from |g⟩ via a short-lived intermediate state |e⟩ (spontaneous decay rate Γ_p) to a third Rydberg level, |r⟩ = |ν′s⟩ [52-58]. Aggregate and background atoms could be the same or different atomic species.
This system is governed by the many-body Lindblad master equation for the density matrix ρ̂ (ħ = 1),

∂ρ̂/∂t = −i[Ĥ, ρ̂] + Σ_α ( L̂_α ρ̂ L̂_α† − ½{L̂_α† L̂_α, ρ̂} ).   (1)

The Hamiltonian consists of three parts, Ĥ = Ĥ_agg + Ĥ_EIT + Ĥ_int, for the aggregate, the background gas of three-level atoms, and van-der-Waals (vdW) interactions [23,59,60]. The aggregate atoms are labeled by Latin indices such as n and m. Restricted to the Hilbert space with a single excitation and setting the constant energy splitting between s and p to zero, we can write

Ĥ_agg = Σ_{n≠m} W_nm |π_n⟩⟨π_m|,   (2)

where |π_n⟩ = |ss..p..ss⟩ (all aggregate atoms are in |s⟩ except the n'th, which is in |p⟩) and W_nm = C₃/|r_n − r_m|³. Here C₃ is the dipole-dipole interaction strength and r_n is the position of aggregate atom n. We call the eigenstates of (2) excitons [61,62]. For simplicity we have ignored vdW interactions between aggregate atoms [63].
The Hamiltonian for the background gas in the rotating-wave approximation reads

Ĥ_EIT = Σ_α [ (Ω_p/2) σ̂_ge^(α) + (Ω_c/2) σ̂_er^(α) + h.c. − Δ_p σ̂_ee^(α) − (Δ_p + Δ_c) σ̂_rr^(α) ],   (3)

with σ̂_ab^(α) = |a⟩_α⟨b|, where Ω_p,c and Δ_p,c are the probe and coupling Rabi frequencies and detunings, respectively. Typically Ω_p ≪ Ω_c and Δ_p + Δ_c = 0, which corresponds to the conditions of electromagnetically induced transparency (EIT) used for Rydberg atom detection [32,64].
Background atoms interact among themselves and with the aggregate through the vdW interactions Ĥ_int, with isotropic power-law pair interactions assumed for simplicity. To use the background gas as a probe for the state of the aggregate, it is necessary that the interactions are state dependent: the interaction strength between two atoms α, n is V^(ra)_αn when they are in states |r⟩ and |a⟩ ∈ {|s⟩, |p⟩}. As concrete examples we consider |s⟩ = |43s⟩, |p⟩ = |43p⟩ in ⁸⁷Rb, with two choices |r⟩ = |38s⟩ or |r⟩ = |17s⟩ for the upper state of the EIT ladder. The former realizes power laws η(a) = 6, 6, 4 for a = r, s, p, respectively, with |V^(rp)| ≫ |V^(rs)| due to the nearly resonant process 43p + 38s ↔ 41d + 38p [65]; the latter has η(p) = 6 and V^(rp) ≈ V^(rs).
Excitation detection: On resonance (Δ_p,c = 0), the background gas becomes transparent for the probe beam described by Ω_p. However, close to the aggregate atoms, interactions V^(ra)_αn > V_c = Ω_c²/(2Γ_p) destroy the transparency [32,64] (see also [66]). This creates an absorption shadow around each aggregate atom, with radius R_c,a = (2C_η(a),ra Γ_p/Ω_c²)^(1/η(a)) depending on the state a ∈ {s, p}, as sketched by the blue (orange) circles in Fig. 1. Through this difference we can infer the location of the p-excitation.
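The critical-radius condition above is easy to evaluate numerically. The sketch below uses illustrative, order-of-magnitude parameter values (not the exact numbers of this proposal) simply to show that a stronger, lower-power interaction for the |p⟩ state yields the larger shadow.

```python
import numpy as np

# Shadow (critical) radius R_c,a = (2 * C_eta * Gamma_p / Omega_c^2)^(1/eta):
# the distance below which V = C_eta / r^eta exceeds V_c = Omega_c^2/(2*Gamma_p)
# and destroys the EIT transparency.
def critical_radius(c_eta, eta, gamma_p, omega_c):
    return (2.0 * c_eta * gamma_p / omega_c**2) ** (1.0 / eta)

# Hypothetical values (angular frequencies in 2*pi*MHz, C_eta in matching units).
gamma_p, omega_c = 6.1, 10.0
r_s = critical_radius(c_eta=100.0, eta=6, gamma_p=gamma_p, omega_c=omega_c)  # |s>
r_p = critical_radius(c_eta=5e4, eta=4, gamma_p=gamma_p, omega_c=omega_c)    # |p>
print(f"R_c,s = {r_s:.2f} um, R_c,p = {r_p:.2f} um")  # expect R_c,p > R_c,s
```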
Effective aggregate model: To derive an effective model for the aggregate alone, we adiabatically eliminate the internal states of the background atoms following the approach described in Ref. [67]. This is justified when the time scale on which the background atoms approach their steady state, set by the atomic decay rate 1/Γ_p, is shorter than that for excitation transport, 1/W(d) = d³/C₃ (for details see [68]). For Δ_p,c = 0, V^(rr)_αβ = 0 and to leading order in Ω_p, the reduced aggregate density matrix ρ̂^(agg) = Σ_nm ρ_nm |π_n⟩⟨π_m| obeys the master equation (5), in which we have introduced the background-aggregate interactions V_α^(n) = Σ_m V^(ra)_mα (with a = p for m = n and a = s otherwise), an effective Hamiltonian (6), and effective Lindblad operators L̂_eff^(α) (7); note the imaginary contributions to L̂_eff^(α). For the case Δ_p,c ≠ 0, see [68]. The effective Hamiltonian (6) describes a mean energy shift of aggregate site n due to the interaction with the level |r⟩ of the background atoms, weighted by the steady-state occupation of |r⟩. The strength of the second term (7) is set by the two-level-atom photon scattering rate γ_eff ≈ Ω_p²/Γ_p within the critical radius of an aggregate atom. Imaginary off-diagonal terms in (5), arising from imaginary parts of (7), can be interpreted as a contribution to the disorder [68], while real ones describe dephasing mechanisms. The relative contributions of disorder and dephasing can be controlled by choosing Rydberg states with different interactions and through the EIT laser parameters. Eq. (5) furnishes a Haken-Reineker-Strobl type model [35] for excitation transport. All scenarios from dominant dephasing to dominant disorder can be realized by varying the intermediate-state detuning Δ_p while keeping the two-photon detuning fixed, Δ_p + Δ_c ≈ 0; in particular, for large Δ_p the contribution of dephasing can be significantly reduced, see [68].
In the following we analyze the influence of the disorder and dephasing introduced by the background gas. More explicitly, we consider the distributions P(E_n), P(ε_nm) and P_γ(γ_nm) with which an individual background atom α contributes to disorder and dephasing in an ensemble average over background-atom positions (E_n: disorder from (6); ε_nm: disorder correction from (7); γ_nm: dephasing). Both the width and the shape of these distributions can be controlled by the laser parameters and the background-atom density. In Fig. 2, for low densities and weak interactions, we find significant outliers in the atomic distance distribution that cause non-Gaussian disorder, which can crucially modify excitation transport [69]. By controlling the placement of individual background atoms using microstructured optical traps, even more exotic forms of disorder could be studied.
The effects of disorder and dephasing on transport can be seen in Fig. 2(c), where we show a single realization of Eq. (5) for N = 11 atoms immersed in a gas of randomly but homogeneously distributed background atoms. In a corresponding ensemble average, the spatial width of the excitation distribution over aggregate sites, σ_n² = ⟨n²⟩ − ⟨n⟩², carries the transport signatures, Fig. 2(d). Parametrizing σ_n²(t) = S t^ξ, we find ξ = 2 for ballistic transport (Ω_p = 0), ξ = 1 for typical diffusive transport resulting from Fig. 2(a), and ξ = 0.69 for sub-diffusive transport arising from the non-Gaussian disorder in Fig. 2(b). Imaging and measurement-induced decoherence: The degree of decoherence present in this system is intimately linked to the action of the background gas as a real-time probe of the aggregate, making it an appealing model for demonstrating measurement-induced decoherence [70]. Since the background-gas degrees of freedom have been eliminated in the effective model, we demonstrate this effect with simulations of the full master equation, which also serve to verify model (5). We study an aggregate with N = 3, probed by two randomly placed pairs of background atoms, using a quantum-jump Monte Carlo technique [71,72]. The background-atom pairs have a separation Δr = 0.3 µm, yielding V^(rr)(Δr) = 730 GHz [73], to include significant interactions between background atoms. We initially prepare the aggregate in state |π_1⟩ and all background atoms in their ground state |g⟩.
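A minimal numerical sketch of the HRS-type dynamics discussed above: coherent hopping W_nm plus diagonal site disorder and, in its simplest site-uniform limit, dephasing of the coherences, propagated for a short chain, with the spreading exponent ξ extracted from σ_n²(t) = S t^ξ. All parameter values are illustrative and are not taken from this work.

```python
import numpy as np
from scipy.linalg import expm

N, C3, d, gamma, sigma_E = 11, 1.0, 1.0, 0.3, 0.5
x = d * np.arange(N)
W = C3 / np.abs(x[:, None] - x[None, :] + np.eye(N)) ** 3 * (1 - np.eye(N))

rng = np.random.default_rng(1)
H = W + np.diag(rng.normal(0.0, sigma_E, N))    # hopping + diagonal disorder

# Row-major vectorization: vec(-i[H, rho]) = -i (H (x) 1 - 1 (x) H^T) vec(rho);
# HRS dephasing damps only the coherences rho_nm, n != m, at rate gamma.
I = np.eye(N)
L = -1j * (np.kron(H, I) - np.kron(I, H.T))
L += -gamma * np.diag((1.0 - I).flatten())

rho = np.zeros((N, N), dtype=complex)
rho[N // 2, N // 2] = 1.0                       # excitation on the central site
v = rho.flatten()

dt, steps = 0.05, 400
U = expm(L * dt)                                # one-step propagator
sites = np.arange(N)
times, var = [], []
for k in range(1, steps + 1):
    v = U @ v
    p = np.real(v.reshape(N, N).diagonal())     # site populations p_n(t)
    times.append(k * dt)
    var.append(np.sum(sites**2 * p) - np.sum(sites * p) ** 2)

# sigma_n^2(t) = S t^xi  =>  xi is the log-log slope at early times,
# before the finite chain saturates the spread.
t, s2 = np.array(times), np.array(var)
sel = t < 2.0
xi = np.polyfit(np.log(t[sel]), np.log(s2[sel]), 1)[0]
print(f"spreading exponent xi = {xi:.2f}")
```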
Each background atom α heralds the arrival of an aggregate excitation through the optical susceptibility χ_α(t) = (Γ_p/Ω_p) Tr(ρ̂ σ̂_eg^(α)), the imaginary part of which yields the optical absorption. The average optical susceptibility of the background gas, χ(x), is approximated by spatial binning of the χ_α(t) from many simulations. To monitor the excitation transport, one can infer the location of the |p⟩ state by subtracting from χ(x) a reference signal χ_ref(x) corresponding to the absorption of an inactive aggregate (a chain of only |s⟩ states), as in [32]. We see in Fig. 3 that the resulting signal is directly linked to the probability distribution of the excitation, p_n(t) = Tr(ρ̂ |π_n⟩⟨π_n|). These simulations also show that background-background interactions V^(rr)_αβ are relatively benign for the chosen states and densities.
The dephasing of the aggregate depends strongly on the position of the background atoms. In particular, a given background atom only provides significant information on the excitation location if it is located in a ring between the two critical radii, R_c,s < r < R_c,p, as visible in Fig. 3. This is demonstrated in Fig. 4, where we place one background atom at a distance δ from each site, as shown in the top panels. For δ < R_c,s, background atoms permanently scatter a large number of photons, but the aggregate dynamics nonetheless proceeds coherently (panel a). In contrast, for R_c,s < δ < R_c,p, aggregate decoherence is strong despite a smaller total number of scattered photons.
The connection between the information provided by the scattered photons and decoherence is explicit in the quantum-jump algorithm: the inset of Fig. 4(b) shows, for a single realization, how the state of the aggregate, p₂ (blue dashed), is linked to quantum jumps of the |e⟩ population of its probe atom (green). This link only occurs when the state of the background atom and the state of the aggregate are significantly entangled at the moment of spontaneous decay. Since this is not the case in panel (a), single trajectories (not plotted) there show no effect of quantum jumps on the state of the aggregate.
Conclusions and outlook:
We have shown that a Rydberg aggregate embedded in an optically coupled background gas realizes a flexible quantum simulator of a Haken-Reineker-Strobl type model for energy transport. Site-dependent dephasing and disorder can be controlled through laser intensities, frequencies and the background atomic density. Furthermore, this system could be extended to study other fundamental features believed to be at play in photosynthetic light harvesting: we have seen evidence for non-Markovian features and non-trivial relaxation when the time scale on which the background atoms reach their steady state is made comparable to the transport time scales, a regime not discussed here. The analogue of internal molecular vibrations could be engineered as in Ref. [29], and disorder distributions could be controlled even further using an additional class of background atoms [75]. All these features would extend the HRS-type model proposed here toward quantum simulations of light-harvesting processes, in a similar spirit but with complementary technology to the proposals of Refs. [37,76,77]. Decoherence of the aggregate arises through continuous monitoring of the location of the excitation, providing a hands-on example of measurement-induced decoherence of a quantum state. Further applications of this system could include the monitoring and decoherence of adiabatic excitation transport involving external (motional) degrees of freedom [78,79].
SUPPLEMENTAL INFORMATION
Steady state of an EIT system: For a single aggregate atom in state a interacting with one background atom [64] at a distance δ, consider the ladder Hamiltonian of that background atom with an effective detuning Δ of the Rydberg level. The detuning Δ is in this case given by the interaction with the aggregate atom, Δ = V^(ra)(δ). Solving the corresponding master equation, including spontaneous decay from state |e⟩, for its steady state ρ̃ yields the steady-state susceptibility χ̃(Δ). This expression can be used to describe the time-dependent absorption signal in Fig. 3: for W(d) ≪ Γ_p, we find that each background atom α adiabatically follows the aggregate state through χ_α(t) = χ_adiab,α(t) = Σ_n p_n(t) χ̃(V_α^(n)), where V_α^(n) = Σ_m V^(ra)_mα (a = p for m = n, a = s otherwise) is the overall interaction of the specific background atom α with the entire aggregate if the latter is in the state |π_n⟩.
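The single-atom steady state invoked above can also be obtained numerically without the closed-form expressions (which are not reproduced here). The sketch below assumes the standard ladder-scheme conventions (resonant probe and coupling, interaction shift Δ on |r⟩, decay from |e⟩ to |g⟩) and illustrative parameter values; it builds the Liouvillian in row-major vectorized form and reads off the susceptibility from the steady-state coherence.

```python
import numpy as np

# Basis ordering |g>, |e>, |r>; ladder scheme g -(probe)- e -(coupling)- r.
def steady_state_chi(omega_p, omega_c, delta, gamma_p):
    """Steady-state susceptibility ~ rho_eg, with |r> shifted by delta = V(ra)."""
    H = np.array([
        [0.0,         omega_p / 2, 0.0],
        [omega_p / 2, 0.0,         omega_c / 2],
        [0.0,         omega_c / 2, delta],
    ], dtype=complex)
    c = np.zeros((3, 3), dtype=complex)
    c[0, 1] = np.sqrt(gamma_p)                      # |e> -> |g> decay
    I = np.eye(3)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    L += np.kron(c, c.conj()) - 0.5 * (
        np.kron(c.conj().T @ c, I) + np.kron(I, (c.conj().T @ c).T))
    # Steady state: L v = 0 with Tr(rho) = 1; replace one row by the trace row.
    A = L.copy()
    A[0, :] = np.eye(3).flatten()
    b = np.zeros(9, dtype=complex)
    b[0] = 1.0
    rho = np.linalg.solve(A, b).reshape(3, 3)
    return (gamma_p / omega_p) * rho[1, 0]          # chi ~ (Gamma_p/Omega_p) rho_eg

gamma_p, omega_p, omega_c = 6.1, 0.5, 10.0
for delta in (0.0, 5.0, 50.0):                      # interaction shift V(ra)(delta)
    chi = steady_state_chi(omega_p, omega_c, delta, gamma_p)
    print(f"V = {delta:5.1f}: Im(chi) = {np.imag(chi):.4f}")  # 0 at EIT resonance
```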
Adiabatic elimination of background-atom excited states: Following [67], we now formally adiabatically eliminate the excited states of all background atoms to arrive at an evolution equation for the aggregate alone. The essential step is to divide the (many-body) Hilbert space into a space of interest and its complement. The former is represented by the projector P̂_g, whose first part acts on the state space of the aggregate atoms and whose second part projects onto |g⟩ = |g...g⟩, the state in which all background atoms are in |g⟩. The complement of the space projected onto by P̂_g is thus formed by all many-body states involving any |e⟩ or |r⟩ state of the background atoms, and is projected onto by P̂_e = 1 − P̂_g. Segregating the total Hamiltonian from the main article into segments using the projection-operator formalism [67], to first order in Ω_p, yields blocks Ĥ_g, Ĥ_e and couplings Ĉ_±. After this segregation, the effective equation obtained by adiabatically eliminating the complement of our space of interest takes the form of Eqs. (21) and (22), where L̂_α is the Lindblad operator introduced in the main article. Central to the effective equation are the non-Hermitian Hamiltonian Ĥ_NH = Ĥ_e − (i/2) Σ_α L̂_α† L̂_α and its inverse Ĥ_NH⁻¹, which we obtain now. If we neglect V^(rr)_αβ and Ĥ_agg, as we do from now on, the Hamiltonian Ĥ_NH decomposes into the block structure (23), in which each block Ĥ_NH^(nα) acts within the space spanned by |e⟩_α and |r⟩_α only; similar blocks arise in Ĉ_+ and L̂_α. In that basis, Ĥ_NH^(nα) can be written explicitly in terms of V_α^(n) = Σ_m V^(ra)_mα, the overall interaction of the specific background atom α with the entire aggregate if the latter is in the state |π_n⟩, together with the corresponding blocks of Ĉ_±. Due to the block structure (23), we find the inverse of Ĥ_NH by inverting each Ĥ_NH^(nα). Using also (25), we obtain the contributions Ĥ_eff^(nα)|π_n⟩⟨π_n| and L̂_eff^(α)|π_n⟩⟨π_n| entering Eq. (21) and Eq. (22). In the limit Δ_p,c → 0, we arrive at the expressions (5) to (7) of the main article.
In the main article we point out that even strongly absorbing background atoms, whose location however does not allow one to infer the excitation location, do not contribute to aggregate decoherence. This is fully captured in the model just derived: the Lindblad operators L̂_eff^(α) in Eq. (7) of the main article for background atoms that lie within a critical radius regardless of the excitation location n are proportional to the unit matrix and hence cause no decoherence (since the operator Ô in the superoperator L_Ô[ρ] then commutes with ρ̂, as Ô ∝ 𝟙).
In a more explicit form of the HRS-type master equation, defining W_nn = 0, the matrix elements of ρ̂^(agg) = Σ_nm ρ_nm |π_n⟩⟨π_m| obey Eq. (30), which contains coherent hopping through W_nm, the term i(E_m − E_n + ε_nm)ρ_nm, and dephasing −γ_nm ρ_nm. We can interpret E_n as diagonal disorder, γ_nm as dephasing, and ε_nm as a correction to the diagonal disorder, as explained in the next paragraph. Note that this form is obtained as the leading order of a perturbative expansion in Ĉ_± and by assuming that Ĥ_g can be treated as small compared to Ĥ_e. Higher orders and corrections due to a finite Ĥ_g within Ĥ_NH^(nα) can be incorporated as described in [67] but have not been required here. Disorder correction ε_nm: Since it depends on two aggregate site indices n, m in a non-trivial fashion, the interpretation of ε_nm is not obvious at first. However, numerical evaluation shows that for the parameters of Fig. 2(a) in the main text, ε_nm can be approximated by Eq. (31), a sum of contributions arising from single aggregate sites; the Gaussian distribution P in Fig. 2(a) is reproduced by this expression, with a standard deviation overestimating the correct one by ∼10%. Consequently, the term i(E_m − E_n + ε_nm)ρ_nm in Eq. (30) can be cast into the form i(Ẽ_m − Ẽ_n)ρ_nm, with Ẽ_k = E_k − 2 Σ_α Ĥ_eff^(kα)|_(C4,rp=0). This is why we refer to ε_nm as a correction to the diagonal disorder.
The approximation (31) does not hold true for all sets of parameters, in particular not for those of Fig. 2(b); there, however, ε_nm is negligible. Dephasing in the continuum limit: The dephasing rates γ_nm in Eq. (30) are obtained as a discrete sum over all background atoms α. In the limit of a continuous background-atom density (from here on denoted by ρ_bg), these rates can be given in closed analytical form, provided that the interatomic distance of the aggregate satisfies d ≫ R_c,rs, R_c,rp and that the probe and control fields are applied resonantly (Δ_p = 0 = Δ_c). In this continuum limit, γ_nm = γ(1 − δ_nm) actually becomes site independent.
The dephasing γ acquires three contributions, one depending solely on the rs-interaction, one depending solely on the rp-interaction, and one depending on both: γ = γ_rs + γ_rp + γ_rs,rp.
The first two contributions have the simple form γ_rp = π² cos(π/8) (Ω_p²/Γ_p) ρ_bg R³_c,rp, with γ_rs given analogously in terms of R_c,rs. A similar analytical solution can be found for the on-site energy shifts (28), but since these also become independent of the site index, so that disorder vanishes, it is not shown here.
Weighted gene co-expression network analysis to investigate the key genes implicated in global brain ischemia/reperfusion injury in rats
Material and methods. GSE82146 was extracted from the Gene Expression Omnibus; it consists of 15 complete global brain ischemia (CGBI) reperfusion hippocampus samples and 12 non-ischemic control (NIC) hippocampus samples. The differentially expressed genes (DEGs) between the CGBI and NIC samples were selected using the LIMMA package and then analyzed with weighted gene co-expression network analysis (WGCNA). Using DAVID software, the DEGs in significant modules were subjected to enrichment analysis. The DEGs in significant modules were merged, and a protein-protein interaction (PPI) network was built for them using Cytoscape software. After miRNAs and transcription factors (TFs) were predicted for the DEGs using the WebGestalt tool, a TF-miRNA-target regulatory network was built using Cytoscape software. Furthermore, quantitative real-time polymerase chain reaction (qRT-PCR) analysis was conducted to detect the levels of the key genes.
Introduction
Ischemia/reperfusion (I/R) refers to a situation in which blood is perfused into tissues experiencing ischemia or hypoxia. 1 Although I/R promotes the repair of damage and the recovery of function in most cases, it can also lead to an inflammatory response and oxidative injury by inducing oxidative stress. 2 I/R is usually related to microvascular injury, and the imbalance of reactive oxygen species (ROS) and nitric oxide (NO) produced by activated endothelial cells is responsible for the subsequent inflammatory response. 3,4 The development of I/R injury is influenced by ischemia time, degree of oxygenation, collateral circulation, and reperfusion conditions. I/R injury strongly affects the ischemic cascade of the brain, as seen in brain trauma and stroke. 5 Hence, the molecular mechanisms of I/R injury need to be investigated to better alleviate its adverse effects in clinical practice.
By inhibiting nuclear factor-κB (NF-κB), ginkgolide B (GB) exerts anti-apoptotic and anti-inflammatory effects and has demonstrated neuroprotective roles in mice with ischemia-induced brain injury. 6,7 The inhibition of P2X7 receptors (P2X7Rs) protects rats from cerebral I/R injury by decreasing the inflammatory response and may serve as a novel therapeutic approach for transient global cerebral I/R injury. By increasing B-cell leukemia-2 (Bcl-2) expression and reducing Bcl-2-associated X protein (Bax) expression, propofol functions as a neuroprotective agent in I/R rats. 8 Oxymatrine can protect the brain of stroke rats from focal I/R injury, and activation of the nuclear factor erythroid 2-related factor 2 (Nrf2)/hemeoxygenase-1 (HO-1) pathway may promote the neuroprotective effects of oxymatrine in the focal brain I/R rat model. 9 MiR-124 mediates the expression of Ku autoantigen 70 (Ku70) and helps to reduce the neuronal death and brain dysfunction caused by I/R. MiR-134 downregulation relieves cerebral ischemic injury by enhancing cyclic AMP (cAMP) response element-binding protein (CREB) and its downstream genes, which provides a potential therapeutic target for the injury. 10 Nevertheless, the genes and miRNAs affecting brain I/R injury have not been thoroughly explored.
In 2016, Wang et al. performed differential expression analysis of I/R in hippocampal areas CA1 and CA3 and found that CA3 handles ischemic stress better; however, they did not comprehensively investigate the pathogenesis of I/R injury. 11 To further identify the key genes and miRNAs involved in I/R injury, a series of bioinformatics analyses (differential expression analysis, weighted gene co-expression network analysis (WGCNA), enrichment analysis, and network analysis) was carried out in this study on the expression profile data uploaded by Wang et al. 11 In addition, quantitative real-time polymerase chain reaction (qRT-PCR) analysis was conducted to confirm the key genes.
Data source
The microarray dataset GSE82146 (species: Rattus norvegicus) was extracted from the Gene Expression Omnibus database (GEO, http://www.ncbi.nih.gov/geo); it was generated on the GPL17117 [RaGene-2_0-st] Affymetrix Rat Gene 2.0 ST Array platform (transcript (gene) version). There were 15 complete global brain ischemia (CGBI) reperfusion hippocampus samples and 12 non-ischemic control (NIC) hippocampus samples in GSE82146. In Long Evans rats (male, 275-300 g), CGBI was induced with the two-vessel bilateral carotid artery occlusion and hypovolemic hypotension model, 12 as previously described. 13-15 Our research was approved by the ethics committee of Ningbo No. 9 Hospital, China.
Differential expression analysis
The original data in CEL format were downloaded and preprocessed (including format conversion, filling in of missing data, background correction (MicroArray Suite method), and data standardization (quantile method)) using the R oligo package (v. 1.36.1; http://www.bioconductor.org/packages/release/bioc/html/oligo.html). 16 Next, the probes were annotated with their corresponding genes based on the platform annotation. For genes mapped to multiple probes, the average of the expression values was taken as the unique expression value.
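The preprocessing pipeline above was run in R with the oligo package; as a conceptual analog only (not a substitute for that workflow), the missing-value filling, quantile normalization, and probe-to-gene averaging steps can be sketched in Python on a toy matrix. All probe and gene names below are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical probe-level matrix: 8 probes x 6 samples, with a missing value.
expr = pd.DataFrame(rng.lognormal(5, 1, (8, 6)),
                    index=[f"probe{i}" for i in range(8)])
expr.iloc[2, 3] = np.nan
expr = expr.apply(lambda row: row.fillna(row.mean()), axis=1)  # fill missing

# Quantile normalization: force every sample onto the same distribution.
ranks = expr.rank(method="first").astype(int) - 1     # 0-based ranks per sample
mean_quantiles = np.sort(expr.to_numpy(), axis=0).mean(axis=1)
qnorm = expr.copy()
for col in expr.columns:
    qnorm[col] = mean_quantiles[ranks[col].to_numpy()]

# Probe -> gene annotation; average probes mapping to the same gene.
probe2gene = {f"probe{i}": f"gene{i % 4}" for i in range(8)}   # hypothetical map
gene_expr = qnorm.groupby(probe2gene).mean()
print(gene_expr.round(1))
```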
WGCNA to identify disease-associated modules and genes
WGCNA is a systems-biology algorithm for constructing gene co-expression networks, which can be used to identify modules of related genes. 18 To screen CGBI-associated modules and genes, the expression values of the DEGs in each group were analyzed with WGCNA. 18 The detailed process of network building and module identification included: consistency analysis between the datasets; definition of the gene co-expression correlation matrix (the similarity between gene m and gene n was S_mn = |cor(m,n)|); definition of the adjacency function (the power adjacency a_mn = (S_mn)^β); determination of the adjacency-function parameter β (chosen so that the scale-free topology fit index R² ≥ 0.8); measurement of the degree of dissimilarity between nodes; identification of gene modules (number of genes per module ≥ 30); and determination of the correlation between network modules and disease state.
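The quantities just defined can be made concrete in a short sketch. The Python code below (a stand-in for the R WGCNA package actually used) builds the unsigned similarity S_mn = |cor(m,n)|, raises it to a candidate power β, and picks the smallest β whose scale-free topology fit R² first reaches 0.8; the expression matrix is random placeholder data, so the chosen power will not match the value reported later.

```python
# Soft-threshold (power) selection for a WGCNA-style network, sketched.
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 27))          # 100 genes x 27 samples (15 CGBI + 12 NIC)

S = np.abs(np.corrcoef(expr))              # similarity matrix S_mn = |cor(m, n)|
np.fill_diagonal(S, 0)

def scale_free_r2(S, beta, n_bins=10):
    """R^2 of log10(p(k)) vs log10(k) for the power-adjacency network."""
    k = (S ** beta).sum(axis=1)            # weighted connectivity per gene
    hist, edges = np.histogram(k, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = hist > 0
    x, y = np.log10(centers[keep]), np.log10(hist[keep])
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# pick the smallest beta whose scale-free fit first reaches 0.8
for beta in range(1, 31):
    if scale_free_r2(S, beta) >= 0.8:
        print("chosen power:", beta)
        break
else:
    print("no beta reached 0.8 on this toy data")
```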
Enrichment analysis
Using DAVID software (v. 6.8; https://david-d.ncifcrf.gov/), 19 Gene Ontology (GO) 20 and Kyoto Encyclopedia of Genes and Genomes (KEGG) 21 enrichment analyses were conducted for the DEGs in the significant modules. The thresholds for selecting significant terms were p-values <0.05 and counts of involved genes ≥2.
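The statistical core of such an enrichment test is an over-representation comparison of the module gene list against an annotated term, thresholded as described above. The sketch below illustrates this with a hypergeometric test on invented gene sets; DAVID's actual implementation (a modified Fisher exact test, the EASE score) differs in detail.

```python
# One-sided hypergeometric enrichment test, per term.
from scipy.stats import hypergeom

background = 10000                       # genes on the array (illustrative)
module = {"Hspa5", "Hsp90aa1", "Hspb1", "Dnajb1", "Hmox1"}
term = {"Hspa5", "Hsp90aa1", "Dnajb1", "Calr", "Canx"}   # e.g. "protein folding"

overlap = len(module & term)
# P(X >= overlap) with population=background, n=len(term) draws tagged,
# N=len(module) sampled
p = hypergeom.sf(overlap - 1, background, len(term), len(module))
if p < 0.05 and overlap >= 2:            # the thresholds stated above
    print(f"enriched: overlap={overlap}, p={p:.2e}")
```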
Protein-protein interaction (PPI) network analysis
After the DEGs in the significant modules were merged, they were input into the STRING database (v. 10.0; http://www.string-db.org/) 22 to predict the PPIs among them. The species was set to rat and the PPI score parameter was set at 0.4. The PPI results were downloaded in TSV format, and Cytoscape (v. 3.2.0; http://www.cytoscape.org/) 23 was used to construct the PPI network.
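As a sketch of this post-processing step, the code below reads a STRING-style TSV export, keeps interactions with combined score ≥ 0.4, and ranks nodes by degree (the criterion used to call hub genes in the Results). The file name and column names are assumptions modeled on STRING's export format, not verified against a specific download.

```python
# Build a PPI graph from a STRING-style TSV and find hub nodes by degree.
import csv
import networkx as nx

G = nx.Graph()
with open("string_interactions.tsv") as fh:             # assumed file name
    for row in csv.DictReader(fh, delimiter="\t"):
        score = float(row["combined_score"])            # assumed column name
        if score >= 0.4:                                # the PPI score threshold
            G.add_edge(row["node1"], row["node2"], weight=score)

hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5]
print("top hub nodes:", hubs)   # HSP90AA1 (degree 47) and HSPA5 (25) in the paper
```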
Transcription factor-miRNA-target regulatory network analysis
Using the WebGestalt tool (http://www.webgestalt.org/), 24 miRNAs and transcription factors (TFs) were predicted for the DEGs involved in the PPI network using Overrepresentation Enrichment Analysis (ORA). The species was rat and the reference background was the Affymetrix Rat Gene v. 2.0 ST platform. According to significance levels, the top 10 miRNA-target and TF-target results were obtained and integrated. Then, a TF-miRNA-target regulatory network was built using Cytoscape software. 23
qRT-PCR analysis
The total RNA of 7 brain I/R tissues and 7 control tissues was isolated using a Trizol total RNA extraction kit (Invitrogen, Shanghai, China) following the manufacturer's instructions. The purity and integrity of the RNA were evaluated with a spectrophotometer (Merinton, Beijing, China) and 2% agarose gel electrophoresis, respectively. The primer sequences of the key genes were designed for the qRT-PCR experiments (Table 1) and were produced by Sangon Biotech Co., Ltd. (Shanghai, China). The qRT-PCR experiments were carried out using a SYBR Green master mix kit (Applied Biosystems, Foster City, USA). The 20-µL PCR amplification system consisted of 10 µL of SYBR Premix Ex Taq (×2), 8 µL of cDNA template (kept at a consistent level after dilution with ddH2O), 1 µL of forward primer (10 µM), and 1 µL of reverse primer (10 µM). The reaction conditions were 50°C for 3 min and 95°C for 3 min, followed by 40 cycles of 95°C for 10 s and 60°C for 30 s. Afterwards, a melt curve was created. All experiments were repeated 3 times, and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was utilized as the reference gene.
Statistical analysis
The expression levels of the key genes were analyzed using the 2^−ΔΔCt method. 25 All data are presented as mean ± standard error of the mean (SEM). SPSS v. 22.0 software (SPSS Inc., Chicago, USA) was used to perform statistical analysis, with p < 0.05 serving as the threshold of statistical significance.
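For concreteness, a minimal sketch of the 2^−ΔΔCt computation with GAPDH as the reference gene is given below; the Ct values are hypothetical and chosen only to illustrate the direction of the calculation.

```python
# Relative expression by the 2^-ddCt method (hypothetical Ct values).
import numpy as np

def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    dct_case = np.mean(ct_target_case) - np.mean(ct_ref_case)
    dct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    ddct = dct_case - dct_ctrl
    return 2.0 ** (-ddct)

# e.g. a target gene in I/R vs control tissue, triplicate Ct values
fc = fold_change([22.1, 22.3, 22.0], [18.0, 18.1, 17.9],
                 [24.5, 24.4, 24.6], [18.0, 18.2, 18.1])
print(f"relative expression (2^-ddCt): {fc:.2f}")   # > 1 means upregulated
```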
Differential expression analysis
There were 390 DEGs in the CGBI samples compared with the NIC samples, including 330 upregulated genes (such as heat shock protein B1 (HSPB1) and heme oxygenase (decycling) 1 (HMOX1)) and 60 downregulated genes (such as nuclear receptor subfamily 4 group A member 2 (NR4A2)). The clustering heatmap showed that the samples could be clearly differentiated by the DEGs (Fig. 1).
WGCNA analysis and enrichment analysis
In order to meet the prerequisite of a scale-free network distribution, the adjacency-matrix weighting parameter "power" was explored. As a result, a "power" value of 19 was selected, the value at which the squared correlation coefficient (the scale-free fit index) first reached 0.8 (Fig. 2).
A co-expression network was constructed based on this "power" value. First, the dissimilarity coefficients among the DEGs were calculated. Using the dissimilarity matrix, hierarchical clustering was performed to obtain a clustering tree of the DEGs (Fig. 3A). Following the standards of the dynamic hybrid tree-cut algorithm, the minimum gene number of each network module was set at 30. After network modules were identified using the dynamic tree-cut method, the feature vector ("eigengene") was calculated for each module. Subsequently, the modules underwent clustering analysis, and modules with close clustering relationships were merged into a new module. Finally, the DEGs were divided into 5 network modules (Fig. 3B); the grey module was the set of DEGs that could not be assigned to any other module.
To identify CGBI-associated modules, correlation analysis was conducted between the feature vector of each module and CGBI status. The absolute values of the correlation coefficients for the modules were sorted, and it was found that the values for the brown and turquoise modules were higher than 0.8 (Table 2). Meanwhile, the absolute values of gene significance for the modules were also calculated in order to screen CGBI-associated modules (Fig. 4). In addition, the heatmaps for the genes in the brown and turquoise modules are shown separately in Fig. 5A and 5B.
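The module-trait screening step can be sketched as follows: the module eigengene is computed as the first principal component of the module's expression matrix and correlated with the binary disease label. The data below are simulated so that the correlation clears the 0.8 cutoff; real module expression would come from the WGCNA output.

```python
# Module eigengene (first PC over samples) vs. disease-state correlation.
import numpy as np

rng = np.random.default_rng(1)
trait = np.array([1] * 15 + [0] * 12)                  # 15 CGBI, 12 NIC samples
module_expr = rng.normal(size=(40, 27)) + 2.0 * trait  # 40 genes tracking the trait

# eigengene = first right singular vector over samples (PCA without sklearn)
centered = module_expr - module_expr.mean(axis=1, keepdims=True)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigengene = vt[0]

r = np.corrcoef(eigengene, trait)[0, 1]
print(f"module-trait correlation: {abs(r):.2f}")       # would pass the 0.8 cutoff
```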
Functional and pathway enrichment analysis
The DEGs in the brown and turquoise modules were subjected to enrichment analysis. The DEGs in the brown module were mainly implicated in the positive regulation of gene expression (GO; p-value = 9.61E-09) and protein processing in the endoplasmic reticulum (KEGG; p-value = 2.86E-07). The DEGs in the turquoise module were mainly involved in protein folding (GO; p-value = 3.33E-07) and aminoacyl-tRNA biosynthesis (KEGG; p-value = 4.58E-07) (Table 3).
PPI network analysis
The DEGs in the brown and turquoise modules were merged, and a total of 259 DEGs were obtained (including 115 upregulated genes in the brown module, 115 upregulated genes in the turquoise module, 18 downregulated genes in the brown module, and 11 downregulated genes in the turquoise module). Next, the PPI network was built; it had 164 nodes and 620 edges (Fig. 6A). According to the degree values of the network nodes, heat shock protein 90 alpha family class A member 1 (HSP90AA1; degree = 47) and heat shock protein 5 (HSPA5; degree = 25) were key nodes. Additionally, HSP90AA1 and HSPA5 interacted in the PPI network.
TF-miRNA-target regulatory network analysis
A total of 228 TF-miRNA-target regulatory relationships were obtained, involving 6 TFs, 10 miRNAs, and 99 targets (including 46 upregulated genes in the brown module, 45 upregulated genes in the turquoise module, 5 downregulated genes in the brown module, and 3 downregulated genes in the turquoise module). The TF-miRNA-target regulatory network is shown in Fig. 6B. In the regulatory network, the myelocytomatosis oncogene (MYC; TF, degree = 38), heat shock transcription factor 1 (HSF1; TF, degree = 25), and miR-22 (degree = 12) had higher degree values. Importantly, MYC could target HSPA5 in the regulatory network.
qRT-PCR analysis
The levels of HSPB1, HMOX1 and NR4A2 in the brain I/R tissues and the control tissues were detected using qRT-PCR experiments. HSPB1 (p < 0.001; Fig. 7A) and HMOX1 (p < 0.001; Fig. 7B) were significantly upregulated in the brain I/R tissues compared with the control tissues, while NR4A2 (p < 0.001; Fig. 7C) was significantly downregulated. These findings were consistent with the results of differential expression analysis.
Discussion
In this study, 390 DEGs (including upregulated HSPB1 and HMOX1, as well as downregulated NR4A2) were identified in the CGBI samples. Through WGCNA, the brown and turquoise modules were screened as CGBI-associated modules. After the DEGs in the brown and turquoise modules were merged, a PPI network was built for them. In the PPI network, HSP90AA1 and HSPA5 were the key nodes. Moreover, MYC, HSF1 and miR-22 had higher degree values in the TF-miRNA-target regulatory network. Additionally, the qRT-PCR experiments confirmed upregulated HSPB1 and HMOX1 and downregulated NR4A2.
[Legend to Fig. 6B: red triangles and green hexagons represent miRNAs and TFs, respectively; brown and blue mark genes in the brown and turquoise modules, respectively; the higher the degree value of a node, the larger its size; arrows indicate the direction of regulation.]
Oxidative stress can induce the phosphorylation of HSPB1 and HSPB5, which play neuroprotective roles in hippocampal neurons. Kupffer cells, which are the main expression sites of hepatic HMOX1, have anti-inflammatory effects and can resist the oxidative injury induced by I/R. 26 NURR1 (also named NR4A2) contributes to intestinal regeneration following I/R injury by suppressing p21 expression, which may provide new approaches for the therapy of intestinal I/R injury. 27 These findings support the thesis that HSPB1, HMOX1 and NR4A2 are related to the development and progression of I/R injury. The mRNA expression of HSP90AA1 is reduced following I/R and may be promoted by miR-1 inhibition during myocardial I/R. 28,29 A high protein expression of HSPA5 can exert neuroprotective effects and prevent neural ischemic injury by attenuating endoplasmic reticulum (ER) stress-induced apoptosis. 30 By negatively mediating ubiquitin carboxyl-terminal hydrolase isozyme L1 (UCHL1) and HSPA5 protein levels, miR-181b downregulation protects mice from ischemic injury and provides a therapeutic strategy for ischemic stroke. 31 HSP90AA1 and HSPA5 interacted in the PPI network, suggesting that HSP90AA1 and HSPA5 might play roles in I/R injury through interaction with each other.
MYC expression is upregulated after acute I/R injury and may promote the low expression of the anti-apoptotic N-myc downstream-regulated gene 2 (NDRG2), which may be associated with myocardial apoptosis in I/R rats. 32,33 By weakening NF-κB activation and reducing MYC expression, copper/zinc-superoxide dismutase (SOD1) overexpression helps to decrease ischemic damage. 34 Granulocyte colony-stimulating factor (G-CSF) increases HSF1 expression by promoting phosphorylation and the interaction of signal transducer and activator of transcription-3 (STAT3) with HSF1, which has cardioprotective effects in I/R mice. 35 HSF1 prevents the death of cardiomyocytes following I/R partly by activating Akt and inactivating caspase 3 and Jun N-terminal kinase. 36 These reports indicate that MYC and HSF1 might also be implicated in the mechanisms of I/R injury. MYC could target HSPA5 in the regulatory network, indicating a role of MYC in I/R injury through mediation of HSPA5.
MiR-22 plays a neuroprotective role by reducing inflammation and apoptosis, indicating that miR-22 may be applied in the treatment of cerebral I/R injury. 37 It can suppress the apoptosis of cardiomyocytes by targeting CREB-binding protein (CBP); therefore, miR-22 may serve as a novel target for preventing myocardial I/R injury. 38,39 Inhibition of miR-22 helps to preserve cardiac mitochondrial function and thus has therapeutic potential for acute myocardial I/R injury. 40 MiR-22 decreases caveolin 3 (Cav3) expression and restores endothelial nitric oxide synthase (eNOS) activity and NO production, inhibiting cardiac injury after I/R. 41 Therefore, miR-22 might be associated with the pathogenesis of I/R injury by regulating the DEGs.
Conclusions
In conclusion, 390 DEGs were identified between the CGBI and NIC samples. HSPB1, HMOX1 and NR4A2 were the key genes associated with I/R injury, and HSP90AA1, HSPA5, MYC, HSF1, and miR-22 might be implicated in its pathogenesis. However, the experimental validation was limited, and our results must be further confirmed in subsequent studies.
A Sunlight-pumped Two-dimensional Thermalized Photon Gas
The Liouville theorem states that the phase-space volume of an ensemble in a closed system remains constant. While gases of material particles can efficiently be cooled by sympathetic or laser cooling techniques, allowing for large phase-space compression, for light both the absence of internal structure and the usual non-conservation of particle number upon contact with matter impose fundamental limits, e.g. in fluorescence-based light concentrators in three-dimensional systems. A different physical situation can in principle be expected for dye-solution-filled microcavities with a mirror spacing in the wavelength range, where low-dimensional photon gases with non-vanishing, freely tunable chemical potential have been experimentally realized. Motivated by the goal of observing phase-space compression of sunlight by cooling the captured radiation to room temperature, we in this work theoretically show that in a lossless system the phase-space volume scales as $(\Delta x \Delta p / T)^d = \mathrm{constant}$, where $\Delta x$ and $\Delta p$ denote the rms position and momentum spread and $d$ the dimensionality of the system ($d=1$ or $2$). We also experimentally realize a sunlight-pumped dye microcavity, and demonstrate thermalization of scattered sunlight to a two-dimensional room-temperature ensemble with non-vanishing chemical potential. Prospects of phase-space buildup of light by cooling, as can be feasible in systems with a two- or three-dimensional band gap, range from quantum state preparation in tailored potentials up to technical applications in diffuse sunlight collection.
I.) Introduction
Compressing phase space by harnessing coupling to the environment is of interest in fields ranging from applications of laser-cooled atoms to solar light collection [1][2][3][4][5]. Along these lines, motivated most prominently by the collection of diffuse sunlight, techniques of non-imaging optics have been developed, aiming at guiding or concentration of light without the requirement to form an image of the source, and involving phase-space densities much below unity, i.e. in a purely classical domain [5]. Spatial light concentration has been observed in luminescent collectors based on dye-doped transparent plates capturing the emission of the optical absorbers falling in a certain emission cone by total internal reflection [4,6]. In some experiments, upon increased reabsorption, the blue wing of the emitted spectrum approaches a blackbody-like spectrum, which illustrates the importance of thermodynamics to light concentration [7].
In blackbody radiation, which is the most common example among thermal radiators emitting into free space, the radiation density S follows the Stefan-Boltzmann law $S \propto T^4$; correspondingly, it rapidly diminishes with decreasing temperature T, as understood from the non-conservation of the photon number, and the chemical potential vanishes [8]. A lowering of the temperature in such a system will decrease, rather than enhance, the (optical) phase-space density. In contrast, optical quantum gases, which are subjected to light-confining structures on the wavelength scale to yield effectively one- or two-dimensional systems, have been demonstrated to provide an important prerequisite for cooling, namely independent tuning of particle number and temperature [9]. To date, this tunability has allowed for the realization of photon and polariton condensates, where at sufficiently low temperature and large densities particles condense into a macroscopically occupied ground state owing to quantum statistics [10][11][12][13][14][15][16].
In this work, we examine non-imaging optics for light concentration in a two-dimensional system. The reduced dimensionality realizes a low-frequency cutoff, providing a non-trivial ground state, and the mirror topography induces a trapping potential for cavity photons [16].
By repeated absorption re-emission cycles, photons are sympathetically cooled to the temperature of the dye solution, which is at room temperature, resulting in a compression of the optical cloud in both position and momentum space. We theoretically show that in a lossless system, upon cooling light in a material-filled optical microcavity, phase-space buildup of the collected radiation is well expected, and we give scaling laws for the expected compression. To experimentally investigate the possibility of solar light collection with a low-dimensional photon gas, we have performed a proof-of-principle experiment realizing a sunlight-pumped dye microcavity, not optimized for high photon collection efficiency.
Following up on earlier work using laser-based optical pumping [9], we here observe a thermalized two-dimensional photon gas with non-vanishing chemical potential employing the sunlight-based pumping arrangement. The spectral data of the cavity emission is in good agreement with thermodynamic expectations over the full emission bandwidth.
II.) Experimental principle: Collecting photons in flatland
Our experiment uses an optical microcavity consisting of two highly reflecting curved mirrors spaced by a distance in the wavelength regime, filled with dye solution (Fig. 1). The apparatus is an extension of a setup used in previous work [9,10,16]. The mirrors, due to their small spacing, introduce a low-frequency cutoff at $\hbar\omega_{\mathrm{cutoff}} \simeq 2.1\,\mathrm{eV}$, see Fig. 2a, imprinting a spectrum of photon energies restricted to well above the thermal energy $k_B T \simeq 1/40\,\mathrm{eV}$, with $T \simeq 300\,\mathrm{K}$ being the temperature of the experimental apparatus. This prevents the thermal emission of (optical) photons by the dye molecules. Photons trapped in the microcavity thermalize to the (rovibrational) temperature of the dye (also at room temperature) by repeated absorption and re-emission processes. In the course of the thermalization the longitudinal modal quantum number of the cavity photons remains fixed, as for small cavity distance emission predominantly occurs into a single longitudinal mode. The remaining transverse degrees of freedom make the photon gas in the microcavity effectively two-dimensional. The optical dispersion relation becomes quadratic, as for a massive particle, and the curvature of the cavity mirrors induces a trapping potential for photons, as understood from the transverse variation of the wavelength required to match the boundary conditions imposed by the cavity mirrors. Away from the optical axis, light with shorter wavelength than in the trap center, corresponding to higher photon energies, fulfills the boundary condition.
We thus expect that a cooling of the photon gas, besides reducing the transverse momentum spread, will lead to a shrinking of the cloud in position space, as illustrated in Fig.2c, similarly to the cooling of trapped atomic gases [2], or the transverse motion in particle accelerators [17].
Photons confined in the resonator are, in paraxial approximation, described by a dispersion of the form [9]

$$E \simeq m\left(\frac{c}{n}\right)^2 + \frac{p_x^2 + p_y^2}{2m} + \frac{1}{2} m \Omega^2 \left(x^2 + y^2\right), \qquad (1)$$

resembling that of a (two-dimensional) harmonically confined system of massive particles, where $m = \hbar\omega_{\mathrm{cutoff}}/(c/n)^2$ is an effective photon mass. Here c denotes the speed of light in vacuum, n the refractive index of the dye medium used in our experiment, and $\omega_{\mathrm{cutoff}} = 2\pi c/\lambda_{\mathrm{cutoff}}$, with $\lambda_{\mathrm{cutoff}}$ as the cutoff wavelength. Further, x and y are spatial coordinates transverse to the cavity axis, $p_x$ and $p_y$ the transverse photon momentum components, and $\Omega$ the trapping frequency induced by the mirror curvature. At the low photon numbers used here, the phase-space density remains well below unity, such that classical statistical mechanics prescribes the occupation densities in the thermalized case, yielding a Boltzmann distribution of photons above the low-frequency cutoff [9,16].
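As a quick numerical check of the effective-mass expression in Eq. (1), the snippet below evaluates $m = \hbar\omega_{\mathrm{cutoff}}/(c/n)^2$ for the experimental parameters quoted in Sec. IV ($\lambda_{\mathrm{cutoff}} = 587$ nm, $n = 1.44$).

```python
# Effective photon mass from Eq. (1), with omega_cutoff = 2*pi*c/lambda_cutoff.
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
n = 1.44                 # refractive index of the dye solution
lam = 587e-9             # cutoff wavelength, m

omega_cutoff = 2 * math.pi * c / lam
m = hbar * omega_cutoff / (c / n) ** 2
print(f"effective photon mass: {m:.2e} kg")   # ~7.8e-36 kg, consistent with
                                              # the ~7.7e-36 kg quoted in Sec. IV
```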
III.) Expectations for phase-space buildup upon cooling
We are interested in deriving the phase-space distribution of a harmonically confined two-dimensional photon gas at different temperatures. The thermal average of an observable A in the classical regime is determined by Boltzmann statistics [18] and given by

$$\langle A \rangle = \frac{\int A\, e^{-E/k_B T}\, \mathrm{d}x\, \mathrm{d}y\, \mathrm{d}p_x\, \mathrm{d}p_y}{\int e^{-E/k_B T}\, \mathrm{d}x\, \mathrm{d}y\, \mathrm{d}p_x\, \mathrm{d}p_y}.$$

For the rms cloud widths in position and momentum space, one then finds

$$\Delta x_i = \sqrt{\frac{k_B T}{m\Omega^2}}, \qquad \Delta p_i = \sqrt{m k_B T},$$

with $i = \{x, y\}$ respectively, which agrees with the equipartition theorem in terms of average potential and kinetic energies of $k_B T/2$ per degree of freedom.
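These rms widths can be evaluated numerically. In the sketch below, the trapping frequency $\Omega \approx 2\pi \cdot 4 \times 10^{10}$ Hz is an estimate inferred from the mirror geometry of Sec. IV rather than a value stated in the text, and the effective mass is the value computed above.

```python
# rms cloud widths from equipartition, Dx = sqrt(kB*T/(m*Omega^2)),
# Dp = sqrt(m*kB*T). Omega is an assumed/estimated trap frequency.
import math

kB = 1.380649e-23            # J/K
m = 7.8e-36                  # kg, effective photon mass from Eq. (1)
Omega = 2 * math.pi * 4e10   # rad/s, assumed trap frequency

for T in (300.0, 5800.0):
    dx = math.sqrt(kB * T / (m * Omega**2))
    dp = math.sqrt(m * kB * T)
    print(f"T = {T:6.0f} K:  Dx = {dx*1e6:6.1f} um,  Dx*Dp = {dx*dp:.2e} J s")
# Dx*Dp = kB*T/Omega grows linearly with T, as used below.
```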
Upon cooling, the two-dimensional harmonically trapped photon cloud shrinks both in position and momentum space, as outlined in Fig. 2c. We thus expect the phase-space volume to linearly decrease with temperature, as

$$\Delta x\, \Delta p = \frac{k_B T}{\Omega}.$$

Expressed in terms of the angle $\theta \simeq p/(m c/n)$ of a paraxial ray with respect to the optical axis, we obtain a universal thermodynamic scaling for the product of the area of the effective aperture A and solid angle W, given by

$$A\, W \propto T^2. \qquad (2)$$

Other than the thermodynamic scaling laws for light collection efficiencies previously derived for Boltzmann-like photon gases [6,7], eq. (2) contains no material-dependent quantities, due to the full thermalization of the photonic degrees of freedom possible with the method presented here. We obtain the expected phase-space distribution in thermodynamic equilibrium, with N as the total photon number, which is a Gaussian distribution both in position and in momentum space. In radial coordinates, $r^2 = x^2 + y^2$ and $p_r^2 = p_x^2 + p_y^2$, the expected position and momentum space distributions take the form

$$n(r) = N\,\frac{m\Omega^2}{2\pi k_B T}\, e^{-m\Omega^2 r^2/2 k_B T}, \qquad \tilde{n}(p_r) = \frac{N}{2\pi m k_B T}\, e^{-p_r^2/2 m k_B T}. \qquad (3)$$

In the following, we consider the situation that incident (hot) radiation is cooled down to the apparatus temperature by radiative contact with the thermalization medium (i.e. dye molecules). This is a much more realistic scenario than a change of the temperature of the apparatus on the timescale of the photon lifetime in the microcavity (for which one would have to account for the temperature dependence of the ratio of photons and dye electronic excitations [19]). In the limit of small losses, such that the system remains thermalized, lowering the temperature quadratically reduces the phase-space volume, and we accordingly expect a quadratic increase of the phase-space density. This is also directly seen from eq. (3), which yields a central phase-space density (in units of Planck cells $h^2$)

$$\varrho_0 = N \left(\frac{\hbar\Omega}{k_B T}\right)^2.$$

When cooling down 5800 K spectral temperature sunlight radiation in an idealized sunlight concentrator to room temperature (300 K), we correspondingly estimate an achievable enhancement in phase-space density of $(5800/300)^2 \approx 370$.
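The quoted compression estimate follows directly from the central phase-space density above; the short sketch below evaluates $\varrho_0 \propto (\hbar\Omega/k_B T)^2$ at both temperatures ($\Omega$ again the assumed trap frequency, which cancels from the ratio).

```python
# Quadratic phase-space gain upon cooling: rho_0 = N*(hbar*Omega/(kB*T))^2.
import math

hbar, kB = 1.054571817e-34, 1.380649e-23
Omega, N = 2 * math.pi * 4e10, 1.0   # assumed trap frequency, one photon

rho = {T: N * (hbar * Omega / (kB * T)) ** 2 for T in (5800.0, 300.0)}
print(f"rho_0(300 K) / rho_0(5800 K) = {rho[300.0] / rho[5800.0]:.0f}")  # ~374
```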
IV.) Realizing a Sunlight-Pumped Two-Dimensional Photon Gas
The optical resonator used is formed by two highly reflecting dielectric mirrors with R = 1 m spherical curvature, spaced by $D_0 \simeq 1.3\,\mu\mathrm{m}$. The microcavity is filled with rhodamine 6G dye dissolved in ethylene glycol; the quantum yield of this dye is $\simeq 95\%$ [20]. Prior to filling, the solution is filtered. In the liquid solution, sub-picosecond timescale collisions with solvent molecules, being much faster than the electronic transitions, rapidly alter the rovibrational state of the molecules. The used dye well fulfills the Boltzmann-like Kennard-Stepanov frequency scaling between the spectral profiles of absorption $\alpha(\omega)$ and emission $f(\omega)$, which for a small bandwidth can be written as $f(\omega)/\alpha(\omega) \propto \exp(-\hbar\omega/k_B T)$ [21]. By repeated absorption re-emission processes, this thermal character of the dye molecules is rapidly transferred onto the photon degrees of freedom.
As described above, the short spacing of the cavity mirrors, which induces a large transverse mode spacing, makes the photon gas two-dimensional, with the longitudinal mode number being fixed (to q = 7 here). Correspondingly, the contact to the dye heat bath is expected to drive the photon gas to a thermalized distribution above the low-frequency cutoff. Other than in a perfect photon box, thermalization in our experimental system is mainly limited by the finite mirror reflectivity. Figure 2b gives the calculated reflectivity of the dielectric mirrors as a function of both wavelength and angle of incidence for unpolarized light, with blue (yellow) color code corresponding to near-unity reflection (transmission) coefficient. For radiation propagating under small angles with respect to the optical axis, the reflectivity in the wavelength range of 540-590 nm relevant for cavity photons exceeds 99.995%, and the center wavelength of the dielectric coating is at 550 nm. For larger angles of incidence, however, the mirror reflectivity is strongly reduced and shows an intricate wavelength- and angle-dependent reflectivity pattern that is typical for multilayer dielectric structures.
Correspondingly, any isotropically emitted (spontaneous) radiation cannot fully be re-captured by the cavity, in stark contrast to the ideal photon box model system. In earlier work using a comparable experimental setup, the number of reabsorption events of a photon before being lost from the resonator was determined to be 3.8 (2.6) near the condensate threshold [16].
In order to pump the dye microcavity with sunlight, incoming daylight was directed over a mirror mounted on two orthogonally-oriented motorized rotation stages, so as to allow for continuous tracking of the sun's position. We have experimentally studied thermalization of photons in the sunlight-pumped microcavity. Typical parameters are a cutoff wavelength of $\lambda_{\mathrm{cutoff}} \simeq 587\,\mathrm{nm}$, yielding an effective photon mass $m = \hbar\omega_{\mathrm{cutoff}}/(c/n)^2 \simeq 7.7 \cdot 10^{-36}\,\mathrm{kg}$ at a dye refractive index $n \simeq 1.44$, and a trapping frequency of order $\Omega \approx 2\pi \cdot 4 \times 10^{10}\,\mathrm{Hz}$, as estimated from the quoted mirror geometry. Figure 3 gives spectra of the cavity emission recorded by analyzing the transmission through one cavity mirror with a slitless optical spectrometer, for varying concentration of the dye solution. For a low concentration, see e.g. the 0.03 mmol/l data, a near flat spectral distribution above the low-frequency cutoff is visible, while the experimental data recorded at a 1 mmol/l dye concentration agrees well with the expectations for a Boltzmann-distributed energy spectrum above the cutoff (dashed line). The former data is understood to be due to fluorescence arising from single scattering events, given that photons here leak out of the cavity before being reabsorbed; only for higher concentration does the reabsorption rate $R_{\mathrm{abs}} = \rho\,\sigma(\omega)\,c/n$, with $\rho$ as the dye concentration and $\sigma(\omega)$ as the absorption cross section, become large enough that photons relax to a thermal distribution in the trapping potential [22,23]. We point out, however, that we find the spectra recorded with the used slitless spectrometer at very low concentration of limited significance, as attributed to the large spatial spread of the trapped fluorescing photon cloud for this data. For the largest used dye concentration of 3 mmol/l, the agreement between theory and experiment is less accurate, as attributed to the reduced quantum efficiency of the dye at such high concentrations due to dimer formation, which is known to also modify the spectral properties [24]. A further effect of high concentrations is that photons undergo scattering events at a rate becoming comparable to the trapping frequency, which, given the finite recapturing probability of the fluorescence photons, can lead to cavity loss prior to relaxation of the spatial degrees of freedom.

The data given in Table 1 are normalized to the used dye concentration, in order to account for the different sunlight collection efficiencies, which due to the small single-pass absorption can well be assumed to scale linearly with concentration. Each data point corresponds to an average of 3 images (8 for the lowest-concentration data, to increase the signal-to-noise ratio) recorded with 50 ms (spatial) or 100 ms (momentum) integration time; see Figs. 4a,b for typical raw data and Fig. 5 for the calibrated photon density profiles in position and momentum space. In Fig. 5, the data at the lowest concentration can be described by a quasi-thermal distribution at T = 600 K, a temperature considerably lower than the surface temperature of the sun. For the data with concentrations above 0.3 mmol/l, the phase-space distribution is well described by a room-temperature distribution (Table 1).
Although the observed phase-space density does not increase as would be expected for a lossless system, we nevertheless observe a near constant phase-space density for concentrations up to 1 mmol/l despite the decrease in photon number with concentration.
We regard the data at 1 mmol/l (see the spectral, spatial and momentum-space data in Figs. 3 and 5) as evidence for a thermalized two-dimensional photon gas of sunlight photons at room temperature. From the camera signals recorded at a concentration of 1 mmol/l we deduce a typical output power of $P_{\mathrm{out}} = 26(10)\,\mathrm{pW}$, corresponding to an average photon number in the microcavity of $\bar{n} = 0.19(7)$. This is more than 5 orders of magnitude below the onset of Bose-Einstein condensation, for which a critical photon number of order $10^5$ must be reached ($N_c = (\pi^2/3)(k_B T/\hbar\Omega)^2 \approx 75{,}000$ for the present trap parameters) [16]. The given photon number corresponds to a chemical potential of $\mu = -12.4(5)\,k_B T$, as calculated by numerically solving $\bar{n} = \sum_i g(E_i)\, n_{\mathrm{BE}}(E_i)$, with the Bose-Einstein distribution factor $n_{\mathrm{BE}}(E) = 1/(e^{(E-\mu)/k_B T} - 1)$, where we sum over the system eigenstates. Here $g(E_i) = 2(E_i/\hbar\Omega + 1)$ denotes the energy degeneracy of modes in the microcavity. Because $\mu \ll -k_B T$, the term $-1$ in the denominator of the Bose-Einstein distribution function can be neglected and one arrives at a Boltzmann distribution.
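The numerical inversion for $\mu$ described above can be sketched as follows. The trap frequency is again the estimated value from the mirror geometry (not a stated one), energies are measured from the cutoff, and the bisection targets the measured mean photon number $\bar{n} = 0.19$.

```python
# Solve N = sum_i g(E_i)/(exp((E_i - mu)/kB*T) - 1) for mu, with
# E_i = i*hbar*Omega and degeneracy g(E_i) = 2*(E_i/(hbar*Omega) + 1) = 2*(i+1).
import math

def photon_number(mu_over_kBT, hbarOmega_over_kBT, imax=5000):
    N = 0.0
    for i in range(imax):
        e = i * hbarOmega_over_kBT                 # E_i / (kB*T)
        g = 2 * (i + 1)                            # degeneracy g(E_i)
        N += g / (math.exp(e - mu_over_kBT) - 1.0)
    return N

hbar, kB = 1.054571817e-34, 1.380649e-23
Omega, T = 2 * math.pi * 4e10, 300.0               # assumed trap frequency
x = hbar * Omega / (kB * T)

# bisect on mu/(kB*T) < 0 until N matches the measured 0.19 photons
lo, hi = -30.0, -1e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if photon_number(mid, x) < 0.19 else (lo, mid)
print(f"mu = {0.5 * (lo + hi):.1f} kB*T")   # close to the -12.4 kB*T quoted above
```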
V.) Conclusions
To conclude, we have experimentally demonstrated a sunlight pumped two-dimensional photon gas with non-vanishing chemical potential using a dye-microcavity setting.
Furthermore, we have derived universal thermodynamic laws for the scaling of the optical phase-space density of trapped low-dimensional photon gases with temperature.
For the future, it will be interesting to experimentally observe a phase-space buildup of light by cooling. A common challenge both in the field of optical quantum gases and in that of light concentrators is the minimization of photon loss. While the present microcavity experiment relies on a one-dimensional bandgap, future work could exploit systems with a three-dimensional band gap, e.g. material-filled photonic crystal cavities [25,26], to minimize losses and overcome limitations of the Liouville theorem. Such three-dimensional geometries could also allow employing absorber materials with larger optical depth to enhance the light collection efficiency. A trapping potential to confine the photon cloud in a photonic crystal may be achieved by employing spatial gradients in the band gap, which is a viable approach for light concentration in propagating geometries [27]. As a further perspective, microcavities or photonic crystals filled with quantum dots of high quantum efficiency can, besides an improved photo-stability, allow for a more broadband absorption as compared to dye molecules [28], which can then well extend into the ultraviolet spectral regime.

[Legend to Fig. 1: Incoming sunlight is scattered from the dye molecules, populates the cavity and becomes concentrated. The intensity distribution of the light trapped inside the cavity exhibits a mean spatial width $\Delta x$, and the emission transmitted through the cavity mirrors (here: bottom) diverges with an angle spread $\Delta\theta$. In the experiment, the sunlight is guided by an optical fiber for more stable coupling to the cavity.]
Glial Ca2+ Signaling Controls Endocytosis and K+ Buffering to Regulate Glial-Neuronal Communication at the Soma
Glial-neuronal signaling at synapses is widely appreciated, but how glia interact with neuronal cell bodies is less clear. Drosophila cortex glia are restricted to brain regions devoid of synapses, providing an opportunity to characterize interactions between glia and neuronal somas. Mutations in the cortex glial NCKX exchanger zydeco abolish microdomain Ca2+ oscillatory activity and elevate glial Ca2+, predisposing animals to seizures. To determine how cortex glial Ca2+ signaling controls neuronal excitability, an in vivo modifier screen for the NCKXzydeco seizure phenotype was performed. Our results indicate elevation of glial Ca2+ causes hyperactivation of calcineurin-dependent endocytosis and accumulation of early endosomes. Knockdown of sandman, a K2P channel, recapitulates NCKXzydeco seizures. Restoring glial K+ buffering by overexpressing a leak K+ channel rescues zydeco seizures. These findings indicate cortex glial Ca2+ couples to K+ buffering through calcineurin regulated endo-exocytotic balance and K2P channel expression to modulate neuronal excitability.
Introduction
Glial cells are well known to play structural and supportive roles for their more electrically excitable neuronal counterparts. However, growing evidence indicates glial Ca 2+ signaling influences neuronal physiology on a rapid time scale. In the cortex, glia and neurons exist in equal abundance (Azevedo et al., 2009) and are intimately associated. A single astrocytic glial cell contacts multiple neuronal cell bodies, hundreds of neuronal processes, and tens of thousands of synapses (Halassa et al., 2007; Ventura and Harris, 1999). Cultured astrocytes display spontaneous intracellular Ca 2+ oscillations (Takata and Hirase, 2008) and oscillations in response to neurotransmitters (Agulhon et al., 2008; Lee et al., 2010), including glutamate (Cornell-Bell et al., 1990). Glutamate released during normal synaptic transmission is sufficient to induce astrocytic Ca 2+ oscillations (Dani et al., 1992; Wang et al., 2006), which trigger Ca 2+ elevation in co-cultured neurons (Nedergaard, 1994; Parpura et al., 1994) that can elicit action potentials (Angulo et al., 2004; Fellin et al., 2006; Fellin et al., 2004; Pirttimaki et al., 2011). These astrocyte-neuron interactions suggest abnormally elevated glial Ca 2+ might produce neuronal hypersynchrony. Indeed, increased glial activity is associated with abnormal neuronal excitability (Wetherington et al., 2008), and pathologic elevation of glial Ca 2+ can play an important role in the generation of seizures (Gomez-Gonzalo et al., 2010; Tian et al., 2005). However, the molecular pathway(s) by which glia-to-neuron communication alters neuronal excitability is poorly characterized. In addition, how glia interface with synaptic versus non-synaptic regions of neurons is unclear.
All major CNS glial subtypes interact closely with neuronal cell bodies. Protoplasmic astrocytes form extensive contacts on neuronal cell bodies (Allen and Barres, 2009); satellite microglia form contacts between their cell bodies and those of neurons (Baalman et al., 2015); and specialized groups of perineuronal oligodendrocytes reside on neuronal cell bodies (Battefeld et al., 2016;Takasaki et al., 2010). While significant progress has been made in characterizing the contacts between glial cells and axons and synapses, less is understood of the signaling events between any type of glial process and neuronal cell bodies. In this regard, Drosophila provides an ideal system to study glial-neuronal soma interactions as the CNS is compartmentalized into two primary regions: the cell cortex (consisting of neuronal cell bodies and their proximal axons) and the synaptic neuropil (containing all CNS neurites and synapses). Two specialized astrocyte-like glial subtypes, astrocytes and cortex glia, are compartmentalized into these brain regions. Astrocytes extend processes that are confined to the neuropil and associate with synapses, axons, and dendrites . Drosophila astrocytes are similar to mammalian protoplasmic astrocytes in terms of their morphology, tiling behavior, expression profiles, and functions in synapse formation (Muthukumar et al., 2014;Stork et al., 2014), synaptic pruning , and regulation of synaptic function . Cortex glia, in contrast, are restricted to the cell cortex region that is devoid of synapses (Awasaki et al., 2008;Pereanu et al., 2005). Cortex glia extend fine processes into the cell cortex region and encapsulate virtually every neuronal cell body in the CNS (Awasaki et al., 2008) (Fig. S1A). Given these tight associations, cortex glia are thought to provide metabolic support and key nutrients to neurons (Volkenhoff et al., 2015). In addition, cortex glia have been shown to exhibit local microdomain Ca 2+ transients close to neuronal cell bodies that may regulate their function (Melom and Littleton, 2013). However, the role of cortex glia in modulating neuronal activity is unclear.
Previous work in our lab identified zydeco (zyd), a cortex glial-enriched Na + Ca 2+ K + (NCKX) exchanger involved in maintaining normal neural excitability. Mutations in NCKXzydeco (hereafter referred to as zyd) predispose animals to temperature-sensitive seizures and result in bang sensitivity (seizures induced following a brief vortex). Basal intracellular Ca 2+ levels are elevated in zyd cortex glia, while the near-membrane microdomain Ca 2+ oscillations observed in wildtype cortex glia are abolished. These findings indicate disruption of glial Ca 2+ regulation triggers enhanced seizure susceptibility. To determine how cortex glial Ca 2+ signaling regulates neuronal excitability, we took advantage of the zyd mutation and performed an RNAi screen for modifiers of the seizure phenotype. Here we show that chronic elevation of glial Ca 2+ causes hyperactivation of calcineurin-dependent endocytosis leading to an endo-exocytosis imbalance. In addition, sandman, a K2P channel, recapitulates zyd seizures when knocked down and acts downstream of calcineurin in cortex glia. Sandman was previously identified as the key K + channel that cycles to and from the plasma membrane in a group of central complex neurons to modulate their excitability and control sleep homeostasis in the Drosophila circadian pathway (Pimentel et al., 2016). Our findings suggest similar regulation of sandman in cortex glia could allow dynamic control of K + levels surrounding neuronal somas as a mechanism to gate neuronal excitability. Indeed, overexpression of a constitutively active K + channel in cortex glia can rescue zyd seizures. Together, these findings suggest glial Ca 2+ interfaces with calcineurin-dependent endocytosis to regulate plasma membrane protein levels and the K + buffering capacity of glia associated with neuronal somas. Disruption of these pathways leads to enhanced neuronal excitability and seizures, suggesting potential targets for future glial-based therapeutic modifiers of epilepsy.

To assay neuronal excitability directly, the giant fiber (GF) circuit is stimulated electrically while the evoked response is recorded. By progressively increasing stimulation intensity, the voltage threshold that triggers seizure activity in the GF circuit can be directly assayed, providing a readout of neuronal excitability. Most Drosophila bang-sensitive mutations display seizure induction at much lower voltages, consistent with their primary effect on membrane excitability thresholds within the neuron. For example, para bss1, a gain-of-function bang-sensitive mutation in the voltage-gated Na + channel, dramatically lowers the seizure threshold (Fig. 1B) (Parker et al., 2011). In contrast, the voltage threshold for seizure induction in zyd was not significantly different from controls (Fig. 1B), indicating basic intrinsic neuronal properties are not altered in zyd animals at rest. To determine if elevated neuronal activity is required for the stress-induced seizures in zyd mutants, we assayed seizure behavior in animals with reduced synaptic transmission and neuronal activity. Pan-neuronal knockdown of Cacophony (cac), the presynaptic voltage-gated Ca 2+ channel responsible for neurotransmitter release (Kawasaki et al., 2004; Rieckhof et al., 2003), significantly reduced neuronal activity (Fig. S1J) and rescued zyd TS-induced seizures (Fig. 1C). These findings indicate elevated neuronal activity in zyd mutants during the heat shock is required for seizure induction following dysregulation of cortex glial Ca 2+ .
A genetic modifier screen of the zyd seizure phenotype reveals glia to neuron signaling mechanisms
To elucidate pathways by which cortex glial Ca 2+ signaling controls somatic regulation of neuronal function and seizure susceptibility, we performed a targeted RNAi screen for modifiers of the zyd TS seizure phenotype in adult animals. We reasoned that removal of a gene product required for this signaling pathway would prevent zyd TS seizures when absent. We used the pan-glial driver repo-gal4 to express RNAi to knock down 847 genes encoding membrane receptors, secreted ligands, ion channels and transporters, vesicular trafficking proteins, and known cellular Ca 2+ homeostasis and Ca 2+ signaling pathway components (Tables S1, S2). The screen revealed multiple genetic interactions, identifying gene knockdowns that completely (28) or partially (21) rescued zyd seizures, caused lethality on their own (95) or synthetic lethality in the presence of zyd (5), enhanced zyd seizures (37) or triggered seizures (3) in a wildtype background (Fig. 1D, Table S1). Given the broad role of Ca 2+ as a regulator of intracellular biology, we expected elevated Ca 2+ levels in zyd mutants to interface with several potential glial-neuronal signaling mechanisms. Indeed, the screen identified multiple candidates controlling glial-to-neuron communication, including regulators of vesicular trafficking, neurotransmitter receptors and ion channels, and Ca 2+ effector proteins that suppress zyd seizures when knocked down specifically in glia (Fig. 1D, Table S1).
Beyond regulators of the zyd phenotype, the screen also identified Stim (stromal interaction molecule), an ER Ca 2+ sensor for store operated Ca 2+ entry (SOCE), as a key Ca 2+ signaling component involved in TS seizure generation. Knockdown of Stim in wildtype adult animals caused seizures that were similar to those observed in zyd mutants (Fig. 1E), suggesting defects in glial microdomain Ca 2+ (zyd) or the SOCE pathway both increase neuronal seizure susceptibility. Recordings of motor central pattern generator (CPG) output at the larval neuromuscular junction (NMJ) showed that Stim RNAi larvae exhibit rapid, unpatterned firing at 38°C, similar to zyd mutants, whereas wildtype larvae retain motor neuron bursting necessary for normal crawling behavior (Fig. 1F). To determine if these distinct Ca 2+ entry pathways impinged on a similar mechanism for seizure induction, we tested genetic interaction between SOCE defects and zyd mutants. If both Ca 2+ entry mechanisms impinged on the same pathway, they should not show synergistic interactions when both are disrupted. However, knockdown of Stim in the zyd background enhanced zyd seizures (Fig. 1E), indicating additive effects and suggesting Ca 2+ is likely to interface with distinct glial-neuronal pathways depending upon the source and location of aberrant Ca 2+ signaling. Indeed, zyd-dependent seizures require calcineurin function, while SOCE pathway dysfunction does not (see below). Distinctions between microdomain and ER-dependent Ca 2+ signaling pathways have been previously observed in mammals as well (Bindocci et al., 2017;Savtchouk and Volterra, 2018).
Given TS seizures in zyd mutants can be fully rescued by reintroducing the wildtype zyd (NCKX) protein in cortex glia, we expected the genes identified in the RNAi pan-glial knockdown screen to function specifically within this population of cells. To directly examine the cell-type specificity of the suppressor hits, the RNAi screen was repeated with drivers restricted to either the astrocyte or cortex glial subtype (Alrm-gal4 for astrocytes, and NP2222-gal4 and GMR54H02-gal4 for cortex glia; Fig. 1D, Table S1). For the majority of suppressors, rescue with the glial subtype-specific drivers was weaker, likely due to lower RNAi expression levels compared to the stronger repo-gal4 driver line (Fig. 1D). In several cases (nompC, Nmdar2), however, near complete rescue of the zyd seizure phenotype could be achieved when the gene was knocked down using the astrocyte driver Alrm-gal4 (Fig. 1D). This surprising observation indicates that although seizure induction in zyd mutants arises from abnormal Ca 2+ signaling within cortex glia, normal astrocyte function is required for either the propagation or maintenance of the subsequent excitability changes that drive the TS seizure phenotype. We recently discovered that Drosophila astrocytes employ Ca 2+ -mediated signaling pathways to dynamically regulate the surface levels of the GABA transporter, GAT, which negatively controls neuronal activity through modulation of synaptic GABA levels (Zhang et al., 2017). Together with the RNAi suppressor screen hits that function within astrocytes, these data indicate astrocytes regulate the seizure-inducing defects arising in cortex glia in the zyd mutant. For the remaining analysis, we focused on the characterization of the cortex glial Ca 2+ -dependent pathway that is mis-regulated in zyd, and how this mis-regulation promotes neuronal seizure susceptibility.
Cortex glial calcineurin activity is required for seizures in zyd mutants
We previously observed that knockdown of glial calmodulin (Cam) eliminates the zyd seizure phenotype (Melom and Littleton, 2013), suggesting a Ca 2+ /Cam-dependent signaling pathway regulates glial-to-neuronal communication. Cam is an essential Ca 2+ -binding protein that regulates multiple Ca 2+ -dependent cellular processes and is abundantly expressed in Drosophila glia (Altenhein et al., 2006), although its role in glial biology is unknown. Calcineurin (CN) is a highly conserved Ca 2+ /Cam-dependent protein phosphatase implicated in a number of cellular processes in mammals (Rusnak and Mertz, 2000). In the RNAi screen for zyd interactors, pan-glial knockdown of the regulatory CN B subunit, CanB2, completely rescued both heat-shock and vortex-induced seizures in zyd animals (Fig. 1D, 2A-C). In contrast to the continuous neuronal firing observed in zyd mutants, CPG recordings demonstrated that zyd;;repo>CanB2 RNAi#1 larvae exhibit normal rhythmic firing at 38°C similar to controls (Fig. 2B). Similar results were observed using three additional, non-overlapping CanB2 RNAi constructs (Fig. 2A, 2F). To refine the glial subpopulation in which CanB2 activity is necessary to promote seizures in zyd mutants, we knocked CanB2 down using different glial drivers. CanB2 knockdown in astrocytes resulted in no rescue of the zyd phenotype (Fig. 1D). CanB2 knockdown with a cortex glial-specific driver (NP2222) greatly improved the zyd phenotype. Animals no longer displayed continuous seizures, but the rescue was less robust compared to pan-glial knockdown, likely due to lower expression of the RNAi (Fig. 2D, Fig. S2A). Indeed, rescue was greatly enhanced by cortex glial-specific knockdown of CanB2 using two copies of the CanB2 RNAi construct (Fig. 2D), with ~90% of zyd;NP2222>CanB2 2xRNAi adult animals lacking seizures. The effect of CanB2 knockdown was specific to cortex glia, as it was insensitive to blockade of expression of the CanB2 RNAi in neurons using C155-gal80 (a neuron-specific gal4 repressor; Fig. 2D and S2A). To exclude a developmental effect of CanB2 knockdown within glia, we blocked gal4/UAS-driven CanB2 RNAi with gal80 ts and expressed a single copy of CanB2 RNAi only in adult zyd mutants. Adult flies reared at the permissive temperature for gal80 ts to allow CanB2 RNAi expression exhibited significantly fewer seizures after 3 days, with only ~20% of flies displaying zyd-like seizures by 5 days (Fig. 2E). Zyd seizure rescue by CanB2 knockdown did not result from simple alterations in motility, as repo>CanB2 RNAi and zyd;;repo>CanB2 RNAi animals exhibited normal larval light avoidance responses (Fig. S2B) and adult locomotion (Fig. S2C). In contrast to rescue of the zyd phenotype, CanB2 knockdown could not rescue TS seizures induced by increasing glial Ca 2+ through overexpression of the Ca 2+ channel TRPA (Fig. S2D) or RNAi knockdown of Stim (Fig. S2E), indicating glial-triggered seizures in these genetic backgrounds employ distinct Ca 2+ -regulated pathways to alter neuronal excitability. We conclude that CanB2 is required in cortex glia to promote zyd TS seizure activity.
CN is a heterodimer composed of a ∼60 kDa catalytic subunit (CanA) and a ∼19 kDa EF-hand Ca 2+ -binding regulatory subunit (CanB). Both subunits are essential for CN phosphatase activity. Previous studies in Drosophila demonstrated several CN subunits (CanA-14F, CanB, and CanB2) are broadly expressed in the adult Drosophila brain (Tomita et al., 2011) and that neuronal CN is essential for regulating sleep (Tomita et al., 2011). CN function within glia has not been characterized. The Drosophila genome contains three genes encoding CanA (CanA1, Pp2B-14D and CanA-14F) and two genes encoding CanB (CanB and CanB2) (Takeo et al., 2006). We found that pan-glial knockdown of two CanA subunits, Pp2B-14D and CanA-14F, partially rescued zyd heat-shock and vortex-induced seizures (Fig. 2C, F). The rescue was more robust for vortex-induced seizures than those induced by heat-shock, suggesting heat-shock is likely to be a more severe hyperexcitability trigger. Rescue was enhanced by knockdown of both Pp2B-14D and CanA-14F (Fig. 2G), with more than ~90% of zyd;;repo>2xCanA RNAi flies lacking seizures, suggesting a redundant function of these two subunits in glial cells. To further verify the effect of CanA knockdown and the glial subpopulation in which CN activity is required, we overexpressed a dominant negative (DN) form of CanA (Pp2B-14D H217Q) using either pan-glial (repo-gal4) or two different cortex glial-specific drivers (NP2222-gal4 and GMR54H02-gal4). Overexpressing Pp2B-14D H217Q resulted in ~50% of zyd;;Pp2B-14D H217Q flies becoming seizure-resistant, regardless of the driver used (Fig. 2H). Imaging of intracellular Ca 2+ in cortex glia with GMR54H02-gal4 driven myrGCaMP6s revealed that CN knockdown had no effect on either the elevated basal Ca 2+ levels or the lack of microdomain Ca 2+ events observed in zyd cortex glia (Fig. 2I-J). These observations indicate CN function is required downstream of elevated intracellular Ca 2+ , rather than to regulate Ca 2+ influx or efflux in cortex glial cells. Together, these results demonstrate that a CN-dependent signaling mechanism in cortex glia is required for the glial-neuronal communication that drives seizure generation in zyd mutants.
Calcineurin activity is enhanced in zyd cortex glia
To characterize CN activity in wildtype and zyd cortex glia, we used the CalexA system as a reporter for CN activity. CalexA (Ca 2+ -dependent nuclear import of LexA) (Masuyama et al., 2012) was originally designed for labeling active neurons in behaving animals. In this system, sustained neural activity induces CN activation and dephosphorylation of a chimeric transcription factor, LexA-VP16-NFAT (termed CalexA), which is then transported into the nucleus. The imported dephosphorylated CalexA drives GFP reporter expression in active neurons (for a schematic representation, see Fig. S2F). The CalexA components were brought into control and zyd mutant backgrounds to directly assay CN activity. A substantial basal activation of CN was observed in control 3rd instar larval cortex glia at room temperature using fluorescent imaging (Fig. 3A). CN activity and the resulting GFP expression were enhanced in zyd cortex glia (Fig. 3B). Western blot analysis of CalexA-induced GFP expression in adult head extracts revealed enhanced cortex glial CN activity in adult zyd mutants compared to controls as well (23 ± 3% enhancement, Fig. 3D-E). RNAi knockdown of CanB2 greatly reduced CalexA GFP expression, as expected (27 ± 1% reduction, Fig. 3C-E). These results demonstrate CN activity is enhanced downstream of the elevated Ca 2+ levels in zyd mutant cortex glia, and that CN activity can be efficiently reduced by RNAi knockdown of CanB2.
Pharmacological targeting of the glial calcineurin pathway rescues zyd seizures
Several seizure mutants in Drosophila can be suppressed by commonly used antiepileptic drugs (Kuebler and Tanouye, 2002; Song and Tanouye, 2008), indicating conservation of key mechanisms that regulate neuronal excitability. CN activity is strictly controlled by Ca 2+ levels, calmodulin, and CanB, and can be inhibited by the immunosuppressants cyclosporine A (CsA) and FK506. The CN inhibitor FK506 has been previously shown to reduce seizures in a rodent kindling model (Moia et al., 1994; Moriwaki et al., 1996), suggesting CN can modulate epilepsy in mammals. To assay if zyd TS seizures can be prevented with anti-CN drugs, adult flies were fed with media containing different anti-CN drugs (CsA, FK506 and CN585 (Erdmann et al., 2010)) and tested for HS-induced seizures after 0, 3, 6, 12 and 24 hours of drug feeding (red arrowheads in Fig. 4A, Fig. 4B-C). Zyd flies fed with 1 mM CsA for 12 hours showed ~80% fewer seizures than controls (Fig. 4B, C), while two other inhibitors, FK506 and CN585, had much weaker effects (Fig. 4B). Seizure rescue by CsA was dose-dependent, with less robust suppression when flies were fed with 0.3 mM CsA (Fig. 4D-E). The CsA rescue was reversible, as zyd seizures reoccurred following 12 hours of CsA withdrawal (Fig. 4C). We conclude that pharmacologically targeting the glial CN pathway can improve the outcome of glial-derived neuronal seizures in the zyd mutant.
Cortex glial knockdown of the sandman two-pore-domain K + channel mimics zyd seizures
To explore how CN hyperactivation promotes seizures, we conducted a screen of known and putative CN targets using RNAi knockdown with repo-gal4. We concentrated our screen on putative CN target genes that are involved in signal transduction (Table S3). This screen revealed that pan-glial knockdown of sandman (sand), the Drosophila homolog of TRESK (KCNK18) and a member of the two-pore-domain K + channel family (K2P), caused adult flies to undergo TS-induced seizures similar to zyd mutants (Fig. 5A). Vortex-induced seizures in repo>sand RNAi were less severe than those observed in zyd, with only ~50% of sand RNAi flies showing seizures (Fig. S3A). TS-induced seizures in repo>sand RNAi adults were found to have the same kinetics and temperature threshold as seizures observed in zyd mutants (Fig. 5A, B), and CPG recordings showed that repo>sand RNAi larvae exhibit rapid, unpatterned firing at 38°C, similar to zyd mutants (Fig. 5C). Cortex glial-specific knockdown of sand recapitulated ~50% of the seizure effect when two copies of the RNAi were expressed (Fig. 5A, B). The less robust effect observed with the cortex glial driver could be due to less effective RNAi knockdown or secondary to a role for sand in other glial subtypes. To determine if sand functions in other glial subtypes to mimic the zyd seizure pathway, we expressed sand RNAi using the pan-glial driver repo-gal4 and inhibited expression specifically in cortex glia with GMR54H02>gal80. In the absence of cortex glial knockdown of sand, seizure generation was suppressed (Fig. 5A). Similar to zyd mutants, sand RNAi animals did not show changes in general activity and locomotion at room temperature (Fig. 5D). Finally, unlike the case with Stim removal in the SOCE pathway, RNAi knockdown of sand did not enhance the zyd phenotype (Fig. S3B), suggesting seizures due to loss of sand and zyd impinge on a similar pathway.
Mammalian astrocytes modulate neuronal network activity through regulation of K + buffering (Bellot-Saez et al., 2017), in addition to their role in uptake of neurotransmitters such as GABA and glutamate (Murphy-Royal et al., 2017). Human Kir4.1 potassium channels (KCNJ10) have been implicated in maintaining K + homeostasis, with mutations in the loci causing epilepsy (Haj-Yasein et al., 2011). However, Kir channels are unlikely to be the only mechanism for glial K + clearance, as Kir4.1 channels account for less than half of the K + buffering capacity of mature hippocampal astrocytes (Ma et al., 2014). To determine if cortex glial Kir channels regulate seizure susceptibility in addition to sand, we used repo-Gal4 to knock down all three Drosophila Kir family members (Irk1, Irk2 and Irk3). Pan-glial knockdown of the Drosophila Kir family did not cause seizures (Fig. S3C), while knock down of either Irk1 or Irk2 slightly enhanced the zyd phenotype (Fig. S3D). Similarly, repo-gal4 knockdown of other well-known Drosophila K + channels beyond the Kir family also did not cause seizures (Fig. S4C), indicating sand is likely to play a preferential role in K + buffering in Drosophila cortex glia.
The mammalian sand homolog, TRESK, is directly activated by CN dephosphorylation (Czirjak et al., 2004; Enyedi and Czirjak, 2015), while Drosophila sand was shown to be modulated in sleep neurons by activity-induced internalization from the plasma membrane (Pimentel et al., 2016). Regardless of the mechanism by which CN may regulate the protein, we hypothesized that sand is epistatic to CN in controlling zyd-mediated seizures. Indeed, inhibition of CN by RNAi or CsA did not alter sand RNAi-induced seizures (Fig. 5E), placing sand downstream of CN activity. Furthermore, knockdown of sand in the zyd background does not alter the elevated basal Ca2+ or the lack of microdomain Ca2+ events in zyd mutants (Fig. 5F, G), suggesting sand is downstream of the abnormal Ca2+ signaling in zyd. Overall, these findings suggest elevated Ca2+ in zyd mutants leads to hyperactivation of CN and a subsequent reduction in sand function. These results suggest that impairment in glial buffering of rising extracellular K+ during elevated neuronal activity and stress conditions (i.e. heat shock or acute vortex) causes enhanced seizure susceptibility in zyd mutants.
Enhanced endocytosis in zyd cortex glia
We next sought to examine how elevated CN activity in zyd mutants alters sand function. The mammalian sand homolog, TRESK, is constitutively phosphorylated on four serine residues (S264 by PKA, and S274, S276 and S279 by MARK1) (Enyedi and Czirjak, 2015). Two of these residues are conserved in Drosophila sand (S264 and S276, see Fig. S3E for protein alignment). Constitutive dephosphorylation and subsequent activation of sand by CN would be predicted to increase K+ buffering following hyperactivation of the nervous system by stressors, and thus less seizure activity would be expected, the opposite of what we observed. If this regulatory mechanism were active in Drosophila cortex glia, knockdown of either PKA or Par-1 (the Drosophila MARK1 homolog) would be expected to lead to enhanced activity of sand and improvement or rescue of zyd seizures. However, neither pan-glial nor cortex-glial knockdown of these kinases (PKA-C1, PKA-C2, PKA-C3 and Par-1), nor overexpression of a PKA inhibitory peptide (PKI), altered the zyd phenotype (Table S4). Together with the prediction that regulation of sand by dephosphorylation should lead to seizure suppression, these results argue against enhanced sand dephosphorylation as the primary cause of zyd seizures.
A second mechanism to link CN hyperactivation to sand regulation is suggested by a previous study demonstrating sand expression on the plasma membrane of neurons involved in sleep homeostasis is regulated by activity-dependent internalization (Pimentel et al., 2016). Cam and CN activate several endocytic Ca2+ sensors and effectors that control Ca2+-dependent endocytosis (Xie et al., 2017). If hyperactivity of CN leads to enhanced internalization of sand and subsequent seizure susceptibility due to decreased K+ buffering capacity, interrupting cortex glial endocytosis should suppress zyd seizures. To test this model, we used cortex glial-specific RNAi to knock down genes involved in endocytosis and early endosomal processing and trafficking. Cortex glial knockdown of several essential endocytosis genes, including dynamin-1 and clathrin heavy and light chains, caused embryonic lethality. In contrast, cortex glial knockdown of Rab5, a Rab GTPase regulator of early endosome (EE) dynamics (Langemeyer et al., 2018), and Endophilin A (EndoA), a BAR-domain protein involved in early stages of endocytosis (Verstreken et al., 2002), completely suppressed zyd TS seizures (Fig. 6A). A second, non-overlapping hairpin and a dominant-negative (DN) construct for Rab5 (Rab5 DN) resulted in early larval lethality (Table S5), likely due to more efficient suppression of Rab5 activity. However, conditionally expressing Rab5 DN in adult cortex glia partially rescued zyd seizures (Fig. 6B). Rab5 is engaged in the initial step of vesicle endocytosis and recycling (Dunst et al., 2015), suggesting a key role for this process in regulating glial-to-neuronal soma signaling. To determine if the observed rescue was specifically associated with early endocytosis defects, we assayed seizure suppression in zyd mutants following knockdown of the entire family of Drosophila Rab GTPases, most of which are expressed in Drosophila cortex glia (Coutinho-Budd et al., 2017). Beyond Rab5, none of the remaining Rab proteins altered the zyd seizure phenotype or caused a behavioral phenotype on their own following RNAi-mediated knockdown (Table S5).
To assay if excess endocytosis secondary to CN hyperactivity disrupts membrane trafficking, we imaged endosomal compartments by expressing GFP-tagged Rab proteins in cortex glial cells of control and zyd animals. We found that large (>0.1 μm²) Rab5-positive early endosomes accumulated in zyd cortex glia compared to controls (Fig. 6C). Feeding zyd larvae the CN inhibitor CsA (1 mM) restored the number of Rab5 compartments to control levels (Fig. 6D). These results indicate CN hyperactivation secondary to elevated Ca2+ levels in zyd mutants increases endocytosis and the formation of early endosomes in cortex glia.
Chronic inhibition of dynamin-mediated endocytosis rescues zyd seizures
Our previous analysis of the zyd mutant indicated that basal intracellular Ca2+ is elevated in cortex glia, with Ca2+ levels increasing even more when zyd animals are heat-shocked (Melom and Littleton, 2013). This additional elevation in Ca2+ could potentially further enhance CN activity and endocytosis beyond that observed at rest. These data raise the question of whether the basal enhancement of endocytosis or the additional heat shock-induced Ca2+ increase is the primary cause of seizure susceptibility in zyd mutants. To test these two models, we conditionally manipulated endocytosis by overexpressing a TS dominant-negative form of Dynamin-1 (Shi ts) in zyd cortex glia. This mutant version of Dynamin has normal activity at room temperature and a dominant-negative function upon exposure to the non-permissive temperature (>29˚C, Fig. 6F). Acute inhibition of endocytosis by inactivation of Shi ts in cortex glia did not suppress zyd seizures, suggesting further enhancement of CN activity and endocytosis specifically during the heat shock is not likely to be the cause of the rapid-onset seizures observed in zyd mutants. Given the chronic enhancement in CN activity and endocytosis in zyd mutants demonstrated by enhanced CaLexA signaling (Fig. 3) and early endosome accumulation (Fig. 6C-D), we hypothesized that inhibiting endocytosis prior to exposing animals to a heat shock might improve their phenotype by altering the plasma membrane protein content over longer timescales. We incubated zyd;NP2222>Shi ts flies at a non-permissive temperature for Shi ts (31˚C) for either 3 or 6 hours, and then tested for heat shock-induced seizures at 38.5˚C (Fig. 6E-F). Zyd mutants alone do not undergo seizures at 31˚C, nor does pre-incubation at 31˚C alter the subsequent seizure phenotype observed at 38.5˚C. In contrast, inhibition of endocytosis for 6 hours at 31˚C in zyd mutants co-expressing Shi ts suppressed the subsequent seizures observed during a 38.5˚C heat shock in ~80% of animals (Fig. 6F). A shorter 3-hour inhibition did not cause a significant improvement in seizures. The seizure suppression observed after 6 hours of Dynamin inhibition was reversible, as adults tested 12 or 24 hours after return to room temperature regained the zyd seizure phenotype (Fig. 6G). We conclude that chronic hyperactivation of CN and endocytosis caused by elevated basal Ca2+ in zyd cortex glia is the primary cause of zyd seizures.
Increasing glial K+ uptake rescues zyd-dependent seizures
Genetic analysis of the zyd mutant indicates the primary cause of seizure susceptibility is a chronic enhancement in Ca2+-dependent CN activity and subsequent increases in endocytosis in cortex glia. We hypothesize this enhancement in endocytosis leads to increased internalization of plasma membrane proteins such as sand, which in turn disrupts K+ uptake and buffering by cortex glial cells during periods of intense activation of the nervous system (Fig. 7A). To test this model further, we assayed if artificially increasing cortex glial K+ uptake in zyd mutants by overexpressing another K+ leak channel could suppress the seizure phenotype. Indeed, constitutive cortex glial overexpression of the open K+ channel EKO (White et al., 2001) rescued vortex-induced seizures in ~75% of zyd mutants (Fig. 7B). During a heat shock, cortex glial overexpression of EKO led to a dramatic change in the behavior of ~60% of zyd animals, changing the seizure phenotype to hypoactivity or complete paralysis (Fig. 7C). CPG recordings revealed that zyd;;repo>EKO larvae lose all bursting and normal CPG activity at 38˚C (Fig. 7D, middle), while zyd;NP2222>EKO larvae regain normal wildtype-like rhythmic activity (Fig. 7D, right). These results indicate cortex glial K+ buffering is critical for neuronal excitability during states of intense excitation, as observed following heat shock or acute vortex. During these periods of intense neuronal activity in zyd mutants, defective cortex glial K+ buffering surrounding neuronal somas leads to seizures. Enhancing K+ buffering can either reverse the seizure phenotype or tip the scales toward neuronal hypo-excitability and paralysis. Future studies will be required to determine if sand function is dynamically modulated by normal microdomain Ca2+ oscillatory activity in wildtype cortex glia in response to changes in neuronal activity, which would provide a robust glial-based homeostatic mechanism to maintain neuronal spiking rates in acceptable ranges.
Discussion
Significant progress has been made in understanding glial-neuronal communication at synaptic and axonal contacts, but whether glia regulate neuronal function via signaling at somatic regions remains largely unstudied. A single mammalian astrocyte can be associated with multiple neuronal cell bodies and thousands of synapses, making it challenging to direct manipulations that alter glial signaling only at neuronal somas. Drosophila cortex glia ensheath multiple neuronal somas but do not contact synapses (Awasaki et al., 2008). These tight associations with neuronal cell bodies make cortex glia an ideal system to explore how glia regulate neuronal function at the soma. In this study, we took advantage of the zydeco TS seizure mutation in an NCKX Ca2+ exchanger to explore pathways by which cortex glial cells regulate neuronal excitability. We found that elevation of basal Ca2+ levels in cortex glia leads to hyperactivation of Ca2+/CN-dependent endocytosis. Seizures in zyd mutants can be fully suppressed either by conditional inhibition of endocytosis or by pharmacologically reducing CN activity. Two well-characterized mechanisms by which glia regulate neuronal excitability and seizure susceptibility are neurotransmitter uptake via surface transporters and spatial K+ buffering. Cortex glia do not contact synapses, making it unlikely they are exposed to neurotransmitters. Instead, cortex glial knockdown of the two-pore K+ channel (K2P) sandman, the Drosophila homolog of TRESK/KCNK18, recapitulates zyd TS seizures. These findings suggest impairment in K+ buffering during hyperactivity in zyd mutants underlies the increased seizure susceptibility, providing an unexpected link between glial Ca2+ signaling and K+ buffering. Consistent with this model (Fig. 7A), enhancing cortex glial K+ uptake by overexpressing a constitutively open K+ channel (EKO (White et al., 2001)) reduces neuronal activity in zyd mutants, rescues vortex-induced seizures, and reverses the TS behavioral phenotype from seizures to paralysis.
Astrocytes, as well as other glial subtypes, exhibit dynamic fluctuations in intracellular Ca2+ in vitro (Fatatis and Russell, 1992; Nett et al., 2002) and in vivo (Nimmerjahn et al., 2009; Porter and McCarthy, 1996). Despite decades of studies on astrocytic Ca2+ activity, the signaling pathways underlying these transients and their in vivo relevance to brain activity are poorly defined and controversial (Fiacco and McCarthy, 2018; Savtchouk and Volterra, 2018). Prior studies examining astrocytic Ca2+ signaling used several approaches to artificially increase intracellular Ca2+ (transgenic receptor expression, caged Ca2+ photolysis, and pharmacological or optogenetic stimulation) (Savtchouk and Volterra, 2018). However, it is unclear if these methods mimic physiological astrocytic responses and how these manipulations alter in vivo behavior (Agulhon et al., 2010; Fiacco et al., 2007; Wang et al., 2012a; Wang et al., 2012b). Likewise, the distinction between signaling events mediated by local microdomain Ca2+ oscillatory activity that require extracellular Ca2+ entry versus the more well-studied IP3-dependent release of Ca2+ from ER stores has only recently been appreciated (Bindocci et al., 2017; Fiacco and McCarthy, 2018; Savtchouk and Volterra, 2018). The Drosophila zyd mutation was identified in an unbiased genetic screen for behavioral mutants that triggered TS-dependent seizures, thus establishing the biological importance of the pathway before the gene mutation and cellular origin of the defect were known. The elevation in cortex glial Ca2+ levels found in zyd mutants provides a mechanism to explore how this pathway influences neuronal excitability. While we focused on CN-dependent endocytosis and K+ buffering by cortex glial cells in zyd animals, our study uncovered evidence for additional pathways by which glia modulate neuronal excitability as well. Knockdown of Stim, a key regulator of SOCE, in glia also caused TS seizures but did not require CN activity, suggesting glia possess more than one Ca2+-sensitive pathway that regulates neuronal excitability. In addition, we found evidence that astrocyte-like glia modulate the expression of seizures that arise from elevated Ca2+ signaling within cortex glia, as several RNAi suppressor hits rescue zyd seizures when knocked down using an astrocyte-specific driver. We recently found that Ca2+ elevation in astrocyte-like glia results in the rapid internalization of the astrocytic plasma membrane GABA transporter GAT and subsequent silencing of neuronal activity through elevation in synaptic GABA levels (Zhang et al., 2017). As such, Ca2+-regulated endo/exocytic trafficking of neurotransmitter transporters and K+ channels to and from the plasma membrane may represent a broadly used mechanism for linking glial Ca2+ activity to the control of neuronal excitability at synapses and cell bodies, respectively.
CN is the only Ca2+/Cam-dependent phosphatase encoded in the genome and is highly expressed throughout the brain (Furman and Norris, 2014; Goto et al., 1986a, b; Kuno et al., 1992; Polli et al., 1991). CN interacts with numerous neuronal substrates to modulate diverse functions, including receptor and ion channel trafficking, ion channel function and gene regulation (Baumgartel and Mansuy, 2012). Neuronal CN also controls synapse loss, dendritic atrophy, synaptic dysfunction, and neuronal vulnerability (Abdul et al., 2010; Reese and Taglialatela, 2011). In contrast, astrocytic CN expression has been shown to increase during aging, injury and disease. Glial cells rely on CN signaling pathways to regulate phenotypic switching/cellular activation, immune-like responses and cytokine production (Furman and Norris, 2014), with the pathway being intimately involved in neuroinflammation (Furman and Norris, 2014; Kataoka et al., 2009; Nagamoto-Combs and Combs, 2010; Rojanathammanee et al., 2013; Shiratori et al., 2010). The role of CN in astrocytes during normal brain states is unclear. The identification of calmodulin (Melom and Littleton, 2013) and CN as suppressors of glial-originated seizures in zyd mutants indicates a Cam/CN-dependent pathway is hyperactivated and impairs normal cortex glial-to-neuronal soma signaling. Although the mechanism by which CN activity is upregulated in Drosophila cortex glia is different from that observed in mammals, the ability of CN to alter neuronal activity appears similar to mechanisms employed in glial-dependent neuroinflammatory states. During injury or disease states in mammals, activated astrocytes exhibit Ca2+ dysregulation with higher intracellular Ca2+ levels, more frequent Ca2+ oscillations and elevated expression of Ca2+ channels and Ca2+-regulated proteins (Kuchibhotla et al., 2009; Lin et al., 2007; Sama and Norris, 2013). While it is clear how these changes would hyperactivate CN signaling, with extreme consequences for neuronal function, it is unclear what role basal CN activity has in glial signaling. CN regulates the expression of several key Ca2+ mediators in multiple cell types (Genazzani et al., 1999; Graef et al., 1999; Groth et al., 2007), including plasma membrane Ca2+ channels, intracellular Ca2+ release channels, and Ca2+-dependent enzymes (Baumgartel and Mansuy, 2012). However, no obvious phenotypes were observed when we decreased CN activity in CanB2 RNAi cortex glia in control animals (Fig. 3C), suggesting the primary function of CN is only engaged following states of Ca2+ elevation in cortex glia during periods of strong neuronal activity.
The link between elevated glial Ca2+ signaling and defects in K+ buffering was an unexpected observation in the zyd mutant. Effective removal of K+ from the extracellular space is vital for maintaining brain homeostasis and limits network hyperexcitability during normal brain function, as disruptions in K+ clearance have been linked to several pathological conditions (David et al., 2009; Leis et al., 2005; Somjen, 2002). In addition to ion homeostasis, astrocytic K+ buffering has been suggested as a mechanism for promoting hyperexcitability and engaging specific network activity (Bellot-Saez et al., 2017; Wang et al., 2012a). Two mechanisms for astrocytic K+ clearance have been previously identified, including net K+ uptake (mediated by the Na+/K+ ATPase pump) and K+ spatial buffering (via passive K+ influx) (Bellot-Saez et al., 2017). However, the complete repertoire of K+ channels involved in spatial K+ buffering remains elusive (Ma et al., 2014). We found that cortex glial knockdown of the Drosophila KCNK18/TRESK K2P homolog sand triggered stress-induced seizures, indicating sand is involved in K+ homeostasis in the brain. Several studies have linked other members of the K2P family, mainly TREK-1 and TWIK-1, to distinct aspects of astrocytic function: TREK-1 regulates fast glutamate release by astrocytes (Woo et al., 2012), TWIK-1 and TREK-1 mediate passive K+ conductance in astrocytes (Hwang et al., 2014), and TWIK-1 is recruited to the astrocytic membrane by mGluR3 activation (Wang et al., 2016).
We considered two potential mechanisms for CN regulation of sand, including direct dephosphorylation and altered endocytic trafficking. At rest, the mammalian sand homolog TRESK is constitutively phosphorylated. Following generation of a Ca2+ signal, TRESK is dephosphorylated and activated by CN (Enyedi and Czirjak, 2015). Constitutive dephosphorylation and activation of sand by CN could potentially result in higher K+ efflux from cortex glia and neuronal depolarization, leading to higher seizure susceptibility. However, in this scenario, K+ buffering upon hyperactivation of the nervous system should be more efficient, and thus fewer seizures would be expected. Cortex glial knockdown of the two kinases that phosphorylate TRESK, PKA and Par-1, did not cause seizures. As such, we found no evidence that sand activity is regulated by direct CN dephosphorylation of the protein in cortex glia.
Beyond dephosphorylation, sand expression on the plasma membrane of specific sleep homeostat neurons in Drosophila is regulated by activity-induced internalization (Pimentel et al., 2016). Given that astrocytes can shape neuronal circuit activity by actively altering their K+ uptake capabilities (Wang et al., 2012a), we propose that Drosophila cortex glia regulate the expression levels of sand (and potentially other K+ buffering proteins) on the cell membrane in a Ca2+-regulated fashion in normal animals. When Ca2+ is constitutively elevated in zyd mutants, this regulation is thrown out of balance. Indeed, inhibition of endocytosis in cortex glia can rescue zyd seizures, suggesting that a membrane protein responsible for the neuronal hyperexcitability phenotype is abnormally internalized in zyd cortex glia. Bypassing sand function by improving cortex glial K+ buffering/uptake capacity through overexpression of a leak K+ channel (EKO (White et al., 2001)) can reverse the zyd phenotype from seizures (caused by neuronal hyperactivity) to paralysis (caused by neuronal hypoactivity). Together, these findings indicate elevated Ca2+ levels lead to hyperactivation of CN and elevated endocytosis, sand internalization, and impairment in K+ buffering by cortex glia in zyd mutant animals (Fig. 7A).
Accumulating evidence indicates glia play contributive or even causative roles in several neurological disorders, neurodevelopmental syndromes and neurodegenerative diseases, including epilepsy, Fragile X syndrome and SMA, respectively. Increased glial activity is associated with abnormal neuronal excitability (Wetherington et al., 2008), and pathologic elevation of glial Ca2+ has been suggested to play an important role in the generation of seizures (Gomez-Gonzalo et al., 2010; Tian et al., 2005). Approximately 50 million people worldwide have epilepsy, making it one of the most common neurological diseases globally (World Health Organization, 2018, http://www.who.int/en/). The traditional view assumes that epileptogenesis occurs exclusively in neurons. However, an astrocytic basis for epilepsy was proposed almost two decades ago (Gomez-Gonzalo et al., 2010; Tashiro et al., 2002; Tian et al., 2005). In a nonpathological state, glia display Ca2+ oscillations spontaneously (Takata and Hirase, 2008) and in response to physiological neuronal activity (Wang et al., 2006). One widely studied output of Ca2+ oscillations is gliotransmission, the glial release of certain transmitters (Agulhon et al., 2008; Lee et al., 2010). Indeed, Ca2+-dependent glutamate release from astrocytes causes synchronous currents in neighboring neurons (Angulo et al., 2004; Fellin et al., 2006; Fellin et al., 2004) and is capable of eliciting action potentials (Pirttimaki et al., 2011), suggesting abnormally elevated glial Ca2+ may produce neuronal hypersynchrony through enhanced gliotransmission. In addition, in vivo work demonstrated that several anti-epileptic drugs reduce glial Ca2+ oscillations (Tian et al., 2005). Although increased glial activity has been associated with abnormal neuronal excitability, the role of glia in the development and maintenance of seizures, and the exact pathway(s) by which abnormal glial Ca2+ alters glia-to-neuron communication and neuronal excitability, are poorly characterized (Wetherington et al., 2008). A second proposed mechanism by which astrocytes regulate neuronal excitability and seizure susceptibility involves the uptake and redistribution of K+ ions (Bellot-Saez et al., 2017; Wang et al., 2012a) and neurotransmitters (Rose et al., 2017) from the extracellular space. In this study, we explored the pathways that are activated within glial cells in response to the abnormally elevated glial Ca2+ that triggers seizures. We found that both mechanisms are at play in Drosophila cortex glial cells, with elevated intracellular Ca2+ leading to impaired K+ buffering. We identified several other suppressors of seizure induction in zyd mutants that are involved in GPCR signaling and vesicle trafficking, suggesting additional glial pathways may impact neuronal activity as well.
Current estimates suggest ~70% of children and adults with epilepsy can be successfully treated with current anti-epileptic drugs (World Health Organization, 2018, http://www.who.int/en/). The observation that several anti-epileptic drugs reduce glial Ca2+ oscillations in vivo (Tian et al., 2005), together with the fact that ~30% of epilepsy patients are non-responders, suggests that pharmacologically targeting glial pathways might be a promising avenue for future drug development in the field. Several neuronal seizure mutants in Drosophila have already been demonstrated to respond to common human anti-epileptic drugs, indicating key mechanisms that regulate neuronal excitability are conserved from Drosophila to humans. Indeed, zyd-induced seizures can be rescued when animals are fed a CN inhibitor (Fig. 4), indicating pharmacological targeting of the glial CN pathway can improve the outcome of a glial-derived seizure mutant. Prior studies have also shown improvement following treatment with the CN inhibitor FK506 in a rodent kindling model (Moia et al., 1994; Moriwaki et al., 1996). These data suggest CN activity regulates epileptogenesis in both Drosophila and mammalian models. Further characterization of how glia detect, respond to, and actively shape neuronal excitability is critical to our understanding of neuronal communication and the future development of new treatments for neurological diseases like epilepsy.
Drosophila genetics and molecular biology
Flies were cultured on standard medium at 22°C unless otherwise noted. zydeco (zyd1, here designated as zyd) mutants were generated by ethane methyl sulfonate (EMS) mutagenesis and identified in a screen for temperature-sensitive (TS) behavioral phenotypes. The UAS/Gal4 and LexAop/LexA systems were used to drive transgenes in glia, including repo-gal4, a pan-glial driver; NP2222-gal4, a cortex-glial specific driver; GMR54H02-gal4, expressed in a smaller set of cortex glial cells; and alrm-gal4, an astrocyte-like glial cell specific driver. The UAS-dsRNAi flies used in the study were obtained from the VDRC (Vienna, Austria) or the TRiP collection (Bloomington Drosophila Stock Center, Indiana University, Bloomington, IN, USA). All screened stocks are listed in supplementary material (Table S2). UAS-myrGCaMP6s was constructed by replacing GCaMP5 in the previously described myrGCaMP5 transgenic construct. Transgenic flies were obtained by standard germline injection (BestGene Inc.). For all experiments described, only male larvae and adults were used. In RNAi experiments, the animals also carried the UAS-dicer2 transgenic element on the X chromosome to enhance RNAi efficiency. For survival assays, embryos were collected in groups of ~50 and transferred to fresh vials (n=3), and 3rd instar larvae and/or pupae were counted. Survival rate (SR) was calculated as the number of larvae/pupae counted divided by the number of embryos collected. For conditional expression using Tub-gal80 ts (Figs. 1A, 2E and 6B), animals of the designated genotype were reared at 22°C to adulthood with gal80 suppressing gal4-driven transgene expression (zyd RNAi, CanB2 RNAi and Rab5 DN, respectively). Flies were then transferred to a 31°C incubator to inactivate gal80 and allow gal4-driven expression or knockdown for the indicated period. For inhibiting transgene expression in cortex glia (Fig. 5A), GMR54H02-lexA was used to express gal80 from LexAop-gal80. Transgene expression was similarly inhibited specifically in neurons (Fig. 2D).
Transgenic lines generated for this study:
UAS-myrGCaMP6s
LexAop-myrGCaMP6s
For a complete list of all RNAi stocks used in this study, see Table S2.
Behavioral analysis:
All experiments were performed using groups of ~10-20 males.
1. Temperature-sensitive seizures/zyd modifier screen: Adult males aged 1-2 days were transferred in groups of ~10-20 flies (n≥3; the total number of flies tested in all assays was always >40) into preheated vials in a water bath held at the indicated temperature with a precision of 0.1°C. Seizures were defined as the condition in which the animal lies incapacitated on its back or side with legs and wings contracting vigorously. Paralysis was defined as the condition in which the animal fell to the bottom of the vial and exhibited no movement. For screening purposes, only flies that showed normal wildtype-like behavior (i.e. walking up and down the vial walls) during heat shock were counted as successful rescue. To analyze behavior in more detail, we characterized four behavioral phenotypes: wall climbing (flies climbing on the vial walls), bottom dwelling (flies on the bottom of the vial, standing/walking without seizures), partial seizures (flies on the bottom of the vial, seizing most of the time) and complete seizures (flies constantly lying on their side or back with legs twitching). For assaying seizures in larvae, 3rd instar males were gently washed with PBS and transferred to 1% agarose plates heated to 38°C using a temperature-controlled stage. Larval seizures were defined as continuous, unpatterned contraction of the body wall muscles that prevented normal crawling behavior. For determining the seizure temperature threshold, groups of 10 animals were heat-shocked at the indicated temperature (either 37.5, 38, 38.5 or 39˚C). The threshold was defined as the temperature at which >50% of the animals were seizing after 1 minute.
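For clarity, the threshold criterion above can be written out explicitly. The short sketch below illustrates only the bookkeeping; it is not code from the original study, and the counts shown are hypothetical.

```python
# Minimal sketch of the seizure temperature-threshold criterion described
# above: the threshold is the lowest tested temperature at which >50% of
# animals are seizing after 1 minute. All data values are hypothetical.

def seizure_threshold(fraction_seizing):
    """fraction_seizing: dict mapping temperature (C) -> fraction of the
    group seizing after 1 minute. Returns the threshold temperature, or
    None if no tested temperature exceeds the 50% criterion."""
    for temp in sorted(fraction_seizing):
        if fraction_seizing[temp] > 0.5:
            return temp
    return None

# Example with the four tested temperatures from the protocol:
counts = {37.5: 2, 38.0: 4, 38.5: 8, 39.0: 10}   # seizing animals out of 10
fractions = {t: n / 10 for t, n in counts.items()}
print(seizure_threshold(fractions))  # -> 38.5
```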
2. Bang sensitivity:
Adult male flies in groups of ~10-20 males (n=3) were assayed 1-2 days post-eclosion. Flies were transferred into empty vials and allowed to rest for 1-2 h. Vials were vortexed at maximum speed for 10 seconds, and the number of flies that were upright and mobile was counted at 10-second intervals.
3. Light avoidance:
These assays were performed using protocols described previously following minor modifications. Briefly, pools of ~20 3 rd instar larvae (108-120 hours after egg laying) were allowed to move freely for 5 minutes on Petri dishes with settings for the phototaxis assay (Petri dish lids were divided into quadrants, and two of these were blackened to create dark environment). The number of larvae in light versus dark quadrants was then scored (n=4). Response indices (RI) were calculated as: = 4. Activity monitoring using the MB5 system: Adult flies activity was assayed using the multi-beam system (MB5, TriKinetics) as previously described. Briefly, individual males aged 1-3 days were inserted into 5 mm×80 mm glass pyrex tubes. Activity was recorded following a 20-30 minutes acclimation period. Throughout each experiment, flies were housed in a temperature-and light-controlled incubator (25°C, ∼40-60% humidity). Post-acquisition activity analysis was performed using Excel to calculate activity level across 1-minute time bins (each experimental run contained 8 control animals and 8 experimental animals, n≥3).
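The two calculations in items 3 and 4 can be sketched as follows. The exact RI formula did not survive in the text, so the normalization used here (dark minus light over total) is an assumption of this sketch, as is the per-minute resampling detail.

```python
import pandas as pd

# Light-avoidance response index. The exact formula is not preserved in
# the text; a common convention, assumed here, is
# (N_dark - N_light) / N_total.
def response_index(n_dark, n_light):
    return (n_dark - n_light) / (n_dark + n_light)

print(response_index(n_dark=16, n_light=4))  # -> 0.6 (hypothetical counts)

# MB5 activity analysis: collapse beam-break counts into 1-minute bins per
# fly, mirroring the Excel analysis described above. 'events' is assumed
# to have a datetime index and one column per fly.
def one_minute_bins(events: pd.DataFrame) -> pd.DataFrame:
    return events.resample("1min").sum()
```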
5. Gentle touch assay: 3rd instar male larvae (108-120 hours after egg laying) were touched on the thoracic segments with a hair during forward locomotion. No response, a stop, head retraction, or a turn were grouped as type I responses, and initiation of a single full-body retraction or multiple full-body retractions were categorized as type II reversal responses.
Results were pooled in groups of 20 males per assay (n=3).
Electrophysiology
Intracellular recordings of wandering 3rd instar male larvae were performed in HL3.1 saline (in mM: 70 NaCl, 5 KCl, 4 MgCl2, 0.2 CaCl2, 10 NaHCO3, 5 trehalose, 115 sucrose, 5 HEPES-NaOH, pH 7.2) containing 1.5 mM Ca2+ using an Axoclamp 2B amplifier (Molecular Devices) at muscle fiber 6/7 of segments A3-A5. For recording the output of the central pattern generator, the CNS and motor neurons were left intact. Temperature was controlled with a Peltier heating device and continually monitored with a microprobe thermometer. Giant fiber recordings were performed as previously described.
In vivo Ca 2+ imaging
UAS-myrGCaMP6s was expressed in glia using the drivers described above. 2nd instar male larvae were washed with PBS and placed on a glass slide with a small amount of Halocarbon oil #700 (LabScientific). Larvae were turned ventral side up and gently pressed with a coverslip and a small iron ring to inhibit movement. Images were acquired with a PerkinElmer Ultraview Vox spinning disk confocal microscope and a high-speed EM CCD camera at 8-12 Hz with a 40X 1.3 NA oil-immersion objective using Volocity software. Single optical planes within the ventral cortex of the ventral nerve cord (VNC) were imaged in the dense cortical glial region immediately below the surface glial sheath. The average myrGCaMP6s signal in cortex glia was quantified in the central abdominal neuromeres of the VNC within a manually selected ROI excluding the midline glia. Ca2+ oscillations were counted within the first minute of imaging at room temperature and then normalized to the ROI area.
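The event-rate quantification described above can be sketched as follows. The peak-detection criterion (a prominence threshold on the fluorescence trace) is an assumption of this sketch; the original analysis may have used a different detection rule.

```python
import numpy as np
from scipy.signal import find_peaks

# Sketch of the quantification described above: count Ca2+ events in the
# first minute of a myrGCaMP6s trace and normalize the rate to ROI area.
# The prominence threshold is an assumed, hypothetical detection rule.
def event_rate_per_area(trace, frame_rate_hz, roi_area_um2, prominence=0.2):
    n_frames = int(60 * frame_rate_hz)          # first minute only
    peaks, _ = find_peaks(trace[:n_frames], prominence=prominence)
    return len(peaks) / roi_area_um2            # events / min / um^2

rate = event_rate_per_area(np.random.rand(720), frame_rate_hz=10,
                           roi_area_um2=450.0)  # hypothetical inputs
```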
Drug feeding
Cyclosporin A (Sigma-Aldrich), FK506 (InvivoGen) and CN585 (Millipore) were dissolved in DMSO to a final concentration of 20 mM. The fly feeding solution included 5% yeast and 5% sucrose in water. Adult males less than 1 day old were starved for 6 hours and then transferred to a vial containing a strip of Whatman paper soaked in feeding solution containing the designated concentration of CN inhibitor, or DMSO as a control. Flies were behaviorally tested following 6, 12 or 24 hours of drug feeding.
Figure 7 (legend, continued): This disrupts the endo-exocytosis balance of cortex glial membrane proteins, such as the K2P leak channel sandman, that regulate neuronal excitability. Knockdown of sandman mimics the zyd phenotype, and the protein acts downstream of CN, indicating that impaired extracellular K+ homeostasis causes neuronal hyperexcitability and hyperactivity in zyd mutants.
Computational Simulation Study on the Influence of Spray/Wall Impact on the Mixing Characteristics of Bipropellant Centrifugal Apogee Engine
The present research investigates the impingement between spray droplets and two different coating walls to analyse the effects of impingement on the mixing characteristics of propellants. In a mechanism experiment, a droplet with variable Weber number impinging on the coating walls at different wall temperatures is observed to exhibit different behaviors, including spread, breakup, splash, suspension and rebound. Through this experiment, we establish mathematical impinging models for these two coatings, parameterized by wall temperature and incident Weber number. These models are incorporated into a simulation to describe the mixing characteristics of the two propellants in a thermal atmosphere. The results indicate a significant difference in the mixing behavior for the two coatings, in good agreement with the experimental results of firing tests. Compared with the SiCrTiZr coating, the MoSi2 coating promotes significantly better mixing, improving combustion efficiency while simultaneously lowering wall temperature, because more droplets enter the rebound regime and participate in central combustion. The oxygen-rich combustion area is therefore centralized away from the chamber wall, which is conducive to the survival of the liquid cooling film.
Introduction
Since the Apollo Space Program, bipropellant liquid engines have played an essential role in the attitude and orbit control of spacecraft, and the centrifugal engine has been widely used in many propulsion systems as a mainstream solution. It is well known that the fuel MMH and the oxidizer NTO, sprayed from a swirl injector into the chamber, participate in the combustion that powers the spacecraft after fine atomization, fast evaporation and effective mixing. However, for a centrifugal apogee engine, the atomization quality of the swirl injector is limited because of the low supply pressure provided by the propulsion system at higher flow rates. Thus, a long process is required to achieve full atomization and evaporation, and the unevaporated droplets impinge on the chamber wall and generate a series of behaviors, such as sticking, spread, splash and rebound. Some of the droplets stick to the wall and form a cooling film, while others splash or bounce back into the chamber for further mixing and combustion. Therefore, research on spray/wall impact directly concerns the engine's combustion and working temperature, and has considerable significance for engine performance. In the ESA/ESTEC Technology Research Program, the ROCFLAM (Rocket Combustion Flow Analysis Module) [1] was developed to run simulations for liquid-film cooled bipropellant rocket engines. It focused on the formation of the cooling film on a smooth platinum-alloy wall, which interacts with propellant droplets and the gas flow in the boundary layer, and was validated by calculations with different injection characteristics for a 400 N engine. However, for a coating wall with rough morphology, the situation can be more complicated, because the interaction between droplets and a coating wall has totally different mechanisms. In the research on droplet impingement, Naber and Reitz [2] proposed the well-known N-R model in 1988 to judge impinging behaviors based on the Weber number, dividing droplet/wall impinging behaviors into three states: adhesion, rebound and jetting. Bai and Gosman [3][4] considered the influence of wall temperature and defined a detailed impingement model based on both Weber number and wall temperature. Lee and Ryu [5] then developed an impingement mechanism, judged by Weber number and wall temperature, with nine categories. Naber, Farrell et al. [6] carried out experiments on the impingement between a single droplet and a hot wall, dividing the thermodynamic states into four types: liquid-film evaporation state, boiling state, transition state and Leidenfrost state. Pierre Dunand [7] researched the problem of droplets impinging on a wall whose temperature is higher than the Leidenfrost point; in his work, the two-color method was used to measure the splashed droplets' temperature and obtain detailed characteristic parameters. Mitrakusuma [8][9] conducted an experiment to investigate the effects of wettability on the impingement between a droplet with medium impact Weber number and a hot wall. Al-Roub and Farrell [10] studied the process of droplet impingement on a wall with a liquid film; their research indicated that the liquid film has a significant effect on the spray impinging behavior. Stanton [11] analysed the mechanism of spray-film interaction in detail and established a two-dimensional model of liquid film evolution. In the field of a droplet's impingement on a rough wall, Rioboo et al. [12] and Moita et al. [13] summarized the influence of various factors on the impinging mechanism and gave the following qualitative conclusions.
Roughness of a smooth wall (R_a/R_0 < 3.4e-4) led to an unstable liquid film, while roughness of a rough wall (R_a/R_0 > 2.5e-3) caused collapse of the film, where R_a is the average wall roughness and R_0 is the droplet's initial radius. When wall roughness increased, it promoted rapid splashing and finger-like fragmentation, but inhibited corona splash and partial rebound. Randy et al. [14] and M. Bussmann [15] carried out research on the effect of wall roughness on droplet impinging characteristics, analysing the correlation between the dimensionless surface average roughness (R_a/D_0) and the splash parameter K_c, where D_0 is the droplet diameter. With the development of computational technology, numerical simulation has gradually become an effective approach to research droplet impingement. Mahulkar et al. [16][17] studied the process of impingement on a hot wall employing LES (Large Eddy Simulation). Chia-Fon Lee et al. [18] conducted simulations of spray impingement in the presence of a liquid film by the DPM (Discrete Phase Model) method and obtained the vapor concentration distribution near the wall. Nikolopoulos et al. [19] carried out a two-phase continuous flow simulation, using the VOF (Volume of Fluid) method to analyse the splash characteristics of an impingement between droplets and a filmed wall. In terms of spray/wall interaction research, Andreassi et al. [20] used experimental methods to analyse the characteristics of spray/wall impact and then ran simulations with the Bai-Gosman model; this study verified the applicability of that model for a cold wall. Frank Robert Held et al. [21] also studied spray impingement on a high-temperature wall by means of simulative comparison. To sum up, many factors influence spray/wall impingement. For many bipropellant engines, the chamber wall must be coated with a high-temperature-resistant oxidation coating, so the impinging situation in the firing process is very complicated because of the high temperature and rough wall, and it is difficult to model a coating wall surface with complex morphology. In this paper, based on experimental results and the classical models of other researchers, new mathematical models adapted to this problem are established. These impinging models are used to evaluate subsequent evaporation and mixing under high-temperature and rough-wall boundary conditions.
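To make the splash-parameter correlation mentioned above concrete, one common formulation is the Mundo-type parameter K = Oh·Re^1.25 with splash for K above ~57.7. The sketch below uses that convention purely as a representative example; it is not necessarily the K_c used in [14][15].

```python
# Representative splash-parameter calculation for a water droplet, using
# the Mundo-type criterion K = Oh * Re**1.25 with splash for K > ~57.7.
# This is one common convention, not necessarily the K_c of [14][15].
RHO, SIGMA, MU = 998.0, 0.0728, 1.0e-3   # water near room temperature (SI)

def splash_parameter(velocity, diameter):
    re = RHO * velocity * diameter / MU                 # Reynolds number
    oh = MU / (RHO * SIGMA * diameter) ** 0.5           # Ohnesorge number
    return oh * re ** 1.25

K = splash_parameter(velocity=3.0, diameter=1.5e-3)     # hypothetical case
print(K, K > 57.7)    # K ~ 111 here, i.e. above the splash threshold
```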
Experimental System Introduction
The mechanism experiment is focused on the development of a single droplet when it impinges on a coating wall under given temperature conditions. In this experiment, the kinematic characteristics of the liquid droplet need to be investigated for construction of the mathematical model. A high-speed camera with microscopy was used to capture the details of the impingement between a single droplet and a coating wall. A droplet generator produced droplets of 1.5 mm diameter. The temperature control system consisted of an electric heating plate, a thermocouple and a thermal controller, providing a controlled thermal condition on the coating wall. A long-distance microscope lens was used to resolve the kinematic characteristics of the droplet when it impinged on the wall. The experimental system of droplet/wall impingement is shown in Figure 1.
Figure 1. Experimental system of droplet/wall impingement with high-speed camera and microscope.
The Weber number is a dimensionless ratio of inertial force to surface tension [22]. When the Weber number is much larger than 1, the effect of surface tension is relatively small, so the influence of the liquid properties on droplet/wall impingement is also slight. Due to the strongly corrosive and toxic characteristics of the propellants, water was used in place of MMH and NTO. The generated droplet size is 1.5 mm, so the incident Weber number depends only on the incident velocity of the droplet. This velocity is determined by controlling the height of the droplet generator, which makes the incident Weber number range from 0 to 2500 and cover the injection conditions in the chamber, including a variety of droplets from small and low-speed to large and high-speed.
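Under the stated droplet size, the relation between incident velocity and Weber number can be checked directly. The snippet below is a simple illustration using assumed room-temperature water properties.

```python
import math

# Incident Weber number We = rho * v**2 * D / sigma for a water droplet.
# Properties are assumed room-temperature values: rho in kg/m^3, sigma
# in N/m, droplet diameter D in m.
RHO, SIGMA, D = 998.0, 0.0728, 1.5e-3

def weber(velocity):
    return RHO * velocity**2 * D / SIGMA

def velocity_for(we):
    return math.sqrt(we * SIGMA / (RHO * D))

# The reported range We = 0..2500 corresponds to impact speeds up to:
print(velocity_for(2500))   # ~11 m/s for a 1.5 mm water droplet
```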
Figure 2. Two coating wall samples used in the droplet impingement experiment: sample 1, SiCrTiZr coating; sample 2, MoSi2 coating.
In order to improve the authenticity of this experiment, two kinds of real coating samples were used, as shown in Figure 2. Sample 1 was a SiCrTiZr coating with a rough surface and irregular morphology, prepared by the slurry method; its surface roughness was about 4.5 microns. Sample 2 was a MoSi2 coating made by vacuum ion plating and infiltration; its surface was relatively smooth, with some local silicide bumps and a roughness of around 2 microns.
Experimental Approach
In the experiment, the incident Weber number of a droplet was gradually increased from small to large for observation. The boiling point T_B of water at standard pressure is 100 ℃, and the Leidenfrost temperature T_L, defined as the temperature at which the droplet rebounds from the insulating vapor layer, is around 190 ℃. Therefore, the range from room temperature to 250 ℃ can cover several mechanisms of impingement when the droplet impacts the coating wall. At room temperature, there is no heat transfer with the wall surface, so the mechanism is relatively simple. When the incident Weber number of a droplet is greater than 596, the surface of sample 1 starts to form a coronal splash. However, for sample 2, the splash phenomenon occurs only when the incident Weber number increases up to 852. It indicates that the coating surface morphology does have a significant effect on the impingement behavior.
Figure 4. A droplet's impingement on sample 2 at different wall temperatures.
When wall temperature T_w is lower than T_B, the droplet spreads and then forms a liquid film after impingement, similar to the situation of impinging on a cold wall. With the increase in T_w, the mechanism changes from spread to splash or rebound, because the liquid film is strongly disturbed under boiling conditions, coupled with the decrease of liquid surface tension at higher temperature. In this way, the spread film is more likely to break up with a finger-like structure. Meanwhile, as T_w goes up, the splashing speed is also accelerated. When wall temperature T_w exceeds T_L, many suspended and broken droplets appear on the surface of sample 1, as shown in Figure 3.
Figure 5. Effect of two different coatings on the broken droplet's behavior.
Compared with sample 2, sample 1 has a more considerable surface roughness and a looser surface structure. It is easy to form many micro-holes at the solid-liquid interface, which leads to an increase in apparent thermal resistance and a decrease in heat flux between solid and liquid. By contrast, sample 2 has a low surface roughness and smooth surface morphology. It reduces the probability of cavitation formation, and its apparent thermal resistance is smaller than that of sample 1. When a droplet comes to the Leidenfrost state, the lower apparent thermal resistance improves the evaporation rate of the wall-attached liquid, causing the broken droplet to gain a more prominent propulsive force and rebound speed to move away.
Impingement Regimes and Transition Conditions
After obtaining all the information from the experiment, such as the speed, size and number of droplets after impingement, we constructed the classification of impingement regimes and transition conditions, as shown in Figure 6. The vertical coordinate is the incident Weber number of the droplets, and the horizontal coordinate is the wall temperature.
Figure 6. Comparison of impingement regimes and transition conditions for sample 1 (SiCrTiZr coating) and sample 2 (MoSi2 coating).
The main difference between these two coatings can be explained in two aspects. Firstly, the critical Weber number of the regime transition condition is different: the critical Weber number of the relatively smooth wall is higher than that of the rough wall. When wall temperature is below the Leidenfrost point, the rough wall only needs a critical Weber number of 596 to generate a splash, while the smooth wall requires 862. Secondly, when wall temperature is higher than the Leidenfrost point, the critical Weber numbers of the SiCrTiZr coating and the MoSi2 coating are 426 and 639, respectively.
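The regime map of Figure 6 can be summarized as a simple lookup. The sketch below encodes only the critical values quoted in the text (splash thresholds of 596/862 below the Leidenfrost point and 426/639 above it, with T_B = 100 ℃ and T_L ≈ 190 ℃); the regime labels are a coarse simplification of the full map.

```python
# Simplified encoding of the Figure 6 regime map, using only the critical
# Weber numbers quoted in the text. Labels are coarse-grained: above T_L
# the rough SiCrTiZr wall yields suspension, the smoother MoSi2 wall
# yields rebound, with splash added above the hot-wall threshold.
T_B, T_L = 100.0, 190.0                           # boiling / Leidenfrost (C)
WE_SPLASH = {"SiCrTiZr": 596.0, "MoSi2": 862.0}   # thresholds for T_w < T_L
WE_HOT    = {"SiCrTiZr": 426.0, "MoSi2": 639.0}   # thresholds for T_w >= T_L

def regime(coating, we, t_wall):
    if t_wall < T_L:
        base = "spread" if t_wall < T_B else "boiling-induced breakup"
        return "splash" if we >= WE_SPLASH[coating] else base
    if we >= WE_HOT[coating]:
        return "splash + suspension" if coating == "SiCrTiZr" else "splash + rebound"
    return "suspension" if coating == "SiCrTiZr" else "rebound"

print(regime("MoSi2", we=700, t_wall=220))   # -> splash + rebound
```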
Model Construction
Many impingement models from other researchers are combined with our experimental results. Based on these, an impingement model of the coating wall can be constructed through correction and fitting. Figure 7 shows the normal velocity v_b, tangential velocity u_b, droplet size d_b and incidence angle θ_b of a droplet before impingement, as well as the normal velocity v_a, tangential velocity u_a, droplet size d_a and splash angle θ_a of a droplet after impingement. According to the difference in T_w, the discussion can be divided into three cases: (1) T_w < T_B, (2) T_B ≤ T_w < T_L and (3) T_w ≥ T_L. The model of the MoSi2 coating was introduced in detail in our work with Pengfei Fu [23][24], as shown in Table 1. Following the classical models [26], splash parameters can be defined as well. The mathematical relationships established in those classical models were still followed in the model construction, but the coefficients must be fitted according to the experimental results, such as the diameter and velocity of the splashed droplet. By contrast, the model of the SiCrTiZr coating is similar to that of the MoSi2 coating when T_w is lower than T_L; the only difference is the critical Weber number in the classification. When T_w is higher than T_L, the model of the SiCrTiZr coating changes greatly, which has a significant effect on mixing and combustion. When We < 426 at high T_w, we get v_a = 0, u_a = u_b and d_a = d_b because of the suspension of the droplet. When We ≥ 426, the droplet splashes and suspends; the velocity of the splashed droplet is fitted in accordance with the experimental data as a power function of the incident Weber number. In terms of the splashed droplet's size, when We < 800, it is likewise determined by a fit to the incident Weber number. When We ≥ 800, the velocity of the splashed droplet decreases dramatically with the increase in the incident Weber number, according to the experimental data, and is described by a separate fit. In this case, the normal velocity of the suspended droplet is 0, and the tangential velocity remains equivalent to that of the incident droplet. Figure 8 shows the fitting results for the velocity and diameter of the splashed droplet.
Figure 9. Contour image of MMH vapor distribution with SiCrTiZr coating wall.
Figure 10. Contour image of NTO vapor distribution with SiCrTiZr coating wall.
For an engine with SiCrTiZr coating, more droplets of NTO tend to suspend near the chamber wall and then evaporate there after impingement. Figure 9 and Figure 10 indicate that the concentration distributions of NTO vapor and MMH vapor are separated, leading to poor mixing with a negative effect on combustion efficiency and film cooling. As for diffusion combustion, the poor mixing requires more time and space for complete chemical reactions. Especially for a coating wall with a temperature limit, it is risky that a highly oxygen-rich area forms near the chamber wall, because its high equivalence ratio can form extremely high-temperature burning gas. This situation further increases wall temperature and worsens the survival condition of the liquid film attached to the wall, which may shorten the service life of a bipropellant apogee engine. In Figure 11 and Figure 12, we can see the notable difference in the concentration distributions of NTO vapor and MMH vapor. The mixing of oxidizer-fuel vapor is more uniform and centralized, and the mixing area overlapped by the two propellants increases obviously. That is conducive to the formation of a central high-temperature flame and the improvement of combustion efficiency.
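To make the case structure of the T_w ≥ T_L model described above explicit, the sketch below assigns post-impingement quantities for the SiCrTiZr wall. Because the fitted splash coefficients did not survive in the source, the fit functions are left as hypothetical placeholders to be supplied from the Figure 8 fits.

```python
from dataclasses import dataclass

@dataclass
class Droplet:
    v: float   # normal velocity (m/s)
    u: float   # tangential velocity (m/s)
    d: float   # diameter (m)

# Hypothetical placeholders: actual power-law coefficients come from the
# experimental fits (Figure 8) and are not reproduced here.
def splash_velocity_fit(we):
    raise NotImplementedError("supply fitted v_a(We) from Figure 8")

def splash_diameter_ratio_fit(we):
    raise NotImplementedError("supply fitted d_a/d_b (We) from Figure 8")

# Case structure of the SiCrTiZr model for T_w >= T_L, as described above.
def after_impingement(drop: Droplet, we: float) -> Droplet:
    if we < 426:                       # pure suspension
        return Droplet(v=0.0, u=drop.u, d=drop.d)
    # splash + suspension: fitted outcome for the splashed fraction
    return Droplet(v=splash_velocity_fit(we),
                   u=drop.u,
                   d=splash_diameter_ratio_fit(we) * drop.d)
```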
When wall temperature is low, the liquid film can spread more in the low-temperature region to cool the engine's injector, because the critical Weber number of the MoSi2 coating is much higher than that of the SiCrTiZr coating. When wall temperature is higher than the boiling point, induced breakup of the liquid film begins. When it comes to the Leidenfrost state, droplets are likely to rebound to the central region, subsequently evaporate and further participate in combustion.
Figure 11. Contour image of MMH vapor distribution with MoSi2 coating wall.
Figure 12. Contour image of NTO vapor distribution with MoSi2 coating wall.
Comparison with the Firing Test Results
In a sea-level firing test, two engines with different coatings were tested for comparison. The experimental data showed that, compared with the MoSi2 coating, not only was the specific impulse of the engine coated with SiCrTiZr decreased by 3 s, but the surface temperature of its combustion chamber measured by the infrared thermal imager was also much higher, as shown in Figure 13.
Figure 13. Comparison of infrared thermal images of engines in the sea-level firing test.
According to the infrared thermography data, the maximum temperature of the engine with the SiCrTiZr coating was 317 ℃ higher, and the high-temperature area was obviously larger and closer to the upstream injector. This high-temperature situation is very risky for the engine's service life. Meanwhile, the lower performance reflects that the nonuniform vapor distribution and poor mixing significantly worsen combustion efficiency.
Conclusion
By conducting an experiment of droplet impingement on two different coating walls, a mathematical model was constructed with different classifications of impingement regimes and transition conditions. Through the simulation of propellant mixing based on the new model, together with the infrared data from the firing test, it is shown that the effect of coating wall surface morphology on combustion efficiency and film cooling is significant for the bipropellant centrifugal apogee engine.
(1) The surface morphology of different coatings significantly affects the classification of impingement regimes and transition conditions. Firstly, a rough wall surface is more likely to make droplets splash. Secondly, a rough wall morphology tends to make droplets suspend under conditions of high incident Weber number and high wall temperature, while a smooth wall surface is more likely to make the droplets rebound.
(2) At high temperature, the effect of wall roughness on droplet spread is weakened by the presence of vapor layer between droplet and wall surface. The rough coating wall can strengthen heat transfer at solid-liquid interface, and enhance the evaporation of droplets.
(3) The propellant mixing in the engine chamber differs between the two coating walls. The SiCrTiZr coating tends to cause the problem of an oxygen-rich, high-temperature region near the wall, with a negative effect on combustion and cooling. For the engine with the MoSi2 coating, more uniform mixing can be formed to improve the combustion efficiency and cooling performance.
Observation of Dirac-like energy band and ring-torus Fermi surface associated with the nodal line in topological insulator CaAgAs
One of the key challenges in current materials research is to search for new topological materials with an inverted bulk-band structure. In topological insulators, the band inversion caused by strong spin-orbit coupling leads to the opening of a band gap in the entire Brillouin zone, whereas an additional crystal symmetry, such as a point-group or nonsymmorphic symmetry, sometimes prohibits the gap opening at/on specific points or lines in momentum space, giving rise to topological semimetals. Despite many theoretical predictions of topological insulators/semimetals associated with such crystal symmetries, experimental realizations are still relatively scarce. Here, using angle-resolved photoemission spectroscopy with bulk-sensitive soft-x-ray photons, we experimentally demonstrate that the hexagonal pnictide CaAgAs belongs to a new family of topological insulators characterized by an inverted band structure and the mirror reflection symmetry of the crystal. We have established the bulk valence-band structure in the three-dimensional Brillouin zone, and observed the Dirac-like energy band and ring-torus Fermi surface associated with the line node, where the bulk valence and conduction bands cross on a line in momentum space under negligible spin-orbit coupling. Intriguingly, we found that no other bands cross the Fermi level, and therefore the low-energy excitations are solely characterized by the Dirac-like band. CaAgAs provides an excellent platform to study the interplay among low-energy electron dynamics, crystal symmetry, and exotic topological properties.
Introduction
Topological insulators (TIs) exhibit a novel quantum state with a metallic edge or surface state (SS) within the bulk band gap generated by strong spin-orbit coupling (SOC). The topological SS in three-dimensional (3D) TIs is characterized by a linearly dispersing Dirac-cone energy band, [1-3] which hosts massless Dirac fermions protected by time-reversal symmetry (TRS). The discovery of TIs triggered the search for new types of topological materials containing surface or bulk Dirac-cone bands protected by crystal symmetries, as represented by topological crystalline insulators (TCIs) with Dirac-cone SSs protected by mirror symmetry, [4-6] as well as 3D Dirac semimetals (DSMs) with bulk Dirac-cone bands protected by rotational symmetry (such as Cd3As2 and Na3Bi). [7-11] While the Dirac cone in DSMs is spin degenerate, breaking TRS or space-inversion symmetry (SIS) leads to the Weyl-semimetal (WSM) phase with pairs of spin-split Dirac (Weyl) cones, as recently verified in transition-metal monopnictides. [12-14] Dirac-cone states are known to provide a platform for realizing outstanding physical properties such as extremely high mobility, gigantic linear magnetoresistance, and the chiral anomaly. [15-22] While DSMs and WSMs are characterized by the crossing of bulk bands at discrete points in k space (point nodes), there exists another type of topological semimetal characterized by band crossing along a one-dimensional curve in k space (line node), called a line-node semimetal (LNSM). LNSMs are expected to show unique physical properties different from DSMs and WSMs, such as a flat Landau level, the Kondo effect, long-range Coulomb interaction, and peculiar charge polarization and orbital magnetism. [23-26] Despite many theoretical predictions of LNSMs in various material platforms, experimental studies on LNSMs are relatively scarce. [55-62] Recently, it was theoretically proposed by Yamakage et al. that the noncentrosymmetric ternary pnictides CaAgX (X = P, As) are candidates for LNSM and TI phases. [41] These materials crystallize in the ZrNiAl-type structure with space group P6̄2m (No. 189) [63] (for the crystal structure, see Fig. 1a). First-principles band-structure calculations have shown that, under negligible spin-orbit coupling (SOC), CaAgX displays a fairly simple band structure near the Fermi level (EF) with a ring-like line node (nodal ring) surrounding the Γ point of the bulk hexagonal Brillouin zone (BZ) (the bulk BZ is shown in Fig. 1b). The line node is associated with the crossing of the bulk conduction band (CB) and valence band (VB) with Ag s and P/As p character, respectively, and is protected by the mirror reflection symmetry of the crystal. When the SOC is included in the calculation, CaAgP still keeps the line node due to the very small spin-orbit gap (~1 meV), while a relatively large spin-orbit gap (~75 meV) opens along the line node in CaAgAs, making this material a narrow-gap TI. [41] The SOC thus plays a crucial role in switching between the LNSM and TI phases in CaAgX. Transport measurements on the CaAgP and CaAgAs crystals indicated metallic behavior attributed to unintentional hole doping. In this work, we report an ARPES study of CaAgAs. By utilizing bulk-sensitive soft-x-ray photons from synchrotron radiation, we established the bulk VB structure in the 3D bulk BZ. We suggest that CaAgAs is a narrow-gap TI with an ideal band structure suitable for studying the low-energy excitations linked to the bulk Dirac-like band arising from the line node.
This is demonstrated by observing the bulk Fermi surface which is solely derived from the VB and CB associated with the single line node, consistent with our first-principles band structure calculations. We discuss the consequence of our observation in relation to the exotic physical properties.
Samples and experimental
High-quality single crystals of CaAgAs were grown on sintered pellets of CaAgAs (for details, see Methods). A typical photograph of our single crystal is shown in Fig. 1c. ARPES measurements were performed with synchrotron light at BL2 of the Photon Factory, KEK. Samples were cleaved in situ along the (112̄0) crystal plane (a shiny mirror plane in Fig. 1c), as confirmed by Laue x-ray diffraction measurements on the cleaved surface (a typical Laue pattern is shown in Fig. 1d) and by the photon-energy (hν) dependence of the band dispersion. This indicates that the cleaved plane is the ky-kz plane in the hexagonal BZ (Fig. 1b). Figure 1e displays the energy distribution curve (EDC) in a wide energy region measured at hν = 580 eV. One can recognize several core-level peaks originating from the Ca (3s, 3p), Ag (4s, 4p, 4d), and As (3s, 3p, 3d) orbitals. No other core-level peaks were found in this energy range, confirming the clean sample surface.
Valence-band structure
First, we present the overall VB structure of CaAgAs. We found that soft-x-ray photons are useful for revealing the bulk electronic states of CaAgAs, as in the case of noncentrosymmetric Weyl semimetals such as TaAs, [12-14] although we had to sacrifice energy/momentum resolution compared to vacuum ultraviolet (VUV) photons. In fact, the obtained VUV data were found to suffer from large broadening along the wave vector perpendicular to the surface, probably because of the final-state effect and the rather rough nature of the cleaved surfaces; we therefore concluded that VUV photons are not best suited for resolving the 3D electronic states of CaAgAs. The measured band dispersion also confirms that the cleaving plane is (112̄0). The holelike dispersion approaching EF around the Γ point is well reproduced by the calculations, and is therefore assigned to the topmost VB with As 4p orbital character. Moreover, the good agreement of the band width between experiment and calculation signifies no apparent band-renormalization effect, suggesting weak electron correlation. It is noted that we observe a single holelike band within 1.5 eV of EF in the experiment, while the calculation predicts two holelike bands. Such a difference may be due to the finite k/energy-broadening effect as well as the matrix-element effect on the photoelectron intensity, which turned out to be rather strong in this material.

We comment here that our Hall conductivity measurement on a CaAgAs single crystal suggests the existence of hole carriers with a carrier concentration of ~ 1.6 ×, consistent with Fig. 2f (cut C in Fig. 2c), where the topmost VB always stays below EB ~ 1 eV without crossing EF.
Ring-torus Fermi surface
Having established the overall VB structure, the next important issue is the electronic structure in the vicinity of EF responsible for the physical properties. Figure 3a displays the ARPES intensity at EF as a function of kx and ky (the ΓKM plane). One immediately finds a bright intensity pattern surrounding the Γ point, in particular in the 13th BZ, confirming the absence of additional Fermi surfaces away from the Γ point.

It is also obvious from Fig. 3b that no Fermi surface exists away from Γ in the ky-kz (ΓAKH) plane. As shown in Figs. 3a and 3b, when we overlaid the calculated Fermi surface (green curves) onto the ARPES intensity (note that we assumed the location of EF to be 0.05 eV below the VB top in the calculation, to account for a small but finite hole-doping effect in the experiment), the high-intensity region coincides with the k region where the calculated Fermi surface exists.
To gain further insight into the Fermi-surface topology, we show in the top panels of Figs. 3c and 3d the ARPES intensity near EF and the intensity obtained by taking the second derivative of the EDCs, respectively, along the k cut nearly crossing the Γ point (cut A in Fig. 3a). One finds a linearly dispersive holelike band originating from the As 4p states, which is better visualized in the second-derivative plot in Fig. 3d (by a linear extrapolation of the band dispersion around EF, we have estimated the Fermi velocity to be vF = 2.1 ± 0.1 eVÅ). This band is reproduced by our calculation, as shown by the red curves in Fig. 3c, and is responsible for the outer ring in Fig. 3a. As shown in cut A of Fig. 3c, there exists another, electronlike band in the calculation, which originates from the Ag 5s orbital; this band forms the inner ring in Fig. 3a. While the intensity of the electronlike band seems weak in the original intensity (Fig. 3c), the second-derivative image in Fig. 3d shows a finite spectral weight likely arising from the electronlike band.
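The second-derivative analysis of EDCs and the linear extrapolation used to estimate vF are simple numerical operations. The following is a minimal Python sketch of both steps on a synthetic intensity map; the array shapes, the synthetic dispersion, and the smoothing parameters are illustrative assumptions, not the measured data.

import numpy as np
from scipy.signal import savgol_filter

# Synthetic ARPES intensity map I(E, k): a linearly dispersing holelike band
# broadened in energy (an illustrative stand-in for the measured data).
E = np.linspace(-1.5, 0.1, 400)          # binding-energy axis (eV), EF at 0
k = np.linspace(-0.3, 0.3, 200)          # momentum axis (1/Angstrom)
vF_true = 2.1                            # eV*Angstrom, value quoted in the text
band = -vF_true * np.abs(k)              # holelike Dirac-type dispersion
I = np.exp(-((E[:, None] - band[None, :]) / 0.05) ** 2)  # Gaussian linewidth

# Second derivative of the EDCs: smooth and differentiate twice along the
# energy axis; band positions appear as minima of d2I/dE2.
d2I = savgol_filter(I, window_length=31, polyorder=3, deriv=2, axis=0)

# Track the band position at each k as the minimum of the second derivative,
# then estimate vF by a linear fit of the dispersion close to EF.
band_pos = E[np.argmin(d2I, axis=0)]
near_EF = band_pos > -0.4                # restrict the fit to the vicinity of EF
slope = np.polyfit(np.abs(k[near_EF]), band_pos[near_EF], 1)[0]
print(f"extracted Fermi velocity ~ {abs(slope):.2f} eV*Angstrom")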
As shown in cut A of Fig. 3c, the calculated electronlike band intersects the holelike band at ~ 0.1 eV above EF, and forms the nodes at ky ~ ±0.15 Å⁻¹ under negligible SOC, since cut A is on the (0001) mirror plane (the kx-ky plane) and the nodes are protected by the mirror reflection symmetry. 41 This indicates that the electronic states between the two opposite nodes across the Γ point have an inverted band character. Thus, our observation of the electronlike feature can be regarded as a hallmark of the band inversion, which is a prerequisite for realizing an LNSM or a TI. It is remarked that with a finite SOC, an energy gap of 75 meV opens in the calculation, as can be seen from the difference between the band dispersions with (solid curves) and without (dashed curves) SOC in Fig. 3c. 41 The opening of a spin-orbit gap at the node is also seen in some other LNSM candidates such as Cu3(Pd,Zn)N, 34,35 Ca3P2, 37,49 ZrSiS, 40 CaTe, 50 and fcc alkaline-earth metals 51 .

To clarify whether the node-like feature in CaAgAs is seen at a point or on a line in k space, it is necessary to measure the band dispersion along different k slices around the Fermi surface. To this end, we show in the middle and bottom panels of Figs. 3c and 3d the intensity for cuts slightly away from the Γ point (cuts B and C in Fig. 3a), obtained with different hν's. One can recognize that the overall band structure along cut B is similar to that along cut A regarding the EF crossing of the holelike band and the presence of the electronlike feature. This is reasonable since cut B is also on the mirror plane and still crosses the calculated nodal points. On the other hand, along cut C, the holelike band moves downward and shows no EF crossing. These behaviors are consistent with the presence of a ring-shaped nodal feature (nodal ring) on the mirror plane, shown by a dashed curve in Fig. 3a.
It should be stressed again that there exists a spin-orbit gap along the nodal line in the calculation. Since the gap is almost isotropic (75±1 meV) along the nodal ring (not shown), the low-energy excitations in CaAgAs are characterized by excitations across the band gap in the k region involving the entire line node.
Unfortunately, such a band gap (as well as the topological SSs) was not resolved in the ARPES experiment, likely due to the slightly hole-doped nature of the crystal.

Considering the facts that (i) the calculated spin-orbit gap is not so small compared to other TIs and (ii) the ARPES-derived band dispersion shows reasonable agreement with the calculation near EF, it would be more reasonable to regard CaAgAs as a narrow-gap TI rather than an LNSM. It is also emphasized here that the TI nature of CaAgAs should be distinguished from that of prototypical TIs such as Bi2Se3, since, unlike in CaAgAs, the node never shows up in Bi2Se3 even without SOC.
Sample preparation
High-quality single crystals of CaAgAs were synthesized by the following procedure. An equimolar mixture of calcium chips, silver powder, and arsenic chunks was put in an alumina crucible and sealed in an evacuated quartz tube. The tube was kept at 773 K for 12 h and then at 1273 K for 12 h, followed by furnace cooling to room temperature. The obtained samples were pulverized, pressed into pellets, and sealed in quartz tubes. The pellets were sintered at 1173 K for 2 h and cooled to room temperature at a rate of 30 K h⁻¹, after which shiny hexagonal-prismatic single crystals of CaAgAs were found grown on the pellets. The quality of the crystals was checked by x-ray diffraction using a RIGAKU R-AXIS IP diffractometer.
Calculations
First-principles electronic band-structure calculations were carried out using the WIEN2k code 65 with the full-potential linearized augmented plane-wave method within the generalized gradient approximation. We used the experimental structural parameters for the calculations. 63 A 24 × 24 × 36 k-point sampling was used for the self-consistent calculations. 41
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Robust one-shot estimation over shared networks in the presence of denial-of-service attacks
Multi-agent systems often communicate over low-power shared wireless networks in unlicensed spectrum, which are prone to denial-of-service attacks. We consider the following scenario: multiple pairs of agents communicating strategically over shared communication networks in the presence of a jammer who may launch a denial-of-service attack. We cast this problem as a game between a coordinator, who optimizes the transmission and estimation policies jointly, and a jammer, who optimizes its probability of performing an attack. We consider two cases: point-to-point channels and large-scale networks with a countably infinite number of sensor-receiver pairs. When the jammer proactively attacks the channel, the game is nonconvex from the coordinator's perspective. However, despite the lack of convexity, we construct a saddle point equilibrium solution for any multivariate Gaussian distribution of the observations. When the jammer is reactive, we obtain an algorithm based on sequential convex optimization, which converges swiftly to first-order Nash equilibria. Interestingly, when the jammer is reactive, blocking the channel is often optimal even when the channel is idle, in order to create ambiguity at the receiver.
I. INTRODUCTION
Networked multi-agent systems often consist of multiple decision-making agents collaborating to perform a task. Popular examples of networked systems include ground and/or aerial robotic networks, sensor networks, and the internet of things. In order to achieve a synergistic behavior, the agents often communicate messages over a wireless network among themselves. Typically, a networked system architecture will also involve one or multiple nodes communicating with a gateway or base station. Over these links, the transmitting agent sends messages containing one or more state variables that need to be estimated at the base station. In particular, remote sensing, where one (or multiple) sensor(s) communicates its measurements over a shared wireless channel to one or more non-collocated access points or base stations, is a fundamental building block of many cyber-physical systems [1]-[3, and references therein].
There are many communication protocols that enable such local communication, such as Bluetooth, Wi-Fi, and cellular, among others. The choice of a given protocol requires meeting some specifications, but no single protocol achieves all desirable characteristics and is uniformly better than the others. For example, Low Power Wide Area Networks (LP-WANs) provide power efficiency and large coverage, leading to very cost-efficient deployments [4]. However, such protocols operate in frequency bands in the so-called unlicensed spectrum, and are therefore vulnerable to malicious agents interested in disrupting the communication link between the anchor node(s) and the base station using denial-of-service attacks. Denial-of-Service (DoS) is a class of cyber-attacks in which a malicious agent, often referred to as the jammer, may disrupt the communication link between the legitimate transmitter-receiver pair. DoS attacks are widely studied at different levels of modeling detail of the communication channel. For example, if the channel is assumed to be a physical-layer model, the jammer may introduce additional Gaussian noise to the transmitted signal. If the channel is modeled at the network layer by a packet-drop channel, the jammer may increase the probability of dropping a packet. We consider a medium access control (MAC) layer model in which the jammer may decide to block the channel by transmitting an interference signal that overwhelms the receiver, causing a packet collision.

Fig. 1. Block diagram for a remote estimation game between a coordinator and a jammer. The jammer may have access to side information on the channel's occupancy. The coordinator designs the policies for the sensor and the estimator.

X. Zhang was partially supported by the China National Postdoctoral Program for Innovative Talents (No. BX2021346), the China Postdoctoral Science Foundation (No. 2022M713316), and the National Natural Science Foundation of China (No. 12288201). M. M. Vasconcelos was partially supported by the Commonwealth Cyber Initiative.

X. Zhang is with LSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China (e-mail: xuzhang_cas@lsec.cc.ac.cn).

M. M. Vasconcelos is with the Department of Electrical Engineering at the FAMU-FSU College of Engineering, Florida State University, USA (e-mail: mm22eo@fsu.edu).
We consider the remote estimation system depicted in Fig. 1, which is comprised of multiple sensor-estimator pairs communicating over a shared wireless network, modeled by a collision channel, in the presence of a jammer. Each sensor makes a stochastic measurement Xi of a physical quantity according to a given distribution, and decides whether or not to transmit it to the corresponding estimator. Communication is costly; therefore, the sensors must transmit wisely. We consider two cases: 1. the proactive jammer, which cannot sense whether the channel is being used by the sensors; 2. the reactive jammer, which can sense the channel, i.e., has access to the number of transmitting sensors P. Jamming is assumed to be costly; therefore, the jammer must act strategically.

Finally, each estimator observes the channel output and declares a corresponding estimate X̂i for the sensor's observation so as to minimize the expected quadratic distortion between Xi and X̂i. We study this problem as a zero-sum game between a coordinator (system designer) and the jammer. Our goal is to characterize equilibrium solutions and obtain efficient algorithms to compute them. The main difference between our model and existing work in this area is the presence of a virtual binary signaling channel that can be exploited by the coordinator to guarantee a minimum level of performance of the system in the presence of DoS attacks.
There exists an extensive literature on strategic communication in the presence of jammers. This class of problems seems to have started with the seminal work of Basar [5], which obtained a complete characterization of the saddle point equilibria when the sensor measurements and the channel are Gaussian. Recently, an extension to the two-way additive Gaussian noise channel was studied by McDonald et al. in [6]. A jamming problem where the transmitter and estimator have different objectives was solved by Akyol et al. in [7] using a hierarchical game approach. A jamming problem with and without common randomness between the transmitter and estimator is studied by Akyol in [8], and a Stackelberg game formulation was considered by Gao et al. [9]. Another interesting problem formulation is due to Shafiee and Ulukus in [10], where the pay-off function is the mutual information between the channel input and output. Jamming over fading channels was considered by Ray et al. in [11] and subsequently by Altman et al. in [12]. An LTE network model was considered by Aziz et al. in [13].

Another class of remote estimation problems focuses on the state estimation of a linear time-invariant system driven by Gaussian noise under DoS attacks. Li et al. [14] studied a jamming game where the transmitter and jammer have binary actions. A SINR-based model was considered by Li et al. in [15], where the transmitter and jammer decide among multiple discrete power levels. The case of a continuum of power levels was studied by Ding et al. in [16]. A jamming model over a channel with two modes (i.e., free mode and safe mode) was analyzed by Wu et al. in [17]. Jamming problems with asymmetric feedback information and multichannel transmissions were considered by Ding et al. in [18] and [19], respectively. A Stackelberg equilibrium approach to this problem was considered by Feng et al. in [20]. The problem of optimizing the attack scheduling policy from the jammer's perspective was considered by Peng et al. in [21].

The model described herein is closely related to the work of Gupta et al. [22], [23] and Vasconcelos and Martins [24], [25], where there is a clear distinction between the channel being blocked vs. idle. As in [22], we assume that the transmission decision U may be available as side information to the jammer, but not the full input signal X. This assumption is realistic in the sense that the bits used to encode X may be encrypted. In the game considered in [22], it is assumed that the receiver is fixed, and the game is played between the sensor and the jammer. Instead, we follow Akyol [8], in which the sensor and estimator are distinct agents implementing policies optimized by a coordinator [26].1
The main contributions of the paper are summarized as follows:
1) For the proactive jammer over a point-to-point channel, we provide the optimal strategies for the coordinator and the jammer that constitute a saddle point equilibrium, which appears as two scenarios depending on the transmission and jamming costs. This result holds even though the objective function is non-convex from the coordinator's perspective.
2) For the reactive jammer over a point-to-point channel, we propose alternating between Projected Gradient Ascent (PGA) and the Convex-Concave Procedure (CCP) to achieve an approximate first-order Nash equilibrium. Our numerical results demonstrate that the proposed PGA-CCP algorithm exhibits superior convergence rates compared to the traditional Gradient Descent Ascent (GDA) algorithm. A significant contribution here is that the optimal estimator employs representation symbols with distinct values for the no-transmission and collision events, as opposed to the proactive-jammer case, in which the mean is used to estimate the observations in both situations.
3) For large-scale networks, we assume a problem with a countably infinite number of agents [28, and references therein] under the possibility of a proactive jamming attack. We compute the limiting objective function when the normalized channel capacity converges to a constant κ. In this regime, the zero-sum game between the coordinator and the jammer over large-scale networks is equivalent to a constrained minimax problem. We establish the saddle point equilibrium of the optimal strategies for the coordinator and the jammer, which consists of six scenarios based on the transmission cost, the jamming cost, and the normalized capacity.
II. SYSTEM MODEL

Consider a remote sensing system consisting of n sensors. Let [n] denote the set {1, …, n}. Each sensor makes a random measurement, which is represented by a random vector. Let Xi ∈ Rm denote the measurement of the i-th sensor. We assume for tractability that the measurements are independent and identically distributed Gaussian random vectors across sensors, that is, Xi ∼ N(µ, Σ), i ∈ [n]. We denote the probability density function (pdf) of a multivariate Gaussian random vector by f. The goal of the sensors is to communicate their measurements to one or multiple receivers over a shared wireless network of limited capacity in the presence of a jammer.
A. Transmitters
We define the following collection of policies. Definition 1 (Transmission policy): A transmission policy for the i-th sensor is a measurable function γi : Rm → [0, 1] such that P(Ui = 1 | Xi = xi) = γi(xi). When the i-th sensor makes a transmission, it sends a packet containing its identification number and its observed measurement. Given Xi = xi and Ui = 1, the signal transmitted to the receiver is Si = (i, xi). The reason this is done is to remove ambiguity regarding the origin of each measurement, since the measurements could correspond to physical quantities captured at different locations or, potentially, to completely different physical quantities.

When a sensor does not transmit, we assume that the transmitted signal corresponds to an empty packet, which is mathematically represented by Si = ∅. When a sensor transmits, it encodes the data using a cryptographic protocol. Typically, the LoRaWAN IoT standard uses 128-bit AES lightweight encryption. The nature of the protocol is not important here, but it implies that when the attacker senses the channel, it cannot decode the content of each transmitted packet. However, it is capable of detecting whether a given channel is in use, based on a threshold detector on the power level in the channel's frequency band.
We assume that the communication occurs via a wireless medium of capacity κ(n) ∈ (0, n). Notice that the capacity of the channel corresponds to the number of packets that the channel can support simultaneously, and is not related to the information-theoretic notion of capacity.

Provided that the channel is not blocked by the attacker, when the total number of transmitting sensors is below or equal to the channel capacity, the receiver observes the packets perfectly. Conversely, when the number of simultaneous transmissions exceeds the channel capacity, the receiver observes a collision symbol. We represent this as follows: let T = {i | Ui = 1} denote the index set of transmitting sensors and P = |T| the number of simultaneous transmissions. One feature of the wireless medium is that it is prone to malicious denial-of-service attacks known as jamming. There are many types of jamming attacks, but here we focus on two kinds: the proactive and the reactive jammer.
B. Proactive jamming
We define the proactive jammer as one that decides whether or not to block the channel without sensing it. Therefore, at each time instant, the decision to attack is made according to a mixed strategy, such that, with a certain probability, the jammer spends a fixed amount of energy to block the network. At the receiver, the jamming attack is perceived as if a collision among many packets had happened.

For the proactive jammer, the decision to block the channel is denoted by the variable J, which is independent of P.
C. Reactive jamming
Reactive jamming is a more sophisticated attack model in which the jammer first senses whether the channel is occupied or not. The jammer then adjusts its probability of blocking the channel based on the channel state. The reactive jamming strategy is characterized by a vector ϕ = (α, β), where α is the jamming probability when the channel is not occupied and β is the jamming probability when the channel is occupied.
D. Channel output
The channel output Y is then determined by the transmitted packets and the jammer's decision: the receiver observes the packets when the channel is neither jammed nor overloaded, the idle symbol when no sensor transmits and the jammer is idle, and the collision symbol otherwise.
E. Receiver
Finally, we define the receiver's policy. Let the receiver policy η be a collection of measurable functions, one per sensor, mapping the channel output to the estimates. From Y, the receiver forms n signals X̂1, …, X̂n, where X̂i is the estimate of the i-th sensor's observation. We assume that a coordinator plays the role of a system designer and jointly adjusts the transmission and estimation policies at the sensors and the receiver in the presence of a jammer. The approach is akin to a robust design problem in which the coordinator seeks to optimize the performance of the distributed sensing system when its operation may be affected by a DoS attack. We assume that the coordinator plays a zero-sum game with the attacker, whose objective function, given in Eq. (16), combines the estimation distortion and the transmission cost with the jamming penalty −dP(J = 1). Note that even when there are multiple sensors, there are only two players, namely, the coordinator and the jammer. We are interested in obtaining policies that constitute saddle point equilibria.

Definition 2 (Saddle point equilibrium): A policy tuple (γ*, η*, ϕ*) is a saddle point equilibrium if J((γ*, η*), ϕ) ≤ J((γ*, η*), ϕ*) ≤ J((γ, η), ϕ*) for all γ, η, and ϕ in their respective admissible policy spaces.
III. POINT-TO-POINT CHANNELS
We start our analysis by considering a point-to-point channel that can support at most one packet per time slot, i.e., n = 1 and κ(n) = 1. In this case, P = U1. From here on, we will ignore the subscripts to simplify the notation. The first step is to assume, without loss of generality, that the estimator at the receiver implements the following map:2 the receiver outputs the received measurement when a packet is delivered, x̂0 when the channel is idle, and x̂1 when a collision is observed. The variables x̂0 and x̂1 serve as representation symbols for the no-transmission and collision events, and will be optimized by the coordinator. Let x̂ ≜ (x̂0, x̂1). The estimation policy in Eq. (19) is parametrized by x̂ ∈ R2m.
A. Proactive jamming of point-to-point collision channels
We obtain the following structural result for the set of optimal transmission policies at the sensor.
Proposition 1 (Optimality of threshold policies): For a point-to-point system with a proactive attacker with a fixed jamming probability ϕ ∈ [0, 1] and an arbitrary estimation policy η indexed by representation symbols x̂ ∈ R2m, the optimal transmission strategy is3 γη,ϕ(x) = 1((x − x̂0)² ≥ ϕ(x − x̂1)² + c).

Proof: Using the law of total expectation, the definition of the estimation policy in Eq. (19), and the fact that (U, X) ⊥⊥ J, we rewrite Eq. (18) with ϕ = P(J = 1). When optimizing over γ for fixed η and ϕ, we obtain an infinite-dimensional linear program, whose solution is found by comparing the arguments of the two integrals that involve γ, i.e., x ∈ {ξ | γη,ϕ(ξ) = 1} if and only if (ξ − x̂0)² ≥ ϕ(ξ − x̂1)² + c.

Remark 1: Proposition 1 implies that the optimal transmission policy is always of the threshold type. Moreover, this threshold policy is symmetric if and only if x̂0 = 0. The optimal policy is non-degenerate if ϕ ∈ [0, 1), and degenerate when ϕ = 1. The latter corresponds to a never-transmit policy.
With a slight abuse of notation, the structure of the optimal transmission policy in Proposition 1 implies that the objective function in Eq. (18) takes the form of Eq. (24). Proposition 2: Let X ∈ Rm be a Gaussian random vector with mean µ and covariance Σ. The objective function in Eq. (24) is non-convex in x̂ and concave in ϕ. Proof: Non-convexity in x̂: we set ϕ = 0.5, c = d = 1, and X ∼ N(0, 1), and can numerically verify that the objective fails the midpoint convexity inequality in x̂0. Concavity in ϕ: for fixed x, x̂0 ∈ Rm, p(x, x̂0, ϕ) is the pointwise minimum of affine functions of ϕ; therefore, it is concave for all x ∈ Rm. Taking the expectation of p(X, x̂0, ϕ) with respect to X preserves the concavity in ϕ.
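The proof of Proposition 2 appeals to a numerical check. Below is a minimal Python sketch of such a check, assuming the pointwise cost p(x, x̂, ϕ) = min{(x − x̂0)², ϕ(x − x̂1)² + c} reconstructed from Proposition 1; the sample size and the chord used in the midpoint test are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 200_000)   # X ~ N(0, 1), as in the proof of Proposition 2
c, d = 1.0, 1.0

def J(x0, x1, phi):
    # Expected pointwise minimum over the two actions (stay silent vs. transmit),
    # reconstructed from Proposition 1, minus the jammer's expected cost d*phi.
    p = np.minimum((X - x0) ** 2, phi * (X - x1) ** 2 + c)
    return p.mean() - d * phi

# Concavity in phi for fixed representation symbols (pointwise min of affine maps):
print([round(J(0.0, 0.0, f), 4) for f in np.linspace(0.0, 1.0, 6)])

# Non-convexity in x0 at phi = 0.5: the midpoint convexity inequality fails on
# the chord [0, 10] because J saturates for large |x0|.
lhs = J(5.0, 0.0, 0.5)
rhs = 0.5 * (J(0.0, 0.0, 0.5) + J(10.0, 0.0, 0.5))
print(lhs > rhs)   # True certifies non-convexity in x0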
We proceed by minimizing Eq. (24) with respect to the estimation policy, which is a non-convex, finite-dimensional optimization problem over x̂ ∈ R2m. A classic result in probability theory implies that x̂1 = µ. However, due to the lack of convexity, it is non-trivial to find the minimizer x̂0 for an arbitrary Gaussian distribution.
B. The scalar case
We begin with the result for the scalar case. The general vector case is discussed in Appendix A.
Theorem 1 (Optimal estimator for scalar Gaussian sources): Let X be a Gaussian random variable with mean µ and variance σ². The optimal estimator is x̂0 = x̂1 = µ.

Proof: Since x̂1 = µ, after ignoring the constants and applying a change of variables, the objective function can be expressed in integral form. Taking the partial gradient of J(x̂, ϕ) with respect to x̂0 yields an expression in terms of two functions g and h, where g(v) is an odd function with g(v) < 0 for v > 0, and h(v) is nonnegative for all v. We analyze the sign of ∇x̂0 J(x̂, ϕ) in three cases: 1) For x̂0 = µ, h(v) is an even function, which implies that h(v)g(v) is an odd function, so the gradient vanishes. 2) For x̂0 > µ, we have 0 ≤ h(v) < h(−v) when v ≥ 0; since g(v) is odd and g(v) < 0 for v > 0, the gradient is positive. 3) For x̂0 < µ, we have 0 ≤ h(−v) < h(v) when v ≥ 0; since g(v) is odd and g(v) < 0 for v > 0, the gradient is negative. Therefore, we conclude that x̂0 = µ is the unique minimizer of J(x̂, ϕ).
Without loss of generality, for the remainder of this section we assume that µ = 0. The optimal transmitter and estimator strategies for a symmetric Gaussian distribution imply that the objective function for the jammer is given by Eq. (37). From Proposition 2, the objective function in Eq. (37) is concave with respect to ϕ. Therefore, we can compute the optimal jamming probability ϕ*. Let φ be defined as the solution of Eq. (38).

Theorem 2 (Optimal jamming probability for scalar Gaussian sources): Let X be a Gaussian random variable with mean 0 and variance σ². The optimal jamming probability for the optimal transmission policy in Proposition 1 and the optimal estimation policy in Theorem 1 is ϕ* = φ if G(0) ≥ 0, and ϕ* = 0 otherwise.

Proof: First, we represent Eq. (37) in integral form. Taking the derivative G(ϕ) of the objective function with respect to ϕ, we notice that G(ϕ) is a monotone decreasing function of ϕ. If G(0) ≥ 0, then the optimal jamming probability is ϕ* = φ, due to the fact that G(φ) = 0. If G(0) < 0, the objective function is decreasing in ϕ; therefore, ϕ* = 0.
Lemma 1: Let X be a Gaussian random variable with mean 0 and variance σ². The optimal jamming probability ϕ* satisfies J((γη*,ϕ*, η*), ϕ) ≤ J((γη*,ϕ*, η*), ϕ*) for all ϕ ∈ [0, 1].

Proof: Consider the objective function in integral form. When G(0) ≥ 0, Theorem 2 implies that the optimal jamming probability is ϕ* = φ, and consequently ϕ* maximizes the objective. When G(0) < 0, ϕ* = 0 maximizes J((γη*,ϕ*, η*), ϕ). Theorem 3 summarizes the saddle point strategy for the game between a coordinator, jointly designing the transmission and estimation strategies, and a proactive jammer.
Theorem 3 (Saddle point equilibrium for scalar Gaussian sources): Given a Gaussian source X ∼ N(0, σ²) and communication and jamming costs c, d ≥ 0, the saddle point strategy (γ*, η*, ϕ*) for the remote estimation game with a proactive jammer is given by: 1) if 2∫_{√c}^{∞} x² f(x) dx < d, then γ*(x) = 1(x² > c) and ϕ* = 0; 2) if 2∫_{√c}^{∞} x² f(x) dx ≥ d, then γ*(x) = 1((1 − φ)x² > c) and ϕ* = φ, where φ is the unique solution of Eq. (38). In both cases, the optimal estimator is x̂0* = x̂1* = 0.

Proof: We need to consider two cases. Case 1: Assume that 2∫_{√c}^{∞} x² f(x) dx < d. If the jammer chooses not to block the channel, i.e., ϕ = 0, Proposition 1 implies that the corresponding optimal transmission strategy is γ*(x) = 1(x² > c). Under this pair of jamming and transmission policies, Theorem 1 yields x̂0* = 0 and x̂1* = 0. If the optimal transmission strategy is γ* and the optimal estimator is η*, Lemma 1 implies that ϕ = 0 is the jammer's best response. Case 2: Assume that 2∫_{√c}^{∞} x² f(x) dx ≥ d. If the jammer blocks the channel with probability φ, Proposition 1 implies that the corresponding optimal transmission strategy is γ*(x) = 1((1 − φ)x² > c). Under this pair of jamming and transmission policies, Theorem 1 yields x̂0* = 0 and x̂1* = 0. If the optimal transmission strategy is γ* and the optimal estimator is η*, Lemma 1 implies that ϕ* = φ is the jammer's best response.

Remark 2: Notice that in case 2 of Theorem 3, the ratio c/(1 − φ) is constant for any given value of d > 0, which is determined by solving Eq. (38). Therefore, the optimal transmission policy is also uniquely determined by d.
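Theorems 2 and 3 reduce the proactive-jammer equilibrium to one-dimensional root finding. The following is a minimal Python sketch for a scalar Gaussian source, where G(ϕ) is reconstructed from the proof of Theorem 2 as the truncated second moment above the transmission threshold minus d; the parameter values are illustrative.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

sigma2, c, d = 1.0, 1.0, 0.5           # illustrative parameters, X ~ N(0, sigma2)
sigma = np.sqrt(sigma2)

def tail2(t):
    # 2 * integral_{t}^{inf} x^2 f(x) dx for X ~ N(0, sigma^2), in closed form.
    u = t / sigma
    return 2.0 * sigma2 * (1.0 - norm.cdf(u) + u * norm.pdf(u))

def G(phi):
    # dJ/dphi, reconstructed from the proof of Theorem 2: the jammer's marginal
    # gain equals the truncated second moment above the threshold minus d.
    return tail2(np.sqrt(c / (1.0 - phi))) - d

if G(0.0) < 0.0:                        # case 1 of Theorem 3: never jam
    phi_star = 0.0
else:                                   # case 2: phi solves G(phi) = 0 (Eq. (38))
    phi_star = brentq(G, 0.0, 1.0 - 1e-9)

print(f"phi* = {phi_star:.4f}, transmit iff x^2 > {c / (1.0 - phi_star):.4f}")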
C. Reactive jamming of point-to-point collision channels
In this section, we consider the case in which the attacker can sense whether the channel is occupied or not. Notice that we allow the reactive jammer to block the channel even when the sensor is not transmitting. To the best of our knowledge, the existing literature on reactive jamming attacks precludes that possibility. However, there is a reason why the jammer may engage in such counter-intuitive behavior: when the jammer only blocks a transmitted signal, it creates a noiseless binary signaling channel between the transmitter and the receiver, which may be exploited by the coordinator. If the jammer is allowed to "block" the channel when the user is not transmitting, such a binary signaling channel will no longer be noiseless, because there will be uncertainty as to whether the decision variable at the transmitter is zero or one. This scenario is illustrated in Fig. 2.

Fig. 2. Signaling channel between the sensor and the receiver. The jammer controls the transition probabilities α and β. When α = β = 0, the channel is noiseless, i.e., the receiver can unequivocally decode whether U = 1 or U = 0 from the output signal Y.
Proposition 3: For a fixed jamming policy parametrized by ϕ = (α, β) ∈ [0, 1]² and a fixed estimation policy η parametrized by x̂ ∈ R2m, the optimal transmission policy is γ(x) = 1(α(x − x̂1)² + (1 − α)(x − x̂0)² ≥ β(x − x̂1)² + c).

Proof: For a reactive jammer, the random variables X and J are conditionally independent given U. Using the law of total expectation and employing the estimation policy in Eq. (19), the cost function can be expressed as in Eq. (57). For fixed x̂ ∈ R2m and ϕ ∈ [0, 1]², the transmission policy γ that minimizes Eq. (57) is obtained by comparing the arguments of the two integrals, as above.

Given the optimal transmitter's strategy in Proposition 3, the objective function becomes J(x̂, ϕ) in Eq. (59). The coordinator wants to minimize J(x̂, ϕ) over x̂ ∈ R2m, and the jammer wants to maximize it over ϕ ∈ [0, 1]². As in Section III-A, for fixed x̂ ∈ R2m, J is a concave function of ϕ for any pdf f. However, for fixed ϕ ∈ [0, 1]², J is non-convex in x̂. Unfortunately, the structure of Eq. (59) does not allow us to use the same techniques to find a saddle point equilibrium as for the proactive jammer. It is also not clear whether saddle point solutions even exist.

For the remainder of this section, we assume that the coordinator and the jammer are solving the minimax optimization problem4 of minimizing over x̂ ∈ R2m the maximum over ϕ ∈ [0, 1]² of J(x̂, ϕ), where J(x̂, ϕ) is given by Eq. (59).
A useful alternative to the saddle point equilibrium is the set of solutions that satisfy the first-order stationarity conditions of the minimization and the maximization problems, yielding a larger class of policies called first-order Nash equilibria (FNE) [30]-[33].

Definition 3 (Approximate first-order Nash equilibrium): A pair (x̂, ϕ) is an ε-FNE if it satisfies the approximate stationarity conditions in Eqs. (61) and (62).

Proposition 4: The function J(x̂, ϕ) admits subgradients with respect to x̂ and ϕ, given in Eqs. (63) and (64). Proof: This result follows from the Leibniz rule.

Problems in the form of Eq. (60), where the inner optimization problem is concave and the outer optimization problem is non-convex, have been studied under the assumption that the gradients are Lipschitz continuous [31], [33]. Under such conditions, an algorithm known as (projected) Gradient Descent Ascent (GDA) converges to an ε-FNE. However, the gradients in Eqs. (63) and (64) are not Lipschitz continuous. We will resort to an alternative algorithm that leverages the difference-of-convex structure present in our problem.
1) Optimization algorithm for a reactive jammer:
To obtain an ε-FNE of the problem in Eq. (60), we alternate between a projected gradient ascent (PGA) step for the inner optimization problem and a convex-concave procedure (CCP) step for the outer optimization problem.
Algorithm 1 PGA-CCP algorithm
Input: pdf f, transmission cost c, jamming cost d
Output: estimated result x̂ and ϕ
1: Initialize k ← 0, ε, x̂(0), and ϕ(0)
2: repeat
3:   ϕ(k+1) = Π[0,1]²(ϕ(k) + λk ∇ϕ J(x̂(k), ϕ(k)))
4:   x̂(k+1) = A†(ϕ(k+1)) g(x̂(k), ϕ(k+1)) + µ
5:   k ← k + 1
6: until the ε-FNE conditions (Eqs. (61) and (62)) are satisfied

We start with the description of the PGA step at a point (x̂(k), ϕ(k)): ϕ is updated by a gradient ascent step followed by a projection, where {λk} is a step-size sequence (e.g., λk = 0.1/√k) and Π[0,1]² denotes the Euclidean projection onto [0, 1]². To update x̂(k) for a fixed ϕ(k+1), we use the property that Eq. (59) can be decomposed as a difference of convex functions (DC decomposition). Using the DC decomposition, we obtain a specialized descent algorithm, which is guaranteed to converge to stationary points of Eq. (59) for a fixed ϕ(k+1). Because the CCP uses more information about the structure of the objective function than standard gradient descent methods, it often leads to faster convergence [34], [35].
Because F is a quadratic function of x̂ for a fixed ϕ, we may use the first-order necessary optimality condition of the problem in Eq. (71) to find the recursion for x̂(k+1) in closed form from the partial gradients of F(x̂, ϕ) and G(x̂, ϕ) with respect to x̂, where A† denotes the Moore-Penrose pseudo-inverse of A. The CCP update can then be represented compactly in closed form, as in step 4 of Algorithm 1.
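To make the alternation concrete, here is a minimal Python sketch of the PGA-CCP loop on a toy objective with an explicit DC decomposition J(x, ϕ) = F(x) − G(x, ϕ) plus concave terms in ϕ; the quadratic F, the log-cosh G, and the constants s, d, and step sizes are illustrative assumptions and do not reproduce the paper's Eq. (59).

import numpy as np

# Toy minimax problem illustrating the alternation in Algorithm 1:
# F(x) = 0.5 x^T A x is convex, G(x, phi) = (phi_0 + phi_1) * sum(log cosh(x))
# is convex in x for phi >= 0, so J = F - G admits a DC decomposition in x,
# while J is strictly concave in phi (its gradient is given below).
A = np.array([[2.0, 0.3], [0.3, 1.5]])
s, d = 3.0, 1.0

def grad_phi(x, phi):
    # Partial gradient of the toy J with respect to phi (concave in phi).
    lc = np.log(np.cosh(x)).sum()
    return np.array([s - lc - d * phi[0], s - lc - d * phi[1]])

def grad_G(x, phi):
    # Gradient of the convex part G that is subtracted in the DC split.
    return (phi[0] + phi[1]) * np.tanh(x)

x, phi = np.array([1.0, -2.0]), np.array([0.5, 0.5])
for k in range(1, 301):
    # PGA step: gradient ascent in phi, then projection onto [0, 1]^2.
    phi = np.clip(phi + (0.1 / np.sqrt(k)) * grad_phi(x, phi), 0.0, 1.0)
    # CCP step: linearize G at the current x and minimize the convex surrogate
    # F(x) - grad_G(x_k, phi)^T x; for quadratic F this is a linear solve,
    # mirroring the closed-form pseudo-inverse update in the text.
    x = np.linalg.solve(A, grad_G(x, phi))

print("x* ~", np.round(x, 4), " phi* ~", np.round(phi, 4))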
D. Numerical results
In this subsection, we provide policies that satisfy the ε-FNE conditions. Convergence is studied using the "performance index" defined below; in this section, "optimality" is in the ε-FNE sense. We begin by presenting the optimal estimation policies for one-dimensional observations. Fig. 3 shows the optimal representation symbols x̂0 and x̂1 as a function of σ² for different jamming costs d, where X ∼ N(0, σ²), c = 1, and ε = 10⁻⁵. Notice that the representation symbols obtained for the collision and no-transmission events in the presence of the reactive jammer are always distinct, and neither is equal to the mean 0. This is in contrast with the proactive jammer case, in which x̂0 = x̂1 = 0. Therefore, the assumption of a fixed receiver with x̂0 = x̂1 = 0, as in [22], [23], leads to a loss of optimality.

We then present the optimal jamming policies for one-dimensional observations. Figure 4 shows the optimal jamming probabilities α and β as a function of σ² for different jamming costs d, with X ∼ N(0, σ²), c = 1, and ε = 10⁻⁵. Notice that the optimal jamming probabilities decrease as d increases. Moreover, the optimal jamming probability when the sensor does not transmit can be nonzero when d = 1 and d = 1.2, where the jammer aims to deceive the estimator into thinking there has been a transmission that was blocked. However, when d = 1.5, the optimal jamming probability when the sensor does not transmit is zero for all σ², since the jamming cost is high and it is not worthwhile to deceive the estimator.

We also compare the performance of our proposed PGA-CCP and the traditional GDA algorithms. Figure 5 presents the convergence curves of PGA-CCP vs. GDA for different values of the variance σ², where c = 1, d = 1, and X ∼ N(0, σ²). In this study, we set the step size of PGA-CCP to λ = 0.1 and the step sizes for the ascent and descent steps in GDA to λGA = 0.1 and λGD = 0.01, respectively. These values are consistent with the ones suggested by the analysis in [33]. We performed 100 Monte Carlo simulations for each algorithm with random initial conditions. The results indicate that PGA-CCP converges more than six times faster than GDA. Furthermore, GDA oscillates more as σ² increases, while PGA-CCP decreases steadily with a small standard deviation from the mean of the sample paths.

We proceed by presenting the simulation results for multidimensional observations. Figure 6 shows the convergence to an ε-FNE for multidimensional observations, with c = 1 and d = 1. We performed 100 Monte Carlo simulations for each algorithm. For each Monte Carlo simulation, the expectations used in Algorithm 1 are approximated by the average of 10⁴ samples drawn from X ∼ N(0m, Im×m). Notice that PGA-CCP converges quickly to zero, while GDA does not even converge to a 0.1-FNE when m = 10 and m = 50. When the dimension of the measurements is m = 100, PGA-CCP achieves a 0.02-FNE, while GDA does not even converge to a 0.3-FNE. Therefore, the numerical examples herein show that our heuristic algorithm is promising relative to GDA, especially for high-dimensional remote estimation problems.5
IV. LARGE-SCALE NETWORKS
In this section, we consider the remote estimation problem over large-scale networks in the presence of a proactive jammer, where the network consists of countably infinitely many legitimate transmitters and can support a fraction κ ∈ (0, 1) of simultaneously transmitted packets per time slot, i.e., limn→∞ κ(n)/n = κ. To keep the notation simple, we will only consider the scalar observation case. However, it is straightforward to extend our results to the vector case using the techniques developed in Appendix A. Our goal is to obtain an expression for the limiting objective function as n approaches infinity and to characterize its saddle point equilibrium. Before that, we provide the objective function of the remote estimation problem over medium-scale networks, where n is finite.
A. Objective function in medium-scale networks
In this section, we consider the remote estimation problem over medium-scale networks, which consist of n < ∞ transmitters and support κ(n) < n simultaneous packets. Let {Ui}ni=1 be the collection of transmission decisions at the sensors. For a given realization of {Ui}ni=1 ∈ {0, 1}n, define T = {i | Ui = 1} as the index set of all transmitting sensors. Given the channel input {Si}ni=1 and the jammer's decision J, the output of the collision channel of capacity κ(n) < n is as follows: the receiver observes the transmitted packets if P ≤ κ(n) and J = 0, the idle symbol ∅ if the channel is unused, and the collision symbol C otherwise. There are two kinds of collision events. Collisions of the first type are called intrinsic and are caused when the number of transmissions is above the network capacity, i.e., P > κ(n). Collisions of the second type are called extrinsic and are caused when the jammer decides to block the channel, i.e., J = 1.
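As a concrete illustration of the intrinsic/extrinsic collision mechanism, the following is a minimal Python simulation sketch of the channel just described; the packet format, the output symbols, and the symmetric threshold policy used to generate the decisions are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def collision_channel(x, transmit, kappa_n, jam):
    # Collision channel of capacity kappa_n (a sketch of the stated model):
    # x is an (n, m) array of measurements, transmit is the boolean array of
    # decisions U_i, and jam is the jammer's decision J. The output is the
    # list of delivered packets (i, x_i), the idle symbol None, or 'C'.
    P = int(transmit.sum())
    if jam == 1 or P > kappa_n:        # extrinsic or intrinsic collision
        return "C"
    if P == 0:                          # idle channel
        return None
    return [(i, x[i]) for i in np.flatnonzero(transmit)]

# Example: n = 6 sensors, capacity kappa(n) = 2, symmetric threshold policy.
n, m, kappa_n, c = 6, 1, 2, 1.0
x = rng.normal(size=(n, m))
U = (x ** 2).sum(axis=1) > c            # transmit iff ||x_i||^2 > c
print(collision_channel(x, U, kappa_n, jam=0))
print(collision_channel(x, U, kappa_n, jam=1))   # always a collision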
Finally, for the measurement at the i-th sensor, the estimator uses a policy ηi determined by representation symbols x̂i0, x̂i1 ∈ Rm, used when the i-th sensor's observation is not transmitted and when a collision occurs, respectively. Since the observations at all sensors are i.i.d., it is natural to assume that the sensors use the same transmission strategy γ. Similarly, the estimators use the same estimation strategy η, i.e., x̂i0 = x̂0 and x̂i1 = x̂1 for i ∈ {1, …, n}. Let x̂ ≜ (x̂0, x̂1). Using the law of total expectation and the mutual independence of J, Ui, and {Uℓ}ℓ≠i, the objective function in Eq. (16) can be expressed in terms of γ, x̂, and ϕ, and its limit can be taken in the asymptotic regime. Since the coordinator can choose the value of P(U = 1) by adjusting the transmission policy, the problem is equivalent to solving two problems, one with P(U = 1) ≤ κ and one with P(U = 1) > κ, and choosing the one with the smaller optimal value.

C. Characterization of saddle point solutions

The following result shows that the optimal objective function value is always obtained by solving Eq. (98).
Therefore, it suffices to consider the constrained optimization problem in Eq. (98). Define the Lagrangian function L, where λ ≥ 0 is the dual variable associated with the inequality constraint P(U = 1) − κ ≤ 0.
Proposition 7 (Optimality of threshold policies): For a large-scale system under a proactive attack with a fixed jamming probability ϕ ∈ [0, 1], dual variable λ, and an arbitrary estimation policy η indexed by representation symbols x̂ = (x̂0, x̂1) ∈ R², the optimal transmission strategy is again of the threshold type. For fixed η, λ, and ϕ, Proposition 7 determines the value of the Lagrangian L(η, ϕ, λ).

Proposition 8 (Optimal estimator): Let X be a Gaussian random variable with mean µ and variance σ². The optimal estimator is x̂0 = x̂1 = µ.

Without loss of generality, set µ = 0. The optimal values ϕ* and λ* are coupled; therefore, we must jointly maximize L over ϕ and λ. Let lλ(κ) denote the unique solution of the equation determined by the capacity constraint, and let lϕ(d) denote the unique solution of the equation determined by the jamming cost.

Theorem 4 (Optimal jamming policy): For a given input pdf f, transmission cost c, jamming cost d, and asymptotic channel capacity κ, the optimal jamming probability and its associated optimal Lagrange dual variable are characterized in terms of lλ(κ) and lϕ(d). Proof: The proof is in Appendix B.
Since λ*(E[γ*(X)] − κ) ≤ 0 and the complementary slackness property is satisfied when γ = γ*, Theorem 4, following the proof of Theorem 3 and using Proposition 9, establishes a saddle point equilibrium for large-scale networks.
Theorem 5 (Saddle point equilibrium): Given a Gaussian source X ∼ N(0, σ²) and communication and jamming costs c, d ≥ 0, a saddle point strategy (γ*, η*, ϕ*) for the remote estimation game with a proactive jammer over a large-scale network of capacity κ is given by one of six scenarios, determined by the transmission cost, the jamming cost, and the normalized capacity. In all cases, the estimation policy is x̂0* = x̂1* = 0.
D. Numerical results
Based on Theorem 5, the following numerical results provide some insight into the optimal transmission and jamming strategies. Table I shows the saddle point equilibrium under different parameters, where X ∼ N(0, 1) and c = 1. For example, let d = 1 and κ = 0.25. Since lλ(κ) = 1.32 < lϕ(d) = 4.11, the optimal strategies are γ*(x) = 1(x² > 4.11) and ϕ* = 0.76. Notice that the complementary slackness property is always satisfied, i.e., λ*(P(γ*(X) = 1) − κ) = 0. Moreover, in the saddle point equilibrium of Theorem 5, there is a sharp transition in the optimal jamming probability from zero to nonzero, which directly depends on the jamming cost. However, the structure of the transmission and estimation policies remains unchanged. In particular, the optimal transmission threshold policy is always symmetric.
V. CONCLUDING REMARKS AND FUTURE WORK
Building upon the pioneering model introduced by Gupta et al. in [22], [23], we have considered a remote estimation game with asymmetric information involving transmitters, receivers, and a jammer. While most of the literature focuses on jamming at the network and physical layers of the communication protocol stack, our work focuses on the medium access control layer. To address the complications arising from the problem's non-classical information structure, we adopt a coordinator approach, which leads to a tractable framework based on a zero-sum game between the coordinator and the jammer. We have obtained several results on saddle point equilibria for many cases of interest, and extended the results to large-scale networks, which provides insights into the design of massive IoT deployments for many modern applications such as smart farming, Industry 4.0, and robotic swarms.

There are many interesting directions for future work. The most prominent ones are related to learning. In this work, we have assumed that the probability density function of the observations is common knowledge. However, this assumption is rarely realistic in practice. The design of real systems is data-driven, which leads to issues related to stability, robustness, and performance bounds when the probabilistic model is not known a priori and is learned from data samples. For example, the sample complexity of our system is a largely unexplored issue, with only a few related results reported in [37]. Additionally, all of our results assume that the jamming and communication costs are available to the coordinator and the jammer, which is also a contrived assumption. If the costs are private information, the game may no longer be zero-sum. Moreover, these parameters may need to be learned from repeated play. In that case, it would be interesting to develop a theory that characterizes the regret rate of online learning in this more realistic scenario.
where the first step uses (1 − ϕ)[vi − (x̂0i − µi)]² ≥ 0 and the second uses the oddness of the inner function. For v−i ∈ E, we obtain the sign of g(v−i) in three cases: 1) x̂0i = µi: the oddness of the inner function implies that g(v−i) = 0; 2) x̂0i > µi: a technique similar to the proof of Theorem 1 implies that g(v−i) > 0; 3) x̂0i < µi: a technique similar to the proof of Theorem 1 implies that g(v−i) < 0.
Finally, we analyze the sign of [∇x̂0 J(x̂0, x̂1)]i, where the last equality uses Eq. (148). Together with the sign of g(v−i) for v−i ∈ E, we conclude that x̂0 = µ minimizes J(x̂0, x̂1). Therefore, the general objective function of Eq. (37) for the jammer follows, and we define φ as in Eq. (152).

Theorem 7: Let X be a multivariate Gaussian random vector with mean µ and diagonal covariance matrix Σ. The optimal jamming probability, when the transmission policy has the structure in Proposition 1 and the estimation policy has the structure in Theorem 6, satisfies the multivariate analogue of Theorem 2. Proof: For brevity, we define Jϕ ≜ J((γη*,ϕ, η*), ϕ). Reformulating Jϕ, taking its derivative with respect to ϕ, and using Eq. (154), we then apply the same technique as in the proof of Theorem 2 to complete the proof.
Once the result is established for diagonal covariance matrices, it is simple to obtain the result for a general covariance matrix Σ that is symmetric positive definite, since Σ admits an eigendecomposition Σ = WᵀΛW, where Λ is a diagonal matrix and WᵀW = I. Thus, equipped with Theorem 7, upon observing xi, each agent computes x̃i = Wxi and uses the transmission policy designed by the coordinator under the assumption that the covariance matrix is Λ. Notice that the transmission decision is computed using x̃i and not xi. However, when a packet is transmitted, agent i sends the original observation xi. This scheme preserves the saddle point equilibrium property, and therefore our results hold in full generality.
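The decorrelation scheme described above is straightforward to implement. The following is a minimal Python sketch, in which the per-coordinate squared-threshold rule stands in for the actual policy designed for Λ; the covariance matrix, thresholds, and decision rule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

# Decorrelating transform for a general covariance: Sigma = W^T Lambda W with
# W orthogonal, so x_tilde = W x has diagonal covariance Lambda.
Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
lam, WT = np.linalg.eigh(Sigma)        # Sigma = WT @ diag(lam) @ WT.T
W = WT.T                                # rows of W are the eigenvectors
print("eigenvalues (diagonal of Lambda):", np.round(lam, 3))

def transmit_decision(x, thresholds):
    # Decide using the decorrelated coordinates x_tilde, not x itself; the
    # per-coordinate threshold rule below is an illustrative stand-in for
    # the policy designed by the coordinator for Lambda.
    x_tilde = W @ x
    return bool(((x_tilde ** 2) > thresholds).any())

x = rng.multivariate_normal(np.zeros(2), Sigma)
if transmit_decision(x, thresholds=np.array([1.5, 1.5])):
    print("transmit the original observation:", np.round(x, 3))
else:
    print("stay silent")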
|
2023-03-01T06:42:45.780Z
|
2023-02-28T00:00:00.000
|
{
"year": 2023,
"sha1": "b25f29aa3ba0da4368588a6200c29e999ae4242b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b25f29aa3ba0da4368588a6200c29e999ae4242b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science",
"Mathematics"
]
}
|
259914731
|
pes2o/s2orc
|
v3-fos-license
|
Prognostic Value of the Ratio of Hemoglobin to Red Blood Cell Distribution Width in Patients with Out-of-Hospital Cardiac Arrest: A Retrospective Study
The ratio of hemoglobin to red blood cell distribution width (HRR) can reflect the degree of oxidative stress and the systemic inflammatory response in the body, and it is a potential indicator for predicting the prognosis of patients with cardiac arrest (CA). We retrospectively analyzed 126 patients successfully resuscitated after out-of-hospital cardiac arrest. Patients were grouped according to their survival status at discharge: 35 survived and 91 died. Binary logistic regression was used to analyze the independent factors affecting the prognosis of patients after cardiopulmonary resuscitation (CPR). A receiver operating characteristic (ROC) curve was used to analyze the predictive value of each independent factor for the prognosis of patients after CPR. The HRR in the death group was lower than that in the survival group (P < 0.05) and was closely related to the prognosis of patients after CPR. The ROC curve showed that an HRR < 8.555 (AUC = 0.733, sensitivity 87.5%, specificity 40.7%, P < 0.001) indicated poor prognosis after CPR. The HRR is an independent risk factor for prognosis in patients who underwent CPR after out-of-hospital cardiac arrest. After successful resuscitation, an HRR lower than 8.555 indicates poor prognosis.
Introduction
Cardiac arrest (CA) is a major global public health problem [1]. Out-of-hospital cardiac arrest (OHCA) is a leading cause of death worldwide [2]. The discharge survival rate of patients with OHCA in the United States is approximately 10% [3]. In China, the rates of discharge survival and good neurological function in OHCA patients are only 1.3% and 1%, respectively [4]. During CA, severe hypoxia and ischemia occur and inflammatory factors are released, leading to the accumulation of various metabolites in the body. Reperfusion injury occurs after the return of spontaneous circulation (ROSC), resulting in multiple organ dysfunction, a disorder called post-cardiac arrest syndrome (PCAS) [5].
Early identification of PCAS and of its severity, together with early intervention measures, is very important for the prognosis of patients with CA. In recent years, the red blood cell distribution width (RDW) has been found to be associated with the prognosis of CA in and out of hospital [6]. Hemoglobin (Hb) levels are associated with survival and neurological function prognosis after ROSC [7]. The ratio of hemoglobin to red cell distribution width (HRR) is a recently proposed composite parameter and novel inflammatory marker that can reflect the degree of oxidative stress and systemic inflammatory responses [8,9]. Recent studies have shown that it plays an important role in predicting the prognosis of patients with cancer and lymphoma [10]. However, HRR has not yet been reported to predict the prognosis of patients with PCAS. HRR can be obtained quickly and economically. Therefore, this study aimed to explore the prognostic value of HRR in patients who underwent CPR after OHCA and to provide new insights for the clinical identification of PCAS.
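For reference, the HRR is simply the quotient of two values reported on a routine complete blood count; the units shown here (Hb in g/L, RDW in %) are our assumption, chosen because they reproduce the magnitude of the cut-off reported below:

$$ \mathrm{HRR} = \frac{\mathrm{Hb}\ [\mathrm{g/L}]}{\mathrm{RDW}\ [\%]} $$

For example, a patient with Hb = 120 g/L and RDW = 14% would have HRR = 120/14 ≈ 8.57, just above the 8.555 cut-off identified in this study.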
Participants
We conducted a retrospective study of 126 patients successfully resuscitated after OHCA who were admitted to the ICU from the emergency room of the First Affiliated Hospital of Zhengzhou University between August 2016 and December 2022. The inclusion criteria were: (1) admission to the ICU from the emergency room of our hospital after successful cardiopulmonary resuscitation following OHCA and (2) age ≥ 18 years. The exclusion criteria were: (1) less than 24 h in the ICU, (2) CA due to trauma, (3) successful CPR taking > 6 h, (4) history of blood transfusion within 3 months, (5) history of hematological or immune system diseases, (6) previous tumors and chemotherapy, and (7) missing medical records. Patients' personal information was anonymized during data collection and analysis. This study conformed to the standards of medical ethics and was approved by the Ethics Committee of the First Affiliated Hospital of Zhengzhou University (Approval #: 2022-KY-1411).
Diagnostic Criteria
OHCA was diagnosed clinically on the basis of loss of consciousness, disappearance of arterial pulsation, and respiratory arrest or gasping breathing; prehospital diagnosis of OHCA additionally requires reference to ECG changes. CPR followed the 2020 American Heart Association Guidelines for Cardiopulmonary Resuscitation. The criteria for successful CPR were: the patient's heart rate, blood pressure (systolic blood pressure ≥ 60 mmHg), and spontaneous or pacemaker rhythm were restored after cardiac compressions and remained so until admission.
Data Collection
The following clinical data were collected for the 126 patients included in the study: sex, age, medical history, and ICU length of stay (days); initial scores after admission to the ICU, including the Acute Physiology and Chronic Health Evaluation II (APACHE II) score and the Glasgow Coma Scale (GCS); and initial vital signs after admission to the ICU, including heart rate, body temperature, respiration, and mean arterial pressure (MAP). Additionally, blood gas analysis indexes, including arterial pH, arterial partial pressure of oxygen (PaO2), blood lactic acid (Lac), 6 h lactic acid (6 h Lac), 12 h lactic acid (12 h Lac), 6 h lactic acid clearance, and 12 h lactic acid clearance, and serological indicators, including glomerular filtration rate (GFR), white blood cell count (WBC), red blood cell count (RBC), Hb, RDW, and HRR, were assessed. The above biochemical indices were determined by the Emergency Laboratory of the First Affiliated Hospital of Zhengzhou University. Patients were grouped according to their survival status at the time of discharge: 35 survived and 91 died.
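The lactic acid clearance variables follow the definition given in the footnote to Table 2; for a window of t hours after admission:

$$ \text{lactic acid clearance}_{t\,\mathrm{h}} = \frac{\mathrm{Lac}_0 - \mathrm{Lac}_t}{\mathrm{Lac}_0} \times 100\% $$

where $\mathrm{Lac}_0$ is the initial blood lactic acid concentration and $\mathrm{Lac}_t$ is the concentration t hours later.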
Statistical Analysis
Statistical analyses were performed using IBM SPSS Statistics for Windows, version 26.0, and GraphPad Prism, version 8.0. The Kolmogorov-Smirnov method was used to test the normality of the quantitative data. Quantitative data conforming to a normal distribution were represented by mean ± SD and compared between groups by independent-samples t tests; quantitative data not conforming to a normal distribution were represented by median (quartiles) [M (P25, P75)] and compared between groups by the Mann-Whitney U test. Categorical data were expressed as frequency (%) and compared between groups by the χ2 test. Binary logistic regression was used to analyze independent prognostic factors for patients who underwent CPR after OHCA. A receiver operating characteristic (ROC) curve was used to analyze the evaluation value of the independent influencing factors on the prognosis of patients after CPR, and the value corresponding to the maximum Youden index was taken as the best cut-off value. P < 0.05 was considered statistically significant.
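As an illustration only, the workflow above maps directly onto standard open-source statistical libraries. The following Python sketch (the study itself used SPSS and GraphPad Prism; the file name and column names such as HRR and survived are hypothetical) reproduces the test-selection logic and the binary logistic regression:

```python
# Illustrative re-implementation of the analysis workflow described above.
# The original study used SPSS 26.0; data file and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("ohca_cohort.csv")  # hypothetical per-patient data
surv = df.loc[df["survived"] == 1, "HRR"]
died = df.loc[df["survived"] == 0, "HRR"]

def is_normal(x, alpha=0.05):
    """Kolmogorov-Smirnov test against a standard normal after z-scoring."""
    z = (x - x.mean()) / x.std(ddof=1)
    return stats.kstest(z, "norm").pvalue > alpha

# Normally distributed data: independent-samples t test; otherwise Mann-Whitney U.
if is_normal(surv) and is_normal(died):
    _, p = stats.ttest_ind(surv, died)
else:
    _, p = stats.mannwhitneyu(surv, died)
print(f"HRR between-group comparison: p = {p:.4f}")

# Binary logistic regression; exponentiated coefficients give the odds ratios.
X = sm.add_constant(df[["HRR", "lac_clearance_6h", "apache_ii"]])
fit = sm.Logit(df["survived"], X).fit(disp=0)
print(fit.summary())
print("Odds ratios:\n", np.exp(fit.params))
```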
Demographics and Clinical Characteristics
A total of 126 patients (average age: 58.21 years; men: 85) with successful CPR after OHCA were included. Hypertension was present in 42.1% of the patients, diabetes in 22.2%, coronary heart disease in 22.2%, cerebrovascular disease in 9.5%, and other system diseases in 36.5%. There were no statistically significant differences in medical history between the survival and death groups. Compared with the death group, patients in the survival group were younger and spent more days in the ICU (P < 0.05). The characteristics of the patients in both groups are presented in Table 1.
Comparison of Initial Scores, Vital Signs and Laboratory Results on Admission to ICU
The initial vital signs, PaO2, and WBC count of patients admitted to the ICU were not significantly different between the two groups (Table 2). The APACHE II score of the survival group was significantly lower than that of the death group, while the GCS score of the survival group was significantly higher. The laboratory indexes pH, 6 h lactic acid clearance, 12 h lactic acid clearance, GFR, RBC, Hb, and HRR of the survival group were significantly higher than those of the death group, while Lac, 6 h Lac, 12 h Lac, and RDW were significantly lower than those of the death group (P < 0.05).
Binary Logistic Regression Analysis of the Initial Admission Index
Indicators with significant differences in the univariate analysis of the 126 patients were assessed in a binary multivariate logistic regression analysis; these included sex, ICU stay, APACHE II score, GCS score, pH, Lac, 6 h Lac, 6 h lactate clearance, 12 h Lac, 12 h lactate clearance, GFR, RBC, Hb, RDW, and HRR. The HRR, 6 h lactic acid clearance rate, and APACHE II score were independent risk factors for prognosis after CPR (Table 3).
ROC Curve Analysis of HRR and 6 h Lactic Acid Clearance to Predict Prognosis of Patients After CPR
With the prognosis of patients discharged after CPR (survival = 1) as the state variable and the HRR, the 6 h lactate clearance rate, and their combination as the test variables, ROC curves were used to analyze the value of the HRR, the 6 h lactate clearance rate, and their combination in predicting the prognosis of patients after CPR. The cut-off value for the initial HRR of patients admitted to the ICU was 8.555 (AUC = 0.733, sensitivity 87.5%, specificity 40.7%, P < 0.001; Table 4 and Fig. 1), the cut-off value of the lactic acid clearance at 6 h was 28.947 (AUC = 0.701, sensitivity 88.6%, specificity 35.8%, P < 0.001; Table 4 and Fig. 2), and the combined cut-off value was 0.296 (AUC = 0.802, sensitivity 71.4%, specificity 51.4%, P < 0.001; Table 4 and Fig. 3).
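A minimal sketch of how such a ROC analysis can be reproduced, continuing the hypothetical data frame from the earlier sketch (cut-off selection by the Youden index is our assumption about the method):

```python
# ROC curve and optimal cut-off for HRR; `df` as loaded in the earlier sketch.
from sklearn.metrics import roc_curve, roc_auc_score

y = df["survived"].values
score = df["HRR"].values  # higher HRR is expected in survivors

fpr, tpr, thr = roc_curve(y, score)
print(f"AUC = {roc_auc_score(y, score):.3f}")

# Youden index J = sensitivity + specificity - 1 = TPR - FPR
best = (tpr - fpr).argmax()
print(f"cut-off = {thr[best]:.3f}, sensitivity = {tpr[best]:.3f}, "
      f"specificity = {1 - fpr[best]:.3f}")
```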
Discussion
CA is the sudden termination of the cardiac ejection function, with disappearance of arterial pulsation and heart sounds and severe ischemia and hypoxia of vital organs such as the brain, leading to the end of life; it is a common clinical emergency [11]. A significant proportion of patients with ROSC after CPR die in the early stages of post-resuscitation care due to a unique and complex pathophysiological process known as "post-cardiac arrest syndrome" (PCAS) [12]. Once a patient with CA develops ROSC, a chain of events occurs, characterized by ischemic hypoxic brain injury, cardiac dysfunction, and a systemic ischemia-reperfusion response. Ischemia-reperfusion causes widespread activation of the immune and clotting pathways, thereby increasing the risk of multiple organ failure and infection [13]. The inflammatory response after CPR is similar to that of sepsis, hence the name "septicemia-like syndrome" [14]. Patients may exhibit abnormal inflammatory regulation, myocardial and adrenal dysfunction, coagulation dysfunction, and endothelial dysfunction following CA [5,15]. Capillary endothelial cell dysfunction results in increased capillary permeability and the leakage of proteins and other substances from the blood vessels into the tissue [14].

Table 2. Comparison of initial scores, vital signs, and laboratory results on ICU admission between the survival and death groups. APACHE II, Acute Physiology and Chronic Health Evaluation II score; GCS, Glasgow Coma Scale; MAP, mean arterial blood pressure; 6 h lactic acid clearance, (initial blood lactic acid concentration − concentration after 6 h)/initial blood lactic acid concentration × 100%; 12 h lactic acid clearance, (initial blood lactic acid concentration − concentration after 12 h)/initial blood lactic acid concentration × 100%; HRR, hemoglobin to red blood cell distribution width ratio.

The RDW is a measure of the heterogeneity of circulating erythrocyte size. Oxidative damage and inflammatory responses can affect the RDW [20]. Recent studies have shown that an elevated RDW is a strong predictor of mortality in critically ill patients. Additionally, the RDW is a predictor of cardiac dysfunction and mortality in patients with congestive heart failure and acute coronary syndrome [21][22][23][24]. Kim et al. [12] conducted a retrospective study and found that the highest quartile of RDW (RDW > 15.4%) was independently associated with all-cause mortality at 30 days after CPR (OR = 1.95; 95% CI 1.05-3.60; P = 0.034), suggesting that the initial RDW was an independent predictor of all-cause mortality after resuscitation. Comparison of the baseline data in this study showed that the RDW in the death group was higher than that in the survival group (P < 0.05). This may be related to myocardial dysfunction after resuscitation. In addition, there is evidence of a significant relationship between higher Hb levels and ROSC after CA (in conjunction with favorable neurological outcomes within hours of CA and survival to discharge) [25][26][27]. A study by Albaeni et al. [7] showed that Hb levels ≥ 10 g/dl after ROSC are associated with a good neurological prognosis for survival, while patients with high Hb levels have a shorter duration of good neurological prognosis. However, the HRR, an index combining the RDW and Hb, has not previously been reported to predict the prognosis and neurological function of patients who underwent CPR after OHCA.
This study aimed to investigate the prognostic value of the HRR in patients undergoing cardiopulmonary resuscitation after OHCA. The HRR is a new inflammatory marker that quickly reflects the degree of oxidative stress and systemic inflammatory responses [8,9]. Recently, it has been used to predict the prognosis of various cancers and lymphomas [10,[28][29][30]]. However, the relationship between the HRR and patient prognosis after CPR had not been reported. In this study, the survival group had a higher HRR than the death group (P < 0.001). Binary multivariate logistic regression analysis showed that the HRR was an independent factor influencing the prognosis of patients after cardiopulmonary resuscitation (OR = 1.558, 95% CI: 1.178-2.060, P < 0.05). The ROC curve suggested that when the HRR was < 8.555, patients had a poor prognosis after cardiopulmonary resuscitation (AUC = 0.733, sensitivity 87.5%, specificity 40.7%). This may be related to oxidative stress and inflammation in tissues and organs caused by ischemia-reperfusion injury following ROSC after CA.
During CA, the body is in a state of severe ischemia and hypoxia: the delivery of oxygen and metabolic substrates is interrupted, forcing the mitochondria into an anaerobic metabolic state, causing dysfunction of the electron transport chain, and injuring the mitochondria. While spontaneous circulation is not restored, tissues and organs rely on anaerobic metabolism because of hypoxia; the lactic acid produced accumulates and leads to metabolic acidosis and a decrease in pH [31,32]. The study of Seeger et al. [33] showed that lactic acid was an independent predictor of outcome in patients after CPR. In addition, two recent prospective studies showed that greater lactate clearance predicted lower mortality and a better neurologic outcome after CA [34,35]. The binary logistic regression analysis in this study showed that the 6 h lactate clearance rate was an independent prognostic factor for patients who underwent CPR after OHCA (OR = 1.029, 95% CI: 1.008-1.050, P < 0.001). When the 6 h lactate clearance rate was < 28.947, the prognosis after CPR was poor (AUC = 0.701, sensitivity 88.6%, specificity 35.8%). This study further explored the combined index of the HRR and the 6 h lactic acid clearance, and the results showed that the combined index had a higher predictive value than either single index.
However, this study has some limitations. First, the sample size was small; therefore, there may be some bias in the results, and the sample size needs to be expanded for verification. Second, this was a retrospective observational study that only collected scores and laboratory indicators at a single time point upon admission and did not study dynamic changes in the HRR. Moreover, although research on the HRR is relatively mature for various cancers and lymphomas, further research on the mechanism of the HRR in CA, as a newly proposed inflammatory indicator, remains to be conducted. Finally, the data collected in this study were the initial laboratory results of patients with CA admitted to the ICU after successful CPR; prehospital indicators were not included. Therefore, this study has certain limitations, and further investigation is required.
Conclusions
The HRR is an independent risk factor affecting the prognosis of patients undergoing CPR after OHCA; after successful resuscitation, an HRR of < 8.555 indicates poor prognosis. The 6 h lactate clearance rate is likewise an independent risk factor affecting the prognosis of patients after resuscitation for OHCA; a 6 h lactate clearance rate of < 28.947% after successful resuscitation indicates poor prognosis. The combined index of the 6 h lactate clearance rate and the HRR has a higher predictive value for the prognosis of patients after resuscitation.
Diverse Physiological Roles of Flavonoids in Plant Environmental Stress Responses and Tolerance
Flavonoids are characterized as low molecular weight polyphenolic compounds universally distributed in planta. They are a chemically varied group of secondary metabolites with a broad range of biological activity. A growing body of evidence has demonstrated the various physiological functions of flavonoids in the stress response. In this paper, we provide a brief introduction to flavonoid biochemistry and biosynthesis. Then, we review recent findings on the alteration of flavonoid content under different stress conditions to build an overall picture of the mechanisms by which flavonoids are involved in plants' response to various abiotic stresses. The participation of flavonoids in antioxidant systems, flavonoid-mediated responses to different abiotic stresses, the involvement of flavonoids in stress signaling networks, and the physiological response of plants under stress conditions are discussed in this review. Moreover, molecular and genetic approaches to tailoring flavonoid biosynthesis and regulation under abiotic stress are addressed.
Introduction
Abiotic stresses affect different aspects of plants' physiological, biochemical, and molecular status. Nevertheless, through evolution, plants have evolved various strategies to overcome stressful conditions by altering their physiological and metabolic pathways. Recent progress in metabolomics has enabled the study of the regulatory roles of metabolites in plants under abiotic stress conditions. It has been well documented that metabolites play versatile roles in plants' responses to abiotic stresses [1]. Among secondary metabolites, flavonoids are known as "specialized metabolites". They are low molecular weight polyphenolic compounds that play crucial biological functions in plants and animals [2,3]. More than 6500 flavonoids have been discovered [4]. The US Department of Agriculture has identified several dietary flavonoid subgroups that significantly benefit human health, including anthocyanins, flavonols, flavanones, proanthocyanidins, (iso)flavones, and flavan-3-ols [2]. Most flavonoid compounds prevail in nature as glycosides; they are water-soluble owing to the sugar and hydroxyl groups in their structure and lipophilic owing to the presence of isopentyl and methyl groups [5]. Flavonoids are synthesized at specific sites in plant cells, and they control different physiological activities, such as the germination of spores and seeds, the development of the aroma and color of flowers, seedling growth, and the attraction of pollinators for pollen dispersal [6,7].
These secondary metabolites participate in defense processes by initiating biological activities that protect plants exposed to diverse environmental stresses [8,9]. The accumulation of flavonoids in plants in response to various abiotic stresses, including temperature, heat, freezing, light, UV, nitrogen deficiency, phosphate deficiency, and drought, has been evidenced [10,11]. Flavonoids also play protective roles, including detoxification, allelopathic and antimicrobial effects, and roles as phytoalexins, signaling molecules, and UV filters [12]. Having antioxidant properties, flavonoids can scavenge reactive oxygen species (ROS) under biotic and abiotic stresses [13]. Flavonoids restrict the metabolic activities of enzymes in ROS-generating pathways, thereby supporting the antioxidant defense system. It is worth mentioning that the structural diversity of flavonoids enables them to interact with an immense variety of biomolecules simultaneously [14]. Moreover, several groups of enzymes (isomerases, reductases, and hydroxylases) and Fe2+/2-oxoglutarate-dependent dioxygenases act differently in flavonoid biosynthesis, modifying the fundamental pathway of flavonoid biosynthesis and leading to different flavonoid subclasses [15]. This flexibility at the biosynthetic and functional level makes flavonoids versatile molecules that can regulate the activities of various enzymes, cell cycles, DNA and protein functions, and lipid peroxidation [14]. However, the biological functions of flavonoids and their role under various environmental stimuli are yet to be fully unraveled. In the present review, we briefly review the chemistry and biosynthesis of flavonoids in plants to provide insight into their biochemical properties and mechanisms of action in the plant cell. The antioxidative properties of flavonoids and the evidence for their involvement in plants' responses to different abiotic stresses are also discussed. In addition, the underlying mechanisms of the functional roles of flavonoids in shaping responses to different abiotic stresses and their role as signaling molecules in abiotic stress response pathways are probed. Finally, recent progress in molecular and genetic approaches to tailoring flavonoid regulation under abiotic stresses is discussed. This review provides an overview of the mechanism of the plant stress response from a metabolic perspective and enables the assessment of flavonoids as a promising stress marker.
Chemistry and Biosynthesis of Flavonoids
Flavonoids are the largest group of naturally produced substances, with more than 9000 phenolic products detected in planta [16]. The flavonoid skeleton comprises three rings: two six-carbon phenolic rings (A and B) joined through a three-carbon bridge that forms the central (C) ring [17]. Various kinds of chemical compounds and derivatives are synthesized from flavonoids with discrete interchanges in this basic chemical constitution. The precursors of flavonoids are flavones, originating in the cell sap of immature plant tissues [18]. Generally, flavonoids are synthesized in the cytosol of the plant cell through different pathways. The synthesis of the major classes of flavonoids, including anthocyanins, isoflavonoids, and proanthocyanidins, occurs along the general phenylpropanoid and polyketide pathways, transforming phenylalanine into 4-coumaroyl-CoA through a cytosolic multienzyme complex (the flavonoid metabolon) that is loosely attached to the cytoplasmic face of the endoplasmic reticulum [19]. The expression of flavonoid biosynthetic genes such as chalcone synthase (CHS), flavonol synthase (FLS), flavonoid 3′-hydroxylase (F3′H), flavanone 3-hydroxylase (F3H), and chalcone isomerase (CHI) is mediated by flavonol regulators, including MYB11/PFG2, MYB12/PFG1, and MYB111/PFG3 [20,21].
In plants, flavonoid accumulation depends upon the modulation of the expression of genes related to the flavonoid biosynthetic pathway [22][23][24]. The phenylpropanoid pathway gives p-coumaroyl-CoA; this pathway starts from the aromatic amino acids phenylalanine and tyrosine, produced by the shikimate pathway (Figure 1). The flavylium ion is known as the core of the flavonoid biosynthesis pathways, upstream of which are three molecules of malonyl-CoA and one molecule of 4-coumaroyl-CoA [25]. Furthermore, the CHS and CHI enzymes are involved in the two-step condensation, which yields naringenin (a flavanone) [26].
Figure 1. Biochemistry of flavonoids with their various subgroups: Phosphoenolpyruvate and erythrose-4-phosphate are converted to chorismate via the shikimate pathway in seven metabolic stages. Chorismate is the common precursor of three aromatic amino acids, viz., tryptophan, tyrosine, and phenylalanine. The enzyme phenylalanine ammonia-lyase (PAL) induces the synthesis of cinnamic acid from phenylalanine, and cinnamate is converted to p-coumaric acid by the activity of cinnamate 4-hydroxylase. Another enzyme, 4-coumaroyl-CoA ligase, converts p-coumaric acid into 4-coumaroyl-CoA and 3-malonyl-CoA, which are responsible for synthesizing chalcones through chalcone synthase activity. Eriodictyol chalcone and naringenin chalcone are the two classes of chalcones. The flavanones are synthesized from chalcones by the activity of chalcone isomerase. There are different subgroups of flavonoids, shown in this diagrammatic representation. Fisetin, kaempferol, myricetin, and quercetin are types of flavonols. Hesperitin, naringin, and naringenin are types of flavanones. Some types of isoflavonoids are daidzein, glycitein, and genistein.
Flavanols are produced from flavanones by dihydroflavonol reductase activity, and some examples of flavanols are catechin, epicatechin, and epigallocatechin. The enzyme flavone synthase is responsible for the production of flavones from flavanones. Apigenin, chrysin, luteolin, and rutin are some types of flavones. The two other flavonoid subgroups-isoflavones and dihydroflavonols-are synthesized by the activity of isoflavonoid synthase and flavanone 3-hydroxylase, respectively. Dihydroflavonol reductase converts dihydroflavonols into leucoanthocyanidins, which are converted into anthocyanidins by the anthocyanidin synthase enzyme. Cyanidin, delphinidin, and pelargonidin are some types of anthocyanidins.
Several studies have noted the localization of FLS1, CHS, and CHI in the nuclei of Arabidopsis [30][31][32]. However, most of the flavonoids accumulated in the cytoplasm are then possibly moved into the vacuole via an autophagic mechanism [33] and, in grape, via vesicle trafficking involving a GST and two multidrug and toxic compound extrusion-type transporters (anthoMATEs) [34,35]. This transformative nature and reactivity may underlie flavonoids' versatility in the abiotic stress response in plants.
Antioxidant Properties of Flavonoids
Flavonoids act as antioxidants in plants and provide protection against various environmental stresses (Figure 3; Table 1). A consequence of abiotic stresses is the production of harmful ROSs [36,37], namely the highly reactive superoxide anion radical (O2•−), singlet oxygen (1O2), hydrogen peroxide (H2O2), and hydroxyl radical (•OH); among the most reactive and dominant ROSs are H2O2 and O2•−. These ROSs cause oxidative stress, which occurs as a consequence of a disturbance in the homeostasis between ROS production and the endogenous antioxidant defense mechanism [37]. During environmental stress, ROSs oxidize DNA, proteins, carbohydrates, and lipids, and eventually damage plant cells [38]. To cope with oxidative damage, plants produce antioxidant enzymes (e.g., SOD, CAT, ascorbate peroxidase (APX), glutathione peroxidase (GPX), glutathione reductase (GR), etc.). However, under extreme environmental stress conditions, the production of antioxidants in plants cannot keep pace with the magnitude of the oxidation, leading to increased ROS content in the cell [39]. Under such circumstances, the antioxidant properties of flavonoids help plants to counterbalance the excessive ROS production and repair the damage caused by it [40,41]. Flavonoids are a large class of secondary metabolites, and several pieces of evidence support their antioxidant functions in higher plants under a range of environmental stresses [42,43]. With potent antioxidant properties, flavonoids help plants cope with oxidative stress by quenching free radicals, thereby protecting plants from cellular peroxidation [44]. The suppression of ROS generation by flavonoids occurs through the four following pathways: (i) restriction of singlet oxygen, (ii) inhibition of ROS-producing enzymes (cyclooxygenase, lipoxygenase, monooxygenase, and xanthine oxidase), (iii) chelation of transition metal ions, and (iv) recycling of other antioxidants [45,46].
Numerous abiotic stresses trigger highly hydroxylated flavonoids. Under such a state, the stronger scavenging function occurs by the activity of an extra free hydroxyl radical (-OH) on the C-30 position of the B-ring [42,53]. A study on soybean seedlings treated with lanthanum demonstrated flavonoid potential for scavenging O 2 and ·OH [54] by decreasing the MDA concentration and maintaining standard plasma membrane permeability. Quercetin 3-O-and luteolin 7-O-glycosides, having a catechol group (ortho-dihydroxy B-ring substitution) in the B-ring of the flavonoid skeleton, show considerable antioxidant activity in plant cells [53]. Moreover, quercetin derivatives protect chloroplast damage from the singlet oxygen induced by high light in Phillyrea latifolia leaves [48]. Similarly, kaempferol, a monohydroxy B-ring flavanol, also showed antioxidant properties under light irradiance [55]. It has been observed through studies that in most cases, quercetin derivatives are more efficient than monohydroxy B-ring, particularly during complex formation with ions of Cu and Fe. They are also found to be involved in inhibiting ROS production by the Fenton reaction [56] and suppressing generated ROS, as well as equipping plants with versatile compounds to cope with environmental stresses. Table 1. Antioxidative role of flavonoids in plant response to abiotic stress.
Abiotic stress | Plant species | Antioxidant response of flavonoids | References
UV-B radiation | Medicago sativa | Increased content of flavonoid compounds induces enhanced antioxidant capacity of the plant. | [57]
UV-B radiation | Kalanchoe pinnata | Increases total flavonoid and quercitrin content, which have antioxidant properties that protect the plant. | [58]
UV-B stress and drought | Populus tremula × P. tremuloides | Transgenic poplar line with high proanthocyanidin content displayed lower hydrogen peroxide content. | [59]
Salinity | Zea mays | Improved plant performance under salt stress through antioxidant activities. | [60]
Salinity | Arabidopsis thaliana | CrUGT87A1, a UDP-sugar glycosyltransferase (UGT) gene, improved salt tolerance by increasing antioxidant capacity resulting from the accumulation of flavonoids. | [61]
Salinity | Amaranthus tricolor | Increases flavonoid content, which showed potent antioxidant activity in scavenging ROS. | [62]
Salinity | Amaranthus lividus | Increases flavonoid content and the antioxidant capacity of leaves; total flavonoid content scavenged ROS. | [63]
Water stress | Chrysanthemum morifolium | Increases flavonoids (rutin, quercetin, apigenin, and luteolin) and enhances antioxidant activity. | [64]
Drought | Arabidopsis thaliana | Increase in total flavonoid content followed by an increase in antioxidant activity. | [65]
Drought | Cistus clusii | Prevented oxidative damage. | [66]
Drought | Swingle citrumelo | Proline accumulation was concomitant with an increase in antioxidant activity. | [67]
Temperature stress | - | - | -
Cadmium stress | Trigonella foenum-graecum | H2S-induced polyamine accumulation was concomitant with an increase in ROS-detoxification capacity. | [70]
Cadmium stress | Solanum lycopersicum | Nitric oxide-induced increase in flavonols resulted in improved antioxidant capacity. | [71]
Lead stress | Triticum aestivum | Accumulation of proline was concomitant with a lower level of lipid peroxidation. | [72]
Flavonoid-Mediated Defenses against Abiotic Stress
To survive under abiotic stresses, plants adopt different strategies at the molecular [73], metabolomic [74], physiological, and morphological levels. The common aftermath of abiotic stress is ROS production, accumulation, and signaling. Accordingly, the scavenging of ROSs is an inevitable part of shaping a response to abiotic stress. Flavonoids are secondary metabolites with antioxidant properties that play an efficient role in ROS scavenging and the prevention of ROS generation [41]. Besides their antioxidant properties, different mechanisms and sites of action have been proposed for flavonoids in plants' stress tolerance (Figure 4; Tables 2 and 3). Flavonoids are mostly species-specific compounds [3,4], and their biosynthesis depends on the plant species, developmental stage, and the nature of the stresses [12,75]. Many studies have reported alterations in the level of flavonoid contents under different stress conditions [76].

Table 2/Table 3 (entries recoverable here): reduced ROS accumulation in pollen grains and improved pollen tube development and germination [96]; air pollutant O3 stress (300 nL L−1, 6 h), Medicago truncatula, accumulation of endogenous phenolic compounds, with phenols oxidized to red/purple pigments resulting in the accumulation of antioxidant compounds [97]; [84] increase in the activity of the AsA-GSH cycle and glyoxalase system and a further increase in the accumulation of osmolytes; improved K+ accumulation and restricted Na+ accumulation; increase in superoxide dismutase (SOD), catalase (CAT), and ascorbic acid (AsA) [90].
Drought and Salinity
Drought is known as the most important physical stress of terrestrial ecosystems [98]. Therefore, different research has dealt with drought by studying either agricultural water saving and water reuse [99] or plants' physiological adaptation to a limited water supply [100]. Salinity occurs when the amount of nutrient elements exceeds a species-specific threshold and threatens plant productivity [101].
Alteration in gene expression, metabolic modifications, osmotic adjustment [102], regulation of stomatal movement [103,104], and adjustment of growth and development are among the drought-adaptation strategies activated in plants to maintain the water balance [105][106][107].
A study on tea plants revealed that under drought stress, the expression of genes related to flavonoid biosynthesis, including CHS, dihydroflavonol 4-reductase (DFR), leucoanthocyanidin reductase (LAR), and leucoanthocyanidin dioxygenase (ANS), was decreased in the early stages of the drought but subsequently increased under continuous drought stress [108]. The significant upregulation of flavonoid biosynthesis genes (phenylalanine ammonia-lyase (PAL), cinnamic acid 4-hydroxylase (C4H), 4-coumarate-CoA ligase (4CL), CHS, and DFR) was demonstrated by another study on tea plants under drought conditions [85]. The positive effect of fulvic acid in improving the drought resistance of tea was shown to be related to its role in activating flavonoid biosynthesis pathway genes [86]. Another piece of evidence of the involvement of flavonoids in the drought stress response was reported in Arabidopsis: a highly drought-induced gene, CYTOCHROME P450, was involved in the upregulation of antioxidant flavonoid genes [65]. Coexpression of flavonoid biosynthesis genes and drought-induced genes, as well as the upregulation of flavonoid biosynthesis genes under drought stress, accounts for the involvement of flavonoids in drought stress responses. The mechanism of flavonoid action under drought proposed by previous studies portrays several interlinked pathways, including antioxidant properties, signaling components, osmotic adjustment, stomatal movement, and photosynthesis regulation.
In plants, the role of flavonoids in response to salt stress has also been proposed. A transgenic line of Arabidopsis, UGT76E11, that overaccumulates flavonoids exhibited a high antioxidant capacity, reduced ROS accumulation, and enhanced NaCl and mannitol stress resistance [88]. A genotype-dependent pattern was detected in the accumulation of flavonoids upon short-term or long-term salt stress in two cardoon genotypes: the genotype "Bianco Avorio" showed a constant increase in flavonoid content in response to both short- and long-term stress, while in "Spagnolo", only long-term salt stress triggered flavonoid accumulation [109]. The Arabidopsis MYB transcription factor MYB111 regulates salt stress responses, as a reduction in MYB111 is significantly linked with reduced salt tolerance in Arabidopsis. An increase in flavonoid biosynthesis was associated with MYB111 overexpression, suggesting that flavonoids act in the MYB111-regulated response to salt stress. To test this hypothesis, researchers examined the effect of exogenous flavonoids such as chalcone, dihydrokaempferol, and quercetin on salt-stressed Arabidopsis plants and found that these compounds rescued the decreased salt tolerance of MYB111 mutants [89]. Coexpression network analysis of salt-tolerant wild soybean revealed that the mechanism of the class B heat shock factor HSFB2b in the soybean response to salinity stress partially underlies its role in activating a subset of genes related to flavonoid biosynthesis [110]. The following paragraphs explain the mechanisms by which flavonoids are involved in drought and salt stress responses in plants.
A primary consequence of drought stress is oxidative damage. Antioxidant systems take part in the amelioration of oxidative damage through the activation of enzymatic or nonenzymatic antioxidants that provide effective scavenging of ROS [111]. Flavonoids are among nonenzymatic antioxidants that improve plants' fitness to drought stress.
Selenium and α-tocopherol enhanced the antioxidative defense mechanism of maize plants under salt stress by boosting the production of phenolics and flavonoids, signifying the antioxidative role of flavonols in the salt stress response pathway [60]. Exogenous application of vanillic acid contributed to osmolyte accumulation, the regulation of ion uptake, and the augmentation of superoxide dismutase (SOD), catalase (CAT), and ascorbic acid (AsA) under salt stress [90].
Another role of flavonoids is improving the plants' adaptation to drought by regulating stomatal movements. It was revealed that flavonols hinder ABA-induced hydrogen peroxide (H2O2) accumulation in the stomatal guard cells of Arabidopsis [112]. It was also found that the accumulation of flavonols in stomatal guard cells was highly induced by drought, and the accumulation of flavonols was higher in a drought-overly-insensitive (doi57) mutant compared with the wild type, which was associated with a relatively lower accumulation of H2O2 in the stomatal guard cells of doi57 [81]. In a study on pigeon pea, the accumulation of flavonoids (genistein, genistin, and pterostilbene) accompanied the initiation of stomatal closure by ABA treatment under drought [87]. Gene coexpression networks in sea buckthorn revealed that ABA and flavonoid signaling crosstalk determines the levels of drought resistance among different subspecies [113]. Unraveling the metabolic signature of Brassica napus in response to ABA suggested a role for flavonols in stomatal movement under drought stress; further examination showed that the exogenous application of 1 µM quercetin resulted in a slight increase in the stomatal aperture of B. napus [114].
A role for flavonols as signaling molecules has also been suggested [88,115]. Increased transcription of stress-related genes in the UGT76E11 transgenic line of Arabidopsis was accompanied by flavonol overaccumulation, suggesting a role for flavonols as signaling molecules that activate stress-related transcription factors [88]. A study on Arabidopsis indicated that ectopic expression of a grape basic helix-loop-helix (bHLH) transcription factor gene, VvbHLH1, increased the accumulation of flavonoids. The authors suggested that overexpression of VvbHLH1 resulted in adaptation to salt and drought stress through the upregulation of genes involved in the ABA biosynthesis pathway, which further increases the generation of signaling molecules and the expression of stress-tolerance genes [116].
Flavonoid accumulation improves photosynthesis by decreasing lipid peroxidation and lowering excitation pressure and the loss of energy through nonphotochemical quenching [117]. Therefore, flavonoids take part in drought stress responses at different levels, including signal transduction, regulation of gene expression, ROS scavenging, stomatal movements, and retention of photosynthetic system functionality, and they eventually improve plants' performance under drought stress conditions. Microarray analysis indicated the upregulation of a gene encoding chalcone isomerase 2 (OsCHI2) under drought and salt stress; OsCHI2 is responsible for increasing the transcripts of structural genes related to the flavonoid biosynthesis pathway. Rd29A::OsCHI2 transgenic rice plants exhibited prolonged photosynthetic activity under drought and salinity stress. An increase in relative water content, photosynthetic pigments, and proline, with reduced relative electrolyte leakage and malondialdehyde content, was suggested as the mechanism by which flavonoids take part in the regulation of photosynthetic activity under drought and salinity [118]. Another study proposed that the positive effect of mild NaCl treatment on net photosynthesis (Pn) and the quantum yield efficiency of electron transfer (FV/FM) was the result of an increase in the total flavanol content of Tetrastigma hemsleyanum [119]. Alleviation of the effects of salinity on cellular redox status, the chloroplast antioxidant system, and photosynthetic activity was demonstrated by applying exogenous naringenin to bean plants (Phaseolus vulgaris) under salt stress [91]. A genotype-dependent response of photosynthesis to salt stress was detected in two Paulownia genotypes; further investigation, which displayed different capacities for flavonoid accumulation in Paulownia tomentosa × fortunei (TF) compared with Paulownia elongata × elongata (EE), suggested that this underlies the variation in their potential to respond to salt stress. The genotype with the higher capacity for flavonoid accumulation (TF) showed higher resilience of the photosynthetic apparatus, indicated by higher FV/FM and higher QA re-oxidation compared with EE [120].
Moreover, flavonoids improve plants' resistance to drought and salt stress by preventing oxidative processes, maintaining a fine-tuned redox potential, contributing to osmotic regulation, and improving photosynthetic efficiency. Therefore, the accumulation of flavonols in plants under salt stress favors plants' resilience to drought and salt from both molecular and physiological aspects.
Toxic Metal/Metalloids
Flavonoids, as versatile compounds in abiotic stress alleviation, also take part in the response to heavy metal stress. The concomitant increase in flavonoids with increasing concentrations of heavy metals in plant tissue suggests an antioxidative role for flavonoids in alleviating heavy metal stress in plants [121][122][123]. Moreover, the phytoremediation capacity of N. biserrata was concluded to be the result of a high accumulation of myricetin and kaempferol in its tissue when grown in heavy metal-contaminated soils [124]. Preincubation with flavonoids attenuated the adverse effects of lead stress in lupin seedlings exposed to lead for 48 h: increased root growth, reduced accumulation of ROS, and less lipid peroxidation and cell death were detected in flavonoid-incubated plants compared with controls under lead toxicity. To examine whether the effect of flavonoids on the removal of excess lead was due to their antioxidant properties, the capacity of root extracts to scavenge 2,2-diphenyl-1-picrylhydrazyl (DPPH) was investigated, confirming the antioxidative role of flavonoids in lead-stressed plants [92]. Flavonoids enhanced the tolerance of Avicennia marina to Cd. However, flavonoids showed no influence on the uptake of Cd in root cell walls; since exposure of roots to an ion transport inhibitor (LaCl3) evidenced the facilitation of Cd transport in roots, flavonoids have a significant stimulative effect on the symplastic transport of Cd in roots, and the Ca channel was not the only means of symplastic transport for Cd absorption. Flavonoids facilitate symplastic transport when roots take up Cd but do not affect apoplastic transport [125]. According to the existing literature, the antioxidative role of flavonoids is the only mechanism of heavy metal stress alleviation in plants that has been considered thus far. Nevertheless, there are a limited number of reports on the metal-chelation properties of flavonoids. In a study on Fagopyrum esculentum Moench, the role of salicylic acid in alleviating Cd stress was attributed to its effect on the enhancement of metal-chelation properties. Heavy-metal chelation properties have also been assigned to plant-based natural flavanols in a study on the effect of lead poisoning in mice [126].
Extreme Temperature
Low temperature upregulates the expression of flavonoid biosynthetic genes and increases the content of flavonoids in plant tissue in a species-dependent manner [127,128]. Flavonoids have also been introduced as potential biomarkers for cold stress in barley [129]. Reportedly, anthocyanin synthesis plays an essential role in cold stress tolerance in B. rapa, since the expression of anthocyanidin synthase (BrANS) genes was strongly related to cold-stress tolerance [130], whereas a knock-out mutation of the PRODUCTION OF ANTHOCYANIN PIGMENT 1 (PAP1) MYB transcription factor results in impaired leaf freezing tolerance in Arabidopsis [128].
The role of anthocyanins and other flavonoids in the tolerance of Arabidopsis to cold stress has been reported, although a precise causal relationship between flavonoids and cold stress tolerance was not proposed [131]. Other researchers have studied the tolerance of Arabidopsis against freezing to investigate the stress linked with apoplastic ice crystal formation at subzero temperatures. Their investigation indicated minor effects of flavonoids on primary metabolism. They also refuted the possibility that flavonoids act in chilling stress by modifying the phytohormone balance or stabilizing proteins, because plant growth, development, and primary metabolism were unaltered in all the flavonoid biosynthesis mutants used in their study. Instead, they supported a previously proposed role for flavonoids in freezing tolerance through the protection of cell membranes and proteins against cold stress, since flavonoid-mediated partitioning into, and stabilization of, plant membranes has been evidenced [132]. They also proposed that the redundancy of flavonoid structures allows a deficiency of flavonols or anthocyanins to be compensated for by other classes of flavonoid compounds [133]. A close association between cold stress tolerance and the expression of dihydroflavonol 4-reductase (DFR) genes, which encode another essential function in the flavonoid biosynthetic pathway, has also been reported; this association suggests that the BrDFR gene is a useful resource for the molecular breeding of freezing-resistant Brassica crops [134].
In addition, a role for flavonoids as osmotica has been proposed in a study on apple leaves exposed to cold temperatures. Although that study supported a role for anthocyanins in the osmotic adjustment of apple leaves, their metabolic costliness relative to other osmolytes and their low concentrations make it unlikely that they take part in osmoregulation on their own [135]. Moreover, a study on Liriope spicata revealed that genes and metabolites involved in the flavonoid pathway play a synergistic role in osmoregulation under freezing stress [136].
Interactions between light and cold stress have been demonstrated by several studies [137]. A study on the interactive pathway of blue light signaling with the cold stress response showed the dependency of anthocyanin biosynthesis on the expression of cold-stress-responsive genes affected by blue light signaling [137,138]. The role of light intensity and spectra in flavonoid accumulation, particularly anthocyanin, has recently attracted attention and is briefly discussed in the section on light stress.
A comparison between pepper plants (Capsicum annuum L.) inoculated with Penicillium resedanum and non-inoculated plants showed that tolerance to high temperature was associated with an increase in amino acids and the production of flavonoids in high quantities [139]. On the other hand, transcriptomic analysis of eggplants under high temperature displayed the downregulation of genes in the anthocyanin biosynthetic pathway [140]. The role of flavonoids in enhancing the fertility of tomatoes under high temperatures has been investigated: studying anthocyanin-reduced (are) tomato mutants demonstrated that flavonols ameliorated the adverse effects of high temperature by reducing the abundance of ROS [94]. The ROS-scavenging role of flavonoids as a means of attenuating heat stress was also reported in heat-stressed soybean seeds; the authors proposed that higher concentrations of flavonoids, ascorbate precursors, and tocopherols alleviated heat stress damage during seed maturation by scavenging heat-induced ROS [95]. In contrast, a reduction of flavonoids in response to high temperatures has also been reported, suggesting a negative role for flavonoids in plants' fitness at high temperatures [141,142]. However, combined heat and drought stress led to an increased flavonol content in Quercus ilex L. [143].
In conclusion, an increase in flavonoid content can be considered a cold-tolerance strategy, whereas this is not necessarily the case under high-temperature stress, since flavonoid content in different plant organs may follow different patterns under heat. The ROS-scavenging properties of flavonoids are proposed as their functional role under both cold and heat stress, while membrane protection is proposed as a cold-stress alleviation strategy. Overall, the crosstalk of flavonoids with the various temperature stress response pathways remains to be studied.
Atmospheric Pollutants
Using the fluorescence emission of selected chloroplast metabolites, including flavonoids, carotenoids, lipofuscins, and pheophytins, as biomarkers of air pollutants revealed that nitrogen dioxide (NO2) toxicity modified the fluorescence emission profile of carotenoids and flavonoids, suggesting a role for flavonoids in plants' resistance against air-pollutant stress [144]. HPLC analysis of the pollen grains of three ornamental plants grown in polluted areas containing mainly sulfur dioxide (SO2), NO2, carbon monoxide (CO), hydrocarbons (HC), and airborne particulate material (APM) revealed that the flavonoid content in ethanolic aqueous extracts of the pollen grains of the studied plants was increased. The increase in flavonoids led to reduced ROS accumulation in the pollen grains, which further improved pollen tube development and germination and eventually enhanced the plants' fecundity [96]. Moreover, an antioxidative role is proposed for flavonoids in air-pollutant stress scenarios. In this regard, it was noted that Passiflora quadrangularis L. plants grown in a hazy atmosphere synthesized more anthocyanin to cope with the oxidative stress caused by the haze [145]. In addition, an alteration in anthocyanin content has been noted in grape berries fumigated with SO2, which was claimed to result from the prevention of anthocyanin degradation rather than from de novo synthesis [146]. Application of H2S to Brassica oleracea L. resulted in an increase in anthocyanin content, which also accounts for the signaling role of H2S in antioxidative pathways [147]. Furthermore, treating a Vitis vinifera cell suspension with the H2S donor sodium hydrosulfide also increased flavonols, total phenolics, sinigrin, and anthocyanins [148].
Transcriptome and metabolome analysis of Malus crab apple indicated that a key ozone (O3)-responsive transcription factor, McWRKY75, was positively correlated with a flavonoid-related structural gene. In addition, the exogenous application of methyl jasmonate decreased the negative impacts of O3 stress by enhancing the flavonoid metabolic pathway [149]. Studying the response of Medicago truncatula to O3 stress revealed that the capacity to upregulate the flavonoid biosynthesis pathway and to benefit from flavonoids' antioxidant properties accounts for the resilience of an ozone-insensitive accession against O3 pollution [97]. Air pollutants intrude into plant tissue through stomata and affect stomatal characteristics and apparatuses [150]. In cuticles and epicuticular waxes, flavonoids act as an antioxidant barrier to protect cellular components against air pollutants such as O3 and SO2 [13].
Flavonoids have a well-defined role in the signaling of stomatal movement and in scavenging ROS to block the transduction of signals that lead to stomatal malfunction [112,115]. Given that, a possible role of flavonoids in ameliorating air-pollutant stress may be their involvement in the signaling network of stomatal movement.
Light Stress
Light provides the fuel for photosynthesis, the process on which a plant's life entirely depends. Light quality, intensity, and duration affect plant growth, morphology, resource acquisition, and adaptation to environmental conditions [151][152][153][154]. Nevertheless, excess levels of light impose detrimental effects and cause light stress in plants. Flavonoids have been demonstrated to play a positive role in the amelioration of light stress effects on plants, although some studies cast doubt on this positive role because, in some cases, flavonoids had a negligible effect against light stress. Studies on different plant species showed that high light intensity increases flavonoid accumulation [155,156]. Further, the accumulation of flavonoids in epidermal cells, the apical meristem, and pollen takes part in filtering extreme sunlight, thus reducing the likelihood that harmful spectra reach vulnerable cellular structures and cause oxidative stress [157]. Nevertheless, contrasting reports on the modification of flavonoid content under light stress are further elaborated by comprehensive metabolomics studies.
A study on Ginkgo biloba leaves exposed to UV-B radiation showed a significant increase in the accumulation of flavonols in leaves under long-term UV-B exposure [158]. Similar work on white asparagus (Asparagus officinalis L.) showed that accumulation of a specific flavonol, quercetin-4'-O-monoglucoside, increased following exposure to UV-B stress [159]. Moreover, the effect of high light stress on the anthocyanin content of rose was investigated and showed a high level of dependency on the light spectrum, in such a way that monochromatic red and blue light decreased while full-spectrum white light increased anthocyanin content. Interestingly, plants grown under white light showed better tolerance to high light stress [160]. Similarly, another study showed that UV-B stress has a negligible effect on the anthocyanin and flavonol indices of cucumber plants grown under different light spectra [161]. Nevertheless, a more comprehensive metabolomics study showed that the ratio of four flavonoid compounds (kaempferol, quercetin, flavonol disaccharide I, and flavonol disaccharide II) varied after exposure to UV-B stress, and this modulation in flavonoid content was highly dependent on the growing light spectrum [162]. These reports suggest a spectrum-dependent role for flavonoids in the regulation of the light stress response.
Other Stresses
Transcriptomic analysis of imazethapyr-treated wild-type and ROS1 plants revealed ROS1-dependent flavonoid accumulation in A. thaliana in response to herbicide stress [163]. In a study of multiple-herbicide resistance (MHR) in grass weeds, Schwarz et al. [164] examined the binding affinity of flavonoids to a phi-class glutathione-S-transferase (AmGSTF1), which is a functional biomarker of MHR in black-grass (Alopecurus myosuroides). Using a ligand-fishing experiment, they showed that a variety of flavonoid structures are potent binders of AmGSTF1 [164].
It was indicated that flooding stress can alter the accumulation pattern of flavonoids by influencing the expression of key enzymes involved in the flavonoid synthesis pathway, eventually resulting in an increase in the total flavonoid content of Chrysanthemum morifolium [165]. The tolerance of Pterocarya stenoptera, a species widely distributed along rivers, to flooding stress was also attributed to increased synthesis of alpha-linolenic acid and flavonoids in aerial organs and to the activation of phytohormone biosynthesis and signaling pathways [166]. On the contrary, a study on soybean indicated that genes related to the biosynthesis of phenylpropanoids, lignin, and flavonoids were downregulated under flooding stress, rendering the plants' roots more susceptible to pathogens [167]. These findings may suggest an organ-specific response of flavonoid accumulation in plants under flooding stress.
Flavonoids-Mediated Abiotic Stress Signaling
Exposure of plants to external stresses initiates the upregulation of flavonoid biosynthetic genes, thus increasing the flavonoid content. In the desert plant Reaumuria soongorica, a rapid increase in RsF3H (flavanone 3-hydroxylase) gene expression, together with hindered lipid peroxidation triggered by antioxidant flavonoids, has been evidenced as a protective strategy against UV-B and drought stress [168]. Tolerance to UV radiation follows flavonoid accumulation since flavonoids act as a sunscreen that filters UV radiation, thereby hindering the generation of ROS. The activation of the UV-B photoreceptor activates transcription factors (TFs), which further activate the transcription of flavonoid biosynthetic genes [13,169]. Similarly, UV-B stress in different species led to a modification in the transcription of flavonoid biosynthetic genes, which further enhanced the ratio of dihydroxy to monohydroxy B-ring-substituted flavonoid glycosides [170,171]. Luteolin and quercetin glycosides are actively involved in chelating iron (Fe) and copper (Cu) ions [56]. For instance, Berli et al. [172] observed that in grape leaves, UV-B radiation triggered an increase in quercetin derivatives as antioxidants for plant protection. It was demonstrated that when A. thaliana is exposed to drought stress, increased accumulation of flavonoids results in plant tolerance, as shown by comparing lines overexpressing MYB12/PFG1 (PRODUCTION OF FLAVONOL GLYCOSIDES1) or MYB75/PAP1 (PRODUCTION OF ANTHOCYANIN PIGMENT1), the flavonoid-deficient mutant transparent testa4 (tt4), and flavonoid-deficient MYB12 or PAP1 lines (obtained by crossing tt4 with the individual MYB overexpressors in A. thaliana) [2] (Figure 5). In addition, direct estimation of the antioxidant activity revealed that enhanced accumulation of anthocyanin with effective in vitro antioxidant activity directly alleviates ROS in vivo [2]. In salt-stressed transgenic tobacco, overexpression of a repressor of silencing from Arabidopsis (AtROS1) increased the demethylation levels of genes encoding CHS, CHI, F3H, FLS, dihydroflavonol 4-reductase, and anthocyanidin synthase of the flavonoid biosynthetic pathway, as well as genes of the antioxidant enzymatic pathway, which confirms flavonoid-mediated tolerance to salt stress [173]. Ismail et al. [174] reported that the level of the flavonoid rutin increased 25-fold in quinoa leaves under salt stress, which improved tissue tolerance and decreased the negative impact of high salinity on leaf photochemistry by elevating the availability of potassium (K+) and the rate of Na+ pumping. In addition, the negative correlation between rutin-stimulated modifications in K+ and H+ fluxes suggested that the accumulation of rutin in the cytosol takes part in scavenging hydroxyl radicals, thereby preventing K+ leakage through K+ efflux pathways [174]. These findings suggest the potential role of flavonoids in alleviating the negative impacts of abiotic stress.
Molecular and Genetic Approaches in Tailoring Flavonoids Biosynthesis and Regulation under Abiotic Stress
Several researchers have adopted molecular techniques to examine the role of flavonoids in triggering adaptive responses to abiotic stresses (Table 4). Calcium-dependent protein kinases actively participate in calcium signaling and stimulate the production of flavonoids against the plethora of environmental stresses. In this regard, higher expression of GuCPK genes in Glycyrrhiza uralensis under treatments of NaCl (30 mM) and CaCl2 (2.5 mM) has been reported. Induced expression of GuCPKs significantly improved the accumulation of flavonoids and glycyrrhizic acid under different salinity treatments [175]. Moreover, the study of Jan et al. [176] shows that transgenic rice plants expressing F3H exhibited improved biosynthesis of quercetin and kaempferol under salinity (150 mM) and heat stress (28-30 °C, 16/8 h light). They noted that heat and salinity stress increased oxidative damage, which was mitigated by the accumulated flavonoid content. In addition, the overexpression of the AtMYB12 gene increased the accumulation of flavonoids by upregulating the genes involved in flavonoid biosynthesis in transgenic Arabidopsis under drought (25% PEG6000 for 2 weeks) and salinity (300 mM once every 2 days for 4 weeks) stresses [177]. Similarly, the VvMyBF1 gene, cloned from grapevine, enhanced the accumulation of flavonoids in transgenic Arabidopsis for confronting drought (25% PEG6000 for 2 weeks) and salt stress (200 mM NaCl for 2 weeks) [178]. The transgenic plants showed higher activities of SOD, POD, pyrroline-5-carboxylate synthase, dihydroflavonol reductase, FLS, CHI, and PAL, as well as a significant reduction of MDA and H2O2 content. Overexpression of the GmMYB12 transcription factor increased downstream flavonoids by improving the expression of flavonoid biosynthesis-related genes in Arabidopsis [179]. Its overexpression also increased the expression of pyrroline-5-carboxylate synthase, SOD, and POD genes under salinity (200 mM NaCl, 2 weeks) and drought stress (25% PEG6000, 2 weeks). According to Wang et al. [180], a basic helix-loop-helix (bHLH) transcription factor gene from Antirrhinum (AmDEL) increased flavonoid accumulation under drought (25% PEG6000 for 2 weeks) and salinity (300 mM every 2 days for 4 weeks) stresses via upregulating flavonoid biosynthesis genes in Arabidopsis. Moreover, enzymatic analysis and Western blotting showed higher activities of pyrroline-5-carboxylate synthase, dihydroflavonol reductase, CHI, and phenylalanine ammonia lyase (PAL) in transgenic plants as compared to wild-type plants under stressful conditions. Overexpression of SlbHLH22 in tomatoes resulted in small leaves, short height, and a higher accumulation of flavonoids under drought (100 mM mannitol) and salt (200 mM NaCl) stresses [181]. The transgenic plants showed enhanced vigor by improving the ROS scavenging system. In another study, Jayaraman et al. [118] isolated the gene encoding chalcone isomerase 2 (OsCHI2) from the drought-tolerant upland rice variety "Nagina22" and transduced it into the drought-sensitive rice cv. Pusa Sugandh 2 using the inducible promoter AtRd29A. Stable chromosomal integration of the transgene upregulated structural genes of flavonoid biosynthesis, which thereby resulted in higher production of flavonoids in the mutant rice plants against abiotic stresses, including heat (40 °C for 3 days), cold (2 °C; 16 h light/8 h dark for 12 days), salinity (150 mM NaCl for 7 days), and drought (withholding water for 7 days at the 9-to-10-leaf stage) stresses.
Their findings suggested that induction of the OsCHI2 gene modulates flavonoid metabolism and enhances tolerance to these abiotic stresses, other than heat stress. In another case, the AeCHS gene isolated from Abelmoschus esculentus also increased flavonoid biosynthesis under osmotic (300 mM mannitol for a week) and salt (200 mM NaCl for a week) stresses in Arabidopsis plants [182]. Similarly, overexpression of the CHS gene in Arabidopsis improved tolerance to high light stress by increasing the synthesis of anthocyanins, which enhanced the plants' adaptation to light when transferred from 100 µmol m−2 s−1 to 200 µmol m−2 s−1 [183]. (Table 4 also lists, for example, a line with increased flavonoid synthesis that absorbed more K+, maintained Na+/K+ homeostasis, and increased the K+/Na+ ratio [191], as well as the overexpression of SbMYB8, an R2R3-MYB from Scutellaria baicalensis, in tobacco under salt stress (150 mM NaCl), drought (0.2 M mannitol), and ABA (100 µM) for 3, 6, and 9 days, respectively, which led to higher flavonoid biosynthesis and antioxidant levels and improved tolerance against stress [192].) Flavonol synthase (FLS) is among the essential enzymes that participate in flavonoid biosynthesis. Wang et al. [184] overexpressed in Arabidopsis the EkFLS gene, isolated from Euphorbia kansui Liou, under drought (20% PEG6000) and salinity (200 mM NaCl) stresses. Their results revealed that EkFLS overexpression was strongly correlated with higher flavonoid biosynthesis, offering a theoretical basis for further improving the phytoextracts of medicinal plants and their resistance against multiple simultaneous stresses. Dong et al. [185] characterized GSA1 (a quantitative trait locus regulating the grain size of rice), which encodes a UDP-glucosyltransferase and exhibits glucosyltransferase activity toward monolignols and flavonoids. They noted that GSA1 redirects the metabolic flux from lignin synthesis toward flavonoid synthesis under abiotic stresses and accumulates more glycosides and flavonoids in rice for abiotic stress tolerance. Moreover, GSA1 overexpression resulted in a larger grain size and played a key role in directing metabolic flux against multiple stresses, including salinity (150 mM for 7 days), drought (16% PEG8000 for 2 to 3 weeks), and heat (42 °C for dozens of hours) stresses. Their findings suggested that GSA1 catalyzes the glucosylation of flavonoids and monolignols to modulate the metabolic flux by altering the phenylpropanoid pathway and flavonoid glycoside profile in response to abiotic stress conditions. In another case, Li et al. [88] cloned the Arabidopsis glycosyltransferase gene (UGT76E11); the overexpressing plants showed substantially enhanced tolerance against H2O2 (0.4 mM), drought (200 mM mannitol), and salinity (100 mM NaCl for 10 days) stresses through producing more glucosylated quercetin by modulating the flavonoid biosynthesis pathway as compared to wild-type plants. In another study, Li et al. [187] identified two differentially expressed leucoanthocyanidin dioxygenase genes (RtLDOX/RtLDOX2) rapidly upregulated in Reaumuria trigyna under drought and salinity stress, consistent with stress-related cis-elements located in the promoter region. Transgenic Arabidopsis overexpressing RtLDOX2 showed a higher accumulation of flavonols and anthocyanin, suggesting that this gene functions as a multifunctional dioxygenase in the flavonoid pathway and converts dihydrokaempferol to kaempferol.
They noted that the transgenic plants, obtained via Agrobacterium-mediated transformation, showed higher tolerance against drought (150 mM and 300 mM mannitol for 15 days), salinity (75 mM and 100 mM NaCl for 10 days), and ultraviolet-B (30 min per day for 7 days) stresses by modulating the flavonoid pathway and scavenging ROS.
Conclusions
Sessile plants develop various endogenous defense mechanisms to counter unfavorable conditions. Flavonoids are among the natural tools developed by plants to cope with abiotic stresses. This review provides an overview of the functional roles of flavonoids in shaping the response to abiotic stress via the regulation of antioxidant systems, involvement in the signaling network, and modulation of physiological aspects of the plant. The biosynthesis of flavonoids and their accumulation in plants are triggered by abiotic stimuli and result in the modulation of stress response pathways. Flavonoids improve plants' tolerance to abiotic stress at the physiological and biochemical levels through the improvement of antioxidant capacity, regulation of cellular redox, activation of stress-responsive TFs, osmoregulation, and involvement in the stress response signaling network as signaling molecules. We also discussed that flavonoids regulate the stress response in different parts of plants, including stomata, pollen grains, the thylakoid membrane, the cell membrane, and the nucleus. This review presents the current status of flavonoids' functional roles in the abiotic stress responses of plants and suggests flavonoids as promising abiotic stress markers. Moreover, this review invites investigation of stress-specific flavonoids and of the exact underlying mechanisms of flavonoids' involvement in stress responses, which can be a promising tool for crop breeding programs. Finally, there are contrasting reports on the accumulation or reduction of flavonoid compounds in plants under light and temperature stresses; these remain to be carefully investigated using comprehensive methods such as metabolomics.
Calculus, constrained minimization and Lagrange multipliers: Is the optimal critical point a local minimizer?
In this short note, we discuss how the optimality conditions for the problem of minimizing a multivariate function subject to equality constraints have been dealt with in undergraduate Calculus. We are particularly interested in the 2- or 3-dimensional cases, which are the most common cases in Calculus courses. Besides giving sufficient conditions for a critical point to be a local minimizer, we also present and discuss counterexamples to some statements encountered in the undergraduate literature on Lagrange multipliers, such as `among the critical points, the ones which have the smallest image (under the function) are minimizers' or `a single critical point (which is a local minimizer) is a global minimizer'.
Introduction
In spite of being a strategy for finding the local maxima and/or minima of a function subject to constraints, the Lagrange Multiplier Method (LMM), particularly when it is used for solving undergraduate optimization problems, is typically used as a systematic procedure for identifying global extrema. In doing so, some undergraduate textbooks on the subject show the statements related to the LMM (and/or the worked problems based on it) without complete hypotheses, or carelessly written in an imprecise manner and with oversimplification (as an attempt to make the method more palatable). Specifically, the validity of the LMM, when one is looking for global extrema, depends on the existence of those extrema. This basic assumption has to be satisfied beforehand. Otherwise, even if one succeeded in obtaining local extrema from the critical points determined by the method, it might not be possible to get the global extrema from the local ones. However, it is not unusual to find books or academic homepages where, right after the local extrema are found, they are promptly evaluated and the ones with the smallest (respectively, the greatest) image under the function are elected the global minima (respectively, maxima). This is done even without having previously established the existence of the global extrema. The problem gets worse when there is just one critical point and one has to determine the maximum/minimum value from it without having a local criterion to be used in conjunction with the LMM. We present such a criterion here and propose its adoption in Calculus courses. At least, such a procedure makes the student completely sure that the optimal local values can be determined from the critical points. Concerning the global aspect, well, this is a whole different story. Many times, without compactness, the proof that there are global extrema is out of the scope of Calculus courses and depends on the specific problem we are dealing with. On the other hand, even in the very few books which correctly state the LMM, such as the excellent [3], emphasizing only the local aspect of the method, when working on problems with lack of compactness, it is assumed, for instance, the existence of "a box of largest possible volume" among all rectangular boxes with a fixed surface area. Then, right after finding a unique critical point via the LMM, the solution ends with a conclusion like this one: "This (cubical) shape must therefore maximize the volume, assuming there is a box of maximum volume." In this paper we work on a similar problem, showing the existence of the global extrema via a reasoning that is nontrivial for Calculus courses. Furthermore, we emphasize that a criterion to determine whether a critical point (obtained by the LMM) is a local maximum/minimum would help enormously in problems like those.
Preliminaries
We consider here the equality-constrained optimization problem of the form

minimize f(x) subject to g(x) = 0,   (1)

where f : D → R and g : D → R^m are twice continuously differentiable functions defined on the open set D ⊂ R^n. The set

Ω = {x ∈ D | g(x) = 0}

is called the feasible set of the problem (1). Recall the well-known definition of a minimizer: a point x* ∈ Ω is called a local minimizer if there exists δ > 0 such that f(x*) ≤ f(x) for all x ∈ Ω ∩ B(x*, δ); if f(x*) ≤ f(x) for all x ∈ Ω, the point x* is called a global minimizer.
Remark 2.2
There is no loss of generality in considering only minimization problems since if we want to maximize a function f , we can equivalently minimize −f . So, the definitions and results can be easily rewritten.
In the following example we can directly verify that a point is a minimizer. Nevertheless, this is not always the case and we normally need other tools to find minimizers. It should be pointed out that, in spite of focusing on constrained problems, some examples can be better understood and/or visualized if we disregard the constraint. In fact, we can transform an unconstrained problem into an equivalent constrained one by introducing an artificial variable x_{n+1} and the constraint g(x, x_{n+1}) = x_{n+1} = 0.

Example 2.3 Consider the unconstrained minimization of f : R² → R given by f(x1, x2) = x1² + x2²(1 − x1)³. The point x* = 0 is a local minimizer since f(x*) = 0 ≤ f(x) for all x ∈ B(x*, 1). Moreover, this point is not a global minimizer because f(4, 1) = −11. See Figure 1.
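As a quick sanity check, the claims of this example can be confirmed symbolically; note that the objective f(x1, x2) = x1² + x2²(1 − x1)³ is our reconstruction from the values stated in the text (e.g., f(4, 1) = −11), so the sketch below verifies that reconstruction rather than a formula taken verbatim from the paper.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = x1**2 + x2**2 * (1 - x1)**3   # reconstructed objective (assumption)

# Critical points: solve grad f = 0
grad = [sp.diff(f, v) for v in (x1, x2)]
crit = sp.solve(grad, [x1, x2], dict=True)
print(crit)                        # [{x1: 0, x2: 0}] -- a single critical point

# The Hessian at the origin is positive definite, so it is a local minimizer
H = sp.hessian(f, (x1, x2)).subs({x1: 0, x2: 0})
print(H.is_positive_definite)      # True

# ...and yet the origin is not a global minimizer:
print(f.subs({x1: 4, x2: 1}))      # -11
```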
A well-known condition that ensures the existence of a global minimizer is the compactness of the set.

Theorem 2.4 Let L ⊂ Ω be a compact set. Then f has a global minimizer in L, that is, there exists x* ∈ L such that f(x*) ≤ f(x) for all x ∈ L.

Unfortunately, there are many situations where the above result cannot be directly applied since the underlying set might not be compact. Even so, we may still guarantee the existence of a global minimizer, as discussed in the upcoming Example 3.3.
Necessary optimality conditions: Lagrange multipliers
In this section we present the necessary conditions that must be satisfied by every local minimizer. We also point out two misunderstandings that sometimes arise in calculus courses.
Definition 3.1 A point x* ∈ R^n is said to be critical (or stationary) for the problem (1) if there exists a vector λ* ∈ R^m such that

∇f(x*) + ∑_{i=1}^{m} λi* ∇gi(x*) = 0,   (3a)
g(x*) = 0.   (3b)

The components of λ* are the Lagrange multipliers associated with the constraints. The next result is a classical one and it is used to find possible candidates for the optimal solutions. (See, for instance, [1] for a version of such a theorem.)

Theorem 3.2 Suppose that x* ∈ R^n is a local minimizer for the problem (1) and the gradients ∇gi(x*) are linearly independent, i = 1, . . . , m. Then x* is a critical point for this problem.
As previously mentioned, a very common exercise in undergraduate Calculus is the problem of minimizing the area of a box, without lid, subject to a constant volume. The issue here relies on the fact that, normally, the solution is not accompanied by a mathematical argument explaining that the box with minimum area does exist and/or a justification as to why the critical point obtained (via the equations (3a)-(3b)) is the global minimizer of the problem. Let us discuss these issues more precisely in the next example.
Example 3.3 Consider the problem (1) with D = {x ∈ R³ | x1, x2, x3 > 0}, f(x) = x1x2 + 2x1x3 + 2x2x3 (the surface area of an open box with base x1 × x2 and height x3) and g(x) = x1x2x3 − 1 (unit volume). Show that the problem (1) has a (unique) global solution and find it using the Lagrange method.

Resolution. As defined before, let Ω = {x ∈ D | g(x) = 0} be the feasible set of the problem. We claim that if x ∈ Ω and f(x) ≤ 5, then each coordinate of x is bounded away from zero and from above. Indeed, every term of f is positive, so x1x2 ≤ 5, 2x1x3 ≤ 5 and 2x2x3 ≤ 5. Since x1x2x3 = 1, we obtain x3 = 1/(x1x2) ≥ 1/5 and 4x3 = (2x1x3)(2x2x3) ≤ 25, whence 1/5 ≤ x3 ≤ 25/4; analogous bounds hold for x1 and x2, proving the claim. Now, consider the set L = {x ∈ Ω | f(x) ≤ 5}. By the claim, L stays away from the boundary of D, and since f and g are continuous, L is closed. It is also bounded in view of the claim, hence compact; and L is nonempty because f(1, 1, 1) = 5. Thus, Theorem 2.4 ensures that there exists x* ∈ L with f(x*) ≤ f(x) for all x ∈ L, and since every point of Ω outside L has f-value greater than 5 ≥ f(x*), the point x* is a global minimizer on Ω. Now, applying Theorem 3.2 (note that ∇g never vanishes on D), we conclude that x* must be a solution of the equations ∇f(x) + λ∇g(x) = 0, g(x) = 0, that is,

x2 + 2x3 + λ x2x3 = 0,  x1 + 2x3 + λ x1x3 = 0,  2x1 + 2x2 + λ x1x2 = 0,  x1x2x3 = 1.

Since this system has a unique solution in D, namely x* = (2^(1/3), 2^(1/3), 2^(-2/3)) with λ* = −2^(5/3), we conclude that the optimal box has a square base with side twice its height, and f(x*) = 3·2^(2/3) ≈ 4.76.
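The Lagrange system above can also be solved symbolically. The following SymPy sketch assumes the reconstructed data f(x) = x1x2 + 2x1x3 + 2x2x3 and g(x) = x1x2x3 − 1, and writes the stationarity condition as ∇f = t∇g (so t equals −λ* in the convention of (3a)):

```python
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t', positive=True)
# Reconstructed data (assumption): open-box surface area under unit volume
f = x1*x2 + 2*x1*x3 + 2*x2*x3
g = x1*x2*x3 - 1

# Lagrange system: grad f = t * grad g, together with the constraint
eqs = [sp.diff(f, v) - t*sp.diff(g, v) for v in (x1, x2, x3)] + [g]
sols = sp.solve(eqs, [x1, x2, x3, t], dict=True)
print(sols)  # unique positive solution: x1 = x2 = 2**(1/3), x3 = 2**(-2/3)

xstar = {x1: 2**sp.Rational(1, 3), x2: 2**sp.Rational(1, 3),
         x3: 2**sp.Rational(-2, 3)}
print(sp.simplify(f.subs(xstar)))  # 3*2**(2/3), approximately 4.76 < 5
```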
The next example is a reformulation of the unconstrained problem given in Example 2.3 as an equivalent constrained problem, obtained by introducing an artificial variable.

Example 3.4 Consider the problem (1) with f(x1, x2, x3) = x1² + x2²(1 − x1)³ and g(x) = x3. It is easy to verify that the point x* = 0 and the multiplier λ* = 0 satisfy the conditions (3a)-(3b). In fact, it can be proved that this point is the only critical point of this example. Moreover, x* is a local minimizer since f(x*) = 0 ≤ f(x) for all x ∈ B(x*, 1). However, this point is not a global minimizer because f(4, 1, 0) = −11.

Remark 3.5 Note that the previous example also answers negatively the question "If a function has a single critical point which is a local minimizer, is this point a global minimizer?", which is sometimes posed (in some Calculus courses and academic homepages) and answered incorrectly with a "yes". This probably occurs since, for functions of one variable, the result holds, as the next theorem states.

Theorem 3.6 Let f be differentiable on (a, b) and suppose that x* ∈ (a, b) is its unique critical point. If x* is a local minimizer, then it is a global minimizer.
Proof. Assume by contradiction that there exists x0 ∈ (a, b) with f(x0) < f(x*); say x0 > x* (the other case is analogous). Since x* is a local minimizer, there exists x1 ∈ (x*, x0) with f(x1) ≥ f(x*) > f(x0). By the intermediate value theorem, there exists x2 ∈ [x1, x0) with f(x2) = f(x*). Therefore, by Rolle's theorem applied on [x*, x2], we conclude that there exists a critical point x** ∈ (x*, x2), contradicting the hypothesis. Figure 2 illustrates this proof.

It is well known that the converse of Theorem 3.2 is not necessarily true. That is, the optimality conditions (3a)-(3b) are not sufficient to ensure that the point is a local minimizer. Indeed, these conditions are also satisfied at a maximizer. When dealing with unconstrained minimization in two variables, we have the famous sufficient condition, present in almost all textbooks on the subject, to ensure that a critical point x* is a local minimizer, namely, the second derivative test:

∂²f/∂x1²(x*) > 0 and ∂²f/∂x1²(x*) · ∂²f/∂x2²(x*) − (∂²f/∂x1∂x2(x*))² > 0.

However, it is not so common to discuss a test for constrained optimization. In the next remark we address another issue related to this subject.
Remark 3.7
In the context of problem (1), the following question is also typical: "Among the critical points, is the one with the smallest image (under the function) a local minimizer?". Again, the answer is no, and the example below shows why.

Example 3.8 Consider the problem (1) with f : R² → R and g(x) = x2. Find the critical points, their images, and say which one is a minimizer.
Resolution. The condition (3a) in this case yields λ* = 0 and x1 ∈ {0, 1, 3/2, 3}, so we have four critical points: (0, 0), (1, 0), x* = (3/2, 0), and (3, 0). Restricting the objective function to the feasible set, that is, to the points of the form (t, 0), we obtain the one-variable function ϕ(t) = f(t, 0), whose derivative is ϕ'(t) = (1/2) t²(t − 3)²(t − 1)(2t − 3). Analyzing the sign changes of ϕ', we conclude that (0, 0) is neither a maximizer nor a minimizer, but a saddle point; the same is true for (3, 0). On the other hand, (1, 0) is a local maximizer and x* = (3/2, 0) is a local minimizer. Finally, comparing the critical values, it should be noted that the smallest critical value does not correspond to a local minimizer and that the greatest critical value does not correspond to a local maximizer. Figure 3 illustrates this example.
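Since only ϕ' is available from the text, ϕ is determined up to an additive constant, which does not affect the comparison between critical values. Under that caveat, a short SymPy computation (taking the antiderivative with zero constant) reproduces the classification:

```python
import sympy as sp

t = sp.symbols('t', real=True)
dphi = sp.Rational(1, 2) * t**2 * (t - 3)**2 * (t - 1) * (2*t - 3)

# Antiderivative with integration constant 0 (harmless for comparisons)
phi = sp.integrate(sp.expand(dphi), t)

for c in (0, 1, sp.Rational(3, 2), 3):   # roots of dphi
    print(c, phi.subs(t, c), float(phi.subs(t, c)))
# t = 0   -> 0.000  saddle point: the SMALLEST critical value, not a minimizer
# t = 1   -> ~0.451 local maximizer
# t = 3/2 -> ~0.352 local minimizer
# t = 3   -> ~2.604 saddle point: the GREATEST critical value, not a maximizer
```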
The next section is devoted to discussing sufficient conditions to ensure optimality for constrained optimization problems.
Sufficient optimality conditions
In this section we present a criterion, based on the second derivatives, for attesting that a critical point is a local minimizer. For completeness we present first a general result, well known in the optimization community. Then, we particularize the test to the specific cases studied in Calculus.
We stress that the criteria we will present are only local conditions and do not say anything about global minimization without additional assumptions or specific situations.
To simplify the presentation, consider the Lagrangian function associated with the problem (1),

L(x, λ) = f(x) + ∑_{i=1}^{m} λi gi(x).

The Lagrangian Hessian, that is, the matrix of the second derivatives of L with respect to x, is denoted by

∇²_xx L(x, λ) = ∇²f(x) + ∑_{i=1}^{m} λi ∇²gi(x).

The result below can be found in many optimization books. See, for example, [2,4].
Theorem 4.1 Let x* ∈ R^n be a critical point for the problem (1) and λ* ∈ R^m a corresponding multiplier vector, according to Definition 3.1. Suppose that

dᵀ ∇²_xx L(x*, λ*) d > 0

for all nonzero vectors d ∈ R^n satisfying ∇gi(x*)ᵀ d = 0, i = 1, . . . , m. Then there exist δ > 0 and a neighborhood V of x* such that

f(x) ≥ f(x*) + δ ‖x − x*‖²

for all x ∈ V with g(x) = 0. In particular, x* is a strict local minimizer of (1).
Despite the existence of this condition for general dimensions, we consider here the particular 2- and 3-dimensional cases with one or two constraints, which are the most common cases in Calculus courses. In these situations, the Hessian matrix of a function ϕ is the symmetric matrix of second partial derivatives,

∇²ϕ(x) = [∂²ϕ/∂xi∂xj(x)], with i, j = 1, 2 if n = 2, or i, j = 1, 2, 3 if n = 3.
The two variables and one constraint case
Consider the problem (1) with n = 2 and m = 1. That is, the problem of minimizing a function of two variables subject to a single equality constraint. The next theorem follows immediately from the previous one.
Theorem 4.2 Let x* be a critical point for the problem (1), with n = 2 and m = 1, λ* a corresponding multiplier, and H = ∇²_xx L(x*, λ*). If v ∈ R² is a nonzero vector such that span{v} = ∇g(x*)⊥ and

vᵀ H v > 0,   (5)

then x* is a local minimizer for the problem.
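The test of Theorem 4.2 is mechanical enough to automate. The sketch below is ours (the function name and the illustrative problem are not from the paper); it builds the Lagrangian Hessian at the critical point and evaluates vᵀHv for a vector v orthogonal to ∇g(x*):

```python
import sympy as sp

def constrained_second_order_test(f, g, x_vars, x_star, lam_star):
    """Second-order test of Theorem 4.2 (n = 2, m = 1): return v^T H v,
    where H is the Lagrangian Hessian at (x*, lam*) and v spans the
    orthogonal complement of grad g(x*)."""
    subs = dict(zip(x_vars, x_star))
    lagr = f + lam_star * g                      # convention: L = f + lam*g
    H = sp.hessian(lagr, x_vars).subs(subs)
    gx, gy = [sp.diff(g, v).subs(subs) for v in x_vars]
    v = sp.Matrix([-gy, gx])                     # v is orthogonal to grad g(x*)
    return sp.simplify((v.T * H * v)[0, 0])     # > 0  ->  local minimizer

# Illustration on a made-up problem (not from the paper):
x1, x2 = sp.symbols('x1 x2', real=True)
f = x1**2 + x2**2
g = x1 + x2 - 2
# Critical point of min x1^2 + x2^2 s.t. x1 + x2 = 2: x* = (1, 1), lam* = -2
print(constrained_second_order_test(f, g, (x1, x2), (1, 1), -2))  # 4 > 0
```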
Now, let us see a straightforward application of the previous theorem.
Example 4.3
Discuss the problem (1) with f, g : R² → R.

Resolution. The condition (3a) in this case gives λ* = −1 and x1 = −1/3 or x1 = 1, so we have two critical points. Taking a nonzero vector v orthogonal to ∇g and H = ∇²_xx L evaluated at each critical point, we obtain vᵀHv = 2 − 6x1. Thus, the critical point x* with x1* = −1/3 is a local minimizer, since vᵀHv = 2 − 6x1* > 0. Note that this point is not a global minimizer. Figure 4 illustrates this example.
Remark 4.4
In the context of Theorem 4.2 we also have the maximization condition. If v T Hv < 0, then x * is a local maximizer of f (x) subject to g(x) = 0.
The three variables and one constraint case
Consider the problem of minimizing a function of three variables subject to a single equality constraint.
Theorem 4.5 Let x* be a critical point for the problem (1), with n = 3 and m = 1, λ* a corresponding multiplier, and H = ∇²_xx L(x*, λ*). Let v1, v2 ∈ R³ be such that span{v1, v2} = ∇g(x*)⊥, and consider the 2 × 2 matrix A with entries aij = viᵀ H vj. If a11 > 0 and det(A) > 0, then x* is a local minimizer for the problem.

Here comes another simple application of the general theorem.

Resolution. We have the critical point x* (obtained from the conditions (3a)-(3b)). We can consider, for example, vectors v1 and v2 with ∇g(x*)⊥ = span{v1, v2}. Hence, x* is a local minimizer since a11 = 2 > 0 and det(A) = 12 > 0.
The three variables and two constraints case
Consider the problem of minimizing a function of three variables subject to two equality constraints.
Finally, let us present our last application of the general theorem.
Theorem 4.7 Let x* be a critical point for the problem (1), with n = 3 and m = 2, λ* ∈ R² a corresponding multiplier vector, and H = ∇²_xx L(x*, λ*). If v ∈ R³ is a nonzero vector such that span{v} = {∇g1(x*), ∇g2(x*)}⊥ and

vᵀ H v > 0,   (6)

then x* is a local minimizer for the problem.
Here comes our last example.

Example 4.8 Consider f(x) = x3, g1(x) = x1² + x2² − x3², and g2(x) = x1 + x3 − 2, so that we seek the lowest point of the curve in which the cone g1 = 0 meets the plane g2 = 0 (see Figure 5). Solve the problem (1) for these functions.
Resolution. The condition (3a) in this case is

(0, 0, 1) + λ1(2x1, 2x2, −2x3) + λ2(1, 0, 1) = 0,

whose second component immediately implies that λ1* ≠ 0 (otherwise the first and third components would force λ2* = 0 and 1 = 0) and hence x2* = 0. Using the constraints, we conclude that x1* = x3* = 1. This in turn implies that λ1* = 1/4 and λ2* = −1/2, so that H = ∇²_xx L(x*, λ*) = diag(1/2, 1/2, −1/2). We can consider, for example, v = (0, 1, 0), which spans {∇g1(x*), ∇g2(x*)}⊥, and see that vᵀHv = 1/2 > 0, proving then that x* = (1, 0, 1) is a local minimizer for the problem. In fact, we can prove that this point is a global minimizer. To see this, note that on the feasible set we have x2² = 4 − 4x1 and therefore x3 = 2 − x1 = 1 + x2²/4 ≥ 1 = f(x*). Figure 5 illustrates the feasible set of this example.
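For completeness, the whole computation can be verified symbolically. The data f = x3, g1 = x1² + x2² − x3², g2 = x1 + x3 − 2 below are our reconstruction from the figure description and the numbers appearing in the resolution:

```python
import sympy as sp

x1, x2, x3, l1, l2 = sp.symbols('x1 x2 x3 l1 l2', real=True)
f = x3
g1 = x1**2 + x2**2 - x3**2   # cone   (reconstructed data)
g2 = x1 + x3 - 2             # plane  (reconstructed data)

X = (x1, x2, x3)
lagr = f + l1*g1 + l2*g2
stat = [sp.diff(lagr, v) for v in X] + [g1, g2]
sols = sp.solve(stat, [x1, x2, x3, l1, l2], dict=True)
print(sols)                              # x* = (1, 0, 1), l1 = 1/4, l2 = -1/2

s = sols[0]
H = sp.hessian(lagr, X).subs(s)          # Lagrangian Hessian at (x*, lam*)
v = sp.Matrix([0, 1, 0])                 # spans {grad g1(x*), grad g2(x*)}^perp
print((v.T * H * v)[0, 0])               # 1/2 > 0  ->  local minimizer
```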
Remark 4.9
It is easy to see that the condition (6) does not depend on the particular choice of v: if vᵀHv > 0 for a vector v ∈ {∇g1(x*), ∇g2(x*)}⊥, then wᵀHw > 0 for any other nonzero vector w ∈ {∇g1(x*), ∇g2(x*)}⊥. Indeed, in this case, w = αv for some α ∈ R \ {0}. The same reasoning is true for condition (5). It can also be proved that the conditions in Theorem 4.5 do not depend on the choice of the vectors v1, v2 such that span{v1, v2} = ∇g(x*)⊥.

Figure 5 shows the cone x1² + x2² − x3² = 0, the plane defined by x1 + x3 − 2 = 0, and the feasible set, their intersection, represented by the red curve. In this problem we want to find the lowest point of that curve.
Conclusion
In this paper, we have pointed out that, in some examples (of some undergraduate Calculus textbooks) related to obtaining global minimizers via the Lagrange Multiplier Method (LMM), a little bit of imprecision has been typical, particularly when dealing with worked problems. One way to mitigate that would be the use of a criterion to guarantee when a critical point (obtained by the LMM) is a local minimizer. So, we have proposed such a criterion, which, by the way, has been absent from Calculus textbooks. On the other hand, for those professors who jump into the 'global' aspects of the LMM, in spite of it being a strictly local result, based on what we discussed here, we also propose the following way to state the LMM: For continuously differentiable functions f and g, in order to determine the minimum value of f subject to the constraint g = k with k constant, assuming that this global minimum value is attained on the interior of the domain shared by f and g, but not on the boundary of it, and that ∇g ≠ 0 holds on that domain, do the following: 1. Determine each point x and, if necessary, also λ, satisfying the following system: (a) ∇f = λ∇g; (b) g = k.
2. Evaluate f for those points obtained in the previous item: the smallest value of f is its global minimum.
Priority-Based CCA Periods for Efficient and Reliable Communications in Wireless Sensor Networks
The IEEE 802.15.4 standard utilizes the CSMA-CA mechanism to control nodes' access to the shared wireless communication medium. CSMA-CA implements the Binary Exponential Backoff (BEB) algorithm, by which a node refrains from sending any packet before the expiry of its backoff period. After that, the node is required to sense the medium for two successive time slots to assert that the medium is clear of any ongoing transmissions (this is referred to as Clear Channel Assessment (CCA)). Upon finding the medium busy, the node doubles its backoff period and repeats that process. While effective in reducing the likelihood of collisions, this approach takes no measures to preserve the priorities among the nodes contending to access the medium. In this paper we propose the Priority-Based BEB (PB-BEB) algorithm, in which we enhance BEB such that nodes' priority is preserved. We provide a simulation study to examine the performance of PB-BEB. Our simulations show that the latter not only outperforms BEB in terms of fairness, but also shows promising results in terms of other parameters like channel utilization, reliability, and power conservation.
Introduction
The specifications of the IEEE 802.15.4 standard [1] describe both the physical and MAC layers for low-rate wireless personal area networks (LR-WPANs). These specifications fit the distinguished characteristics of Wireless Sensor Networks (WSNs), which work under stringent conditions like scarce power resources and hostile deployment environments. The MAC layer of the IEEE 802.15.4 standard utilizes the Carrier Sense Multiple Access with Collision Avoidance (CSMA-CA) mechanism to control nodes' medium access. In this mechanism, a node has to follow the Binary Exponential Backoff (BEB) algorithm to assert that the medium is free from ongoing transmissions before commencing its packet transmission. Although effective in reducing the likelihood of packet collisions, BEB does not take any measures to guarantee the priority of access among the nodes. Instead, all the nodes are treated the same and each node receives the same opportunity to access the medium, regardless of how long it has been attempting to gain that access.
The problem of enhancing the MAC protocol of IEEE 802.15.4 such that the priority among the nodes is preserved has received considerable attention in the research community. In this paper we propose the Priority-Based BEB, a modified version of BEB that can adaptively prioritize access to the medium such that nodes are treated more fairly. The rest of this paper is organised as follows. In Section II we provide a general description of the MAC protocol of the IEEE 802.15.4 standard. In Section III we review some of the work related to prioritizing medium access in IEEE 802.15.4-based WSNs. Section IV describes the new Priority-Based BEB algorithm. Section V presents the simulations we conducted to compare the performance of Priority-Based BEB and BEB. Finally, Section VI concludes this work.
Overview of the Beacon-Enabled IEEE 802.15.4
The IEEE 802.15.4 standard supports both the star and peer-to-peer topologies. The star topology requires that communications between any pair of nodes be relayed through a designated node called the coordinator. In the peer-to-peer topology, however, although the coordinator is available, direct communication between nodes is possible. The MAC layer of IEEE 802.15.4 may operate in either the beacon-enabled mode or the nonbeacon-enabled mode. In the former, the slotted CSMA-CA mechanism is used to manage the nodes' access to the medium, while the latter uses the unslotted CSMA-CA mechanism. Our focus in this paper is on the beacon-enabled mode. With this mode, a superframe structure is used to manage how the contending nodes can gain access to the wireless medium. The structure of the superframe is shown in Figure 1 (redrawn from [1]). The superframe is bounded by two beacons that are periodically sent by the coordinator to synchronize the nodes in the network.
The superframe is constituted by a mandatory active and an optional inactive period. The active period is divided into the Contention Access Period (CAP) and the optional Contention-Free Period (CFP); both are shown in Figure 1. The latter refers to a set of time slots during which guaranteed medium access is offered to certain nodes. These time slots are referred to as guaranteed time slots (GTSs) and are allocated to nodes upon their requests. In this paper we assume that both the inactive period and the CFP are absent from the superframe.
During the CAP, the nodes follow the slotted CSMA-CA mechanism to gain access to the medium. That is, accessing the medium is governed by the BEB algorithm. According to BEB, a node should backoff for a period of time whenever it has a packet ready for transmission. The backoff period is randomly selected from the interval [0, 2^BE − 1], where BE is the backoff exponent that ranges from macMinBE (a default value of 3) to macMaxBE (a default value of 5) [1]. Upon the expiry of the backoff period, the node is required to conduct two clear channel assessments (CCAs). The transmission of the packet cannot be started unless the medium is found idle during both CCAs. If either CCA finds the channel busy, the node should backoff again (after increasing BE by 1). The maximum number of times a node is allowed to repeat its backoff, before the packet is discarded, is set to macMaxCSMABackoffs. The latter is an IEEE 802.15.4 attribute that takes a default value of 4 [1]. Once the node asserts that the channel is idle for two CCAs, the packet is sent. In case a packet collision occurs, the node backs off again and retries to send the packet. The maximum number of transmission retries, before the packet is discarded, is set to macMaxFrameRetries, which is defined by the standard with a default value of 3 [1]. Once the node manages to send the packet, it waits for an acknowledgement (ACK) packet. From the aforementioned description, we can see that BEB treats all the nodes similarly, without any consideration of the number of times a node has failed to access the medium. Furthermore, the algorithm does not have any guidelines to distinguish the urgency of certain traffic and its need to be granted higher priority than others.
Related Work
Enhancing the IEEE 802.15.4 MAC so that priority among the nodes is preserved has attracted the attention of several research efforts.
Huang et al. proposed the Adaptive GTS Allocation (AGA) scheme to support low latency and fairness [2]. The idea of AGA is to provide an estimate of the future GTS needs of the nodes. With that estimate, the coordinator gives higher priority of GTS allocation to needy nodes. AGA operates in a two-phase manner. In the first phase, nodes are classified according to their recent usage of GTSs and then assigned priority numbers (priority decreases as the priority number increases). In the second phase, GTSs are allocated with reference to the priority numbers such that nodes with low priority numbers are considered first. Although AGA shows promising results, over IEEE 802.15.4, in terms of fairness and low latency, the scheme concentrates on improving medium access during the CFP without considering the CAP.
Takaffoli et al. in [3] proposed the GTS-TDMA algorithm, which targets the improvement of GTS scheduling to recognize the nodes' different priority classes. Under GTS-TDMA, nodes do not request GTSs; GTSs are rather allocated to them using a GTS allocation scheme. The network is viewed as a multi-level tree and a TDMA schedule is constructed for it. The schedule is constructed such that the maximum data rate is achieved for each node in the network. In other words, the GTS-TDMA algorithm seeks the optimal allocation of GTSs such that each node is provided with the maximum data rate it requires. Simulation results show that GTS-TDMA is capable of achieving almost twice the throughput of CSMA-CA. However, similar to AGA above, this algorithm exploits the GTS capability of IEEE 802.15.4 and does not consider solving the priority problem during the CAP.
Ndih et al. observe that IEEE 802.15.4 offers a priority-independent functionality [4]. This results from having the nodes use the same contention access parameters. Therefore, the authors in [4] develop a Markov-based analytical model for the CAP in which different sets of access parameters are permitted for nodes with different priority classes. Two priority classes are recognized: high-priority (Class 1) and low-priority (Class 2). A node-state Markov chain is developed for each priority class, beside a Markov chain for the channel state. The priority, or service differentiation, is based on assigning a contention window of 1 for Class 1 nodes and 2 for Class 2 nodes. Using these settings of the contention window, while maintaining the other backoff parameters at their standard-defined defaults, gives high-priority nodes a higher opportunity to access the medium. This is because their small contention window size reduces the duration of their idle channel sensing.
Severino et al. work on differentiating traffic classes within the CAP such that differentiated services are offered to time-critical messages [5]. Their approach is based on the proper tuning of the IEEE 802.15.4 parameters macMinBE, macMaxBE, and CW_init (the initial size of the contention window). The tuning depends on whether the frame is identified as high-priority or not. Data frames are considered of low priority while command frames, like alarm reports and GTS requests, are considered of high priority. Therefore, nodes use different parameter settings depending on their traffic type. Similar to [4], the settings are chosen such that the backoff periods of high-priority frames are made shorter than those of the low-priority frames. Furthermore, while a queue of different frames is building up, Priority Queuing is used such that higher-priority frames are selected first for transmission.

Jardosh et al. in [6] propose an explicit priority scheme for IEEE 802.15.4. According to this scheme, nodes are categorized into critical nodes and normal nodes. Critical nodes are the ones that have important information to send to the coordinator, while normal nodes send routine information, which can tolerate some delay. Critical nodes are considered of high priority while normal nodes are considered of low priority. The coordinator can learn about this categorization using a secondary beacon. Basically, critical nodes send the coordinator this beacon to indicate their high-priority traffic. With that information, the coordinator restricts the contention during the CAP to only those critical nodes. That is, the coordinator includes the priority information in the primary beacon that it periodically broadcasts to all the nodes. Once notified, the normal nodes will refrain from attempting to access the medium during the CAP. This way traffic priority is preserved and critical information is given preference over regular information.

The Priority-Based BEB Algorithm
From the description of the IEEE 802.15.4's CSMA-CA, we can see that it implicitly recognizes two classes of priority. In particular, the first class is given to the data packets while the second class is given to their associated ACK packets. This can be noticed by observing that CCA1 is firstly needed to avoid any collision with an ongoing data packet transmission. After that, CCA2 is imposed such that the ACK for that packet is transferred successfully. However, this functionality does not consider the number of attempts that certain nodes have been making to access the medium without any success. These nodes are more prone to deplete their power resources at a higher pace without utilizing these resources in useful activities. Different nodes should be treated fairly, such that those that experience repeated access failures are given higher priority to access the medium.
In the Priority-Based BEB (PB-BEB) algorithm we propose to extend the BEB algorithm such that the number of CCAs is not confined to only two. Instead, the number of CCAs is dictated by the level of collisions over the communication medium. In other words, after a node conducts its regular BEB-defined CCAs, it will be required to conduct more CCAs before being able to start its transmission. The total number of CCAs conducted by a node is determined by the following formula:

n_CCA = 2 + ⌈A · P_c⌉,   (1)

where P_c is the probability of collision and A is a constant. The first term in Equation (1) indicates that PB-BEB keeps the two CCAs of BEB without modification. This is required in order to preserve the aforementioned functionality of BEB, in which the highest priority is assigned to the ACK packet. The second term in Equation (1) indicates that the addition of the extra CCAs depends on the collisions experienced by the packets. That is, we adapt the number of extra CCAs undergone by a node depending on the activities over the wireless channel. P_c is computed as follows:

P_c = n_f / (n_s + n_f),   (2)

where n_s is the total number of successfully transmitted packets and n_f is the total number of failed packets. The latter refers to packets discarded due to either channel access failure (when exceeding macMaxCSMABackoffs) or transmission failure (when exceeding macMaxFrameRetries). Equation (2) is computed locally at each node. In Equation (1), A is a constant that is set to the value macMaxCSMABackoffs. We use the latter value to indicate that we need the number of extra CCAs to remain below the maximum number of backoff stages allowed. In Figure 2 we illustrate the CCA timeslots of our system.
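A minimal sketch of Equations (1) and (2) follows. Since the exact rounding used in Equation (1) is not recoverable from the extracted text, the sketch assumes a ceiling, so that the extra term is an integer that vanishes on a collision-free channel:

```python
import math

MAC_MAX_CSMA_BACKOFFS = 4   # IEEE 802.15.4 default; used as the constant A

def collision_probability(n_success: int, n_failed: int) -> float:
    """Equation (2): P_c = n_f / (n_s + n_f), computed from the node's
    local counters of successful and failed (discarded) packets."""
    total = n_success + n_failed
    return n_failed / total if total > 0 else 0.0

def required_ccas(n_success: int, n_failed: int) -> int:
    """Equation (1): the two mandatory BEB CCAs plus extra CCAs that
    grow with the locally observed collision level (ceiling assumed)."""
    p_c = collision_probability(n_success, n_failed)
    return 2 + math.ceil(MAC_MAX_CSMA_BACKOFFS * p_c)

# A collision-free channel keeps plain BEB behavior (2 CCAs); a lossy
# one adds extra CCAs, up to 2 + A when every packet collides.
print(required_ccas(100, 0))    # 2
print(required_ccas(50, 50))    # 4
print(required_ccas(0, 100))    # 6
```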
As the node starts conducting its extra CCAs, it may find the medium busy, and therefore it will backoff again. Once the backoff counter expires, the node will not restart the CCAs from CCA1. Instead, it will continue from exactly the same CCA it stopped at. The only exception to this rule is if the node previously stopped at CCA2; in that case, the node will have to restart from CCA1. Again, this keeps unchanged the original functionality of BEB, which gives priority to the ACK packet over all other packets. In brief, PB-BEB applies the following rule to find the next CCA the node will conduct:

CCA_next = CCA_last, if CCA_last > 2;  CCA_next = CCA1, otherwise,   (3)

where CCA_last is the last CCA at which the node found the medium busy. The result of imposing Equation (3) is that a node that has been experiencing multiple backoffs while trying to send a packet is given a higher priority to access the medium than nodes that started their contention at a later time. This means that we are able to integrate a degree of priority into CSMA-CA that has been absent in BEB. In Figure 3 we show the flow diagram of PB-BEB.
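Equation (3) reduces to a one-line rule; the sketch below (the function name is ours) makes the resume-where-you-stopped behavior explicit:

```python
def next_cca(last_busy_cca: int) -> int:
    """Equation (3): resume clear channel assessment at the CCA where
    the node last found the medium busy, except that stopping at CCA1
    or CCA2 restarts the sequence from CCA1 (preserving BEB's ACK
    protection across the two mandatory CCAs)."""
    return last_busy_cca if last_busy_cca > 2 else 1

# A node that reached CCA5 before sensing a busy medium resumes there
# after its next backoff; a node stopped at CCA2 starts over.
print(next_cca(5))  # 5
print(next_cca(2))  # 1
```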
Performance Evaluation

In this section we compare the performance of PB-BEB and BEB. The performance parameters we concentrate on are fairness, channel utilization, reliability, average power consumption, channel collision time, delay, and channel idle time. Our simulations are run over a C-based simulator that we have developed. The network we study is of a peer-to-peer topology. We list in Table 1 the simulation parameters we use in our performance evaluation (some parameters are adopted from the work of Pollin et al. in [7]).
In Table 1, CCA power is the power consumed while sensing the medium during the clear channel assessment states. The network operates under the beacon-enabled IEEE 802.15.4, with the superframe constituted by only the CAP. The traffic is assumed to be saturated (i.e., nodes always have packets to send). In the following subsections we show and discuss the results of our simulations.
Fairness

Testing the fairness aims to assert that nodes are getting equal opportunity to access the wireless medium. In measuring the fairness, we depend on Jain's fairness index [8], which is expressed as follows:

J = (∑ᵢ xᵢ)² / (N · ∑ᵢ xᵢ²),   (4)

where N is the total number of nodes available in the network and xᵢ is the ith node's share of the medium. A backoff algorithm is deemed fair if it can achieve a fairness index close to 1. In Figure 4 we show the fairness index for both PB-BEB and BEB. The graph clearly shows that as the network's size grows beyond 100 nodes, BEB falls behind PB-BEB in treating the nodes fairly. In fact, we can see that PB-BEB achieves a significant improvement over BEB. For example, at N = 200, BEB achieves a fairness index of 0.77 while PB-BEB achieves a noticeably higher index.
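Equation (4) is straightforward to compute from per-node shares; a minimal sketch:

```python
def jains_fairness_index(shares: list[float]) -> float:
    """Equation (4): J = (sum x_i)^2 / (N * sum x_i^2), where x_i is
    the i-th node's share of the medium. J = 1 means perfect fairness;
    J approaches 1/N as a single node monopolizes the channel."""
    n = len(shares)
    total = sum(shares)
    return total**2 / (n * sum(x**2 for x in shares))

print(jains_fairness_index([1.0, 1.0, 1.0, 1.0]))  # 1.0  (perfectly fair)
print(jains_fairness_index([4.0, 0.0, 0.0, 0.0]))  # 0.25 (one node dominates)
```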
Channel Utilization
Channel Utilization (U) is the proportion of time the wireless channel is being used to successfully deliver packets. We define U as U = L/T, where L is the packet length (in time units) and T is the total duration spent to deliver the packet to its destination. This duration includes the backoff periods, the packet transmission time, and the time wasted while retrying (due to experiencing multiple collisions) to send the packet. In Figure 5 we show the performance in terms of U for both BEB and PB-BEB. We can clearly observe that PB-BEB significantly outperforms BEB. At a network size of 100 nodes, for instance, PB-BEB achieves a U of 53.3% while BEB utilizes the channel by as low as 4.3%.
Reliability
Reliability (R) is the probability of transmitting a packet successfully. Stated differently, R is the probability of not discarding a packet. The latter reflects the fact that nodes may backoff multiple times and/or suffer from multiple collisions before managing to send the packet. An algorithm of high R is one that can reduce the possibility of repetitive backoffs and/or collisions while attempting to send a packet. We illustrate in Figure 6 the performance of PB-BEB and BEB in terms of R. PB-BEB is able to achieve higher R than BEB as the size of the network grows. At a network size of 50 nodes, PB-BEB achieves a reliability of 10% while BEB's reliability is only 4.4%.
Average Power Consumption
It is essential to study the power requirements of any algorithm devised for WSNs. This is because sensor nodes are battery-powered, and we need their power usage to be conservative in order to prolong the lifetime of the sensor node, and thus the network. In Figure 7 we show the average power consumption required by PB-BEB and BEB. The graph shows that the performance of both PB-BEB and BEB is generally comparable. Therefore, it is interesting to investigate the activities during which the nodes' power resources are consumed, to see which activities contribute more to that consumption. In Figure 8 we show the power wasted due to collisions when the network operates under PB-BEB or BEB. It is quite evident that BEB wastes a large amount of power in collisions. For instance, at a network size of 45 nodes, the average power consumption of BEB is 1.34 W/s (Figure 7). From Figure 8, we observe that 0.38 W/s is wasted due to collisions, which contributes 28.4% of the average power consumed. The contribution becomes 30.6% at N = 100. However, under PB-BEB, the power wasted due to collisions contributes only 10% (at N = 45) and 6% (at N = 100) of the average power consumption.
Channel Collision Time
Channel Collision Time (T_CC) refers to the proportion of time the channel is busy due to collisions. This parameter measures the percentage of time the channel is being utilized in useless activities, and therefore, should be reduced as much as possible. We illustrate the performance in terms of T_CC in Figure 9. This figure demonstrates the significant reduction in T_CC that PB-BEB can achieve compared to BEB. For example, at N = 100, PB-BEB and BEB result in a T_CC of 27.7% and 83.1%, respectively. This means that PB-BEB can considerably reduce the percentage of time during which the wireless channel is wasted due to collisions.
Delay
The delay encountered to deliver a packet to its destination is an important metric that gives more insight into the performance of PB-BEB. The delay is measured starting from the instant the packet is available at the node till it is finally received at its destination. That is, the time spent in backoff stages, transmission retries, and CCAs is included in this measurement. In Figure 10 we can see that PB-BEB causes an increase in the delay. At N = 200, PB-BEB increases the delay by 22.3% compared to BEB. This is a consequence of introducing extra CCA states, with which the node encounters additional waiting periods before being able to send its packet.
Channel Idle Time
Channel idle time (T_CI) refers to the proportion of time the channel is free of any transmissions or collisions. This metric measures the percentage of time during which all nodes are either in backoff or CCA states. Therefore, T_CI should be reduced as much as possible because it indicates that the wireless channel is not being used. From the definition of T_CI we can see that it is the complement of both U and T_CC. That is, we compute T_CI as follows: T_CI = 1 - U - T_CC. In Figure 11 we show the performance of PB-BEB and BEB in terms of T_CI. It comes as no surprise that PB-BEB results in an excessive amount of idle time. Again, this behavior is due to the fact that we are introducing extra CCAs, with which nodes encounter additional waiting periods before being able to send their packets. However, although BEB results in lower T_CI, it causes excessive collisions, as is evident in Figure 9.
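Since T_CI is defined as the complement of U and T_CC, the three proportions must sum to one. The following minimal check, using the N = 100 PB-BEB figures quoted above, makes the bookkeeping explicit.

```python
def channel_idle_time(utilization, collision_time):
    """T_CI = 1 - U - T_CC, where all three quantities are
    proportions of the total observation time in [0, 1]."""
    t_ci = 1.0 - utilization - collision_time
    assert 0.0 <= t_ci <= 1.0, "U + T_CC must not exceed 1"
    return t_ci

# N = 100 under PB-BEB: U = 53.3% and T_CC = 27.7% (from the text)
print(channel_idle_time(0.533, 0.277))  # -> 0.19, i.e., 19% idle
```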
Discussion
Our simulation results showed a superior performance for PB-BEB over BEB, except for the delay. The reason behind these enhancements in the performance is that as we preserve the priority of certain nodes, we basically increase their likelihood of medium access. That is, as different nodes commence their channel sensing at different CCAs, the number of nodes contending to access the channel is reduced, and therefore the probability of collision is reduced. This is reflected in improved U, R, and T_CC, as well as reduced power consumption due to collisions. The fact that introducing extra CCAs does not result in increased power consumption is a direct result of making n_CCA change probabilistically. This is because the second term in Equation (1) will be eliminated.
Conclusion
In this paper we tackled the problem of prioritizing the nodes in the CSMA-CA mechanism. CSMA-CA applies the BEB algorithm to manage the contention among the nodes attempting to access the wireless medium. The problem with BEB is that it treats all the nodes equally without giving any consideration to the repeated channel access failures or transmission failures experienced by some nodes. We propose the Priority-Based BEB (PB-BEB) to fill that hole in the design of BEB. In PB-BEB, the number of CCAs is controlled probabilistically according to the collision level over the communication medium. If the node finds the medium busy at a certain CCA, beyond the mandatory CCAs of BEB, the node will back off. At the end of the last backoff, the node will start its clear channel assessment at exactly the last CCA where it stopped. This means that a node that has been experiencing multiple backoffs while trying to send a packet is given a higher priority to access the medium than those nodes that started their contention at a later time. Our simulations show that PB-BEB is able to outperform BEB in terms of fairness. Also, PB-BEB shows superior performance over BEB in terms of channel utilization, reliability, power wasted in collisions, and channel collision time. However, the main drawback of PB-BEB, compared to BEB, is that it leads to increased delay. The latter is an expected outcome, since nodes are forced to conduct an extra number of CCAs, which delays the transmission of a packet.
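To make the resume-at-last-CCA rule concrete, the following Python sketch mimics the mechanism as described above. It is an illustration, not the authors' simulation code: the two mandatory CCAs follow slotted IEEE 802.15.4, while `channel_busy` and the backoff loop are hypothetical stand-ins.

```python
import random

MANDATORY_CCAS = 2  # slotted IEEE 802.15.4 CSMA-CA performs two mandatory CCAs

def channel_busy():
    """Hypothetical stand-in for sensing the medium during one CCA slot."""
    return random.random() < 0.5

def pb_beb_attempt(n_cca_extra, max_backoffs=5):
    """Sketch of PB-BEB: if the medium is busy at CCA index k, the node
    backs off, but on its next attempt it resumes clear channel
    assessment at exactly CCA index k instead of restarting from zero."""
    cca_index = 0
    total_ccas = MANDATORY_CCAS + n_cca_extra
    for _ in range(max_backoffs):
        while cca_index < total_ccas:
            if channel_busy():
                break            # busy: back off, but keep cca_index
            cca_index += 1       # this CCA slot was clear
        else:
            return True          # all CCAs clear: transmit the packet
        # (the backoff delay would be spent here; preserving cca_index
        #  is what gives long-contending nodes priority over newcomers)
    return False                 # give up after max_backoffs attempts

print(pb_beb_attempt(n_cca_extra=2))
```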
Figure 2 .
Figure 2. PB-BEB uses the original CCAs of BEB and adds extra CCAs.
Figure 4 .
Figure 4. Fairness of PB-BEB compared to BEB.
Figure 7 .
Figure 7. Average power consumed under PB-BEB compared to BEB.
Figure 8 .
Figure 8. Power wasted in collisions with PB-BEB compared to BEB.
Figure 9 .
Figure 9. Channel collision time with PB-BEB compared to BEB.
Figure 10 .
Figure 10. Delay under PB-BEB compared to BEB.
Figure 11 .
Figure 11. Channel idle time with PB-BEB compared to BEB.
The relationship between micronutrient status, frailty, systemic inflammation, and clinical outcomes in patients admitted to hospital with COVID-19
Background Micronutrients have been associated with disease severity and poorer clinical outcomes in patients with COVID-19. However, there is a paucity of studies examining if the relationship between micronutrient status and clinical outcomes is independent of recognised prognostic factors, specifically frailty and the systemic inflammatory response (SIR). The aim of the present study was to examine the relationship between micronutrient status, frailty, systemic inflammation, and clinical outcomes in patients admitted with COVID-19. Methods Retrospective analysis of prospectively collected data was performed on patients with confirmed COVID-19, admitted to hospital between 1st April 2020 and 6th July 2020. Clinicopathological characteristics, frailty assessment, biochemical and micronutrient laboratory results were recorded. Frailty status was determined using the Clinical Frailty Scale. SIR was determined using serum CRP. Clinical outcomes of interest were oxygen requirement, ITU admission and 30-day mortality. Categorical variables were analysed using the chi-square test and binary logistic regression analysis. Continuous variables were analysed using the Mann-Whitney U or Kruskal-Wallis tests. Results 281 patients were included. 55% (n = 155) were aged ≥ 70 years and 39% (n = 109) were male. 49% (n = 138) of patients were frail (CFS > 3). 86% (n = 242) of patients had a serum CRP > 10 mg/L. On univariate analysis, frailty was significantly associated with thirty-day mortality (p < 0.001). On univariate analysis, serum CRP was found to be significantly associated with an oxygen requirement on admission in non-frail patients (p = 0.004). Over a third (36%) of non-frail patients had a low vitamin B1, despite having normal reference range values of red cell B2, B6 and selenium. Furthermore, serum CRP was found to be significantly associated with a lower median red cell vitamin B1 (p = 0.029). Conclusion Vitamin B1 stores may be depleted in COVID-19 patients experiencing a significant SIR, providing a rationale for thiamine supplementation. Further longitudinal studies are warranted to delineate the trend in thiamine status following COVID-19.
Introduction
Micronutrients, broadly classified into vitamins and trace elements, are a cornerstone of human metabolism [1]. From the maintenance of normal tissue function to the prevention of disease, the role of micronutrients has been widely studied [2], specifically in the normal functioning of the healthy immune system and the regulation of oxidative stress [3][4][5]. It has been postulated that deficiencies in certain micronutrients can compromise the body's innate and adaptive immune responses to pathogens [6], and that supplementation of micronutrients may improve clinical outcomes in critically ill patients admitted with sepsis [7].
Trace elements including copper, magnesium, selenium and zinc have been associated with disease severity and poorer clinical outcomes in patients with COVID-19 [8][9][10][11]. This has led to a corresponding call for supplementation of certain micronutrients to improve outcomes in patients with COVID-19 [12]. However, studies within the present literature have often been carried out using serum or plasma measurements that are recognised to be perturbed by the systemic inflammatory response [13]. This is likely to be a confounding factor given that an exaggerated host inflammatory response (the cytokine storm) is now recognised as a hallmark of severe COVID-19 [14][15][16]. Furthermore, results may have been confounded by micronutrient deficiencies being more prevalent in frail patients [17,18], frailty itself being another robust prognostic factor in COVID-19 [19]. Therefore, it remains unclear whether there is a causal effect between micronutrient deficiencies and clinical outcomes in patients with COVID-19 [20].
In contrast to serum/plasma measurements, red-cell micronutrient concentrations are thought to be reliable markers of long-term status, not confounded by the systemic inflammatory response (SIR) [21,22]. Such measures may be more useful markers for examining micronutrient status in patients with an acute inflammatory pathology such as COVID-19 [22]. While it has been postulated that deficiencies in vitamins, specifically B and D, may be rational therapeutic targets in patients with COVID-19, there is a paucity of studies examining the relationship with recognised prognostic factors, specifically frailty and the SIR [23,24]. Furthermore, within the current literature, studies examining the potential therapeutic benefit of micronutrient supplementation on clinical outcomes have often not reported baseline micronutrient values [25,26]. As such, it remains unclear whether such deficiencies are a result of COVID-19 per se, or simply that COVID-19 occurs on a background of micronutrient deficiency. Therefore, the aim of the present study was to examine the relationship between micronutrient status, frailty, systemic inflammation and clinical outcomes in patients admitted with COVID-19: firstly, to examine whether there are micronutrient perturbations in patients admitted with COVID-19; and secondly, to examine whether any micronutrient perturbations are independent of frailty status and the systemic inflammatory response.
Methods
A retrospective analysis was carried out of prospectively collected data on patients who presented to Glasgow Royal Infirmary or the Queen Elizabeth University Hospital, Glasgow, U.K., between 1st April 2020 and 6th July 2020. In line with NHS policy, this study was approved by the NHS Greater Glasgow and Clyde biorepository research ethics committee. The study protocol (GN20AE307) was approved by the Northwest England-Preston research ethics committee (20/NW/0336) and registered with clinicaltrials.gov (NCT04484545), as previously described [27].
Patients with either a positive polymerase chain reaction (PCR) test or radiological changes characteristic of COVID-19 infection, reported on chest X-ray (CXR) or CT thorax by a board-certified radiologist on admission, were assessed for inclusion in the study. Patients who had samples taken for storage in the Biorepository at the time of presentation and subsequently analysed for micronutrient status were included. Exclusion criteria were the absence of a recorded COVID-19 test result or the absence of any micronutrient measurement.
Routine demographic details, clinicopathological characteristics, frailty and nutritional assessments, as well as biochemical laboratory results were recorded. Age, sex, documented micronutrient assessment and diagnostic modality confirming COVID-19 infection, as well as date of admission and 30-day mortality status were minimal inclusion criteria. Age was categorised as < 70/≥ 70 years. BMI was categorised as < 25/≥ 25 kg/m². Co-morbidity data collected included a diagnosis of hypertension, heart failure, chronic obstructive pulmonary disease, type 2 diabetes mellitus, liver disease, chronic kidney disease and active cancer. Frailty was assessed using the 9-category Clinical Frailty Scale (CFS) [28]. Malnutrition was screened for using the five-step Malnutrition Universal Screening Tool (MUST) [29]. Both frailty and MUST scores were identified from admission nursing assessments. Patients with CFS > 3 were categorised as frail. Patients were classified as no risk (MUST = 0), or at risk of malnutrition (MUST ≥ 1). The magnitude of the SIR was determined using admission serum C-reactive protein (CRP). Values were categorised as ≤ 10/11-80/> 80 mg/L, as per previous studies of the relationship between micronutrient concentrations and the SIR [13]. Admission serum albumin concentration values were also recorded.
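To make the thresholds above explicit, the following minimal Python sketch derives the categorical variables from raw admission values; the function and field names are hypothetical, chosen only for illustration.

```python
def categorise_patient(age, bmi, cfs, must, crp):
    """Derive the categorical variables described in the text:
    CFS > 3 = frail, MUST >= 1 = at risk of malnutrition,
    CRP banded as <=10 / 11-80 / >80 mg/L."""
    return {
        "age_group": ">=70" if age >= 70 else "<70",
        "bmi_group": ">=25" if bmi >= 25 else "<25",
        "frail": cfs > 3,
        "malnutrition_risk": must >= 1,
        "crp_band": "<=10" if crp <= 10 else ("11-80" if crp <= 80 else ">80"),
    }

print(categorise_patient(age=74, bmi=23.5, cfs=5, must=1, crp=92))
```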
Micronutrient analysis
Venous blood samples of approximately 5 mL, collected from patients on admission to hospital in EDTA and non-gel heparin tubes, were used for the measurement of micronutrients. Samples were centrifuged (3500×g for 15 min at 4 °C) and the plasma was removed. Then, packed red cells were carefully prepared by removing all remaining plasma and buffy coat prior to storage at − 70 °C. Samples were analysed within 6 months of collection. Samples from individual patients were assayed in the same batch to minimize analytical variation. In total, eight micronutrients (vitamins and trace elements) were measured in plasma or erythrocytes. Analysis was performed by the Scottish Trace Element and Micronutrient Diagnostic and Research Laboratory, a nationally accredited service.
Vitamin B1 (thiamine diphosphate; TDP) is present almost exclusively in red cells, and so vitamin B1 status was assessed by measuring TDP in red cells. An HPLC system with post-column ferricyanide derivatization and fluorometric detection was used as described previously [30]. The TDP concentration in red cells was expressed as a ratio to haemoglobin (Hb) in the sample (ng TDP/g Hb).
The within-batch imprecision was 5.1% at 380 ng/g Hb.
Vitamin B2 (flavin adenine dinucleotide; FAD) measurement in whole blood and erythrocytes was based on the method of Speek and co-workers. Briefly, diluted red cell hemolysates were precipitated with 70% perchloric acid and centrifuged. The supernatant was then injected for HPLC analysis. FAD was separated on an isocratic HPLC system with a reversed-phase C18 column and fluorescence detection. The within-batch imprecision was 4.8% at 384 nmol/L for whole blood FAD and 4.8% at 2.8 nmol/g Hb for red cell FAD.
Vitamin B6 (pyridoxal phosphate; PLP) concentrations in red cells were measured by HPLC using pre-column semicarbazide derivatization and fluorescence detection as described previously [31]. The within-batch imprecision was 5.2% at 367 pmol/g Hb.
PLP, FAD and TDP concentrations in red cells were adjusted to haemoglobin (Hb) rather than to the volume of packed red cells because accurate pipetting of packed red cells is difficult due to the high viscosity. The HPLC system for measurement of vitamins B1, B2 and B6 consisted of a Waters solvent delivery system and a Waters fluorimeter, Model 2475 (Waters, Wilmslow, UK).
Inductively coupled plasma mass spectrometry (Agilent Technologies, Cheadle, UK) was used to measure plasma copper and selenium, as well as red cell selenium and iron. Plasma samples were diluted 1 in 20 with a 2% ammonia solution containing 50 µg/L germanium, which was present as an internal standard. Copper and selenium were measured in plasma using a 7900 inductively-coupled plasma mass spectrometer (Agilent Technologies, Inc., Santa Clara, CA, USA) operating in helium mode. The coefficient of variation for plasma selenium was < 4%.
Packed erythrocytes were diluted 1 in 80 with a 0.5% ammonia solution containing 50 µg/L scandium, which was present as an internal standard. Selenium and iron were measured simultaneously in erythrocytes using a 7900 inductively-coupled plasma mass spectrometer operating in hydrogen mode for selenium and helium mode for iron. Erythrocyte selenium was expressed as a ratio to haemoglobin concentration to correct for potential inaccuracies associated with pipetting packed erythrocytes and to minimise imprecision. Unlike plasma, erythrocytes are viscous, making accurate pipetting difficult. Iron was measured as a surrogate for haemoglobin, the concentration of which was calculated using the following equation (where 64,456 is the molecular weight of haemoglobin in g/mol and the denominator is the number of atoms of iron per haemoglobin molecule):

Hb (g/L) = [Fe (mol/L) × 64,456] / 4

The coefficient of variation for the erythrocyte selenium:haemoglobin ratio was < 6%.
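A short worked example of that conversion follows; the input values are illustrative only, and the helper names are our own.

```python
HB_MOLAR_MASS = 64456.0   # g/mol, as stated in the text
FE_ATOMS_PER_HB = 4       # iron atoms per haemoglobin molecule

def hb_from_iron(fe_umol_per_l):
    """Convert erythrocyte iron (umol/L) to haemoglobin (g/L)."""
    hb_mol_per_l = (fe_umol_per_l * 1e-6) / FE_ATOMS_PER_HB
    return hb_mol_per_l * HB_MOLAR_MASS

def se_hb_ratio(se_nmol_per_l, fe_umol_per_l):
    """Express erythrocyte selenium as a ratio to haemoglobin (nmol/g Hb)."""
    return se_nmol_per_l / hb_from_iron(fe_umol_per_l)

# Illustrative inputs only: 20,000 umol/L iron, 500 nmol/L selenium
print(f"Hb = {hb_from_iron(20000):.0f} g/L")        # ~322 g/L (packed cells)
print(f"Se:Hb = {se_hb_ratio(500, 20000):.2f} nmol/g Hb")
```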
Vitamin D, vitamin B12, magnesium and LDH were measured in serum by routine laboratory procedures using automated analysers (Alinity, Abbott Diagnostics). The CV for these methods was < 5%.
For all methods, Quality Assurance and Quality Control were assessed using Certified Reference Materials and through external Quality Assessment schemes (data available on request).
Statistical analysis
Demographic data, clinicopathological variables, CFS, BMI, MUST score, micronutrient levels, CRP, albumin, oxygen requirement, ITU admission and 30-day mortality were presented as categorical variables. Categorical variables were analysed using the chi-square test for linear-by-linear association. For categorical variables, Fisher's exact test was used when the value of a single cell of a two-by-two table was n ≤ 5. Micronutrient concentrations were also presented as continuous variables and analysed using the Mann-Whitney U or Kruskal-Wallis tests.
The relationships between clinicopathological variables were also examined using univariate and multivariate binary logistic regression with the backward conditional method. Covariates with a significance value of p < 0.1 in the univariate analysis were included in the multivariate analysis. The present study tested the hypothesis that patients with COVID-19 were similar to other patient groups with regard to micronutrient status. Given the exploratory nature of the present analysis, no formal power calculation was carried out.
Missing data were excluded from analysis on a variable-by-variable basis. Two-tailed p values < 0.05 were considered statistically significant. Statistical analysis was performed using SPSS software version 25.0 (SPSS Inc., Chicago, IL, USA).
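For readers reproducing the univariate tests outside SPSS, equivalent routines exist in the Python scientific stack. The sketch below runs the chi-square/Fisher and Mann-Whitney tests on synthetic data; the variables and effect sizes are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic cohort: frailty status and 30-day mortality for 281 patients
frail = rng.integers(0, 2, size=281)
mortality = (rng.random(281) < np.where(frail == 1, 0.30, 0.10)).astype(int)

# 2x2 contingency table; Fisher's exact test if any cell has n <= 5
table = np.array([[np.sum((frail == f) & (mortality == m))
                   for m in (0, 1)] for f in (0, 1)])
if (table <= 5).any():
    _, p = stats.fisher_exact(table)
else:
    _, p, _, _ = stats.chi2_contingency(table)
print(f"frailty vs 30-day mortality: p = {p:.4f}")

# Mann-Whitney U for a continuous micronutrient against a binary grouping
b1 = rng.normal(400, 80, size=281)  # invented red cell vitamin B1 values
print(stats.mannwhitneyu(b1[frail == 1], b1[frail == 0]))
```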
Results

The median value of each micronutrient analysed as part of the study and the prevalence of patients with values outside of the reference range of that micronutrient are shown in Table 2.
The relationship between low vitamin B1 and clinicopathological characteristics, BMI, MUST, systemic inflammatory response and clinical outcomes in non-frail patients admitted with COVID-19 is shown in Table 5. On univariate analysis, a low vitamin B1 was not significantly associated with age (p = 0.
Discussion
To our knowledge, the present study is the first to examine the relationships between micronutrient status, frailty, systemic inflammation and clinical outcomes in patients admitted with COVID-19. The results of the present study show that among the non-frail patients (younger, healthier and less malnourished), over a third (36%) had low vitamin B1. This was in contrast to the other red cell measures, including B2, B6 and selenium, for which no patients were deficient (see Table 3). Furthermore, the median vitamin B1 concentration was inversely related to the magnitude of the systemic inflammatory response (SIR). Therefore, although the basis of the deficiency is unclear, targeted supplementation of thiamine may have the potential to improve clinical outcomes in patients with COVID-19, particularly in those patients who experience a significant SIR, a robust prognostic factor adversely associated with clinical outcomes in COVID-19 [32,33] and a risk factor for developing long COVID [34,35]. Although postulated as a therapeutic target in patients with COVID-19 [23], the prevalence of thiamine deficiency in COVID-19 patients remains unknown. Indeed, to date, preliminary studies examining the effects of thiamine supplementation on clinical outcomes in patients with COVID-19 have failed to report baseline thiamine status or the relationship with other red cell vitamins [25]. Therefore, the present observations are informative, finding that on a background of normal reference range values of the other red cell vitamins, over a third (36%) of non-frail patients admitted with COVID-19 had a low vitamin B1 (thiamine). Given that the prevalence of vitamin B1 deficiency in the present study is higher than that reported by contemporary studies of critically unwell patients (20%) [36,37], it provides a rationale for thiamine supplementation in patients admitted with COVID-19. However, comparative studies of healthy individuals are required to determine whether the vitamin B1 deficiencies observed are endemic to our population post-pandemic. While a low vitamin B1 was not found to be significantly associated with the SIR or clinical outcomes in the present study (see Table 5), the median vitamin B1 concentration was found to be inversely related to the magnitude of the SIR, a robust prognostic factor [32]. Indeed, the lowest median vitamin B1 concentration was observed in patients experiencing the highest magnitude of SIR (CRP > 80 mg/L). As such, the present observations suggest that vitamin B1 stores may be depleted in COVID-19 patients experiencing a higher magnitude of SIR. One hypothesis for the thiamine deficiency observed in patients with COVID-19 is increased consumption to meet the energy demands of protein and ribonucleotide synthesis for viral replication [37]. If confirmed in future studies, this may have a number of clinical implications. Firstly, thiamine supplementation may be required to replenish depleted stores in hospitalized COVID-19 patients who experience a high magnitude of SIR [25,38]. Secondly, given the similarities in symptoms of thiamine deficiency [39,40], depletion of vitamin B1 stores may explain the basis of the relationship between a high magnitude of SIR during COVID-19 infection and the development of long COVID [34,35]. Further studies of thiamine status following recovery from COVID-19 and in those with long COVID are therefore warranted.
Deficiencies in serum/plasma micronutrients, including vitamin D, selenium and copper, have been associated with disease severity and poorer clinical outcomes in patients with COVID-19 [8,9,41]. However, these studies have often been carried out using measurements that are recognised to be perturbed by the SIR [13,42,43]. This was highlighted in a recent meta-analysis by Oscanoa and co-workers, who stated that it was unclear whether the deficiencies observed in vitamin D were specific to COVID-19 severity or simply a consequence of the cytokine storm typically exhibited in patients with severe disease [26]. Indeed, both plasma selenium and copper were found to be associated with the magnitude of the SIR in the present study, in keeping with the contemporary literature [42,44,45]. Furthermore, despite nearly half of the patients studied (45%) being found to have low plasma selenium, no patients had a low red cell selenium, which is thought to be a reliable indicator not confounded by the SIR [21]. As such, the therapeutic benefit of supplementation cannot be determined, as it remains unclear if there is a causal effect between micronutrient deficiencies (assessed by measurement of their concentration in plasma) and clinical outcomes in patients with COVID-19, or if these are solely reflective of the magnitude of the SIR [2].
The present study has a number of limitations. Firstly, given the relatively small sample size, the present study may be subject to sample bias. Secondly, there is also potential for selection bias with micronutrient screening conducted in only 47% (n = 281) of the 599 patients admitted to hospital with COVID-19 during the study time frame. However, the clinicopathological characteristics of the included patients were similar to that of patients in the overall cohort [46]. Therefore, this was not considered to be a significant confounding factor to the present observations. Thirdly, analysis of all eight micronutrients examined in the present study was not possible in all patients included due to limited blood sample availability. As such, examination of micronutrient perturbation specific to COVID-19 was limited. Lastly, the absence of follow-up micronutrient screening to examine trends in status is a limitation. Serial measurements would be useful to delineate whether the deficiencies in micronutrients are a result of COVID-19 per se, or simply that COVID-19 occurred on a background of micronutrient deficiency.
Conclusion
In patients admitted with COVID-19 who were not frail, over a third (36%) of patients had low vitamin B1. Furthermore, the median vitamin B1 concentration was inversely associated with a higher magnitude of SIR. Therefore, in patients experiencing a significant SIR, thiamine stores may be depleted, providing a rationale for supplementation. Further longitudinal studies are warranted to delineate the trend in thiamine status following COVID-19.
CO2-switchable polymer-hybrid silver nanoparticles and their gas-tunable catalytic activity
The design of controllable or "signal-triggered" metal nanoparticles is one of the emerging trends in nanotechnology and advanced materials. CO2-switchable polymer-hybrid silver nanoparticles (AgNPs) were prepared by a one-pot reaction, reducing AgNO3 and trithioester-terminated PDEAEMA with sodium borohydride (NaBH4). The hybrids showed long-term stability, and their size and size distribution can be easily modulated by tuning the molar ratio of polymer to AgNO3. The hybrids not only exhibit hydrophobic-hydrophilic transitions in immiscible mixed solvents, but also undergo switchable dispersion/aggregation states upon alternate treatment with CO2 and N2. Moreover, this smart hybrid was preliminarily used as a catalyst for the reduction of 4-nitrophenol. The catalytic activity of the hybrids can be switched and monotonically tuned by varying the flow rate of CO2 purged into the reaction system, which may open a new avenue for tailoring the catalytic activity of metal nanoparticles toward a given reaction.
To date, several kinds of stimuli-responsive polymers have been used to functionalize metal NPs to form "smart" catalysts. For example, temperature-responsive polymeric hydrogels, 24 micelles, 25,26 microgels, 27 "yolk-shell" structures, 28 and polymers 29 were employed to support metal NPs, forming temperature-switchable or -tunable catalysts. Similarly, metal NPs were immobilized in pH-responsive polymeric hydrogels, 30,31 microspheres, 32 and micelles 33 to obtain pH-responsive catalytic systems. Besides, a light-controllable catalyst was obtained by combining a temperature-responsive polymer with metal nanoparticles that can convert light into heat through light irradiation. 9 Though the use of temperature as a trigger for metal "smart" catalysts can switch a reaction and modulate its rate, changing the reaction temperature may cause side reactions or affect the reaction rate, 9 which leads to a non-monotonic correlation with temperature. As for pH triggers based on acids and bases, it is hard to tune the reaction rate, as the reaction usually happens within a fixed pH range. 23,31 Moreover, the use of acids and bases for tuning pH may contaminate or modify the final products. 9 Therefore, to switch catalytic activity and modulate catalytic rate, it is desirable to develop a "green" and simple trigger for metal "smart" catalysts.
Very recently, we and others have employed CO2 and inert gases such as nitrogen to switch between the hydrophobic and hydrophilic states of polymers with amidine 34,35 or amino groups, [36][37][38][39][40] and then to control the morphology of nanomaterials 34,40 or their self-assemblies. 36,39 As an abundant, economical, nontoxic, biocompatible, and renewable resource, the CO2 trigger requires only bubbling gas and leaves no contamination during the stimulation process, [34][35][36][37][38][39] which may satisfy the requirements of metal "smart" catalysts. Zhao and coworkers 41 pioneered the fabrication of CO2-switchable gold nanoparticles (AuNPs) by functionalizing AuNPs with CO2-switchable polymers, and found that the obtained AuNP hybrids can be dispersed-redispersed in and separated from aqueous solution by CO2 and N2 bubbling. Besides, the hybrids exhibit high catalytic activity, easier separation and better reusability for 4-nitrophenol reduction. Yuan et al. 42 embedded AuNPs into the shell of CO2-responsive magnetic hybrid nanospheres, and switched their catalytic activity through the swelling or collapse of the CO2-sensitive shell. Although these metal hybrid catalysts exhibit CO2-responsive properties, the studies were focused on gold, and on its reusability and switchability. To the best of our knowledge, there have been few reports of other CO2-responsive metal catalysts and of gas-modulated catalytic activity. Compared with AuNPs and other metal nanoparticles (like Pt, Pd), 43,44 silver nanoparticles (AgNPs) can be prepared more readily and inexpensively and also exhibit similar catalytic properties, which may have broad application prospects in catalysis. 45 As such, the appeal for CO2-switchable AgNPs with gas-tunable catalytic activity for the general development of smart catalysts remains high.
Resorting to the reducing agent sodium borohydride (NaBH4), AgNO3 and the trithioester end group of PDEAEMA were reduced to silver nanoparticles (AgNPs) and a thiol moiety, respectively; then PDEAEMA adsorbed onto the surface of the AgNPs via strong Ag-sulfur interaction, forming PDEAEMA-AgNP (Ag-P) hybrids (Scheme 1). The hydrophilic-hydrophobic properties and dispersibility of the AgNPs were examined by bubbling CO2 or N2. The hybrids were applied as a catalyst in a model catalytic reduction of 4-nitrophenol, as it is one of the most refractory pollutants that can occur in industrial waste waters. 42,48 Besides, the catalytic activity at different CO2 flow rates was preliminarily discerned, which may open a new avenue for tailoring the catalytic activity of metal nanoparticles toward a given reaction.
Preparation of PDEAEMA
PDEAEMA with a trithioester terminal group was prepared via RAFT polymerization. DEAEMA (6.0 g, 0.032 mol), AIBN (21 mg, 0.128 mmol), and CTA (0.23 g, 0.64 mmol) were added into a three-necked flask, followed by the addition of 30 mL of THF. After bubbling with nitrogen for at least 30 min, the reaction was heated to 70 °C for 24 h. The polymerization was then quenched by immersing the reaction flask in liquid nitrogen for about 5 minutes. The product mixture was diluted with 10 mL of THF, and the final product was obtained via precipitation in cold n-hexane (−78 °C, ca. 500 mL) followed by filtration over a G4 frit. The obtained product was redissolved in 20 mL of THF and reprecipitated in cold n-hexane (−78 °C, ca. 500 mL) again. The resultant solid was collected by filtration and dried in a vacuum oven for 24 h to give PDEAEMA. Yield: 5.45 g (~85%).
Preparation of silver nanoparticles
Different molar ratios (1:24, 1:12 and 1:6) of PDEAEMA to AgNO3 were investigated, and PDEAEMA-stabilized silver nanoparticle (Ag-P) dispersions were prepared by the reduction of AgNO3 using NaBH4 in methanol solution. The polymer-to-silver ratio of 1:24 serves as an example: 0.192 g of PDEAEMA was dissolved in 50 mL of methanol. 0.081 g of AgNO3 was dissolved in 2 mL of deionized water, and the AgNO3 solution was added into the PDEAEMA solution drop by drop under vigorous stirring (1200 rpm). After stirring for 30 minutes, 0.359 g of solid NaBH4 was slowly added to the mixed solution with stirring. The appearance of the mixture immediately changed from light yellow to dark brown. After stirring for 24 h, the mixture was concentrated to ca. 10 mL on a rotary evaporator. Then 2 mL of water was added while bubbling CO2 (30 mL min−1), which prevented solid precipitation. After bubbling for 30 minutes, another 2 mL of water was added. The above procedure was repeated until 10 mL of deionized water had been added. The mixture was then dialyzed against distilled water for one week to remove methanol, sodium borate and other unreacted impurities. The distilled water was changed every 4 h during the dialysis. Finally, the polymer-hybrid silver nanoparticle dispersion was obtained.
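As a sanity check on the recipe, the quoted masses imply a particular polymer molar mass for the stated 1:24 ratio. The arithmetic below uses the standard molar mass of AgNO3; the implied Mn is our inference, not a value reported in the text.

```python
M_AGNO3 = 169.87     # g/mol, molar mass of silver nitrate

m_agno3 = 0.081      # g, from the 1:24 recipe above
m_polymer = 0.192    # g
ratio = 24           # moles of AgNO3 per mole of PDEAEMA

n_agno3 = m_agno3 / M_AGNO3          # ~4.8e-4 mol
n_polymer = n_agno3 / ratio          # ~2.0e-5 mol
mn_implied = m_polymer / n_polymer   # implied number-average molar mass
print(f"implied PDEAEMA Mn ~ {mn_implied:,.0f} g/mol")  # ~9,700 g/mol
```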
Catalytic reduction of 4-nitrophenol
Referring to previous methods, 55 1 mL of 2.5 mM 4-nitrophenol and 1 mL of 250 mM NaBH4 solution were diluted with 23 mL of distilled water in a 50 mL beaker, and the mixture was stirred for 5 min at room temperature. Then, the mixture was transferred to a test tube, 0.01 mL of 5 mg mL−1 silver nanoparticle dispersion was added, and UV-vis absorption monitoring was started immediately. Based on the strength of the peak at 400 nm, the initial concentration (C0) of 4-nitrophenolate ions was calculated. At a certain reaction time, 3 mL of the mixture was taken out for UV-vis absorption monitoring, and the concentration (C) was calculated. The correlation of ln(C/C0) versus the reduction time t was estimated to be linear, and the slope was taken as the apparent reaction rate constant (k_app). k_app is the average value calculated from three runs of a certain measurement. Following the above steps, the flow rate of CO2 was controlled at 10 mL min−1, 15 mL min−1, 20 mL min−1, 25 mL min−1, 30 mL min−1, 40 mL min−1 and 50 mL min−1, respectively, to carry out the catalytic reaction. With the progress of the reaction, the color of the solution gradually fades from yellow to colorless. Notably, after bubbling CO2 for a minute, the pH of the mixture falls below 6.7. At the same time, a peak at 317 nm, the peak of 4-nitrophenol, appeared, indicating that NaBH4 was consumed by CO2 and the catalytic reaction was not complete. Therefore, a certain amount of NaBH4 should be added. At the end of the reaction, the silver nanoparticles were separated by bubbling N2.
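The pseudo-first-order analysis described above amounts to a linear fit of ln(C/C0) against time. The following minimal sketch shows how k_app would be extracted from absorbance readings at 400 nm, optionally discarding an induction period; the readings below are invented for illustration.

```python
import numpy as np

def k_app_from_absorbance(t_s, a400, t_induction=0.0):
    """Fit ln(C/C0) = -k_app * t for the 4-nitrophenolate peak at 400 nm.
    By Beer-Lambert, absorbance is proportional to concentration, so
    A/A0 stands in for C/C0; points before t_induction are discarded."""
    t = np.asarray(t_s, dtype=float)
    a = np.asarray(a400, dtype=float)
    mask = t >= t_induction
    tm, am = t[mask], a[mask]
    y = np.log(am / am[0])
    slope, _ = np.polyfit(tm - tm[0], y, 1)
    return -slope  # k_app in s^-1

# Invented absorbance readings, sampled every 2 minutes
t = [0, 120, 240, 360, 480, 600]
a = [1.00, 0.83, 0.69, 0.57, 0.47, 0.39]
print(f"k_app = {k_app_from_absorbance(t, a):.2e} s^-1")  # ~1.6e-3 s^-1
```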
Characterization
A UNICO UV-4802 double-beam spectrophotometer (UV-vis spectroscopy) was used to observe the position and migration of the silver nanoparticle peaks. The full width at half-maximum (FWHM) values of the UV-vis spectra were measured following previous methods. 57 In the catalytic reduction of 4-nitrophenol, UV-visible spectroscopy was used to follow the extent of reduction.
Infrared spectra were recorded on a Nicolet MX-1E FTIR (USA) spectrophotometer in the scanning range of 4000-400 cm−1 using the KBr pellet method.
The 1H NMR spectrum of the polymer was measured using a Nuclear Magnetic Resonance Spectrometer (AV CORP300, Bruker, Germany). The purified polymer was dissolved in CDCl3, and the molecular structure of the polymer was determined from the corresponding chemical shifts and integral areas in the 1H NMR spectrum.
The molecular weight and molecular weight distribution of the polymers were determined using a gel permeation chromatography (GPC) system equipped with a Waters 515 pump and a 2410 detector. The column temperature was set at 25 °C and THF was used as the mobile phase. Polystyrene (PSt) was used as the reference material.
Thermal gravimetric analysis (TGA, 299-F1, NETZSCH, Germany) was used to determine the polymer content of the polymer-modified silver nanoparticles. The sample was heated to 800 °C at a rate of 10 °C min−1 in a nitrogen atmosphere (flow rate 20 mL min−1). The dialyzed dark brown concentrated mixture was purged with N2 for 1 h and centrifuged at 10,000 rpm for 5 minutes; the upper layer was discarded, and the residue was washed with deionized water and centrifuged. After three such cycles, the mixture was suction filtered and the solid sample was vacuum dried at 50 °C for 24 hours prior to TGA and XRD characterizations.
The zeta potential values of the silver colloid were measured with a ZetaPALS zetameter (Brookhaven, USA). Each test was carried out five times and the average values were taken as the final results.
X-ray diffraction (XRD) patterns of the AgNP hybrids were recorded using a Rigaku DMAX2200 with Ni-filtered Cu Kα radiation over a scanning range of 30° to 80° at an X-ray power of 40 kV and 40 mA.
The conductivity of the Ag-P1 dispersion was measured with a DDS-11A conductometer (Chengdu Fangzhou Instrument) at 25 °C, and the average values were calculated from three runs of a certain measurement.
A transmission electron microscope (Tecnai 12, Philips, Holland) was used to observe the particle size distribution of the silver nanoparticles. The size distribution of each sample was determined from at least 1000 particles in the TEM images using image analysis software (Nano Measurer). For negative-staining TEM, the sample was stained with phosphotungstic acid before observation.
Synthesis and characterization
To bestow the CO2-switchable property on AgNPs, a CO2-responsive polymer was needed. Based on previous reports, polymers with amidine 34,35 or tertiary amine groups 36-40 exhibit CO2-responsive characteristics. However, for amidine-based polymers, heating is usually needed to expel the captured CO2. 34,35 PDEAEMA is a typical tertiary-amine-type CO2-responsive polymer. 36,37 Thus PDEAEMA was synthesized through RAFT polymerization, as RAFT is not only a powerful and versatile controlled radical polymerization technique that enables precise control over the MW and MW distribution, but also introduces a trithioester group at the end of the polymer. 49,50 Detailed preparation and characterization procedures can be found in the ESI. To form CO2-switchable AgNPs, the silver nanoparticles were prepared by reducing silver nitrate (AgNO3) with NaBH4 in the presence of the PDEAEMA. Meanwhile, the trithioester end group of PDEAEMA was reduced to a thiol group by NaBH4, [46][47][48][49] providing the basis for chemically attaching PDEAEMA to the surface of the AgNPs (Scheme 1). With such a one-pot protocol, a series of AgNP-PDEAEMA (Ag-P) dispersions were prepared by varying the molar ratio (1:6, 1:12 and 1:24) of PDEAEMA to AgNO3, as shown in Table 1. For characterization and storage, the AgNP hybrids were transferred from methanol to an aqueous environment by bubbling CO2, and three dark brown dispersions were finally obtained (inset, Fig. 1), indicative of the formation of AgNPs. 51 To characterize the formation of the AgNP hybrids, UV-vis spectroscopy, which is known for its sensitivity to the size, size distribution and morphology of metal nanoparticles, 51,52 was employed. As shown in Fig. 1, a single absorption peak is found in the region of 320-600 nm, resulting from the intense surface plasmon resonance (SPR) of the obtained AgNPs. 53 It is noteworthy that λmax gradually increases with increasing silver content of the Ag-polymer dispersions. Usually, the λmax of AgNPs shifts to longer wavelengths with increasing nanoparticle size, 52 suggesting that the AgNP size increases slightly upon increasing the silver content. This may arise from the increased collision frequency due to the formation of more Ag atoms. 46,54 Furthermore, the full width at half-maximum (FWHM) could be calculated from the UV-vis spectra, since the FWHM is useful for evaluating the polydispersity of AgNPs. 46,56 The FWHM value of Ag-P1 is 105 nm, which is similar to or slightly smaller than those previously reported for AgNPs. 46,52,55 Together with the symmetric absorption peaks, this implies that the size of Ag-P1 is uniform. 56 Compared with Ag-P1, the FWHM values for Ag-P2 (118 nm) and Ag-P3 (124 nm) are wider, suggesting that the polydispersity of the hybrids increased with increasing silver ratio.
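The FWHM values quoted above can be read directly off a digitised spectrum. The sketch below, using a synthetic Gaussian-like SPR band rather than the measured data, interpolates the two half-maximum crossings.

```python
import numpy as np

def fwhm(wavelength_nm, absorbance):
    """Full width at half maximum of a single absorption band,
    using linear interpolation at the half-maximum crossings."""
    w = np.asarray(wavelength_nm, float)
    a = np.asarray(absorbance, float)
    half = a.max() / 2.0
    above = np.where(a >= half)[0]
    lo, hi = above[0], above[-1]
    # interpolate the left (rising) and right (falling) crossings
    left = np.interp(half, [a[lo - 1], a[lo]], [w[lo - 1], w[lo]])
    right = np.interp(half, [a[hi + 1], a[hi]], [w[hi + 1], w[hi]])
    return right - left

# Synthetic SPR band centred at 408 nm with ~105 nm FWHM
w = np.linspace(300, 600, 601)
sigma = 105 / 2.3548          # FWHM = 2*sqrt(2*ln 2)*sigma for a Gaussian
a = np.exp(-0.5 * ((w - 408) / sigma) ** 2)
print(f"FWHM = {fwhm(w, a):.1f} nm")  # ~105 nm
```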
In order to get more direct information on the size, size distribution and morphology of the AgNPs, TEM observations were performed. Fig. 2 shows TEM images and size distribution histograms of the three AgNP samples. One can see that the AgNP hybrids display good dispersion and a spherical shape. With a total of 1000 particles counted by image analysis software (Nano Measurer) on a number basis, we found that Ag-P1 has an average size of 8.51 ± 2.8 nm with a narrow distribution. With increasing silver ratio, the size and size distribution become larger; the sizes of the other two AgNP samples are 10.06 ± 3.7 nm and 14.16 ± 6.6 nm, respectively. The TEM results are in good agreement with the UV-vis spectral analysis. Thus the size and size distribution of the as-obtained AgNPs can be easily modulated by varying the molar ratio of polymer to AgNO3. In addition, XRD was carried out to confirm the structure of the AgNPs. As illustrated in Fig. S3, the XRD pattern of the AgNPs shows characteristic diffraction peaks for the metallic silver [111], [200], [220] and [311] facets, indicative of the formation of pure Ag. 46,55,57,58 As stated above, the AgNPs were clearly observed by TEM. However, the grafted PDEAEMA was not observed under TEM, which may be attributed to the polymer's lower atomic mass. 55 To confirm that PDEAEMA was grafted onto the surface of the AgNPs, FT-IR spectroscopy was employed. As given in Fig. S4, the IR spectra of the nanoparticles and of PDEAEMA are similar to one another, indicating that the polymer molecules have indeed grafted onto the AgNPs. However, a remarkable difference in peak intensity is found between the polymer and the hybrids for the peaks corresponding to the stretching modes of C=S (1062 cm−1) and -CH2-S- (725 cm−1). 49 After the reaction, the C=S peak disappeared because the trithioester group was reduced to a thiol group. The decreased intensity for -CH2-S- is believed to arise because the thiol end groups of the polymer on the nanoparticle form a relatively close-packed thiol layer in which molecular motion is constrained, 56 which suggests that the polymer attached to the AgNP surface through a chemical bond between S ions and Ag atoms.
To determine the relative amount of PDEAEMA on the AgNPs, thermal gravimetric analysis (TGA) was carried out. From the TGA curves given in Fig. 3, the weight percentages of PDEAEMA in the Ag-P1, Ag-P2 and Ag-P3 hybrids were ca. 90 wt%, 84 wt% and 71 wt%, respectively, from which we could calculate that one AgNP was wrapped by roughly 2000 polymer chains (see ESI). To visualize the polymer on the nanoparticles, AgNP samples treated with the negative-staining technique were used for TEM observation, which provides reverse-contrast negative electron optical images of the unstained component. 55 One can see black AgNP dots surrounded by the brighter polymer part, showing a typical cocoon-like morphology (inset, Fig. 2). The thickness of the observed PDEAEMA layer is about 6-10 nm, indicating that the polymers were attached to the AgNPs. Compared with Ag-P2 and Ag-P3, Ag-P1 has a smaller size and narrower size distribution. Therefore, in the following experiments, we mainly focus on Ag-P1.
In addition, the stability of AgNPs in an aqueous environment is important for their application. 55,56 To assess the stability of the PDEAEMA-protected AgNPs in water, we measured the absorption spectra of one of the AgNP hybrid systems (Ag-P1) at the same concentration at different times. As shown in Fig. S5, there is no obvious difference in the shape, position, or symmetry of the absorption peak over 12 months, indicative of the long-term stability of the hybrid.
CO2-switchable behavior of AgNP hybrids
Hydrophobic-hydrophilic transition in immiscible mixed solvents. We found previously that PDEAEMA itself can undergo a hydrophobic-hydrophilic transition upon stimulation with CO2, 39 and it is natural to examine whether AgNPs coated with PDEAEMA show an analogous transition. Fig. 4 compares the appearance of Ag-P1 in a mixed solvent of water/dichloromethane (DCM) (1:1, v/v) in the presence and in the absence of CO2. In the former case, the dispersion separates into two phases; the upper layer is brown while the lower one is transparent, indicating that Ag-P1 resides in the top water layer (inset, Fig. 4a). However, when N2 is bubbled into the mixed solution to expel CO2, the upper water phase turns transparent, suggesting that the AgNP hybrids move from the aqueous phase to the organic one, i.e., the Ag-P1 hybrids transform from hydrophilic to hydrophobic and thus dissolve in the organic layer (inset, Fig. 4b).
During this process, UV-vis spectroscopy was used to investigate the dispersion state in the mixed solvent. When the hybrids were treated with CO2, the upper aqueous solution exhibited an SPR peak at 408 nm (Fig. 4a), indicating that the Ag-P1 hybrids were dispersed in the water phase. In contrast, the lower DCM solution shows no obvious signal in the range of 250-800 nm, suggesting that no Ag-P1 was present in the lower organic layer. When N2 was bubbled into the biphasic solution, the UV-vis spectra of the two phases were reversed; that is, the lower DCM phase exhibits a strong SPR peak (Fig. 4b), while no signal appeared in the upper aqueous solution. These UV-vis spectra clearly show that Ag-P1 experiences a hydrophobic-hydrophilic transition. Based on these macroscopic results, it is noteworthy that Ag-P1 can switch between aqueous media and organic solvents, which is convenient for separation/collection.
Fig. 3 TGA for Ag, Ag-P1, Ag-P2, Ag-P3 and PDEAEMA.
Switchable dispersibility in water. Having established that the AgNP hybrids were dispersed in an aqueous environment, we wondered whether the dispersibility of the hybrids could be reversibly controlled by CO2 stimulation. We therefore extended the study of the response of the AgNP hybrids to CO2 in pure water. As stated above, the AgNP hybrids can be dissolved in an aqueous environment under treatment with CO2. Nevertheless, after bubbling N2 for 60 min at room temperature, the hybrids precipitated from the aqueous solution (inset, Fig. 6a); when CO2 was bubbled for 10 min again, the hybrid aqueous solution became homogeneous accordingly. Such a procedure remains effective beyond three cycles of bubbling N2 and CO2, suggesting that the dispersion/aggregation state of the hybrids can be switchably controlled.
To further reveal the dispersion state of the hybrids, UV-vis spectroscopy was employed to monitor the variation of the absorbance of the hybrid suspension after bubbling and removing CO2, respectively. As shown in Fig. 5a, a strong SPR absorbance was observed after bubbling CO2. Upon bubbling N2, on the other hand, the absorbance gradually decreased, and concomitantly the SPR peak red-shifted (from 408 nm to 425 nm), indicative of aggregation and precipitation. 52,56 In addition, the FWHM values of the spectra increase with the time of bubbling N2 (Fig. 5c), implying the formation of AgNP aggregates with larger size and broader size distribution. In order to get more direct information on the aggregation state of the AgNP hybrids in water, TEM was performed. An obvious aggregation of the AgNP hybrids can be observed (Fig. S6), in good agreement with the abovementioned results. When the dispersion was treated with CO2 again, the SPR peak and its FWHM were reinstated (Fig. 5b and c), indicating that the AgNP hybrids were redispersed.
To elucidate the dispersed/aggregated transition, electrical conductivity measurements were performed to monitor the change in conductivity of the suspension when cyclically bubbling CO2 and N2 (Fig. 6a). When CO2 was introduced into the dispersion, the conductivity rose sharply, consistent with protonation of the tertiary amine groups. [38][39][40][41][42] When CO2 was displaced by N2, the conductivity recovered to its original value, and this reversible change in conductivity could be repeated several times, which amply demonstrates that the response of the suspension to CO2 was fully reversible and reproducible. 34 To further confirm that ionization occurred on the cocoon of the hybrids, the zeta potential of the silver colloidal solution was measured. The zeta potential of the particles treated with CO2 reached +60.2 mV, as exhibited in Fig. 6b, supporting the formation of positive ammonium ions on the surface-coated polymers. After removing CO2 by purging with N2, the zeta potential decreased with increasing purging time and finally fell to +0.66 mV (Fig. 6b), suggesting that the positive ammonium ions of the polymer cocoon were mostly deprotonated owing to the removal of the CO2.
Responsive mechanism. On the basis of the above results, it is reasonable to conclude that the AgNPs were dispersed in solution by the adsorbed PDEAEMA owing to the chemical bond formed between S ions and Ag atoms, as shown in Scheme 1. The CO2-switchable dispersion of the AgNP hybrids in the mixed solvent is ascribed to the polymer cocoon with its hydrophobic backbone and CO2-responsive tertiary amine groups. As illustrated in Scheme 2a, when treated with CO2, the tertiary amine groups along PDEAEMA convert into charged ammonium bicarbonates, [37][38][39][40][41][42] making PDEAEMA hydrophilic and thus leading the hybrids to disperse in aqueous media. Otherwise, after removing CO2, the PDEAEMA becomes neutral and hydrophobic; DCM is then a good solvent and the hybrids shift from water to DCM.
The reversible dispersion/aggregation of the AgNP hybrids controlled by CO2 in water can also be understood on the basis of the CO2-responsive behaviour of PDEAEMA. As shown in Scheme 2a, the tertiary amine groups of the PDEAEMA coated on the AgNPs are protonated when reacting with CO2 in water, leading to an extended conformation of the polymer. Thus the interchain electrostatic repulsion and the steric hindrance among the AgNPs protected by the charged PDEAEMA allow long-term dispersion in water. After the removal of CO2, the electrostatic repulsion of PDEAEMA disappears owing to the opposite deprotonation effect, 36,37 thus diminishing the polymer-polymer electrostatic repulsions and increasing the interactions among polymer chains, 34 resulting in larger particles that precipitate from the aqueous solution.
Application in catalysis
As the AgNP hybrids exhibit sensitive CO2-switchable behaviour, it is interesting to see whether the hybrids can be used as a gas-tunable catalyst. Here, in order to evaluate the catalytic activity of the as-obtained AgNP hybrids, the reduction of 4-nitrophenol to 4-aminophenol by NaBH4 was used as a model reaction, because 4-nitrophenol is one of the most refractory pollutants in industrial wastewaters while 4-aminophenol is a commercially important intermediate for the manufacture of analgesic and antipyretic drugs. 59 In addition, the reactant 4-nitrophenol converts into the 4-nitrophenolate ion at high pH and shows a characteristic peak at 400 nm, and the product 4-aminophenol gives a typical absorption at 300 nm, so the reaction is easily monitored by UV-vis measurement. 9,55,59 Fig. S7a shows the time-dependent UV spectra for the reduction of 4-nitrophenol in the presence of the CO2-treated AgNP hybrids. One can see that the peak height at 400 nm exhibits only a slight decrease within 40 min (Fig. S7a), indicating a very low conversion of 4-nitrophenol. Based on the absorption values at 400 nm, a linear correlation between ln(C/C0) (C is the concentration at a certain reaction time and C0 is the initial concentration of 4-nitrophenolate ions) and reaction time was obtained (Fig. S7a), indicating that this catalytic reduction follows a pseudo-first-order law. The apparent reaction rate constant (k_app) was calculated from the linear fit and is only 1.15 × 10−5 s−1. This result contrasts with earlier observations that CO2-bubbled AuNPs protected with PDEAEMA still exhibit a high catalytic activity for 4-nitrophenol. 41,42 This may arise because the density of PDEAEMA grafted on the AgNP hybrids, ca. 2000 polymer chains per particle as described above, is higher than that of the reported systems, though the previous papers did not provide this datum. 41,42 When the CO2-treated AgNP hybrids were introduced into the solution, the charged ammonium bicarbonates would be deprotonated because the basic NaBH4 reacted with the carbonic acid formed by CO2 and water (i.e., NaBH4 + H+ + 3H2O = Na+ + 4H2↑ + H3BO3). Thus the PDEAEMA cocoon became hydrophobic and wrapped the AgNPs tightly, which inhibited access to the catalytic sites of the AgNPs (Scheme 2b).
To open access for 4-nitrophenol, we tried bubbling CO2 into the mixed solution. First, we examined whether 4-nitrophenol could be reduced by CO2 alone: CO2 was bubbled into a mixed solution of 4-nitrophenol and NaBH4 (without AgNP hybrids). After bubbling CO2 for 50 min, the peak height at 400 nm showed no change (Fig. S8, ESI), indicating that 4-nitrophenol is not reduced by CO2 in the absence of the AgNP hybrids. In contrast, CO2 was then purged into the mixture in the presence of the Ag-P1 hybrids. When the flow rate of CO2 was 10 mL min−1, the absorption at 400 nm decreased slowly within the first 18 min, and then dropped quickly to nearly zero (Fig. S7b). Meanwhile, the peak of 4-aminophenol at 300 nm was observed, 42,55 implying that 4-nitrophenol was reduced to 4-aminophenol. This clearly shows an induction time before the conversion of reactants into products takes place (Fig. 7a and S7b). After deducting the induction time (t_i), the k_app is 1.57 × 10−3 s−1, larger than that without bubbling CO2. This result suggests that access to the encapsulated AgNPs was opened, since the polymer chains contacted CO2 and extended in solution. When the flow rate increased, the induction time decreased and disappeared at 25 mL min−1 (Fig. 7a). Concomitantly, the k_app increased with the flow rate, reaching ca. 3.6 × 10−3 s−1 at 50 mL min−1 (Fig. S7h), which is close to the reaction rate constant reported for gold nanoparticles. 42 The k_app as a function of CO2 flow rate is presented in Fig. 7b. Two regimes in the k_app curve are found: a monotonic, linear increase in k_app is evidenced while the CO2 flow is within 30 mL min−1, after which it changes only slightly.
To elucidate this variation, the pH of the solution under CO2 bubbling was tested, since the degree of protonation increases with decreasing pH. 37 As exhibited in Fig. S9, the pH decreased on bubbling CO2 and was maintained at ca. 7 for catalysis. Note that if the pH fell below 6.7, the peak at 317 nm ascribed to 4-nitrophenol would appear, which would affect the catalytic reaction. 60 During the reaction, additional NaBH4 should be added to compensate for that consumed by CO2 and to keep the pH stable. Besides, it is clear that the time needed to decrease the pH to ca. 7 increases at low flow rates, which may cause the induction time. As there was no obvious difference in the pH during catalysis at the different flow rates, the hybrids should show a similar degree of protonation and hence a similar k_app. A possible explanation for the increasing k_app is that the frequency with which the PDEAEMA cocoon contacts CO2 increases at high CO2 flow rates, which is favourable for opening access to the AgNPs. Such preliminary findings, particularly the linearity found at flow rates below 30 mL min−1, imply that the catalytic activity of the Ag-P1 hybrids can be switched and monotonically tuned by varying the flow rate of CO2 purged into the reaction system.
Conclusions
In summary, we have prepared CO2-switchable AgNP hybrids by one-pot reduction of AgNO3 and trithioester-terminated PDEAEMA with NaBH4. Apart from their long-term stability, the size and size distribution of the AgNP hybrids can be easily modulated by varying the molar ratio of polymer to AgNO3. The hybrids not only exhibit hydrophobic-hydrophilic transitions in immiscible mixed solvents, but also undergo switchable dispersion/aggregation upon alternate treatment with CO2 and N2. In addition, we have demonstrated that the catalytic activity of the hybrids for the reduction of 4-nitrophenol can be switched and monotonically tuned by varying the flow rate of CO2 purged into the reaction system, which may open a new avenue for tailoring the catalytic activity of metal nanoparticles toward a given reaction. The strategies described in this work can also be used to functionalize other nanoparticles in a quick and easy way.
Conflicts of interest
There are no conflicts to declare.
Scheme 1
Scheme 1 Schematic representation of the formation of polymer-hybrid silver nanoparticles.
Fig. 1
Fig. 1 UV-vis spectra of Ag-P1, Ag-P2, and Ag-P3 in water after bubbling CO2 at 25 °C. The inset images of the hybrid solutions refer to the corresponding samples. The concentration of the hybrids is 0.06 mg mL⁻¹.
Scheme 2
Scheme 2 Schematic illustration of the response of the AgNPs hybrids to the CO2 stimulus (a), and of the switching and tuning of the catalytic activity for the reduction of 4-nitrophenol (b).
Fig. 6 (a) Cyclic changes in conductivity of the Ag-P1 aqueous dispersion measured at 25 °C under alternating treatment with CO2 and N2; the concentration of Ag-P1 is 2.23 mg mL⁻¹. (b) Zeta potential of the Ag-P1 aqueous dispersion at different times after bubbling N2 at 25 °C.
Fig. 7
Fig. 7 (a) Plots of ln(C/C0) versus time t at different flow rates of the CO2 stimulus. (b) Variation of the apparent reaction rate constant (k_app) with CO2 flow rate. The concentrations of 4-NP and Ag-P1 are 0.0139 mg mL⁻¹ and 2 × 10⁻³ mg mL⁻¹, respectively.
Table 1
Characterization of the Ag-NPs in the PDEAEMA solutions.
Columns: Sample (a) | [PDEAEMA] : [AgNO3] (b) | % PDEAEMA / % Ag (c) | Average diameter of Ag-NPs (d), nm
(a) Ag-P refers to PDEAEMA polymer-stabilized silver nanoparticles. (b) Molar ratio of PDEAEMA to AgNO3. (c) Weight percentage, determined by TGA. (d) Estimated from TEM images.
Characterization of the High-Resolution Infrared Radiation Sounder Using Lunar Observations
The High-Resolution Infrared Radiation Sounder (HIRS) has been operational since 1975 on different satellites. In spite of this long utilization period, the available information about some of its basic properties is incomplete or contradictory. We have approached this problem by analyzing intrusions of the Moon into the deep space view of HIRS/2 through HIRS/4. With this method we found: (1) the diameters of the fields of view of HIRS/2, HIRS/3, and HIRS/4 have the relative proportions of 1.4° to 1.3° to 0.7° for all channels; (2) the co-registration differs by up to 0.031° among the long-wave and by up to 0.015° among the shortwave spectral channels in the along-track direction; (3) the photometric calibration is consistent within 0.7% or less for channels 2-7 (1.2% for HIRS/2), and similar values were found for channels 13-16; (4) the non-linearity of the short-wavelength channels is negligible; and (5) the contribution of reflected sunlight to the flux in the short-wavelength channels can be determined in good approximation if the emissivity of the surface is known.
Introduction
The High-resolution Infra-Red Radiation Sounder (HIRS) has performed temperature/humidity sounding on satellites in sun-synchronous orbit since the seventies. The first HIRS instrument was operated on Nimbus-6 from 1975 through 1983. Starting with its first evolution, HIRS/2, built by the Optical Division of ITT Aerospace, it is equipped with 19 infrared channels and forms part of the TOVS sounding instrument suite (TIROS [Television Infrared Observation Satellite] Operational Vertical Sounder) on NOAA-6 to -19. The channel frequencies of each instrument can be found at [1]. It flies as well on TIROS-N, Metop-A, and Metop-B. Over all these years the instrument evolved to HIRS/4 and has accumulated a large set of observations relevant to the study of long-term variations of temperature and upper tropospheric humidity (UTH), amongst other things. A trend analysis over three decades of HIRS channel 12 measurements, aimed at finding changes in tropical UTH, is described for example in [2]. That investigation, however, was hampered by significant inter-satellite biases, the reason for which was not easily identified. Different spectral response functions, the on-board black body calibration system, or the non-linear response of the detectors could all be at fault. The situation is similar with other channels [3]. A full understanding of the various effects contributing to the systematic errors of HIRS that manifest themselves as biases is further complicated by the fact that one encounters contradictory or incomplete information in the literature even about basic properties of this instrument. Examples are:
• According to the first performance report from ITT Aerospace, the FoV (field of view) of HIRS/2 has a diameter of 1.22° [4]. This value increased to 1.25° in various books, for example [5], and reached a temporary high on the OSCAR web page with some 1.4° [6]. OSCAR gives the resolution in km at the sub-satellite point, which introduces an uncertainty, because the altitude of a satellite on a sun-synchronous orbit can vary by almost 50 km during the mission, and its mean value is not the same for different satellites. The discrepancies get even larger for HIRS/4, where most web pages and documents give a value of some 0.7°, except for the ESA Metop performance page with its extravagant claim of 1.4° for the shortwave channels 13-19 and 1.3° for the long-wave channels 1-12 [7].
• Experts on HIRS cannot agree, either, as to how good the spectral channel co-registration is. This property of the instrument can only be determined in flight for window channels, where it is possible to identify characteristic features on the surface of the Earth. As HIRS has a beamsplitter and two completely different optical paths for long-wave and shortwave channels, systematic pointing differences between these groups of channels are to be expected. Investigations into this matter on the ground are not necessarily representative of the conditions in flight, because the strong vibrations during the launch phase can affect the optical path of the instruments.
• The central wavelengths of several channels of HIRS are very similar, but spectral uncertainties remain [8]. This concerns in particular channels 13-16, which lie between 4.4 and 4.6 µm, and channels 1-7, which lie between 13.3 and 15 µm, i.e., close to the frequency of maximum spectral radiance for an object with a brightness temperature of 300 K.
These wavelengths cover absorption bands of nitrous oxide and carbon dioxide in the atmosphere and therefore produce quite different flux values for Earth scenes, which makes it difficult to directly determine the inter-channel homogeneity of the sounding channels in flight.
• Estimates of the non-linearity varied by more than a factor of two. A non-linear effect was first detected in flight with HIRS/2 on NOAA-10, where it was maximal in the ninth and tenth channel and lowest in the shortwave channels [9]. When [10] later determined the non-linearity terms for the long-wave channels 4 and 6 of HIRS/4 on Metop-B in flight, they found them exactly 3.333 times smaller than the pre-launch values. Their method worked by identifying those non-linearity terms that produced the smallest orbital mean bias against IASI (Infrared Atmospheric Sounding Interferometer) on the same satellite. The underlying assumption here is that non-linearity is the only reason for bias. An independent confirmation of this claim is highly desirable.
• The impact of reflected solar radiance and the low signal-to-noise ratio at low temperatures adversely affect the accuracy and precision of radiance measurements with the short-wavelength channels, according to [11]. The exact value of the reflected solar radiance depends on the scan position, solar zenith angle, etc. [12], which makes its calculation difficult. It can, however, be very much simplified by assuming diffuse reflection, a concept that can be tested with the geometric albedo of the Moon.
The aim of our investigation is to shed some light on these and other issues concerning the performance of HIRS/2-HIRS/4 in flight, or at least to propose new ways of addressing the open questions. This is of interest not only to meteorologists trying to understand biases and other peculiarities in the data from HIRS, but also provides helpful suggestions for verifying the compliance of future infrared sounders with requirements. Here we make use of intrusions of the Moon into the deep space view (DSV) of HIRS during routine operations. As the Moon has no atmosphere, its infrared spectrum has no narrow, variable features. The hemisphere it presented to the weather satellites was more or less the same for all observations: the sub-observer longitude varied between 351.2° and 6.8°, and the sub-observer latitude, i.e., the apparent planetodetic latitude of the nearest point of the target seen by HIRS, varied between −5.6° and +6.8°. We could not detect any significant correlation between either coordinate and the measured brightness temperatures. All of this is strong evidence in favour of a basic assumption relevant for thermo-physical modeling, namely that the disk-integrated properties of the Moon were the same in all observations. Its temperature, however, varies with the illumination by the Sun, and these variations are much larger than those of the Earth's upper troposphere [13]. Hence observations of Moon intrusions make it possible to monitor the performance of an instrument over a very large range of flux values. They provide new insights into the effects causing systematic uncertainties in the measurements. Such deeper understanding is essential to produce fundamental climate data records from HIRS with consistent calibration. Such a data set could, for example, form the basis for new climate data records of upper tropospheric humidity as a complement to the corresponding microwave data set [14].
In the next section we describe the method used for identifying suitable observations of the Moon with HIRS, and how we derived brightness temperatures from the raw data. The results are presented in Section 3, where we show for each of the five items mentioned above how the Moon can prove itself useful by providing new insights. In Section 4 we assess the relevance of these results by comparing them with expectations. Finally, in the last section we draw conclusions on the future use of the Moon in the calibration and validation of instruments on weather satellites.
Materials and Methods
The first step in our efforts to take advantage of the intrusions of the Moon into the DSV of HIRS was to identify such events in the raw data (level 1b), which are supplied by the NOAA Comprehensive Large Array-data Stewardship System and which we read and processed using Typhon [15]. The HIRS lunar contamination status can be found in plots that are available on the web page of the Center for Satellite Applications and Research Integrated Calibration/Validation System Long-Term Monitoring [16]. This web page, however, does not include monitoring data of HIRS/2, and it does not show the lunar contamination status before 2016. Another, but much more laborious, method is searching the raw data for anomalies in the counts from the DSV during periods of time when the Moon was close to its pointing direction. In Figure 1 we show an example of the signal from deep space for all calibration lines of one orbit. The radiometric calibrations with deep space happen every 256 s, i.e., one gets 6100/256 ≈ 24 such calibration lines per orbit. In between there are Earth scans. Each calibration line contains at least 46 useful measurements, and they are shown in Figure 1 as 24 × 46 counts. The plot immediately betrays the scan that was affected by the presence of the Moon, because the Moon adds flux to the otherwise empty space. As can be seen in the figure, HIRS produces a lower number of counts for stronger incoming flux. The signal is already reversed after processing in the amplifier chains and before analog-to-digital conversion, see Section 2.1 in [17].
The blue circle in Figure 2 is close to the celestial equator and its center coincides with the direction of the orbital axis of the satellite carrying HIRS. When this direction is sufficiently close to the orbit of the Moon, i.e., the orange line, then the Moon will appear every month in the DSV. At other times of the year, however, such an alignment cannot happen. It is also possible, that only Earth scenes were observed, when the Moon crossed the direction of the DSV, or that only part of the Moon fell into the FoV. In consequence, a whole year might pass without a single, useful Moon intrusion. We identified a total of 20 suitable intrusions of the Moon in the DSV of seven satellites. This number is large enough for a representative subset of all Moon intrusions and for demonstrating the methods we have developed to learn more about HIRS. It is a far cry, however, from a complete inventory of Moon intrusions, because it is heavily biased towards recent years and the latest satellites. We chose this approach in order to get a balanced set of Moon intrusions from different versions of HIRS, although HIRS/2 flew on more satellites than all other versions combined. Depending on the specific instrumental effect under investigation, we chose those Moon intrusions among our set of 20 that were particularly well suited. Examples are instances of different instruments observing the Moon at the same phase angle or observations, where the Moon appeared in all channels in spite of their small misalignment.
The observations of the Moon are not part of the standard processing of the raw data from HIRS, so we had to calibrate them ourselves. This was done by calculating the average counts X_sp from the previous and the following DSV calibration line, i.e., 256 s before and 256 s after the Moon intrusion, and using the counts X_bb from the black body (bb) calibration line that is closest in time to the Moon intrusion. The space radiance R_sp is zero for all channels of HIRS; the black body radiance R_bb is calculated from the temperature T_bb of the black body. The reference counts are the average of some 47 samples, because the first eight to ten samples were taken while the scan mirror was still in motion. This average of 47 samples enters the equation of calibration, without non-linearity, as used by AAPP (ATOVS [Advanced TIROS Operational Vertical Sounder] and AVHRR [Advanced Very High Resolution Radiometer] Pre-processing Package) [18]. The counts obtained when the Moon is in the FoV are the average of fewer than 47 samples, because usually the whole disk of the Moon does not remain in the FoV during the entire duration of the calibration line (6.4 s). The radiant flux density received by HIRS from the lunar surface is calculated according to

R = G (X_Moon − X_sp),

where X_Moon is the average counts from the space target. G is defined as

G = (R_bb − R_sp) / (X_bb − X_sp),

where the radiance of the black body is calculated according to Planck's law,

B(ν, T_bb) = (2hν³/c²) / [exp(hν/(k_B T_bb)) − 1].

The temperature of the black body needs a band correction, T*_bb = b + c T_bb, with channel-specific constants b and c, which are given in AAPP.
The temperature of the black body is measured with n calibrated platinum sensors, where n = 4, except for HIRS/4, where it is 5. Every platinum sensor has its own resistance/temperature relationship, so the black body temperature is obtained as the average of the individual sensor temperatures,

T_bb = (1/n) Σ_{i=1}^{n} Σ_{j} a_ij (PRT_i)^j,

with
a_ij = conversion coefficient (numeric counts to temperature),
PRT_i = mean numeric counts associated with PRT (platinum resistance thermometer) number i.

The measured radiance is zero when the instrument points at empty space. The Moon has a smaller apparent diameter than the FoV of HIRS, but the calibration targets and also the Earth scenes are extended. For objects that do not fill the FoV, one has to divide R by the fraction of the FoV they cover and by the included energy, i.e., the fraction of the flux that actually originates within the FoV as opposed to the contribution from stray light:

R_Moon = R / [η (d_Moon / d_FoV)²],    (8)

with
R_Moon = radiance of the lunar disk,
d_FoV = diameter of the optical field of view,
d_Moon = diameter of the Moon as seen from the position of the satellite,
η = total energy contained within a circle of 1.8° (1 − η is the fraction of the flux reaching the detector from outside the field of view. Such flux is present with extended sources, e.g., the black body, but not with the Moon. Numerical values can be found in [17,19]).

For the diameter of the FoV we assumed 1.4° for HIRS/2, 1.3° for HIRS/3, and 0.7° for HIRS/4. The included energy is 0.97 for HIRS/2 [4] and 0.98 for HIRS/3 and HIRS/4 [19]. As no uncertainties were reported for these values, it is not clear whether this difference between HIRS/2 and the following versions of the instrument is significant.
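The calibration chain just described can be summarized in a short Python sketch; the physical constants are standard, but the counts in the example are invented for illustration, and the function names are ours. The equations above are our reconstruction of the dropped originals from the surrounding definitions.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wl_m, T):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * T))

def calibrate(x_moon, x_sp, x_bb, r_bb):
    """Linear two-point calibration: R = G (X - X_sp), G = R_bb/(X_bb - X_sp)."""
    gain = r_bb / (x_bb - x_sp)          # negative: counts fall with flux
    return gain * (x_moon - x_sp)

def moon_disk_radiance(r, d_fov_deg, d_moon_deg, eta):
    """Divide by the FoV fill factor and the included energy, Equation (8)."""
    return r / (eta * (d_moon_deg / d_fov_deg) ** 2)

# Illustrative numbers, not actual HIRS counts
r_bb = planck_radiance(14.2e-6, 290.0)   # black body seen by a ~14.2 um channel
r = calibrate(x_moon=520.0, x_sp=1000.0, x_bb=400.0, r_bb=r_bb)
print(moon_disk_radiance(r, d_fov_deg=0.7, d_moon_deg=0.5, eta=0.98))
```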
We did not correct for changes in the temperature of the instrument with a self emission model, although they are known to affect the calibration measurements of the deep space view [20]. This means for our investigation that the self emission could be slightly different at the time of the intrusion of the Moon into the DSV than with the deep space calibration lines before and after. We mitigate this problem by using for our calibration the average of the counts from the deep space calibration lines before and after, but this method removes only the effects of a linear drift of the temperature. By comparing the counts from many sets of three consecutive deep space calibration lines when the Moon was nowhere to be seen, we concluded that the absence of a self emission model adds an uncertainty of one or two counts to the cold calibration reference, but that it does not introduce a systematic error. This does not rule out the possibility of long-term effects on the photometric calibration, but those are a different matter. We did not include a non-linearity term in our equation of calibration either, because there is no consensus on the correct values for this term, and as a consequence it is set to zero in AAPP. Furthermore, we would compromise our aim of deriving upper limits on the non-linearity if we applied a non-linearity correction already in our processing of the data.
Results
As the Moon has no atmosphere, its spectral energy distribution is close to that of a grey body, with gradual, small variations of its brightness temperature. For a comparison of the disk-integrated flux density of the Moon from HIRS with a thermo-physical model, see [21]. None of the spectral lines familiar from Earth's atmosphere are present on the Moon, and this special quality allows checks of the performance of HIRS in flight that are much more difficult or even impossible in the framework of the routine calibration procedure. An example is checking the co-registration of sounding channels, because one cannot identify surface features on Earth with them. We give in the following several illustrations of how lunar intrusions into the deep space view can serve as a diagnostic tool for an infrared sounder. The absolute photometric calibration is not among them, because a sufficiently accurate model of the lunar radiance in all channels of HIRS is not yet available [21].
Optical Field of View
According to Equation (8), the measured radiance of the Moon is proportional to the solid angle of the FoV. Hence, it is possible to determine the ratio of the FoVs of different instruments with high accuracy, provided that they observed the Moon at very similar phase angles, so that they received more or less the same radiance from the Moon. In this case they must measure the same value if the assumed FoVs are correct. Variations of the included energy η are negligible, because this value is almost the same for all instruments, viz. close to one, and not controversial in the literature. In other words, the intrusions of the Moon into the DSV put us in a position to find out which numbers in Table 1 are correct. Table 2 is a collection of nine pairs of observations of the Moon at similar phase angle but different times. In most cases different versions of HIRS are involved. There were for example intrusions of the Moon into the DSV of HIRS/2 on NOAA-14 on 1996-05-28 and, with a very similar phase angle, into the DSV of HIRS/3 on NOAA-17 on 2002-09-26. The average brightness temperature of the Moon for all twelve long-wave channels was calculated in order to reduce the uncertainty. We used the shifted central wavelengths provided by ECMWF for our calculations. In some cases we could do the same calculation also for most shortwave channels; these values are given in Table 3. Because of the poor alignment between long- and shortwave channels, and because there are fewer shortwave channels to begin with, the calculated brightness temperatures at short wavelengths are averages of much fewer values. As we have chosen channel 8 to identify the Moon intrusions in the deep space view, other long-wave channels are more likely to provide useful data than the shortwave channels.
Table 2. Ratio of the average brightness temperature T_br of the Moon as measured with the long-wave channels 1-12 of HIRS on different satellites. This ratio would be one for perfect instruments. The uncertainties reflect the random scatter of the ratios among the different channels. The first column gives the absolute value of the phase angles of the Moon; the pairs were chosen such that these angles are almost the same for either measurement. The value in bold face refers to the only pair where both measurements were made with the same instrument on the same satellite, but at different times. It is also the only pair where the measurements were made close to minimum and maximum distance between the Sun and the Moon, which explains the large ratio.
Long-Wave Channels
The calculations were carried out assuming a FoV of 1.4° for HIRS/2, 1.3° for HIRS/3, and 0.7° for HIRS/4. There is at least one comparison for each possible combination of versions of HIRS. Besides, we found three pairs of observations of the Moon with very similar phase angle that were performed with the same version of HIRS. When comparing observations of the Moon with different versions of HIRS, one has to take the slightly different central wavelengths of each channel into account. This inconsistency is particularly pronounced with channel 12, where the central wavelength is 6.7 µm with HIRS/2 and 6.5 µm with HIRS/3 and 4, according to the numbers given by ECMWF. Adding to the confusion in the literature about the HIRS system characteristics, ref. [23] claimed a central wavelength of 6.5 µm also for HIRS/2, but only on two satellites. Reference [2], however, demonstrated convincingly that the central wavelength was shifted from 6.7 µm to 6.5 µm only with the launch of HIRS/3 on NOAA-15 in 1998. This shift resulted in a brightness temperature difference of 8 K [24] for Earth scenes, because the absorption caused by water vapour in the atmosphere varies strongly between these two wavelengths. On the Moon, however, the emissivity and with it the brightness temperature remain almost constant between 6.5 and 6.7 µm [25]. This is also true for the other channels. In the example of the pair HIRS/2 and HIRS/3 mentioned above, we find a value of 1.031 for the ratio of the average radiance of the Moon measured with HIRS/2 and HIRS/3, but 1.007 for the ratio of the corresponding brightness temperatures. This difference is caused in part by the different wavelengths of the channels in versions 2 and 3 of HIRS. Hence we determined the ratios of the brightness temperature rather than the ratios of the radiance in each pair for all twelve long-wavelength channels and calculated their average and the standard deviation of the mean for Tables 2 and 3. We note that the biggest difference is found with two measurements made with the same instrument, viz. HIRS/2 on NOAA-14. Surprisingly, the smaller flux was measured here for the smaller phase angle, i.e., closer to full Moon. The explanation for this unusual ratio is that this is also the pair with the largest ratio of the Sun-Moon distances: it amounts to (1.521 × 10¹³ cm) / (1.475 × 10¹³ cm) = 1.031. The different brightness temperatures measured on the Moon therefore reflect, in this special case, the fact that the solar irradiance at perihelion is 106% of the value at aphelion. In all other cases we find differences in measured brightness temperature among the various instruments below 1.1%. This corresponds to less than 4% difference in flux density, which gives an upper limit on the random uncertainty of the diameter of the FoV of about 2%.
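For readers who wish to reproduce the pair ratios of Tables 2 and 3, the following sketch inverts Planck's law to obtain brightness temperatures and forms the channel-wise ratio statistics; the numerical example uses hypothetical radiances, not the actual HIRS values.

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def brightness_temperature(radiance, wl_m):
    """Invert Planck's law: the T_br for which B(lambda, T_br) = radiance.

    radiance in W m^-2 sr^-1 m^-1, wavelength in m.
    """
    return (H * C / (wl_m * KB)) / np.log1p(2 * H * C**2 / (wl_m**5 * radiance))

def pair_ratio(radiances_a, radiances_b, wavelengths_m):
    """Mean T_br ratio and its standard error over the channels of one pair."""
    ta = brightness_temperature(np.asarray(radiances_a), np.asarray(wavelengths_m))
    tb = brightness_temperature(np.asarray(radiances_b), np.asarray(wavelengths_m))
    ratios = ta / tb
    return ratios.mean(), ratios.std(ddof=1) / np.sqrt(ratios.size)

# Hypothetical: a uniform 4% radiance offset over 12 channels near 12 um
wl = np.full(12, 12.0e-6)
ra = np.full(12, 6.0e6)
print(pair_ratio(ra * 1.04, ra, wl))   # the T_br ratio is much closer to one
```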
Shortwave Channels
Our selection criterion for the Moon intrusions was based on the long-wave channels, but unfortunately there is a systematic misalignment between long- and shortwave channels, because their optical paths are separated by a beamsplitter [4]. Hence we have only five pairs of observations at the same phase angle with the shortwave channels. A direct comparison between measurements with HIRS/2 and HIRS/3 is not among them, but the excellent agreement between HIRS/2 and HIRS/4 and between HIRS/3 and HIRS/4 suggests that the FoVs we assumed are correct also for the shortwave channels. In particular, we have proven wrong the occasional claims of different FoVs for different channels of HIRS/3 or HIRS/4 in documents, e.g., [19], or web pages [7] dedicated to HIRS. We note that the shortwave channels also produce the highest ratio of brightness temperatures for the pair with the largest difference in the Sun-Moon distance.
Our measurements suggest that the diameter of the FoV is 1.4° for HIRS/2, 1.3° for HIRS/3, and 0.7° for HIRS/4 for all channels. These values are relevant for the comparison of HIRS data with those from other instruments when they observed the same Earth scene simultaneously.
Spectral Channels Co-Registration
In a few rare cases, the light curve of the Moon intrusion shows both decreasing and increasing counts (see Figure 3). The lack of a constant signal means that the Moon was never fully included in the FoV, because HIRS would receive the radiation from the complete disk as long as this is the case. At least, however, the moment of its closest approach to the pointing direction of the DSV happened during the calibration procedure. In this case it is possible to determine exactly the time of this closest approach for each channel and to derive from this information the HIRS spectral channel co-registration in the along-track direction. We did that by fitting a second-order polynomial to the light curve of channel 8, shown in Figure 3, and to the light curves of all other channels. Then we determined the number of the sample where the second-order polynomial reached its minimum. The uncertainty of the position of the minimum was calculated from the uncertainties of the parameters produced in the polynomial fit. Then we converted the number of the sample to an angular displacement. For this last step we followed the method described by [26]. The whole procedure allows us to find out whether the different channels point in the same direction. This assumption is often taken for granted by meteorologists when working with data from HIRS. Table 4 lists the sample number of this closest approach for each channel, except for number 1. Channel 1 was excluded because of its poor signal-to-noise ratio. Its SNR is small because the difference in counts between low and high fluxes is smaller with channel 1 than with the other channels. There is a clear trend in the sense that this sample number decreases along the rows of the table, but there is a discontinuity between channels 12 and 13, i.e., between LW and SW. This suggests the presence of chromatic aberration. As the long-wave and shortwave optical paths have no lenses in common [17], their variations in refractive index are different. On the other hand, the correlation between pointing direction and wavelength is, in the case of the LW channels, significantly higher than the correlation between pointing direction and channel number, because channel 10 does not fit the sequence of decreasing wavelength. This fact demonstrates that it is not some tilt of the filter wheel that matters for the misalignment of the different channels, but rather a property of the lenses.
During the calibration procedure, the instrument stays at the same scan position, which is 68 for the space view, but its pointing direction in the sky changes because of the movement of the satellite on its orbit around the Earth. The angular distance between the pointing directions of two consecutive samples of the space calibration line is

Δα = (360°/P) t sin θ,

with
t = dwell time = 100 ms,
θ = space view position relative to the orbital axis = 161.1° (pointing away from the Sun),
P = orbital period = 101.5 min for NOAA-19.

The fourth and fifth rows (Displacement) of Table 4 give the differences between the positions of the Moon in the along-track direction found with channel 19 and the other channels. These values are plotted in Figure 4. There is a systematic shift in the position of the Moon as seen through the different filters, and the slope of position as a function of wavelength is larger for the SW channels than for the LW channels. This misalignment must be taken into consideration for estimating the overall uncertainty of the result when flux densities measured in different channels are combined to calculate climate variables.
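A small sketch of the displacement calculation follows, using the relation reconstructed above (our reading of the geometry, since the original formula was lost in extraction); the example reproduces the order of magnitude of the Table 4 displacements.

```python
import numpy as np

def closest_approach_sample(samples, counts):
    """Vertex of a second-order polynomial fit to the intrusion light curve."""
    a, b, _ = np.polyfit(samples, counts, 2)
    return -b / (2.0 * a)          # counts are minimal at closest approach

def samples_to_degrees(delta_samples, t_dwell_s=0.1,
                       theta_deg=161.1, period_min=101.5):
    """Along-track angle per sample: d_alpha = (360/P) * t * sin(theta)."""
    d_alpha = (360.0 * t_dwell_s / (period_min * 60.0)
               * np.sin(np.radians(theta_deg)))
    return delta_samples * d_alpha

# Illustrative: two channels whose light-curve minima differ by 16 samples
print(f"{samples_to_degrees(16):.3f} deg")   # ~0.031 deg, cf. Table 4
```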
Inter-Channel Uniformity
The sounding channels of HIRS measure flux densities at several different wavelengths in order to characterise the exact shape of a spectral feature. In order to obtain meaningful results, the different channels not only have to point in the same direction, they must also have a consistent flux calibration. These preconditions are usually not questioned when the measurements are used to retrieve atmospheric variables. It is desirable to check the validity of these assumptions, and therefore we now want to derive upper limits for the systematic discrepancies between channels.
As the Moon has no atmosphere, the N2O (SW) and CO2 (LW) sounding channels should always give almost the same lunar brightness temperature. In Table 5 we give the values for the brightness temperatures and their standard deviations based on channels 2-7 for different versions of HIRS. According to the last column of the table, the standard deviation is typically 1.0 K for HIRS/2 and 0.6 K for HIRS/4, corresponding to about 1.2% or 0.7%, respectively, in radiance. These figures, however, are only an upper limit on the inter-channel bias, because the brightness temperature decreases slightly with increasing wavelength for the channels considered here, and this systematic trend inflates the calculated standard deviations. This finding was expected in the light of the properties of the lunar soil [25] between 9 and 11 µm, because it means that the radiance, and as a consequence the emissivity, of the lunar soil also decreases by about 1%. We note that HIRS observed the disk-integrated radiance at non-zero phase angles of the Moon, i.e., the measured spectral energy distribution is the average over quite different angles of incidence and reflection. Hence, any systematic trends of brightness temperature with wavelength seen by HIRS could differ from those shown in the plots by [25].
Figure 4. The relationship between the central wavelength of the channels and the displacement is plotted as two different red lines for the SW and the LW channels. The channel numbers are, from left to right: 19, 18, 17, 16, 15, 14, 13, 12, 11, 9, 8, 10, 7, 6, 5, 4, 3, 2.
Table 5. Average brightness temperatures and their standard deviations for channels 2-7 at different phase angles of the Moon. The central wavelengths of these long-wave CO2 channels lie between 13.3 and 14.8 µm for HIRS/2, 3, and 4; their calibration has been studied in detail by [8].
The SW channels show an even stronger trend of increasing brightness temperature towards smaller wavelengths than the LW channels. There are two reasons for this:
• At longer wavelengths one sees a temperature which is close to the disk-average temperature of the Moon, but at shorter wavelengths the radiance is dominated by the hottest (sub-solar) surface areas on the Moon.
• At shorter wavelengths the share of reflected sunlight becomes larger. It should be subtracted from the flux density that HIRS receives from the Moon before one can analyze the inter-shortwave-channel uniformity (see Section 3.2).
Our measurements prove that the inter-channel uniformity of the carbon dioxide sounding channels has no systematic component larger than the random scatter of the measurements. The measurements with HIRS are trustworthy.
Non-Linearity
The correct equation of calibration is needed for calculating the correct radiance and its uncertainty, and therefore it is important to know whether the relationship between counts and radiance is linear or not. The HIRS operational calibration algorithm [27] sets all non-linearity coefficients to zero, because their effect is supposed to be negligible. A detailed investigation of this question, however, was only carried out for a few LW channels [10]. We use the observations of the Moon to derive an upper limit on the non-linearity coefficient of most SW channels. In doing so, we take advantage of the fact that the sub-solar region of the Moon reaches temperatures of almost 400 K, i.e., more than 100 K above the temperatures of the black body and of typical Earth scenes. The shortwave channels have central wavelengths between 3.7 and 4.6 µm, where the radiance grows exponentially with temperature according to Planck's law for short wavelengths. This means that, when the DSV is pointed at the full Moon, the SW channels receive flux densities that are several times higher than those the black body can provide. As the non-linearity term in the measurement equation increases with the square of the counts, it must feature in observations of the Moon if it is there at all.
For the calculation of the non-linearity term we follow the definition of [10],

R_nl_Moon = R_Moon + q (X_Moon − X_sp)(X_Moon − X_bb),

with
q = non-linearity coefficient,
R_nl_Moon = radiance of the Moon after correction for non-linearity.
The non-linearity correction makes a difference d_nl (in percent of the lunar radiance) of

d_nl = 100 (R_nl_Moon − R_Moon) / R_Moon.

On 1997-06-15 there was an intrusion of the Moon into the DSV of HIRS/2 on NOAA-14, and on 2019-07-21 into the DSV of HIRS/4 on Metop-B. The phase of the Moon was in either case some 47°, therefore R_Moon was almost the same. The FoV is different for the different versions (HIRS/2, HIRS/3, and HIRS/4), but in each case big enough to fully include the Moon, and therefore big enough to receive the flux from the whole disk. The situation is different, however, with the black body, because this calibration reference has a larger diameter than any FoV of HIRS. Therefore the flux received from the black body is proportional to the radius of the FoV squared. In other words, because the DSV of HIRS/4 has only half the radius of that of HIRS/2, the flux density S_bb obtained from the black body of HIRS/4 is four times smaller than S_bb of HIRS/2. In the case we consider here, where the Moon is observed at a phase angle of 47°, it provides a similar flux density as the black body of HIRS/2 does. This means, however, that the term X_Moon − X_bb is close to zero for HIRS/2, whereas the same term is much larger with the small FoV of HIRS/4, and d_nl of all SW channels is more than ten times higher for HIRS/4 than for HIRS/2; in the case of channel 17 the ratio even amounts to 236. Hence for an estimate of the non-linearity coefficient one can assume that the non-linearity is negligible with the Moon intrusion of HIRS/2, and we take the fluxes measured with this instrument as reference. They agree, however, within 1.5% with the flux values obtained with HIRS/4. Hence we conclude that the non-linearity, if uncorrected, causes at most an error of this size with HIRS/4. The corresponding upper limits for the values of q are given in Table 6. The non-linearity terms of channels 13-17 are at least a factor of ten smaller than the pre-launch values for the LW channels [10]. The shorter the wavelength, the larger the flux difference between Moon and black body, and the tighter the constraint on the non-linearity coefficient. Our data are compatible with q = 0 for all SW channels and lend support to the equation of calibration used in AAPP. It is planned to extend the search for non-linearity effects to observations of the Moon at a variety of phase angles in [21].
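Under the quadratic correction assumed above (our reconstruction, not a quoted formula), the 1.5% agreement between the two instruments translates directly into an upper limit on q; a sketch of this estimate follows, with invented counts as placeholders for the actual channel data.

```python
def q_upper_limit(r_moon, x_moon, x_sp, x_bb, max_frac_err=0.015):
    """Upper limit on the non-linearity coefficient q.

    Assumes the reconstructed correction
        R_nl = R + q (X - X_sp)(X - X_bb),
    so |q| <= max_frac_err * R / |(X - X_sp)(X - X_bb)|.
    """
    return max_frac_err * r_moon / abs((x_moon - x_sp) * (x_moon - x_bb))

# Invented counts: the Moon far from both calibration references (HIRS/4 case)
print(q_upper_limit(r_moon=2.0e5, x_moon=300.0, x_sp=1000.0, x_bb=900.0))
```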
Reflected Solar Radiance
As the standard deviation (STD) of the noise of the shortwave channels is so large that in the Antarctic June the observed radiance, for example in channel 19, is at the level of the instrument noise [11], these channels are best used in daytime. This rule applies also to observations of the Moon: when it is full, the reflected sunlight alone already gives a satisfactory signal-to-noise ratio. In order to calculate its flux density, we assume that the Sun is a black body with the temperature [28]

T_eff = [L / (4π R_Sun² σ)]^(1/4),

with
T_eff = solar effective temperature,
L = solar absolute luminosity,
and where R_Sun is the solar radius and σ the Stefan-Boltzmann constant.

This approximation is good enough for our purpose of getting an estimate of the contribution of reflected sunlight to Earth or Moon scenes at wavelengths around 4 µm [29]. We want to demonstrate that the thermal emission of the Moon in the shortwave channels can be determined accurately enough by a correction of the measured flux density that only requires the reflectance of the scene and its distance from the Sun. For this we assume that the Moon reflects 20% of the incoming radiation at 4 µm [25]. Table 7 gives the brightness temperatures of the Moon for three different phase angles from the SW sounding channels of HIRS/4 on three different satellites. The reflected sunlight was subtracted, taking into account the distance between the Sun and the Moon at the time of its intrusion into the DSV. None of the measured brightness temperatures differs by more than 0.4 K from the average value of channels 13-16, suggesting an even better inter-channel uniformity than with the long-wave channels. The reflected sunlight is always less than 8% of the overall flux density received from the Moon in the examples of Table 7, and the average temperatures given in the last column of this table are our best estimate for the brightness temperature of the Moon in the shortwave range of HIRS. The Sun's share in the measured flux densities increases, however, towards shorter wavelengths, because the Sun is on the Rayleigh-Jeans branch of the Planck function there, while the Moon is on the Wien branch. Hence we can now use the average brightness temperatures from Table 7 to calculate the radiance from the Moon at the central wavelengths of channels 17-19 as if no reflected sunlight were present, and then calculate the albedo of the Moon at these wavelengths from the difference between the actually measured flux density and what we would get from the thermal radiation of the Moon alone. Here we assume that no thermal radiation of the Moon is emitted by its night side, because it is very cold and on the Wien branch of the Planck function. The results are listed in Table 8; all values are compatible with the emissivity determined by [25] and do not vary much among the three channels 17-19. This consistency, especially at the smallest phase angle, proves that the values for the albedo are close to the truth, else they would change towards shorter wavelengths, where the ratio between emitted and reflected infrared light shifts quickly in favour of the latter. A value for the reflectivity of the Moon at 4 µm that is significantly larger than 20%, as for example proposed by [30], would cause larger inconsistencies among the values in Table 8 and is therefore off the mark.
This method can also be applied the other way round, when the emissivity of an Earth scene is known, for example when the satellite flies over the Sahara. In this case the unwanted contribution from reflected sunlight to the overall signal can be subtracted, and the shortwave window channels can supply trustworthy measurements.
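A minimal sketch of the diffuse-reflection correction follows; the Lambertian assumption and the physical constants are standard, while the helper names are ours and the illumination-geometry factors are deliberately omitted, as in the text.

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
R_SUN = 6.957e8          # solar radius, m

def planck(wl_m, T):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * T))

def reflected_solar_radiance(wl_m, d_sun_m, albedo, t_eff=5772.0):
    """Diffusely reflected solar spectral radiance of a fully lit scene.

    Lambertian approximation: L = albedo * E_sun / pi, with the solar
    spectral irradiance E_sun = pi * B(lambda, T_eff) * (R_sun/d)^2.
    """
    e_sun = np.pi * planck(wl_m, t_eff) * (R_SUN / d_sun_m) ** 2
    return albedo * e_sun / np.pi

def thermal_component(measured_radiance, wl_m, d_sun_m, albedo=0.20):
    """Subtract the reflected-sunlight estimate from the measured radiance."""
    return measured_radiance - reflected_solar_radiance(wl_m, d_sun_m, albedo)

# Illustrative: fully lit scene at 4 um, one astronomical unit from the Sun
au = 1.495978707e11
print(reflected_solar_radiance(4.0e-6, au, albedo=0.20))
```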
Discussion
The random uncertainty of our determination of the diameter of the FoV amounts to 2%, but strictly speaking we have only proven that the relative proportions of the FoVs of the different versions of HIRS are 1.4 : 1.3 : 0.7. Absolute values for the FoVs can only be calculated if absolute values for the radiance of the Moon are known with high accuracy. Existing models of the brightness temperature of the Moon, however, have typical uncertainties of 5 K [31], and only few observations of the Moon with infrared sounders other than HIRS have been analyzed and published. As the DIVINER Lunar Radiometer Experiment on the Lunar Reconnaissance Orbiter covered neither the wavelength range from 3.0 to 7.5 µm nor that from 8.6 to 12.5 µm, the measurements we present from HIRS must be considered a unique source of information about disk-integrated brightness temperatures of the Moon. Given the fact that values between 0.69° and 0.7° for the FoV of HIRS/4 are well established in the literature, we are confident that our assumptions about the size of the FoV are correct.
The ratio of the lunar radiance from our observations close to perihelion and aphelion was 1.056, according to the measurements in channel 4 (14.2 µm). This value is very close to the 6% seasonal variation in the Moon's thermal emission found with CERES (Clouds and the Earth's Radiant Energy System) [32]. The solar flux that the Moon absorbs, and also its emitted thermal flux, are inversely proportional to the square of its distance from the Sun; √1.056 = 1.028, which is quite close to the ratio of the distances, 1.031. Detecting the effect of the eccentricity of the Earth's orbit on the temperature of the Moon's surface with only two observations is an impressive demonstration of the performance of HIRS, and we conclude that the brightness temperature of the Moon in the thermal infrared is 1%-2% higher at perihelion than at aphelion for a phase angle of some 70°.
According to [19], the channel-to-channel registration is less than 0.01° for LW and less than 0.007° for SW. These values do not agree with our findings: the pointing direction in the along-track direction alone differs already by up to 0.031° among the long-wave channels of HIRS/4 on NOAA-19, and by up to 0.015° among the shortwave channels, based on accurate measurements of when the Moon came closest to the center of the FoV of each channel. The channel-to-channel registration in flight is hence at least two times worse than claimed in the KLM User's Guide for SW, and three times worse for LW. Our results, however, are very similar to actual measurements of the centroid locations of the SW channels of HIRS for NIMBUS F before launch [33]. The average pointing direction of all SW channels differs by 0.026° from the direction of the LW channels in the along-track direction; therefore there is a significant misalignment between the two groups of channels. As a consequence of this, and of a possible misalignment in the along-scan direction as well, the Moon was never fully included in the sounding shortwave channels in a third of the intrusions we found with the long-wave channels. This problem is worst for HIRS/4, because of its small FoV. We attribute the misalignment to different chromatic aberration of the lenses in the long-wave and shortwave optical paths and recommend taking this defect into account in the design of similar instruments in the future.
The small but significant differences among the brightness temperatures measured by the various N2O and CO2 sounding channels contain information about the wavelength dependence of the emission properties of the bulk material on the Moon. As the disk-integrated fluxes, however, are the sum over areas with quite different distances from the "sub-Sun" and "sub-HIRS" points, a thermo-physical model is needed to interpret our findings, a task that goes beyond the scope of this article.
Our method is not able to reproduce the biases between different satellites detected by other authors in the past [3,11], because we do not have observations of the Moon at exactly the same phase angle with the satellite pairs they used. Besides, the uncertainty of a single observation of the Moon in a given channel would have to be a small fraction of a Kelvin, and we cannot achieve that without a self emission model for HIRS. We conclude, however, on the basis of our investigation into non-linearity that any biases that may be present in the shortwave channels are rather caused by errors in the HIRS spectral response function, as stated by [11].
Finally, we mention the fact that a quite simple method for subtracting the contribution from sunlight in the shortwave channels produced surprisingly good results: only the reflectance of the scene at the central wavelength of the channel and its distance from the Sun are needed. Hence we believe this technique could easily be applied to HIRS Earth-viewing measurements where both variables are readily available, such as from the study of surface emissivity and reflectance of northern Africa at 11.1 µm, 8.3 µm, and 4 µm, which was carried out by [34]. Although the determination of the reflected sunlight gets less reliable when the Sun-scene-HIRS angle (the phase angle, when the scene is the Moon) is close to 90°, it should be good enough for most of the swath of HIRS, which extends from −49.5° to +49.5° around nadir. The Metop satellites have a local equator crossing time of 9:30, i.e., seen from the nadir pixel on Earth, the Sun has an hour angle of 37.5° (2.5 h from local noon times 15° per hour) when the satellite crosses the equator, again a value much smaller than 90°. This means that over tropical regions with known reflectance it should be possible to eliminate the reflected sunlight without major impact on the overall uncertainty of the measurements.
Conclusions
The Moon has been observed with HIRS on many different satellites. We identified a few of these observations that offered particularly illuminating information about basic properties of the instrument. A basic calibration of the raw data was sufficient to characterize various effects with an impact on the performance of the instrument. In some cases this concerned properties that have never been determined in flight before. We have described the methods employed and given examples of the accuracy that can be achieved. The accuracy of the measured brightness temperature of the Moon might be further improved by correcting for the HIRS instrument self-emission [20].
Although it was not our intention to present a thorough study of all observations of the Moon from all satellites that carried HIRS, we were able to fill some gaps in the knowledge about this instrument. We resolved the confusion about the size of the field of view, characterised how the co-registration of the channels depends on their central wavelength in flight, and supplied upper limits on the non-linearity of the shortwave channels. All of these results are essential for a proper estimate of the uncertainties of the data from HIRS, as well as for judging its compliance with requirements.
The Moon has also been observed with other infrared sounders, e.g., CERES [35] or IASI, and therefore it offers unique possibilities for cross calibration. This includes comparisons with future instruments like IASI-New Generation or the Meteorological Imager on Metop Second Generation.
The (disk-integrated) Moon data, obtained with different versions of HIRS in different wavelength channels, are very consistent. Hence, they are well suited to verify/benchmark thermo-physical model (TPM) techniques, which are widely used for (disk-integrated) thermal IR measurements of other airless bodies (like asteroids, satellites, trans-Neptunian objects, or inactive comets). The benchmarked TPM of the Moon would then also help to calibrate thermal IR instruments of other satellites, e.g., interplanetary missions like the Origins Spectral Interpretation Resource Identification Security-Regolith Explorer or Hayabusa2. Both of them have looked at the Moon during swing-by maneuvers with their IR instruments to obtain an in-flight calibration [36,37]. We aim for a thermophysical model of the Moon using the available global properties and also a well-established directional hemispherical emissivity. This model will take the true observing and illumination geometries (as seen from the satellites) into account. Eventually we intend to establish the Moon as a calibration reference with empirical uncertainties for infrared instruments to evaluate their calibration accuracy and to assess their long-term calibration stability. Similar efforts are already underway with microwave instruments [38] and optical sensors in the framework of inter-agencies collaborations, for example at ESA [39] and EUMETSAT [40].
Multiplicity distributions in proton-(anti)proton and electron-positron collisions with parton recombination
A new approach to the phenomenological description of the charged-particle multiplicity distributions in proton-(anti)proton and electron-positron collisions is presented. The observed features of the data are interpreted on the basis of stochastic-physical ideas of multiple production. Besides the processes of parton immigration and absorption, two- and three-parton incremental and decremental recombinations are considered. The complex behaviour of the multiplicity distributions at different energies is described by a four-parameter generalized hypergeometric distribution (GHD). Application of the proposed GHD to data measured by the CMS, ALICE, and ATLAS Collaborations suggests that soft multiparton recombination processes can manifest themselves significantly in the structure of the multiplicity distribution in pp interactions at very high energies.
I. INTRODUCTION
The multiple production of particles in high energy collisions has received a lot of attention over many years. A renewed interest in this topic originated recently with the beginning of operation of the Large Hadron Collider (LHC) at CERN. Among the first results obtained at the LHC were multiplicity measurements in proton-proton collisions. The CMS, ALICE, and ATLAS Collaborations provided valuable and precise data on multiplicity distributions (MD) of the charged hadrons in the new super high energy domain. The study of multiparticle production can give important information about parton processes and the transition of the colored partons to the colorless hadrons. The complex phenomena that influence MD are, however, very hard to describe in detail, because we cannot formulate the multiparton process in terms of the underlying QCD theory. Most of the particles which contribute to the multiplicity buildup are soft particles, and a perturbative approximation is inapplicable here. Second, hadronization should somehow be taken into account, which is far from being understood properly. The natural and probably most economical approach is to look for empirical relations and regularities.
The study of particle production as a function of multiplicity has revealed the very popular Koba-Nielsen-Olesen (KNO) scaling [1]. In pp/p̄p collisions, the scaling in the full phase-space holds up to the highest energy of the CERN Intersecting Storage Rings (ISR), but it is clearly violated [2] in the energy region of the CERN Super Proton Synchrotron (SPS) collider and beyond. A strong violation of KNO scaling was observed also at the energy √s = 7 TeV in the limited pseudorapidity intervals (|η| < 2.4), though for small pseudorapidity windows (|η| < 0.5) the scaling is approximately valid [3]. A related phenomenon is the so-called negative binomial regularity, which is the occurrence of the negative binomial distribution (NBD) in different interactions over a wide range of collision energies. The UA5 Collaboration showed that MD in the non-single-diffractive (NSD) pp/p̄p collisions can be described by NBD up to the energy √s = 546 GeV, both in the full phase-space [4] and in symmetric pseudorapidity windows [5]. Analysis of data on MD measured by the ALICE Collaboration indicates [6,7] that NBD describes the data in the small pseudorapidity window |η| < 0.5 up to the energy √s = 2360 GeV. Besides pp/p̄p collisions, NBD has been applied to various systems, including e+e− annihilations. Much effort has been made to explain the negative binomial form of MD observed in many situations; however, its physical origin has not been fully understood [8].
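A KNO scaling test can be coded in a few lines: distributions measured at different energies are mapped onto the variables z = n/<n> and ψ = <n>P_n, and scaling holds if the resulting curves coincide. The sketch below is our own illustration, not code from the analyses cited above.

```python
import numpy as np

def kno_variables(n, p_n):
    """Map a multiplicity distribution onto KNO variables.

    Returns z = n/<n> and psi = <n> * P_n.  KNO scaling holds when psi(z)
    measured at different collision energies falls on one universal curve.
    """
    n = np.asarray(n, dtype=float)
    p_n = np.asarray(p_n, dtype=float)
    p_n = p_n / p_n.sum()                 # enforce normalization
    mean_n = np.sum(n * p_n)
    return n / mean_n, mean_n * p_n

# z1, psi1 = kno_variables(n1, P1); z2, psi2 = kno_variables(n2, P2)
# Overlaying (z1, psi1) and (z2, psi2) tests the scaling hypothesis.
```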
The shape of the MD of particles produced in hadron-hadron collisions at high energies is quite different. The full phase-space data on charged-particle multiplicities obtained from the NSD events in pp collisions at √s = 900 GeV [9] indicated that, besides KNO scaling, the negative binomial regularity is violated as well. The measurements of MD at the energy √s = 1800 GeV by the E735 Collaboration [10] at the Fermilab Tevatron showed an even stronger deviation of the data from NBD. Though with large mutual discrepancies at high multiplicities, both data sets demonstrate a narrow peak at the maximum and some structure around n ~ 2<n>. Despite its correct qualitative behaviour, a single NBD is not sufficient to describe the experimental data. A systematic study of the complex form of MD was performed in the framework of a two-component model [11] and its three-component modification [12]. There exist also other approaches to the explanation of the observed features of particle production at high energies (for a review see Ref. [8]). New interest in this field is motivated by the recent results of the multiplicity measurements in pp collisions at the LHC. The data obtained by the CMS [3], ALICE [13], and ATLAS [14] Collaborations show a similar structure in the MD of charged particles produced in limited windows in pseudorapidity. The measurements allow one to study the evolution of the distinct peak at the maximum and the broad shoulder at large n, both with the collision energy and with pseudorapidity.
In this paper we propose an alternative phenomenological approach to the description of MD in pp/p̄p and e+e− collisions. Using the same concept for hadron and lepton interactions, we try to account for the structure in the data which emerges in pp/p̄p interactions at high energies. The proposed four-parameter representation of MD is motivated by a scenario of parton cascading. The considerations behind it are based on the premise that the dynamics of particle production, as manifested on the level of multiplicities, can be described in terms of a stochastic cascade with specific types of underlying processes. The construction includes the processes of parton immigration and absorption together with two- and three-parton incremental and decremental recombination. The recombination processes are assumed to occur in the final stage of the parton evolution, during color neutralization. The transition to the colorless hadrons is considered in a stationary regime at the breakdown of confinement and the onset of hadronization.
II. RECURRENCE RELATION BETWEEN MULTIPLICITIES N AND N+1
The number of particles created in high energy collisions varies from event to event. The distributions of probabilities P_n of occurrence of the multiplicity n provide a sensitive means to probe the dynamics of the interaction. Besides general characteristics of particle production, the distributions contain information about multiparticle correlations in an integrated form. The correlations of the final particles reflect features of the hadronization mechanism and properties of the QCD parton evolution just before hadronization. The data on MD of charged particles from high energy pp/p̄p and e+e− interactions allow us to study the processes underlying multiple production and its correlation structure within a phenomenological framework of parton cascading. The extraction of information on these processes requires the elementary charge conservation to be taken into account. In our approach we investigate and exploit the MD of charged-particle pairs and characterize it by a recurrence relation between P_n and P_{n+1}. This assumes a connection between the collisions of multiplicity n + 1 and n + 1 collisions of multiplicity n, which can be written in the form

(n + 1) P_{n+1} / P_n = g(n).    (1)
The class of MD defined by an expression of this type has been considered with a view to stimulated emission and cascading in Ref. [15]. The independent emission of particles, represented by the Poisson distribution, is characterized by g(n) = c. The constancy of g(n) means that the creation of an additional particle is independent of the number of other particles. The stimulated emission of identical bosons obeying Bose-Einstein statistics follows the geometric distribution, for which one has g(n) = c(n+1). This means that the emission probability of a boson is enhanced by a factor n+1 when n bosons are already present in the system. Both examples are special cases of NBD, given by the formula

P_n = Γ(n+k) / (Γ(k) n!) q^n (1−q)^k.    (2)

The distribution depends on two parameters, q = ⟨n⟩/(⟨n⟩+k) and k, for which the recurrence relation (1) is given by the linear dependence

g(n) = q (n+k).    (3)

For fixed ⟨n⟩, the Poisson distribution is recovered with q, k^(−1) → 0. The geometric distribution corresponds to k = 1. Chew et al. proposed [16] a generalization of NBD which gives a good description of MD of charged particles in e+e− annihilation [17]. The generalized multiplicity distribution (GMD) has the form

P_n = Γ(n+k) / (Γ(n+1−k′) Γ(k+k′)) q^(n−k′) (1−q)^(k+k′).    (4)

The distribution is a function of three parameters, q = (⟨n⟩−k′)/(⟨n⟩+k), k, and k′. The expression for g(n) in this case reads

g(n) = q (n+1)(n+k) / (n+1−k′).    (5)

The formula for GMD reduces to the expression for NBD when k′ = 0. For k = 0, GMD becomes the Furry-Yule distribution (FYD) proposed by Hwa and Lam [18]. The mentioned distributions are fully determined by the functional form of g(n) together with the general normalization condition to unity. The normalization is guaranteed for n > k′−1 by the linear dependence of g(n) at large multiplicities with q < 1. In the parton cascade picture of multiple production, the parameters of these distributions can be related to the rate constants of branching processes in the evolution equations for the probabilities.
A. Cascade processes with parton recombination
In a high energy collision with the creation of a multiparticle state, the dynamics of the system can be simulated as parton cascading associated with quark and gluon interactions during the interaction time. The corresponding parton cascade processes, inspired by elementary perturbative QCD, are of three types: a/ parton-parton collisions, b/ branching processes such as quark bremsstrahlung, gluon self-interaction, or gluon splitting, c/ fusion processes, e.g. gluon annihilation on a quark. Practically all QCD-based models of many-body production involve some form of cascading. The stochastico-physical picture of the multiplicity evolution can be described by Kolmogorov-Chapman differential-difference (DD) rate equations [19] for the probability P_n(t). The continuous parameter t is usually interpreted as an ordering parameter, a QCD evolution parameter, or time. The rate equations were applied to hadron physics many years ago in Ref. [20]. Soon afterwards, the properties of MD were studied by solving DD equations in terms of parton cascading by a number of authors [21]. In particular, all possible QCD vertices were taken into account in Ref. [22]. A stationary regime in birth and death processes with recombination (confluence) of two (2 → 1) and three (3 → 1) gluons has been investigated in Ref. [23].
In this paper we point out and exploit a somewhat different strategy. We try to establish the minimal number of processes in the cascade picture of particle production which could account for both the general character of MD and its complex structure observed at high energies. A successful description of experimental data requires the essential features of the cascade evolution to be introduced into the phenomenology. We attempt to single out those features which most influence the structure of MD at high energies. This involves processes of parton recombination at the end of the cascade, just near the breakdown of confinement. The last phase of the parton evolution is considered to have a large impact on the form of MD of the produced hadrons. We suppose that in the final stage there exists some kind of stationary regime which occurs at the transition between parton and hadron degrees of freedom. In this regime, soft partons intensively exchange their momenta to reach a "momenta uniformity" before their conversion into observable particles [23].
In order to introduce features of parton recombination into a multiplicity description, we consider a stationary regime of DD equations of Kolmogorov-Chapman type. Hereafter we refer to the partons as objects which cascade in "time" and regard the terms particle and parton as interchangeable. The binary parton-parton scatterings do not change the number of particles during the system evolution but can influence the neutralization or screening of the color flow. They can act as supporting processes which initiate branching or fusion of nearby partons. Such a connection represents multiparton interactions contributing to a change of the particle number by one. The simplest interactions of this type are recombinations of two or three particles. Here we consider the process of two parton incremental recombination (2 → 3) together with the three parton incremental (3 → 4) and three parton decremental (3 → 2) recombination processes. The confluence of two and three partons, 2 → 1 and 3 → 1, is supposed to be negligible with respect to the other recombination processes at the end of the cascade. Together with the particle production that depends on the particles already produced, we consider independent immigration (0 → 1) and absorption (1 → 0) as another source of partons, influenced by bulk properties of the system created in the collision. The accumulated energy is transformed into partons in this manner, or, on the contrary, a parton can be melted and absorbed by the expanding system again. Assuming the independence of the individual cascade processes, the corresponding DD evolution equations for the probability P_n(t) read

Ṗ_n = α_0 P_{n−1} − α_0 P_n
    + β_0 (n+1) P_{n+1} − β_0 n P_n
    + α_2 (n−1)(n−2) P_{n−1} − α_2 n(n−1) P_n
    + α_3 (n−1)(n−2)(n−3) P_{n−1} − α_3 n(n−1)(n−2) P_n
    + β_2 (n+1)n(n−1) P_{n+1} − β_2 n(n−1)(n−2) P_n.    (6)
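To illustrate how equations (6) behave, the following sketch (our own, not from the paper) integrates the rate equations forward in "time" with an explicit Euler step on a truncated multiplicity space, so that P_n(t) relaxes towards a stationary shape. The rate constants and the cutoff are arbitrary illustrative choices.

```python
# Euler integration of the rate equations (6) on a truncated space.
import numpy as np

N = 60                                    # multiplicity cutoff
a0, a2, a3 = 1.0, 0.05, 0.001             # alpha_0, alpha_2, alpha_3
b0, b2 = 0.8, 0.01                        # beta_0, beta_2

n = np.arange(N)
lam = a0 + a2 * n * (n - 1) + a3 * n * (n - 1) * (n - 2)   # n -> n+1 rates
mu = b0 * n + b2 * n * (n - 1) * (n - 2)                   # n -> n-1 rates
lam[-1] = 0.0                             # close the truncated space

def pdot(P):
    dP = -(lam + mu) * P
    dP[1:] += lam[:-1] * P[:-1]           # gain from n-1 by a birth step
    dP[:-1] += mu[1:] * P[1:]             # gain from n+1 by a death step
    return dP

P = np.zeros(N)
P[0] = 1.0                                # start from an empty system
dt = 1e-4
for _ in range(300000):
    P += dt * pdot(P)
print(P[:8])                              # approximate stationary P_n
```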
The coefficients α_0, α_2, and α_3 are the rates for the processes 0 → 1, 2 → 3, and 3 → 4, respectively. The analogous parameters for the degradation of the particle number in the processes 1 → 0 and 3 → 2 are denoted β_0 and β_2. Using the definition of the generating function

Q(w) = Σ_n P_n w^n,    (7)

we rewrite the system of DD equations (6) in terms of Q(w). The corresponding stationary solution (Ṗ_n = 0) satisfies a third-order differential equation whose regular solution is proportional to a generalized hypergeometric function. The complex constants a_i and b_j entering this function are given in terms of the real parameters α_0, α_2, α_3, β_0, and β_2 (see Appendix A). The ratio (1) is given as follows:

(n+1) P_{n+1} / P_n = g(n) = [α_0 + α_2 n(n−1) + α_3 n(n−1)(n−2)] / [β_0 + β_2 n(n−1)].    (10)

This recurrence relation together with the normalization condition for P_n fully determines the formula for MD, which we refer to as the generalized hypergeometric distribution (GHD). The distribution depends on the four parameters α_0/β_2, α_2/β_2, α_3/β_2, and β_0/β_2, which are ratios of the rate constants of the corresponding cascade processes.
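Since the stationary distribution is fixed by the recurrence (10) up to normalization, it can be generated numerically in a few lines. The sketch below (ours) builds the GHD probabilities from g(n) and normalizes them; the four parameter ratios are illustrative values, not fitted ones.

```python
# GHD probabilities from the stationary recurrence (10).
import numpy as np

def ghd(a0_b2, a2_b2, a3_b2, b0_b2, nmax=300):
    """P_n for n = 0..nmax; all rates expressed in units of beta_2.
    a3_b2 < 1 is required for the tail to be normalizable."""
    P = np.zeros(nmax + 1)
    P[0] = 1.0
    for n in range(nmax):
        g = (a0_b2 + a2_b2 * n * (n - 1) + a3_b2 * n * (n - 1) * (n - 2)) \
            / (b0_b2 + n * (n - 1))      # assumes b0_b2 > 0
        P[n + 1] = g * P[n] / (n + 1)
    return P / P.sum()

P = ghd(a0_b2=20.0, a2_b2=2.0, a3_b2=0.5, b0_b2=40.0)
print(P[:5], P.sum())
```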
In the next section we exploit GHD to describe MD of the charged particles produced in pp/p̄p and e+e− interactions. The formula (10) has various limits applicable to both reactions at different energies and in different phase-space regions. If α_0 → 0 and β_0 → 0 simultaneously, GHD reduces to NBD. In that case g(n) is given by (3) with the parameters q = α_3/β_2 and k = α_2/α_3 − 2. On the contrary, if α_0 and β_0 are both large enough relative to the other parameters, a Poisson-like peak appears in MD at small n, for which g(n) ≃ α_0/β_0. There exists a region where GHD can be narrower than the Poisson distribution. This happens at low energies, where the rate constants α_2 and α_3 are small or even vanish.
III. ANALYSIS OF DATA
The statistics of the charged particle MD is governed by the requirements of conservation laws in each collision. Due to global charge conservation, only even multiplicities occur when dealing with experimental data in the full phase-space. On the other hand, charge conservation cannot be satisfied by stochastic processes which are independent of the charge. In particular, GHD, being defined for all non-negative integer values of n, cannot directly represent the full phase-space data on multiplicities, which are always even. There are two main approaches to the problem. In the often used procedure [4], the probabilities for even integers n only are taken from a theoretical distribution, then renormalized and compared to the measured values of P_n. The second approach is to deal with MD of particle pairs and compare it to a theoretical distribution for all values of n. The two methods give different values of the parameters entering the distribution and differ also in their capability to describe experimental data. Here we use the second method and apply GHD and NBD (for comparison) to the distribution of particle pairs.
We have analysed experimental data on MD of charged particles produced in non-single-diffractive (NSD) pp/p̄p collisions in the full phase-space. The analysis was performed with the high energy data measured by the E735 Collaboration [10] at √s = 1800, 1000, 546, and 300 GeV, by the UA5 Collaboration [24] at √s = 900 and 200 GeV, and by the ABCDHW Collaboration [25] at √s = 63, 53, 45, and 30 GeV. The study also includes the FNAL fixed target data [26][27][28][29] at √s = 27.6, 23.8, 19.7, and 13.8 GeV, corrected for diffraction cross sections [30], the Serpukhov data at √s = 11.5 GeV [31] with diffraction corrections [32], and the low energy CERN data [33] measured by the BHM Collaboration at √s = 6.8 and 4.9 GeV. The results of the analysis with the CERN-MINUIT program are shown in Table I and in Figs. 1-2. As can be seen from Table I, the description of MD of particle pairs by NBD is in most cases unsatisfactory, including the data from the ISR and lower energies. Though MD of the charged particles can be approximated by NBD reasonably well up to the energy √s = 200 GeV, NBD is not able to account fully for the narrower distribution of the particle pairs. At energies lower than √s ∼ 20 GeV, the distribution becomes even narrower than the Poisson distribution. This corresponds to negative values of the parameters k and q, meaning that NBD transforms into a binomial one. By virtue of its four parameters, GHD describes the complex structure of the charged particle MD emerging in the high energy pp/p̄p collisions in the full phase-space sufficiently well. This is illustrated in Fig. 1(a) on the data [10] from the E735 Collaboration, where a peak at low n and a shoulder at large multiplicities are visible. The evolution of the observed structure with the energy √s is shown in more detail in Fig. 1(b). Here the relative residues of MD with respect to the NBD parametrization are depicted. The residues are mutually shifted by factors of 2 for the single energies. The lines correspond to the description of the data with GHD. As one can see from Fig. 1(b), the shoulder broadens with the energy √s and its maximum moves towards larger multiplicities. This corresponds to an increase of the parameter α_3/β_2, which is the ratio of the rate constants for the processes 3 → 4 and 3 → 2. The measurements of MD by the UA5 Collaboration at √s = 200 and 900 GeV in the full phase-space lie systematically below the E735 data [10] for large n. This discrepancy has the consequence that the parameter α_3/β_2 comes out negative for the UA5 data. Therefore we set it to zero in this particular case in our analysis (see Table I).
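The fits above were performed with the CERN-MINUIT program. Purely as an illustration of how such a chi-square fit of the four GHD ratios could be set up, the sketch below uses scipy.optimize on pseudo-data generated from the ghd() helper defined earlier; it is not the authors' fitting code.

```python
# Illustrative chi-square fit of the GHD ratios on pseudo-data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true = (20.0, 2.0, 0.5, 40.0)                 # a0/b2, a2/b2, a3/b2, b0/b2
P_true = ghd(*true, nmax=120)
dP = 0.03 * np.clip(P_true, 1e-12, None)      # 3% pseudo-errors
P_data = P_true + rng.normal(0.0, 1.0, P_true.size) * dP

def chi2(theta):
    if np.any(np.asarray(theta) <= 0) or theta[2] >= 1.0:
        return np.inf                          # keep the fit in a sane region
    return np.sum(((ghd(*theta, nmax=120) - P_data) / dP) ** 2)

res = minimize(chi2, x0=[15.0, 1.5, 0.4, 30.0], method="Nelder-Mead")
print(res.x)                                   # recovered parameter ratios
```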
The description of the data on MD by GHD in the NSD proton-proton collisions at the ISR energies is shown in Fig. 2(a). The relative residues with respect to the NBD parametrizations are depicted in Fig. 2(b). The residues are mutually shifted by factors of 2 for the single energies. One can see from Fig. 2(b) that NBD does not represent an accurate parametrization of MD of particle pairs even at the ISR energies. Especially at low n, the residues of the data relative to NBD show remnants of the peaky structure which is clearly visible in the TeV energy region. The solid lines represent the parametrization of the data by GHD. The small experimental errors allow a good determination of α_2/β_2 and α_3/β_2 in this region. The values of the parameters are non-zero at the ISR energies, but they are smaller than at the TeV energies. Similarly, the parameters α_0/β_2 and β_0/β_2 are non-zero, though both are relatively small. This is once again a reformulation of the statement that NBD is not sufficient to describe MD of the particle pairs; nevertheless, at the ISR energies it provides a much better approximation of the data than in the TeV energy region.
The analysis of MD below √s ∼ 20 GeV showed that the parameter α_3 becomes negative. Therefore we set it equal to zero in this region. This means that the process 3 → 4 dies out first at low energies. For still smaller √s there is not enough energy even for the process 2 → 3, and GHD becomes a two-parameter distribution. In this region, MD of particle pairs is extremely narrow. The non-zero value of β_2 makes it narrower than the Poisson distribution. This means that the parton recombination process 3 → 2 remains active down to very small energies, though it brings along only a diminution of the particle number. Recently, the CMS Collaboration provided results of systematic measurements of charged particle multiplicities in pp collisions at the LHC. The data [3] were accumulated in five pseudorapidity ranges from |η| < 0.5 to |η| < 2.4 at the collision energies √s = 900, 2360, and 7000 GeV. The measurements of MD in restricted phase-space regions can serve as a sensitive probe of the underlying dynamics in various phenomenological models.
For an adequate description one needs to make additional assumptions when projecting the full phase-space data onto smaller pseudorapidity windows |η| < η_c. The odd-even effect of the charged particle distribution P^ch_n is smeared out with decreasing η_c, and the distribution fills all values of n. We avoid the complications concerning the charged particle measurements in the limited phase-space regions and collect the data in neighbouring even and odd bins as follows:

P_n = P^ch_{2n} + P^ch_{2n−1},  n = 1, 2, ...,    (11)

so simulating the distribution of particle pairs. It would be more correct to deal with MD of negative particles, which is essentially the distribution of the charged particle pairs. Nevertheless, application of GHD to the distribution (11) allows us to establish the main trends which characterize MD of negative particles. Here we study the dependence of the parameters α_0/β_2, α_2/β_2, α_3/β_2, and β_0/β_2 on the size of the pseudorapidity span |η| < η_c. In Fig. 3(a) we show MD of charged particles in the limited phase-space regions measured by the CMS Collaboration in pp collisions at √s = 7000 GeV. The CMS data for the five pseudorapidity ranges from |η| < 0.5 to |η| < 2.4 are collected in the neighbouring even and odd bins in multiplicity (11) and multiplied by powers of 0.2 for the different η_c. The data confirm the existence of the complex structure observed in MD by the E735 and UA5 Collaborations at the energies √s = 900-1800 GeV in the full phase-space. At √s = 7000 GeV, the peaky form at low n and a shoulder at high multiplicities are seen in all five measured η_c windows. The peak becomes most prominent and the shoulder widest for |η| < 2.4. The evolution of the structure with η_c is demonstrated in more detail in Fig. 3(b), where the relative residues of the data with respect to the NBD parametrization are depicted. For the sake of clarity, the residues are mutually shifted by unity for the different pseudorapidity intervals. The solid lines represent a description of the data by GHD. One can see from Fig. 3(b) that at the energy √s = 7000 GeV the residual structure survives even down to η_c = 0.5. As a consequence, NBD is not sufficient to describe the data well enough even for small pseudorapidity windows at this super high energy. The CMS data measured at the energies √s = 2360 GeV and √s = 900 GeV are presented in the same fashion in Fig. 4 and Fig. 5, respectively. A similar structure as in Fig. 3 is visible also at these energies. The systematic measurements of the CMS Collaboration allow us to study the evolution of the observed structure with √s. One can see from Figs. 3(b), 4(b), and 5(b) that the peak at low n becomes more distinct with increasing energy. At the same time the shoulder widens and its maximum moves towards larger multiplicity. On the other hand, one can see from Figs. 4(b) and 5(b) that, for the smallest window |η| < 0.5, the multiplicity structure vanishes and the residues of MD with respect to the NBD parametrization become flat at both these energies. This is in accord with the conclusions of Ref. [7], namely that at √s = 900 and 2360 GeV the experimentally measured multiplicity distributions are well described by NBD for η_c = 0.5.
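The pairing prescription (11) is a one-line operation on a measured distribution; a toy illustration (ours) is given below, with the zero-particle bin left out since n starts at 1.

```python
# Collecting neighbouring even and odd bins into the pair distribution (11).
import numpy as np

P_ch = np.random.dirichlet(np.ones(40))       # toy charged-particle MD
n_max = (len(P_ch) - 1) // 2
P_pairs = np.array([P_ch[2 * n] + P_ch[2 * n - 1] for n in range(1, n_max + 1)])
```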
The ALICE Collaboration presented data [13] on MD of charged particles in pp collisions at the energies √s = 900, 2360, and 7000 GeV. Figure 6(a) shows the experimentally measured MD collected in the neighbouring even and odd bins in multiplicity (11) in the central pseudorapidity region |η| < 1. The data are from an event class where at least one charged particle in the measured pseudorapidity range is required. The depicted distributions are multiplied by powers of 0.2 for the different √s. The corresponding relative residues with respect to the NBD parametrization are presented in Fig. 7(a). The residues are mutually shifted by unity for the different energies. The full lines represent the description of the ALICE data by GHD. One can see from Fig. 7(a) that the data on MD for |η| < 1 at the energy √s = 900 GeV can be well described by NBD. At √s = 2360 GeV, NBD is still a good description of the data, with the exception of low n, where a peaky structure begins to emerge. The peak at low multiplicity is clearly visible especially at √s = 7000 GeV. One can see from Fig. 7(a) that NBD overestimates the experimental data at high multiplicities (n > 55) at this super high energy, as was already observed in Ref. [13].
The ATLAS Collaboration measured the charged-particle MD [14] in different phase-space regions for various multiplicity cuts at three LHC energies. The most inclusive phase-space region covered by the measurements corresponds to the conditions |η| < 2.5, p_T > 100 MeV, and n_ch ≥ 2. We analyse the ATLAS data obtained under the above η_c and p_T selections at √s = 900 and 7000 GeV. Figure 6(b) shows the experimentally measured MD collected in the neighbouring even and odd bins in multiplicity (11). The distribution at √s = 900 GeV is multiplied by a factor of 0.2. Figure 7(b) shows the corresponding relative residues with respect to the NBD parametrization. The residues at √s = 900 GeV are shifted down by unity. The full lines represent the approximation of the ATLAS data by GHD. The distinct peak at low multiplicities in the ATLAS data represents a stringent criterion for the description of the whole shape of the distribution. The values of the parameters and the processes included in GHD are strongly restricted by the peak's position, its width and height, as well as by the form of the shoulder at high n. The four-parameter GHD accounts for a smooth transition from the peak region to the shoulder at high multiplicities. Both structures in the ATLAS data are measured at a level of experimental errors which excludes the presence in GHD of processes other than those considered. Besides this, we have checked that the application of GHD to the MD of particle pairs is substantial, because when applied to the distribution of all charged particles, the form of the data is not reproduced correctly. The measurements performed by the CMS, ALICE, and ATLAS Collaborations show that the peaky structure of MD is clearly seen in pp collisions in the limited phase-space regions at the LHC energies. The structure becomes more distinct with the increasing width of the pseudorapidity window and is most pronounced at the highest energy, √s = 7000 GeV. A similar trend in the pseudorapidity dependence of MD is visible in the data [24] measured by the UA5 Collaboration in NSD p̄p collisions at √s = 900 GeV. Figure 8(a) shows the UA5 data collected in the neighbouring even and odd bins in multiplicity (11) in the full phase-space and in four smaller pseudorapidity regions. The depicted distributions are multiplied by powers of 0.1 for the different η_c. Figure 8(b) shows the corresponding relative residues with respect to the NBD parametrization. The residues in the smaller pseudorapidity windows are mutually shifted down by factors of two. The full lines represent the description of the UA5 data by GHD. Starting from the window |η| < 1.5, a small peak at low multiplicity emerges in the shape of MD. It evolves with the window size and becomes best visible in the full phase-space. As can be seen from Fig. 8(b), the residues with respect to NBD are perfectly flat in the smallest window |η| < 0.5, meaning that NBD describes MD in this region well. Using formula (10) for GHD, we also studied the inclusive samples of MD of charged hadrons produced in e+e− annihilations in the full phase-space at various collision energies √s. As before, GHD is applied to the distribution of particle pairs for all multiplicities n ≥ n_0. The value of n_0 is the minimal number of charged particle pairs measured in experiment. The results of the analysis are presented in Table II and Figs. 9-10.
We have analysed data on MD of charged particles obtained by the OPAL Collaboration at the centre-of-mass energies √s = 172, 183, and 189 GeV [34]. The data, with high statistical precision, correspond to the energy region sufficiently far beyond the Z0 peak. The three data samples are measured in the multiplicity range which begins with n_ch = 8, i.e. with n_0 = 4 particle pairs. The analysis also includes data on MD measured by the OPAL Collaboration at lower energies, √s = 161 GeV [35] and √s = 133 GeV [36], and at the energy of the Z0 peak, √s = 91 GeV [37]. The application of GHD to the OPAL data shows that the parameter β_0/β_2 is compatible with zero and, therefore, it was set to zero in this high energy region. In such a case, GHD becomes a three-parameter distribution for which the recurrence relation (1) takes the form

(n+1) P_{n+1} / P_n = ᾱ_0 / [n(n−1)] + ᾱ_2 + ᾱ_3 (n−2),  ᾱ_i = α_i/β_2.    (12)

In this limit the distribution reflects the relatively sharp increase of P_n at low n ≥ n_0 typical for e+e− annihilations at high energies and accounts for a "negative binomial tail" at large multiplicities. The relative residues of MD of charged particles measured by the OPAL Collaboration with respect to the description of the data by GHD with three parameters (12) are depicted in Fig. 9(a). The residues are nearly zero, reflecting an acceptable parametrization of the experimental data. The same holds for the data [38] obtained by the ALEPH Collaboration at the energy √s = 91 GeV (see Table II). As shown in Ref. [17], a good description of the OPAL data can be obtained also by GMD, characterized by the recurrence factor (5). An exception is represented by the data on MD measured by the DELPHI Collaboration at the energy of the Z0 peak, in the sense that their description requires a non-zero value of the parameter β_0/β_2. As seen from Table II, this parameter is required also by the data at lower energies. The relative residues of MD of charged particles measured by the DELPHI [39], AMY [40], TASSO [41], and HRS [42] Collaborations with respect to the four-parameter GHD are depicted in Fig. 9(b). In all cases a good description is obtained.
We also analysed data [43] on MD in e+e− annihilations measured by the MARK I Collaboration in the low energy region. The application of GHD to the multiplicity data for √s < 10 GeV gives negative values of the parameters α_3/β_2 and α_2/β_2 with an over-parametrized description. Therefore we set α_3 = α_2 = 0 at these energies. In that case, similarly as for pp collisions at very low energies, MD in e+e− annihilation can be characterized by the recurrence (1) with

(n+1) P_{n+1} / P_n = α_0 / [β_0 + β_2 n(n−1)].    (13)

Figure 9: (a) The relative residues of MD of charged particles measured by the OPAL Collaboration with respect to GHD, at √s = 189, 183, and 172 GeV from Ref. [34], at √s = 161 GeV from Ref. [35], at √s = 133 GeV from Ref. [36], and at √s = 91 GeV from Ref. [37]. (b) The data measured by the DELPHI Collaboration at √s = 91 GeV are from Ref. [39]. The combined data measured by the AMY Collaboration at √s = 57 GeV are from Ref. [40]. The data measured by the TASSO Collaboration at √s = 43.6, 34.8, and 22 GeV are from Ref. [41]. The data measured by the HRS Collaboration at √s = 29 GeV are from Ref. [42].

Figure 10(b): MD of charged particles produced in e+e− annihilation in the full phase-space. The depicted data from the OPAL [34], DELPHI [39], AMY [40], and HRS [42] Collaborations were measured at the energies √s = 189, 91, 57, and 22 GeV, respectively. The lines represent the description of the data by GHD.
Because of the non-zero value of β_2 in (13), GHD is narrower than the Poisson distribution in this region. Figure 10(a) shows the relative residues of MD with respect to GHD at low energies. A summarizing illustration of the description of MD of charged particles produced in e+e− annihilations at several energies is presented in Fig. 10(b). The full lines represent GHD with the parameters quoted in Table II.

Table II: The results of the analysis of MD of the charged particle pairs in e+e− annihilations in the full phase-space by GHD. The minimal number of pairs is n_0. If not quoted, the corresponding parameter of GHD was set to zero. The errors correspond to the quadratic sum of the statistical and systematic uncertainties of the data when both are published.
IV. DISCUSSION
The motivation of the present study of multiple production originates from the experimental observation that in pp/p̄p interactions at high energies a structure in the charged particle MD emerges both in the full phase-space and in limited phase-space regions. We concentrate on obtaining a plausible description of the observed structure, which is distinctly visible in the new data from the LHC in the super high energy domain. The phenomenological analysis is based on a scenario of multiparticle production in terms of parton cascade processes. The proposed approach aims to grasp some qualitative features of parton-to-hadron transitions which may be important at the end of the parton cascading. The observable shape of MD is assumed to be influenced mostly by the soft particles produced in the final stages of the cascade development. Besides the ordinary birth (0 → 1) and death (1 → 0) processes, we consider the multiparton incremental (2 → 3), (3 → 4) and decremental (3 → 2) recombination processes, which are supposed to contribute significantly to the multiplicity build-up. This kind of two and three parton recombination interaction in the final stage of the cascade evolution can be justified by the physical requirements of color neutralization and of reaching an approximate "momenta uniformity" at hadronization.
The phenomenological background of the data description relies on DD evolution equations which include the terms corresponding to the suggested processes. A stationary solution of the equations gives the recurrence relation (10) defining a distribution which we refer to as GHD. The essential ingredients of the analysis are the four parameters of GHD, the ratios α_0/β_2, α_2/β_2, α_3/β_2, and β_0/β_2, constructed from the rates α_i and β_i of the incremental and decremental parton cascade processes, respectively. The energy dependence of the parameters in the full phase-space which follows from the performed analysis is shown in Figs. 11 and 12. The empty symbols represent the values obtained from the data on MD in pp/p̄p interactions. The full symbols correspond to e+e− annihilations. The regular behaviour of all four parameters is guaranteed by the non-zero values of β_2. This means that the recombination process (3 → 2) is present in both reactions at all energies. The ratios α_0/β_2 and β_0/β_2 reflect features of the data connected with the different initial states in the lepton and hadron collisions. They are proportional to the rates α_0 and β_0 of the immigration and death processes governed by the interactions of partons with the rest of the system as a whole. We will refer to them as parameters of the I. type. As one can see from Fig. 11(a), the parameter α_0/β_2 increases with √s for both reactions. This means that an increase in the collision energy results in a relative enhancement of the parton immigration (0 → 1) with respect to the three parton decremental recombination (3 → 2). Figure 11(b) shows a similar trend in the parameter β_0/β_2 for pp/p̄p collisions. The relative increase of the rate β_0 with √s points to the intensive absorption of partons in the bulk of the expanding system formed in hadron collisions at high energies. A different picture is foreseen in e+e− annihilations. Here the parameter β_0/β_2 reaches a maximum at √s ≃ 40 GeV, and beyond the energy of the Z0 peak it drops to zero (see Table II). This suggests that, in contrast to the hadron collisions, the parton absorption becomes negligible in e+e− annihilations at high energies.
Another difference in the behaviour of the parameters of the I. type for the lepton and hadron collisions is in their absolute values. In the region below √s ≃ 40 GeV, both parameters α_0/β_2 and β_0/β_2 are larger for e+e− annihilations than for pp/p̄p interactions. There are indications from Figs. 11 and 12 that, at high energies, this tendency may reverse. A reason for such behaviour may be the influence of jets on the multiplicity structure. While in e+e− annihilations at low energies the immigration rate α_0, stemming from two (or a few) jets, prevails over the parton immigration in the hadron collisions, the multitude of minijets at high energies would result in a much larger α_0 in the pp/p̄p interactions. This in turn would lead to a higher absorption rate β_0 in the hadron collisions, giving the partons a larger probability of being melted back into the complex system with many minijets.
As seen from Figs. 11 and 12, there exists a region in which the parameters α_0/β_2 and β_0/β_2 can be relatively small within the errors indicated. The region corresponds to the pp interactions in the energy interval √s ∼ 20-60 GeV. The small values of α_0 and β_0 mean that GHD depends mostly on the parameters α_2/β_2 and α_3/β_2. In such a case the data can be relatively well approximated by NBD, except for a few low values of n (see Fig. 2(b)).
The energy dependence of α_2/β_2 and α_3/β_2 is shown in Fig. 12. These ratios depend only on the rates of the recombination processes which are assumed to be active at the stage of parton-hadron conversion. The parameters characterize features of the data connected with the breakdown of confinement and the onset of hadronization. We denote α_2/β_2 and α_3/β_2 as parameters of the II. type. Within the errors indicated, both parameters reveal approximately the same energy dependence, common to e+e− and pp/p̄p collisions. Exceptions are the values for the UA5 data [24], which result from the discrepancies at high multiplicities pointed out in Ref. [10]. As seen from Tables I and II and Fig. 12(a), the parameter α_2/β_2 has a threshold in the region √s ∼ 7 GeV. Afterwards it becomes larger than unity and continues in rapid growth at high energies. This means that, with increasing √s, the rate of the recombination process (2 → 3) increasingly prevails over the rate of the inverse process (3 → 2). The parameter α_3/β_2 has a threshold in the region √s ∼ 20 GeV. It grows with the energy and reaches a value of 0.6 at √s ∼ 1 TeV. In contrast to the parameters of the I. type, the parameters of the II. type reflect features of multiplicity production which are common to the e+e− and pp/p̄p interactions. The analysis suggests that two and three parton recombination processes may be part of an intrinsic property of the parton-hadron transitions. A physical justification for such an idea may be connected with the processes of color neutralization.
Experimental data on MD in the central pseudorapidity windows |η| < η_c allow us to study the behaviour of the parton processes in limited phase-space regions. The structure of MD observed in the hadron collisions in the full phase-space manifests itself distinctly in the limited windows in pseudorapidity if the collision energy is sufficiently high. This allows a more reliable determination of the ratios of the corresponding rates of the single processes in dependence on the window size. The fine structure of MD is visible even for the small window η_c = 0.5 in pp collisions at √s = 7000 GeV.
The data at this energy give the strongest restriction on the values of the parameters in the small pseudorapidity range. The dependence of the ratios α_0/β_2, α_2/β_2, α_3/β_2, and β_0/β_2 on η_c is depicted in Figs. 13 and 14. The symbols represent the values of the parameters obtained from the analysis of the data measured by the CMS, ALICE, ATLAS, and UA5 Collaborations at different energies. For clarity, every figure is divided into four panels. Three panels show the values obtained from the analysis of the data measured at the LHC at √s = 7000, 2360, and 900 GeV, respectively. The values corresponding to the UA5 data are shown in the fourth panel.
One can see from Fig. 13 that α_0/β_2 and β_0/β_2 increase with η_c at all displayed energies. Both parameters reveal a weak energy dependence in the depicted η_c region. With increasing window size, the value of α_0 becomes increasingly larger than β_0. This means that the rate of the immigration process (0 → 1) grows faster with pseudorapidity than the rate of the parton absorption (1 → 0). Except at the energy √s = 7000 GeV, the parameters α_0/β_2 and β_0/β_2 can be relatively small for η_c = 0.5 within the errors indicated. In such a case GHD depends effectively on the two parameters of the II. type and can be approximated by NBD. This is seen in Figs. 4(b), 5(b), and 8(b), where a good approximation of the data by NBD at √s = 2360 and 900 GeV for η_c = 0.5 is demonstrated. This conclusion is in accord with the summary made in Ref. [7]. As shown in Fig. 13, the errors of α_0/β_2 and β_0/β_2 at √s = 7000 GeV exclude small values of these parameters even for η_c = 0.5. This is why the data cannot be well approximated by NBD. This statement rephrases the fact that the structure of MD stays beyond the NBD description in this region (see the lowest part of Fig. 3(b)). The pseudorapidity dependence of the parameters α_2/β_2 and α_3/β_2 is shown in Fig. 14. The ratio α_2/β_2 reveals a growing tendency with η_c and flattens at √s = 7000 GeV. This observation suggests the importance of the two parton incremental recombination process (2 → 3) in the small pseudorapidity windows at this ultra high energy. On the contrary, the three parton incremental recombination (3 → 4) falls off for small η_c, as shown in the upper left panel of Fig. 14(b). The growing tendency of α_3/β_2 with η_c is seen in the LHC data at all quoted energies.
V. CONCLUSIONS
Charged particle multiplicity distributions in pp/p̄p collisions have been studied, including new data from the LHC. The analysis comprises MD in the full phase-space as well as in the limited windows in pseudorapidity. At high energies the distributions show a relatively narrow peak at small multiplicities and a shoulder in the tail.
A phenomenological description of the observed structure was proposed. Using techniques based on the solution of DD evolution equations relevant for the stochastico-physical picture of particle production, a simple formula (10) for the probabilities of the produced multiplicity has been obtained in a stationary regime. The basic ingredients of the scenario are the elementary immigration and absorption of partons and the processes of particle recombination. We considered two and three particle incremental (2 → 3), (3 → 4) and three particle decremental (3 → 2) recombinations. A physical justification for the existence of such processes may be connected with the requirements of color neutralization at the end of the parton cascade and the reaching of an approximate "momenta uniformity" of the soft particles at hadronization. Features such as two and three parton recombination allow the particle number to change, particle momenta to be exchanged, and color to be neutralized repeatedly just before the conversion into observable hadrons. The corresponding solution of the higher order equation for the generating function based on the recombination processes exhibits qualitative properties which are absent in the first order.
This allowed a quantitative description of the complex structure of the data on MD in pp/p̄p collisions both in the full phase-space and in the limited pseudorapidity windows. The phenomenological formula (GHD) was applied to the description of the charged particle distributions in e+e− annihilations at different energies √s. Good agreement with the data was obtained. The dependence of the four parameters of GHD on the energy and pseudorapidity was discussed. The behaviour of some parameters reveals a universal character which is independent of the reaction type, while some other parameters depend on it. It was shown that the incremental recombination processes play an increasingly large role in the multiplicity production as the collision energy increases.
Within the approach used and on the basis of the studied material, we conclude that the data on MD indicate the existence of a certain type of recombination processes correlating particle degrees of freedom, which manifests itself at high energies.

Table: The results of the analysis of MD of the charged particle pairs in pp collisions in the limited phase-space regions |η| < η_c.
Cyclical Focal Loss
The cross-entropy softmax loss is the primary loss function used to train deep neural networks. On the other hand, the focal loss function has been demonstrated to provide improved performance when there is an imbalance in the number of training samples in each class, such as in long-tailed datasets. In this paper, we introduce a novel cyclical focal loss and demonstrate that it is a more universal loss function than cross-entropy softmax loss or focal loss. We describe the intuition behind the cyclical focal loss and our experiments provide evidence that cyclical focal loss provides superior performance for balanced, imbalanced, or long-tailed datasets. We provide numerous experimental results for CIFAR-10/CIFAR-100, ImageNet, balanced and imbalanced 4,000 training sample versions of CIFAR-10/CIFAR-100, and ImageNet-LT and Places-LT from the Open Long-Tailed Recognition (OLTR) challenge. Implementing the cyclical focal loss function requires only a few lines of code and does not increase training time. In the spirit of reproducibility, our code is available at \url{https://github.com/lnsmith54/CFL}.
Introduction
The use of trained neural networks is pervasive in a wide variety of societal applications such as medical diagnosis, scientific discovery, and the defense of our homes and country. In the majority of cases, a cross-entropy softmax loss guides the training of networks.
On the other hand, focal loss [11] is superior to the cross-entropy softmax loss when there is an imbalance, such as in the number of training samples per class, a foreground-background imbalance [11], or a positive-negative label imbalance in multi-label classification [16]. Focal loss modifies the cross-entropy softmax loss to increase the focus of a neural network's training on the hard, misclassified data samples. That is, the focal loss is governed by

L_lc = −(1 − p_t)^γ_lc log(p_t),    (1)

where L_lc focuses the loss on the low-confidence samples, p_t is the softmax probability, and focal loss adds the weight (1 − p_t)^γ_lc to the cross-entropy softmax loss. As the probability p_t goes to 1 for more confident training samples, this weight drives the loss term to zero faster than for cross-entropy, as shown in Figure 1. The impact of this weighting is to focus the network training on the rarer and less confident training samples. When γ_lc = 0, the focal loss becomes identical to the cross-entropy softmax loss.
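For concreteness, a minimal PyTorch sketch of this weighting is given below; it is our illustration of Equation 1 for single-label classification, not the reference implementation of Ref. [11].

```python
# Focal-loss weighting: cross-entropy scaled by (1 - p_t)^gamma_lc.
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma_lc=2.0):
    logp = F.log_softmax(logits, dim=-1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)   # log p_t
    p_t = logp_t.exp()
    return (-(1.0 - p_t) ** gamma_lc * logp_t).mean()

logits = torch.randn(8, 10)
target = torch.randint(0, 10, (8,))
print(focal_loss(logits, target))         # gamma_lc = 0 recovers CE
```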
While the focal loss has been found beneficial in tasks with imbalanced class data, it generally reduces the performance when training with more balanced datasets. Therefore, the focal loss is not a good replacement for the cross-entropy softmax loss for most applications.
A paper on the concept of "General Cyclical Training of Neural Networks" [19] defines general cyclical training as network training that starts and ends with a focus on the easy and confident training samples and trains during the middle epochs with an increasing and then decreasing focus on the hard training samples. In other words, general cyclical training can be considered a combination of curriculum learning [2] in the early epochs, training on the full problem space during the middle epochs, and fine-tuning on confident samples at the end.

Figure 1: The CE loss is −log(p_t), the FL loss term is −(1 − p_t)^γ_lc log(p_t), and the L_hc term is −(1 + p_t)^γ_hc log(p_t).
In this paper, we propose a cyclical focal loss (CFL) that follows this principle of general cyclical training. Specifically, we propose a new loss weighting term,

L_hc = −(1 + p_t)^γ_hc log(p_t),    (2)

where this term causes the loss to increase the focus on the more confident training samples (hence the name L_hc, for high-confidence training samples). Figure 1 compares the weighting terms for γ_hc = 2 and 4 in Equation 2 to the standard cross-entropy. It is clear in this figure that this focusing term weights the loss of samples for which p_t is close to 1 substantially more than cross-entropy does. We define the cyclical focal loss function so that more confident training samples are weighted more heavily in the early and final epochs via this new term, while in the middle epochs the less confident training samples are more heavily weighted via Equation 1.
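The snippet below (ours) simply evaluates the three weighting profiles of Figure 1 on a grid of p_t values, making the contrast between the focal weight and the high-confidence weight explicit.

```python
# Weight profiles of Figure 1 on a grid of p_t values.
import torch

p_t = torch.linspace(0.05, 0.95, 5)
w_ce = torch.ones_like(p_t)          # cross-entropy weight
w_fl = (1.0 - p_t) ** 2              # focal weight, gamma_lc = 2
w_hc = (1.0 + p_t) ** 4              # high-confidence weight, gamma_hc = 4
print(w_ce, w_fl, w_hc, sep="\n")
```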
An additional inspiration for cyclical focal loss comes from curriculum learning [2]. Curriculum learning implies that it is best to construct the training methodology for the task at hand so that the network's weight updates during the earliest epochs are driven by the confident training samples via Equation 2. Training on hard samples is performed in the middle epochs to improve generalization. At the end of training is a fine-tuning stage on the confident samples learned by the network, because this is when the network learns the more complex patterns [25] from the most confident training samples.
This paper demonstrates that cyclical focal loss is a more universal loss function than cross-entropy softmax loss or focal loss for balanced or imbalanced datasets. In our experiments on balanced datasets, we show that using CFL generally improves on the network's generalization performance, and at worst is comparable to the cross-entropy softmax loss.
Our experiments with only 4,000 training samples from CIFAR-10 and CIFAR-100 show superior performance for CFL for both balanced and imbalanced versions. We also demonstrate in the open long-tailed recognition (OLTR) challenge [12] that the cyclical focal loss improves on the network's performance more than training with either softmax or focal loss. These experiments provide evidence of CFL's superiority across balanced, imbalanced, or highly imbalanced datasets.
Our contributions are:
• We propose a novel loss function, the cyclical focal loss, that begins and ends training with a focus on the confident training samples and trains in the middle epochs with a focus on the hard training samples.
• We demonstrate that the cyclical focal loss improves the performance of the trained network over training with the standard cross-entropy softmax and the focal loss for CIFAR-10/100 (with balanced, imbalanced, or limited training data), ImageNet, and the open long-tailed recognition problem.
• Therefore, our experiments demonstrate that cyclical focal loss is a more universal loss function that often performs better than, and can replace, the standard cross-entropy softmax loss or focal loss.
• Our implementation does not increase training or inference time. We describe our implementation in the Appendix, and our fully reproducible training code is available at https://github.com/lnsmith54/CFL.
Related Work
Focal loss function: The focal loss function was first introduced for object detection [11]. These authors discovered that extreme foreground-background imbalance was the cause of the inferior performance of 1-stage detectors and showed that their proposed focal loss function improved the performance of these detectors. The focal loss heavily weights less confident training samples, as shown in Figure 1. After the introduction of focal loss, others have leveraged the focal loss in other situations of imbalance [16,10,22,14,26].
Focal loss was applied to multi-label classification by Ridnik et al. [16], where there is a positive label/negative label imbalance; that is, in a given image, most of the labels are not present, so the negative label examples are much more prevalent than the positive labels. These authors proposed a novel variation of focal loss, which they called "asymmetric loss" (ASL), that gave improved performance over the original focal loss for multi-label classification. In ASL, the positive and negative terms of the focal loss are separated, and each has its own focusing parameter.
General cyclical training: General cyclical training was defined [19] as any collection of settings in machine learning where the training starts and ends with "easy training" and the "hard training" happens during the middle epochs. General cyclical training can be considered a combination of curriculum learning [2] in the early epochs with fine-tuning toward the end of training, plus training on the full problem space during the middle epochs. It has been shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training [6,5]. It is best to start a neural network's training with highly confident samples to encourage the network's weight updates in an optimal direction. As the training proceeds, increasing the variation and range of the training data improves the generalization of the model. While this first part of a network's training can use a curriculum learning approach, the last epochs of the training should fine-tune the model for the desired data and task in order to encourage the network to learn the more complex patterns [25] from the most confident training samples.
Adaptive hyper-parameters during training have become common. Cyclical learning rates [18,20,13] have been accepted by the deep learning community. In addition, the commonly used learning rate warmup [7] and stochastic gradient descent with restarts (SGDR) [13] are essentially equivalent to cyclical learning rates. Furthermore, the idea of adaptive hyper-parameters has been extended to other hyper-parameters, such as weight decay [27,3,15,9] and batch sizes [21]. In this paper, we extend the cyclical training principle to loss functions by proposing the cyclical focal loss.
Focal Loss
Following Lin et al. [11], we start with the cross-entropy loss for binary classification,

CE(p, y) = −log(p) if y = 1, and −log(1 − p) otherwise,    (3)

where y specifies the ground-truth class and p ∈ [0, 1] is the model's estimated probability. For simplicity, these authors define

p_t = p if y = 1, and 1 − p otherwise,    (4)

so that CE can be written as CE(p, y) = CE(p_t) = −log(p_t).
Using this notation, the focal loss function was defined as

FL(p_t) = −(1 − p_t)^γ log(p_t),    (5)

where γ ≥ 0 is a tunable hyper-parameter set by the user. As seen in Figure 1, the weighting factor reduces the loss contribution from confident predictions, which increases the importance of correcting misclassified samples. Note that when γ = 0, the focal loss is equivalent to the cross-entropy loss. Equation 5 is the same as Equation 1, but we rename γ as γ_lc for clarity in the rest of this paper.
Recently, Ridnik et al. [16] generalized the focal loss to improve on multi-label classification, in which the number of negative labels in a training sample greatly exceeds the number of positive labels (i.e., imbalanced data). These authors propose decoupling the weighting factor between the positive and negative labels by having separate γ hyper-parameters for the positive and negative parts of the loss. Therefore, they define an asymmetric loss (ASL) as

ASL(p, y) = y L_+ + (1 − y) L_−,    (6)

where

L_+ = −(1 − p)^γ+ log(p),  L_− = −p^γ− log(1 − p).    (7)

The authors replace the one user-defined hyper-parameter γ with the two hyper-parameters γ+ and γ−. They note that γ− > γ+ and, based on their experiments, suggest setting γ+ = 0 so that the positive examples use the cross-entropy loss. In this case, ASL differs from cross-entropy only in the weighting of the negative labels.
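As a hedged sketch of this asymmetric focusing idea for a single binary label (our simplification; the refinements of Ref. [16], such as probability shifting, are not reproduced here):

```python
# Asymmetric focusing with separate positive/negative exponents.
import torch

def asl_binary(p, y, gamma_pos=0.0, gamma_neg=4.0):
    l_pos = ((1.0 - p) ** gamma_pos) * torch.log(p.clamp_min(1e-8))
    l_neg = (p ** gamma_neg) * torch.log((1.0 - p).clamp_min(1e-8))
    return -(y * l_pos + (1.0 - y) * l_neg).mean()

p = torch.rand(16)                      # predicted probabilities
y = torch.randint(0, 2, (16,)).float()  # binary labels
print(asl_binary(p, y))
```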
Next, we define cyclical versions of both the focal loss and the asymmetric focal loss.
Cyclical Focal Loss
Intuitively, the goal for the cyclical focal loss is to combine a focus on confident predictions in the early epochs with an increasing focus on misclassified, hard samples during the middle epochs. We can accomplish this by including both the loss terms in Equations 1 and 2 with any reasonable schedule between them. For simplicity, we use a linear schedule in this paper.
That is, we define a parameter ξ that varies with the training epoch as

ξ = 1 − f_c e_i / e_n  if f_c e_i ≤ e_n,  and  ξ = (f_c e_i / e_n − 1) / (f_c − 1)  otherwise,    (8)

where e_i corresponds to the current training epoch number, e_n corresponds to the total number of training epochs, and f_c is a cyclical factor introduced next.
Here, the cyclical factor f_c ≥ 1 provides variability to the cyclical schedule. If f_c = 1, ξ goes from 1 at the beginning of the training to 0 at the end. If f_c = 2, the cycle resembles an upside-down equilateral triangle, where ξ goes from 1 to 0 in the first half of the epochs and from 0 to 1 in the second half. If f_c = 4, ξ goes from 1 to 0 a quarter of the way through the training and rises linearly from 0 to 1 over the remaining three-quarters of the epochs.
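An illustrative implementation of the schedule in Equation 8 is shown below; xi() is our hypothetical helper name.

```python
# Linear cyclical schedule (8) for xi.
def xi(e_i, e_n, f_c=4.0):
    r = f_c * e_i / e_n
    return 1.0 - r if r <= 1.0 else (r - 1.0) / (f_c - 1.0)

# f_c = 4: xi falls from 1 to 0 over the first quarter of training,
# then rises linearly back to 1 over the remaining three quarters.
print([round(xi(e, 100), 2) for e in (0, 25, 50, 100)])  # 1.0, 0.0, 0.33, 1.0
```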
Integrating Equations 1 and 2 with Equation 8, we define the cyclical focal loss as

L_cfl = ξ L_hc + (1 − ξ) L_lc.    (9)

Our cyclical focal loss introduces two new hyper-parameters, f_c and γ_hc, but we show in our experiments that this is not a hardship. Specifically, γ_hc = 2 or 3 generally works well, and the results are fairly insensitive to the value of f_c. We used f_c = 4 throughout our experiments.
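Putting the pieces together, a compact PyTorch sketch of Equation 9 follows. The hyper-parameter defaults mirror the values discussed in the text, but the code is our illustration rather than the authors' released implementation.

```python
# Cyclical focal loss (9): xi-weighted blend of Eqs. (2) and (1).
import torch
import torch.nn.functional as F

def cyclical_focal_loss(logits, target, epoch, num_epochs,
                        gamma_hc=3.0, gamma_lc=2.0, f_c=4.0):
    r = f_c * epoch / num_epochs
    xi = 1.0 - r if r <= 1.0 else (r - 1.0) / (f_c - 1.0)
    logp = F.log_softmax(logits, dim=-1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)
    p_t = logp_t.exp()
    l_hc = -((1.0 + p_t) ** gamma_hc) * logp_t      # Eq. (2)
    l_lc = -((1.0 - p_t) ** gamma_lc) * logp_t      # Eq. (1)
    return (xi * l_hc + (1.0 - xi) * l_lc).mean()

loss = cyclical_focal_loss(torch.randn(8, 10), torch.randint(0, 10, (8,)),
                           epoch=5, num_epochs=200)
print(loss)
```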
Similarly, we also tested a cyclical version of the asymmetric loss (defined in Equation 6). Our cyclical asymmetric focal loss is defined as

L_casl = ξ L_hc + (1 − ξ) ASL(p, y),    (10)

where L_+ and L_− entering ASL are defined in Equation 7. While our cyclical asymmetric focal loss is not symmetric between the high-confidence and low-confidence terms, it does incorporate our goal to train on the confident samples early and late in the training. In our experiments with this cyclical asymmetric focal loss, we used γ− = 4 and γ+ = 0, as recommended in the original paper for ASL.
We mention that our implementation of the cyclical loss functions is simple and requires only a few lines of code. Our implementation does not increase training time. More details on the implementation are provided in the Appendix, and our training code is available at https://github.com/lnsmith54/CFL.
Experiments
We now validate the performance of both the cyclical focal loss (CFL) and the cyclical asymmetric loss (CASL).
In this Section we provide results from experiments comparing our loss functions to cross-entropy softmax (CE), focal loss (FL), and asymmetric focal loss (ASL). We also provide experimental results to better understand the new hyper-parameters γ_hc and the cyclical factor f_c. We demonstrate that the performance with the cyclical loss functions is comparable to or better than cross-entropy and focal loss for a variety of datasets, whether the training datasets are balanced or imbalanced.
Model and hyper-parameters: Details of the implementations and hyper-parameters are given in the Appendix. In a majority of our experiments we used a TResNet [17], but we also show results with the ResNet-50 and EfficientNet_B0 [23] architectures. We modified the implementation of the asymmetric focal loss [16] to include a cyclical focal loss option. The asymmetric focal loss provided the flexibility to compare cyclical versions to both the original and asymmetric focal losses.
For the open long-tailed recognition experiments [12], we modified the implementation provided by the authors to use focal loss and our cyclical focal loss.
For all our other experiments and datasets, we used PyTorch Image Models (TIMM) [24] as the framework in our experiments. This framework provides the models used in our experiments. To ease replication of our work, code with modifications to this framework is provided at https://github.com/lnsmith54/CFL.
CIFAR-10 and CIFAR-100
This Section evaluates CFL's and CASL's test accuracy results for both CIFAR-10 and CIFAR-100. Table 1 shows the test accuracies comparing cross-entropy softmax (CE), focal loss (FL) [11], asymmetric focal loss (ASL) [16], cyclical focal loss (CFL), and cyclical asymmetric focal loss (CASL). For CIFAR-10 and CIFAR-100, each entry of test accuracy in this Table is the mean and the standard deviation of four runs with the same hyper-parameters and loss function.
The first row of Table 1 gives the results for CIFAR-10 with the TResNet_m architecture. In the second row are the results for a ResNet-50 model. The third row presents the results with the EfficientNet_B0 architecture. The fourth row of this Table gives the results for CIFAR-100 with the TResNet_m architecture. It is clear from the third column of this Table that using the original focal loss on these datasets reduces the performance of the network for CIFAR-10 and CIFAR-100 in all four cases. However, using the asymmetric focal loss (see column 4) generally regains much of the lost performance. This is to be expected because γ + = 0 is the same as the cross entropy loss for the positive labels. We note that the implication is that the loss contribution from the error or negative labels is minor for CIFAR-10 and CIFAR-100 classification.
On the other hand, the fifth and sixth columns of Table 1 show that the test accuracy when training with our cyclical focal loss and the cyclical asymmetric focal loss is consistently better than when training with the other loss functions. Note that the results for cross-entropy softmax with EfficientNet_B0 are higher than the performance results for TResNet_m and ResNet-50, and that using our cyclical focal loss functions gives only comparable results there. This implies that in situations where near-optimal performance is already being reached (i.e., with optimal hyper-parameters), the cyclical focal loss might not improve on the network's performance, but it also does not harm it. On the other hand, throughout all of our experiments, our cyclical focal loss functions provided near-optimal performance with less need for a precise search for optimal hyper-parameters. That is, we found that training with cyclical focal loss reduced the need for tedious hyper-parameter searches. Figure 2 shows the test accuracies for CIFAR-10 with TResNet_m over a range of values for γ_hc and f_c for CFL and CASL. For CFL, the best value for γ_hc is 2, but using γ_hc = 3 is within the precision of these tests (i.e., approximately ±0.1). For CASL, the best value for γ_hc is 1, but values of 2 or 3 are within the precision of our experiments. The test accuracies changed little over a range of values of f_c for CFL and CASL (not shown), and we used a value of f_c = 4 in our experiments. A similar plot was obtained for CIFAR-100 and is not shown.
ImageNet
This Section contains the test accuracy results for training a TResNet model on ImageNet. ImageNet is a large scale dataset, and the results here show the generality of our cyclical loss functions. The sixth row of Table 1 contains the results of training with the same five loss functions as discussed previously. For ImageNet, each entry is the mean and the standard deviation of two runs with the same hyper-parameters and loss function.
Once again, using the original focal loss (FL) substantially reduces the performance of the network. In this case, using the asymmetric focal loss (ASL) regains only part of the lost performance. We expect that the loss contribution from the error or negative labels might be hurting the classification performance and that a smaller γ− would help (i.e., γ− < 4). In spite of this degradation of performance, the test accuracy when training with our cyclical focal loss and the cyclical asymmetric focal loss is consistently better than with the cross-entropy loss, the focal loss, or the asymmetric loss. Figure 3 shows the test accuracy curves over the course of training the network for cross-entropy softmax (CE) and our cyclical asymmetric focal loss (CASL). The two curves look very similar, but it is notable that training with CASL provides a slightly faster learning curve early in the training, which supports our intuition that cyclical focal loss helps the learning in the early epochs. In addition, the final performance is better than CE, as is confirmed in the sixth row of Table 1. Figure 4 shows the test accuracies for ImageNet over a range of values for γ_hc and f_c for CFL and CASL. For training with CFL the best value for γ_hc is 3, and for CASL the best value is 2, but the results for 2 and 3 both fall within the precision of our experiments (i.e., ±0.1). While CFL and CASL do introduce the two new hyper-parameters γ_hc and f_c, we found that the results were insensitive to f_c (by default, we used f_c = 4) and that γ_hc = 2 or 3 generally worked best, which reduces a grid search to only two tests for γ_hc.
Imbalanced Training Set
This Section evaluates test accuracy when training with cyclical focal loss and cyclical asymmetric focal loss on balanced and imbalanced versions of the CIFAR-10 and CIFAR-100 datasets when there are only 4,000 training samples. For the balanced version, we take the first 4,000 training samples in the CIFAR-10 dataset, which gives counts for the ten classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck) of 396, 366, 420, 384, 422, 393, 414, 385, 407, and 413, respectively. While this is not perfectly balanced, due to the randomness in the order of appearance of the training samples, we consider this our balanced dataset. Similarly, we take the first 4,000 training samples in the CIFAR-100 dataset as our balanced dataset, although the number of samples per class is not exactly 40.
For the 5/3 imbalanced version of CIFAR-10, we provide a data sampler that specifies which samples to use so that we get a count per class for the ten classes of 490, 470, 450, 430, 410, 390, 370, 350, 330, and 310, respectively. Specifically, we take the first 490 examples of the first class (i.e., airplane) from the training dataset to be the samples for this class. A similar procedure is used for the other nine classes. The total number of training samples is 4,000, which is the same number as in the balanced version. For CIFAR-100, it is a bit more complicated: the 5/3 sampler starts at 50 samples per class and is reduced by round(i/5), where i is the class label, with a slight adjustment at the end so that there are exactly 4,000 training samples. In the spirit of reproducibility, all our samplers are provided at https://github.com/lnsmith54/CFL; a sketch of the count computation follows below.

Figure 4 (caption): This figure shows the impact of γ hc and f c when using cyclical focal loss (CFL) or cyclical asymmetric focal loss (CASL) for training ImageNet with the TResNet_m architecture. The best value for γ hc is 2 for CASL or 3 for CFL, and the accuracy rapidly drops for higher values. However, the accuracy is relatively stable over a range of settings for the f c parameter.
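As a concrete illustration of the arithmetic above (this is not the authors' sampler code, which lives in the repository), a minimal Python sketch that computes the per-class counts for the 5/3 and 6/2 CIFAR-100 samplers might look like this:

import random  # not needed for the counts; samplers select the first N per class

def imbalanced_counts(n_classes=100, start=50, divisor=5.0, total=4000):
    # Count for class i: start - round(i / divisor), per the description above.
    counts = [start - round(i / divisor) for i in range(n_classes)]
    # The paper applies "a slight adjustment at the end" so the counts sum to
    # exactly 4,000; the exact rule is not given, so here we simply trim one
    # sample per class until the total matches.
    i = 0
    while sum(counts) > total:
        counts[i % n_classes] -= 1
        i += 1
    return counts

print(sum(imbalanced_counts()))                        # 4000 (5/3 sampler)
print(sum(imbalanced_counts(start=60, divisor=2.5)))   # 4000 (6/2 sampler)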
For the 6/2 imbalanced version of CIFAR-10, we provide a data sampler that specifies which samples to use so that we get a count per class for the ten classes of 580, 540, 500, 460, 420, 380, 340, 300, 260, and 220, respectively. The sampling procedure for this 6/2 imbalanced version is the same as that used for the 5/3 imbalanced version. For CIFAR-100, the 6/2 sampler starts at 60 samples per class and is reduced by round(i/2.5), with a slight adjustment at the end so that there are exactly 4,000 training samples.

Table 2 reports the test accuracies comparing cross-entropy softmax (CE), focal loss (FL) [11], asymmetric focal loss (ASL) [16], cyclical focal loss (CFL), and cyclical asymmetric focal loss (CASL). As expected, the performance drops significantly when training with only a fraction of the CIFAR-10 or CIFAR-100 datasets. For the balanced version, the focal loss and asymmetric focal loss test accuracies are lower than the accuracies from training with cross-entropy softmax. On the other hand, training with the cyclical versions of focal loss improves on cross-entropy softmax even more clearly here than when training with the full dataset. This provides some evidence that cyclical focal loss is particularly beneficial in few-shot or medium-shot scenarios, when there is only a limited amount of labeled training data.
The results for training with the 5/3 imbalanced version are in the third column of Table 2, and they show a reduction in overall performance for cross-entropy softmax. The performance of focal loss deteriorates slightly and remains inferior to cross-entropy softmax. Training with the asymmetric focal loss, however, improves on its balanced-dataset performance for CIFAR-10 and stays steady for CIFAR-100. Still, the performance of our CFL and CASL is superior to that obtained by training with the other loss functions.
The fourth column of Table 2 provides the results for training with the 6/2 imbalanced version. With the greater imbalance in the dataset, the performance of the cross-entropy loss continues to decline. This Table shows that focal loss is still inferior to cross-entropy, while the ASL approach provides performance comparable to cross-entropy for CIFAR-10 but inferior to it for CIFAR-100. However, the performance of both CFL and CASL is significantly higher than the results from training with the other loss functions.
Open Long Tailed Recognition
Liu, et al. [12] defined the Open Long-Tailed Recognition (OLTR) challenge as learning from a highly imbalanced dataset, with many-shot (≥ 100), medium-shot (between 20 and 100), and few-shot (≤ 20) training samples per class, plus an open-world setting in which OLTR must handle previously unseen classes.
One expects that adding focal loss (FL) or the asymmetrical focal loss (ASL) would improve the results for the few-shot case, which was mostly confirmed in our experiments. The question we answer here is what impact does training with the cyclical focal loss (CFL) or the cyclical asymmetric focal loss (CASL) have on these highly-imbalanced ImageNet-LT and Places-LT datasets.
Here we present the results for ImageNet-LT in Table 3 and Places-LT in Table 4, using code modified from that provided by the authors of [12]. Our revised code is available at https://github.com/lnsmith54/CFL.
In Table 3 and Table 4, performance is evaluated under both the closed-set (the test set contains no unknown classes) and open-set (the test set contains unknown classes) settings to highlight their differences. Under each setting, in addition to the overall top-1 classification accuracy, these Tables list the accuracy of three disjoint subsets: many-shot classes (classes with over 100 training samples), medium-shot classes (classes with between 20 and 100 training samples), and few-shot classes (classes with under 20 training samples). For the open-set setting, the F-measure is also reported for a balanced treatment of precision and recall following [1].

Table 3 (caption): ImageNet-LT: Comparison of the top-1 test classification accuracies on ImageNet-LT of the original algorithm to adding cyclical focal loss. The results presented here for OLTR [12] are from the original paper. The medium-shot and few-shot performance when using CFL or CASL exceeds the results from OLTR.

Table 4 (caption): Places-LT: Comparison of the top-1 test classification accuracies on Places-LT of the original algorithm to adding cyclical focal loss. The results presented here for OLTR [12] are from the original paper. The medium-shot, few-shot, and average performance when using CFL or CASL exceeds the results from OLTR.

The second rows of Table 3 (for ImageNet-LT) and Table 4 (for Places-LT) show the results for OLTR from the original paper [12]. The third and fourth rows show that including either focal loss or the asymmetric focal loss often improves the results for the medium-shot and few-shot cases, with a small degradation of the many-shot performance.
The fifth and sixth rows of both Tables show the performance results of using our CFL or CASL loss functions in place of cross-entropy softmax in the OLTR methodology. It is noteworthy that both cyclical focal loss and the cyclical asymmetric focal loss improve the results for the medium-shot and few-shot cases relative to the focal loss or the asymmetric focal loss (with a small degradation of the many-shot performance). The best overall accuracy in the closed-set setting for both datasets was obtained by using the CFL and CASL losses. In the open-set setting, the best accuracies for the medium-shot and few-shot cases in both datasets were obtained when training with our CFL and CASL loss functions and, for Places-LT, these loss functions obtained the highest F-measure.
These experiments demonstrate the superiority of cyclical focal loss and cyclical asymmetric focal loss to cross-entropy softmax and focal loss when the input data is highly imbalanced.
Conclusions
In this work, we introduce two novel loss functions: the cyclical focal loss (CFL) and the cyclical asymmetric focal loss (CASL). These loss functions add a new loss term to the focal loss that more heavily weights confident training samples in the first epochs of a neural network's training. As the training progresses, the focal loss term that weights less confident samples dominates the loss. In the final epochs, the loss function returns to fine-tuning on the confident samples learned over the course of the training.
Our extensive empirical analysis demonstrates that CFL and CASL provide comparable or superior performance to cross-entropy softmax, focal loss, and asymmetric focal loss across balanced, imbalanced, and long-tailed datasets. We did not find CFL or CASL harmful to performance relative to training with cross-entropy softmax or focal loss. Our experiments provide evidence that our cyclical focal loss and cyclical asymmetric focal loss are more universal loss functions, applicable over more scenarios and applications than cross-entropy softmax or the focal loss. Therefore, they can be used as drop-in replacements for cross-entropy softmax and are especially beneficial when there is a limited number of labeled training samples or an imbalance in the number of samples in each class.
A Software and Implementation
We modified the implementation of the asymmetric focal loss [16] to include a cyclical focal loss option. The original code is available at https://github.com/Alibaba-MIIL/ASL. The asymmetric focal loss provided the flexibility to compare cyclical versions to both the original and the asymmetric focal losses. Specifically, we modified the file from the original asymmetric focal loss at src/loss_functions/losses.py and made a copy of the class ASLSingleLabel that we named Cyclical_FocalLoss, then added code to make the loss weighting cyclical over the training epochs; a sketch is given below. The full revised code is available at https://github.com/lnsmith54/CFL, including timm/loss/asl_focal_loss.py.
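A minimal sketch of the core idea follows. This is not the repository code: it assumes the linear ξ schedule and the (1 + p_t)^γhc high-confidence weighting described in this paper, and it omits the asymmetric γ+/γ− handling and label smoothing of the full Cyclical_FocalLoss class.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CyclicalFocalLossSketch(nn.Module):
    # Hedged sketch: blends a high-confidence term with the focal term
    # via a linearly cycling weight xi(epoch).
    def __init__(self, gamma=2.0, gamma_hc=2.0, fc=4.0, epochs=200):
        super().__init__()
        self.gamma, self.gamma_hc, self.fc, self.epochs = gamma, gamma_hc, fc, epochs

    def forward(self, logits, target, epoch):
        logpt = F.log_softmax(logits, dim=-1).gather(1, target.unsqueeze(1)).squeeze(1)
        pt = logpt.exp()                                 # probability of the true class
        l_fl = -((1.0 - pt) ** self.gamma) * logpt       # focal term (less-confident samples)
        l_hc = -((1.0 + pt) ** self.gamma_hc) * logpt    # high-confidence term
        frac = self.fc * epoch / self.epochs
        # xi = 1 at epoch 0, falls to 0 at epochs/fc, then rises back to 1
        xi = 1.0 - frac if frac <= 1.0 else (frac - 1.0) / (self.fc - 1.0)
        return (xi * l_hc + (1.0 - xi) * l_fl).mean()

With f c = 4, ξ starts at 1, reaches 0 at one quarter of training, and climbs back to 1 by the final epoch, matching the heavy early and late weighting of confident samples described above.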
In addition, we used PyTorch Image Models (timm) [24] as a framework in our experiments on CIFAR and ImageNet. This framework provides the models and downloads the data used in our experiments. The original code is available at https://github.com/rwightman/pytorch-image-models. The file train.py was modified by inserting several new input parameters via calls to add_argument and by modifying the loss setup to call our focal loss code for FL, ASL, CFL, or CASL. Additional modifications were made to add a sampler that uses only part of a dataset, which was used in our experiments with limited labeled training data.
In summary, a number of modifications were made in order to read in hyper-parameters, to call the cyclical focal loss, and to expedite running our experiments. The full revised train.py is available as part of the code base at https://github.com/lnsmith54/CFL. We also made use of the code provided for the open long-tailed recognition experiments [12]; the original code is available at https://github.com/zhmiao/OpenLongTailRecognition-OLTR. We copied the same loss file as described above for CFL to its loss/ folder, created configuration files that use FL, ASL, CFL, or CASL instead of the default softmax loss, and modified main.py to set γ hc for a given experiment more easily. The full revised code is available at https://github.com/lnsmith54/CFL.
B Command Lines and Hyper-parameters
In the spirit of easy replication, it is important to know the values of the hyper-parameters used. Table 5 specifies the batch sizes, learning rates, and weight decay values used for the results in Table 1 and Table 2 in the main body of the paper.
The PyTorch Image Models implementation provides for command-line input of a large number of hyper-parameters. We specified the loss-related options on the command line, and all other hyper-parameters used the default settings provided by the framework.
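An illustrative CIFAR-10 invocation is shown below. This is a sketch only: the model name and flag spellings are placeholders rather than the exact values used, and only the loss-related options follow the new parameters introduced in the next paragraph.

CUDA_VISIBLE_DEVICES=0 python train.py data/cifar10 --dataset torch/cifar10 --model tresnet_m --focal-loss asym-cyclical --gamma-hc 2 --gamma-pos 0 --gamma-neg 4 --cyclical-factor 4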
Default values for hyper-parameters not specified on the command line were used (see the software at https://github.com/lnsmith54/CFL or the original code at https://github.com/rwightman/pytorch-image-models for the default values). The new hyper-parameters are: focal_loss, gamma_hc, gamma_pos, gamma_neg, and cyclical_factor. The focal_loss parameter specifies which loss to use: if set to "asym-cyclical" it calls the cyclical asymmetric focal loss function; if set to "asym" it calls the asymmetric focal loss; and if set to anything else, cross-entropy softmax is called. The other parameters (gamma_hc, gamma_pos, gamma_neg, and cyclical_factor) correspond to γ hc , γ + , γ − , and f c , respectively.
For CIFAR-100, the command line was the same as the one for CIFAR-10 except that cifar10 is replaced with cifar100:

CUDA_VISIBLE_DEVICES=0 python train.py data/cifar100 --dataset torch/cifar100

Table 5 reflects any changes to the hyper-parameters.
Tau antibody isotype induces differential effects following passive immunisation of tau transgenic mice
One of the main pathological hallmarks of Alzheimer’s disease (AD) is the intraneuronal accumulation of hyperphosphorylated tau. Passive immunotherapy is a promising strategy for the treatment of AD and there are currently a number of tau-specific monoclonal antibodies in clinical trials. A proposed mechanism of action is to engage and clear extracellular, pathogenic forms of tau. This process has been shown in vitro to be facilitated by microglial phagocytosis through interactions between the antibody-tau complex and microglial Fc-receptors. As this interaction is mediated by the conformation of the antibody's Fc domain, this suggests that the antibody isotype may affect the microglial phagocytosis and clearance of tau, and hence, the overall efficacy of tau antibodies. We therefore aimed to directly compare the efficacy of the tau-specific antibody, RN2N, cloned into a murine IgG1/κ framework, which has low affinity Fc-receptor binding, to that cloned into a murine IgG2a/κ framework, which has high affinity Fc-receptor binding. Our results demonstrate, for RN2N, that although enhanced microglial activation via the IgG2a/κ isotype increased extracellular tau phagocytosis in vitro, the IgG1/κ isoform demonstrated enhanced ability to reduce tau pathology and microgliosis following passive immunisation of the P301L tau transgenic pR5 mouse model.
antibodies have been demonstrated to facilitate microglial phagocytosis of extracellular tau in vitro [11], and this process has been shown to require Fcγ-receptor binding and functional lysosomes [12].
Fc receptor binding is mediated by the conformation of the Fc domain of an antibody. Humans have four IgG subclasses (IgG1, IgG2, IgG3 and IgG4); mice have five, which differ in their nomenclature (IgG1, IgG2a, IgG2b, IgG2c and IgG3). These subclasses mediate effector functions differently due to variable specificity and affinity for Fc receptors (FcR), including the intracellular Fc receptor TRIM21, the neonatal Fc receptor and the family of Fcγ receptors (FcγRI, FcγRIII, FcγRIV and FcγRIIb) [13]. For example, murine IgG1 only binds FcγRII and FcγRIII with low affinity, whereas murine IgG2a binds to all receptors in the following order of affinity: FcγRI > FcγRIV > FcγRIII > FcγRIIb [13]. Human IgG1 is the most similar to murine IgG2a, as both have the strongest binding to FcγRs and therefore the greatest ability to activate microglia and induce phagocytosis of the antibody-antigen complex. Human IgG4, on the other hand, is most similar to murine IgG1, as both display the weakest ability to interact with FcγRs and are poor activators of microglia. This was demonstrated by Adolfsson et al., who directly compared an anti-Aβ antibody, MABT, as a human IgG1 isotype and a human IgG4 isotype, containing the same antigen-binding variable domains and with equal binding to Aβ. They showed reduced activation of stress-activated p38MAPK (p38 mitogen-activated protein kinase) in microglia and less release of the pro-inflammatory cytokine TNFα following treatment with MABT IgG4, compared with the IgG1 isotype [14]. This suggests that, whilst a tau-specific monoclonal antibody in a high effector-function isotype may induce the greatest amount of tau phagocytosis, the subsequent release of pro-inflammatory cytokines may be deleterious in vivo.
We therefore aimed to investigate if the IgG isotype specifically affects the therapeutic efficacy of an anti-tau antibody. To achieve this, we cloned the variable domains of our previously characterised RN2N antibody, which is specific for 2 N tau isoforms [15], into both murine IgG1/κ and IgG2a/κ backbones and directly compared their ability to reduce tau. Here we show that despite RN2N IgG2a demonstrating an enhanced ability to clear tau in vitro, RN2N IgG1 demonstrated a superior ability to reduce tau inclusions and microgliosis following passive immunization of tau transgenic pR5 mice.
Antibody generation
RN2N is a 2 N-tau specific antibody raised against the tau peptide TEIPEGITAEEAGI (aa 84-97 of the longest human tau isoform, tau441) [15]. The variable light and variable heavy chains of RN2N were previously cloned into a mouse IgG2a/κ framework [15]. For this study, the variable domains were also cloned into a mouse IgG1/κ framework (mAbXpress vectors kindly provided by the Queensland node of the National Biologics Facility) as previously described [16]. IgG was purified by affinity chromatography then buffer exchanged into 1 × phosphate-buffered saline (PBS) (Protein Expression Facility, The University of Queensland). The concentration of the purified RN2N was determined using a NanoDrop 2000 (Thermo Scientific).
Recombinant human tau
A cDNA encoding full-length human tau was cloned into the pET-DEST42 vector (Thermo Fisher Scientific) in frame with the C-terminal His6 and V5 tag. Plasmids were transformed into One Shot BL21 bacterial cells (Thermo Fisher Scientific) and recombinant protein expression was induced with 1 mM IPTG. Protein purification was conducted following the protocol outlined in [17].
Surface plasmon resonance
Surface plasmon resonance measurements were conducted at the Monash Fragment Platform, Monash University, using the Biacore S200 biosensor (GE Healthcare). Biotinylated RN2N was captured on a streptavidin-coated CM5 chip (GE Healthcare). For biotinylation, RN2N IgG1 (29.5 μM) in PBS was added in a 1:1 ratio to EZ-Link NHS-LC-LC-biotin (Thermo Fisher Scientific) and incubated at 25 °C for 1 h. The antibody was separated from free, unconjugated biotin by size-exclusion chromatography on a Superdex 200 10/300 GL (GE Healthcare) column equilibrated in PBS. Streptavidin was immobilized on the CM5 chip using amine coupling at 37 °C. Antibody was captured at 25 °C, using a flow rate of 10 µL/min in PBS. Tau binding experiments were run using single-cycle kinetics at 25 °C with the running buffer (12 mM Na2HPO4, 287 mM NaCl, 2.7 mM KCl, 1.8 mM KH2PO4, 0.05% Tween-20, pH 7.4). Tau was injected for 120 s at a flow rate of 40 µL/min with a dissociation time of 600 s, using 8 concentrations of tau (serial dilutions from 0.0128 to 1,000 nM). The data were processed using Biacore S200 Evaluation Software Version 1.0, double referenced against blank injections of buffer and fit to a Steady State Affinity model using report points 4 s before the injection end, with a 5 s window.
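For context, and as a minimal sketch rather than the Biacore evaluation software's actual routine, fitting a steady-state affinity model amounts to fitting the 1:1 binding isotherm Req = Rmax · C / (KD + C) to the plateau responses. The response values below are hypothetical; the concentration series follows the dilution range stated above.

import numpy as np
from scipy.optimize import curve_fit

def binding_isotherm(conc_nM, kd_nM, rmax):
    # 1:1 steady-state response: Req = Rmax * C / (KD + C)
    return rmax * conc_nM / (kd_nM + conc_nM)

# Eight tau concentrations (nM) spanning 0.0128 to 1,000; responses (RU) are illustrative
conc = np.array([0.0128, 0.064, 0.32, 1.6, 8.0, 40.0, 200.0, 1000.0])
resp = np.array([0.01, 0.05, 0.2, 1.0, 4.0, 12.0, 25.0, 34.0])

(kd, rmax), _ = curve_fit(binding_isotherm, conc, resp, p0=(400.0, 40.0))
print(f"KD = {kd:.0f} nM, Rmax = {rmax:.1f} RU")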
BV2 phagocytosis assay
BV2 cells were cultured in DMEM supplemented with 5% heat-inactivated fetal calf serum. 24 h prior to treatment, cells were plated at 1.25 × 10^5 cells/well in a 12-well plate and allowed to attach overnight. For microscopy, BV2 cells were plated on Poly-D-Lysine (Sigma)-coated glass coverslips 24 h prior to treatment. RN2N IgG and recombinant hTau were conjugated to Alexa Fluor 488 (AF488) (Thermo Fisher) and pHrodo Red (Thermo Fisher), respectively, according to the manufacturer's instructions. hTau-pHrodo (50 nM) was incubated either with or without RN2N IgG-AF488 (50 nM) in Opti-MEM (Thermo Fisher) at room temperature for 1 h. For flow cytometric analysis, proteins were added to plated cells and incubated for 2 h at 37 °C; cells were then trypsinized and resuspended in Hanks' balanced salt solution (Life Technologies) containing 1% FBS and 1 mM EDTA. Excitation at 488 nm and 560 nm (BD LSR II Analyser) was conducted and the data were analysed using FlowJo v10 software (Tree Star). The integrated fluorescence intensity (% parent × mean 488 nm intensity) was then calculated. For microscopy, proteins were added to plated cells and incubated for 2 h at 37 °C; cells were then fixed with 2% paraformaldehyde (PFA) (Sigma), washed with PBS, and counterstained with DAPI (Dako). Cells were washed again with PBS and then mounted onto microscope slides. Imaging was performed using the Axio Imager Z2 (Zeiss).
sqRT-PCR analysis
For qPCR analysis, proteins were added to plated cells and incubated for either 2 or 8 h at 37 °C. Cells were then collected and RNA isolated as described below. Total RNA was isolated from treated BV2 cells using TRIzol reagent (Life Technologies), following the manufacturer's protocol. Reverse transcription of 200 µg total RNA was performed using SuperScript III reverse transcriptase (Life Technologies) and random hexamer primers (Life Technologies). Semi-quantitative RT-PCR was performed using 1 µL of the resulting cDNA in a 5 µL total volume containing SsoAdvanced SYBR Green (Bio-Rad) and murine primers (IDT) targeting the genes of interest. For amplification and recording, a CFX384 Touch machine (Bio-Rad) was used, and the results were evaluated using the manufacturer's software. Amplification specificity was confirmed by melting curve analysis, with quantification performed using the ΔΔCt method.
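The ΔΔCt method normalizes the target gene's threshold cycle to a reference gene and then to the control condition, reporting a fold change of 2^−ΔΔCt. A minimal sketch of this arithmetic, with illustrative Ct values rather than study data, is:

def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # Relative expression by the 2^-ddCt method: ct_* are mean threshold-cycle
    # values for the gene of interest and a reference (housekeeping) gene,
    # in treated and control samples.
    dct_treated = ct_target - ct_ref            # normalise to the reference gene
    dct_control = ct_target_ctrl - ct_ref_ctrl
    ddct = dct_treated - dct_control            # normalise to the control condition
    return 2.0 ** (-ddct)                       # fold change versus control

# Illustrative numbers only (not measured values)
print(ddct_fold_change(24.1, 18.0, 26.0, 18.1))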
Mice
All animal experiments were conducted under the guidelines of the Australian Code of Practice for the Care and Use of Animals for Scientific Purposes and approved by the University of Queensland Animal Ethics Committee (QBI/412/14/NHMRC; QBI/027/12/NHMRC). pR5 mice express 2N4R tau with the P301L mutation under the control of the mThy.1.2 promoter [18].
In vivo imaging
Six-month-old pR5 mice were randomly assigned to one of the following groups: no antibody, RN2N IgG1, or RN2N IgG2a. 24 h prior to treatment, animals were anesthetized with ketamine (100 mg/kg) and xylazine (10 mg/kg), their whole body shaved, and residual hair removed using hair removal cream. Immediately prior to treatment, all animals were anesthetized again and prepared as previously described [19]. For the antibody groups, 3 nmol of either RN2N IgG1 or RN2N IgG2a conjugated to AlexaFluor 647 (Life Technologies) was injected retro-orbitally. Mice were kept under 1-2% isoflurane and were scanned 1 h post-treatment using a Bruker In Vivo MS FX Pro optical imaging system with x-ray and a 630 nm excitation and a 700 nm emission filter. Whole-animal scans were analyzed using Bruker Molecular Imaging software. An ellipsoid region of interest (ROI) was drawn in the brain of each mouse at every time point post-treatment, with the calibrated unit of radiant efficiency (p/s/mm2) being reported for each ROI. Raw signal was log-transformed to improve Q-Q plot normality.
Passive immunisation of mice
Four-month-old female pR5 mice were randomly assigned to three treatment groups (n = 10 per group): control, RN2N IgG1 and RN2N IgG2a. Age-matched wild-type (WT) littermate control mice were also assigned to a WT group for behavioural testing. pR5 mice were injected intraperitoneally once per week for 12 weeks with either 100 μL PBS (control) or 40 mg/kg RN2N IgG1 or RN2N IgG2a. Mice were weighed every week.
Elevated plus maze
Anxiety-like behavior was assessed in the elevated plus maze (EPM) as previously described [20], with some modifications. Briefly, mice were placed in the central area of the maze (an elevated cross-shaped apparatus with a central square and both closed and open arms with unprotected edges). The time spent in the three zones over a 5 min period was recorded using an overhead camera with EthovisionXT™ tracking software (Ethovision). The percentage of time spent in each arm was calculated. Seven untreated wild-type (WT) littermates were used as controls. Analysis was conducted blinded.
Open field
The open field test was used to assess general animal activity, exploration and anxiety. The open field consists of a square-shaped box where animal movement, position and speed is monitored by infrared beam breaks that project across the open field along the X, Y and Z axes. Mice were placed in the centre of the box and were allowed to explore for 20 min. The software was set up according to the General Open Field test from the Activity Monitor version 7 manual (MED Associates, Inc.). All data was transmitted to a PC and analysed using Activity Monitor 7 software, SOF-812 (MED Associates, Inc.).
Tissue processing
24 h after behavioural testing, mice were anaesthetised and transcardially perfused with PBS. Their brains were harvested and the hemispheres separated. The right hemisphere was immersion-fixed in 4% PFA (Sigma) for 24 h and then embedded in paraffin using a benchtop tissue processor (Leica). Coronal paraffin-embedded brain sections (7 and 14 μm thickness) between Bregma −1.34 mm and −2.06 mm were obtained using a microtome. The left hemisphere was snap-frozen in liquid nitrogen and stored at −80 °C until further processing.
Fractionation of brain tissue
Brains from treated mice were extracted using the RAB/RIPA fractionation method [21]. Brain tissue was homogenized in six volumes of RAB buffer (Astral Scientific), followed by centrifugation at 21,000 × g for 90 min. The supernatant (soluble tau) was collected for western blot analysis. The pellet was resuspended in RIPA buffer (Cell Signalling) and then centrifuged at 21,000 × g for 90 min. The supernatant (insoluble tau) was collected for western blot analysis. Protein concentrations were measured using a BCA protein assay kit (Thermo Fisher).
Western blot analysis
15 μg of total protein was electrophoresed on a 10% Tris-glycine SDS-PAGE gel. Proteins were transferred to Immun-Blot Low Fluorescence PVDF membrane (Bio-Rad) using the Trans-Blot Turbo Transfer System (Bio-Rad) and then stained with REVERT™ 700 Total Protein Stain according to the manufacturer's instructions (LI-COR). Total protein was imaged in the 700 nm channel with the Odyssey FC Imaging System (LI-COR); membranes were then washed and blocked for 30 min in Odyssey® Blocking Buffer (LI-COR). Membranes were incubated in primary antibody overnight at 4 °C with rocking. Membranes were washed with TBS and incubated with the secondary antibody for 30 min at room temperature. Membranes were imaged in the 800 nm channel and fluorescence was quantified using Image Studio™ Lite software (LI-COR).
Immunohistochemistry
Mounted brain tissue was first dehydrated in a series of xylene and ethanol washes. Antigen retrieval was conducted in a domestic microwave (850 W) in citrate buffer pH 5.8 for 15 min, followed by cooling at room temperature for 40 min. Paraffin-embedded brain sections between Bregma − 1.34 mm and − 2.06 mm were analysed by immunohistochemistry as described [22], using at least 4 sections per mouse. Briefly, sections were incubated with primary antibodies overnight. For immunofluorescence staining, 7 μm sections were washed and incubated with respective secondary antibodies, then counterstained with DAPI nuclear stain (4′, 6-diamidino-2-phenylindole) (1:5000). For immunohistochemistry staining, 14 μm sections were stained using the nickel-diaminobenzidine (Nickel-DAB) method with no counterstain applied. All images were obtained on an automated slide scanner (Axio Imager Z2, Zeiss) using a Metafer Vslide Scanner program (Metasystems) at 20 × magnification.
Image analysis
Image quantification and analysis were performed blinded using ImageJ software. For all analyses, a no-primary-antibody control was used for thresholding. More specifically, for immunofluorescence staining quantification, percentage (%) area positivity for AT8- and AT180-tau was obtained on thresholded images of the amygdala using the area fraction method. For Nickel-DAB staining quantification, the Iba1 percentage (%) immunoreactive area (area fraction method) and Iba1-positive average cell sizes were obtained using the Analyse Particles plugin on thresholded images of an area of the primary somatosensory cortex that was particularly positive for AT180- and AT8-tau in the transgenic mice used in this study. The regions of interest (ROIs) drawn in the Iba1 analyses were kept constant across images. All measurements were recorded and averaged over 2-3 sections per mouse.
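As a minimal sketch of the area-fraction measurement (not the exact ImageJ pipeline; the image and threshold here are placeholders), the computation on a thresholded image reduces to the fraction of pixels above threshold:

import numpy as np

def area_fraction(image, threshold):
    # Percent of pixels above threshold, i.e. % immunoreactive area; in ImageJ
    # this corresponds to thresholding followed by measuring the area fraction.
    mask = image > threshold
    return 100.0 * mask.mean()

img = np.random.rand(512, 512)  # placeholder for a scanned, background-corrected section
print(f"{area_fraction(img, 0.8):.1f}% positive area")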
Statistical analysis
Statistical analyses were performed with GraphPad Prism 8 software using one-way ANOVA with Tukey's multiple comparison test. All values are given as the mean ± standard error of the mean (SEM). Outliers were removed using the ROUT method (Q = 1).
Antibody isotypes do not affect binding to tau
We have previously cloned the RN2N variable domains into a murine IgG2a/κ backbone (RN2N IgG2a) and determined its binding affinity to recombinant human tau to be 381 nM [23]. To directly compare our tau antibody in different IgG isotype backbones, we cloned the RN2N variable domains into a murine IgG1/κ backbone (RN2N IgG1) and conducted surface plasmon resonance as done in our previous study [23] (Fig. 1a). The binding affinity to human recombinant tau was calculated to be 407 nM (Fig. 1b), demonstrating that RN2N IgG1 and IgG2a have comparable binding affinities to human tau. Furthermore, both RN2N isotypes detected 2 N tau isoforms following western blotting of brain extracts from pR5 tau transgenic and wild-type mice but not in brain extracts of tau knock-out mice (Fig. 1c).
RN2N IgG2a treatment demonstrates enhanced phagocytosis of tau-antibody complex by BV2 microglia cells compared to RN2N IgG1 treated cells
To investigate the ability of the RN2N IgG isotypes to activate microglial phagocytosis of tau, BV2 microglial cells were treated with recombinant human tau conjugated to pHrodo in the absence or presence of RN2N IgG. FACS analysis revealed that tau on its own, and when incubated with the control IgGs, was phagocytosed to a small extent (Fig. 2a). This was significantly increased, however, when tau was incubated with RN2N IgG1, and even more so when incubated with RN2N IgG2a (Fig. 2a). This demonstrates that the binding of RN2N to tau specifically stimulates microglial phagocytosis and that RN2N IgG2a is more efficient at mediating tau phagocytosis than RN2N IgG1. This was also observed by microscopy, which showed increased phagocytosis of tau in the presence of RN2N IgG2a compared to in the absence of IgG, and only minimal phagocytosis of tau in the presence of RN2N IgG1 (Fig. 2b).
RN2N IgG2a, but not RN2N IgG1, induces pro-inflammatory cytokine transcription following treatment
Microglial stimulation of the Fc receptors can lead to pro-inflammatory cytokine release. Therefore, to determine if tau-specific antibody isotypes resulted in different levels of pro-inflammatory cytokine release, sqRT-PCR was conducted on cells treated with or without the two IgG formats for 2 or 8 h. At 2 h, transcription of the pro-inflammatory cytokine tumor necrosis factor alpha (TNFα) was significantly increased in BV2 cells treated with tau in the presence of RN2N IgG2a, but not in the RN2N IgG1-treated group (Fig. 2c). However, this effect was absent at 8 h (Fig. 2c). Similarly, at 2 h, transcription of the pro-inflammatory cytokine interleukin 1 beta (IL1-β) showed a trend towards an increase in the RN2N IgG2a-treated cells compared to those treated with tau alone or in the presence of RN2N IgG1, and this effect was also absent at 8 h (Fig. 2c). These findings suggest that the engagement of tau by RN2N IgG2a, but not RN2N IgG1, and the subsequent stimulation of microglia, result in increased acute production of pro-inflammatory cytokines.

Fig. 2 (caption): RN2N IgG2a treatment demonstrates enhanced BV2 microglial cell phagocytosis of tau and increased transcription of pro-inflammatory cytokines. a Flow cytometric analysis and quantification of tau-pHrodo fluorescence in BV2 microglial cells treated with either RN2N IgG1, RN2N IgG2a, control IgG1 or control IgG2a (***p < 0.001, ****p < 0.0001; n = 3 for each group). b Representative microscopy images of BV2 microglial cells treated with RN2N and human recombinant tau, showing phagocytosis of RN2N IgG (green) and internalized tau-pHrodo (red) complexes. Nuclei were stained with DAPI (blue). Scale bar = 20 μm. c BV2 microglial cells were treated with recombinant human tau with or without RN2N IgG, and TNFα and IL1-β transcription was analysed using sqRT-PCR at 2 or 8 h post-treatment (*p < 0.05; n = 3 for each group).
Treatment with RN2N IgG2a, but not IgG1, induces a disinhibition-like behaviour in pR5 mice
Our in vitro studies have demonstrated that RN2N in the high FcR-binding IgG2a format enhances microglial phagocytosis of tau compared to the low FcR-binding IgG1 format. This process, however, results in a transient increase in expression of pro-inflammatory cytokines, which might reduce the therapeutic efficacy of RN2N in the IgG2a format. We therefore aimed to determine the ability of both RN2N IgG formats to reduce tau pathology in P301L tau transgenic pR5 mice, which express the longest brain tau isoform, 2N4R, and are characterized by progressive tau pathology predominantly in the amygdala and, to a lesser extent, in the hippocampus and cortex [18]. Prior to treating mice, we sought to compare the delivery of the different RN2N isotypes to the brains of these mice (Fig. 3a). In vivo whole-body imaging 1 h post intravenous delivery of fluorescently-conjugated RN2N IgG revealed that both RN2N isotypes were detected in the brains of mice (Fig. 3a), with no significant difference between IgG1 and IgG2a following quantification of the fluorescence intensity in the brains of the mice (Fig. 3a). This suggests that any differences in efficacy following passive immunisation of pR5 mice are not likely due to differences in brain uptake. We then went on to treat pR5 mice once per week for twelve weeks with either vehicle only (PBS), RN2N IgG1 or RN2N IgG2a (Fig. 3b). Upon completion of the treatment, the behaviour of the pR5 mice was investigated using the elevated plus maze (EPM) (Fig. 3c). We have previously shown that pR5 mice have increased anxiety-like behaviour, as characterised by a reduction in the time spent in the open arm of the EPM [15]. In the current study, however, there was no difference in the time spent in the open arm between the WT and pR5 mice, suggesting a loss of this behavioural phenotype in this cohort of pR5 mice, possibly due to genetic drift (Fig. 3c). Interestingly, however, mice treated with RN2N IgG2a demonstrated a significant increase in the time spent in the open arms of the EPM compared to the control pR5 mice and mice treated with RN2N IgG1 (Fig. 3c), suggesting an increase in disinhibition-like behaviour in these mice. Furthermore, pR5 mice treated with IgG2a, but not IgG1, showed a trend towards an increase in weight gain over the course of the treatment (Fig. 3d). In the open field, however, there were no differences observed between the treatment groups in the time spent in the centre zone (Fig. 3e), resting time (Fig. 3f) or ambulatory time (Fig. 3g).
Phosphorylated soluble and insoluble total tau levels are reduced independent of antibody isotype following passive immunization of pR5 mice
To investigate the effect the different RN2N isotypes had on tau pathology, the brains of treated mice were dissected and analysed by western blotting following sequential protein extraction (Fig. 4a). In the RAB-soluble fractions, western blotting with the human tau-specific antibody, Dako Tau, demonstrated no differences in total tau in mice treated with RN2N compared to control-treated mice (Fig. 4b). The phospho-tau-specific antibodies AT8 and AT180, however, demonstrated a significant reduction in tau phosphorylated at these epitopes following treatment with both RN2N isotypes compared to control-treated mice. Interestingly, there was a significant difference in the amount of AT180-immunoreactive tau between the RN2N-treated groups, with the IgG1 isotype reducing AT180-immunoreactive tau to a greater extent than IgG2a. On the other hand, investigation of serine 422-phosphorylated tau revealed no significant difference in the RN2N-treated groups compared to control (Fig. 4b).
In the RIPA fraction, however, which contains insoluble, pathogenic species of tau, a reduction in total tau was observed in RN2N treated mice, regardless of isotype, compared to control mice (Fig. 4c). This was also seen with the phosphorylated-tau specific antibodies, AT8 and anti-serine 422. It is important to note that AT180 immunoreactive tau in the RIPA fraction was below the detection limit of the assay. Taken together, these data demonstrate the ability of both RN2N IgG1 and IgG2a to reduce tau phosphorylation and reduce the formation of insoluble tau species.
RN2N IgG1, but not RN2N IgG2a, reduces tau inclusions in the amygdala following passive immunization of pR5 mice
To determine the effect of RN2N treatment on the formation of tau inclusions, phosphorylated tau positive neurons in the amygdala, the main brain region that is affected in pR5 mice at this age [18], were counted following immunofluorescence analysis. Treatment with RN2N IgG1, but not IgG2a, significantly reduced the number of AT180 positive inclusions compared to control treated mice (Fig. 5). Quantification of AT8-positive tau inclusions in this region, however, did not show a significant difference in the RN2N-treated groups compared to control mice, although there was a trend towards a reduction in the IgG1-treated mice compared to control mice (Fig. 5).
RN2N IgG1, but not RN2N IgG2a, reduces microgliosis following passive immunization of pR5 mice
Tau transgenic mice are known to exhibit increased microgliosis, and this can be demonstrated by staining for Iba1, a cytoplasmic microglial marker [24]. In the pR5 mice, we observed an increase in the number of Iba1-positive microglia in the cortex compared to WT littermate controls (Fig. 6). To determine the effect of antibody isotype on microgliosis following passive immunisation, brain sections of treated mice were labelled with Iba1. Total cortical microglial surface area, cell count and average cell body size were all significantly reduced in the mice treated with RN2N IgG1, but not RN2N IgG2a, compared to the PBS-treated mice, suggesting that only IgG1 treatment can reduce the microgliosis observed in tau transgenic mice (Fig. 6).
Discussion
Tau-targeting passive immunotherapy is a promising strategy for the treatment of AD, with a number of monoclonal antibodies currently in clinical trials, delivered as either a humanized IgG1 or IgG4 isotype [25]. As a direct comparison of a tau monoclonal antibody as a high-effector function isotype and low-effector function isotype has not previously been conducted, we therefore aimed to directly compare the in vitro and in vivo efficacy of our RN2N tau-specific antibody in two murine IgG-isotypes: (i) the high-affinity FcR-binding IgG2a (equivalent to human IgG1), and (ii) the low-affinity FcR-binding IgG1 (equivalent to human IgG4).
Here, we demonstrate that RN2N, in both the IgG1 and IgG2a formats, reduced insoluble tau in the P301L tau transgenic pR5 mouse model. The reduction in tau pathology was more pronounced in the IgG1-treated group, however, with only IgG1 reducing phospho-tau-positive inclusions in the amygdala. Furthermore, only IgG1 treatment was able to reduce microgliosis. This is despite the IgG2a isotype demonstrating enhanced phagocytosis of tau in vitro. Treatment with the IgG2a isotype in vitro, however, increased the secretion of pro-inflammatory cytokines following tau engagement, suggesting that enhanced phagocytosis and/or enhanced microglial activation, with the subsequent release of pro-inflammatory cytokines, may induce indirect disease toxicity and overshadow the therapeutic effects of a tau-specific antibody. This is consistent with the work of Lee et al., who demonstrated that although a full-effector antibody and an effector-less antibody both reduced the accumulation of tau pathology following treatment of P301L tau transgenic mice, only the effector-less tau antibody protected neurons from tau-induced toxicity in vitro [26]. In addition, recent studies have shown that microglia are able to take up and secrete tau seeds capable of seeding pathology for neuron-to-neuron propagation [27]. Furthermore, tau-mediated activation of the microglial inflammasome was shown to increase the activity of caspase-1 and downstream IL-1β release, thereby inducing tau pathology [28].

Fig. 4 (caption): RN2N IgG1 and IgG2a treatments reduce total insoluble and phosphorylated tau in pR5 mice. a Schematic of the RAB/RIPA brain fractionation protocol to obtain soluble (RAB) and insoluble (RIPA) fractions of whole brain lysate. b Representative images of soluble pR5 brain lysate probed with anti-tau antibodies as indicated (W = brain homogenate from wild-type mice; K = brain homogenate from MAPT knock-out mice) and quantification (n = 10 for each group). c Representative images of insoluble pR5 brain extracts probed with anti-tau antibodies as indicated and quantification (n = 10 for each group). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Beyond the effect of the RN2N antibodies on tau pathology, we also observed a surprising behavioural effect: as noted above, IgG2a-treated mice spent more time in the open arms of the EPM. In the open field, however, all treatment groups similarly avoided the centre zone, indicating similar anxiety levels. This points to a disinhibition-like behaviour rather than reduced anxiety. Disinhibition-like behaviour has been previously reported in a number of mutant tau transgenic mouse models when compared to non-transgenic mice in the EPM [29][30][31][32][33][34]. In addition, disinhibition-like behaviour has been observed in a mouse model of traumatic brain injury, which is characterised by both tau pathology and microgliosis following injury [35]. This suggests that pathology caused by tau accumulation, or possibly microglial activation, particularly in the amygdala, may contribute to impulsivity and disinhibition-like behaviour in mice. Therefore, rather than improving therapeutic outcomes, treatment with the IgG2a isotype may actually worsen mouse behaviour to recapitulate the behaviour observed in human disease.
Whilst enhanced microglial activation in the IgG2a-treated mice and the subsequent release of pro-inflammatory cytokines is one explanation for the differences we observed between RN2N IgG1 and RN2N IgG2a in vivo, the physicochemical properties of the Fc domains are another. A study by Congdon et al. showed that neuronal uptake of tau antibodies and their efficacy strongly depend on antibody charge, with an increase in antibody charge inhibiting the antibody's ability to be internalized by neurons [36]. RN2N IgG1 has a net charge of 0.4, whereas RN2N IgG2a has a net charge of 3.3, suggesting that RN2N IgG1 may have an enhanced ability to be internalised into neurons and target intraneuronal tau. To date, however, we have not observed any intraneuronal uptake of RN2N IgG in our experiments.
Taken together, our study supports the growing body of evidence demonstrating that high tau-antibody effector function is not required for antibody efficacy following passive immunisation and that a robust microglial response in vivo may be deleterious. Therefore, low effector-function tau antibody isotypes, such as humanized IgG4, and engineered antibodies that lack effector function altogether, such as scFvs [15], Fabs and effector-less IgGs [26], may be safer and more effective alternatives for clinical development.
Six-Month Therapy with Metformin in Association with Nutritional and Life Style Changes in Children and Adolescents with Obesity
Introduction
Obesity during childhood has become a growing public health problem throughout the world, to the extent that, according to the European Association for the Study of Obesity (EASO), about 16-22% of European adolescents between 14 and 17 years old are overweight or obese, with an annual increase in prevalence of around 2% in the 1990s and 2000s [1].
Conventional therapy for obesity is sometimes unsuccessful, especially in those children who develop hyperinsulinemia and insulin resistance, which often precede the development of glucose intolerance [2]. Hyperinsulinemia has a strong lipogenic effect and therefore a positive energy balance is established. Fat deposition persists, and so it seems that insulin-stimulated lipogenesis is unimpaired despite the resistance affecting carbohydrate metabolism. Therefore, it is hypothesized that, in the obese, if the insulin level falls, lipogenesis will decrease and weight gain will diminish. Metformin (dimethylbiguanide) is an insulin-sensitizing and antihyperglycemic agent used in the treatment of type 2 diabetes. The beneficial role of metformin in young patients with type 2 diabetes was demonstrated in a randomized trial [3]. Metformin has recently been approved by the FDA for the treatment of type 2 diabetes in children over 10 years old.
Metformin acts by stimulating intracellular glycogen synthesis, decreasing hepatic glucose production (inhibition of gluconeogenesis), decreasing intestinal absorption of glucose, and increasing insulin sensitivity [4]. It also increases muscle uptake of glucose and interferes with mitochondrial activity. The use of metformin in nondiabetic obese adults and children has been associated with reduced food intake [5][6][7].
The first clinical application of metformin in children with obesity was described in 1977; a beneficial effect on weight and insulin concentrations was reported [8].
Subsequent data from randomized, double-blind, placebo-controlled trials in which children were given metformin therapy for exogenous obesity with insulin resistance [9][10][11][12][13][14], for liver dysfunction and obesity [15], as well as for psychotropic drug-induced weight gain [16], have shown improvement in body mass index (BMI), in levels of fasting serum glucose and insulin, and in the lipid profile.
Biochemical and hormonal assays
Glucose was measured in plasma by an enzymatic method (Roche Diagnostics). Insulin was measured in serum by an EIA assay (DPC). Sensitivity to insulin was evaluated using the HOMA index (fasting insulin (µU/mL) × fasting glucose (mmol/L) / 22.5) [24]. After an overnight fast, an oral glucose bolus of 1.75 g/kg (up to a maximum of 75 g) was administered. Blood for determination of glucose and insulin was obtained at 0, 30, 60, 90, and 120 minutes. The liver function test and determination of vitamin B12 and folic acid levels were carried out as standard clinical assays.
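As a minimal worked example of the HOMA arithmetic (illustrative values only, not patient data; the factor 18 converts glucose from mg/dL to mmol/L):

def homa_ir(fasting_insulin_uU_mL, fasting_glucose_mmol_L):
    # HOMA index: fasting insulin (uU/mL) x fasting glucose (mmol/L) / 22.5
    return fasting_insulin_uU_mL * fasting_glucose_mmol_L / 22.5

glucose_mmol_L = 95.0 / 18.0                       # 95 mg/dL converted to mmol/L
print(round(homa_ir(18.0, glucose_mmol_L), 2))     # 4.22, above the 3.8 cut-off used here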
Statistical analysis
Computer analysis: The statistical analyses and data recordings were performed on a personal computer using the Statistical Package for the Social Sciences (SPSS), version 20.0 (Chicago, Illinois).
Descriptive analysis: Descriptive statistics are reported as means and SD because all the variables were normally distributed. Weight, BMI, and height were converted to z-scores (standard deviation scores). To obtain the z-scores ((x − mean)/SD), we used Spanish growth standards [25].
Statistical analysis: Since all the variables were normally distributed, we used the paired-sample t-test to compare data before and after metformin therapy. All tests were two-tailed.
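A minimal sketch of this comparison (the BMI-SDS values below are illustrative, not study data):

from scipy.stats import ttest_rel

# Paired BMI-SDS values before and after six months of therapy (illustrative)
before = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0]
after = [2.8, 2.7, 3.1, 2.6, 3.2, 2.8]

t, p = ttest_rel(before, after)  # two-tailed paired-sample t-test
print(f"t = {t:.2f}, p = {p:.3f}")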
Results
Twenty-five patients were referred to the study. Two patients declined to participate, and two did not meet the inclusion criteria. Therefore, 21 patients, of whom 8 were male and 13 female, participated in the study. One patient declined the OGT test or any other blood investigation at month 0. At month six, only 7 patients agreed to have an OGT test. One patient discontinued the study after one month of metformin therapy because of dyspepsia and diarrhea.
Baseline characteristics
The mean age of the participants was 12.3 ± 3.9 years, with 13 being in Tanner stage 1-2 and 8 in Tanner stage 3-5. There were not significantly more girls than boys in puberty (Tanner stage 3-5) (p=0.055). All other characteristics were similar for both males and females (Table 1). All participants but one were of Caucasian origin. A family history of metabolic syndrome in either first- or second-degree relatives was noted in 15 patients (71%). Thirteen patients (61.9%) had acanthosis nigricans. At onset of therapy, 38% of the participants had plasma glucose concentrations at 120 min higher than 140 mg/dL, and 42% had a HOMA index higher than 3.8.
Subjects and Methods
Participants were 5- to 18-year-olds with obesity, as defined by the International Obesity Task Force [18], referred to the pediatric endocrine clinic at the University Hospital of Navarra between January 2006 and January 2007. Prior to metformin therapy, all participants had been subjected to nutritional intervention and lifestyle changes but had not responded with a loss of weight. Exclusion criteria were known type 1 or type 2 diabetes mellitus; dysmorphic features; renal disorders; obesity due to hormonal, chromosomal, or neurological disorders; and contraindications to metformin therapy.
Written informed consent was obtained from all parents and from all participants over 12 years old. Participants under 12 years old were also informed about the therapy and expressed their agreement to it. The therapy was approved by the Spanish health ministry and by the University Hospital of Navarra Ethics Committee.
Study design
All subjects received treatment with metformin for six months. Subjects were seen monthly by a dietitian to receive standardized instructions for healthy eating and exercising. We encouraged children to do aerobic exercise, such as swimming, cycling, running, or dancing, three times per week; they also did physical exercise at school twice a week. After a two-week period, the metformin dose was gradually built up from 425 mg once daily to a maximum final dose of 850 mg twice daily. Up to 40 kg the dose was 425 mg after breakfast and dinner, and above 40 kg the dose was 850 mg after breakfast and dinner. Unused tablets were counted when the patients' pill dispensers were refilled with the drug, that is, after three months and at the end of the study period. Pill counts were conducted to calculate percent adherence to therapy, based on the number of tablets consumed versus the anticipated tablet consumption for each three-month period; a sketch of this calculation follows below.
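A minimal sketch of the adherence arithmetic (illustrative counts, not study data):

def percent_adherence(dispensed, returned, expected_consumed):
    # Adherence = tablets actually consumed / tablets anticipated, as a percentage
    consumed = dispensed - returned
    return 100.0 * consumed / expected_consumed

# e.g. a >40 kg patient on 850 mg twice daily over a 90-day refill period
expected = 2 * 90  # two tablets per day for three months
print(round(percent_adherence(dispensed=200, returned=60, expected_consumed=expected), 1))  # 77.8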
Clinical assessment and anthropometry
At baseline and six months, participants attended the University Hospital of Navarra for clinical assessment, including anthropometry, body composition analysis by bioelectric impedance, an oral glucose tolerance test or basal glucose and insulin tests, a liver function test, and determination of folic acid and vitamin B12 levels.
The medical history of each participant was investigated in detail, and subjects were examined for clinical signs of adrenarche or gonadarche according to Tanner stage [19]. A participant was considered pubertal if he or she was at least in Tanner stage 2 with respect to breast development or testicular volume.
All participants were measured anthropometrically by the same observer. Weight was measured to the nearest 0.01 kg using a calibrated BP electronic scale (Life Measurement Instruments, Concord, CA, U.S.A.). Height was measured to the nearest 0.1 cm using a Harpenden stadiometer. BMI was calculated as weight/height² (kg/m²) [20]. Waist and hip circumferences were also measured, as described in the literature [21].
Acanthosis nigricans on the neck was assessed for severity by a validated scale ranging from grade 0 (not present) to grade 4 (severe: extending anteriorly, visible when the participant is viewed from the front) [22].
We assessed degree of appetite on a categorical scale of one to five: 1 being very poor, 2 poor, 3 fair, 4 good, and 5 very good [23].
Bioimpedance
BIA was performed with a TANITA body fat analyser, which measures impedance of only the lower part of the body (TBF-410, Tanita, Tokyo, Japan). Participants were asked to stand barefoot on the four metal sole-plates of the machine. Gender and height details were input via a keyboard. Bioelectric resistance was measured after induction of a 50 kHz electrical signal with a current of 150 to 900 µA. Percentage body fat was automatically estimated by the prediction equations (for children) which had been programmed into the system. The prediction equations are not provided by the manufacturers.
Effect of Metformin treatment on parameters of insulin sensitivity
Metformin in association with nutritional and lifestyle intervention decreased the plasma glucose level at 120 min after glucose loading at six months of follow-up (p=0.008) (Figure 4). There were statistically non-significant decreases in basal insulin (p=0.142), basal glucose (p=0.183), and HOMA (p=0.198) (Figure 5) at six months of follow-up (Table 2).
Side effects, adherence to therapy, and safety profile
Metformin was well tolerated by the majority of patients. One patient had to stop therapy after a month because of dyspepsia. Only 23.8% of the participants reported gastrointestinal side effects: transient abdominal discomfort, diarrhea, or both. After reducing the metformin dose, these gastrointestinal problems resolved within two weeks of therapy. Liver function tests and levels of B12 and folic acid remained normal in all patients throughout the study. Mean basal vitamin B12 levels were 650 pg/mL, and after treatment 690 pg/mL. Mean basal folate levels were 15.5 ng/mL, and after treatment 16.5 ng/mL. Based on pill counts, adherence to therapy was 78% (range 14-98%).
Discussion
This study demonstrates that, when diet and exercise alone are not effective, metformin helps obese patients lose weight. The patients studied had been referred to the Pediatric Endocrine Unit for management of obesity and, despite being given appropriate instructions and guidance about how to change their nutrition and lifestyle, were not losing weight; some were even putting on weight. Some were developing features of insulin resistance, such as acanthosis nigricans. Metformin was well tolerated and had a beneficial effect on weight, BMI, waist circumference, and fat mass, as has been described in previous studies [9][10][11][12][13][14][15]. Visceral fat was not accurately measured in this study; body composition was assessed by BIA, a method which has its limitations in estimating body fat and fat-free mass. Moreover, the BIA system used in this study does not measure body fat in different body areas. Other researchers have performed whole-body DEXA but did not find loss of visceral fat [17]. Loss of visceral fat might require metformin therapy over a longer period of time.
Although, based on the HOMA index, a number of patients had improved insulin sensitivity, this improvement was not statistically significant for the group as a whole. Similar results have previously been observed by Srinivasan et al. [17] using more complex methods to assess insulin sensitivity.
There are several possible explanations for the lack of a statistically significant improvement in insulin sensitivity. Firstly, most of the participants who had insulin resistance (52.3%, n=11) were in puberty, a period when insulin resistance due to obesity can be enhanced. This physiological insulin resistance characteristic of puberty may have masked the effect of metformin. Studying the effects of metformin on insulin resistance during puberty was not the main objective of this study and would require a very different experimental approach to that adopted here. Moreover, there were not enough patients in this study to statistically assess the effect of pubertal stage on response to metformin.
Secondly, the methods we used to assess insulin sensitivity, HOMA and the oral glucose tolerance test, whilst easy to perform clinically, are not the most accurate. However, Srinivasan et al. [17] also failed to detect a significant improvement in insulin sensitivity assessed with the frequently sampled intravenous glucose tolerance test.
Thirdly, the doses of metformin were small and adherence to therapy was poor. We used a maximum dose of 1700 mg, whereas data from adults with type 2 diabetes suggest that a total dose of 3 g may be required to maximize the metabolic benefits of metformin [26]. Many patients did not adhere well to the prescribed therapy. This is not uncommon in the treatment of obese patients, especially adolescents.
The precise way in which metformin acts is unknown. It is believed to increase insulin sensitivity and glucose uptake in subjects with type 2 diabetes mellitus [27]. It has also been suggested that metformin exerts its antihyperglycemic effect by decreasing hepatic glucose output through inhibition of gluconeogenesis. Previous studies have demonstrated that metformin treatment reduces food intake in both humans and experimental animals [28,29]. One study demonstrated that metformin can inhibit complex I of the electron transport chain in mitochondria, leading to a loss of mitochondrial membrane potential and inhibition of ATP production [30]. The authors concluded that metformin's pharmacological effects are mediated, at least in part, through a time-dependent, self-limiting inhibition of the respiratory chain that restrains hepatic gluconeogenesis while increasing glucose utilization in peripheral tissues [30]. However, it is difficult to know its precise mechanism of action in humans because most of these studies have been performed in animal cells.
In this study, we did not analyze diet in order to assess caloric intake. However, we did use a categorical scale to assess appetite and found that it decreased significantly. Therefore, loss of appetite is another mechanism by which metformin can exert its anti-obesity effect.
Metformin was well tolerated. There were only minor and transient side effects, which in the majority of patients resolved spontaneously after decreasing the dose. We did not measure lactic acid, but it should be noted that an increased level of lactic acid has been reported as a complication, albeit a very rare one, and that it can be more frequent in patients with renal disease.
In conclusion, for certain patients with obesity who have difficulties in losing weight with conventional therapy, additional medical therapy can help. Obesity is a chronic disease with severe complications, especially insulin resistance, that become more difficult to treat the longer they persist. Metformin, by helping patients to lose weight, represents a way to try to forestall progression to type 2 diabetes in patients predisposed to it. However, longer-term, placebo-controlled studies are needed to fully assess the safety and behaviour of this drug.
Figure 1: Changes in BMI after metformin therapy
Figure 2: Changes in BMI-SDS after metformin therapy
Table 2: Metformin treatment effect
Mechanisms underlying the therapeutic effects of Gang Huo Qing wen granules in the treatment of influenza based on network pharmacology, molecular docking and molecular dynamics
Influenza (flu) is a severe health, medical, and economic problem, and no currently available medication both achieves excellent outcomes and lowers the occurrence of these problems. Gang Huo Qing Wen Granules (GHQWG) is a common Chinese herbal formula for the treatment of influenza (flu). However, its methods of action remain unknown. We used network pharmacology, molecular docking, and molecular dynamics simulation techniques to investigate the pharmacological mechanism of GHQWG in flu. TCMSP and various types of literature were used to obtain active molecules and targets of GHQWG. Flu-related targets were found in the Online Mendelian Inheritance in Man (OMIM) database, the DisGeNET database, the Therapeutic Target Database (TTD), and the DrugBank database. To screen the key targets, a protein–protein interaction (PPI) network was constructed. DAVID was used to analyze GO and KEGG pathway enrichment. Target tissue and organ distribution was assessed. Molecular docking was used to evaluate interactions between possible targets and active molecules. For the ideal core protein–compound complexes obtained using molecular docking, a molecular dynamics simulation was performed. In total, 90 active molecules and 312 GHQWG targets were discovered. The PPI network's topology highlighted six key targets. GHQWG's effects are mediated via genes involved in inflammation, apoptosis, and oxidative stress, as well as the TNF and IL-17 signaling pathways, according to GO and KEGG pathway enrichment analysis. Molecular docking and molecular dynamics simulations demonstrated that the active compounds and tested targets had strong binding capabilities. This analysis predicts the effective components, possible targets, and pathways involved in GHQWG flu treatment. We propose a novel study strategy for future studies on the molecular processes of GHQWG in flu treatment. Furthermore, the possible active components provide a dependable source for flu drug screening.
However, studies of the mechanisms by which GHQWG achieves its anti-flu properties, including evaluations of putative targets, cellular processes, and metabolic pathways, are missing.
Complex chemicals used in TCM have numerous clinical targets and pathways, so conventional pharmacology methodologies are restricted for mechanistic research of TCM 15. A novel idea known as "network pharmacology", put forth by Shao Li in 2013, offered a fresh method for figuring out how TCM formulas work 16. The development of bioinformatic networks and network topology investigations are part of TCM network pharmacology, which also encompasses virtual computing, high-throughput data processing, and network database retrieval 17. This method is appropriate for studying TCM because it focuses on multi-component, multi-channel, and multi-target synergy 18,19. Network pharmacology, which is based on computational prediction, has recently merged with bioinformatics, making it a reliable way to systematically disclose the biological mechanisms behind complex diseases and therapeutic actions at the molecular level 20,21. More research is being done using network pharmacology to determine the mechanisms underlying TCM's therapeutic benefits on diverse ailments 22,23.
In 2007, Hopkins, a British pharmacologist, first proposed the concept of network pharmacology (NP), defined as a branch of pharmacology based on the theoretical foundation of systems biology and multidirectional pharmacology that uses biomolecular network analysis to select specific nodes for new drug design and target analysis 18,24. Molecular docking is a computer-aided drug design (CADD) method 25 and an important tool in NP for predicting the binding affinity between drugs and targets. By constructing drug-target-pathway network models and simulating drug-target interactions, these methods can predict the binding affinity between a drug and its target and thereby reveal a drug's mechanism of action and potential side effects. TCM is characterized by "multi-components, multi-targets, and multi-pathways", which coincides with the "multi-genes, multi-targets" pharmacological research characteristics of NP 4,26. With the improvement of databases, the maturity of CADD technology, and the modernization of TCM, research applying NP and molecular docking in the field of TCM has become more and more extensive, including but not limited to the prediction of Chinese medicine targets, understanding of the biological basis of diseases and syndromes, the network regulation mechanisms of TCM, and the identification of biomarkers of diseases and syndromes based on biological networks 5,27. This has brought vigorous new opportunities for the development of Chinese medicine pharmacology and provides new horizons for the search for new drugs against the influenza virus.
In summary, the pandemic nature of influenza, the drug resistance of the virus, and the limited means currently available to Western medicine for coping with influenza have prompted researchers around the world to search for new ways to prevent and treat influenza. TCM possesses unique advantages and broad prospects in the treatment of influenza. Clinical experience has shown that GHQWG can effectively inhibit the progression of the disease and improve the prognosis, but its mechanism of action still needs to be further explored. NP and molecular docking technology are a good way to explore the mechanisms of action of the pharmacodynamic substances of Chinese medicine in treating disease. Therefore, this research searches for active molecules and targets of GHQWG as well as influenza-related targets, and constructs the GHQWG-influenza network system by NP and molecular docking techniques to predict the main bioactive substances, possible targets, and signaling pathways of GHQWG in the treatment of influenza. We hope that this work will provide a scientific elucidation of GHQWG for influenza, align with the frontiers of modern science and technology, lay the foundation for further research on the mechanism of action of GHQWG for influenza, and provide new insights into the treatment of influenza by TCM for researchers around the world. The flow chart of this study is shown in Fig. 1.
Screening and target prediction of active components of GHQWG
The active components of Ilex asprella root, Pogostemon cablin, Lonicerae Japonicae Flos, Forsythiae Fructus, Saposhnikoviae Radix, Notopterygii Rhizoma Et Radix, Radix Bupleuri, Atractylodes lancea, Schizonepetae Spica and Bovis Calculus in GHQWG were searched using the Traditional Chinese Medicine Systems Pharmacology analysis platform and database (TCMSP, https://tcmspw.com/tcmsp.php) 28, and the targets of the active components were predicted. We screened for active ingredients that met both oral bioavailability (OB) ≥ 30% and drug-likeness (DL) ≥ 0.18 29,30, as shown in the sketch below. Since "Ilex asprella root" was not found in the TCMSP database, its chemical composition was collected from the literature, and these components were imported into SwissADME for ADME screening, with "GI absorption = high" and at least two "Yes" drug-likeness ratings as the criteria. The targets of Ilex asprella root (GMG) were then predicted in the SwissTargetPrediction database, selecting targets with a probability value greater than 0.1 as predicted targets.
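The OB/DL screen described above amounts to a simple two-threshold filter. The following is a minimal sketch of that step; the input file and its column names ("ob", "dl") are assumptions about a TCMSP export, not part of the original workflow.

```python
import pandas as pd

# Hypothetical sketch of the ADME screen described above. The thresholds
# (OB >= 30%, DL >= 0.18) are the ones stated in the text; the file and
# column names are assumptions about a TCMSP export.
compounds = pd.read_csv("tcmsp_ingredients.csv")
active = compounds[(compounds["ob"] >= 30.0) & (compounds["dl"] >= 0.18)]
print(f"{len(active)} of {len(compounds)} ingredients pass the OB/DL screen")
active.to_csv("ghqwg_active_compounds.csv", index=False)
```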
Acquisition of GHQWG targets in flu
To clarify the mechanism of action of drug targets and disease targets at the protein level, drug genes and disease genes were submitted to the Venn diagram tool (http://bioinformatics.psb.ugent.be/webtools/Venn/) 32, which made it possible to remove the repeated targets among GHQWG and flu. The intersection of GHQWG-related targets and flu-related targets was then screened for common targets 33.
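The Venn step is, in effect, a set intersection. A minimal sketch, assuming one gene symbol per whitespace-separated token in two plain-text files (file names are placeholders):

```python
# Drug-disease target intersection performed with the Venn diagram tool,
# reproduced as a plain set intersection; file names are placeholders.
ghqwg_targets = set(open("ghqwg_targets.txt").read().split())  # 312 drug targets
flu_targets = set(open("flu_targets.txt").read().split())      # disease targets
common = sorted(ghqwg_targets & flu_targets)                   # shared targets
print(len(common), "common GHQWG-flu targets")                 # 134 in the paper
```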
Analyses of the protein-protein interaction network and hub targets
The protein-protein interaction (PPI) network helps to better understand the biological mechanisms involved in target-related pathogenesis at the protein level. Thus, the STRING 11.0b database (https://string-db.org/) was used to construct the PPI network and obtain hub targets. The organism was set to "Homo sapiens" and the minimum required interaction score was 0.4 34. Subsequently, the PPI network was visualized and analyzed with Cytoscape 3.7.2 software (https://cytoscape.org/). The degree value of each target node in the PPI network was calculated through topological analysis using the NetworkAnalyzer plugin of Cytoscape 3.7.2, and the key target genes in the network were selected according to node degree: the larger the degree, the greater the role of the gene in the PPI network. The proteins with the top 6 degree values were selected as the core targets.
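The degree-based hub screen performed in Cytoscape's NetworkAnalyzer can be reproduced in a few lines, assuming the STRING interactions are exported as a tab-separated edge list; the file name is a placeholder:

```python
import networkx as nx

# Degree-based hub screen over a STRING edge-list export (placeholder path).
g = nx.read_edgelist("string_ppi_edges.tsv", delimiter="\t")
degree = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
hubs = [gene for gene, _ in degree[:6]]  # top 6 by degree = core targets
print(hubs)  # expected here: RELA, TP53, MAPK3, TNF, AKT1, MAPK1
```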
Enrichment analyses for common targets
To study the biological function of potential targets in flu, the DAVID database (https://david.ncifcrf.gov/) was used to collect GO analysis and KEGG data 35. GO analysis is used to screen biological processes (BP), cellular components (CC), and molecular functions (MF) 36. KEGG enrichment analysis can find important signaling pathways involved in biological processes [37][38][39]. Subsequently, GO and KEGG data were uploaded to the Bioinformatics platform (http://www.bioinformatics.com.cn/) for visual analysis 40.
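The over-representation statistic behind such enrichment tools is, at its core, a hypergeometric tail probability (DAVID itself uses a modified Fisher exact test, which is closely related). A sketch with illustrative numbers, not values from this paper:

```python
from scipy.stats import hypergeom

# Illustrative over-representation test for one pathway; all numbers invented.
N = 20000  # background genes
K = 150    # genes annotated to the pathway
n = 134    # common GHQWG-flu targets submitted
k = 12     # submitted targets that fall in the pathway
p = hypergeom.sf(k - 1, N, K, n)  # P(X >= k)
print(f"enrichment p-value: {p:.2e}")
```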
Molecular docking
Molecular docking can predict the potential therapeutic effects of drug components by analyzing the binding potential between the main components and hub genes 41. The specific methods were as follows: the 3D structures of the hub gene proteins and the crystal structures of the main components were obtained from the RCSB PDB (http://www.rcsb.org/) and PubChem databases 42, respectively. PyMOL 1.7.x software was used to remove ligands from, dehydrate, and hydrogenate the hub gene proteins 43. Then, the hub gene proteins and main components were converted to PDBQT format with AutoDock Tools 1.5.6. AutoDock Tools 1.5.6 was used to construct a crystal-structure docking grid box for each target. Semi-flexible docking was then performed, and for each active compound the docking conformation with the lowest binding energy was selected by comparison with the original ligands and intermolecular interactions. Box center coordinates and box size were set for evaluating the interaction, and the "Number of GA Runs" was set to 50 44. Finally, the docking results were visualized with PyMOL software.
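Behind AutoDock Tools 1.5.6 sit the MGLTools preparation scripts and the autogrid4/autodock4 binaries; a hedged sketch of that pipeline is shown below. All file names are placeholders, and the grid box and the 50 GA runs are assumed to be configured in the .gpf/.dpf parameter files as described above.

```python
import subprocess

# Hedged sketch of an AutoDock4 pipeline of the kind driven by AutoDock
# Tools 1.5.6. Receptor/ligand names are placeholders.
subprocess.run(["prepare_receptor4.py", "-r", "TNF_clean.pdb", "-o", "TNF.pdbqt"], check=True)
subprocess.run(["prepare_ligand4.py", "-l", "quercetin.mol2", "-o", "quercetin.pdbqt"], check=True)
# The grid (.gpf) and docking (.dpf) parameter files hold the box centre/size
# and "ga_run 50", matching the 50 GA runs used in the text.
subprocess.run(["autogrid4", "-p", "TNF.gpf", "-l", "TNF.glg"], check=True)
subprocess.run(["autodock4", "-p", "quercetin_TNF.dpf", "-l", "quercetin_TNF.dlg"], check=True)
```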
Molecular dynamics simulation
To analyze the binding affinities of protein targets and active compounds obtained by molecular docking, a 100 ns atomistic molecular dynamics (MD) simulation was performed using the AMBER20 software package 45,46. The ff14SB force field parameters were used for the proteins, and the general AMBER force field (gaff) was used for the active ingredients. The electrostatic potentials were calculated at the B3LYP/6-31G level using Gaussian09, and the atomic partial charges were assigned using the Antechamber package in AMBER20. The protein and active ingredients were loaded into the leap module, and hydrogen atoms and counterions were automatically added to neutralize the charge. The TIP3P explicit water model was selected, and periodic boundary conditions were set 47. The molecular dynamics simulation workflow included four steps: energy minimization, heating, equilibration, and production dynamics. First, the heavy atoms of the proteins (and small molecules) were restrained, and 5000 steps of energy minimization were performed on the water molecules (2500 steps of steepest descent followed by 2500 steps of conjugate gradient); then, the system was slowly heated from 0 to 300 K within 60 ps. After heating, the system was equilibrated for 60 ps under the NPT ensemble. Finally, the system was subjected to a 100 ns molecular dynamics simulation (a total of 50,000 steps) under the NPT ensemble. The Particle Mesh Ewald (PME) method was used to handle long-range electrostatic interactions, and the SHAKE algorithm was used to constrain bonds to hydrogen atoms. The time step was set at 2 fs, and the trajectory data were saved every 1 ps. The CPPTRAJ module was used for the analyses 48.
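The trajectory analyses reported in the Results (RMSD, RMSF, radius of gyration) follow directly from the CPPTRAJ module; a minimal sketch using pytraj, AMBER's Python front end to cpptraj, with placeholder file names:

```python
import pytraj as pt

# CPPTRAJ-style trajectory analyses via pytraj; file names are placeholders.
traj = pt.iterload("prod_100ns.nc", "complex.prmtop")
rmsd = pt.rmsd(traj, mask="@CA", ref=0)        # backbone deviation per frame
fluct = pt.atomicfluct(traj, mask="@CA")       # per-residue fluctuation (RMSF)
rg = pt.radgyr(traj, mask="@CA")               # compactness over time
print(f"mean RMSD {rmsd.mean():.2f} A, mean Rg {rg.mean():.2f} A")
```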
Active compounds and potential targets of GHQWG
In total, 135 chemical compounds from the ten herbs in GHQWG were obtained from the TCMSP database and the literature, including 6, 11, 23, 9, 17, 18, 8, 23, 5 and 15 compounds from GMG, GHX, JYH, CZ, CH, FF, JJS, LQ, NH and QH, respectively. After removing redundancy, 90 chemical compounds were identified (Supplementary File Table 1). 312 probable targets of GHQWG active chemicals and their related gene symbols were identified by scanning the TCMSP target module (Supplementary File Table 2). The active compounds and common targets were loaded into Cytoscape 3.7.1 to create a herb-compound-target network with 411 nodes and 2134 edges (Fig. 2). The network sheds light on the multiple complicated impacts of GHQWG active chemicals on flu. We determined the key components based on the degree ranking of the active ingredients, identifying the eight ingredients with the highest degree as key components. The network analysis showed that quercetin (degree: 585), kaempferol (degree: 175), luteolin (degree: 163), β-sitosterol (degree: 137), wogonin (degree: 127), stigmasterol (degree: 91), caffeic acid (degree: 82), and isorhamnetin (degree: 33) were key nodes.
Construction and analysis of GHQWG-flu-related PPI network
The 134 intersection targets were then imported into the STRING platform to build a PPI network, yielding 109 nodes and 397 edges. The size and color of each node denote its value. The intersection targets were screened using double the median "Degree", that is, "Degree ≥ 24". The PPI network data from the STRING 11.5 database were imported into Cytoscape 3.9.1 for visualization (Fig. 4). The PPI analysis revealed that the therapeutic targets of GHQWG form many networks and synergistic interactions. Topological analysis showed that RELA proto-oncogene, NF-kB subunit (RELA), tumor protein p53 (TP53), mitogen-activated protein kinase 3 (MAPK3), tumor necrosis factor (TNF), AKT serine/threonine kinase 1 (AKT1), and mitogen-activated protein kinase 1 (MAPK1) occupy the core positions in the PPI network and are considered hub genes of GHQWG against flu (Supplementary File).
Gene Ontology and Kyoto Encyclopedia of Genes and Genomes pathway analysis
To study the biological activities and pathways of GHQWG against flu, GO and KEGG enrichment analyses of the common targets were done. On the Metascape platform, the GO function enrichment study of the 134 core targets yielded 888 GO items, comprising 674 "Biological Processes (BP)", 80 "Cellular Components (CC)", and 134 "Molecular Functions (MF)" (Supplementary File Table 6). Based on the P value, the first 15 "Biological Processes" items, 10 "Cellular Components" items, and 10 "Molecular Functions" items were chosen for visual analysis (Fig. 5). Representative BP terms included the response to xenobiotic stimulus, positive regulation of gene expression, cellular response to cadmium ion, inflammatory response, positive regulation of the apoptotic process, positive regulation of transcription from RNA polymerase II promoter, lipopolysaccharide-mediated signaling pathway, positive regulation of cell proliferation, etc. Representative CC terms included extracellular space, macromolecular complex, membrane raft, caveola, an integral component of the plasma membrane, extracellular region, postsynaptic membrane, plasma membrane, cytosol, presynaptic membrane, etc. Representative MF terms included enzyme binding, identical protein binding, protein binding, protein homodimerization activity, heme binding, cytokine activity, G-protein coupled acetylcholine receptor activity, protein kinase binding, transcription cofactor binding, etc. In addition, 174 signal pathways were enriched (Supplementary File Table 7) by KEGG pathway analysis of the core targets using the DAVID platform (Fig. 6); representative pathways included the AGE-RAGE signaling pathway in diabetic complications, lipid and atherosclerosis, fluid shear stress and atherosclerosis, the TNF signaling pathway, pathways in cancer, the IL-17 signaling pathway, Chagas disease, the C-type lectin receptor signaling pathway, hepatitis B, Kaposi sarcoma-associated herpesvirus infection, prostate cancer, chemical carcinogenesis-reactive oxygen species, human cytomegalovirus infection, etc. In conclusion, the GO and KEGG analyses highlighted that GHQWG's anti-inflammatory, antiviral, and anticancer properties act on important targets/pathways in flu treatment.
Molecular docking
In general, it is thought that a docking score of 0 kcal/mol or less suggests that the component and the target can interact spontaneously, a value of −4.0 kcal/mol or less shows good docking affinity, and a value of −7.0 kcal/mol or less shows strong docking affinity; the smaller the value, the higher the binding activity [49][50][51]. The eight active components with the highest degree (quercetin, kaempferol, luteolin, β-sitosterol, wogonin, stigmasterol, caffeic acid, and isorhamnetin) and the top six targets by degree (RELA, TP53, MAPK3, TNF, AKT1, and MAPK1) were used for molecular docking analysis. The docking results showed that the main components and hub genes have good binding activity (affinity < −6.00 kcal/mol) (Fig. 7), and the binding energy of quercetin with the hub genes was the lowest, indicating that quercetin had the strongest binding ability. Detailed information on the molecular docking is shown in Table 1. PyMOL 1.7.2.1 and Discovery Studio 2020 were used to depict the compound-target interactions with the greatest free binding energy scores, as well as their modes of binding (Fig. 8A-L). Docking results had free binding energies ranging from −3.505 to −9.853 kcal/mol, indicating stable binding. The free binding energy of TNF with quercetin was −9.853 kcal/mol. The binding affinity was attributed to hydrogen bonding with the TYR-199 residue, van der Waals forces with the LEU-94, LEU-120, TYR-59, LEU-57, GLN-61, GLY-121, SER-60 and TYR-119 residues, as well as hydrophobic interactions with the LEU-57 residue of TNF (Fig. 8I,J).
Molecular dynamics simulation
Molecular dynamics simulations can be used to understand the stability of protein-ligand complexes. Based on the results of the molecular docking, we selected the six complexes with the highest free binding energy scores (AKT1-Kaempferol, MAPK1-Kaempferol, MAPK3-Kaempferol, RELA-Luteolin, TNF-Quercetin, and TP53-Caffeic Acid) and subjected them to 100 ns molecular dynamics simulations to assess their motion, trajectories, structural features, binding potential, and conformational changes. The root mean square deviation (RMSD), which measures how far atom positions depart from their initial positions, is a useful indicator of the conformational stability of proteins and ligands: a smaller deviation indicates better conformational stability. The variations in the complexes' RMSD values were examined. The RMSD of the AKT1-Kaempferol complex varied early on and stabilized at about 0.4 nm, as seen in Fig. 9A. The MAPK1-Kaempferol complex's RMSD trajectory fluctuated between 86,000 and 90,000 ps but was largely smooth for the remainder of the period (Fig. 9B). The trajectory became steady for the MAPK3-Kaempferol complex at 1 nm (Fig. 9C). The RELA-Luteolin complex's RMSD fluctuated early on before stabilizing at 48,000 ps (Fig. 9D). The RMSD of the TNF-Quercetin complex was stable overall (Fig. 9E). Initially, the RMSD of the TP53-Caffeic Acid complex varied before stabilizing at 0.4 nm (Fig. 9F). The RMSD values for the ligands and binding pockets showed that the active pockets of the small molecules and proteins were in a stable condition. This shows that the protein's structure does not change considerably following the interaction with the small-molecule ligand, and the complex is reasonably stable.
Together, these findings validate the docking results. Temperature and pressure had no discernible impact on the structural conformation.
The stability of the target protein at the residue level
The vibrations of each residue following compound binding were examined as root mean square fluctuations (RMSF) in order to investigate the local fluctuations of the macromolecular proteins at the residue level. In molecular dynamics simulations, RMSF describes the flexibility and motion intensity of a protein's amino acids throughout the simulation. A drug typically stabilizes the protein, and thereby modulates its enzymatic activity, by binding to it and reducing its flexibility. All compounds show the same tendency when compared to the respective reference ligand, i.e., they cause some fluctuation in the same protein region (Fig. 10A-F). However, all complexes generally have high rigidity and structural stability. Meanwhile, the Rg values of the AKT1-Kaempferol, MAPK1-Kaempferol, MAPK3-Kaempferol and TNF-Quercetin complexes stabilized at 2.16-2.25 nm; that of the RELA-Luteolin complex eventually stabilized at 2.80 nm, and that of the TP53-Caffeic Acid complex at 1.67 nm.
Hydrogen bond analysis
Because hydrogen bonding is one of the strongest non-covalent binding interactions, it is critical for understanding the binding affinity between ligands and proteins. The results indicated that the hydrogen bond numbers for the AKT1-Kaempferol, MAPK1-Kaempferol, MAPK3-Kaempferol, RELA-Luteolin, TNF-Quercetin, and TP53-Caffeic Acid complexes were 0-6, 0-10, 0-8, 0-8, 0-8, and 0-6, respectively (Fig. 12A-F). The number of hydrogen bonds produced by all protein-ligand complexes remained roughly constant throughout the simulation. Furthermore, the presence of persistent amino acid residues at the active site contributes to the complexes' overall structural stability.
Analysis of solvent accessible surface area
The solvent accessible surface area (SASA) is computed from the interface enclosed by solvent 52. Because the solvent behaves differently under different conditions, SASA can be used to examine protein conformational dynamics in a solvent context. The contact areas between the six complexes and water were comparable, and the small molecules had little effect on the protein-water interactions (Fig. 13A-F). The SASA values of the AKT1-Kaempferol, TP53-Caffeic Acid and MAPK3-Kaempferol complexes remained generally unchanged during the simulation, while the RELA-Luteolin complex decreased from an initial 230 nm² to 195 nm² and the TNF-Quercetin complex from an initial 215 nm² to 185 nm², indicating that the protein-ligand interactions have little effect on the characterization and stability of the protein molecular surface.
Discussion
Seasonal influenza epidemics of varying intensity present challenges to public health in the early twenty-first century, and influenza remains a significant cause of mortality 5. Antiviral therapy can be difficult to use because of mutations and the emergence of resistance 53. According to research, influenza virus infection ranges from mild upper respiratory tract infection with symptoms like fever, sore throat, runny nose, cough, headache, muscle pain, and fatigue to severe and occasionally fatal pneumonia caused by the influenza virus or secondary bacterial infection of the lower respiratory tract 54. In some instances, infection with the influenza virus can result in a variety of non-respiratory consequences that can affect the heart, central nervous system, and other organ systems 55,56. Human mortality rates and the severity of infection are frequently correlated with a lack of pre-existing immunity 57. TCM has been utilized for disease prevention and treatment based on systematic multi-target/multi-component techniques for thousands of years 58. In particular, TCM has long been utilized as a treatment method for illnesses connected to the flu 59.
Based on clinical and experimental research, GHQWG is a possible therapy for influenza. However, the bioactive chemical components and the underlying mechanism of GHQWG's anti-flu therapeutic actions are still unknown. As a result, we employed molecular docking and a network pharmacology technique to pinpoint possible GHQWG targets and modes of action in flu.
Using the ADME criteria, we screened 90 active GHQWG molecules from the TCMSP databases. Then, through GeneCards, OMIM, DisGeNET, TTD, and DrugBank, we collected 1996 targets for influenza illness. In all, 134 potential GHQWG-flu targets were found. Based on the network pharmacology analysis, the top eight active compounds of GHQWG were screened: quercetin, kaempferol, luteolin, β-sitosterol, wogonin, stigmasterol, caffeic acid, and isorhamnetin. Previous studies have proven that these chemicals are effective at treating the flu. The initial stage of influenza virus infection, which includes viral attachment, endocytosis, and viral-cell fusion, is inhibited by quercetin 60. Kaempferol can prevent the replication and autophagy caused by the influenza A virus 61. Luteolin reduces influenza A virus production in vitro by preventing the production of the coat protein I complex 62. β-Sitosterol shows strong in vitro antiviral activity against influenza A viruses 63. Wogonin has strong anti-influenza properties that are controlled by AMPK activation 64. Treatment with isorhamnetin can minimize the production of virus-induced reactive oxygen species (ROS), block the acidification of cytoplasmic lysosomes and the lipidation of microtubule-associated protein 1 light chain 3-B (LC3B), and stop influenza viruses from replicating 65. Studies have shown that stigmasterol is the major anti-hemagglutinin binding component and inhibits the spread of the influenza virus by inhibiting the activity of influenza virus neuraminidase and preventing the release of viral particles from infected cells 66. Based on the above findings, we can conclude that the active ingredients of GHQWG have an inhibitory effect on the influenza virus.
The results of the GO functional analysis demonstrated that a variety of biological processes, including the response to xenobiotic stimulus, positive regulation of gene expression, cellular response to cadmium ion, inflammatory response, positive regulation of the apoptotic process, positive regulation of transcription from RNA polymerase II promoter, lipopolysaccharide-mediated signaling pathway, positive regulation of cell proliferation, etc., were connected to the effects of GHQWG in influenza. Influenza A virus-induced IFN-alpha/beta is vital to the host's antiviral protection, activating the expression of the antiviral Mx, PKR, and oligoadenylate synthetase genes. IFN-alpha/beta also increases T cell survival, upregulates IL-12 and IL-18 receptor gene expression and, together with IL-18, stimulates NK and T cell IFN-gamma production and the development of a Th1-type immune response 67. The targets in this study were enriched in immune-related and inflammatory pathways, such as the IL-17, C-type lectin receptor, and TNF signaling pathways. Previous research has shown that inhibiting intraepithelial TNF signaling prevents CD8 T-cell-mediated lung injury during influenza infection 68. In a clinical study, children with influenza A virus pneumonia had higher serum levels of the cytokine interleukin-17 (IL-17) 69, which is essential for mediating the immune response to extracellular bacteria and fungi in the lung 70. Results from an in vivo study revealed that, compared to infected wild-type controls, H5N1-infected IL-17 knockout (KO) mice lost weight much more quickly, exhibited more pronounced lung immunopathology, and died much sooner. Additionally, B cell density in the lung was drastically decreased in IL-17 KO mice following viral infection, and chemokine-mediated migration was significantly reduced in cultured B cells from IL-17 KO mice. These findings demonstrate that IL-17 is crucial for promoting the migration of B cells to the site of influenza virus infection in the mouse lung 70. Influenza-induced mouse lung injury is lessened by modification of the epithelial TNF signaling pathway because it lowers epithelial chemokine expression and lung inflammatory infiltration 71,72. TNF triggers the expression of inflammatory genes, directly triggering inflammatory responses, but it also indirectly triggers cell death, inflammatory immune responses, and the development of disease 73.
Furthermore, molecular docking was performed to examine the six major target proteins (RELA, TP53, MAPK3, TNF, AKT1, and MAPK1) and the active chemicals obtained from TCMSP, including quercetin, kaempferol, luteolin, β-sitosterol, wogonin, stigmasterol, caffeic acid, and isorhamnetin. Binding affinities for the docking data ranged from −3.880 to −9.853 kcal/mol, indicating that all of the targets may be capable of docking with the active molecules. Among the six target proteins, TNF and AKT1 had the lowest binding affinities. Quercetin, kaempferol, and luteolin demonstrated good binding activity to the targets, implying that these chemicals may contribute to GHQWG's therapeutic benefits in flu. More experimental research is needed, however, to confirm and examine the hypothesized targets and regulatory processes.
Finally, we selected the six complexes with the highest free binding energy scores (AKT1-Kaempferol, MAPK1-Kaempferol, MAPK3-Kaempferol, RELA-Luteolin, TNF-Quercetin, and TP53-Caffeic Acid) and subjected them to molecular dynamics simulation analyses to further verify the docking results. These analyses showed that kaempferol, luteolin, quercetin, and caffeic acid were able to tightly dock to the individual targets and quickly reached a steady state. It should be noted that there were some restrictions on this study. First, because both the bioactive chemicals and the target information were gathered from the literature and databases, the reliability and accuracy of the predictions depended on the quality of those data. Second, this study employed a data mining methodology, and additional research using clinical trials and animals is required to validate the results.
Conclusion
For the treatment of the flu, this is the first time that bioinformatics techniques like network pharmacology, molecular docking, and molecular dynamics modeling have been used to systematically study the pharmacological and molecular mechanisms of action of GHQWG. These bioinformatic and computational investigations showed that the primary constituents of GHQWG with therapeutic actions against the flu may include quercetin, kaempferol, luteolin, β-sitosterol, wogonin, stigmasterol, caffeic acid, and isorhamnetin. Additionally, GHQWG can treat the flu by lowering pathologic harm, inflammatory reactions, and oxidative stress via several channels, including the TNF and IL-17 pathways.
Overall, the present study concentrated on the multi-component and multi-pathway nature of GHQWG and its mode of action. These results are anticipated to guide the use of GHQWG and its continued development for the treatment of flu.
Strengths and limitations
Notably, our study provided some fresh insights into GHQWG in flu therapy and revealed feasible biochemical pathways and potential pharmacological targets of GHQWG for the first time. However, a few limitations of our study must be addressed. First, because the findings of this study were not confirmed in genuine flu patients, future confirmation of these findings will necessitate the recruitment of actual flu patients. Second, additional in vivo and in vitro research is required to confirm the predicted mechanisms and pharmacological targets in order to confirm the potential therapeutic applicability of GHQWG for flu.
Figure 1. Workflow of the network pharmacological investigation strategy of GHQWG in the treatment of flu.
Figure 2. Herb-compound-target network. Blue arrows represent the herbs in GHQWG; yellow circle nodes are QH compounds, green circle nodes are CH compounds, orange circle nodes are JYH compounds, deep purple circle nodes are GHX compounds, light purple circle nodes are intersection components, pinkish-purple circle nodes are JJS compounds, dark green circle nodes are GMG compounds, navy blue circle nodes are NH compounds, bright yellow circle nodes are FF compounds, red-orange circle nodes are CZ compounds, and powder-colored circle nodes are LQ compounds. The edges represent the interactions between compounds and targets, and node size is proportional to degree.
Figure 4. Core target PPI network. The darker the color of a circle, the greater its importance in the network.
Figure 5. Histogram of the GO function analysis. BP is marked in teal, CC in sienna, and MF in steel blue. The bar graph was obtained with the Bioinformatics platform.
Figure 6. KEGG enrichment bubble diagram of the treatment of flu by GHQWG.
Figure 7. Heat map of binding energies.
Table 1. The molecular docking results of the hub genes with the main components of GHQWG (binding energies, in kcal/mol, of each target with quercetin, kaempferol, luteolin, β-sitosterol, wogonin and stigmasterol).
Combined Analysis of the Fruit Metabolome and Transcriptome Reveals Candidate Genes Involved in Flavonoid Biosynthesis in Actinidia arguta
To assess the interrelation between the change of metabolites and the change of fruit color, we performed a combined metabolome and transcriptome analysis of the flesh in two different Actinidia arguta cultivars, "HB" ("Hongbaoshixing") and "YF" ("Yongfengyihao"), at two different fruit developmental stages: 70d (days after full bloom) and 100d (days after full bloom). Metabolite and transcript profiles were obtained by ultra-performance liquid chromatography quadrupole time-of-flight tandem mass spectrometry and high-throughput RNA sequencing, respectively. The identification and quantification of metabolites showed that a total of 28,837 metabolites were obtained, of which 13,715 were annotated. In the comparison of HB100 vs. HB70, 41 metabolites were identified as flavonoids, 7 of which, showing significant differences, were identified as bracteatin, luteolin, dihydromyricetin, cyanidin, pelargonidin, delphinidin and (−)-epigallocatechin. Association analysis between the metabolome and transcriptome revealed two metabolic pathways with significant differences during fruit development, one of which was flavonoid biosynthesis; 14 structural genes in this pathway were selected for expression analysis, as well as 5 transcription factor genes obtained by transcriptome analysis. RT-qPCR results and cluster analysis revealed that AaF3H, AaLDOX, AaUFGT, AaMYB, AabHLH, and AaHB2 showed the best possibility of being candidate genes. A regulatory network of flavonoid biosynthesis was established to illustrate how differentially expressed candidate genes are involved in the accumulation of metabolites with significant differences, inducing red coloring during fruit development. Such a regulatory network linking genes and flavonoids reveals a system involved in the pigmentation of all-red-fleshed and all-green-fleshed A. arguta, suggesting this conjunct analysis approach is not only useful for understanding the relationship between genotype and phenotype, but is also a powerful tool for providing more valuable information for breeding.
Introduction
Kiwifruit (Actinidiaceae, genus Actinidia), a kind of perennial and deciduous plant, is one of the fruit trees that have been domesticated and cultivated successfully in the last century, and in recent years has been grown commercially worldwide [1,2]. As a fruit tree originating from China, the genus Actinidia comprises a total of 76 species and about 125 known taxa worldwide [3], but among these, only two species-Actinidia chinensis and A. deliciosa-have been cultivated commercially [4].

Quality assessment of the data extracted by XCMS software was conducted based on the overall MS signal intensity controlled by TIC (total ion chromatogram) (Figure S1) and on evaluation of the m/z peak width and the retention time peak width for each feature (Figure S2).

The original wiff format data were converted to the mzXML format using MSConvert software [27], followed by alignment and extraction of the peaks and calculation of the peak areas. The final statistics showed that 18,598 and 10,239 metabolites were obtained in the POS (positive) and NEG (negative) modes, respectively, 9016 and 4699 of which were annotated (Table 1). In Table 1, Mode indicates the mode of MS analysis, which is mainly divided into a positive ion mode and a negative ion mode; all metabolites indicates the number of substances extracted by XCMS software (version 3.2.0, UC Berkeley, CA, USA); all annotated indicates the number of metabolites annotated by level-one and level-two MS data; MS2 indicates the number of metabolites that could match not only the level-one m/z but also a level-two fragment ion in the database; MS1 PLANTCYC indicates the number of metabolites that could match the level-one m/z in the database; MS1 KEGG (Kyoto Encyclopedia of Genes and Genomes) indicates the number of metabolites assigned to KEGG pathways; POS, positive; NEG, negative.

Figure 1. The phenotype of Actinidia arguta cv. "HB" and "YF" at the two sampling stages, 70 DAFB and 100 DAFB: (A) green fruit "HB" at 70 DAFB, HB70; (B) red fruit "HB" at 100 DAFB, HB100; (C) green fruit "YF" at 70 DAFB, YF70; (D) green fruit "YF" at 100 DAFB, YF100.

The metabolites identified above were assigned to the KEGG and PLANTCYC databases. In all, 8070 and 4321 metabolites were classified into 18 KEGG second-grade pathways, 3580 and 2461 of which were classified into "metabolism" in the POS and NEG modes, respectively. Within the "metabolism" term, the top category was "global and overview maps", followed by "biosynthesis of other secondary metabolites", "metabolism of terpenoids and polyketides", "amino acid metabolism", "metabolism of cofactors and vitamins" and "carbohydrate metabolism" (Figure 2). 480 and 340 metabolites were assigned to "biosynthesis of other secondary metabolites" in the POS and NEG modes, respectively. Of these, 73 and 67 metabolites were involved in "flavonoid biosynthesis" and "anthocyanin biosynthesis" in the POS and NEG modes, respectively (Figure 3).

All 1267 and 360 metabolites obtained from level-two identification were assigned to the HMDB database, 550 and 250 of which were matched and classified into 11 HMDB super classes in POS and NEG, respectively. Of these, 305 metabolites were included in the "organic acids and derivatives" term, which represented the majority of all classes in the POS mode. Meanwhile, "organic acids and derivatives", which contained 59 metabolites, was the first class, followed by "organooxygen compounds" and "lipids and lipid-like molecules" in the NEG mode (Figure 4).
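As a concrete illustration of the raw-data conversion step described above, the following sketch drives ProteoWizard's msconvert over a directory of .wiff files; paths are placeholders:

```python
import pathlib
import subprocess

# Convert instrument .wiff files to mzXML with ProteoWizard's msconvert
# before XCMS peak picking; directory names are placeholders.
for wiff in pathlib.Path("raw").glob("*.wiff"):
    subprocess.run(["msconvert", str(wiff), "--mzXML", "-o", "mzxml"], check=True)
```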
Identified Metabolites Involved in Flavonoid Biosynthesis
In order to obtain quantitative information on the metabolites, 14,132 and 8400 high-quality metabolites, selected from the 18,598 and 10,239 total metabolites, were used for differential analysis (Table 2). Based on the quantitative results for the identified metabolites, the differential metabolites between group comparisons were analyzed based on fold-change and p-value. In the comparison between HB100 and HB70, a total of 2421 and 1838 metabolites were up-regulated and down-regulated, respectively (Table 3). In addition, a total of 41 of the high-quality metabolites were identified as flavonoids, 7 of which showed significant differences and were identified as bracteatin, luteolin, dihydromyricetin, cyanidin, pelargonidin, delphinidin and (−)-epigallocatechin (Tables S1 and 4).
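The fold-change/p-value screen can be sketched as below; the column layout and the cutoffs (|log2FC| ≥ 1, p < 0.05) are assumptions for illustration, since the text states only that fold-change and p-value were used:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Differential-metabolite screen between HB100 and HB70; file layout,
# sample names, and cutoffs are illustrative assumptions.
peaks = pd.read_csv("peak_areas.csv", index_col=0)   # metabolites x samples
hb70 = peaks[["HB70_1", "HB70_2", "HB70_3"]]
hb100 = peaks[["HB100_1", "HB100_2", "HB100_3"]]
log2fc = np.log2(hb100.mean(axis=1) / hb70.mean(axis=1))
pvals = stats.ttest_ind(hb100, hb70, axis=1).pvalue
sig = peaks[(abs(log2fc) >= 1) & (pvals < 0.05)]
print(len(sig), "differential metabolites in HB100 vs. HB70")
```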
Comprehensive Analysis of Metabolome and Transcriptome
In order to investigate the association between metabolites and genes involved in the same biological process (KEGG pathway), a comprehensive analysis of the metabolome and transcriptome was performed using Pearson's correlation coefficient [28,29]. The results showed that 646, 156 and 484 differentially expressed genes participated in 13, 18 and 12 pathways in the three group comparisons HB100 vs. HB70, HB100 vs. YF100 and YF100 vs. YF70, respectively (Table S3). Two pathways, "flavonoid biosynthesis" and "galactose metabolism", were common to all three group comparisons. In flavonoid biosynthesis, a total of 8 candidate DEGs were found (Figure 6).
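A minimal sketch of the gene-metabolite association step, correlating expression with metabolite abundance across shared samples; file names and the |r| cutoff are illustrative assumptions:

```python
import pandas as pd

# Pearson correlation between DEG expression and differential-metabolite
# abundance across shared samples; inputs and the |r| >= 0.8 cutoff are
# placeholders for illustration.
expr = pd.read_csv("deg_fpkm.csv", index_col=0)        # genes x samples
metab = pd.read_csv("dem_abundance.csv", index_col=0)  # metabolites x samples
corr = pd.DataFrame({m: expr.T.corrwith(metab.loc[m]) for m in metab.index})
pairs = corr.stack()                                   # (gene, metabolite) pairs
pairs = pairs[pairs.abs() >= 0.8]                      # strongly linked pairs
print(pairs.sort_values(ascending=False).head())
```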
RT-qPCR, Cluster and Phylogenetic Analysis
A total of 14 structural genes, comprising the 8 DEGs found above and 6 other structural genes involved in flavonoid biosynthesis, were selected for RT-qPCR, as well as 5 regulatory genes: AaMYBC1, AaMYB, AabHLH, AaHB1 and AaHB2 (Figure 7). The expression patterns of the three structural genes AaF3H, AaLDOX and AaUFGT were similar to those of the two regulatory genes AaMYB and AaHB2. The expression levels of AaF3H, AaLDOX and AaUFGT in "HB100" were significantly higher than those in "HB70" and "YF100" (Figure 7A). A similar expression pattern was detected for AaMYB and AaHB2, whose expression levels in "HB100" were higher than those found in "HB70" and "YF100" (Figure 7B). Cluster analysis showed that the genes AaF3H, AaLDOX, AaUFGT, AaMYB, AabHLH and AaHB2 were clustered into the same class (Figure 8), suggesting that the expression patterns of these genes were similar to each other. Because MYB is the biggest and most important transcription factor family involved in flavonoid regulation, AaMYB was used in a BLAST search for MYB sequences in other species, and a phylogenetic tree was constructed with the BLAST results. The results showed that AaMYB and CsMYB5a (MYB5a of Camellia sinensis) were clustered into one class (Figure 9).
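The relative expression values behind such RT-qPCR comparisons are conventionally computed with the 2^-ΔΔCt method; the sketch below uses invented Ct values purely for illustration, since no raw Ct data are given here:

```python
# Standard 2^-ddCt relative-expression calculation; Ct values and the
# reference gene are invented for illustration only.
def rel_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene relative to a calibrator sample."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# e.g. AaUFGT in HB100 versus the HB70 calibrator, normalized to a reference gene:
print(rel_expression(22.1, 18.0, 25.4, 18.2))  # ~8.6-fold up-regulation
```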
Regulatory Network of Flavonoid Biosynthesis
In order to better understand the relationship between metabolites and genes in flavonoid biosynthesis, all results for metabolites and genes were combined to establish a network, aiming to show the relationship between gene expression and metabolite accumulation more intuitively (Figure 10). Among the 14 structural genes, AaF3H, AaLDOX and AaUFGT were highly expressed in HB100 vs. HB70 in comparison with the other 11 structural genes. The 7 metabolites with significant differences in HB100 vs. HB70 in flavonoid biosynthesis were bracteatin, luteolin, dihydromyricetin, cyanidin, pelargonidin, delphinidin and (−)-epigallocatechin. It is generally known that transcription factors act by binding the promoters of structural genes. Therefore, we hypothesize that the three transcription factors AaMYB, AabHLH, and AaHB2 activate expression of AaF3H, AaLDOX and AaUFGT by binding the promoters of these three structural genes, inducing the accumulation of the 7 metabolites described above in flavonoid biosynthesis. Specifically, AaMYB, AabHLH, AaHB2, AaF3H, AaLDOX and AaUFGT were the candidate genes obtained from this study and could be used for further study to reveal the red-coloring mechanism of A. arguta.
Metabolites Were Obtained by Metabolome Analysis
Plant metabolomics, a new field in the post-genome era [30], describes the rule of change of metabolites in various tissues. Metabolites are the final products of the cellular biological regulation process, and their levels can be regarded as the response of plant development to genetic and environmental changes [31]. Therefore, metabolomic analysis can be used to investigate the relationship between biological processes and phenotypes; furthermore, some intuitive changes can also be observed at the metabolic level [32]. To date, several reports have examined the metabolic responses of different species in flavonoid biosynthesis, such as Fagopyrum esculentum [33], Camellia sinensis [34], and Ficus carica L. [35]. This research suggests that metabolome analysis plays a crucial role in explaining the molecular responses of flavonoid biosynthesis. In this study, through fruit metabolome analysis of the two different A. arguta cultivars "HB" and "YF" at two developmental stages, a total of 28,837 metabolites were obtained, of which 13,715 were annotated (Table 1). In addition, 22,532 high-quality metabolites were identified and selected for differential analysis (Table 2). The focus of our research was flesh coloring, so the HB100 vs. HB70 comparison was selected for further analysis. In this comparison, 41 metabolites were identified as flavonoids, 7 of which showed significant differences and were identified as bracteatin, luteolin, dihydromyricetin, cyanidin, pelargonidin, delphinidin and (−)-epigallocatechin (Table S1). This result provides the insight that differences in these 7 metabolites lead to the difference between red and green coloring in A. arguta flesh.
Transcription Factors Involved in Flavonoid Biosynthesis
Transcription factors are absolutely necessary for the regulation of gene expression [36]. Normally, transcription factor proteins act through the combination of their own DNA-binding domain with the cis-acting elements of their target genes [37,38]. In plants, transcription factors participate in various biological processes, including developmental regulation, defense elicitation, and stress responses [39][40][41][42][43]. Numerous studies have shown that flavonoid biosynthesis is regulated by the MBW complex, comprising MYB, bHLH and WD40 [44][45][46]. Therefore, finding transcription factors related to flavonoid biosynthesis is essential for investigating fruit coloring. In this study, 5 transcription factor genes were obtained by transcriptome analysis and used for expression analysis. The results showed that AaMYB, AabHLH and AaHB2 were highly expressed at HB100, indicating that these three transcription factors might play a key role in fruit coloring in A. arguta. Phylogenetic analysis revealed that AaMYB was highly homologous to CsMYB5a rather than to the other MYBs of A. chinensis, in which AcMYB75 [47], AcMYB110 [48] and AcMYBF110 [13] are the key MYB transcription factors controlling fruit coloring, indicating that the MYB transcription factor that plays a key role in regulating flavonoid biosynthesis might differ among Actinidia species.
Candidate Genes Are Involved in Regulating Fruit Coloring
Genes, including structural and regulatory genes, involved in flavonoid biosynthesis and its regulation have been found, studied and reported in many plants, including Actinidia [11,13,[15][16][17][18][19]. However, most of these genes have been obtained and identified using traditional techniques. Since transcriptome analysis has come to be regarded as a crucial way to study the expression level, structure and function of genes in order to reveal phenotypic traits, combined analysis of the transcriptome and metabolome has increasingly become a popular and practical tool for mining new genes involved in various metabolic pathways [49]. In this study, metabolome data coupled with transcriptome profiling were analyzed jointly to discover genes involved in flavonoid biosynthesis and thereby to find useful information explaining the phenomenon of red coloring in A. arguta fruit. Nineteen genes, comprising 14 structural genes and 5 transcription factor genes, were obtained and used for expression analysis; additionally, cluster analysis was conducted and a phylogenetic tree was constructed. The results showed that the structural genes AaF3H, AaLDOX and AaUFGT and the transcription factor genes AaMYB, AabHLH and AaHB2 were highly expressed at HB100, when the flesh presented a distinctly red color (Figure 7). In addition, the cluster analysis suggested that AaF3H, AaLDOX, AaUFGT, AaMYB, AabHLH and AaHB2 were clustered into one class, indicating that their expression patterns were similar to each other (Figure 8). Based on these results, a regulatory network of flavonoid biosynthesis was established to show the role of genes involved in the pathway more intuitively (Figure 10). Thus, the following model of action was derived: the three transcription factors AaMYB, AabHLH and AaHB2 interact with the promoters of the three structural genes AaF3H, AaLDOX, and AaUFGT to control their expression, inducing the accumulation of metabolites and the appearance of the red phenotype of A. arguta fruit.
This combined metabolome and transcriptome approach is an effective analytical method for explaining the relationship between key genes and metabolites involved in biosynthesis pathways. Using this method, we determined candidate genes and metabolites involved in the flavonoid biosynthesis pathway, providing valuable information and a useful reference for explaining the phenomenon of red coloring of A. arguta fruit. Nevertheless, the specific mechanism remains to be explored in further research.
Fruit Materials
Two different types of kiwifruit (Actinidia arguta), "Hongbaoshixing" ("HB", an all-red-fleshed A. arguta cultivar) and "Yongfengyihao" ("YF", an all-green-fleshed A. arguta cultivar), were selected as experimental materials. Both are tetraploids with similar genetic backgrounds. Two sampling stages of fruit development, defined in days after full bloom (DAFB), were used: 70 DAFB (70d, green fruit) and 100 DAFB (100d, the color-break stage, at which point the fruit start to turn red). "YF" fruit, selected as a control, were green throughout the whole development process and were also sampled at 70d and 100d (Figure 1). The fruit samples at the two developmental stages described above were harvested from the National Kiwifruit Germplasm Garden of Zhengzhou Fruit Research Institute, CAAS, Henan province, China. For each sample, 30 fruits were randomly collected from 6 kiwifruit trees, with every two trees set as one biological replicate; thus, all data were obtained from three independent biological replicates. All flesh samples of the fresh fruits were dissected using a blade, frozen immediately in liquid nitrogen, and then stored at −80 °C until further use.
Metabolite Extraction and Parameter Setting
The collected samples were thawed on ice, and metabolites were extracted with 50% methanol buffer (a 50% solution of methanol in distilled water). Briefly, 20 µL of sample was extracted with 120 µL of precooled 50% methanol, vortexed for 1 min, and incubated at room temperature for 10 min; the extraction mixture was then stored overnight at −20 °C. After centrifugation at 4000× g for 20 min, the supernatants were transferred into new 96-well plates. Three independent repetitions were performed for the extraction and subsequent analysis.
A high-resolution tandem mass spectrometer, TripleTOF5600plus (SCIEX, Cheshire, UK), was used to detect metabolites eluted from the column. The Q-TOF was operated in both positive and negative ion modes. XCMS software 3.2.0 (UC, Berkeley, CA, USA) was used to control the chromatograph and mass spectrometer.
Identification and Quantification of Metabolites
MSConvert was used to transform LC-MS raw data into the mzXML format, which was then processed by the XCMS, CAMERA and metaX toolbox, implemented in the R software [50][51][52]. The combined retention time (RT) and m/z data were used to identify each ion.
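For illustration, the combined RT and m/z matching logic can be sketched in a few lines of Python; the reference values and tolerances below are hypothetical, and the actual processing was performed with the R toolboxes cited above.

```python
# Illustrative sketch of ion identification by combined retention time (RT)
# and m/z matching. The reference entries and tolerances are hypothetical;
# the study's actual processing used XCMS, CAMERA and metaX in R.

REFERENCE = {  # hypothetical library: metabolite -> (RT in s, m/z)
    "cyanidin": (312.4, 287.0550),
    "luteolin": (405.1, 287.0556),
}

def identify_ion(rt, mz, rt_tol_s=15.0, mz_tol_ppm=10.0):
    """Return the first reference metabolite matching both RT and m/z."""
    for name, (ref_rt, ref_mz) in REFERENCE.items():
        ppm = abs(mz - ref_mz) / ref_mz * 1e6
        if abs(rt - ref_rt) <= rt_tol_s and ppm <= mz_tol_ppm:
            return name
    return None

print(identify_ion(313.0, 287.0548))  # -> 'cyanidin'
```

The two hypothetical entries share nearly the same m/z, which is exactly why the retention time is needed as a second matching dimension.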
RNA Sequencing
RNA isolation, purification and monitoring, and cDNA library construction and sequencing were performed as previously described [53]. Briefly, RNA purity, concentration and integrity were checked using the NanoPhotometer ® spectrophotometer (IMPLEN, Westlake Village, CA, USA), the Qubit ® RNA Assay Kit on a Qubit ® 2.0 Fluorometer (Life Technologies, Carlsbad, CA, USA), and the RNA Nano 6000 Assay Kit of the Agilent Bioanalyzer 2100 system (Agilent Technologies, Santa Clara, CA, USA), respectively. According to the manufacturer's instructions, the NEBNext ® Ultra™ RNA Library Prep Kit for Illumina ® (NEB, Ipswich, MA, USA) was used to generate sequencing libraries, which were then sequenced on an Illumina HiSeq platform.
Analysis of Transcription Factors and DEGs (Differentially Expressed Genes)
Transcription factors were predicted using iTAK software [54]. The identification and classification of transcription factors were conducted by previously described methods [55,56]. Based on a model of the negative binomial distribution, the DESeq R package (1.10.1) was used to perform differential expression analysis, providing statistical routines for determining differential expression from digital gene expression data. The false discovery rate was controlled using P values adjusted by the Benjamini-Hochberg approach. Genes with an adjusted p-value < 0.05 found by DESeq were defined as differentially expressed.
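For illustration, the Benjamini-Hochberg adjustment underlying this FDR control can be sketched as follows; this is a minimal stand-alone version, not the DESeq implementation itself.

```python
# Minimal sketch of the Benjamini-Hochberg adjustment used to control the
# false discovery rate; DESeq applies the same procedure internally.

def benjamini_hochberg(p_values):
    """Return BH-adjusted p-values in the original order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, p_values[i] * m / rank)
        adjusted[i] = prev
    return adjusted

pvals = [0.001, 0.008, 0.039, 0.041, 0.20]     # hypothetical raw p-values
padj = benjamini_hochberg(pvals)
degs = [i for i, p in enumerate(padj) if p < 0.05]  # DEG call at padj < 0.05
```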
RT-qPCR (Real-Time Quantitative PCR)
Total RNA was extracted using a modified CTAB (cetyltrimethyl ammonium bromide) method [16]. RNA quality and concentration were determined by 1.0% agarose gel electrophoresis and micro-volume ultraviolet spectrophotometry (Thermo NanoDrop 2000, Thermo Fisher Scientific, Waltham, MA, USA), respectively. Approximately 1 µg of total RNA was used for cDNA synthesis with the RevertAid™ First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, Waltham, MA, USA). The LightCycler ® 480 real-time PCR system with a 96-well plate was used to conduct the amplification reaction, consisting of 95 °C for 5 min, followed by 45 cycles of 10 s at 95 °C, 20 s at 60 °C, and 20 s at 72 °C in a volume of 10 µL. At the end of each experiment, a melt-curve analysis was carried out using the default parameters (5 s at 95 °C and 1 min at 65 °C). The Actinidia β-actin gene was used for normalization [57]. All analyses were repeated three times using biological replicates. Primer sequences are listed in Table S4.
Statistical Analysis
Relative expression was calculated using the 2^−ΔΔCt method [35], and GraphPad Prism 5 (GraphPad Software Inc., San Diego, CA, USA) was used for chart preparation. R 3.4.2 and MEGA6 were used for the heatmap and cluster analysis. IBM SPSS Statistics 20 was used to test for significant differences.
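For illustration, the 2^−ΔΔCt calculation can be sketched as follows; the Ct values in the example are hypothetical.

```python
# Minimal sketch of the 2^(-ddCt) calculation for relative expression;
# the Ct values are hypothetical, with beta-actin as the reference gene
# as in the qPCR protocol above.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene versus a calibrator sample."""
    d_ct_sample = ct_target - ct_ref              # normalize to beta-actin
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Example: a gene measured at HB100 relative to HB70 (hypothetical Ct values)
fold = relative_expression(ct_target=22.1, ct_ref=18.0,
                           ct_target_cal=25.3, ct_ref_cal=18.2)
print(fold)  # -> 8.0
```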
Supplementary Materials: Supplementary materials can be found at http://www.mdpi.com/1422-0067/19/5/1471/s1. Author Contributions: Y.L. designed the experiments, carried out the research, analyzed the data, and wrote and revised the manuscript. M.L., Y.Z., L.S. and W.C. gave important suggestions and assisted in data analysis. J.F. and X.Q. designed the study and revised the manuscript. All authors participated in this study and approved the final version of the manuscript.
Estimation of 10-Year Risk of Coronary Heart Disease in Nepalese Patients with Type 2 Diabetes: Framingham Versus United Kingdom Prospective Diabetes Study
Background: Predicting future coronary heart disease (CHD) risk with the help of a validated risk prediction function helps clinicians identify diabetic patients at high risk and provide them with appropriate preventive medicine. Aim: The aim of this study is to estimate and compare the 10-year CHD risks of Nepalese diabetic patients using the two most common risk prediction functions: the Framingham risk equation and the United Kingdom Prospective Diabetes Study (UKPDS) risk engine, which are yet to be validated for the Nepalese population. Patients and Methods: We conducted a hospital-based, cross-sectional study on 524 patients with type 2 diabetes. Baseline and biochemical variables of individual patients were recorded and CHD risks were estimated by the Framingham and UKPDS risk prediction functions. Estimated risks were categorized as low, medium, and high. The estimated CHD risks were compared using kappa statistics, Pearson's bivariate correlation, Bland-Altman plots, and multiple regression analysis. Results: The mean 10-year CHD risks estimated by the Framingham and UKPDS risk functions were 17.7 ± 12.1 and 16.8 ± 15 (bias: 0.88, P > 0.05), respectively, and were always higher in males and older age groups (P < 0.001). The two risk functions showed moderate convergent validity in predicting CHD risks, but differed in stratifying them and explaining the patients' risk profile. The Framingham equation predicted higher risk for patients usually below 70 years and showed better association with their current risk profile than the UKPDS risk engine. Conclusions: Based on the predicted risk, Nepalese diabetic patients, particularly those associated with increased numbers of risk factors, bear higher risk of future CHDs. Since this study is a cross-sectional one and uses externally validated risk functions, Nepalese clinicians should use them with caution, and preferably in combination with other guidelines, while making important medical decisions in preventive therapy of CHD.
Introduction
Type 2 diabetes mellitus, once considered a disease of the affluent world, is reaching an endemic scale in Nepal, leading to an increased burden on the national healthcare system. [1] Patients with type 2 diabetes bear up to sixfold higher risk of future coronary heart diseases (CHDs), equivalent to nondiabetic patients with preexisting heart disease. [2][3][4][5] Studies have shown that more than 50% of patients with type 2 diabetes die at an early age, mainly due to CHDs. [6] For this reason, they are treated as CHD patients. However, this may not always be effective because the actual CHD risk varies greatly among them. [7] Many international guidelines, therefore, continue to recommend estimation of CHD risk among such patients using a validated risk function. [8][9][10] Estimation and stratification of CHD risk help clinicians identify patients at high risk and provide them with appropriate personalized medicine to prevent such risk. [11] Comprehensive diabetes management programs based on risk stratification concepts have been shown to yield better clinical outcomes than those without. [12,13] Therefore, estimation and stratification of CHD risk provide a good basis for efficient management of diabetes mellitus.
A validated or recalibrated CHD risk prediction function utilizes a point scoring system that allows several risk factors to be considered together, calculates the CHD risk of a large number of people, and favorably influences the decisions of clinicians. [14,15] The two most widely adopted risk prediction functions are the Framingham risk equation and the United Kingdom Prospective Diabetes Study (UKPDS) risk engine. The Framingham risk equation was originally developed from a prospective study based on the general North American white population between 30 and 74 years of age with less than 10% diabetic participants. [16] This equation takes into account the cumulative effects of age, sex, total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), blood pressure (BP), smoking, and diabetes mellitus for prediction of the incidence risk of CHD. Modified versions of this risk equation developed for some European populations resulted in overestimation of the CVD risk in those populations. [17,18] One such modified version of the Framingham risk equation, the UKPDS risk engine, was developed for a large cohort of newly diagnosed European patients with type 2 diabetes. It is more diabetes-specific than the Framingham risk equation as it includes variables such as the duration of diabetes and the level of glycated hemoglobin (HbA1c). [19] While some countries have adopted these two risk prediction functions after appropriate calibration, their performance remains untested for the Nepalese population, [20] which has a different genetic make-up and cardiometabolic risk profile from the European population. [21] It is, therefore, necessary to assess their predictive performance before they can be adopted for the Nepalese population. While a complete assessment of their predictive potential requires a population-based longitudinal study, we conducted only a hospital-based, cross-sectional study to use them for the estimation of CHD risk among Nepalese patients with type 2 diabetes.
Study design and patients
We carried out a hospital-based, cross-sectional study from July 2012 to June 2013 at Manipal Teaching Hospital (MTH), Pokhara, Nepal. A total of 524 type 2 diabetic patients aged 32-74 years from different outpatient departments of MTH were enrolled for this study. The study protocol was approved by the institutional ethical committee and informed consent was obtained from all the patients.
Patients were diagnosed as having type 2 diabetes when they fulfilled the World Health Organization (WHO) diagnostic criteria for diabetes mellitus, [22] were 30 years or older at the time of diagnosis, had not undergone insulin therapy for a year after the diagnosis, and had no history of diabetic ketoacidosis. Patients with acute or chronic complications, atrial fibrillation, a previous history of CHDs, or antilipemic treatment were excluded from this study. Demographic, clinical, and biochemical data of the patients were collected from personal interviews using a preformed set of questionnaires, anthropometric measurements, and biochemical analyses of their blood samples. The primary variables recorded included age, sex, waist circumference (WC), waist-hip ratio (WHR), body mass index (BMI), BP (systolic (SBP) and diastolic (DBP)), fasting plasma glucose (FPG), HbA1c, duration and treatment status of diabetes and hypertension (HTN), smoking habit, triglycerides (TG), total cholesterol (TC), HDL-C, and LDL-C. Patients who were taking oral hypoglycemic drugs with or without insulin were considered to be under diabetes treatment.
Measurement of anthropometric and physiological variables
Height, weight, and waist and hip circumferences of the study patients were measured using standard protocols, and BMI and WHR values were calculated. BMI and WC status were classified according to recent WHO guidelines for the South Asian population. [23] Patients were considered to have general obesity when their BMI was ≥25 kg/m² and central obesity when their WC was ≥90 cm (for men) or ≥80 cm (for women). SBP and DBP were measured in triplicate using a digital sphygmomanometer (TaiDoc Technology Corporation, Taiwan) and categorized according to the seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. [24]

Laboratory measurement of biochemical variables

A total of 5 ml of fasting venous blood was drawn from each study patient and divided into fluoride-oxalate vials, ethylenediaminetetraacetic acid (EDTA) vacutainers, and plain test tubes. FPG was measured in blood collected in fluoride-oxalate vials by the glucose oxidase/peroxidase method. HbA1c was measured in the EDTA-mixed blood by the ion-exchange resin method. Serum lipids (TG, TC, and HDL-C) were directly measured in the plain blood, and the value of LDL-C was calculated using the Friedewald formula. [25] All these parameters were analyzed using a semiautomated chemistry analyzer (Humalyzer-3500) and ready-to-use reagent kits according to the manufacturer's instructions (Human Diagnostics, Germany). Serum lipid reference levels were based on the Third Report of the National Cholesterol Education Program Adult Treatment Panel (NCEP ATP III) guideline, [26] with hypercholesterolemia being defined as TC >200 mg/dl, high LDL-C as >100 mg/dl, hypertriglyceridemia as TG >150 mg/dl, and low HDL-C as <40 mg/dl. Dyslipidemia was defined as the presence of one or more abnormal serum lipid concentrations, while metabolic syndrome was defined according to the Harmonized criteria. [27]

Estimation of the 10-year CHD risk

The Framingham risk equation and UKPDS risk engine were used for the estimation of the 10-year CHD risk of each study patient. The Framingham risk was estimated by the sex-specific, LDL-C-based prediction equation, [14] while the UKPDS risk was estimated by the offline risk engine version 2. [28] For the UKPDS risk estimation, study patients were treated as Asian Indians. Estimated CHD risks were then categorized as low (<10%), medium (10-20%), and high (>20%).
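For illustration, this three-level categorization corresponds to a simple threshold function; a minimal sketch:

```python
# Minimal sketch of the risk stratification used in this study:
# low (<10%), medium (10-20%), and high (>20%) 10-year CHD risk.

def stratify(risk_percent):
    if risk_percent < 10:
        return "low"
    if risk_percent <= 20:
        return "medium"
    return "high"

for r in (4.2, 15.0, 31.7):
    print(r, stratify(r))  # low, medium, high
```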
Statistical analysis
Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 17.0 for Windows (SPSS, Chicago, IL, USA), XLSTAT, and NumXL. Data for categorical variables were expressed as number and percentage (N, %) or 95% confidence interval (CI). Numerical data for continuous variables were expressed as mean ± standard deviation. Pearson's chi-square test (asymptotic significance (asymp. sig.), two-sided), the independent samples test (sig., two-tailed), and the Wilcoxon signed-rank test (asymp. sig., two-tailed) were used to test the statistical significance of the differences between the proportions and mean values of two or more groups of variables.
The agreement between the Framingham and UKPDS risk prediction functions in classifying patients into different risk groups was determined by the kappa statistic. The level of agreement was categorized as poor, κ ≤ 0.20; fair, κ = 0.21-0.40; moderate, κ = 0.41-0.60; substantial, κ = 0.61-0.80; and very good, κ > 0.80. [32] Bland-Altman analysis was performed using XLSTAT to compare the convergent validity of the two risk functions. The equations for the lines and the correlation coefficients were obtained by linear regression. Pearson's bivariate correlation and stepwise linear multiple regression analyses were performed to assess the extent of association between the predicted risks and the CHD risk factors present in the study patients. Pearson's correlation coefficient (r) values of ±1 were interpreted as perfect correlation, r-values between ±0.7 and ±0.9 as strong correlations, r-values in the range ±0.4 to ±0.6 as moderate correlations, r-values between ±0.1 and ±0.3 as weak correlations, and an r-value of 0 as no correlation. Kernel density estimation, performed using NumXL, was used to plot the frequency distribution of the 10-year predicted CHD risk scores. Tests were considered statistically significant when P < 0.05.
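For illustration, the two agreement measures used here can be computed as in the sketch below; the example data are hypothetical, and the study itself used SPSS, XLSTAT, and NumXL.

```python
# Minimal sketch of Cohen's kappa (agreement of risk categories) and the
# Bland-Altman bias with 95% limits of agreement; all data are hypothetical.

def cohens_kappa(a, b, categories=("low", "medium", "high")):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n             # observed agreement
    p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

def bland_altman(x, y):
    diffs = [xi - yi for xi, yi in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)         # 95% limits

fgm = ["high", "medium", "low", "high", "medium"]
ukpds = ["medium", "medium", "low", "high", "low"]
print(round(cohens_kappa(fgm, ukpds), 2))                     # -> 0.41

bias, limits = bland_altman([17.5, 9.2, 24.0], [16.1, 10.0, 20.5])
```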
Results

The 10-year CHD risks estimated by the Framingham and UKPDS risk functions, their stratification into low, medium, and high risk groups, and their statistical agreement are shown in Table 2. The mean CHD risks estimated by the two risk prediction functions did not differ significantly (bias = 0.88, P = 0.16) and were always higher in males (P < 0.001). The two risk prediction functions showed fair agreement (κ = 0.39, 95% CI (0.33-0.45), P < 0.001) in classifying the patients into low, medium, and high risk groups. There were 166 (31.7%) patients at low, 167 (31.9%) at medium, and 191 (36.4%) at high risk according to the Framingham risk equation, while 224 (42.7%) patients were at low, 148 (28.2%) at medium, and 152 (29%) at high risk according to the UKPDS risk engine. Both functions also identified more males than females (P < 0.05) at medium and high risk. Patients with obesity, poor glycemic control, longer duration of diabetes, dyslipidemia, HTN, or a current smoking habit had higher CHD risk than those without [Table 3]. However, the CHD risk estimated for such patients by the UKPDS risk engine was significantly lower than that estimated by the Framingham risk equation. Both predicted 10-year CHD risks increased gradually with the age of the patients, although the overall increase was always higher in males [Figure 1]. Except for the age groups 40-44 and 70-74 years, the two predicted CHD risks showed substantial overlap with each other.
The Framingham-estimated CHD risk showed significant correlation with more of the risk factors prevalent in the study patients than the UKPDS-estimated risk. Surprisingly, neither of the estimated CHD risks showed significant correlation with the BMI of the patients [Table 4]. Age, sex, LDL-C, HDL-C, and DBP were found to be strong predictors of the Framingham risk, while only age, sex, and LDL-C were identified as strong predictors of the UKPDS risk [Table 5].
The Kernel density distribution plot of the predicted CHD risks is shown in Figure 2. The highest Kernel densities for the Framingham and UKPDS risks were at 3.7 and 2.2, respectively. Despite a substantial overlap, the density of the UKPDS risk distribution was more concentrated towards the higher side of the risk spectrum than that of the Framingham risk. The two risks showed a nonlinear association with each other [Figure 3]. The difference showed a positive bias (0.88, 95% CI −2.11, 0.34) between the two risk prediction functions, with the majority of the differences falling within the range of −28.9 to 27.1. The distributions of the differences were heteroscedastic, with a cone-shaped pattern suggesting greater variability among patients with higher CHD risk [Figure 4].
Discussion
A validated CHD risk prediction function helps clinicians identify individuals in a high risk group and devise the most appropriate and cost-effective personalized therapeutic approach. Accurate prediction of future CHD risk among type 2 diabetes patients, as well as the general population, is not yet possible in Nepal due to a lack of validated or calibrated risk prediction functions. [20] There are examples where CHD risk prediction functions developed elsewhere have been imported and utilized for the local population after proper calibration and adjustment. [30,31] Normally, a large, population-based prospective study is required to validate such external risk prediction functions before they can be imported and fully utilized for the local population. However, in the absence of such a study, which is usually costly and time consuming, we conducted a hospital-based, cross-sectional study to obtain a snapshot of their risk prediction potential and comparative performance in the forms that are not yet validated for the Nepalese diabetic population. We hope that this study provides baseline data and opens the avenue for future validation or development of risk prediction functions in Nepal.
Like other diabetic patients, our patients were associated with many established CHD risk factors such as smoking, obesity, poor glycemic control, dyslipidemia, and HTN. The prevalence of many of these risk factors was significantly higher in males, an observation also supported by studies conducted among other subsets of the Nepalese population. [32,33] The presence of many of these risk factors, including insulin resistance and obesity, has been shown to be strongly associated with future CHD events in diabetic people of all ethnic origins. [34][35][36] However, the presence of multiple risk factors does not necessarily imply that all of our patients are already at high risk.

The overall CHD risks predicted by the two risk functions did not differ significantly, but showed a gender-wise variation, with males showing 1.2 and 1.8 times higher risk according to the Framingham and UKPDS risk functions, respectively. As might be expected, the CHD risks predicted by both risk functions were highest among older patients of either sex associated with multiple risk factors. Studies on other populations have shown similar results. [34,35] Although the two predicted CHD risks showed substantial overlap, they did not show strong convergent validity. We found only a moderate correlation between the two, and found differences in classifying our patients into low, medium, and high risk groups. The mean CHD risk predicted by the UKPDS risk engine was lower in about 35% of diabetic patients, particularly in females, who were associated with multiple risk factors and classified under medium or high risk groups according to the Framingham risk equation. These diabetic patients, who were below 70 years, met the criteria for preventive therapy using aspirin and statins. On the other hand, the UKPDS-estimated risk was higher for those male patients who were above 70 years of age, dysglycemic, and chronically diabetic. The Framingham-estimated CHD risk better accounted for the synergetic effects of the major classical risk factors prevalent in the study patients, particularly increased age, sex, HTN, decreased serum HDL, and increased LDL cholesterol. Contrary to our expectations, the UKPDS risk accounted for only a few risk factors, such as age, sex, and LDL-cholesterol. For example, it did not take into account the effect of the HbA1c level, an important parameter on which the risk engine was based. We expect that this lack of association with HbA1c might be due to the small number of diabetic patients in our study who had poor glycemic control (>6.5%). The CHD risk estimated by a properly validated risk prediction function is expected to show association with the majority of risk factors such as age, sex, obesity, HTN, dyslipidemia, poor glycemic control, and duration of diabetes. This is because keeping many of these risk factors under control has been shown to lower the CHD risk significantly. [17,37] Risk prediction functions are statistical models that predict CHD risk reflecting the cumulative effect of the established risk factors present in the subjects under study. Hence, it is expected that the higher the number of established risk factors present, the higher the predicted risk, although this may not happen in reality. We had expected the UKPDS risk engine to predict higher risk for our diabetic patients than the Framingham risk equation, as the former is believed to be more diabetes-specific than the latter.
However, the UKPDS risk engine actually estimated a lower than expected risk for our diabetic patients who were associated with multiple risk factors and below 70 years. The Framingham risk equation, on the other hand, predicted a higher risk for this group of patients and showed better association with their existing risk profile. However, this risk equation estimated a lower than expected risk for patients who were older, centrally obese, and not under diabetes treatment. These observations suggest that neither of these risk prediction functions may reliably be used to predict the CHD risk of a wider spectrum of Nepalese diabetic patients until they are validated locally. Studies conducted on other similar populations have also raised questions about their reliability in predicting accurate CHD risk. [17,18] Some studies have even suggested that these risk prediction functions may now be outdated for longstanding diabetic patients due to improvements in diabetic medications and clinical care since the time of their inception, and that their refinement for better reflection of current risk profiles, diagnostics, and medications may therefore be essential. [38,39] Moreover, since these risk prediction functions were developed for western white Caucasian populations, it is also possible that they do not accurately reflect the CHD risk of South Asians, who have a different genetic makeup and risk profile. In light of this, the British Cardiac Society has clearly warned against the generalization of risk prediction functions to South Asians in the absence of validated models. [40] The strength of our study is based on the enrollment of clearly defined and uncomplicated type 2 diabetic patients with no previous history of CHDs. The patients were from different socioeconomic strata and ethnic groups and hailed from different areas of mid-western Nepal, and are therefore expected to be a good representation of the diabetic population in the country. Our study has for the first time predicted the 10-year CHD risk of a subset of the Nepalese diabetic population using the two most common risk prediction functions and attempted a basic comparison of their predicted risks. It has established that these risk functions show moderate agreement in predicting CHD risk in diabetic patients. Our study also informs Nepalese clinicians that they should use these risk functions only as references, along with other established guidelines, while making important decisions regarding the prevention and treatment of patients at higher risk of a CHD event. Moreover, it also provides baseline data for future validation of these and other risk prediction functions in Nepal. The major limitation of our study is that we could not calibrate these risk functions against the Nepalese population or enroll study patients that could better represent the general population of Nepal.
Conclusion
In conclusion, the Framingham and UKPDS risk prediction functions, which are yet to be validated for the Nepalese diabetic population, showed moderate convergence in predicting 10-year CHD risk, despite their differences in classifying diabetic patients into different risk groups. The Framingham risk equation predicted higher CHD risk and showed better association with the current risk profile than the UKPDS risk engine. However, both risk functions could not fully account for the complete risk profile of the study patients and, therefore, their performance for the Nepalese diabetic population remains questionable until they are locally validated or calibrated. The availability of a population-specific validated or calibrated risk function would greatly assist Nepalese clinicians in mitigating CHD-related morbidity and mortality in diabetic patients.
Rapid detection of CITES-listed shark fin species by loop-mediated isothermal amplification assay with potential for field use
Shark fin is a delicacy in many Asian countries. Overexploitation of sharks for shark fin trading has led to a drastic reduction in shark populations. To monitor international trade of shark fin products and protect the endangered species from further population decline, we present rapid, user-friendly and sensitive diagnostic loop-mediated isothermal amplification (LAMP) and effective polymerase chain reaction (PCR) assays for all twelve CITES-listed shark species. Species-specific LAMP and PCR primers were designed based on cytochrome oxidase I (COI) and NADH2 regions. Our LAMP and PCR assays have been tested on 291 samples from 93 shark and related species. Target shark species could be differentiated from non-target species within three hours from DNA extraction to LAMP assay. The LAMP assay reported here is a simple and robust solution for on-site detection of CITES-listed shark species with shark fin products.
Materials and Methods
Sources of samples and genetic identification of samples. In total, 291 samples from 93 shark and related species were collected (Table 1). Among the 291 samples, there were 94 dried processed fin samples, 174 frozen tissues, and 22 genomic DNA samples. For the 12 CITES-listed shark species, at least two samples were collected for each species. No live specimens were involved in the sample collection; the samples were either dry products from the market or donated by various institutions. All samples, including dried processed fin samples, were stored at -80 °C until DNA extraction and downstream experiments.
In order to ensure the species identity of the samples and avoid misidentification of samples by the providing parties, DNA of all samples was extracted and amplified by PCR. PCR products were sequenced, and the sequencing results were compared to the NCBI database using BLAST analysis. DNA was extracted from the samples using the Biomed Genomic DNA/Tissue Extraction Kit (Biomed, Beijing, China) according to the manufacturer's instructions (~60 minutes). PCR was performed to amplify the COI gene using the universal fish primers FISHCOILBC_ts (5′-CACGACGTTGTAAAACGACTCAACYAATCAYAAAGATATYGGCAC-3′), FISHCOIHBC_ts (5′-GGATAACAATTTCACACAGGACTTCYGGGTGRCCRAARAATC-3′) 32 , FishF1 (5′-TCAACCAACCACAAAGACATTGGCAC-3′) and FishR1 (5′-TAGACTTCTGGGTGGCCAAAGAATCA-3′) 33 for all samples, with an expected product size of ~700 bp. Sanger sequencing of PCR products purified with a gel extraction kit (Biomed, Beijing, China) was performed by Tech Dragon Ltd., Hong Kong. Samples were identified using BLAST search against nucleotide sequences available on GenBank. Top-match results with a sequence similarity of at least 99% were used for species identification 34 . Sequences of the COI, NADH2 and 12S rRNA regions of sharks (12,265, 1,761, and 327 sequences, respectively) were downloaded from the NCBI database for alignment. Species-specific primers were designed based on nucleotides unique to each of the 12 CITES-listed species.
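For illustration, the top-match, ≥99%-identity rule can be expressed as a short filter over tabular BLAST output; the file name is hypothetical and the standard -outfmt 6 column layout is assumed.

```python
# Minimal sketch of filtering BLAST hits for species identification:
# keep the top match per query only if percent identity >= 99%.
# Assumes tabular output (-outfmt 6); "hits.tsv" is a hypothetical file name.

def assign_species(blast_tsv, min_identity=99.0):
    best = {}
    with open(blast_tsv) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            query, subject = fields[0], fields[1]
            identity, bitscore = float(fields[2]), float(fields[11])
            # Keep the highest-scoring hit per query sequence.
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore, identity)
    return {q: s for q, (s, _, ident) in best.items() if ident >= min_identity}

# species = assign_species("hits.tsv")
```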
Design of species-specific LAMP primers and nucleic acid amplification assays using LAMP techniques. Thirteen sets of LAMP primers were designed using the software PrimerExplorer V5 (Eiken Chemical Co. Ltd., Tokyo, Japan) and Primer-BLAST (https://www.ncbi.nlm.nih.gov/tools/primer-blast/). Each set contains at least four LAMP primers (external primers F3 and B3; forward and backward internal primers FIP and BIP), complementary to the species-specific sequences of the 12 target shark species or to sequences conserved among all shark species as an internal control. Species-specific LAMP primers were designed based on the COI region for pelagic thresher shark, common thresher shark, great white shark, basking shark, whale shark, scalloped hammerhead shark, great hammerhead shark and smooth hammerhead shark. For bigeye thresher shark, silky shark, oceanic whitetip shark and porbeagle shark, the species-specific LAMP primers were designed based on the NADH2 region. The primer set for the internal control was based on the 12S rRNA gene. For great white shark, porbeagle shark, great hammerhead shark and the internal control, loop primers (LF and/or LB) were added to the LAMP reactions. The sequences of all LAMP primers and the corresponding reaction conditions are shown in Table 2. LAMP reactions for target shark species were tested against all collected non-target species using the Isothermal Mastermix Amplification Kit (OptiGene, Horsham, UK) on a Genie II instrument (OptiGene, Horsham, UK) in triplicate. Each LAMP reaction mixture contained 1.0 μL of 10.0 ng/μL template DNA, 7.5 μL of Isothermal Mastermix, 0.5 μL each of F3 (5 μmol/L), B3 (5 μmol/L), FIP (50 μmol/L) and BIP (50 μmol/L), and, for great white shark, porbeagle shark, great hammerhead shark and the internal control, 0.5 μL of LF (25 μmol/L) and/or LB (25 μmol/L), in a final volume of 12.5 μL, as listed in Table 2. LAMP was performed at the corresponding temperature and reaction time (Table 2). Under this workflow, each LAMP reaction for one sample costs around USD $0.6. The time from DNA extraction to result analysis was approximately 2-3 hours for 14 samples, one positive control and one negative control per run using Genie II.

Design of species-specific PCR primers and nucleic acid amplification assays using PCR techniques. Thirteen pairs (12 target shark species and one internal control) of PCR primers were designed using Primer-BLAST (https://www.ncbi.nlm.nih.gov/tools/primer-blast/). Species-specific PCR primers for bigeye thresher shark and oceanic whitetip shark were designed on the NADH2 region, whereas the primers for the rest were designed on the COI region. Sequences of primers and the corresponding PCR conditions are shown in Table 2. Species-specific PCRs were tested against all collected non-target species using GoTaq G2 Flexi DNA Polymerase (Promega, Wisconsin, USA) in a T-100 thermal cycler (Bio-Rad, California, USA) in triplicate. Each PCR mixture contained 10 ng/μL template DNA, 6 μL of 5X PCR buffer, 3 μL of MgCl2 (25 mmol/L), 0.6 μL of dNTP mixture (10 mmol/L each), 1.5 μL forward primer (10 μmol/L), 1.5 μL reverse primer (10 μmol/L) and 0.2 μL Taq polymerase (5 U/μL), in a final volume of 30 μL. PCR products were mixed with 6X loading dye in a ratio of 5:2 and visualized in a 1.5% agarose gel. Fragment sizes were compared with the GeneRuler 100 bp DNA Ladder (Thermo Scientific, USA). Under this workflow, each PCR cost around USD $0.25.
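For illustration, the 12.5 μL LAMP reaction described above scales straightforwardly to a master mix. The sketch below omits the loop primers used for some assays, and the 10% overage and the water volume (inferred from the stated final volume) are assumptions, not part of the published protocol.

```python
# Minimal sketch scaling the 12.5 uL LAMP reaction described above to a
# master mix for n samples. The 10% pipetting overage and the water volume
# (inferred from the 12.5 uL final volume) are assumptions.

PER_REACTION_UL = {
    "template DNA (10 ng/uL)": 1.0,   # added per well, not to the mix
    "Isothermal Mastermix": 7.5,
    "F3 (5 umol/L)": 0.5,
    "B3 (5 umol/L)": 0.5,
    "FIP (50 umol/L)": 0.5,
    "BIP (50 umol/L)": 0.5,
}

def master_mix(n_samples, overage=0.10):
    """Volumes (uL) of a master mix excluding template, for n samples."""
    n = n_samples * (1 + overage)
    mix = {k: v for k, v in PER_REACTION_UL.items() if "template" not in k}
    mix["nuclease-free water"] = 12.5 - sum(PER_REACTION_UL.values())  # 2.0 uL/rxn
    return {k: round(v * n, 1) for k, v in mix.items()}

print(master_mix(14))  # a 14-sample run plus controls would scale similarly
```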
Sensitivity and specificity of the LAMP and PCR assays. The sensitivity of the LAMP and PCR assays was determined using genomic DNA of target species that was adjusted to a concentration of 10.0 ng/μL and diluted to 5.0 ng/μL, 1.0 ng/μL, 0.4 ng/μL, 0.2 ng/μL, and 0.1 ng/μL. The specificity of the LAMP and PCR assays was determined by amplification of the 93 shark and related species with the species-specific primers. All amplifications were performed in triplicate.
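For illustration, the limit of detection reported below is simply the lowest concentration of the dilution series at which all triplicates amplified; the replicate outcomes in the sketch are hypothetical.

```python
# Minimal sketch of calling the limit of detection from the dilution series:
# the lowest DNA concentration (ng/uL) at which all triplicates amplified.
# The replicate results below are hypothetical.

def detection_limit(results):
    """results: {concentration: [bool, bool, bool]} for triplicate runs."""
    positives = [c for c, reps in results.items() if all(reps)]
    return min(positives) if positives else None

series = {10.0: [True] * 3, 5.0: [True] * 3, 1.0: [True] * 3,
          0.4: [True] * 3, 0.2: [True, True, True],
          0.1: [False, True, False]}
print(detection_limit(series))  # -> 0.2
```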
Results
To evaluate the specificity of the LAMP and PCR assays for the CITES-listed shark species, all species-specific primers (Table 2) were tested on their target species and against all non-target species listed in Table 1. All samples were successfully amplified using the universal COI primers 32,33 and their identity confirmed using BLAST analysis. Most external primers (F3 and B3) of the LAMP primer sets were the same as their PCR forward and reverse primers (LP and RP), except for silky shark, porbeagle shark and the internal control, as their LAMP and PCR primer sets were designed based on different regions. The expected size of the target amplicon of each primer set is listed in Table 2.
Species-specific LAMP assays. The LAMP assays for the 12 CITES-listed shark species specifically amplified only their target species, and non-target species were not detected (Fig. 1). Three independent experiments were performed for each assay. The melting temperatures of the LAMP products of the 12 assays were between 81.1 and 86.6 °C. The melting curves of target species with their corresponding LAMP assays were identical, and no melting curve was found for non-target species, demonstrating the specificity of these LAMP assays (Fig. 2). Regarding sensitivity, the detection limit of the LAMP assay was 0.2 ng/μL for pelagic thresher shark, bigeye thresher shark, oceanic whitetip shark, porbeagle shark, scalloped hammerhead shark and smooth hammerhead shark; 0.4 ng/μL for great white shark, silky shark, and great hammerhead shark; and 5.0 ng/μL for common thresher shark, basking shark and whale shark (Fig. 3). For the internal control of the LAMP assay, a positive result was found with all samples and no amplification was found with the negative control. The LAMP assay for the internal control was able to detect the 12 CITES-listed shark species at concentrations equivalent to the detection limits of their corresponding species-specific LAMP assays (Supporting Information). Apart from the real-time amplification curve and melting curve obtained from Genie II, positive LAMP signals could be observed by the naked eye after addition of SYBR Green I dye. As shown in Fig. 4, under ambient white light, all positive LAMP reactions turned yellow, while negative reactions remained brownish orange (Fig. 4a,c). The lower panel (Fig. 4b) shows the limit of detection of each LAMP assay under visual detection. These features allow rapid on-site screening.
Species-specific PCR assays.
All CITES-listed shark species were specifically amplified by their corresponding PCR assays with the expected sizes (Table 2), and no amplification was found for non-target species, demonstrating the specificity of each of the PCR assays (Supporting Information). The limit of detection of each PCR assay was 0.2 ng/μL for porbeagle shark and bigeye thresher shark; 0.4 ng/μL for common thresher shark, oceanic whitetip shark, and smooth hammerhead shark; 5.0 ng/μL for pelagic thresher shark, great white shark, silky shark, basking shark, scalloped hammerhead shark and great hammerhead shark; and 10.0 ng/μL for whale shark (Supporting Information). All samples showed positive results in the PCR assay for the internal control, and no amplification was found in the negative control. The internal control could be successfully amplified from the 12 CITES-listed shark species at DNA concentrations equivalent to the limits of detection of their corresponding species-specific PCR assays (Supporting Information).
Discussion
In this work, we have developed sensitive, simple to use, and, to our knowledge, the fastest species-specific assays using the LAMP technique, as well as simple and low-cost PCR assays, to authenticate all 12 CITES-listed shark species. The LAMP and PCR assays we developed can successfully discriminate their target species from the other 92 shark and related species, including species commonly found in the Hong Kong shark fin market 12 .
We designed our primers based on the COI and NADH2 loci for the CITES-listed shark species and on the 12S rRNA locus for the internal control. Previous studies of species-specific primers for shark species identification have mostly focused on the ITS2 locus [17][18][19][20] . The COI locus, a standard marker for fish and shark DNA barcoding 16,32,33,35,36 , is highly species-specific, and the number of nucleotide differences between congeneric species is sufficient for designing species-specific primers. For common thresher shark, silky shark, oceanic whitetip shark and porbeagle shark, the NADH2 locus is a better region for designing species-specific primers, due to high nucleotide sequence differences between the target species and the corresponding congeneric species 37 . The high degree of nucleotide sequence difference allowed us to design LAMP primers targeting four to six species-specific regions instead of only two for conventional PCR. Our LAMP assays are highly specific, especially for species with many congeneric species, such as the silky shark and the oceanic whitetip shark. Furthermore, care was taken to avoid sites of known intraspecific variation among sharks from different regions or ocean basins during primer design, so that amplification success would not be affected by intraspecific nucleotide variation. In contrast, the 12S rRNA locus is a suitable target for designing primers for the internal control, as it has high sequence homology among different shark species. This allowed us to design six LAMP primers that can amplify all of the shark species in this work.
Samples of 91 shark species and 2 Chimaera species were collected for testing the specificity of our LAMP and PCR assays. These species covered 56 out of 61 species found in the Hong Kong fin market in 2014-2015 12 ; the other five species were rarely found in the fin trade. For the LAMP and PCR assays targeting the silky shark and the oceanic whitetip shark, we tested 21 out of 31 congeneric species from Carcharhinus, and a total of 76 Carcharhinus samples were tested. Although we were not able to test the assays on all 31 recognised Carcharhinus species 37 , we have confidence in the specificity of our assays for three reasons. First, we tested our assays against most species that are phylogenetically close to the silky shark and the oceanic whitetip shark 38,39 , and no amplification was found with non-target species. Second, the nine untested species are phylogenetically more distant from the silky shark and the oceanic whitetip shark 39 and are rarely found in the Hong Kong fin market 12 . Third, we compared the nucleotide sequences of those nine species in the NADH2 region available on NCBI GenBank, except for C. hemiodon, which has no sequences available in any public database, checking for primer mismatches with Primer-BLAST to ensure the specificity of our primers. For the oceanic whitetip shark, previous studies have reported difficulties in designing species-specific primers due to its high phylogenetic similarity with C. obscurus and C. galapagensis 17,22,39 . We are the first to successfully develop species-specific assays for the oceanic whitetip shark. We found that these three species share less similarity among their sequences in the NADH2 region, and hence it was easier to design species-specific primers discriminating the oceanic whitetip shark from the other two species. Furthermore, we used DNA extracted from processed shark fin for testing to ensure that the LAMP assays could provide accurate results when performed in the field.
Besides the oceanic whitetip shark, we are also the first to present species-specific assays for the whale shark. The whale shark, being the only species in the family Rhincodontidae, is phylogenetically more distant from the other species in Orectolobiformes 37,39,40 . We examined the COI sequences of Orectolobiformes available in the NCBI database and performed Primer-BLAST analysis to ensure the specificity of our assays. The thresher sharks, basking shark, great white shark and porbeagle shark are classified in the mackerel shark order Lamniformes. For the LAMP and PCR assays targeting these six species, we tested them against 12 out of 15 recognised species of the order Lamniformes, including all species in the families Alopiidae and Lamnidae, to ensure the specificity of our primers. For the three CITES-listed hammerhead sharks, we tested our assays with six out of nine species in the family Sphyrnidae. The other three species (S. corona, S. gilberti, and S. media) are phylogenetically more distant from the CITES-listed hammerhead sharks and not commonly found in the Hong Kong shark fin market 3,12,19,41,42 .
LAMP allows rapid amplification of DNA using Bst polymerase, which has high strand displacement activity, at an isothermal temperature within an hour. This facilitates diagnostic on-site detection with limited equipment and provides species-level identification in a short period of time, increasing efficiency for law enforcement. To our knowledge, this is the first report applying LAMP to shark species identification. Although the optimal temperature for Bst polymerase is between 60 °C and 65 °C 43 , the optimal reaction temperatures for most of the LAMP assays developed here were found to be 68 °C or 70 °C, except for those targeting bigeye thresher shark, common thresher shark and porbeagle shark. Increasing the LAMP reaction temperature allows better specificity, especially for eliminating congeneric species. Apart from increasing the reaction temperature, another common approach to optimizing a LAMP reaction is the addition of loop primers, which can increase the sensitivity of the reaction. The LAMP assays targeting great white shark, porbeagle shark and great hammerhead shark, and that for the internal control, include loop primers in their primer sets. Loop primers are designed to bind to additional sites that are not accessed by the internal primers; they accelerate the rate of the LAMP reaction and provide higher sensitivity 24 . This allows the LAMP assay for the internal control to amplify all shark species in Table 1 at a relatively low reaction temperature, which also favours primer annealing and amplification. The presence of loop primers also allows the LAMP assays developed here to give amplification with a detectable threshold within 60 minutes, reducing the reaction time required for the assays. For LAMP and PCR assays amplifying the same region of a species, the species-specific LAMP assays generally have better sensitivity than the species-specific PCR assays, except for the LAMP assays targeting common thresher shark and basking shark. The looped DNA product increases the sensitivity of the LAMP assay and hence facilitates detection with the limited sample amounts obtainable from dried shark fins. In addition to better sensitivity, since LAMP produces more DNA product than the conventional PCR method, it allows easy visual discrimination between positive and negative results with the naked eye, without fluorescence detection equipment. On-site detection of CITES-listed shark species can thus be achieved. On the other hand, there are also limitations to LAMP assays. Since the LAMP assay is highly sensitive, it is important to perform the reactions at their optimal reaction temperatures. Four different temperatures, according to the conditions listed in Table 2, should be used to achieve repeatable results. Furthermore, there may be a chance of cross-contamination leading to false positive results. Therefore, it is important to clean the equipment before testing and to maintain good laboratory practice. Since there are only 16 wells available on the Genie II, the use of a multi-block thermal cycler may be considered. For on-field screening and monitoring of shark fin products of unknown identity, the use of a multi-block thermal cycler is suggested for performing LAMP reactions at four different temperatures within 60 minutes.
Sample DNA, in different forms including dried processed shark fin, dried and frozen flesh tissues and inner organs, can be extracted using (1) the Biomed Genomic DNA/Tissue Extraction Kit in 30-60 minutes, depending on the rate of sample dissolution in lysis buffer; or (2) the Kaneka Easy DNA Extraction Kit (Version 2) (Funakoshi, Osaka, Japan) in 15 minutes. After extraction, 1 μL of the extracted DNA is amplified by LAMP (50-60 minutes). At the end of the reaction, 2 μL of 1000X SYBR Green dye is added for visual detection. If there is a positive LAMP reaction, the colour changes upon addition of the SYBR Green dye. For a 96-well multi-block thermal cycler with temperature control in each column, eight samples, including positive and negative controls, could be tested with the 12 LAMP assays per hour. To obtain the best results, a portable device for DNA concentration measurement could also be used. In our experience, DNA with a concentration of at least 10.0 ng/µL could be extracted from all unprocessed and processed samples using the Biomed Genomic DNA/Tissue Extraction Kit.
There are several on-field detection protocols available for CITES-listed shark species identification. They include the multiplex real-time PCR assay by Cardeñosa et al. 22 and genome skimming using the MinION hand-held sequencer by Johri et al. 44 . Our protocol has good potential for field use. As shown in Table 3, our LAMP assays cover all twelve CITES-listed shark species and are cheaper in terms of cost and equipment than the other two methods. They are less time consuming for testing on a small scale and allow simple result analysis, which is more favourable for customs officials without a scientific background. The multiplex real-time PCR assay allows testing of 94 samples within 4 hours, the fastest among these three methods, and its data analysis is also straightforward. This could be favourable for large-scale on-field testing. However, the multiplex real-time PCR developed covers only 9 out of 12 CITES-listed shark species; for the other three shark species, use of a morphological guide is needed 22 . For genome skimming using the MinION hand-held sequencer, the major limiting factor is the high cost of the MinION flow cell compared with the other two methods. Although this protocol can provide additional genetic information, results have to be analysed by bioinformatics experts, which is less favourable for customs use. Furthermore, this approach has not been tested with highly processed shark fin products, which are generally skinned and bleached. In short, for on-field detection by customs, LAMP assays are more favourable for small-scale quick tests, while the multiplex real-time PCR assay is more favourable for large-scale testing.
Our rapid on-site detection assays have shown good specificity and can be used for on-site law enforcement for all CITES-listed shark species commonly found in the fin market 3,12 . They will be useful for screening processed shark fin products, which are difficult to identify using morphological identification guides, and shark products such as shark meat, which can hardly be identified based on morphology 22,45 . The results have also shown that the COI and NADH2 regions are suitable for species-level identification, especially for species with high nucleotide similarity to congeneric species.
Our LAMP assays allow easy inspection of imported shark fin products at the border for all 12 CITES-listed shark species and facilitate monitoring of international trade 46 . The same model using our LAMP assays could also be applied to law enforcement in other areas. For example, as the 4th largest shark-catching area 47 , Taiwan has recently passed the "Regulations for Import of Shark Fins", under which the catching of whale sharks and the import of whale shark products are forbidden 48 . Our protocol for the whale shark could provide a reliable and efficient test for monitoring the trade of whale shark products. With the exact species identity of shark fin products known, a better picture of the CITES-listed shark products in Hong Kong and elsewhere can be obtained. These data could be important for better monitoring of the international shark fin trade and for shark conservation by CITES members 49 .
Besides the shark species currently listed in CITES Appendix II, there is a need to develop rapid identification assays for other endangered shark species in future CITES appendices and on the IUCN list to ensure effective detection of illegal trade. In fact, two more shark species, Isurus oxyrinchus and I. paucus, were recently included in CITES Appendix II at the 18th meeting of the Conference of the Parties of CITES in 2019, entering into effect on 26 November 2019 11 . Our approach can be extended to these newly CITES-listed shark species and processed products, and to the further development of multiplex LAMP assays. We also expect that our work will serve as a model for rapid identification assays for other endangered species.

Table 3. Comparison of on-field detection protocols for CITES-listed shark species identification. a Cardeñosa et al. 22 . b Johri et al. 44 .
Flexible and Integrative Psychiatric Care Based on a Global Treatment Budget: Comparing the Implementation in Germany and Poland
Background: The past decade has witnessed the establishment of flexible and integrative treatment (FIT) models in 55 German and Polish psychiatric catchment areas. FIT is based on a global treatment budget (GTB), which integrates funding of all acute psychiatric hospital services for a regional population. Prior research has identified 11 specific program components of FIT in Germany. In this paper we aim at assessing the applicability of these components to the Polish context and at comparatively analysing FIT implementation in Poland and Germany. Methods: Qualitative interviews about the applicability of the 11 FIT-specific components were conducted with the program managers of the Polish FIT models (n = 19). Semi-quantitative data on the FIT-specific components were then collected in 19 Polish and 10 German FIT models. We assessed the grading of each component and their overall degree of implementation, and compared these between the two countries. In all study hospitals, structural and statistical parameters of service delivery were collected and compared. Results: The qualitative results showed that the German FIT-specific components are in principle applicable to the Polish context. This allowed the comparative assessment of component grading and degree of implementation, which showed only subtle discrepancies between German and Polish FIT models. These minor discrepancies point to specific aspects of care, such as home treatment, peer support, and cooperation with non-clinical and social welfare institutions, that should be further integrated into the components' definitions. Conclusions: The specific program components of FIT, as first defined from the German experience, serve as a powerful tool to measure and evaluate implementation of integrated psychiatric care both within and between health systems.
INTRODUCTION
In recent decades, health service providers in several countries have made extensive efforts to establish community crisis alternatives to inpatient psychiatric admission. In Europe and the UK there is now a broad spectrum of team-based, outreach and integrative care models for assisting people with severe mental illness (SMI) (1)(2)(3)(4). In addition to acute day hospitals, residential crisis houses, and assertive community treatment (ACT), Crisis Resolution Teams (CRT) are probably the most widespread form of community-based acute treatment. European countries such as the Netherlands and Switzerland have introduced several forms of outreach care but do not yet offer them nationwide; however, only England and Norway have so far implemented CRT at the national level (1,5,6,7).
Germany and Poland are countries where acute psychiatric care is still predominantly provided in inpatient settings (8). There is evidence that daily- and performance-based remuneration, the predominant financing approach for German psychiatric inpatient care, leads to treating service users in rather costly inpatient settings (9). A similar state of affairs prevails in Poland, where only one quarter of mental health care expenditures is allocated to outreach care (10). In the past two decades, the psychiatric societies of both countries have endeavoured to establish framework conditions that enable the delivery of integrated psychiatric care. They have adopted a similar "Flexible and Integrative Treatment" (FIT) model, which has been implemented in hospitals of selected regions. The FIT model is based on a shift from performance-based remuneration to a lump-sum global treatment budget (GTB). The GTB provides hospitals with the financial security and flexibility needed to develop more community-oriented, outpatient and outreach care, while at the same time reducing inpatient treatment days and enabling flexible shifts between different settings and intensities of care according to need (11,12). Basic differences between standard and model care in the two countries are presented in Table 1. Because it significantly contributed to a reduction of coercive measures, the German FIT model is explicitly recommended by the recently published WHO "Guidance on Community mental health services" (13).
Legal Framework and Evolution of FIT in Germany
In Germany, integrated psychiatric care was first introduced with a legislative reform in 2000, allowing service providers, of both the in- and outpatient sectors, to contract with selected statutory health insurance companies for the joint delivery of assertive outreach care (14). A GTB was first introduced in 2008 for one pilot region in rural northern Germany (15). It is negotiated between service providers and statutory health insurance companies and is established based on historical expenditures and on the number of patients to be treated. Thus, the GTB financing approach can be described as occupying a middle ground between block contracts (where providers are paid a fixed amount to deliver a specific, usually broadly defined, service) and capitation (where providers receive lump-sum payments based on the number of patients treated) (16). After the positive evaluation of the pilot project, the model was transferred into a legal framework (§64b social code V; FIT) that enables the development of one such model project in each German federal region. Importantly, FIT is thus not a concrete model of care but rather a legal framework, which can be flexibly adapted and implemented according to the specific contexts, needs, and concepts of service providers. Instead of the usual performance-based remuneration, participating hospitals receive a GTB, with which they are obliged to offer a "continuous service provision across different settings, including a complex assertive outreach care." By July 2021, 22 of these FIT models had been established, ensuring acute psychiatric care for 5.5 million people (8% of the adult German population).
Legal Framework and Evolution of FIT in Poland
The development of FIT in Poland is based on a statutory health reform, namely the National Mental Health Program (NMHP), which was initiated in 2008. The key aim of the program was to reduce the need for inpatient wards by increasingly diverting patients to outpatient psychiatric care. This aim was pursued in two phases, first through the introduction of Community Mental Health Teams (CMHT; 2011-2015) and then of Mental Health Centres (MHCs; 2017-2021) in 33 selected catchment areas (17). MHCs form a steering unit that bundles and coordinates psychiatric and (in part) social services for people suffering from SMI in a particular region. A given MHC is sited at and administered by a psychiatric hospital department, whereas its services are mostly provided externally by CMHTs. Structural requirements and organisational standards of MHCs were recently specified and
Previous Findings From FIT Models
The initial outcome evaluation studies of FIT (and of its precursor models) in Germany showed a significant reduction in inpatient length of stay, as well as an increase in service users treated in the day-patient, outpatient and outreach settings (19)(20)(21). They also showed improved personal continuity of care between settings and fewer instances of involuntary treatment or coercive measures (20,22). The clinical outcomes (e.g., HoNOS, CGI, and GAF) of service users improved, whereas the overall costs for mental health care remained stable or even decreased (19,23,24). Moreover, FIT models were positively evaluated by service users, caregivers, and clinical staff in Germany (25)(26)(27)(28). Due to the more recent introduction of the Polish FIT models, results from observational studies are still pending; results of a pilot outcome study indicate higher levels of satisfaction among patients using FIT compared to standard care (29). The aim of this exploratory study is to comparatively examine the structures and processes of FIT models in Germany and Poland, and to address the following research questions:
1. What are the fundamental similarities and differences between service provision in the Polish and German models of integrated psychiatric care?
2. To what extent can the specific program components of the German FIT models be adapted for assessing implementation in the Polish context?
Design
This study was carried out between February 2020 and July 2021 under the auspices of the Polish-German Society for Mental Health (PGSMH). The aim of this psychiatric society is to strengthen exchange in research and practice, and thus cooperation, between the two countries (30). As part of a previous process evaluation study (25), 11 empirically based, practicable, and quantifiable program components were developed to describe the treatment structures and processes of German FIT models (see Table 2) (31). Subsequently, these FIT-specific components were used routinely to measure, evaluate, and thus assure quality during the process of FIT model implementation in Germany (25,26,(31)(32)(33). So far, there have been no comparable guidelines or instruments for the quality assurance of the newly introduced Polish FIT models. To address the first research question, we collected structural and performance data, which enabled a comprehensive comparison between the two countries and a contextualisation of the component grading. For the second question, we investigated the degrees of implementation of the 11 FIT-specific components in the two countries. The applicability of the components for the Polish context was examined beforehand using qualitative expert interviews. We did not seek or require ethical approval for this study, since only institutional and non-patient data were used.
Setting and Sampling
We selected a total of 30 model regions across Germany and Poland on the basis of defined structural criteria, including population density in the catchment area and duration of the model project.
Specific Program Components of FIT
Specific FIT components (see Table 2) enable us to assess the degree of implementation of FIT in a given hospital. The components were developed from the German FIT models in a multi-stage process (operationalized, weighted, quantified, and validated) (31). Each component consists of one to four items, which are rated on a scale of 0-2 points depending on their weight. From the single-item values, component scores and a total score are calculated. The total score depicts the degree of implementation of FIT at the corresponding study centres. The assessment of components is conducted by administering a fully structured questionnaire, the questions of which are answered on the basis of performance and structural data from any FIT-adopting hospital. Further methodological details are provided by Johne et al. (31). A recent study confirms the ability of the FIT-specific program components to differentiate statistically between FIT models and standard care (32).
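In outline, the scoring reduces to two averaging steps: item ratings are aggregated into a component value, and component values into a total score. The sketch below is a simplified illustration only; the component names, item counts, and values are hypothetical, and the weighting of items follows the published instrument (31) only approximately.

```python
# Simplified sketch of the FIT component scoring logic described above.
# Component names, item counts, and values are hypothetical illustrations,
# not the published instrument.

def component_score(item_values):
    """Mean of item ratings (each 0-2 points), giving one component value."""
    return sum(item_values) / len(item_values)

def total_score(component_values):
    """Equally weighted mean over all components = overall FIT implementation."""
    return sum(component_values) / len(component_values)

# Example: one centre rated on three (hypothetical) components
components = {
    "II_flexible_care": [2, 1],          # two items, 0-2 points each
    "VI_outreach_home_care": [1, 0, 1],  # three items
    "XI_professional_expertise": [2],    # single item
}
values = [component_score(v) for v in components.values()]
print(round(total_score(values), 2))  # 1.39 for this toy centre
```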
Qualitative Data
Before being able to grade the FIT-specific components in the Polish MHCs, we had to validate and confirm the applicability of the components in the Polish model regions.
For this purpose, we initially carried out qualitative expert interviews with managers and program developers from all Polish study centres (35). The interviewers (JG, BG) had been trained by members of the German research team (JS, YI, SvP, MH) who had previously established the 11 specific program components of the German FIT models. As a guideline for the qualitative interviews, we used the questionnaire for the grading of the specific components, which had been translated into Polish (JG, AC). Participants were asked to discuss the appropriateness of the (sub-)components for the Polish context and to denote any potential deviations and required additions. The qualitative data were then analysed using content analysis according to Mayring (35). Deviations of the Polish FIT models from the original operationalization of the FIT-specific components were thematically summarised (deductive approach).
Grading of FIT-Specific Components
Based on the qualitative findings, the German and Polish research teams agreed on one version of the FIT-specific components, which was then applied in all study centres in Germany and Poland. For this purpose, we carried out structured telephone interviews with the management of each study centre, in which the grading of each component was assessed. To ensure a sufficiently uniform process in the two countries, all interviews were conducted by the same interviewers (JS for Germany, JG for Poland). Quantitative grading data were analysed in a series of steps: first, the component values and the total score were calculated for each centre and tabulated using descriptive statistics. The total score of overall FIT compliance was calculated as an equally weighted mean of the component values; in other words, all components were considered to be equally important dimensions of FIT implementation (31). We then tested whether the total scores differed significantly (p < 0.05) between the German and the Polish study centres. This question was addressed deductively, testing the null hypothesis of equal total score values in both countries using the Mann-Whitney test. Due to the exploratory nature of this study, we made no alpha adjustment. The analyses were carried out with the SYSTAT software, version 13. Component one had to be excluded from the evaluation, as insufficient parameters were available for its quantification (see Table 2).
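A minimal sketch of this between-country comparison, using SciPy's Mann-Whitney implementation rather than SYSTAT and entirely invented score values, might look as follows:

```python
# Hedged sketch of the between-country comparison of total scores.
# Score values are invented for illustration; the study itself used SYSTAT 13.
from scipy.stats import mannwhitneyu

german_totals = [1.10, 0.95, 1.05, 0.90, 1.00, 1.02, 0.98, 1.08, 0.92, 0.88]
polish_totals = [0.80, 0.70, 0.75, 0.85, 0.72, 0.78, 0.65, 0.82, 0.74, 0.69,
                 0.77, 0.71, 0.79, 0.68, 0.83, 0.76, 0.73, 0.81, 0.66]

# Two-sided test of the null hypothesis "equal total scores in both countries"
stat, p = mannwhitneyu(german_totals, polish_totals, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```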
Missing values in the component grading (13 of a total of 1,008 values) were entered as "no positive information possible" after data verification.
Structural and Performance Data
To be able to compare German and Polish study clinics and to contextualise the FIT grading results, additional data were collected from each study region (e.g., population size of the catchment area, average length of stay). Performance data parameters were calculated based on routine data at each centre and then transmitted to the study team. All parameters collected were grouped for Germany and Poland and, where possible, means and standard deviations, weighted by the number of cases or patients, were calculated.
Structural and Performance Data
Structural and statistical parameters of the Polish and German FIT models are summarised in Table 3. The complete information on the individual centres can be found in the Supplementary Table 1.
In comparison to Germany, a relatively larger number of model regions has emerged in Poland in a much shorter period of time. In both countries, model clinics are mainly located in departments of publicly owned general hospitals. The Polish centres have switched their entire care provision to model care, while the majority of German clinics provide model care only for patients covered by certain statutory health insurance companies. Polish models are mainly implemented in relatively less densely populated catchment areas.
In both countries, the majority of patients receive outpatient treatment, although this proportion is slightly higher in Poland. The day-clinic setting, on the other hand, is used more extensively in Germany. The proportion of patients treated in two to three different settings is slightly higher in the German model regions.
Qualitative Comparison of FIT-Specific Components
In summary, the managers (n = 16) and program developers (n = 6) of the Polish models (n = 19) who participated in the survey rated all (sub-)components as generally suitable for the Polish context. However, some participants reported that certain FIT-specific components have a different relevance in Poland than in Germany and lack some aspects they deem crucial to the Polish FIT models. These differences are presented alongside the FIT-specific components as follows:
II. Flexible Care Management Across Settings: This component includes individual treatment plans that span different settings and are handed out weekly to service users. In the Polish FIT models, on the other hand, there is a recovery plan that extends beyond acute hospital treatment and contains therapy goals, but it does not include daily planning of the therapy sessions.
III. Continuity of Treatment Team: The sub-component "Home treatment by inpatient and day-patient team" was deemed to be of secondary importance in the Polish models, as the outreach teams mostly work separately from clinical teams and provide a more community-based service.
IV. Multiprofessional Cooperation: Although this feature was recognised in the Polish models, it involves fewer specific measures than those provided in Germany (such as interdisciplinary team days).
V. Therapeutic Group Sessions Across All Settings: In the Polish models, therapeutic groups are usually not offered across different settings. Survey participants attributed this to a differing underlying therapeutic concept, whereby group processes may be disrupted by the simultaneous presence of acute (= inpatient) and less acute (e.g., outpatient) patients.
VI. Outreach Home Care: This component is operationalized in the German FIT components as acute treatment with a minimum treatment intensity of one home visit per week. In Poland, on the other hand, the intensity is controlled flexibly (sometimes fewer than one contact per week), since the treatment tends to be community-based and non-acute.
VIII. Accessibility of Services: The sub-component "waiting lists for patients with a request for admission" is mostly not implemented in the Polish FIT models, as the legal stipulation (NMHP) of Polish FIT explicitly calls for patients to receive care within 72 h of first contact with mental health services.
X. Cooperation Across Sectors: The division of the sub-components on cooperation between actors in the social and health systems applies to both countries. Yet the term "sector" [sektor] is used in Polish only to delimit these two areas of care, while it is used vaguely in German (e.g., to differentiate between inpatient and outpatient care). The Polish experts therefore recommended a more precise definition, such as "Cooperation across Institutions and Sectors".
XI. Expansion of Professional Expertise: The training and employment of peer-support workers is a high priority in the Polish models but is not included in the current version of the FIT-specific components.
Degree of Implementation of FIT Models
With a mean total score of 0.99, the German clinics showed a slightly higher overall degree of implementation than the Polish clinics, with a mean of 0.75 (see Figure 1). Although small, this difference was statistically significant (p = 0.02). The individual total score of each study centre can be found in Supplementary Figure 1.
The mean ratings of the individual components in Germany and Poland are presented in Table 4 and depicted in Figure 2. The components "Multiprofessional Cooperation" (IV), "Therapeutic Group Sessions Across All Settings" (V) and "Outreach Home Care" (VI) were rated significantly higher in the German models. These components may thus be particularly relevant for future FIT comparisons across Europe. Although the other differences were not statistically significant, the results showed that the German clinics scored higher in six of the FIT-specific components and the Polish clinics in four.
DISCUSSION
The main result of the qualitative analysis is that the specific program components identified based on the German FIT models are generally suitable for describing the implementation of the newer Polish FIT models. We were able to operationalize and compare the characteristics of the FIT-specific program components in both countries, albeit with restrictions. Our comparison focuses on the differences between countries, and not between regions of a country.
A central aspect of the qualitative findings concerned the operationalization of outreach home care (part of components III and VI): the Polish models operate according to the principles of CMHT, i.e., with independent outreach teams that carry out home visits as required, adapting the frequency and duration of contacts to service users' needs. On the other hand, outreach home care in the German FIT models is mainly grounded in a concept of CRT, which represents an alternative to inpatient admission. Consequently, home treatment is undertaken with a treatment intensity of more than one visit per week. This explains the significantly higher degree of implementation of this component by the German study centres.
The German operationalization of component XI (Expansion of Professional Expertise) does not include Peer Support Workers (PSW). In Poland, this professional group has existed at most of the sites since the advent of the FIT models (36). In the meantime, some of the German model clinics have also employed PSW (37). A case study in the German town of Geesthacht demonstrated the transformative power of the GTB financing approach for enabling the introduction of peer-supported interventions (38). PSW are therefore very important in the FIT models of both countries. Future revisions of the FIT-specific components should allow a differentiated assessment of these aspects (including outreach treatment and peer support interventions) to better capture the current state of development of the FIT models in each country.
The overall degree of implementation proved to be significantly higher in the German than in the Polish regions, presumably reflecting the two-fold longer mean run-time of the FIT models in Germany. This is in line with previous evaluation results showing a growing degree of implementation over the duration of a FIT model project, or of a similar care and remuneration model (26,39). Nevertheless, the Polish models show surprisingly high values for the overall and component-related degrees of implementation, given that they were introduced only in the past 3 years. We attribute this rapid implementation in Poland to a number of possible factors. Previous evaluations have shown that FIT models are particularly well-developed in German regions where health insurance providers and policy holders strongly support the implementation of FIT (26,39). In Poland there has been a similar degree of acceptance and support of FIT models, which arguably contributed to the development of 33 models in just 3 years. Such support is also attested by the commitment of the Ministry of Health to start further model regions in the next few years.
German studies have indicated that clinics that have a FIT model contract with all health insurance companies implement the FIT components to a particularly high degree (25,26,39). This association also applies to all Polish models, since Poland has a single health insurance fund, the National Health Fund [NHF (Narodowy Fundusz Zdrowia)]. If a FIT model is contracted in a Polish region, the responsible psychiatric clinic therefore reconfigures all of its structures and processes from standard to model care. This enables the clinics to concentrate on service delivery according to the FIT model and ensures minimal friction or additional expenses that might otherwise occur due to the simultaneous operation of standard care (26).
From the perspective of health care systems, the present work reveals the advantages and disadvantages of a regulated market system (RMS) vs. a national health system (NHS). Although by their formal typology both countries qualify as RMS, the Polish NHF is more akin to an NHS. Since Poland has a single-payer health service, it can presumably encourage reforms with greater effectiveness. In Germany, on the other hand, there is a patchwork of statutory health insurance companies. This makes it more difficult to implement system innovations, such as the FIT models, at the federal level, since all health insurance companies must first be convinced of their merits and have no obligation to enter such a contract. On the other hand, the possibility of concluding contracts with each single health insurance company, as is the case in Germany, might enable a larger scope for small-scale experimentation and for rapidly adopting new forms of care, since a decision need not occur at the federal level.
Strengths and Limitations
This is the first comparative study assessing FIT in two different countries. As a first limitation, we note that only 10 of the 22 (45%) German model regions participated in the survey. Nevertheless, the sample can be regarded as representative, because the study centres were selected to reflect the broadest possible spectrum of the model clinics (40). For Poland, we obtained information across all regions.
The FIT-specific components were developed in 2016 for German model regions, but proved robust for use in Poland, requiring only minor linguistic changes. However, the models in both countries have evolved, such that certain features are missing (e.g., peer support) or are not operationalized with sufficient precision (e.g., home treatment). The qualitative survey made it possible to ensure that the existing components are generally suitable for use in the Polish model regions, with the acknowledgement of certain caveats.
Ten German experts contributed to the weighting of the specific components for the original total scoring of FIT model implementation (31). In the present analysis, we applied an equal weighting to each component for the calculation of the total implementation score. For the further development of international FIT-specific components, a new weighting for each component should be developed jointly by the two countries.
Due to the naturalistic study design, with only limited numbers of FIT-adopting hospitals in both countries, we cannot exclude the possibility of β-errors (type II errors) when comparing results between Germany and Poland. In view of the identified qualitative differences between individual FIT components in the two countries, the composite value (FIT total score) could only approximately serve as a standard for comparison.
Conclusions
The FIT-specific components were originally identified in Germany in 2016 in order to evaluate the introduction of innovative, cross-setting and ward-replacing treatment models (according to §64b social code V) and to promote quality assurance. A similar model of care has been piloted in Poland since 2018. With regard to the key research questions, this study showed that (1) despite considerable health-system differences between Germany and Poland, their psychiatric FIT models are very similar in their core aspects of service provision; and (2) the FIT-specific components are generally suitable for use in Poland, but should be further supplemented and better specified to enable a more precise assessment of implementation differences between the two countries. In both Poland and Germany, decisions on the continuation of the FIT models or their permanent adoption into standard care will be pending in the next few years. Such decisions will require more comparative scientific knowledge about the evaluation and implementation of FIT models, also at the national level. The present findings constitute a first important step in this direction and point to the need for further research on integrative psychiatric care within the framework of Polish and German mental health policy.
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because of the data protection declaration used and the nature of the qualitative interviews, in which individual participants could possibly be identified. Parts of the dataset are available from the research group on reasonable request. Requests to access the datasets should be directed to Julian Schwarz, julian.schwarz@mhb-fontane.de.
AUTHOR CONTRIBUTIONS
JS, MH, AC, and JT designed the study and applied for funding. JS, JG, and AC collected the qualitative and quantitative data and further structural and statistical data on service provision in the German and Polish study regions. The qualitative analysis was conducted by JS. Statistical analyses were conducted by JT. JS drafted the first version of the manuscript; AC and MH supervised the drafting. All authors were involved in both the preparation and the implementation of the study, participated in the interpretation of the results, critically reviewed and commented on the manuscript, and read and approved the final version.
FUNDING
This work was funded by Brandenburg Medical School, Neuruppin, Germany.
Introgression in the genus Campylobacter: generation and spread of mosaic alleles
Horizontal genetic exchange strongly influences the evolution of many bacteria, substantially contributing to difficulties in defining their position in taxonomic groups. In particular, how clusters of related bacterial genotypes – currently classified as microbiological species – evolve and are maintained remains controversial. The nature and magnitude of gene exchange between two closely related (approx. 15 % nucleotide divergence) microbiologically defined species, Campylobacter jejuni and Campylobacter coli, was investigated by the examination of mosaic alleles, those with some ancestry from each population. A total of 1738 alleles from 2953 seven-locus housekeeping gene sequence types (STs) were probabilistically assigned to each species group with the model-based clustering algorithm STRUCTURE. Alleles with less than 75 % assignment probability to one of the populations were confirmed as mosaics using the STRUCTURE linkage model. For each of these, the putative source of the recombinant region was determined and the allele was mapped onto a CLONALFRAME genealogy derived from concatenated ST sequences. This enabled the direction and frequency of introgression between the two populations to be established, with 8.3 % of C. coli clade 1 alleles having acquired C. jejuni sequence, compared to 0.5 % for the reciprocal process. Once generated, mosaic genes spread within C. coli clade 1 by a combination of clonal expansion and lateral gene transfer, with some evidence of erosion of the mosaics by reacquisition of C. coli sequence. These observations confirm previous analyses of the exchange of complete housekeeping alleles and extend this work by describing the processes of horizontal gene transfer and subsequent spread within recipient species.
INTRODUCTION
The availability of multi-locus genetic data from large collections of bacterial isolates has demonstrated the central role of horizontal genetic exchange in the evolution of many bacterial populations (Didelot & Maiden, 2010; Fraser et al., 2007). The extent of this process varies from essentially none, in clonal monomorphic bacteria such as Mycobacterium tuberculosis (Gutacker et al., 2006) and Salmonella Typhi (Kidgell et al., 2002), to extensive in Neisseria meningitidis, Streptococcus pneumoniae, Staphylococcus aureus and Campylobacter jejuni (Sheppard et al., 2008; Wilson et al., 2009), and extreme in Helicobacter pylori, which has a non-clonal population structure (Falush et al., 2001). Horizontal acquisition of genes influences bacterial ecology and evolution (Ochman et al., 2000), for example by increasing the rate at which favourable mutations accumulate within a population (Baltrus et al., 2008; Cooper, 2007), or by conferring novel metabolic capabilities (Lawrence, 1999; Spratt et al., 2001), and in recombinogenic species it can be more important than mutation in determining the genome content (Ochman & Groisman, 1994).
Genetic exchange was first demonstrated in bacteria in the 1940s (Lederberg & Tatum, 1946), but it was with the widespread application of DNA sequencing methods at the end of the 20th century that its extent and evolutionary significance were appreciated, challenging the concept that all bacteria are essentially asexual organisms (Maynard Smith et al., 2000). Multi-locus sequence typing (MLST) (Maiden, 2006), and related techniques, demonstrated that, notwithstanding high frequencies of genetic exchange in many bacterial populations, seven-locus allelic profiles (sequence types, STs) contained sufficient information to associate genotype clusters with phenotypic properties commonly associated with microbiological species and subspecies groups (Hanage et al., 2005; Maiden, 2006; Sheppard et al., 2008).
In addition to the reshuffling of intact genes, intragenic recombination between genetically divergent loci can generate new 'mosaic' alleles. These are alleles in which portions of the nucleotide sequence with different evolutionary histories are combined in a single gene (Maynard Smith et al., 1991). Mosaic alleles, which are readily generated in bacteria because RecA-mediated DNA repair systems can accept the import of DNA with up to 30 % sequence divergence (Lorenz & Wackernagel, 1994), have been described in genera as diverse as Streptococcus (Hollingshead et al., 2000; Kapur et al., 1995), Neisseria (Maiden et al., 1996; Maynard Smith et al., 1991; Spratt, 1988; Spratt et al., 1991) and Escherichia (Stoltzfus et al., 1988). Many studies have focused on mosaic genes that have obvious phenotypic effects, such as increased antibiotic resistance (Coffey et al., 1995; Dowson et al., 1990), virulence (Halter et al., 1989) and antigenic variation (Snyder et al., 2007), where positive selection can be invoked to explain the spread of rare novel variants in a population; however, mosaic alleles have also been described among housekeeping genes that are subject to stabilizing selection (Dingle et al., 2005; Zhou & Spratt, 1992).
Because the donor and recipient regions of the mosaic housekeeping genes can potentially be identified, they can provide information on the patterns of gene flow in the absence of overt strong positive selection (Brückner et al., 2004;Hakenbeck, 1998;Maynard Smith et al., 1991;Maynard Smith, 1992;Stoltzfus et al., 1988). Using mosaic alleles to investigate gene flow has advantages over whole-allele replacements (Hanage et al., 2005;Sheppard et al., 2008), firstly because the potential for incorrectly defined mixed STs (Caro-Quintero et al., 2009) is removed, and secondly because the recombination profile representing a single genetic event can be traced through the population to determine patterns of recombination. This allows the different types of hybrid proliferation to be differentiated, including founding events, clonal expansion and within-species horizontal transfer.
Recombination, and barriers to it, have been identified as an important force in determining bacterial population structure (Hanage et al., 2006;Lawrence, 2002). However, many fundamental parameters have yet to be established and the relationship of bacterial population structure to 'species' remains a matter of debate (Cohan & Koeppel, 2008;Doolittle, 2008;Fraser et al., 2009). Large-scale nucleotide-sequence-based population studies permit the determination of the relative rates and mechanisms of gene flow among populations that are necessary to test the various theoretical paradigms and to investigate the role of neutral and selective forces in these processes (Fraser et al., 2009). In this study, we investigated bacterial 'introgression' -the movement of DNA between recognized species -in the species Campylobacter jejuni and Campylobacter coli, which together represent the major cause of bacterial gastroenteritis in many parts of the world (Sheppard et al., 2009). Mosaic alleles among C. jejuni and C. coli genotypes were characterized and used to investigate the movement and spread of genetic material between these two closely related taxa (Friis et al., 2010), confirming the observations made on the exchange of whole MLST loci (Sheppard et al., 2008), and demonstrating the pattern of horizontal gene transfer and the fate of transferred genetic material.
METHODS
Genotype data. Campylobacter jejuni and Campylobacter coli genotype information was obtained from the publicly accessible MLST database (http://pubmlst.org/campylobacter) (Jolley et al., 2004). The C. jejuni and C. coli data archive contains ST information for seven housekeeping loci: aspA, glnA, gltA, glyA, pgm (glmM), tkt and uncA (atpA), positioned ≥ 15 kb apart on the bacterial genome (Dingle et al., 2001). Alleles termed pgm and uncA in the MLST scheme have been renamed glmM and atpA respectively in later genome annotations, but their names have been retained within the MLST scheme, for consistency with previous studies. The alleles from 2953 distinct STs were analysed.
Identification of mosaic alleles. Candidate mosaic alleles were identified using the model-based clustering algorithm implemented in the software STRUCTURE (Pritchard et al., 2000). All the alleles were analysed for each of the seven MLST loci to identify the genetic ancestry of the allelic variants. A population number (k) of 2 was used. This ensured that the analysis robustly identified putative ancestry to the two species (C. jejuni and C. coli), which are approximately 12 % divergent at the nucleotide level. Alternative values for k, for example k = 4 to reflect the three-clade structure within C. coli, were not used because the nucleotide identity within C. jejuni is > 98.5 % and among the three C. coli clades it is > 93 %. Therefore, variation between MLST alleles within these two species is generally < 10 polymorphisms, compared to > 50 between species. This affects the robustness of population assignment using STRUCTURE. Alleles where the probability of belonging to either the C. jejuni or the C. coli population was ≤ 0.75 were considered as possible mosaic alleles. A cut-off of 0.75 was chosen to conservatively detect potential mosaic alleles.
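The decision rule itself is simple to express. The sketch below assumes STRUCTURE's per-allele assignment probabilities have already been parsed into a dictionary; the allele names and probabilities are invented for illustration.

```python
# Flag candidate mosaic alleles from STRUCTURE assignment probabilities (k = 2).
# An allele is a candidate mosaic if it is assigned to neither population with
# probability > 0.75. Example probabilities are invented.

# P(C. jejuni) per allele; with k = 2, P(C. coli) = 1 - P(C. jejuni)
assignment = {
    "aspA-1":  0.99,   # clearly C. jejuni
    "aspA-33": 0.02,   # clearly C. coli
    "aspA-87": 0.42,   # mixed ancestry -> candidate mosaic
}

THRESHOLD = 0.75
candidates = [allele for allele, p_jejuni in assignment.items()
              if max(p_jejuni, 1.0 - p_jejuni) <= THRESHOLD]
print(candidates)  # ['aspA-87']
```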
Recombinant fragment characterization. STRUCTURE-based analysis was used to confirm whether the possible hybrids identified were mosaic alleles. This involved the characterization of potential mosaic alleles using a linkage model (Falush et al., 2003), which allowed for disequilibrium in linkage between loci, in this case nucleotides, and enabled the identification of the position of inter-specific recombination events. A comparison dataset (n = 194), which described the diversity of C. jejuni (122) and C. coli (72) STs in the pubMLST database (Jolley et al., 2004), was analysed alongside the STs containing mosaic alleles (n = 81). The population of origin (POPINFO) was assigned for all the STs in the input file, but only the non-recombinant isolates were used, as a training dataset (POPFLAG), to define the background population structure. The population of origin was assigned probabilistically for each nucleotide (3309 bp) in all of the STs, including those containing mosaic alleles. 10 000 burn-in cycles were run with 10 000 additional repetitions for all analyses. Otherwise, the no-admixture model was used with default settings.
Clonal relationships. The genealogy of the STs was estimated using CLONALFRAME, a model-based approach to determining microevolution in bacteria (Didelot & Falush, 2007), which calculates clonal relationships with improved accuracy because it distinguishes point mutations from imported chromosomal recombination events, the source of the majority of allelic polymorphisms. CLONALFRAME analysis was carried out on concatenated sequences of the same 275 STs as used in the STRUCTURE analysis (above). The program was run with a burn-in of 50 000 iterations followed by 50 000 data-collection iterations. The consensus tree represents combined data from three independent runs, with 75 % consensus required for inference of relatedness.
Origin of allelic recombination. The origin of the largest putatively imported region within each mosaic allele was determined using the BLAST algorithm (Altschul et al., 1990). Sequences were compared to a library database of all the non-recombinant alleles at that locus, and the origin was assigned based on the highest identity (%) over the longest possible alignment region. Mosaic alleles, and the origin of recombination determined using BLAST, were indicated on phylogenies of individual alleles reconstructed with MEGA software, version 3.1, using the Kimura two-parameter model and neighbour-joining clustering (see Supplementary Fig. S1, available with the online version of this paper).
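One plausible way to apply this "longest alignment, highest identity" rule is to run command-line BLAST+ with tabular output and pick the best hit. The sketch below assumes such a tabular file already exists; the file name and the use of `-outfmt 6` are illustrative assumptions, not a description of the authors' actual pipeline.

```python
# Hedged sketch: assign the origin of a recombinant fragment from BLAST+
# tabular output (-outfmt 6). The input file name is a placeholder.
# Default -outfmt 6 columns:
# qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore

def best_origin(blast_tsv):
    hits = []
    with open(blast_tsv) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            subject, pident, length = fields[1], float(fields[2]), int(fields[3])
            hits.append((length, pident, subject))
    # Longest possible alignment region first, then highest identity
    hits.sort(reverse=True)
    return hits[0][2] if hits else None

print(best_origin("aspA-87_vs_nonrecombinant_alleles.tsv"))
```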
Structuring of species and gene flow. The numbers of fixed differences and shared polymorphisms, and between-population gene flow (estimated by FST), were calculated using the DnaSP v4.0 (Rozas et al., 2003) and Arlequin v3.1 (Excoffier et al., 2005) software packages. Formal species assignment of STs was carried out as previously described (Sheppard et al., 2008) using STRUCTURE, with a threshold probability of 0.75 (75 %) as the cut-off for membership of an ST in a particular species, and CLONALFRAME for clade assignment of C. coli haplotypes. Combining these data enabled the assignment of each ST, including those containing mosaic alleles, to a given species/clade for quantitative analysis of gene flow between groups.
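For intuition, the site-level quantities reported by DnaSP can be sketched directly: a fixed difference is a site at which the two populations are each monomorphic for different bases, while a shared polymorphism is a site segregating in both. The following is a simplified toy-data sketch (no gap or missing-data handling), not the study's actual computation:

```python
# Simplified sketch of fixed differences and shared polymorphisms between
# two populations of aligned sequences (toy data; the study used DnaSP and
# Arlequin).

pop_a = ["ACGTA", "ACGTA", "ACGTG"]   # e.g. C. jejuni alleles
pop_b = ["TCGTA", "TCGTA", "TCGTC"]   # e.g. C. coli alleles

fixed, shared = 0, 0
for site in range(len(pop_a[0])):
    a = {seq[site] for seq in pop_a}
    b = {seq[site] for seq in pop_b}
    if len(a) == 1 and len(b) == 1 and a != b:
        fixed += 1      # different bases fixed in each population
    elif len(a) > 1 and len(b) > 1 and a & b:
        shared += 1     # both populations polymorphic, sharing a variant
print(fixed, shared)    # 1 1 for this toy alignment
```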
Micro-evolutionary analysis of polymorphisms. Analysis was carried out to investigate the polymorphic sites within host and donor regions of mosaic alleles in the four largest mosaic allele clusters (see Fig. 3). Polymorphic sites were characterized within the recombinant and non-recombinant portions of each mosaic allele using the START2 (Jolley et al., 2001) and MEGA (Kumar et al., 2004) software, and two analyses were carried out to investigate the polymorphisms within C. coli and C. jejuni alleles. First, individual polymorphisms within recombinant regions were compared to those in non-recombinant alleles, and the synonymous and non-synonymous nucleotide substitutions were quantified. Second, the Bayesian analysis software STRUCTURE was used to probabilistically assign polymorphisms within recombinant regions to C. jejuni or C. coli, to determine whether the conserved polymorphisms were more similar to those associated with one of the two species. In some cases there were low numbers of nucleotide polymorphisms within the MLST alleles; for example, there were only two nucleotide differences between the mosaic alleles aspA-87 and aspA-117, which limited the extent to which inferences could be made.
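Classifying a substitution as synonymous or non-synonymous amounts to translating the affected codon before and after the change. A minimal sketch using Biopython (with toy codons, not the study's actual sequences) follows:

```python
# Sketch: classify a nucleotide substitution as synonymous or non-synonymous
# by translating the affected codon before and after the change (toy example).
from Bio.Seq import Seq

def substitution_type(codon, pos_in_codon, new_base):
    mutated = codon[:pos_in_codon] + new_base + codon[pos_in_codon + 1:]
    before = str(Seq(codon).translate())
    after = str(Seq(mutated).translate())
    return "synonymous" if before == after else "non-synonymous"

print(substitution_type("CTT", 2, "C"))  # CTT->CTC, both Leu: synonymous
print(substitution_type("CTT", 0, "A"))  # CTT->ATT, Leu->Ile: non-synonymous
```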
Mosaic allele characterization
From the 2953 STs examined, which contained 1738 alleles, a total of 31 alleles were defined as mosaics on the basis of population assignment probabilities of ≤ 0.75 to the groups corresponding to either C. jejuni or C. coli (Fig. 1). Further analysis identified putative recombination points in each mosaic that were consistent with mixed ancestry. The greatest numbers of mosaic alleles were recorded at the aspA locus (12 mosaic alleles) and the tkt locus (11 mosaic alleles). There were three mosaic alleles each at the gltA and glyA loci and one each at the pgm and uncA loci. No evidence was found for mosaic alleles at the glnA locus. The 31 mosaic alleles were distributed among 81 STs. Sixteen mosaic alleles were specific to one ST, 11 were found in two to five STs, mosaic alleles aspA-87 and gltA-134 were each found in 8 STs, tkt-12 was found in 11 STs and tkt-169 in 12 STs.
The origin of recombinant regions
The distribution of mosaic genes was asymmetrical between C. jejuni and C. coli, with six of the mosaic alleles representing horizontal gene transfer from C. coli into C. jejuni STs and the remaining 25 from C. jejuni into C. coli STs. The direction of gene transfer, estimated by BLAST assignment of the recombinant regions, identified the ancestry of the recombinant region of the mosaic alleles (Supplementary Fig. S1). In alleles aspA-54, aspA-87, aspA-120, glyA-208, glyA-240 and tkt-166, this could be assigned to an identifiable C. jejuni lineage. The possible origin of the recombinant regions in the other mosaic alleles was more ambiguous because there was insufficient sequence variation in the derived allele to identify a particular donor lineage. However, recombination estimates were possible at the species or clade level, and all but one episode of gene flow between C. jejuni and C. coli involved a single C. coli clade (clade 1). The exception was tkt-194, found in a single C. jejuni ST, which had regions of C. jejuni and C. coli clade 3 origin.

Fig. 1. Identification of mosaic alleles: cluster analysis using STRUCTURE inferring the probability-based genetic ancestry for allelic variants of the seven MLST loci (aspA-uncA). Each unique allele sequence is represented by vertical lines, divided into two shaded regions indicative of genetic ancestry to C. jejuni (light grey) or C. coli (dark grey). From this information, inter-specific recombination between these species can be inferred. Alleles not assigned to a single genetic ancestry (P ≥ 0.95), and with assignment probability ≤ 0.75, are considered inter-genomic recombinants (r). The analyses were carried out with k = 2.
Inter-species gene flow
Quantification of mosaic alleles allowed the investigation of patterns of gene flow. Nineteen (0.86 %) of the 2221 C. jejuni STs contained mosaic alleles and 105 (16.56 %) of the 634 C. coli STs contained mosaic alleles. No mosaic alleles were found among STs from C. coli clades 2 and 3. Patterns of allele distribution, quantified based on the proportion of the total number of alleles and the BLAST-defined origin of the recombinant fragment, gave a conservative estimate of gene flow (Table 1). C. coli had acquired DNA from C. jejuni in 8.3 % of alleles, 17 times more prevalent than the reciprocal process (0.5 % of C. jejuni alleles were mosaics).
Clonal structure and allele mosaics
A CLONALFRAME genealogy, generated with concatenated MLST gene sequences, partitioned C. jejuni and C. coli into two distinct groups, which was consistent with the FST values of 0.86 (DnaSP) and 0.87 (Arlequin) calculated from the same data. Eleven STs containing mosaic alleles were found within C. jejuni and 70 in C. coli clade 1. The mosaic alleles were examined in relation to this genealogy (Fig. 2). Mosaicism could be indicative of non-contiguous imports, as has been observed in Helicobacter pylori (Kulick et al., 2008), but shared patterns of mosaicism among alleles and the clustering of mosaic alleles on the tree are likely to be indicative of single founding events followed by subsequent expansion. Based on this assessment there was evidence for 13 introductions, five among C. jejuni and eight among C. coli (Fig. 2). Recombination events between C. jejuni and C. coli had varying levels of subsequent clonal expansion, indicated by clusters of related STs containing the same mosaic allele. For example, the putative introduction into C. coli, termed 'coli 1', which generated alleles aspA-87, aspA-126, aspA-120, aspA-117, aspA-157 and aspA-115, had expanded into 16 clonally related lineages (Figs 2 and 3). There was also evidence of horizontal transfer of mosaic alleles following initial founding events within both C. jejuni and C. coli. The mosaic pgm-93 allele was found in two distantly related C. jejuni lineages, and tkt-164 and tkt-168 were both found in the two largest C. coli tkt mosaic allele clusters (Figs 2 and 3). In all ST clusters that contained more than one mosaic allele, the recombination pattern appeared non-random with shared terminal ends, indicative of subsequent recombination events after the acquisition of a mosaic allele. For example, in the two main clusters of tkt mosaic alleles, the uniformity of the terminal position of the recombinant regions was consistent with whole-allele replacements being subsequently eroded by reacquisition of C. coli DNA (Fig. 3).
Micro-evolution of mosaic alleles
Some indication of possible genetic events in the evolution of the mosaic genes may be evident from patterns of sequence polymorphism within them. However, in this study the strength of these inferences was constrained by the limited number of polymorphisms within the mosaic alleles. The most parsimonious explanation for genetic relatedness within the four largest mosaic allele clusters (aspA, gltA, tkt1, tkt2; Fig. 3) was a single founding recombination event followed by subsequent clonal expansion accompanied by diversification due to mutation and recombination events. Within these mosaic allele clusters (Fig. 3), the total numbers of non-synonymous (n) and synonymous (s) base substitutions in all the recombinant regions in each cluster were 11, 171 (aspA); 2, 8 (gltA); 7, 12 (tkt1); and 26, 49 (tkt2). The numbers expected if the whole allele had been replaced would be 5, 46 (aspA); 8, 33 (gltA); 14, 45 (tkt1); and 14, 45 (tkt2). The n/s values for recombinant regions were therefore higher than the average for a whole-allele replacement in the gltA (0.25 > 0.24), tkt1 (0.58 > 0.31) and tkt2 (0.53 > 0.31) clusters. However, although the observed cross-species imports in these examples had a higher proportion of synonymous changes than the part of the allele that was not replaced, the number of polymorphisms was low, and comparison of much larger sequence tracts would be necessary to confirm whether this is because regions with a high proportion of non-synonymous mutations were not imported or, more probably, whether they were eroded to a greater extent by later events. Similarly, while assignment of individual polymorphisms within the mosaic alleles with STRUCTURE indicated that the recombinant regions had a greater mean assignment probability to C. coli than the polymorphisms in the eroded regions for the aspA (0.11 > 0.10), gltA (0.33 > 0.08), tkt1 (0.24 > 0.09) and tkt2 (0.18 > 0.07) mosaic allele clusters, the number of polymorphisms was too low to test whether imported C. jejuni DNA that remained in C. coli alleles was more similar to the original C. coli sequence than that which had theoretically been removed.

Fig. 2. Distribution of mosaic alleles: consensus trees from CLONALFRAME analysis of concatenated sequences of 275 STs from a combined C. coli and C. jejuni population, including 81 STs containing one or more inter-genomic mosaic alleles. Lines connect the mosaic alleles to the STs in which they occur. Alleles located in (a) closely related and (b) distant clades are indicated in separate trees. Site-by-site nucleotide ancestry, inferred using a linkage model in STRUCTURE, is given for mosaic alleles, describing putative origin within C. jejuni (light grey shading) and C. coli (dark grey shading). Background population structure (k = 2) was defined using non-recombinant STs. The number of founding introductions was estimated from shared patterns of mosaicism among alleles and the clustering of mosaic alleles on the tree.
DISCUSSION
The patterns of mosaic alleles in C. jejuni and C. coli clade 1 were consistent with a recent increase in gene flow between these two organisms, as suggested by the analysis of whole-allele replacements of housekeeping genes reported previously (Sheppard et al., 2008). As in the whole-allele analysis, which excluded mosaic alleles to ensure a conservative estimate of gene flow, the distribution of mosaic alleles is consistent with frequent introgression of housekeeping genes, or fragments of housekeeping genes, from C. jejuni into C. coli clade 1. The interpretation of the whole MLST allele analysis has been challenged in line with the view that horizontal genetic exchange of housekeeping genes must be infrequent in the absence of strong positive selection or hitchhiking (Caro-Quintero et al., 2009). However, the analysis of Caro-Quintero et al. (2009) did not use a formal population-genetic approach for assigning species and clade designations, and did not identify the clades within C. coli, significantly altering the estimates of gene flow obtained. These alternative estimates were further affected by the use of the catalogue of ST and allele variation present in the PubMLST database as a representative population sample, which it is not. In addition, some hybrids that may have expanded clonally were excluded from the analysis, without a reciprocal exclusion of non-hybrids that may have done so, further altering the impact of introgression, by decreasing the relative proportion of hybrid lineages. Finally, ST data that did not conform to the assumption of low genetic exchange among housekeeping genes were dismissed as clerical errors. Although no evidence was presented to substantiate this assertion, this last criticism is met by the analysis of mosaic alleles, as such clerical errors are not possible and the most likely explanation of mosaic alleles is that they are the product of inter-species recombination. In conclusion, the data presented here on mosaic genes are supportive of the original analysis, which proposed a recent increase in gene flow between C. jejuni and C. coli clade 1 that, if sustained, will lead to progressive convergence or despeciation of C. coli clade 1, but not identifiably C. coli clades 2 and 3, with C. jejuni (Sheppard et al., 2008).
The levels of introgression obtained from the analysis of mosaic genes were similar, but not identical, to those obtained from whole MLST allele analysis. Within C. coli clade 1, 105 STs (17 %) and 25 unique alleles (8 %) contained mosaic regions of C. jejuni ancestry (Table 1). The latter value is somewhat lower than the estimate of 18 % introgression based on whole-gene analysis. This is not inconsistent with the earlier findings; however, the MLST data for these organisms suggest that the average recombination fragment size is considerably larger than the average size of the sequences used to define MLST alleles (402-507 bp) (Sheppard et al., 2008; Wilson et al., 2009): therefore much of the variation is the result of reassortment of existing alleles (Dingle et al., 2001; Wilson et al., 2009), and analysis of mosaic alleles is likely to underestimate introgression from C. jejuni to C. coli. In addition, the method for defining mosaic alleles ignores alleles where < 25 % of the DNA is introgressed, and therefore the estimate of 8 % of C. coli genes representing introgression events from C. jejuni is conservative for this dataset. It is noteworthy that the number of unique introgression events between C. jejuni and C. coli clade 1 appears to be similar in both directions (eight from C. jejuni to C. coli clade 1; five from C. coli to C. jejuni), so that the differences in the influence of the introgressed DNA in each species is apparently dependent on the fate of the hybrids and mosaics after these events have occurred.
Indeed there is only one common example of the persistence of C. coli clade 1 DNA in C. jejuni, that of the uncA-17 allele in ST-61 clonal complex isolates. This contrasts with the large amounts of C. jejuni DNA within C. coli clade 1 STs (Sheppard et al., 2008) and alleles reported here.
It is likely that mosaic alleles are continually generated among closely related transformable bacteria, as evidenced by many anecdotal reports of their occurrence (Dingle et al., 2005;Feil et al., 1995;Zhou & Spratt, 1992;Zhou et al., 1997), but remain at low frequency. In some cases mosaic genes spread within a population as a result of positive selection for a novel adaptive phenotype that they confer. A well-established example of this is the spread of mosaic genes encoding penicillin-binding proteins, which are associated with increased penicillin resistance in a number of bacteria (Coffey et al., 1995;Dowson et al., 1990). For pathogenic and commensal bacteria selection pressures may be different at different stages in transmission. For example, in the species Neisseria meningitidis, bacteria with mosaic antigen-encoding genes generated within hosts by interspecies genetic exchange, apparently experiencing an advantage in that host as a consequence of the host immune response, appear to be less fit in onward human-to-human transmission such that the hybrids are subsequently lost (Zhou et al., 1997). The extent to which stabilizing selection affects the spread of mosaic housekeeping genes remains unclear, but reduced selection against introgressed genes and mosaic genes could explain the observed increase in the proportion of whole-gene replacements and mosaic C. jejuni genes in C. coli clade 1. The patterns of polymorphisms within the mosaics are broadly supportive of a role of some selection against mosaics, but this is based on a small number of polymorphisms, and more data are required to investigate this possibility more rigorously.
The patterns of variation across bacterial genomes in features such as gene order, distribution of coding sequences on leading and lagging strands, GC skew, and codon usage are consistent with selection operating on sequence features other than maintenance of the protein sequences encoded (Bentley & Parkhill, 2004). A number of studies have indicated pervasive selection pressures across much of the genome in Escherichia coli and Salmonella enterica (Charlesworth & Eyre-Walker, 2006) and the genus Campylobacter (Lefébure & Stanhope, 2009). It is possible that intensive agriculture has generated a novel niche in which both C. jejuni and C. coli clade 1 thrive, promoting genetic exchange between these two bacteria, with a concomitant change in selective pressure that favours or tolerates mosaic alleles and whole-gene hybrids, but this intriguing possibility requires much more investigation.
With whole-genome analyses of multiple isolates becoming increasingly possible, such studies are now feasible, and the interactions of C. jejuni and C. coli clade 1 populations at the genomic level provide a model system for distinguishing between the various contrasting theoretical frameworks currently being proposed to explain the relative roles of selection, genetic exchange and neutral mechanisms in the evolution and maintenance of genetic structure in bacterial populations.
p-Xylene Oxidation to Terephthalic Acid: New Trends
Large-scale terephthalic acid production from the oxidation of p-xylene is an especially important process in the polyester industry, as it is mainly used in polyethylene terephthalate (PET) manufacturing, a polymer that is widely used in fibers, films, and plastic products. This review presents and discusses catalytic advances and new trends in terephthalic acid production (since 2014), innovations in terephthalic acid purification processes, and simulations of reactors and reaction mechanisms.
Introduction
The main and most important product of p-xylene oxidation is terephthalic acid (1,4-benzenedicarboxylic acid, TPA), the main component in the polyester industry for the production of polyethylene terephthalate, commonly known as PET. Although the production of PET uses a great proportion of the terephthalic acid produced worldwide, terephthalic acid has other applications such as in textiles, in polyester staple fibers and filament yarns, as a carrier in paints, a coating resin, and as a raw material in the pharmaceutical industry (Figure 1). The worldwide purified terephthalic acid market was valued at between USD 51.5 and 54.8 billion in 2021, and is estimated to reach USD 70 to 78 billion by the end of the decade [2][3][4]. This would represent a compound annual growth rate of between
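A back-of-envelope estimate of that growth rate can be made from the market values just quoted. The sketch below assumes a 2021-2030 window (nine years), since the source only says "the end of the decade"; the result is illustrative only and is not the figure cited in [2][3][4].

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Market figures quoted above (USD billion); the 9-year window is an assumption.
low = cagr(51.5, 70.0, 9)    # ~3.5 % per year
high = cagr(54.8, 78.0, 9)   # ~4.0 % per year
print(f"Implied CAGR: {low:.1%} to {high:.1%}")
```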
Homogeneous Catalysts
The liquid-phase oxidation of p-xylene is very promising. As described above, the AMOCO process is a homogeneous catalytic system that, despite its excellent yields, uses acetic acid as a solvent together with bromide compounds such as hydrobromic acid (HBr) or sodium bromide (NaBr); this creates a highly corrosive, hazardous reaction environment, and the bromide compounds are not only environmentally unfriendly but also harmful and dangerous to handle.
Therefore, research has been conducted to discover new catalytic systems for the oxidation of p-xylene that are less corrosive and more environmentally friendly. To provide context for the developments covered in this review, the articles [1,14,19] summarize what has been published in the field of the homogeneous catalysis of p-xylene.
In 2014, Plekhov et al. [20] studied the possibility of oxidizing p-xylene to terephthalic acid using molecular oxygen as an oxidant with acetate salts of cobalt(II) and manganese(II) in the presence of N-hydroxyphthalimide (NHPI), using acetic acid as a solvent. As reported before, NHPI-Co(II) catalytic systems show a good synergistic effect, which makes this reaction interesting to study. Without Mn(II), the reaction performed at 65 °C for 3 h led only to the intermediate products p-tolualdehyde and p-toluic acid, although the initial reaction rate and conversion were higher than when the reaction was performed in the presence of cobalt(II) and manganese(II) bromide salt catalysts. Upon adding manganese acetate as a catalyst, the authors observed a slight increase in conversion (35-40%), selectivity for p-toluic acid (85-89%), and oxidation rate (4.1-4.6 × 10⁴ mol L⁻¹ s⁻¹). Since the carboxyl group of p-toluic acid is known to withdraw electrons from the methyl group in the para position, Plekhov tested the same system at 90 °C for the oxidation of p-toluic acid. Much smaller conversion was obtained compared to the oxidation of p-xylene, 12% over 3 h, but with selectivity for terephthalic acid of 93%.
Later in the same year, Wei et al. [21] reported the synthesis and application of new cobalt(II) aza-crowned dihydroxamic acid complexes (Figure 2). The oxidation reaction occurred in a gas-liquid apparatus, where liquid p-xylene was oxidized by air bubbled into the mixture at a flow rate of 2.0 L min⁻¹, at 110 °C, for a maximum of 5 h; precipitates formed over longer durations. Interesting information about this type of catalyst can be extracted from the results obtained: on one hand, the size of the crown ether ring enhanced the catalytic performance of the complexes, creating a small space favorable to the substrate; on the other hand, the steric hindrance of the crown ring shielded the active metal center, leading to a configuration disadvantageous to the formation of active oxygen species and their interaction with the substrate. Catalytically, the best result was obtained after an induction period of 0.3 h and an oxidation time of 5 h: p-xylene conversion reached 84.8% with a p-toluic acid yield of 80.2%, representing selectivity of 94.6%.
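These three figures are internally consistent: throughout this review, yield, conversion, and selectivity are related by yield = conversion × selectivity (all expressed as fractions of the substrate fed). A minimal sketch of that bookkeeping, checked against the Wei et al. numbers above, follows; the function name is ours, introduced only for illustration.

```python
def yield_from(conversion: float, selectivity: float) -> float:
    """Product yield as a fraction of substrate fed.

    conversion  -- fraction of substrate consumed
    selectivity -- fraction of consumed substrate ending up as the product
    """
    return conversion * selectivity

# Figures reported by Wei et al. for p-xylene oxidation:
conversion = 0.848    # 84.8 % p-xylene conversion
selectivity = 0.946   # 94.6 % selectivity for p-toluic acid

y = yield_from(conversion, selectivity)
print(f"p-toluic acid yield: {y:.1%}")   # ~80.2 %, matching the reported value
```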
In 2016, Wang and Tong [22] reported the production of p-xylene and terephthalic acid through the bio-based conversion of isoprene and acrolein. During the production of p-xylene, an oxidized product was obtained: 4-methylbenzaldehyde. Since this "by-product" lies on the reaction route to terephthalic acid, its oxidation was investigated. The reaction could have been performed using KMnO₄, but due to its toxicity and environmental impact, an alternative path was taken using cobalt(II) acetate, manganese(II) acetate, and NHPI in acetic acid, under oxygen, at reflux and atmospheric pressure. After 14 h of refluxing, a final TPA yield of 91% was obtained, and the overall yield of the whole biomass-based process reached 32%.

C-scorpionate complexes are known for their activity in alkane oxidation. In 2016, Mendes et al. [23] reported the use of an iron(II) C-scorpionate complex (Figure 3) for the oxidation of p-xylene at low temperatures and in the presence of 30% H₂O₂. The most impressive result occurred with 10 µmol of catalyst in acetonitrile, with nitric acid in a ratio [n(acid)/n(catalyst)] of 10, at 35 °C. After only 5 min, a total yield of 21.8% of oxygenated products was formed, representing a TOF of 1.3 × 10² h⁻¹. However, the main product was p-tolualdehyde, far from the final oxidation product, TPA (see Scheme 2). m- and o-xylene were also tested. The presence of nitric acid improved the yield of aldehyde formation, but the highest value was found for p-xylene, probably due to steric limitations associated with the ortho and meta positions. The mechanism for the reaction with this type of catalyst is thought to be a free-radical one, given the monofunctional product and the lack of ring hydroxylation (Scheme 2). The changes in the oxidation state of the metal center (+2/+3) detected by XPS also support the authors' hypothesis.

Figure 3. Iron(II) C-scorpionate complex structure. Adapted from [23].
Scheme 2. Proposed mechanism for the oxidation of p-xylene catalyzed by a C-scorpionate iron(II) complex. Adapted from [23].
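Turnover frequency (TOF) values such as the 1.3 × 10² h⁻¹ reported by Mendes et al. are computed as moles of product formed per mole of catalyst per unit time. A minimal sketch of that arithmetic is given below; the substrate amount (0.5 mmol) is a hypothetical value chosen only so the numbers come out at the reported order of magnitude, not a figure taken from [23].

```python
# Illustrative TOF arithmetic. Catalyst loading, yield, and time are from the
# text above; the substrate amount is hypothetical (chosen for illustration).
n_catalyst = 10e-6        # mol (10 umol of the iron(II) C-scorpionate complex)
n_substrate = 0.5e-3      # mol of p-xylene -- assumed, not from [23]
total_yield = 0.218       # 21.8 % yield of oxygenated products
time_h = 5.0 / 60.0       # 5 min, expressed in hours

n_products = total_yield * n_substrate
tof = n_products / (n_catalyst * time_h)   # mol product / (mol catalyst * h)
print(f"TOF = {tof:.1e} per hour")          # ~1.3e+02, matching the reported value
```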
Inspired by enzymes such as methane monooxygenase and toluene 4-monooxygenase, Antunes and co-workers [24] reported non-heme iron(III) complexes bearing bis(2-pyridylmethyl)amine (BMPA) and derivatives (Figure 4) as catalysts for the selective oxidation of aromatic compounds by H₂O₂. The authors tested the most active catalyst for the oxidation of toluene, [Fe(BMPA)Cl₃], with several other aromatic compounds, including p-xylene, under the following conditions: 50 °C, 24 h, 0.77 mol L⁻¹ of substrate, 7.7 × 10⁻³ mol L⁻¹ of catalyst, and H₂O₂ (0.77 mol L⁻¹). The result obtained for p-xylene oxidation was in line with that obtained with toluene, the main products of the two reactions being, respectively, 2,5-dimethyl-2,5-cyclohexen-1,4-dione and cresol (mainly o-cresol). This shows that oxidation occurs preferentially on the aromatic ring rather than on the methyl groups. The authors explained that such a result can be associated with the presence of hydroxyl radicals, formed via a radical mechanism through the autoxidation process, with a highly electrophilic oxo-metal transient species reacting with the arene π-system. Another interesting aspect of this study is that, for toluene oxidation, an increase in temperature (25 to 50 °C) increased the selectivity for benzaldehyde, meaning that the selectivity for methyl oxidation increased. Since only a study at 50 °C was performed for p-xylene, it would be interesting to compare the result at lower and higher temperatures to see whether the same behavior is observed. In the end, p-xylene oxidation led to a total yield of 16% with 42% selectivity for the main product, resulting from oxidation of the aromatic ring. The other complexes (Figure 4B-D) were not tested for the oxidation of p-xylene, only for the oxidation of toluene, whereby the authors obtained similar or lower yields compared to the first complex.
In the same line of work, Kwong et al. [25] reported the application of an osmium(VI) nitride complex (Figure 5) for alkylbenzene oxidation with H₂O₂ and obtained similar results. The reaction was performed at a lower temperature, 23 °C, with 62.5 mM H₂O₂ (aq. 30% solution), 0.625 mM of the catalyst, and 1.25 M p-xylene in a CH₂Cl₂/CH₃CO₂H mixture (5:2 v/v). After 25 min, a 98% yield based on H₂O₂ was obtained, with the main products being phenols at 97% selectivity. This occurred even though the aromatic C-H bond has a higher bond dissociation energy (112 kcal mol⁻¹) than the benzylic C-H bond (95 kcal mol⁻¹). Other solvents were tested, and a major conclusion was that the presence of acetic acid enhanced the product yields, as happens in the AMOCO process. The experimental work was complemented by density functional theory (DFT) calculations, the results of which are discussed in Section 3.

Lindhorst et al. reported, in 2017 [26], a molecular iron-NHC complex (Figure 6) capable of catalyzing the oxidation of aromatic hydrocarbons, including p-xylene; the authors focused on the oxidation of the aromatic ring. Upon applying the optimized conditions (1 mol% catalyst vs. the substrate in acetonitrile, 0.25 equivalents of H₂O₂ (aq. 50%) relative to the substrate, at −10 °C for 1 h), the total conversion obtained was 12.6%, with phenol selectivity reaching 84.9%. Lower temperatures appeared to enhance the catalyst's stability (9 cycles at 20 °C vs. 13 cycles at −10 °C), whereas a high concentration of oxidant significantly decreased the lifetime of the catalyst; 2,5-DMP and 2,4-DMP were formed in the same order of magnitude.

Goulas et al. [27] screened the oxidation of di(hydroxymethyl)benzene (DHMB) to terephthalic acid using several metals supported on carbon: Au, Pt, Pd, Cu, and Ir. In the first test, performed at 90 °C and 1 bar of oxygen, Pd led to the highest conversion but low yields of the oxidation products; the same trend occurred with Pt, and both are reported to be good C-C bond scission catalysts. Different metals show different oxidative power: for example, gold and copper only oxidized DHMB to HMBA, while the Ir catalyst gave a 93% yield of oxidation products, including 55% TPA, at full conversion of DHMB. When comparing all the metals at similar conversions of between 60 and 75%, Ir presented a minimal loss of carbon. Under optimized conditions, the Ir/C catalyst promoted the full conversion of DHMB, leading to a maximum TPA yield of 76% after 20 h at 100 °C with 12 bar of O₂.
Under the same conditions, p-xylene did not react; this type of catalyst therefore appears to be more suitable for biomass-based processes, or for a later oxidation step within the full process. The authors explored the possible mechanism followed by the iridium catalyst (Scheme 3). By subjecting the reactants and intermediate oxidation products to the reaction conditions in the presence and absence of the catalyst, it was interestingly observed that DHMB was not converted in the absence of a catalyst, while HMBA and terephthalaldehyde were. The authors explained that the presence of a single aldehyde group activates the molecule for the radical reaction. Lastly, DHMB and 4-hydroxymethylbenzoic acid were inert in the absence of the iridium catalyst, reinforcing the radical hypothesis.

Scheme 3. Proposed mechanism for the oxidation of 1,4-dihydroxymethylbenzene. Adapted from [27].
In the same year, Pan et al. [28] studied the catalytic oxidation of p-xylene to TPA by ozone in the presence of a cobalt catalyst. First, different cobalt salts were tested under the following conditions: 110 °C, 0.10 mol of catalyst and 0.545 mmol of KBr per mol of p-xylene, 15 mg/L of ozone, a gas flow rate of 0.8 L/min, a duration of 6 h, and glacial acetic acid as the solvent. Cobalt chloride hexahydrate achieved conversion of ca. 50%, which increased to 70% when cobalt acetate tetrahydrate was used. The best result among the salts was obtained with anhydrous cobalt acetate, allowing almost full conversion (97%) of p-xylene with high selectivity for TPA (82%); this shows that an anhydrous reaction medium is essential for the success of the reaction. Increasing the ozone concentration in the reaction medium did not affect the selectivity distribution of the products; the conversion of p-xylene, on the other hand, increased linearly with ozone concentration. After further optimization of the catalyst and ozone concentrations and the temperature, the best conditions attained were: 80 °C, 0.10 mol of catalyst per mol of p-xylene, an ozone concentration of 63 mg/L, a gas flow rate of 0.8 L/min, a duration of 6 h, and glacial acetic acid (17 mol per mol of p-xylene) as the solvent. A p-xylene conversion of 76% was attained with 84% selectivity for TPA, and the conversion could be increased to almost full (96%) by adding KBr.
New polynuclear Cu(II) complexes derived from the aroylhydrazone N'-(di(pyridin-2-yl)methylene)pyrazine-2-carbohydrazide were reported by Sutradhar et al. in 2019 [29] as catalysts for the oxidation of p-xylene under mild conditions. Upon applying microwave irradiation (5 W, 3 h) or conventional heating (6 h), with acetonitrile as the solvent, H₂O₂ (aq. 30%, 2:1 oxidant:substrate), and 2 mol% of catalyst relative to the substrate, methyl benzyl alcohol and p-tolualdehyde were the main products; however, no further oxygenated products, such as 4-CBA or terephthalic acid, were detected. Under conventional heating and using the complex [Cu₂(µ-1κN³
Zhang et al., in 2019 [30], screened the metal-free oxidation of p-xylene to p-toluic acid using N-alkyl pyridinium salts, anticipating mild conditions, better interaction with the organic substrate, and, through the use of a purely organic catalyst, avoidance of the over-oxidation caused by metal species. The general conditions used were: 0.5 mmol of 1-benzyl-4-N,N-dimethylaminopyridinium salt (Figure 8) as a catalyst for 10 mmol of p-xylene heated at 160 °C, an O₂ pressure of 1.5 MPa, and a duration of 2 h. Before the reaction, 0.2 mmol of p-tolualdehyde was introduced as an initiator to reduce the induction period. The solvent-free reaction produced satisfactory results, with a maximum conversion of 52% and 86% selectivity towards p-toluic acid. The low conversion of p-xylene was probably due to the low solubility of the products and can be improved by adding a solvent. When several solvents were tested, the choice between acetic acid, acetonitrile, and DMF was clear: the highest conversion (88%) was obtained with acetonitrile, maintaining selectivity for p-toluic acid of 83%. While in the original paper the authors compared their results to some metal catalytic systems that did not achieve great results, as seen in this review, other metallic catalytic systems can achieve similar results when lower temperatures or shorter times are applied (see Table 1).

Another example of the use of ozone for the conversion of p-xylene to terephthalic acid was reported by Hwang et al. in 2019 [31]. This work presents a greener alternative to the AMOCO process with low energy demand, avoiding the formation of CO₂ and CH₃Br. Using a 100 W Hg lamp (200 mW cm⁻² at 310 nm), a p-xylene:acetonitrile:water mixture of 1:3:2 (pH = 4.5), and O₂ gas containing ~10% ozone at room temperature, an outstanding conversion of 98% was achieved with a TPA yield of 96%. In the end, the authors reported an E-factor, defined as the total mass of waste from the process divided by the total mass of product, of 0.118, while that of the AMOCO process varies between 3.14 and 10.14, with similar atom economy, carbon efficiency, and selectivity for TPA. The proposed mechanism, presented in Scheme 4, illustrates first the formation of the hydroxyl radical from the interaction of ozone and water; the presence of water plays an important role at the start of the reaction. The radical then abstracts a proton from the methyl group of p-xylene, initiating the radical chain that proceeds until the final product, TPA, is obtained.
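As a quick check of how this green metric works, the sketch below computes the E-factor for a hypothetical mass balance. The input masses are invented for illustration and are not data from [31]; only the target ratios (0.118, and 3.14-10.14 for AMOCO) come from the text above.

```python
def e_factor(total_waste_mass_kg: float, product_mass_kg: float) -> float:
    """E-factor = total mass of waste generated per unit mass of product."""
    return total_waste_mass_kg / product_mass_kg

# Hypothetical mass balance, for illustration only: 100 kg of product
# accompanied by 11.8 kg of total waste reproduces the reported E-factor
# of 0.118; the AMOCO benchmark of 3.14-10.14 would correspond to
# 314-1014 kg of waste per 100 kg of TPA.
print(e_factor(11.8, 100.0))    # 0.118
print(e_factor(314.0, 100.0))   # 3.14 (lower bound quoted for AMOCO)
```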
Jiang et al. [32] explored the photo-catalytic oxidation of p-xylene using substituted anthraquinones as catalysts. 2-Carboxyanthraquinone was the best catalyst in the study, being active under visible light (35 W tungsten-bromine lamp) and 1 atm of O₂ at room temperature, achieving 70.9% conversion and 88.2% selectivity for p-toluic acid after 12 h of reaction. When the reaction was performed under air instead of O₂, a small decrease in conversion and selectivity, to 65% and 67%, respectively, was observed. One interesting result was the addition of benzenesulfonic acid, which not only increased the conversion of p-xylene to 86.7%, but also allowed the reaction to progress further, achieving 27.2% terephthalic acid. Increasing the temperature to 50 °C reduced activity in both conversion and selectivity for TPA, owing to a decrease in the stability of the excited state of 2-carboxyanthraquinone. The mechanism presented for this catalyst (Scheme 5) combined previously obtained results with those from the literature and EPR spectra, showing that it is possible to excite anthraquinones with visible light. When this occurs, an excited state is generated from which photoelectrons can be rapidly transferred to oxygen to form radical anions.

Table 1 summarizes some of the reaction conditions, as well as the main products and yields, for the best systems reported with homogeneous catalysts (table footnotes: (a) HPLC yield; (b) isolated yield; (c) GC yield).
Heterogeneous Catalysts
The field of heterogeneous catalysis provides new information and highlights systems that can be efficient for the oxidation of p-xylene; for more background on this topic and earlier results, [1,14,19,33] are good starting points.
Qin et al. [34] focused on the production of terephthalaldehyde, a promising material for the pharmaceutical industry, among others. To achieve a safer process than the one currently in place, the authors reported the use of a fixed-bed reactor loaded with 0.5 g of an Fe-Mo-W catalyst, with p-xylene vaporized at 0.01 mL/min at 300 °C. The oxidation of p-xylene was carried out using air as the oxidant, with a liquid space velocity of 1.2 mL g_cat⁻¹ h⁻¹, at 500 °C. During the optimization of the multiple parameters, some important conclusions could be drawn. For example, at the optimal temperature, the variations in enthalpy and Gibbs free energy of the gas-phase reaction were both below zero. In addition, the reaction is highly exothermic, meaning it is favored by low temperatures; a compromise must therefore be reached that activates the p-xylene molecules to react while avoiding decomposition into CO₂ and water. In the end, the Fe-Mo-W catalyst calcined at 550 °C achieved a p-xylene conversion of 74% with a 73.6% yield of terephthalaldehyde and stability of about 50 h.
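The quoted liquid space velocity follows directly from the feed rate and the catalyst mass; a minimal check of that arithmetic is sketched below (the function name is ours, for illustration).

```python
def liquid_space_velocity(feed_mL_per_min: float, catalyst_mass_g: float) -> float:
    """Liquid space velocity in mL per gram of catalyst per hour."""
    return feed_mL_per_min * 60.0 / catalyst_mass_g

# Values quoted by Qin et al.: 0.01 mL/min of p-xylene over 0.5 g of Fe-Mo-W.
lsv = liquid_space_velocity(0.01, 0.5)
print(f"{lsv:.1f} mL per g of catalyst per hour")   # 1.2, matching the reported value
```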
Lee and co-workers reported [35] the use of supercritical carbon dioxide for the synthesis of terephthalic acid via the partial oxidation of p-xylene in the presence of CoBr₂. The authors screened the reaction at temperatures between 50 and 165 °C and for up to 24 h of reaction time; as in previous reports, a large portion of the p-xylene was lost during the process, probably through conversion into CO₂. In this case, CO₂ was already used in excess, so it was not possible to quantify the loss. The best result reported in this work was a terephthalic acid yield of 33.7% after 24 h at 150 °C and 17 MPa using supercritical CO₂ and CoBr₂. This is a long process compared to other supercritical processes, such as that of Dunn et al., which, at 280 °C and 24 MPa in the presence of supercritical water and MnBr₂, produced a TPA yield of 57% after 7.5 min. The poor results obtained could be mainly due to the insolubility of CoBr₂ in scCO₂: as the catalyst load increased (giving a higher available surface area), the yield of terephthalic acid increased, and other cobalt catalysts such as cobalt(II) stearate and cobalt(II) hexafluoroacetylacetonate, which are homogeneous in the system, exhibited higher yields under the same conditions as CoBr₂. Another important factor studied was the influence of acetic acid or water in the reaction medium: just 1 mL of additive per mL of PX increased the yield from 8.4% to 29.7% for acetic acid and to 23.7% for water. This increase could be explained by the improved homogeneity of the system, with PX and the catalyst more fully dissolved in scCO₂. While this system did not improve on previously obtained results, it gives some insight into how to progress in the field of the supercritical catalysis of p-xylene and what to consider when planning such experiments.
Li et al. prepared a metal-doped mesoporous material, Cu-MCM-41, to be used as a catalyst to produce 2,5-dihydroxyterephthalic acid in a one-step reaction starting from p-xylene. 2,5-Dihydroxyterephthalic acid is a derivative of terephthalic acid used as an intermediate in the pharmaceutical industry and in batteries, and as a linker in the synthesis of MOFs [36][37][38][39][40][41]. The synthesis of this compound usually requires harsh conditions, such as high temperatures and pressures, multiple steps, and reactants unfriendly to both the environment and health; the search for greener and milder conditions is therefore intense. By using the mesoporous material MCM-41 doped with copper at a ratio of Cu:Si = 1:100, it was possible to achieve a moderate p-xylene conversion of 21.7% with relatively high selectivity of 73% for the product in question after 5 h of reaction at 80 °C, using 30% H₂O₂ as the oxidant and an acetonitrile/acetic acid mixture in a 7:3 volume ratio [42].
The use of C-scorpionate complexes as catalysts for xylene oxidation was also explored in 2017 by Wang et al. [43], where vanadium(IV) C-scorpionate complexes (Figure 9) were supported on functionalized carbon nanotubes and tested as catalysts for this reaction. In this work, again, the synergy of microwave irradiation with the reaction was remarkable: under otherwise identical conditions, a 31% yield of p-toluic acid was obtained using microwave irradiation, where conventional heating led to 7.5%, showing that microwave irradiation provides very fast initial heating and enhances the reaction rates. As shown before for C-scorpionate catalysts, the addition of an acid additive such as nitric acid promoted the reaction. The immobilized catalysts proved stable for up to six catalytic cycles, and under the best conditions (5 h of microwave irradiation, 80 °C, solvent-free, TBHP 70%, an oxidant:substrate ratio of 2:1, 3.2 × 10⁻² mol% of catalyst vs. the substrate, and nitric acid (10:1 additive:catalyst)), they gave a 43% yield of p-toluic acid. Since this is a solvent- and KBr-free system, it already presents some advantages.
Samanta and Srivastava [44] explored the use of an FeVO₄-graphitic carbon nitride (g-C₃N₄) nanocomposite for the oxidation of various substrates, including cycloalkanes and p-xylene. In their work, they compared the catalysts using conventional heating (catalyst (50 mg), 60 °C, 4 h, H₂O₂:substrate = 2.5) and photocatalysis (catalyst (50 mg), 20-25 °C, 4 h, H₂O₂:substrate = 2.5, 250 W high-pressure visible lamp, >420 nm). Overall, the nanocomposite with an FeVO₄:g-C₃N₄ content of 37 wt% was found to be the more active under the conditions of the study and, for p-xylene oxidation, achieved a 21.3% yield of 4-methylbenzaldehyde with conventional heating and 34.4% with the photocatalytic system. Although the photocatalytic system presents an improvement, it is not as remarkable as for the other substrates, where the difference between the two systems is large, reaching double or even triple the yield.
Nicolae et al. reported [45] manganese-iron oxides (Mn/Fe/O), prepared by hydrothermal treatment (HT) and citrate (CIT) methods, as heterogeneous catalysts, and applied them to the oxidation of p-xylene under green conditions. The choice of oxidant was important: upon comparing the activity of both catalysts, Mn/Fe/O_HT and Mn/Fe/O_CIT, with the different oxidants, it was observed that both were inactive when molecular oxygen or H₂O₂ was used, with a maximum conversion of 4%. On the other hand, when TBHP was used, the conversion increased sharply, to 85% and 77% for Mn/Fe/O_CIT and Mn/Fe/O_HT, respectively. The authors explained this by noting that TBHP is more stable than H₂O₂ and produces more stable radical species, while molecular oxygen, a very stable molecule, is a weaker oxidant that often requires higher temperatures to become reactive. The quantity of oxidant is also an important factor; as stated before, some catalytic systems are influenced by the presence of water, and in this case, increasing the amount of oxidant (screened from a ratio of 1:4 to 1:12) did not increase conversion or selectivity for the final oxidation product. When the effect of temperature was evaluated, below 80 °C there was a significant decrease in conversion, from 85% to 57%, when Mn/Fe/O_CIT was used, while above 80 °C the reaction progressed further, forming 4-CBA. Although the authors consider this a negative point, their focus being p-toluic acid, it could indicate that this catalyst could actively produce TPA. Lastly, recyclability tests showed a loss of activity of around 15% between the first and second cycles, but no further investigation was conducted; it would have been interesting to see whether the catalyst continued to lose activity or stabilized at some point, and after how many cycles. In the end, under the best conditions (2.5 mmol of PX, 50 mg of catalyst, 2 mL of acetonitrile, p-xylene/TBHP = 1:4, 24 h, 100 °C), Mn/Fe/O_CIT was shown to be the more active catalyst, converting 98% of the substrate with a selectivity of 95% for p-toluic acid.
Wang and co-workers [46] reported the oxidation of p-xylene, this time to 4-hydroxymethylbenzoic acid (4-HMBA), under mild conditions using a Cu-MOF as the catalyst. Previously, the same authors had reported the use of Cu-MCM-41 as a catalyst to produce 2,5-dihydroxyterephthalic acid in a one-step reaction. This time the reaction took place at 30 °C using 30% H₂O₂ as the oxidant, and in the first temperature optimization with this catalyst it was observed that higher temperatures did not favor the conversion or selectivity of the reaction: copper ions were detected in the liquid phase at high temperatures, indicating dissociation of the MOF, which led to over-oxidation of 4-HMBA and to decomposition of the H₂O₂. At the lower temperature, selectivity for 4-HMBA reached 99.3% with a p-xylene conversion of 28.6%. After optimization of the amounts of oxidant and solvent and of the reaction time, the authors reported a conversion of 85.5% with a high selectivity of 99.2% for 4-HMBA, using acetonitrile as the solvent at 30 °C, for 5 h, with 30 mg of Cu-MOF and an oxidant:substrate ratio of 5.8:8.1 mmol. Not only were remarkable results achieved at low temperatures and short reaction times, but the catalyst also maintained its activity over five cycles without any loss.
In 2019, Karakhanov et al. [47] doped a hierarchical mesoporous MCM-41/halloysite nanotube composite with a bimetallic MnCo catalyst (ratio 1:10) and screened its activity for the oxidation of p-xylene to TPA under the conditions of the AMOCO process. The authors applied the new supported catalyst using acetic acid as the solvent, in the presence of KBr, with molecular O₂ (20 atm) as the oxidant, for 3 h at 200 °C; they achieved almost full conversion of p-xylene, with the main product being the desired TPA at over 95% selectivity. When the homogeneous system (using only the acetate salts of the metals in the same proportion) was compared to the heterogeneous one tested here, the TOF increased sharply, from 37 to 142 h⁻¹. The authors also provided insight into the importance of the reaction conditions. For example, a decrease in temperature to 150 °C led to a significant decrease in conversion (99% to 37%) and, more importantly, in the yield of TPA, which at lower temperatures did not reach 1%. The influence of O₂ pressure and the presence of KBr were also studied: the oxygen pressure is relevant to the TPA yield, whereas the presence of KBr is important to the overall reaction (a drop to 2.2% p-xylene conversion was observed in its absence). Although the catalyst might look promising for industrial processes, it exhibited a serious metal-leaching problem. After the first cycle, the cobalt content decreased from 1.29% w/w to 0.22% w/w in the second cycle, while manganese, already present in a small proportion, was leached from 0.15% to 0.03% after the second cycle. The authors proposed that leaching occurred through the dissolution of the metals by the solvent and hydrobromic acid in the reaction medium.
Trandafir et al. disclosed an alternative and renewable raw material for producing terephthalic acid. In their study, manganese-cobalt mixed oxide catalysts, prepared via the co-precipitation and citrate methods, were applied to the selective liquid-phase oxidation of p-cymene to TPA (Scheme 6). While the oxidation of p-xylene to TPA is very straightforward, starting from p-cymene is more complex and involves parallel and consecutive reactions. These can be divided into two groups (low oxidation products and advanced oxidation products) plus the final product, TPA. All catalysts in this study gave full conversion of p-cymene but with different product distributions. The catalysts prepared via co-precipitation presented the best overall distribution, with higher amounts of the advanced oxidation products and TPA compared to those from the citrate method. The authors point to the acidity of the catalyst as an important factor in an autoxidation mechanism. Studies on the autoxidation of p-xylene to benzaldehyde and the oxidation of cyclohexanone to adipic acid with Mn/Co catalysts have shown that better activity is linked to stronger acidity [48-51]. The best result was obtained with 1Mn2Co_pp, the catalyst prepared via co-precipitation with a Mn:Co metal ratio of 1:2, giving an advanced oxidation product yield of 67% and a 10% yield of TPA (reaction performed at 140 °C, 20 atm of O2, no solvent, O2/p-cymene = 6:1, 2 mmol of p-cymene, 16.7 mg of catalyst). Finally, when the stability of the catalyst was tested over three consecutive runs, the authors observed a slight decrease in p-cymene conversion, 9% in total, with selectivity for the main products remaining constant [52].
Scheme 6. Reaction pathway for the oxidation of p-cymene, with low oxidation products in the orange box and high oxidation products in the green box. Adapted from [52].
Xu et al. [53] tested the efficacy of the MOF cobalt-benzenetricarboxylate (Co-BTC) used in conjunction with NHPI, since the latter is a very effective catalyst and initiator for oxidation under mild conditions. Neither catalyst alone could perform the oxidation of p-xylene at 100 °C, and after 12 h no product was detected (using acetonitrile as the solvent and 3 MPa of O2); however, when used in combination, the conversion of p-xylene reached 57%, with p-toluic acid as the main product. On increasing the reaction temperature to 150 °C, the system with only NHPI reached 93.5% conversion with 67.5% TPA selectivity, while Co-BTC alone could not achieve a conversion higher than 3%. Again, when both catalysts were used in the same reaction, the system achieved full conversion of p-xylene, and the TPA selectivity increased from 67.5% to 96.2%. Finally, the stability of Co-BTC was tested: after the reaction, the MOF was separated, washed, dried, and applied in a new cycle. After two cycles, Co-BTC maintained the same activity as before, and SEM images showed the catalyst remained nearly as spherical as it had been previously. The proposed mechanism, presented in Scheme 7, showcases, in the first equation, the common oxidation of hydrocarbons. PX reacts to form a p-methylbenzyl radical, which has strong resonance stabilization (Equation (2)). As stated before, p-toluic acid is even more difficult to oxidize due to the presence of the electron-withdrawing group, but the presence of NHPI as an initiator together with the catalyst allowed this difficulty to be overcome.
Scheme 7. The mechanism proposed for the oxidation of p-xylene catalyzed by Co-BTC/NHPI; X = CH3, COOH. Adapted from [53].
In one of the most recent publications on this theme, in 2022, Wang et al. [54] applied α-Al2O3-supported VMOx (M = Ag, Mn, Fe, Co, etc.) as catalysts for the oxidation of p-xylene. The catalysts were prepared via incipient wetness impregnation with a vanadium loading of 5.4 wt% and tested under the same conditions (295 °C, 1 vol% p-xylene, p-xylene/O2/He = 1:3:96, Vtotal = 40 mL/min, GHSV = 24,000). Of all the metals, the VAgOx mixture presented the best reaction rate (82.3 mg g−1 h−1) and selectivity towards p-methylbenzaldehyde, and the optimum nAg/nV ratio was located at ca. 0.4-0.5. Table 2 summarizes some of the best reaction conditions, as well as the main products and yields, for the different heterogeneous catalytic systems reported for PX oxidation.
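Assuming the GHSV is expressed in h−1, as is conventional (the extracted text omits the unit), the quoted flow rate implies a very small catalyst bed, since GHSV is the total volumetric gas flow divided by the bed volume:

```latex
V_{\mathrm{bed}} = \frac{\dot{V}_{\mathrm{total}}}{\mathrm{GHSV}}
                 = \frac{40\ \mathrm{mL\,min^{-1}} \times 60\ \mathrm{min\,h^{-1}}}{24{,}000\ \mathrm{h^{-1}}}
                 = 0.1\ \mathrm{mL}
```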
The oxidation of alkylarenes is an extensive and complex topic that can sometimes provide important information on the oxidation of p-xylene, since it is one of the many substrates tested. Although the majority of results and studies focus on the production of alcohols and aldehydes, we wish to highlight a few interesting results that could represent important advances in the production of TPA.
In 2019, Sarma et al. [55] tested zinc oxide loaded with copper as a photocatalyst at room temperature in the presence of oxygen. When p-xylene was tested, the product obtained was terephthalaldehyde (the two methyl groups were converted to aldehydes simultaneously). Additionally, 4-methylbenzyl alcohol and p-tolualdehyde were tested, and the catalyst was able to convert the methyl group to an aldehyde, with yields ranging from 66% to 82% in a maximum reaction time of 24 h. In this way, it is possible to avoid the formation of p-toluic acid by maintaining the more reactive group present (alcohol or benzaldehyde).
[Table 2. Best reaction conditions, main products, and yields for the heterogeneous catalytic systems reported for PX oxidation; footnotes: (a) GC yield; (b) HPLC yield. The table content is not recoverable from the extracted text.]
In a similar way, Dutta et al. [56] reported, in 2020, that a ruthenium(II) complex achieved interesting results at 60 °C in the presence of TBHP. In the oxidation of 4-methylbenzyl alcohol, it was possible to oxidize the methyl group to an aldehyde (with a yield of 65% after 3 h) while maintaining the alcohol on the aromatic ring, thus avoiding the formation of p-toluic acid. In the same study, p-xylene was also tested, and both methyl groups were oxidized to aldehydes (after 4 h, with a yield of 74%), again avoiding the formation of p-toluic acid.
Lastly, in 2022, Zhang et al. [57] used Pd/C as a catalyst at 80 °C in dimethylacetamide with a small amount of water and NaOH to perform the selective oxidation of 4-methylbenzyl alcohol, achieving a 62% yield of 4-(hydroxymethyl)benzaldehyde after 48 h.
Overall, the study of alkylarenes is extensive, with several review articles [58-62] available for a wider understanding of this topic, and it could offer a route to improving the production of TPA.
Biocatalysis
The topic of biocatalysis is significant and extensive. It provides an alternative to other methods and helps to understand the mechanistic pathways involved in reactions [63,64].
Luo and Lee [65] approached the transformation of p-xylene into TPA in a different way. While most research focuses on metallic catalysts or changes in the setup, these authors sought a bio-based approach to the TPA production problem. Several process developments related to biomass-derived p-xylene production allowed the authors to search for a microbial strain that could convert p-xylene into TPA, and one was found in an Escherichia coli species. The authors divided the system into two parts: first, the conversion of p-xylene into p-toluic acid, and then into TPA. With some genetic modification and after optimization of the process, the authors reported a maximum TPA production rate of 0.748 g L−1 h−1, i.e., 4.5 mmol L−1 h−1.
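The two productivity figures are mutually consistent; dividing the mass-based rate by the molar mass of TPA (166.13 g mol−1) recovers the molar rate:

```latex
\frac{0.748\ \mathrm{g\,L^{-1}\,h^{-1}}}{166.13\ \mathrm{g\,mol^{-1}}}
  \approx 4.5\times10^{-3}\ \mathrm{mol\,L^{-1}\,h^{-1}}
  = 4.5\ \mathrm{mmol\,L^{-1}\,h^{-1}}
```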
Computational Studies and Simulations
In 2016, Li et al. simulated and improved a dynamic model of a purification process (Figure 10) by applying the IFSH methodology, taking into account its characteristics and catalyst deactivation, and including some energy integration. First, the authors simulated steady-state and dynamic models with kinetics adjustments resulting from previous work [66], showing that, for the steady state, the new kinetic parameters fit well when compared to the real plant. The dynamic simulations used the real size of the plant and were also compared to the previous dynamic model by Azarpour and Zahedi [67], including the deactivation equation and values for the catalyst due to sintering. The total simulation time was 8000 h, but real data were only available for the first 5000 h due to the shutdown of the plant; during that period, the simulation overlapped very well with the experimental data. Some key parts of the simulation, such as the plant control objectives, were fixed, guaranteeing normal functioning of the unit (for example, avoiding cracking of the catalyst bed, maintaining the reactor temperature in the range of the reaction kinetics under consideration, and, most importantly, keeping the content of 4-CBA below 25 ppm). In the steady-state model, temperature exhibited a larger steady-state gain on the 4-CBA fraction, but owing to the restriction imposed on the reactor by catalyst deactivation, the hydrogen feed was selected as the main manipulated variable. As the objective function, a profit equation considering the production volume as well as material and energy costs was put in place. After testing the influence of control and implementing PID controllers and valves, with their gains and delays, including a module functioning as quality control by receiving information on the exit concentration of 4-CBA, a performance assessment of the plant was conducted (Figure 11). It included the important parameters of settling time, deviation from the production target, and final profit; in the end, sixteen main control loops were involved in achieving a good control system, distinguished from the industrial one by its quality controller for 4-CBA. Three types of disturbance were tested (the 4-CTA feed flow, the concentration of the 4-CTA slurry, and the 4-CTA flow), with different impacts on the system. The difference before and after the implementation of the control system was evident; for example, a disturbance in the 4-CTA feed flow would deactivate the catalyst at 3630 or 4250 h (depending on the disturbance), whereas after the implementation this only occurred at 5520 or 5890 h. A similarly delayed effect was seen with disturbances of the other variables, with different degrees of intensity, showing improvement of the system, with increased profits, controlled purification of TPA, and an increased catalyst lifetime [68].
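The controllers referred to above are standard PID loops; the following minimal discrete-time sketch is an illustration only (the plant model, gains, and tuning of the cited study are not reproduced), showing the structure of such a loop, for example one driving a measured 4-CBA concentration toward its setpoint.

```python
# Minimal discrete-time PID controller (illustrative gains; not the tuned
# controllers from the cited study).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: drive a measured 4-CBA content toward a 25 ppm target by
# adjusting a manipulated variable; the plant here is a toy first-order model.
controller = PID(kp=0.5, ki=0.05, kd=0.0, dt=1.0)
cba_ppm = 40.0
for _ in range(20):
    cba_ppm += 0.8 * controller.step(25.0, cba_ppm)
print(f"4-CBA after 20 steps: {cba_ppm:.1f} ppm")
```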
As reported above, in 2017, Kwong et al. [25] studied the oxidation of p-xylene using an osmium(VI) catalyst, and to gain more insight into this particular catalytic system, DFT calculations were performed (Scheme 8). The catalytic cycle involves the binding of H2O2 to the Os(VI) center, which, after a change in geometry, leads to heterolytic O-O bond cleavage, producing water and an Os(VIII) nitride oxo species. This new species is highly reactive and, through different modes of attack on p-xylene, leads to different products. Attack at the ortho position leads to the formation of 2,5-dimethylphenol; attack at the ipso position is also possible, but its transition state is less stable (18.1 vs. 17.1 kcal/mol), which agrees with the selectivity trend obtained (85.3 vs. 11.9). Attack at the methyl group is even less favorable, with a barrier of 19.5 kcal/mol, corroborating the experimental results. Interestingly, the DFT calculations showed that side-chain hydroxylation can occur via hydride transfer, which directly forms a hydroxylated product without any rebound barrier.
Scheme 8. DFT-derived reaction mechanism for the oxidation of p-xylene. Adapted from [25].
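The connection drawn above between the 1.0 kcal/mol barrier difference and the ortho/ipso selectivity can be made quantitative with transition-state theory; assuming, purely for illustration, T ≈ 298 K, the ratio of the competing rates is

```latex
\frac{k_{\mathrm{ortho}}}{k_{\mathrm{ipso}}}
  = \exp\!\left(\frac{\Delta\Delta G^{\ddagger}}{RT}\right)
  = \exp\!\left(\frac{1.0\ \mathrm{kcal\,mol^{-1}}}{(1.987\times10^{-3}\ \mathrm{kcal\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})}\right)
  \approx 5.4
```

which is of the same order as the observed product ratio (85.3:11.9 ≈ 7.2).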
Several authors report the liquid-phase oxidation of p-xylene as a first-order reaction, whereas gas-phase oxygen follows zero-order kinetics. To study the influence of oxygen partial pressure on the liquid-phase oxidation kinetics of p-xylene, Shang et al., in 2014, investigated the reaction kinetics under conditions mimicking industrial operation. Two sets of experiments were run: a batch system and a continuous system. The kinetic mechanism for the oxidation of p-xylene involved Co(II), Mn(II), and Br−: in acetic acid, Co(II) is oxidized to Co(III), which then oxidizes Mn(II), ultimately generating a bromine free radical. The bromine free radical abstracts the alpha-H atom from the hydrogen-containing groups on the benzene ring, producing an alkyl free radical that, with oxygen, forms an alkoxyl free radical. In previous works, this step was assumed to be instantaneous because of the high oxygen partial pressure, but since this parameter was now under study, it was treated as a rate-determining step. The rest of the mechanism remains unchanged and derives from the other publications and examples presented above. The model equations contain 44 parameters to be estimated, and such a high number can lead to difficulty and overfitting, making the estimated parameters unreliable. It was therefore necessary to reduce this number by simplifying some elementary steps, for example by treating the reactions of ROO− and ROCOOO− in abstracting the alpha-H atom of a given substrate as identical; additionally, all substrates were given approximately equal initiation rate constants, and the rate constants for the addition of oxygen to the different alkyl and acyl radicals were taken to be the same. Starting with the batch experiments, it was observed that when the oxygen volume fraction in the outlet was decreased to 3%, the kinetics obtained differed greatly from those at higher values, showing the importance and effect of oxygen partial pressure on the kinetics of the reaction. When the authors compared their model with previously published models of different orders, those models could not accurately represent the kinetics at 3%, mainly because they did not consider the influence of oxygen partial pressure. To further validate the model, continuous experiments were then carried out, in which the gaseous and liquid reactants were fed into the reactor continuously. The yield of TPA was calculated as a function of the oxygen volume fraction, and the simulation was able to predict the experimental results. Moreover, when the volume fraction decreased from 5%, the yield of TPA started to decrease gradually, and the model could represent that change. From this report, a kinetic model in which the oxygen partial pressure is taken into account was created, and it was shown that there exists a threshold pressure that influences p-xylene oxidation kinetics [69].
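To give a flavor of the kind of lumped kinetic model being fitted (a toy sketch only, with invented rate constants, not the authors' 44-parameter radical-chain model), a consecutive oxidation chain with an explicit oxygen dependence can be integrated numerically:

```python
# Toy consecutive-oxidation chain PX -> p-tolualdehyde -> p-toluic acid ->
# 4-CBA -> TPA, each step first order in the organic species and in dissolved
# O2 (hypothetical rate constants; not the fitted values from the study).
import numpy as np
from scipy.integrate import solve_ivp

k = [0.8, 0.5, 0.1, 0.3]   # L mol^-1 min^-1, illustrative
c_o2 = 0.01                # mol L^-1, dissolved O2 held constant

def rhs(t, c):
    r = [k[i] * c[i] * c_o2 for i in range(4)]            # step rates
    return [-r[0], r[0] - r[1], r[1] - r[2], r[2] - r[3], r[3]]

sol = solve_ivp(rhs, (0.0, 600.0), [1.0, 0.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 600.0, 7))
print("TPA profile (mol L^-1):", np.round(sol.y[4], 3))
```

Lowering c_o2 in such a model slows every step, mimicking the oxygen-partial-pressure threshold the authors identified.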
The same authors later also modeled CO 2 -assisted p-xylene oxidation using batch experiments. The kinetic model was the same as the previous one, including the simplifications, except for the addition of CO 2 , which was proposed to form a peroxocarbonate that may participate in the free-radical chain by oxidizing Co(II) to Co(III). From the experiments conducted, it was concluded that the presence of CO 2 enhances the reaction, achieving the best results in a shorter time compared to the reaction performed with air, while the range of temperatures studied did not influence the rate constants significantly. The last parameter was the quantity of catalyst, and in this case only an increase in HBr was shown to affect the rate constant, which doubled when twice the quantity of HBr was used; this indicates that in the presence of CO 2 , the interconversion between Co(II)/Co(III) and Mn(II)/Mn(III), as well as the steps in the oxidation of HBr by Mn(III), are fast enough that the concentration of the bromine free radical is roughly equal to that of HBr. In the end, a kinetic model for CO 2 -assisted PX oxidation was developed, capable of calculating values for the reactants, intermediates, and desired products that matched the experimental data well [70].
Reactor and Purification Step
The purification of terephthalic acid is an important process in the production of this compound. As stated above, during the process of production, TPA is contaminated with 4-CBA at a mass fraction of around 3000 ppm, whereas commercial TPA must contain less than 25 ppm. To treat and reduce this quantity, crude TPA is sent after synthesis for hydropurification, a highly energy-consuming (high temperatures and pressure) and catalytically costly process (deactivation of the palladium catalyst due to Pd sintering, poisoning by sulfur and other elements, mechanical destruction, corrosion, and fouling) [67,71]. During this process, 4-CBA is transformed into TPA as described in Scheme 9, while a side reaction of decarbonylation to benzoic acid can also occur (Scheme 10) [12,72].
Scheme 9. Reaction pathway and products obtained for the hydrogenation of 4-CBA.
In 2019, while researching the oxidation of p-xylene to TPA, Hwang et al. [31] proposed a different purification process for the removal of p-toluic acid and 4-CBA from the TPA formed. In this case, the reaction, taking place in the presence of ozone and a UV lamp, would precipitate the TPA, since it is less soluble than the other two compounds in the p-xylene-acetonitrile-water (1:3:2) solution. The first precipitate formed during their study, which contained 32 mol% of p-toluic acid and 160 ppm of 4-CBA, was washed three times with an acetonitrile-water solution (3:2, pH = 4.5), dissolving and removing the p-toluic acid and 4-CBA from the precipitated TPA. In the end, only 3 mol% of p-toluic acid remained and the 4-CBA decreased to 4 ppm (lower than achieved by the current industrial hydropurification treatment), without needing a high energy input or producing large volumes of wastewater, since the washed acetonitrile-water solution can be further exposed to the same conditions to convert the remaining p-toluic acid and 4-CBA to precipitated TPA.
In 2020, Kuznetsova et al. [73] studied the effect of additives during an oxidation process performed to obtain a pure form of TPA with a low content of 4-CBA. During the AMOCO process, TPA, which has low solubility in acetic acid, precipitates together with intermediates that also have low solubilities, such as p-TA and 4-CBA; this results in a crude TPA with around 2% of contaminants that inhibit the polycondensation of TPA. Pure TPA should contain no more than 100 ppm of 4-CBA, and to achieve this, the crude TPA is subjected to a hydrotreatment purification step at high temperature and pressure, with a palladium catalyst. The authors sought an alternative to this two-step process by mimicking the reaction conditions of the AMOCO process and, in the second step, increasing the solubility of TPA in acetic acid by adding 1-butyl-3-methylimidazolium bromide (BMIM Br) or NH 4 OAc, thus making the intermediates easier to oxidize further to TPA. Adding BMIM Br or NH 4 OAc decreased the acidity of the medium, meaning this step had to be performed at a higher temperature and catalyst concentration to compensate for this "acidic loss". In the end, the authors reported that either of these two compounds can be used to purify TPA. While NH 4 OAc required a lower temperature, BMIM Br produced a purer final product with an increase of only 20 °C; most importantly, both gave pure TPA without any loss of the originally produced yield and without introducing an expensive catalyst such as palladium.
Poliakoff and co-workers [74] studied the selective oxidation of p-xylene in sub-and supercritical water and the effects of geometry and mixing, since it was previously shown to be an important factor in the synthesis of metal oxide nanoparticles in supercritical water. The setup consisted of a coiled pre-heater at a supercritical temperature, which allowed for the total decomposition of H 2 O 2 into a mixture of SC water and O 2 ; CuBr 2 and NH 4 Br as catalysts; and downstream of the reactor, NaOH, which neutralized CO 2 caused by burning and prevented TPA from precipitating. In the study, two types of reactor were used: an opposed-flow reactor and a tubular reactor, depicted in Figure 12.
The opposed-flow reactor consists of a PX pipe that is concentric with the catalyst pipe, and both point upwards, where they meet the downward flowing stream of heated H 2 O + O 2 . All reactants should be efficiently mixed in the middle section of the reactor, after which they flow upwards to the outer section (concentric tube configuration) until the NaOH quench. In the tubular reactor, all reactants and solvents are mixed at the top of the reactor, which then lets the mixture flow downwards, where the NaOH quench solution is rapidly cooled at the bottom. The residence times for the opposed-flow reactor were not calculated accurately, but the authors considered that they should be around 2.3-3.3 s at 380 • C and 7.4-11.9 s at 330 • C, whereas in the tubular reactor, they were 5.8 s at 380 • C and 19.2 s at 330 • C.
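The temperature dependence of these residence times is largely a density effect: for a fixed reactor volume V and mass flow rate of near-critical water, the residence time scales with the fluid density, so the roughly threefold longer residence at 330 °C than at 380 °C mirrors the substantially higher water density at the lower temperature (an illustrative reading; the authors did not state the densities used).

```latex
\tau = \frac{V\,\rho(T,P)}{\dot{m}}, \qquad
\frac{\tau_{330\,^{\circ}\mathrm{C}}}{\tau_{380\,^{\circ}\mathrm{C}}}
  = \frac{\rho_{330\,^{\circ}\mathrm{C}}}{\rho_{380\,^{\circ}\mathrm{C}}}
  \approx \frac{19.2\ \mathrm{s}}{5.8\ \mathrm{s}} \approx 3.3
```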
At first, the opposed-flow reactor showed that at 330 °C the extent of reaction was very small, mainly due to the low activity of the catalyst. When the temperature was increased to 380 °C, the production of TPA increased, but an interesting result was obtained: changing the residence time did not alter the TPA yield, which remained constant even when the residence time was decreased by a factor of 2. As explained by the authors, this suggests that most of the product is formed at the beginning of the reaction alongside the intermediates, which later burn or decarboxylate. An important factor alongside geometry is the mixing of the reactants and the catalyst, for which it has been reported that efficient mixing tends to provide a high reaction rate. Since the opposed-flow reactor already achieves efficient mixing and is not suited to alterations, a tubular reactor was used to test the importance of this parameter. Therefore, four different reactors, with different inlets for the reactants (without and with different T-pieces) and fed a biphasic mixture, were tested with different mixing efficiencies. The results confirmed the expectation that the reactor with the best mixing achieved the highest TPA selectivity compared with the others (90% vs. 30%). Upon comparing the different reactor types, the tubular reactor with the best mixing achieved TPA selectivity similar to that of the opposed-flow reactor but with a lower CO 2 yield, as a high increase in CO 2 is associated with the latter type of reactor [74].
Subramaniam and his co-workers simulated a greener spray process to produce TPA and compared it to the conventional AMOCO process. An economic analysis and a life cycle assessment were conducted, and comparisons were drawn between the two processes. The AMOCO process is well known and is described in the introduction of this review. The spray process simulated by the authors consisted of a spray reactor in which the liquid phase, containing dissolved PX and the catalyst (the same as in the AMOCO process) in acetic acid, is dispersed as fine droplets via a nozzle into a vapor phase containing O 2 . This reactor operates at 200 °C and 15 bar, and the resulting stream is sent to a three-stage crystallizer, a centrifuge, and a dryer, producing high-purity TPA in one step (<25 ppm 4-CBA and <5 ppm TA) and avoiding the need for a separate purification unit. The off-gas unit is similar to the one in the AMOCO process. The economic evaluation of both systems, including four variations of the spray process with various amounts of acetic acid in the feed, revealed (with costs adjusted to June 2012) that in both processes about 40% of the equipment costs are due to special equipment such as crystallizers, dryers, etc., and that the estimated investment for the AMOCO process is around USD 302 million. For the spray process cases, especially the two extremes, in which the quantity of acetic acid in the feed is nearly 10 times that of the AMOCO process at the high end and a similar amount at the low end, investments of USD 241 million and USD 136 million, respectively, would be required, representing 80% and 45% of the investment needed to implement the AMOCO process. This difference is mainly due to the absence of a hydrogenation section. When production costs are compared, the spray process differs by only 5 to 16% from the AMOCO process, which lies within the typical range of uncertainty of such predictions and is thus not a deciding factor in comparing the two processes. The environmental analysis and simulations showed that the spray process produces half the VOC emissions reported for a real BP plant. It is also important to note that the major contributor, which is not reported in the TRI data, is acetic acid, which represents 40 times more emissions than methanol. In terms of CO 2 emissions, the spray process with the same acetic acid input emits only 23% of that of the AMOCO process, and this value increases up to 91% when the acetic acid input is increased tenfold. In the end, the overall spray process was shown to provide both economic and environmental benefits compared to the AMOCO process [75].
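The quoted percentages follow directly from the capital-cost figures; the check below is simple arithmetic, not additional data from the study.

```python
# Capital investment of the two extreme spray-process cases relative to AMOCO.
amoco_musd = 302
for case, cost_musd in (("high acetic acid feed", 241),
                        ("low acetic acid feed", 136)):
    print(f"{case}: {cost_musd / amoco_musd:.0%} of the AMOCO investment")
# -> 80% and 45%, as stated above.
```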
Challenges and Conclusions
For almost 10 years, several developments in the oxidation of p-xylene have been reported, with homogeneous or heterogeneous catalytic systems that could change the industrial process and make it greener.
Homogeneous systems often apply the catalysts and/or conditions of the AMOCO process, while others introduce changes such as different metal catalysts, the absence of acetic acid, the application of green oxidants, or even the use of ozone. Typically, these systems are not powerful enough to achieve full oxidation of p-xylene to TPA and stop at p-toluic acid.
Heterogeneous systems tend to require a higher energy input but can take the oxidation chain further when starting from p-xylene; even so, they also frequently stop at p-toluic acid.
Overall, homogeneous systems tend to achieve conversion with less energy input but cannot be reused, while heterogeneous systems can be more sustainable in terms of catalyst reuse. Nevertheless, most systems face the same challenge: the oxidation of p-toluic acid.
One of the biggest challenges in the oxidation of p-xylene to TPA (Scheme 1) is the activation of p-toluic acid. Not only does its low solubility in some solvents reduce the chance of conversion into further oxidized products, but the electron-withdrawing effect of the carboxyl group on the aromatic ring also strongly deactivates it, making the second methyl group hard to oxidize.
As depicted in Scheme 1, terephthalic acid is the sixth consecutive oxidation product of p-xylene. Thus, in designing new sustainable TPA production routes, the use of green peroxides (e.g., H 2 O 2 ) as oxidizing agents could require a high initial concentration of such species, which could inhibit the catalyst and/or increase degradation of the oxidant, making the desired product (TPA) harder to obtain.
Finally, the terephthalic acid manufacturing process must be sustainable, environmentally friendly, and cheap, thus assuring the feasibility of the whole production and utilization chain on a large scale.
Thus, proper design of the catalytic process, including the catalyst itself, is crucial to accomplishing the above aims while overcoming the highlighted issues. As reported in this work, substantial developments in different synthetic strategies have been made in recent years, and further progress will come from the intense research activity devoted to achieving the sustainable oxidation of p-xylene to TPA.
Conflicts of Interest:
The authors declare no conflict of interest.
Crafting the success and failure of decentralized marine management
This paper presents an ethnographic case study of the design and revision of a decentralized marine management scheme, named Plan de Gestion de l'Espace Maritime (PGEM), implemented on the island of Moorea, French Polynesia. Drawing on an analysis of over 50 consultative workshops and meetings held from 2018 to 2021 during the PGEM revision, we document the materials, discourses, and practices local stakeholders (e.g., fishers, cultural and environmental activists, government staff, and scientists) combine to build their interpretations of PGEM success or failure. We examine the diversity of domains these interpretations draw from (ecology, marine livelihoods, culture, religion, and politics) and how they are put into practice in people's engagement with—or resistance to—the local marine management and governance design. Our results highlight how the controversies around the revision of Moorea's PGEM overflowed the boundaries of ecology as construed by scientific experts. Stakeholders interpreted "marine resource management" as something well beyond just "marine resources" to include politics, identity, Polynesian cosmology, and livelihoods. Our findings provide generalizable patterns for understanding how natural-resource management policies are received and repurposed by local actors. Supplementary Information: The online version contains supplementary material available at 10.1007/s13280-022-01763-7.
INTRODUCTION
Empowering local communities and furthering their stewardship over natural resources have become, over the past decades, the dominant paradigm of conservation and sustainable development agendas across the globe (Berkes 2004), and more particularly in the Pacific. The lessons provided by the limited success of state-led, top-down efforts to ban or restrict human activities in designated areas (Bennett and Dearden 2014), as well as the rich body of academic work describing customary forms of management (e.g., Johannes 1978; Berkes 1989; Bambridge 2016; Lauer 2017), have given momentum to the idea of devolving resource management to local communities.
Since the 1980s, Oceania has been at the forefront of implementing various initiatives of marine co-management or Community-Based Marine Resource Management (CBMRM; Govan 2009). Through the fast-growing number of Locally Managed Marine Areas (LMMAs) and enabling legislation guaranteeing customary marine tenure, the Pacific has witnessed a 'renaissance' of locally based and community-driven initiatives of environmental and resource management (Johannes 2002; Govan 2009; Friedlander 2018), making CBMRM the primary regional strategy and policy orientation across the Pacific (Cinner et al. 2012; Cohen et al. 2015; Hanich et al. 2018). Regional organizations, conservation NGOs, and private foundations now stress the importance for state agencies to support and upscale CBMRM initiatives (SPC 2010, 2021; Karcher et al. 2020; Steenbergen et al. 2022). Yet the process by which these seemingly simple strategies move from bullet points in project plan documents towards gaining reality at the project site is not well understood. In this paper, we examine ethnographically the design and revision of a network of coral-reef protected areas established on the island of Moorea in French Polynesia (FP) as a decentralized marine spatial planning and resource management regime known as the Plan de Gestion de l'Espace Maritime (PGEM, Marine Spatial Management Scheme).
Our main line of enquiry concerns the assessment of CBMRM and how "success" or "failure" is constructed in the everyday practices of those who participate in or are affected by management schemes. The gold standard of "success" is built upon peer-reviewed scientific literature compiled by trained experts reporting discrete ecological metrics about the change in fish biomass or the quality of marine habitats within the boundaries of no-take zones (Thiault et al. 2019). Socio-cultural indicators, such as the degree of engagement of stakeholders, may also be used to evaluate the successful (or unsuccessful) outcomes of marine resource management (Maliao et al. 2009). Both types of assessments share two common elements. First, success or failure is composed by experts construing the outcomes of CBMRM projects as an inherent set of attributes that can be rationally assessed from an external position (Giakoumi et al. 2018). Second, CBMRM is approached as an abstract model that can be copied from one context to the next (Jupiter et al. 2014). When it fails, it is assumed that the tool either malfunctioned or lacked some key elements.
Here, we approach CBMRM not as a self-evident abstraction, but as a process of composition carried out through the active practical work of not only conservation managers but other local actors such as fishers or activists. In some ways, our approach aligns with a growing body of literature, to which we have contributed (Hunter et al. 2018; Fabre et al. 2021), documenting the local perceptions and conceptions communities have of conservation efforts affecting their everyday lives (Bennett and Dearden 2014). But here we build on another body of literature, mostly in development studies (Mosse 2005), and we ask not whether Moorea's PGEM failed or not, but how success or failure is produced through the discourses and practices of stakeholders engaged in Moorea's local marine governance and management.
Importantly, the coherence of complex socio-ecological processes like marine management is always in flux as the diversity of interests and stakeholders shifts through time, militating against a single, stable, and widely acknowledged interpretation. Although certain accounts may stabilize and emerge as the official version adopted by policy makers, CBMRM overflows its boundaries and is instituted by local actors in variable ways and across diverse domains encompassing ecology, marine livelihoods, culture, and politics. Arguably, a detailed account of these local interpretations of CBMRM is paramount for designing adaptive management regimes that can adjust to both biological and social dynamics (Folke et al. 2005; Lemos and Agrawal 2006). However, the main line of argument we wish to develop is that a careful examination of how stakeholders construe the success or failure of the CBMRM initiatives in which they engage, or against which they resist, provides an unprecedented opportunity for revealing generalizable patterns of how environmental management policies are received, negotiated, and repurposed by actors.
Below we focus on an empirical case study of Moorea's PGEM. Implemented in 2004, the PGEM was initially conceived as a decentralized marine spatial planning regime which, through time, increasingly promoted community involvement and stewardship over marine resource governance and management. Moorea's PGEM went through a lengthy participatory revision process from 2016 to 2021 that gave rise to vibrant citizen engagement and overt political contestation, which we documented through ethnographic fieldwork from 2018 to 2021. We document how local authorities, scientists, local environmental and cultural activists, as well as fishers, brought the PGEM into existence, with some groups touting its success and others its failure. We also analyze the political dimensions of marine governance and how the design and revision of the PGEM emerged through the intricate interplay between local stakeholders, the municipality, and the French Polynesian Government. Finally, we examine how the revision of the PGEM gave rise to shifts, among stakeholders, in the distribution of authority, legitimacy, and expertise in the context of a local revival of Polynesian culture and neo-colonial political contestation.
STUDY SITE
Moorea is a high volcanic island located 20 km west of Tahiti, home of the capital of FP, Papeete. As a French Overseas Territory (Collectivité d'Outre-Mer), FP depends on the French state for its military defense, foreign affairs, higher education, and monetary policy. Between 1984 and 2004, it gained increasing autonomy from the French state. The FP parliament and government have jurisdiction over the local economy, cultural affairs, and the management of terrestrial and nearshore environments.
Moorea along with the small island of Maiao forms the municipality of Moorea-Maiao. The island of Moorea is further subdivided into five districts: Afareaitu, Haapiti, Paopao, Papetoai, and Teavaro (Fig. 1). It is the second most populated island of FP (17 463 inhabitants in 2017) and the second most visited in terms of international tourism (IEOM 2017; ISPF 2017). These figures, however, do not account for the importance of local tourism driven by the influx of residents from the more urban Tahiti visiting Moorea over weekends and holidays. Moorea's demography has exploded over the past few decades (its population doubled from 1988 to 2017) as rapid ferry transport has attracted people working in Tahiti to take up residence in Moorea. This, in conjunction with increased tourism-related activities, has led to fast-changing environmental conditions due to increased anthropogenic pressures on both marine and terrestrial environments (Calandra et al. 2021; Loiseau et al. 2021).
Reef and lagoon fishing is an important aspect of local Polynesian lifestyles. Over 50% of households engage in the reef fishery with varying degrees of investment (Rassweiler et al. 2020). Even though only a handful of artisanal fishers generate their full income from reef fishing, it constitutes, for many families, an important buffer for subsistence purposes, whether through household consumption or the marketing of reef fish (Leenhardt et al. 2016). Reef fishing encompasses a broad diversity of techniques and gears (e.g., line, net, and spearfishing as well as invertebrate harvesting), and a wide array of reef fish species are targeted. Reef fish, often referred to as i'a tahiti (lit. 'Tahitian fish'), are a cultural keystone of Polynesian society as they are an essential component of the local gastronomy and identity. The potential for use conflicts between tourism- and fishery-related activities is essential for understanding the socio-political dynamics at play in the management of Moorea's nearshore marine environment.
Two scientific institutions renowned worldwide in the field of coral-reef ecology are located on Moorea: the French research station CRIOBE (Centre de Recherches Insulaires et Observation de l'Environnement), founded in 1971, and the American University of California Gump Research Station, established in 1985. Both institutions host marine biology programs producing long-term longitudinal time series on Moorea's marine environment and biodiversity, and their engagement with local communities ranges from the provision of scientific expertise to local marine managers to the implementation of outreach events aimed at youths and residents.
METHODS
To document the PGEM revision process, the resulting sociopolitical dynamics, and stakeholder perceptions of local marine management, we draw upon ethnographic fieldwork carried out in Moorea from April 2018 to September 2021 focusing on reef-fishing practices, local ecological knowledge, and perceptions of marine management. We attended over 50 consultative workshops and meetings held during the PGEM revision (Table 1). Meeting discussions and participant interactions were documented in detail. Moreover, each meeting provided the opportunity to conduct informal discussions and open-ended interviews with the different participants. Finally, through daily engagement with fishers while being embedded within the local communities for over four years, the lead author had the opportunity to gain insight into the socio-political positioning of the most active stakeholders engaged in, or opposing, the PGEM revision. All quotations have been translated by the authors from French or Tahitian to English and are reported in Table S1.
THE NEED TO REVISE
The concept of the PGEM arose in the mid-1990s as a legal decentralization framework developed by the FP government. It was designed to empower local municipalities and communities in the management of their lagoons and coasts, which, as part of the public marine domain, normally fall under the sole jurisdiction of the FP government (Cazalet 2008; Calandra et al. 2021). The PGEM was conceived of as a way to meet both France's commitments to increase the coverage of its marine protected areas (MPAs) in mainland France and overseas and FP's policy to promote the country's tourism industry by advertising the beauty of its marine environments (Poirine 2010).
As a holistic marine spatial planning framework designed by FP's Department of Urban Planning (SAU), the implementation of a PGEM requires the involvement and collaboration of numerous state agencies, such as the SAU, the Environmental Agency (DIREN), and the Fisheries Department (DRM), alongside local stakeholders and the municipal government. While several islands across FP (e.g., Fakarava, Bora Bora) had been identified as target areas for the implementation of a PGEM, Moorea is presently the only island where a PGEM has been fully operational. The design process was initiated as early as 1994, and it took government officials ten years to finalize and enact Moorea's PGEM in 2004. Among the numerous institutions and stakeholders involved, guidance provided by both the FP Fisheries Department and the CRIOBE Research Station played a pivotal role in the design and completion of Moorea's PGEM. The initial goals of the PGEM were manifold and included spatially regulating lagoon-based activities (whether recreational or subsistence-based), protecting coral-reef habitats and spawning areas, and alleviating fishing pressure on marine resources (Aubanel et al. 2013). The centerpiece of the original PGEM consisted of eight permanent MPAs and two regulated fishing zones (Fig. 1). Regulations covered a wide array of activities, including anchoring, navigation speed, seawall construction, land reclamation, recreation, and fishing.
The PGEM is governed by a steering committee, including representatives from civil society, municipal authorities, and the central government (Table 2), which examines any new lagoon-based activities, projects, or developments before their petitioners seek authorizations from the FP government. The effective day-to-day management is operated, on the one hand, by a specifically appointed team of municipal staff members (hereafter referred to as the 'PGEM staff') and, on the other, by a local NGO named Association PGEM, founded by local community members who had been very active in the past in both cultural and environmental associations. After ten years of existence, the municipality and FP government decided to revise the PGEM. In 2015 they initiated a wide-scale participatory campaign which gave birth to a revised version of the PGEM, enacted by the FP government in September 2021. The revision process was governed by an appointed committee (CLEM, Commission Locale de l'Espace Maritime; Table 2) and executed in the field by the PGEM staff.
The recognition that the PGEM needed to be revised begs the question of how the original PGEM came to be perceived as unsuccessful. The initial diagnosis came from within the steering committee as early as 2010. Committee members were concerned by the growing number of lagoon-based activities, the lack of sufficient means to enforce existing regulations, and the need to secure greater engagement from stakeholders. In 2013, as the PGEM was approaching its 10th anniversary, a local group of stakeholders named MAMA, involving PGEM staff, scientists, environmental activists, and representatives from the FP agencies, started meeting on a regular basis to sketch out guidelines for a full-blown revision. Desiring to carry out the revision through a wide-scale consultative and participatory process, the FP government and MAMA group sought assistance from a European Union-funded project, operated by the Pacific Community (SPC), focused on the management of natural resources and the resilience of coastal environments across the Pacific (RESCCUE). This created the opportunity to access both funding and expertise from a new set of actors, namely social scientists specialized in participatory framework designs, to aid in the diagnosis of the failed project and to guide the revision process towards a new version of success. The official diagnosis of failure, as detailed in a RESCCUE report, was that the PGEM lacked "legal, financial, and human assets" and had been severely criticized by local stakeholders, most notably fishers who argued that they were not fully consulted about the implementation of the MPA network (Narcy and Herrenschmidt 2014). In our interviews, PGEM staff indicated that they were convinced that the original PGEM had failed to achieve its goals. They voiced concerns about the increasing number of activities in the lagoon over the ten years since the PGEM's implementation and reckoned that the initial PGEM scheme lacked the capacity to manage emerging diverse interests (Quote-1, Table S1). Similar to the RESCCUE report, PGEM staff also pinned the project's failure on fishers' lack of engagement and compliance as well as their overt opposition to the management scheme (Quote-2, Table S1).
[From Table 2: Seven represented agencies: Chamber of Commerce and Industry (CCISM); Chamber of Agriculture and Reef Fishery (CAPL); Land Ownership Agency (DAF); Public Equipment (Direction de l'Equipement); Youth and Sports Agency; Tourism Agency; Rural Development Agency.]
The lack of positive ecological results was also a crucial element used to compose the official version of failure. For instance, government officials told us that the PGEM "is not working, fishers are not complying, and ecological results are minimal." Indeed, an ecological study (Thiault et al. 2019) had shown that the biomass of harvested fish species was slightly higher on average inside MPAs where fishing was entirely prohibited, but that this increase was weak compared to those documented in other published MPA analyses. According to the authors, the absence of fisher compliance, the lack of surveillance, and the "limited public appreciation about the benefits of MPAs" were key reasons for the "limited ecological benefits" of the PGEM (Thiault et al. 2019, p. 8). The actions of the discontented fishing communities and the scientists' findings were now aligned, moving in the same direction, yet for different reasons, and pointing to failures of the PGEM.
FISHERS' CONTESTATION OF THE PGEM
Despite the official praise of the PGEM during its early phases (Conseil des Ministres 2021, p. 22269), Moorea's fishers had never been fully enrolled in the project idea. In fact, some fishers actively protested against the project as it was being officially established (Gaspar and Bambridge 2008; Walker and Robinson 2009). The fishing communities' widespread criticism prevented the original PGEM from ever becoming fully stabilized as a "success" across all the different interests.
While most of the fishers with whom we have interacted since 2018 agree that some form of marine management should be undertaken, they argued that the PGEM was flawed and unjust. The most common discourse was that the PGEM was geared towards the promotion of tourism at the expense of fishers. In meetings, fishers regularly indicated that the MPAs were opportunistically located near resorts, or complained that regulations were only enforced against fishers while non-complying tourist operators were never sanctioned (Quote-3, Table S1). Fishers also frequently mentioned how permanent no-take zones were intrinsically unfair, forcing fishers living on the shores adjacent to MPAs to travel greater distances to reach legal fishing grounds.
Several social-science studies articulated these concerns, providing further legitimacy to fishers' interpretations of the PGEM. Walker and Robinson (2009) detailed a concern described by a renowned local cultural activist (Quote-4, Table S1) about how the PGEM had displaced essential subsistence and cultural fishing activities carried out by women. In addition, Gaspar and Bambridge (2008) wrote how the PGEM had been constructed around technocratic and scientific principles which displaced Polynesian territorial understandings. Indeed, the spatial zoning of the lagoon alone, disconnected from terrestrial issues, perpetuated the land/sea divide the FP administration inherited from French colonial rule, which contrasts with Polynesian forms of management embracing both land and sea in continuous territorial units. Moreover, the French name of the PGEM as well as its technocratic jargon have pushed Moorea stakeholders to conceive of it as an extension of French rule (Quote-5, Table S1) and, hence, as a governance mechanism imposed from the outside, displacing Polynesian identities and modes of being (Quote-6, Table S1) (Rigo 2004).
Although efforts were made to consult with fishers during the initial design of the PGEM, those who acknowledged having been consulted argued they had been duped, stating they were promised non-permanent closures (rāhui; see below) instead of MPAs (Quote-7, Table S1), even though the PGEM staff denies that such promises were ever made. Arguably, this situation reflects more than a misrepresentation of MPAs or a problem of translating from French to Tahitian; rather, it reflects an attempt by Moorea's fishing communities to institute a different version of marine management, one over which outside experts would have less control.
TOTAL FAILURE?
In contrast to many fishers, environmental and cultural activists on Moorea framed the initial PGEM as a success in checking the overdevelopment of the tourism industry in Moorea. They have argued that it addressed their demands for greater decision-making power in overseeing developments on the island's public marine domain. The environmental NGOs' representative on the PGEM Steering Committee argued that the PGEM provided an unprecedented arena in which the voice of citizens, through their representatives, could be heard by the FP government (Quote-8, Table S1). The number of projects declined by the PGEM Steering Committee over its first ten years of existence (out of over 200 demands, ranging from seawall construction to new nautical recreational activities, submitted for approval by residents or local businesses) is used by PGEM advocates as a metric of success. The role of the PGEM as a counterweight to the FP government's desire to support growing tourism development on Moorea is solidly anchored in the foundational struggle, which occurred in 2000, of residents against the proliferation of over-water resort bungalows. Those who had led this struggle are now the PGEM's fiercest advocates. One of them repeatedly declaimed a narrative of this struggle during several PGEM revision meetings to remind attendees that they had the power to oppose projects coming from the top (Quote-9, Table S1). The recursiveness of the narrative compels us to consider it as having become, for many local activists, part of the founding mythology anchoring the PGEM as a way to empower local citizens vis-à-vis the FP government.
However, the environmental and cultural activists tempered their interpretations of PGEM success with misgivings about the steering committee's decision-making authority, arguing that the committee should have more than a simply consultative voice. They wished for the committee's decisions to be final so that FP agencies could not overrule them. But this sense of failure has often been nuanced, as many committee members pointed to the fact that FP agencies had rarely overruled their decisions (Quote-10, Table S1). However, the interpretation of the PGEM as a counterweight to policies implemented by the FP government depends on an alignment between the interests of local civil society organizations and those of the municipality. In the absence of such an alignment, as illustrated by the latest developments around the final enactment of the revised PGEM (Appendix S1), these interpretations of success may shift, jeopardizing the very existence of the PGEM.
Activists suggested that the initial PGEM escaped the control of the FP government by presenting obstacles to centrally imposed policies and by coercing inter-agency collaboration. One of the PGEM's supporters who had actively participated in the revision process from the beginning claimed that some FP agencies had hoped, and even planned, for the PGEM revision to fail in order to revoke the PGEM entirely in favor of other single-agency piloted legal frameworks geared towards more specific environmental or fisheries-related purposes. He also argued that the FP government actively sought to undermine the revision process by lending a friendly ear to the Association Rāhui, whose members were the fiercest detractors of the PGEM.
RĀHUI: ALL IN FAVOR?
The dissatisfaction with permanent MPAs and the demand for rotational closures, inspired by the principles of rāhui, have been the most prominent alternative vision of marine management on Moorea. Rāhui refers concomitantly to territorial units (pie-shaped territories running from mountain ridge to reef crest) and to a form of natural-resource management placing specific species or spaces under a temporary harvest ban (Bambridge 2016).
In pre-contact Tahiti, estates were governed through a nested hierarchy of nobiliary elites who had the power to establish a rāhui on the territory or resources they controlled. The notion went hand in hand with that of tapu, a strong spiritually sanctioned prohibition, under which resources could be placed for the duration of the rāhui. Most often, rāhui were intended to replenish marine or terrestrial resources in view of their future use for specific religious and political ceremonies. The institution was progressively undermined by Christianization and colonization (Bambridge et al. 2019).
Parallel to the Pacific-wide 'renaissance' of customary forms of marine management (Johannes 2002), the concept of rāhui has reemerged across FP in the past decades under the double influence of Polynesian cultural revival movements and the state-led promotion of CBMRM initiatives (Filous et al. 2021). The most visible case of rāhui has occurred in Teahupoo, where it was implemented in 2014 and has been framed, by community members, local media, and government authorities, as a success both in terms of stakeholder engagement and ecological outcomes (Fabre 2021).
The growing demand for the implementation of rāhui-inspired forms of management in Moorea has been expressed most clearly by the work of the Association Rāhui, founded by residents of the district of Haapiti in 2016, who vociferously contested the PGEM and promoted rāhui as an alternative (Hunter et al. 2018; Fabre 2021). Their main lines of argument captured the key grievances voiced around the island: that the PGEM was an institution geared towards the promotion of tourism at the expense of fishers, residents, and the marine environment (Quote-11, Table S1) while "lacking transparency and shared governance." The Association Rāhui proposed eliminating the PGEM and establishing in its place, in each of Moorea's districts, rāhui committees named toohitu (lit. 'council of the seven') composed of fishers and community leaders. In other words, they were inventing and defining new groups and endowing them with new goals in their attempts to fail the PGEM. These committees would oversee the management of their district's lagoon and implement rotational, rather than permanent, closures according to their expertise. The idea of implementing toohitu committees was a way to root their project in Polynesian tradition while promoting what the Association Rāhui understood as a more democratic mode of governance. The Association's shrewd deployment of the concept of rāhui to enroll a diverse coalition of interest groups provides an insightful example of the dynamism and plasticity of 'traditional' concepts. On the one hand, while rāhui was, in the past, nested in the strict hierarchies of the pre-contact socio-political order, it becomes, in the present, a flagship of democratic governance. On the other hand, the institution of the toohitu originated as a post-contact form of governance promoted by Christian missionaries as a way to downplay the political power of the Polynesian nobiliary elite (Saura 1996).
For the Association Rāhui, the notion of rāhui was used as a means of political contestation against the municipality by seeking to transfer decision-making power to local fishers and residents. The 'Polynesianness' of the rāhui concept was also employed in contrast to two of the most negatively evaluated effects of the colonial and postcolonial order: money and profit, each portrayed as the essential motives behind the initial PGEM (Quote-11, Table S1). Indeed, equating indigenous identity with non-capitalist modes of being is a well-documented and effective strategy deployed by indigenous people around the world when they seek to assert their political will (Kuper 2003; Dove 2006).
The Association's concerns gained relevance during the revision process as its members managed to align the interests of fishers, church pastors, and many community leaders and tie them together through an appeal to Polynesian identity. Moreover, the Association's members were adept political operators and, seeking to circumvent the revision process, lobbied FP government officials to endorse their project. However, as the PGEM revision proceeded, the Association progressively lost steam and suspended its activities in 2020, as its core members invested their efforts (unsuccessfully) in direct political action by running as candidates in the municipal elections. Nonetheless, the Association's activism engaged many different interest groups, including the PGEM staff in charge of the revision, in the idea that implementing district-level fishing committees was a pathway to secure greater engagement of fishers in the PGEM. Even though the Association was short-lived, it was able to influence the PGEM revision towards a version of success that drew on the revitalization of Polynesian culture and identity and a long-festering sense of neocolonial dispossession and acculturation.
CULTURAL REVIVAL AND NEO-COLONIAL CONTESTATION
The appeal of rāhui in Moorea was not only tied to its invocation of a governance design that devolved greater power to residents and fishers, but also to the concept's links to Polynesian culture, identity, and cosmogonies. Two related concepts, tapu and mana, were frequently deployed by many community members as a means to provide rāhui with greater legitimacy than a "disenchanted" PGEM. For some, rāhui would be "self-enforced" due to the spiritual sanctions that would befall those breaking the sacredness of the tapu instituted by a rāhui (Quote-12, Table S1).
The spiritual dimensions of rāhui are further revealed by its connections with the notion of mana, found across the Austronesian world, which is fundamental to political and religious authority and can be defined as an expression of power channeled by skilled practitioners vis-à-vis spiritual or godly entities (Keesing 1984) (Quote-13, Table S1). These core Polynesian concepts are not only widely accepted among fishers and residents on Moorea, but they also have traction among many municipal and FP government officials who identify as Polynesians.
Even though most officials expressed their cultural attachment to these Polynesian concepts, they also stated that they no longer apply in the contemporary context and were not incorporated into the original PGEM design (Quote-14/15, Table S1). That the original PGEM was not developed around rāhui appears to have been a strategic mistake, and it resulted in a failure to gain sustained and widespread support for the management initiative. In contrast, in Teahupoo on the south-eastern tip of the island of Tahiti, community members and FP agencies explicitly conceived of the marine management as rāhui from the onset, and the management scheme has been widely acknowledged as a success both by the FP government and among stakeholders.
Despite the PGEM's evident techno-scientific imprints, the architects of the original PGEM tried to weave in cultural meaning and representations by tasking the steering committee's representatives of cultural and environmental associations with the design of the PGEM's logo and motto. An octopus, a widely known mystical being in Polynesian cosmology, was chosen as the logo (Figs. 1, 2). The sea creature's tentacles are understood to represent the eight main valleys of Moorea, and more broadly, the octopus is known to be hewn from an amalgamation of marine beings, humans, and Ta'aroa, the god considered the creator of the world and of all spiritual and living entities (Gaspar and Bambridge 2008).
Rather than just an apolitical spirit being, Moorea's mythical octopus also encapsulates the life-destroying consequences of colonialism. The legend describes how the arrival of foreigners disrupted the social harmony around the island and caused the angered octopus to sever its relations with the communities who had abandoned their lifestyles when welcoming the newcomers (see Gaspar and Bambridge 2008 for a detailed account of the legend). The octopus was invoked during the implementation of the initial PGEM to restore harmony and balance to the island. More than mere tokenism, it was called upon to facilitate and compose the success of the PGEM by renowned cultural activists who had been active, before the implementation of the PGEM, in several struggles against tourism-oriented development projects in Moorea and who had become some of the original PGEM's strongest supporters.
STAKEHOLDER CONSULTATION
A central guiding principle of the PGEM staff's initiative to revise the PGEM was stakeholder consultation, which they envisioned as a process through which they could redefine the goals and priorities of the revised PGEM to better align with community interests (Quote-16, Table S1). For this reason, PGEM staff, aided by "participatory conservation experts" hired through the EU-funded RESCCUE project, systematically kicked off their meetings with a PowerPoint image representing the different activities they had identified and that the PGEM sought to regulate: overwater resort bungalows, scuba diving, jet skiing, ray feeding, snorkeling, and fishing. With these uses as the central focus, a synthesis of the PGEM's objectives emerged from consultation workshops held from 2016 to 2017 and was codified as a central framework of the revision (Fig. 2). Again adopting the octopus and its spiritual connotations, the entity was a key stabilizing device deployed by the PGEM staff to align and assemble the heterogeneous and cross-cutting interests and assert their interpretation of how the PGEM would be revised.
Fig. 2 Goals defined by PGEM staff through consultative and participatory workshops. The ten identified goals, represented as a ten-tentacle octopus, cover a range of objectives: (i) regulation of specific activities (sustainable and equitable fishing, mindful recreational nautical activities, regulating sailboats); (ii) reaching island-wide socio-ecological goals (promotion of local culture; conservation of the coast, marine species, and marine landscapes; users' safety; and access to the sea); and (iii) implementing collaborative governance (participatory management and reinforced communication). Source: RESCCUE Project.
To address the ten identified objectives, the PGEM staff proposed to create three kinds of goal-oriented zones (Fig. 3): 'Environmental protection zones'; 'Environmental, user-safety, and sustainable-tourism zones'; and 'Sustainable and equitable fishing zones.' The staff instituted an important shift in vernacular by dropping the term "MPA," which in the original PGEM signified no-take areas. Four of the initial PGEM's MPAs were instead relabeled as two 'Environmental protection zones' (Fig. 3: Aroa and Pihaena zones) and two 'Environmental, user-safety, and sustainable-tourism zones' (Fig. 3: Nuarei and Tiahura zones). Both of the new zone types effectively ban fishing while not portraying the ban as the main goal. The remaining former MPAs were transformed or reshaped into Marine Managed Areas (MMAs) labeled as 'Sustainable and equitable fishing zones,' in which regulations would be defined by fishing committees and the DRM (FP Fisheries Department) working in parallel to the PGEM Steering Committee.
Ostensibly defining the goals through stakeholder consultation and shifting the core vernacular reflect the PGEM staff's choice to reconfigure the balance of power among stakeholders and the degree to which the various interests weighed in the design of the new PGEM. Transforming MPAs into MMAs was a critical conceptual shift, as it put greater emphasis on the roles of fishers and community members in the governance regime while displacing the position of natural scientists, who had been the strongest advocates for strict MPAs and who equated "success" with biodiversity protection.
Fig. 3 Goal-driven zones of the revised version of the PGEM. The spatial representation of the PGEM shifted from a unified map (Fig. 1), in which the MPA network is the main feature, to a set of four maps: one representing the main goal-oriented zones, one representing the specific zoning for fishing, another representing the recreational activities' zones, and a last one combining the previous three. Source: Service de l'Aménagement et de l'Urbanisme, Polynésie Française (legend and zone names have been added by authors).
INVENTING INSTITUTIONS
Aware of the fishers' widespread criticism of the initial PGEM, the PGEM staff placed a high priority on fisher participation. To do so, they created fishing committees in each district and opened them to all fishers, regardless of their investment in the fishery, to design, in consultation with DRM, fishing regulations to be applied in each district's lagoon. The design of the district-level committees emerged progressively during the revision as a direct outcome of the involvement of burgeoning local fisher groups, including the Association Rāhui, which surfaced from 2016 to 2017 in three of the island's districts (Teavaro, Paopao, and Haapiti). The invention of these new institutions was an attempt to align the heterogeneous interests of the fishing communities. Of course, creating new institutions by writing them down as words in the revised project documents is easier than forming them on the ground, let alone controlling them. The first fishing-committee meetings were convened by DRM in late 2017. The results were mixed, and the degree of investment of fishers varied greatly around the island. In Haapiti and Papetoai, where PGEM contestation was strongest, fishers' participation did not take root before the final months of the revision in 2021. In Paopao and Teavaro, however, the presence of recently formed fisher groups led to innovative fishing regulations. Rather than passively enrolling themselves in the new fishing committees, Paopao and Teavaro borrowed the idea and directed the committees towards their own goals. In 2017, the Teavaro committee asked the DRM to organize a meeting with fisheries scientists to discuss minimal sizes of harvested fish. The workshop attracted dozens of fishers, who brought along fish they had caught earlier the same day so that they could be measured. One of the fishers' goals was to demonstrate that a cultural keystone species, pahoro (initial-phase parrotfish), caught with nets of smaller mesh size than the 40 mm FP-wide legal minimum, had reached sexual maturity, warranting that they could be harvested without endangering the species' healthy reproduction. The workshop's outcome was, for participating fishers, a complete success, as they secured from DRM the promise to make an exception for Moorea allowing the use of 35 mm mesh nets for pahoro fishing. DRM requested that fishers define a select number of spatially delineated areas where this type of fishing could take place (Fig. 4) in order to ensure tight surveillance of the use of these nets. Through these workshops, the Teavaro fishers aligned fish and scientists with their own interests to keep harvesting pahoro.
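To make the negotiated rule concrete, here is a minimal sketch of the two mesh-size thresholds as described above; the function name, parameters, and the boolean encoding of "inside a delineated pahoro area" are our own illustrative assumptions, not DRM terminology:

```python
FP_MIN_MESH_MM = 40      # FP-wide legal minimum mesh size for nets
PAHORO_MIN_MESH_MM = 35  # exception secured for pahoro fishing in Moorea

def mesh_is_legal(mesh_mm: float, targets_pahoro: bool, in_delineated_area: bool) -> bool:
    """Check a net's mesh size against the rules described above.

    The 35 mm allowance applies only to pahoro fishing carried out inside
    one of the spatially delineated areas requested by DRM (Fig. 4);
    everywhere else the FP-wide 40 mm minimum holds.
    """
    if targets_pahoro and in_delineated_area:
        return mesh_mm >= PAHORO_MIN_MESH_MM
    return mesh_mm >= FP_MIN_MESH_MM

# Example: a 35 mm net is legal for pahoro only inside a delineated area.
assert mesh_is_legal(35, targets_pahoro=True, in_delineated_area=True)
assert not mesh_is_legal(35, targets_pahoro=True, in_delineated_area=False)
```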
The Paopao fishing committee also developed a new inner-lagoon management plan. Its participants were fierce detractors of night spearfishing, considering it the most deleterious fishing practice, while acknowledging its importance for many unemployed families and youths. Instead of trying to ban the practice altogether, the committee designed a system of rotational closures. Two pass-to-pass lagoons were divided into four distinct areas, among which one area at a time would be closed to night spearfishing (Fig. 4). The ban would then shift, every two years, from one zone to the next, moving eastward and forming a full cycle over a period of eight years. The outcome of the Paopao fishing-committee meetings emerged as a lynchpin for DRM staff to construe the revision process as a success, because they were progressively managing to enroll fishers in the revised version of the PGEM and to gain their trust (Quote-17, Table S1).
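The rotation itself is a simple modular schedule. The sketch below illustrates it under stated assumptions: zones are numbered west to east, and the cycle is taken to start in 2021 with the westernmost zone; both the labels and the start year are hypothetical placeholders, since the committee's actual designations are not given here.

```python
ZONES = ["zone 1 (west)", "zone 2", "zone 3", "zone 4 (east)"]

def closed_zone(year: int, start_year: int = 2021) -> str:
    """Return the zone closed to night spearfishing in a given year.

    The closure shifts eastward every 2 years, so the 4 zones complete
    a full rotation every 8 years, as described above.
    """
    steps = ((year - start_year) // 2) % len(ZONES)
    return ZONES[steps]

# One full 8-year cycle: each zone is closed for exactly two years.
for y in range(2021, 2029):
    print(y, "->", closed_zone(y))
```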
The outcomes achieved by the Teavaro and Paopao committees were leveraged by the PGEM staff to invigorate fishing committees in other parts of the island. For example, during the January 2019 Afareaitu fishing-committee meeting, participation was dismal and only five fishers from the district were present. A DRM agent led the meeting, and participating fishers nodded their way through the PowerPoint presentation of the fishing-committee governance regime to be enacted. No negotiation or debate took place, and the meeting's outcome offered nearly no change to pre-existing regulations.
Two years later, in February 2021, however, we attended the last fishing-committee meeting held before the revised PGEM's finalization, and the outcomes were radically different. Over 20 fishers participated. After the DRM agent's presentation of the district-level regulations to be implemented in the revised PGEM, heated debates took place as fishers indicated that nothing had changed compared to the initial PGEM regulations. But once the designs from the Teavaro and Paopao fishing committees were presented, a real process of negotiation started between participants and DRM agents.
The most interesting debate surrounded demands from fishers to open the Afareaitu MMA (previously an MPA) to daytime spearfishing. Despite the DRM agent insistently advising against such a decision, the last word was given to fishers, and the decision to allow daytime spearfishing was finally enacted, to the great surprise of most participants (Quote-18, Table S1). This illustrates how the authority granted to fishing committees presented a clear break from the early PGEM governance, but also how bureaucratic control over the committees was limited (Quote-19, Table S1). An institution created by the PGEM staff was overflowing its envisioned boundaries and being repurposed in ways that forced the PGEM staff to cater to the interests of the fishing committees.
Despite the growing engagement of fishers in the fishing committees, the overall participation and representation of fishers remained problematic for the PGEM staff. "Representation" was a guiding principle of the revision process, but it presented numerous challenges when put into practice (Quote-20, Table S1). Initially, the fishing committees' representatives were handpicked by the municipality, but once the fishing committees started to meet, the appointment of representatives was voted upon at the end of each meeting.
Yet, in some cases, fishers still felt they were not well represented. The case of the Paopao fishing committee mentioned above provides an illustrative example. The committee, piloted mainly by a handful of net fishers, set out to implement strong regulations against night spearfishing. However, night spearfishers were not engaged in the process and did not attend any of the Paopao district meetings we attended. After one of the district committee's meetings, we met a young spearfisher from Paopao, whom we had informed of the date and location of the meeting, and asked him why he had not come. He replied that he felt his presence was unwelcome due to the scorn committee members demonstrated against night spearfishers (Quote-21, Table S1). This night fisher's comments illustrate how a project inevitably has detractors that will assemble potentially destabilizing elements that might eventually fail a project. The solidity of a project is never given in advance but is rather an achievement of those who produce its success. The introduction of fishing committees nonetheless enabled a shift of power towards fishers. The rebalancing of decision-making power and the securing of stakeholder engagement have been core processes for crafting the new version of "success" for the revised PGEM.
SHIFTING THE POSITION OF NATURAL SCIENTISTS
Many in the fishing community were uneasy about the role scientists played in the original establishment of the PGEM. During the revision process there was growing appreciation among fishers and activists for Polynesian ecological knowledge about the lagoon and the possibilities it provided for challenging scientific knowledge (Quote-22, Table S1). For instance, to impose their vision of the PGEM outcomes, the Association Rāhui carried out its own "citizen science" underwater fish counts and produced a written report summarizing the ecological state of the lagoon. Although their goal was to challenge the authority of Western science (and its local practitioners), their use of some of its methods, namely underwater fish counts, indicates how fully they recognized the importance of knowledge that is deemed scientific and how policy makers invoke it as a neutral arbiter to assert political authority and legitimacy.
During the revision process, PGEM staff began to recognize the growing local skepticism towards scientists and the knowledge they produce, which was grounded in two key concerns. The first was the near-absence of Polynesian scientists working in Moorea. The second was a lack of communication from both the French and American scientific institutions about the many experiments carried out in Moorea's lagoon. These concerns contributed to questioning the authority of scientific knowledge in the new PGEM designs. Furthermore, rather than unbiased, objective knowledge about the marine environment that could guide PGEM decision making, scientific knowledge was now being framed as contributing to the PGEM's failure by imposing post-colonial interests at the expense of Polynesian ones.
To neutralize these concerns, PGEM staff recast their relationship with, and the position of, scientists during the PGEM revision, a shift reflected in several aspects. First, natural scientists were progressively sidetracked throughout the revision process, to the point where they were no longer invited to play a leading, authoritative role in revision workshops and meetings (Quote-23, Table S1). Second, the revised text of the PGEM proposed a regulatory framework for scientific activities: scientists would be required to request approval of their lagoon-based scientific projects from both the PGEM Steering Committee and the district fishing committees. This shift in oversight over scientific activities has been a source of concern for both the CRIOBE and the Gump Research Station, which fear that it may constitute a significant impediment to their work. Scrutiny and red tape were not welcomed by scientists. Concern was fueled by the experience of a scientific team which presented one of its proposed projects to Haapiti's fishing committee but was turned down. The new positioning of scientists vis-à-vis the PGEM was confusing to scientists and contrary to their self-image as neutral, apolitical observers providing impartial data immune to scrutiny by non-experts. Now they were cast as just another stakeholder who might be objects of suspicion or trust, strongly unwelcomed or embraced, cast aside or invited to participate (Quote-24, Table S1).
The transformed role of scientists in the revised PGEM was also evident in fishing-committee governance. Even though a representative of Moorea's two scientific institutions was invited to sit on each of the fishing district committees, neither of the scientific institutions made explicit recommendations, nor did the representative participate in the debates during the meetings. Nevertheless, many scientists did have concerns. Some natural scientists thought the new regulations were too complex and consequently would be ineffective. They approached the PGEM staff after the last fishing-committee meeting in early 2021 and asked them for a special, closed-door meeting where they could express their concerns and provide their expertise. The PGEM staff agreed to hold the meeting, but it was not fruitful for the scientists. The PGEM staff regretted that such concerns were not deliberated publicly during the fishing-committee meetings, which would have provided fishers with the opportunity to consider, debate, or ignore the position of the scientific community. The attempt of some marine scientists to assert their authority in the mold of the original PGEM, by positioning themselves as external actors who provide expertise to local stakeholders, was met with the PGEM staff's strategy to solidify their version of the PGEM's success by blurring the line between scientists and local stakeholders.
DISCUSSION AND CONCLUSION
The revision of the PGEM set the stage for local stakeholders as well as FP government and municipal officials to voice their interpretations regarding the successes and failures of Moorea's marine governance and management. Drawing from a diverse mix of claims and practical strategies, ranging from the desire to revitalize Polynesian culture and ways of life to the necessity of empowering local authorities and stakeholders vis-à-vis central government authorities, stakeholders' interpretations of the PGEM goals varied considerably both across groups and through time and had diverse effects on the revision process. Instead of a smooth object where the strengths and weaknesses of CBMRM are uncontestable and detectable from any vantage point, our account illuminates how the PGEM was an entangled web of dynamic relations in which different stakeholders fought to stabilize their interpretations by enrolling various allies and aligning their different interests. The successes and difficulties PGEM staff had when attempting to stabilize the project during the revision process into a cohesive and widely embraced governance scheme highlight the instability of the core concept of co-management or CBMRM initiatives: the concept of "community," which often goes unquestioned. By placing "community" at the center of the decision-making nexus, the PGEM staff confronted the practical difficulties of assuming that local stakeholders form well-defined, homogeneous units "that speak[s] with a single voice" (Watts 1999, p. 37) and that project managers have the capacity to rationally seek solutions that appease all interests. As illustrated in our case study, "communities" are partially composed through the project itself and are in constant motion, with cross-cutting social and political intricacies that emerge, shift, and dissipate during a project's life. For these reasons, the questions of representation and legitimacy of community interests appeared as a touchstone of local contestation against the initial PGEM and, consequently, as essential issues to address during the revision. Yet even though such questions have been noted in the conservation science literature (Agrawal and Gibson 1999; Berkes 2010), their deeper political implications, and how they may participate in reproducing and generating asymmetries in power distribution or hierarchies in knowledge production systems, are often sidetracked in the design and evaluation of marine governance and management regimes (Dressler et al. 2010; Mazé et al. 2017).
As our description of the PGEM revision has illustrated, stakeholders involved in CBMRM invariably mix in their political interests, a point highlighted by Lemos and Agrawal, who define environmental governance as "the set of regulatory processes, mechanisms and organizations through which political actors influence environmental actions and outcomes" (2006, p. 298). Indeed, the devolvement of governance and management involves political positioning, as attempts are made to transfer decision-making power from one institution or set of actors to another. The fragility of the tripartite PGEM governance (i.e., civil society, municipal authorities, central government) is a testimony to the importance of considering the political dimensions of marine, and more broadly natural-resource, governance. The latest developments surrounding the enactment of the revised PGEM provide a clear case in point (Appendix S1). Yet the success of many CBMRM governance schemes is often constructed around policy makers' interpretation that political contestation is avoided and that rational, apolitical strategies guided by scientific knowledge will lead to positive outcomes.
However, construing CBMRM as an apolitical, technical activity outside of democratic contestation implicitly positions local stakeholders as subordinates to experts from government agencies, scientific institutions, or conservation organizations (Beck 1992; Mitchell 2002; Eyal 2019). Indeed, the technocratic and scientific imprint of the initial PGEM was met with stakeholders' overt hostility towards scientific authorities, which was fueled by the growing revival of local Polynesian knowledge as a valid domain. The attempts of marine biologists to position themselves as outside observers providing their expertise to stakeholders and policy makers illustrate how conservation initiatives often seek to police a boundary between experts and non-experts (Jasanoff 2004). For without this boundary, the experts' neutrality becomes a target of skepticism, leading to a questioning of their credibility and legitimacy in the eyes of local stakeholders. Yet as our case study of the PGEM has illustrated, the controversy overflowed the boundary between experts and non-experts, as different stakeholders attempted not only to assert their own authority and legitimate non-expert knowledge as an acceptable guide to policy but also to redefine "marine resource management" as something well beyond just "marine resources," including politics, identity, Polynesian cosmology, and livelihoods. Even though the PGEM and DRM staff sought to attenuate this expert/non-expert divide, by sidetracking scientists or by empowering fishers in decision making, they worked to uphold such an epistemological positioning when framing the participation of fishers as ultimately a means to achieve "what we believe to be more sustainable fishing practices" (Quote-19, Table S1). We argue that this deployment and distribution of expertise are deeply rooted in conservation practice and participate in hindering stakeholders' ability to build a common ground upon which to build more equitable management regimes.
Acknowledgements We thank the Moorea-Maiao Municipality and French Polynesian agencies, notably the Direction des Ressources Marines, for their availability and for regularly inviting us to official meetings. Finally, we thank the Moorea Coral Reef LTER Site and the University of California Gump Research Station for logistic support. All social-science researchers received ethics training, and research was approved by San Diego State University's Institutional Review Board (HS-2017-0179).
Declarations
Conflict of interest The authors declare that they have no financial or non-financial conflicts of interest that could affect this study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Activation of the Extracellular Signal-Regulated Kinase Signaling Is Critical for Human Umbilical Cord Mesenchymal Stem Cell Osteogenic Differentiation
Human umbilical cord mesenchymal stem cells (hUCMSCs) are recognized as candidate progenitor cells for bone regeneration. However, the mechanism of hUCMSC osteogenesis remains unclear. In this study, we revealed that mitogen-activated protein kinase (MAPK) signaling is involved in hUCMSC osteogenic differentiation in vitro. In particular, the activation of the c-Jun N-terminal kinase (JNK) and p38 signaling pathways maintained a consistent level in hUCMSCs through the entire 21-day osteogenic differentiation period. At the same time, the activation of extracellular signal-regulated kinase (ERK) signaling significantly increased from day 5, peaked at day 9, and declined thereafter. Moreover, gene profiling of osteogenic markers, alkaline phosphatase (ALP) activity measurement, and alizarin red staining demonstrated that the application of U0126, a specific inhibitor of ERK activation, completely prohibited hUCMSC osteogenic differentiation. However, when U0126 was removed from the culture at day 9, ERK activation and osteogenic differentiation of hUCMSCs were partially recovered. Together, these findings demonstrate that the activation of ERK signaling is essential for hUCMSC osteogenic differentiation, pointing out the significance of the ERK signaling pathway in regulating the osteogenic differentiation of hUCMSCs as an alternative cell source for bone tissue engineering.
Introduction
Although bone tissue has a high regenerative capacity, local endogenous cell numbers are often not adequate to reestablish tissue continuity or function in critical-sized defects [1-3]. Thus, there is a worldwide effort to develop engineered bone tissues to overcome this difficulty. However, after more than three decades of investigation, the success of bone tissue engineering is still limited [4]. One of the most critical obstacles is finding a suitable progenitor cell source. To date, human bone marrow mesenchymal stromal cells (hBMSCs) have been considered a native cell source and have been widely studied for osteogenic differentiation [5-7]. However, several disadvantages, such as long derivation times, heterogeneous cell populations, and variable potency [8, 9], markedly hinder the clinical application of hBMSCs for bone tissue engineering. Thus, alternative cell sources for bone tissue engineering are in high demand.
Human umbilical cord mesenchymal stem cells (hUCMSCs) isolated from Wharton's jelly of the umbilical cord have similar surface marker expression, high differentiation potential, low immunogenicity, and low tumorigenic risk as hBMSCs [10-17]. In contrast to hBMSCs, which have to be harvested through invasive bone marrow aspiration, hUCMSCs are isolated from generally discarded tissue, umbilical cords, without ethical concerns [16, 18, 19], potential pain, or medical and surgical risks such as bleeding and anesthesia [20]. Additionally, unlike hBMSCs and other stem cells isolated from adults, hUCMSCs share a high expansion capacity with fetal-derived stem cells [21]. These previous studies suggest that hUCMSCs may be a more suitable progenitor cell source than hBMSCs in a clinical setting [22], and thus multiple independent research groups have recruited hUCMSCs for various tissue regeneration applications, including bone tissue engineering [23-28]. However, the molecular mechanism of hUCMSC osteogenic differentiation has not been uncovered until now.
Mitogen-activated protein kinases (MAPKs) are a widely conserved family of serine-threonine protein kinases, including extracellular signal-regulated kinases (ERK), c-Jun N-terminal kinases (JNK), and p38 [29]. Previous studies have indicated that distinct MAPK pathways independently modulate stem cell self-renewal and differentiation [30, 31]. For instance, Jaiswal et al. reported the regulatory role of the ERK pathway in hBMSC osteogenic precursor commitment and differentiation [32]. However, conflicting results from other investigations indicate that whether activation of ERK signaling promotes stem cell osteogenic differentiation is cell-type specific [32-36]. In this study, we intend to reveal the importance of MAPK signaling, especially the ERK pathway, in hUCMSC osteogenic differentiation.
Preparation of Human Umbilical Cord Mesenchymal Stem Cells.
This study was ethically approved by the Xi'an Jiaotong University IRB. hUCMSCs were isolated and characterized as previously described [37]. Briefly, a 15 cm long umbilical cord was rinsed with phosphate buffered saline (PBS), cut into 1 mm³ pieces, and then digested with 0.1% type I collagenase (Sigma-Aldrich, USA) for 7-10 hours to form a homogeneous gelatinous solution. The gelatinous tissue solution was then mixed with 0.25% trypsin (Gibco, USA) at a ratio of 1:1 and incubated at 37 °C for 30 min before being diluted in sterile PBS at a ratio of 1:10. After centrifugation at 1200 rpm for 5 min, isolated hUCMSCs were resuspended in a maintenance medium consisting of DMEM/F12 (Hyclone, GE Healthcare Life Sciences, USA) supplemented with 15% fetal bovine serum (FBS; Hyclone, GE Healthcare Life Sciences, USA) and 1% penicillin and streptomycin (Gibco, USA) and seeded in cell culture dishes at a density of 1 × 10⁴ cells/cm². Passage 3 hUCMSCs were characterized as CD90+/CD105+/HLA-ABC+/CD34−/CD45−/CD19−/CD86−/HLA-DR− by flow cytometry [37]. The differentiation capacity of passage 3 hUCMSCs towards osteogenic, adipogenic, and chondrogenic lineages was verified accordingly [37].
Inhibition of ERK Activation by U0126.
To block the activation of ERK signaling, 25 μM U0126 (Calbiochem, Merck Millipore, USA), a specific inhibitor of ERK activation [38], was added to the osteogenic differentiation medium for the entire 21-day differentiation period. In a separate recovery experiment, hUCMSCs were treated with 25 μM U0126 for only the first 9 days, followed by continued cultivation in osteogenic differentiation medium without U0126 until day 21. Band intensities were determined using ImageJ software.

Alkaline Phosphatase Activity.
Culture medium was collected and stored at −80 °C until analysis. ALP activity in the culture medium was detected with a commercially available kit (Jiancheng Biochemical, China) following the manufacturer's instructions. Briefly, each 30 μL aliquot of culture medium was mixed with 500 μL buffer solution and 500 μL basic solution and then incubated at 37 °C for 15 min alongside standards and a blank. After the incubation, 1500 μL of the chromogenic agent was added to each sample, and the absorbance at 520 nm was measured (OD value). Additionally, ALP staining was performed as previously described [40]. In brief, hUCMSCs were fixed with an ice-cold 60% acetone-40% citrate solution and stained with diazonium salt with 4% naphthol AS-MX phosphate alkaline solution (Sigma-Aldrich, USA).
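Although the kit handles the chemistry, the underlying endpoint-colorimetry arithmetic is simple. The sketch below is illustrative only: the standard concentration, unit convention, and function names are our assumptions, not values taken from the kit's manual.

```python
# Hypothetical endpoint-colorimetry calculation for ALP activity.
# The standard concentration and unit convention are assumed placeholders;
# the actual values come from the kit's instructions.

def alp_activity(od_sample: float, od_blank: float, od_standard: float,
                 standard_conc: float = 0.1) -> float:
    """Estimate ALP activity from 520 nm absorbances.

    Activity is taken proportional to the blank-corrected sample absorbance
    relative to the blank-corrected standard, scaled by the standard's
    known concentration (0.1 here is an assumed placeholder).
    """
    return (od_sample - od_blank) / (od_standard - od_blank) * standard_conc

# Example: OD readings for one well
print(alp_activity(od_sample=0.42, od_blank=0.05, od_standard=0.31))
```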
Alizarin Red Staining.
After 21 days of cultivation, hUCMSCs were fixed with 4% paraformaldehyde (Sigma-Aldrich, USA) for 15 min, washed with distilled water, and then stained with alizarin red solution (1% alizarin red and 2% ethanol in distilled water) for 15 min at room temperature. Excess stain was removed by washing with distilled water several times prior to photography.
Imaging and Image Processing.
Images were acquired at room temperature with the CellSens software (Olympus America Inc., Center Valley, PA) on a fluorescence microscope (Olympus America Inc., Center Valley, PA) using a 20x (dry HC Plan Apochromat, NA 0.17) objective lens.
Statistical Analysis.
All experiments were performed at least six times. Statistical analysis was computed with SPSS 13.0 (IBM, USA). Statistical comparisons were performed using factorial analysis of variance (ANOVA), followed by an LSD test for pairwise comparisons between treatments; a p value less than 0.05 was considered statistically significant. Individual comparisons between two groups were made with the Mann-Whitney test for nonparametric data.
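For readers who wish to reproduce such comparisons outside SPSS, the same workflow can be sketched in Python. This is a minimal one-factor sketch with made-up values; the column names and data are illustrative, not the study's.

```python
# Minimal sketch of the described statistics, assuming a long-format
# table with columns "group" (treatment) and "value" (measurement).
import pandas as pd
from itertools import combinations
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "group": ["Control"] * 6 + ["OS"] * 6 + ["Block"] * 6,
    "value": [1.0, 1.1, 0.9, 1.0, 1.2, 0.8,
              2.1, 2.3, 2.0, 2.4, 2.2, 2.5,
              1.0, 0.9, 1.1, 1.0, 0.95, 1.05],
})

# One-way ANOVA across groups
anova = sm.stats.anova_lm(ols("value ~ C(group)", data=df).fit())
print(anova)

# Fisher's LSD post hoc: unadjusted pairwise t-tests
for a, b in combinations(df["group"].unique(), 2):
    t, p = stats.ttest_ind(df.loc[df.group == a, "value"],
                           df.loc[df.group == b, "value"])
    print(f"{a} vs {b}: p = {p:.4f}")

# Mann-Whitney U test for a nonparametric two-group comparison
u, p = stats.mannwhitneyu(df.loc[df.group == "Control", "value"],
                          df.loc[df.group == "OS", "value"])
print(f"Mann-Whitney Control vs OS: p = {p:.4f}")
```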
Osteogenic Differentiation of hUCMSCs.
During the 21-day cultivation in the maintenance medium (Control group), hUCMSCs presented consistent levels of ALP activity (an early osteogenic commitment indicator) (Figure 1(a)) as well as transcription of Osteocalcin (a terminal osteogenesis marker) (Figure 1(b)). In contrast, ALP activity of hUCMSCs cultured in the osteogenic differentiation medium (Osteogenic Stimulation (OS) group) significantly increased at day 5, peaked at day 9, and remained at high levels afterwards (Figure 1(a)). Meanwhile, gene expression of Osteocalcin in hUCMSCs continually increased in the OS group from day 9 to day 21 (Figure 1(b)). These data demonstrate that hUCMSCs are capable of osteogenic differentiation; however, without suitable stimulation such as the osteogenic differentiation medium, hUCMSCs do not undergo osteogenic differentiation spontaneously.
Diverse Activation of MAPK Signals during hUCMSC Osteogenic Differentiation.
There are three major MAPKs in mammals: ERK, JNK, and p38. Although all three MAPKs are regulated by phosphorylation cascades [41], they may function differently in specific events [30, 31]. In particular, during hUCMSC osteogenic differentiation, activation of JNK and p38 was not induced by the osteogenic differentiation medium throughout the entire 21-day cultivation (Figure 2). This suggests that JNK and p38 signaling may not be essential for hUCMSC osteogenic differentiation. On the other hand, phosphorylation of ERK in hUCMSCs robustly increased from day 5 and peaked on day 9, followed by a decline thereafter (Figure 2).
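Statements of this kind are typically backed by Western blot densitometry, with the phospho-ERK band intensity divided by the total ERK intensity in the same lane and then normalized to the control. A minimal sketch of that bookkeeping follows; the intensities are invented for illustration, not the study's data.

```python
# Hypothetical Western blot densitometry normalization.
# Band intensities (e.g., exported from ImageJ) are illustrative values.
import numpy as np

days = np.array([0, 5, 9, 13, 21])
p_erk = np.array([120.0, 310.0, 540.0, 400.0, 260.0])      # phospho-ERK bands
total_erk = np.array([500.0, 510.0, 505.0, 495.0, 500.0])  # loading control

ratio = p_erk / total_erk     # activation per lane
relative = ratio / ratio[0]   # normalized to the day-0 control

for d, r in zip(days, relative):
    print(f"day {d:2d}: relative ERK activation = {r:.2f}")
```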
Blocking hUCMSC Osteogenic Differentiation by an ERK Activation Inhibitor.
To reveal the significance of ERK activation in hUCMSC osteogenic differentiation, U0126, an inhibitor used to prevent ERK activation in BMSCs [42, 43], was added to the osteogenic differentiation medium throughout the entire 21-day cultivation period (Block group). In this loss-of-function evaluation, U0126 completely eliminated the stimulation of ERK activation by the osteogenic differentiation medium in hUCMSCs (Figure 3). As previously shown, ALP activity of hUCMSCs increased in culture with osteogenic differentiation medium, and this increase was completely prohibited by continuous inhibition of ERK activation by U0126 (Figure 4). In addition, transcription of osteogenic marker genes, such as Type I Collagen, Osteopontin, Bone Sialoprotein, and Osteocalcin, was significantly induced in hUCMSCs by the osteogenic differentiation medium (Figure 5). However, this induction was fully abolished by continuous U0126 administration (Figure 5). Immunostaining against Osteopontin and Osteocalcin as well as alizarin red staining for calcium deposition confirmed the inhibitory effects of the ERK activation inhibitor U0126 on hUCMSC osteogenic differentiation (Figure 6).
Rescuing hUCMSC Osteogenic Differentiation by Removing U0126.
A separate Recovery group, in which U0126 was removed from the osteogenic differentiation medium at day 9 of the cultivation, was employed to further confirm the importance of ERK activation in hUCMSC osteogenic differentiation. Western blotting showed that although ERK activation in hUCMSCs was effectively blocked by U0126 at day 9 (Figure 3(a)), the phosphorylation of ERK was induced by the osteogenic differentiation medium after removing the inhibitor in the Recovery group (Figure 3(b)).
Functionally, the activity of secreted ALP in the Recovery group was significantly higher than that of the Control or Block groups at the end of cultivation, even though it was not yet comparable to that of the OS group (Figure 4(b)). Similar trends were also detected in attached hUCMSCs by ALP staining (Figure 4(c)). Real-time PCR and immunostaining also showed increased expression of osteogenic markers in the Recovery group at day 21, indicating osteogenic differentiation of hUCMSCs (Figures 5 and 6). However, the differentiation of the Recovery group was only partial, characterized by lower levels of the osteogenic markers and less calcium accumulation than those of the OS group at the end of cultivation (Figures 5 and 6).

Figure 3: The activation of ERK signaling in hUCMSCs was blocked by U0126. Treating hUCMSCs with U0126 for 9 days completely prohibited the osteogenic differentiation medium-induced increase of ERK activation (a). After the inhibitor was removed, ERK activation of hUCMSCs in the Recovery group was upregulated at day 21 (b). The relative densities were normalized to the Control group. Data are presented as mean + SD; *p < 0.05 (n = 6).
Discussion
Since its discovery, the MAPK family has been found to play important roles in controlling cellular behaviors, including, but not limited to, cell differentiation induced by intracellular or extracellular stimulation [30, 41, 44, 45]. Three subsets of MAPKs are characterized in mammals: ERK, JNK, and p38 [30]. Although all three MAPK subsets are regulated by phosphorylation cascades [41], they may function independently and distinctly [30, 31]. Previous studies indicate that p38 signaling is involved in stem cell neurogenic, adipogenic, and chondrogenic differentiation. With regard to osteogenesis, the enhancing role of p38 activation has been widely described in a mouse preosteoblastic cell line [46-49], mouse muscle-derived stem cells [50], human adipose-derived stem cells [33], and both mouse and human BMSCs [32, 43, 51]. Interestingly, since p38 activation was not upregulated during the osteogenic differentiation of hUCMSCs, our current study implies that the p38 pathway may not be involved in this process.
Following stimulation with the osteogenic differentiation medium, JNK activation was found in the later stage of hBMSC osteogenic differentiation [32]. In addition, studies using a mouse preosteoblastic cell line suggested that constitutive activation of JNK increased bone morphogenetic protein (BMP) 2-induced osteoblast differentiation and mineralization [52].
However, Sullivan et al. reported that a JNK inhibitor enhanced osteogenesis in neurofibromatosis type 1 (NF1)-deficient mouse osteoprogenitor cells, including primary neonatal calvarial cells and BMSCs [53]. Moreover, Doan et al. also described a repressive effect of JNK on mouse BMSC osteogenic differentiation [43]. Despite these conflicting observations on the influence of JNK on BMSC osteogenic differentiation, our data suggest that JNK signaling is not critical for hUCMSC osteogenic differentiation.
Meanwhile, a negative impact of ERK signaling on osteogenesis was also observed in the mouse preosteoblastic cell line [49, 54] and hBMSCs [43]. Indeed, constitutive increases in activated ERK signaling were recognized as the reason for impaired osteogenesis in NF1-deficient patients [36]. Moreover, blockade of ERK activation in Nf1−/− mBMSCs could attenuate the increased cortical porosity observed in mutant pups [36, 55]. Conversely, in other studies, activation of ERK was thought to benefit mBMSC [33] and hBMSC [32, 45, 56] osteogenic differentiation. In this study, we found that the osteogenic differentiation medium strongly activated ERK, but not JNK and p38, in a time-dependent manner in hUCMSCs. By employing loss-of-function and recovery studies, we further confirmed that activation of the ERK pathway critically regulates the osteogenic differentiation of hUCMSCs, another example of how hUCMSCs are not identical to hBMSCs [16, 17]. This discovery enriches our knowledge of the underlying mechanisms regulating hUCMSC osteogenic differentiation and sets up fundamental ideas to more effectively stimulate hUCMSC conversion towards the osteogenic lineage in cell therapy and tissue engineering strategies. It is worth noting that several diverse techniques have been developed for the dissociation of tissues for primary cell isolation. To obtain hUCMSCs in high quantity and with high stemness, especially a higher capacity for osteogenic differentiation, a previously described collagenase/trypsin-based isolation method was used in this study [37, 57, 58]. Since Salehinejad et al. revealed that the isolation method can profoundly alter cell harvesting and proliferation by comparing different methods for hUCMSC isolation from human umbilical cord Wharton's jelly [57], the osteogenic potential of the hUCMSCs used in this study may differ slightly from that of the hUCMSCs reported by Bosch et al. [22].
In summary, our current results demonstrated that the activation of the ERK signaling pathway, but not JNK or p38, was necessary for the osteogenic differentiation of hUCMSCs, which deepened the understanding of the nature of hUCMSCs, a relatively new alternative stem cell source for tissue engineering. Moreover, our study significantly benefits the application of hUCMSCs, particularly in bone tissue engineering, by pointing out a potential regulatory direction to stimulate hUCMSC osteogenesis for engineered bone tissue generation.
Diagnosis of a bleeding Dieulafoy lesion on computed tomography and its subsequent embolization
1Department of Medicine, Division of Gastroenterology; 2Department of Radiology, University of Alberta, Edmonton, Alberta Correspondence and reprints: Dr C Noel Williams, Division of Gastroenterology, College Plaza #205, 8215-112 Street, Edmonton, Alberta T6G 2C8. Telephone 780-492-8242, fax 780-492-8153, e-mail nobeco@shaw.ca Received for publication May 10, 2004. Accepted June 8, 2004. RM Penner, RJ Owen, CN Williams. Diagnosis of a bleeding Dieulafoy lesion on computed tomography and its subsequent embolization. Can J Gastroenterol 2004;18(8):525-527.
CASE PRESENTATION
An 83-year-old woman with no history of previous gastrointestinal illness or hemorrhage presented to a university hospital with passage of maroon-coloured hematochezia and presyncope. Her past medical history included a history of rheumatoid arthritis requiring chronic treatment with a diclofenac-misoprostol combination. On arrival, her hemoglobin was 48 g/L, her mean corpuscular volume was 84 fL, and she exhibited signs of instability, with a heart rate of 90 beats/min and a blood pressure of 98/40 mmHg. Her laboratory investigations were otherwise unremarkable, and included normal coagulation times. Initial resuscitation consisted of four units of packed red cells followed by an intravenous saline infusion. Gastroscopy revealed an absence of blood in her stomach and no mucosal lesions. A careful inspection that included a retroflexed view of the gastric fundus was entirely normal. Colonoscopy to the terminal ileum was also normal, except that residual preparatory laxative solution was blood-tinged.
A small bowel source of bleeding was suspected, so plans were made for an abdominal computed tomography (CT) scan, to be followed by a small bowel follow-through depending on its results. She remained stable after the initial resuscitation with no further episodes of hemorrhage, so radiological investigations were arranged for the following day. Due to a miscommunication, oral contrast was given, and as a result the CT scan was delayed until the afternoon. A standard protocol for upper gastrointestinal tract CT examination was used, with oral water contrast given before dual-phase post intravenous contrast (Omnipaque, Amersham Health, USA) helical scans. Axial 5 mm images were obtained in the arterial phase (40 s post initiation of intravenous contrast), and demonstrated active bleeding into the gastric fundus from the distal splenic artery. Axial and three-dimensional scans are presented in Figures 1 and 2, respectively.
The patient was hemodynamically stable at this stage and was transferred to the angiography suite for urgent mesenteric angiography. No active bleeding was seen, but a 6 mm × 12 mm pseudoaneurysm arising from the splenic artery was identified on the arterial phase images (Figure 3). The splenic artery divided proximal to the splenic hilum, and the pseudoaneurysm arose from the posterior division. The celiac axis was selectively catheterized with a 5 Fr Simmons 2 catheter, and a rapid transit microcatheter and Transend platinum guidewire combination was used to cannulate the splenic artery. Embolization across the neck of the pseudoaneurysm was successfully carried out using 11 0.018 inch microcoils (4 mm to 5 mm in diameter) (Tornado, Cook Inc, USA). The completion angiogram did not demonstrate any filling of the pseudoaneurysm (Figure 4).
A repeat gastroscopy performed on the day after her angiogram revealed ulceration at the site of arterial occlusion in the gastric fundus, but no other abnormal mucosa in the esophagus, stomach or duodenum (Figure 5). The patient was treated with an intravenous proton pump inhibitor for 48 h and subsequently observed in hospital on oral proton pump inhibitor therapy. Because of an infected ulcer on her leg, she remained in hospital for over a month. During that time her hemoglobin was followed carefully, and there was no evidence of recurrent bleeding.
DISCUSSION
Since the published description in 1898 that resulted in their eponym (1), Dieulafoy lesions have been a vexing problem. Alternately known as exulceratio simplex (the name given by Dieulafoy), cirsoid aneurysm or caliber-persistent vessels, the lesions consist of inappropriately large arteries lying close to the gastrointestinal mucosal surface (2). More than one-half of the lesions occur in the gastric fundus, and one-third are extragastric (3). Estimated to occur in as few as 0.3% to as many as 5% of patients with gastrointestinal hemorrhage (4-6), they pose a disproportionate diagnostic challenge, because they produce no endoscopic change except for small erosions, 1 mm to 3 mm in diameter, overlying the culprit arteries (7). In fact, data from experienced endoscopists at the Mayo Clinic in Rochester, Minnesota (3), revealed that an initial endoscopy was diagnostic in only 63% of patients eventually diagnosed with endoscopic Dieulafoy lesions, even though 77% were actively bleeding at the time of their first endoscopy. Because established endoscopic criteria for the diagnosis of Dieulafoy lesions require the presence of a minute mucosal defect with or without active bleeding or an adherent clot (8), it is not surprising that a lesion that has stopped actively bleeding can be difficult, or even impossible, to detect.
Although the diagnosis of Dieulafoy lesions remains challenging, it is crucial now that effective therapy can be offered. Mortality rates preceding the advent of therapeutic endoscopy have traditionally been described as 80% (9), with a high proportion of diagnoses made on autopsy (10), but a more recent 30 day mortality was reported to be 13% (3). Improved survival seems largely attributable to success with therapies that include endoscopic hemostasis by electrocautery, injection or clip application, and arteriographic therapy by selective embolization (3,11).
In the present case, the mucosal defect responsible for the patient's massive bleeding was not detected on initial endoscopy, when the bleeding was quiescent. Through fortuitous timing, however, active bleeding took place during her CT scan. Following her second hemorrhage, and the application of therapeutic angiography, a more obvious mucosal defect was evident.
Traditional CT scans have had a minor role in the localization of gastrointestinal bleeding (12), but increased speed of image acquisition and improved resolution have increased their capability to define small lesions, and the advent of new technologies may widen their applicability. CT angiography has not been generally advocated for the workup of acute gastrointestinal hemorrhage (13), but it can accurately delineate the mesenteric vascular anatomy and identify bleeding sites in the setting of chronic occult bleeding. In a preliminary study (14), CT angiograms compared favourably with traditional arteriography.
While a successful diagnosis in the present case was reached, in part, due to fortuitous timing, the use of conventional CT scanning did offer a compelling three-dimensional view of our patient's pathology that could be used to therapeutic effect on arteriography and later confirmed on endoscopy.We hope this will represent a harbinger of further harmonies between diverse technologies.
Figure 2) Three-dimensional reconstruction of the computed tomography images demonstrates that bleeding is from a tributary of the splenic artery
FARMERS’ ECONOMIC INTEREST IN DERMANYSSUS GALLINAE CONTROL
Poultry red mite or Dermanyssus gallinae (De Geer, 1778) is the most significant poultry ectoparasite with regards to health and economy. It is a widely accepted opinion that D. gallinae can only be suppressed, at a current annual expenditure of 60 eurocents per layer. However, research indicates that D. gallinae can be controlled in other ways and eradicated from production facilities and farms, and subsequent reinfestation can be prevented by implementing biosafety measures. This provides a long-term or permanent effect of D. gallinae control. From the economic aspect, this means that after decades of increasing expenditures, farmers can first decrease, and then completely eliminate, expenditures incurred by D. gallinae. Therefore, economic calculations should be based on an expert and comprehensive approach, itself based on rational control and preventive veterinary medicine, i.e. a D. gallinae control program. This would result in long-term savings: in 10 years' time, 0.5 million euros would be saved per 100,000 layers. There are an estimated 4 billion infested layers worldwide.
Introduction
Poultry production is the field of animal husbandry which provides the largest portion of animal-source foods for human consumption. Recent decades have been marked by an intensive increase in egg production. Over the period from 1970 to 2007, the production of table eggs tripled, rising from 20 million to 60 million tons. According to FAO, the number of layers reached 4.93 billion in 2009 (FAO, 2010).
Poultry red mite is the most significant poultry ectoparasite with regards to health and economy (Nordenfors, 2000). In the current situation, control of poultry red mites pays insufficient attention to the choice of acaricidal products and control methods, i.e. rational pharmacotherapy (suppression), professional application, and the principles of preventive veterinary medicine. Therefore, simultaneously with the increase in egg production, the problem of the prevalence and harmful effect of poultry red mite has also increased, together with the damage caused by inefficient, partly efficient, or illegal control (Giangaspero et al., 2011, 2017; Marangi et al., 2012).

Poultry red mite

Poultry red mite, Dermanyssus gallinae (De Geer, 1778), is an invasive arthropod, successfully adapted to the conditions of modern poultry production (Figure 1). Over 80% of commercial layer flocks, as well as parent and breeding flocks, are affected by this ectoparasite (Sparagano et al., 2009). Its small size (about 1 mm), mobility, ability to feed on a large number of species of birds and mammals, resistance to temperature extremes and starvation (Pavlićević et al., 2007b), extreme adaptability and development of resistance, large numbers and high reproductive potential, hidden way of life, and nocturnal activity (Simić and Živković, 1958) are the traits which enable its invasiveness. Poultry red mite is a problem of the flock, but also of the environment, thus jeopardising not just current but also future flocks (Pavlićević et al., 2018b). In addition to the biological traits of the parasite, the basis of this health and economic problem is a long-standing wrong approach to D. gallinae control, which has been additionally exacerbated by new changes (EU 1999/74/EC) in the technological conditions of housing layers (Pavlićević et al., 2016a, 2016c, 2019; Flochlay et al., 2017).
Harmful effects
The harm caused by D. gallinae can be direct and indirect. Direct harm is caused by the immediate parasitic action of D. gallinae: crawling on the body, stinging, and bloodsucking. This results in poultry being afflicted by stress, anaemia due to blood loss, deteriorated general health and immune status, and aggravation and transmission of infectious diseases. Clinical manifestations in the flock are nervousness, pronounced health problems, increased mortality rates, reduced egg quality, and increased feed consumption (Emous, 2005, 2017; Flochlay et al., 2017). Infestation with D. gallinae is a zoonosis and an occupational disease (Cafiero et al., 2019). It also afflicts people, causing nervousness, itching, and changes on the skin. These problems can result in workers leaving their jobs or asking for compensation due to aggravated working conditions.
Further, indirect harm caused by D. gallinae occurs due to transmission and incorrect approach.
1. A young flock can be invaded in the rearing facility and can then transmit the infestation to a previously uninfested production facility, causing further damage. The disease caused by D. gallinae (dermanyssosis) is a hidden flaw, but it can be detected if properly looked for. The condition necessary to avoid the damage is a correct forensic assessment and definition of legal relations (Pavlićević et al., 2003, 2018c).
2. Used cages and equipment are some of the key vectors of D. gallinae in intensive poultry production. When purchasing used cages and equipment, it is necessary to check for possible infestation. Infested cages and equipment should cost less, because they will incur unplanned sanitation expenditures for farmers (Pavlićević et al., 2016b, 2018c).
3. Transport cages for poultry transmit D. gallinae and thus cause damage (Pavlićević and Pavlović, 2016b).
4. Incorrect work organisation on farms enables the transmission of the infestation within the farm, from one house to another.
5. An incorrect choice of products and control methods and/or their unprofessional application cannot provide efficient control of a D. gallinae infestation and can cause damage.
6. Untimely D. gallinae control increases the damage by maximising direct harm, as well as making control more expensive, usually through higher product consumption and lower efficacy. Production in highly infested flocks (++++) is not cost-effective.
7. Uncritical control is reflected in the application of illegal acaricides or inadequate application of registered products, which is harmful to human health, poultry, and the environment.
8. The presence of D. gallinae on table eggs causes customers' disgust and aversion.
9. Highly infested flocks can result in abattoirs' refusal to accept the flock after the production period is finished.
Calculation
Expenditures incurred by the damage, added to the cost of products and the implementation of control measures, represent the farmers' total economic loss. In some cases, these expenditures can also include the cost of preparing for and sanitising the consequences of application, such as eggs which must be safely disposed of during the withdrawal period (harmful chemical residues in eggs). Farmers' true economic loss becomes visible after one year, the duration of the production period, or over a longer period.
Farmers' economic loss is driven by increased parasite prevalence, the intensity and extent of the infestation, the difficulty of D. gallinae control, and the cost of products and methods. The estimated cost per hen increased by 40% over the period from 2005 to 2017 and amounts to 231 million euros annually for the whole of Europe. The annual expenditure caused by D. gallinae per hen is 60 eurocents, 15 of which are spent on control and 45 on damage (1:3) (Emous, 2005, 2017). Less successful, and especially unsuccessful, control incurs both types of expenditure: the less successful D. gallinae control measures are, the bigger the total expenditure for farmers. However, expenditures caused by D. gallinae and by its control do not have to occur simultaneously. Successful D. gallinae control implies only control expenditures for farmers, and, in time, even those are eliminated. For example, in 10 years' time, over 0.5 million euros would be saved per 100,000 layers (the capacity of a medium-size farm). If we apply the infestation rate to the number of layers (FAO, 2007), we arrive at a figure of about 4 billion layers infested with D. gallinae worldwide. This is an approximate figure, since both the number of layers and the number of infested flocks have risen in the meantime. For example, reports for Germany, the Netherlands, and Belgium put D. gallinae infestation in layer flocks at 94% (Mul et al., 2016).
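The magnitude of these figures can be verified with simple arithmetic. The sketch below assumes only the per-hen figures quoted above (60 eurocents annually, split 15/45 between control and damage); the variable names are ours.

```python
# Back-of-the-envelope check of the economic figures quoted in the text.
COST_PER_HEN_EUR = 0.60           # annual cost per layer
CONTROL_SHARE_EUR = 0.15          # spent on control
DAMAGE_SHARE_EUR = 0.45           # lost to damage

layers = 100_000                  # capacity of a medium-size farm
years = 10

total_cost = layers * COST_PER_HEN_EUR * years
print(f"Potential 10-year saving per {layers:,} layers: {total_cost:,.0f} EUR")
# -> 600,000 EUR, consistent with the "over 0.5 million euros" claim

# Europe-wide annual cost cited: 231 million EUR at 60 cents per hen
print(f"Implied European layer population: {231e6 / COST_PER_HEN_EUR:,.0f} hens")
```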
Current control
Current D. gallinae control offers a large selection of products. For the sake of clarity, we have selected just the two most important groups of products.
Since the beginning of modern intensive poultry production, D. gallinae control has predominantly been based on acaricides, synthetic neurotoxic compounds. Their purpose is D. gallinae suppression, and their effects last for several months, or in some cases over six months (Pavlićević, 2005; Pavlićević et al., 2016, 2018d).
Over the past 10 years, with the development of SiO2 formulations, a technology which can compete with acaricides has been developed for the first time. However, the progress achieved with SiO2 has not been properly utilised, having been employed only for D. gallinae suppression.
No developmental steps taken so far indicate any future change in the widely accepted approach to D. gallinae control. The control program for D. gallinae has been developed in contrast to the predominant approach. However, for over 20 years, it has remained marginalised and without any significant impact on the mainstream red mite control in the poultry industry.
Program
The problem of D. gallinae control can be solved and does not have to persist in the poultry industry. The solution is a program, a comprehensive approach based on preventive veterinary medicine and rational pharmacotherapy (control). The primary goal of the program in intensive poultry production is to prevent D. gallinae infestation in uninfested poultry houses, i.e. farms. In infested houses, safety risks need to be excluded and rational control introduced, after which efficacy and cost-effectiveness will increase. After the necessary conditions have been met, D. gallinae is eradicated from the production facilities on the farm, and biosafety measures are introduced (Pavlićević et al., 2018a, 2018b). For example, if a highly effective suppression of D. gallinae is achieved by two treatments (during housing preparation, before the house is populated) with P 547/17 (project ID 1115), then even in the partial absence of adequate conditions, mites will appear in small numbers only in the final three months. A small mite infestation (from + to ++) has no significant (measurable) health impact and does not cause economic loss. If there are adequate conditions (hygienic conditions and housing downtime), the procedure of housing preparation with the P 547/17 technology results in D. gallinae eradication from production facilities. In this case, the flock is not exposed even to a minimal D. gallinae presence, i.e. its harmful effect, and therefore these harmful consequences no longer exist. If biosafety measures continue to be implemented, future expenditures caused by harmful effects or further control are excluded.
During its development, the program has relied on currently available products and methods. Initially, it was based on acaricides. However, contrary to the widely accepted method of control (which required ever more frequent use of acaricides), correct acaricide application in the poultry house (eradication and introduction of biosafety measures) eliminates the need for further acaricide use (Pavlićević et al., 2016).
The first practically applicable distancing from acaricide control was enabled by SiO2-based formulations. By exploring the possibilities of mechanical control, an original program was developed, based on the combination of powdered and liquid forms. Eradication was possible again, this time based on a mechanical method. However, an expensive and complex technological procedure, highly demanding regarding the necessary conditions, hindered its wider implementation.
The main disadvantages of SiO2-based product application have been successfully overcome, first by developing a specialised formulation based on inert oils (P 547/17, Pulcap), and then by developing an original technology for its application (project ID 1115). The new formulation and technology have been tested in laboratory (Pavlićević et al., 2017a, 2017c) and clinical conditions. In this way, we have eliminated safety risks and devised a more functional, more efficient, and less complex application procedure which requires fewer conditions. There is no possibility for D. gallinae to develop resistance to P 547/17 or to significantly adapt its behaviour; therefore, the current program will not lose its efficacy over time, and its results are permanent. Comparing it with other programs and methods for poultry red mite control, we have concluded that the P 547/17 formulation and application technology is an example of rational D. gallinae control (Pavlićević et al., 2017b, 2019b). Moreover, it provides all the conditions necessary to completely exclude the application of neurotoxic synthetic compounds from poultry meat and egg production. The P 547/17 formulation and application technology has its requirements: professional application, hygienic conditions, and housing downtime. Furthermore, despite its undisputed quality, it has certain disadvantages, thus leaving room to further improve this type of control. The program remains permanently open to all new contributions to D. gallinae control, which would help it function better and respond more adequately to the various practical challenges of the modern poultry industry.
A systematic approach to the implementation of D. gallinae control would be an adequate step towards the intensive vertical and horizontal integration of the poultry industry. Systematic program implementation would additionally contribute to functional and rational product application, protection of human and animal health and the environment, and improved control of diseases transmitted by D. gallinae; improving the flock's general health status and production results would translate into farmers' economic gain.
The situation for D. gallinae control in extensive poultry production is different from that in intensive production, and consequently the approach of the program differs as well. In extensive poultry production, there is an open system, contact with other domestic and wild animals, a large area per layer, and a complex environment. It is advised to correctly build and set up the perches and nests, together with barriers which successfully separate them from the rest of the environment. In this way, farmers can control the D. gallinae problem easily and without significant expense (P 2017/0762).
The necessary conditions have been met to first stop the unfavourable trend in D. gallinae control, then mitigate the problem, and eventually eliminate it completely. All this cannot be achieved immediately, since the procedure is technologically demanding and complex.
Veterinary medicine
Insufficient efficacy of veterinary medicine in D. gallinae control has affected the size and extent of the economic loss suffered by farmers. Veterinary medicine could have contributed more to the mitigation and prevention of loss caused by D. gallinae in the following ways:
1. The primary role of veterinary medicine should have been to provide timely and correct information to farmers. In this way, the disease could have been stopped in its initial phase, most farms would have been protected by biosafety measures, and the rest would have been easily treated. Well-informed farmers would have taken an active role in solving the problem; otherwise they make wrong decisions and become part of the problem (Pavlićević et al., 2016);
2. Defining eradication as the objective of control in intensive poultry production is the premise for a real solution. The generally accepted opinion in veterinary medicine that D. gallinae can only be suppressed puts farmers in a hopeless position of constant, increasing expenditures (Pavlićević et al., 2018a, 2018b);
3. Improved detection and standardised laboratory and clinical testing of the efficacy of products and methods for D. gallinae control (Pavlićević et al., 2007a, 2017b, 2019c);
4. Warning about technological flaws and negative effects of complex cages and equipment, which would contribute to improved conditions for D. gallinae control (Pavlićević et al., 2016a);
5. Timely utilisation of the legislative change in the EU regarding cages and equipment (EU 1999/74/EC) could have easily eliminated the problem. However, the omission to do so had the opposite effect and actually contributed to the spread of the disease (Pavlićević and Pavlović, 2016a);
6. Insisting on rational control, advising farmers on the optimal choice of current products and methods, based on verified data;
7. Ensuring professional application of products and methods, which is crucial for efficient control;
8. Insisting on preventive veterinary medicine and maximising the efficacy of control relative to its cost;
9. Regular resistance testing and timely elimination of the unjustified use of acaricides to which resistance has already developed (Pavlićević et al., 2016);
10. Promoting integrated health care, especially with regards to the control of infectious diseases transmitted by D. gallinae, which would additionally improve the general health status of poultry and contribute to the general welfare and cost-effectiveness of poultry production (Pavlićević et al., 2017b);
11. Improving the efficacy of D. gallinae control and residue monitoring, and minimising or completely excluding the toxicological risk caused by uncritical control (Pavlićević et al., 2005, 2018c);
12. Introduction of the control program would cover all of the above requirements (Pavlićević et al., 2018a, 2018b, 2019d).
We are facing an open question: to what extent does veterinary medicine fulfil its role in D. gallinae control? Farmers' economic interest is currently not in accordance with the generally accepted opinion in veterinary medicine regarding D. gallinae control. The future will show to what extent it is possible to critically review and improve the above-mentioned positions of veterinary medicine in accordance with basic medical principles and in the interest of general welfare and the economic interest of farmers.
Conclusion
The economic interest of poultry producers can be significantly improved. Farmers' expenditures incurred by D. gallinae can first be reduced (over the time necessary to meet the conditions) and subsequently completely eliminated. Improving farmers' economic interest from the aspect of D. gallinae control is in correlation with the general welfare (interest). The requirements necessary to achieve this depend on the role of veterinary medicine, which should reconsider the current approach to D. gallinae control and introduce the principles of rational control and preventive veterinary medicine, i.e. the control program.
Changes in body composition in relation to estimated glomerular filtration rate and physical activity in predialysis chronic kidney disease
Abstract Background Early body composition changes, associated with physical inactivity and disease advancement, are devastating for patient-related outcomes in predialysis chronic kidney disease (CKD), thus warranting a detailed analysis of body composition beyond conventional measures. Methods The study included 40 subjects diagnosed with CKD, recruited between January and May 2021. Body composition was measured using the multifrequency analyzer InBody 770. The International Physical Activity Questionnaire-Short Form was used to assess physical activity. Suitable statistical analyses were performed using SPSS 21.0. Results The mean age of the subjects was 58.68 ± 12.24 years. Sarcopenic obesity was prevalent in 62.5% of the subjects. Body mass index underidentified obesity by 15% compared to percent body fat, especially in subjects with low muscle mass. The decline in a unit of estimated glomerular filtration rate (eGFR) significantly correlated with a decrease in weight (p = 0.02), body fat mass (p = 0.05), visceral fat area (p = 0.05), and phase angle (p = 0.01), with marginal changes in waist-hip ratio and extracellular water/total body water. The effect of physical activity on skeletal muscle mass was homogeneous between low and moderate activity levels, but significantly different from the high activity level. Conclusion Changes in the fat and fluid compartments were associated with eGFR decline, whereas higher physical activity positively affected body composition.
| INTRODUCTION
The consequence of malnutrition arising from the interaction of a multitude of aberrations is devastating for the quality of life, physical functioning, and mortality of predialysis chronic kidney disease (CKD) patients. Globally, India occupies the third position in malnourished CKD patients with a pooled prevalence of 56.7%, among hemodialysis, peritoneal dialysis, and nondialysis CKD. 1,2 Even so, predialysis CKD patients are an understudied population as compared to hemodialysis in the Indian subcontinent.
Secondary protein deficiency caused by amino acid metabolism disturbances, toxin accumulation, appetite suppression due to increased circulating immune mediators, and metabolic acidosis, as well as poor nutrient intake, all contribute to hypoalbuminemia and muscle mass loss in CKD patients.3-7 Thus, a comprehensive evaluation of subclinical malnutrition in all stages of CKD is the need of the hour to prevent the onset of apparent malnutrition and improve associated patient outcomes. In this regard, assessment of body composition beyond the metric of body mass index (BMI) is essential in CKD to evaluate the alterations associated with the disease processes. Despite the presence of superior methods, dual-energy X-ray absorptiometry (DXA) and single- or multi-frequency bioelectrical impedance analysis (BIA) are used in clinical practice, regardless of the possible interference of hydration with optimal measurement.8,9 Due to the additional advantages of portability, affordability, and absence of radiation, BIA may be more useful than DXA in routine use.
In addition, physical activity has recently been investigated because of its favorable effects on overall health in individuals with CKD. Although several studies have explored the effects of exercise and increased physical activity on outcomes such as skeletal muscle mass, strength, and fat mass, there is little to no evidence from the Indian region.1 The prevalent physical inactivity of the Indian population, compared with global estimates and recommended standards, further strengthens the need to explore these beneficial effects.10,11 This study aimed to identify body composition alterations in the regional predialysis population using BIA, their relation with estimated glomerular filtration rate (eGFR) decline, and the effects of physical activity on these parameters.
| Subjects
This was an exploratory study conducted from January to May 2021 in the Mysuru district of India. Male or female subjects above 18 years, with or without comorbidities such as diabetes, hypertension, and cardiovascular conditions, in conjunction with a clinical diagnosis of CKD as referred by a nephrologist, were considered for inclusion in the study. Exclusion criteria were as follows: a dialysis regimen, renal transplant recipients, history of infection, trauma, or major surgery within 1 month of recruitment, and other catabolic conditions such as cancer, chronic obstructive pulmonary disease, and cirrhosis. Patients with metallic implants, amputation or physical disability, or inability to stand upright were also considered ineligible. Serum creatinine, blood pressure, and random blood sugar levels were noted from the medical records of the subjects. Jaffe's and GOD-POD (glucose oxidase-peroxidase) methods were used by the laboratories to test serum creatinine and random blood sugar levels, respectively, on nonfasting venous samples.
Demographic data, medical history, and serum creatinine levels were retrieved from the patients' medical records. The CKD-Epidemiology Collaboration (CKD-EPI) creatinine equation (2009) was used to calculate eGFR. Height was measured using a digital stadiometer, BSM 170. BMI was calculated by dividing weight in kilograms by the square of height in meters. The multifrequency bioelectrical impedance analyzer InBody 770 was used to measure the body composition parameters. The stadiometer and body composition analyzer were from InBody Co., Ltd. The instrument had eight-point tactile electrodes through which bioelectrical impedance and reactance were measured at frequencies ranging from 1 to 1000 kHz at each of five body segments: right arm, left arm, trunk, right leg, and left leg. The assessment was performed in the morning hours using the standard protocol, that is, in the upright position, after voiding urine and excrement to avoid interference with weight measurement, and at least 2 h post breakfast to avoid interference from food mass. The subjects were asked to remove all accessories, socks, and jewelry before testing.
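For reference, the 2009 CKD-EPI creatinine equation and the BMI formula used here can be written out directly. The sketch below assumes serum creatinine in mg/dL; the function and variable names are ours, and the sex and race coefficients follow the published equation.

```python
# Minimal sketch of the 2009 CKD-EPI creatinine equation and BMI.
def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool,
                 black: bool = False) -> float:
    """eGFR in ml/min/1.73 m^2 from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def bmi(weight_kg: float, height_m: float) -> float:
    """Weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

# Illustrative values, not study data
print(f"eGFR: {ckd_epi_2009(1.8, 59, female=False):.0f} ml/min/1.73 m^2")
print(f"BMI:  {bmi(68, 1.62):.1f} kg/m^2")
```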
| Statistical analysis
Baseline categorical data and medical history were reported as percentages. The Shapiro-Wilk test for normality revealed a non-normal distribution (p < 0.05) of the data with respect to categories of the disease. Hence, differences between groups were compared by the Mann-Whitney U test (p < 0.05). In contrast, the results of the Shapiro-Wilk test irrespective of the disease categories indicated a normal distribution of the variables, warranting parametric tests. Karl Pearson's coefficients of correlation were obtained between eGFR and body composition parameters. Further, the parameters with a significant correlation with eGFR were examined for multivariate relationships using multivariate linear regression, with eGFR as the independent variable and the selected body composition parameters as dependent variables. It was also observed that there was no effect of age, gender, hypertension, diabetes, or cardiovascular disease on the selected dependent parameters. The χ² test for association was applied to examine the association between chronic kidney disease categories and physical activity levels. Multivariate analysis of variance (ANOVA) was performed to examine the effect of physical activity levels on body composition parameters. For the parameters with significant results, Duncan's homogeneous subsets were obtained between physical activity levels. All statistically significant values were compared at a 0.05 or 0.01 level of significance. The statistical analysis was performed using SPSS 21.0 (IBM Corp.).
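The normality-gated choice of test described above can be expressed compactly. The following sketch, with simulated data standing in for the study's measurements, illustrates the branching between parametric and nonparametric comparisons and the Pearson correlation step; group sizes and parameter names are illustrative.

```python
# Sketch of the test-selection logic: Shapiro-Wilk for normality,
# then Mann-Whitney U (non-normal) or t-test (normal), plus Pearson r.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
early_stage = rng.normal(25.0, 4.0, 17)   # e.g., BMI in the ES group
late_stage = rng.normal(23.0, 4.0, 23)    # e.g., BMI in the LS group

def compare_groups(a, b, alpha=0.05):
    # Use Mann-Whitney if either group departs from normality.
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

test, p = compare_groups(early_stage, late_stage)
print(f"{test}: p = {p:.3f}")

# Pearson correlation between eGFR and a body composition parameter
egfr = rng.uniform(8, 60, 40)
weight = 55 + 0.3 * egfr + rng.normal(0, 5, 40)
r, p = stats.pearsonr(egfr, weight)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```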
| Outcome variables
Obesity was identified as BIA-derived percent body fat ≥25% in men and ≥30% in women.12 A high waist-hip ratio concurrent with abdominal obesity was defined with cut-offs of 0.9 for men and 0.8 for women.13 Alternatively, a BIA-obtained visceral fat area >100 cm² was identified as abdominal obesity.14 Low muscle mass was ascertained with a skeletal muscle index <7.0 kg/m² in men and <5.7 kg/m² in women, according to the 2019 consensus update of the Asian Working Group for Sarcopenia (AWGS).15 Sarcopenic obesity, as per the European Society for Clinical Nutrition and Metabolism (ESPEN) and the European Association for the Study of Obesity (EASO) criteria,16 combines obesity by percent body fat (PBF) with low muscle mass, both measured by BIA.
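These definitions reduce to a few sex-specific threshold checks per subject. A minimal sketch using the cut-offs cited above (field and function names are ours):

```python
# Classification sketch using the cut-offs cited in the text
# (PBF: >=25% men / >=30% women; WHR: 0.9 men / 0.8 women;
#  visceral fat area > 100 cm^2; AWGS 2019 SMI: <7.0 / <5.7 kg/m^2).
from dataclasses import dataclass

@dataclass
class Subject:
    male: bool
    percent_body_fat: float   # %
    waist_hip_ratio: float
    visceral_fat_area: float  # cm^2
    smi: float                # skeletal muscle index, kg/m^2

def classify(s: Subject) -> dict:
    obese_pbf = s.percent_body_fat >= (25 if s.male else 30)
    abdominal_whr = s.waist_hip_ratio > (0.9 if s.male else 0.8)
    abdominal_vfa = s.visceral_fat_area > 100
    low_muscle = s.smi < (7.0 if s.male else 5.7)
    return {
        "obesity (PBF)": obese_pbf,
        "abdominal obesity (WHR)": abdominal_whr,
        "abdominal obesity (VFA)": abdominal_vfa,
        "low muscle mass (AWGS)": low_muscle,
        "sarcopenic obesity": obese_pbf and low_muscle,
    }

print(classify(Subject(male=True, percent_body_fat=28.0,
                       waist_hip_ratio=0.95, visceral_fat_area=120.0,
                       smi=6.5)))
```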
| RESULTS
A total of 40 subjects, 29 males and 11 females, were recruited for the study. The demographic and medical history of the subjects is elaborated in Table 1. For comparison, the subjects were divided into an early-stage (ES) CKD group with mildly to moderately decreased renal function and a late-stage (LS) CKD group with moderately to severely decreased renal function, using a cut-off of 30 ml/min/1.73 m². The ES group consisted of Stages 2 and 3, whereas the LS group consisted of Stages 4 and 5, as per the Kidney Disease: Improving Global Outcomes guidelines.3 Both groups comprised mostly men, with 82.4% of the subjects in the ES group and 60.9% in the LS group. Hypertension was the most common comorbidity, followed by diabetes and cardiovascular disease in both groups. Even so, levels of random blood sugar and blood pressure did not vary significantly between the ES group and the LS group. Table 2 depicts the clinical and body composition parameters of the subjects. The mean age of the subjects was 58.68 ± 12.24 years. The mean eGFR levels of the ES and LS groups were 43 ± 12 and 16 ± 7 ml/min/1.73 m², respectively. The waist-hip ratio was significantly lower in the LS group compared to the ES group (p = 0.00). The ratio of extracellular water to total body water (ECW/TBW) was significantly higher in the LS group compared to the ES group (p = 0.02), whereas the whole-body phase angle was significantly higher in the ES group compared to the LS group (p = 0.02). Of interest, although statistically insignificant, the remaining indices of the fat and fat-free compartments exhibited a trend toward higher values in the ES group relative to the LS group.
The prevalence of body composition abnormalities across the groups is shown in Figure 1. The incidence of a high waist-hip ratio was significantly greater in the ES group compared to the LS group (p = 0.00). Sarcopenic obesity was prevalent in 62.5% of the patients, whereas 30% of the patients had low muscle mass according to AWGS criteria. Abdominal obesity was identified in 62.5% of subjects by the visceral fat area cut-off, whereas the WHR cut-off classified 67.5% of subjects. Likewise, obesity by PBF was identified in 67.5% of subjects, whereas BMI classified 52.5% of the subjects as obese, an underidentification of 15%.
The correlation of eGFR with body composition parameters is presented in Table 3. A significant positive correlation was observed between eGFR and weight (p = 0.02), body fat mass (p = 0.05), waist-hip ratio (p = 0.00), visceral fat area (p = 0.05), and phase angle (p = 0.01), whereas the ratio of ECW to TBW was significantly and negatively correlated with eGFR (p = 0.01).
The parameters that correlated with eGFR at p < 0.05 in univariate analysis were further subjected to multivariate analysis, the results of which are presented in Table 4. For each one-unit decline in eGFR (ml/min/1.73 m²), weight decreased by 0.28 kg (p = 0.02), body fat mass by 0.20 kg (p = 0.05), visceral fat area by 1.06 cm² (p = 0.05), and phase angle by 0.02° (p = 0.01). In addition, there was a minimal decrease in the waist-hip ratio (p = 0.00), while the ratio of ECW to TBW increased marginally (p = 0.01).
The relationship between eGFR and physical activity levels is depicted in Figure 2. χ2 analysis showed a statistically significant association between CKD disease stages and levels of physical activity (p = 0.05). Further, the influence of physical activity levels on body composition parameters is detailed in Table 5. Overall, no effect of physical activity level was observed on body composition parameters (p = 0.34). Contrarily, one-way ANOVA revealed a statistically significant difference between groups for several individual parameters. Post hoc Duncan's homogeneous subsets were applied to the body composition parameters associated with physical activity levels at p < 0.05, and the results are represented in Table 6. It was observed that the effect of physical activity on height, protein status, soft lean mass, and skeletal muscle mass was homogeneous between low and moderate activity levels but significantly differed from that of high activity levels, whereas the effect on percent body fat, fat-free mass, and body cell mass was significantly different between low and high physical activity levels.
DISCUSSION
The current study elucidates body composition and examines the effects of eGFR decline and physical activity on its parameters. The population under study demonstrated body composition abnormalities, with low muscle mass as well as higher body fat. As discussed in relation to the study by Tyrovolas et al., 17 differences in definition, varied cut-points, and the use of different methods to measure muscle mass are perhaps the basis of the disparity in the reported prevalence of low muscle mass across studies, even among the Indian subpopulation. Furthermore, physiological changes during aging, such as denervation and reinnervation resulting in muscle fiber-type grouping, hinder the definitive assessment of disease-related secondary sarcopenia. 19

Obesity is described as an excess accumulation of body fat with health implications. BMI is a simple metric typically used to classify stages of underweight and obesity, although it does not demarcate the weight associated with muscle from the weight associated with fat. 20 In the present study, BMI underidentified obesity in 15% of the patients, whereas almost none were overidentified. The high prevalence of sarcopenia in this population seems to play a compensatory role for the increased fat percentage, thus contributing to the misclassification. In agreement with the above, 83% of the patients were classified under sarcopenic obesity using the ESPEN and EASO criteria. 16 This inconsistent diagnostic performance of BMI compared to percent body fat for obesity was also observed in a study by Sharma et al. 21 The authors also reported a high prevalence of sarcopenia in the wrongly classified patients, with an increase in the misclassification alongside eGFR decline.
Regarding the effect of eGFR decline on body composition parameters, it was observed that eGFR decline was associated with declines in weight, the fat compartment, and phase angle, and with accumulation of fluid, whereas no significant changes were observed in the muscle compartment. In accordance with this finding, but using DXA instead of BIA, Zhou et al. 22 reported a 0.26 ± 0.12 kg decrease in fat mass with every unit decrease in eGFR. Weight loss owing to a disease condition may increase the risk of mortality associated with a low BMI. 23 A study by Ku et al. 24 reported an increase in mortality upon dialysis initiation among CKD patients with an annual weight loss >5%. However, that study attributed the weight loss to fat-free mass rather than fat mass, despite using BMI as the measure of weight, which is a better marker of fat mass than of muscle mass. Underlining this paucity, studies exploring the association of CKD progression and mortality with fat loss, especially its distribution and various forms, are essential in predialysis CKD subjects.
The measured phase angle reflects the health of the cells with respect to the integrity and permeability of the cell membrane and has been studied as a plausible indicator of nutritional status and volume overload in CKD patients. 25,26 In the current study, a decline in eGFR was associated with a decrease in phase angle, indicating a decrease in cell vitality as the disease progresses. In a study by Jha et al., 27 the phase angle of a healthy adult Indian population with a mean age of 39 ± 12 years was found to be 6.6 ± 0.96 in men and 5.9 ± 0.94 in women, much greater than the values observed in this study. However, the establishment of reference values is essential to properly assess the effects of varying levels on disease outcomes.

Conventionally, exercise was not recommended in CKD due to the argument that intense physical activity decreased the effective renal plasma flow, concomitantly decreasing eGFR, possibly via the involvement of sympathetic nervous activity and catecholamine substances. 28 Contrastingly, recent studies have documented a neutral or rather positive effect of exercise on eGFR. 29,30 When examining physical activity, it was found that vigorous physical activity had considerable effects on fat-free mass, that is, skeletal muscle mass, body cell mass, and body fat percentage, as compared to low physical activity. Conforming to this finding, a study by Moon et al. 31 in Korean predialysis CKD patients stated that physical activity, assessed using the International Physical Activity Questionnaire (IPAQ), has protective effects against loss of muscle mass. Studies have also demonstrated an increase in O2 consumption due to exercise, 32 which is probably a sequel to changes in metabolically active cell mass. Although debated, subjective measures of physical activity such as the IPAQ have been routinely used in the elderly and CKD populations. 33

In conclusion, low muscle mass is prevalent in predialysis CKD and needs careful consideration to improve quality of life. Physical activity is favorable in improving the metabolically active components of the body, such as skeletal muscle mass and body cell mass. Assessment of nutritional status using BIA provides insight into the subclinical signs of malnutrition associated with the disease before they become apparent, as BMI cannot be relied upon as an appropriate nutritional marker given the offsetting effect of low muscle mass against high fat mass.
Findings from this study have to be interpreted in light of certain limitations and strengths. The study has a small sample size; hence, the findings need to be verified in a larger sample. In addition, because of the exploratory nature of this study, causality cannot be established between eGFR decline and changes in body composition parameters. The possibility of bias cannot be denied given the subjective nature of the tool used to classify physical activity. Conversely, the use of BIA as opposed to conventional anthropometric measures provides detailed information about the nutritional status of the patients. Patients with predialysis CKD are an underinvestigated population in India. Thus, this study provides a well-founded premise for further investigations with larger samples to examine the extent of body composition alterations in the early stages of CKD and to establish valid clinical cut-offs for Indian patients.
AUTHOR CONTRIBUTIONS

Prathiksha R. Bhat was involved in formulating the design of the work as well as data acquisition, analysis, and interpretation. Asna Urooj was involved in the conception and design, provided critical intellectual input, and approved the manuscript for submission. Srinivas Nalloor was involved in data acquisition and provided valuable input to the design.
Profiling structured product labeling with NDF-RT and RxNorm
Background: Structured Product Labeling (SPL) is a document markup standard approved by Health Level Seven (HL7) and adopted by the United States Food and Drug Administration (FDA) as a mechanism for exchanging drug product information. SPL drug labels contain rich information about FDA-approved clinical drugs. However, the lack of linkage to standard drug ontologies hinders their meaningful use. NDF-RT (National Drug File Reference Terminology) and NLM RxNorm were used as standard drug ontologies to standardize and profile the product labels.

Methods: In this paper, we present a framework that maps SPL drug labels to two existing drug ontologies: NDF-RT and RxNorm. We also applied existing categorical annotations from the drug ontologies to classify SPL drug labels into corresponding classes. We established the classification and relevant linkage for SPL drug labels using the following three approaches. First, we retrieved NDF-RT categorical information from the External Pharmacologic Class (EPC) indexing SPLs. Second, we used the RxNorm and NDF-RT mappings to classify and link SPLs with NDF-RT categories. Third, we profiled SPLs using RxNorm term type information. In the implementation, we employed a Semantic Web technology framework, in which we stored the data sets from NDF-RT and SPLs in an RDF triple store and executed SPARQL queries to retrieve data from customized SPARQL endpoints. Meanwhile, we imported RxNorm data into a MySQL relational database.

Results: In total, 96.0% of SPL drug labels were mapped to NDF-RT categories, whereas 97.0% of SPL drug labels were linked to RxNorm codes. We found that the majority of SPL drug labels map to chemical ingredient concepts in both drug ontologies, whereas a relatively small portion map to clinical drug concepts.

Conclusions: The profiling outcomes produced by this study provide useful insights on the meaningful use of FDA SPL drug labels in clinical applications through standard drug ontologies such as NDF-RT and RxNorm.
Introduction
Structured Product Labeling (SPL) [1] encodes very rich clinical drug knowledge, such as dosage, strength, and usage of a drug. The importance of this resource is widely recognized, and it has been utilized in multiple studies [2,3] to support clinical and translational research use cases. For example, the relationships between genes, diseases, drugs, and adverse events available from SPL drug labels can assist clinicians in improving the safety and effectiveness of treatments, and help translational researchers design novel bioinformatics algorithms. However, the drug information/knowledge written into SPL drug labels is currently unstructured free text instead of structured codified information, which poses significant challenges to computational analysis of the knowledge and hinders the integration of SPL drug labels with other existing knowledge bases. This is a common scenario in the biomedical domain, where dozens of public resources involve laborious manual annotation of data, mostly because they use heterogeneous code systems to represent their data. Hence, data normalization and building all possible linkages among these data sets will make data interoperation and integration feasible.
Semantic Web Technology (SWT) [4] can be useful to provide a scalable framework for facilitating semantic data integration of heterogeneous resources and enabling semantic sharing through the standard query services. It has been widely used in biomedical domains to formalize and model medical and biological systems [5][6][7]. In the present study, we adopted it as the core technology in our implementation step.
The objective of the present study is to map SPL drug labels into two major standard drug ontologies: the Veterans Administration's (VA) National Drug File Reference Terminology (NDF-RT) [8] and the National Library of Medicine's (NLM) RxNorm [9]. Our investigation was guided by answering the following research questions: (1) how SPL drug labels are covered and connected by RxNorm and NDF-RT; (2) how to utilize RxNorm/ NDF-RT drug resources to map SPL drug labels from the drug class and clinical drug perspective; (3) how to explore the mapping results to build a drug /drug class network; (4) how to leverage Semantic Web technology to accomplish the implementation task.
The paper is organized into the following sections. First, we introduce background information for SPL, NDF-RT, RxNorm, and Semantic Web technology in the Background section. Second, in the Methods section, we introduce three main parallel approaches to SPL drug label profiling. Third, we illustrate the results generated from each step in the Results section, followed by the Discussion and Conclusions.
Structured Product Labeling (SPL)
Structured Product Labeling (SPL) is a document markup standard approved by Health Level Seven (HL7) [10] and adopted by the FDA as a mechanism for exchanging product information. SPL defines the human-readable label documents that contain the structured content of labeling (all text, tables, and figures) for a product, along with additional machine-readable information (i.e., drug listing data elements including information about the product and the packaging). SPLs for all drug products marketed in the United States are available for download from the National Library of Medicine's DailyMed website [11] and were used in this study.
National Drug File Reference Terminology (NDF-RT)
NDF-RT [8] is used for modeling drug characteristics including ingredients, chemical structure, dose form, physiologic effect, mechanism of action, pharmacokinetics, and related diseases.
In support of the SPL initiative, a non-hierarchical collection of External Pharmacologic Class (EPC) concepts has been added to NDF-RT, in parallel and analogous with the VA Drug Classification hierarchy. These concepts are distinguished by an "[EPC]" tag suffixed to their preferred names. Role relationships, which describe and define concepts according to their relationships with other concepts, originate from these EPC concepts and target concepts in the NDF-RT Mechanism of Action (MoA), Physiologic Effect (PE), and Chemical Ingredient (CI) hierarchies that are selected by the FDA to index each EPC for SPL purposes [12]. The content model of NDF-RT is shown in Figure 1. Three kinds of drug concepts are involved in this study: "VA Product", "Chemical Ingredient", and "EPC". "VA Product" and "Chemical Ingredient" correspond to clinical drugs and generic ingredients or combinations, respectively, while "EPC" is analogous with the VA Drug Classification.
RxNorm
RxNorm [9] provides normalized names for clinical drugs and links these names to many of the drug vocabularies commonly used. RxNorm reflects and preserves the meanings, concept names, and relationships from different copyright holders, such as SPL, NDF-RT, and MeSH. The "SAB" code is defined by RxNorm to differentiate the sources aggregated into RxNorm. For example, "MTHSPL" indicates that the corresponding concept is absorbed from SPL, and "NDFRT" indicates the source NDF-RT. These two sources were used in this study. RxNorm defines the term type "TTY" to indicate the role an atom plays in its source. The term types are assigned based on source documentation or NLM's understanding of the source. Table 1 lists the term types (TTYs) used in this study with their names and descriptions, and the relationships among these term types are shown in Figure 2.
Semantic web technology
Resource Description Framework (RDF) [14], a W3C recommendation, is a directed, labeled graph data format for representing information on the Web. SPARQL is a query language for RDF graphs [15]. An RDF triple store is a database for the storage and retrieval of RDF metadata, ideally through the standard SPARQL query language. Web Ontology Language (OWL) is a standard ontology language for the Semantic Web [16]. The NDF-RT and EPC indexing SPL data used in this study were stored in an RDF triple store, and SPARQL queries were executed to retrieve the desired information. With the advance of SWT, linked data has been developed as a method of publishing structured data on the web so that it can be interlinked and become more useful.
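To make the RDF/SPARQL workflow concrete, here is a minimal sketch using Python's rdflib; the graph content, namespace URI, and predicate names are invented for illustration and are not the study's actual vocabulary.

```python
# Minimal rdflib sketch of storing triples and querying them with SPARQL.
# The namespace, predicate, and identifiers below are illustrative only.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/spl/")
g = Graph()
g.add((EX["setid-123"], EX.hasEPC, Literal("Androgen Receptor Inhibitor [EPC]")))

# The same SPARQL could equally be issued against a triple-store endpoint.
rows = g.query("""
    PREFIX ex: <http://example.org/spl/>
    SELECT ?spl ?epc WHERE { ?spl ex:hasEPC ?epc . }
""")
for spl, epc in rows:
    print(spl, epc)
```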
SPL
In total, 1,247 EPC indexing SPLs in XML format were downloaded from the NLM DailyMed website [11] as of April 12, 2012. An example of an EPC indexing SPL is shown in Figure 3. Each SPL, labeled by a setId (the SPL unique identifier), corresponds to one or more EPC classes, which are mapped to NDF-RT concepts by role relationships. In total, three role relationships were identified from the EPC indexing SPL files: "PE", standing for physiologic effect; "MoA", standing for mechanism of action; and "Chemical/Ingredient".
RxNorm
In this study, we used the following two files downloaded from RxNorm on April 9, 2012: 1) RXNCONSO.RRF, which includes all connections (965,968 in total) with different source vocabularies; we used the data from the two sources labeled "MTHSPL" (the source from SPL) and "NDFRT" (the source from NDF-RT). 2) RXNSAT.RRF, which includes all source vocabulary attributes that do not fit into other categories; we used this file to search for information about drug categories and connections among RxNorm, NDF-RT, and SPL. There are 6,221,513 entries in this file. The data from both files were loaded into a local MySQL database.
System architecture
There are four primary modules in the system: 1) a data transformation module; 2) a data persistence module; 3) an SPL profiling module, in which SPL drug labels are profiled by EPC, NDF-RT, and RxNorm; and 4) a standardized drug/drug class network module. Figure 4 shows the system architecture of the four modules.
For the data transformation module, data reformatting steps were performed for the EPC indexing SPL, NDF-RT, and RxNorm individually before loading the data into the RDF triple store and MySQL database, since they are in different data formats: SPL in XML format, NDF-RT in OWL, and RxNorm in the UMLS Rich Release Format (RRF). An XML2RDF submodule [17] takes input rendered in XML format and outputs the result in RDF format through a transparent transformation service. NDF-RT in OWL was loaded into the RDF store directly. RxNorm provides MySQL scripts for loading the data into a MySQL database easily.
For the persistence module, we implemented the open-source RDF store 4Store, developed by Garlik [18], and used it to host the SPL and NDF-RT data. After loading the RDF triples into the RDF store, we implemented a SPARQL endpoint providing a standard SPARQL query service against the RDF store. For the drug/drug class network module, we incorporated the SPL profiling results with our previous standardized drug work [19], and we explored Cytoscape [20] as a general platform for complex network analysis and visualization.
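A minimal sketch of querying such a SPARQL endpoint from a client, assuming the SPARQLWrapper library and a placeholder endpoint URL (the paper's actual service address is not given).

```python
# Minimal client-side query against a SPARQL endpoint such as the 4Store-
# backed service described above; the URL is a placeholder, not the real one.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:8000/sparql/")  # placeholder URL
endpoint.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for b in results["results"]["bindings"]:
    print(b["s"]["value"], b["p"]["value"], b["o"]["value"])
```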
Profiling by EPC classes
An EPC indexing SPL label corresponds to one or multiple EPC classes, and also to one or multiple NDF-RT concepts via the following role relationships: "has_Chemical_Structure", "has_MoA", or "has_PE". For example, the EPC indexing SPL label "BACILLUS CALMETTE-GUERIN SUBSTRAIN TICE LIVE ANTIGEN" is mapped to two EPC classes, the first of which is "Live Attenuated Bacillus Calmette-Guerin Vaccine [EPC]", which is further mapped to the "Actively Acquired Immunity" concept. The EPC indexing SPLs were stored in an RDF triple store. We executed a SPARQL query (shown in Figure 5) against the triple store and extracted the setId (unique identifier of the SPL), NUI (unique identifier of the NDF-RT concept), and the relevant role relationships for each EPC indexing SPL. The outcome of this query is a list of setIds and relevant NDF-RT concepts with NUIs and display names. Category information is embedded in the display name; for example, "Androgen Receptor Inhibitor [EPC]" indicates that "Androgen Receptor Inhibitor" is an EPC class, and "Aminoglycosides [Chemical/Ingredient]" indicates that "Aminoglycosides" is a chemical ingredient.
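The actual query of Figure 5 is not reproduced here; the following is a hedged reconstruction of its general shape, with invented prefix and predicate names, to show what extracting setId, NUI, and role relationship looks like in SPARQL.

```python
# Hedged reconstruction of the kind of query described above; the prefix and
# predicate names are assumptions, not the vocabulary actually used.
EPC_QUERY = """
PREFIX splv: <http://example.org/spl-vocab/>
SELECT ?setId ?nui ?name ?role WHERE {
    ?label splv:setId ?setId ;
           ?role ?concept .
    ?concept splv:nui ?nui ;
             splv:displayName ?name .
    FILTER (?role IN (splv:has_Chemical_Structure, splv:has_MoA, splv:has_PE))
}
"""
# This string would be executed against the triple store, as in the earlier
# SPARQLWrapper sketch, yielding setIds paired with NDF-RT NUIs and names.
```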
Profiling by RxNorm and NDF-RT
The objective of this step was to use existing annotations from RxNorm to categorize SPL drug labels, and to make connections between SPL and RxNorm/NDF-RT. As RxNorm data (RXNCONSO and RXNSAT) were preloaded in a MySQL database, we executed SQL queries to extract data from two RxNorm integrated resources: SPL and NDF-RT, which are differentiated by individual SAB labels.
For the source SPL (i.e., "SAB = MTHSPL"), we extracted concept names along with "TTY" (term type), as described in the Materials section, and RxCUIs (RxNorm unique identifiers) from the RXNCONSO table. To establish linkages between SPL and RxNorm, we searched the RXNSAT table for a list of setIds with a given RxCUI. It is worth noting that one RxCUI can correspond to multiple SPLs due to different product labellers. Figure 6 shows the workflow of the data extraction in this step.
For the source NDF-RT (i.e., "SAB = NDFRT"), SQL queries were executed to extract concept names along with NUIs and preferred names. Each preferred name includes role relationship information. For example, an entry with [RxCUI = "4278"] corresponds to [NUI = "N0000006373"] with the preferred name "Famotidine [Chemical/Ingredient]"; we grouped this concept into the "Chemical/Ingredient" category. Following the same process as for SPL, we searched the RXNSAT table for a list of setIds with a given RxCUI and established linkages between NDF-RT and SPL. Figure 7 shows the workflow of the data extraction in this step.
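A minimal sketch of this extraction in SQL via Python, assuming the standard RxNorm RRF tables (RXNCONSO, RXNSAT) loaded into MySQL; the join below mirrors the described steps but is not the paper's exact query, and the connection parameters are placeholders.

```python
# Sketch of linking NDF-RT concepts (SAB = 'NDFRT') to SPL setIds through
# RxNorm; assumes an RRF load into MySQL and placeholder credentials.
import mysql.connector

conn = mysql.connector.connect(user="rx", password="rx", database="rxnorm")
cur = conn.cursor()
# For SAB = 'NDFRT' rows, CODE carries the NUI; RXNSAT's SPL_SET_ID attribute
# links an RXCUI to SPL setIds (one RXCUI may map to several labellers).
cur.execute("""
    SELECT c.RXCUI, c.CODE AS nui, c.STR AS preferred_name, s.ATV AS set_id
    FROM RXNCONSO c
    JOIN RXNSAT s ON s.RXCUI = c.RXCUI AND s.ATN = 'SPL_SET_ID'
    WHERE c.SAB = 'NDFRT'
""")
for rxcui, nui, preferred_name, set_id in cur.fetchall():
    print(rxcui, nui, preferred_name, set_id)
conn.close()
```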
Results
EPC indexing SPL, NDF-RT, and RxNorm were used for profiling SPL drug labels with detailed category and standardized drug information. We present the outcomes from each resource below.
Results from EPC classes
The mapping results based on the EPC indexing SPL labels are listed in Table 2. In total, 354 unique EPC classes were identified from the EPC indexing SPL labels, linking to 853 unique SPLs. We also extracted all individual NDF-RT concepts with NUIs that are mapped to their corresponding EPC classes via different role relationships: 154 NDF-RT concepts were mapped to EPC classes via the role relationship "has_Chemical_Structure", 70 concepts via "has_PE", and 7 concepts via "has_MoA". The coverage of NDF-RT and SPL for each category is shown in Table 2. Here, the coverage for NDF-RT is calculated as the number of NDF-RT concepts in each category divided by the total number of NDF-RT concepts (47,075), and the coverage for SPL as the number of SPL labels in each category divided by the total number of SPL labels (36,568). In total, 497 EPC classes were identified from the NDF-RT RDF repository, indicating that 71.2% (354 out of 497) of the EPC classes have been integrated into the EPC indexing SPL labels.
Results from RxNorm vs. NDF-RT mappings
We executed SQL queries and extracted 41,343 unique RxNorm entries (RxCUIs) with NUIs and preferred names from the RxNorm and NDF-RT mappings. To make linkages between SPL and NDF-RT, we searched each given RxCUI for a set of setIds from the RXNSAT table. Finally, of 9,053 unique NUIs with a setId, 6,611 unique NUIs belonging to three NDF-RT categories ("VA Product", "Chemical/Ingredient", and "EPC") are associated with 35,094 unique SPL setIds. In this step, we utilized NDF-RT category information to profile the SPLs.
The mapping results from the RxNorm and NDF-RT mappings are listed in Table 3. There are 4,880 unique NDF-RT concepts in the category "VA Products", linking to 20,937 unique SPL labels, and 1,730 unique chemical ingredients linking to 34,788 SPL labels. Compared with the EPC class mapping above, only one EPC class, linked to 14 SPL labels, has been integrated into RxNorm. The coverage is calculated as the number of concepts within each category divided by the 47,075 NDF-RT concepts or the 36,568 SPL labels. Overall, 96% of SPLs are covered by the RxNorm and NDF-RT mappings, whereas only 14.0% of NDF-RT concepts are linked to SPL labels.
Results from RxNorm vs. SPL mappings
We first identified 35,480 unique SPL entries from the RxNorm and SPL mappings with "SAB = MTHSPL". We then identified 15,615 unique RxCUIs corresponding to SPL setIds by searching the RXNSAT MySQL table. We used the term types (TTY) to classify SPL labels into ten categories. Each category, with the number of unique RxCUIs, unique setIds, and their coverage, is listed in Table 4. Compared with the 36,568 SPL labels from NLM DailyMed and the 965,968 concepts from RxNorm, 97.0% of SPLs are covered by the RxNorm and SPL mappings, whereas only 1.6% of RxNorm concepts are linked to SPL labels.
It is worth noting that there are overlaps among concepts with different TTYs; for example, SY and TMSY denote synonyms of another TTY, so concepts with SY or TMSY overlap with concepts carrying other TTYs. As with the RxNorm and NDF-RT mappings, there are also overlaps for SPL labels among all of the categories.
Network visualization
The profiling results of SPL drug labels using RxNorm and NDF-RT not only demonstrate the connections among these three resources, but also help establish a drug/drug class network based on them. Within this network, the target and source nodes represent the concepts from SPLs, RxNorm or NDF-RT; the edges represent the category information. We are exploring Cytoscape [20] as a visualization tool to display and analyze the network. Figure 8 displays the network constructed using the results from this study. The upper right picture in Figure 8 shows a subset of the entire network; and the lower right picture shows one of the sub-networks with NDF-RT concepts as source nodes, SPL labels as target nodes, and category EPC as edge; and the details about nodes are shown in the lower left picture in Figure 8.
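For readers without Cytoscape, a minimal sketch of the same network structure in Python's networkx follows, with hypothetical node identifiers; only the source-node / target-node / category-edge scheme comes from the text.

```python
# Minimal sketch of the drug/drug-class network scheme described above
# (source: NDF-RT concept, target: SPL label, edge attribute: category);
# the identifiers are hypothetical.
import networkx as nx

g = nx.DiGraph()
g.add_edge("N0000175553", "setid-abc123", category="EPC")
g.add_edge("N0000006373", "setid-def456", category="Chemical/Ingredient")

# e.g., pull out the sub-network whose edges carry the EPC category
epc_edges = [(u, v) for u, v, d in g.edges(data=True) if d["category"] == "EPC"]
print(epc_edges)
```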
Discussion
SPL labels cover a large portion of clinical drugs and chemicals/ingredients, as well as other possible drug categories. SPL labels, as a very useful drug knowledge resource, have been applied in clinical drug applications such as the detection of Adverse Drug Events (ADEs) from electronic medical records (EMRs). Notably, a number of studies have emerged recently that use SPL labels for drug safety surveillance. For example, in a project called SIDER, a public, computer-readable side effect resource that connects 888 drugs to 1,450 side effect terms was developed using SPL labels [21]. In a system called ADESSA, as another example, ADEs were extracted from SPL labels and mapped to MedDRA terms and concepts, and the UMLS was then utilized to generate mappings between the MedDRA terms and SNOMED CT concepts [22]. In a project at Mayo Clinic, SPL labels were used in a framework for building a standardized ADE knowledge base known as ADEpedia [23], combining ontology-based approaches with Semantic Web technology. In addition, Schadow conducted studies to evaluate the impact of SPL on medication knowledge management [24], and in 2008 successfully aligned SPL with associated terminologies to support drug-intolerance (allergy) decision support in computerized provider order entry (CPOE) systems [25].
In this paper, we have successfully mapped SPL labels to NDF-RT and RxNorm and categorized them using drug class information and clinical drug identification information, respectively. 96.0% of SPL drug labels are mapped to NDF-RT categories, whereas 97.0% of SPL drug labels are linked to RxNorm codes. The high SPL coverage by NDF-RT and RxNorm indicates that, on the one hand, the two drug ontologies are appropriate data standards for normalizing the drug information covered by SPL drug labels; on the other hand, the knowledge structure asserted in the two drug ontologies can be leveraged to analyze and enrich the SPL drug information.
As mentioned above, RxNorm defines term types to indicate the role an atom plays in its source (see Table 1) and specifies the relationships between the term types (see Figure 2). We utilized the term types to profile the SPL drug labels, which yielded useful insights on the characteristics of the existing SPL drug labels as an important drug knowledge source. We found that the majority of SPL drug labels (93.1%, see Table 4) are linked with the RxNorm term type "IN", which is defined as a compound or moiety that gives the drug its distinctive clinical properties. Only 21.9% of SPL drug labels are linked with the RxNorm term type "SCD", which is defined as a semantic clinical drug with the combination of "Ingredient + Strength + Dose Form". These findings indicate that when SPL drug labels are utilized for a clinical drug application, mappings through the ingredients provide better coverage (i.e., are more sensitive) than mappings through the combination of "Ingredient + Strength + Dose Form". More importantly, the RxNorm term type "IN" is indirectly linked with the term type "SCD" through asserted relationships (see Figure 2). This provides additional options for optimizing the use of SPL drug labels in a clinical drug application through RxNorm.

Similarly, there is a content model asserted for the knowledge structure of NDF-RT (see Figure 1). We utilized the categorical drug classes in NDF-RT to classify the SPL drug labels. Our finding indicates that 95.1% of SPL drug labels are classified into the category "Chemical/Ingredient" and 57.3% into the category "VA Product", which is analogous to clinical drugs. While the finding from NDF-RT is consistent with what we found in RxNorm, NDF-RT provides more powerful analysis and aggregation capability because it contains a hierarchical concept structure that defines a rich set of domain-specific categories. For example, when a clinical application needs to collect drug information for all cardiovascular medications, NDF-RT provides a category "[CV000] CARDIOVASCULAR MEDICATIONS" which contains 16 direct subcategories and 1,246 descendant classes in total (from NDF-RT version 2012.02.06). Using the asserted knowledge, the SPL drug labels that have been classified into the category "VA Product" can be aggregated into various domain-specific categories based on the requirements of a clinical application. This is also an important area we will explore in the future.
A drug and drug class network was built based on the SPL mappings with different data resources: EPC classes, NDF-RT, and RxNorm. Such a network allows us to explore more drug class information for SPL drug labels and their connections with other clinical drugs. We are exploring Cytoscape [20] as a visualization tool to display and analyze the network. We consider this an important module of the system because it will provide a user-friendly interface allowing end users to capture their target information efficiently. Note that we have only integrated drug information into this network so far; we will integrate more drug/drug class resources, such as PharmGKB [26], DrugBank [29], or the National Drug Code (NDC) [27], into this network. In addition, integration of drug-related phenotype information will be the next target of our future study. These integration efforts will make the network more useful for supporting clinical applications such as clinical decision support systems for clinicians.
Semantic Web based Linked Data applications have integrated drug product labels with other drug information resources, making both the original and extracted product label contents queryable using the drug identifiers presented in these resources. Although these Linked Data applications focus on data integration, none of them takes the further step of organizing product labels from the perspective of drug class and clinical drug identification. In the present study, we applied EPC classes, NDF-RT, and RxNorm to standardize and categorize SPL labels into different drug classes, aiming to integrate them into a standard drug/drug class network.
Semantic Web technology plays a key role in the implementation of our system. We represented the drug data from NDF-RT and the SPL drug labels as RDF triples and hosted them in an RDF triple store, which we consider makes data integration and management more feasible. In addition, running SPARQL queries against the RDF store through the SPARQL endpoint simplified our effort to link SPL drug labels to NDF-RT using the predicates defined in the RDF triples. The NLM has not provided an RDF-based data format for RxNorm; in this study, we stored and analyzed the RxNorm data in a relational database. In a future study, we will employ a D2R server [31] for RxNorm RDF transformation by defining mappings between the RRF-based relational database schema and an RDF data model, and we will also explore the RxNorm SPARQL endpoint recently offered by the NCBO BioPortal to integrate RxNorm into the Semantic Web based framework of our system.
Conclusions
In this study, we have successfully mapped SPL drug labels to RxNorm and NDF-RT. In total, 96.0% of SPL drug labels are mapped to NDF-RT categories, whereas 97.0% of SPL drug labels are linked to RxNorm codes. We found that the majority of SPL drug labels are mapped to chemical ingredient concepts in both drug ontologies, whereas a relatively small portion are mapped to clinical drug concepts. We believe the profiling outcomes produced by the study provide useful insights on the meaningful use of FDA SPL drug labels in clinical applications through standard drug ontologies such as NDF-RT and RxNorm.
We will continue the following investigations in the future: 1) since existing SPL drug labels have been classified into a number of categories, including over-the-counter (OTC), prescription, animal, and so on, we will explore integrating this information into the current drug network; 2) we will build a backbone drug network based on NDF-RT and integrate it with more drug resources such as DrugBank, NDC, and PharmGKB; 3) we will explore building linkages between the drug/drug class network and relevant phenotypes/genotypes.
Efficacy and Safety of Carbetocin Versus Misoprostol in Cesarean Section: A Systematic Review and Meta-Analysis
In the absence of comprehensive data comparing carbetocin with misoprostol for reducing postpartum hemorrhage (PPH) during cesarean section (CS), we performed this investigation to compare the efficacy and adverse events of carbetocin versus misoprostol in the prevention and reduction of PPH for women who underwent CS. From inception to September 2022, we searched various databases for eligible trials, including Cochrane, Web of Science, PubMed, Scopus, and Google Scholar. From the efficacy perspective, we found that carbetocin substantially decreased intraoperative blood loss (p<0.001), the mean change in hemoglobin/hematocrit levels (p<0.001), and the need for blood transfusion (p=0.002) and additional surgical interventions (p=0.003) compared with misoprostol. However, we revealed no substantial variation between the two drugs in the need for additional uterotonic agents (p=0.08). From the safety perspective, we found that the incidences of fever (p=0.002), heat sensation (p=0.007), metallic taste (p=0.01), and shivering (p=0.0002) were lower with carbetocin administration than with misoprostol. However, headache (p=0.34) and palpitation (p=0.11) incidences showed no substantial variation between the two drugs. In conclusion, for women who underwent CS, carbetocin is more effective and safer than misoprostol in preventing and reducing PPH.
Introduction And Background
Cesarean section (CS) is among the most often performed major procedures on women globally [1], and its prevalence is rising, particularly in high- and middle-income nations. Although the World Health Organization (WHO) advised a CS rate of 10% to 15% to reduce both maternal and newborn death ratios [2], the prevalence has increased significantly, particularly in Egypt, reaching 52% of all deliveries [3].
CS-related postpartum hemorrhage (PPH) is a leading factor in maternal death [4], defined as blood loss of more than 500 ml within 24 hours of a vaginal birth or more than 1000 ml after a CS [5]. Uterine atony is the underlying cause of PPH in almost 70% of instances [6,7]. The WHO encourages the active management of the third stage of labor and the introduction of uterotonic drugs as PPH prophylaxis in all women [8]. However, studies have revealed that 6%-16% of women still suffer hemorrhage over 500 ml despite the use of preventative medicines [9]. As a result, it is crucial to administer uterotonic medications during CS to reduce the incidence of PPH.
Numerous uterotonic medications, such as oxytocin, misoprostol, and carbetocin, have been recommended to reduce bleeding during CS. Misoprostol, a prostaglandin analog, has a strong uterotonic effect. It is affordable, stable at room temperature, and causes minimal adverse reactions [10]. It is well absorbed when taken orally, vaginally, sublingually, rectally, or buccally [11]. Since it enhances the frequency and strength of uterine contractility during labor, it is helpful in both the prevention and treatment of PPH [12].
Carbetocin is an oxytocin analog that has undergone structural changes to lengthen its half-life and extend its therapeutic action [13]. It is recommended to prevent uterine atony and PPH after CS. Carbetocin is injected intravenously to cause cyclic uterine contractility, which continues for around one hour; an intramuscular injection greatly extends this activity to 120 minutes [14]. It has been connected to a substantial reduction in the need for other uterotonic drugs and uterine massage after vaginal childbirth [15].
In the absence of comprehensive data investigating carbetocin versus misoprostol for reducing PPH during CS, we performed this investigation to compare the efficacy and adverse events of carbetocin versus misoprostol in the prevention and reduction of PPH for women who underwent CS.
Sources of Data and Study Selection
The study protocol was not registered in the International Prospective Register of Systematic Reviews (PROSPERO). This study adhered to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [16] and the Cochrane Handbook for Systematic Reviews of Interventions [17]. Due to the study design, formal ethics approval was not required for this work.
From inception to September 2022, we searched various databases for eligible trials, including Cochrane, Web of Science, PubMed, Scopus, and Google Scholar. We adopted the following search strategy and its related terms: ("cesarean section" OR CS OR "C-section" OR "abdominal delivery") AND (misoprostol OR "novo misoprostol" OR "apo misoprostol" OR cytotec OR "SC30249" OR "SC29333" OR glefos OR misodel OR mysodelle OR misotac) AND (carbetocin OR pabal OR depotocin OR duratocin OR lonacetene). The actual strategy employed in each database is shown in the Appendices. We also checked the bibliographies of the papers we had gathered in order to broaden the scope of the literature study, and we excluded trials whose collected information could not be used for analysis. Two authors thoroughly examined the titles and abstracts of each related study found in the sources, eliminated duplicates, and assessed eligibility by full-text screening. Additionally, the citations of the finally selected papers were manually checked for any new or missing sources. Conflicts were resolved through discussion.
Extraction of Data and Evaluation of Study Quality
To rate the quality of the included articles, we used the Cochrane Risk of Bias tool (version 2) [18]. Two authors evaluated this individually; each scale domain and the overall quality of the chosen publications were given a risk level of low, some concerns, or high, and conflicts were resolved through discussion. Assessment of publication bias is unreliable for pooled analyses with fewer than 10 investigations; as a result, we were unable to utilize Egger's test [19].
Three types of data were gathered. First, we listed the features of the included investigations, such as the trial identification, country, duration, sample size, and research arms. Second, we obtained data on the fundamental details of the participants, such as sample size, age (years), gestational age (weeks), parity, body mass index (BMI), delivery technique, and anesthesia type. Third, we collected data on effectiveness outcomes, including intraoperative blood loss (ml), mean change in hemoglobin (mg/dl), and mean change in hematocrit (%). We also gathered information on the need for blood transfusion, additional uterotonics, and additional surgical interventions (like uterine artery ligation or compression suturing). Moreover, we collected information on safety profiles such as headache, fever, heat sensation, palpitations, metallic taste, and shivering.
Statistical Analysis
The Review Manager program for Windows, version 5.4 of RevMan (The Cochrane Collaboration, 2020), was used for data analysis. We combined the dichotomous and continuous data under the random-effects model, calculating the risk ratio (RR) and mean difference (MD) with 95% confidence intervals (CIs). We relied on the Inverse-Variance and Mantel-Haenszel techniques for our analyses. Heterogeneity was assessed using the chi-square test; significant heterogeneity was determined when the chi-square test gave p<0.1 and I² > 50% [20]. A p-value of 0.05 or lower was considered statistically significant.
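To make the pooling procedure concrete, here is a minimal DerSimonian-Laird random-effects sketch for a mean difference in Python; the trial values are hypothetical, and this is not the RevMan 5.4 implementation itself.

```python
# DerSimonian-Laird random-effects pooling of a mean difference (sketch only,
# not RevMan's implementation); per-trial inputs below are hypothetical.
import numpy as np

md = np.array([-120.0, -95.0, -140.0, -80.0])  # per-trial mean differences (ml)
se = np.array([25.0, 30.0, 35.0, 28.0])        # their standard errors

w = 1 / se**2                                  # inverse-variance weights
q = np.sum(w * (md - np.sum(w * md) / w.sum()) ** 2)  # Cochran's Q
df = len(md) - 1
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - df) / c)                  # between-trial variance
i2 = max(0.0, (q - df) / q) * 100              # I^2 heterogeneity, %

w_re = 1 / (se**2 + tau2)                      # random-effects weights
pooled = np.sum(w_re * md) / w_re.sum()
half_ci = 1.96 / np.sqrt(w_re.sum())
print(f"MD = {pooled:.1f} ml (95% CI {pooled - half_ci:.1f} to "
      f"{pooled + half_ci:.1f}), I^2 = {i2:.0f}%")
```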
Results of the Literature Search
After excluding 285 duplicate articles, our search returned 735 articles. During title/abstract screening, 726 references were eliminated. Finally, following the exclusion of 10 articles during full-text screening, four RCTs [14,21-23] satisfied our Population, Intervention, Comparison, Outcomes, and Study design (PICOS) requirements. The PRISMA flowchart of our screening procedure is shown in Figure 1. Seven hundred patients participated in these investigations; 305 were administered misoprostol and 305 were administered carbetocin.
Study Characteristics
Although the length and trial settings varied, all included RCTs were conducted in Egypt. Most RCTs [14,21,23] used misoprostol per rectum, except one [22], which used it sublingually. Also, three RCTs [14,21,22] operated on patients under spinal anesthesia, and one RCT [23] operated under general anesthesia. Table 1 and Table 2 summarize these characteristics [14,21-23].

Quality Assessment of Studies

Figure 2 and Figure 3 show the quality assessment of the eligible RCTs. Two RCTs [21,22] were assessed as having a "low" risk of bias. However, one RCT [14] was considered as having "some concerns" risk of bias because it did not provide any data about the randomization process, and one RCT [23] did not report an important outcome such as blood loss estimation and lacked postoperative values for hemoglobin and hematocrit to assess the mean change between groups.

FIGURE 3: Risk of bias graph [14,21-23]
Results of the Meta-Analysis
From the efficacy perspective, we revealed a substantial difference favoring carbetocin over misoprostol concerning intraoperative blood loss (Figure 4).
Discussion
Increased CS practice, particularly for non-medical reasons, carries numerous short-term hazards for the mother, including PPH, blood transfusion, hysterectomy, and maternal mortality [24,25]. In an era of rising CS rates, particularly in Egypt, numerous measures should be taken to reduce maternal comorbidities, such as lowering PPH and the need for blood transfusions.
Our study evaluated carbetocin's effectiveness and adverse events versus misoprostol in preventing PPH in women undergoing CS. From the efficacy perspective, we found that carbetocin substantially decreased intraoperative blood loss, the mean change in hemoglobin/hematocrit levels, and the need for blood transfusion and additional surgical interventions compared with misoprostol. However, we revealed no substantial variation between the two drugs in the need for additional uterotonic agents. From the safety perspective, we found that the incidences of fever, heat sensation, metallic taste, and shivering were lower with carbetocin administration than with misoprostol. However, headache and palpitation incidences revealed no substantial variation between the two drugs.
Our results are consistent with a prior meta-analysis that contrasted rectal misoprostol with carbetocin for vaginal birth and found that carbetocin was linked to less blood loss and a decreased requirement for blood transfusions [26]. These results are also consistent with those of Abd El Aziz et al. and Hetiba et al., who found that women who delivered vaginally or via CS experienced much less blood loss with carbetocin administration than with misoprostol [27,28]. Even during other surgeries, such as myomectomies, the introduction of carbetocin was associated with favorable clinical effects, such as a reduction in operative bleeding and the requirement for blood transfusions [29].
Furthermore, in a recent RCT, the authors compared the administration of carbetocin versus syntocinon and misoprostol for women during CS and found that hemoglobin and hematocrit levels 24 hours postoperatively showed a moderately substantial change among the three examined groups [21]. Therefore, carbetocin protects against or reduces the incidence of post-CS hemorrhagic anemia.
A thorough analysis revealed that the misoprostol group experienced severe placental bleeding more frequently than the carbetocin group did, as shown by the existence of a floppy uterus after the birth of the fetus and placenta [23]. This is mainly related to carbetocin's strong uterotonic impact, which was demonstrated in a prior study by Cordovani and colleagues, who discovered that it lowers the rate of uterine atony in low-risk women [30].
Additional surgical procedures, such as uterine artery ligation and uterine compression stitching, are required to reduce bleeding from uterine atony, which was more severe in the misoprostol arm [23]. This is consistent with our research, which showed that carbetocin minimizes the need for further surgical procedures to reduce bleeding during CS.
The medication type used in the current investigation as prevention against PPH had no influence on the need for uterotonic drugs, as we revealed no substantial variation between both drugs for the need for additional uterotonic agents. This was in contrast with Su et al., who discovered that carbetocin considerably reduces the requirement for additional uterotonic medications [15]. Ali et al. also found that uterine atony, which was more pronounced in the misoprostol group, necessitated the usage of more oxytocin [14].
The adverse effects in our investigation yielded a range of results; some were statistically meaningful while others showed no effect. We found that the incidences of fever, heat sensation, metallic taste, and shivering were lower with carbetocin administration than with misoprostol, whereas headache and palpitation incidences revealed no substantial variation between the two drugs. Another study found a substantial variation between the misoprostol and carbetocin arms in the number of patients who suffered side effects such as fever, nausea, diarrhea, and stomach pain after delivery, with the participants in the carbetocin arm less likely to suffer such adverse reactions. However, for negative effects such as hypersensitivity, facial flushing, and headaches, no statistically substantial variation was shown between the arms taking carbetocin and misoprostol [28].
The findings of several investigations on the adverse effects of the drugs used to prevent PPH report significant inconsistencies with one another due to multiple demographic and population variables. For instance, Abd El-Aziz et al. found that misoprostol had worse side effects than carbetocin in terms of heart rate and heat sensation [27]. Additionally, Ibrahim and Saad reported from their investigation of side effects that carbetocin was more usually associated with headache, nausea, and vomiting, while misoprostol was more frequently associated with pyrexia and shivering [31].
Limitations
There are some limitations to the current study. The main weakness was the small number of included trials, which prevented us from investigating publication bias. Also, not all outcomes were homogeneous, as we found heterogeneity in some results. The participants' brief follow-up intervals and the lack of blinding of certain researchers or subjects were further drawbacks.
Conclusions
From the efficacy and safety perspective, for women who underwent CS, carbetocin is more effective and safer than misoprostol in preventing and reducing PPH, in terms of decreased intraoperative blood loss, a smaller change in hemoglobin/hematocrit levels, and a reduced need for blood transfusion and additional surgical interventions. However, we revealed no substantial variation between the two drugs in the need for additional uterotonic agents. From the safety perspective, the incidences of fever, heat sensation, metallic taste, and shivering were lower with carbetocin administration than with misoprostol, whereas headache and palpitation incidences revealed no substantial variation between the two drugs. It is crucial to proceed cautiously with this conclusion because the assessment of PPH is often a subjective judgment. Future trials employing other administration techniques, evaluating several doses, and assessing the impact of these drugs are required to verify our findings.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
POLYGAMOUS MARRIAGE REVIEW FROM LAW NUMBER 1 OF 1974 CONCERNING MARRIAGE
In principle, the Marriage Law adheres to the principle of monogamy, but it does not rule out polygamy for those whose religion and beliefs allow it and who can fulfill the requirements stipulated by the Marriage Law. This is reflected in Article 3 paragraph 1 of the Marriage Law, which reads: in principle, in a marriage a man may only have one wife, and a woman may only have one husband; the court may give permission for a husband to have more than one wife if the parties concerned wish it. Thus it can be seen that the principle of monogamy adopted by the Marriage Law is not absolute.
INTRODUCTION
It is human nature to be created in pairs of men and women; to legitimize the relationship between a man and a woman, an institution is needed, namely marriage. Marriage is very important in human life, because it is an institution, or gateway, of life that every human being has passed through from ancient times until now. With a legal marriage, the union of a man and a woman becomes honorable, in accordance with the position of humans as honorable creatures. For a marriage to be valid according to law, it must be conducted in accordance with the laws of each party's religion and beliefs, as stated in Article 2 paragraph (1) of Law No. 1 of 1974.
Some men cannot restrain their desire with only one wife, yet as men who maintain the honor of their religion they do not want to fall into heinous acts, namely adultery with women who are not lawful for them; or the purpose of marriage determined by law has not been realized with one wife. The way out of this situation is to conduct a polygamous marriage.
PROBLEM
Based on the description in the introduction above, the problem can be formulated as follows: how is polygamous marriage viewed in terms of Law Number 1 of 1974 concerning Marriage?
DISCUSSION
Article 1 of Law Number 1 of 1974 states that marriage is "an inner and outer bond between a man and a woman as husband and wife with the aim of forming a happy and eternal family (household) based on the Almighty God". In this sense it contains the following values:
a. Marriage is an inner and outer bond between a man and a woman.
b. The inner and outer bond is intended to form a happy and prosperous family (household).
c. The basis of the inner and outer bond, and of the happy and eternal goal, is belief in the Almighty God.
The purpose of marriage desired by the marriage law is ideal. Marriage is not only seen as an outward agreement but is also a spiritual bond between husband and wife, aimed at forming and fostering a happy and eternal family based on the Almighty God. Thus, what is meant by marriage is the inner and outer bond between a man and a woman as husband and wife with the aim of forming a happy and eternal family (household) based on the Almighty God.
The inner and outer bond in marriage means that it is not enough to have only an outer bond or only an inner bond; both must exist.
An outer bond is a bond that can be seen, revealing the existence of a legal relationship between a man and a woman to live together as husband and wife; in other words, it can be called a formal relationship.
This formal relationship is real both for those who bind themselves, for others and for society. On the other hand, an inner bond is an informal relationship, a bond that cannot be seen. Even though it is not real, the bond must exist, because without this inner bond, the outer bond will become fragile. This should be felt especially by the person concerned. In the initial stage of holding a marriage, the inner bond begins with a real will to live together.
In life together, harmony is reflected, so this inner bond forms the core of the outer bond. The occurrence of the physical and spiritual bond is the foundation for forming and fostering a happy and eternal family.
Marriage that aims to form a happy and eternal family can be interpreted to mean that the marriage must be for life and will not be broken for any reason except death.
As mentioned above, the purpose of marriage is to form a happy and eternal family (household) based on the Almighty God.
For this reason, husband and wife need to help each other and complement each other so that each can develop their personality and achieve spiritual and material well-being.
The formation of a happy family is closely related to children, where the care and education of children are the rights and obligations of parents. Thus, the purpose of marriage according to the law is the happiness of husband and wife, to have children, and to uphold religion in a parental family unit.
The Marriage Law states that marriage is legal if it is carried out according to the law of each religion and belief, and each marriage is recorded according to the applicable legislation.
In principle, in a marriage a man can only have one wife, and likewise a woman can only have one husband. In certain circumstances, however, a marriage based on monogamy is difficult to maintain, so that in very compelling circumstances it is possible for a man to have more than one wife, based on the conditions specified in the Marriage Law.
Before the enactment of the Marriage Law, the practice of polygamy in Indonesia, especially for Muslims, was guided by the provisions of Islamic law, namely Surah An-Nisa verse 3. Since its enactment, those who wish to practise polygamy must, in addition to the laws of their respective religions/beliefs, also comply with the provisions contained in Law Number 1 of 1974. The basis for the regulation of polygamy in Indonesia is Article 3 paragraph 2, which reads: the court may give permission to a husband to have more than one wife if the parties concerned so wish. Polygamy can be defined as follows: 1. Polygyny, where a man marries more than one woman; 2. Polyandry, where a woman marries more than one man.
The term polygyny in its implementation is often confused with the term polygamy, so that polygamy is understood in practice, by the community and by the makers of the Marriage Law itself, as a man marrying more than one woman.
In principle, the Marriage Law adheres to the principle of monogamy, but it does not rule out the possibility of polygamy for those whose religion and beliefs allow it and who can fulfil the requirements determined by the Marriage Law.
This is reflected in Article 3 paragraph 1 of Law No. 1 of 1974, which reads: "Basically, in a marriage a man can only have one wife, and a woman can only have one husband." However, paragraph 2 states: "The court may give permission to a husband to have more than one wife if the parties concerned so wish," provided, of course, that the law of his religion and belief permits it. It can thus be seen that the monogamy principle adopted by the Marriage Law is not absolute.
According to Hilman Hadikusuma, with the existence of this article Law Number 1 of 1974 adheres to the principle of monogamy, because polygamy is possible only in circumstances where the husband is compelled, as a closed door that cannot simply be opened without the supervision of a judge.
But whatever reasons are given for polygamy, society, especially women, cannot sincerely accept it, because polygamy causes bitterness in family life. For this reason, the law provides strict conditions for those who want to practise polygamy, so that the practice is tightly restricted.
The conditions and reasons that allow a husband to have more than one wife are regulated in articles 4 and 5 of the Marriage Law and articles 40 to 44 of Government Regulation Number 9 of 1975.
The reasons that may be submitted to the court by a person who wants to practise polygamy are as follows:
1. The wife cannot carry out her obligations as a wife. This means that the wife cannot carry out the obligation to form a happy and eternal household based on the Almighty God. However, it must be investigated whether the wife truly fails to carry out her obligations of her own accord, or because of the actions of a husband who, looking for reasons to remarry, irritates the wife until she no longer carries out her obligations as a wife.
2. The wife has a disability or an incurable disease. This reason is essentially humanitarian, because a wife who is disabled or suffers from an incurable illness is suffering, so it is better for the husband to remarry than to divorce her.
3. The wife cannot give birth. This reason must be properly investigated to establish that the wife is really infertile, for example with a specialist doctor's statement, because sometimes it is the husband who is infertile; in that case this reason is unacceptable.
These reasons for polygamy apply to those whose religion permits it. People who hold mystical beliefs but acknowledge Islam use Islamic law in their marriages.
A husband who has grounds for polygamy cannot simply carry out the marriage. In addition to the reasons mentioned above, the following requirements must also be met:
1. There is consent from the wife or wives. The law does not specify whether this consent must be given orally before the court or in writing; even where there is written consent, the court still summons the wife to the hearing so that the judge can hear the consent directly.
Because the wife is obliged to give her consent in person before the court, the husband cannot fake it. The wife's consent can be set aside if it is impossible to ask her for approval because she cannot be a party to the agreement, for example because she is under guardianship owing to mental illness or the like.
There is no need for the wives' approval if:
a. There has been no news from the wife for at least 2 years. This can happen if the wife has left the house without news, does not want to follow the husband to their shared residence, or has returned to her parents' house and does not want to live together. If this lasts for 2 years, the husband may remarry without his wife's prior consent.
b. There are other reasons that warrant a judge's assessment. The boundaries of this second ground are felt to be unclear, because it gives the judge very broad discretion that may be misused, whereas the principles and objectives of the Marriage Law point towards a monogamous family system that makes polygamy difficult.
2. There is certainty that the husband is able to provide for the necessities of life of his wives and children.
To establish with certainty that a husband is able to guarantee the necessities of life of his wives and their children, a judge, being human, will find it difficult to give an objective assessment if he must estimate the husband's future capacity to provide. Article 41 sub C of PP No. 9 of 1975 gives instructions for examining whether a husband is capable, by requiring:
a. An income tax certificate;
b. A certificate of the husband's income signed by the treasurer of his workplace;
c. Other certificates acceptable to the court.
3. There is a guarantee that the husband will treat his wives and children fairly. This guarantee of justice is very difficult to assess, because it is a question of the husband's morals: his life, behaviour and daily actions. If a mere declaration before a judge that he will do justice sufficed, it would be very doubtful, especially in polygamy, which is complicated and full of the twists and turns of life.
Based on the foregoing, it is clear that there are 3 (three) reasons that may serve as the basis for an application for polygamy. It is not easy for husbands to practise polygamy, because polygamy is not a religious command but is only permitted under certain conditions that must be met.
For this reason, the husband must submit a statement or promise that he will treat his wives and children fairly. To practise polygamy, the following procedures must be followed:
1. The husband submits a written application to the court, meeting the conditions mentioned in Article 5 of the Marriage Law.
2. The court examines the application, both its conditions and the applicant's reasons, and must summon and hear the wife concerned.
3. The court must examine the application no later than 30 days after receipt of the application letter and its attachments.
4. If the court is of the opinion that there is sufficient reason for the applicant to have more than one wife, it gives its decision in the form of permission to marry more than one person.
Marriage registrar officials are prohibited from registering the marriage of a husband who would have more than one wife before he has obtained permission from the court. An official who violates this provision is subject to a maximum imprisonment of 3 months and a maximum fine of IDR 7,500.
Likewise, a husband who violates Article 40 of PP No. 9/1975, concerning the obligation to obtain a judge's permission to have more than one wife, is subject to a maximum fine of IDR 7,500.
The threat of punishment mentioned above does not affect the marriage itself, which remains valid as long as it is carried out according to religious law; on closer observation, the punishment therefore feels light.
This invites violations of the provisions on polygamous marriage. To reduce or avoid such violations, regard should be had to the criminal provisions of Article 279 and Article 436 paragraph 1 of the Criminal Code. Article 279 of the Criminal Code stipulates that anyone who marries while knowing there is an obstacle to his marrying will be sentenced to a maximum of 5 years in prison (for the person concerned).
Meanwhile, Article 436 paragraph 1 of the Criminal Code stipulates that whoever has the power to marry people according to the law that applies to both parties, and marries off people who face an obstacle to marrying, will receive a maximum imprisonment of 7 years (for marriage registrar officials and religious leaders).
Although by religious law and belief a husband may be allowed to have more than one wife, the Marriage Law imposes quite severe restrictions, namely the fulfilment of conditions, a specified reason, and permission from the court.
CONCLUSION
In principle, the Marriage Law adheres to the principle of monogamy, but it is not absolute, meaning that it is possible for a man to have more than one wife if according to his religion and belief it is permissible.
The reasons that may allow a husband to have more than one wife are one of the following:
a. The wife cannot carry out her obligations as a wife;
b. The wife has a disability or an incurable disease;
c. The wife cannot give birth.
An application to the court based on one of these reasons must be supported by the following three conditions:
a. There is the consent of the wife or wives;
b. There is certainty that the husband is able to provide for the necessities of life of his wives and children;
c. There is a guarantee that the husband will treat his wives and children fairly.
SUGGESTIONS
1. Although the Marriage Law allows polygamy, someone who intends to practise it should truly be able to account for it in accordance with the regulations that have been determined.
2. In polygamy, the element of justice must be considered, because it determines whether the household is a happy one for both the wives and their children.
Assessing the Capabilities of Additive Manufacturing Technologies for Coral Studies, Education, and Monitoring
Abstract Additive manufacturing, better known as 3D printing, is becoming an easily accessible method for producing 3D objects ranging from medical devices to jet plane parts. However, this requires the creation of an accurate 3D digital model by Computer Assisted Design (CAD) or the direct acquisition of a 3D model, as well as a correct understanding of the various 3D printing technologies available, with their pros and cons. Here, we present a method for the editing and printing of 3D models of coral colonies for the generation of accurate and enhanced 3D models suitable for research and education. This is a follow-up to earlier papers in which 3D scanning was performed on fresh coral samples from field trips and on coral skeletons from museum collections using different imaging techniques (multi-image photogrammetry and micro-CT scanning). 3D scans of colonies and samples of Turbinaria sp., Leptoseris incrustans, Oulophyllia crispa, Echinopora sp., Siderastrea savignyana and Platygyra daedalea were used to produce multi-material and multi-scale 3D prints. Moreover, we studied best practices for the 3D printing process and the technologies most suitable for specific attributes. Additionally, we show the innovative application of 3D printed inert reactive corals able to indicate environmental changes, along with insights into the potential uses of the proposed method and related systems in biological fields and for sharing with an online community.
INTRODUCTION
Since their introduction in the 1980s, 3D Printing (3DP) systems have rapidly become highly effective tools for a broad range of applications in a wide number of fields. The technology was first developed at the Nagoya Municipal Industrial Research Institute by Hideo Kodama, who described a rapid prototyping system utilising photopolymer curing processes (Kodama, 1981). The technology has matured since and has seen a huge growth in popularity, complexity, speed, variety of materials employed and potential areas of application (Fredieu et al., 2015). The term "3D Printing" is used to describe many rapid prototyping systems, all of which operate under the basic principle of the controlled, consecutive layering of a material to generate complex 3D structures.
There are many methods which fall under the classification of 3DP, the main differentiating factor being the process by which the individual layers are formed to produce a completed part, model, or object. Some of the most versatile and widely used 3DP technologies are Fused Deposition Modelling (FDM), Stereolithography (SLA), Laminated Object Manufacture (LOM), and Binder Jet. Such 3DP systems have found numerous applications in a wide range of fields, from purely scientific pursuits in medicine, engineering, design and biology through to more artistic endeavours in the fashion and art sectors.
The most frequently adopted and popularised uses of these 3DP techniques have been in medical science, particularly for the prototyping and implantation of replacement parts for the human body (Gerstle et al., 2014) and for the pre-operative planning of surgeries, using collected CT and MRI scan data to reconstruct 3D models of complex internal structures (McGurk et al., 1997; Rengier et al., 2012; Klein et al., 2013). Uses in other biology-based fields have included the replication of physical representations of molecules in the biomolecular sciences (Jones, 2012) and the documentation and restoration of artefacts in palaeontology and archaeology (Kuzminsky and Gardiner, 2012). These applications have focused on education and training for both the general public and specialised users (surgeons, archaeologists, etc.) and have helped improve the overall value of education (Eisenberg and Buechley, 2008; Eisenberg, 2013; Fredieu et al., 2015), science visualisation (Partridge et al., 2012; Segerman, 2012) and science outreach to the public in museum and exhibition contexts (Allard et al., 2005; Tomaka et al., 2009; Wachowiak and Karas, 2009; Chapman et al., 2010). 3DP has recently been used in coral studies to provide physical models for hydrodynamic simulations in flow chambers (Chindapol et al., 2013) and to produce customised cement and sandstone blocks for artificial coral reef restoration (Kramer et al., 2016).
It has been demonstrated that, by using diverse imaging methods, 3D models can be produced in both field and laboratory conditions for diverse morphologies of coral colonies and corallites, at multi-scale levels, from both fresh samples and skeleton collections (Gutierrez-Heredia et al., 2015). Some widely available 3D digitising methods are laser triangulation, multi-image photogrammetry, CT scanning, and structured-light scanning. 3D models produced by these methods are suitable for the accurate calculation of biological parameters and can digitise different physical properties: multi-image photogrammetry captures superficial texture mapping and colouring, while CT scanning represents internal structures. These digital collections are ideal for CAD modification prior to 3D printing tests, as each 3D model can be adapted to the particular printing technology that best represents different biological characteristics (form, texture, and colour) and distinct material properties (density, endurance, and reactiveness) for handling in education and training, and as customised tools for research.
Imaging and Modelling
A 3D model of Turbinaria sp. was digitised using multi-image photogrammetry according to Gutierrez-Heredia et al. (2015) (Figure 1), and 3D models of samples and corallites of Oulophyllia crispa, Leptoseris incrustans, and Platygyra daedalea were similarly digitised using this technique as described in Gutierrez-Heredia et al. (2016) (Figure 2). Siderastrea savignyana and Echinopora sp. corallites were imaged from small pieces extracted from bleached coral skeleton samples, cut into small sections to fit inside the micro-CT scanner. S. savignyana was cut into a rectangular prism (21 × 6 × 5 mm) and Echinopora sp. was cut into a rectangular prism (13 × 5 × 4 mm). These samples were secured in polyurethane foam and scanned with a CT120 (Trifoil Imaging) micro-CT scanner. The micro-CT image acquisition consisted of 1,200 projections taken at 0.3° increments over one full rotation of the gantry in ∼40 min, with an exposure of 100 ms and bin mode 1 × 1. The X-ray tube voltage and current were 80 kV and 32 µA, respectively. The resulting raw data were reconstructed using customised proprietary software to create a final image with 25 µm voxel dimensions. MicroView (ABA 2.4, General Electric) software was used to crop and rotate images to isolate the area of interest, and the images were converted to DICOM format. The DICOM files were transferred to a desktop computer (iMac Quad-Core 2.7 GHz Intel Core i5 with 8 GB of RAM, running OS X Lion 10.7.3) and processed with the free open-source version of the imaging software OsiriX (Pixmeo) v. 5.9 (32-bit). These were surface-rendered at the highest resolution values possible, with the threshold chosen for the closest fit to the specimen density without generating artefacts from the holding foam, and the resulting isosurfaces were exported as stereolithography (.stl) files. These models were then imported into the freeware Meshmixer (64-bit, v2.4) to remove odd artefacts and background noise and to segment one corallite of interest.
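The OsiriX thresholding-and-isosurface step described above can be approximated with open tooling. The following is a minimal sketch, not the pipeline used here, assuming pydicom, scikit-image and numpy-stl are installed; the ./scans directory and the grey-value threshold are hypothetical placeholders.

import numpy as np
import pydicom
from pathlib import Path
from skimage import measure
from stl import mesh  # provided by the numpy-stl package

# Stack the exported DICOM slices into a 3D volume
# (filenames are assumed to sort in slice order).
files = sorted(Path("scans").glob("*.dcm"))
volume = np.stack([pydicom.dcmread(str(f)).pixel_array for f in files])
volume = volume.astype(np.float32)

# Extract an isosurface at a grey-value threshold chosen to separate
# coral skeleton from the polyurethane holding foam (value illustrative).
THRESHOLD = 800.0
verts, faces, _, _ = measure.marching_cubes(
    volume, level=THRESHOLD, spacing=(0.025, 0.025, 0.025)  # 25 µm voxels, in mm
)

# Write the triangle mesh out as a binary STL for editing in Meshmixer.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
surface.vectors[:] = verts[faces]
surface.save("corallite.stl")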
Specimen Replication
The resulting digital specimens were used for 3D printing to allow their physical replication. The stereolithography files, from both photogrammetry and micro-CT scanning, were imported into the Cura 3D slicing software (version 13.04) to generate the individual printing slices, paths, or patterns. Models were scaled and orientated accordingly and then printed using a variety of 3DP methods: Fused Deposition Modelling (FDM), Stereolithography (SLA), Laminated Object Manufacture (LOM), and Binder Jet, with details discussed below.
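Before slicing, it can help to confirm that an exported mesh is watertight, since non-manifold geometry is a common cause of slicing failures. A small pre-flight sketch using the trimesh library follows; the file names and the 80 mm target size are hypothetical, not values taken from this study.

import trimesh

# Load the digitised specimen and scale it to the desired print size.
m = trimesh.load("corallite.stl")
target_mm = 80.0  # illustrative longest-edge target for the print
m.apply_scale(target_mm / m.extents.max())

# Attempt a simple hole fill if the mesh is not watertight, since
# open surfaces commonly confuse slicers.
if not m.is_watertight:
    trimesh.repair.fill_holes(m)

print(f"faces: {len(m.faces)}, watertight: {m.is_watertight}")
m.export("corallite_print_ready.stl")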
Fused Deposition Modelling (FDM)
A range of Ultimaker FDM 3D printers (Ultimaker Original, 2 and 3) was employed to produce prints of the coral models. Three-millimetre Ultimaker and Ink3D polylactic acid (PLA) and acrylonitrile butadiene styrene (ABS) polymer filaments in silver, white, and clear blue were used to reproduce the generated models. Models were scaled and orientated in an upright position and then printed using base parameters, with a fill density of 5-15%, external support material only, and a raft for stability. The slices were compiled and exported as a G-code file to be printed on the printers listed above.
Stereolithography (SLA)
A Form1+ SLA printer (Formlabs) was used to produce prints of the generated models, using a clear UV-cure resin (Formlabs) that gives a semi-opaque reproduction of the model. A 3D model of Turbinaria sp. was printed in this translucent resin. The matching Formlabs slicing software, PreForm, was used with a model layer thickness of 25 microns, and was also used to orientate, hollow, and support the part for printing. The resin used with the Form 1+ is a photoreactive mixture of methacrylic acid esters and a photoinitiator; its exact formulation is proprietary, but it contains methacrylated oligomers, methacrylated monomers, and an unknown photoinitiator. It cures at a wavelength of 405 nm, and the printer uses a laser light source directed by a galvanometer to cure the desired pattern in each layer.
Laminated Object Manufacture (LOM)
An Mcor IRIS (produced by Mcor Technologies) was used to produce a LOM print of one of the generated models (Figure 3). Standard 100-micron printing paper was used, along with standard office printing ink, to give a full-colour reproduction of the model. LOM processes present significant opportunities in the biological sciences due to their ability to produce full-colour prints in a safe, eco-friendly manner through the use of paper, inks and water-based glues.
Binder Jet
A 3D model of an O. crispa corallite was printed in a full-colour sandstone-like material by Shapeways on a ZPrinter 3D inkjet printer using VisiJet PXL Core/zp151 material. This process utilises inkjet printing heads to apply a liquid bonding agent to a powder material, using the same layering process to produce the completed part. It allows the cheap, full-colour replication of digital specimens in a range of materials, although the powder-binding process imposes some mechanical limits on the parts.
RESULTS
Several 3D prints were produced from multi-scale models of coral colonies and corallites, in multiple materials. Different materials had distinct strengths and weaknesses in physical attributes and detail. When 3D printing fails to reproduce details at a certain scale, increasing the resolution of the 3D model increases the detail of the desired characteristics. In Figure 4b, the 3D print of the L. incrustans colony sample shows the morphology of the colony and a number of corallites can be observed individually, while characteristics of corallite septa, costae, and mouths can be observed in the enlargement of individual corallites (Figure 4d).
As can be observed in Figure 5, the quality of the 3D prints relative to the 3D model is reduced when the model is digitised at very high resolution, since the printer then becomes the limiting factor. Different materials can produce varying degrees of detail at each scale, with varying structural strength and surface texture. Even different colours of the same material can produce striking differences in detail, due to the influence of light, shadow and colour on the parts. Figure 5 shows the impact of colour and material on the quality and detail of printed objects: Figure 5A shows the digitised 3D model, with Figures 5B-F showing a variety of materials and colours giving varying levels of detail and resolution. Silver PLA consistently produced better definition than the other materials, at both colony and corallite scales.
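When a digitised model is much denser than the printer can resolve, it can be decimated to a printable face budget before slicing, keeping file sizes and slicing times manageable with little visible loss. A sketch using Open3D's quadric decimation follows; the input file and triangle budget are illustrative, not values used in this study.

import open3d as o3d

# Load the (possibly over-dense) micro-CT-derived mesh and compute
# normals, which the STL writer expects.
scan = o3d.io.read_triangle_mesh("corallite.stl")
scan.compute_vertex_normals()

# Reduce to a face budget roughly matched to a desktop printer's
# feature resolution (value is an illustrative placeholder).
TARGET_TRIANGLES = 50_000
simplified = scan.simplify_quadric_decimation(
    target_number_of_triangles=TARGET_TRIANGLES
)
o3d.io.write_triangle_mesh("corallite_decimated.stl", simplified)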
The Turbinaria sp. model was similarly printed using FDM, Binder Jet, and SLA printers, in a range of colours (Figure 6). The silver PLA specimen (Figure 6E) shows the best feature definition, its grey colour allowing the corallites to be clearly seen. The alternative colours, blue (Figure 6C), white (Figure 6D) and black (Figure 6F), are valuable additions for engagement and education activities, allowing custom colours and materials to enhance the experience. The SLA process allows the production of clear models (Figure 6G) for specialised marine education and for a new generation of educational tools (such as the visualisation of corallite growth inside the colony as imaged by CT scanning).
The segregation and separation of individual biological features or structures allows high-quality replication of fine details. Corallites can be separated and printed at a large scale in a variety of colours to show individual structures and layouts and so enhance feature identification. Such small features would not be visible on a full coral print, so the replication of these individual sections conveys more information to the user. Replication of these corallites can be seen in Figure 7. Detail is high enough to observe the geometry of the corallite wall, the number, arrangement and configuration of septa, and the texture of the coenosarc, essential for taxonomic identification and other biological studies of Scleractinia. The detail and definition of the prints were consistently greater when using silver PLA on both corallite models. Painting objects with commercial paints, resins, or varnishes to produce composite effects can further enhance the display of 3D models. In Figure 8, detailed texturing and shading can be observed in the 3D prints, resulting in added contrast in the septa and mouths of each corallite. Furthermore, the use of a UV-reacting varnish depicts the autofluorescence of the Symbiodinium sp. symbionts that are present in this species. These additions enhance the educational value of the models, allowing the mimicking of natural effects such as autofluorescence.
Larger-scale prints can be produced using the paper-based LOM process, which produces objects using colour-printed and layered paper. This process yields large-scale specimens of coral structures with significant weight and density. The full-colour printing allows coral structures to be printed at various stages of life, from living coral, through corals recently removed from the ocean, to fully bleached coral skeletons. This adds a new dimension to education and engagement, without the need for expensive and time-consuming artist-created models. An example of a 3D paper-printed living coral specimen can be seen in Figure 9, produced for Keaveney et al. (2016).
Built-In Functionality
Once a digital specimen is produced, a new range of options becomes available for the further educational enhancement of these models. 3D design software was used to produce 3D prints with composite elements, accentuation of anatomical features, inbuilt functionality, and prints produced from scratch. In Figure 10a, the corallites of a 3D model of a Turbinaria sp. colony were accentuated and detached from the main model so that composite polymers could be printed. A PLA polymer formed the base of the coral skeleton, and a UV-reactive PLA polymer was used for the corallite elements. Figure 10b shows the completed coral model, with Figure 10c showing the corallites reacting when exposed to UV light (direct exposure to sunlight), producing a colour change from light grey to dark purple.
For educational purposes, an idealised model of a corallite and polyp was generated using 3D design software (SketchUp, 2016 1 ), the assembled parts being produced in different materials (Figure 10d). The polyps demonstrated high detail in morphology, tentacle number and gastric cavity, and could be detached from the corallite base to demonstrate the septum number and arrangement, and the central columella. This ability to add or highlight selected elements and build in functionality allows improved education and outreach outcomes via low-cost production methods.
Customised, designed functionality is a further potential benefit of producing these objects with 3DP. New polymer materials are coming onto the market which allow enhanced functionality, such as temperature-driven colour changes (thermochromic filaments) and UV-light-driven colour changes (photochromic filaments). Figures 11a,b show a coral colony printed with a photochromic filament transitioning from white to purple in the presence of UV light; similarly, Figures 11c,d show 3D fish skeleton models printed with assorted thermochromic filaments, changing colour when subjected to a temperature change.
Similarly, once a digital specimen is generated, the contrast and colouring of the specimen can be altered before printing to allow the highlighting or marking of specific features on the specimen, aiding identification, education, and research activities. This allows the user to tailor the visual characteristics of the printed part to aid the education process, as seen in Figure 12, in which the texture map is changed to modify the visual characteristics. In addition to the objects themselves, new tools for coral research and data collection can be produced, allowing the creation of customised data collection systems. These systems can be produced at low cost, in remote locations and in short timeframes, allowing in-field repair or modification of parts and systems. An example of a 3D printed UV torch and camera stabiliser unit in use in the field can be seen in Figure 13.
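As one hedged illustration of the pre-print recolouring described above, vertex colours can be driven by a surface measure such as discrete Gaussian curvature so that corallite walls and septa stand out. The calls are trimesh's; the file names and sampling radius are hypothetical.

import numpy as np
import trimesh
from trimesh.curvature import discrete_gaussian_curvature_measure

# Load a digitised specimen and estimate per-vertex curvature.
m = trimesh.load("colony.stl")
curv = discrete_gaussian_curvature_measure(m, m.vertices, radius=1.0)

# Map curvature onto a red highlight over a neutral grey base colour.
t = np.clip((curv - curv.min()) / (np.ptp(curv) + 1e-9), 0, 1)
colors = np.column_stack([
    (128 + 127 * t).astype(np.uint8),      # red rises with curvature
    np.full(len(t), 128, dtype=np.uint8),  # green fixed
    np.full(len(t), 128, dtype=np.uint8),  # blue fixed
    np.full(len(t), 255, dtype=np.uint8),  # opaque alpha
])
m.visual.vertex_colors = colors

# PLY retains per-vertex colours for full-colour printing services.
m.export("colony_highlighted.ply")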
DISCUSSION
Stereolithography (SLA) and Fused Deposition Modelling (FDM) were used consistently due to their individual advantages and suitability for the context in question. SLA processes are well suited to these applications due to their fine feature resolution, high-quality detail, surface finish, and favourable optical properties. This method also allows the production of extremely complex geometries, due to the added support from the resin and the bottom-up layering process, which is very beneficial for the production of such complex biological specimens. SLA processes require strict control of the level, environment and properties of the print area, along with specific handling and cost issues associated with the resins required. These requirements make SLA a relatively complex and delicate process, requiring higher levels of control and resulting in higher costs. The FDM process is much simpler, producing low-cost, robust objects with fewer environmental constraints than SLA. Its simpler nature and lack of specific environmental requirements make it generally much cheaper and easier to use, with significantly lower barriers to entry and utilisation. FDM processes can exploit a wide range of polymer materials, with almost any thermoplastic usable and composite materials of particular interest. These composites are usually a mix of thermoplastic acting either as a binder, with fibre additions like carbon fibre, or as a carrier, such as those utilising metal powders which can be fused in an oven-based sintering process. The ability to add specific elements to the polymer materials unlocks the potential to tailor material performance characteristics in bulk terms or even for specific parts, components or regions of a print. While these benefits make FDM cheaper, quicker and more accessible than SLA systems, FDM generally produces parts with much lower resolution and much rougher surface finishes than the aforementioned SLA systems. The LOM method shows promise for specimen replication for public display, interaction, and engagement activities in the biological sciences, leading to increased awareness and public and expert knowledge, due to the remarkable accuracy of the texture recreation from the digital specimen. Finally, the Binder Jet process may have mechanical performance limits given the hands-on nature of the proposed interactions with the objects.
All of these 3DP methods are widely available, accessible, low-cost methods of 3D printing, which allows the adoption of the proposed replication process in a number of cost-sensitive fields. This uncomplicated production of printed parts for use in the classroom or other educational environments allows increased personal engagement with the specimens without any risk to the original samples. These highlighted applications of the proposed concepts and methodologies show the scope and potential of this field. The use of 3D printing technologies has already directly impacted the coral and marine fields in several ways, including the production of cement and sandstone blocks for artificial coral reef restoration (Kramer et al., 2016). It has been noted that scleractinian planulae are naturally drawn to surfaces with pastel colours (particularly white and pink) and a texture similar to coral as potential locations for settling. The planulae also seem inclined to settle on complex 3D structures with numerous interstices and high rugosity, to escape from predators or to avoid being crushed. 3DP technologies are optimal for producing artificial habitats that mimic these characteristics. The neutral pH of sandstone and ceramics could potentially increase the recruitment rates of planulae and other reef organisms (https://www.popsci.com/3d-printing-could-save-coral-reefs, last accessed: 18 January 2018). There is already a company specialising in the customisation and production of 3DP reef blocks (http://www.reefdesignlab.com). The replication of reproduced coral structures could also be utilised for the physical assessment of the factors impacting coral development, growth and destruction. These 3D prints could be used as replicas of real coral specimens in hydrodynamic simulations in flow chambers (Chindapol et al., 2013), allowing a greater understanding of the physical characteristics affecting coral degradation in the natural environment.
Education in taxonomic features and species identification can be greatly enhanced through the proposed 3DP process. High-resolution 3D printed specimen models with enhanced embodied taxonomic features can be used to highlight and aid the identification of those features for educational purposes. Multi-scale printing of specific features, portions or appendages of these coral specimens allows a greater understanding of how different characteristics are related at distinct levels of organisation, as seen previously in Figures 4, 5. These high-definition multi-scale 3D prints were produced by first embedding higher-resolution corallite meshes into colony 3D meshes and segmenting them afterwards (Figure 14).
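A minimal sketch of this embed-and-segment idea follows, using trimesh; the file names are hypothetical, and the bounding-box cut is a simplification of the segmentation actually performed.

import numpy as np
import trimesh

# Embed a high-resolution corallite mesh into a lower-resolution colony
# mesh, then cut the combined model back apart for multi-scale printing.
colony = trimesh.load("colony_lowres.stl")
corallite = trimesh.load("corallite_highres.stl")
combined = trimesh.util.concatenate([colony, corallite])

# Segment afterwards: keep only the faces whose centroids fall inside
# the corallite's bounding box (a crude region-of-interest cut).
lo, hi = corallite.bounds
centroids = combined.triangles_center
inside = np.all((centroids >= lo) & (centroids <= hi), axis=1)
roi = combined.submesh([np.nonzero(inside)[0]], append=True)
roi.export("corallite_segmented.stl")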
Similarly, multi-spectral (visible light, fluorescence, and infrared) specimens can also be produced, to help indicate the relationships present in coral communities at different electromagnetic spectra, and to show the relationship between their individual biological attributes, as seen in Figures 10, 11.
DIY Coral Reef Research Tools
3DP processes can also be utilised for the development and production of bespoke customised laboratory, workshop, and field tools for marine research activities. The ability to produce customised tools and accessories before fieldwork, and the possibility of producing replacement parts in the field using these systems, can greatly increase the speed, specificity, quality and outcomes of field research (Baden et al., 2015). The open-source file repositories available online allow these tools, pieces of equipment and concepts to be widely disseminated and quickly accessed, enabling their rapid customisation, improvement and sharing in specialised and non-specialised online communities.
A World of Opportunities…
The rapid growth in popularity and subsequent development of the 3DP field, particularly in the consumer FDM space, has resulted in a very wide spectrum of polymers compatible with low-cost FDM systems. These new materials include composites, such as PLA embedded with particles of copper, bronze, stainless steel, wood, ceramics, or carbon, all aimed at increasing the performance characteristics of the end products, as well as alternative biologically sourced materials made from wood, linen, lignin, or algae, which reduce the environmental impact of oil-derived polymers. These filaments offer differing degrees of texture, strength, flexibility, elasticity, and colour, including fluorescence, phosphorescence, and translucence (Figures 10, 11). These characteristics can be enhanced by selecting and editing a 3D model based on a specific digitising method, or on a fusion of different imaging technologies. 3D models produced from CT scans are highly detailed and accurate, even for microscopic samples, and produce meshes that display internal structures. These models were selected when translucent resins (SLA) could display internal anatomical details of coral samples and corallites (FDM). Likewise, these models could potentially be employed to produce 3D prints with plane cuts across the specimen to demonstrate internal structures with opaque materials (Figure 1B, and similar to Figure 10e). 3D models produced from photogrammetry were selected for also being highly detailed and accurate, and for being far less expensive than CT scanning; their texture mapping can be used to produce coloured 3D prints of fresh samples (Binder Jet and LOM).
An interesting approach to coral reef research, and to marine biology community research generally, would be the use of sensory materials, or reactive plastics, as discussed previously. Recent developments have made available polymer materials for FDM 3DP systems which are colour-responsive to a range of changes in the physicochemical parameters of the contact environment (Figure 11). Commercial sensory products with similar qualities have been used in aquariums to help keep environmental parameters optimal (temperature, pH, and UV light decals attached to the walls). Structures manufactured with these reactive polymers could be used on a larger scale in underwater environments. Thermochromic rings could readily indicate abrupt changes in water temperature (e.g., El Niño climate events), photochromic rings could signal UV bleaching, and halochromic rings could even reveal changes in water acidity. These accessories are potential tools for passively monitoring ocean changes from boats or when permanently placed in the environment, tools which could easily be monitored by locals without the need for specialist training or equipment.
Prospects for these technologies rely on the use of online resources to display, collaborate on, and even add aesthetic and monetary value to the 3DP objects produced. One website, Sketchfab (https://sketchfab.com/therlab), allows the researcher to upload 3D content to a free account, where it can be displayed for education and science outreach, shared amongst collaborators, and even animated. Similarly, Thingiverse (https://www.thingiverse.com/) and Shapeways (http://www.shapeways.com/) can serve as marketplaces where users of 3D digitisation and 3D printing can promote a global community of design and manufacturing, increasing awareness, opportunities, and the potential of the concept. Thingiverse is a free website aimed at the sharing of user-created 3D design files, and Shapeways is a 3D printing forum and service start-up company. Used in conjunction, they provide a place where researchers can upload and share 3D printable files which can be printed in a wide range of materials and colours and then shipped across the globe. This allows for a multi-user community where valid 3D prints can be exchanged, and practical solutions and customised tools for research and monitoring can be produced.
These 3DP systems and their associated networks present huge potential for the replication and activation of coral analogues for research, education, and environmental protection, offering new methods to help protect these delicate marine communities.
AUTHOR CONTRIBUTIONS
LG-H: study's conception, article drafting, imaging; CK: article drafting, production of 3D prints for assessment; ER: study's conception, article drafting.
Policing mandatory bicycle helmet laws in NSW: Fair cop or unjust gouge?
Australia was the first country to introduce laws mandating the wearing of helmets by bicyclists with the aim of enhancing road safety for cyclists. Examining the law and its administration in NSW, we point to some serious problems and anomalies with the law. We argue safety concerns have been relegated and the fine for the offence has lost any sense of proportionality with offending, parity with penalties imposed in other states and with penalties for other road safety offences in NSW. We also discuss concern over potential police misuse of the law and of its collateral consequences for the vulnerable.
Australia was the first country in the world to introduce laws mandating the wearing of helmets by cyclists. 1 Victoria legislated in 1990 2 and the other states followed, New South Wales (NSW) in 1991. 3 They were among a raft of laws (including the introduction in 1971 of compulsory seat belt laws for motorists and mandatory helmet laws for motor cyclists, and random breath testing from 1976 in Victoria) that aimed to enhance road safety and reduce road-related injury and death. In this article we examine the law and its administration in NSW and point to what we see as some serious problems and anomalies. We argue that the safety concerns that originally animated the mandatory helmet law have, at least in NSW, receded behind what can only be described as a brazen exercise in revenue-gouging of its citizen-cyclists. The penalty is now ludicrously excessive. Along with safety, a number of other principles and concerns have been relegated, including any sense of proportionality between penalty and offence, parity with penalties imposed in other states for the same offence and with penalties imposed for other road safety offences in NSW, and recognition of the potential for police misuse of the law and of its collateral consequences for the vulnerable. 4
The helmet law in NSW
As with the other road safety measures of the time, the introduction of mandatory helmet laws in the early '90s was accompanied by public campaigns, including in the case of the helmet laws school-based programmes and rebate schemes, that sought to persuade and support citizens, especially the young, to adopt the practice of wearing helmets in order to reduce the risk of serious head injury. 5 Helmets, it is almost too obvious to state, are not aimed at protecting other users of roads or public space, and cannot protect a cyclist against injuries other than those involving the head in the event of an accident. In the worst collisions, with motor vehicles for example, the protection afforded by a helmet is limited indeed. Their utility as a safety measure should not therefore be exaggerated. 6 Riding a bicycle without wearing an approved helmet securely fitted and fastened is one of a large number of bike offences under the NSW Road Rules for which a penalty notice may be issued. 7 A penalty notice is a fixed on-the-spot fine which a person is required to pay within 28 days unless they choose to challenge it in court. The helmet offence is by far the most common bicycle offence for which penalty notices are issued by police, although on occasion it is accompanied by additional penalty notices for other bicycle offences. Very few are taken to court. Until March 2016 the offence of riding without a helmet was categorised as a Level 1 Penalty Notice (PN) offence under NSW law, 8 which meant it carried a fine of $71. An amendment in March 2016 9 recategorised the offence as a Level 5 PN offence, carrying a fixed penalty of $319, an overnight increase of 349 per cent. The change was justified as part of a package of bike safety measures which included a new penalty notice offence applying to motorists failing to pass a cyclist at a safe distance. The new offence appears to have been something of a quid pro quo for the massive overnight hike in the penalty for the helmet offence. The two offences were treated as being of equal gravity, with the new safe-passing offence also carrying a fine of $319.
Penalty notice offences are indexed in NSW, so the penalty for these offences as of July 2019 was $344.
As Table 1 indicates, the penalty for not wearing a helmet in NSW is wildly out of kilter with the penalty for the equivalent offence in other Australian jurisdictions, which range from a low of $25 in the Northern Territory to a high of $207 in Victoria (or 7 per cent to 60 per cent of the penalty in NSW).
There is no evidence that a penalty of the order of the NSW fine is necessary to ensure general compliance with the law, that compliance is higher in NSW than elsewhere, or that cyclists in the 'premier state' are safer than their counterparts in other states and territories as a consequence of the higher fine.
It is also revealing to compare the penalty for the helmet offence with other road safety offences in NSW that are subject to penalty notices: see Table 2. Although the NSW Roads and Maritime Services (RMS) website announces on its 'Speeding' webpage that 'Speeding is the most common contributing factor to road fatalities in NSW' (so that it is speeding drivers, not unhelmeted cyclists, who present a danger to others), a driver has to be exceeding the speed limit by more than 20 km/hr before the fine exceeds that of failing to wear a bicycle helmet. Exceeding the speed limit by 10 km/hr will incur a fine barely one third that of the bicycle helmet offence. The penalty for offences like negligent driving and failure to stop at a red light is not much more than $100 above the helmet offence. Although the risks and incidence of serious injury associated with motorised forms of transport are unquestionably greater than those associated with bicycles, the failure to wear a seatbelt in a motor vehicle and the failure to wear a helmet on a motor cycle carry the same fixed penalty as riding a bicycle without a helmet. The driver of a motor vehicle who chooses to drive in a dedicated bicycle lane faces a penalty little more than half the amount a cyclist will be pinged for riding in that same lane without a helmet.
Are we really talking safety here? Is there any nexus or proportionality between the penalties and the gravity of the offences and risks?
What we do know is that the helmet offence has risen to be a terrific little earner for NSW governments. Table 3 shows the number of penalty notices issued for riding without a helmet and the value of the fines imposed for each of the years for which data are available. It also shows the same for the new safe-passing law introduced in 2016. From 2012-13 to 2018-19, the number of penalty notices issued for riding without a helmet slightly more than doubled. The value of the penalty notices issued for the offence increased by $1,727,203, well over 700 per cent. But it was in the year before (2017-18) that the numbers peaked, with over 6,000 penalty notices valued at over $2 million. Compare the paltry number of penalty notices issued for the safe-passing offence since its introduction: a total of 95 over four years. Cyclists might reasonably wonder whose safety and welfare is the priority here.
What we know about cyclist injury and mortality
The goal of law and policy relating to cycling should be to ensure a safe environment for cyclists. According to injury data from 1999 to 2015-16 compiled and published by the Australian Institute of Health and Welfare (AIHW), 10 cyclists accounted for one in five people hospitalised as a result of injuries occurring on Australian roads, streets and paths. The national incidence of injuries to cyclists rose steadily over the entire period and at a higher rate in the latter few years, but the number of deaths, although fluctuating from year to year, is low. Over the study period there was an average each year of 38 cyclist deaths compared to over 1000 people in total killed in transport-related incidents. For the three years 2013-14 to 2015-16 almost half the cyclist deaths involved a principal injury to the head and neck. In almost four in 10 deaths, injuries were sustained to multiple parts of the body. Data on the presence or otherwise of a helmet in these fatal accidents are not available. The majority of injuries suffered by cyclists in this period were fractures, and most of these involved upper limbs. The age profile of hospitalised cyclists has changed over time, with the number of those over 25 years of age increasing and those under 25 declining. Injury severity also increased with age: those over 45 were more likely to have life-threatening injuries and longer stays in hospital.
Cyclists do not themselves present a significant risk to others. The AIHW data for 2015-16 indicate that for all cases of land transport-related injury resulting in hospitalisation a bicycle was the counterpart in only 1.3 per cent of crashes, much the same as for pedestrians or animals (at 1.2 per cent). It is the greater speed and mass of motor vehicles that constitute the dominant risk of severe injuries to cyclists, pedestrians and other motorists. There is even some recent evidence from an Australian pilot study that self-reported driver aggression toward cyclists is linked to negative attitudes towards cyclists as a group, confirming the anecdotal reports of some cyclists that they are subject to harassment and intimidation by motorists. 11 In keeping with the available data, a strategy which has as its primary concern the safety of cyclists would focus on infrastructure (like dedicated bike paths or lanes) and the education and regulation of drivers with respect to their impact on other road users. As is clear from the above data relating to its enforcement, the safe passing law represents little more than a hollow gesture in this regard. The emphasis so far as cyclists themselves are concerned should also be (as it was when the helmet law was originally introduced) on education, encouragement and support to ensure that individuals ride safely to avoid accidents and limit the risk of serious injury in the event that they do occur. Helmets obviously have a valuable role to play in relation to reducing the risk and severity of head injury, but it is a limited role given the many factors affecting safety for cyclists. We accept the case for a law that mandates helmet use, in appropriate circumstances, as a way to reduce those particular risks. However, the penalty and its enforcement should reflect some sense of proportionality with respect to both the risks against which it affords a measure of protection (given that wearing a helmet does not touch most of the factors affecting cycling safety) and the minor nature of the offence, involving no risk to other persons (unlike motor vehicle speeding, for example). A small fine to provide an additional incentive for compliance may be justified, but in our view there is no case for a punitive fixed financial penalty such as the current one under NSW law.
The question of enforcement
In its 2006 report on the effectiveness of fines, the NSW Sentencing Council carefully examined the general issue of penalty notice provisions as well as court-based fines. It underlined the fact that penalty notice regimes involve a form of 'executive sentencing' which attract little in the way of public oversight. The Council warned of 'the potential for the development of discriminatory, unfair and negligent or corrupt practices' given the lack of accountability. 12 This is a serious concern with respect to an offence like the failure to wear a bicycle helmet. The power to impose a quite punitive on-the-spot fine with no meaningful oversight requires that police exercise their discretion with great care. The risks associated with riding without a helmet vary enormously depending upon location and circumstance (for example, riding on a busy highway, a backstreet, a park, a dedicated bike path, etc). It is likely that many police use their discretion in issuing penalty notices with such legitimate considerations in mind and in recognition that they generally have much more serious crimes to attend to. However, there is also evidence from our research of quite arbitrary enforcement where unstated and sometimes discriminatory factors -rather than legitimate safety considerations -dictate enforcement practices. For a start, there is enormous geographical disparity in the number of penalty notices issued for failure to wear a bicycle helmet. In 2018-19, of the 117 NSW local government areas (LGAs) in which at least one penalty notice was issued for failure to wear a helmet, the 12 LGAs (barring Sydney) in which the majority of penalty notices were issued accounted for just short of half the total. 13 Most but not all of these are larger urban population LGAs, but this does not account for the fact either that other large population urban LGAs see many fewer penalty notices issued or that, in some rural LGAs with small populations, large relative numbers of penalty notices are issued. The year-to-year pattern in some LGAs appears to be consistent in this respect. Blacktown in Sydney's west accounts for a remarkably high proportion of penalty notices -between 11 per cent and 13 per cent of the total in each of the three years since 2016-17. Its surrounding LGAs account for nothing like these totals. In other LGAs, the numbers fluctuate significantly from year to year. In Byron LGA, there were a total of 31 penalty notices issued in 2017-18 with 70 issued in 2018-19. In September 2019 the Northern Star newspaper, which services the Lismore LGA, 14 quoted the local Police District Inspector to the effect that 80 fines had been issued by highway patrol officers over just one recent weekend (which amounts to nearly $30,000 worth of revenue). Was this the result of a sudden outbreak of defiance of the helmet law on the part of Byron residents and visitors -or a case of the police most of the time ignoring the offence until a decision was taken to enforce a crackdown in which more fines were issued in a weekend than in the entire previous year and more than twice as many as in the year before that? As one local observed, 'I think it's a dirty tactic by the police to allow non-compliance most of the time in Byron Bay and then have a "blitz" without warning and fine $350 a pop'. 15 It is impossible to discern any legitimate safety rationale in such an enforcement strategy. 
Similarly, it is difficult to explain the other enormous local disparities in the enforcement of the helmet law by reference to patterns of cyclist behaviour in choosing to wear or not wear a helmet. Does this behaviour vary so widely between localities? Or is the most satisfactory explanation that the numbers are predominantly a function of police, rather than cyclist, behaviour? This begs the question: are enforcement decisions taken with the safety of cyclists in mind, or for other undisclosed reasons? More worryingly, our interviews with legal aid lawyers and others 16 indicate that the law is not infrequently used for a range of purposes clearly unrelated to bicycle safety, including to gather intelligence about other offences and suspects, to justify searches, and to harass targeted individuals (particularly among young people). On occasion this can involve the repeat issue of penalty notices for failure to wear a helmet (for example, a child or teenager issued with a penalty notice while cycling both to and from school on the same day). Draconian enforcement of this kind does nothing to address the safety issue and is more likely to engender cynicism, resentment and resistance, which may in turn produce escalation effects, including confiscation of the bike where a person cannot prove ownership and/or further offences like resisting police, offensive language, goods in custody and so on.
There are, so far as we are aware, no formal guidelines which seek to guide enforcement discretion to ensure that enforcement serves a legitimate safety purpose rather than some other questionable aim. Guidelines should be developed and made widely accessible which would require police to consider the real risks of injury involved in the particular context before taking any action. Police procedures should mandate a caution for an initial breach (and perhaps also a second one) before a penalty notice could be issued. McBarnet's 'ideology of triviality' 17 can be invoked here: offences like the failure to wear a bicycle helmet are 'trivial', and so it is considered acceptable to punish them by way of on-the-spot fines that elude meaningful legal accountability. This is convenient for many whose financial security enables them to treat such penalties as akin to other household bills, but it can be highly misleading with respect to the impact of such penalties and their collateral consequences on others, especially the poor and the otherwise vulnerable.

Consider that the 2016 Household, Income and Labour Dynamics in Australia (HILDA) Survey found that over 12 per cent of Australian households did not have $500 in savings to meet an emergency. 18 For these households, a $344 fine represents no small financial burden, the discharge of which may often have to be weighed up against that of other debts and expenses, let alone when cumulative fines quickly multiply the household debt. And then there are the collateral consequences where the debt is not repaid. Enforcement action includes driver's licence suspension or, in the case of those who do not have a licence, a restriction on obtaining any form of licence or learner's permit. The possible flow-on effects of this, like secondary offending where a person is caught driving without a licence, can then land individuals in serious legal trouble. 19

The offence of riding a bicycle without a helmet can impose particular burdens on the young, who are more likely to be dependent on cycling as a mode of transport but less likely to be able to afford to purchase a helmet or to pay the fine if issued with a penalty notice for failure to wear one. There is no differentiation of the penalty based on age, 20 and this is just one dimension of a more fundamental problem. Provisions, for example those under the Young Offenders Act 1997 (NSW) and the Criminal Records Act 1991 (NSW), that seek to protect the young from the full force of the law by providing for cautions and other diversionary measures and treating certain juvenile convictions as 'spent' so that a juvenile record does not follow the person into adulthood, have no application to this penalty notice offence or to most others. There are some penalty notice offences (those that used to be known as criminal infringement notices or CINs, and which relate to offences like offensive language and minor theft) where a penalty notice cannot be issued to a juvenile. There is no obvious reason for these distinctions: it is a fundamental anomaly that young people are less protected under most penalty notice provisions involving minor offences than under the general criminal law. The impacts on some young people can be far from inconsequential. As an example, unpaid fine debt can increase the risk of a young person being pushed into poverty or prevent them from obtaining a driver's licence upon which employment and other opportunities may depend.
Conclusion: The wider picture
The penalty notice regime applying to this one minor offence of failure to wear a bicycle helmet raises multiple issues, but it also provides a glimpse into what is a much greater problem. Currently around 2.8 million penalty notices are issued each year in NSW. More than 20 penalty notice fines are issued for every sentence imposed by the courts. 21 Many of the problems noted here with respect to the helmet offence pervade the entire area of penalty notice provisions. It is a hidden justice system, devoid of the safeguards assumed to apply to the administration of justice. That a more than threefold increase in the penalty for failure to wear a bicycle helmet can be instituted overnight by simply amending a regulation attests to the capriciousness and paucity of meaningful accountability that characterises the legislative process as it pertains to penalty notice regimes. The same problems pervade enforcement, as penalty regimes effectively concentrate all power in the executive.
The 2006 NSW Sentencing Council Report made some withering criticisms of the penalty notice system and pointed to:

The absence of any coherent or consistent cross-government mechanism for the fixing of the level of the penalties for which such notices may be used, or of guidelines for their adjustment in circumstances where there seems to be little in the way of any rational proportionality between many of the available penalties and the objective seriousness of the relevant offences . . . 22

The Council concluded:

Extension of the scheme to cover additional offences, whether regulatory or criminal presents significant difficulties and should not occur unless and until there is wholesale review of the system for their issue and enforcement, and unless and until suitable safeguards and guidelines are established applicable to each agency. 23

In the intervening years, there have been constant efforts to mitigate some of the unfair hardship and injustice to which penalty notice regimes give rise. One example is the introduction of work and development orders that allow eligible persons to cut out fine debt, 24 and a very recent development is an allowance by the Commissioner of Fines Administration to reduce a fine by 50 per cent for persons on a government benefit. 25 However, no willingness has been shown by successive NSW governments to respond to those general criticisms and recommendations of the Sentencing Council, nor to tackle the fundamental structural problems with what continues to be a sprawling, ever-expanding and largely invisible system characterised by irrationalities, anomalies and injustices of the kind that we have noted with respect to the helmet offence.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Effect of Organic Modification on Multiwalled Carbon Nanotube Dispersions in Highly Concentrated Emulsions
Highly concentrated water-in-oil emulsions incorporating multiwalled carbon nanotubes (MWCNTs) are prepared. Homogeneous and selective dispersions of MWCNTs throughout the oil phase of the emulsions are investigated. The practical insolubility of carbon nanotubes (CNTs) in aqueous and organic media necessitates the disentanglement of CNT “agglomerates” through the utilization of functionalized CNTs. The design and synthesis of two tetra-alkylated pyrene derivatives, namely, 1,3,6,8-tetra(oct-1-yn-1-yl)pyrene (TOPy) and 1,3,6,8-tetra(dodec-1-yn-1-yl)pyrene (TDPy), for the noncovalent organic modification of MWCNTs are reported. The modifier molecules are designed in such a manner that they facilitate an improved dispersion of individualized MWCNTs in the continuous-oil phase of the highly concentrated emulsion (HCE). Transmission electron microscopic analyses suggest that the alkylated pyrene molecules are adsorbed on the MWCNT surface, and their adsorption eventually results in the debundling of MWCNT agglomerates. Fourier transform infrared, Raman, and fluorescence spectroscopic analyses confirm the π–π interaction between the alkylated pyrene molecules and MWCNTs. The noncovalent modification significantly improves the effective debundling and selective dispersion of MWCNTs in HCEs.
Sedimentation behaviour of unmodified and modified MWCNTs
The unmodified and modified MWCNTs were dispersed in THF, and their sedimentation behaviour was studied. Around 1 mg of MWCNTs was ultrasonicated in 20 ml of THF with a bath-type ultra-sonicator (PCi Analytics, 20 Hz) for 30 min. To ensure an equal amount of MWCNTs in all the dispersions, 2 mg and 3 mg of modified MWCNTs were used for the 1:1 and 1:2 (w/w) modifications, respectively. Figure S1 shows photographs of the dispersions of unmodified and modified MWCNTs in THF. The images were captured 24 hours after the ultrasonication. The dispersion of the unmodified MWCNTs was highly unstable, and sedimentation was observed within a few minutes of ultrasonication. On the other hand, both the 1:1 and 1:2 modified MWCNTs were stable for a few days in THF.
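The arithmetic behind these loadings is straightforward: for a MWCNT:modifier weight ratio of 1:r, every (1 + r) mg of modified material contains 1 mg of nanotubes. A minimal sketch of this bookkeeping (ours, for illustration only; not code from the study):

```python
# Bookkeeping sketch: mass of modified material needed to hold the MWCNT
# content constant across dispersions. For a MWCNT:modifier weight ratio of
# 1:r, each (1 + r) mg of modified material contains 1 mg of MWCNTs.

def modified_mass_mg(target_mwcnt_mg: float, modifier_parts: float) -> float:
    """Mass (mg) of modified MWCNTs that contains `target_mwcnt_mg` of
    nanotubes, given `modifier_parts` parts modifier per part MWCNT."""
    return target_mwcnt_mg * (1.0 + modifier_parts)

# Reproduces the amounts quoted above for ~1 mg of MWCNTs per 20 ml of THF:
print(modified_mass_mg(1.0, 1.0))  # 1:1 (w/w) modification -> 2.0 mg
print(modified_mass_mg(1.0, 2.0))  # 1:2 (w/w) modification -> 3.0 mg
```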
Dispersion studies: Dispersion state of MWCNTs in the oil blend
The dispersion studies of MWCNTs were carried out in the oil blend, the composition of which was the same as that of the continuous phase of the emulsion. The dispersion states of the MWCNTs in the oil blend are shown in Figure S2 (e, f). Both 1:1 and 1:2 (w/w) modified MWCNTs were used.
The optical micrographs were processed using ImageJ software in order to quantitatively investigate the effect of the non-covalent modification on the average size of the remaining MWCNT agglomerates. A minimum of 1200 MWCNT agglomerates was considered by processing around 20-30 micrographs for each type of MWCNTs, in order to ensure the accuracy of the analysis. The dispersion with unmodified MWCNTs exhibited a higher average agglomerate size when compared to all the modified MWCNTs. For the modified MWCNTs, the average size of the MWCNT agglomerates decreased with increasing weight ratio of TOPy and TDPy. The average agglomerate sizes of the MWCNTs in the oil blend dispersion and the agglomerate-size distributions of modified and unmodified MWCNTs are provided in Figure S3.
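The study performed this analysis in ImageJ; a roughly equivalent scripted version is sketched below with scikit-image. The file names, Otsu thresholding and pixel calibration are our illustrative assumptions, not the settings actually used.

```python
# Hypothetical scripted analogue of the ImageJ agglomerate-size measurement.
# File names, thresholding method and the micrometre-per-pixel scale are
# illustrative assumptions, not values taken from the study.
import numpy as np
from skimage import io, filters, measure
from skimage.color import rgb2gray

def agglomerate_areas(path: str, um_per_px: float = 1.0) -> np.ndarray:
    """Areas (in um^2) of dark MWCNT agglomerates in a single micrograph."""
    img = io.imread(path)
    gray = rgb2gray(img) if img.ndim == 3 else img
    # Agglomerates appear dark against the bright oil blend, so keep the
    # pixels below an automatically chosen (Otsu) threshold.
    mask = gray < filters.threshold_otsu(gray)
    labels = measure.label(mask)
    areas = np.array([r.area for r in measure.regionprops(labels)], dtype=float)
    return areas * um_per_px ** 2

# Pool areas from ~20-30 micrographs per sample, as described above, and
# compare mean agglomerate sizes between unmodified and modified MWCNTs.
all_areas = np.concatenate([agglomerate_areas(f"micrograph_{i:02d}.png")
                            for i in range(25)])
print("agglomerates:", all_areas.size, "mean size:", all_areas.mean())
```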
Fluorescence Spectroscopy measurements
A set of fluorescence spectroscopy experiments was run at different concentrations of MWCNTs in the modifier solution in THF. As observed in Figure S5, upon excitation, the fluorescence spectra of the MWCNTs again exhibit superimposable profiles with respect to the free TOPy and TDPy molecules in THF. However, there is significant quenching of the fluorescence emission intensity in the presence of MWCNTs when compared to the emission intensity of the modifier molecules alone in THF (Figure S5).
The strong fluorescence emission intensity decreased in a non-linear, exponential fashion upon addition of MWCNTs. The intensity of the fluorescence was quenched by 56% and 77% upon addition of the MWCNTs to TOPy at 1:0.4 and 1:1 (w/w) ratios, respectively. This implies that the adsorption of TOPy and TDPy on the surface of MWCNTs possibly affects the decay of the singlet-excited pyrene moieties. Moreover, the fluorescence quenching indicates the interaction that occurs between the MWCNTs and the modifier molecules. There is no significant spectral shift in the presence of MWCNTs. However, the slight shift of the lowest-wavelength peak (437 nm) depends on the concentration and is due to reabsorption of the fluorescence emission. 1 The pyrene moiety is planar and a locked conformer with a rigid structure. Therefore, the natural structure of both modifiers is ideal for mapping onto MWCNTs. Hence, the mapping does not lead to a red shift of the peak maximum.
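The quenching percentages quoted above follow from the usual definition of the quenched fraction, (I0 - I)/I0, where I0 is the emission intensity of the free modifier and I the intensity after MWCNT addition. A minimal sketch, with assumed, normalized intensity values chosen only to reproduce the quoted figures:

```python
# Quenching calculation implied above; the intensity values are assumed,
# normalized placeholders, not measured data from the study.
def quenching_percent(i0: float, i: float) -> float:
    """Fluorescence quenching in percent relative to the free modifier."""
    return 100.0 * (i0 - i) / i0

i0 = 1.0  # normalized emission of TOPy alone in THF (assumed)
print(quenching_percent(i0, 0.44))  # ~56%, as at the 1:0.4 (w/w) ratio
print(quenching_percent(i0, 0.23))  # ~77%, as at the 1:1 (w/w) ratio
```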
Characterisation of modifier molecules: TOPy and TDPy
Both alkylated pyrene derivatives were characterised using NMR, FTIR, UV-visible and fluorescence spectroscopy, and the corresponding spectra are given below: Figure S6 (1H-NMR spectrum of TOPy in CDCl3); Figure S7 (13C-NMR spectrum of TOPy in CDCl3); Figure S8 (1H-NMR spectrum of TDPy in CDCl3); Figure S9 (13C-NMR spectrum of TDPy in CDCl3); Figure S10 (FTIR spectra of TOPy and TDPy); and Figure S11 (UV-Vis absorbance spectra and fluorescence emission spectra).
Commodity frontiers and the transformation of the global countryside: a research agenda
Abstract Over the past 600 years, commodity frontiers – processes and sites of the incorporation of resources into the expanding capitalist world economy – have absorbed ever more land, ever more labour and ever more natural assets. In this paper, we claim that studying the global history of capitalism through the lens of commodity frontiers and using commodity regimes as an analytical framework is crucial to understanding the origins and nature of capitalism, and thus the modern world. We argue that commodity frontiers identify capitalism as a process rooted in a profound restructuring of the countryside and nature. They connect processes of extraction and exchange with degradation, adaptation and resistance in rural peripheries. To account for the enormous variety of actors and places involved in this history is a critical challenge in the social sciences, and one to which global history can contribute crucial insights.
eastern Europe to Argentina and the American Midwest and coal from Europe to North America, Russia, China, India and Australia. Huge territories on all continents have seen their ecologies and societies radically reconfigured by the incursion of new commodity frontiers, outside capital, migrant workers and innovative technologies. Millions of people have laboured on these commodity frontiers, often under coercion, and huge wealth has streamed from fields and mines into the coffers of capital-rich urbanites, while provisioning industrial workers with food, and machines with supplies. In the process, native peoples have been dispossessed of land and rights, and the countryside has been endlessly reconfigured into a source for global capitalist growth.
Crucially, commodity frontiers are variable in terms of place and commodity, and they change over time. In 1700, for example, most cotton was grown by peasant producers on land they owned or rented, then sold to merchant middlemen. In 1800, most cotton traded on global markets was grown by enslaved cultivators on land taken from indigenous inhabitants; enslavement and possession both funded by European metropolitan capital. Another 100 years later, in 1900, sharecroppers and tenant farmers drawing on European capital and enabled by massive state-directed infrastructure projects and legal interventions produced cotton for global markets. There was also significant diversity within given moments in time. In 1800, for instance, sugar was produced in some regions of the world by enslaved workers, in others by peasant cultivators and elsewhere by wage workers.
The concept of commodity frontiers is a powerful lens through which to analyse capitalism's history. It helps us understand on an empirical and conceptual level how ongoing incorporations of new reservoirs of labour, land and nature have constituted capitalism's extraordinary dynamics, especially its ability to produce ever more goods. Focusing on the long history of these commodity frontiers allows us to analyse how frontier expansion has generated shifting sets of seemingly localized activities to secure access to labour, land and nature for globalized commodity production, helping us come to terms with the diversity of outcomes at any given moment and their shift over time. Seeing how commodity frontiers have moved for centuries, taking on very different characteristics (transitions marked by booms and busts, inherent ecological and social limits including resistance, and alteration by the very contradictions they produced) lets us better understand some of the fundamental dynamics of capitalism and its connection to and subsumption of new spaces, new countrysides and new forms of nature. And, crucially, looking at commodity frontiers makes it strikingly clear that it is impossible to fully understand capitalism without thinking just as much about the countryside as about cities, about agriculture as about industry.
Commodity frontiers are core constituents of the modern world. Understanding how and why they have expanded, moved and adapted over time is thus a key step in a better understanding and analysis of the global history of capitalism. But it includes great challenges: how to account for the enormous variety and specificity of actors and places involved in this history, the dizzying number of changes that have taken place as well as their almost unfathomable scale, without losing sight of the broad movements of global capitalism and its systemic transformations? It is to this fundamental social sciences challenge that global history can contribute crucial insights.
Capitalism and commodity frontiers
Considering the spectacular rise in the growing of agricultural commodities and the mining of minerals in the past centuries, and the stunning and ongoing social and environmental effects of their production and circulation, it is not surprising that many scholars from a variety of disciplines have tried to grasp the underlying mechanisms of commodity frontier expansion.
Economists have contributed much to our understanding of these issues, especially their discussion of whether and to what extent capitalism can resolve the social and ecological crises it has created. We learn from Edward Barbier's monumental Scarcity and Frontiers that over the past centuries, some commodity frontiers sustained successful resource-based development, while many more collapsed under social and ecological pressures. 4 We also discover that capitalist production has created and extracted a wealth of new agricultural commodities and minerals across the world, while polluting bodies of water, land and people, depleting and salinizing soils and degrading the very conditions of its own reproduction. Many economists conceptualize these processes by emphasizing that capitalism tends to externalize social and ecological costs, and that the best way to correct such imbalance is to internalize them. In both economic research and policymaking, internalizing externalities has become a widely accepted approach to furthering global sustainability. Neoclassical economists are especially optimistic about such ecological accounting. Paul Collier, one of this field's most prominent voices, promotes an analytical tool he calls 'right prices' and highlights the role of 'inclusive and transparent institutions' in defining and regulating such prices. 5

Ecological economists, on the other hand, have looked at the same set of facts and come to very different understandings. Economist Joan Martinez-Alier and human ecologists working on distributional conflicts, such as Alf Hornborg, are sceptical that capitalist externalities can be priced into submission. They offer alternative conceptions of how to define capitalism's ecological problems, seeing them as the political results of uneven distribution. 6 They deploy concepts such as social metabolism and unequal ecological exchange to analyse how the flows of energy and materials between places and peoples generate and maintain inequalities. These scholars contend that the capitalist system, rather than being able to self-correct by pricing externalities, is based on crushing forms of ecological debt created by rich nations underwriting their growth with the resources of poor nations. For these scholars, past centuries have been marked by industrializing societies, almost always colonial powers, compensating for their ecological deficits by imperialist exploitation. 7 By opening new frontiers of commodity cultivation, production, extraction and waste disposal, these countries have exported problems of pollution, soil degradation, poor labour conditions and social upheaval to poorer countries. Geographer David Harvey, one of the most prominent voices in this debate, refers to this process as capital's 'spatial fix': the extraction of resources and labour from local communities by dispossession, resulting in highly uneven development. 8

Development studies scholars provide yet another set of approaches to understanding commodity frontiers by asking why many resource-rich countries tend to be characterized by low levels of economic growth, massive economic inequality and high rates of poverty. Neoclassical economic explanations for this 'paradox of plenty' or 'resource curse' tend to centre on issues of incorrect pricing, misallocation of development revenues and inadequate institutional quality and oversight. 9
Critical economists and political ecologists, especially in Latin America, argue, in contrast, that this outcome is not a paradox, but rather the direct result of outside actors and institutions extracting minerals, raw materials and forest and agricultural resources and exporting them along with the water, energy, labour and knowledge that they embody. To conceptualize the impact of commodity frontier expansion, these thinkers use notions of extractivism, neo-extractivism and post-extractivism. 10 Other scholars, including Mattias Borg Rasmussen, Christian Lund and Nancy Peluso, use the concept of territorialization to explain how patterns of resource exploration, extraction and commodification have dissolved existing social orders and reordered spaces. The reshaping of social and economic orders around new resource frontiers profoundly reworks patterns of authority and institutional architectures such as property systems, political jurisdictions, rights and social contracts. 11 Insights from these debates are germane for studying the social and environmental foundations and the effects of capitalism's commodity frontiers in the global countryside. But even as various disciplines converge around studying the social and environmental foundations and effects of capitalism's commodity frontiers, their concepts remain siloed in discrete literatures. What is more, many (though not all of them) tend to focus on contemporary problems, their analysis limited by a failure to fully consider the many centuries of commodity frontier expansion.
The importance of history

Historical concepts are crucial, however, in situating present issues in longer trajectories to highlight the patterns that will ultimately help us find new analytical tools to grapple with our present. Of course, there is a long tradition of looking at capitalism historically, and indeed a distinguished group of social scientists, from Werner Sombart and Fernand Braudel to Immanuel Wallerstein and Alain Bihr, has argued that global capitalism emerged on the eve of the Columbian voyages across the Atlantic, that capitalism, in fact, was born global. 12 For this group of scholars and others in that tradition, commodity frontier expansion was a key marker of capitalism from its very beginning.
No one has done more to understand capitalism as a system encompassing distant places and people than Terence Hopkins and Immanuel Wallerstein, the developers of the concept of the commodity chain. 13 Their aim was to show the emergence of a worldwide division of labour about 600 years ago, a system that typically connected rural commodity-producing regions in the periphery with processing industries and consumers, typically in cities located in what they called the 'core.' These production processes, they argued, were 'cross-cutting political jurisdictions,' expanding over time, and subject to structural transformations. 14 Because the commodity chain concept sees commodities as 'containers of hidden social relations', it concentrates on people working on commodity frontiers, the places where they work, the conditions they work under and the social relations governing their work and production. 15 Feminist scholars working in this tradition, importantly, have expanded the scope of the commodity chain to include the unwaged household work that produces labourers and the 'free gifts of nature' that contribute to the making of commodities and commodity frontiers. 16 They have endeavoured, as Wilma Dunaway put it, to make visible those 'transfers of value that are embodied in commodities but do not show up in prices'. 17 In recent years, global historians have picked up on some of these ideas. Studies on the history of particular commodities, for example, have persuasively illustrated the deep links between agriculture and industry, the countryside and the city and the household and production. These studies have revealed how the global emerged from local configurations and have shown the essential role played by the politics, ideas and collective actions of non-elite actors such as rural cultivators, especially in the Global South, in shaping commodity frontiers, and thus the political economy of global capitalism. 18 Similarly, many scholars have been sensitive to the ecological dimension of economic and technological divergence between 'core' and 'periphery'. Environmental historian and historical geographer Jason W. Moore, for example, has argued that since for most of human history technological advances were slow and piecemeal, the global economy derived much of its growth from an unstinting expansion of vast frontiers of labour, food, energy and raw materials. 19 Sidney Mintz made the point abundantly clear in his Sweetness and Power, observing how slave labour produced cheap sugar for the emerging British industrial proletariat. Slave-based sugar and cotton supplied calories and clothing that industrialising Britain could never have procured from its own soil. In the words of historian Kenneth Pomeranz, these processes provided Britain with ecological relief. 20 It is at this point that Jason W. Moore's argument that global capitalism is organized through frontiers becomes especially relevant. For him, these frontiers have expanded from one place to the next, transforming socioecological relations as they go, producing more and more goods and services that circulate through an expanding series of exchanges. Valued by a growing number of scholars from different disciplines as a problem-oriented transdisciplinary approach to historical processes, Moore's commodity frontier concept invites a radical rethinking of the commodity chain approach from which it emerged. 
Commodity chain analysis starts from what its advocates have called 'the core' and works downstream towards peripheral locations of subaltern production, crop cultivation, mineral extraction and so on. The commodity frontier approach, in contrast, begins with the countryside, a significant departure. It moves analytically from chains of (labour) relations to frontiers of spatial expansion that include not only labour, but the incorporation and extraction of non-human nature.
Yet capitalism never changed solely by expanding in space and scale. It also experienced fundamental shifts in character, including in the dominant patterns of commodity frontier expansion. Analysing and understanding how these patterns varied across time and place, how and why such variations were institutionalized and how and why key dynamics changed requires a reflection on the periodization of capitalism. An influential approach here is that of Harriet Friedmann and Philip McMichael's work on successive food regimes. Concerned with 'the role of agriculture in the development of the capitalist world economy, and in the trajectory of the state system', their food regime concept has been debated, critiqued and carried further in the past 30 years. 21 Recently, Friedmann has amended a rather rigid structural conceptualization of food regimes by bringing agency and social movements more centrally into the frame. For her, a regime is constituted by a 'relatively stable set of relationships' with 'unstable periods in between shaped by political contests over a new way forward'. 22 Meanwhile, McMichael specified that the food regime is foremost an analytical device and historical method to pose specific questions about the structuring processes of the global political economy, and/or global food relations, at any particular moment. 23 Food regimes as a concept have thus developed into a device for periodization and a proposal for a comparative historical method that links broad political-economic change to local agency and contestation.
Commodity regimes
All the approaches outlined above illuminate important aspects of the expansion of commodity frontiers during the past six centuries. Yet, each is limited in its own way. Much of the writing on the history of capitalism produced during the past 150 years privileges a perspective from the city, from industry and from labour outside the household; not surprisingly, considering that most of these authors were men who resided in cities located in industrializing countries. Yet, the vast majority of humanity has lived and worked, until very recently, in rural and domestic places, and it was in these places that many of the revolutions of capitalism have taken place. And while commodity histories and commodity chain analysis have persuasively shown the deep links between agriculture and industry and the countryside and the city, their focus on single commodities has limited their ability to capture the expansion of commodity frontiers across several centuries and around the world as a whole. Global historians have captured some of these general processes, but their all-too-frequent privileging of top-down perspectives and elite actors has led them to ignore how the global, including global commodity frontiers, has emerged from local configurations of social space and social power. Scholars who focus squarely on commodity frontiers have often concentrated on single factors to illuminate their dynamics, insisting, for example, on master explanations like the 'spatial fix' (Harvey and Moore) or the 'technical fix' (neoclassical economists) and failing to historicize particular responses to particular moments of commodity frontier expansion. 24 Last but not least, many discussions of contemporary commodity frontier dynamics (i.e. land grabbing, flex crops, extractivism) fall into the trap of emphasizing the newness of developments that go back many centuries and can only be understood via a historical perspective.
What we need, instead, is to analyse commodity frontiers through a historical approach that (1) keeps multiple frontiers in view over a very long time period, (2) focuses on a variety of actors, including capitalists, rural cultivators (peasants and slaves, men and women, indigenous people) and state bureaucrats, (3) takes both a global and a local view to scrutinize frictions, contestations and counter movements from the household to the international arena and (4) asks how commodity frontiers have transformed in fundamental ways over the past 600 years, producing new kinds of dynamics, encountering particular resistances and constructing new fixes.
The concept of commodity regimes allows us to identify moments in the history of commodity frontiers in which particular sets of labour relations and property rights, patterns of land ownership, forms of the insertion of capital, state policies and technologies come to define a given historical period. It is a meta-historical device that allows us to capture the ways in which different societal domains on commodity frontiers (ecological, technological, social and political) are organized and relate to one another; it allows us to periodize and subdivide as needed, while still understanding the unity of the diverse.
Over time, we see that commodity frontiers exhibited regular, albeit shifting, combinations of labour systems, property regimes, technologies and state interventions. Any systematic analysis of the long history of these frontiers needs to begin by acknowledging this diversity. But we also need to acknowledge certain patterns. Properly analysed, these patterns help us understand the changing character of commodity frontiers as a constituent in the historical development of capitalism.
We can distinguish, roughly speaking, four distinct commodity regimes during the past centuries of capitalism's history, with the transitions from one to the next propelled by key transformations such as the abolition of slavery, the Industrial Revolution, the emergence of powerful state bureaucracies both in the industrializing countries of the North Atlantic and the colonial peripheries, and over the course of the twentieth century, the massive concentration of corporate enterprises.
The first such regime, which lasted from the 1450s through the 1850s, can be termed an early capitalist commodity regime. It was characterized by direct and violent dispossession of people from land and nature and by unfree labour systems that included chattel slavery, peonage and indentured contract labour. 25 Its forceful expansion was sanctioned by states, but its principal expansionary driver was merchant capital. The sugar commodity frontier is exemplary for this particular regime. Sugar production principally expanded because ever more land was violently taken out from underneath its native inhabitants, ever more workers were enslaved, and ever more capital moved from Europe into distant locations. Merchant capitalists and planters exerted a decisive influence on this commodity frontier, often ruling faraway places and organizing the day-to-day domination of labour. By the end of this period, maroonage and rebellion emerged as central social contestations, while technological advances propelled new and deeper forms of exploitation of both labour and nature. 24

The second regime, from the 1850s to the 1970s, can be described as an industrial commodity regime. It was characterized by the massively expanding use of fossil energy, soaring industrial and state demand for commodities and the rise of multinational capital, global bulk commodity markets and new transport and communication technologies. These factors reinforced infrastructural capabilities ranging from the telegraph to railroads, shaping the conditions under which commodity frontiers expanded, including the contractual mobilization of land and labour.
While the forms this transformation took were complex and varied across time and space, four central features can be distinguished: the conversion of a system of customary land rights into legally defined titles to land ownership; the transformation of the concept of property from ambiguously defined areas to concretely defined, possibly enclosed, physical spaces; the rationalization of the use of such demarcated landed property as a form of capital and the increased privatization of the earth's surface through dispossession and displacement of peasants and indigenous populations. Sharecropping, tenant farming, indentured servitude and wage labour increasingly replaced non-economic labour coercion such as enslavement. Massive colonial projects fuelled the commodification of land, while land grabs abolished communal peasants' rights and developmental projects or state-sponsored collectivization schemes led to further expropriation and displacement. Agricultural science, in turn, brought productivity leaps, eventually leading to the Green Revolution of the 1960s. Like the previous commodity regime, the industrial commodity regime was always contested, with labour activism and anti-colonial movements joining older forms of resistance.
The industrial commodity regime had enormous staying power and was propelled to new heights by the rapid economic expansion in the three decades after the Second World War. However, by the 1970s, it started to unravel and be replaced by a new corporate commodity regime. Slow economic growth hit numerous commodity-producing countries particularly hard, reducing their governments' abilities to mitigate market volatility and forcing them to accept structural adjustment programmes. This reinforced the changing role of the state vis-à-vis transnational corporations and financial institutions, as well as new global political divisions amongst and between North and South, simultaneously reproducing and remapping imperial, colonial and Cold War political geographies. The concentration of power in the hands of a few producers took a quantum leap as commodity trade and financial institutions became tightly connected from the 1980s on. Capitalist agriculture created new commodified inputs (seeds, fertilizers, pesticides, etc.) and legal protections for corporate ownership, resulting in further power concentrations at commodity frontiers. A massive contraction of land rights accelerated the growth of a rural proletariat on a world scale, and resistance movements acquired a transnational character. 26

Cracks in the corporate commodity regime became visible with the Great Recession and the world food crisis in 2008, spurring a fourth, still tentative, contemporary commodity regime. In this emerging regime, key elements of the previous regimes are reintegrated and intensified. For instance, firms already entrenched at the top of commodity markets and financial actors looking for new investment opportunities have come to own or finance increasing amounts of land around the world, largely through dispossession, often with the assistance of state power, best expressed in the newly fashionable public-private partnerships (PPPs). Green capitalism, built on the cooptation of sustainability discourse, continues to create new products and frontiers for accumulation, amongst them organic foods, biofuels and the optimistically named clean coal. 27 Rising authoritarianism around the world is pressuring people and environments on commodity frontiers in South America, the USA, South East Asia and elsewhere. At the same time, new dynamics are coming into view in this post-2008 era that Klaus Schwab, Executive Chair of the World Economic Forum, has called the 'fourth industrial revolution'. 28 Companies (often with state assistance) are expanding into radically new production and information technologies, including new automated (robot) labour in manufacturing, households and agriculture. Relatedly, new digital infrastructures that extract data and mediate between different groups have become increasingly important for commodity frontiers. Examples of what Nick Srnicek calls platform capitalism include agricultural implements manufacturer John Deere's data collection systems, which record farmers' activities and commodify the data; mobile phone-based money transfer and (micro)financing services such as Vodafone's M-Pesa, which operate in East Africa and elsewhere; and China's new 'Study the Great Nation' social platform, which collects user data for surveillance and advertising. 29 As this regime is still unfolding, questions about which processes and relations will be most important, which will spur or encounter the most resistance, and what forms that resistance will take remain.
These four regimes have engendered widely divergent forms of expansion and exploitation, showing capitalism to be highly adaptive and flexible. And like capitalism more broadly, each regime contains profound tensions, generating fierce contestation. Approaching the history of capitalism through commodity regimes speaks against a teleological or linear interpretation of the relationship between capitalism and the countryside, as Figure 1 clearly shows, and aids us in uncovering capitalism's shifting historical and spatial logic. Figure 1 is our effort to describe in a tentative way and at the most general level how we propose to investigate frontier expansion as a series of cumulative frictions and fixes.
Commodity regimes and their frictions
The expansion of commodity frontiers was not a smooth unfolding of one universal logic or of unstinting human progress, but a series of regimes that transformed themselves in quite fundamental ways at certain moments. These transformations occurred because each regime ran into frictions that eventually made the further expansion of commodity frontiers impossible without fundamental changes. Preliminary investigations suggest that these regimes succeed each other at an accelerating pace, going from 400 years for the first regime to 30 years for the third. Market convergence and the increased momentum of technological change as well as growing resistance might account for fundamental frictions occurring more frequently. But these are assumptions that we want to test. In doing so, our point of departure is that commodity regimes encounter frictions along three central axes: (1) ecological frictions, (2) competition for land and labour and (3) social resistance, including counternarratives that contest the existing commodity regime.
To begin with, ecological frictions have imposed an important set of limitations on commodity frontier expansion. In the early phases of capitalism, ecological frictions such as declining soil fertility forced the production of agricultural commodities or minerals to move into new areas. Later, crippling diseases that swept through populations of uniformly bred crops and livestock instigated the quest for disease-resistant plant varieties, broad-spectrum pesticides and fungicides, more powerful vaccines and tightly controlled production systems with labour-disciplining biosecurity measures. Ecological damage caused by commodity agriculture or extraction rendered many frontiers unproductive or uninhabitable, often causing the end of the production of this particular commodity at this location. Deserted mining regions around the globe, the problem of severe water shortages and salinization surrounding the irrigated cotton fields in Central Asia, the wheat frontier in the USA and its degeneration into the infamous Dust Bowl in the 1930s are all examples of collapsed commodity frontier zones caused by ecological frictions.
Likewise, frictions arising from competition for land and labour have destabilized many commodity frontier zones. Rebellions of enslaved and servile women and men were a permanent feature of the frontier zones during the early capitalist commodity regime. Slavery became increasingly untenable due to resistance by enslaved people such as the revolution in Saint-Domingue in 1791 and the large-scale uprising in Jamaica in 1832, as well as the emergence of an abolitionist movement. Labour shortages were a perennial problem for commodity frontiers; in fact, during the many centuries in which mechanization proceeded slowly, labour supplies were the main limiting factor for commodity production. Once slavery was abolished and tropical commodity agriculture left its plantation enclaves, it increasingly inserted itself into existing rural societies, where it had to compete for land and labour. Thus the expansion of global capitalism increasingly encroached upon existing land rights, often using large-scale destruction of communal land ownership and outright dispossession and displacement of peasants and indigenous populations. This massive accumulation through dispossession has been a source of permanent, often violent, conflict.
Across the four regimes, commodity regime expansion produced other social frictions as well. As merchants, chartered companies, colonial officials, mining capitalists and frontier planters, amongst others, expanded into new territories and new productive activities, appropriated new land, extracted new materials and incorporated new labour, their ongoing attempts to externalize the social and environmental costs of production and reproduction were met with resistance. Coupled with ecological limits, such resistances can prefigure and compel regime change, pushing capital to seek new frontiers. During the early capitalist regime, for instance, revolt and desertion (maroonage) were responses to enslavement and servitude and helped propel the shift to sharecropping and wage labour. During the industrial regime, strikes, working-class political mobilizations and unionization, together with anti-colonial movements, joined rebellion and escape as forms of insurgency. Later, as corporate-headed transnational commodity chains integrated global production in the corporate regime, resistance also became transnational. Workers struggled against capital's 'race to the bottom', indigenous communities against the polluting and degrading activities of transnational corporations that often operate with near impunity, peasants against the incursions of transnational capital into agrarian spaces. In some cases, resistance also provided counternarratives and counterproposals for different ways of organizing political, economic, social and ecological life: in recent years, for example, by seeking collaborative, locally embedded, equitable or non-growth-based forms of production. Because counter movements suggest some of the key themes around which people are exploited or oppressed, studying resistance within regimes is a crucial part of defining and analysing the regimes themselves, and helps to explain how over time they changed fundamentally.
Commodity regimes and their fixes
Frictions and resistances were part of each commodity regime over the centuries, usually culminating in systemic crises. In response, new commodity regimes emerged, characterized by particular fixes or combinations of fixes. Each ensemble was particular to a specific moment in the history of global capitalism. The fixes, as described in Figure 1, were (A) the spatial fix, (B) the technological fix, (C) the state-led fix and (D) the corporate fix. Although each new fix was hailed as the master key to resolving the then-current limits to commodity frontier expansion, none was entirely new. Moreover, older fixes did not disappear. Spatial fixes, for example, remain powerful today, usually at the expense of tropical rain forests, grasslands, indigenous communities and biodiversity.
Nonetheless, the spatial fix was most pronounced during the early capitalist commodity frontier regime, when the state was distant and quite weak and most increases in output came from additional inputs of land and labour. Until the late eighteenth century, the consequences of soil exhaustion caused by sugar, tobacco or coffee plantations, for example, were almost always overcome either by using additional labour to perform manuring or by adding more land for crop rotation. Labour shortages were addressed by immigration of either free or enslaved workers, as happened in most of the plantations of the New World. From the late eighteenth and particularly the early nineteenth century onward, a new set of fixes emerged to join the continuing spatial expansion. In the early stages of capitalism, technology had only a limited impact on productivity. The Industrial Revolution changed this, as it resulted in massive mechanization of both agriculture and mining and immense improvements in transportation, although these innovations entered the commodity frontiers unevenly, increased rural inequalities through class differentiation and laid the ground for future ecological distribution conflicts.
Increasingly, the state came to play a more prominent role in commodity frontier expansion. Where the expansion of commodity frontiers had previously been driven by a particularly violent type of capitalism exerted by merchant capital and sanctioned by the state, in the nineteenth and twentieth centuries states attained the infrastructural capability needed to shape the conditions under which frontiers operated. Infrastructure construction, financial legislation, sponsored migration of contract labour and the legislation and implementation of new property rights regimes all featured as prominent new forms of state intervention. While technology enabled global commodity production to soar after the systemic crisis that attended the abolition of the slave trade and slavery itself, these technological innovations would not have materialized without the new role played by the state.
Science, moreover, produced leaps in agricultural productivity, including the Green Revolution at the end of the industrial commodity regime. Differential access to technology and the ways that capitalist agriculture created new commodified inputs (seeds, fertilizers, pesticides, etc.) engendered new dependencies for farmers, 30 thus leading to further concentrations of power at the commodity frontier. The current stage of biotechnology enables the integration of the food and energy sectors and a profound appropriation of life through seed patents and intellectual property protections, leading to further large-scale dispossessions. In fact, over time, the state-led fix has paved the way for a transnational corporate enterprise to become the dominant force on commodity frontiers, which continues, not without contestation, in the contemporary regime.
Capitalism has been driving the creation of increasingly integrated and complex commodity chains, massively changing the relationship between commodity frontiers and processing industries. Significant convergence of commodity prices appeared by the late eighteenth century; information systems connected the global countryside with industrial centres by the mid-nineteenth century; the concentration of power in the hands of a few end producers intensified during the twentieth century, especially in the 1980s when commodity trade and financial institutions became tightly connected. Currently, a narrow range of mostly transnational corporations controls much of the expansion of commodity frontiers through direct ownership of land and means of production, and through contracting, subcontracting, digital data collection and general control of commodity circulation. 31 Much of the economic risk and many of the environmental and social costs of corporate-led production are dumped on marginalized people and places, while new calls for sustainability are co-opted into opportunities to deepen Green Capitalism in the new commodity regime.
The framework of commodity regimes takes commodity frontiers as an analytical point of departure, sensitizing us to geopolitical shifts, the role of the state, technology, ecology, and last but not least, local agency. This is to say that capitalism develops not only through incorporation and commodification, but also through what resists it. Non-human nature is not flat, constant or given: soils give out, water becomes scarce or toxic, wells run dry, 'weeds' and microorganisms develop resistance to agrochemicals and antibiotics. At the same time, social movements and civil society initiatives compel capital to adjust to the demands of organized groups of people, sometimes creating new avenues and segmented markets for capital such as fair trade and organic labelling in food, and sometimes creating a barrier for capital to overcome, such as wage labour.
A research strategy
It is challenging to translate the framework of commodity regimes into a workable research strategy; this challenge, of course, also pertains to global history's project of making wide-ranging comparisons across large time spans and geographic regions. 32 A commodity frontiers research project directly engages with ongoing debates about the ambitions, promises and limits of global and world history. It requires building collaborative and discipline-crossing research networks. It offers methods and sources for a history that aims to surpass or delegitimize the old Eurocentric stories of the rise of a unified world. 33 We aim for an inductive approach that studies localized experiences and global systemic movements and past experiences and contemporary problems within a single analytical framework. This requires that we overcome the fragmented and individualized character of archival research and fieldwork, which, because they are so labour intensive, usually produce geographically and temporally limited work. The immense library of existing case studies does not add up to a systematized body of knowledge. Another challenge, one that again pertains to global history more broadly, is to move beyond privileging the national level as a unit of analysis. Many historical indicators of development (per capita income, demography, migration, balance of trade, etc.) are only available at the national level, but commodity frontier zones are usually subnational units, and sometimes cross national borders. 34 National data collections remain indispensable, but data collection at the subnational and transregional level is equally important.
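To make the subnational point concrete, the sketch below reaggregates district-level records into analytically defined frontier zones that cut across national borders. The data, column names and zone assignments are hypothetical illustrations of the workflow, not an existing dataset.

```python
# Hypothetical sketch: reaggregating subnational production records into
# commodity frontier zones that cross national borders. All values and the
# zone mapping are illustrative placeholders, not real data.
import pandas as pd

records = pd.DataFrame({
    "country":   ["Brazil", "Brazil", "Paraguay", "Argentina"],
    "district":  ["District A", "District B", "District C", "District D"],
    "commodity": ["soy", "soy", "soy", "soy"],
    "year":      [1995, 1995, 1995, 1995],
    "tonnes":    [5.0e6, 8.0e6, 2.0e6, 1.0e6],
})

# A frontier zone is an analytical unit, not a national one.
zone_of = {"District A": "soy frontier", "District C": "soy frontier",
           "District D": "soy frontier", "District B": "older core"}
records["zone"] = records["district"].map(zone_of)

per_zone = records.groupby(["zone", "commodity", "year"])["tonnes"].sum()
print(per_zone)
```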
While still difficult, an inductive and multi-scalar approach is increasingly feasible thanks to innovative technologies of data gathering, analysis and visualization. Digital humanities techniques let us look at a myriad of sources to illuminate the workings of commodity frontiers at the local level and may release us from over-relying on data aggregated at the national level. Publications on particular commodity frontier zones that span decades and perspectives are already available in digital format, and we can draw on their research findings. Moreover, since frontier zones are marked by the commodification of land and labour, they have been relatively well documented by colonial administrations, revenue records and so on. In recent years archives have been digitizing their holdings of documents, historical artefacts and newspapers. Archaeologists working on the Baltics, Southern Spain and the Eastern Mediterranean, for example, have employed excavations, paleoenvironmental sources and laser techniques that would help us reconstruct even the oldest commodity frontiers. 35 If these sources can be digitally connected and automated data-mining processes applied to them, it becomes feasible to extract a wide range of data from very disparate sources in different languages. What is more, semantic technology can help us create structured data sets from immense collections of fuzzy data. Analysis of these sources within a global comparative context, enhanced by visual representations such as maps and graphs, can provide clues for new hypotheses to be tested against the assembled corpus of data or even Linked Open Data. These technologies facilitate the integration of geographic knowledge from diverse data sources, and they have been developing rapidly, offering the promise that they will soon become more customer-friendly. 36

To deliver on the promises of a commodity frontiers-centred analysis of global capitalism we need to draw on approaches and disciplines that often stay aloof from one another. 37 Combining data gathering and analysis techniques with fieldwork will bridge the divide between disciplines that study the past and those that study the present. Global historians need to draw on the methods of ecological economists and other social scientists, on the Environmental Justice Atlas https://ejatlas.org/, for example, to map today's ecological inequalities. Combining archival research, fieldwork and digital techniques and deploying an inductive methodology at different spatial and time scales will enable us to understand the shifts of key commodity frontiers and the emergence of particular commodity regimes, and thus to redraw our understanding of global capitalism.
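As one concrete illustration of the semantic tooling invoked above, the sketch below resolves fuzzy commodity names from digitized records against Wikidata's public entity-search API in order to obtain shared identifiers before linking datasets. The workflow and the search terms are our assumptions, not a tool used by the authors.

```python
# Hypothetical sketch: mapping fuzzy commodity names onto stable Wikidata
# identifiers, a first step towards Linked Open Data integration. The search
# terms are illustrative; the endpoint and parameters are Wikidata's public
# wbsearchentities interface.
import requests

def wikidata_candidates(term: str, limit: int = 5):
    """Return (id, label, description) candidates for a free-text term."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={"action": "wbsearchentities", "search": term,
                "language": "en", "format": "json", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return [(e["id"], e.get("label", ""), e.get("description", ""))
            for e in resp.json().get("search", [])]

# Example: normalize the commodity names found in disparate sources.
for term in ["sugar cane", "raw cotton", "soy bean"]:
    print(term, "->", wikidata_candidates(term, limit=1))
```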
*** Studying the global history of commodity frontiers is crucial to coming to terms with important aspects of world history over the past six centuries. But this project is just as important when it comes to understanding our contemporary dilemmas. Two recent reports commissioned by the European Union recommend that the EU consider its giant global ecological footprint. 38 These reports, along with international reporting more generally, suggest that we have arrived at a new state of unsustainability. But as alarming as this is, when we look at commodity frontiers over the very long term, we immediately see that our contemporary dilemmas are not new. The consumption of massive amounts of extra-European resources is an old story that goes back at least 600 years and has played a major role in Europe's economic ascendancy. There is a similar, though shorter, story for the USA and other emerging powers that arrange their economies through a mix of domestic production and global trade in commodities that usually originate in the countryside.
To disentangle the complexities that may derail today's attempts to frame a global agenda of sustainable growth, we have to understand in new historical depth the dynamics of appropriation of nature, enclosures of land, regimes of labour control, transfers of capital and knowledge, and the concomitant elimination of ecological and social knowledge. At the same time, global history can deliver on some of its fundamental promises by looking systematically at global change over very long time periods while remaining attentive to history told from the bottom up. A new global social history will allow us to analyse consecutive commodity regimes and understand the ways they have created unequal power relations and massive inequities that shape the present. Studying commodity frontiers can help us identify historical capitalism as a process rooted in a profound restructuring of rural societies and their relation to nature, and lets us connect processes of extraction and exchange with degradation, adaptation and resistance in rural peripheries. When we look at the history of the sixteenth-century sugar frontier or the mid-twentieth-century soy frontier as moments in the unfolding of various commodity regimes over the past six centuries, we gain novel perspectives not just on the history of capitalism but also on our contemporary dilemmas. Global historians can make a unique contribution to this conversation. 36 See, for example, the work by Reinaldo Funes of Cuba's Fundación Nuñez Jimenez, who has been leading the way in developing a historical GIS for Cuba, focused on mapping changing land use and ownership, the spread of the commodity-plantation economy and its environmental impact. http://www.fanj.org. 38 Feasibility study on options to step up EU action against deforestation (January 2018); Study on the environmental impact of palm oil consumption and on existing sustainability standards (February 2018).
|
2021-07-26T00:05:55.495Z
|
2021-06-10T00:00:00.000
|
{
"year": 2021,
"sha1": "67159a65e1916ba796dd9a926ff91c5ad34111d2",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C9675685801E74826468C1FC567982B5/S1740022820000455a.pdf/div-class-title-commodity-frontiers-and-the-transformation-of-the-global-countryside-a-research-agenda-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "c25aed684c0209a7aa26665d0de108e36bf2c4b0",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
}
|
248442772
|
pes2o/s2orc
|
v3-fos-license
|
The Influence of Frailty Syndrome and Dementia on the Convenience and Satisfaction with Oral Anticoagulation Treatment in Elderly Patients with Atrial Fibrillation
Background: The impact of frailty syndrome (FS) and dementia on the convenience and satisfaction with oral anticoagulation (OAC) treatment in atrial fibrillation (AF) patients is not well-known. Aim: To assess the impact of FS and dementia on the convenience and satisfaction with OAC treatment in 116 elderly (mean age 75.2, SD = 8.2) patients with AF. Methodology: A self-administered questionnaire was used in the study to collect basic socio-demographic and clinical data. The Tilburg Frailty Indicator (TFI) questionnaire was used to assess the presence of FS, the Mini Mental State Examination (MMSE) to assess cognitive impairment (CI), The Perception of Anticoagulant Treatment Questionnaire Part 2 (PACT-Q2) to assess convenience and satisfaction with OAC treatment, and the Arrhythmia-Specific Questionnaire in Tachycardia and Arrhythmia (ASTA) to assess quality of life (QoL). Results: Multivariable analysis showed the occurrence of dementia (β = −0.34 and β = −0.41 for convenience and satisfaction, respectively; both p < 0.001) and prior major bleeding (β = −0.30 and β = −0.33, respectively; both p < 0.001) to be significant negative predictors of the convenience and satisfaction domains. Analysis showed a significant relationship between convenience and satisfaction and the overall result of the ASTA (r = −0.329; p < 0.001 and r = −0.372; p < 0.001, respectively). Conclusions: Elements of geriatric syndrome, such as FS and dementia, adversely affect treatment convenience and satisfaction with OAC treatment in AF. It has been shown that better convenience and satisfaction with OAC treatment translates into better QoL. There were no differences between satisfaction and convenience and the type of OAC treatment (vitamin K antagonists (VKA) vs. novel oral anticoagulants (NOAC)).
Introduction
Decisions regarding oral anticoagulation (OAC) treatment in older adults with atrial fibrillation (AF) require a holistic approach. Comprehensive geriatric assessment (CGA), including frailty syndrome (FS), cognitive impairment (CI), and bleeding risk, may be important in clinical decisions in patients with AF. FS and CI are both conditions associated with a risk of mortality, but the presence of FS and CI does not appear to be a contraindication to anticoagulation in AF. There is scant evidence of net clinical benefit in patients with frailty and cognitive functional impairment as a surrogate marker of biological age [1].
In the literature, more attention is being given to patient-reported outcomes (PROs). PROs provide important information from the patient's perspective: subjective measures relating to patient experience that quantify patient satisfaction, adherence, or quality of life (QoL) [2]. Anticoagulation convenience and satisfaction are important PRO markers. The links between physical and cognitive conditions and anticoagulation convenience and satisfaction are not well established. Higher values of convenience and satisfaction during novel oral anticoagulant (NOAC) treatment would be expected compared to vitamin K antagonists (VKA).
Better convenience and satisfaction with OAC treatment is important, as it translates into better adherence and QoL [3]. Only a few studies have investigated the effect of FS and cognitive impairment on antithrombotic treatment satisfaction, and they do not assess the effect of dementia [4]. There is also a lack of papers evaluating the effect of FS and dementia on the convenience of OAC treatment in this group of patients.
Aim of the Study
We aimed to assess the impact of FS and dementia on treatment convenience and satisfaction with OAC in patients with AF. The study also aims to examine how the type of OAC treatment, namely VKA or NOAC, affects convenience and satisfaction and assess the impact of convenience and satisfaction on QoL.
Study Settings and Participants
The study was carried out on a group of 116 elderly (mean age 75.2, SD = 8.2, 64 women and 52 men), randomly selected in-patients with non-valvular AF hospitalized in the Department of Cardiology in the T. Marciniak Lower Silesian Specialist Hospital in Wroclaw. The study was cross-sectional and questionnaire-based. The inclusion criteria were: nonvalvular atrial fibrillation, age over 60, taking oral anticoagulation therapy for at least 6 months, and providing written and informed consent to participate in the study. The exclusion criteria were: condition of the patient requiring intensive cardiac care, lack of written and informed consent to participate in the study, mental disorders, and a history of stroke within the last 6 months. All patients qualified to participate in the study were asked to give their written consent to the study and to fill out the study questionnaires in the presence of the investigator. In our study, 19 patients had a history of stroke more than 6 months before participation in the survey.
Research Tools
Basic sociodemographic and clinical data were obtained using the authors' questionnaire and analysis of medical records. The severity of disease symptoms was graded according to the European Heart Rhythm Association (EHRA) classification. The CHA2DS2-VASc scale was used to determine the risk of thromboembolism. The study also used the HAS-BLED bleeding risk assessment scale [5].
The assessment of the occurrence of frailty syndrome was based on the Tilburg Frailty Indicator (TFI) questionnaire in the Polish-language version. The TFI was developed on the basis of an integral model of frailty. The scale consists of two parts: part A, concerning health determinants of FS, and part B, concerning 15 questions on the occurrence of the main components of frailty. The total result of TFI can range from 0 to 15 points, and FS is recognized at 5 points and above [6,7].
The Mini Mental State Examination (MMSE) questionnaire was used to determine cognitive function. The total score of the questionnaire may range from 0 to 30 points, with lower scores indicating cognitive impairment and dementia. A score of 27-30 points means no cognitive dysfunction, while a score of 24-26 points is equivalent to cognitive impairment without dementia. Dementia is recognized by a score of 23 points or fewer [8,9].
To assess quality of life, a standardized research tool was used: Part III of the Arrhythmia-Specific Questionnaire in Tachycardia and Arrhythmia (ASTA) in the Polish-language version. Part III of the questionnaire was used in the study to assess the influence of arrhythmia on the patients' daily life and HRQoL. The ASTA HRQoL total scale score ranges from 0 (best possible HRQoL) to 39 (worst possible HRQoL). Higher scores reflect a more negative impact of arrhythmia on HRQoL [10][11][12].
The evaluation of antithrombotic treatment satisfaction and convenience was performed using the Perception of Anticoagulant Treatment Questionnaire Part 2 (PACT-Q2). The PACT-Q2 measures convenience and satisfaction and is administered to patients once treatment is ongoing. The total score for each domain ranges from 0 to 100, with higher values corresponding to a higher level of convenience and satisfaction with OAC treatment [13,14].
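To make the classification cutoffs above concrete, here is a minimal sketch encoding them as stated in the text. The function names are ours, and the item-level scoring of the PACT-Q2 (how raw items map onto the 0-100 domain scale) is not specified in this paper, so only the stated cutoffs and ranges are used.

```python
def tfi_frail(total: int) -> bool:
    """Tilburg Frailty Indicator: total ranges 0-15; FS recognized at >= 5 points."""
    assert 0 <= total <= 15
    return total >= 5

def mmse_category(score: int) -> str:
    """MMSE cutoffs from the text: 27-30 normal, 24-26 CI without dementia,
    <= 23 dementia."""
    assert 0 <= score <= 30
    if score >= 27:
        return "no cognitive dysfunction"
    if score >= 24:
        return "cognitive impairment without dementia"
    return "dementia"

# PACT-Q2 domain scores range 0-100; higher = more convenience/satisfaction.
print(tfi_frail(6), mmse_category(22))  # -> True, 'dementia'
```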
Ethical Considerations
The study was approved by the independent Bioethics Committee of the Wroclaw Medical University, Poland. The study was carried out in accordance with the tenets of the Declaration of Helsinki and guidelines of good clinical practice.
Statistical Analysis
The statistical analysis was performed using Statistica 9.0 EN. The level of statistical significance was set at p < 0.05. The Mann-Whitney test was used to compare quantitative variables between two groups, while the Kruskal-Wallis test (followed by Dunn's post hoc test) was used for more than two groups. The relationship between two quantitative variables was assessed with Spearman's coefficient of correlation. Uni- and multivariable linear regressions were used to analyze the impact of potential predictors on quantitative variables.
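The same nonparametric tests can be reproduced with open tooling. The sketch below uses SciPy and the third-party scikit-posthocs package; the paper used Statistica, so this is an equivalent illustration, not the authors' code, and the synthetic arrays merely mimic the group means, SDs, and approximate group sizes reported in the Results.

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # third-party package for Dunn's post hoc test

rng = np.random.default_rng(0)
# Illustrative data modeled on reported satisfaction means/SDs and group sizes
frail     = rng.normal(55.5, 13.3, 78)   # FS present (~67.2% of 116)
not_frail = rng.normal(65.0, 11.1, 38)
normal    = rng.normal(65.8, 12.7, 37)
ci        = rng.normal(61.4, 11.0, 37)
dementia  = rng.normal(49.9, 11.0, 42)

# Two groups: Mann-Whitney U test
u, p_mw = stats.mannwhitneyu(frail, not_frail, alternative="two-sided")

# More than two groups: Kruskal-Wallis, then Dunn's post hoc
h, p_kw = stats.kruskal(normal, ci, dementia)
p_dunn = sp.posthoc_dunn([normal, ci, dementia])

# Spearman correlation between two paired quantitative variables
satisfaction = rng.normal(58.7, 13.3, 116)
asta_total = rng.normal(15.0, 8.0, 116)  # ASTA scale values are illustrative
rho, p_rho = stats.spearmanr(satisfaction, asta_total)
print(p_mw, p_kw, rho)
```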
Results
The average age of the studied group was 75.2 years (SD = 8.2). More than half of the respondents (56%) were patients with diagnosed permanent AF. A total of 66.4% of patients received VKA treatment. FS was diagnosed in 67.2% and dementia in 36.2% of patients. Basic data are presented in Table 1.
The analysis of the PACT-Q2 questionnaire showed an average score of the convenience domain at 81.3 points, while the satisfaction domain at 58.7 points.
Based on comparative analysis, lower convenience domain values were observed in the group with diagnosed FS (mean ± SD 78.5 ± 14.3) compared to those with no FS (mean ± SD 88.3 ± 11.8), and in those with dementia (mean ± SD 73.8 ± 14.8) compared to those with cognitive impairment without dementia (mean ± SD 88 ± 9.2) and those with normal cognitive function (mean ± SD 84.4 ± 13.8).
Lower satisfaction values were reported in the group of patients with diagnosed FS (mean ± SD 55.5 ± 13.3) than in the group with no FS (mean ± SD 65 ± 11.1), and in the group with dementia (mean ± SD 49.9 ± 11) than in the groups with cognitive impairment without dementia and with normal cognitive function (mean ± SD 61.4 ± 11 and 65.8 ± 12.7, respectively). The data are presented in Table 2.
In multivariable analysis, prior major bleeding (β = −0.33; p < 0.001) and the presence of dementia (β = −0.41; p < 0.001) were significant negative predictors of treatment satisfaction. Self-administration of medicines was a significant positive predictor (β = 0.19; p = 0.02). The data are presented in Table 4. In the ASTA questionnaire, higher scores reflect a more negative impact of arrhythmia on QoL (the fewer the points, the better the QoL), whereas in the PACT-Q2 higher values correspond to a higher level of convenience and satisfaction with OAC treatment. Analysis showed a significant relationship between convenience and satisfaction and the overall result of the ASTA (r = −0.329; p < 0.001 and r = −0.372; p < 0.001, respectively) and its physical domain (r = −0.356; p < 0.001 and r = −0.374; p < 0.001, respectively). There was also a significant correlation between the PACT-Q2 satisfaction domain and the mental domain of the ASTA (r = −0.303; p < 0.001). The data are presented in Table 5.
Discussion
There are few studies assessing satisfaction with OAC treatment in the elderly with AF [3,4], or both satisfaction and convenience [15]. Convenience and satisfaction with OAC treatment are important factors in assessing the effectiveness of treatment. Patients who are satisfied with OAC treatment show better clinical parameters, such as the International Normalized Ratio (INR), and better QoL [3].
Age is associated with the risk of frailty syndrome (FS) and dementia. Both FS and CI are among the most frequently mentioned reasons for lack of adherence [16]. The incidence of FS in patients with atrial fibrillation is estimated at between 4.4% and 75.4% [17]; 67.2% of patients in our study had FS. FS is one of the risk factors for earlier development of cognitive impairment [18].
It is still unknown whether antithrombotic treatment delays the development of CI. The relationship between AF and CI, including dementia, has been confirmed by meta-analyses. Kalantarian et al., in a meta-analysis of 89,907 participants, showed that AF is associated with a 40% increase in the risk of CI regardless of stroke [19]. Wozakowska-Kaplon et al. demonstrated that permanent arrhythmia among people over 65 years of age may be associated with lower results on the MMSE questionnaire, and CI was diagnosed in 43% of respondents [20]. In our study, CI without dementia occurred in 31.9% of respondents, while dementia occurred in 36.2%.
In our study, using the PACT-Q2 questionnaire, the average score for convenience of OACs was 81.3 ± 13.4 points, and the satisfaction score was 58.7 ± 13.3 points. Brüggenjürgen et al. analyzed 5049 patients using the PACT-Q2 questionnaire and showed similar mean scores for both convenience (82.9 ± 17.3) and satisfaction (63.4 ± 15.9 points) [21]. Mull et al. showed a mean score of 87.88 (SD = 16.69) for convenience and a mean score of 67.86 (SD = 19.96) for satisfaction with OAC treatment in elderly patients with AF [15]. Gospos and Bernaitis showed that the mean overall score in AF patients was 72.26 ± 10.71 for convenience and 67.97 ± 18.58 for satisfaction [22].
A few studies in the literature evaluate the effect of FS and CI on OAC treatment satisfaction [4]. The SAGE-AF (Systematic Assessment of Geriatric Elements in Atrial Fibrillation) registry prospectively included 1,444 patients over 65 years of age with AF who received OAC. The effect of six components of geriatric syndrome, namely FS, CI, social support, depression and anxiety, and hearing and vision disorders, on OAC treatment satisfaction measured by the Anticlot Treatment Scale (ACTS) was studied. The incidence of FS was significantly associated with lower satisfaction scores in univariate analysis [4]. Vision disorders, depression, and anxiety, all more common in elderly populations, had a negative impact on treatment satisfaction [4]. In our self-reported questionnaire study, patients with FS and dementia scored worse in satisfaction and convenience of OAC treatment. In univariate analysis, frailty syndrome and dementia were significant negative predictors of satisfaction and convenience with OAC treatment, although in multivariate analysis only the presence of dementia remained a significant negative factor affecting both satisfaction and convenience.
Our study showed that a higher EHRA classification (EHRA III and EHRA IV) had a negative effect on satisfaction with OAC treatment in the univariate analysis. The severity of symptoms in AF can affect not only the patient's daily life and function but also the patient's sense of satisfaction with anticoagulation treatment.
Oral anticoagulation therapy is associated with an increased risk of bleeding. In our study, the history of bleeding determined the convenience and satisfaction results. Patients with a history of bleeding achieved lower mean scores for convenience (72.9 vs. 83.3 points) and satisfaction (49.7 vs. 60.9 points) compared to patients without previous bleeding. In the univariate and multivariable analysis, a history of bleeding was also a significant negative factor affecting treatment convenience and satisfaction. Weernink et al. showed that the presence of bleeding in the medical history is one of the main factors that determine patients' perception of OAC therapy [23].
Additionally, in our study, self-administration of medicines without the support of family members was associated with higher values on the PACT-Q2 questionnaire and was a significant positive predictor of convenience and satisfaction. Thus, self-administration of medications is an important factor affecting patients' feelings while receiving oral anticoagulant therapy. Furthermore, in our study, disease duration > 5 years had a negative effect on satisfaction with OAC treatment in univariate analysis.
In clinical practice, VKA and NOAC are used in anticoagulation treatment. Although we expected higher values of convenience and satisfaction in the group of patients taking NOACs, our study did not show statistically significant differences depending on the type of OAC treatment (VKA or NOAC), although patients taking NOACs achieved higher convenience values. Many studies have shown no differences in satisfaction between VKA treatment and NOAC. The multi-center SAFARI study, conducted on a group of 405 AF patients switching from VKA to NOAC (rivaroxaban), showed, on the basis of the ACTS scale, improvement in treatment satisfaction after three months of treatment with rivaroxaban compared to the baseline condition (VKA) [24]. The RE-SONANCE registry study (a patient convenience study) showed better convenience and satisfaction, assessed using the PACT-Q2 questionnaire, in patients treated with dabigatran compared to VKA [25].
OAC therapy can affect a patient's subjective sense of QoL due to the need for lifestyle changes, dietary restrictions, and other limitations. The QoL of patients receiving OAC treatment has received greater attention in recent years. There are available studies that independently evaluate QoL and satisfaction with OAC in the AF population [26], but there are few studies that indicate the effect of satisfaction with treatment on QoL among patients with diagnosed atrial fibrillation [27]. Our study showed a significant relationship between convenience and satisfaction with OAC treatment and quality of life. The available literature indicates that a higher level of satisfaction during OAC therapy is associated with better compliance and adherence to therapeutic recommendations and therefore with improvement of QoL [3,28].
Guidelines for the management of AF emphasize the role of the patient in an integrated management approach to the therapeutic process. Integrated treatment of AF calls for multidisciplinary teams whose composition should depend on the individual needs, values, goals, and preferences of the patient [5]. Patients with FS and cognitive impairment, including dementia, would benefit from multidisciplinary teams including not only specialists and AF nurses but also family members or caregivers.
Conclusions
Elements of geriatric syndrome, such as FS and dementia, adversely affect convenience and satisfaction with OAC treatment in AF. However, in our study, the only independent predictor of treatment satisfaction and convenience was dementia. According to the guidelines, FS is not a contraindication to starting OAC, so our study supports the guidelines [5] (Appendix A.1).
Interestingly, among sociodemographic factors, we also noted a positive effect of professional activity on convenience and satisfaction with OAC treatment. There were no differences between satisfaction and convenience and the type of OAC treatment (VKA vs. NOAC). It has been shown that better convenience and satisfaction with OAC treatment translates into better QoL. Evaluation of treatment convenience and satisfaction during OAC therapy in patients with AF with FS and dementia should therefore be part of clinical practice. Understanding treatment convenience and satisfaction and their determinants can help in planning and succeeding with OAC treatment.
|
2022-04-30T15:17:01.778Z
|
2022-04-28T00:00:00.000
|
{
"year": 2022,
"sha1": "88b480acf15ec5453fa9f7875dd7d90e34bed12b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/9/5355/pdf?version=1651128349",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef174125ce561e870cb66e1621b7db686b3e4f21",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
9193138
|
pes2o/s2orc
|
v3-fos-license
|
Reliability of computer-assisted surgery as an intraoperative ruler in navigated high tibial osteotomy
Background Computer-assisted surgery (CAS) can act as an intraoperative ruler in high tibial osteotomy (HTO) to visualize continuously the leg during surgery. Questions The aim of the study is to evaluate the accuracy of CAS with respect to preoperative planning and postoperative deviation from the planned leg axis in HTO. In addition, the influence of surgeon experience as well as operation time and perioperative complications are analyzed. Methods A prospective multicenter study case series with follow-up at 6 weeks was performed in six centers. Medial open-wedge HTO with Tomofix® was done using computer assisted navigation technique with the Brainlab VV Osteotomy 1.0 module. Results Fifty-one patients with medial gonarthritis were treated with navigated HTO. The follow-up rate was 98%. The majority of HTO–CAS patients fell within the tolerated limit of ±3° for leg axis deviation, however, seven patients were reported with deviations outside of this range: three patients had deviations of >3°–4.5° and four patients >4.5°, respectively. Eight intraoperative complications were documented, partially resulting from technical problems associated with the navigation system. During the 6-week follow-up period, three postoperative complications were experienced, all not associated with navigation technology. Conclusions In about 85% of cases, a perfect result in terms of deviation of the planned mechanical leg axis could be achieved. Computer assistance in HTO proved to be a helpful tool regarding intraoperative control of leg axis. Level of evidence Level I, High quality prospective study (all patients were enrolled at the same preoperative planning point with ≥80% follow-up of enrolled patients).
Introduction
Today the most common application in the field of computer-assisted surgery is navigated total knee arthroplasty.
As several prospective randomized studies have shown, the standard deviation of the mechanical leg axis is reduced significantly [4,14,16]. The mechanical leg axis is one important factor influencing the mechanical load distribution in the knee joint. According to the biomechanical studies of Hsu et al. [6], in a 1° varus deformity 75% of the knee joint load passes through the medial tibial plateau, but this load increases to over 90% in a 6° varus deformity.
An effective method to decrease the load of the medial tibial plateau is a valgisation of the leg. High tibial osteotomy (HTO), first reported by Jackson in 1961 [8] and later popularized by Coventry [2] in the United States, is a common procedure in the management of medial compartment osteoarthritis (OA) of the knee and is often reported as an effective method of management [5,7,9].
In principle, two different techniques are available: (1) the closed-wedge technique, and (2) the open-wedge technique. The medial open-wedge HTO is considered to be superior to the closed-wedge procedure in terms of leaving the fibular bone unaltered. It also does not lead to a loss of bone and is assumed to encounter less iatrogenic soft tissue damage [5,10,15].
The TomoFix® implant was designed to overcome previous plate limitations in performing medial open-wedge HTO. According to the designers of this plate-and-screw system, the difference from conventional systems is that the screws can be locked in the plate hole, which provides angular stability and no primary or secondary tilting of the fragment [17]. These advantages are believed to prevent primary and secondary loss of correction and, ultimately, to improve functional outcomes [11,17].
Ideal correction of the leg axis is difficult to achieve and postoperative malalignment is observed following HTO [3,12]. A permanent surgical challenge is to achieve the planned leg axis intraoperatively. Computer-assisted navigation systems may improve precision and accuracy of the leg-axis correction, while offering simulation tools and being capable of predicting the postoperative alignment.
The hypothesis of this study is that CAS guidance in HTO is a reliable tool with respect to preoperative planning and postoperative deviation from the planned leg axis.
Study design
After approval by the local ethical review boards (ERBs), a prospective case series was conducted including patients with medial gonarthritis or genu varum congenitum undergoing medial open-wedge HTO using computer-assisted navigation at six European trauma centers.
Relevant patient demographics
Patients were included if they were aged 18 years or older and provided written informed consent, as well as commitment to attend the planned follow-up prior to participation. In addition, candidates with an intact lateral joint compartment, as well as physiological, age-appropriate range of motion in the hip, knee, and ankle of the affected leg, were included for medial open-wedge HTO surgery. Exclusion criteria included patients with a body mass index ≥35, a history of drug or alcohol abuse, bone healing disorders, the presence or history of active malignancy or systemic disease, physical or mental incapacity, legal incompetence, systemic or severe local inflammation or infections, and lastly any patient receiving radio- or chemotherapy before, during, or within a period of 1 year. This analysis, including 46 patients, had 91% power to identify a relevant deviation from 0° of 1° at the 0.05 significance level, assuming a standard deviation of the angle of 2°.
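The stated power can be checked with a standard one-sample t-test power calculation; the sketch below uses statsmodels and is our illustration, not the authors' original computation.

```python
# One-sample t-test power: detect a 1 degree mean deviation from 0 with
# SD = 2 degrees (effect size d = 1/2 = 0.5), n = 46, alpha = 0.05.
from statsmodels.stats.power import TTestPower

power = TTestPower().solve_power(effect_size=1.0 / 2.0, nobs=46,
                                 alpha=0.05, alternative="two-sided")
print(f"power = {power:.2f}")  # ~0.91, matching the reported 91%
```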
Description of surgery
The surgical technique is described in detail in the Synthes TomoFix™ Osteotomy System technique guide. Navigation was carried out using the Brainlab VectorVision Osteotomy module Version 1.0 (FDA 042513).
Minimally invasive reference array units are mounted ideally in the medial cortex of the tibia and ventral medial or ventral lateral side of the femur. Registration automatically starts with the definition of the center of the femoral head, which is found by pivoting the leg in the hip joint. When a precision of less than 2 mm is achieved, the software automatically proceeds to the next step. Anatomical landmarks to calculate the center of the knee and ankle are registered percutaneously with a pointer. The first landmark is the medial malleolus followed by the lateral malleolus. The medial tibial plateau border is registered, as well as the lateral tibial plateau border. The anteroposterior (AP) direction is then defined. The last points to be digitalized are the medial and lateral epicondyle. The surgeon now has all the information on the mechanical leg axis, the degree of flexion, and the relative rotation of the tibia against the femur. On the screen of the navigation system the leg is shown with the alignment parameters. Planning is finalized by defining the amount of leg axis correction needed based on the preoperative plan.
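The hip center found "by pivoting the leg" is typically computed as a least-squares sphere fit to the tracked femoral marker positions. The sketch below shows the standard algebraic fit on synthetic data; it is a generic illustration of the technique, not Brainlab's proprietary algorithm.

```python
import numpy as np

def fit_sphere_center(points: np.ndarray):
    """Least-squares sphere fit to N x 3 pivot samples.

    |p - c|^2 = r^2 is rewritten as the linear system
    2*p.c + (r^2 - |c|^2) = |p|^2, solved for c and k = r^2 - |c|^2.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic pivot data: a femoral marker moving on a sphere around the hip
# center (coordinates in mm, values illustrative), with measurement noise.
rng = np.random.default_rng(1)
true_center = np.array([10.0, -5.0, 40.0])
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
samples = true_center + 45.0 * dirs + rng.normal(0.0, 0.5, (200, 3))

center, radius = fit_sphere_center(samples)
print(center, radius)  # recovers ~true_center and ~45 mm
```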
After planning, K-wires are positioned by navigating the drill guide to the right height and angulation. Two K-wires guide the saw in the predetermined direction. The planned depth of the cut is shown on the navigation screen and can be checked while cutting. A spreading device (spreader, Synthes®, Fig. 1) is used from the medial side to continuously correct the leg axis. The navigation screen displays information in real time. The fixation is also done under navigation control, so any possible loss of correction can be detected and easily corrected. After correct leg alignment is achieved, the osteotomy is fixed with the Synthes TomoFix™ plate (Fig. 2).
Postoperative treatment
Postoperative treatment in all patients included radiographic control as well as partial weight bearing for 6 weeks with unrestricted extension/flexion of the knee joint.
Follow-up routine
During hospitalization, patient demographics (i.e., gender, age, height, weight, smoking) and baseline characteristics (i.e., date of examination, affected side and type of deformity, general pre-existing disorders, range of motion for the hip, knee and ankle of the affected and contralateral leg, ligament stability of both knees, surgeon characteristics, operation time and details, implant characteristics) were recorded. Intraoperative complications such as iatrogenic fractures, bleeding, technical and surgical problems were documented. Patients were X-rayed in AP (long-standing X-ray) and lateral projection (knee joint) at their initial attendance, as well as postoperatively.
Patients were actively followed up after 6 weeks. At this follow-up, a long standing X-ray and an X-ray of the knee joint in two planes were performed to survey leg axis, fracture healing, and the possible occurrence of complications. The prerequisites for long standing X-rays were full weight bearing and no active extension deficit of more than 5° of the affected knee joint. Postoperative complications (adverse events) were documented throughout the postoperative period up to 6 weeks. Anticipated complications included unplanned reoperation, implant problems such as secondary implant dislocation, implant loosening, breakage and failure, as well as healing problems and other local or general complications such as wound infection or hematoma, systemic infections, thromboembolic complications, or fat embolism. Reported complications and related patient X-rays were reviewed by an independent surgeon to determine whether complications were mild, moderate, or serious, and whether they occurred due to the implant, the navigation, or the patient's general condition.
Patients were interviewed concerning their walking ability (PMS), their pain at rest and during motion (WOMAC pain subscale) [1], as well as their satisfaction with current function (VAS) before surgery and at the 6-week follow-up. Range of motion of the hip, knee and ankle, ligament stability of the knee, as well as current mobilization status were determined at the follow-up examination.
Outcome measures
The measurements of the planned and actual postoperative leg axis deviations for patients undergoing HTO were made using the following parameters according to the classification of Paley et al. [13]:
• mFTA (mechanical femorotibial angle, mechanical leg axis) was used as the main parameter to measure the primary outcome.
• mMPTA (mechanical medial proximal tibial angle) was used to evaluate whether the indication for HTO was correct.
• mLDFA (mechanical lateral distal femoral angle) was used to control for the measurement reliability in the X-ray projection.
For the planned postoperative leg axis measurement, the individual centers were required to assess this parameter from preoperative X-rays. Prior to surgery, a fax sheet with the pre- and planned postoperative angles was sent to the AO Clinical Investigation and Documentation Center (AOCID), Davos, Switzerland, for final evaluation. At the 6-week follow-up examination, the actual ('effective') leg axis was assessed by an independent surgeon from all postoperative long standing X-rays using the MediCAD software.
The final outcome measure of the precision of the achieved leg axis was the deviation of the effective leg axis from the planned postoperative leg axis (i.e., actual 'effective' postoperative leg axis minus planned postoperative leg axis).
Statistical analysis
Study monitoring, database management (including data entry), plausibility checks and query generation, and statistics were performed by AOCID. Data were entered into an OpVerdi Database (Dr. Oestreich and Partner, GmbH, Cologne, Germany) and transferred via text files into the software Intercooled Stata Version 10 (StataCorp LP, College Station, Texas, USA) for statistical analysis.
All baseline and follow-up parameters were described using standard descriptive statistics. While continuous variables were described using means, standard deviations and ranges, categorical variables were tabulated with absolute and relative frequencies.
The primary outcome of leg axis deviation was analyzed using the one-sample t test to test the null hypothesis that the mean deviation was equal to zero. The percentage of patients with a deviation greater than 3° was calculated.
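A minimal sketch of this primary analysis follows; since the per-patient deviations are not published, the array below is synthetic data matching the reported mean and the standard deviation implied by the reported standard error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Illustrative deviations (degrees): mean ~1.5, SD ~2 (SE 0.3 * sqrt(46)), n=46
deviations = rng.normal(1.5, 2.0, 46)

t, p = stats.ttest_1samp(deviations, popmean=0.0)  # H0: mean deviation = 0
pct_beyond_3 = 100.0 * np.mean(np.abs(deviations) > 3.0)
print(f"t = {t:.2f}, p = {p:.4f}, % beyond +/-3 deg = {pct_beyond_3:.0f}")
```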
All statistical analyses and graphs were conducted with the software Intercooled Stata Version 10.
Results
Fifty-nine patients were included in the study. Due to major study protocol violations, 8 patients had to be excluded for various reasons, leaving a final number of 51 cases receiving navigated HTO, recruited in six centers between January 2006 and October 2007. At the 6-week follow-up examination, 50 patients were examined, corresponding to a follow-up rate of 98%. Only one HTO-Navi patient did not attend the 6-week examination; since their preoperative planning form was incomplete, the primary outcome could not be documented. HTO-Navi patients were seen for follow-up after a median of 6.6 weeks (range: 4.4-18.0) at the planned 6-week examination. The majority of these patients (n = 39) were examined within a threshold time range of 6 ± 1 weeks; only ten HTO-Navi patients attended the follow-up examination after 8.7-14.0 weeks, and a single patient was seen at 18 weeks.
The demographic data of patients receiving navigated HTO is given in Table 1.
Navigated HTO surgery was mostly undertaken by consultants (73%) followed by senior residents (22%) and chief surgeons (6%). Over 50% had performed 30 or more HTO procedures without navigation and between 10 and 29 HTO procedures with a navigation system.
In four patients, the quality of the long standing X-rays was not sufficient to measure the appropriate angles. Therefore, 46 patients were included in the following interpretation. The mean leg axis deviation was 1.5° (standard error = 0.3) and was found to be significantly different from zero (p < 0.0001; one-sample t test). The minimum, median, and maximum leg axis deviations were −2.4°, 1.8°, and 6.4°, respectively. The leg axis deviations categorized according to their degree of deviation (i.e., −0.5° to <0.5°, 0.5° to <1.5°, 1.5° to <2.5°, etc.) are presented in Fig. 3.
Deviations up to 3° were tolerated and could be explained by the measurement deviations derived from the X-rays and navigation software. Therefore, HTO-Navi patients with deviations of over ±3° were of special interest in this study.
Twenty-two patients (48%) showed a leg axis deviation of up to 2° (<2.5°) and 39 patients (85%) had deviations of up to 3° (<3.5°). Seven patients were categorized with leg axis deviations beyond the tolerance level of 3°, including three patients with deviations of >3-4° and four patients with deviations of >4° (Fig. 3). Further analyses to observe any potential influence of medial ligament instability (i.e., patients with Grade II or III medial extension or flexion at baseline or the 6-week follow-up) on these cases of higher leg axis deviation revealed that there was no ligament instability for these particular patients.
Seven intraoperative complications were reported from the total of 59 patients at the beginning of the study (12%). All were related to the navigation system, and the majority occurred during one study center's learning phase. Loosening of the reference marker base was the most common intraoperative complication (n = 3), followed by system failure (n = 2), loss of orientation after changing the reference pins (n = 1), and unavailability of the navigation instrument (n = 1). In all these patients, HTO surgery was continued in a conventional fashion.
Three postoperative complications were documented up to the 6-week follow-up examination, where two were defined as moderate soft tissue/wound complications and the third problem was a severe bone complication; all events were not directly related to the CAS procedure.
Discussion
In HTO, it is crucial to achieve a correct leg axis as defined in the preoperative planning. If the leg axis is undercorrected, the transfer of weight from the medial to the lateral compartment is incomplete, the patient still experiences pain, and the gonarthritis progresses. If it is overcorrected (too much valgus), the knee may become unstable and the arthritis progresses faster in the lateral compartment.
Most authors recommend a postoperative valgus of 2°-3° based on long leg X-rays. Most important is the preoperative planning based on these films, which gives the size of the wedge, as the information of the long leg radiographs is not available during the operation. A conventional intraoperative control of the leg axis is the cable method; however, it is not always precise, and a considerable amount of radiation is necessary. This is the first reported prospective multicenter case series focusing on the accuracy of navigated leg axis control in HTO. In contrast to other case series in navigated total knee replacement, in this study the preoperative definition of the aimed leg axis and its submission to the study center was mandatory. Therefore, the results precisely describe the accuracy of the system, as the final leg axis is measured against a defined preoperative value given to the study center.
The study shows that computer-assisted leg axis control is a very precise intraoperative guidance tool in osteotomies. The accuracy is extremely high compared to the standard surgical procedure, which relies on 'surgeon's eye'- or fluoroscopy-controlled evaluation of the aimed leg axis. Even the intraoperative use of wedges is not that precise.
One limitation of our study is that the experience, i.e., the learning curve, differed between the study centers. Outliers, i.e., patients with a deviation of more than 3°, were mainly caused by handling failures of the system, such as loosening of reference arrays, all of which occurred in one study center during its learning phase. On the other hand, the results show that despite the learning curve the accuracy is still high.
If we removed the four patients with a deviation of >4° that occurred within the learning phase of one study center, tolerable alignment would have been achieved in almost 93% of all cases.
Another limitation of all studies focusing on leg axis is that the reported deviations from the planned leg axis are partially influenced by the fact that intraoperatively the final leg axis is evaluated on the resting leg, whereas the postoperative X-ray control is done on the weight-bearing leg, which automatically means 1° or 2° of difference. In addition, the long standing X-rays may give an error of up to 2° depending on full knee extension and inward/outward orientation of the leg. To reduce this additional error, all long standing X-rays were also analyzed for these technical aspects.
Regarding the correlation of radiographic and navigation measurements of limb alignment, Stulberg and coworkers [18] recently reported discrepancies of as much as 8° in reading out the results. With respect to that finding, our results show that navigated HTO is a highly precise tool for defining intraoperative limb alignment.
The described limitations are the major reason why the accuracy of the achieved postoperative leg axis is not in all cases within the aimed range of ±3°.
Most of the rated complications were surgery-related and not due to computer navigation, except for the cases of system downtime or software failure.
Our initial hypothesis was confirmed, as this first prospective case series clearly shows that in about 85% of cases (including the learning curve) a perfect result in terms of deviation from the planned mechanical leg axis could be achieved using computer assistance as an intraoperative guiding tool. In the hands of trained CAS surgeons, one might expect the accuracy to be even higher (93% excluding learning-curve cases).
The study shows that in navigated HTO based on the described surgical technique a perfect intraoperative leg axis with respect to the preoperative plan can be achieved.
Nevertheless, a navigation system is not in itself a guarantee of a perfect result.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
|
2017-08-02T19:39:35.244Z
|
2010-07-06T00:00:00.000
|
{
"year": 2010,
"sha1": "588cd1bf5a8c2add6af34449e7187a63ea001a25",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00402-010-1145-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "588cd1bf5a8c2add6af34449e7187a63ea001a25",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
269407832
|
pes2o/s2orc
|
v3-fos-license
|
Whole-Cell Display of Phospholipase D in Escherichia coli for High-Efficiency Extracellular Phosphatidylserine Production
Phospholipids are widely utilized in various industries, including food, medicine, and cosmetics, due to their unique chemical properties and healthcare benefits. Phospholipase D (PLD) plays a crucial role in the biotransformation of phospholipids. Here, we have constructed a super-folder green fluorescent protein (sfGFP)-based phospholipase D (PLD) expression and surface-display system in Escherichia coli, enabling the surface display of sfGFP-PLDr34 on the bacteria. The displayed sfGFP-PLDr34 showed maximum enzymatic activity at pH 5.0 and 45 °C. The optimum Ca2+ concentrations for the transphosphatidylation activity and hydrolysis activity are 100 mM and 10 mM, respectively. The use of displayed sfGFP-PLDr34 for the conversion of phosphatidylcholine (PC) and L-serine to phosphatidylserine (PS) showed that nearly all the PC was converted into PS at the optimum conditions. The displayed enzyme can be reused for up to three rounds while still producing detectable levels of PS. Thus, Escherichia coli/sfGFP-PLD shows potential for the feasible industrial-scale production of PS. Moreover, this system is particularly valuable for quickly screening higher-activity PLDs. The fluorescence of sfGFP can indicate the expression level of the fused PLD and changes that occur during reuse.
Introduction
Phosphatidylserine (PS) is an important component of the cell membrane in eukaryotic cells. It is the most abundant anionic phospholipid in eukaryotic organisms, accounting for 10% of the total cellular lipids. PS is widely used in the food, health product, chemical, and pharmaceutical industries. PS plays a significant role in cell apoptosis and blood clotting [1]. PS is essential for healthy neuronal membranes and myelin sheaths [2] and acts as a global immune suppressor in phagocytosis, infectious diseases, and cancer [3]. PS can enhance learning and memory abilities and has been studied for its potential in preventing Alzheimer's disease [4,5].
Traditionally, PS is extracted from soybeans, vegetable oil, egg yolk, and biomass. However, its low availability and high extraction cost are limiting factors [6,7]. Phospholipase D (PLD, EC 3.1.4.4)-mediated transphosphatidylation of phosphatidylcholine (PC) with L-serine is a promising method for synthesizing PS. Compared to chemical synthesis and extraction methods, PLD-catalyzed synthesis of PS has the advantages of mild reaction conditions and minimal formation of byproducts [8]. PLD has transphosphatidylation activity and can be used for the enzyme-catalyzed synthesis of various phospholipids. There have been many reports on the PLD-mediated synthesis of natural and specially designed phospholipids with functional head groups from readily available phosphatidylcholine or phospholipids [9]. PLD also has hydrolytic activity, converting PC into the unwanted byproduct phosphatidic acid (PA). In recent years, various novel PLD enzymes with higher transphosphatidylation activity and lower hydrolytic activity have been obtained through gene-mining strategies and protein engineering. Among them, PLDs from Streptomyces species are more suitable for the efficient synthesis of phospholipids and phospholipid derivatives because they possess higher transphosphatidylation activity compared to PLDs from plants and fungi [10][11][12].
The production of PLD in Streptomyces is very low. Therefore, PLD from different sources has already been heterologously expressed in Escherichia coli [13], Pichia pastoris [14], and Bacillus subtilis [15], resulting in a slight increase in activity. Among these heterologous expression systems, Escherichia coli is the main host for heterologous phospholipase expression, accounting for approximately 86% of all hosts [16]. However, most reports on the heterologous expression of PLD in Escherichia coli concern intracellular expression, which is disadvantageous for enzyme isolation and industrial production. Additionally, it has been reported that the accumulation of PLD in cells can lead to plasmid instability, cell lysis, and a further decrease in PLD activity [13,17]. Cell-surface display offers a more cost-effective approach to reducing the expense of biocatalysts. It is a simple method that eliminates the need for additional purification and immobilization steps. Certain enzymes have been effectively displayed on the cell surface as whole-cell biocatalysts, making them ideal candidates for industrial biotransformation processes [18]. However, compared to other enzymes, the use of PLD in the form of cell-surface display is less common [12,14,19].
The super-folder green fluorescent protein (sfGFP) exhibits fluorescence, allowing direct observation without the need for equipment. Thus, by fusing the sfGFP tag to the target protein for expression, we can detect the presence of the secreted sfGFP fusion protein in the cell supernatant or culture medium through fluorescence observation [20]. The sfGFP expression system enables the rapid screening of high-level expression strains through fluorescence detection and facilitates the secretion of target proteins [21,22]. To enhance the efficiency of PLD preparation, we developed an sfGFP-based system in Escherichia coli for expressing and displaying PLD. This system enables the surface display of PLD on the bacteria. By utilizing the displayed sfGFP-PLDr34, we demonstrated that under optimal conditions almost all PC was converted into PS. The surface display not only allows for the direct whole-cell preparation of PS but also facilitates high-level screening of target proteins.
Construction of Recombinant Plasmids for Displaying Proteins on the Cell Surface
The PLDr34 gene (GenBank accession number MN604233) was synthesized and cloned into the vectors pET28a, pET28a-sfGFP, and pET23a-sfGFP downstream of the 6× His tag. The sfGFP and PLDr34 are connected by a 3C site, as shown in Supplementary Figure S1. The recombinant plasmids were confirmed by sequencing.
Preparation of Whole-Cell Target Protein
The plasmids and strains utilized in this study are listed in Supplementary Table S2. The recombinant plasmids pET28a-PLDr34, pET28a-sfGFP-PLDr34, and pET23a-sfGFP-PLDr34 were transformed into E. coli Rosetta Blue (DE3) competent cells. One of the positive clones was inoculated into 5 mL of LB medium supplemented with 50 µg/mL of kanamycin and incubated at 220 rpm and 37 °C. Subsequently, the overnight cultures were transferred to 100 mL of LB medium containing 50 µg/mL of kanamycin and incubated at 220 rpm and 37 °C. Once the OD600 reached 0.6-0.8, the cells were induced with IPTG at a final concentration of 1 mM and incubated at 220 rpm and 18 °C for 18 h. Then, the cells were harvested by centrifugation at 5000× g for 5 min at 4 °C. The cells were accurately weighed, and an appropriate volume of phosphate-buffered saline (PBS, pH 7.4) was added to achieve a cell concentration of 100 mg/mL. The resuspended cells could then be directly utilized for enzyme activity assays.
Cell Fractionation
Cell fractionation was performed according to the protocol described by Quan et al. [23]. Cells were harvested from a 10 mL culture broth by centrifugation at 5000× g for 10 min at 4 °C. The cells were then treated with one-third volume of Tris-EDTA-NaCl (TEN) solution containing 50 mM Tris-HCl (pH 8.0), 5 mM EDTA, and 50 mM NaCl. The mixture was incubated at 4 °C overnight. After incubation, the solution was centrifuged at 5000× g for 20 min at 4 °C. The supernatant was collected as the outer membrane fraction.
Fluorescence Microscopy
Cells were harvested from 1 mL of culture by centrifugation at 3000× g for 5 min at 4 °C. The harvested cells were then resuspended in 1 mL of phosphate buffer. The resuspended cells were diluted with phosphate buffer to the desired concentrations. Subsequently, 10 µL of the diluted cells was pipetted onto Poly-Prep microscopy slides. Using a Zeiss LSM 980 fluorescence microscope (Zeiss, Oberkochen, Germany), the magnification of the objective lens was adjusted to locate the cells of interest. The LSCM was configured to scanning mode, the laser intensity parameters were fine-tuned, and ZEN v3.0 software was utilized to capture crisp confocal images.
Verification of PLD and sfGFP-PLD through Western Blot Analysis
Ten-microliter samples of whole cells, outer membrane fraction, and medium supernatant from both pET28a-PLDr34 and pET28a-sfGFP-PLDr34 cells were employed for Western blot verification. Mouse-derived antibodies were used to bind the 6× His-tagged PLD and 6× His-tagged sfGFP-PLD proteins, and goat-derived antibodies were used to bind the mouse antibodies. The PVDF membrane was uniformly coated with SuperKine™ Ultrasensitive ECL Luminescent Solution, followed by imaging using an ultra-sensitive chemiluminescence detector.
PC Hydrolysis Activity Assay
The hydrolysis activity of PLDr34 or sfGFP-PLDr34 (in the form of the outer membrane fraction or intact cells) was detected using an enzyme-linked colorimetric assay [24]. The 200 µL reaction mixture consisted of 0.1% Triton X-100, 40 mM Tris-HCl (pH 7.5), 10 mM CaCl2, and 10 mg/mL PC dissolved in an 80% ethanol solution. The mixture was incubated at 37 °C and 200 rpm for 5 min. Subsequently, 200 µL of resuspended cells (100 mg/mL) was introduced into the reaction system, giving a final reaction volume of 400 µL. The reaction system was incubated at 37 °C and 200 rpm for 10 min. To terminate the reaction, 200 µL of a reaction termination solution (0.1 M Tris-HCl, 10 mM EDTA, and 10 g/L hexadecyltrimethylammonium chloride) was added and incubated at 37 °C and 200 rpm for 5 min. The resulting mixture was centrifuged at 12,000 rpm for 5 min, and the supernatant was added to 2.5 mL of a chromogenic solution containing 1 mg of phenol, 0.3 mg/mL of 4-aminoantipyrine, and 0.1 M Tris-HCl (pH 8.0). Additionally, 2 units of choline oxidase and 2 units of catalase were added to the mixture. The mixture was then incubated at 37 °C and 200 rpm for 30 min. The absorbance of the reaction mixture was measured at 500 nm. One unit (U) of hydrolytic activity of PLD was defined as the amount of enzyme that produced 1 µmol of choline per minute. The calibration curve was generated using a standard solution of choline chloride rather than the enzyme solution.
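Converting the A500 readings into units follows directly from the unit definition above and the choline chloride calibration curve. A minimal sketch is below; the calibration slope and intercept are placeholders, since the paper does not report them.

```python
import numpy as np

def pld_activity_u_per_ml(a500, slope, intercept,
                          reaction_min=10.0, sample_ml=0.2):
    """Hydrolytic activity (U/mL): 1 U releases 1 umol choline per minute.

    slope/intercept describe the choline chloride calibration curve
    (A500 vs. umol choline); the values used here are placeholders.
    sample_ml is the volume of cell suspension added (200 uL).
    """
    choline_umol = (np.asarray(a500) - intercept) / slope
    return choline_umol / (reaction_min * sample_ml)

# Illustrative use with an assumed calibration curve
print(pld_activity_u_per_ml(a500=0.45, slope=0.20, intercept=0.02))
```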
PS Synthesis Catalyzed by PLD
The transphosphatidylation of PC and L-serine to PS was carried out in a two-phase reaction system as described by Haiyang Zhang et al. [22]. Under the optimized reaction conditions, the final concentration of whole cells was 10 mg/mL. Therefore, 200 µL of 100 mg/mL resuspended cells was centrifuged at 12,000 rpm for 3 min. The supernatants were discarded, and the cell pellets were resuspended in 1.0 mL of a sodium acetate/acetic acid buffer (0.02 M, pH 6.0) containing serine (1.0 M) and CaCl2 (0.1 M). Then, 1.0 mL of PC (20 mg/mL) dissolved in diethyl ether was introduced into the reaction system and incubated at 45 °C for 4 h. Following that step, 2 µL aliquots of the sample and standard solutions were applied to a silica gel plate. A mixture of chloroform, methanol, glacial acetic acid, acetone, and water (45:25:7:4:2) served as the developing agent. The thin-layer plate was removed, and the solvent was allowed to evaporate. Subsequently, the plate was placed in an iodine cylinder containing a mixture of iodine and silica gel powder for 2 min to observe color development. The conversion of PS was calculated as the ratio of generated PS to the sum of generated PS and unreacted PC.
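The conversion formula in the last sentence is simple enough to state directly in code; band intensities (or amounts) from TLC densitometry are assumed as inputs.

```python
def ps_conversion_pct(ps_signal: float, pc_unreacted_signal: float) -> float:
    """Conversion (%) = PS / (PS + unreacted PC) * 100, per the text.

    Inputs are assumed to be comparable TLC band intensities or molar amounts.
    """
    return 100.0 * ps_signal / (ps_signal + pc_unreacted_signal)

print(ps_conversion_pct(9.5, 0.5))  # e.g., 95.0% conversion
```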
Treatment of Cells by Proteinase K
The cell suspension, resuspended in 200 µL of PBS, was divided into two tubes. One tube received 10 µL of proteinase K (20 mg/mL), and both tubes were incubated at 37 °C for 1 h. The supernatant was removed by centrifugation at 5000× g for 10 min. The precipitate was washed three times with PBS and then resuspended in PBS to assess its hydrolysis activity.
Determination of Cell Growth Curve
To compare the growth status of pET28a-sfGFP-PLDr34 and pET28a-PLDr34 cells, the recombinant plasmids pET28a-PLDr34 and pET28a-sfGFP-PLDr34 were transformed into E. coli Rosetta Blue (DE3) competent cells. Colonies were picked and cultured in 5 mL LB medium with 50 µg/mL kanamycin at 220 rpm and 37 °C. Once the OD600 of the two strains reached 0.8, they were diluted with LK medium (LB broth supplemented with 50 µg/mL kanamycin). Subsequently, 10 µL aliquots of each strain were inoculated into fresh 5 mL LK medium. The cultures were then incubated at 37 °C with shaking at 220 rpm. Samples were collected every 2 h for the initial 14 h, followed by every 4 h for the subsequent 14 h. The growth curve was obtained by measuring the OD600 value. IPTG was added at a concentration of 1 mM when the OD600 value of both bacteria reached 0.6. The cultures were then incubated at 220 rpm and 18 °C.
Expression of the Target Proteins
The PLDr34 gene was synthesized and cloned into the vectors pET28a and pET28a-sfGFP, as illustrated in Figure 1a. Recombinant strains E. coli Rosetta Blue (DE3)/PLDr34 and E. coli Rosetta Blue (DE3)/sfGFP-PLDr34 were cultured following the methods described in the Materials and Methods section. The total cellular proteins were analyzed using SDS-PAGE. As depicted in Supplementary Figure S2, although PLDr34 (~60 kDa) and sfGFP-PLDr34 (~86 kDa) were detected in the total cells, their expression levels were not high.
To confirm the successful display of sfGFP-PLDr34 on the cell surface, we analyzed the outer membrane of the cells using a Western blot. As depicted in Figure 1b, sfGFP-PLDr34 and PLDr34 exhibit comparable expression levels, with both being relatively low. sfGFP-PLDr34 was predominantly present in the outer membrane of the cells, although it was also detected in the supernatant of the culture medium. PLDr34 was not detected in either the outer membrane of cells or the supernatant of the culture medium. Moreover, we used LSCM to detect the distribution of sfGFP-PLDr34 in E. coli. Figure 1c shows the images obtained from scanning in both bright and dark fields of view. The results revealed that sfGFP-PLDr34 bacterial cells displayed intense spontaneous fluorescence, with sfGFP also present in the background. These results indicate that sfGFP effectively promotes the display of PLD on the outer membrane of E. coli.
Surface Display of Fusion Enzymes on E. coli for PS Biosynthesis
We examined the hydrolytic activity of whole cells containing PLD. The strains carrying pET28a-PLDr34 and pET28a-sfGFP-PLDr34 were treated with proteinase K. Figure 2a shows that pET28a-PLDr34 cells without proteinase K treatment exhibited an enzyme activity of approximately 0.4 U/mL. Upon treatment with proteinase K, the activity decreased to around 0.2 U/mL, indicating that PLDr34 is active to some extent outside the cells. This finding is consistent with previous reports [12]. The constructed pET28a-sfGFP-PLDr34 cells showed the highest activity (~1 U/mL), which was 2.5 times higher than that of the pET28a-PLDr34 strain and 10 times higher than that of the pET28a-sfGFP-PLDr34 cells treated with proteinase K. This observation confirms the successful display of sfGFP-PLDr34 on the cell surface of E. coli.
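As a quick arithmetic check of the fold-change claims above, the following sketch recomputes the ratios from the approximate activities read off Figure 2a; the exact U/mL values are assumptions taken from the text and plot, not raw data.

```python
# Fold-change check against Figure 2a. The U/mL values are approximate numbers
# read off the plot, i.e. assumptions, not raw measurements.
activities = {
    "pET28a-PLDr34": 0.4,                        # whole cells, untreated
    "pET28a-sfGFP-PLDr34": 1.0,                  # whole cells, untreated
    "pET28a-sfGFP-PLDr34 + proteinase K": 0.1,   # surface protein digested
}
display = activities["pET28a-sfGFP-PLDr34"]
print(f"vs non-displayed strain: {display / activities['pET28a-PLDr34']:.1f}x")  # ~2.5x
ratio = display / activities["pET28a-sfGFP-PLDr34 + proteinase K"]
print(f"vs proteinase K-treated: {ratio:.0f}x")                                  # ~10x
```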
Then, we measured the ability of E. coli Rosetta Blue (DE3)/pET28a-PLDr34 whole cells and E. coli Rosetta Blue (DE3)/pET28a-sfGFP-PLDr34 whole cells to convert PC into PS. After a reaction at 40 °C for 2 h, the TLC results showed that sfGFP-PLDr34 cells produced PS, while PLDr34 cells did not (Figure 2b). To further improve the efficiency of PS production, we also tested the activity of the pET23a-sfGFP-PLDr34 strain, because the regulation of pET23a is more relaxed than that of pET28a. However, its activity was lower than that of pET28a-sfGFP-PLDr34 cells (Figure 2b). We speculate that, owing to the toxicity of PLD to cells [13,17], the strict regulation of the pET28a strain reduces the impact of PLD expression on cell growth, and sfGFP-mediated secretion expression further reduces the cytotoxicity of PLD (Supplementary Figure S3).
Effects of Cell Concentration
The reaction is significantly influenced by the concentration of cells. To examine this effect, we conducted a PS conversion reaction using sfGFP-PLDr34 cells at concentrations ranging from 0.25 mg/mL to 25 mg/mL. To prepare reaction mixtures containing cells at different concentrations, we aspirated varying volumes of cells resuspended in PBS buffer at a concentration of 100 mg/mL. After centrifuging to remove the supernatant, the cell pellets were resuspended in the reaction mixture.
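A minimal sketch of the dilution arithmetic behind this preparation follows: the stock volume needed for each target concentration comes from C1·V1 = C2·V2. The 1 mL reaction volume is an assumed value for illustration; the text does not state it.

```python
# C1*V1 = C2*V2 dilution arithmetic for preparing each reaction mixture from the
# 100 mg/mL cell stock. REACTION_VOLUME_ML is an assumption, not a stated value.
STOCK_MG_PER_ML = 100.0
REACTION_VOLUME_ML = 1.0  # assumed reaction volume

for target in (0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0):  # mg/mL, the tested range
    stock_ul = target * REACTION_VOLUME_ML / STOCK_MG_PER_ML * 1000.0
    print(f"{target:5.2f} mg/mL -> aspirate {stock_ul:6.1f} uL stock, "
          f"pellet, resuspend in reaction mix")
```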
Figure 3a demonstrates a strong correlation between the cell concentration and the yield of PS. PS begins to appear at a cell concentration of 2.5 mg/mL, and PC completely disappears at 10 mg/mL; however, as the cell concentration increases further, the PS yield decreases again. Therefore, considering cell cost and PS conversion rate, 10 mg/mL was chosen as the optimal cell concentration. We also tested the effect of cell concentration on hydrolysis activity. As shown in Figure 3b, the optimal cell concentration for hydrolysis activity is also 5-10 mg/mL; a further increase in cell concentration leads to a significant decrease in hydrolysis activity. We speculate that the cells themselves might metabolize a portion of the PC, and more cells metabolize more PC, so both PS production and PC hydrolysis activity decrease at higher cell concentrations. Therefore, it is necessary to choose a balance point between cell concentration and activity.
Effects of pH and Ca2+
To determine the effect of pH on the reaction, we evaluated the PS conversion activity and hydrolysis activity of sfGFP-PLDr34 cells at different pH levels. As shown in Figure 4a, the optimum pH for the PS conversion reaction was 5.0-6.0. Interestingly, the sfGFP-PLDr34 cells showed optimal hydrolysis activity at pH 5.0 (Figure 4b), whereas in previous reports purified free PLDr34 exhibited the highest hydrolysis activity at pH 6.0 [12]. We hypothesize that this shift in optimal pH could be attributed to alterations in the microenvironment surrounding the enzyme when it is displayed on the cell surface. The same phenomenon has been observed in a study where PLD was displayed on the surface of Pichia pastoris [19]. The activity of PLD requires divalent metal ions and is usually highest in the presence of Ca2+ [14]. We therefore evaluated the PS conversion activity and hydrolysis activity of sfGFP-PLDr34 cells at different Ca2+ concentrations. Surprisingly, there is a significant difference between the optimal Ca2+ concentrations for PS synthesis activity and PC degradation activity. As shown in Figure 4c,d, the optimum Ca2+ concentrations for the PS conversion activity and the hydrolysis activity are 100 mM and 10 mM, respectively. Although no PS was detected without the addition of Ca2+, weak hydrolytic activity could still be detected. Previous studies typically reduced the hydrolysis activity towards PC and improved the synthesis activity for PS by engineering PLD [12]. Our results suggest that, by adjusting the Ca2+ concentration, PLD can maintain low hydrolysis activity while retaining high PS synthesis activity, thereby improving the PS conversion efficiency.
Effects of Reaction Temperature
We also assessed the impact of reaction temperature on the PS conversion activity. Figure 5a demonstrates that at 45 °C and 50 °C no PC was detected and only PS was observed. Because the abundance of PS products was higher at 45 °C, this temperature was chosen as the optimal reaction temperature for sfGFP-PLDr34 cells. Subsequently, we examined the reaction time required to generate PS. The formation of the PS product was observed after a 1 h reaction at 45 °C, as depicted in Figure 5b. After 2 h, the majority of PC had been converted into PS. By the end of the 4 h reaction, PC had almost completely disappeared and the amount of PS reached its maximum. Hence, we established the reaction conditions for the majority of the reactions as 45 °C for a duration of 4 h.
Effects of the Concentration of PC and L-Serine
We also examined the impact of substrate concentration on the reaction. The PS yield increased with the PC concentration when the concentration of L-serine was 1 M (Figure 6a). When the PC concentration was ≤20 mg/mL, almost all of it could be converted, whereas at higher PC concentrations the conversion was incomplete. Importantly, when we increased the concentration of L-serine to 2 M, most of the PC could still be converted to PS even at a PC concentration of 50 mg/mL. This finding suggests that increasing the concentration of L-serine effectively promotes the PS conversion activity, which aligns with previous reports [14].
Reuse of Whole-Cell Catalyst
High operational stability is crucial for most biological processes and can reduce the cost of biotransformation in commercial applications. We evaluated the stability of sfGFP-PLDr34 cells over five consecutive 4 h batches. The PS conversion activity decreased significantly with each reuse. As indicated in Figure 7a, the PS yield remained at approximately 40% and 10% in the second and third cycles, respectively, and PS was no longer detected in the fifth cycle. From a practical standpoint, sfGFP-PLDr34 cells could be reused a maximum of four times. Nevertheless, this number of reuses is comparable to the previously reported results of displaying PLD on the surface of E. coli using the autotransporter domain of AIDA-I [12]. The fusion protein with an sfGFP tag is directly visible without the need for any equipment [21], which provides a significant advantage over other tags. Furthermore, because the fusion enzyme sfGFP-PLDr34 is expressed on the surface of E. coli cells, intact cells exhibit green fluorescence. As shown in Figure 7b, the green fluorescence of intact cells gradually decreased after each reaction; when the fluorescence became very weak or invisible, it indicated that the intact cells were no longer viable as catalysts. Therefore, the decision to terminate the recycling reaction can be based on the fluorescence intensity, without the need for enzyme activity detection. This approach is highly convenient and cost effective, especially for large-scale industrial applications.
Discussion
By utilizing sfGFP to display PLD on the cell surface, we have developed a highly efficient, visible, simple, and cost-effective method to produce PS from PC and L-serine using the whole-cell sfGFP-PLDr34 catalyst. This approach is appealing because it eliminates the requirement for additional steps of enzyme extraction and purification. Furthermore, the visible fusion sfGFP-PLDr34 immobilized on the bacterial surface is relatively stable under the reaction conditions. Hence, the reaction conditions can be easily controlled, particularly for large-scale industrial applications. The whole cells can be reused for up to three rounds while still producing detectable levels of PS. Most importantly, this system can be used for the rapid screening of higher-activity PLDs, as the fluorescence of sfGFP can indicate the expression level of the fused PLD and its changes during repeated use. Therefore, the use of sfGFP-mediated PLD-displaying cells as a biocatalyst for cost-effective PS production shows promise.
However, we also acknowledge several limitations of this study. Firstly, the E. coli Rosetta Blue (DE3) strain we employed produces endotoxin, which is incompatible with food, cosmetic, and pharmaceutical usage and thus greatly limits the applicability of this approach. To adapt this method for these applications, alternative probiotic strains such as E. coli Nissle 1917 (EcN) [25] could be explored for sfGFP-PLD display. Secondly, the number of times the whole cells can be reused needs to be improved, which may be achieved through three strategies. (1) We observed that the expression level of sfGFP-PLD is relatively low; this could be improved by optimizing the expression conditions, such as testing various inducer concentrations and induction temperatures. (2) Alternative PLDs with higher stability and activity could be selected for surface display, such as the rationally designed Sr MBP PLD Mu6 by Qi et al. [26]. (3) Immobilization can enhance the stability of the catalyst. Recently, various strategies have been reported to improve PLD performance through immobilization [15,27-33]. We could explore the use of immobilization reagents to immobilize the cells, or elute the membrane-bound sfGFP-PLD for enzyme immobilization. We believe that with the resolution of these limitations, sfGFP-based PLD display technology will significantly promote the biosynthesis of phospholipids, including PS.
Conclusions
In summary, our study has achieved the efficient extracellular production of PS using whole-cell display technology in E. coli, providing a novel pathway for the biosynthetic production of PS. This approach is appealing as it eliminates the additional steps of enzyme extraction and purification. Furthermore, this system facilitates the rapid screening of PLDs exhibiting higher activity: the fluorescence intensity of sfGFP serves as a proxy for the expression level of the fused PLD, allowing changes during repeated use to be detected. In the future, we aim to further investigate and enhance the production efficiency of PS while reducing costs, thereby contributing to the advancement of related industries.
Figure 1. Construction of the PLDr34 cell-surface display system for recombinant plasmids and analysis of PLDr34 and sfGFP-PLDr34. (a) Construction of the recombinant plasmids. T7 Pro: T7 promoter; 3C: 3C site; T7 Ter: T7 terminator. (b) Western blot analysis of whole-cell, washed-membrane, and culture medium proteins. M: protein marker; C: whole-cell fraction; O: washed membrane fraction; S: culture medium fraction. The original image can be found in Supplementary Materials Figure S4. (c) LSCM detection of the distribution of the sfGFP-PLDr34 protein in E. coli. The images obtained by scanning in merged, bright, and dark fields are labelled Merge, DIC, and GFP, respectively.
Figure 2. Exploration of whole-cell activity. (a) Hydrolytic activity of PLDr34 in whole cells. The control groups of pET28a-PLDr34 were divided into those not treated and those treated with proteinase K; the surface display group of pET28a-sfGFP-PLDr34 was divided in the same way. The data are the mean and standard deviation of three independent experiments. (b) Comparison of the whole-cell catalysis of PS synthesis from PC.
Figure 3. Effects of cell concentration. (a) The impact of cell concentration on PS synthesis activity. (b) The influence of cell concentration on hydrolysis activity. The data are the mean and standard deviation of three independent experiments.
Figure 4. Effects of pH and Ca2+. (a) The impact of pH on PS synthesis activity. (b) The effects of pH on enzyme hydrolysis activity. (c) The effects of Ca2+ concentration on PS synthesis activity. (d) The effects of Ca2+ concentration on enzyme hydrolysis activity. The data in (b,d) are the mean and standard deviation of three independent experiments.
Figure 5. The impact of temperature and time gradient on the synthesis of PS. (a) The influence of temperature on the efficiency of PS synthesis. (b) The outcomes of varying time gradients in the catalytic synthesis of PS from PC.
Figure 6. Effects of the concentrations of PC and L-serine. (a) PS synthesis at different PC concentrations with 1 M L-serine. (b) PS synthesis at different PC concentrations with 2 M L-serine. The data are the mean and standard deviation of three independent experiments.
Figure 7. Reuse of the whole-cell catalyst. (a) TLC results of the whole-cell cyclic reaction catalyzing the production of PS. (b) Comparison of whole-cell fluorescence after five repetitions of pET28a-sfGFP-PLDr34 reuse.
|
2024-04-28T05:49:15.889Z
|
2024-04-01T00:00:00.000
|
{
"year": 2024,
"sha1": "44a8b7b11491209658cc33179c2bfc4bd010881b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/14/4/430/pdf?version=1712046191",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b1287c3244613c1350c2830cbdc8c880e19090ad",
"s2fieldsofstudy": [
"Chemistry",
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
13878865
|
pes2o/s2orc
|
v3-fos-license
|
A Case Study in Knowledge Discovery and Elicitation in an Intelligent Tutoring Application
Most successful Bayesian network (BN) applications to date have been built through knowledge elicitation from experts. This is difficult and time consuming, which has led to recent interest in automated methods for learning BNs from data. We present a case study in the construction of a BN in an intelligent tutoring application, specifically decimal misconceptions. We describe the BN construction using expert elicitation and then investigate how certain existing automated knowledge discovery methods might support the BN knowledge engineering process.
the domain, common human difficulties in specifying and combining probabilities, and experts being unable to identify the causal direction of influences between variables.
Hence there has been much interest in recent times in automated methods for constructing BNs from data (e.g., [Spirtes et al., 1993, Wallace and Korb, 1999, Heckerman and Geiger, 1995]).
Most evaluation of these automated methods is done by taking an existing BN model and generating data from it that is given to the automated learner; the learned BN is then compared to the original.
While there have been attempts to combine knowledge elicitation from experts and automated knowledge discovery methods (e.g., [Heckerman et al., 1994, Onisko et al., 2000]), in this paper we present a case study in the construction of a BN model in an intelligent tutoring system (ITS) (Section 2). We describe the initial network construction using expert elicitation, together with a preliminary evaluation (Section 3). We then apply automated knowledge discovery methods to each main task in the construction process (Section 4): (1) we apply a classification method to student test data; (2) we perform simple parameter learning based on frequency counts with the expert BN structures; and (3) we apply an existing BN learning program. In each case we compare the performance of the resultant network with the expert elicited networks, providing an insight into how elicitation and knowledge discovery might be combined in the BN knowledge engineering process.
THE ITS DOMAIN
Decimal notation is widely used in our society. Our testing [Stacey and Steinle, 1999] of 5383 students has indicated that less than 70% of Year 10 students (age about 15 years) understand the notation well enough to reliably judge the relative size of decimals. On the other hand, more than 30% of Grade 5 students (age about 10 years) have mastered this important concept. Expertise grows only very slowly throughout the intervening years under normal instruction in our schools, and so an intelligent tutoring approach to this important topic is of interest. Students' understanding of decimal notation has been mapped using a short test, the Decimal Comparison Test (DCT), where the student is asked to choose the larger number from each of 24 pairs of decimals; see [Stacey and Steinle, 1999] for a detailed discussion of these responses and categories of students. Table 1 shows the rules the experts originally used to classify students based on their response to 6 types of DCT test items: H = high number correct (e.g. 4 or 5 out of 5), L = low number correct (e.g. 0 or 1 out of 5), with '.' indicating that any performance level is observable for that item type by that student class other than the combinations seen above. We note that the fine misconception classifications have been "grouped" by the experts into a coarse classification: L (think longer decimals are larger numbers), S (shorter is larger), A (apparent expert) and UN (other). The LU, SU and AU fine classifications correspond to students whose answers on Type 1 and 2 items are like those of others in their coarse classification, although they do not behave like them on the other item types. These and the UNs may be students behaving consistently according to an unknown misconception, or students who are not following any consistent interpretation. The overall architecture of the system [McIntosh et al., 2000] is shown in Figure 1. The computer game genre was chosen to provide children with an experience different from, but complementary to, normal classroom instruction and to appeal across the target age range (Grade 5 and above). Each game focuses on one aspect of decimal numeration, thinly disguised by a story line. It is possible for a student to be good at one game or the diagnostic test, but not good at another; emerging knowledge is often compartmentalised. In the "Hidden Numbers" game, students are confronted with two decimal numbers with digits hidden behind closed doors; the task is to find which number is the larger by opening as few doors as possible. The game "Flying Photographer" requires students to place a number on a number line, prompting students to think differently about decimal numbers. The "Number Between" game is also played on a number line, but particularly focuses on the density of the decimal numbers; students have to type in a number between a given pair. "Decimaliens" is a classic shooting game, designed to link various representations of the value of digits in a decimal number.
The simple expert rules classification described above makes quite arbitrary decisions about borderline cases.
The use of a BN to model the uncertainty allows it to make more informed decisions in these cases. Using a BN also provides a framework for integrating student responses from the computer games with DCT information. The BN is initialised with a generic student model, with the options of individualising it with classroom or online DCT results. The BN is used to update an ongoing assessment of the student's understanding, to predict which item types that student might be expected to get right or wrong, and, using sensitivity analysis, to identify which evidence would most improve the misconception diagnosis. The development of the BN is described below.
The system controller module uses the information provided by the BN, together with the student's previous responses, to select which item type to present to the student next, and to decide when to present help or change to a new game. This architecture allows flexibility in combining the teaching "sequencing tactic" (that is, whether easy items are presented before harder ones, harder first, or alternating easy/hard), coverage of all item types, and items which will most improve the diagnosis. More detailed descriptions of both the architecture and the item selection algorithm are given in [Stacey et al., 2001]. While in theory these tasks can be performed sequentially, in practice the knowledge engineering process iterates over these tasks until the resultant network is considered "acceptable". In this section we describe the elicitation of the decimal misconception BN from the education domain experts. For the evidence nodes, if 5 such items were presented, 0 or 1 correct would be considered low, 2 or 3 would be medium, while 4 or 5 would be high; for types with 4 items, medium encompasses only 2 correct, while for types with only 3 items the medium value is omitted completely. This reflects the expert rules classification described above.
BN STRUCTURE
The experts considered the coarse classification to be a strictly deterministic combination of the fine classification, hence the coarseClass node was made a child of the fineClass node. For example, a student was considered an L if and only if it was one of an LWH, LZE, LRV or LU.
The type nodes are observation nodes, where entering evidence for a type node should update the posterior probability of a student having a particular misconception. This diagnostic reasoning is typically reflected in a BN structure where the class, or "cause", is the parent of the "effect" (i.e., evidence) node. Therefore an arc was added from the subclass node to each of the type nodes. No connections were added between any of the type nodes, reflecting the experts' intuition that a student's answers for different item types are independent given the subclassification.
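The structure just described, a single class node with conditionally independent item-type children, is exactly a naive Bayes classifier, so the diagnostic update can be sketched in a few lines. All priors and conditional probabilities below are placeholder numbers, not the elicited values, and only four of the fine classes are shown.

```python
# Diagnostic update implied by the network structure: item-type observations are
# conditionally independent given the class, so
#   P(class | answers) is proportional to P(class) * product_i P(answer_i | class).
priors = {"LWH": 0.2, "LZE": 0.1, "ATE": 0.5, "UN": 0.2}  # placeholder priors
# P(High performance on item type i | class), one entry per item type 1..6
p_high = {
    "LWH": [0.9, 0.1, 0.8, 0.2, 0.5, 0.5],
    "LZE": [0.1, 0.9, 0.3, 0.7, 0.5, 0.5],
    "ATE": [0.9, 0.9, 0.9, 0.9, 0.9, 0.9],
    "UN":  [0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
}

def posterior(observed):  # observed: list of "H"/"L", one per item type
    scores = {}
    for c, prior in priors.items():
        p = prior
        for i, ans in enumerate(observed):
            p *= p_high[c][i] if ans == "H" else 1.0 - p_high[c][i]
        scores[c] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

print(posterior(["H", "H", "H", "H", "H", "H"]))  # should favour ATE
```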
A part of the expert elicited BN structure implemented in the ITS is shown in Figure 2. This network fragment shows the coarseClass node (values L, S, A, UN), the detailed misconception fineClass node (12 values), the item type nodes used for the DCT, plus additional nodes for some games; these additional nodes are not implemented. Bold nodes are those discussed here.
BN PARAMETERS
The education experts had collected data that consisted of the test results and the expert rule classification on a 24 item DCT for over two thousand five hundred students from Grades 5 and 6. (An indication as to the meaning of the additional game nodes is as follows: the "HN" nodes relate to the Hidden Numbers game, with evidence entered for the number of doors opened before an answer was given, and a measure of the "goodness of order" in opening doors; the root node for the Hidden Numbers game subnet reflects a player's game ability, in this case door opening "efficiency".) These data were then pre-processed to give each student's results in terms of the 6 test item types; 5, 5, 4, 4, 3, 3 were the number of items of types 1 to 6 respectively. The and 0.7 respectively, numbers that our experts thought were reasonable. Results are described in Section 3.5.
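A minimal sketch of parameterising a conditional probability table from such classified records by frequency counting is shown below; the toy records and the +1 (Laplace) smoothing are illustrative assumptions, since the paper does not say how zero counts were handled.

```python
from collections import Counter, defaultdict

# Estimate P(type-1 performance level | fine class) by counting classified
# student records. The records here are toy data for illustration.
records = [("LWH", "H"), ("LWH", "L"), ("LWH", "H"), ("ATE", "H")]

counts = defaultdict(Counter)
for cls, level in records:
    counts[cls][level] += 1

levels = ["H", "M", "L"]
cpt = {
    cls: {lv: (c[lv] + 1) / (sum(c.values()) + len(levels))  # +1 smoothing (assumed)
          for lv in levels}
    for cls, c in counts.items()
}
print(cpt["LWH"])  # e.g. {'H': 0.5, 'M': 0.167, 'L': 0.333}
```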
BN EVALUATION PROCESS
During the expert elicitation process we performed three basic types of evaluation. Finally, we undertook a Prediction evaluation, which considers the prediction of student performance on individual item type nodes rather than direct misconception diagnosis. We enter a student's answers for 5 of the 6 item type nodes, then predict their answer for the remaining one; this is repeated for each item type.
The number of correct predictions gives a measure of the accuracy of each model, using a score of 1 for a correct prediction (using the highest posterior) and 0 for an incorrect prediction. We also look at the predicted probability for the actual student answer. Both measures are averaged over all students. (This evaluation method was suggested by an anonymous reviewer; the analysis of results using these prediction measures is preliminary due to time constraints.) We performed these four types of evaluation every time the network was revised. Table 2 is an example of the comparison grids for the fine classification that were produced during the comparison evaluation phase. Similar grids were produced for the coarse classification. Each row corresponds to the expert rules classification, while each column corresponds to the BN classification, using the highest posterior; each entry in the grid shows how many students had a particular combination of classifications from the two methods. The grid diagonals show those students for whom the two classifications are in agreement, while the "desirable" changes are shown in italics, and undesirable changes are shown in bold. Note that we use the term "match", rather than saying that the BN classification was "correct", because the expert rule classification is not necessarily ideal.
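A sketch of this leave-one-out scoring follows, reusing the same kind of toy naive Bayes model as in the classification sketch above; the accuracy and probability measures mirror the two measures described, but all numbers are placeholders.

```python
# Leave-one-out prediction score: hide one item-type answer, predict it from
# the other five via the class posterior, and record both the 0/1 accuracy of
# the argmax and the probability assigned to the true answer.
priors = {"LWH": 0.3, "ATE": 0.5, "UN": 0.2}          # placeholder model
p_high = {"LWH": [0.9, 0.1, 0.8, 0.2, 0.5, 0.5],
          "ATE": [0.9] * 6,
          "UN":  [0.5] * 6}

def class_posterior(answers):  # answers: {item index: "H" or "L"}
    scores = dict(priors)
    for c in scores:
        for i, a in answers.items():
            scores[c] *= p_high[c][i] if a == "H" else 1.0 - p_high[c][i]
    total = sum(scores.values())
    return {c: v / total for c, v in scores.items()}

def predict_item(answers, i):  # P(item i answered "H" | the other five answers)
    others = {j: a for j, a in answers.items() if j != i}
    post = class_posterior(others)
    return sum(post[c] * p_high[c][i] for c in post)

student = {i: "H" for i in range(6)}  # one student's six item-type results
accuracy = probability = 0.0
for i in student:
    p_h = predict_item(student, i)
    accuracy += ("H" if p_h >= 0.5 else "L") == student[i]   # 0/1 score
    probability += p_h if student[i] == "H" else 1.0 - p_h   # prob. of true answer
print(accuracy / 6, probability / 6)
```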
RESULTS
Further assessment of these results by the experts revealed that when the BN classification does not match the expert rules classification, the misconception with the second highest posterior often did match. The experts then assessed whether differences in the BN's classification from the expert rules classification were in some way desirable or undesirable, depending on how the BN classification would be used. They came up with the following general principles which provided some general comparison measures: (1) it is desirable for expert rule classified LUs to be re-classified as another of the specific Ls, similarly for AUs and SUs, and it is desirable for UNs to be re-classified as anything else (because this is dealing with borderline cases about which the expert rule really can't say much); (2) it is undesirable for (a) specific classifications (i.e., not those involving any kind of "U") to change.
It is not possible for reasons of space to present the full set of results for the fine classifications. Overall it is clear that the expert elicited network per forms a good classification of students misconceptions, and captures well the different uncertainties in the ex perts domain knowledge. In addition, its performance is quite robust to changes in parameters such as the probability of careless mistakes or the granularity of the evidence nodes.
KNOWLEDGE DISCOVERY
The next stage of the project involved the application of certain automated methods for knowledge discovery to the domain data.
CLASSIFICATION
The first aspect investigated was the classification of decimal misconceptions. We applied the SNOB classification program [Wallace and Dowe, 2000], based on the information theoretic Minimum Message Length (MML) principle. SNOB was run on the data from 2437 students on 24 DCT items, each item being a binary value indicating whether the student got it correct or incorrect, with a variety of initial guesses for the number of classes (5, 10, 15, 20, 30). All five classifications were very similar; we present here results from the model with the lowest MML estimate (5 initial classes). Using the most probable class for each member, we constructed a grid comparing the SNOB classification with the expert rule classification. Of the 12 classes produced by SNOB, we were able to identify 8 that corresponded closely to the expert classifications (i.e., had most members on the grid diagonal). Two classes were not found (LRV and SU). Of the other 4 classes, 2 were mainly combinations of the AU and UN classifications, while the other 2 were mainly UNs. SNOB was unable to classify 15 students (0.6%). The percentages of match, desirable and undesirable change are shown in the corresponding table. Clearly, summarising the results of the 24 DCT items into types gives relatively poor performance; it is proposed that this is because many pairs of the classes are distinguished by student behaviour on just one item type, and SNOB might consider these differences to be noise within one class.
The overall good performance of the classification method shows that automated knowledge discovery methods may be useful in assisting experts to identify suitable values for classification type variables.
PARAMETERS
Our next investigation was to learn the parameters for the expert elicited network structure. The data was randomly divided into five 80%-20% splits for training and testing; the training data was used to parameterise the expert BN structures using the Netica BN software's parameter learning feature, while the test data was given to the resultant BN for classification. The match results (averaged over the 5 splits) for the fine classification comparison of the expert BN structures (with the different type values, 0-N and H/M/L) with learned parameters are shown in Table 4 (set 3), with the corresponding prediction results (also averaged over the 5 splits) shown in Table 5 (set 2). We also note that the variation between the results for each data set 1-5 was much higher than the variation when learning parameters for the expert BN structure. This no doubt reflects the difference between the network structures learned for the different splits. However, we did not find a clear correlation between the complexity of the learned network structures and their classification performance.
In seeking to improve automated discovery of structure by exploiting expert domain knowledge, experts could provide constraints to guide the search and could manually select for further investigation those alternative structures which were best interpretable in terms of the domain concepts.
CONCLUSIONS
This work began with the recognition that we had access to a novel combination of data and information. Given that the elicited BN was based on expert knowledge that had been accumulated over a period of time through much analysis and investigation, how useful is an automated approach in domains where such detailed (validation) knowledge is not available? Our experience suggests that a hybrid of expert and automated approaches is feasible. We plan to apply these methods in a situation (student work on algebra) where we have data on student behaviour, but do not have detailed prior expert analysis of the data.
|
2013-01-10T08:25:34.000Z
|
2001-08-02T00:00:00.000
|
{
"year": 2013,
"sha1": "07b263a8b35ce983b54ddc549cbb98836286319c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "07b263a8b35ce983b54ddc549cbb98836286319c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
236380557
|
pes2o/s2orc
|
v3-fos-license
|
Principle and performance analysis for six‐pole hybrid magnetic bearing with a secondary air gap
In order to reduce the non-linearity of radial suspension forces of the three-pole hybrid magnetic bearing and further reduce the cost and power consumption, an AC six-pole hybrid magnetic bearing with secondary air gaps is proposed. First, the structure and working principle of the AC six-pole hybrid magnetic bearing with secondary air gaps are introduced, and the mathematical model of suspension forces are derived. Second, on the basis of the mathematical model, the linearity and coupling characteristics of the radial suspension forces are analysed, and then the suspension forces are simulated and verified by the finite element method (FEM). Then, the correlation performance indexes are compared with the six-pole hybrid magnetic bearing without secondary air gaps. Finally, an experimental platform is built, and suspension and disturbance tests are carried out. The research results show that the maximum bearing capacity of the six-pole hybrid magnetic bearing with secondary air gaps is 184% of the six-pole hybrid magnetic bearing without secondary air gaps.
Introduction: The friction between the rotor and the stator of a traditional mechanical bearing increases the energy loss, and a magnetic bearing can solve this problem effectively [1]. Two power amplifiers are required for a magnetic bearing with four magnetic poles [2], whereas only one three-phase inverter is required for the three-pole magnetic bearing in [3], which greatly reduces the cost and power consumption of the magnetic bearing system. To overcome the shortcomings of the existing modelling methods, a new mathematical modelling method of the suspension force for a centripetal-force-type magnetic bearing is proposed in [4].
Although the three-pole magnetic bearing has many advantages, it also increases the overall design difficulty and cost of the system [5]. A new radial magnetic bearing structure and its working principle are introduced in this manuscript, and a parameter design method is put forward. Compared with existing magnetic bearing structures, this structure can increase the linear working range and stability margin of the system.
Structure and working principle:
The six-pole hybrid magnetic bearing is mainly composed of a permanent magnet, radial stator, rotor and radial coil as shown in Figure 1.
When the flux flows through the air gaps between the rotor and the magnetic poles of the stator, the corresponding Maxwell force is generated. The direction and magnitude of the Maxwell force can be controlled by adjusting the direction and magnitude of the flux, so that the rotor can be suspended at the balance position. When the bias flux and the control flux are superimposed in the radial air gaps, the flux in one of two opposing radial air gaps is increased while the flux in the other is decreased, producing a controllable radial suspension force. The suspension force in the radial direction can thus be obtained by adjusting the control current.
In Figure 2, Fm is the magnetomotive force of the permanent magnet, Φm is the total flux, ΦA11, ΦA12, ΦB11, ΦB12, ΦC11, ΦC12, ΦA21, ΦA22, ΦB21, ΦB22, ΦC21, ΦC22 are the fluxes in the radial air gaps, the corresponding permeances (magnetic conductances) of the radial air gaps are denoted with the same subscripts, N is the total number of turns of a single radial control coil, and iA, iB, iC are the control currents.
Through mathematical modelling of the six-pole hybrid magnetic bearing, the suspension force expressions are obtained: the maximum suspension force in the x-direction is 3Bs²Sr/(2μ0) and the maximum suspension force in the y-direction is √3·Bs²Sr/μ0, where Bs is the saturation flux density, Sr is the magnetic pole area and μ0 is the permeability of air. Since the maximum suspension force of the bearing is determined by the smaller of the x- and y-direction limits, the maximum suspension force of the six-pole hybrid magnetic bearing is 3Bs²Sr/(2μ0). Figures 3(a) and (c) show the magnetic density distribution of the six-pole magnetic bearing with only the bias current. In Figure 3(a), the flux flow direction agrees with the analysis, and the magnetic density distribution is uniform in all six directions at about 0.4 T, roughly half of the saturation magnetic induction intensity. The maximum control current in the positive x-direction is then applied to the radial coils. The resulting magnetic density distribution of the six-pole hybrid magnetic bearing is shown in Figures 3(b) and (d). The magnetic flux direction in each pole is the same as that of the bias flux, so the control flux is simply superimposed on the bias flux; the A11 magnetic density is 0.8 T, the A12 air-gap density is almost zero, and the maximum suspension force along the x-axis is 201 N.
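A small numerical sketch of these limits follows: Bs = 0.8 T is taken from the reported pole flux density at maximum control current, while the pole area Sr is not given in the excerpt, so it is back-calculated here from the reported 201 N and should be read as an estimate.

```python
import math

mu0 = 4 * math.pi * 1e-7      # H/m, permeability of air
Bs = 0.8                      # T, pole flux density at maximum control current
F_x_reported = 201.0          # N, FEM result along the x-axis

# F_x,max = 3*Bs^2*Sr/(2*mu0)  ->  back out the pole area Sr (an estimate)
Sr = 2 * mu0 * F_x_reported / (3 * Bs**2)
print(f"implied pole area Sr ~ {Sr * 1e6:.0f} mm^2")        # ~263 mm^2

# y-direction limit from the text: F_y,max = sqrt(3)*Bs^2*Sr/mu0
F_y_max = math.sqrt(3) * Bs**2 * Sr / mu0
print(f"y-direction limit ~ {F_y_max:.0f} N (larger, so x sets the bound)")  # ~232 N
```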
Finite element analysis:
As can be seen in Figure 4(a), the relationship between the control current and the suspension force in the x-direction is linear. In Figure 4(b), the relationship between the control current and the suspension force in the y-direction is linear as well. The six-pole structure benefits from the symmetry of its spatial structure: the force-current characteristic curves along the x- and y-axes are symmetric and have good linearity, which verifies the validity of the theoretical analysis.
As can be seen in Figure 5, the suspension force in the x-direction increases as the length of the secondary air gaps increases. When the length of the secondary air gaps is 3 mm, the force along the x-axis is 201 N; when the length is 0 mm, the force is 109 N. Therefore, the maximum bearing capacity of the six-pole hybrid magnetic bearing with secondary air gaps is 184% of that of the six-pole hybrid magnetic bearing without secondary air gaps.
Experimental validation:
Based on the mathematical model of the magnetic bearing, the control system was designed and an experimental platform for the digitally controlled magnetic bearing was established, as shown in Figure 6. The displacement of the magnetic bearing rotor is detected by eddy current sensors, the signal is conditioned to a level acceptable to the digital signal processing (DSP) controller through an interface circuit, and the control current is computed by the DSP controller using a proportional-integral-derivative (PID) algorithm. The magnetic bearing is then stabilised via the inverter.
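A minimal sketch of the sensor-to-inverter loop described above, with a discrete PID on one radial axis, is shown below; the gains, anti-windup limit and sample time are illustrative assumptions, not values from the paper.

```python
# One radial axis of the position loop: eddy-current sensor reading -> PID ->
# control current command passed to the inverter. All numeric values are
# assumptions for illustration.
class PID:
    def __init__(self, kp, ki, kd, dt, i_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i_limit = i_limit   # anti-windup clamp on the integral term
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral = max(-self.i_limit,
                            min(self.i_limit, self.integral + err * self.dt))
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid_x = PID(kp=8000.0, ki=200.0, kd=4.0, dt=1e-4, i_limit=5.0)  # assumed 10 kHz loop
displacement_m = 50e-6                        # sensor: rotor 50 um off centre (example)
i_control = pid_x.update(0.0, displacement_m) # current command sent to the inverter (A)
print(f"control current command: {i_control:.3f} A")
```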
The relationships between the suspension force and the control current of the six-pole hybrid magnetic bearing are shown in Figures 7(a) and (b), respectively. There is little difference between the simulation, experimental and calculated results, although the experimental values are somewhat larger than the simulation results. The reason is that the air gap becomes smaller and the magnetic conductivity increases due to rotor eccentricity during the experiment, which makes the measured maximum suspension force larger.
Conclusion and discussion: This manuscript introduced the principle and performance analysis of a six-pole hybrid magnetic bearing with secondary air gaps. Theoretical research and simulation analysis show that this radial magnetic bearing structure can effectively avoid the coupling of magnetic flux between the two radial degrees of freedom and greatly reduce the difficulty of controlling magnetic bearing rotor offset. Thus, the linear range of the system is increased and the operational reliability improved, making the design suitable for high-speed and high-precision magnetic levitation systems. The research results show that the maximum bearing capacity of the six-pole hybrid magnetic bearing with secondary air gaps is 184% of that of the six-pole hybrid magnetic bearing without secondary air gaps.
|
2021-07-27T00:05:54.425Z
|
2021-05-22T00:00:00.000
|
{
"year": 2021,
"sha1": "9f2b3dd15b1a5c7197cf1239cbc521d27217de7b",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1049/ell2.12098",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9fb10fac4f9f6fdec3282c52b76b565648381805",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
25277432
|
pes2o/s2orc
|
v3-fos-license
|
Cervical Cancer Screening in an Early Diagnosis and Screening Center in Mersin , Turkey
Tufan Nayir 1, Ramazan Azim Okyay 2*, Ersin Nazlican 3, Hakki Yesilyurt 4, Muhsin Akbaba 3, Berrin Ilhan 1, Aytekin Kemik 1
Introduction
Cancer is a major public health problem both in Turkey and worldwide due to its disease burden, fatality and tendency for increased incidence (Sahin et al., 2013). Of all cancer types, cervical cancer is the fourth most common cancer in women, and the seventh overall, with an estimated 528,000 new cases worldwide in 2012 (Ferlay et al., 2013). The global burden of cervical cancer is disproportionately high among the developing countries, where 85% of the estimated new cases occur (Ali et al., 2012).
On the contrary, developed countries have been successful in controlling the incidence of cervical cancer, largely due to the widespread and systematic use of the Papanicolaou (Pap) smear test, which is an effective, easily applicable, low-cost, harmless and highly sensitive method of early diagnosis, reducing treatment burden, morbidity and mortality (Elovainio et al., 1997). Cervical cancer has a long preclinical detection phase consisting of slowly progressing precancerous lesions such as CIN 2 and 3 and adenocarcinoma in situ, caused by persistent infection with one of the oncogenic types of HPV, particularly HPV 16 and 18. The precursor lesions may progress to invasive cervical cancer over a period of 1 to 4 decades. Therefore, screening programs such as Pap smear screenings play an important role in cervical cancer prevention (Sankaranarayanan, 2014). Pathological changes in epithelial tissues can be diagnosed with the Pap smear test even when there is no indication (Ozdemir and Bilgili, 2010). The value of cervical cancer screening in reducing the risk of cervical cancer and mortality has been firmly established, and it is estimated that regular screening reduces the risk of cancer by up to 80% (Stewart and Kleihues, 2003; Ozgül, 2010).
Being the eighth most common cancer type in terms of both incidence and cause of death, cervical cancer is a remarkable health problem in Turkey, a country with a young population (Kaya, 2009; Ozgul, 2010). In Turkey, cancer screening activities are mainly conducted by two health institutions of the Turkish Ministry of Health: the "Early Diagnosis and Screening Centers for Cancer" (abbreviated as KETEM in Turkish) and the "Mother and Child Care and Family Planning Centers" in the context of the Reproductive Health Program, as well as by means of the polyclinic and clinic activities at the hospitals (Demirhindi et al., 2012).
The purpose of this study is to present the results of a screening survey for cervical cancer targeting the women living in an urban area of the province of Mersin, located in the Mediterranean region of Turkey. The survey also aims to raise population awareness and the level of knowledge about cancers, besides early diagnosis and cancer prevention.
Materials and Methods
This community-based descriptive study included women living in Akdeniz county of Mersin province. Akdeniz county is an urban district of Mersin province with a population of 53,277 women aged 35-65 years, which is the accepted target population for cervical screening according to the national standards for cervical cancer screening (Turkish Ministry of Health, 2009).
A total of 1032 women aged 30 to 65 years, screened within the routine screening programme conducted by the Early Diagnosis, Screening and Education Center for Cancer of Mersin Province (KETEM), constituted the study population.
The study was carried out between January and June 2013. The women in the study group were educated by the researchers about all types of cancer, but in particular about cervical cancer. The local population was also informed about the subject with the help of printed handouts and posters. The names and addresses of the women in the study group were listed, and home visits were performed to inform them about the study and invite them, at the proper time of their menstrual cycle and in groups of ten, to the KETEM, where a room was specifically equipped for cervical smears.
The women who arrived at the KETEM to participate in the study signed an informed consent form after being informed again and having any questions clarified. A questionnaire covering personal information, reproductive health history and physical examination was completed for every woman, followed by cervical smear sampling by the brushing method at the transformation zone of the cervix according to the medical literature (Fiscella and Franks, 1999). The samples obtained, carefully fixed with ethanol, were transferred to the cytology laboratory before the end of official working hours. Maximum attention was paid during the process, taking into account that cervix carcinomas are frequently missed due to defects in sampling and evaluation procedures, which are associated with 30% of the new cases of cervical cancer each year (ACOG, 2009).
The women who did not comply with the rules listed below, which had been previously explained at home visits and were also given in the handouts, were not sampled and were re-invited for the procedure: 1) sexual abstinence for at least 48 hours before sampling; 2) no vaginal douching in the 24 hours before sampling; 3) no use of any vaginal medication (cream, tablets or any other form) in the 48 hours before sampling; 4) no menstrual bleeding.
The cytology laboratory reported the examination results according to the Bethesda III classification system (2001). In the Bethesda classification, smear cytology abnormalities are classified under 3 categories: atypical squamous cells (ASC); low-grade squamous intraepithelial lesions (LSIL); and high-grade squamous intraepithelial lesions (HSIL). The ASC category is subdivided into 2 categories: that of undetermined significance (ASC-US); and that in which high-grade lesions cannot be excluded (ASC-H) (Apgar et al., 2003).
The data were analyzed with the SPSS 11.5 statistical package.
Results
The number of women who agreed to participate in the study was 1032. The mean age of the participants was 43.8±8.6 (min. 30, max. 65) years. The most common education level was primary school (54.9%), and the majority of participating women were unpaid domestic workers (91.8%). The characteristics of the participants are presented in Table 1.
The most common gynecological symptom among participants was abnormal vaginal discharge, at 30.1%. The percentage of participants who had previously undergone a smear test was 40.6%. The history and clinical presentation of the participants included in the study are presented in Table 2.
Epithelial cell changes were found in 26 (2.5%) participants: ASC-US in 18 (1.7%), ASC-H in 2 (0.2%), LSIL in 5 (0.5%) and HSIL in 1 (0.1%). The clinical finding most frequently accompanying epithelial changes was abnormal vaginal discharge, which was present in 25 (96.1%) of the participants with epithelial changes. Another remarkable finding is that 24 (92.3%) of the participants with epithelial changes had a primary school or lower education level. A comparison of the epithelial cell changes found in our study with those of other similar studies is presented in Table 3.
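As a cross-check of the reported proportions, each prevalence is simply the category count divided by the 1032 screened women; the short Python sketch below reproduces the percentages. The 95% confidence intervals use a normal approximation and are an illustrative addition on our part, not part of the original SPSS analysis.

```python
from math import sqrt

N = 1032  # number of screened women
counts = {"any epithelial change": 26, "ASC-US": 18, "ASC-H": 2, "LSIL": 5, "HSIL": 1}

for label, k in counts.items():
    p = k / N
    se = sqrt(p * (1 - p) / N)              # normal-approximation standard error
    lo, hi = p - 1.96 * se, p + 1.96 * se   # crude for rare categories; exact CIs preferred
    print(f"{label}: {k}/{N} = {100 * p:.1f}% (95% CI {100 * lo:.2f}% to {100 * hi:.2f}%)")
```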
Discussion
Cervical cancer is the most widely screened cancer in the world, in both high- and middle-income countries. Population-based cervical cytology screening programs offering Papanicolaou testing every 3 to 4 years have reduced cervical cancer incidence and mortality by up to 80% in developed countries of Europe, North America, Japan, Australia, and New Zealand over the past 5 decades (Stewart and Kleihues, 2003). This has emphasized the key role of an effective screening program in detecting precancerous lesions that could develop into invasive cancer (Ideström et al., 2002; Saraiya, 2003), as in the example of a cervical cancer screening program in Taiwan that achieved a 47.8% decrease in the incidence of invasive cervical cancer between 1995 and 2006 (Chen et al., 2009).
The mean age of the participants was 43.8 years. The average age of the women included in the study appears appropriate, considering that cervical cancer most commonly develops between 40 and 50 years of age and that its precursor lesions usually occur 5-10 years earlier (Banik et al., 2011). The clinical finding most frequently accompanying epithelial changes was abnormal vaginal discharge. It is well known that HPV infection, intraepithelial lesions and abnormal vaginal discharge are closely associated (Ojiyi et al., 2013; Vaidya, 2003). Because it points to an unhealthy cervix, abnormal vaginal discharge should not be overlooked (Singh et al., 1992).
In this study, 40.6% of participants had previously undergone the Pap smear test, whereas other studies conducted in Turkey have reported rates between 10% and 20% (Demirhindi et al., 2012; Sevil et al., 2013; Karabulutlu, 2013). This might result from methodological differences between the studies. In developed countries, by contrast, this rate is close to 90% (Solomon et al., 2007). It therefore appears that the awareness and screening uptake of Turkish women remain weaker.
Epithelial cell changes were found in 2.5% of the participants in this study, with ASC-US in 1.7%, ASC-H in 0.2%, LSIL in 0.5% and HSIL in 0.1%. Sengul et al. (2014) found an overall prevalence of cytological abnormality of 1.83%, with ASCUS in 1.18%, LSIL in 0.39%, HSIL in 0.16%, atypical glandular cells of undetermined significance (AGUS) in 0.07% and squamous cell carcinoma in 0.02%, in a hospital setting. Mehmetoğlu et al. (2010) reported epithelial cell anomalies in 1.2% of cases, of which 0.6% were LSIL and 0.6% HSIL, in a study of 332 married women attending a Family Medicine Clinic in Bursa province of Turkey in 2010. The Turkish Cervical Cancer and Cervical Cytology Research Group (2009) performed an important study with the participation of 33 healthcare centers, including 140,334 women, to evaluate the prevalence of cervical cytological abnormalities in Turkey. Their overall prevalence of cytological abnormality was 1.8%, and the prevalences of ASCUS, LSIL, HSIL, and atypical glandular cells (AGC) were 1.07%, 0.30%, 0.17% and 0.08%, respectively. Another study performed in Turkey by Eroğlu et al. (2008), among applicants for cervical smear at the Konya provincial KETEM (Early Diagnosis, Screening and Education Center for Cancer), reported cytology results of ASC-US in 0.5%, LSIL in 0.02% and HSIL in 0.02%. Oner et al. (2004) examined married women aged 17 years and older in Doğankent, a semi-rural area of Adana province in the Mediterranean region of Turkey, and reported epithelial cell changes in 1.8%, with ASC-US in 1.1%, LSIL in 0.4% and HSIL in 0.4%. A comparison of epithelial cell changes with other similar studies carried out in Turkey is highlighted in Table 3. Although there are differences in the findings obtained in these studies, the diversity is likely due to differences in methodology. While some of the above-mentioned studies, such as ours, were designed as community-based research, in the other studies participants were chosen among women admitted to health care providers. In addition, the age groups included in these studies vary. Nevertheless, when compared with studies conducted in western communities, the rates in Turkey are considerably lower (Kulig et al., 2006; Duggan et al., 2006).
The 26 participants with abnormal epithelial changes were referred to gynecology outpatient clinics for advanced diagnosis and treatment, where the gynecologists who examined the patients performed colposcopies and took cervical biopsies. In one of them, squamous cell carcinoma was detected, and a total abdominal hysterectomy with bilateral salpingo-oophorectomy was performed.
In conclusion, the incidence, mortality, and morbidity of cervical cancer have been reduced through early detection of cervical abnormalities, particularly in developed countries with regular cervical cancer screening programmes. Cervical cancer screening remains an evolving field, with new HPV DNA tests and the development of new technologies. However, the Pap smear test is an effective, easily applicable, low-cost, harmless and highly sensitive method of early diagnosis that reduces treatment burden, morbidity and mortality, and it has been widely embraced as an effective population screening method. Taking into account the presence of women who have never undergone a Pap test, it should be offered at the primary level of health care in the form of a community-based service. The community should be informed about the Pap smear test, including its aim and the required frequency of application, through widespread educational activities and media programs. The aim should be the establishment of a well-organized, continuous and community-based cervical cancer screening program, with the ultimate outcome of reduced morbidity and mortality.
Table 3 . Comparison of Epithelial Cell Changes with Other Similar Studies
* Turkish Cervical Cancer and Cervical Cytology Research Group
Prelimbic and Infralimbic Prefrontal Cortex Interact during Fast Network Oscillations
Background The medial prefrontal cortex has been implicated in a variety of cognitive and executive processes such as decision making and working memory. The medial prefrontal cortex of rodents consists of several areas including the prelimbic and infralimbic cortex that are thought to be involved in different aspects of cognitive performance. Despite the distinct roles in cognitive behavior that have been attributed to prelimbic and infralimbic cortex, little is known about neuronal network functioning of these areas, and whether these networks show any interaction during fast network oscillations. Methodology/Principal Findings Here we show that fast network oscillations in rat infralimbic cortex slices occur at higher frequencies and with higher power than oscillations in prelimbic cortex. The difference in oscillation frequency disappeared when prelimbic and infralimbic cortex were disconnected. Conclusions/Significance Our data indicate that neuronal networks of prelimbic and infralimbic cortex can sustain fast network oscillations independent of each other, but suggest that neuronal networks of prelimbic and infralimbic cortex are interacting during these oscillations.
Introduction
The medial prefrontal cortex (mPFC) of rodents consists of several areas including the prelimbic and infralimbic cortex [1-3]. These adjacent cortical areas have a different cytoarchitecture [2] and partly differ in their connections with other brain areas [3]. Although many studies have addressed the role of prelimbic and infralimbic cortex without distinguishing between the two areas, other studies show specific involvement of prelimbic or infralimbic cortex in cognitive behavior [4-6]. Injection and lesion studies in awake animals suggest that the prelimbic cortex is involved in behavioral flexibility [5,6], whereas the infralimbic cortex seems to be involved in impulsive behavior and habit formation [7-10].
To address whether the neuronal networks of these areas interact during fast network oscillations, we induced network oscillations using the muscarinic agonist carbachol in acute brain slices of rat prelimbic and infralimbic cortex, either while the two areas were connected with each other or in isolation. We find that fast network oscillations in infralimbic cortex occur at higher frequencies and with higher power than oscillations in prelimbic cortex. The difference in oscillation frequency disappeared when prelimbic and infralimbic cortex were disconnected. Thus, although the neuronal networks of prelimbic and infralimbic cortex can sustain fast network oscillations independent of each other, our data suggest that they interact during these oscillations.
Results
To record from prelimbic and infralimbic cortex simultaneously, we placed acute rat prefrontal cortex slices on a planar 8×8 multielectrode grid with an interelectrode distance of 300 µm, covering an area of 2.1 × 2.1 mm [22,25] (Figure 1A-C). Bath application of 25 µM carbachol (CCh) induced fast network oscillations that were dependent on glutamatergic and GABAergic transmission (Figure S1). Fourier analysis revealed that field oscillations in the infralimbic cortex occurred at a higher frequency than in the prelimbic cortex (Figure 1C-E; mean ± s.e.m.; prelimbic (PrL) 12.7 ± 0.7 Hz; infralimbic (IL) 14.7 ± 0.9 Hz; p = 0.02, n = 12). Oscillation power in infralimbic cortex was greater than in prelimbic cortex in 13 out of 15 slices, and varied greatly between experiments (Figure 1F; area under the power spectrum, 5-35 Hz; PrL 1.5 ± 0.3 mV²; IL 2.1 ± 0.5 mV²; p = 0.02, n = 15). To calculate the relative power of prelimbic to infralimbic oscillations, we normalized the power in prelimbic cortex to the power in infralimbic cortex per experiment. On average, the power in prelimbic cortex was reduced by ~25% (Figure 1G; PrL 73.7 ± 6.8% of IL, p < 0.01, n = 15). Thus, fast network oscillations in prelimbic and infralimbic cortex differ in frequency and power.
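For readers who wish to reproduce this type of measurement, the sketch below illustrates one conventional way to estimate the oscillation peak frequency and the 5-35 Hz area power from a single field-potential trace, using Welch's method on a synthetic signal. The sampling rate and trace are placeholders, and the original analysis used custom Igor Pro procedures, so this is a sketch of the approach rather than the authors' exact code.

```python
import numpy as np
from scipy.signal import welch

fs = 2000.0                                   # Hz; placeholder (cf. the 2 kHz off-line rate)
t = np.arange(0, 60, 1 / fs)
lfp = np.sin(2 * np.pi * 14 * t) + 0.5 * np.random.randn(t.size)  # synthetic 14 Hz field trace

f, pxx = welch(lfp, fs=fs, nperseg=int(4 * fs))   # Welch power spectral density

band = (f >= 5) & (f <= 35)
peak_freq = f[band][np.argmax(pxx[band])]         # oscillation peak frequency in the band
band_power = pxx[band].sum() * (f[1] - f[0])      # area under the PSD between 5 and 35 Hz

print(f"peak frequency: {peak_freq:.1f} Hz, 5-35 Hz area power: {band_power:.3f}")
```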
The power of fast network oscillations strongly fluctuated in time, both in prelimbic and infralimbic cortex. Since fast network oscillations reflect synchronized activity of large groups of neurons [26-28], these power fluctuations most likely reflect episodes of increased and reduced synchronicity in neuronal activity, which is a property of neuronal networks [29]. To determine whether neuronal networks in prelimbic and infralimbic cortex show differences in power fluctuations, these fluctuations were quantified using time-resolved wavelet analysis [24] (Figure 2A-D). In both prelimbic and infralimbic cortex, the episodes during which the power of oscillations was significantly above threshold (see Materials and Methods, Figure S2) occurred at around 2 Hz (PrL 1.9 ± 0.2 Hz, n = 5; IL 1.7 ± 0.1 Hz, n = 7; p = 0.59) and lasted about 225 ms (Figure 2E; mean of medians ± s.e.m.; PrL 212.9 ± 25.6 ms, n = 5; IL 237.6 ± 20.4 ms, n = 7; p = 0.46). The oscillation episodes occurred both simultaneously and separately in prelimbic and infralimbic cortex (Figure 2F; only PrL 18.1 ± 2.3%; only IL 25.2 ± 2.1%; both 33.1 ± 4.7%; neither 23.7 ± 4.3%, n = 5). These findings suggest that fast network oscillations in prelimbic and infralimbic cortex can occur independently of each other.
To further investigate whether separate neuronal networks generate fast network oscillations in prelimbic and infralimbic cortex, we analyzed the underlying current sinks and sources that generated the field oscillations with two-dimensional current-source-density (CSD) analysis [30] (Figure 3). When an electrode in layer 5 of the prelimbic cortex served as reference electrode, a sink-source pair between layer 5 and the superficial layers 1/2 was revealed (Figure 3C, Movie S1). In 6 out of 12 slices the current sink-source pair was restricted to the prelimbic cortex and did not involve the infralimbic cortex (Figure 3B). When, in the same experiment, an electrode in layer 5 of the infralimbic cortex served as reference electrode, a current sink-source pair between the deep and superficial layers of infralimbic cortex was revealed (Figure 3F, Movie S2). This sink-source pair did not extend to the prelimbic cortex and was completely restricted to the infralimbic cortex (Figure 3E). These results show that fast network oscillations are generated by separate neuronal networks in prelimbic and infralimbic cortex, and can be restricted to that area.
Since fast network oscillations in prelimbic and infralimbic cortex are generated by their own neuronal networks, this could suggest that they may exist independently of each other. To test whether neuronal networks from prelimbic and infralimbic cortex can generate and sustain fast network oscillations independent of each other, we cut out mini-slices that included either the prelimbic or the infralimbic cortex (Figure 4A). Application of carbachol to these isolated slice parts induced fast network oscillations in both prelimbic slices and infralimbic slices (Figure 4). The power of oscillations was much larger in both prelimbic and infralimbic isolated slices compared to the slices that contained both areas (Figure 4; p < 0.01). The power of the field oscillations in the isolated infralimbic slices was larger than the power in isolated prelimbic slices (Figure 4C; isolated-PrL 4.0 ± 0.4 pV², n = 17; isolated-IL 7.9 ± 1.8 pV², n = 12; p = 0.02), similarly to when the areas were connected (Figure 1F). This suggests that the difference in oscillation power between these areas results from properties within the prelimbic and infralimbic neuronal networks. In contrast, the frequency of oscillations in isolated prelimbic and isolated infralimbic slices was not different (Figure 4B; isolated-PrL 13.8 ± 0.9 Hz, n = 17; isolated-IL 13.8 ± 1.0 Hz, n = 12; p = 0.97). This was surprising, since the oscillation frequencies were different in prelimbic and infralimbic cortex when these areas were connected (Figure 1D). This could suggest that during fast network oscillations, prelimbic and infralimbic cortical neuronal networks affect each other, giving rise to differences in oscillation frequency, which disappear when these areas are isolated from each other (Figure 5A; PrL: isolated 13.8 ± 0.9 Hz, n = 17; connected 12.7 ± 0.7 Hz, n = 12; p = 0.39. IL: isolated 13.8 ± 1.0 Hz, n = 12; connected 14.7 ± 0.9 Hz, n = 12; p = 0.51).
To investigate whether a direct connection between prelimbic and infralimbic cortex modulates the oscillation frequency to be different between these areas, we made a cut in whole coronal slices between prelimbic and infralimbic cortex (Figure 6A). Indeed, after the cut was made, there was no longer a difference in oscillation frequency between prelimbic and infralimbic cortex. Surprisingly, there was also no difference in oscillation power between prelimbic and infralimbic cortex (Figure 6C; PrL 3.1 ± 1.5 pV²; IL 3.0 ± 1.3 pV²; n = 7; p = 0.84), which could result from the large variation in oscillation power.

Figure 3. Current source density analysis of prelimbic and infralimbic electrodes reveals two sink-source pairs. (A) Brain slice with superposed 2D-CSD plot as calculated from peak-to-peak cycle-averaged field potentials from the 8×8 multielectrode array. (B) Peak-to-peak cycle-averaged field potentials, using a prelimbic field recording as reference oscillation (red trace). Two oscillation cycles are shown for clarity. The white rectangle in (A) marks the column of 8 electrodes, spanning prelimbic and infralimbic cortex, that are displayed. (C) 2D-CSD plots of the 8×8 electrodes at different time points. The CSD plots display alternating sink (red) and source (blue) pairs between deep and superficial layers of prelimbic cortex. Note that sink-source pairs are restricted to prelimbic cortex. (D-F) As (A-C), using an infralimbic field recording (red trace in E) as reference oscillation. Note that sink-source pairs are restricted to infralimbic cortex. doi:10.1371/journal.pone.0002725.g003
The above results suggest that prelimbic and infralimbic cortex interact during oscillations, since connected prelimbic and infralimbic cortex show differences in frequency that disappear when the connection between these areas is cut, either in whole coronal slices or in isolated mini-slices. Indeed, prelimbic and infralimbic cortex are known to be strongly connected with each other [31]. If these areas affect each other during fast network oscillations, one would expect a larger correlation between the field potentials in these areas when they are connected than in isolation. To that end, we cross-correlated the field potentials in connected prelimbic and infralimbic cortex slices, in disconnected whole coronal slices and in slices from isolated prelimbic and isolated infralimbic cortex placed on the same electrode grid (Figure 7). In slices of connected prelimbic and infralimbic cortex there was a significantly higher correlation between the field potentials than when these areas were disconnected or isolated (Figure 7B; cross-correlation at 900 µm; connected: r = 0.33 ± 0.03, n = 12; disconnected: r = 0.12 ± 0.01, n = 6; isolated: r = 0.12 ± 0.02, n = 5; ANOVA p < 0.01; Newman-Keuls p < 0.01). This suggests that although infralimbic and prelimbic cortex can generate and sustain fast network oscillations independently, they do interact and affect the synchronization of each other's neuronal networks.
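The quantity reported here is a correlation between simultaneously recorded field potentials at a fixed electrode separation. A minimal sketch of a zero-lag Pearson correlation between two traces is given below; the synthetic signals stand in for the prelimbic and infralimbic recordings, and this simplification is our stand-in for the authors' cross-correlation routine.

```python
import numpy as np

fs = 2000.0
t = np.arange(0, 30, 1 / fs)
shared = np.sin(2 * np.pi * 13 * t)               # common oscillatory drive (synthetic)
lfp_prl = shared + 0.8 * np.random.randn(t.size)  # stand-in for a prelimbic recording
lfp_il = shared + 0.8 * np.random.randn(t.size)   # stand-in for an infralimbic recording

r = np.corrcoef(lfp_prl, lfp_il)[0, 1]            # Pearson correlation at zero lag
print(f"zero-lag correlation r = {r:.2f}")
```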
Discussion
Prelimbic and infralimbic cortex are both part of the medial prefrontal cortex [2] and are associated with different aspects of working memory [4-10]. Despite these distinct roles in cognitive behavior, little is known about the neuronal network functioning of these areas, and whether these networks show much interaction during fast network oscillations. We investigated carbachol-induced fast network oscillations in acute rat slices of prelimbic and infralimbic cortex. We performed simultaneous field recordings of connected or disconnected prelimbic and infralimbic cortex, and of isolated prelimbic or infralimbic mini-slices. We found that neuronal networks of prelimbic and infralimbic cortex can sustain fast network oscillations independent of each other in isolated mini-slices. When connected, fast network oscillations in prelimbic and infralimbic cortex remained restricted to their own area in 50% of the slices. Fast network oscillations in the infralimbic cortex displayed a higher power than oscillations in prelimbic cortex, in both connected and isolated slices. In disconnected slices there was no difference in oscillation power, but this could be masked by the large extent of variation.
The difference in oscillation power suggests a difference in internal network properties between prelimbic and infralimbic cortex. The architecture of the microcircuit could play a role in this. In the hippocampus, fast network oscillations occur with much higher power than oscillations generated by neocortical networks [19-23]. It is generally assumed that this results from the one-layered pyramidal cell structure of the hippocampal circuit [23]. The microcircuit layout of prelimbic and infralimbic cortex is very alike: both consist of a neocortical multi-layered structure, and both lack layer 4, which is typical of the rodent medial prefrontal cortex [1,2]. However, there are two striking differences between infralimbic and prelimbic cortical architecture: (1) the lamination in general, and especially of layers 2 and 3, is less clear in infralimbic cortex [2,32]; (2) the prelimbic cortex is thicker and contains a larger number of cells per column than infralimbic cortex [2,32]. Since fast network oscillations result from the synchronized activity of large groups of neurons [26-28], the increased number of cells available in prelimbic cortex would seem an advantage, assuming an equal proportion of cells participate in fast network oscillations. On the other hand, the thinner cortical layers in the infralimbic cortex could also lead to more alignment of the neurons generating the fast network oscillations, and hence to a greater summation of currents. However, the generally decreased lamination of the infralimbic cortex would reduce this effect. If it is not through differences in cell number or lamination, the infralimbic cortex could be more tuned to generate fast network oscillations than prelimbic cortex through other properties, such as possible differences in cell types, intralaminar connectivity or sensitivity to carbachol.
The frequency of fast network oscillations seems to be partly dependent on the interaction between prelimbic and infralimbic cortices. When connected, infralimbic cortex oscillated at a higher frequency than prelimbic cortex. This frequency difference disappeared when prelimbic and infralimbic cortices were disconnected, both in the whole coronal slice and in the isolated mini-slice experiments. Cross-correlation analysis of the field potential in prelimbic cortex with the field potential in infralimbic cortex confirmed the interaction between the two areas. The power of fast network oscillations was also reduced in connected slices compared to isolated slices for both areas, which suggests that the two oscillations could inhibit each other. Prelimbic and infralimbic cortex are intrinsically connected [3,31]. Prelimbic layer 5/6 projects to infralimbic layer 5/6, and infralimbic layers 1-6 project primarily to prelimbic layers 1, 3 and 5 [31]. Thus it seems likely that pyramidal cells that fire phase-locked to the local fast network oscillations in one area influence pyramidal cell firing in the other area. How this interaction at the macrocircuit level influences the local fast network oscillations remains an intriguing question. We conclude that neuronal networks of prelimbic and infralimbic cortex can sustain fast network oscillations independent of each other, but do interact during these oscillations and affect the synchronization of each other's neuronal networks.
The present results suggest that the increases in acetylcholine levels seen in the awake animal during working memory tests and cue detection [15-18] have a profound and parallel effect on the distinct network activity in prelimbic and infralimbic cortex. Although most emphasis has until now been placed on the role of the prelimbic cortex in behavioral flexibility, the greater response of the infralimbic network to carbachol application would justify more attention for this area. In addition, the connectivity [3,31] and the interaction between prelimbic and infralimbic cortex during fast network oscillations presented here suggest that these areas could function in concert with each other during periods of high acetylcholine levels.
Electrophysiology
After recovery, slices were mounted on 8×8 arrays of planar microelectrodes (electrode size: 50 µm × 50 µm; interpolar distance: 150 µm or 300 µm; Panasonic MED-P5155 or MED-P5305; Tensor Biosciences, Irvine, CA). To improve slice adhesion, the multielectrode probes were coated with 0.1% polyethylenimine (Sigma-Aldrich, St. Louis, MO) in 10 mM borate buffer (pH 8.4) for at least 6 hr before use. The multielectrode probe was then placed in a chamber saturated with humidified carbogen gas for at least 1 hr. For recordings, slices were maintained in submerged conditions at 25 °C, and superfused with ACSF, bubbled with carbogen, at 4-5 ml/min. Spontaneous field potentials from all 64 recording electrodes were acquired simultaneously at 20 kHz, using the Panasonic MED64 system (Tensor Biosciences), and downsampled off-line to 200 Hz or 2 kHz.
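As an illustration of the off-line downsampling step, the sketch below decimates a placeholder 20 kHz trace to 2 kHz and then to 200 Hz with SciPy's anti-aliased decimation; the two-stage factor-of-10 scheme is our choice for keeping the filters well-behaved, not a documented detail of the original pipeline.

```python
import numpy as np
from scipy.signal import decimate

fs_in = 20_000                                 # acquisition rate, Hz
raw = np.random.randn(fs_in * 10)              # placeholder 10 s trace from one electrode

lfp_2k = decimate(raw, 10, ftype="fir", zero_phase=True)      # 20 kHz -> 2 kHz
lfp_200 = decimate(lfp_2k, 10, ftype="fir", zero_phase=True)  # 2 kHz -> 200 Hz
```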
Data Analysis
Electrophysiological data was analyzed using custom-written procedures in Igor Pro (Wavemetrics, OR, USA).
Significance of oscillations
After application of the muscarinic acetylcholine receptor agonist carbachol (25 µM), neuronal activity in the slice gradually increases (Figure 1C, Figure S2A, B). Power spectrum analysis first shows an increase in 1/f noise during wash-in. Then, after ~200 sec, oscillations occur that show a distinct peak in the power spectrum (Figure S2A, B). Wavelet analysis [24] showed that, even when the oscillations are clearly present, their magnitude still fluctuates in time (Figure 2A-D). To identify and quantify episodes during which field oscillations are present, we compared the wavelet magnitude of ongoing field oscillations with the wavelet magnitude during the wash-in period, when the 1/f noise was elevated but no distinct oscillations were yet visible, i.e. just before the onset of oscillations (time point 128 sec in Figure S2A, B). This prevented false-positive identification of oscillation episodes due to the overall increase in 1/f noise.
The onset of oscillations was determined by averaging the wavelet magnitude of 8 sec time windows (but see below) for the whole frequency spectrum: a "global wavelet" (Figure S2A). When followed in time, the global wavelet power spectrum first showed an increase in 1/f noise followed by the appearance of a distinct oscillation peak (Figure S2A). In each experiment, the increase in magnitude at the oscillation frequency was compared to the reference frequency of 5 Hz, the magnitude of which only increased as part of the 1/f noise. The time point of the onset of oscillations was determined as the intersection of these curves (Figure S2B). The 95% confidence interval of an exponential fit to the global wavelet power spectrum at the time window preceding the onset of oscillations was taken as the threshold for oscillations during the entire recording (Figure S2C).
To determine the impact of the time window size on the threshold, we investigated the effect of window size on the two parameters that determine the threshold. Firstly, the threshold will depend on the time point used for constructing the global wavelet power spectrum. Secondly, the threshold will depend on how well the power spectrum before the onset of oscillations was fitted by a mono-exponential function. When the onset time of oscillations was calculated for different lengths of time windows (2-4-8-16-32 s), there was a general trend towards an earlier onset for larger window sizes (Figure S2D). However, the variation in onset time between experiments and between different electrodes from the same experiment by far exceeded the variation due to different window sizes (avg. stdev of time windows within experiments: 6.7 ± 0.3 s; avg. stdev between experiments 37.3 ± 0.6 s; p < 0.05). Thus, time window sizes between 2 s and 32 s do not affect the onset time point of oscillations.
In contrast, the goodness of fit of the global wavelet power spectrum was affected by the time window size ( Figure S2E). The noise in the global wavelet power spectra obtained with short time window sizes below 8 seconds gives rise to a broad 95% confidence interval of the exponential fit. Plotting the 95% confidence interval of the fit against different time window sizes for different experiments and different electrodes showed that the confidence interval becomes narrower with increasing time window lengths ( Figure S2F). Time window sizes above 8 seconds did not result in narrower confidence intervals. To determine the onset of oscillations with sufficient time resolution, but at the same time with sufficiently low noise levels to obtain a good exponential fit, a time window of 8 seconds was used in all experiments.
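A compact sketch of this thresholding scheme is given below: a Morlet wavelet transform provides the time-resolved magnitude, the mean magnitude over a pre-onset window serves as the "global wavelet" baseline, and episodes are flagged where the magnitude exceeds a mono-exponential fit to that baseline plus an approximate 95% band. We use PyWavelets and SciPy as stand-ins for the original Igor Pro routines, and the residual-based 95% band is a simplification of the fit confidence interval described above.

```python
import numpy as np
import pywt
from scipy.optimize import curve_fit

fs = 200.0                                    # Hz; placeholder (cf. the 200 Hz off-line rate)
t = np.arange(0, 16, 1 / fs)
lfp = 0.3 * np.random.randn(t.size)
lfp[t >= 8] += np.sin(2 * np.pi * 14 * t[t >= 8])   # oscillation begins at t = 8 s

# Morlet wavelet transform over ~2-40 Hz; magnitude is resolved in time and frequency
target = np.arange(2, 40, 0.5)
scales = pywt.frequency2scale("morl", target / fs)
coef, freqs = pywt.cwt(lfp, scales, "morl", sampling_period=1 / fs)
mag = np.abs(coef)

# "Global wavelet" spectrum of the 8 s window preceding oscillation onset
baseline = mag[:, t < 8].mean(axis=1)

# Mono-exponential fit to the pre-onset spectrum; the fit plus an approximate
# 95% band of its residuals serves as the episode-detection threshold
f_exp = lambda f, a, b: a * np.exp(-b * f)
(a, b), _ = curve_fit(f_exp, freqs, baseline, p0=(baseline.max(), 0.1))
threshold = f_exp(freqs, a, b) + 1.96 * (baseline - f_exp(freqs, a, b)).std()

episodes = mag > threshold[:, None]           # boolean mask of above-threshold episodes
```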
CSD
All signals were peak-to-peak averaged relative to a reference recording from layer 5. Signals were band-pass filtered between 5 and 35 Hz before detection of negative signal peaks. Each peak-to-peak signal was interpolated to a 100-point wave, and these waves were averaged to provide the peak-to-peak average. Current source density (CSD) analysis was performed on the peak-to-peak averaged cycles. For two-dimensional CSD, signals were passed through a 3×3 Gaussian spatial filter and convolved with a 3×3 Laplacian kernel (0 −1 0; −1 4 −1; 0 −1 0), as previously described [22,25]. CSD plots are shown using an inverted colour scale, with warm colours corresponding to current sinks (i.e., neuronal membrane inward currents) and cool colours corresponding to current sources.
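This spatial filtering step can be written compactly; the sketch below applies a standard 3×3 Gaussian kernel followed by the Laplacian kernel given above to each time point of a cycle-averaged 8×8 grid. The specific Gaussian weights (1-2-1 binomial) and the boundary handling are our assumptions, since they are not listed in the text.

```python
import numpy as np
from scipy.ndimage import convolve

# Cycle-averaged field potentials on the 8x8 grid: shape (8, 8, n_timepoints)
lfp_grid = np.random.default_rng(0).standard_normal((8, 8, 100))  # placeholder data

gauss3 = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]]) / 16.0      # assumed 3x3 binomial Gaussian kernel
laplacian = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]])         # the 3x3 Laplacian kernel from the text

csd = np.empty_like(lfp_grid)
for k in range(lfp_grid.shape[2]):
    smoothed = convolve(lfp_grid[:, :, k], gauss3, mode="nearest")
    csd[:, :, k] = convolve(smoothed, laplacian, mode="nearest")  # sinks > 0, sources < 0
```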
Statistics
Data are represented as mean ± SEM. Statistical analysis used either the Student's t-test (paired or unpaired) or an ANOVA with Student-Newman-Keuls post-hoc test, as appropriate. Asterisks represent p < 0.05.
Drugs and Chemicals
Carbamoylcholine chloride (carbachol, CCh), bicuculline methiodide and atropine were from Sigma-Aldrich (St. Louis, MO); 6,7-dinitroquinoxaline-2,3(1H,4H)-dione (DNQX) was from RBI (Natick, MA, USA).

Movie S1. Current-source-density analysis reveals a sink-source pair that is restricted to the prelimbic cortex. CSD movie as calculated from peak-to-peak cycle-averaged field potentials, using a prelimbic field recording as reference oscillation, from the 8×8 multielectrode array. Two oscillation cycles are shown for clarity. The CSD movie displays an alternating sink (red) - source (blue) pair between deep and superficial layers of the prelimbic cortex. Note that the sink-source pair is restricted to the prelimbic cortex.

Movie S2. Current-source-density analysis reveals a sink-source pair that is restricted to the infralimbic cortex. CSD movie as calculated from peak-to-peak cycle-averaged field potentials, using an infralimbic field recording as reference oscillation, from the 8×8 multielectrode array. Two oscillation cycles are shown for clarity. The CSD movie displays an alternating sink (red) - source (blue) pair between deep and superficial layers of infralimbic cortex. Note that the sink-source pair is restricted to the infralimbic cortex. Found at: doi:10.1371/journal.pone.0002725.s004 (3.99 MB MOV)
Method for estimating rockfall failure probability using photogrammetry
Passageways cut through rock might be subjected to rockfalls. If a falling rock reaches the road area, the consequences can be disastrous. The traditional rockfall risk assessment method and risk mitigation are based on on-site investigations performed by a geologist or a rock engineer. The parameters resulting from the investigation, such as discontinuities, orientations and spacings, potential rockfall initiation locations, slope geometry, and ditch profile, are either measured or estimated. We propose a photogrammetry-based method for estimating the probability of failure for rockfall. Several photographs of the rock-cut are taken, and a 3D geometry is computed using photogrammetry. This model already allows remote visual inspection of the site. The information about joint planes can be discovered semiautomatically from the point cloud. Next, the probability of rockfall reaching the road area is computed using probabilistic kinematic analysis on the geometry extracted using photogrammetry. The results can be used to define the rockfall probability for each rock-cut. Furthermore, the results can be used to determine the appropriate rockfall risk mitigation actions for each rock-cut.
Introduction
Falling rocks are a significant risk across all continents [1-4]. Rockfall is a life-threatening hazard for highways, railroads, water passages, and open-pit mines. Rockfall risk assessment is a significant problem because of the vast number of assets and the considerable size of each asset. Currently, risk assessment is carried out empirically, based on visual inspections and expert judgement, or using simplified analyses with visually obtained data. This subjective process is slow and prone to human error. Imprecise risk assessment leads to overly conservative designs that are expensive, while unrecognized hazards could threaten life and cause economic or ecological problems.
In this paper, we describe a method to assess the rockfall risk potential. The method is based on visually identifying kinematically admissible rock blocks from a 3D model generated with drone photogrammetry. A cross-section is then extracted and a probabilistic kinematic rockfall analysis is conducted. The probability of failure (PoF) is defined as the fraction of simulated rocks that reach the road area out of all simulated trajectories. The factor of safety (FoS) is defined as the ratio of the distance-to-road and the median distance the trajectories reach. Using the PoF and FoS, remedial measures can be planned for each identified risk.
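To make the two definitions concrete, the sketch below computes PoF and FoS from a set of simulated horizontal stopping distances; the distance values and their distribution are hypothetical placeholders for the output of a rockfall trajectory simulation.

```python
import numpy as np

def pof_and_fos(stop_distances, distance_to_road):
    """PoF: fraction of trajectories whose stopping distance reaches the road.
    FoS: distance-to-road divided by the median distance the trajectories reach."""
    stop = np.asarray(stop_distances, dtype=float)
    pof = np.mean(stop >= distance_to_road)
    fos = distance_to_road / np.median(stop)
    return pof, fos

# Hypothetical example: 10 000 horizontal stopping distances (m) from a simulation
stops = np.random.default_rng(1).gamma(shape=4.0, scale=1.5, size=10_000)
pof, fos = pof_and_fos(stops, distance_to_road=8.0)
print(f"PoF = {100 * pof:.1f} %, FoS = {fos:.2f}")
```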
In a previous pilot project, rock-cuts near main road 51 were located and a preliminary risk mapping was carried out on a 26 km long section. The section contained 72 cuts, of which three rock-cuts were identified as high-risk sites, and another three cuts were identified as requiring more precise methods. In autumn 2020, in one of the identified high-risk cuts, a rockfall occurred that blocked one lane and damaged three cars. No personal injuries were suffered. This rock-cut is used as the example site in this paper, and a back-calculation of the accident-causing rockfall is also included.
Drone-based photogrammetry and the use of generated 3D models allow for a detailed investigation of rock-cuts without access limitations or safety risks [25]. Precise measurements, the marking of potentially dangerous rock blocks, and an overview of the rockfall probability in the model create a very useful decision-making database. The digital database can be used to design actions and measures such as reinforcement plans and maintenance plans. The database helps to manage rockfall risks, to access them at any time, to track changes over time, and to design a preventive course of action instead of dealing with the consequences of potentially costly, life-threatening events like rockfalls.
Site description
The site is a 361 m long and at most 22 m high rock-cut in southern Finland. It faces north on the south side of a motorway (figure 1). Three predominant naturally occurring joint sets were identified using photogrammetry-based extraction of discontinuity sets (tables 1 and 2). The Density column in table 2 is the kernel density estimation, and the percentage value represents how large a portion of the observations fits each joint set.
Drone photogrammetry
The rock-cuts were captured using a DJI Phantom 4 Pro camera drone. The camera was set to take JPEG-formatted images at 3-second intervals while the drone was flown along the rock-cut, capturing a set of overview images (figure 2a) and a set of detail images (figure 2b). Since the drone was moved continuously, shutter-priority mode with a 1/320 second shutter speed was used to avoid motion blur. The overview set is intended to capture the rough geometry of the whole area from the middle of the road to the top of the rock-cut. The overview set consists of three flight passes along the rock-cut at a distance of 20 to 40 meters from the ground surface.
The detail set was captured at a distance of 7 to 15 m from the surface, and the number of flight passes was 3 to 6, depending on the height of the rock-cut. This set is intended to capture the detailed geometry of the rock-cut, which should allow for the detection of joint planes, fractures, and loose blocks. For safety reasons, flying the drone above or close to the road was avoided; thus, the detail images could not be taken around the toe of the rock-cut. In any case, the image-capturing positions and orientations were chosen so that a 70% overlap was achieved between images of adjacent flight passes.
Post-processing
The drone images were pre-processed in photo editing software Adobe Lightroom Classic v. 10.1.1 in two steps. First, the images were delighted to even out the lighting by increasing the brightness of the shadow areas and decreasing brightness in overexposed areas. Delighting helps achieve a more consistent quality of the final model that is easier to interpret [26]. Second, the sharpness of the images was increased using the deconvolution sharpening method. The modified parameters and their values are given in table 3.
Photogrammetric reconstruction
The 3D models of the rock-cuts were reconstructed using RealityCapture photogrammetry software. The processed drone images were imported and aligned using default settings. The resulting component consisted of 707 images. Next, the model was reconstructed in Normal settings and colorized. The point cloud was exported as a .xyz file and imported into CloudCompare for visual identification of the risks.
Mechanics
Visual identification of risks
The visual identification of risks was made with the help of identified fracture planes. Rock blocks were sought out where three or more rock joints meet. Plane-fitting and joint-tracing tools were used to extend the length of fracture surfaces to see if they could create a kinematically admissible block. The shape and size of the blocks were measured for further analysis. In addition to rockfall risks, the rock-cut was also inspected for toppling rock, sliding rock, wedges, and weakness zones.
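The plane-fitting step reduces to a least-squares fit over points picked around a candidate joint surface; a minimal version using the SVD is sketched below. This is the underlying computation only, not the specific CloudCompare tooling, and the near-planar test cloud is synthetic.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an N x 3 array of point-cloud coordinates.
    Returns the plane's unit normal and centroid; the normal carries the
    joint plane's orientation (dip direction/dip can be derived from it)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector of the smallest singular value is the plane normal
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid

# Hypothetical usage on points selected around a candidate joint surface
pts = np.random.default_rng(0).normal(size=(200, 3)) * [5.0, 5.0, 0.05]  # near-planar cloud
n, c = fit_plane(pts)
dip = np.degrees(np.arccos(abs(n[2])))   # dip angle relative to horizontal
print(f"normal = {n.round(3)}, dip = {dip:.1f} deg")
```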
Visually identified risks in the rock-cut
In total, eight geotechnical risks were visually observed, of which four (P1, P3, P5, P8) were rockfall risks and two were wedges (P2, P6), while P4 was the location of the rockfall that had already occurred, used for back-analysis. There is also one place where sliding could occur (P7) (figure 3 and table 4).
Kinematic rockfall simulations
Photogrammetry was used to measure the block size, and on the 2D cross-section extracted at each potential rockfall location, a kinematic rockfall analysis was conducted using the Rocscience RocFall software with 10 000 trajectories. The probability of detachment was not considered. The initial horizontal velocity was simulated using a normal distribution with a mean of 0.3 m/s and a standard deviation of 0.1 m/s. The material parameters for bedrock, soil and asphalt are shown in table 5. The resulting trajectory histograms are shown in table 6. The probability of failure (PoF) and the factor of safety (FoS) were then calculated from the histograms, and the results are shown in table 7.
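The Monte Carlo seeding of the 10 000 trajectories can be mimicked as below; RocFall's internal trajectory mechanics are not reproduced here, and clipping negative draws to zero is our assumption for keeping the launch velocities physical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_traj = 10_000

# Initial horizontal velocity: normal distribution, mean 0.3 m/s, s.d. 0.1 m/s
v0 = rng.normal(loc=0.3, scale=0.1, size=n_traj)
v0 = np.clip(v0, 0.0, None)   # keep velocities physical; the clipping is our assumption

print(f"{n_traj} launch velocities sampled: "
      f"mean = {v0.mean():.3f} m/s, s.d. = {v0.std():.3f} m/s")
```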
Discussion of the results
In the preceding remote inspection, with no associated site visit, the highest cross-section of the rock-cut received PoF = 54.7% and FoS = 1.16, and the nearest-to-road cross-section received PoF = 99.9% and FoS = 0.39. The site visit allows a much more accurate representation of the unevenness of the rock-cut, and the quality is sufficient to visually identify potential loose blocks. The unevenness causes bounces that may lead to the road area, as indicated in Table 3. As the locations of the potentially dangerous blocks have been indicated, the threat may be dealt with by either rock reinforcement or removal of the threat.
Considering the back-analysis of the rockfall that occurred (observation P4), a similar result is obtained, and most of the rock trajectories reach the lane closest to the rock-cut. Interestingly, the PoF is relatively low (10%) and the FoS is reasonably high (1.15). The damage coincided with the first nights of freezing temperatures, which suggests that the freeze-thaw phenomenon is one possible mechanism for why the rockfall occurred.
Conclusion
The proposed method creates photorealistic high-resolution 3D models, which allow safe visual inspection of rock-cuts to locate hazards, and digital measurements of the shape, orientation, and dimensions of the blocks identified as potentially unstable. Considering the rockfall hazard, four kinematically admissible blocks were selected for analysis using probabilistic kinematic analysis. The results are similar to the earlier preliminary remote results but more precise in detecting the potential rockfall locations. The rock surface geometry is considerably more accurate, enabling a more accurate rockfall trajectory analysis, as the bounces from the rock-cut can be analyzed. Simulated trajectories with bounces from the rock-cut were present in all of the analyzed cross-sections.
Debating San provenance and disappearance: Frontier violence and the assimilationist impulse of humanitarian imperialism
This article examines how ideals of humanitarian imperialism informed debate over the provenance and future of Cape San following the Second British Occupation of the Cape Colony. The discussion explores the plight of San along the Cape frontier and how their demise became a focal point in a trans-colonial exchange over the desirability of the incorporation of indigenes as British colonial subjects. Prominent humanitarian protagonists, such as John Philip, called for the integration of San as colonial subjects, owing to the supposed protection this would afford them. The humanitarian campaign for the extension of subjecthood over Cape San was argued on the grounds that it would fend off the devastating consequences of settler colonialism. The principle also applied to indigenous peoples in settler colonies across the expanding empire. This view was not without its detractors, who opposed humanitarian representations of settlers as rapacious and responsible for frontier conflicts. The article argues that the fate of Cape San held a more prominent place in early nineteenth-century contestations over settler identity, frontier relations, and the effectiveness of missions to ‘civilise’ indigenes than has been recognised.
Introduction
In 1808, Colonel Richard Collins, at the behest of the then Governor of the Cape Colony, Du Pre Alexander, Earl of Caledon, set out on a tour of inspection that would take him to the colony's frontiers. One of the key objectives of this undertaking was to investigate the conflict between colonists and Cape San 1 on the northeastern frontier. The region had been experiencing an upswing in violent conflict between Cape San and colonists that had persisted since the mid-eighteenth century. In the year prior to Collins's departure, the Tulbagh district had seen substantial livestock theft at the hands of Cape San. Hundreds of cattle and horses had been raided. Three colonists and 14 Khoekhoe herders had also been killed by San raiders in the early months of 1807. Of particular concern to the frontier farmers were the colonial government's restrictions on commandos, which had to apply for permission from the colonial authorities before being despatched in retaliation for San raids. 2

1. The terms San and Cape San are used interchangeably in this article to refer to the Cape's hunter-gatherers. The most common contemporary labels for the Cape's hunter-gatherer peoples were 'Bushmen' and 'Bosjesmen' (or similar variations). Given the derogatory tones of these terms, they are avoided for the most part, except when appearing in quotes.
2. N. Penn.

The scale of the conflict between the colonists and Cape San on the northeastern frontier had been radically reduced during the First British Occupation of the Cape Colony, from 1795 to 1803. This was largely due to measures introduced in 1798 by the then Governor, George Macartney. 3 These efforts were intended to forge a more conciliatory tone towards Cape San than had been commonplace under the Verenigde Oost-Indische Compagnie (VOC). However, Governor Macartney's more amicable approach failed to bring the conflict in the northeastern frontier zone to an end. 4 Yet the protectionist, humanitarian tenor of his ideas was to prove influential long beyond his years as Governor and well into the era of the Second British Occupation of the Cape, which commenced in 1806.
In relaying his findings and recommendations to Governor Caledon, Colonel Collins asserted that while Governor Macartney had had noble intentions towards the Cape San, the violent conflict that continued across the northeastern frontier could only be brought to an end if the government's actions were directed at what he regarded as 'the root of the evil'. For Collins, a change had to be effected in the 'habits and manners' of Cape San. He noted that while such change would 'be the work of time', it would also be 'worthy [of] the greatness of the British empire to rescue this unfortunate race from the deplorable state of barbarism to which they [had] been so long condemned'. 5 Contestations over Cape San, or 'Bushmen', identity and how they could and should be incorporated into colonial society were rife in the Cape Colony during the late eighteenth and early nineteenth centuries. Debates concerning the provenance of San, their usefulness to the colonial economy, how they could be reclaimed from 'savagery', and what their legal status ought to be, occurred among and between colonial officials, representatives of the evangelical-humanitarian effort, and Cape colonists. There were disagreements over the origins of the Cape San: were they a distinct ethno-linguistic group, or were they debased 'Hottentots', 6 reduced to hunting and gathering by the detrimental effects of settler colonialism? Some commentators continued to express the predominant eighteenth-century viewpoint that San were irreclaimable savages, of no value to Cape colonial society, while others praised the services of 'tame Bushmen' as shepherds, stock runners, waggon drivers and domestic servants. There were differing views on how Cape San could be persuaded to abandon their foraging subsistence and adopt a more sedentary mode while also being 'Christianised' and 'civilised'. Disagreements existed over the legal standing of San incorporated as indentured labourers. Many San servants in the northeastern frontier zone were commando captives. The British colonial authorities along with prominent humanitarians considered whether forced assimilation was a more desirable outcome for San in light of the exterminatory campaign that was waged against them during the closing decades of the eighteenth century, which amounted to genocide. 7 The Cape San's formidable resistance to the encroachment of settler stock-farmers along the northeastern frontier had been largely defeated by the time of the Second British Occupation. The commando-led programme of extermination during the late eighteenth century, coupled with the extensive loss of land and access to resources, meant that the San's ability to resist further colonial advances had been undermined. 8 Sporadic violence still occurred, such as that which necessitated Colonel Collins's tour of inspection. Many Cape San were also incorporated as forced labourers into the expanding stock-farming economy. 9 This was an important aspect of the decline of Cape San along the Cape's northeastern frontier during both the late eighteenth and early nineteenth centuries. From 1809 onwards, the British colonial authorities introduced legislative measures in order to regulate the employment and treatment of indigenous labourers. Given the propensity of commandos to take captive labourers, a substantial proportion of the labour force in the northeastern districts of the Cape Colony had an ambiguous legal status.
In an attempt to address this, it was deemed appropriate to incorporate San children into the colony's labour regime as apprentices, under the same regulatory framework as applied to Khoekhoe servants, or 'Hottentots'.
This article argues that the debate over the fate of the Cape San was influenced in large part by humanitarian-inspired concerns for the fate of San captives, especially children. The perceived vulnerability of San children in the context of the northeastern frontier zone was a significant factor in shaping humanitarian discourse about the Cape San and a desirable future for them. Yet, this has been a neglected theme in the related historiography on the Cape Colony's attitudes and policies towards Cape San. The article also illuminates the role of notable protagonists of the evangelical-humanitarian campaign, such as John Philip, in motivating for the incorporation of Cape San as 'Hottentots' owing to the subject status and concomitant protections and rights this would provide.
The following analysis proceeds with a discussion of the historical context of the Cape Colony's northeastern frontier zone in which interactions between San, settler and missionary unfolded in the early nineteenth century. The article then examines frontier trafficking in San children before highlighting the role of humanitarian ideas and ideals in shaping colonial attitudes towards Cape San. The final section advances the argument that the fate of Cape San featured prominently in humanitarian efforts to mould colonial policy towards indigenes and influenced calls for the extension of British protectionism. The ensuing debate over the fate of Cape San was of global significance, as scores of hunter-gatherer peoples found their ways of life undermined and often targeted for elimination in the context of Britain's expanding empire and the emergence of settler-colonial frontiers.
San, settler and missionary on the Cape frontier
As noted, San resistance to the advance of the trekboers 10 had been weakened by the time of the Second British Occupation of the Cape Colony in 1806. The systematic loss of land and access to resources over the course of the previous century limited their ability to mount a concerted, collective challenge to settler encroachment, such as occurred during the 'Bushman Wars' of the 1770s to 1790s. 11 For the advancing trekboers, the commando system proved an effective means of clearing the land for stock farming. These mounted posses, made up of frontier farmers and co-opted Khoesan 12 servants, were formidable enemies, inflicting a reign of violence and extermination on San. 13 Thousands of San were killed by commandos, both official and unofficial, during the late eighteenth century. It is likely that thousands of San were also captured by commandos during the same period. The majority of these captives were women and children. While the capture of San during raids on kraals was not necessarily the primary purpose of commandos, the dearth of labourers, especially enslaved labourers, in the frontier districts of the Cape Colony meant that pliable, forced labourers were in demand. 14 Commando captives were dispersed to farmers in need of labourers. Children in particular were sought after by frontier farmers, who regarded them as more amenable to assimilation and enforced servitude than adult San.

12. The label Khoesan is anachronistic. However, the term is appropriate for the period under discussion. By the late eighteenth and early nineteenth centuries, the pre-
In 1814 and 1816, two mission stations, Toornberg and Hephzibah respectively, were founded in the Seekoei River valley, a few days journey north of the frontier town of Graaff-Reinet. 17 While drawing in large numbers of San, the two missions were ordered to close in 1817 by the then Governor of the Cape Colony, Lord Charles Somerset. His order was in response to concerns over the large number of San assembled at the two sites (estimated to have been 1 700 at the time). 18 Due to the history of violent conflict on the northeastern frontier between San and the colonists, the concentration of so many San at the missions was considered a danger to frontier farmers. There was also mounting hostility between the frontier settlers and the missionaries over labour shortages. 19 The farmers tended to tolerate the presence of the missionaries while they were ministering to those San considered 'wild Bushmen'. The situation became untenable for the British colonial authorities when the missionaries at Toornberg and Hephzibah were accused of harbouring San servants who had fled from the service of farmers. It was largely in response to these accusations that the government felt obliged to act.
In 1822, there was a renewed effort to establish a mission among the San, this time at an institution whose namesake was dedicated to the newly appointed superintendent of the LMS in southern Africa, John Philip. 20 Philippolis, however, was to become part of a more ambitious scheme to consolidate the Griquas into captaincies officially recognised by the Cape colonial government. 21 By 1826, four years after its founding, the San residents at the mission were being squeezed out by the Griquas. 22 Two year later, in response to the marginalisation of San at Philippolis, the missionary assistant, James Clark, established a mission near the confluence of the Gariep and Caledon Rivers, named Bushman Station. 23 Like Toornberg and Hephzibah, Bushman Station met with some initial success. A population of approximately 100 San was assembled at the site within a year of its founding and the prospects for the mission were promising, aside from the detrimental effects of a prolonged drought. 24 Still, increasing instances of Boer incursions into the territory meant that the mission's longevity was precarious at best. Dissatisfied with Clark's efforts, the mission was released by the LMS to the Paris Evangelical Missionary Society in 1833. Shortly thereafter, the mission re-directed its focus towards the BaThlaping.
Apart from the subsequent establishment of a San out-station at the Kat River Settlement -also referred to as Bushman Station and temporarily under the direction of James Read, Sr. -the efforts of the LMS among San of the northeastern frontier came to an uneventful end with the release of Clark's mission in 1833. 25 In the end, the LMS's track record among San was disappointing, if assessed according to missionary criteria. It appears San were reluctant to adopt a fully-fledged sedentary mode of subsistence, which was a requirement of mission residence, even though there were instances of a syncretistic acculturation to the missionary model, exhibited in the embracing of agro-pastoralism without a complete abandonment of hunting and gathering. 26 By the 1820s and 1830s, remnant San communities were an inconvenient reminder to colonial society of the conflict that had plagued the northeastern frontier for much of the late eighteenth and early nineteenth centuries. The fate of surviving San was to become an important point of contention in a contest over the legitimacy and desirability of expanding settlement in British territories. The San encounter with settler colonialism at the Cape bore striking similarities to that of indigenes facing processes of extermination and elimination in other settler colonies, such as New South Wales and Van Dieman's Land, and in North America during the same period. The San's demise was of particular concern to the evangelical-humanitarian lobby at the Cape, which during the 1820s and 1830s held considerable clout in the corridors of power in Cape Town and London. For humanitarian figures such as John Philip, it was crucial for the legitimacy of the LMS in the colony, and indeed in Britain, to establish that the disappointments of the mission effort to 'Christianise' San could not be based on anything distinctly San, but on other influences, notably, the violent excesses common to settler colonialism.
The debate that developed during the 1820s and 1830s concerning the prospects of aboriginal peoples converting to Christianity and adopting its markers of 'civilisation' was contested among missionary, settler and British colonial-governmental circles. Notwithstanding their participation in European colonialism, missionaries were convinced of their moral authority in settler colonies. However, they occupied an anomalous position in settler societies. Like other colonists, they also sought to attain authority over indigenous peoples, yet they attempted simultaneously to fend off the most destructive forces of settler colonialism. The vibrant missionary movement in the Cape Colony of the early nineteenth century, spearheaded by the representatives of the LMS, became embroiled in defending the humanity of the San, and importantly, in advocating for the recognition of San subjecthood.
The narratives recounting the San's demise that were publicised by key missionary figures and widely disseminated in the official publications of the LMS endeavoured to re-cast these 'savage' figures in a mould recognisable to a British audience that had become increasingly sensitive to discourses of liberty in the aftermath of the abolitionist campaign. Contrary to settler notions of San savagery, protagonists such as Philip drew on the LMS's links to trans-colonial networks to garner metropolitan sympathy for the plight of Cape San. The role of the commandos and frontier farmers in effecting the elimination of Cape San, through both killing and the enforced servitude of captives, became a focus of Philip's criticisms and of his argument for the further extension of the Crown's protection. As the following section establishes, Philip embraced a Cape humanitarian tradition that supported San assimilation as opposed to San independence. However, the means to that end, as well as the nature of humanitarian intervention, were to be contested among those claiming to save the San from almost certain extinction.
San captives and the assimilationist impulse of humanitarian imperialism
From the time of its arrival at the Cape, the LMS focused its efforts on the Khoesan, or 'Hottentots' in contemporary colonial parlance. 27 The early work of Johannes van der Kemp and James Read, Sr. at Bethelsdorp set a trend in this regard. Philip took up this mantle following his appointment as superintendent of the LMS in southern Africa in 1819. The LMS was often at pains to stress the difference between the legal status of 'Hottentots' and that of slaves. Though prior to 1828 and the passage of Ordinance 50, which granted equal civil rights to Khoesan, 'Hottentots' were bound by the coercive clauses of the Caledon Code, Philip was adamant that the 'Hottentots' were a free people. 28 References to the freedom of the 'Hottentots' by representatives of the LMS should, however, not be confused with a more modern understanding of the term. Legally speaking, 'Hottentots' were free in that they were not slaves. However, 'Hottentots' fell under the authority of a colonial power and as such, their social and political 'freedom' was bound up with their subjecthood.
The notion of subjecthood relates to ideas and expressions of loyalty, primarily to the Crown, but also, by extension, to the colonial state. Subjecthood also invoked a sense of belonging to a British civic polity founded upon the imperial connection between metropole and colony. The political status of being subjects of the Crown implied a relationship of reciprocity between those who were subjects and the authority to which they were subjected. Though subjecthood existed as a result of colonial conquest and imposition, its reciprocal nature meant that it was appropriated to various ends by the subjects, including indigenous subjects. 29 For indigenous subjects, subjecthood had the potential to raise expectations of equality and protection, however unrealistic such expectations were in a colonial setting grounded in a racial hierarchy. At the Cape Colony, Khoesan subjecthood became entangled with a language of civil rights during the early to mid-nineteenth century.
Following the Second British Occupation of the Cape Colony, the British colonial administration set about introducing a more formal and codified labour recruitment system than had existed previously. The two most significant pieces of legislation to affect the Khoesan were the Caledon Code of 1809 and the Apprenticeship Law of 1812. The former law essentially coerced all Khoesan living in the colony into the service of the colonists, though it did stipulate conditions of contracting and remuneration, and provided legal recourse to Khoesan who were treated harshly or compensated unfairly. 30 The latter law allowed farmers to apprentice, or indenture, 'Hottentot' children who had been born to parents in their service. This new, codified labour regime of apprenticeship was to have important repercussions for the assimilation of San captives, because these regulations, which were intended to apply to 'Hottentot' children, were also applied to San children, including those captured by commandos. While apprenticeship placed certain obligations upon masters for the care and treatment of their servants, the system of child apprenticeship also made it possible for farmers to confound the status of captive San children with that of 'Hottentot' children. For the colonial authorities, the forced assimilation of San children as 'Hottentots' was palatable. This was regarded as a suitable humanitarian intervention. The primary concern in the frontier districts was that San child captives were being enslaved and traded among the colonists.
These sentiments were apparent in 1817, when the landdrost of Graaff-Reinet district, Andries Stockenström, alerted the then Governor, Lord Charles Somerset, to an 'ancient custom' on the frontier. 31 Stockenström noted that 'Bosjesmen children' were being regularly 'transferred from one to another' among the frontier farmers and that payments were 'secretly taken'. He contended that many San children were being carried into the inner districts of the colony and 'passed off as orphans'. For Stockenström, the practice amounted to a 'traffic' in captive and abducted San children. 32 The landdrost was also at pains to stress that the San's situation was so dire that many parents were forced to give up their children to the farmers as they could not provide for their survival. 33 Stockenström revealed an unlikely sympathy for the plight of Cape San. He was an advocate for the establishment of missions. He was also critical of the apprenticeship system, believing that it was open to abuse and 'capable of being made most oppressive to the Hottentot race'. 34 Nonetheless, he considered it more desirable for San children to come under the legal purview of apprenticeship than to risk them becoming de facto slaves.
In response to the alarm raised by Stockenström, Somerset issued a proclamation intended to deal with the apparent enslaving of San children along the northeastern frontier. However, the Governor's legislation did not prohibit the procurement of San children. Rather, it stipulated a series of principles to regulate San child apprenticeship. In doing so, Somerset's 1817 proclamation provided a legal framework for the procurement and apprenticeship of San children and brought the practice in line with the principles of the Apprenticeship Law that applied to 'Hottentots'. In 1822, Somerset reiterated to Stockenström the importance of exercising oversight over the practice of retaining San women and children by commandos and frontier farmers. He insisted that it 'ought never to take place without the greatest precaution, for the future treatment of these unfortunates, and the prevention of the possibility of their merging into the class of slaves'. 35 While the 1817 order declared that San children should be placed with 'respectable and humane' colonists and subjected to 'good treatment' so as to be 'tamed' and raised as assimilated labourers, it was impossible to enforce fully. It was also unlikely to be implemented and adhered to, given the history and scale of the practice of San child removals along the northeastern frontier and the complicity of local veldkornets and veldwagtmeesters. Nonetheless, the humanitarian interventions enacted and supported by Somerset and Stockenström reveal the colonial administration's desire to see Cape San legally incorporated and assimilated as 'Hottentots'. In theory, if not in practice, San were to be subjected to colonial jurisdiction and oversight. This was regarded as being in their best interests as vulnerable indigenes. As will be discussed in the following section, this approach to the fate of Cape San was remarkably similar to that espoused by evangelical-humanitarians in the 1820s and 1830s, who also promoted protectionism and San subjecthood.
Protectionism, subjecthood and the debate over the fate of Cape San
The failure of the mission project to Cape San amid their general demise has been regarded as insignificant in the overall standing of the LMS in the Cape Colony. 36 Yet, a closer reading of Philip's writings and publications reveals that these failed missions played a more prominent role in shaping contemporary humanitarian discourse at the Cape and in Britain than has been recognised. This was especially so with regard to the effectiveness of the missionary cause to re-mould and 'civilise' indigenes. The debate on the fate of the San, which had begun during the First British Occupation and which had been influenced in large measure by concerns over San child confiscations and transfers during Somerset's governorship, continued into the 1820s and 1830s. Philip was a prominent humanitarian campaigner during this time. His public profile increased significantly in the years following his appointment at the helm of the LMS's endeavours in southern Africa. Philip was connected to an expansive network that included the metropole and several colonial sites. As such, he was conscious of the broader ramifications of the LMS's successes and failures in southern Africa for the reputation of the international mission endeavour. The LMS was a global organisation and Philip, like many other missionaries in other fields, was aware that he was working in an imperial setting that spanned continents. Missionaries, settlers, colonial authorities, metropolitan Britons and, indeed, indigenous peoples, all constituted different, connected audiences in these transcolonial exchanges of 'knowledge' and debate. Cape settlers were acutely conscious of the imperial reach that missionary publications had in Britain. Keeping in step with the LMS, settlers at the Cape were equally determined to craft their own counter-discourse, calling into question the justifications for missionary labours and how the treatment of the Cape's indigenous peoples was being misrepresented to metropolitan audiences. The Cape's British settlers were especially concerned about the way they were being portrayed as deviant, backward Britons by some evangelical-humanitarians. 37 Fears over Khoesan labour rebellion and Xhosa retaliation for encroachment on their lands meant that Eastern Cape settlers were sensitive to the perceived lack of sympathy on the part of the metropole. 38 Networks of communication between metropole and colony, as well as between different settler colonies, were central to the construction and defence of settler and humanitarian arguments, and in turn, identities.
By the late 1820s, the LMS's greatest prospects for creating a viable, Christian peasantry lay among the Griquas of the Transgariep and the Khoekhoe of the Kat River Settlement. By this time, missions to San were being sidelined in favour of these more grandiose schemes. The Griqua had also launched commandos from Griquatown and Philippolis against San in retaliation for stock theft. Even so, some in the LMS, such as the missionary at Griquatown, Henry Helm, 'proposed to the Griquas to incorporate the Bushmans [sic] with themselves', especially given that the Griqua 'wanted them to be their servants'. 39 By the early 1830s, Philip wanted to see the Griqua 'incorporated into the colony on the same terms as the inhabitants of the Kat River Settlement.' 40 This was one of the clearest examples of Philip's humanitarian-imperialist stance. The call for the Crown's formal annexation of territories on the margins of settler colonies was a trademark of the humanitarian-imperialist agenda across the expanding empire. The Aborigines' Protection Society, which was founded in 1837 following the completion of the investigations of the House of Commons Select Committee on Aborigines, actively campaigned for the annexation of territories that were susceptible to settler encroachments. 41 The concept of the 'protectorate' emerged as a result. This led to the further conquest of aboriginal lands in the name of protecting their occupants from colonists and resulted in the incorporation of yet more indigenous subjects.
At the Cape, the fate of the San continued to be debated in reference to the colony's future relations with its indigenous neighbours and the opposing humanitarian and settler visions of the territory's expansion. The so-called 'Bushmen Wars' some sixty years earlier became a crucial factor in the public contest over the legitimacy of settler and 'Hottentot' identities that occurred in the 1830s. 42 The colony's memory and settler identity were tested in a propaganda war during this time, at the centre of which was Philip. The debate was set in motion in 1828 when Philip published his widely read Researches in South Africa. The two-volume work caused a stir at the Cape. A libel suit brought against Philip by the landdrost of the district of Somerset was successful, and Philip had to rely on the financial support of evangelicals in Britain to spare him from financial ruin. 43 Philip's unpopularity with the Cape's settlers reached its zenith. The Cape Colony's historical interactions with the San were key in the ensuing debate over the legitimacy of his arguments in the Researches.
While the Eastern Cape frontier wars with the Xhosa were the prominent flashpoints of the period, the earlier conflict with the San on the northeastern frontier was contested in the public domain owing to the important historical precedent it held for future frontier relations. The dispute was aired in public by the two leading newspapers in the Cape Colony: the settler-backed Graham's Town Journal edited by the 1820 settler, Robert Godlonton, and the humanitarian mouthpiece South African Commercial Advertiser, edited by John Philip's son-in-law, John Fairbairn. An emerging, assertive settler identity, especially in the Eastern Cape, informed the debate as settler apologists sought to defend and laud the settler character in response to humanitarian accusations. 44 Godlonton was a supporter of settler interests on the frontier and his rebukes of Philip and Stockenström were scathing, with Philip labelled the 'Reverend agitator'. 45 Philip and Stockenström were also depicted as hypocrites, because they supported the establishment of the Kat River Settlement on what used to be Xhosa land, while condemning white settlers for occupying similar lands. 46 While the Xhosa were a very different enemy to San, owing to their military strength and socio-political organisation, the earlier, and ongoing, plight of Cape San was emphasised by the humanitarian lobby as part of their transcolonial alarm at the disastrous consequences of settler colonialism for indigenous populations.
In light of modern, ongoing disagreements over the cultural distinctiveness of San and Khoekhoe, it is necessary to note that the humanitarian use of the appellation 'Bushmen' in this contest did not necessarily imply a distinct racial or ethnic identity. 47 Rather, it was typically a label applied to those who existed on the margins of colonial society, driven there by the advance of the trekboers. Though the usage of the colonial labels 'Hottentot' and 'Bushmen' in his writings was far from consistent, it is clear that Philip regarded 'Bushmen' as dispossessed 'Hottentots'. For Philip, the vilified, 'thievish' 'Bushmen' were a colonial creation. He tended to portray them as the manifestation of the evils of unrestrained European settler colonialism. Philip reiterated this when he claimed that 'the most miserable specimens of the Bushmen race are to be found amongst the frontier boors [sic], or in the immediate vicinity of the Colony', while 'many of the more remote hordes, still remaining in a state of comparative independence, are much superior in stature, and have a vivacity and cheerfulness in their countenances which form a striking contrast with the others'. 48 While settler accusations against the mission stations as safe havens for vagrants and as islands of indolence continued, Philip claimed in Researches that missionary efforts had been obstructed by the laws of the colony and the actions of public officials, such as the veldkornets and veldwagtmeesters. He portrayed the San as victims of unchecked colonial advances, which the missionaries were unable to resist. The amount of attention Philip granted to the closure of the San missions at Toornberg and Hephzibah in Researches reveals the extent to which this episode, which had occurred prior to his arrival at the Cape, concerned him. 49 Philip endeavoured to situate the plight of the San within a trans-colonial narrative that highlighted the devastating consequences of settler colonialism for indigenes in the emerging British empire. The primary concern for the humanitarian lobby, of which Philip was a prominent representative both locally and internationally, was the debasement of aboriginal peoples as a result of settler colonialism. It was claimed that unchecked interactions with European settlers resulted in social ills among indigenous peoples, such as alcoholism and vagabondism. 52 The pristine, or 'noble savage', was thus reduced to a desperate life of wandering, robbery and addiction. Philip framed the San as an unfortunate product of settler colonial expansion. This view held that if it were not for the European settlers, San would have been able to return to their former status as 'Hottentots'. Further assimilation into Christian subjects could then occur under the guidance of missionaries. As such, by the 1830s the debate surrounding San provenance and incorporation was shaped in large measure by evangelical-humanitarian concern for the recognition of San subjecthood, along with its attendant rights and protection. Though Philip was often critical of the British Government, his goal was not the curtailing of imperialism, but the extension of it, as long as it was humanitarian imperialism. Central to this philosophy was the argument for the allocation of civil rights to indigenous subjects.
In Researches, Philip called upon the British Government to 'do justice to the aborigines of the country, by imparting to them liberal institutions, and just and equal laws'. 53 Though the effects of settler colonialism on the Cape's indigenes were regrettable, he believed that 'Britain [could] redeem her character' by ensuring that the 'acknowledged civil rights' of the 'Hottentots' were fully adhered to. 54 He urged the British Government 'to declare to the world whether those rights [were] to be realised to them', stressing that 'the Hottentots, despairing of help from every other quarter, now look to the justice and humanity of England for deliverance.' 55 Philip believed he was merely asking the British Government to live up to the expectations it had created in the minds of the Cape's Khoekhoe and San: In the proclamations of the colonial government, in the official documents of the government at home […] the Hottentots are, indeed, represented as a free people, free labourers, and British subjects: but it will be seen […] that their real condition is that of the most abject and wretched slavery. 56 As noted, Philip discussed at length the closure of the San missions at Toornberg and Hephzibah in Researches. He argued that their closure had been due to 'false representations of the farmers' at a time 'when [a] traffic in children was going on'. 57 He lamented the loss of these missions as the resident San were said to have shown 'the greatest readiness to lay aside their savage life and become useful members of religious and civil society'. 58 In keeping with the humanitarian imperial underpinning of mission ideals, Philip avowed that mission stations were 'the channels […] by which the ideas of order, of duty, of humanity, and of justice, flow through the different ranks of the community'. 59 In addition to their Christian influence, Philip was also of the view that missions were crucial for the establishment of a civic identity among 'Hottentots'.
Philip and the directors of the LMS in London stressed the need for the British Government to acknowledge that scores of 'Hottentots' had begun to submit to the authority of the monarchy it represented. While such representations were made to bolster the position and image of the LMS at the Cape, they also point towards the emphasis that Philip and his supporters placed on 'Hottentot' subjecthood. For example, in reference to the 'Hottentots' of Uitenhage who had become associated with the mission at Bethelsdorp, Philip insisted that 'those people who had formerly been the terror of that District of the Colony' had become 'steadily attached to the British Government'. 60 The Directors of the LMS agreed, declaring that the 'Hottentots' of Bethelsdorp, through service to the 'District and the Colony at large', had 'powerfully recommended themselves to the paternal care and protection of His Majesty's Government'. 61 As far as the San were concerned, Philip held the view that missions provided the best means to return them to the status of 'Hottentots'. It is apparent that he was not the only prominent missionary to consider the San as despoiled 'Hottentots'. James Read, Sr. held the same view. He suggested that some of the tensions between San servants and farmers stemmed from the San's misunderstanding of colonial law as it pertained to 'Hottentots'. Read, Sr. suggested that '[t]hey have no idea of the laws made for Hottentots, but think themselves at liberty to return to their kraals at their pleasure, and to take their children back when they please.' 62 He was also 'fully convinced that [San and Khoekhoe] were one and the same nation' and that 'no visible difference [could] be seen in their persons, and their manner of living customs'. 63 In advocating for the extension of the laws as they applied to 'Hottentots' to the San, Philip insisted that 'the liberty we ask is not an exemption from the law, but its protection'. He continued, 'we simply ask that the colonists, and the different classes of the natives, should have the same civil rights granted to them.' 64 For Philip and Read, Sr., as well as other equally enthusiastic humanitarian imperialists, it was desirable for the San to become 'Hottentots' because they could then claim British subjecthood.
The extent of Philip's influence on this debate was particularly evident during the investigations of the House of Commons Select Committee on Aborigines, convened in 1836. The establishment of this committee marked the apogee of humanitarian sway across the British empire. Philip's Researches was an important point of reference for the committee, which interviewed 46 witnesses in total, 29 of whom had had direct, personal experiences at the Cape Colony. 65 It was following the publication of the report of the Select Committee on Aborigines in 1837 that the disastrous effects of settler colonialism on aboriginal peoples found trans-colonial significance. As James Heartfield has emphasised, the report sent 'shockwaves across the Empire that were felt in Sydney, Cape Town, and Hudson's Bay'. 66 The testimony to the committee was published along with a number of appendices submitted by the chairman of the committee, Thomas Fowell Buxton. These included reports from Van Diemen's Land, New South Wales and New Zealand, all recounting the miserable state of the aboriginal inhabitants of these territories due to European settlement and the effects of land dispossession. 67 Settler treatment of indigenes stood as a glaring indictment of the British Government and its lack of control over its emerging settler societies, even those inherited from previous European administrations, such as the VOC. The demise of the Cape's indigenous peoples at the hands of wanton settler provocation was a conclusion the Select Committee drew in the opening paragraph of its findings on South Africa. 68 Philip's testimony and submissions were particularly impactful, such that the committee adopted his viewpoints on the 'Bushmen'/'Hottentot' question. The committee's published report declared that the aborigines of South Africa could be 'classed under two distinct races', namely the 'Hottentots' and the Bantu. Furthermore, the 'Hottentots' were said to be 'divided into two branches, the "tame" or colonial Hottentots, and the wild Hottentots or Bushmen.' 69 After 1806 and the advent of the Second British Occupation of the Cape, the northeastern frontier was both less threatening and less economically important. However, the events that had occurred along this frontier in the late eighteenth century were to be a focal point in the debates inspired by the evangelical-humanitarian lobby during the 1820s and 1830s. As discussed, the humanitarian campaign promoted the rights of indigenes as colonial subjects and campaigned for the protection of indigenous peoples. These priorities provided powerful ideological validation for the extension of British imperial rule. Yet, at the same time, European settlers were cast in disparaging terms. Humanitarians made allegations of rapacious misconduct and cruelty towards colonial indigenes by European settlers. Alan Lester has shown how the ensuing contest over settler identity in imperial nodes such as New South Wales, New Zealand and the Cape Colony resulted in a struggle 'over the nature of Britishness itself'. 70 The combined indictment of Philip's Researches and the Report of the Select Committee on Aborigines upon settler respectability did not go unchallenged at the Cape. As Andrew Bank has argued, '[a]scendant Cape liberalism prompted Dutch settlers to defend their national character' and to 'actively construct a specifically colonial history and identity' in response. 71 This colonial history and identity was invented within the context of an ongoing, violent frontier conflict with the Xhosa on the eastern margins of the Cape Colony, and its protagonists sought to refute the accusations Philip had made concerning the frontier conflict with the San of the late eighteenth century.
At the forefront of this endeavour was Donald Moodie, a former lieutenant in the Royal Navy who became a prominent Cape settler. In 1828, he was appointed Clerk of the Peace in the District of Albany. 72 His compilation of documents relating to the history of the Cape Colony (published between 1838 and 1841 as The Record, or A Series of Official Papers Relative to the Condition and Treatment of the Native Tribes of South Africa) was intended to serve as a rebuttal of Philip's arguments in Researches and the Report of the Select Committee on Aborigines. With regard to the investigations of the Select Committee, Moodie maintained that much of the evidence presented was 'dependent as much upon memory as upon political feeling'. 73 Moodie's own 'history' of the Cape disputed both the allegations of settler atrocities perpetrated against San during the late eighteenth century and the claim that San were a colonial creation. Moodie rejected the claim that San had 'descended from the pastoral to the hunting state' and that this change was a consequence of European oppression. 74 Of particular controversy was Philip's claim that in 1774 the VOC's Council of Policy had issued an extermination order against the 'Bushmen'. Moodie contested this assertion, having failed to find a copy of the order in the archives. The implication for Philip was that he had fabricated the accusation. Moodie publicly questioned whether Philip had made a mistake or if he had invented the extirpation order so as 'to attain an ambitious object, by ministering to the morbid sentimentality of a weak and most mistaken set of men in the mother country'. 75 Philip had in fact got the date wrong, as the extermination order was issued in 1777. 76 Apart from prompting the earliest debate concerning the history of the Cape Colony, this episode is significant given the centrality of the fate of Cape San. Philip was accusing the Dutch settlers of having committed horrendous acts against San during the course of the late eighteenth century. This was anathema to white settler consciousness and respectability at a time when Cape settler identity was becoming increasingly assertive. Moodie was at the helm of a settler-inspired discourse that rejected the notion that relations between the Cape Colony and the Xhosa were to be viewed with the lamentable precedent set by the colony's interactions with the San in mind. During the 1820s and 1830s, contests over settler and indigenous identities, inspired by competing histories of previous events on the frontier, were relayed with much enthusiasm to the metropole. This was done with the aim of influencing public sentiments in Britain and its colonies with respect to the impact of settler colonialism on indigenes.
The 1840s would witness the beginning of 'a shift in the discursive terrain', as 'an increasing turn to the language of race to explain and justify the inequalities and persistent differences between peoples' started to take hold in Britain. 81 Andrew Bank has argued that the initial decline of the civilising mission during the course of the 1840s was influenced to a large extent by the Frontier Wars in the Eastern Cape, especially Hintza's War of 1834-5 and the War of the Axe of 1846-7. Historians have tended to focus on later events, such as the Indian Mutiny in 1857 and the Morant Bay Rebellion in Jamaica in 1865, in search of reasons for why 'the age of humanitarianism' was replaced by an 'age of imperialism' during the latter half of the nineteenth century. 82 Bank suggests that the Xhosa Frontier Wars were an earlier, significant precursor in the decline of humanitarian imperialism across the empire. 'Cape liberalism was thrown into crisis' by these wars and even missionaries and humanitarian supporters became disillusioned with the prospects of assimilation. 83 In 1848, William Elliot, a missionary and colleague of Philip, argued that the LMS had arrived at a crucial juncture in the Cape Colony and required a complete change in its management and direction. Philip was quick to respond to the challenge and to reassure his British audience. He continued to espouse the value of humanitarian imperialism as follows: The conversion of so many individuals from among a people supposed to be the lowest of the human race, whose claims to be regarded as of the same stock with the rest of mankind, had been long denied and practically rejected; their elevation from savage to civilised habits, and the education of their children, and the deliverance of their whole nation from the fate of the aborigines in so many other countries, when seized by Europeans, and the effect which all this had on the condition of the coloured people generally, not only in South Africa, but throughout the British dominions, are known to all the world. 84
Conclusion
Those in favour of a more assertive humanitarian imperialism endorsed the assimilation of San as colonial subjects and labourers under the purview of colonial law in order to prevent their destruction and eventual extinction at the hands of settler colonialism. The provenance and incorporation of San captives, especially children, was a focal point in this debate during the governorship of Lord Charles Somerset. Following the arrival of John Philip in the Cape Colony, the ongoing plight of the San continued to influence wider contestations about indigenous subjecthood and the role of missions in promoting assimilation. From a humanitarian perspective, colonial, or 'tame', San could claim British subjecthood and the supposed protection that was meant to accompany it, while 'wild', extra-colonial San could not. Subjecthood stood for protection and was regarded as being in the best interests of Cape San. This article has argued that the subsuming of San, especially San children, under the category 'Hottentot' was welcomed by key humanitarian figures following the Second British Occupation. Furthermore, this issue factored into a trans-colonial exchange of ideas about the desirability of humanitarian imperialism and the preferred outcomes for indigenous peoples facing overlapping processes of marginalisation, transformation and extermination. Indeed, the debate over the provenance and disappearance of Cape San featured prominently in a global dialogue about the impact of settler colonialism on indigenes. This was most apparent in the report of the Select Committee on Aborigines. Though the committee did not have our modern terminology at its disposal, it was raising the alarm about processes we would describe as genocidal and drawing insightful links with the role of settler colonialism in the steady erosion of the means of survival of hunter-gatherer peoples. As settler colonies became more established in the nineteenth century and settlers more assertive in expanding frontiers and advocating settler rights, so the displacement and demise of indigenes intensified. 85 The prominent theorist of settler colonialism Lorenzo Veracini has noted that settler colonialism is not always genocidal. However, this is often because settlers fail to eliminate an indigenous group completely, rather than because they do not try. 86 In the Cape Colony of the early nineteenth century, the extinction of San was widely anticipated, including by those who were appalled by the prospect. The debate over the imminent disappearance of San and possible ways to curtail it -while still facilitating San incorporation into the colonial economy as labourers -resulted in diverse views and speculations over San provenance. As has been shown, San provenance was central to the debate around San disappearance, for both had a bearing on the settler character at the Cape. And as with other emerging settler colonies in other parts of Britain's advancing empire, the treatment of vulnerable indigenous peoples was a powerful symbolic yardstick by which the settler character was judged, for metropole, colonist and humanitarian-imperialist alike.
PODILLIA CHURCH HISTORICAL AND ARCHAEOLOGICAL SOCIETY IN THE STUDY OF ARCHAEOLOGICAL HERITAGE
The purpose of the research paper is to analyze the role of Podillia Church Historical and Archaeological Society (1865-1920) in the study of archaeological heritage and to determine its significance in the overall heritage of its scientific achievements. Scientific novelty. After studying the documentary sources, it has been found that
INTRODUCTION
In 1904, a postcard with the stamp of the city of Washington was received in Kamianets-Podilskyi. The Library of Congress of the United States, while preparing the World Guide to Scientific Associations and Learned Societies, sent a request for detailed information in order to include Podillia Church Historical and Archaeological Society (hereinafter -the Society) on the list. The Americans were interested in the details of the organizational structure of the Society, its statutory goals, the periodicity of publications and the possible presence of special (thematic) issues, the funds of historical sources at the disposal of the Society, and the nature of the funding of researchers' work 1. That, without a doubt, was evidence of a decent level of scientific achievements, high authority in academic circles, and a good reputation regarding the organization of activities of Podillia Society. Conclusions of a similar content are also confirmed by an appeal to the Society from the Imperial Academy of Sciences in St. Petersburg dated January 17, 1907, with a request to share books from the collected holdings for the establishment of a library of Russian history in Rome 2, as well as a request from Kyiv Handicraft Society dated October 21, 1908, for participation in the arrangement of an exhibition and the provision of appropriate samples of household items from the funds of Podillia Society 3.
It was the time of the highest level of activity in the work of Podillia Church Historical and Archaeological Society, but the beginning of the 20th century was marked by upheavals (revolutions, wars, etc.) that did not contribute to scientific work. Therefore, the activity of the Society gradually faded away, wound down, and eventually ceased completely as early as 1920, leaving behind a rich legacy of historical objects (including various items of material culture) and scientific research works as a result of more than half a century of activity, which began in 1865 (at the time of its establishment, it was called the Committee for Historical and Statistical Description of Podillia Eparchy).
At the same time, one of the noticeable components of the collecting, protecting, and scientific work of the Society was the archaeological heritage, with which our research deals. Therefore, the purpose of the research paper is to analyze the role of Podillia Church Historical and Archaeological Society (1865-1920) in the study of archaeological heritage and to determine its significance in the overall heritage of its scientific achievements.
LITERATURE REVIEW
The achievements of Podillia Society were highly esteemed already in the days of its activity, but a detailed study of its work and active figures started only with the collapse of the Soviet system and with the gaining of Ukraine's independence. In particular, at the 4th Republican Scientific Conference on Historical Local Lore in 1989, A. Zinchenko, an expert in the local history of Podillia, presented a general overview of the Society's activities from the time of its foundation to its closure, and also made a brief analysis of twelve issues of the Society's (Committee's) Proceedings 4.
FORMATION AND MAIN STAGES OF THE SOCIETY'S DEVELOPMENT
In 1834, a hubernia statistical committee was established in Kamianets-Podilskyi, which began collecting and accumulating various information about the life of Podillia region. In 1838, the publication of 'Podolskie Hubernskiie Vedomosti' was started, where, among other things, sketches on the history of the region were sometimes published.
The next step towards the establishment of Podillia regional history center was the initiation of the 'Podolskie Eparchialnyie Vedomosti' (PEV) publication in January 1862, which was supposed to publish historical and statistical descriptions of churches and monasteries, individual stories from the history of the church, biographies of prominent people, ethnographic information, etc. 13 Leading experts in local history, priests of Podillia -Mykhailo Orlovskyi, Moisei Doronovych, Pavlo Troitskyi, and others -rallied around the editorial board of the PEV. So even before the establishment of an organizing center of researchers, that is, during 1862-1865, 32 historical and statistical descriptions and sketches on the history of Podillia towns and villages, as well as on the history of local churches, were published on the pages of the PEV 14.
It is reasonable to assume that it was this research and journalistic activity of local priests that led to the emergence of the idea of organizing them and directing their activities in the 'proper direction'. On July 8, 1865, the Committee was established. According to the 'Committee's Action Plan', it was expected that its members would check, correct, and bring to a certain standard (in fact, ready for printing) historical and statistical descriptions of churches and monasteries, which were supposed to be prepared by priests and reverend fathers of monasteries on the ground 17. In order to organize the work more effectively, the members of the Committee (initially 20 people) shared among themselves the responsibility for processing materials that were to come from the povits of the hubernia.
However, the materials from the field arrived extremely slowly and very often, owing to perfunctory execution and the incompetence of the priests, had no real informative value. According to the report, over three years, as of September 1, 1868, 774 descriptions were received from the entire eparchy, but only 231 of them were considered valuable by the members of the Committee and processed for publication 18.
In 1874, Feohnost, Bishop of Podillia, ordered the establishment of a special printed publication entitled 'Proceedings of the Committee for the Historical and Statistical Description of Podillia Eparchy'. The pages of the 'Proceedings' were opened for the publication of historical descriptions of churches and monasteries, articles on the general history of the Church in Podillia, as well as for the publication of important documents that were expected to be found in the course of working with the consistory's funds 19.
Thus, during the 1880s, the activity of the Committee was characterized by an in-depth study of historical sources and the subsequent accumulation of materials for writing generalized works.
A milestone event was the establishment by the Committee of the Repository for Ancient Objects in 1890, which consisted of three parts -the library (the task of which was to collect literature in all languages), the archive (for the collection of manuscripts and documentary materials associated with the history of Podillia, as well as the paperwork and records of the Committee itself), and the museum (for the storage of material objects of the past, or their copies). The second paragraph of the Society's charter (approved by the Synod on October 14-24, 1894) emphasized that the established Repository for Ancient Objects would collect items contributing to familiarity with the history of the region 20.
In the course of the 1890s, the work of Podillia researchers became significantly more intensive, their authority in the scientific world increased, and, by the nature of its activity, the Committee acquired all the characteristics of a scientific research center. Regular meetings of committee members were held, the work was constantly planned, and reports on its implementation were heard. The existence of its own print periodicals -the 'unofficial part' represented by the PEV and the 'Proceedings' -made it possible to publish the results of research work without delay.
The success of the activities of Podillia researchers intensified interest and constantly expanded the circle of new enthusiasts who joined the work on local history problems. So, if the first members of the Committee numbered 20 people 21, and by 1883 their number had even decreased to 7 people 22, then by the end of 1890 there were 53 members, of whom 37 had the status of 'full members' and 16 were 'candidates for full members' 23.
At the end of 1902, the idea of transforming the Committee into Podillia Society, with slightly different goals, appeared. There was a desire to establish broad and thorough work on the study of secular local history and the study of ancient sites and objects, with the organization of their more thorough search, collection, and safe preservation 24. At the same time, a draft charter of the Society of Researchers of History and Archeology was developed, which, after consideration by the members of the Committee in April 1903, was sent to the Synod for approval. Accordingly, by the Decree of the Synod of September 29, 1903, the charter was approved, and thus Podillia Church Historical and Archaeological Society was established 25. The new charter did not differ significantly from the previous one; only the Repository for Ancient Objects was renamed the Museum, with the preservation of its previous structure 26. Thus, the Committee was transformed into a society.
On October 26, 1903, a general meeting of Podillia researchers took place, where the renaming of their organization into the Society was officially proclaimed, and it was decided to consider it open to anyone who wished to join. Ye. Setsinskyi was elected Chairman of the Society and Curator of the Museum, and M. Yavorovskyi 27 was elected Deputy Chairman and Treasurer. The Presidium of the Society, as Ye. Setsinskyi later recalled, remained in the same composition until the very end of its existence 28.
In 1910, the government administration demanded that the Society quickly vacate all the premises of the 'Dominicans buildings' (that is, the buildings of the former Dominican Order church), which had been granted in 1903 to house the Museum's holdings. Even though the Society managed to raise funds quickly for the construction of its own premises, in the spring of 1915 the Museum occupied the premises of the former theological professional school 29.
With the establishment of Soviet power in 1919, Podillia Society for the Protection of Antiquities and Art Objects was established (on the principles of public self-organization) in the city, and by order of the Department of Public Education, the Museum was placed under the administration of the specified newly established Society. A joint meeting of the members of Podillia Society for the Protection of Antiquities and Art Objects and Podillia Church Historical and Archaeological Society was then initiated, where the decision was made to unite them into one under the first name. However, as Ye. Setsinskyi recalled: "Already at the end of 1920, the Soviet authorities established a new body -Kamianets-Podilskyi Committee for the Protection of Objects of Antiquity, Art, and Nature. That Committee became the official center of archaeological activity in Kamianets, and therefore the former scientific and archeological societies, if they were not registered, were closed by themselves according to the then-approved regulations" 30.
The caves of the former monastery in Bakota had earlier been studied by V. Antonovych, who reported on the results of his study at the 6th Archaeological Congress in Odesa in 1884 32.
After visiting the caves of the former monastery in Bakota, Ye. Setsinskyi and M. Yavorovskyi returned with a great number of antiquities that could have become valuable museum exhibits but faced the problem of their placement. That became a kind of additional catalyst for raising and solving the problem of founding and equipping a museum 33. In addition to the antiquities, the members of the Committee, who had long been working on various documents, old prints, manuscripts, etc., faced the problem of placing and storing written sources that made up a significant collection of old prints and even manuscripts 34. Also, archaeological finds were constantly being sent from various places in Podillia, and they needed to be stored somewhere.
Those circumstances determined the questions discussed during the meeting of the Committee's members on October 29, 1889, when M. Yavorovskyi proposed to organize a museum where antiquities and historical documentation could be stored. The Committee decided to name the museum Podillia Eparchy Repository for Ancient Objects. Along with M. Doronovych and V. Yakubovych, Ye. Setsinskyi also joined the Commission for the Establishment of the Repository for Ancient Objects. It was he who drafted the regulations of operation of the newly established institution, which were presented for consideration and approved by the members of the Committee on January 30, 1890 35.
According to the adopted regulations of the Repository for Ancient Objects, the purpose of its work was defined, firstly, as collecting and storing antiquities associated with Podillia, and secondly, as providing comprehensive assistance and support to the Committee members in their study of the region 36. The Repository for Ancient Objects consisted of three structural units -the library of the Committee, the archive, and the actual museum of antiquities. Priest V. Yakubovych was elected Curator of the Repository for Ancient Objects, and Ye. Setsinskyi was elected Secretary of the Committee for the Repository. But already in the following year, 1891, V. Yakubovych resigned from the position of Curator, while Ye. Setsinskyi, who performed the duties of the Secretary of the Committee, was also elected Curator of the Repository for Ancient Objects. He served as Curator until 1922.
The Repository for Ancient Objects (renamed the Museum in 1903), run by Ye. Setsinskyi and drawing on the experience of the Museum of Kyiv Society, was successfully enriched with antiquities (including archaeological ones), and in a short time considerable holdings were formed there. To date, we have its Inventory List as of 1909, which was compiled by Ye. Setsinskyi and published in the 11th Issue of the Society's 'Proceedings'. According to the List, the artifacts collected in the Museum were divided into 16 categories, 5 of which were related to archaeological antiquities (the others -to church antiquities, historical documentation, and ethnography): I. Primitive antiquities -stone chopping tools (stone scrapers, knives, arrowheads, and spearheads); polished stone and bronze tools (wedge-axes, chisels, small axes); bones of fossil animals (bones and teeth of mammoth and bull, and antlers). A total of 114 items.
XIV. Coins -antique Greek and other unclassified, Roman, Byzantine, Western European (Sweden, Spain, the Netherlands of the 17th-18th centuries, Austria, Prussia, France of the 17th-19th centuries, Bavaria, Saxony of the 18th century, Czechia of the 15th century, Hanover, Freiburg, Italy, Greece, America of the 19th century, England of the 17th-early 20th century, Venice of the 13th-14th centuries, Wallachia of the 16th century, Romania of the 19th-early 20th century), Eastern European (Genoese-Tatar of the 14th-16th centuries, Golden Horde of the 14th century, Crimean of the 18th century, Georgian of the 13th-19th centuries, Persian, Turkish, as well as unclassified, Polish-Lithuanian of the 14th-18th centuries, Old Rus, Russian of the 17th-19th centuries); monetary objects and banknotes (five-ruble and one-ruble notes, counterfeit money, stamped papers and official stamps, bill of exchange forms, landlord money bills, which were used instead of small change). A total of 2,968 items, including 3 gold coins, 1,113 silver coins, 1,828 copper coins, and 24 monetary objects and banknotes.
XV. Medals, tokens, orders, etc. -Russian (silver, bronze, and lead medals of the 19th-early 20th century, bronze military and priest crosses, a silver token); foreign (silver and bronze medals, Polish and Jewish tokens). A total of 71 items.
Thus, at the beginning of the 20th century, the Museum had over 7.5 thousand exhibits 37, and its holdings became the basis for compiling the Archaeological Map of Podillia, which was a kind of generalization of archaeological research by scholar-historians and local amateur historians, who for many decades accumulated factual material (thematic collections of artifacts), put it in order, and developed its theoretical understanding.
SOCIETY IN THE STUDY OF ARCHAEOLOGICAL OBJECTS AND SITES
Mentions of archaeological material and archaeological landmarks of the region, as well as descriptions of certain artifacts, appeared in the writings of Podillia researchers from the very beginning of their activity. While compiling historical and statistical sketches about churches and parishes, their authors, of course, did not miss the mention of existing archaeological antiquities. For example, when describing the town of Bohopil (now Pervomaisk, Mykolaiv oblast), it was mentioned that "of the ancient sites, remarkable are only mounds or graves, as common people call them, which are located symmetrically in parallel lines around Bohopil. One of them, four versts from Bohopil, on a hill of huge size, is called Rozkopana Mohyla (Excavated Grave) because treasure hunters unearthed it". At the same time, the author of the description assumed that those "could be the remains of former defensive ramparts" 38. Similarly, reports of unusual finds were also published. In the description of the village of Chorna, Olhopil povit, it was noted that the local volost clerk found "two large teeth and six fossilized bones" there and sent all that to Odesa University 39. So, as early as the 1860s and 70s, local researchers began recording Paleolithic human settlements and other archaeological sites, which later became the initial reference point for a more thorough study.
Active, systematic, and purposefully organized archaeological research by members of the Society began as early as the 1890s and was headed by scholars of Kyiv University. In 1891, another expedition of university professor Volodymyr Antonovych to Podillia took place to continue researching the remains of Bakota cave monastery. Together with the then-young scholarship holder Mykhailo Hrushevskyi, who was collecting material for his master's thesis "Barske Starostvo", the scholars visited Kamianets-Podilskyi Society (Committee) 40.
The works in Bakota, which were not completed in 1891, were continued in the following year, 1892, and took place during August 7-25, when, with the support of the then Bishop of Podillia, Dimitrii (Sambikin), 30-40 local villagers were involved in the work to help with the excavations; the results of the study were described in a special publication by Ye. Setsinskyi and in the Archaeological Map of Podillia that he prepared later 41. The arrival of leading Ukrainian scholars in Kamianets-Podilskyi, in addition to official relations with the Society, also initiated their close friendship and fruitful further cooperation with Ye. Setsinskyi. This is evidenced by the long-term correspondence of the latter with V. Antonovych 42 and M. Hrushevskyi 43.
During the 1890s, the Society established close cooperation with a great number of official state and public institutions and societies across the entire Russian Empire, which, among other activities, also performed archaeological research. In the archive funds of the Society (Committee) we find official letters from the Imperial Moscow Archaeological Society, the Tavriia Learned Archival Commission, the Imperial Archaeological Commission of St. Petersburg, the Imperial Russian Archaeological Society, Volyn Church Archaeological Society, Kyiv City Public Library, Odesa City Public Library, and other institutions 44. The numerous letters show the close scientific ties of Podillia Society with similar institutions and societies in Poltava, Chernihiv, Voronezh, Tambov, Orenburg, Tomsk, and other cities 45. There was an exchange of new publications, literature, information about scientific work, and various events.
The mentioned contacts of Podillia Society and the exchange of experience naturally contributed to the development of persistent and scrupulous work in the field of archaeological research. Even before his direct acquaintance with V. Antonovych and M. Hrushevskyi, Ye. Setsinskyi independently studied the archeology of ancient Bakota and, based on the discovered material remains dated to the 12th-14th centuries, wrote the work "Bakota, the Ancient Capital of Ponyzzia", in which he argued for its importance as an urban and trade center of the entire Middle Dniester region 46. And then, performing expeditionary surveys in various parts of Podillia and publishing numerous articles, he continued the archaeological research of his native land 47.
In 1895, Ye. Setsinskyi studied a mound in the village of Chausove Kazenne, Balta povit, where he found fragments of weapons, small Turkish coins, and ceramic vessels (dark gray bowls, jugs, and pots), all of which were transferred to the Museum of the Society 48. Important finds were made during excavations near the village of Pryvorottia in 1898, which provided new materials on the ancient and medieval history of the region; the Museum received 24 items, among them fragments of weapons, metal items, a silver frame for a medallion, ceramic spindle whorls, and fragments of vessels 49. Later, an essential site of the Eneolithic era was discovered by Ye. Setsinskyi near the village of Keptyntsi, Kamianets povit. A megalithic burial was found there, and the researcher described the tomb and two skeletons along with the accompanying inventory: pottery (pots) and tools (two polished flint axes were especially noteworthy) 50. In addition, in the works 'The Study of Underground Passageways in Kamianets-Podilskyi' 51 and 'A Few Explanations about Archaeological Map of Podillia Hubernia' 52, Ye. Setsinskyi provided broad outlines of the presence of troves in certain parts of Podillia.
The active use of archaeological material in the compilation of historical descriptions of inhabited places was demonstrated in the 7th issue of the 'Proceedings' (1895). That issue was prepared by Ye. Setsinskyi and included sketches on the history of all settlements of Kamianets povit. While working on it, the researcher tried to provide as much information as possible about the inhabitants of that land from the earliest times, including the Stone, Copper, and Bronze Ages, and therefore did not omit mentions of the ancient settlements of prehistoric people and the discovered archaeological sites 53. That work, according to the author himself, was an attempt to present a model of writing the history of his native land. The attempt was successful, and the issue received many favorable reviews. In particular, the study was highly praised by V. Antonovych, who in a special review noted that Ye. Setsinskyi had comprehensively used published sources and newly discovered local documents and artifacts 54.
It should be noted that it was no accident that V. Antonovych reviewed the work by Ye. Setsinskyi. The researchers had been collaborating fruitfully for a long time, and it is very likely that V. Antonovych exercised a kind of patronage over his colleague from Podillia. In at least one of his letters to V. Antonovych, Ye. Setsinskyi wrote that he had received 200 rubles from the Kyiv branch of the Preparatory Committee for the organization of the 11th Archaeological Congress to conduct archaeological excavations in 1898. Knowing that this had become possible thanks to the petitions of V. Antonovych, Ye. Setsinskyi thanked him and requested the professor's advice and general supervision of the work 55. As a result, in the materials of the 11th Archaeological Congress held in Kyiv in 1899, the archaeological maps of Volyn hubernia, authored by V. Antonovych, and of Podillia hubernia, prepared by Ye. Setsinskyi, were placed next to each other 56.
The organizing committee for the preparation of the 11th Archaeological Congress also officially appealed to the Podillia Society for assistance in holding the event, in particular by sending available antiquities from its funds for the exhibition organized during the work of the Congress 57. And of course, Ye. Setsinskyi personally participated in the work of that forum and delivered two reports there: 'The Most Ancient Churches of Podillia' and 'A Few Explanations about the Archaeological Map of Podillia Hubernia' 58. Accordingly, after the end of the Kyiv Congress, on August 20, 1899, the Imperial Archaeological Society, which had organized the 11th Congress, expressed its gratitude to the Podillia Committee for its assistance in the organization of, and participation in, the Congress 59.
In the 'Archaeological Map of Podillia Hubernia', the scholar systematized and recorded about two thousand sites of the Stone, Bronze, and Copper Ages, the Early Iron Age, and the period of Kyivan Rus of the 9th-13th centuries; he also mentioned all the important ruins of castles, fortresses, and various fortifications, and provided detailed information about religious buildings and troves 60. In total, the Map included information about archaeological sites concentrated in 868 settlements of Podillia 61.
In addition to the results of his own expeditionary and research work, Ye. Setsinskyi, when compiling the 'Archaeological Map of Podillia Hubernia' (fig. 1), also used a large amount of thematic literature, in particular the works of V. Antonovych, P. Batiushkov, V. Huldman, and M. Symashkevych; the publications 'Proceedings of Moscow Archaeological Society', 'Notes of the Imperial Russian Archaeological Society', and 'Reports of the Imperial Archaeological Commission'; the periodicals 'Kyivska Starovyna' and 'Kyivskyi Telegraph'; as well as a wide range of Polish-language literature and manuscript materials from private and museum collections 62.
Ye. Setsinskyi chose a rather reader-friendly scheme for presenting information about archaeological sites. He submitted descriptions for each of the twelve povits of the hubernia in turn, and within each of them grouped the objects by river basins. Of course, not all descriptions are equal in content. Some sites are only mentioned, but most are described in considerable detail: in addition to a general description of the site itself, the geographical reference is given, as well as the dimensions, state of preservation, and the like. Quite often, the description of a site is also accompanied by a retelling of the legends and tales about it that have survived in people's memory. The most detailed descriptions, however, are given for the sites where excavations were performed, such as Krynychky, Balta povit; Studenytsia, Ushytsa povit; and Ivakhnivtsi, Syrvatyntsi, and Pryvorottia, Kamianets povit; special attention is paid to Bakota 63. Naturally, the largest number of archaeological sites are those objects that were best captured visually and could not remain unnoticed: embankments, ramparts, ditches, and caves. A total of 3202 burial mounds and 287 settlements were recorded in the territory of the hubernia 64. Megalithic structures, stone sculptures, castles, fortresses, places of worship, various inscriptions, etc. are also well described. The Map has appendices with detailed geographical and subject indexes, which allow one to quickly find the object of a search.
The Map of Ye. Setsinskyi became a kind of guide for later archaeologists: certain archaeological sites mentioned in it aroused considerable professional interest in the following decades, and accordingly, their study was continued. In particular, the most notable such site was the Nemyriv settlement dated to the Scythian period. First described in Setsinskyi's Archaeological Map, it was excavated by the expeditions of S. Hamchenko in 1909, A. Spitsyn in 1910, and A. Smyrnov in 1941, and in the post-war period it continued to be studied by expeditions headed by M. Artamonov, A. Moruzhenko, and P. Havliuk. M. Artamonov especially emphasized the important role of the Map by Ye. Setsinskyi: he verified all his reconnaissance expeditions to the Middle Dniester region against the Map, made clarifications, and wrote down comments 65.
The Map of Ye. Setsinskyi proved useful during the period of large-scale preparations for the construction of the Dniester Pumped Storage Power Station, when a great number of ancient settlements in the Dniester region were to be flooded. The archaeological map made it possible to decide quickly and clearly which settlements should be examined for sites and objects of particular archaeological eras. Then, among others, the early Slavic settlements in Bakota, Kavetchyna, Sokil, and Teremtsi 66, as well as the ancient Rus settlements in Hrynchuk, Ivankivtsi, and Stara Ushytsia, were studied 67.
Ye. Setsinskyi's map is also taken into account by current researchers: in particular, archaeologists from Kamianets-Podilskyi Ivan Ohiienko National University, following the descriptions of their fellow countryman, have excavated dozens of burial mounds in the Khmelnytskyi region dated to the 7th-6th centuries BCE. The most informative of these were such sites as Tarasivka, Shutnivtsi, Chabanivka, Kolodiivka, and Spasivka 68.
CONCLUSIONS
Thus, during the 55 years of its activity, the Podillia Church Historical and Archaeological Society (founded under the name of the Committee) underwent a rather significant evolution. The Society managed to go beyond the established framework of promoting the policy of Russification of the region and actually became a research center for studying the history and national-cultural identity of Ukrainians, including through archaeological research, which is one of the most difficult areas of the development of historical knowledge and necessarily requires special training. Over the years of work, the administratively established circle of amateur local historians evolved into a real scientific Society, with its own print publications and a clearly defined organizational structure, its activities regulated by statutory provisions and carried out with planning and reporting.
The Society played a significant role in the study of the archaeological sites and objects of Podillia hubernia. A primary role in that field was played by the tireless Yefimii Setsinskyi, who not only performed archaeological surveys in the region but also directly participated in the activities of Kyiv expeditions. The pinnacle of the scholar's archaeological research was his Archaeological Map of Podillia Hubernia, which was presented and highly praised at the Kyiv Archaeological Congress of 1899 (and published in 1901).
Today, the archaeological materials of the Podillia Church Historical and Archaeological Society and the descriptions of settlements prepared by its researchers continue to serve as a guide for archaeological surveys and excavations.