Enhancement Punching Shear in Flat Slab Using Mortar Infiltrated Fiber Concrete
In this paper, the improvement of the punching shear capacity of slab-column connections using mortar infiltrated fiber concrete is studied. Eight reinforced concrete slab specimens, identical in dimensions and reinforcement, were tested: six were cast as hybrid slabs (normal strength concrete combined with mortar infiltrated fiber concrete) and two were cast entirely with normal strength concrete as control specimens. All specimens were tested under vertical loading. The mortar infiltrated fiber concrete was cast monolithically with the normal strength concrete over a square central region sized at one and a half times the effective depth (1.5d), in one case through the full thickness of the slab cross-section and in the others through half the thickness at either the tension or the compression face; each case was cast with two types of fiber. The vertical load was applied upward through a square column with a side dimension of 100 mm. In all slabs, no failure within the mortar infiltrated fiber concrete was observed. The test results showed that mortar infiltrated fiber concrete improves the punching shear strength in some cases, depending on the type of fiber and the location at which the mortar infiltrated fiber concrete is cast in the slab. The enhancement in punching shear strength due to using mortar infiltrated fiber concrete over the 1.5d square region (265 mm) ranged from 4% to 46% compared with the control specimens.
Introduction
The flat plate slab is susceptible to punching shear failure. This type of failure is catastrophic because no visible signs appear before failure. Specific punching shear strength formulas exist for slab-column connections, such as those given in the ACI 318 [1] and BS 8110 [2] codes. These formulas were developed for slabs cast with normal strength concrete, so they may not be applicable to slabs strengthened with mortar infiltrated fiber concrete. Classical strengthening techniques used to avoid sudden punching failure include transverse pre-stressed reinforcement, steel plates and bolts, increasing the slab thickness around the column or using a larger column cross-section, and epoxy-bonded steel plates. Further attention has been given to strengthening with advanced composite materials, especially fibers of all types, to prevent sudden punching shear failure. In this study, mortar infiltrated fiber concrete is used as the strengthening material. Mortar infiltrated fiber concrete is a comparatively modern material, differentiated from Fiber Reinforced Concrete (FRC) in two aspects: fiber volume fraction and manufacturing process.
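For reference, the ACI 318 two-way (punching) shear provision cited above takes the following form in SI units; this is a sketch of the standard formula, not an expression reproduced from this paper (f'c: concrete cylinder strength in MPa, b0: critical perimeter at d/2 from the column face, d: effective depth, β: column aspect ratio, αs: column location factor, λ: lightweight concrete factor):

$$
V_c = \min\left\{ 0.33\,\lambda\sqrt{f'_c},\;\; 0.17\left(1+\frac{2}{\beta}\right)\lambda\sqrt{f'_c},\;\; 0.083\left(2+\frac{\alpha_s d}{b_0}\right)\lambda\sqrt{f'_c} \right\} b_0\, d
$$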
Mortar infiltrated fiber concrete was developed to incorporate large amounts of fibers in cement composites in order to obtain very high strength. Researchers have since used a large variety of fibers. Mortar infiltrated fiber concrete has high strength as well as large ductility and significant potential for structural applications [3][4][5][6][7]. The matrix does not contain coarse aggregate, which cannot infiltrate through the tiny spaces of a dense fiber network, but it has a high cement content. It may, however, contain fine or extra-fine sand and additives such as silica fume, fly ash, and slag. The mortar fineness must be designed to properly infiltrate the dense fiber layer placed in the mold. Limited research has been carried out on using mortar infiltrated fiber concrete to improve punching shear, and those studies cast the entire slab with mortar infiltrated fiber concrete [8][9][10]; this approach is uneconomical because of the large quantities of fiber required. Therefore, in this study, normal strength concrete slabs are improved by using small quantities of mortar infiltrated fiber concrete in hybrid slabs, aiming for good results with great economy. The variables studied cover the thickness and position of the mortar infiltrated fiber concrete and the type of fiber used.
Materials and Methods
The experimental work includes the preparation and testing of raw materials and the making of trial mixes up to the required mixes of normal concrete and mortar; thereafter, eight slabs were cast and tested to study the effect of using mortar infiltrated fiber concrete. Figure 1 shows the flowchart of the experimental work.
Materials Used for Cast Specimens
For the slab specimens, two types of concrete were used: normal strength concrete mixtures with a compressive strength of 25 MPa, and mortar infiltrated fiber concrete with two types of fiber (steel fiber and hybrid fiber), whose compressive strengths obtained from the trial mixes were 92 and 62 MPa, respectively [11].
Materials Used for Preparing Normal Strength Concrete
The normal strength concrete was designed according to the American method of mix proportion selection (ACI Committee 211.1-91) [12]. The target concrete strength f'c was 25 MPa.
Cement
In this study the cement used was limestone Portland cement (CEM II/A-L 42.5 R) complying with the IQS No. 5/1984 limitations [13].
Fine Aggregate
Natural local sand conforming to the limits of the Iraqi specification (IQS No. 45/1984) [13], Zone (2), was used. Figure 2 shows the grading curve of the natural sand after sieving.
Coarse Aggregate
Natural rounded gravel with a maximum size of 10 mm was used as the coarse aggregate in this work. Its mechanical and chemical properties meet the requirements of ASTM C33/86 [14]. The grading curve of the natural gravel is shown in Figure 3.
Materials Used for Preparing Mortar Infiltrated Fiber Concrete
In the experimental study, many trial mortar mixtures were prepared to find the correct mixing proportions, with the assistance of some previous studies [15][16][17].
Cement
As for the normal strength concrete, the cement used was limestone Portland cement (CEM II/A-L 42.5 R) complying with the IQS No. 5/1984 limitations [13].
Extra Fine Sand
Natural local sand was used as the fine aggregate. Only extra fine sand, sieved through a 600 µm sieve to separate out the coarser particles, was used in preparing the mortar. It conforms to the limits of Iraqi specification No. 45/1984 [13], Zone (3). Figure 4 shows the grading curve after the sieving process.
Micro Silica Fume (SF)
The silica fume used in this work is commercially known as Mega Add MS (D), from the chemical company CONMIX, and replaced 10% of the cement by weight.
High-Range Water Reducing Admixture (HRWRA)
A high-range water reducing admixture, known commercially as Hyperplast PC200, was used in this work for the preparation of the mortar. It is manufactured by the company DCP and meets the ASTM C494/C494M requirements [18].
Fibers
In this work two different types of fibers were used. The first was hooked-end steel fiber with a length of 30 mm, a diameter of 0.5 mm, and a tensile strength of about 1100 MPa, supplied by the JATLAS company in Turkey. The second was synthetic polypropylene fiber with a length of 27 mm, a diameter of 27 mm, and a tensile strength of 570-660 MPa, manufactured by the FORTA-FERRO company in the U.S.A. Both types of fibers conform to ASTM A820/A820M-04 [19]. Figure 5 shows the two types of fibers used.
Reinforcing Steel
Deformed steel bars with two different diameters were used in the specimens. Uniaxial tension tests were carried out on the 10 mm and 6 mm nominal diameter bars to determine the yield stress and ultimate strength. Bars of 10 mm diameter with a yield stress of 554 MPa were used as the flexural reinforcement of the flat slab, while 6 mm diameter bars with a yield stress of 560 MPa were used for the column stirrups. Three samples were tested for each bar diameter and the results averaged. The samples were placed in a computerized tensile testing machine and tested until rupture, in accordance with ASTM A615 [20].
Specimens Casting Process
Eight slab specimens with dimensions of 900 × 900 × 80 mm, each with a square column of 100 × 100 × 200 mm at the center, were cast. Before casting, the selected materials were prepared and weighed according to the results obtained from the trial mixes, as shown in Table 1. The mortar was mixed with an electric drill mixer in a suitable pan of about 0.02 m³ capacity, with a mixing time of about 7-9 minutes. Before the mixing operation, the pan was cleaned. The binder materials (cement and silica fume) were first mixed in the pan to disperse the silica fume particles among the cement particles. The sand was then added and the mixture mixed until a uniform dry mixture was obtained. The whole amount of high-range water-reducing admixture (Hyperplast PC200) was mixed separately with one third of the mixing water; two thirds of the mixing water was first added to the mix, and then the HRWRA with the remaining third of the mixing water was fed to the mixer to obtain the required fluidity [21]. At the same time, the mixing of the normal strength concrete started by mixing the gravel with the dry sand in an electric horizontal rotary drum mixer of 0.09 m³ capacity for several minutes. The cement was then added, followed by gradual addition of the weighed water. The total mixing time was about 8-10 minutes. All slabs were cast in plywood molds with clear dimensions of 900 × 900 × 80 mm, with a steel mold used to isolate and delimit the square area (265 × 265 mm, clear height 80 mm) to be cast with mortar infiltrated fiber concrete. The reinforcing bar ratio was constant (ρ = 0.0158) and the concrete cover for the reinforcing bars was 15 mm for all slabs. The two types of concrete were cast together to achieve bonding between them. The casting process followed these stages: 1. Before each casting, the plywood and steel molds were prepared by cleaning and lightly lubricating the internal faces with oil to prevent adhesion to the hardened concrete, and were placed on horizontal ground.
2. After preparing the molds, the pre-assembled reinforcing steel was placed and centered with a 15 mm cover in all directions; the reinforcing steel of the column was then connected at the center of the slab as shown in Figure 6. 3. The normal strength concrete was prepared as described above. After many casting trials of the mortar infiltrated fiber concrete technique in the laboratory, a multi-layer technique was used to incorporate the steel fiber into the mortar matrix. It involved first placing and packing the randomly oriented fibers inside the steel mold, as shown in Figure 7 A, then filling the mold with mortar up to that level, as shown in Figure 7 B. The mortar has to be flowable enough to ensure infiltration through the fibers. At the same time, the normal strength concrete was cast in the plywood mold around the steel mold up to the required level. 4. The mortar infiltrated fiber concrete in the steel mold was compacted using a 4 mm diameter steel rod to avoid honeycombing or voids. This process was repeated for each layer until the entire mold was filled with the required volume fraction of fiber.
5. Soon after all molds (plywood and steel) were filled to the same required level, the steel mold was removed with a steady upward pull. Vibration was then applied to the normal strength concrete and around the area cast with mortar infiltrated fiber concrete, to compact the concrete and achieve bonding between the two types. The specimens were leveled by hand troweling and covered with a polyethylene sheet in the laboratory for 24 hours to prevent evaporation of moisture from the fresh concrete.
6. The compressive strength was measured for each casting series by testing three standard concrete cubes: 100 × 100 × 100 mm cubes for the mortar infiltrated fiber concrete, and 150 × 150 × 150 mm cubes for the normal strength concrete. After 24 hours the plywood mold was removed and the column was cast with normal strength concrete using a square steel mold of 100 × 100 × 200 mm (height), as shown in Figure 8.
Figure 8. Preparing the column for casting with normal strength concrete
The conventional curing method was used to simulate practical site conditions. The slab specimens were cured with saturated burlap and covered with a polyethylene sheet to prevent evaporation of the curing water.
Test Setup and Procedure
The specimens were simply supported along the four edges and loaded centrally through square column stubs, 100 mm on a side and 200 mm high, identical for all specimens. All slabs were supported by a large steel reaction frame and tested using a hydraulic jack with a maximum capacity of 600 kN, as shown in Figure 9. The deflection at the center of the tension side of the slabs was measured using a dial gauge with a capacity of 30 mm. An electric pressure transducer was used to measure the applied load. Each test lasted about 30 minutes.
Description and Identification of the Tested Slabs Specimens
To facilitate comparison between the slabs, each specimen is identified by the symbols listed in Table 2. The average of two specimens cast with normal strength concrete (N.S.C) only was used as the reference. The other specimens were cast with N.S.C except for the square area of 265 mm in the middle of the slab, which was cast with mortar infiltrated fiber concrete at different thicknesses (80 and 40 mm). Each case was cast with two types of fiber (hooked-end steel, and a hybrid of 50% hooked-end steel with 50% synthetic polypropylene fiber).
Results and Analysis of Tests Specimens
All eight specimens failed in punching shear at different ultimate loads, after relatively large deflections in some cases, as shown in Table 3. The results show no improvement in the load at which the first crack appears. In general, the results indicate that mortar infiltrated fiber concrete improves punching shear in some cases, depending on the position of the mortar and the type of fiber used. The flexural steel reinforcement has a highly significant effect on the punching shear resistance through dowel action, as reported by E. Rizk [22]; the ratio ρ used (0.0158) is high, close to ρmax (0.02), so as to control the type of failure and keep it away from flexural failure. The concrete cover together with the steel reinforcement reaches about half the slab thickness (40 mm). This explains why some cases of mortar infiltrated fiber concrete have no effect on the ultimate load, such as 1.5d S.T and 1.5d H.T, as shown in Figure 11: when the mortar infiltrated fiber concrete is 40 mm thick at the tension face and does not extend past the flexural reinforcement, it has no effect on punching shear, because the dowel action of the main reinforcement in this region already provides sufficient punching shear resistance. For the same reason, there was no improvement in deflection compared with the reference specimen, as shown in Figure 12. The final shape and cracks of these two failed specimens are shown in Figure 13.

For the cases 1.5d S and 1.5d H, the mortar infiltrated fiber concrete was cast at 80 mm thickness as shown in Figure 14. The result depends on the type of fiber. With hybrid fiber (specimen 1.5d H), the ultimate load increased by about 30% with respect to the reference; this increase is due to the good distribution of fiber over the whole mortar cross-section, and this good spread of fiber also increased deflection by about 34%. In the case cast with steel fiber only (1.5d S), the steel fibers segregated because of their high density. Figure 15 A shows the cross-section of a failed prism from the modulus of rupture (Fr) test: the high density of the steel fiber caused it to sink to the bottom and distribute irregularly within the cross-section, collecting within the zone of the main reinforcing steel. Using hybrid fiber eliminates this steel-fiber sinking, caused by the density difference between steel fiber and mortar, because the lightweight synthetic polypropylene fiber provides a carrier layer for the steel, preventing its submergence and achieving a good distribution, as shown in Figure 15 B. This explains why there was no improvement when using steel fiber only, as in case 1.5d S, which gave no enhancement in ultimate load or deflection with respect to the reference, as shown in Figure 16. Figure 17 shows the final shape of the tested slabs.

For the two remaining cases (1.5d S.C and 1.5d H.C), the mortar infiltrated fiber concrete was cast 40 mm thick at the compression side directly under the column, as shown in Figure 18. These cases gave excellent increases in ultimate load of 12% and 46%, respectively, according to the type of fiber used; hybrid fiber gave better results than steel fiber alone. The variation between these two cases is again due to the distribution of fibers within the cross-section, as previously discussed and shown in Figure 15.
Using mortar infiltrated fiber concrete directly under the column, as in these specimens, creates a strong region compared with the surrounding area; this distributes the load over a bigger area and moves it away from the center toward the supports, which increases the ultimate load and deflection, as shown in Figure 19. The failure shapes of these two slabs are shown in Figure 20.
Conclusions
The punching shear strength of flat slabs strengthened with mortar infiltrated fiber concrete was investigated in this study. Eight slab-column connections were cast and tested under vertical load. The position and thickness of the mortar infiltrated fiber concrete and the types of fibers embedded in the mortar varied between specimens. The following conclusions were drawn from the test results: Using mortar infiltrated fiber concrete improves the punching shear strength of slabs in some cases, depending on the location of the mortar infiltrated fiber concrete and the type of fiber used. The improvement ranged from 4% to 46% with respect to the control specimen.
Using mortar infiltrated fiber concrete increases deflection and changes the mode of failure from sudden to gradual.
The choice of the region of the slab cast with mortar infiltrated fiber concrete has a large effect on whether, and by how much, the punching shear is improved.
Using hybrid fiber in the mortar has a significant impact by providing an even spread of fiber across the mortar infiltrated fiber concrete section.
Casting mortar infiltrated fiber concrete at half the slab thickness directly under the column, at the compression face, distributes the load over a bigger area and pushes the failure line away from the column, thus increasing the ultimate load and deflection.
Caveolin 1-related autophagy initiated by aldosterone-induced oxidation promotes liver sinusoidal endothelial cells defenestration
Aldosterone, with pro-oxidation and pro-autophagy capabilities, plays a key role in liver fibrosis. However, the mechanisms underlying aldosterone-promoted liver sinusoidal endothelial cell (LSEC) defenestration remain unknown. Caveolin 1 (Cav1) is closely linked with autophagy and fenestration. Hence, we aimed to investigate the role of Cav1-related autophagy in LSECs defenestration. We found increased aldosterone/MR (mineralocorticoid receptor) levels, oxidation, autophagy, and defenestration in LSECs in the human fibrotic liver and in BDL and hyperaldosteronism models, while antagonizing aldosterone or inhibiting autophagy relieved LSECs defenestration in the BDL-induced fibrosis and hyperaldosteronism models. In vitro, fenestrae of primary LSECs gradually shrank, along with down-regulation of the NO-dependent pathway and an increase in AMPK-dependent autophagy; these effects were aggravated by rapamycin (an autophagy activator) or aldosterone treatment. Additionally, aldosterone increased oxidation mediated by Cav1, reduced ATP generation, and subsequently induced AMPK-dependent autophagy, leading to down-regulation of the NO-dependent pathway and LSECs defenestration. These effects were reversed by the MR antagonist spironolactone, antioxidants, or autophagy inhibitors. Besides, aldosterone enhanced the co-immunoprecipitation of Cav1 with p62 and ubiquitin, and induced Cav1 co-immunofluorescence staining with LC3, ubiquitin, and F-actin in the perinuclear area of LSECs. Furthermore, aldosterone treatment increased the membrane protein level of Cav1 while decreasing its cytoplasmic protein level, indicating that aldosterone induced Cav1-related selective autophagy and F-actin remodeling to promote defenestration. Consequently, Cav1-related selective autophagy initiated by aldosterone-induced oxidation promotes LSECs defenestration via activating the AMPK-ULK1 pathway and inhibiting the NO-dependent pathway.
Patients
Fibrotic liver biopsy specimens (fibrosis stage F3-4) were obtained from 9 patients with liver fibrosis due to bile duct stones. Normal liver specimens were obtained from 6 patients who underwent partial liver resection for hepatic hemangioma. All patients signed informed written consent, and the Ethics Committee of the local hospital approved the use of the samples.
Animal experimental design
Sprague-Dawley (SD) rats and C57 mice were provided by the Laboratory Animal Center (Southern Medical University, China), and the experiments were approved by the Committee on the Ethics of Animal Experiments of Southern Medical University. Animals were housed under a 12:12 h light/dark cycle at 22-24°C.
Hyperaldosteronism (Aldosterone-Salt) model
In total, 36 male C57 mice (18-22 g) were randomly divided into four groups (vehicle-control, Aldosterone-Salt, and Aldosterone-Salt treated with spironolactone or 3MA; n = 9 per group). All mice were fed 1% NaCl for 28 days. The Aldosterone-Salt and the two treatment groups received aldosterone (0.1 μg/g·h) continuously via osmotic mini-pumps for 28 days, while the two treatment groups were co-treated with spironolactone (40 μg/g·d, gavage) or 3MA (15 μg/g·d, intraperitoneal injection).
Measurement of serum aldosterone
Serum aldosterone was detected with an aldosterone ELISA kit (Elabscience, E-EL-0070c) according to the manufacturer's instructions. The results were read and calculated with a microplate reader.
Histological analysis and immunohistochemistry
Paraffin sections (4 µm) of animal and human liver tissues were prepared with hematoxylin and eosin (H & E) staining and Sirius Red staining. Immunohistochemical detection of α-SMA and vWF was performed on paraffin sections (4 µm); sections were subsequently exposed to HRP-conjugated antibody, colored with DAB, and visualized by microscopy (BX51, Olympus, Japan). The degree of liver fibrosis and the number of α-SMA- or vWF-positive cells were quantified with Image J software.
Fluorescence staining
Paraffin sections (4 µm) were prepared for immunofluorescence, incubated with primary antibody overnight, followed by the secondary antibody, and then mounted with DAPI. The primary antibodies in-
SEM and TEM
The primary LSECs and liver tissues were fixed with 2.5% glutaraldehyde, dehydrated, and then coated with gold using a coating apparatus. The LSEC fenestrae were observed with SEM at 15 kV acceleration voltage. Samples for TEM were stained with uranyl acetate and lead citrate, and autophagosomes and autolysosomes were observed by TEM at 80 kV acceleration voltage.
Immunocytochemistry
Paraformaldehyde-fixed primary cells were incubated with the primary antibody, followed by the secondary antibody, and subsequently mounted with DAPI. The primary antibodies included anti-Cav1 (1:50), anti-LC3 (1:200), and anti-ubiquitin (1:100). To detect F-actin, after incubation with the primary and secondary antibodies, cells were stained with phallotoxins (Thermo, F432). The number of puncta per cell or of positive cells was observed by fluorescence microscopy (IX71, Olympus, Japan) and quantified with Image J software.
Co-immunoprecipitation
Primary LSECs were stimulated with aldosterone or co-treated with 3MA for 3 days. IP and immunoblotting (IB) were performed as previously described [16]. The antibodies for IP included anti-Cav1, anti-p62 and anti-ubiquitin; the antibodies for IB included anti-Cav1, anti-p62, anti-ubiquitin, and anti-MR.
Extraction of membrane and cytosol protein of primary LSECs
A Membrane and Cytosol Protein Extraction Kit (KGP3100-2) was used to extract membrane and cytosol proteins from primary LSECs (10⁷ cells/group).
Western blotting
The protein expression in liver tissue or primary LSECs was detected by western blot. The primary antibodies included anti
Hydrogen peroxide and ATP assay
The H₂O₂ content in cells or liver tissue was measured with a Hydrogen Peroxide Assay Kit (Beyotime, S0038), and the OD value was read by absorption spectroscopy at 562 nm. An ATP Assay Kit (Beyotime, S0026) was used to measure the ATP level in cells according to the manufacturer's protocol.
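As an aside, plate-based kits of this kind are typically quantified against a standard curve. A minimal sketch of that step, assuming a linear curve; the standard and sample values below are placeholders, not data from this study:

```python
# Hypothetical sketch: converting raw OD readings (562 nm) into H2O2
# concentrations via a linear standard curve. Values are placeholders.
import numpy as np

standards_uM = np.array([0, 10, 20, 50, 100])             # known H2O2 standards
standards_od = np.array([0.05, 0.12, 0.19, 0.41, 0.80])   # measured OD562

slope, intercept = np.polyfit(standards_uM, standards_od, 1)  # fit OD = a*c + b

def od_to_conc(od):
    """Invert the standard curve to estimate concentration (uM)."""
    return (od - intercept) / slope

sample_od = np.array([0.23, 0.31, 0.28])
print(od_to_conc(sample_od))
```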
Statistics
The experimental data are reported as mean ± SD. For comparisons between 2 groups, a two-tailed Student's t-test was used. For more than 2 groups, ANOVA was performed. Analyses were carried out with SPSS 17.0 software, and P < 0.05 was considered significant.
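For readers reproducing the analysis outside SPSS, a minimal sketch of the same tests in Python; the group values below are placeholders, not data from this study:

```python
# Two-tailed t-test (2 groups) and one-way ANOVA (>2 groups), mirroring the
# statistical workflow described above. Group values are placeholders.
import numpy as np
from scipy import stats

control = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
treated = np.array([1.8, 2.1, 1.7, 2.0, 1.9])
treated_3ma = np.array([1.1, 1.3, 1.2, 1.0, 1.2])

t, p = stats.ttest_ind(control, treated)       # two groups
print(f"t = {t:.3f}, p = {p:.4f}")

f, p = stats.f_oneway(control, treated, treated_3ma)  # more than two groups
print(f"F = {f:.3f}, p = {p:.4f}")             # p < 0.05 taken as significant
```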
Elevated aldosterone/MR level with severe oxidation and autophagy in liver sinusoidal endothelium in human liver fibrosis
The area density of Sirius Red staining, the fibrosis level, and the protein expression of α-smooth muscle actin (α-SMA), von Willebrand Factor (vWF), and CD31 in human fibrotic liver tissue were higher than in the normal group (Fig. 1A-D). Meanwhile, the protein levels of MR, NOX4, and LC3II/I in human fibrotic tissue were also increased (Fig. 1D). Immunofluorescence showed that MR and LC3 were simultaneously highly expressed in CD31-positive or vWF-positive liver sinusoidal endothelium in human liver fibrosis (Fig. 1E). These findings suggest that the intra-hepatic aldosterone/MR level was elevated, along with increased oxidation and autophagy, in the capillarized liver sinusoidal endothelium of the human fibrotic liver.
Antagonizing aldosterone or inhibiting autophagy improves LSECs defenestration in BDL-induced fibrosis
SEM showed that LSECs defenestration occurred by the 18th day in BDL-induced liver fibrosis (Supplementary Fig. 1A); the protein levels of NO-dependent pathway components in LSECs isolated from the BDL model, such as eNOS and VASP, were decreased, while CD31 was highly expressed from the 18th to the 28th day (Supplementary Fig. 1B). The SEM data, the protein levels of eNOS and VASP, and the mRNA levels of cGMP and PKG showed that LSECs defenestration and the down-regulation of the NO-dependent pathway were rescued by spironolactone or 3MA treatment (Fig. 2A-C). Additionally, the H₂O₂, mito-ROS and ATP levels, the protein levels of MR, Cav1, NOX4 and LC3II/I, the autophagic flux, and the TEM data showed that serious oxidation, mitochondrial dysfunction, ATP reduction, and autophagy occurred, along with elevated MR and Cav1 expression, during LSECs defenestration. These effects were improved by spironolactone or 3MA treatment, indicating that antagonizing aldosterone or inhibiting autophagy could reduce oxidation and autophagy and restore the NO-dependent pathway to reverse LSECs defenestration (Fig. 2D-H).
Furthermore, antagonizing MR or inhibiting autophagy alleviated the contents of serum ALT and aldosterone, as well as BDL-induced liver fibrosis ( Supplementary Fig. 2).
Antagonizing aldosterone or inhibiting autophagy relieves LSECs defenestration in hyperaldosteronism mice
To directly unravel the effects of aldosterone on LSECs defenestration and liver fibrosis, we employed the hyperaldosteronism mouse model, administered spironolactone or 3MA. Indeed, the SEM and TEM data, the protein levels of MR and LC3II/I, the H₂O₂ content in liver tissue, and the serum aldosterone content showed that, in vivo, continuous aldosterone infusion promoted LSECs defenestration, along with increased autophagy and oxidation, which could be mitigated by spironolactone or 3MA treatment (Fig. 3). Additionally, the area density of Sirius Red staining, the α-SMA protein level in liver tissue, and the serum ALT content were significantly increased in the hyperaldosteronism mice and were attenuated by spironolactone or 3MA treatment.
Acute and chronic aldosterone promotes the AMPK-dependent autophagy during the process of LSECs defenestration
Fenestrae of primary LSECs gradually shrank to disappearance during culture in vitro, while in aldosterone-treated LSECs defenestration occurred by the 3rd day, in advance of the control group (Fig. 4A), suggesting that aldosterone promoted LSECs defenestration in vitro. Moreover, the protein levels of eNOS and VASP showed concentration-dependent and time-dependent down-regulation of the NO-dependent pathway during LSECs fenestrae shrinking, which was exacerbated by aldosterone (Fig. 4B). Interestingly, the TEM data, the autophagic flux, the protein levels of AMPK, ULK1 and LC3II/I, and the ATP level showed that LSECs fenestrae shrank along with a decrease in ATP and a subsequent increase in AMPK-ULK1-dependent autophagy, which were aggravated by chronic aldosterone (Fig. 4C-F). Furthermore, the protein levels of p-AMPK (Thr172), p-ULK1 (Ser555), and LC3II/I were increased, along with decreased ATP generation, by acute aldosterone, suggesting increased p-AMPK and p-ULK1 activity (Supplementary Fig. 4A, B). Taken together, these data imply that acute and chronic aldosterone trigger AMPK-ULK1-dependent autophagy to promote LSECs defenestration.
The AMPK-dependent autophagy is initiated by acute and chronic aldosterone-induced oxidation mediated by Cav1
The protein levels of MR, Cav1 and NOX4 and the H₂O₂ and mito-ROS levels in primary rat LSECs showed concentration-dependent and time-dependent up-regulation of Cav1 expression and oxidative stress by acute and chronic aldosterone treatment (Supplementary Fig. 4C-D, Fig. 5A-C); moreover, co-IP assays revealed that aldosterone enhanced Cav1 co-precipitation with MR (Fig. 5D), suggesting that Cav1 interacts closely with MR to mediate oxidation. Additionally, the NOX4 protein level and the H₂O₂ content in LSECs showed that knockdown of Cav1 by siRNA reduced aldosterone-initiated oxidation (Fig. 5E-F). These data indicate that aldosterone induces oxidative stress mediated by Cav1.
Furthermore, the H₂O₂, mito-ROS and ATP levels, the protein levels of NOX4, AMPK and LC3II/I, and the autophagic flux in primary rat LSECs showed that antagonizing aldosterone (spironolactone) or antioxidants (NAC, TEMPO, mito-TEMPO) could attenuate aldosterone-induced oxidation and improve ATP production, thereby reducing AMPK-dependent autophagy (Fig. 6A-E). Indeed, antagonizing aldosterone or antioxidants maintained fenestrae of aldosterone-treated LSECs (Fig. 6F), indicating that aldosterone-induced oxidation, mediated by Cav1, initiates AMPK-dependent autophagy to promote LSECs defenestration.
Aldosterone-induced AMPK-dependent autophagy results in LSECs defenestration via inhibiting the NO-dependent pathway
The protein levels of LC3II/I, eNOS and VASP, and the SEM data in primary rat LSECs showed that an autophagy activator (rapamycin) down-regulated the NO-dependent pathway and induced LSECs defenestration, whereas the opposite results were obtained with autophagy inhibition (3MA or bafilomycin) (Fig. 7A-B).
Additionally, in aldosterone-treated LSECs, ROS, mito-ROS and the NOX4 protein level were reduced by pre-treatment with 3MA, bafilomycin or rapamycin, suggesting that either inhibiting or enhancing autophagy could relieve the oxidative stress induced by aldosterone (Supplementary Fig. 5). Despite the decrease in oxidation, pre-treatment with rapamycin still induced AMPK-dependent autophagy, down-regulation of the NO-dependent pathway, and LSECs defenestration, while these effects were reversed by pre-treatment with 3MA or bafilomycin (Supplementary Fig. 5D, Fig. 7A-B). These data suggest that the AMPK-dependent autophagy induced by aldosterone promotes LSECs defenestration.
Aldosterone induces selective autophagic degradation and redistribution of Cav1, and promotes F-actin remodeling
There was a time-dependent down-regulation of the Cav1 protein level, along with increased autophagy, during LSECs fenestrae shrinking from the 1st to the 3rd day in vitro (Supplementary Fig. 6A). Furthermore, enhancing autophagy (rapamycin), which promoted LSECs defenestration, reduced the Cav1 protein level, whereas the opposite results were obtained in the 3MA or bafilomycin groups (Fig. 8A). Additionally, immunofluorescence showed that Cav1 co-localized with LC3 in the perinuclear area in the rapamycin treatment group compared with the control group (Supplementary Fig. 6B). Furthermore, the Cav1 protein levels in membrane and cytoplasm showed that rapamycin reduced Cav1 protein expression in both compartments owing to enhanced autophagy (Fig. 8C). These results indicate that autophagy promotes degradation of Cav1.
Interestingly, compared with the control group, aldosterone enhanced the co-localization of Cav1 with LC3 in the perinuclear area, which was reversed by pre-treatment with 3MA or bafilomycin (Supplementary Fig. 6B), suggesting redistribution of Cav1. The co-IP assay revealed that aldosterone enhanced the co-precipitation of Cav1 with p62 and ubiquitin by enhancing autophagy; in contrast, 3MA inhibited autophagy and broke this interaction (Fig. 8B). Additionally, the Cav1 protein levels in membrane and cytoplasm showed that aldosterone increased the membrane Cav1 level but decreased the cytoplasmic Cav1 level. These findings suggest that aldosterone induced Cav1-related selective autophagy and the redistribution of Cav1 (Fig. 8C).
Furthermore, immunofluorescence showed that Cav1 co-localized with ubiquitin and F-actin in the perinuclear area of LSECs in the rapamycin- or aldosterone-treated groups; in contrast, less co-localization of Cav1 with ubiquitin and F-actin was displayed in the 3MA- or bafilomycin-treated groups, where Cav1 and F-actin were uniformly distributed throughout the cytoplasm (Fig. 8D). This suggests remodeling and redistribution of F-actin triggered by aldosterone through the selective autophagy and redistribution of Cav1 (Fig. 9).
Hence, aldosterone exacerbated Cav1-related autophagy in the perinuclear area to promote F-actin remodeling and LSECs defenestration, which were recovered by inhibition of autophagy.
Discussion
In the present study, we demonstrated for the first time that Cav1-related autophagy initiated by aldosterone-induced oxidation promotes LSECs defenestration. The principal findings are as follows: (1) In vivo, spironolactone or 3MA could reduce NOX4- and mitochondria-mediated oxidation and inhibit autophagy to alleviate LSECs defenestration and liver fibrosis. (2) Acute and chronic aldosterone increase NOX4- and mitochondria-derived oxidative stress mediated by Cav1 and subsequently initiate AMPK-dependent autophagy. (3) Aldosterone induces the selective autophagy and redistribution of Cav1 to promote F-actin remodeling. (4) Aldosterone, with pro-oxidation and pro-autophagy capabilities, inhibits the NO-dependent pathway to promote LSECs defenestration.
As is known, reactive oxygen species (ROS) play a key role in the pathogenesis of liver fibrosis. The mitochondrial respiratory chain and the NADPH oxidases (NOXs), the two primary cellular sources of ROS, generate superoxide (O₂⁻) and H₂O₂ [17,18]. NOXs-mediated ROS plays a critical role in HSCs activation and liver fibrosis, suggesting its potential as a pharmacological target for anti-fibrotic therapy [19]. Additionally, NOX4 mediates distinct profibrogenic actions in HSCs in the liver [20]. Moreover, part of the ROS in the liver is released by the mitochondrial respiratory chain. Mitochondrial ROS also play an important role in liver fibrosis, and mitochondria-targeted antioxidants attenuate liver fibrosis [21]. It has been evidenced that local tissue-based aldosterone promotes liver fibrogenesis via its pro-oxidation. Our previous study found that aldosterone induced NOX4-mediated oxidative stress to activate HSCs and promote liver fibrosis, which could be attenuated by the aldosterone antagonist spironolactone [22]. Spironolactone has also been used clinically to attenuate portal hypertension [23]. Furthermore, our present study found that continuous aldosterone could directly induce early liver fibrosis through NOXs- and mitochondria-mediated ROS, which could be improved by spironolactone. Hence, aldosterone is a promising drug target for liver fibrosis.
In addition, aldosterone promotes LSECs defenestration via its pro-oxidation and pro-autophagy effects. In vivo, we found that the serum aldosterone content and the MR protein level of LSECs were increased, along with increased oxidation and autophagy in LSECs, during the process of defenestration in BDL-induced liver fibrosis; these changes were reversed by antagonizing aldosterone (spironolactone) or inhibiting autophagy (3MA). The hyperaldosteronism model further demonstrated that, in vivo, continuous aldosterone infusion promotes LSECs defenestration via oxidative stress and enhanced autophagy, which were relieved by spironolactone or 3MA.
However, how the pro-oxidation and pro-autophagy induced by aldosterone impact LSECs defenestration remained unknown. First, it has been reported that NOXs- or mitochondria-mediated oxidative stress may cause endothelial dysfunction [19,24,25]. Our present study found that, in vitro, both acute and chronic aldosterone could induce NOX4- and mitochondria-mediated oxidative stress during LSECs defenestration, which could be attenuated by spironolactone, antioxidants, and autophagy inhibitors (3MA and bafilomycin). These results suggest that aldosterone-induced oxidation promotes LSECs defenestration.
Next, we explored the influence of aldosterone-initiated autophagy on LSECs defenestration. Interestingly, the literature on the effects of autophagy on liver fibrosis is diverse [3,4], perhaps because the role of autophagy in liver fibrosis varies with intra-hepatic cell type. Recently, autophagy was reported to modulate the phenotype of LSECs and protect against acute liver injury induced by I/R [5]. Here, we found that AMPK-dependent autophagy increased, along with down-regulation of the NO-dependent pathway, during LSECs fenestrae shrinking in vitro, and the autophagy activator rapamycin aggravated these effects. However, autophagy inhibitors (3MA or bafilomycin) maintained LSECs fenestrae and improved the NO-dependent pathway. These findings suggest that AMPK-dependent autophagy induces LSECs defenestration. In addition, we found that aldosterone promoted LSECs defenestration and the reduction of the NO-dependent pathway while enhancing autophagy and oxidation. So how does aldosterone affect LSECs defenestration via its pro-oxidation and pro-autophagy capabilities? It has been reported that ROS and depletion of ATP can directly induce autophagy via the AMPK-ULK1 pathway [26]. The present study showed that both acute and chronic aldosterone led to persistent oxidation and reduced ATP generation, with subsequent activation of AMPK-ULK1-dependent autophagy, resulting in LSECs defenestration and the reduction of the NO-dependent pathway; these effects were attenuated by spironolactone, antioxidants or autophagy inhibitors. Hence, aldosterone-induced oxidation and dysfunction of ATP generation initiate autophagy via the AMPK-ULK1 pathway to promote LSECs defenestration.
As a crucial factor, Cav1 connects oxidation and autophagy. Cav1 is a structural protein on the plasma membrane of fenestrae as well as of vesicles in LSECs [27]. In the membrane, Cav1 is necessary for caveolae biogenesis; caveolae are functional small invaginations that interact with various enzymes and receptors and mediate rapid signaling cascades. Intracellular Cav1, on the other hand, also assists in signal transduction and trafficking [10]. Opinions about the effects of Cav1 on LSECs fenestrae and capillarization are controversial. Fernandez-Hernando et al. demonstrated for the first time that Cav1 changes the porosity of LSECs and reduces the diameter of fenestrations, while genetic ablation of Cav1 induces defenestration in LSECs [28]. Others argued, however, that LSECs fenestration in Cav1-knockout mice is unchanged under normal conditions [29]. It seems that Cav1 is not strictly essential for maintaining the fenestrae of LSECs. Here, however, we demonstrated for the first time that Cav1 is indeed a multifunctional signaling hub that mediates aldosterone-induced oxidation and autophagy to regulate LSECs defenestration.
In caveolae, Cav1 plays an important role in mediating aldosterone-induced oxidation. A variety of ROS, such as superoxide and H₂O₂, in caveolae or other microdomains play a crucial role in cell signaling [30]. The literature on the effects of Cav1 on oxidation is complex. It has been reported that Cav1 is a negative regulator of NOXs-derived ROS through direct binding and alteration of expression [31]. Knockdown or knockout of Cav1 has been shown to increase ROS levels in the vasculature and can promote cardiovascular diseases [32]. A decrease in the Cav1 level can activate aldosterone/MR signaling in the pathways of glycemia, dyslipidemia, and resistin [33]. However, Cav1 interacts with MR to form an MR/Cav1 complex, which mediates a rapid oxidation signaling cascade initiated by aldosterone [10]. Aldosterone induces more abundant MR/Cav1 complexes that interact with NOX4, leading to oxidation [34]. These findings indicate that the effects of Cav1 on redox modification may differ between cell types. Our present study demonstrated for the first time that aldosterone increases membrane Cav1, which shows enhanced co-precipitation with MR, leading to NOX4- or mitochondria-derived oxidative stress in LSECs and the subsequent triggering of AMPK-dependent autophagy. Furthermore, knockdown of Cav1 attenuated the NOX4- and mitochondria-mediated ROS induced by aldosterone.
In addition, intracellular Cav1 participates in the regulation of aldosterone-induced autophagy. Song et al. [11] found that Cav1 mediated autophagy of intestinal epithelial cells via triggering NOX-dependent oxidation. Besides, intracellular Cav1 also mediates autophagy through regulation of the ATG12-ATG5 system and energy generation. Chen et al. [12] reported that Cav1 could interact with the ATG12-ATG5 system to suppress autophagy in lung epithelial cells. Ha et al. [35] described that depletion of Cav1 led to reduction of GLUT3-related glucose uptake and ATP generation, activating the AMP:ATP ratio/AMPK pathway to induce autophagy and diminish cellular metabolism, which in turn reinforced cytosolic AMPK-dependent autophagy. Interestingly, our data first showed a time-dependent down-regulation of the Cav1 protein level, along with increased autophagy, during LSECs fenestrae shrinking. Enhancing autophagy (rapamycin), which promoted LSECs defenestration, reduced both membrane and intracellular Cav1 protein expression, whereas the opposite results were obtained in the 3MA or bafilomycin groups owing to inhibition of autophagy. Moreover, the enhanced co-localization of Cav1 with LC3 in the rapamycin group directly showed that increased autophagy promoted the redistribution of Cav1 to autophagosomes (marked by LC3) in the perinuclear area of LSECs. These findings indicate that autophagy can initiate degradation of Cav1 to promote LSECs defenestration. It is noteworthy that more Cav1 co-localized with LC3 or ubiquitin in the perinuclear area of LSECs in the aldosterone-treated group, indicating that aldosterone induced intracellular Cav1 to redistribute to autophagosomes in LSECs; meanwhile, aldosterone enhanced the co-immunoprecipitation of Cav1 with p62 and ubiquitin by enhancing autophagy, whereas 3MA inhibited autophagy and broke this interaction. These findings suggest that aldosterone induced Cav1-related autophagy in the cytoplasm. Additionally, in spite of the increase in membrane Cav1 expression, aldosterone decreased the cytoplasmic Cav1 level owing to enhanced autophagy. These results indicate that the selective autophagic degradation and redistribution of intracellular Cav1 were induced by aldosterone. (Fig. 9 gives a schematic view of the major signal transduction pathways involved: aldosterone-induced autophagic degradation of Cav1 promotes defenestration of liver sinusoidal endothelial cells via F-actin remodeling and inhibition of the NO-dependent pathway.)
Furthermore, it has been confirmed that F-actin, as part of the cytoskeleton around fenestrae, modulates contraction of fenestrae [13], and its remodeling may facilitate defenestration. Besides, Lee et al. [14] reported that autophagy could assemble an F-actin network and facilitate remodeling, indicating that autophagy contributes to F-actin remodeling and subsequent fenestrae contraction. Here, immunofluorescence showed that rapamycin or aldosterone increased the co-localization of Cav1 with ubiquitin and F-actin in the perinuclear area of LSECs by enhancing autophagy, which was reversed by autophagy inhibitors (3MA or bafilomycin). Hence, we suspect that Cav1-related autophagy participates in LSECs defenestration through regulating F-actin remodeling, and that this is aggravated by aldosterone.
As mentioned above, Cav1 seems to play a dual role in the regulation of defenestration in aldosterone-treated LSECs: on the one hand, at the plasma membrane, NOX4- and mitochondria-derived oxidation, mediated by the MR/Cav1 complex, leads to depletion of ATP and subsequent autophagy, which may impair the fenestrae; on the other hand, in the cytoplasm, the autophagic degradation and redistribution of Cav1 promote F-actin remodeling and LSECs defenestration.
Finally, we focused on the effects of autophagy and oxidation on the NO-dependent pathway. Two kinds of signaling pathways maintain the LSECs differentiation status: the NO-dependent pathway (namely the NO/eNOS/sGC/cGMP/PKG/VASP pathway) and the NO-independent pathway [36,37]. The literature indicates that aldosterone negatively regulates NO/eNOS signaling via its pro-oxidation. Toda et al. [38] demonstrated that oxidative stress induced by chronic exposure to aldosterone leads to endothelial dysfunction and vasoconstriction because of the decline and degradation of NO synthesis. Additionally, acute exposure to aldosterone induced serious oxidation in human umbilical vein endothelial cells (HUVECs), along with a reduction of the eNOS dimer/monomer ratio [39]. Activation of MR signaling contributes to eNOS uncoupling and vascular dysfunction, while spironolactone restores NO bioavailability [40]. Here, our data showed that the NO-dependent pathway of LSECs was down-regulated by aldosterone treatment or BDL, which was reversed by spironolactone or antioxidants. Hence, oxidation mediates aldosterone-induced defenestration via inhibition of the NO-dependent pathway. Besides, Sarkar et al. [41] reported that NO negatively regulates autophagosome synthesis and autophagosome-lysosome fusion, while an eNOS inhibitor (L-NAME) induces autophagy. Rapamycin is associated with reduction of eNOS, affecting vasomotion [42]. Hence autophagy interacts negatively with NO/eNOS signaling. We also found that rapamycin inhibited the NO-dependent pathway, while autophagy inhibitors up-regulated it to maintain LSECs fenestrae. We therefore demonstrated that aldosterone-induced autophagy promotes LSECs defenestration via inhibiting the NO/eNOS/cGMP/PKG pathway.
However, there are some limitations to the present study. The role of autophagy-induced Cav1 redistribution to the perinuclear area of LSECs in defenestration needs further investigation. Moreover, the mechanism by which Cav1-related selective autophagy regulates F-actin remodeling and the NO-dependent pathway remains to be clearly revealed.
Conclusion
Cav1-related selective autophagy initiated by aldosterone-induced oxidation promotes LSECs defenestration via activating the AMPK-ULK1 pathway and inhibiting the NO-dependent pathway. Inhibition of LSECs autophagy is a promising strategy for the preventive treatment of sinusoidal capillarization.
Conflicts of interest
No potential conflicts of interest were disclosed.
System Report for CCL23-Eval Task 7: Chinese Grammatical Error Diagnosis Based on Model Fusion
The purpose of the Chinese Grammatical Error Diagnosis task is to identify the positions and types of grammar errors in Chinese texts. In Track 2 of CCL2023-CLTC, Chinese grammar errors are classified into four categories: Redundant Words, Missing Words, Word Selection, and Word Ordering Errors. We conducted data filtering, model research, and model fine-tuning in sequence. Then, we performed weighted fusion of models based on perplexity calculations and introduced various post-processing strategies. As a result, the performance of the model on the test set, measured by COM, reached 49.12.
Introduction
The purpose of the Chinese grammatical error diagnosis (CGED) task is to detect the location and type of each grammatical error in Chinese text. Grammatical errors are divided into four categories: Redundant Words (R), Missing Words (M), Word Selection (S), and Word Ordering Errors (W). In recent years, Chinese grammatical error correction has attracted increasing attention, and applications with commercial potential have appeared; the technology has broad application in education, news, official documents and other fields. The mainstream methods for this task are Seq2Seq and Seq2Edits. The Seq2Seq method treats grammatical error correction as translating an erroneous sentence into a correct sentence and solves it with an advanced neural translation model; the Seq2Edits method designs editing actions (such as insertion, deletion, and replacement) and treats grammar diagnosis as a sequence labeling task. In the CCL2023-CLTC Track 2 Chinese grammar error detection task, we use a multi-model fusion method and post-processing strategies to correct grammatical errors in text. Our final COM result on the Track 2 Chinese grammar error diagnosis task is 49.12.
Model
We surveyed models and papers extensively for the Track 2 Chinese grammatical error detection task. The mainstream methods for this task are Seq2Seq and Seq2Edits. The benchmark models we chose are the current mainstream BART (Bidirectional and Auto-Regressive Transformers) (Lewis et al., 2020), GECToR (Grammatical Error Correction: Tag, Not Rewrite), and T5 (Text-to-Text Transfer Transformer), which have achieved SOTA performance on CGEC (Chinese Grammatical Error Correction) datasets. The models we use in this task are introduced in detail below.
BART
The BART model (Lewis et al., 2020) uses the Transformer structure (Vaswani et al., 2017). The overall architecture consists of two parts: an encoder and a decoder. The encoder is responsible for converting the input sequence into a high-dimensional representation, and the decoder generates an output sequence based on the representation.
The encoder of the BART model is a stack of encoder layers, each consisting of a multi-head self-attention mechanism and a feed-forward neural network. This structure enables the encoder to model different positions of the input sequence and capture both global and local dependencies. The decoder of the BART model also uses the Transformer structure and generates autoregressively: in the decoding stage, the output sequence is produced step by step, with each step conditioned on the previously generated tokens.
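As an illustration of this seq2seq usage pattern, a hedged sketch with the Hugging Face transformers API; the checkpoint name and example sentence are assumptions for illustration, and the experiments in this report were actually run with fairseq (see the Model training section):

```python
# Hedged sketch of BART-style seq2seq error correction with Hugging Face
# transformers. fnlp/bart-base-chinese (which uses a BERT-style tokenizer)
# is a hypothetical checkpoint choice, not the one used in this report.
from transformers import BertTokenizer, BartForConditionalGeneration

model_name = "fnlp/bart-base-chinese"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

src = "这本书我非常喜欢看它。"  # an invented erroneous sentence
input_ids = tokenizer(src, return_tensors="pt").input_ids
out_ids = model.generate(input_ids, max_length=64, num_beams=5)
print(tokenizer.decode(out_ids[0], skip_special_tokens=True))
```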
The basic architecture of the BART model, based on the Transformer neural network, is shown in Figure 1. In pre-training, the original text is first corrupted with a variety of noising functions, and the model is then trained to reconstruct the original text with the seq2seq objective; the loss function is therefore the cross-entropy between the decoder output and the original text. BART introduces five noising methods that corrupt the original text, as shown in Figure 2.
Token Masking: as in BERT, random tokens are sampled and replaced with a [MASK] tag.
Token Deletion: random tokens are deleted from the input. Unlike masking, this strategy forces the model to learn which positions lack input.
Text Infilling: text spans are selected (with span lengths drawn from a Poisson distribution with λ = 3) and each span is replaced with a single [MASK] tag. A span of length 0 corresponds to inserting a [MASK] tag at that position. This differs from the SpanBERT model, which replaces a span with as many [MASK] tags as the span length.
Sentence Permutation: the text is split into sentences at periods, and the order of the sentences is randomly shuffled.
Document Rotation: a token is chosen at random and the text is rotated so that it begins with that token. This strategy lets the model learn to identify the beginning of the text.
Since pre-training is performed within one language while machine translation maps one language to another, the BART model randomly re-initializes the encoder Embedding layer when fine-tuned for machine translation, i.e., it replaces the dictionary and retrains representations for the other language.
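To make two of these noising functions concrete, a simplified sketch of token deletion and text infilling with Poisson(λ = 3) span lengths; this is an illustrative condensation, not the original fairseq implementation:

```python
# Simplified sketch of two BART noising functions described above.
import random
import numpy as np

def token_deletion(tokens, p=0.15):
    # Randomly drop tokens; the model must learn *where* input is missing,
    # not just what it was.
    return [t for t in tokens if random.random() > p]

def text_infilling(tokens, mask_ratio=0.3, lam=3):
    # Replace spans with a single [MASK]; span lengths follow Poisson(lam).
    # A span of length 0 amounts to inserting a [MASK] token.
    tokens, budget = list(tokens), int(len(tokens) * mask_ratio)
    while budget > 0 and tokens:
        span = min(np.random.poisson(lam), len(tokens))
        start = random.randrange(len(tokens) + 1 - span)
        tokens[start:start + span] = ["[MASK]"]
        budget -= max(span, 1)
    return tokens

print(text_infilling("我 今天 去 学校 上 课".split()))
```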
Fine-tuning
In the fine-tuning process, most of the parameters of the original BART model are first frozen, and only the randomly initialized Embedding, the BART position embeddings, and the self-attention parameters of the first encoder layer (the one connected to the Embedding) are trained; afterwards, all parameters of the model receive a small amount of additional training. A sketch of this two-stage schedule is shown below.
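A minimal PyTorch sketch of the two-stage schedule, assuming the Hugging Face BART module layout; the module paths and checkpoint are assumptions for illustration, not this report's actual code:

```python
# Stage 1: freeze most of BART and train only the embeddings plus the first
# encoder self-attention layer; stage 2: briefly unfreeze everything.
import torch
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

for p in model.parameters():
    p.requires_grad = False  # stage 1: freeze everything...

trainable = [
    model.model.shared,                       # token embeddings
    model.model.encoder.embed_positions,      # position embeddings
    model.model.encoder.layers[0].self_attn,  # first encoder self-attention
]
for module in trainable:
    for p in module.parameters():
        p.requires_grad = True  # ...then re-enable the selected modules

opt = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=3e-5)
# ... run stage-1 training, then set requires_grad = True on all parameters
# for a short stage-2 pass over the full model.
```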
GECToR
GECToR (Omelianchuk et al., 2020) is also a Transformer-based neural network model, designed specifically for text error correction; its goal is to automatically detect and correct grammatical and spelling errors in text. The GECToR model treats the task as sequence labeling: for the Chinese error correction training task, the model input requires aligning the source and target sentences and using edit-distance operations to assign a label to each character in the original sentence. The labels take four forms (KEEP, APPEND, DELETE, REPLACE). The training objective is to minimize the difference between the generated sequence and the reference sequence, using the cross-entropy loss or a similar objective function as the model loss. During training, the model learns how to automatically detect and correct grammatical and spelling errors in text. A sketch of deriving such labels follows.
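A minimal sketch of deriving per-character edit labels from a source/target pair, using difflib as a stand-in for GECToR's alignment procedure (the real preprocessing is considerably more elaborate; the example sentence is invented):

```python
# Derive KEEP / DELETE / REPLACE_x / APPEND_x labels per source character.
import difflib

def edit_labels(src, tgt):
    labels = []
    sm = difflib.SequenceMatcher(None, src, tgt)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            labels += [("KEEP", c) for c in src[i1:i2]]
        elif op == "delete":
            labels += [("DELETE", c) for c in src[i1:i2]]
        elif op == "replace":
            # crude: tag each source char with the whole target span
            labels += [(f"REPLACE_{tgt[j1:j2]}", c) for c in src[i1:i2]]
        elif op == "insert" and labels:
            tag, c = labels[-1]
            labels[-1] = (f"{tag}|APPEND_{tgt[j1:j2]}", c)
    return labels

print(edit_labels("我喜欢看看书", "我喜欢看书"))  # one 看 tagged DELETE
```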
T5
T5 (Text-to-Text Transfer Transformer) (Raffel et al., 2020) is a powerful language generation model; it is as much a paradigm for solving NLP tasks as a model architecture. The authors borrow the Seq2Seq idea to unify the different stages of the model (pre-training, fine-tuning, prediction) into a single text-to-text task: the model input is text, and the output is also text.
T5 retains most of the architecture of the original Transformer but emphasizes some key aspects, with minor changes to vocabulary and functionality. Some main concepts of the T5 model are listed below:
- The encoder and decoder remain in the model; encoder and decoder layers become blocks, and the sublayers become subcomponents that contain a self-attention layer and a feed-forward neural network.
- The self-attention mechanism is order-independent and uses matrix dot products instead of recursion.
- Positional information is added to the word embeddings before the dot product, which lets the model relate each word to every other word in the sequence.
The original Transformer uses sine and cosine functions to generate positional encodings, while T5 uses relative positional encodings: positional information enters through an extension of self-attention that compares pairwise token offsets. The positional encoding of T5 is shared across all layers of the model and re-evaluated in each of them.
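The sketch below illustrates the idea of a learned, shared relative-position bias in simplified form (the real T5 additionally maps large offsets into logarithmic buckets); all sizes here are arbitrary examples.

```python
import torch

num_buckets, n_heads, seq_len = 32, 8, 10
bias_table = torch.nn.Embedding(num_buckets, n_heads)  # shared across all layers

pos = torch.arange(seq_len)
rel = pos[None, :] - pos[:, None]          # (seq, seq) matrix of offsets j - i
rel = rel.clamp(-num_buckets // 2, num_buckets // 2 - 1) + num_buckets // 2
bias = bias_table(rel).permute(2, 0, 1)    # (heads, seq, seq) additive bias
# attention_logits = q @ k.transpose(-2, -1) / d ** 0.5 + bias
```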
Data
Track 2 provides the processed Lang8 dataset and the CGED dataset (Rao et al., 2020). Character-count statistics of the Lang8 dataset and the official test set are shown in Table 1 and Table 2. The statistics show that nearly all sentences are within 80 characters; in the test set, 97.6% are within 80 characters and only a few exceed that length. The data sources of CGED are the HSK dynamic composition corpus and the Global Chinese Interlanguage Corpus. CGED-8 includes about 1,400 paragraph units and 3,000 errors; each unit contains 1-5 sentences, and each sentence is annotated with the position, type, and correction of its grammatical errors. 5,000 entries were randomly sampled from the Lang8 and CGED datasets as the in-group test set.
Experiment and Results
We fine-tuned several models on the datasets above, ran comparative experiments, and adopted model fusion together with various post-processing strategies to reach our final submitted result of COM = 49.12. Below we describe in detail the experiments that produced clear improvements.
Model training
During the experiments, we used the fairseq library to load the BART pre-trained model, trained it on the Lang8 and CGED datasets released by the competition organizers, and optimized the model parameters via backpropagation and gradient descent. In each training step, the source-language sequence is fed into the encoder, the decoder generates the target-language sequence, the loss function is computed, and the parameters are updated along the gradient of the loss, gradually improving the performance of the model. The hyperparameter settings used during training are shown in Table 3.
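A schematic of one such training step is sketched below; the model call signature and padding index are assumptions, since the actual run used fairseq's training loop rather than hand-written code.

```python
import torch
import torch.nn.functional as F


def train_step(model, optimizer, src_tokens, tgt_tokens):
    optimizer.zero_grad()
    # Encoder consumes the source sentence; decoder predicts the target
    # autoregressively against the gold (teacher-forced) prefix.
    logits = model(src_tokens, tgt_tokens[:, :-1])
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tgt_tokens[:, 1:].reshape(-1),
        ignore_index=0,  # assumed padding index
    )
    loss.backward()
    optimizer.step()
    return loss.item()
```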
Experimental results
After training, we use the trained BART model for inference on the grammar error correction task: the test data is fed into the encoder as the source language, and the decoder generates the target-language sequences.
During inference we tried averaging model weights, multiple rounds of error correction, correcting UNK characters, optimizing decoding parameters, and other strategies. The specific optimizations are described below. Model weight averaging: the parameters of models saved at different points during training are averaged to obtain a model with smoother, better-generalizing weights. In our experiments, we averaged the parameters of the 5 best checkpoints; this strategy raised the comprehensive score on the test set by 0.15 over the baseline.
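A minimal sketch of this averaging is shown below (fairseq ships a comparable utility, scripts/average_checkpoints.py); the assumption that parameters live under a "model" key follows fairseq's checkpoint layout, and the paths are placeholders.

```python
import torch


def average_checkpoints(paths):
    """Average the parameter tensors of several saved checkpoints."""
    avg = None
    for p in paths:
        state = torch.load(p, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}
```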
Multiple rounds of error correction: the model's inference is iterated for several rounds, and the best number of rounds is chosen by comparing experimental results. We found that N = 2 iterations gives the highest comprehensive score on the test set, improving it by a further 1.11. The flow chart of multi-round inference is shown in Figure 4. Correcting UNK characters: analysis of the outputs showed that the model decodes some Latin strings such as BAHAYKUBO, SOGO, and MOS into UNK characters during inference; we therefore compare each result against the original sentence and replace UNK characters in the result with the corresponding content from the original. This strategy improved the comprehensive score on the test set by a further 0.13.
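The two strategies can be sketched as follows; `correct` stands in for one full pass of BART inference, and the UNK alignment here is deliberately crude, so treat both functions as illustrations rather than our exact implementation.

```python
import re


def iterative_correct(sentence, correct, n_rounds=2):
    for _ in range(n_rounds):              # N = 2 worked best in our tests
        out = correct(sentence)
        if out == sentence:                # fixed point: stop early
            break
        sentence = out
    return sentence


def patch_unk(src, hyp, unk="<unk>"):
    # Replace decoder UNKs with Latin/digit spans from the source sentence
    # that the hypothesis lost; the alignment here is intentionally crude.
    for span in re.findall(r"[A-Za-z0-9]+", src):
        if unk in hyp and span not in hyp:
            hyp = hyp.replace(unk, span, 1)
    return hyp
```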
Optimizing decoding parameters: for inference we use Beam Search, a decoding algorithm that explores potentially high-probability sequences by retaining a fixed number of candidates at each time step. The beam size, which controls how many candidates are retained, is the key parameter; experiments showed that a beam size of 12 gives the best comprehensive score on the test set. We also set the max length to 100 and 256 respectively and trained on the Lang8 data; the model parameters are shown in Table 5. Using all of the above strategies, a single model reaches a comprehensive score COM of 47.89 on the validation set; the per-strategy results are shown in Table 4.
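The beam-size choice can be framed as a simple sweep, as sketched below; `decode` and `score` are hypothetical hooks into the inference and evaluation code, not functions from fairseq.

```python
def sweep_beam_sizes(decode, score, test_set, candidates=(4, 8, 12, 16)):
    """Return the beam size with the best comprehensive score."""
    results = {b: score(decode(test_set, beam_size=b)) for b in candidates}
    return max(results, key=results.get), results
```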
Configurations               Values
Pretrained language model    Chinese-BART-Large
Learning rate                3 × 10^-6
Max epochs                   10
Learning rate scheduler      Polynomial
Batch size per GPU           32

In this experiment we also used the GECToR model, with two pre-trained encoders, chinese-bert-wwm-ext (Cui et al., 2020) and structbert-large-zh (Wang et al., 2019), for comparative experiments. Initial runs showed that the model was overly inclined to predict the deletion label $DELETE; to address this, the $DELETE tag of the original task was split so that each deletion names the Chinese character it removes ($DELETE_char). The training set uses the preprocessed Lang8 dataset provided by Track 2 plus all simplified-Chinese data of CGED, and the CGED2021 test set serves as the test set for training. At prediction time, we run multiple passes: the first prediction of the model is used as the input for the second prediction, and the process is repeated three times to obtain the final result. The hyperparameter settings of the GECToR model are shown in Table 7.
Configurations         Values
Learning rate          1 × 10^-5
Max epochs             10
Max length             128
Batch size per GPU     64
Optimizer              Adam (β1 = 0.9, β2 = 0.99, ε = 1 × 10^-8)

The experimental results of the T5 model on the in-group test set are shown in Table 10. The results show that when the T5 model is trained and fine-tuned on the Lang8 and CGED datasets, performance peaks at 50 epochs, and the metrics decline beyond that point.
Experimental analysis
A single model may be weak at correcting certain types of grammatical errors. Using multiple models, especially models specialized for different error types, broadens the coverage of grammatical errors: different models have different focuses and expertise, so fusion can integrate their strengths into a more comprehensive correction capability. An individual model may also miss errors or produce false positives for certain error types; combining the outputs of multiple models reduces both. For example, GECToR is a non-autoregressive model, so it cannot reliably correct errors spanning multiple consecutive characters; its faulty multi-character edits tend to have high perplexity (ppl) and can therefore be filtered out by adding ppl to the fusion strategy.
Through the above analysis of single-model results, we find that model fusion can combine the advantages of multiple models and compensate for the shortcomings of any single model, improving performance on the Chinese grammar error correction task.
Fusion strategy
Traditional voting strategy:
1. In this evaluation, we integrate the corrected results of multiple models based on the inference results of the different models and the perplexity computed on the sentences the models produce.
We set the threshold for error-position detection to threshold_detect and the threshold for error correction to threshold_correct; each model has its own thresholds, and the exact values can be found in the code.
2. The score for an error-detection position in a sentence is

score_detect = Σ_{i=1}^{n} detect_i,

where detect_i is the error-detection result of the i-th model (1 if model i flags this position, 0 otherwise). The score for correcting a flagged position to a given token is

score_token = Σ_{i=1}^{n} token_i, where
token_i = 1 if the i-th model corrected this position to this token, and 0 otherwise. (5)

ppl-based voting strategy:
1. Add a perplexity strategy: compute the perplexity of the original sentence and of the corrected sentences of the n models, and analyze the perplexity difference

Δppl_i = ppl_src − ppl_i,

where ppl_i is the perplexity of the i-th model's predicted sentence and ppl_src is the perplexity of the original sentence. The perplexity weighting value weight_ppl_i is derived from this difference; the exact setting used in our experiment can be found in the code.
The formula for each model's contribution to whether a flagged position in a sentence requires correction then becomes

score_detect = Σ_{i=1}^{n} weight_ppl_i · detect_i,

where weight_ppl_i is the perplexity weighting value of the i-th model's error-detection judgment for that position.
The score for correcting a flagged position to a given token in a sentence becomes

score_token = max(weight_ppl_token_1, ..., weight_ppl_token_n),

where weight_ppl_token_i is the perplexity weighting value when the i-th model corrects this position to this token; if the i-th model's correction is not this token, weight_ppl_token_i is 0.
Screening Strategy:
Compute score_detect for the n models' results at a position, and score_token_i for each of the candidate corrections token_i proposed by the n models at that position. Only when both

score_detect ≥ threshold_detect (10)

and

max(score_token_1, ..., score_token_i, ..., score_token_n) ≥ threshold_correct (11)

are satisfied is the token with the maximum score adopted as the correction result for that position; if either condition fails, the position is left uncorrected.
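Putting the pieces together, a hedged sketch of the per-position fusion logic might look as follows; the data layout and tie-breaking are our own simplifications of the procedure described above.

```python
from collections import defaultdict


def fuse_position(preds, threshold_detect, threshold_correct):
    """preds: one (detected, token, weight_ppl) triple per model for a position."""
    score_detect = sum(w for detected, _, w in preds if detected)
    token_scores = defaultdict(float)
    for detected, token, w in preds:
        if detected and token is not None:
            token_scores[token] = max(token_scores[token], w)
    if not token_scores:
        return None
    best = max(token_scores, key=token_scores.get)
    if score_detect >= threshold_detect and token_scores[best] >= threshold_correct:
        return best            # adopt the highest-scoring correction
    return None                # otherwise leave the position unchanged
```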
Experimental results of two fusion strategies
We conducted experiments with the two fusion strategies separately; the results are shown in Table 11. By comparison, the ppl-based fusion strategy performs better than the traditional voting strategy.

Post-processing strategies:
(1) First correct the spelling of the sentence with pycorrector (Xu, 2021), then perform grammatical error correction.
(2) Segmentation of long sentences: sentences longer than 80 characters are first split at punctuation marks and then re-joined sequentially under the constraint that no segment exceeds 80 characters.
(3) Script conversion for non-simplified Chinese: traditional Chinese characters are uniformly converted to simplified Chinese, Japanese kanji are converted to the corresponding Chinese characters, and English is left unchanged.
Results
The results of the different strategies on the official test set are shown in the results table.
Summary
In this competition, we adopted multi-model fusion combined with various post-processing strategies, which effectively improved the performance of the model; our final result on the official test set was COM = 49.12.
Innovation
For this competition, our work has the following innovations: (1) during model inference, we explored methods such as model weight averaging and multiple rounds of error correction;
(2) a fusion technique for multi-model error-detection results, including the ppl strategy, that preserves the characteristics of each model and lets their advantages complement one another by first screening positions and then screening correction results;
(3) a variety of post-processing strategies, compared experimentally so as to select the ones that improve the results.
Disadvantages
This work also has some regrets and deficiencies: the models we investigated and used are limited in number and all operate at the character level, so they do not cover all current error-correction models. In future work, we can exploit Chinese-specific word-level information and richer semantic information to further improve the performance of Chinese grammar error correction models.
Characterization of biomass burning emissions from cooking fires, peat, crop residue, and other fuels with high-resolution proton-transfer-reaction time-of-flight mass spectrometry
We deployed a high-resolution proton-transfer-reaction time-of-flight mass spectrometer (PTR-TOF-MS) to measure biomass-burning emissions from peat, crop residue, cooking fires, and many other fire types during the fourth Fire Lab at Missoula Experiment (FLAME-4) laboratory campaign. A combination of gas standard calibrations and composition-sensitive, mass-dependent calibration curves was applied to quantify gas-phase non-methane organic compounds (NMOCs) observed in the complex mixture of fire emissions. We used several approaches to assign the best identities to most major "exact masses", including many high molecular mass species. Using these methods, approximately 80-96 % of the total NMOC mass detected by the PTR-TOF-MS and Fourier transform infrared (FTIR) spectroscopy was positively or tentatively identified for major fuel types. We report data for many rarely measured or previously unmeasured emissions in several compound classes including aromatic hydrocarbons, phenolic compounds, and furans; many of these are suspected secondary organic aerosol precursors. A large set of new emission factors (EFs) for a range of globally significant biomass fuels is presented. Measurements show that oxygenated NMOCs accounted for the largest fraction of emissions of all compound classes. In a brief study of various traditional and advanced cooking methods, the EFs for these emissions groups were greatest for open three-stone cooking in comparison to their more advanced counterparts. Several little-studied nitrogen-containing organic compounds were detected from many fuel types, that together accounted for 0.1-8.7 % of the fuel nitrogen, and some may play a role in new particle formation.
Introduction
Biomass burning (BB) injects large amounts of primary, fine carbonaceous particles and trace gases into the global atmosphere and significantly impacts its physical and chemical properties (Crutzen and Andreae, 1990; Bond et al., 2004, 2013). While BB emissions are recognized as the second largest global atmospheric source of gas-phase non-methane organic compounds (NMOCs) after biogenic emissions, a significant portion of the higher molecular weight species remains unidentified (Christian et al., 2003; Warneke et al., 2011; Yokelson et al., 2013). It is widely accepted that the addition of large amounts of these highly reactive species into the atmosphere alters chemistry on local to global scales (Andreae and Merlet, 2001; Andreae et al., 2001; Karl et al., 2007). NMOCs particularly impact smoke evolution by rapid formation of secondary organic aerosols (SOA) and secondary gases including photochemical ozone (O3) (Reid et al., 1998; Trentmann et al., 2005; Alvarado and Prinn, 2009; Yokelson et al., 2009; Vakkari et al., 2014).
The many unknowns and initial gas-phase variability of BB emissions limit our ability to accurately model the atmospheric impacts of fire at all scales (Trentmann et al., 2005; Mason et al., 2006; Alvarado and Prinn, 2009; Alvarado et al., 2009; Wiedinmyer et al., 2011). Estimating or modeling the potential of smoke photochemistry to generate secondary aerosols or O3 requires realistic estimates of NMOC emissions in fresh smoke and knowledge of the chemical processing environment. Measurements capable of identifying and quantifying rarely measured and presently unidentified emissions of NMOCs, in particular the chemically complex low-volatility fraction, are vital for advancing current understanding of the BB impact on air quality and climate.
Proton-transfer-reaction time-of-flight mass spectrometry (PTR-TOF-MS) is an emerging technique that simultaneously detects most NMOCs present in air samples, including oxygenated organics, aromatics, alkenes, and nitrogen (N)-containing species at parts-per-trillion (pptv) detection limits (Jordan et al., 2009; Graus et al., 2010). The instrument uses H3O+ reagent ions to ionize NMOCs via proton-transfer reactions, yielding high-resolution mass spectra of protonated NMOCs with a low degree of molecular fragmentation at a mass accuracy sufficient to determine molecular formulas (CwHxNyOz).
Although PTR-TOF-MS has many advantages over conventional PTR quadrupole mass spectrometers (increased mass range, high measurement frequency, and high mass resolution), several difficulties remain with PTR technology: (1) detection is limited to molecules with a proton affinity greater than that of water, (2) spectra are complicated by parent-ion fragmentation or cluster-ion formation, and (3) the method cannot isolate isomers. Despite these limitations, PTR-TOF-MS is ideal for studying complex gaseous mixtures such as those present in BB smoke.
This study was carried out as part of a large-scale experiment to characterize the initial properties and aging of gas- and particle-phase emissions in smoke from globally significant fuels. Experiments were conducted from October to November 2012 during the fourth Fire Lab at Missoula Experiment (FLAME-4) as detailed by Stockwell et al. (2014). A major goal of the study focused on the identification and quantification of highly reactive NMOCs in order to (1) better characterize the overall chemical and physical properties of fresh BB emissions; (2) better understand the distribution of emitted carbon across a range of volatilities in fresh and aged smoke; and (3) improve the capability of current photochemical models to simulate the climatic, radiative, chemical, and ecological impacts of smoke on local to global scales. In a companion paper, the FLAME-4 emissions were compared extensively to field measurements of fire emissions and were shown to be representative of "real-world" BB either as is or after straightforward adjustment procedures detailed therein (Stockwell et al., 2014). In this work, we describe the first application (to our knowledge) of PTR-TOF-MS technology to laboratory BB smoke to characterize emissions from a variety of authentic globally significant fuels. We report on several new or rarely measured gases and present a large set of useful emission ratios (ERs) and emission factors (EFs) for major fuel types that can inform and update current atmospheric models.
Missoula fire sciences laboratory
The US Forest Service Fire Sciences Laboratory (FSL) in Missoula, MT houses a large indoor combustion room described in detail elsewhere (Christian et al., 2003; Burling et al., 2010; Stockwell et al., 2014). In short, fuels are burned on a bed located directly below a 1.6 m diameter exhaust stack. The room is slightly pressurized by outdoor air that generates a large flow, entraining the fire emissions up through the stack. Emissions are drawn into sampling lines fixed in the stack at a platform height 17 m above the fuel bed. Past studies demonstrated that temperature and mixing ratios are constant across the width of the stack at the platform height, confirming well-mixed emissions (Christian et al., 2004).
Burns were conducted using two separate configurations as described in Stockwell et al. (2014). In this paper we will focus on 125 of the 157 burns. During these fires, well-mixed fresh smoke was sampled directly from the combustion stack by PTR-TOF-MS roughly 5 s after emission. Results obtained during the remaining burns, which investigated photochemically processed smoke composition in dual smog chambers with a suite of state-of-the-art instrumentation, are presented elsewhere (Tkacik et al., 2014).
Biomass fuels
Descriptions and ignition methods of each fuel type burned during FLAME-4 are detailed in Stockwell et al. (2014). Authentic globally significant fuels were collected, including African savanna grasses; US grasses; US and Asian crop residue; Indonesian, temperate, and boreal peat; temperate and boreal coniferous canopy fuels; woods in traditional and advanced cooking stoves; shredded tires; and trash. The range of fuel loading was chosen to simulate real-world conditions for the investigated fuel types, with global examples of biomass consumption shown in Akagi et al. (2011).
Proton-transfer-reaction time-of-flight mass spectrometer
Real-time analysis of NMOCs was performed using a commercial PTR-TOF-MS 8000 instrument from Ionicon Analytik GmbH (Innsbruck, Austria) that is described in detail by Jordan et al. (2009). The PTR-TOF-MS sampled continuously at a frequency of 0.2 Hz through heated PEEK tubing (0.0003 m o.d., 80 °C) positioned facing upward to limit particulate uptake. The instrument was configured with a mass resolution (m/Δm) in the range of 4000 to 5000 at m/z 21 and a typical mass range from m/z 10 to 600. The drift tube was operated at 600 V with a pressure of 2.3 mbar at 80 °C (E/N ∼ 136 Td, with E the electric field strength and N the concentration of neutral gas; 1 Td = 10^-17 V cm^2). A dynamic dilution system was set up to reduce the concentration of sampled smoke and minimize reagent-ion depletion. Mass calibration was performed by permeating 1,3-diiodobenzene (protonated parent mass at m/z 330.85; fragments at m/z 203.94 and 204.94) into a 1 mm section of Teflon tubing used in the inlet flow system. The high mass accuracy of the data allowed for the determination of the atomic composition of protonated NMOC signals where peaks were clearly resolved. The post-acquisition data analysis to retrieve counts per second based on peak analysis was performed according to procedures described in detail elsewhere (Müller et al., 2010, 2011, 2013). An initial selection of ions (∼ 68 masses up to m/z ∼ 143) was chosen based upon incidence and abundance for post-acquisition analysis. In select cases (nominally one fire of each fuel type), additional compounds (∼ 50 masses) were analyzed and are reported separately within this paper. A reasonable estimation procedure showed that the peaks selected for analysis accounted for > 99 % of the NMOC mass up to m/z 165 in our PTR-TOF-MS spectra. An earlier BB study (Yokelson et al., 2013) using mass scans to m/z 214 found that ∼ 1.5 % of NMOC mass was present at m/z > 165.
The normalized sensitivity of the instrument (ncps ppbv^-1) was determined for calibrated compounds based on the slope of the linear fit of signal intensities (normalized to the H3O+ signal, ∼ 10^6 cps) versus a range of volumetric mixing ratios (VMR). Multipoint calibration curves varied due to instrumental drift and dilution adjustments; accordingly, average calibration factors (CFs, ncps ppbv^-1) were determined throughout the field campaign as described by Warneke et al. (2011) and were used to calculate concentrations.
Quantification of the remaining species was performed using calculated mass-dependent calibration factors based on the measured calibration factors. Figure 1a shows the spread in the normalized response of compounds versus mass (labeled by compound name) overlaid with the linearly fitted mass-dependent transmission curve (black markers and dotted line). It is clear from Fig. 1a that the oxygenated species (blue labels) and the hydrocarbon species (green labels) exhibit slightly different mass-dependent behavior; however, both groups show a linear increase with mass that is similar to that observed for the transmission efficiency (Fig. 1b and c). To reduce bias, mass-dependent calibration factors were determined using a linear approximation for oxygenated and hydrocarbon species separately (Fig. 1b and c). α-Pinene was not included in the linear approximation for hydrocarbons, as this compound is well known to be susceptible to substantial fragmentation in the drift tube. Sulfur (S)- and N-containing compounds were considered collectively, and together they more closely follow the trend of the oxygenated species. Thus, in cases where a compound contains a non-oxygen heteroatom (such as methanethiol), the mass-dependent calibration factor was determined using the relationship established for the oxygenated species. Calibration factors were then determined according to the exact mass for all peaks where the chemical formula had been determined. Our approach does not yet account for the potential for ions to fragment and/or cluster; however, we expect this impacts less than 30 % of NMOCs and usually to a small degree for any individual species. These latter issues change the mass distribution of observed carbon but should not have a large effect on the total observed carbon.
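As an illustration of this procedure, the sketch below fits separate sensitivity-versus-mass lines for oxygenated and hydrocarbon calibrants and applies the oxygenated fit to a heteroatom species; all numeric values are placeholders, not FLAME-4 calibration data.

```python
import numpy as np


def fit_calibration(masses, sensitivities):
    """Linear normalized-sensitivity model: CF(m) = slope * m + intercept."""
    slope, intercept = np.polyfit(masses, sensitivities, deg=1)
    return lambda m: slope * m + intercept


# Placeholder calibrant data (exact masses, ncps/ppbv), one fit per class:
cf_oxygenated = fit_calibration([33.03, 45.03, 61.03], [14.0, 17.0, 20.0])
cf_hydrocarbon = fit_calibration([79.05, 93.07, 107.09], [16.0, 19.0, 21.0])

# N- and S-containing species follow the oxygenated trend (see text):
cf_methanethiol = cf_oxygenated(49.01)
```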
It is difficult to assess the overall error introduced by this method of calibration-factor approximation, as only a limited number of comparable measurements of calibration factors are available. The deviation of measured calibration factors for species contained in the gas standard from the linear approximation yields a range of errors (21 ± 19 %), with a maximum of 50 % observed in all cases (excluding α-pinene for the reasons detailed above). While PTR-TOF-MS is typically known as a soft ionization method, fragmentation is common among higher molecular weight species and therefore needs to be considered a limitation of this technique. For the individual species identified, it would be misleading to give a set error based on this limited analysis; however, in the absence of any known molecular fragmentation, a maximum error of 50 % is prescribed, although larger errors are possible for compounds with N and S heteroatoms. Better methods for the calculation of mass-dependent calibration factors by compound class should be developed in the near future to improve the accuracy of volatile organic compound (VOC) measurements using PTR-TOF-MS.
OP-FTIR
To enhance application of the MS data, emission ratios to carbon monoxide (CO) were calculated where possible using measurements from an open-path Fourier transform infrared (OP-FTIR) spectrometer described elsewhere (Stockwell et al., 2014). The system includes a Bruker Matrix-M IR cube spectrometer with an open White cell that was positioned to span the width of the stack to sample the continuously rising emissions. The spectral resolution was set to 0.67 cm^-1 and spectra were collected every 1.5 s with a duty cycle greater than 95 %. Other gas-phase species quantified by this method included carbon dioxide (CO2), methane (CH4), ethyne (C2H2), ethene (C2H4), propylene (C3H6), formaldehyde (HCHO), formic acid (HCOOH), methanol (CH3OH), acetic acid (CH3COOH), glycolaldehyde (C2H4O2), furan (C4H4O), water (H2O), nitric oxide (NO), nitrogen dioxide (NO2), nitrous acid (HONO), ammonia (NH3), hydrogen cyanide (HCN), hydrogen chloride (HCl), and sulfur dioxide (SO2); they were obtained by multi-component fits to selected regions of the mid-IR transmission spectra with a synthetic calibration non-linear least-squares method (Griffith, 1996; Yokelson et al., 2007).
The OP-FTIR system had the highest time resolution with no sampling line, storage, fragmentation, or clustering artifacts; thus, for species in common with PTR-TOF-MS, the OP-FTIR data were used as the primary data. The results of the intercomparison (for methanol) of OP-FTIR and PTR-TOF-MS show excellent agreement, using an orthogonal distance regression to determine the slope (0.995 ± 0.008) and the R^2 coefficient (0.789). This result is consistent with the good agreement for several species measured by both PTR-MS and OP-FTIR observed in numerous past studies of laboratory BB emissions (Christian et al., 2004; Karl et al., 2007; Veres et al., 2010; Warneke et al., 2011).
Emission ratio and emission factor determination
Excess mixing ratios (denoted ΔX for each species X) were calculated by applying an interpolated background correction (determined from the pre- and post-fire concentrations). The molar emission ratio (ER) for each species X relative to CH3OH (ΔX/ΔCH3OH) is the ratio between the integral of ΔX over the entire fire and the integral of ΔCH3OH over the entire fire. We selected CH3OH as the species in common with the OP-FTIR to serve as an internal standard for the calculation of the fire-integrated ERs of each species X to CO (Supplement Table S1). We do this by multiplying the MS-derived ER (ΔX/ΔCH3OH) by the FTIR-derived ER (ΔCH3OH/ΔCO), which minimizes error due to occasional reagent-ion depletion or the different sampling frequencies between instruments that would affect calculating ΔX/ΔCO directly. Several fires were excluded from this calculation because data were not collected by OP-FTIR and/or PTR-TOF-MS or, alternatively, because methanol data could not be applied for the conversion, either (1) because the mixing ratios remained below the detection limit or (2) because methanol was used to assist ignition during a few fires. For the tire fires only, the latter issue with CH3OH was circumvented by using HCOOH (m/z 47) as a suitable alternative internal standard. As discussed in Sect. 2.3, ∼ 50 additional masses were analyzed for selected fires, and the ERs (to CO) for these fires are included in the bottom panels of Table S1. The combined ERs to CO from the FTIR and PTR-TOF were then used to calculate emission factors (EFs, g kg^-1 dry biomass burned) by the carbon mass-balance (CMB) method, based on the assumption that all of the burned carbon is volatilized and that all of the major carbon-containing species have been measured (Ward and Radke, 1993; Yokelson et al., 1996, 1999; Burling et al., 2010). EFs were previously calculated solely from FLAME-4 OP-FTIR data as described in Stockwell et al. (2014), and a new, larger set of EFs, which includes more carbon-containing species quantified by PTR-TOF-MS, is now shown in Supplement Table S2. With the additional carbon compounds quantified by PTR-TOF-MS, the EFs calculated by CMB decreased ∼ 1-2 % for most major fuels with respect to the previous EFs reported in Stockwell et al. (2014); for the peat and sugar cane fires, the OP-FTIR-derived EFs are now reduced by ∼ 2-5 % and 3.5-7.5 %, respectively. Along with these small reductions, this work provides EFs for many additional species that were unavailable in Stockwell et al. (2014). Finally, the EFs reported in Supplement Table S3 were adjusted (when needed) according to procedures established in Stockwell et al. (2014) to improve laboratory representation of real-world BB emissions. This table contains the EFs we recommend other workers use; it appears in the Supplement only because of its large size. In addition to the comparisons considered in Stockwell et al. (2014), we find that our EFs in Table S3 are consistent (for the limited number of overlapping species) with additional recent field studies, including Kudo et al. (2014) for Chinese crop-residue fires and Geron and Hays (2013) for North Carolina (NC) peat fires.
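A compact sketch of the fire-integrated ER and carbon-mass-balance EF arithmetic described above is given below; all inputs are placeholders, and the carbon mass fraction fc must be taken from fuel analysis rather than the default shown here.

```python
def emission_ratio(int_dX, int_dRef):
    """Fire-integrated molar ER: integral of dX over integral of dRef."""
    return int_dX / int_dRef


def emission_factor_cmb(er_to_co2, mw_x, carbon_species, fc=0.5):
    """EF_X in g per kg dry fuel by carbon mass balance.

    carbon_species: (ER to CO2, number of C atoms) for every measured
    carbon-containing species, including CO2 itself as (1.0, 1).
    fc: carbon mass fraction of the dry fuel (placeholder value).
    """
    total_c = sum(er * n_c for er, n_c in carbon_species)  # mol C per mol CO2
    return fc * 1000.0 * (mw_x / 12.011) * er_to_co2 / total_c
```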
Fire emissions depend in part on naturally changing combustion processes. To estimate the relative amounts of smoldering and flaming combustion that occurred over the course of each fire, the modified combustion efficiency (MCE) is calculated as (Yokelson et al., 1996)

MCE = ΔCO2 / (ΔCO2 + ΔCO).

Though flaming and smoldering combustion often occur simultaneously, a higher MCE value (approaching 0.99) designates relatively pure flaming combustion (more complete oxidation), a lower MCE (0.75-0.84) designates pure smoldering combustion, and an MCE of ∼ 0.9 represents roughly equal amounts of flaming and smoldering. Each fire-integrated MCE is reported in Tables S1-S3.

Results
Peak assignment
As exemplified by a typical PTR-TOF-MS spectrum of diluted smoke (Fig. 2a), the complexity of BB smoke emissions presents challenges to mass spectral interpretation and ultimately emissions characterization. Figure 2b shows a smaller mass range of the smoke sample shown in Fig. 2a on a linear scale to illustrate the typical relative importance of the masses (note the intensity of acetaldehyde (m/z 45) and acetic acid plus glycolaldehyde (m/z 61), which together account for almost 25 % of the total signal). Although the spectra are very complex, systematic treatment of the burn data, assisted at some m/z by extensive published "off-line" analyses, can generate reasonable assignments for many major peaks and result in useful emissions quantification.
As described earlier, the PTR-TOF-MS scans have sufficiently high resolution to assign molecular formulas (CwHxNyOz) to specific ion peaks by matching the measured exact mass with possible formula candidates for the protonated compound. Specific compound identification for formula candidates can be unambiguous if only one species is structurally plausible or if explicit identification of the compound had previously been confirmed by BB smoke analysis (Akagi et al., 2011; Yokelson et al., 2013; etc.). Supplement Table S4 lists every mass and formula assignment for observable peaks up to m/z 165 and categorizes each mass as a confirmed identity, a tentative (most likely) species assignment, or an unknown compound. For several confirmed identities, the most abundant species at that exact mass is listed, with likely contributions to the total signal from the secondary species listed in column 5. Most of the tentatively identified species have, to our knowledge, typically not been directly observed in BB smoke but have been frequently verified as major products with off-line techniques in the extensive literature describing biomass pyrolysis experiments of various fuel types (Liu et al., 2012; Pittman Jr. et al., 2012; Li et al., 2013; more citations in Table S4). Several tentative assignments are supported by off-line analyses being published elsewhere (Hatch et al., 2014); for example, simultaneous grab samples analyzed by two-dimensional gas chromatography (2D-GC) support tentative assignments for furan methanol, salicylaldehyde, and benzofuran. In the case of N-containing formulas, the suggested compounds have been observed in the atmosphere, tobacco smoke, or lab fire smoke at moderate levels (Lobert, 1991; Ge et al., 2011; etc.). Select studies supporting these assignments are referenced in the mass table, with alternative possibilities also listed. An exhaustive list of all the many papers supporting the assignments is beyond the scope of this work. Several remaining compounds are also classified as tentative assignments, as the identities designated are thought to be the most structurally likely. We anticipate that some or even many of the tentative assignments (and a few of the confirmed assignments) will be refined in future years as a result of more studies becoming available. We offer the tentative assignments here as a realistic starting point that improves model input compared to an approach in which these species are simply ignored.
Unidentified compounds
The identities of several compounds remain unknown, especially at increasing mass, where numerous structural and functional combinations are feasible. However, compared to earlier work at unit mass resolution (Warneke et al., 2011; Yokelson et al., 2013), the high-resolution capability of the PTR-TOF-MS has enhanced our ability to assign mass peaks while always identifying atomic composition. With unit-mass-resolution spectrometers, FTIR, and GC-MS grab samples, Yokelson et al. (2013) estimated that ∼ 31 to 72 % of the gas-phase NMOC mass remained unidentified for several fuel types. For similar, commonly burned biomass fuels (chaparral, grasses, crop residue, etc.), considering a PTR-TOF range up to m/z 165, we estimate that ∼ 7 % of the detected NMOC mass remains unidentified, while ∼ 12 % is tentatively assigned using the selection criteria described in Sect. 3.1. The compounds considered in this study cover a smaller mass range (up to m/z 165 rather than m/z 214) than in the earlier study, but in that earlier study the compounds in the range m/z 165-214 accounted for only ∼ 1.5 % of the NMOC mass (Yokelson et al., 2013). Thus, the molecular formula assignments from the PTR-TOF aided in positive and tentative identification and quantification, reducing the estimate of unidentified NMOCs from ∼ 31 % down to ∼ 7 %.
Calculations of unidentified and tentatively assigned emissions relative to overall NMOC emissions (including FTIR species) for several lumped fuel groups are summarized in Table 1. Estimates of total intermediate- and semivolatile gas-phase organic compounds (IVOC + SVOC, estimated as the sum of species at or above the mass of toluene) are also included, as these less volatile compounds are likely to generate SOA via oxidation and/or cooling. Similar to previous organic-soil fire data, the percentages of unidentified and tentatively identified NMOCs for peat burns are significantly larger than for other fuel types (sum ∼ 37 %), and they could be a major source of impacts and uncertainty during El Niño years, when peat combustion is a major global emission source (Page et al., 2002; Akagi et al., 2011).
Discussion
For all fuel types, there is noticeable variability concerning which compounds have the most significant emissions. Figure 3 includes both FTIR and PTR emissions grouped into the following categories: non-methane hydrocarbons, oxygenates containing one oxygen atom, oxygenates containing two oxygen atoms, and oxygenates containing three oxygen atoms. Within these categories, the contributions from aromatics, phenolic compounds, and furans are further indicated. As shown in Fig. 3, oxygenated compounds account for the majority of the emissions for all biomass or biomass-containing fuels (i.e., tires and plastic bags are excluded). Oxygenated compounds containing only a single oxygen atom accounted for ∼ 50 % of the total raw mass signal (> m/z 28, excluding m/z 37) on average and normally had greater emissions than oxygenated compounds containing two oxygen atoms or hydrocarbons. Sugar cane has the highest emissions of oxygenated compounds, as was noted earlier in the FTIR data (Stockwell et al., 2014), and is one of the few fuels where the emissions of compounds containing two oxygens are the largest. To facilitate discussion, we grouped many of the assigned (or tentatively assigned) mass peak features into categories including aromatic hydrocarbons, phenolic compounds, furans, N-containing compounds, and S-containing compounds. These categories do not account for the majority of the emitted NMOC mass but do account for most of the rarely measured species reported in this work. We then also discuss miscellaneous compounds in order of increasing m/z.
Aromatic hydrocarbons
Aromatic hydrocarbons contributed most significantly to the emissions for several major fuel types including ponderosa pine, peat, and black spruce. The identities of these ringed structures are more confidently assigned due to the small H-to-C ratio at high masses. The aromatics confidently identified in this study include benzene (m/z 79), toluene (m/z 93), phenylacetylene (m/z 103), styrene (m/z 105), xylenes/ethylbenzene (m/z 107), and 1,3,5-trimethylbenzene (m/z 121).

Aromatic structures are susceptible to multiple oxidation pathways and readily drive complex chemical reactions in the atmosphere that are highly dependent on hydroxyl radical (OH) reactivity (Phousongphouang and Arey, 2002; Ziemann and Atkinson, 2012). Ultimately these gas-phase aromatic species have high SOA yields, as their physical and chemical evolution leads to lower-volatility species that condense into the particle phase. SOA yields from these parent aromatic HCs have been shown to strongly vary depending on environmental parameters including relative humidity, temperature, aerosol mass concentration, and particularly the level of nitrogen oxides (NOx) and the availability of RO2 radicals, further adding to the complexity of modeling the behavior and fate of these compounds (Ng et al., 2007; Song et al., 2007; Henze et al., 2008; Chhabra et al., 2010, 2011; Im et al., 2014).
Domestic biofuel burning and open BB together comprise the largest global atmospheric source of benzene (Andreae and Merlet, 2001; Henze et al., 2008); thus, not surprisingly, benzene is a significant aromatic in our data set. The ERs relative to benzene for the aromatics listed above are shown in Table 2 and are positively correlated with benzene, as demonstrated by Fig. 4b. Henze et al. (2008) outline how ERs to CO of major aromatics (benzene, xylene, and toluene) can be implemented as part of a model to predict SOA formation. An identical or similar approach that incorporates the additional aromatics detected by PTR-TOF-MS in this work may be useful to predict the contribution of aromatics from BB to global SOA by various reaction pathways. Toluene, another major emission, often serves as a model compound to study the formation of SOA from other small ringed volatile organic compounds (Hildebrandt et al., 2009). Black spruce yielded the greatest toluene ER (to benzene) during FLAME-4 (3.24 ± 0.42) and has been linked to significant OA enhancement during chamber photo-oxidation aging experiments investigating open BB emissions during FLAME-3, though toluene was not significant enough to account for all of the observed SOA (Hennigan et al., 2011).
Naphthalene is the simplest species in a class of carcinogenic and neurotoxic compounds known as polycyclic aromatic hydrocarbons (PAHs) and was detected from all fuels. The rapid photo-oxidation of these smaller ringed gas-phase PAHs (including naphthalene and methylnaphthalenes) can have important impacts on the amount and properties of SOA formed and yields significantly more SOA over shorter time spans in comparison to lighter aromatics (Chan et al., 2009). Under low-NOx conditions (BB events generate NOx, though at lower ratios to NMOC and/or CO than those present in urban environments), the SOA yield for benzene, toluene, and m-xylene was ∼ 30 % (Ng et al., 2007), while naphthalene yielded enhancements as great as 73 % (Chan et al., 2009).
In summary, many of the species identified and detected during FLAME-4 are associated with aerosol formation under diverse ambient conditions (Fisseha et al., 2004; Na et al., 2006; Ng et al., 2007; Chan et al., 2009). We present here initial emissions for a variety of aromatics from major global fuels. A more focused study probing the extent and significance of SOA formation in BB plumes by these aromatic precursors was performed by chamber oxidation during the FLAME-4 campaign and will be presented in Tkacik et al. (2014).

[Table footnotes recovered from the original layout: (a) some species were only selected for a few key fires and are not considered the average of each fuel type; (b) significant contributions from both methylfurfural and catechol were reported in pyrolysis reference papers, so there is no indication which species is the major contributor at this mass.]
Phenolic compounds
Phenol is detected at m/z 95. Earlier studies burning a variety of biomass fuels found that OP-FTIR measurements of phenol accounted for the observed PTR-MS signal at this mass even at unit mass resolution, though small contributions from other species such as vinyl furan were possible but not detected (Christian et al., 2004). 2D-GC grab samples in FLAME-4 found that other species with the same formula (only vinyl furan) were present at levels less than 2 % of phenol (Hatch et al., 2014). Thus, we assumed that within experimental uncertainty, m/z 95 was a phenol measurement in this study, and we found that phenol was one of the most abundant oxygenated aromatic compounds detected. Several substituted phenols were speciated for every fire, including catechol (m/z 111), vinylphenol (m/z 121), salicylaldehyde (m/z 123), xylenol (m/z 123), and guaiacol (m/z 125) (Fig. 5a). Several additional species were quantified for selected fires, including cresol (m/z 109), creosol (m/z 139), 3-methoxycatechol (m/z 141), 4-vinylguaiacol (m/z 151), and syringol (m/z 155). The EFs for these additional phenolic compounds were calculated for select burns and are included in Fig. 5a with the regularly analyzed compounds. Significant emissions of these compounds are reported in Table 2 relative to phenol, and the selected compounds shown in Fig. 5b demonstrate the tight correlation between these derivatives and phenol. Phenol, methoxyphenols (guaiacols), dimethoxyphenols (syringols), and their derivatives are formed during the pyrolysis of lignin (Simoneit et al., 1993) and can readily react with OH radicals, leading to SOA formation (Coeur-Tourneur et al., 2010; Lauraguais et al., 2014). Hawthorne et al. (1989, 1992) found that phenols and guaiacols accounted for 21 and 45 % of aerosol mass from wood smoke, while Yee et al. (2013) noted large SOA yields for phenol (24-44 %), guaiacol (44-50 %), and syringol (25-37 %) in photo-oxidation chamber experiments under low-NOx conditions (< 10 ppb).
Softwoods are considered lignin-rich and are associated predominately with guaiacyl units (Shafizadeh, 1982); thus, not surprisingly, guaiacol emissions were significant for ponderosa pine. Peat, an accumulation of decomposing vegetation (moss, herbaceous, and woody materials), has varying degrees of lignin content depending on the extent of decomposition, sampling depth, water table levels, etc. (Williams et al., 2003). The peat burns all emitted significant amounts of phenolic compounds, with noticeable compound-specific variability between regions (Indonesia, Canada, and North Carolina). It is also noteworthy that sugar cane, which also produced highly oxygenated emissions based on FTIR and PTR-TOF-MS results, had the greatest total emissions of phenolic compounds.
The photochemical formation of nitrophenols and nitroguaiacols by atmospheric oxidation of phenols and substituted phenols via OH radicals in the presence of NOx is a potential reaction pathway for these compounds (Atkinson et al., 1992; Olariu et al., 2002; Harrison et al., 2005; Lauraguais et al., 2014). Nitration of phenol in either the gas or aerosol phase is anticipated to account for a large portion of nitrophenols in the environment. Higher nitrophenol levels are correlated with increased plant damage (Hinkel et al., 1989; Natangelo et al., 1999) and consequently are linked to forest decline in central Europe and North America (Rippen et al., 1987). Nitrophenols are also important components of brown carbon and can contribute to SOA formation in BB plumes (Kitanovski et al., 2012; Desyaterik et al., 2013; Mohr et al., 2013; Zhang et al., 2013). Nitrated phenols, including nitroguaiacols and methyl-nitrocatechols, have been suggested as suitable molecular tracers for secondary BB aerosol, considering that their reactivity with atmospheric oxidants is limited (Iinuma et al., 2010; Kitanovski et al., 2012; Lauraguais et al., 2014). The oxidation products of the phenolic compounds detected in fresh smoke here have not been directly examined and would require a more focused study beyond the scope of this paper.
As with the aromatic compounds, the ERs provided in Table 2 can be used to estimate initial BB emissions of phenolic species, both rarely measured and previously unmeasured, from a variety of fuels in order to improve atmospheric modeling of SOA and nitrophenol formation.
Furans

Furan and substituted furans are oxidized in the atmosphere primarily by OH (Bierbach et al., 1995), but also by NO3 (Berndt et al., 1997) or Cl atoms (Cabañas et al., 2005; Villanueva et al., 2007). Photo-oxidation of furan, 2-methylfuran, and 3-methylfuran produces butenedial, 4-oxo-2-pentenal, and 2-methylbutenedial (Bierbach et al., 1994, 1995). These products are highly reactive and can lead to free radical (Wagner et al., 2003), SOA, or O3 formation. In fact, aerosol formation from photo-oxidation chamber experiments has been observed for furans and their reactive intermediates listed above (Gomez Alvarez et al., 2009; Strollo and Ziemann, 2013). Even less is known concerning SOA yields from furans with oxygenated functional groups, which comprise the majority of the furan emissions in this study. Alvarado and Prinn (2009) added reaction rates for furans based on 2-methylfuran and butenedial values (Bierbach et al., 1994, 1995) to model O3 formation in an aging savanna smoke plume; although a slight increase in O3 was observed after 60 min, it was not large enough to account for the observed O3 concentrations in the plume. The furan and substituted-furan ERs compiled here may help explain a portion of the SOA and O3 produced from fires that cannot be accounted for based on previously implemented precursors (Grieshop et al., 2009).
Furfural was generally the dominant emission in this grouping, consistent with concurrent 2D-GC measurements (Hatch et al., 2014), while emissions from 2-furanone and furan also contributed significantly. Friedli et al. (2001) observed that ERs of alkyl furans correlated linearly with furan and concluded that these alkylated compounds likely break down to furan. Our expanded substituted-furan list covers a variety of functionality for diverse fuel types, ranging from oxygenated substituents to furans fused with benzene rings. Similar to the behavior observed for alkylated furans, the emissions of our substituted furans correlate linearly with furan, as shown in Fig. 6b. As noted for phenolic compounds, sugar cane produced the largest emissions of furans excluding Canadian peat, supporting sugar cane as an important emitter of oxygenated compounds. The emissions of furan, phenol, and their derivatives reflect variability in the cellulose and lignin composition of different fuel types: cellulose and hemicellulose compose ∼ 75 % of wood while lignin only accounts for ∼ 25 % on average (Sjöström, 1993). Accordingly, the furan/phenol ratios for the initially analyzed compounds indicate that furans are dominant in nearly every fuel type.
Nitrogen-containing compounds
Many N-containing peaks were not originally selected for post-acquisition analysis in every fire. However, the additional analysis of selected fires included a suite of N-containing organic compounds to investigate their potential contribution to the N budget and new particle formation (NPF). Even at our mass resolution of ∼ 5000, the mass peak from N compounds can sometimes be overlapped by broadened 13C "isotope" peaks of major carbon-containing emissions. This interference was not significant for the following species, which we were able to quantify in the standard or added analysis: C2H3N (acetonitrile, calibrated), C2H7N (dimethylamine; ethylamine), C2H5NO (acetamide), C3H9N (trimethylamine), C4H9NO (assorted amides), C4H11NO (assorted amines), and C7H5N (benzonitrile). As illustrated by the multiple possibilities for some formulas, several quantified N-containing species were observed for which explicit single identities or relative contributions could not be confirmed. The logical candidates we propose are based upon atmospheric observations and include the classes of amines and amides shown in Table S4 (Lobert et al., 1991; Schade and Crutzen, 1995; Ma and Hays, 2008; Barnes et al., 2010; Ge et al., 2011). Additional N-containing compounds such as acrylonitrile, propanenitrile, pyrrole, and pyridine were clearly observed in the mass spectra, but they were often overlapped with isotopic peaks of major carbon compounds; thus a time-intensive analysis would be necessary to provide quantitative data. For the species in this category, quantification was possible for select fires by 2D-GC-MS, and they are reported by Hatch et al. (2014) for the FLAME-4 campaign.
We present in Supplement Table S5 the abundance of each N-containing gas quantified by PTR-TOF-MS and FTIR relative to NH3 for selected fires. The additional N-containing organic gases detected by PTR-TOF-MS for these 29 fires summed to roughly 22 ± 23 % of NH3 on average and accounted for 0.1-8.7 % of the fuel N. These compounds contributed most significantly to fuel N for peat, and this varied by sampling location, which is not surprising since environmental conditions and field sampling depths varied considerably. Stockwell et al. (2014) reported large differences in the N-containing compounds quantified by FTIR between FLAME-4 and earlier laboratory studies of emissions from peat burns. In any case, the additional NMOCs (including N-containing compounds) speciated by PTR-TOF-MS substantially increase the amount of information currently available on peat emissions.
The relevance of the N-containing organics to climate and the N cycle is briefly summarized next. Aerosol particles acting as cloud condensation nuclei (CCN) critically impact climate through the production and modification of clouds and precipitation (Novakov and Penner, 1993). NPF, the formation of new stable nuclei, is suspected to be a major contributor to the amount of CCN in the atmosphere (Kerminen et al., 2005; Laaksonen et al., 2005; Sotiropoulou et al., 2006). Numerous studies have suggested that organic compounds containing N can play an important role in the formation and growth of new particles (Smith et al., 2008; Kirkby et al., 2011; Yu and Luo, 2014). The primary pathways to new particle formation include (1) the reaction of organic compounds with each other or with atmospheric oxidants to form higher molecular weight, lower volatility compounds that subsequently partition into the aerosol phase, or (2) rapid acid/base reactions forming organic salts. The observation of significant emissions of N-containing organic gases in FLAME-4 could improve understanding of the compounds, properties, and source strengths contributing to new particle formation and enhance model predictions on local to global scales. The identities and amounts of these additional N-containing emissions produced by peat and other BB fuels are also important for rigorous analysis of the atmospheric N budget.
Sulfur-, phosphorus-, and chlorine-containing compounds
S emissions are important for their contribution to acid deposition and for climate effects due to aerosol formation. Several S-containing gases have been detected in BB emissions, including SO2, carbonyl sulfide (OCS), dimethyl sulfide (DMS), and dimethyl disulfide (DMDS); DMS is one of the most significant organosulfur compounds emitted by BB and is quantified by PTR-TOF-MS in our primary data set (Friedli et al., 2001; Meinardi et al., 2003; Akagi et al., 2011; Simpson et al., 2011). The signal at m/z 49 had a significant mass defect and is attributed to methanethiol (methyl mercaptan, CH3SH), which to our knowledge has not been previously reported in real-world BB smoke, though it has been observed in cigarette smoke (Dong et al., 2010) and in emissions from pulp and paper plants (Toda et al., 2010). Like DMS, the photochemical oxidation of CH3SH leads to SO2 formation (Shon and Kim, 2006), which can be further oxidized to sulfate or sulfuric acid and contribute to the aerosol phase. The emissions of CH3SH depend on the fuel S content and are negatively correlated with MCE. The greatest EF(CH3SH) in our additional analyses arose from organic alfalfa, which had the highest S content of the selected fuels and also produced significant emissions of SO2 detected by FTIR.
Other organic gases containing chlorine and phosphorus were expected to be readily detectable because of their large, unique mass defects and possible enhancement by pesticides and fertilizers in crop-residue fuels. However, they were not detected in significant amounts by our full mass scans. Fuel P and Cl may have been emitted primarily as aerosol, ash, low proton affinity gases, or as a suite of gases that were evidently below our detection limit.
Miscellaneous (order of increasing m/z)
m/z 41: The assignment of propyne is reinforced by previous observations in BB fires, and it is of some interest as a BB marker even though it has a relatively short lifetime of ∼ 2 days (Simpson et al., 2011; Akagi et al., 2013; Yokelson et al., 2013). Considering that propyne was not detected in every fuel type, a level of uncertainty is added to any use of this compound as a BB tracer, and in general the use of multiple tracers is preferred when possible.
m/z 43: The high-resolution capabilities of the PTR-TOF-MS allowed propylene to be distinguished from ketene fragments at m/z 43. The propylene concentrations are superseded in our present data set by FTIR measurements; however, the two techniques agree well.

m/z 45: PTR technology has already been reported as a reliable way to measure acetaldehyde in BB smoke (Holzinger et al., 1999; Christian et al., 2004). Photolysis of acetaldehyde can play an important role in radical formation, and acetaldehyde is the main precursor of peroxyacetyl nitrate (PAN) (Trentmann et al., 2003). A wide range in EF(acetaldehyde) (0.13-4.3 g kg−1) is observed during FLAME-4 and reflects variability in fuel type. The detailed emissions from a range of fuels in this data set can aid in modeling and interpretation of PAN formation in aging BB plumes of various regions (Alvarado et al., 2010, 2013). Crop-residue fuels regularly had the greatest emissions of acetaldehyde, which is important considering many crop-residue fires evade detection and are considered both regionally and globally underestimated. Sugar cane burning had the largest acetaldehyde EF (4.3 ± 1.4 g kg−1) and had significant emissions of oxygenated and N-containing compounds; consequently, it is likely to form a significant amount of PAN.
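EFs in laboratory fire studies such as this one are typically derived with the carbon mass-balance method; the sketch below illustrates that arithmetic with invented excess mixing ratios and an assumed fuel carbon fraction of 0.5, so the printed values are illustrative only.

```python
# Minimal sketch of the carbon mass-balance EF calculation: assume all emitted
# carbon appears in the measured species; then EF_X (g per kg dry fuel) =
# F_C * 1000 * (MM_X / 12.011) * (moles of X / total moles of emitted C).
# Species, mixing ratios, and the fuel carbon fraction are illustrative.

species = {
    # name: (excess mixing ratio, ppb; molar mass, g/mol; carbon atoms)
    "CO2":          (900_000.0, 44.01, 1),
    "CO":           (55_000.0, 28.01, 1),
    "CH4":          (4_000.0, 16.04, 1),
    "acetaldehyde": (250.0, 44.05, 2),
}
f_carbon = 0.50  # assumed carbon mass fraction of the dry fuel

# Total moles of emitted carbon (in ppb-equivalents of air).
total_c = sum(dx * n_c for dx, _, n_c in species.values())

for name, (dx, mm, n_c) in species.items():
    ef = f_carbon * 1000.0 * (mm / 12.011) * (dx / total_c)
    print(f"EF({name}) = {ef:8.3f} g kg^-1")
```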
m/z 57: The signal at m/z 57 using unit-mass-resolution GC-PTR-MS was observed to be primarily acrolein with minor contributions from alkenes (Karl et al., 2007). In the PTR-TOF-MS, the two peaks at m/z 57 (C3H5O+ and C4H9+) are clearly distinguished; acrolein is often the dominant peak during the fire, with the highest emissions from ponderosa pine and sugar cane.
m/z 69: The high resolution of the PTR-TOF-MS allowed three peaks to be distinguished at m/z 69, attributed to carbon suboxide (C3O2), furan (C4H4O), and mostly isoprene (C5H8) (Fig. 7). Distinguishing between isoprene and furan is an important capability of the PTR-TOF-MS. The atmospheric abundance and relevance of carbon suboxide is fairly uncertain, and with an atmospheric lifetime of ∼ 10 days (Kessel et al., 2013), the reactivity and transport of C3O2 emitted by fires could have critical regional impacts. The emissions of C3O2 by BB will be interpreted in detail at a later date (S. Kessel, personal communication, 2014).

m/z 75: Hydroxyacetone emissions have been reported from both field and laboratory fires (Christian et al., 2003; Akagi et al., 2011; Yokelson et al., 2013; St. Clair et al., 2014). Christian et al. (2003) first reported BB emissions of hydroxyacetone and noted very large quantities from burning rice straw. The EF(C3H6O2) for rice straw was notably high (1.1 g kg−1) in the FLAME-4 data set, and only sugar cane had greater emissions.
m/z 85, 87: The largest peak at m/z 85 was assigned as pentenone, as it was monitored/confirmed by PIT-MS/GC-MS in an earlier BB study (Yokelson et al., 2013). Pentenone was a substantial emission from several fuels, with ponderosa pine having the greatest EF. By similar evidence, the minor peak at m/z 87 was assigned to pentanone, but it was only detected in a few of the fires in the second set of analyses, with the most significant emissions arising from Indonesian peat.

m/z 107: Benzaldehyde has the same unit mass as the xylenes but is clearly separated by the TOF-MS. Greenberg et al. (2006) observed benzaldehyde during low-temperature pyrolysis experiments, with the greatest emissions from ponderosa needles (ponderosa pine produced the greatest EF in our data set, with a range of 0.1-0.28 g kg−1). Benzaldehyde emissions were additionally quantified by GC-MS during a laboratory BB campaign and produced EFs comparable to those of xylenes (Yokelson et al., 2013). During FLAME-4 the EF(benzaldehyde) was comparable to EF(xylenes calibrated as p-xylene) as seen earlier, except for peat burns, where xylenes were significantly higher.
m/z 137: At unit mass resolution, the peak at m/z 137 is commonly recognized as monoterpenes, which can be further speciated by GC-MS. However, as shown in Fig. 8, there can be up to three additional peaks at this mass that presently remain unidentified oxygenated compounds. As anticipated, the hydrocarbon monoterpene peak is significant for coniferous fuels such as ponderosa pine but much smaller for grasses. In this work we calibrated for α-pinene, which has been reported as a major monoterpene emission from fresh smoke (Simpson et al., 2011; Akagi et al., 2013).
Cookstoves
Trace gas emissions were measured for four cookstoves: a traditional three-stone cooking fire, the most widely used stove design worldwide; two "rocket" type designs (Envirofit G3300 and Ezy stove); and a "gasifier" stove (Philips HD4012). Several studies focus on the fuel efficiency of cookstove technology (Jetter et al., 2012), while the detailed emissions of many rarely measured and previously unmeasured gases are reported here and in Stockwell et al. (2014) for FLAME-4 burns. For cooking fires, ∼ 3-6 % of the NMOC mass remained unidentified, with the Envirofit rocket stove design generating the smallest percentage in the study. To improve the representativeness of our laboratory open cooking emissions, the EFs of smoldering compounds reported for three-stone cooking fires were adjusted by multiplying the mass ratio of each species "X" to CH4 by the literature-average field EF(CH4) for open cooking in Akagi et al. (2011). Flaming compounds were adjusted by a similar procedure based on their ratios to CO2. The preferred values are reported in Table S3. With these adjustments, the emissions of aromatic hydrocarbons (Fig. 9a), phenolic compounds (Fig. 9b), and furans (Fig. 9c) distinctly increased with the primitiveness of the design; thus, three-stone cooking fires produced the greatest emissions. The advances in emissions characterization for these sources will be used to upgrade models of exposure to household air pollution, and the ERs/EFs should be factored into chemical-transport models to assess atmospheric impacts.
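The field-adjustment procedure just described reduces to simple ratio scaling; a minimal sketch follows, with placeholder EF values rather than the FLAME-4 or Akagi et al. (2011) numbers.

```python
# Sketch of the ratio-based field adjustment: smoldering compounds are scaled
# via CH4, flaming compounds via CO2. All EF values are placeholders.
lab_ef = {"CH4": 6.0, "CO2": 1450.0, "phenol": 0.40, "benzene": 0.55}  # g/kg
field_ef_ch4 = 5.1     # assumed literature-average field EF(CH4), g/kg
field_ef_co2 = 1548.0  # assumed field EF(CO2), g/kg

smoldering = ["phenol"]  # adjusted by mass ratio to CH4
flaming = ["benzene"]    # adjusted by mass ratio to CO2

adjusted = {x: lab_ef[x] / lab_ef["CH4"] * field_ef_ch4 for x in smoldering}
adjusted.update({x: lab_ef[x] / lab_ef["CO2"] * field_ef_co2 for x in flaming})

for x, ef in adjusted.items():
    print(f"adjusted EF({x}) = {ef:.3f} g kg^-1")
```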
BB is an important source of reactive N in the atmosphere, producing significant emissions of NOx and NH3, while nonreactive HCN and CH3CN are commonly used as BB marker compounds (Yokelson et al., 1996, 2007; Goode et al., 1999; de Gouw et al., 2003). The FTIR used in FLAME-4 provided the first detection of HCN emissions from cooking fires, and the HCN/CO ER was about a factor of 5 lower than for most other BB fuels burned (Stockwell et al., 2014). Similarly, acetonitrile emissions were measured for the first time for cooking fires by PTR-TOF-MS in this study, and the CH3CN/CO ERs from cooking fires are much lower (on average a factor of ∼ 15) than those from other fuels. This should be considered when using CH3CN/CO ERs to drive source apportionment in areas with substantial emissions from biofuel cooking sources.
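As a concrete illustration of how an ER such as CH3CN/CO can be derived, the sketch below takes the ER as the regression slope of excess CH3CN against excess CO over a fire, a common convention; the synthetic time series and the assumed ER value are invented.

```python
# Sketch: an emission ratio (ER) taken as the slope of excess CH3CN vs excess
# CO over a fire. The synthetic time series and the assumed ER are invented.
import numpy as np

rng = np.random.default_rng(0)
d_co = np.linspace(0.0, 4000.0, 200)                  # excess CO, ppb
d_ch3cn = 2.0e-3 * d_co + rng.normal(0.0, 0.3, 200)   # excess CH3CN, ppb

slope, _ = np.polyfit(d_co, d_ch3cn, 1)
print(f"ER(CH3CN/CO) = {slope:.2e} ppb/ppb")
# A cooking fire with an ER ~15x lower would be badly misattributed if a
# generic BB ER were assumed in source apportionment.
```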
Conclusions
We investigated the primary BB NMOC emissions from laboratory-simulated burns of globally significant fuels using a PTR-TOF-MS instrument. In this first PTR-TOF-MS deployment dedicated to fires, we encountered some specific challenges. The fast change in concentration necessitated a fast acquisition rate, which decreased the signal to noise for the emissions above background. The large dynamic concentration range necessitated dilution to minimize reagent ion depletion at peak emissions, and the dilution further reduced the signal to noise ratio. Positive identification of some species by co-deployed grab sampling techniques will be explored further in a separate paper, but is challenged by the difficulty of transmitting some important fire emissions through GC columns (Hatch et al., 2014). We attempted to enhance compound identification by switching reagent ions (O2+ and NO+); however, this approach with two broadly sensitive ions in a complex mixture resulted in complex spectra for which comparative analysis is beyond the scope of the present effort. Future experiments might consider instead using a less broadly sensitive reagent ion such as NH3+ as the alternate reagent ion. We were limited to our pre-chosen calibration mixture based primarily on gases previously observed in smoke. For future experiments we suggest adding more standards to generate more accurate calibration factors, specifically including major species such as furan and phenol and more compounds with S and N heteroatoms. In addition, measuring the fragmentation, if any, of more of the species identified in this work would be of great value. Despite these practical limitations, the experiment produced a great deal of useful new information.
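For masses lacking explicit calibrations, the approach summarized in Fig. 1 amounts to interpolating separate linear fits of CF versus m/z for oxygenated and hydrocarbon calibrants; a sketch with invented calibrant values:

```python
# Sketch: approximate calibration factors (CF, ncps/ppbv) for uncalibrated
# masses via separate linear fits for oxygenated and hydrocarbon calibrants.
# The (m/z, CF) calibrant pairs below are invented placeholders.
import numpy as np

oxygenated = np.array([[33, 18.0], [45, 16.5], [59, 14.8], [73, 13.0]])
hydrocarbon = np.array([[42, 22.0], [57, 20.0], [79, 17.5], [107, 13.5]])

fit_oxy = np.polyfit(oxygenated[:, 0], oxygenated[:, 1], 1)
fit_hc = np.polyfit(hydrocarbon[:, 0], hydrocarbon[:, 1], 1)

def approx_cf(mz, oxygenated_species):
    """Interpolated CF for a mass with no explicit calibration."""
    return np.polyval(fit_oxy if oxygenated_species else fit_hc, mz)

print(f"CF(m/z 69, hydrocarbon) ~ {approx_cf(69, False):.1f} ncps/ppbv")
print(f"CF(m/z 75, oxygenated)  ~ {approx_cf(75, True):.1f} ncps/ppbv")
```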
The PTR-TOF-MS obtains full mass scans of NMOCs with high enough resolution to distinguish multiple peaks at the same nominal mass and high enough accuracy to assign chemical formulas from the "exact" masses. This aided in compound identification, and more than 100 species were categorized as a confirmed identity, a tentative (most likely) assignment, or unidentified but with a chemical formula. Chemical identification was aided by observations of compounds reported in smoke emissions, pyrolysis experiments, and those species at relevant concentrations in the atmosphere. This allowed the identification of more masses up to m/z 165 than in earlier work at unit mass resolution, although an estimated 12-37 % of the total mass still remains unidentified or only tentatively identified. The analysis provides a new set of emission factors for ∼ 68 compounds in all fires plus ∼ 50 more in select fires, in addition to species previously quantified by FTIR (Stockwell et al., 2014) and other techniques during FLAME-4 (Hatch et al., 2014). While significant variability was observed between fuels, oxygenated compounds collectively accounted for the majority of emissions in all fuels, with sugar cane producing the highest EF of oxygenated species on average, possibly due to its high sugar content.
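Formula assignment from an "exact" mass can be illustrated by enumerating small CxHyOzNn candidates and keeping those whose protonated monoisotopic mass matches within a tolerance; the monoisotopic masses below are real, while the search ranges and 10 ppm tolerance are arbitrary choices for this toy example.

```python
# Toy sketch of formula assignment from an "exact" mass: enumerate CxHyOzNn
# candidates and keep those whose protonated monoisotopic mass matches within
# a ppm tolerance. Atomic masses are real; ranges/tolerance are arbitrary.
from itertools import product

MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915, "N": 14.003074}
PROTON = 1.007276

def candidates(mz_obs, tol_ppm=10.0, limits=(12, 24, 5, 3)):
    hits = []
    for c, h, o, n in product(*(range(m + 1) for m in limits)):
        if c == 0:
            continue
        mz = c * MASS["C"] + h * MASS["H"] + o * MASS["O"] + n * MASS["N"] + PROTON
        if abs(mz - mz_obs) / mz_obs * 1e6 <= tol_ppm:
            hits.append(f"C{c}H{h}O{o}N{n}")
    return hits

# The three resolved peaks at nominal m/z 69 (cf. Fig. 7):
for mz in (68.9971, 69.0335, 69.0699):
    print(mz, candidates(mz))  # expect C3O2, C4H4O (furan), C5H8 (isoprene)
```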
We also report emission ratios to benzene, phenol, or furan for the aromatic hydrocarbons, phenolic compounds, and substituted furans, respectively. Reporting emissions of previously unmeasured or rarely measured compounds relative to these more regularly measured compounds facilitates adding several new compounds to fire emissions models. To our knowledge this is the first on-line, real-time characterization of several compounds within these "families" for BB. Observed emissions varied considerably between fuel types. Several example compounds within each class (toluene, guaiacol, methylfuran, etc.) have been shown, by chamber experiments, to be highly reactive with atmospheric oxidants and to contribute significantly to SOA formation. The ERs and EFs of fresh BB smoke characterized by PTR-TOF-MS are presented in Tables S1-S3; these values (especially the recommended values in Table S3) should aid model predictions of O3 and SOA formation in BB smoke and the subsequent effects on air quality and climate on local to global scales.
A large number of organic N-containing species were detected, with several identities speculated to be amines or amides. These N-containing organic gases may play an important role in new particle formation by physical, chemical, and photochemical processes, though a more focused study is necessary to measure NPF yields from these compounds and processes. The additional N-containing gases detected here account for 1-87 % of NH3 depending on fuel type, with the most significant contribution of additional N species to fuel N arising from peat burns. The ERs of acetonitrile to CO for cooking fires were significantly lower than for other fuels and should be factored into source apportionment models in regions where biofuel use is prevalent if CH3CN is used as a tracer. The S-containing compounds detected by PTR-TOF-MS included dimethyl sulfide and methanethiol, where methanethiol was detected for the first time in BB smoke to our knowledge. These compounds may play a role in acid deposition and aerosol formation, though to what extent has yet to be extensively studied. Phosphorus- and chlorine-containing organic gases were not readily observed in our data set, which may indicate that these species were below our detection limit.
Using full mass scans from a high-resolution PTR-TOF-MS to characterize fresh smoke has aided in identifying several compounds and provided the chemical formulas of other organic trace gases. The additional NMOCs identified in this work are important for understanding fresh BB emissions and will improve our understanding of BB atmospheric impacts. The subsequent oxidation products of these gases are the focus of a companion paper probing BB aging. Taken together, this work should improve BB representation in atmospheric models, particularly the formation of ozone and secondary organic aerosol at multiple scales.
The Supplement related to this article is available online at doi:10.5194/acp-15-845-2015-supplement.
Figure 1. (a) The normalized response of calibration factors ("CF", ncps ppbv−1) versus mass (calibrated species labeled by name) overlaid with the linearly fitted mass-dependent transmission curve (black markers and dotted line). Separate linear approximations of (b) oxygenated (blue) and (c) hydrocarbon (green) species used to calculate approximate calibration factors for all observed masses where explicit calibrations were not available.
Figure 2. A typical full mass scan of biomass burning smoke from the PTR-TOF-MS on a logarithmic (a) and a smaller-range linear (b) scale. The internal standard (1,3-diiodobenzene) accounts for the major peaks at ∼ m/z 331 and fragments at peaks near m/z 204 and 205.
Figure 3. The emission factors (g kg−1) of total observed hydrocarbons and total observed species oxygenated to different degrees, averaged for each fire type based on a synthesis of PTR-TOF-MS and OP-FTIR data. The patterned sections indicate the contribution to each of the above categories by selected functionalities discussed in the text (aromatic hydrocarbons, phenolics, furans). The parenthetical expressions indicate how many oxygen atoms are present.
Figure 4. (a) The EFs of the aromatics analyzed in all fires, averaged and shown by fuel type. Individual contributions from benzene and other aromatics are indicated by color. The EFs for p-cymene are only calculated for select fires and should not be considered a true average. (b) The correlation plots of selected aromatics with benzene during a black spruce fire (Fire 74). Similar behavior was observed for all other fuel types.
Figure 5. (a) The distribution in average fuel EF for several phenolic compounds, where compound-specific contributions are indicated by color. The EFs for compounds additionally analyzed a single time for select fires are included but are not a true average. (b) The linear correlation of select phenolic compounds with phenol during an organic hay burn (Fire 119).
Figure 6. (a) The distribution in average fuel EF for furan and substituted furans, where individual contributions are indicated by color. The EFs for substituted furans additionally analyzed a single time are not true averages. (b) The linear correlation of furan with select substituted furans for an African grass fire (Fire 49).
Figure 7. Expanded view of the PTR-TOF-MS spectrum at m/z 69 demonstrating the advantage over unit-mass-resolution instruments of distinguishing multiple peaks, in this instance separating carbon suboxide (C3O2), furan (C4H4O), and mostly isoprene (C5H8) in ponderosa pine smoke (Fire 70).
Figure 8. Expanded view of the PTR-TOF-MS spectrum of NC peat (Fire 61) at m/z 137 showing multiple peaks.
Table 1. Quantities for various categories of compounds (g kg−1) and calculation of mass ratios and/or percentages for several fuel types.
Table 2a. Emission ratios to benzene, phenol, and furan for aromatic hydrocarbons, phenolic compounds, and substituted furans in lumped fuel categories.
Note: "nm" indicates not measured; blank indicates species remained below the detection limits; values in parentheses indicate one standard deviation.
Hepatocyte-targeting gene transfer mediated by galactosylated poly(ethylene glycol)-graft-polyethylenimine derivative
Biscarbamate cross-linked polyethylenimine derivative (PEI-Et) has been reported as a novel nonviral vector for efficient and safe gene transfer in our previous work. However, it had no cell-specificity. To achieve specific delivery of genes to hepatocytes, galactosylated poly(ethylene glycol)-graft-polyethylenimine derivative (GPE) was prepared through modification of PEI-Et with poly(ethylene glycol) and lactobionic acid, bearing a galactose group as a hepatocyte-targeting moiety. The composition of GPE was characterized by proton nuclear magnetic resonance. The weight-average molecular weight of GPE measured with a gel permeation chromatography instrument was 9489 Da, with a polydispersity of 1.44. GPE could effectively condense plasmid DNA (pDNA) into nanoparticles. Gel retardation assay showed that GPE/pDNA complexes were completely formed at weight ratios (w/w) over 3. The particle size of GPE/pDNA complexes was 79–100 nm and the zeta potential was 6–15 mV, values appropriate for cellular uptake. The morphology of GPE/pDNA complexes under atomic force microscopy appeared spherical and uniform in size, with diameters of 53–65 nm. GPE displayed much higher transfection efficiency than commercially available PEI 25 kDa in BRL-3A cell lines. Importantly, GPE showed good hepatocyte specificity. Also, the polymer exhibited significantly lower cytotoxicity compared to PEI 25 kDa at the same concentration or weight ratio in BRL-3A cell lines. To sum up, our results indicated that GPE might carry great potential in safe and efficient hepatocyte-targeting gene delivery.
Introduction
The curative effect of gene therapy greatly depends on safe and efficient delivery of the therapeutic gene to the target site. 1,2 In recent years, nonviral vectors have been investigated intensively for gene delivery, due to their ease of production and chemical modification, safety, lower immune response, and capacity to deliver larger DNA molecules. [3][4][5] Consequently, alternative gene carriers have been proposed based on nonviral vectors, such as cationic lipids 6,7 and cationic polymers. [8][9][10][11][12][13] However, there are still many obstacles that can hamper the delivery capability of nonviral carriers in vivo. 14 For example, without modification with an appropriate targeting moiety, a carrier runs an increased risk of entering undesired cells and possibly damaging healthy tissues. Therefore, targeted transfer of nucleic acid drugs to specific tissues is a critically important concern in the field of gene delivery.
Receptor-mediated endocytosis has been shown to be a promising way to achieve specific delivery of genes to certain cell types or tissues. Surface modification of nanosized vectors like nanoparticles is usually used for specific targeting purposes. Various ligands, including antibody, 15 folate, 16,17 asialoglycoprotein, 18 galactose, 19,20 mannose, 21 epidermal growth factor, 22 and transferrin 23 have been conjugated with nonviral carriers for cell specificity. It was reported that the asialoglycoprotein receptor (ASGPR) is abundantly expressed in normal hepatocytes and hepatoma cell lines, such as BRL-3A, HepG2, and parental human hepatocellular carcinoma BEL-7402 cells. There are on average 500,000 ASGPRs on every hepatocyte. ASGPR can selectively bind to galactose or N-acetylgalactosamine residues of desialylated glycoproteins. 24 ASGPR has attracted much attention in gene targeting and has also served as a model system for studying receptor-mediated endocytosis due to its high affinity and rapid internalization rate. Therefore, the delivery of genes to hepatocytes through ASGPR-mediated endocytosis using galactosylated polymers has gained significant interest. For instance, as Gref et al 25 reported, galactose-modified oligosaccharides displayed a high affinity for ASGPR in liver tumor cells. Gao et al 26 reported a gene carrier based on galactosylated chitosans that showed obvious targeting in hepatoma cells HepG2 and SMMC-7721 and normal hepatic cells L-02. Kim et al 27 conjugated galactose to poly(ethylene glycol) (PEG)-polyethylenimine (PEI) to obtain a hepatocyte-targeting gene carrier; the polymer they synthesized exhibited improved transfection efficiency in hepatoma cells.
In our previous work, we synthesized a biscarbamate cross-linked PEI derivative (PEI-Et) as a nonviral gene carrier. Our results showed that PEI-Et displayed significantly enhanced transfection efficiency and much lower cytotoxicity than commercially available PEI 25 kDa in three cell lines (COS-7, BRL-3A, and HeLa). 28 However, the polymer had no cell-specificity. Therefore, in the present study, galactosylated PEG-graft-PEI derivative (GPE) was prepared to achieve hepatocyte specificity. Two chemical modifiers, PEG and galactose, were included in GPE. Galactose acted as a hepatocyte-targeting moiety. PEG modification promoted the formation of complexes with diminished aggregation and reduced opsonization with serum proteins in the bloodstream. 29 Furthermore, PEGylation provided a polyplex with improved solubility, lower cytotoxicity, and longer circulation time in vivo. 30 In this paper, the synthesized GPE was characterized with proton nuclear magnetic resonance ( 1 H-NMR) and GPC. GPE/plasmid DNA (pDNA) complexes were prepared and investigated by particle size, zeta potential, gel retardation ability, and morphology under atomic force microscopy (AFM). Moreover, the cytotoxicities of GPE and GPE/pDNA complexes were examined in terms of cell viability, and the transfection efficiencies as well as hepatocyte specificity of GPE/pDNA complexes were examined with luciferase activity assay, fluorescence microscopy, and fluorescence-activated cell-sorting analysis (FACS).
Cell culture
Normal rat liver cells (BRL-3A) and human cervix epithelial carcinoma cells (HeLa) were cultured in DMEM containing 10% fetal bovine serum at 37°C in a humidified atmosphere supplemented with 5% CO2.
Synthesis of GPE
The polymer GPE was synthesized in two steps. In the first step, galactosylated PEG (Gal-PEG) was prepared by an amide-formation reaction between the activated carboxyl groups of the galactose-bearing LA and the amine groups of NH2-PEG-COOH in accordance with a previous report, 31 with some changes. Briefly, LA (1.5 mmol) dissolved in 30 mL of 2-(N-morpholino)ethanesulfonic acid (MES) buffer solution (0.1 M, pH 6.5) was activated with a mixture of N-hydroxysuccinimide (NHS) (6 mmol) and 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide hydrochloride (EDC) (6 mmol). After activating the carboxyl groups for 30 minutes, 0.075 mmol of PEG was added. The reaction was performed in an ice bath for 12 hours, followed by an additional 12 hours at room temperature. Then the sample was dialyzed against distilled water in a dialysis tube (MW cutoff 1000 Da) for 3 days, followed by lyophilization. The resulting polymer Gal-PEG was stored at −20°C for further use.
In the second step, GPE was synthesized by an amideformation reaction between activated carboxyl groups of Gal-PEG and amine groups of PEI-Et. PEI-Et was synthesized according to our previous study. 28 Gal-PEG (0.02 mmol) dissolved in 10 ml of MES buffer solution (0.1 M, pH 6.5) was activated with a mixture of NHS (0.2 mmol) and EDC (0.2 mmol). After activating the carboxyl groups for 30 minutes, 0.02 mmol of PEI-Et was added. The reaction was performed in an ice bath for 12 hours, followed by an additional 12 hours at room temperature. Then the sample was dialyzed against distilled water in a dialysis tube (MW cutoff 3500 Da) for 3 days and lyophilized to obtain the polymer GPE. The reaction scheme is shown in Figure 1.
Synthesis of PEG-Et
PEI-Et (0.04 mmol) was dissolved in 0.1 M sodium bicarbonate, followed by the addition of 0.04 mmol of mPEG-Sc and stirred for 4 hours at room temperature. The resultant PEG-Et was dialyzed against distilled water in a dialysis tube (MW cutoff 3500 Da) for 2 days, followed by lyophilization. The resulting polymer PEG-Et was stored at −20°C for further use.
Characterization of GPE
1 H-NMR spectra of GPE were recorded on a Varian Unity 300 MHz spectrometer (Mercury plus 400; Varian, Palo Alto, CA, USA), using D2O as a solvent. GPC relative to PEG standards (molecular weight range Mp 106, 430, 633, 1400, 4290, 7130, 12,600, 20,600 Da) was used to measure the MW of GPE on a Waters (Milford, MA, USA) high-pressure liquid chromatography (HPLC) system. The mobile phase of the HPLC was formic acid.
Preparation of polymer/pDNA complexes
Both pDNA and polymer were separately diluted to the required concentration in phosphate-buffered saline (PBS; pH 7.4). After that, the polymer/pDNA complexes were prepared by adding polymer solution to the pDNA solution at the desired weight ratio with gentle vortexing. The polymer/ pDNA complexes were incubated at room temperature for 30 minutes prior to use.
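The weight-ratio arithmetic behind complex preparation is straightforward; the helper below, with an assumed polymer stock concentration, computes the polymer mass and pipetting volume for a fixed pDNA dose (values are illustrative, not from this study).

```python
# Sketch of the w/w arithmetic for complex preparation: polymer mass is
# w/w * pDNA mass, and the pipetting volume follows from an assumed stock
# concentration (all values illustrative).

def polymer_volume_ul(pdna_ng, ww_ratio, stock_ng_per_ul):
    """Volume of polymer stock needed for the desired weight ratio."""
    return ww_ratio * pdna_ng / stock_ng_per_ul

pdna_ng = 500.0   # pDNA per well, as in the transfection experiments
stock = 1000.0    # assumed polymer stock concentration, ng/uL
for ww in (1, 3, 5, 30, 70):
    print(f"w/w {ww:>2}: {ww * pdna_ng / 1000:5.1f} ug polymer "
          f"-> {polymer_volume_ul(pdna_ng, ww, stock):5.1f} uL stock")
```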
Gel retardation assay
The gel retardation ability of GPE was evaluated using agarose gel electrophoresis with pEGFP-N1. GPE and pDNA solutions were mixed at various weight ratios from 1 to 70 and incubated for 30 minutes at room temperature. The complexes and naked pDNA were electrophoresed on 1% (w/v) agarose gels pretreated with ethidium bromide (EB; 0.5 µg/mL of the gel) in 1 × Tris-acetate buffer at 80 V for 40 minutes.
Particle-size and zeta-potential measurements

A particle-size analyzer (90Plus; Brookhaven Instruments, Holtsville, NY, USA) was used to examine the particle size and zeta potential of GPE/pDNA complexes. GPE/pDNA complexes at various w/w ratios from 1 to 70 were prepared and incubated for 30 minutes at room temperature before measurement. Each sample was measured in triplicate.
Atomic force microscopy
The morphology of GPE/pDNA complexes at w/w 70 was examined under AFM (E-Sweep; Hitachi High-Tech Science, Tokyo, Japan). The complexes were deposited on a mica disk, dried for 3 hours at room temperature, and then observed under AFM.
Cytotoxicity assay
Cytotoxicity of the polymer was evaluated with the MTT assay. PEI 25 kDa was used as a control. BRL-3A cells were grown in 96-well plates at an initial density of 5000 cells/well in 100 µL of DMEM and incubated for 24 hours. After that, the media were replaced with fresh serum-free DMEM containing polymers at various concentrations (5, 10, 20, 50, and 100 µg/mL) or polymer/pDNA complexes at various w/w ratios (2, 5, 10, 20, 30, and 50). After further incubation for 4 hours, the media were replaced with fresh serum-free DMEM, and 25 µL MTT solution (5 mg/mL in PBS) was added per well. After an additional incubation for 6 hours, 150 µL of DMSO was added. Then the plate was agitated for 15 minutes. Finally, the absorbance was recorded with an enzyme-linked immunosorbent assay reader (MK3; Thermo Fisher Scientific) at 570 nm (with 630 nm as a reference wavelength). The data from five separate experiments were expressed as a percentage of viable cells relative to the untreated control.
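Cell viability from the MTT readings reduces to background-corrected absorbance normalized to the untreated control; a minimal sketch with invented triplicate readings:

```python
# Sketch of the MTT viability calculation: background-corrected absorbance
# (570 nm minus the 630 nm reference) as a percentage of the untreated
# control. The plate readings are invented.
import numpy as np

a570_ctrl = np.array([0.92, 0.95, 0.90]); a630_ctrl = np.array([0.07, 0.08, 0.07])
a570_trt = np.array([0.85, 0.88, 0.83]);  a630_trt = np.array([0.07, 0.07, 0.08])

control = (a570_ctrl - a630_ctrl).mean()
viability = (a570_trt - a630_trt) / control * 100.0
print(f"viability = {viability.mean():.0f}% +/- {viability.std(ddof=1):.0f}%")
```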
in vitro transfection experiments
Transfections mediated by GPE were performed in BRL-3A and HeLa cells. Cells were grown in 48-well plates at an initial density of 5 × 10^4 cells/well in 500 µL of DMEM and incubated for 24 hours. After that, the wells were washed with PBS, and polymer/pGL3-control (500 ng) complexes at the desired w/w ratios were added to the cells. After an additional incubation for 4 hours, the media were replaced with fresh complete DMEM and the cells were incubated for a further 44 hours. Luciferase assays were performed according to the manufacturer's suggested protocol (Promega). The luciferase activity was expressed in terms of relative light units/mg protein. Each sample was assayed in triplicate. The optimal w/w ratio of GPE/pEGFP-N1 complexes from the luciferase activity assay was selected for the GFP-expression experiment. The transfection efficiency was estimated by scoring the percentage of cells expressing GFP using a FACSCalibur system (BD, Franklin Lakes, NJ, USA). Each sample was assayed in triplicate. The data are presented as means ± standard deviation. For the competition assay, BRL-3A cells were preincubated with galactose (1, 10, and 100 mM) for 15 minutes; then the cells were incubated with GPE/pDNA and PEG-Et/pDNA complexes for 3 hours. The luciferase activity was determined as described above after 45 hours' further incubation. 28
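Normalizing luciferase output to total protein, as reported here, is a two-step calculation; the sketch below uses invented luminometer and protein-assay values to show the arithmetic.

```python
# Sketch of luciferase normalization to relative light units (RLU) per mg of
# total protein. Readings and the lysate protein concentration are invented.
raw_rlu = 2.4e6          # luminometer reading for the assayed lysate aliquot
aliquot_ul = 20.0        # lysate volume assayed
protein_ug_per_ul = 1.8  # assumed protein concentration of the lysate

protein_mg = protein_ug_per_ul * aliquot_ul / 1000.0
print(f"{raw_rlu / protein_mg:.3e} RLU/mg protein")
```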
Statistical analysis
Data were expressed as means ± standard deviation. Statistical analysis was performed with SPSS software (v 19.0; IBM, Armonk, NY, USA). Student's t-test (two-tailed) was used to test the significance of the differences between two groups. Data were considered significantly different at the level of P < 0.05 and very significantly different at the level of P < 0.01.
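A minimal sketch of the two-tailed Student's t-test used for these comparisons, with invented triplicate transfection efficiencies (scipy's ttest_ind is a stand-in for the SPSS procedure):

```python
# Sketch of the two-tailed Student's t-test used for group comparisons
# (thresholds P < 0.05 and P < 0.01). Triplicates are invented placeholders.
from scipy import stats

gpe = [33.1, 32.4, 33.8]     # % GFP-positive cells, GPE (hypothetical)
peg_et = [27.0, 26.3, 27.6]  # % GFP-positive cells, PEG-Et (hypothetical)

t, p = stats.ttest_ind(gpe, peg_et)  # two-tailed by default
level = "P < 0.01" if p < 0.01 else "P < 0.05" if p < 0.05 else "n.s."
print(f"t = {t:.2f}, p = {p:.4f} ({level})")
```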
Results and discussion
GPE was successfully synthesized

Characterization of GPE/pDNA complexes was appropriate for cellular uptake

As for cationic polymers, the condensation of pDNA into small particles is an important prerequisite for gene delivery. 32 The gel retardation ability of GPE was measured with agarose gel electrophoresis. Naked pDNA was used as the control group. As indicated in Figure 3, GPE completely retarded the migration of pDNA when the w/w ratio was 3, suggesting that GPE/pDNA complexes were completely formed at w/w ratios over 3. Interaction of cationic polymers with nucleic acid can protect the nucleic acid from enzymatic degradation, 33,34 which facilitates efficient gene transfection.
The particle size of the polymer/pDNA complexes was an important factor for hepatocyte gene delivery. As Hashida et al mentioned, the majority of the fenestrae of the liver sinusoid are smaller than 200 nm in diameter, 35 making it difficult for large particles to reach the parenchymal cells of the liver. In addition, gene carriers with diameters larger than 200 nm are readily scavenged nonspecifically by monocytes and the reticuloendothelial system. 36 A positive surface charge of GPE, which comes from the protonated amino groups on PEI, may be an advantage for cellular uptake, due to the electrostatic interaction between the negatively charged cellular membrane and the positively charged complexes. 37,38 As shown in Figure 4, at a w/w ratio of 1, the particle size of GPE/pDNA complexes was 108 nm and the zeta potential was −8.9 mV, indicating that the complexation between GPE and pDNA was incomplete. However, when the w/w ratios were over 5, GPE could condense pDNA into nanoparticles with relatively constant diameters of 79-100 nm, implying that stable complexes were formed with a size appropriate for cellular uptake. Meanwhile, the zeta potential ranged from 6 mV to 15 mV. These results accorded well with the results of the gel retardation assay. The representative morphologies of GPE/pDNA complexes (w/w 70) under AFM are shown in Figure 5. The complexes appeared spherical in shape with compact structure, and their diameters ranged from 53 nm to 65 nm, smaller than those determined by dynamic light scattering. This was possibly due to shrinkage of the PEG shell caused by evaporation of water during drying before AFM examination. 39
GPE showed low cytotoxicity in BRL-3A cells
For polycationic gene carriers, cytotoxicity is a main hurdle for clinical application. 40 The cytotoxicity associated with GPE could be divided into two types: the immediate toxicity mediated by free GPE, and the delayed toxicity mediated by GPE/pDNA complexes. For this reason, cell viabilities for free GPE and GPE/pDNA complexes were assayed using BRL-3A cells. Free polymers were used in order to mimic a worst-case scenario and obtain greater sensitivity, because cytotoxicity is remarkably reduced upon formation of polymer/pDNA complexes. 41 As shown in Figure 6, the cytotoxicity of GPE was much lower than that of PEI 25 kDa at the same concentration. In addition, GPE displayed negligible cytotoxicity at concentrations below 100 µg/mL. The cell viability was 104% ± 7% at a polymer concentration of 5 µg/mL and decreased only slightly, to 95% ± 6%, as the GPE concentration increased to 100 µg/mL, implying that a wide dose range of GPE may be used for gene transfection. In contrast, with increasing concentrations of PEI 25 kDa, cell viability decreased drastically, from 88% ± 3% at 5 µg/mL to 23% ± 1% at 100 µg/mL. In the case of the polymer/pDNA complexes, GPE/pDNA complexes also showed dramatically lower cytotoxicity than PEI 25 kDa/pDNA complexes. These results suggested that both the immediate toxicity and the delayed toxicity of GPE were lower than those of PEI 25 kDa, demonstrating that GPE is a promising carrier for safe gene transfer. According to previous studies, chemical modification of PEI with PEG can help reduce cytotoxicity by reducing the number of PEI amino groups. 42,43 Therefore, the much lower cytotoxicity of GPE relative to PEI 25 kDa was probably due to the properties of the hydrophilic PEG groups. In addition, Bieber and Elsässer reported that a positive correlation exists between the MW and cytotoxicity of PEI: the cytotoxicity of low-MW PEI is much lower than that of high-MW PEI. 44 For this reason, the lower molecular weight of GPE was another factor contributing to its lower cytotoxicity compared with PEI 25 kDa.
GPE exhibited high transfection efficiency and good hepatocyte specificity in BRL-3A cells
To observe the in vitro transfection efficiency and hepatocyte specificity of GPE, BRL-3A and HeLa cells were transfected with polymer/pDNA complexes with various w/w ratios. PEI 25 kDa at optimal w/w ratio was used as a positive control.
As illustrated in Figure 7A, the transfection efficiency was dependent on the GPE/pDNA weight ratio. Transfection efficiency of GPE increased with increasing w/w ratios up to 70 and then decreased at higher w/w ratios. A reasonable explanation may be as follows: a low w/w ratio would produce unstable complexes and low transfection efficiency, whereas a high w/w ratio yielded low transfection efficiency due to excessive stability, because the pDNA could not be released from the complexes. 28 In addition, the transfection efficiency of GPE was higher than that of PEI 25 kDa at w/w from 30 to 70 (P < 0.05). Naked pDNA produced almost negligible luciferase activity, indicating that pDNA without any vector shows very low transfection efficiency, in agreement with previous studies. 45,46 As illustrated in Figure 7B, the transfection efficiency of GPE was 4.6-fold higher than that of PEG-Et at a w/w ratio of 70 in BRL-3A cells (P < 0.01). Moreover, GPE showed a 13.2-fold higher transfection efficiency in BRL-3A cells in comparison to HeLa cells (P < 0.01), which do not express ASGPR, suggesting that the attachment of galactose residues in GPE might be beneficial for recognition by ASGPR and lead to the significant improvement of transfection efficiency in BRL-3A cells.
To confirm the hepatocyte specificity of GPE, gene-transfection efficiency was evaluated in BRL-3A and HeLa cells, using pEGFP-N1 as a reporter gene. Figure 8A displays typical fluorescence microscope images; BRL-3A cells transfected with GPE/pEGFP-N1 showed more bright fluorescent spots than those transfected with PEG-Et/pEGFP-N1. In addition, transfection efficiency was monitored by flow cytometry. As shown in Figure 8C, GPE exhibited higher efficiency (33%) than PEG-Et (27%) in BRL-3A cells (P < 0.01). Also, the transfection efficiency of GPE was higher in BRL-3A cells (33%) than in HeLa cells (26%) (P < 0.01). These results also confirmed the luciferase activity assays, implying that GPE showed good hepatocyte specificity.
To further confirm the effect of galactose on receptor-mediated gene delivery, a competition assay was performed in the presence of free galactose (1, 10, and 100 mM) as a competitor. Figure 9 shows that the transfection efficiency of GPE in BRL-3A cells was reduced in the presence of free galactose. Notably, the inhibition of the transfection efficiency of GPE depended on the concentration of the pretreated galactose, whereas no such effect was observed on the transfection efficiency of PEG-Et. Transfection efficiency using PEG-Et as a carrier was very low and was unaffected by the addition of free galactose. These results indicated that pretreatment with free galactose as a competitor could reduce cellular uptake of GPE by competitive binding to ASGPR on the cell surface. The inhibition of transfection efficiency was incomplete in the competition assay because GPE still entered BRL-3A cells via nonspecific endocytosis as well as receptor-mediated endocytosis.
Conclusion
In the current study, a novel hepatocyte-targeting gene carrier, GPE, was successfully prepared. The polymer was constructed by a simple procedure and possessed an enhanced ability to condense pDNA effectively into nanoparticles with physicochemical properties appropriate for cellular uptake. GPE displayed significantly higher transfection efficiency and much lower cytotoxicity than commercially available PEI 25 kDa in BRL-3A cells. Importantly, GPE exhibited good hepatocyte specificity. To sum up, it is reasonable to conclude that GPE might carry potential for efficient and safe hepatocyte-targeting gene delivery.
Evolutionary Dynamics of Vibrio cholerae O1 following a Single-Source Introduction to Haiti
ABSTRACT Prior to the epidemic that emerged in Haiti in October of 2010, cholera had not been documented in this country. After its introduction, a strain of Vibrio cholerae O1 spread rapidly throughout Haiti, where it caused over 600,000 cases of disease and >7,500 deaths in the first two years of the epidemic. We applied whole-genome sequencing to a temporal series of V. cholerae isolates from Haiti to gain insight into the mode and tempo of evolution in this isolated population of V. cholerae O1. Phylogenetic and Bayesian analyses supported the hypothesis that all isolates in the sample set diverged from a common ancestor within a time frame that is consistent with epidemiological observations. A pangenome analysis showed nearly homogeneous genomic content, with no evidence of gene acquisition among Haiti isolates. Nine nearly closed genomes assembled from continuous-long-read data showed evidence of genome rearrangements and supported the observation of no gene acquisition among isolates. Thus, intrinsic mutational processes can account for virtually all of the observed genetic polymorphism, with no demonstrable contribution from horizontal gene transfer (HGT). Consistent with this, the 12 Haiti isolates tested by laboratory HGT assays were severely impaired for transformation, although unlike previously characterized noncompetent V. cholerae isolates, each expressed hapR and possessed a functional quorum-sensing system. Continued monitoring of V. cholerae in Haiti will illuminate the processes influencing the origin and fate of genome variants, which will facilitate interpretation of genetic variation in future epidemics.
Vibrio cholerae is a major public health concern because of its potential to cause large epidemics and pandemics and its high case fatality rate when the disease is left untreated. The disease cholera is caused by V. cholerae strains of serogroups O1 and O139 that can produce a potent enterotoxin, cholera toxin, which is encoded by the ctxAB genes on the bacteriophage CTXφ (1). Seven pandemics of cholera have been recorded since 1817, when the disease first emerged from the Bay of Bengal and spread around the globe (2). The current seventh pandemic of V. cholerae originated in Southeast Asia and has spread across the globe in several waves of transmission (3). In October of 2010, cholera made its appearance in Haiti. Prior to 2010, there were no documented cases of cholera in that country, despite the devastating outbreaks that occurred in the Caribbean in the 19th century (4). Its introduction to the island of Hispaniola following the earthquake that occurred there in January of 2010 has resulted in the largest epidemic of cholera in recent times: 604,634 cases and 7,436 deaths were documented in the first two years of the epidemic (5).
Initial epidemiological and genetic studies focused on the origin of the Haiti epidemic and quickly attributed the outbreak to human introduction of a V. cholerae O1 strain from outside the region, most likely South Asia (6,7). Epidemiological investigations pointed to Nepalese troops serving as United Nations (UN) peacekeepers as the source of cholera, based on reports of unsanitary conditions at the UN camp, the spatial-temporal pattern of disease clusters, and the coincidence of the outbreak with the arrival of the UN troops from Nepal (8). Phylogenetic analysis of time-relevant isolates from Haiti and Nepal provided additional support for the hypothesis that the epidemic strain was imported from Nepal (9).
The single-source introduction and geographic isolation of the Haiti epidemic, along with the extended duration of the outbreak, provide an unprecedented natural experiment for characterizing in detail the intrinsic tempo and mode of genome evolution in this deadly pathogen. We performed whole-genome sequencing on a set of well-characterized isolates collected near or after the 1-year anniversary date of the Haitian outbreak, and compared them with isolates collected early in the epidemic to gain insight into the dynamics of genome evolution in V. cholerae O1. The sample set includes isolates collected at different time points and in different localities (9,10) as well as phenotypically and genotypically distinct isolates discovered during routine laboratory surveillance by the Centers for Disease Control and Prevention. The variants that have arisen in the course of the outbreak include various pulsed-field gel electrophoresis (PFGE) pattern combinations, serotype Inaba (11), an altered antibiotic susceptibility pattern (ASP), and a nonagglutinating (NAG) V. cholerae strain. We first conducted phylogenetic analysis to determine whether the diverse set of isolates were all part of the same outbreak and then used the genome sequences to compare gene content and structural arrangement of chromosomes.
RESULTS
We sequenced 23 genomes on the Illumina platform (see Table S1 in the supplemental material). The sample set represents geographically dispersed isolates collected over an array of time points and representing multiple PFGE pattern combinations (see Fig. S1 to S3 in the supplemental material). Eighty-seven genomes were downloaded from the Sequence Read Archive (SRA) (see Table S2 in the supplemental material); two (hc-17a1 and hc-77a1) were found to contain >20% non-Vibrio genetic material and were excluded from the study. Comparing the 108 genomes yielded 566 core genome single-nucleotide polymorphisms (SNPs). Of the 23 isolates, we sequenced 9 on the PacBio platform and resequenced the reference strain 2010EL-1786.
A phylogenetic tree constructed from the 566 core SNPs grouped all Haiti isolates and three Nepal isolates (14,25,26) in a single monophyletic group within the context of a global collection of 108 Vibrio cholerae O1 strains (see Fig. S4 in the supplemental material). Next, we uncovered 45 high-quality SNPs (hqSNPs) in the Haiti-Nepal group. The minimum spanning tree (MST) constructed from the hqSNPs was concordant with the clustering of isolates by maximum likelihood analysis (Fig. 1; see also Fig. S4 in the supplemental material). The MST illustrated the radiation of numerous lineages from a single sequence type that predominated in the early part of the epidemic.
We then examined the 45 hqSNPs for potential effects on function (see Table S3 in the supplemental material). Most notable was a GAA-to-TAA substitution in the wbeT gene of 2012EL-1410, a representative of five serotype Inaba isolates from our Haiti collection. The substitution introduces a premature stop codon into the gene, which predicts a truncated protein, a result that is consistent with other studies showing that serotype conversion results from mutations in the wbeT gene (12). Comparison of three different molecular clock models showed that overall the changes at nucleotide sites were consistent with the epidemic behavior, as the highest likelihood was obtained under the exponential growth model. Using a strict molecular clock, analysis of 10^8 states from eight independent runs yielded a median estimate of the date of the most recent common ancestor of 28 September 2010 (95% credible interval, 23 July 2010 to 17 October 2010).
Variation in gene content and structural arrangement. Few differences in gene content were observed (Fig. 2). The BLAST atlas showed no evidence of gene acquisition, but a few deletions were apparent, and the assembly of long reads showed similar results. In addition, three large inversions in or around the SXT region were evident in the long-read assemblies (Fig. 3). Amplification across the 3′ ends of the inversion boundaries in isolates with rearrangements and in the reference strain 2010EL-1786 confirmed the structural variation observed in the assemblies.
Quorum sensing and transformation. To determine whether the Haiti clone was capable of natural transformation, one mechanism of horizontal gene transfer (HGT), standard laboratory DNA uptake assays (18) were performed on twelve isolates. Virtually no kanamycin-resistant (Kanr), Lac− transformants were detected (Table 1) for Haiti isolates in assays using DNA from reference V. cholerae strain C6706 with a kan gene disrupting the lacZ gene (14). C6706, which is capable of quorum sensing (QS), transformed at an efficiency 10^3 to 10^4 times greater than that of each of the Haiti isolates and a QS-deficient C6706 ΔhapR strain. Similar results were observed when we attempted to transform each Haiti isolate with its own kan genomic DNA (Table 2), or with C6706 genomic DNA with an ampicillin resistance gene disrupting the lacZ gene (data not shown). Thus, it appears that the Haiti clone not only failed to acquire any genes by HGT but also was poorly transformable by standard laboratory DNA uptake assays. We examined the sequences of 47 genes that are involved in the QS and other signaling systems known to control transformation (14); each gene was present, and there were no loss-of-function or nonsense mutations in these genes or their promoters (data not shown). Also, each Haiti isolate was experimentally shown to be QS proficient, as expression of a QS-dependent reporter gene introduced into each of the 12 Haiti strains was similar to that of the positive control (PC) C6706 strain, while the negative control (NC) C6706 ΔhapR strain was ≥1,000-fold impaired, which corresponds to the detection limit (data not shown).
DISCUSSION
We used genomic approaches to characterize evolutionary changes in the V. cholerae O1 population following the introduction to Haiti. Our results were consistent with previous findings showing that the Haiti cholera outbreak is clonal and that Nepalese isolates are the closest relatives to the Haiti strain identified to date (9), even when placed in a phylogeny with a larger collection of isolates representing recent cholera epidemics. A previous study based on WGS (9) showed that Nepalese isolates were almost indistinguishable from Haiti isolates; however, that phylogeny did not include isolates recovered from recent cholera outbreaks. The phylogeny presented here provides evidence that the observed variants that were detected by our surveillance are part of the same outbreak and not representatives of secondary introductions. The synthesis of PFGE and sequence data demonstrated the utility of WGS in establishing the clonality of isolates that exhibit greater PFGE dissimilarity than would normally be attributed to a single-source outbreak. The use of high-resolution sequence data that are amenable to evolutionary analysis will greatly enhance our ability to discern transmission pathways of virulent clones, such as the one implicated in this epidemic.
The nucleotide polymorphism that we detected was consistent with the observed epidemic behavior, as the most supported model of population growth was exponential. The molecular clock calculated with this model estimated a most recent common ancestor date of 28 September 2010 (95% credibility interval [CI], 23 July 2010 to 17 October 2010) (see Fig. S5 in the supplemental material). The credibility interval encompasses the date that the Nepalese soldiers arrived in Haiti (9 October 2010) (8), as well as the first reported hospitalization of a cholera case (17 October 2010) (although an earlier fatal case with an onset date of 12 October may have been the index case) (15). The consistency between the molecular data and the epidemiological information demonstrates the utility of advanced statistical tools in outbreak investigations where epidemiological information may be lacking. Our results suggest that a population genomic approach can be very powerful in delimiting the time frame of an outbreak.
We observed remarkably few differences in the genetic repertoire of the V. cholerae O1 population (Fig. 2). All genes from Haiti isolates were found in the genome of the reference strain (2010EL-1786), and the differences in gene content could be attributed to loss of genetic material. The NAG isolate (2012V-1060) had a unique ~10-kb deletion, and closer examination revealed that several key components were missing from the rfb region (see Fig. S6 in the supplemental material). The isolate therefore appeared to be a serogroup O1 strain that was unable to synthesize or transport O antigen to the cell surface. Five representative strains were observed to have large deletions in the SXT, an ~100-kb integrative conjugative element that exhibits considerable diversity in gene content (16). One ~10-kb SXT deletion was found in three isolates (Fig. 1; see also Fig. S7 in the supplemental material) that also shared an altered ASP. The typical ASP for Haiti V. cholerae includes intermediate resistance to chloramphenicol and nonsusceptibility to streptomycin, sulfisoxazole, trimethoprim/sulfamethoxazole, and nalidixic acid, resistance traits that are encoded by floR, strA, strB, sul2, dfrA1, and a point mutation in gyrA (17). The isolates with the altered ASP displayed nonsusceptibility only to nalidixic acid, and both PCR and genome analysis confirmed the loss of the floR, strA, strB, and sul2 resistance genes in the SXT region of these isolates. The deletion could not be placed parsimoniously on the MST (Fig. 1); however, it falls within variable region III (16) (see Fig. S7) and is flanked by transposase genes, so it is reasonable to suppose that the same deletion occurred independently in two different lineages. The SXT deletion could be made parsimonious by inferring that the nonsynonymous transversion (Fig. 1, asterisk) reverted to its original state, an event that we concluded is less likely to occur. We note that a parsimonious reconstruction indicates loss of SXT genes in the three closest Nepal isolates after divergence from the common ancestor of the Nepal-Haiti cluster (Fig. 1), as the other two most closely related Nepal lineages from Hendriksen et al. (9) (Nepal-2 and Nepal-3) possess an intact SXT (see Fig. S7). The large deletions in the SXT and rfb regions (Fig. 3) were confirmed by the continuous-long-read (CLR) assemblies. The nearly complete assemblies from the CLR and the BLAST atlas were both consistent with the notion that the Haiti V. cholerae strain has not acquired genes or genomic islands. Thus, we found no evidence that unrelated bacterial strains in the environment have contributed to the diversification of the Haiti outbreak strain.
It is well accepted that HGT is a major force driving evolution in bacteria, including Vibrio (18,19); thus, the lack of HGT observed in our study might be surprising. Limited data suggest that changes can accumulate over a relatively short time frame (20), and a previous study of V. cholerae from Haiti (10) reported accumulation of diversity early in the epidemic, although gene acquisition was not specifically demonstrated in V. cholerae serogroup O1. In Vibrio cholerae, natural transformation, an important mechanism of HGT, occurs on chitinous surfaces and requires quorum sensing (QS) (i.e., the HapR QS transcription factor) (13,14). Thus, we first considered whether transmission dynamics precluded the establishment of epidemic V. cholerae in the environment so that they lacked exposure to other Vibrio and to chitin, which are critical for HGT by transformation (12). Although it is possible that environmental factors could limit HGT between environmental strains and the epidemic clone, the Haiti isolates were also poorly transformable under standard laboratory conditions (Table 2). Further experiments confirmed that the isolates, like C6706, were quorum-sensing proficient and expressed hapR (data not shown). Thus, the Haiti strain appears to be limited in its ability to acquire new genetic material through transformation, but this is not due to QS deficiencies, as have been identified in other nontransformable V. cholerae isolates (13) (B. K. Hammer and E. E. Bernardy, unpublished results). It remains possible that V. cholerae isolates defective for transformation in lab settings may, nonetheless, be naturally competent in nature. However, to our knowledge no such strains have yet been described. Also, a longer time frame may be required for recombination to occur with more distantly related lineages and create mosaics that are successful enough to be observed. We note that the low rates of transformation would presumably not affect HGT via other mechanisms such as phage transduction or conjugation. In summary, the Haiti cholera epidemic provided a unique opportunity to study the evolutionary dynamics of an isolated population of V. cholerae O1. Although PFGE was initially critical for determining that a single strain caused the outbreak, subsequent changes in PFGE patterns precluded our ability to determine that observed variants traced back to a single epidemic founder, an issue that was addressed by using WGS in our comprehensive surveillance strategy. The sample set was virtually homogeneous in gene content, an observation that led to the discovery that the Haiti strain is poorly transformable. Thus, our study indicates that transformation by unrelated environmental strains of V. cholerae has played no detectable role in the evolution of the outbreak strain. Further studies will define the genetic mutation(s) that rendered the Haiti strain defective in HGT via natural transformation. Once the mutation(s) is identified, its temporal origin and global prevalence can be determined to understand more about the stability and success of this particular genotype and the role that transformation plays in its genome evolution and success in establishing itself in a new environment. Atypical O1 El Tor V. cholerae strains such as the Haiti strain have already displaced prototypical El Tor strains and emerged as the predominant clone circulating in Asia and Africa (21-24). These strains have acquired multidrug resistance and enhanced virulence traits such as classical or hybrid CTX prophage and SXT-ICE, resulting in higher infection rates and harsher symptoms (25).
With tools such as WGS now available for epidemiological surveillance and case tracking, we argue for renewed efforts aimed at cholera prevention to avert more widespread and difficult-to-treat cholera outbreaks.
Bacterial isolates.
A total of 23 bacterial isolates were chosen for whole-genome sequencing based on phenotypic and genetic diversity that was observed during routine molecular surveillance as previously described (26,17) (see Table S1 in the supplemental material). Clinical isolate C6706, the isogenic ΔhapR mutant, and the C6706 derivative carrying kan at the lacZ site (27) were from our strain collection.
Genome sequence determination. Single-molecule, real-time (SMRT) sequencing was performed on the PacBio RS platform with SMRTbell libraries targeting 10-kb inserts, 90-min movies, and C2 chemistry (Pacific Biosciences, Menlo Park, CA), following previously established methods and commercially available chemistries (28). Single-end 70- and 100-bp Illumina reads were generated on the GAIIx platform (Illumina, San Diego, CA) using standard procedures.
Genome data acquisition and initial data processing. Illumina runs of publicly available genomes were retrieved from the Sequence Read Archive (SRA) and the closed genome of 2010EL-1786 from GenBank (accession numbers NC_016445.1 and NC_016446.1). 2010EL-1786 is an early outbreak isolate from Haiti, which we sequenced, closed, and annotated (7). To improve assembly quality, we trimmed poor-quality bases from the ends of reads and removed low-quality reads using the script run_assembly_trimClean.pl from CG-Pipeline (CGP) with default options, or with a minimum length of 30 bp and no trimming for the 36-bp read sets (29). De novo assembly was performed on the resulting reads using CGP, which assembles with Velvet (30). Optimal parameters were determined for each assembly using VelvetOptimiser with a k-mer range from 27 to 63 bp. Coding gene predictions were prepared with CG-Pipeline, which uses a comprehensive gene prediction approach (29). Genomes of >4.2 Mb were analyzed for potential contamination by comparing the contigs against the RefSeq database of microbial genomes using BLASTn.
Variant calling and annotation. Illumina reads were mapped against 2010EL-1786 using the SMALT mapper (31), and variants were called with FreeBayes (32). Analysis parameters for calling high-quality SNPs (hqSNPs) were optimized by manually reviewing the pileups versus variant calls for seven random genomes analyzed under different conditions. The following parameters were used: smalt index -k 13 -s 6; smalt map -f samsoft; freebayes --pvar 0 --ploidy 1 --left-align-indels --min-mapping-quality 0 --min-base-quality 20 --min-alternate-fraction 0.75. We removed indels and SNPs with a depth of coverage less than 10 from the variant calls. The set of SNPs that passed our filters comprises the final set of hqSNPs used in all subsequent analyses. The set of 45 hqSNPs within the core genome of the Haitian and the three closely related Nepalese genomes were annotated with snpEff (33).
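As an illustration only, the post-calling filter described above (drop indels, require at least 10× depth) can be sketched in Python as follows; the VCF file name is hypothetical, and the assumption that the per-site depth is reported in the INFO column as DP is ours rather than a statement about the study's actual files.

```python
# Minimal sketch of the hqSNP filter described above (assumptions: FreeBayes-style
# VCF with per-site depth in the INFO field as "DP=..."; file name is hypothetical).

def parse_info(info_field):
    """Turn an INFO string such as 'AB=0;DP=42;TYPE=snp' into a dict."""
    out = {}
    for item in info_field.split(";"):
        if "=" in item:
            key, value = item.split("=", 1)
            out[key] = value
    return out

def is_hq_snp(ref, alt, info, min_depth=10):
    """Keep single-base substitutions (no indels) with depth >= min_depth."""
    if len(ref) != 1 or any(len(a) != 1 for a in alt.split(",")):
        return False                      # indel or multi-base allele -> discard
    return int(info.get("DP", 0)) >= min_depth

hq_snps = []
with open("freebayes_calls.vcf") as vcf:   # hypothetical FreeBayes output
    for line in vcf:
        if line.startswith("#"):
            continue
        chrom, pos, _id, ref, alt, qual, flt, info = line.rstrip("\n").split("\t")[:8]
        if is_hq_snp(ref, alt, parse_info(info)):
            hq_snps.append((chrom, int(pos), ref, alt))

print(f"{len(hq_snps)} high-quality SNPs retained")
```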
Core genome phylogeny. The Haiti isolates were first examined in a broad phylogenetic context by constructing a tree containing 108 genomes with PhyML using the K80 substitution model, best of NNI and SPR for tree topology searching, and SH-like branch supports (34) (see Fig. S4 in the supplemental material). A phylogeny was constructed for all genomes clustering with the original Haiti genomes (7, 10) plus the three related Nepal genomes (9) using the same approach (n = 63).
Bayesian analysis. The analysis of the date for the most recent common ancestor (MRCA) was based on the alignment of 45 hqSNPs from 32 Haiti genomes with known DOCs (see Tables S1 and S2 in the supplemental material). To estimate the date of the outbreak, the sequences were analyzed using BEAST v1.7.2 with an HKY model (35) and 10⁸ iterations of the Markov chain Monte Carlo simulation. To minimize the number of parameters estimated, we used a strict molecular clock with a starting estimate of 3E-4 hqSNPs/site/day, as estimated by Path-o-Gen (36). We tested three different growth models (constant population size, expansion, and exponential growth) and compared models using Bayes factors based on marginal likelihoods sampled from the posterior. TreeAnnotator, with a burn-in of 1,000 trees, was used to find the best-fitting tree with default parameters.
BLAST atlas. The BLAST atlases were constructed using Illumina-only assemblies relative to a pan-genome, initiated by a concatenated and closed assembly of 2010EL-1786 (accession numbers NC_016445.1 and NC_016446.1). The pangenome for the BLAST atlas was constructed by iterative comparisons of gene predictions from each of the assemblies using BLASTn against the genes in the pangenome set (37). Any genes among the other assemblies which did not have a hit to the pangenome with >80% identity and >100 nucleotides were added to the pangenome set.
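For illustration, a minimal sketch of the iterative pan-genome build described above is given below. It assumes that NCBI BLAST+ (blastn) is installed and that each assembly's predicted genes are available as FASTA files; the file names are hypothetical, while the >80% identity and >100-nucleotide thresholds are taken from the text.

```python
# Sketch of the iterative pan-genome construction (assumptions: "blastn" from
# NCBI BLAST+ is on the PATH; gene FASTA file names are hypothetical).
import subprocess

def read_fasta(path):
    """Return {sequence ID: sequence} for a FASTA file."""
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name:
                seqs[name].append(line)
    return {k: "".join(v) for k, v in seqs.items()}

def genes_with_hit(query_fasta, pangenome_fasta):
    """Run blastn (tabular output) and return query IDs with a qualifying hit."""
    out = subprocess.run(
        ["blastn", "-query", query_fasta, "-subject", pangenome_fasta,
         "-outfmt", "6 qseqid pident length"],
        capture_output=True, text=True, check=True).stdout
    found = set()
    for row in out.splitlines():
        qseqid, pident, length = row.split("\t")
        if float(pident) > 80.0 and int(length) > 100:
            found.add(qseqid)
    return found

pangenome = read_fasta("2010EL-1786_genes.fasta")          # seed: reference genes
for assembly in ["isolate_A_genes.fasta", "isolate_B_genes.fasta"]:
    with open("pangenome.fasta", "w") as out:              # current pan-genome
        for gid, seq in pangenome.items():
            out.write(f">{gid}\n{seq}\n")
    present = genes_with_hit(assembly, "pangenome.fasta")
    for gid, seq in read_fasta(assembly).items():
        if gid not in present:
            pangenome[gid] = seq                            # novel gene -> add
```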
Detecting rearrangements with PacBio assemblies. Ten Haiti isolates, including the reference strain 2010EL-1786, were sequenced on the PacBio RS platform using standard C2 chemistry and protocols. For each strain, reference-based alignments as well as de novo assemblies were compared to the closed reference genome of Haiti isolate 2010EL-1786 (7). The reference-based approach to detect structural variants was performed as described previously (28). A novel assembly method was employed that enabled high-quality de novo assembly using solely continuous-long-read (CLR) sequencing data from the PacBio RS platform. The assembly pipeline has three main components: preassembly, assembly, and assembly polishing (38).
The preassembly step utilizes the error correction framework from the AHA pipeline (39). However, rather than correcting the long reads with high-quality short reads, as previously described, long reads are used to generate a high-accuracy consensus of other long reads (28). Specifically, the full CLR data set was divided into subsets to include the longest CLRs. The length cutoff for each strain was set to obtain at least 10× corrected read coverage; depending on the sequencing depth and read length profiles, these size cutoffs ranged from 4 to 6 kb. This read subset was then corrected using the complete CLR set for each strain.
The resulting corrected (or "preassembled") reads were trimmed to eliminate any low-quality regions in the consensus for each read. The resultant trimmed long reads were then size selected again to obtain at least 10× long-read coverage of the genome (cutoffs ranged from 3 to 6 kb for the preassembled reads). These high-quality reads were passed directly into the Celera Assembler (40).
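The length-cutoff selection used in both of the preceding steps amounts to choosing the longest reads whose summed length reaches the target coverage of the genome. A minimal sketch follows; the read lengths and the ~4-Mb genome size used in the example are illustrative assumptions, not values from the study.

```python
# Sketch of the coverage-based length-cutoff selection described above.

def length_cutoff(read_lengths, genome_size, target_coverage=10):
    """Return the smallest read length L such that reads of length >= L
    together provide at least target_coverage x genome_size bases."""
    needed = target_coverage * genome_size
    total = 0
    for length in sorted(read_lengths, reverse=True):
        total += length
        if total >= needed:
            return length
    raise ValueError("not enough sequence for the requested coverage")

# Example with made-up CLR read lengths (bp) and a ~4-Mb genome:
import random
random.seed(0)
reads = [random.randint(500, 20000) for _ in range(40000)]
print("length cutoff:", length_cutoff(reads, genome_size=4_000_000))
```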
We performed a final finishing step using Quiver from Pacific Biosciences (https://github.com/PacificBiosciences/GenomicConsensus), which leverages specific quality values and features unique to single molecule sequencing. For each strain, all raw reads were aligned back to the de novo assembled output from the Celera Assembler using BLASR (41). Consensus calling was performed using Quiver to obtain the final finished assemblies.
Sequencing for the superintegron. To validate the PacBio assemblies, we employed Sanger sequencing of the integron region of one isolate. Fosmid libraries were constructed from genomic DNA fragments of 2011EL-2320 using the PCC2FOS Vector (Epicentre, Madison, WI). Libraries were screened using attC-targeted primers (reference PMID 17464063) to find integron-containing fragments. Transposons were introduced into the selected fosmids with the EZ-Tn5 <KAN-2> insertion kit (Epicentre) and sequenced using EZ-Tn5-specific primers. Geneious v6.0.3 (Biomatters Ltd., Auckland, New Zealand) was used to assemble sequences to generate the full-length integron sequence. The Sanger consensus sequence was compared to the PacBio assembly.
Chitin-induced natural transformation assay for HGT. As described previously (13,14), sterile crab shells in triplicate wells were inoculated with each V. cholerae strain in a 12-well plate and provided with 2 µg of genomic DNA (gDNA) marked with a kanamycin resistance (kan) gene. Following a 24-h incubation, attached cells were harvested and plated to quantify transformation frequency (TF), defined as kan CFU ml⁻¹/total CFU ml⁻¹. Experiments were performed in triplicate. For the experiments whose results are shown in Table 1, each strain was provided with donor gDNA from a C6706 derivative with kan at the lacZ locus, and the fold TF defect was calculated relative to C6706. In a separate assay (Table 2), twelve pools of donor gDNA were generated from >1,000 Tn5(kan) mutants of each isolate, and each pool was used to transform that same isolate and C6706. The fold TF defect was calculated for each isolate relative to C6706 that was also incubated with Tn5(kan) gDNA from that isolate. Transposon mutagenesis of C6706 and the Haiti isolates was performed as described elsewhere (43).
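The transformation-frequency arithmetic described above can be written out explicitly as a short sketch; the colony counts used here are illustrative assumptions, not measured values from the study.

```python
# Sketch of the TF and fold-defect calculations described above.

def transformation_frequency(kan_cfu_per_ml, total_cfu_per_ml):
    """TF = kanamycin-resistant CFU/ml divided by total CFU/ml."""
    return kan_cfu_per_ml / total_cfu_per_ml

def fold_tf_defect(tf_reference, tf_isolate):
    """Fold defect of an isolate relative to the reference strain (C6706)."""
    return tf_reference / tf_isolate

tf_c6706 = transformation_frequency(kan_cfu_per_ml=3.0e3, total_cfu_per_ml=5.0e8)
tf_haiti = transformation_frequency(kan_cfu_per_ml=2.0e0, total_cfu_per_ml=4.0e8)
print(f"TF(C6706) = {tf_c6706:.1e}, TF(Haiti) = {tf_haiti:.1e}, "
      f"fold defect = {fold_tf_defect(tf_c6706, tf_haiti):.0f}x")
```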
Quorum-sensing assay. As described (44), a quorum-sensing reporter plasmid (pBB1) was introduced into each isolate, and then triplicate cultures were grown overnight at 30°C. Luciferase levels and optical density at 600 nm (OD600) were determined for each culture to calculate the relative light units (RLUs), expressed as (lux/ml)/(OD600 units/ml). A quorum-sensing assay was also performed as previously described (44) (data not shown).
Opportunities for life course research through the integration of data across Clinical and Translational Research Institutes
Introduction Early life exposures affect health and disease across the life course and potentially across multiple generations. The Clinical and Translational Research Institutes (CTSIs) offer an opportunity to utilize and link existing databases to conduct lifespan research. Methods A survey with Lifespan Domain Taskforce expert input was created and distributed to lead lifespan researchers at each of the 64 CTSIs. The survey requested information regarding institutional databases related to early life exposure, child-maternal health, or lifespan research. Results Of the 64 CTSIs, 88% provided information on a total of 130 databases. Approximately 59% (n=76/130) had an associated biorepository. Longitudinal data were available for 72% (n=93/130) of reported databases. Many of the biorepositories (n=44/76; 58%) have standard operating procedures that can be shared with other researchers. Conclusions The majority of CTSI databases and biorepositories focusing on child-maternal health and lifespan research could be leveraged for lifespan research, increasing generalizability and enhancing multi-institutional research in the United States.
Introduction
Health at any point across the life course is determined by a complex interplay of genetic and environmental exposures from gamete to grave [1][2][3][4]. Early life factors, such as in utero exposure to undernutrition or toxins, may be particularly important because they have the potential to adversely alter short-term health and long-term trajectories of physical and mental health [5][6][7]. While basic science and epidemiological studies have shown the importance of considering the role of early life exposures on later life health outcomes, our understanding of these mechanisms needs to be expanded. However, the data requirements for a well-designed life course study may deter some investigators from adopting such a comprehensive approach to understanding health. Longitudinal studies are costly and time-consuming, and therefore most prospective data sources are constrained to specific geographic subpopulations and lack generalizability.
Life course research also requires a diverse set of data sources and analytic techniques because a combination of genetic, social, psychological, and environmental factors must be incorporated into the analyses. The interdependent role of these factors and timing of exposures, as well as cumulative effects over time, remains poorly understood. To address these concerns, we have compiled a list of available data sources across 64 research institutions. Leveraging data from multiple sources across a variety of subpopulations allows for the power necessary to further investigate the importance of timing of exposures and their later life health outcomes [8,9]. However, there are few data catalogs that identify the data sources available for investigating how early life exposures affect later life health, which limits the ability to conduct this type of lifespan research.
The US National Institutes of Health (NIH) designed a Roadmap for Medical Research with the purpose of improving the translation of research into practice by improving the understanding of complex biological systems, encouraging scientists to test multiple models for conducting research, and facilitating the efficient dissemination of research findings into clinical care [10]. Such a broad and lofty mission is essential for improving the health and well-being of the US population and requires the implementation of new forms of collaboration in the medical community. The Clinical and Translational Science Awards (CTSA) program of the NIH National Center for Advancing Translational Sciences (NCATS) is a national network of institutions (Clinical and Translational Research Institutes (CTSIs)) designed to address this goal. Thus, the CTSA program creates a definable academic home designed to facilitate translational research and includes 64 medical research institutions in 31 states and the District of Columbia. Harnessing the data from these institutions with the goal of further elucidating links between early life exposures and later life health, and using findings to inform focused interventions, has the potential to affect the health of millions in the US population. The vast data sources that already exist across all the CTSIs could be integrated to conduct lifespan research. Therefore, we conducted a survey to identify these resources and to begin to identify common data elements as well as linkages to established biorepositories.
The NCATS national CTSA organization created domain task forces (DTFs) to serve as the infrastructure for sharing ideas and collaborating to develop efficient and effective approaches to conducting and translating research into improved health. The Lifespan Domain Task Force comprises researchers across domains, from preconception and infancy to geriatrics, who examine ideas and conduct studies needed to advance lifespan research. A group of maternal and child health researchers and life course epidemiologists formed a sub-group of the Lifespan DTF, the Early Life Exposures Working Group (ELE WG), and identified the need to create a publicly available catalog of existing studies and cohorts that would broadly benefit investigators interested in ELE research. Developing a catalog of datasets from a national network of clinical research centers will provide a resource for future research examining the role of early life factors on later life health across the US population. It will also encourage collaboration between academic institutions and their community health partners and facilitate the future evaluation of programs aimed at integrating information about social, psychological, and environmental factors contributing to short-term and long-term health outcomes. In order to address this objective, a survey was designed in REDCap and disseminated to all CTSIs with the goal of identifying potential resources that would benefit investigators interested in life course research with a special interest in early life exposures.
Materials and Methods
The ELE WG designed a REDCap survey to be distributed to all CTSIs (n = 64). The survey requested information regarding institutional databases, such as cohorts or biorepositories from unique populations, related to early life exposure, child-maternal health, or lifespan research.
Surveys were sent to all CTSA Principal Investigators (PIs) who were asked to identify and send the survey to those in their institution with the greatest knowledge about lifespan research and/or existing data repositories. Reminder prompts were then sent to the PIs if there had been no initial response. Prompts were followed with personal appeals from members of the task force if the surveys had not been completed. If responses had not been received in a timely manner (2-3 months), follow-up emails were sent to each of the PIs by J.E.H. and thereafter his administrative assistant reached out to the PIs' administrative assistants to be certain that the PI had received and responded to the request.
Data collected from the REDCap survey were stored in the Early Life Exposure Database Repository and can be downloaded from the Center for Leading Innovation & Collaboration Web site (https://clic-ctsa.org/content/ele-redcap-table-resources). The full list of questions asked of participants is available in Supplementary Table S1.
Results
The survey was completed by 56 of the 64 CTSA hubs for an overall response rate of 88%. All CTSA hubs completing the survey were academic centers and are widely dispersed across the United States (see Fig. 1a). There were 73 total respondents to the survey, with multiple respondents from 7 of the institutions. Nearly all respondents completed the survey, with an overall survey completion rate of 96%. In all, 90 completed surveys representing 130 lifespan-related databases formed the basis of the results section.
Information on a total of 130 databases relating to early life exposures, maternal-child health, or life course research was collected from 49 of the participating CTSA centers. Fig. 1b shows the number of early life exposure, child-maternal health, or lifespan research databases by institution. The majority of CTSA hubs with a life course database had more than one database relating to early life exposures, child-maternal health, or lifespan research (n = 26), with a maximum of 10. Table 1 provides a broad overview of the data collected from the REDCap survey (Supplementary Table S2 provides a detailed summary of each database). The reported databases contain information on cohorts ranging in size from 1-500 participants (n = 39 or 30.5%) to more than 100,000 participants (n = 13 or 10.2%), with cohort size being unknown for 18 of the databases. Cohorts included prenatal subjects (n = 47), infants (n = 66), children (n = 55), young adults (n = 45), pregnant women (n = 46), adults (n = 53), and older adults (n = 30). Longitudinal data, defined as having multiple measurements for a single patient over multiple time points, were available for 72% (n = 93) of the reported databases.
Approximately 59% (n = 76/130) of all reported databases have an associated biorepository, with multiple types of biosamples (blood: n = 58; placenta: n = 14; tissue: n = 28; other/unknown: n = 23). Blood is the most commonly collected biosample. Examples of the other/unknown category of biosample include breast milk, fecal samples, umbilical cord, and omental adipose tissue. Nearly 57% of biorepositories were considered shareable (n = 43/76), which was defined as storing data on a platform that permits sharing and having an Institutional Review Board [IRB] protocol that facilitates sharing. Participants were asked to provide a brief description of how researchers can request biospecimen data, and the responses ranged from contacting the PI to contacting specific NIH institutes that oversee the study. More than half of the biorepositories (n = 44/76; 58%) have standard operating procedures (SOP) that can be shared with other researchers. These procedures include the time between sample collection, collection method, and other SOPs. Of the biorepositories with SOPs, 64% (n = 28/44) have collection procedures that can be modified to accommodate prospective or new studies.
Most biorepositories have collected samples from subjects in both healthy and diseased states (n = 31/76; 41%). There are smaller numbers collected for disease-only (n = 11/76; 14%), healthy-only (n = 19/76; 25%), or unknown/other purposes (n = 15/76; 20%). The types of subjects that were classified as "other" include peri-menopausal women, children with lead poisoning, genetically at risk individuals, or pregnant women. The disease states reported include general disorders such as autoimmune diseases, autism, diabetes, preterm births, obese subjects, kidney disease, peripartum depressed women, and neurological disorders, as well as specific disorders such as Wolfram syndrome.
Data integrated with electronic medical records provide an exciting prospect for observing how early life exposures affect later life health trajectories. Nearly 70% of the biorepositories have been integrated with electronic medical records in some manner (integrated: n = 37/76, 49%; somewhat/maybe: n = 16/76, 21%). Nearly all data that have been linked to electronic health records (EHRs) have systems that are amenable to natural language processing (n = 49/53; 92%). In addition to administrative health care data, 49% of all biorepositories (n = 37/76) have laboratory results on tissues that are part of research and not medical practice. Another 21% (n = 16/76) may have these data available in partial form.

Figure 2 displays a summary of features of the databases with longitudinal and biorepository data by cohort size. A large proportion of longitudinal databases also have a biorepository (n = 57/93; 61%). The majority of the cohorts with biorepositories are smaller studies with under 5000 participants (n = 52/93; 56%). Three CTSA hubs (4 databases) reported having cohorts with over 100,000 participants and biorepository data. Biospecimens are available for the longitudinal studies over a range of cohort sizes, including cohorts with over 100,000 individuals. Blood is the most commonly available sample in databases with longitudinal data (n = 44/57; 77%), followed by tissues/fluids (n = 21/57; 37%). The data sets are not merely collections of diseased cohorts, with nearly half of the databases having subjects that are healthy and in a disease state (n = 25/57; 44%). Another 37% have subjects that are all healthy or all in a diseased state (healthy: n = 11/57, 19%; diseased: n = 10/57, 18%) and the remaining 19% (n = 11/57) are unknown. The available databases also encompass many stages across the life course. Nearly all of the databases enrolled individuals between the prenatal period and young adulthood (n = 51/57; 89%), with the distribution by period of development as follows (categories are not mutually exclusive): prenatal (n = 23/57; 40%), infant (n = 33/57; 58%), childhood (n = 27/57; 47%), and young adult (n = 23/57; 40%).
One of the most exciting prospects for future life course research is the development of longitudinal databases that are linked to biorepository data and EHR. Fig. 3 displays a summary of features of the 40 CTSA databases from 22 CTSA hubs with all 3 components (longitudinal: n = 40/93, 43%; total: n = 40/130, 31%). The 4 large databases have been integrated with electronic medical records. Biospecimen data collected by longitudinal studies linked to EHR is available for a range of cohort sizes, health statuses, and age groups. Blood is the most commonly available biospecimen in databases with all 3 components (n = 33/40; 83%), followed by tissues/fluids (n = 13/40; 33%). Most of the longitudinal data with biosamples and electronic medical records have fewer than 5000 individuals enrolled (n = 29/40; 73%). Databases with all 3 components also span the entire life course, from prenatal to older adulthood, with 90% having enrolled individuals between the prenatal period and young adulthood (n = 36/40; 90%). The distribution of records by period of development is as follows (categories are not mutually exclusive): prenatal (n = 16/40; 40%), infant (n = 24/40; 60%), childhood (n = 22/40; 55%), and young adult (n = 20/40; 50%).
Discussion
Life course methods conceptualize health as the dynamic interplay between biologic and environmental factors from conception to death, a framework that has long been accepted by the World Health Organization [11][12][13]. Understanding factors that are amenable to intervention during early periods of development is particularly important because of the potential to improve health over an entire life course and possibly for future generations [14]. It may also prove useful for predicting the occurrence or progression of disease in current populations, allowing for a more targeted approach to disease-specific surveillance and screening programs. Multiple databases and biorepositories focusing on maternal-child health and life course research are available to investigators within or outside the responding institutions that can be used to facilitate lifespan research.
Life course research is expensive. Utilizing the massive volume of research data and patient-specific information already being collected by health care systems to study the short-term and long-term effects of early life exposures may prove to be a cost-effective and powerful way to further elucidate factors that affect health during critical periods of development; it may also reduce the selection biases inherent in recruiting research participants and will contribute to the development of Learning Healthcare Systems. Combining research repositories with population-level data, such as vital records and EHR, makes it possible to quantify and potentially correct for the differences between the sample and overall population. Further, combining repositories that have been collected from multiple geographic locations and for diverse populations and purposes may result in a sample that is more representative of the larger population, as well as samples with larger sample sizes for subgroup analyses. Cataloging research databases and biorepositories across institutions that facilitate research on early life exposures and health across the lifespan is the first step in beginning to combine and analyze data that have already been collected. Linking clinical research records to administrative records within and between institutions could potentially revolutionize health care research by allowing individuals to be followed over longer periods of time. While challenging, successful examples exist on a smaller scale that demonstrate the feasibility of linking to records across institutions and to external data sources, such as vital statistics and driver's license data [15][16][17][18].
Synthesizing EHRs with data from external sources, such as population databases, biomonitors, and environmental exposure data, would allow for investigations into the immediate and latent effects of risk factors over all ages. For example, individual-level birth certificate and death certificate data can be linked to existing cohorts to increase the breadth and quality of measures relating to early life exposures [15,[19][20][21]. Combining these records also allows researchers to investigate dynamic health outcomes, such as how changes in weight during mid-life affect later life disability [22] or how pregnancy outcomes affect trajectories of chronic conditions after the age of 65 [23]. Using geocoded data to link the databases and biorepositories identified in this study to other external data sets, such as environmental toxins and measures of the social determinants of health, also has great potential to improve our understanding of the long-term effects of early life exposures. One area that appears underrepresented in current databases is patient-reported measures, such as subjective well-being, which has been shown to be distinct from mental illness and predictive of long-term health and longevity [24,25]. Whereas mental illness may be captured as diagnoses and prescriptions in electronic medical records, social well-being will not be captured, and thus adding brief indicators to existing databases could yield valuable information related to long-term health and disease prognosis, as well as patient-centered outcomes [26].
Although combining data from multiple sources with computational, bioinformatics, and statistical methods allows us to observe previously unseen patterns in biomedical data, conceptual models, such as those used in life course epidemiology, can be used to provide the scaffolding for integrating scientific theory and approach to making sense of the patterns. There are multiple opportunities to utilize this framework in ongoing initiatives such as the Precision Medicine Initiative and the Environmental Influences on Child Health Outcomes program. First, the Inter-university Consortium for Political and Social Research (ICPSR) is an example of a successful data sharing resource that began archiving data in 1962 and currently holds over 68,000 data sets from more than 8000 studies [27]. A similar resource combining clinical and population health existing data sources housed across multiple institutions, guided by a conceptual model of life course research, and supported by the CTSI program across the United States would be a cost-effective way to further investigate the relationship between early life exposures and health. Second, to support reproducibility, data sharing across institutions should include sharing the protocols and methodologies used to collect, clean, analyze, and curate the data. Examples of online protocol repositories include Protocols.io [28] and Protocol Exchange [29]. Third, building off of the ICPSR model, training in data access, curation, and the analytic methods of life course research should be part of the life course data repository. Although there are many aspects, such as confidentiality and data sharing agreements, that must be considered if such an endeavor were to be undertaken, these should not be seen as insurmountable obstacles. Sensitive data sources could also be held by their respective institutions and assigned a linkage ID that would allow data sharing between groups that have gained the appropriate approvals from the relevant data contributors and IRB [15]. Insufficient time, lack of funding, and lack of data sharing platforms may also be prohibitive to the promotion of data sharing across institutions [30].
Other barriers also need to be addressed for large-scale collaborations across institutions. For instance, data and biospecimens may only be internally available to researchers in the same institution. Thus, alternative strategies for collaborations across centers for replication of previous findings will be required. In addition, concerns about confidentiality and privacy issues revolving around creating large databases with personal health information require pragmatic strategies that minimize the risk of loss of confidentiality while enhancing the opportunity to learn from real-world experience. One approach is to allow collaborators to perform analyses within their own institutional firewalls and share statistical estimates for pooling in collaborative analyses. Several approaches, from simple to complex, could be taken to achieve such collaborations. For example, a simple approach is to form cross-institutional research teams focused on a single research question, each with access to their own data sets, have them design and execute the study and analysis protocol simultaneously, and then combine summary data across sites. This model has also proven successful in the social sciences [31][32][33]. A more complex approach would be to develop a consortium of data science teams from participating institutions to develop common data elements and common procedures for life course research, also referred to as a "Federated Model." The National Patient-Centered Clinical Research Network Model and the CTSA Informatics Domain Task Force are examples of this type of collaboration. It might be more successful, however, if the sometimes daunting task of sharing all data across institutions were focused on a smaller scale. This would circumvent the need for a data repository, which raises complex social, legal, and ethical challenges, and allow for the formation of cross-institutional research teams with common goals but independent data holdings. Further, the NCATS Streamlined, Multisite, Accelerated Resources for Trials IRB platform (SMART IRB) will help expedite multi-site clinical studies across CTSAs by providing a single IRB review process. Transforming such a platform from vision to reality, however, would require substantial support from multiple institutions and creative solutions for a complex problem.
There are noteworthy limitations to our study. Our survey was specific to CTSIs, and it is likely that the number of databases and biorepositories focusing on child-maternal health and lifespan research within CTSIs and available to investigators is underreported. It is possible that the respondent at each CTSA was not fully aware of all related databases housed within each institution. Nonetheless, we see the development of our data catalog as a dynamic process and plan to incorporate other databases as we identify them. We also have considered updating the catalog to incorporate new or expanded databases. At the very least, this is a good start to which additional databases could be added in the future, and it will facilitate conversations and collaborations across multiple institutions.
The Innate Immune Response to Infection by Polyascus gregaria in the Male Chinese Mitten Crab (Eriocheir sinensis), Revealed by Proteomic Analysis
The Chinese mitten crab (Eriocheir sinensis) is a representative catadromous invertebrate of the Yangtze River and a commercial species widely cultivated in China. Both cultivated and wild crabs suffer from a variety of parasites and pathogens, which can result in catastrophic economic losses in aquaculture revenue. Polyascus gregaria, a parasitic barnacle with a highly derived morphology, is specialized in invading these crabs. This study examines the immunological mechanism in E. sinensis infected with P. gregaria. Tandem mass tags (TMT), a specialized method of mass spectrometry, were used to analyze resistance to P. gregaria infection at the protein level. In the hepatopancreas of infected crabs, 598 proteins related to physiological change were differentially expressed, of which 352 were upregulated and 246 were downregulated. Based on this differential protein expression, 104 GO terms and 13 KEGG pathways were significantly enriched. Differentially expressed proteins, such as ATG, cathepsin, serpin, iron-related protein, Rab family, integrin, and lectin, are associated with the lysosome GO term and the autophagy-animal KEGG pathway, both of which likely relate to the immune response to the parasitic P. gregaria infection. These results show the benefit of taking a detailed, protein-level approach to understanding the innate immune response of aquatic invertebrates to macroparasite infection.
Introduction
The Chinese mitten crab (Eriocheir sinensis) is a well-known and important decapod crustacean with both ecological and economic value [1]. This migratory crustacean is native to the coastal waters of East Asia, but is now considered an invasive species throughout Europe and North America [2]. In China, the more distinctive germplasm characteristics and high output of this native crustacean from the Yangtze River are generally acknowledged by the public. Due to the commercial value of this species, intensive cultivation of crabs from the Yangtze River became popular after the 1950s [3]. Along with the rapid development of the large-scale aquaculture of these crabs are frequent outbreaks of viruses, bacteria, rickettsia-like organisms, and parasites, all of which have led to catastrophic economic losses for Chinese mitten crab farmers [4,5]. These infections can also cause remarkable morphological, physiological, and behavioral changes in the host [6]. Although the interactions among bacteria [7,8], fungi [9], parasites [10], and even ecological factors [6,11] in the host of the Chinese mitten crab are represented in the literature, to date little attention has been paid to infection by P. gregaria. We identified hundreds of new relevant proteins and assessed their biological importance through enrichments of gene ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways to discriminate the immunological mechanisms involved in the response to P. gregaria infection. These results provide a novel, deep, and comprehensive view of the innate immune response in the Chinese mitten crab, especially for macroparasites, which provides an academic reference for preventing infection by P. gregaria and reducing economic losses on aquaculture farms. These molecular mechanisms also contribute toward pharmacological research and development regarding new medicine against parasite infection.
Sample Site and Crabs
Male Chinese mitten crabs were obtained from the Yangtze Estuary (31°10′59.06″ N, 121°53′40.56″ E), Shanghai, China, in December 2020, during the spawning migration. Animals were collected from gill nets (5 mm mesh), set perpendicular to the water flow, after 2-3 h in the water. Crabs were quickly stunned on ice as soon as they were caught and immediately taken to the lab. The biological information of each crab was measured and the abdomen was scrutinized to separate animals parasitized with P. gregaria from those without the parasites. Slight exfoliation was necessary to collect the parasites and to note their number. Next, 4-5 g of hepatopancreas tissue was rapidly extracted from both parasitized and non-parasitized crabs and flash frozen in liquid nitrogen. All of the extracted samples were stored at −80 °C to ensure effectiveness of the tissue. The parasites collected from each host were identified based on their morphological characteristics [40]. Crabs were considered healthy (i.e., non-parasitized) if no parasite or scar was found in the abdomen, appendages, or copulatory organ, while those with at least 15 parasites evident in the abdomen were considered parasitized (Figure 1). Six non-parasitized crabs (mean ± SD, carapace length: 56.67 ± 0.98 mm) and six parasitized crabs (mean ± SD, carapace length: 53.19 ± 7.02 mm; parasite count: 21 ± 4) (Table 1) were chosen to analyze the innate immune response in the present study.
Experimental Protein Preparation
STD lysis buffer was added to hepatopancreas tissue, which was then transferred into 2 mL tubes with quartz sand (1:1). An MP Fastprep-24 Automated Homogenizer was used to homogenize the lysate in 2 cycles of 6.0 m/s for 30 s. The homogenate was sonicated and then boiled for 15 min, followed by centrifugation at 14,000× g for 40 min. The supernatant was filtered through a 0.22 µm filter and quantified with the BCA Protein Assay Kit (P0012, Beyotime) before being stored at −20 °C. To separate proteins, 20 µg of each sample was mixed with 6X loading buffer and boiled for 5 min. The concentrations of proteins were detected (Table S1) and the proteins were separated on a 12.5% SDS-PAGE gel (Figure S1) and visualized by Coomassie Blue R-250 staining.
A total of 200 µg of proteins was taken for each sample and combined with 30 µL of STD buffer (4% sodium dodecyl sulfate (SDS), 100 mM dithiothreitol (DTT), and 150 mM Tris-HCl pH 8.0). The detergent, DTT, and other low-molecular-weight components were removed using UA buffer (8 M urea, 150 mM Tris-HCl pH 8.5) by repeated ultrafiltration (Sartorius (Göttingen, Germany), 30 kD). Then, 100 µL of iodoacetamide (100 mM IAA in UA buffer) was added to block reduced cysteine residues and the samples were incubated for 30 min in darkness. The filters were washed three times with 100 µL of UA buffer and then twice with 100 µL of 0.1 M triethylammonium bicarbonate (TEAB) buffer. Finally, the protein suspensions were digested with 4 µg of trypsin (Promega, Madison, WI, USA) in 40 µL of 0.1 M TEAB buffer overnight at 37 °C. The resulting peptides were collected as a filtrate. The peptide content (Figure S2) was estimated by UV light spectral density at 280 nm, calculated based on the frequency of tryptophan and tyrosine in vertebrate proteins.
TMT Protein Labelling and HPLC Fractionation
The 100 µg peptide mixture of each sample was labeled using TMT reagent, according to the manufacturer's instructions (Thermo Fisher Scientific, Waltham, MA, USA). Each TMT-labeled mixture was then fractionated by RP chromatography using the Agilent 1260 Infinity II HPLC. Next, the mixture was diluted with buffer A (10 mM HCOONH4, 5% ACN, pH 10.0) and loaded onto an XBridge Peptide BEH C18 column (130 Å, 5 µm, 4.6 mm × 100 mm). The peptides were eluted at a flow rate of 1 mL/min with a gradient of 0-7% buffer B (10 mM HCOONH4, 85% ACN, pH 10.0) for 5 min, 7-40% buffer B for 5-40 min, 40-100% buffer B for 45-50 min, and 100% buffer B for 50-65 min. The elution was monitored at 214 nm based on the UV light trace, and fractions were collected every 1 min between 5 and 50 min.
LC-MS/MS Analysis
Each fraction was injected for nanoLC-MS/MS analysis. The peptide mixture was loaded onto the C18-reversed phase analytical column (Thermo Fisher Scientific, Acclaim PepMap RSLC, 50 µm × 15 cm, nano viper, P/N164943) in buffer A (0.1% formic acid), and separated with a linear gradient of buffer B (80% acetonitrile and 0.1% formic acid) at a flow rate of 300 nL/min. The gradient consisted of 6% buffer B for 3 min, 6-28% buffer B for 42 min, 28-38% buffer B for 5 min, 38-100% buffer B for 5 min, and 100% buffer B for 5 min. The peptides were analyzed by a Q Exactive Plus mass spectrometer (Thermo Fisher Scientific) coupled to an Easy nLC (Thermo Fisher Scientific) for 90 min. The mass spectrometer was operated in positive ion mode. MS data were acquired using a data-dependent top-10 method, dynamically choosing the most abundant precursor ions from the survey scan (350-1800 m/z) for high-energy collisional dissociation (HCD) fragmentation. Survey scans were acquired at a resolution of 70,000 at m/z 200 with an automatic gain control (AGC) target of 3e6 and a maxIT of 50 ms. MS2 scans were acquired at a resolution of 17,500 for HCD spectra at m/z 200 with an AGC target of 2 × 10⁵ and a maxIT of 45 ms, and the isolation width was 2 m/z. Only ions with a charge state between 2 and 6 and a minimum intensity of 2 × 10³ were selected for fragmentation. Dynamic exclusion for selected ions was 30 s. Normalized collision energy was 30 eV.
Data Analysis
MS/MS raw files were processed using the MASCOT engine (Matrix Science, London, UK; version 2.6), analyzed in Proteome Discoverer 2.2 (Thermo Fisher Scientific), and searched against the UniProt database. The search parameters included trypsin as the enzyme used to generate peptides, with a maximum of 2 missed cleavages permitted. A precursor mass tolerance of 10 ppm was specified, with 0.05 Da tolerance for MS2 fragments. In addition to the TMT labels, carbamidomethyl (C) was set as a fixed modification. Variable modifications were oxidation (M) and acetyl (protein N-term). A peptide and protein false discovery rate (FDR) of 1% was enforced using a reverse database search strategy. Proteins with fold change > 1.2 and p-value (Student's t test) < 0.05 were considered to be differentially expressed proteins.
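As an illustration only, the differential-expression call described above (fold change > 1.2 in either direction and Student's t-test p < 0.05, n = 6 per group) can be sketched as follows; the abundance values are illustrative assumptions, not the study's data.

```python
# Sketch of the DEP selection rule described above.
import numpy as np
from scipy import stats

def call_dep(parasitized, control, fc_cutoff=1.2, p_cutoff=0.05):
    """Return ('up'/'down'/None, fold_change, p_value) for one protein."""
    fold = np.mean(parasitized) / np.mean(control)
    _, p = stats.ttest_ind(parasitized, control)
    if p < p_cutoff and fold > fc_cutoff:
        return "up", fold, p
    if p < p_cutoff and fold < 1.0 / fc_cutoff:
        return "down", fold, p
    return None, fold, p

# One hypothetical protein measured in six parasitized and six control crabs:
parasitized = [0.61, 0.70, 0.58, 0.66, 0.63, 0.68]
control     = [1.02, 0.95, 1.10, 0.99, 1.05, 0.97]
print(call_dep(parasitized, control))     # expected: a downregulated call
```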
Enrichment of Pathways Analysis
All protein sequences were aligned to a protein database assembled (Trinity, V2.4.0) and predicted (TransDecoder, V3.0.1) from the coding sequences of the transcriptome; only the top 10 hits with an E-value ≤ 0.001 were kept. The GO term of the sequence with the top bit-score, as determined by Blast2GO, was selected. Then, the annotation of proteins with GO terms was completed by the Blast2GO Command Line. After the basic annotation, InterProScan/GO (http://www.ebi.ac.uk/interpro/ (accessed on 5 October 2021)) was used to search the EBI database by motif and then add the functional information of the motif to proteins to improve annotation. Fisher's exact test was used to assess the enrichment of GO terms and KEGG pathways by comparing the numbers of differentially expressed proteins and of total proteins associated with each term. Correction for multiple hypothesis testing was carried out using standard false discovery rate (FDR) control methods.
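For illustration, the enrichment test described above can be sketched as follows: for each term, a 2×2 table (DEPs versus non-DEPs, annotated versus not annotated with the term) is tested with Fisher's exact test, and the resulting p-values are adjusted with a Benjamini-Hochberg FDR correction. The counts used below are illustrative assumptions, not values from the study.

```python
# Sketch of per-term Fisher's exact enrichment with BH-FDR correction.
from scipy.stats import fisher_exact

def enrich(term_counts, n_dep, n_background):
    """term_counts: {term: (DEPs annotated with term, all proteins annotated with term)}."""
    raw = {}
    for term, (dep_hit, bg_hit) in term_counts.items():
        table = [[dep_hit, n_dep - dep_hit],
                 [bg_hit - dep_hit, (n_background - n_dep) - (bg_hit - dep_hit)]]
        _, p = fisher_exact(table, alternative="greater")   # enrichment in DEPs
        raw[term] = p
    # Benjamini-Hochberg step-up correction
    ordered = sorted(raw.items(), key=lambda kv: kv[1])
    m, running_min, fdr = len(ordered), 1.0, {}
    for rank, (term, p) in reversed(list(enumerate(ordered, start=1))):
        running_min = min(running_min, p * m / rank)
        fdr[term] = running_min
    return raw, fdr

counts = {"lysosome": (23, 60), "GTP binding": (15, 55), "ribosome": (12, 90)}
print(enrich(counts, n_dep=598, n_background=2046))
```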
Enrichment of Protein Domain Analysis
InterProScan (http://www.ebi.ac.uk/interpro (accessed on 1 November 2021)) was used to predict the protein families, domains, and special sites based on the protein sequence alignment method. The database was searched and a two-tailed Fisher's exact test was employed to test the enrichment of the differentially expressed proteins against all identified proteins. A correction for multiple hypothesis testing was performed using the FDR, and domains with p-values < 0.05 were considered significantly enriched.
Identification and Quantitative Protein Profiling
The parasites were identified in six Chinese mitten crabs based on morphological characters, such as size, profile, parasitic location, color, and capture site. The proteomic analysis was successfully performed on hepatopancreas tissue from six parasitized and six non-parasitized male crabs based on the TMT method, with simultaneous identification and quantification. In total, 10,616 unique peptides and 2143 proteins were identified, of which 2046 proteins showed quantitative information. Using a 1.2-fold increase or decrease as a benchmark, 598 proteins showed differential expression between parasitized and non-parasitized crabs. Of these, 352 were upregulated and 246 were downregulated in response to P. gregaria infection. Among the upregulated proteins, several kinds of proteins related to the innate immune system were identified, such as cathepsin F and GTPase KRas, while immune-system-related proteins in the downregulated set included serpin B, autophagy-related 4 (ATG4), ATG5, ATG9, scavenger receptor class B, cathepsin D and L, and lectin (Table 2).
GO Enrichment
The analysis of GO terms that were significantly enriched in differentially expressed proteins showed a total of 104 GO terms (Table S2). Cellular component terms were especially enriched for the lysosomal lumen, plasma membrane, smooth endoplasmic reticulum, and azurophil granule membrane. Molecular function GO terms were enriched for carbohydrate binding, guanosine triphosphate (GTP) binding, GTPase activity, and others. Biological process terms were mainly enriched for antigen processing and presentation, hemocyte migration, nuclear-transcribed mRNA catabolic process, nonsense-mediated decay, and regulation of filopodium assembly (Figure 2). Of the enriched GO terms, 31 were related to immune response, including autophagosome assembly, immune effector process, astrocyte activation involved in immune response, innate immune response, and adaptive immune response (Table 3).

Table 3. Enrichment of GO terms related to innate immune response in the Chinese mitten crab infected with P. gregaria.
KEGG Analysis with DEPs
In the KEGG pathway analysis, a total of 13 pathways were significantly (p < 0.05) enriched in the set of differentially expressed proteins (Figure 3, Table 4). These pathways included lysosome, protein processing in endoplasmic reticulum, extracellular matrix (ECM)-receptor interaction, other glycan degradation, proteoglycans in cancer, glycosaminoglycan degradation, focal adhesion, antigen processing and presentation, autophagy-animal, endocytosis, nucleotide excision repair, and ribosome.
Enrichment of Protein Domain
Proteins were clustered (Figure S3) and the protein domains were further analyzed for enrichment in order to better understand the functional aspects of the DEPs. We found significant enrichment of 32 domain categories (Table 5), including the small GTP-binding protein domain, glycoside hydrolase superfamily, glycosyl hydrolase, ferritin-like, and ferritin.
Subcellular Location of DEPs
Obtaining information about the subcellular location is an important and helpful step towards understanding the mechanism and function of proteins. In the present study, the cytosol was the most enriched location, with 33.9% of DEPs (n = 203) located there. The second most enriched subcellular location was the mitochondria (n = 110, 18.4% of all DEPs). The least enriched location was the peroxisome, with two DEPs (Figure 4).
Discussion
The Chinese mitten crab, as an economically important crab, has become increasingly popular in the freshwater aquaculture industry; it is now widely cultured in the provinces of Jiangsu, Anhui, Hubei, and Liaoning in China [41]. Several types of hepatopancreatic diseases caused by bacteria, viruses, and parasites were reported in recent years, but little attention has been devoted to the innate immune response to macroparasites in this crab species. Here, we focus on the importance of enriched GO terms, KEGG pathways, and functional protein domains to reveal substantial insights into the innate immune response of this host-parasite system. Importantly, we found that components including GTPase KRas, complement 1q-binding protein (C1QBP), serpin, and ATG5, along with the lysosomal lumen and autophagosome assembly GO terms and the antigen processing and presentation and autophagy-animal pathways, were implicated in this response.
Autophagy is an intracellular degradation system that plays an important role in maintaining cellular homeostasis, and is evolutionarily conserved from yeast to mammals [42]. This system is activated in response to environmental signals, from starvation, disease, and pathogen infection [43]. The targets for degradation are not only proteins, but also organelles and other cellular components. In recent years, the relationship between autophagy and disease has been explored in infections, neurodegenerative diseases, and cancers [44]. In the present study, Ras-associated binding (Rab), ATG, cathepsin, urinary gonadotropin peptide (UGP), protein kinase D (PKD), and Psen1 were differentially expressed across parasitized and non-parasitized samples, and all have GO terms or KEGG pathways associated with autophagy. During autophagy, the autophagosomes surround the cytosolic components and then fuse with a vacuole, leading to the degradation of the target by lysosomal hydrolases. Autophagy-related (ATG) proteins play a crucial role in the regulation of this process [45]. Prior research showed that the gathering of ATG proteins to form the pre-autophagosomal structure (PAS), where autophagosomes are normally generated, is the first step of autophagy [46,47]. In mammals, the overexpression of ATG4B was found to make the LC3-PE complex quickly rupture and form the stable complex of LC3, which shows that it is a suppressive effector in autophagy [45]. In grouper cells, the transcriptional level of ATG5 was upregulated after infection with Singapore grouper iridovirus (SGIV) and red-spotted grouper nervous necrosis virus (RGNNV). However, the overexpression of ATG5 simultaneously decreased the expression of interferon and negatively regulated the expression of pro-inflammatory factors [48]. ATG9 has been described as a positive regulator that modulates the number of autophagosomes [49]. In a previous study on the Chinese mitten crab, the transcriptional levels of Atg12, Atg13, and Atg16L were upregulated in crabs with hepatopancreatic necrosis disease [41]. These results indicate that the upregulation of ATG family members has a positive effect on the immune response in different species. On the contrary, ATG4 (0.78-fold), ATG5 (0.54-fold), and ATG9 (0.66-fold) proteins were all significantly downregulated in crabs with P. gregaria infection in the present study. This suggests that autophagy is suppressed in the Chinese mitten crab infected by P. gregaria, and that the expression of ATG proteins is involved in this biological process as part of the host innate immune response. Taken together, this indicates that ATG proteins may play a suppressive role in autophagosome generation during the autophagy response to P. gregaria infection in the Chinese mitten crab, through diverse mechanisms. These mechanisms can differ across species and pathogens, with the ATG family exerting differential regulation of autophagosome formation and, therefore, of the autophagy process. One link that is likely very important to this process is the recognition of exogenous ligands. The scavenger receptor (SR), one of the sub-families of pattern recognition receptors (PRRs), recognizes modified lipoproteins and danger-associated molecular patterns (DAMPs) [50]. One study showed that an increase in the expression of SRs induced by Vibrio parahaemolyticus, lipopolysaccharide (LPS), and white spot syndrome virus (WSSV) efficiently enhanced host phagocytosis to clear bacteria [51].
Here, we found that SR protein was significantly decreased in response to P. gregaria infection in the Chinese mitten crab, likely reflecting its role in phagocytosis as well as the innate immune system. Interestingly, this pattern is the opposite of what was previously found in a study on Spiroplasma eriocheiris infection in the Chinese mitten crab [52]. This advances our understanding of P. gregaria, whose infection probably silences the innate immune system through inhibition of cell recognition and autophagosome generation during the autophagy process, depending on several modulators (i.e., ATG and SR) in the Chinese mitten crab.
Lysosomes are acidic and hydrolytic organelles responsible for degrading targets generated during endocytosis, phagocytosis, and autophagy [53]. Lysosomes receive or degrade their substrates via various pathways, including endocytosis, phagocytosis, autophagy, lysosomal proteins, soluble lysosomal hydrolases, and others [54]. Lysosome mobilization is a crucial process for phagocyte migration and bactericidal function, although the molecular mechanisms linking these processes remain unclear. Moreover, lysosomes and related organelles travel over long distances along microtubules within the cell cytoplasm during phagocytosis [55]. For lysosomes and endosomes, the active site is mostly a cysteine thiol or an aspartic acid, which functions as the key catalytic site. Some serine proteases, such as cathepsin, granzymes, and a thymus-specific serine protease (TSSP), play important roles in the immune system [56]. In this study, 23 DEPs had lysosome-related GO terms and/or KEGG pathways. These included cathepsin, lysosomal alpha-glucosidase, hexosaminidase, CD63 antigen, and β-mannosidase. The best-known lysosomal proteases, the cathepsins, are involved in a number of important biological processes, such as intracellular protein turnover, immune response, hormone activation, remodeling of extracellular matrix (ECM), and apoptosis [57,58]. In invertebrates, the MAPK and Imd signaling pathways are the primary components of the innate immune system, and the MAPK pathway has been shown to mediate cathepsin expression induced in all types of cells [59,60]. Moreover, JNK, ERK, p38, and Relish are regulators in these signaling pathways [61,62]. In the Chinese mitten crab, previous research found that the expression of all of these key factors was decreased when cathepsin D was silenced [63]. Furthermore, silencing cathepsin D expression using RNAi caused an obvious decrease in crab immunity and resulted in a significant increase in the mortality of crabs [63]. In mice, the null expression of cathepsin D led to death shortly after birth [64]. Similarly, the expression of cathepsin L was found to distinctly increase following V. anguillarum infection in the Chinese mitten crab [65]. In addition, the over-expression of cathepsin L was similarly induced in black tiger shrimp (Penaeus monodon) [66] and Pacific white shrimp (Litopenaeus vannamei) [67] by lipopolysaccharide and WSSV infection, respectively. On the other hand, in this study, we found that both cathepsin D (0.68-fold) and L (0.65-fold) were dramatically inhibited in parasitized crabs, which suggests that both cathepsin D and L perform a crucial role in innate immune function in the Chinese mitten crab. Another cathepsin family member, cathepsin F, was detected as an upregulated (1.40-fold) motif in our study. This protein likely has a similar role to cathepsin S, which cleaves the invariant chain (Ii) to the class II-associated invariant chain peptide (CLIP) during major histocompatibility complex II (MHC II) antigen processing and presentation [68]. In crabs, however, further analysis of this protein's function in lysosome-related biology is needed.
Cathepsin activities are controlled by endogenous protein inhibitors, such as cystatins, stefins, tyropins, and serpins, which tightly bind their target enzymes to prevent substrate hydrolysis [69]. Within the proteinase inhibitor superfamily, serpins are the largest and most diverse family of protease inhibitors [70], and play important roles in many immune processes, such as blood coagulation, complement activation, melanization, and phagocytosis [71]. In recent years, research on serpins in invertebrates has indicated that serpins regulate prophenoloxidase (proPO) activity in Drosophila [72], Penaeus monodon [73], and the Chinese mitten crab [74]. In invertebrates, serpins appear to be unique components of the innate immune response, and are regulated by prophenoloxidase activating enzymes (PPAEs), proteinase inhibitors, lipopolysaccharide, LGBP, and hemolin [75]. A prior study of Chinese mitten crabs infected with Vibrio anguillarum and Pichia pastoris showed that serpins were upregulated, which could be related to serine proteinase involvement in wound healing, proPO activation, phagocytosis, and other defense responses after bacterial and fungal challenges. In Hyphantria cunea, a recombinant serpin and the serine protease inhibitor aprotinin were used to investigate the relationship between serpin and PO activity; the results showed that aprotinin has stronger inhibitory activity than the recombinant protein at the same concentration, and that increased serpin expression inhibited PO activity through competition with proPO for the target protease (PPAE) [76]. In the present study, we found a different expression pattern, in which serpin expression was significantly decreased (0.49-fold) after P. gregaria infection, which could affect the proPO system of the innate immune response in the Chinese mitten crab. It should be noted that the decrease in serpin expression does not indicate a positive effect on the proPO arm of the innate immune system; rather, this abnormal alteration points to an important role in the innate immune response to P. gregaria infection. In other words, it is possible that the differential expression of serpin mediates the autophagy process, together with ATG proteins, through the MAPK and/or Imd signaling pathways, and is also involved in the lysosome and autophagy-animal pathways, serving a direct role in the innate immune response to P. gregaria infection in the Chinese mitten crab.
Biological processes are inarguably complicated and polytropic, involving numerous molecular factors. Interestingly, in the present study, most DEPs related to the innate immune response were downregulated after P. gregaria infection, which suggests that P. gregaria may exert a suppressive effect on the Chinese mitten crab's innate immune system during infection. As a hypothesis, on the one hand, this consequence may result from the long-term parasitism [22] of P. gregaria and the newborn cyprids growing in the Chinese mitten crab. On the other hand, it may be due to the co-evolution of the host and parasite, which has resulted in reductions in resistance over time [77]. The parasite's lifestyle is telling: P. gregaria grows continuously for several months [27,30], from the development of its external parts to their withering, during which the adults release several broods of larvae [78]. The host likely finds it difficult to clear or kill the parasites on its own, which favors long-term parasitism and evasion of the crab's innate immune surveillance [22]. Here, we identified many proteins, domains, GO terms, and KEGG pathways that were significantly changed in response to P. gregaria infection in the Chinese mitten crab at the protein level, and screened for crucial components related to the innate immune system, such as ATG, cathepsin, serpins, and lysosome- and autophagy-related GO terms and KEGG pathways. Further analysis was performed to discuss the mechanisms by which ATG, cathepsin, and serpins respond to P. gregaria infection through the autophagy process, with lysosomal participation, in the innate immune response of the Chinese mitten crab. Other proteins, such as the Rab family [79], GTPase KRas [80], lectin [81], CD63 [82], and C1QBP [83], which have been shown to have immune functions in many organisms, were also identified and were significantly differentially expressed after P. gregaria infection. To better disentangle these complex signals, more research is needed on the immune response to macroparasites in crabs.
Conclusions
P. gregaria is a specialized crustacean parasite that has drawn increasing attention in recent years. To date, few studies have reported on the interactions between this parasite and its host, especially in the Chinese mitten crab. We therefore used the TMT method for the first time to investigate the innate immune response against P. gregaria infection in Chinese mitten crabs. In the present study, many DEPs were identified after P. gregaria infection, and protein domain, subcellular location, and GO enrichment analyses were used to characterize protein functions. Moreover, KEGG pathways were analyzed to explore the mechanisms responding to P. gregaria infection in the Chinese mitten crab. Finally, we identified DEPs such as ATGs, cathepsins, serpins, GTPase KRas, and lectin, which were mostly enriched in autophagosome assembly and innate immune response GO terms and in the lysosome and autophagy-animal pathways, countering the parasite infection through the autophagy process. The innate immune system of the Chinese mitten crab was silenced after long-term parasitism by P. gregaria. These results provide a novel understanding of the innate immune response against P. gregaria infection in crabs, as well as other crustaceans, and simultaneously provide a basis for studying the innate immune response of the Chinese mitten crab and preventing parasite infection in the aquaculture industry.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/fishes6040057/s1, Figure S1: The quality control by SDS-PAGE. Figure S2: The molecular weight distribution for identified proteins. Figure S3: The heatmap image analysis between parasitized and non-parasitized crabs. Table S1: The concentration of proteins detected in Chinese mitten crab hepatopancreas. Table S2: Significant enrichment of GO terms (p < 0.05) in Chinese mitten crab infected with P. gregaria.
Antioxidant and Stress Resistance Properties of Flavonoids from Chinese Sea Buckthorn Leaves from the Qinghai–Tibet Plateau
The unique ecological environment of the Qinghai–Tibetan Plateau has endowed Chinese sea buckthorn leaves with rich bioactivities. In this study, we investigated the bioactivity and stress resistance mechanisms of flavonoids derived from Chinese sea buckthorn leaves (FCL) native to the Qinghai–Tibet Plateau. Our analysis identified a total of 57 flavonoids, mainly flavonol glycosides, from FCL, of which 6 were novel flavonoids. Isorhamnetin glycosides, quercetin glycosides and kaempferol glycosides were the three most dominant classes of compounds in FCL. In particular, isorhamnetin-3-O-glucoside-7-O-rhamnoside emerged as the most abundant compound. Our results showed that FCL possesses potent antioxidant properties, as evidenced by its ability to effectively scavenge DPPH free radicals and demonstrate ferric reducing antioxidant power (FRAP) and oxygen radical absorbance capacity (ORAC) levels comparable to Trolox, a well-known antioxidant standard. Furthermore, FCL showed remarkable efficacy in reducing reactive oxygen species (ROS) levels and malondialdehyde (MDA) levels while enhancing the activities of key antioxidant enzymes, namely superoxide dismutase (SOD) and catalase (CAT), in Caenorhabditis elegans, a widely used model organism. Mechanistically, we elucidated that FCL exerts its stress resistance effects by modulating the transcription factors DAF-16 and HSF-1 within the insulin/insulin-like growth factor-1 signaling pathway (IIS). Activation of these transcription factors orchestrates the expression of downstream target genes including sod-3, ctl-1, hsp16.2, and hsp12.6, thus enhancing the organism’s ability to cope with stressors. Overall, our study highlights the rich reservoir of flavonoids in Chinese sea buckthorn leaves as promising candidates for natural medicines, due to their robust antioxidant properties and ability to enhance stress resistance.
Introduction
Chinese sea buckthorn (Hippophae rhamnoides subsp. sinensis Rousi) is an important subspecies of Hippophae rhamnoides L., which accounts for about 85% of all sea buckthorn in China. Chinese sea buckthorn, including wild forests and artificial forests, is a pioneering species for windbreak and sand fixation and has good economic value and ecological efficiency [1,2]. It is mainly distributed on the Loess Plateau, Inner Mongolia Plateau, and Qinghai-Tibet Plateau of China [2]. The Tibetan Plateau, which serves as the primary source of sea buckthorn in China, provides unique conditions for the secondary metabolism of sea buckthorn. These conditions result from its high altitude, low latitude, and intense ultraviolet radiation environment [1,3].
Sea buckthorn leaf extract exhibits potent antioxidant activity. Research indicates that the ethyl acetate extract of sea buckthorn leaves effectively scavenges superoxide anions (O2−•), showing a significant dose-response relationship [10]. Similarly, the methanol extract of sea buckthorn leaves demonstrates effective scavenging of DPPH free radicals, with an activity reaching 54.17 mg TE•g−1 [11]. Moreover, Chinese sea buckthorn leaf extract not only exhibits high cellular antioxidant capacity but also possesses anti-HepG2 cell proliferation properties [5]. Furthermore, it reduces oxidative stress in diabetic kidneys by decreasing the accumulation of advanced glycation end products (AGE), thereby ameliorating kidney damage [12]. It also shows cytoprotective and antioxidant properties against oxidative stress in mouse macrophages [13].
Despite previous characterizations of the flavonoid composition in Chinese sea buckthorn leaves, these studies predominantly used traditional solvent extraction and ultrasonic-assisted extraction methods, often resulting in incomplete identification of flavonoid types. Additionally, investigations into the antioxidant activity of flavonoids from Chinese sea buckthorn leaves sourced from the Tibetan Plateau, together with their anti-stress effects and underlying molecular mechanisms, remain largely unexplored. Therefore, this study aims to address these gaps by preparing a flavonoid extract (FCL) from Chinese sea buckthorn leaves using ultra-high pressure-assisted technology and AB-8 macroporous resin purification methods. Subsequently, the flavonoids in FCL will be systematically characterized using UPLC-ESI-QTOF-MS/MS and UPLC-PAD techniques. Moreover, the antioxidant activity of FCL will be evaluated to elucidate its anti-stress effects and mechanisms in a C. elegans model.
Materials
Chinese sea buckthorn leaves were harvested from 10 different sampling sites in Lexiu Town, Hezuo City, Gannan Tibetan Autonomous Prefecture, Gansu Province, China in September 2021 (longitude 102.92′ E, latitude 34.89′ N, about 3028 m above sea level); the samples were collected by the S-shape sampling method, mixed, and then sampled according to the quartering method. After natural drying (28 ± 2 °C, RH 38 ± 2%), the leaves were crushed with a high-speed multifunctional pulverizer (H2489, Hebei Zhizhong Machinery Technology Co., Ltd., Xingtai, China) and sieved with a 40-mesh sieve, after which the sieved powder was treated with petroleum ether by Soxhlet extraction for 2 h. The sea buckthorn leaf powder was then collected and placed at room temperature to evaporate the petroleum ether to dryness, packed in sealed packages, and refrigerated for further study.
We purchased Escherichia coli OP50 and wild-type N2 Caenorhabditis elegans from Fujian Shangyuan Biological Science & Technology Co., Ltd. (Fuzhou, China). TJ375, TJ356, CF1553, CL2166, and LD1 were purchased from the official website of the Caenorhabditis Genetics Center (CGC). Unless otherwise specified, C. elegans were maintained on agar plates at 20 °C using E. coli OP50 as the food source. Adult C. elegans were synchronized using a sodium perchlorate bleaching procedure to ensure that the growth period of each C. elegans remained constant during the experiment [14].
Preparation of Leaf Flavonoids
Based on our previous studies, the defatted sea buckthorn leaf powder was extracted with 50% ethanol by ultra-high pressure-assisted extraction under the conditions of pressure 143 MPa, temperature 44 °C, pressure hold time 3 min, and liquid-solid ratio 41:1. AB-8 macroporous resin was used to purify the crude extract, and the eluate was concentrated and lyophilized to obtain the purified sea buckthorn leaf flavonoids (FCL).
Determination of Antioxidant Activity In Vitro
The DPPH free radical scavenging assay followed the method proposed by Zhang et al. (2010) [15]. FCL and Trolox reference standards were prepared in 50% ethanol to create solutions ranging from 0 to 0.5 mg•mL−1. A 10 µL quantity of sample solution and 190 µL of DPPH (50 mg•L−1) solution were added to ELISA plates. After reacting in the dark for 30 min, the absorbance was measured at a wavelength of 517 nm. Results were expressed as DPPH free radical scavenging (%), with Trolox reference standards as positive controls.
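The scavenging percentage and the IC50 values reported later can be computed as in the minimal sketch below; the blank-correction formula (scavenging relative to a sample-free DPPH control) and all absorbance readings are assumptions for illustration, since the exact formula is not spelled out above.

```python
import numpy as np

# Hypothetical 517 nm readings; the correction scheme
# scavenging % = (A_control - A_sample) / A_control * 100
# is a common convention and an assumption here.
conc = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])            # mg/mL
a_sample = np.array([0.80, 0.55, 0.38, 0.26, 0.18, 0.12])  # FCL + DPPH
a_control = 0.80                                           # DPPH + solvent

scavenging = (a_control - a_sample) / a_control * 100

# IC50: concentration at which scavenging crosses 50%, by interpolation.
ic50 = np.interp(50.0, scavenging, conc)
print(f"IC50 ≈ {ic50:.3f} mg/mL")
```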
The ferric reducing antioxidant power (FRAP) was determined using a kit. FCL was prepared as a 0.01-0.05 mg•mL−1 solution with 50% ethanol. Then, 950 µL of FRAP working solution and 50 µL of sample solution were mixed completely, the reaction was carried out for 20 min, and the absorbance was measured at 593 nm. FRAP values were expressed as mg Trolox (TE)•mL−1.
The oxygen radical absorbance capacity (ORAC) was determined based on the method proposed by Zhang et al. (2010) [15]. Briefly, 20 µL of Trolox standard solution, 20 µL of phosphate buffer (blank), and 20 µL of sample solution were added to ELISA plates and incubated in a fluorescence microplate reader at 37 °C for 10 min. Then, 200 µL of 6.0 µmol•L−1 fluorescein sodium working solution was added to the plates, and after shaking incubation for 20 min, 20 µL of 119 mmol•L−1 AAPH solution was added to each well. The fluorescence values of each well were measured every 5 min for a total of 31 measurements. The net area under the curve (AUC) was determined by subtracting the AUC of the blank from the AUC of the sample/Trolox fluorescence curve. The ORAC values of the samples were computed from a standard curve derived from the net AUC of Trolox; results were expressed in mmol TE•g−1 DW.
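The net-AUC arithmetic described above can be sketched as follows; the decay curves below are synthetic stand-ins, and only the normalization and subtraction steps mirror the protocol.

```python
import numpy as np

# 31 readings taken every 5 min, as described above (t = 0..150 min).
t = np.arange(0, 155, 5)

def net_auc(fluorescence, blank):
    """AUC of a fluorescence curve normalized to its first reading,
    minus the corresponding blank AUC, as in the ORAC protocol."""
    auc = lambda f: np.trapz(f / f[0], t)
    return auc(fluorescence) - auc(blank)

# Synthetic decay curves for illustration only.
f_blank = np.exp(-0.08 * t)   # unprotected fluorescein decays quickly
f_fcl = np.exp(-0.03 * t)     # antioxidant slows the AAPH-driven decay

# A linear standard curve of net AUC vs. Trolox concentration would then
# convert this value into mmol Trolox equivalents per gram.
print(f"net AUC = {net_auc(f_fcl, f_blank):.1f}")
```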
The method proposed by Adom and Liu (2005) was used as the basis for the peroxide radical scavenging capacity (PSC) determination [16]. Briefly, 100 µL of sample solution, 100 µL of Trolox standard solution, and 100 µL of phosphate buffer (blank) were added to the ELISA plate, and then 100 µL of DCFH-DA solution and 80 µL of 200 mmol•L−1 AAPH were added to each well, respectively. Fluorescence values were measured every 2 min at 37 °C and recorded for 40 min. A standard curve was constructed from the area under the Trolox kinetic curve to calculate the PSC value of the sample. The result is expressed as mmol TE•g−1 DW.
Determination of FCL Stress Resistance

Heat and Oxidative Stress Assays

Synchronized L4-stage nematodes were selected and placed on NGM plates containing three different mass concentrations of FCL (50, 200, and 400 µg•mL−1 in E. coli OP50 bacterial solution, respectively), all of which contained 150 µM of 5-fluorouracil to inhibit the reproduction of nematode offspring. M9 buffer was used in place of the FCL solution in the control group. Each group was set up with 3 NGM plates, and 40 nematodes were transferred to each plate and incubated at 20 °C. In the heat stress experiment, nematodes were treated with FCL for 96 h, the culture temperature was raised from 20 °C to 35 °C, and the culture was then continued at constant temperature. The numbers of surviving nematodes in each group were monitored and counted at 2 h intervals until all nematodes died, for a total of three independent experiments. In the oxidative stress test, the nematodes were fed with FCL for 96 h and placed on NGM plates with 10 mM H2O2. The nematodes were observed and counted at 1 h intervals until all nematodes had died, for a total of three independent experiments [17].
Determination of Reactive Oxygen Species (ROS) Levels
The nematodes were treated with FCL for 96 h, and then they were exposed to heat stress for 6 h at 35 °C to cause oxidative damage. In contrast, the control group received neither heat shock nor FCL treatment. After three rounds of washing in M9 buffer, the nematodes on NGM plates were moved to an opaque 96-well plate filled with 50 µL of 100 µmol•L−1 DCFH-DA and 50 µL of M9 buffer solution. Following two hours of incubation at 20 °C, the nematodes were removed from the plate and placed on 2% agarose slides. Next, fluorescence intensity was measured with a fluorescence microscope [18]. Images under bright field and fluorescent field were recorded separately for each nematode and superimposed using Image Pro Plus 6.0 software.
Determination of Malondialdehyde (MDA) Content and Superoxide Dismutase (SOD) and Catalase (CAT) Activities
The nematodes were treated with FCL at 20 °C for 96 h and then exposed for 6 h to heat stress at 35 °C to induce oxidative damage. After three rounds of washing in M9 buffer, the nematodes on NGM plates were moved to centrifuge tubes, with approximately 800 nematodes per group. The nematodes were centrifuged at 5000 rpm for 1 min, the supernatant was discarded, and the pellet was washed three times with M9 buffer solution. The nematodes were then crushed with a pestle provided in the kit, and the precipitate was adjusted to 1 mL with M9 buffer. The homogenate was centrifuged at 5000 rpm at 4 °C for 10 min, and the supernatant was set aside for testing. According to the instructions of the reagent kit (Soraibao Technology Co., Ltd., Beijing, China), MDA content and CAT and SOD activities were determined in the nematode homogenate, and the enzyme activities were expressed as U•mg pro−1.
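Expressing activities as U•mg pro−1 is simply a normalization of raw kit units to the protein content of each homogenate; a trivial sketch, with hypothetical numbers:

```python
# Normalize raw enzyme units to the protein content of each homogenate;
# both readouts below are hypothetical.
def specific_activity(units, protein_mg):
    """Specific activity in U per mg protein."""
    return units / protein_mg

sod_units, protein_mg = 42.0, 1.6   # kit readout and protein assay result
print(f"SOD: {specific_activity(sod_units, protein_mg):.1f} U/mg protein")
```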
RNA Extraction and Determination of Gene Expression
Approximately 2000 synchronized nematodes were transferred to plates with or without 200 µg•mL−1 FCL, incubated at 20 °C for 48 h, and then heat stressed at 35 °C for 6 h to induce oxidative damage. Total RNA was extracted using TRIzol reagent and reverse transcribed into cDNA using the UnionScript First-strand cDNA Synthesis Mix for qPCR kit (Genesand Biotech Co., Ltd., Jinan, China). The transcript levels of the daf-2, akt-2, gcs-1, daf-16, gst-4, sod-3, hsf-1, hsp12.6, ctl-1, hsp16.2, and skn-1 genes were detected using the GS AntiQ qPCR SYBR Green Fast Mix real-time fluorescence quantitative PCR kit (Genesand Biotech Co., Ltd.). The 2−ΔΔCT method was used to assess relative gene expression levels, normalized to the expression of the β-actin gene.
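A minimal sketch of the 2−ΔΔCT calculation, with β-actin as the reference gene as above; all Ct values below are hypothetical.

```python
# 2^(-ΔΔCt) relative expression; β-actin is the reference gene, as above.
# All Ct values below are hypothetical.
def relative_expression(ct_gene, ct_actin, ct_gene_ctrl, ct_actin_ctrl):
    """Fold change of a target gene in treated vs. control nematodes."""
    d_ct_treated = ct_gene - ct_actin            # ΔCt, FCL-treated group
    d_ct_control = ct_gene_ctrl - ct_actin_ctrl  # ΔCt, control group
    return 2 ** -(d_ct_treated - d_ct_control)   # 2^(-ΔΔCt)

fold = relative_expression(22.1, 15.0, 23.4, 15.2)  # e.g. sod-3
print(f"sod-3 relative expression ≈ {fold:.2f}-fold")
```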
Detection of Nuclear Translocation of DAF-16::GFP and SKN-1::GFP

LD1 (SKN-1::GFP) and TJ356 (DAF-16::GFP), synchronized transgenic strains at the L4 stage, were placed on NGM plates and treated at 20 °C for 48 h with or without 200 µg•mL−1 FCL; each group of nematodes was divided into two parts: one part was treated with heat stress for 6 h at 35 °C, and the other was untreated. Nematodes were washed with M9 buffer solution and transferred to 2% agarose slides for observation under a fluorescence microscope. DAF-16 localization in each nematode was classified into 3 types (nucleus, cytoplasm, and intermediate), taking into account the primary location of DAF-16::GFP [19], with a count of at least 60 nematodes per group.
Data Statistics
All tests were repeated three times, and data are presented as mean ± standard deviation. All data were analyzed by analysis of variance (ANOVA) with Duncan's test for significance of differences using SPSS 20.0 statistical software. ChemDraw 20.0 was used to draw the chemical structural formulas of the flavonoids, and Origin 9.0 was used to draw the graphs. To determine the lifespan of C. elegans, Kaplan-Meier survival analysis was performed and plotted using GraphPad Prism 9.5. The fluorescence images of C. elegans were processed and analyzed using Image Pro Plus 6.0 software, and the relative fluorescence values were statistically analyzed, with at least 15 images in each group.
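For readers working outside GraphPad Prism, the same Kaplan-Meier comparison can be sketched with the open-source lifelines package; the death times below are invented, and the log-rank test stands in for Prism's built-in curve comparison.

```python
# Kaplan-Meier survival curves as described above, sketched with the
# lifelines package (pip install lifelines); all death times are invented.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

control_h = [4, 6, 6, 8, 8, 10, 10, 12]   # hours to death under stress
fcl_h = [6, 8, 10, 10, 12, 12, 14, 16]    # FCL-treated group

kmf = KaplanMeierFitter()
kmf.fit(control_h, label="control")
ax = kmf.plot_survival_function()
kmf.fit(fcl_h, label="FCL 200 µg/mL")
kmf.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two survival curves.
result = logrank_test(control_h, fcl_h)
print(f"log-rank p = {result.p_value:.3f}")
```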
Identification of Flavonoids in FCL
A total of 60 flavonoids were found in FCL (Supplementary Figure S1), of which 57 were finally identified. Although characteristic fragments of flavonoid aglycones were present in the MS/MS fragments of compounds 37, 54, and 57, these were not identified due to insufficient structural information (Table 1). According to the MS/MS fragment information of the flavonoid aglycone reference standards (Supplementary Figure S2), it can be inferred that the major flavonoid components of FCL are flavonol glycosides derived from isorhamnetin, quercetin, kaempferol, and myricetin. In MS/MS spectrometry, glycosidic bond cleavage is a characteristic cleavage mode of flavonoid glycosides [9,20]. As an example, compound 9 had an [M-H]− ion at m/z 785.2126, and its molecular formula was deduced to be C34H42O21. Four major fragment ions were observed at m/z 623.16, 477.10, 315.05, and 300.03. The fragment ion at m/z 623.16 arose from the loss of glucose, and a further loss of rhamnose (146 Da) produced the ion at m/z 477.10. The fragment ion at m/z 315.05 was generated by the loss of glucose from m/z 477.10, which was further cleaved to produce the same characteristic ion peaks at 300.02 and 151.01 as the isorhamnetin reference standard. Combining ChemSpider and MassBank information with the MS/MS spectral data, compound 9 was preliminarily identified as isorhamnetin-3-O-rutinoside-7-O-glucoside, and its possible cleavage pathway is shown in Supplementary Figure S3. Based on the above fragmentation pattern, a total of 45 flavonols were identified from FCL (Figure 1). The flavonoid components of FCL were mainly flavonol glycosides formed by the combination of flavonoid aglycones and glycosyl groups. The glycoconjugates mainly include glucose, rhamnose, rutinose, sophorose, arabinose, and galactose. According to the glycosylation pattern, the flavonoid components of FCL are mainly O-glycosides, with the glycosidic linkages mainly formed at the 3-position of the C-ring and the 7-position of the A-ring (Figure 1). Among the 57 identified flavonol glycosides, there were 18 types of isorhamnetin glycosides, 16 types of quercetin glycosides, and 14 types of kaempferol glycosides, among which 6 compounds were new flavonoid compounds (43, 45, 46, 47, 49, and 56), and 7 compounds (compounds 16, 24, 25, 32, 39, 48, and 55) were detected for the first time in sea buckthorn leaves. The 3 unidentified flavonoids (compounds 37, 54, and 57) were found for the first time in sea buckthorn.
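The neutral-loss reasoning for compound 9 can be checked programmatically; the residue masses are standard monosaccharide values, and the fragment list is taken from the text above.

```python
# Match successive fragment-mass differences for compound 9 against
# monosaccharide residue masses (standard values; fragments from the text).
RESIDUES = {162.0528: "hexose (glucose)", 146.0579: "deoxyhexose (rhamnose)"}

fragments = [785.2126, 623.16, 477.10, 315.05]  # [M-H]- and MS/MS ions

for parent, child in zip(fragments, fragments[1:]):
    loss = parent - child
    name = next((n for mass, n in RESIDUES.items()
                 if abs(loss - mass) < 0.02), "unassigned")
    print(f"{parent:.2f} -> {child:.2f}: loss {loss:.2f} Da ({name})")
```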
Antioxidant Activity of FCL In Vitro
A dose-dependent DPPH radical scavenging ability was observed for both FCL and Trolox (Figure 2A), with IC50 values of 0.123 mg•mL−1 and 0.157 mg•mL−1, respectively, suggesting that FCL scavenges DPPH radicals 1.28 times more effectively than Trolox. The FRAP of FCL increased with increasing concentration (Figure 2B), and in the concentration range of 0.01-0.05 mg•mL−1, the ferric reducing antioxidant power of FCL was equivalent to that of Trolox. Figure 2C,D plots the kinetic curves of the relative fluorescence intensity of the Trolox standard solution and FCL decaying and increasing with time. The ORAC value of FCL was 3.74 ± 0.22 mmol TE•g−1, which was 0.935 times that of Trolox. The fluorescence intensity decay trend of 12.5 µg•mL−1 FCL was similar to that of Trolox (Figure 2C). Trolox and FCL showed a clear dynamic trend of fluorescence increase, and the fluorescence increase trend of 25 µg•mL−1 FCL was similar to that of 6.25 µg•mL−1 Trolox (Figure 2D). Taken together, this indicated that FCL can effectively scavenge DPPH radicals and has ferric reducing antioxidant power and ORAC comparable to those of Trolox.
The DPPH radical scavenging and FRAP reactions are single-electron-transfer reactions, which mainly reflect the reducing ability toward high-valence ions [26]. The antioxidant reactions of ORAC and PSC follow the hydrogen-atom-transfer mechanism, which mainly reflects the ability of substrates and antioxidants to compete for peroxyl radicals [15]. Flavonoids can provide hydrogen atoms to free radicals, acting as good electron donors to inhibit the development of peroxide chain reactions [15,27]. Thus, we can infer that the strong antioxidant capacity of FCL is mainly due to its abundant flavonoid components.
Under oxidative stress treatment, the mean survival time of nematode groups fed with three different concentrations of FCL was significantly higher than that of the control group (Figure 3A). The mean survival time of nematodes in the 200 µg•mL−1 FCL group was the longest, 29.58% higher than that of the control group. The survival curves of the FCL-treated groups were generally shifted to the right compared with the control group (Figure 3B), suggesting that FCL could effectively enhance the tolerance to oxidative stress in nematodes. High temperature can lead to metabolic disorders and enzyme inactivation in the body, generating large amounts of ROS and causing oxidative stress [28]. Under the heat stress treatment, the mean survival time of nematodes in all three FCL groups with different concentrations was extremely significantly higher than that of the control group (p < 0.01) (Figure 3C). Among them, the 400 µg•mL−1 FCL-treated group showed the longest nematode survival time. Survival curves were generally shifted to the right in the FCL-treated groups compared with the control group (Figure 3D), indicating that FCL could effectively enhance the heat stress resistance of nematodes and showed strong antioxidant capacity. Since flavonoids such as quercetin and rutin can enhance the resistance of C. elegans to heat stress [29], and isorhamnetin and its derivatives have potential protective effects against oxidative stress in human RPE cells [30], the abundant isorhamnetin and quercetin derivatives in FCL may be mainly responsible for the enhanced resistance of C. elegans under stress conditions.
Effects of FCL Treatment on ROS, MDA Levels, and SOD and CAT Activities
The ROS level in the heat stress-treated model group increased sharply (Figure 4A,B), to 2.64 times that of the control group (without heat stress treatment), indicating that stress treatment could significantly increase the ROS level in C. elegans. Under heat stress, FCL treatment significantly reduced ROS levels: 50 µg•mL−1 FCL reduced ROS levels by 18.76%, and at the higher concentrations of 200 µg•mL−1 and 400 µg•mL−1, ROS levels in C. elegans were reduced by 49.62% and 44.08%, respectively. This indicates that FCL treatment could effectively reduce ROS levels in C. elegans, with the high concentrations of FCL being particularly effective at ROS removal. Similarly, FCL treatment resulted in C. elegans with lower MDA levels (Figure 4C), which were reduced by 14.22%, 32.0%, and 37.07% in the 50 µg•mL−1, 200 µg•mL−1, and 400 µg•mL−1 treatment groups, respectively, compared with the control. Peroxidation and oxidative stress in the organism are directly caused by excessive ROS [31]. The MDA content, which arises from oxygen free-radical attack on cell membranes and the peroxidation of unsaturated fatty acids, indicates the level of lipid peroxidation in the body [30]. The findings of this study demonstrated that FCL significantly reduced the amount of ROS and lipid peroxidation in nematodes under acute stress. Consequently, we deduced that FCL improves the oxidative stress resistance of C. elegans by inhibiting the accumulation of excess ROS and the onset of lipid peroxidation.
The SOD activities of nematodes fed with different concentrations of FCL were significantly higher than the control (p < 0.01). The SOD activities of the 50 µg•mL−1, 200 µg•mL−1, and 400 µg•mL−1 FCL groups were 50.92%, 65.86%, and 72.80% higher than those of the control group, respectively (Figure 4D). In the same way, C. elegans treated with various FCL doses exhibited significantly increased CAT activity compared to the control group (p < 0.05); the CAT activities of the 200 µg•mL−1 and 400 µg•mL−1 FCL groups were 106.08% and 96.86% higher than the control group, respectively. This indicated that, under heat stress, FCL might considerably increase SOD and CAT activities and eliminate excess ROS, extending the survival of C. elegans.
The Molecular Mechanism of FCL Regulation of Stress Resistance in C. elegans
The insulin/insulin-like growth factor-1 signaling pathway (IIS) is a classical and conserved signaling pathway for aging regulation, which can regulate the activities of three transcription factors (DAF-16, SKN-1, and HSF-1), thereby inducing the expression of a series of genes related to stress response, homeostatic regulation, and metabolism [32]. To investigate whether the IIS pathway has a role in FCL-mediated stress resistance in nematodes, we detected the expression levels of genes and proteins related to the IIS pathway.
The 200 µg•mL−1 FCL treatment significantly decreased the transcript levels of daf-2 and akt-2 in the IIS pathway compared with the control (p < 0.05) and significantly increased the transcript levels of daf-16 and its downstream genes, ctl-1 and sod-3 (p < 0.05), with a 1.02-fold increase in the expression of the sod-3 gene (Figure 5A). In the DAF-16 nuclear translocation assay (Figure 5B), FCL treatment decreased DAF-16::GFP expression in the cytoplasm and increased the proportion of C. elegans with nuclear localization. In the control group, the proportion of DAF-16::GFP in the nucleus was 8.3%, while in the FCL-treated group, it increased to 22.5%; after the stress treatment, FCL increased the nuclear localization of DAF-16::GFP to 75.0% compared to 67.9% in the control group (Figure 5C). DAF-16 is an important transcription factor that influences the ability of C. elegans to resist stress [33]. The genes upstream of the IIS pathway, daf-2 and akt-2, are repressed in a stress environment, which prompts DAF-16 to dephosphorylate and translocate from the cytoplasm to the nucleus. There, it binds to the DNA promoter-binding region, inducing the expression of target genes and increasing the resistance of C. elegans to stress [30]. The results of this study indicated that FCL activated the transcription factor DAF-16, promoted its nuclear translocation, up-regulated the expression of the downstream target genes sod-3 (a sod gene) and ctl-1 (a CAT gene), and enhanced the resistance of C. elegans. This result was also verified in the CF1553 transgenic strain, in which FCL treatment dramatically up-regulated the SOD-3 protein expression level (Figure 5F,G).
FCL treatment increased the gene expression of hsf-1 compared to the control, and its downstream genes hsp-16.2 (a small heat shock protein gene) and hsp-12.6 increased by 20% and 81%, respectively (Figure 5A). Expression of the HSP-16.2::GFP protein was significantly upregulated by FCL treatment in the TJ375 transgenic strain (HSP-16.2::GFP) (Figure 5H,I). HSF-1 is a heat shock transcription factor that maintains protein homeostasis by regulating the expression of chaperone heat shock proteins, which can protect proteins in nematodes from damage caused by external environmental stress [34]. The high expression of HSF-1-regulated HSP-16.2 improves cellular heat resistance and protein homeostasis by preventing protein misfolding [35]. These results indicated that HSF-1 and its downstream genes were significantly activated and played an important role in the enhancement of heat stress resistance of C. elegans by FCL.
No significant difference was observed in the expression level of skn-1 compared to the control group. Gst-4 expression was up-regulated but not statistically significantly (p > 0.05), and gcs-1 expression was down-regulated (Figure 5A). The same results were found in the SKN-1 nuclear translocation assay, where the FCL-treated group showed no significant effect on the level of nuclear translocation of SKN-1::GFP compared with the control group (Figure 5D,E). In addition, the expression of the GST-4::GFP protein in the CL2166 (GST-4::GFP) transgenic strain was not significantly altered by FCL treatment (Figure 5J,K). skn-1, a gene homologous to mammalian Nrf2, can augment resistance to oxidative stress in C. elegans by modulating phase II detoxification genes [36]. However, the results showed that FCL treatment did not significantly affect skn-1 gene and protein expression, which suggests that skn-1 may not be related to the enhancement of C. elegans antioxidant capacity by FCL.
Discussion
Oxidative stress is caused by an imbalance between the generation of oxidants and the elimination of free radicals by antioxidants; this imbalance leads to damage to biomolecules and cells, which is potentially destructive to the whole organism [12,28].
Therefore, enhancing the antioxidant capacity of the organism is an effective approach to defend against oxidative stress.The antioxidant defense system of organisms mainly includes endogenous antioxidant enzymes, endogenous non-enzymatic antioxidants, and exogenous antioxidants [29].Some natural and harmless compounds, such as polyphenolics and flavonoids, are good exogenous antioxidants, which could effectively scavenge free radicals and reduce the damage caused by oxidative stress [14].
In this study, we used UPLC-ESI-QTOF-MS/MS and UPLC-PAD techniques to separate and characterize flavonoids with different polarities using binary gradient elution consisting of water-formic acid and acetonitrile [9], and found that FCL was rich in flavonoids. In terms of antioxidant activity, FCL could effectively scavenge DPPH radicals and possessed ferric reducing antioxidant power and oxygen radical absorbance capacity comparable to those of Trolox, with an ORAC value of 3.74 ± 0.22 mmol TE•g−1, higher than those of the forsythia flavonoid extract (0.928 mmol TE•g−1) and hawthorn extracts (1.17 mmol TE•g−1) [37,38], demonstrating strong in vitro antioxidant activity. Using C. elegans as a model, we induced oxidative stress with H2O2 and high temperature (35 °C), which significantly increased ROS levels in nematodes, while FCL-fed nematodes had lower levels of ROS and a significant increase in their lifespan. This indicated that FCL could enhance stress resistance in nematodes by directly scavenging ROS. This effect is similar to that of some reported flavonoid compounds that also improve the oxidative stress resistance of nematodes, such as the flavonol glycoside complanatoside A, Rhodiola rosea extract, quercetin, and rutin [27,29,34].
In addition to exogenous antioxidants, endogenous antioxidant enzymes such as SOD and CAT also play important roles in the cellular antioxidant defense system. SOD can disproportionate O2−• to H2O2 and O2, and CAT can catalyze H2O2 to produce water and O2 [39,40]. To verify whether the stress resistance conferred by FCL is associated with the antioxidant enzyme system, we evaluated antioxidant enzyme activities in nematodes and found that FCL treatment resulted in a significant increase in enzyme activities. A number of studies have shown that the regulation of antioxidant enzyme gene expression greatly influences antioxidant enzyme activity, implying that FCL may enhance the defense mechanism of nematodes by modulating endogenous metabolic pathways.
The IIS pathway is a key regulatory mechanism for the growth, development, immune defense, and stress resistance of organisms [36,41]. The initiation of its kinase cascade depends on the phosphorylation of DAF-2 (insulin receptor), which then regulates the downstream signaling molecule AGE-1/PI3K (phosphatidylinositol kinase), followed by the further activation of serine/threonine protein kinases, including AKT, which in turn regulate downstream transcription factors [42]. DAF-16, SKN-1, and HSF-1 are three important transcription factors in the IIS pathway. The IIS kinase cascade regulates the nuclear translocation of DAF-16 and SKN-1, which can activate the downstream antioxidant enzyme (SOD, CAT, GST-4, GCS-1) genes and enhance the antioxidant capacity of the organism [33,34]. HSF-1 plays an important role in maintaining protein homeostasis by regulating the expression of heat shock proteins (HSP12.6, HSP16.2) [34]. Our results indicate that FCL treatment decreased the relative expression of the daf-2 and akt-2 genes, which negatively regulate lifespan in the IIS pathway of nematodes, and increased the relative expression levels of daf-16 and its downstream target genes sod-3, ctl-1, and hsp-16.2 in this pathway. At the same time, it promoted DAF-16 entry into the nucleus and enhanced the expression of the DAF-16- and HSF-1-regulated downstream proteins SOD-3 and HSP-16.2, indicating that FCL could mediate stress resistance in nematodes through the IIS signaling pathway, with the involvement of the transcription factors DAF-16 and HSF-1.
Taken together, FCL, as an antioxidant, could directly scavenge free radicals to reduce ROS levels and achieve stress resistance, and could also modulate the IIS signaling pathway, activate the downstream transcription factors DAF-16 and HSF-1, and regulate the expression of antioxidant enzymes and heat shock proteins, achieving a virtuous circle of alleviating oxidative stress and prolonging lifespan (Figure 6). To further explore the potential contribution of FCL's antioxidant effects and its underlying mechanisms, we will investigate the signature flavonoids and conduct additional experiments, such as protein-specific analyses and more relevant mutational analyses.
Conclusions
In FCL, 57 flavonoids were identified, including 53 flavonols, 3 flavanols, and 1 flavan, with 6 new flavonoid compounds detected in sea buckthorn leaves. In particular, flavonols, especially isorhamnetin-3-O-glucoside-7-O-rhamnoside, were predominant. FCL exhibited Trolox-like antioxidant activity and showed potential to enhance nematode stress resistance by modulating ROS and MDA levels, enhancing SOD and CAT activities, and activating antioxidant enzyme and heat shock protein genes in the IIS pathway. These findings highlight the antioxidant and stress resistance properties of Chinese sea buckthorn leaf flavonoids and provide insights for their application in functional foods.
Figure 3. Effects of FCL on average lifespan (A) and life curve (B) under oxidative stress, and average lifespan (C) and life curve (D) under heat stress in C. elegans. The symbol ** indicates an extremely significant difference compared to the control group (p < 0.01), the symbol # indicates the comparison between different treatment groups (# p < 0.05), and ns indicates no significant difference.

Figure 4. Effects of FCL on the levels of ROS and MDA and the activities of SOD and CAT in C. elegans. (A) Comparison of the fluorescence levels of ROS (scale 200 µm). (B) Quantitative determination of ROS fluorescence intensity. The symbol ∆∆ indicates that the difference was highly significant compared with the control group (p < 0.01), the symbol * indicates the comparison with the heat stress group (* p < 0.05, ** p < 0.01), the symbol # indicates the comparison between different treatment groups (# p < 0.05, ## p < 0.01), and ns indicates no significant difference. (C) MDA content; (D) SOD activity; (E) CAT activity.

Figure 5. Effect of FCL on stress resistance-related genes and protein expression in C. elegans. (A) The expression levels of stress resistance-related genes. (B) Three kinds of DAF-16::GFP localization. (C) Histogram of the distribution ratio of DAF-16::GFP in C. elegans. (D) Three types of SKN-1::GFP localization. (E) Histogram of the distribution ratio of SKN-1::GFP in C. elegans.

Figure 6. Pattern diagram of FCL regulation of stress resistance in C. elegans. The symbol ↓ indicates down-regulation of gene expression, and the symbol ↑ indicates up-regulation.

Table 1. Flavonoids in the leaves of Chinese sea buckthorn.
From the ECM to the Cytoskeleton and Back:
T lymphocytes constitute a highly dynamic tissue type. During the course of their lives, they travel through a variety of physiological environments and experience a multitude of interactions with extracellular matrix components and other cells. In order to do this, they must receive many environmental cues, and translate these signals into the appropriate biological actions. Particularly dramatic are the cytoskeletal shape changes a T cell must undergo during the processes of leaving the bloodstream, migrating through tissues, and encountering antigen. In this review, we highlight the role of integrins in providing a link between the extracellular environment and cytoskeletal regulation and how these receptors help to orchestrate T cell migration and antigen recognition.
Integrin cell surface receptors are heterodimeric pairs that have extracellular matrix (ECM) components as well as other cell surface proteins in their ligand repertoire. An array of structurally distinct α and β subunits combine to form over 20 distinct integrin receptors. Integrin subunits are characterized by large extracellular domains and comparatively short cytoplasmic tails, with the notable exception of β4, which has a large cytoplasmic domain. The β2 integrin subfamily, which includes the LFA-1 (αLβ2) and p150,95 (αXβ2) integrins, the β1 integrin subfamily, which includes the α4β1 and α5β1 integrins, and the α4β7 integrin play particularly notable roles in T cell function. The relevance of integrins to a variety of biological processes is illustrated by their ubiquitous expression on all nucleated cells and the dramatic effects of genetic ablation of most integrin α and β subunits in mice (Clark and Brugge, 1995;Schwartz et al., 1995;Shimizu et al., 1999;Hynes and Bader, 1997;Hynes, 1996).
Several aspects of integrin structure and function lead to their prominent role in regulating the ability of a T cell to interact with and respond to the extracellular environment. First, the short cytoplasmic tails associate with cytoskeletal proteins, such as talin, α-actinin and paxillin (Clark and Brugge, 1995). In this way, integrins act as a cell surface bridge that links the structure of the extracellular matrix environment around a cell with its own cytoskeletal scaffold. In adherent cells, this linkage with the cytoskeleton results in the formation of a structure known as a focal adhesion at the point of contact of a cell with the underlying ECM (Schwartz et al., 1995;Guan, 1997). In addition to providing cell anchorage, focal adhesions are now known to be centers of signaling activity. Kinases, such as src kinase and focal adhesion kinase (FAK), as well as adapter proteins, localize in focal adhesions (Guan, 1997). Although lymphocytes do not form classical focal adhesions (Serrador et al., 1999), integrin linkage to the cytoskeleton plays an equally critical role in regulating T cell function.
A second aspect of integrin function that is critical to orchestration of T cell action is the ability of T cells to dynamically regulate the functional activity of integrins in response to environmental cues (Diamond and Springer, 1994;Shimizu and Hunt, III, 1996).
Thus, integrins can cycle between different states of activity that consequently alter T cell adhesiveness to the ECM, and to cells expressing integrin counter-receptors. These changes may involve alterations in the conformation of integrin extracellular domains that result in increased ligand binding affinity, as well as cytoskeleton-dependent changes in the localization of integrins on the cell surface that result in increased avidity (Diamond and Springer, 1994;Bazzoni and Hemler, 1998). The dynamic nature of these changes in integrin activity allows for the precise regulation of T cell interactions with the ECM and with other cells. These responses are necessary for appropriate migration and responses to antigen.
A final integrin function that is critical to T cell action is the ability of integrins to transduce intracellular signals upon ligand engagement (Clark and Brugge, 1995;Schwartz et al., 1995). In adherent cells, integrin signaling plays a central role in regulating integrin-dependent cell migration (Guan, 1997;Schlaepfer et al., 1999), as well as providing signals that insure cell survival upon anchorage to the ECM (Clark and Brugge, 1995;Giancotti, 1997). Although T cells do not exhibit a similar requirement for ECM attachment in order to survive, integrin signaling does promote T cell proliferation (Shimizu et al., 1990a;Udagawa et al., 1996;Geginat et al., 1999). In addition, the highly migratory lifestyle of a T cell suggests a central role for integrin signaling in regulating T cell movement.
The initiation of an antigen-specific T cell response requires that T cells move out of the bloodstream into secondary lymphoid tissues or sites of inflammation, migrate through these tissues, and interact with antigen-presenting cells (APCs). T cells face a formidable task in achieving the morphological changes necessary for this characteristic travel. In this review, we highlight recent insights into the role of integrins and the ECM in each of these stages of T cell action (Figure 1).
LEAVING THE BLOODSTREAM
Integrins figure prominently in the ability of T cells to traffic to different sites around the body (Butcher et al., 1999). Different integrins help to determine different routes, and promote different stages of travel. The route covered by circulating naive T lymphocytes is limited and relatively uncomplicated, covering the blood stream and secondary lymphoid tissue, such as lymph nodes. Memory/effector T cells cover a much more diverse area as they carry out surveillance functions. They may enter non-lymphoid tissues, such as the skin, and can also travel the same routes covered by naive cells.
The specificity in the routes of migration of T cells is orchestrated in large part by the interplay of adhesion receptors, endothelial cell substrata and chemokines. This interplay determines the ability of an individual T cell to extravasate at a specific site. Lymphocyte extravasation involves three successive steps: (1) primary adhesion of lymphocytes to the endothelium, which is manifested as rolling or tumbling under shear flow conditions; (2) lymphocyte activation, which results in integrin-dependent stable arrest on endothelium; and (3) transmigration of lymphocytes out of the blood stream to lymphoid tissues or inflammation sites (diapedesis) (Butcher et al., 1999;Springer, 1995;Butcher, 1991). The interaction between integrins and other adhesion receptors with their ECM or cell surface ligands plays an essential role in each step of the extravasation phase.
Primary Adhesion
FIGURE 1 Integrins and T cell action. A T cell must invoke many shape changes in order to leave the bloodstream (1), migrate through tissue (2), and respond to antigen (3). Integrins figure prominently in the ability of the T cell cytoskeletal architecture to evoke these changes.

Although selectin-mediated adhesion to carbohydrate-based ligands plays a prominent role in lymphocyte tethering and rolling on the venular endothelium (Butcher et al., 1999;Lawrence et al., 1995;Alon et al., 1994), the α4β1 and α4β7 integrins can also mediate this initial step in leukocyte extravasation (Berlin et al., 1995;Berlin et al., 1993;Sriramarao et al., 1994;Alon et al., 1995). Like L-selectin, α4β7 is highly concentrated on the microvilli of lymphocytes (Berlin et al., 1995). This places α4β7 in a region of the cell surface that is critical in initiating lymphocyte contact with endothelium under shear flow conditions. This tethering interaction allows sufficient time for a circulating lymphocyte to retrieve information from the endothelial surface, most notably the presence (or absence) of a signal capable of activating integrins and initiating stable, shear-resistant attachment.
Activating Integrins: The Role of Chemokines

Chemokines (chemotactic cytokines) are a large group of low molecular weight secretory or membrane-bound proteins that provide directional cues for lymphocyte migration (Ward et al., 1998;Baggiolini, 1998;Kim and Broxmeyer, 1999). Several lines of evidence strongly suggest that one function of chemokines in lymphocyte extravasation is to provide an activating signal to integrins, resulting in shear-resistant attachment to endothelium. First, the ability of pertussis toxin to block lymphocyte extravasation (Bargatze and Butcher, 1993;Bargatze et al., 1995) is consistent with the interaction of chemokines with pertussis toxin-sensitive G protein-coupled receptors. Second, numerous chemokines can rapidly increase integrin-mediated adhesion of lymphocytes. In vitro, RANTES and MCP-1 induce integrin-dependent T lymphocyte adhesion to fibronectin (Carr et al., 1996). SDF-1, MIP-3β and 6-C-kine all trigger rapid and transient adhesion of human lymphocytes to ICAM-1 via β2 integrins (Campbell et al., 1998). Third, chemokines can be detected on endothelial surfaces, which is where they must reside in order to activate rolling lymphocytes. For example, 6-C-kine, which is a potent chemoattractant for naive T cells, is detectable on high endothelial venules found in peripheral lymph nodes (Gunn et al., 1998;Willimann et al., 1998). This is consistent with a proposed role for 6-C-kine in triggering integrin activation during naive T cell interactions with peripheral lymph node HEV.
Fourth, loss of chemokine expression in mice can disrupt lymphocyte trafficking. Notably, naive T cells do not migrate to peripheral lymph nodes in mice lacking 6-C-kine (Gunn et al., 1999), which is consistent with a critical role for 6-C-kine in mediating T cell trafficking into peripheral lymph nodes. Thus, the spectrum of chemokines produced in a local endothelial area, coupled with the spectrum of chemokine receptors expressed by any given T cell, likely determines the efficiency of integrin activation during interactions with endothelium.
The mechanism by which chemokines are "presented" to rolling lymphocytes is critical, since chemokines must achieve a local threshold concentration in order to activate integrins expressed on T cells. This would be difficult to accomplish with chemokines in solution, since they would be rapidly diluted and swept away once secreted into the blood vessel. It is now clear that chemokines can overcome this problem by binding to cell surfaces (Tanaka et al., 1993b). In vitro studies have shown that integrin-dependent adhesion of T cells can be triggered by chemokines that are immobilized via their heparin-binding properties to proteoglycans and glycosaminoglycans (Tanaka et al., 1993a;Gilat et al., 1996;Gilat et al., 1994). In addition, studies with IL-8 have suggested that chemokines may accumulate at membrane protrusions on endothelial cells, increasing their local concentration (Rot et al., 1996;Middleton et al., 1997). The chemokine fractalkine represents a novel member of the chemokine family that is expressed on cell surfaces via a direct transmembrane linkage. This allows for presentation on the cell surface by a stalk-like extracellular domain (Bazan et al., 1997). Thus, chemokines are likely to be specifically retained on the endothelial cell substrata, resulting in local availability of chemokines to rolling lymphocytes at concentrations sufficiently high to result in integrin activation (Witt and Lander, 1994). Specific anatomic "conduits" have also been proposed to serve as a mechanism by which to direct chemokines to HEVs in lymph nodes (Gretz et al., 1996;Ebnet et al., 1996). Furthermore, binding of chemokines to ECM components is likely to play a role in the development of chemokine gradients that are essential for directed lymphocyte migration (Gilat et al., 1996;Lider et al., 1995).
Chemokine signaling and integrin activation
The biological effects of chemokines are mediated by their interaction with serpentine G-protein-coupled receptors (Ward et al., 1998). Despite the well-appreciated role of chemokines in regulating integrin-dependent adhesion and migration, little is known regarding the biochemical events that mediate chemokine-induced integrin activation. Although chemokine-induced triggering of calcium mobilization is well established (Ward et al., 1998), its role in regulating integrin function is undefined. Chemokines also trigger tyrosine phosphorylation events (Ward et al., 1998), but again, the relationship of these biochemical events to integrin activation has not been established. The small GTP-binding protein, Rho, has been implicated in integrin activation by chemokines, based on the ability of C3 transferase exoenzyme to block chemokine-induced activation of α4β1 (Laudanna et al., 1996). These findings suggest that Rho participates in a signal cascade from the chemokine receptor to trigger integrin-mediated lymphocyte adhesion. Although the pathways linking chemokine receptors to Rho activation are not fully elucidated for lymphocytes, PKC seems to be involved in integrin activation induced by fMLP in human leukocytes (Laudanna et al., 1998). Although a role for the lipid kinase phosphoinositide 3-OH kinase (PI 3-K) in the regulation of integrin function by immunoglobulin superfamily members has been established, PI 3-K inhibitors do not block integrin activation by fMLP in human neutrophils (Jones et al., 1998). In addition to chemokines, other receptors may play a role in initiating intracellular signals that activate integrins during interactions with endothelium. Ligation of L-selectin results in increased integrin-mediated adhesion (Hwang et al., 1996; Giblin et al., 1997; Steeber et al., 1997), suggesting that selectin-mediated rolling leads to signals resulting in shear-resistant attachment mediated by integrins. In vitro studies have also suggested a role for CD31 in integrin activation, as well as transendothelial migration (Tanaka et al., 1992; Schimmenti et al., 1992; Bogen et al., 1992). However, T lymphocyte homing is normal in CD31-deficient mice (Duncan et al., 1999). Integrins themselves may also play a role in regulating the activity of other integrins, a regulatory phenomenon referred to as integrin "cross-talk" (Blystone et al., 1994; Porter and Hogg, 1998; Porter and Hogg, 1997). Studies with T cells have shown that interaction of LFA-1 with ICAM-1 inhibits α4β1 integrin-mediated adhesion but enhances T cell migration on fibronectin. Since chemokines can also differentially modulate the activity of β1 and β2 integrins on T cells (Carr et al., 1996), the temporal regulation of distinct integrin types may be critical to successful lymphocyte extravasation. This model also predicts that the spectrum of integrin ligands expressed on a given endothelial surface is likely to play a critical role in regulating lymphocyte extravasation. For example, VCAM-1 has been proposed to be expressed primarily on inflamed endothelium, although recent studies have detected a low level of basal expression of VCAM-1 on lymph node HEV, as well as a role for α4β1 and α4β7 in T cell interactions with lymph node HEV (Berlin-Rufenach et al., 1999). Surface-associated fibronectin expressed on endothelial surfaces has also been proposed to play a role in lymphocyte interactions with endothelium (Szekanecz et al., 1992; Ager, 1997).
LYMPHOCYTE TRANSMIGRATION
In vitro studies have implicated both α4β1 and LFA-1 in transendothelial migration (Oppenheimer-Marks et al., 1991; Butcher et al., 1999; Oppenheimer-Marks et al., 1990; Brezinschek et al., 1995). In addition, a role for αvβ3 in monocyte transmigration has been proposed (Weerasinghe et al., 1998). The differential and sequential activation of integrin receptors determines the program of temporal and spatial coordination of cell adhesion, spreading and migration in lymphocytes.
The major morphological changes that occur during lymphocyte transmigration include cell spreading in response to activation signals, which results in shear-resistant attachment, and the induction of cell motility. This results in movement of lymphocytes through the endothelial monolayer and into the underlying basement membrane. Cytoskeletal reorganization provides a driving force for cell spreading and migration. The small GTP binding proteins of the Rho family are key regulators of cellular morphology (Hall, 1998; Reif and Cantrell, 1998). In fibroblasts, different members of the Rho family have distinct effects on cell morphology. While Rho acts as a molecular switch to regulate the assembly of focal adhesion complexes and contractile actin-myosin filaments, Rac activation leads to the assembly of meshworks of actin filaments at the periphery to produce lamellipodia and membrane ruffles. Cdc42 activity results in the formation of filopodia, actin-rich cell surface protrusions. In T lymphocytes, expression of an active form of Rac increases α4β1 and α5β1 integrin-mediated cell adhesion and spreading (D'Souza-Schorey et al., 1998), suggesting a possible role for Rac in morphological changes that are required for lymphocyte transmigration. The role of Rho in chemokine signaling in leukocytes (Laudanna et al., 1996) is also consistent with a function for these GTP-binding proteins in lymphocyte transmigration.
Initiation of cell locomotion requires a membrane protrusion at the leading edge (Serrador et al., 1999). Actin and actin binding proteins such as paxillin and vinculin, along with integrin receptors and kinases, are concentrated at the leading edge. In addition, chemokine receptors accumulate at the front portion of migratory cells (Serrador et al., 1999). After the formation and stabilization of the leading edge, cells use myosin-based proteins to generate contractile action and force for cell movement (Serrador et al., 1998). A distinct structure known as a uropod forms at the trailing end of migrating lymphocytes and contains actin binding proteins as well as the cell surface receptors ICAM-1, -2 and -3, CD43 and CD44 (Serrador et al., 1998). In addition to the adhesive force provided by integrins during cell motility, signaling initiated by integrin engagement by ligand regulates cell migration. Studies of adherent cells have implicated FAK in regulating integrin-dependent cell migration, since over-expression of FAK enhances cell migration (Cary et al., 1996) and FAK-deficient fibroblasts show reduced migration when compared to wild-type fibroblasts (Ilic et al., 1995). The effects of FAK on cell migration may be mediated by downstream tyrosine phosphorylation of the adapter protein p130Cas (Cary et al., 1998; Klemke et al., 1998) as well as the Crk adapter protein (Klemke et al., 1998). Although β1 integrin-mediated tyrosine phosphorylation of FAK in adherent cells has been clearly established, there are conflicting reports on the ability of β1 integrins on T cells to initiate tyrosine phosphorylation of FAK (Nojima et al., 1995; Maguire et al., 1995; Hunter and Shimizu, 1997). Thus, the role of FAK in regulating T cell migration remains an area for further exploration. However, both β1 and β2 integrins expressed on lymphocytes induce tyrosine phosphorylation of p130Cas and a structurally related protein, HEF1 or Cas-L (Petruzzelli et al., 1996; Hunter and Shimizu, 1997; Minegishi et al., 1996). PI 3-K has been implicated in regulating integrin-dependent motility of tumor cells in response to growth factor stimulation (Adelsman et al., 1999; Adam et al., 1998). In addition, activation of mitogen-activated protein kinase (MAPK) has been suggested to play a role in regulating cell migration via phosphorylation of myosin light chain kinase (Klemke et al., 1997). Enhancement of COS cell migration is also observed following over-expression of ICAP-1 (Zhang and Hemler, 1999), an intracellular protein that associates with the β1 integrin cytoplasmic domain (Chang et al., 1997).
It is currently unclear whether these additional pathways of regulating cell migration are also operative in T cells during the transmigration process.
Mechanisms by which integrin receptor activity is inhibited must also be invoked, given findings that active integrin receptors are localized at the leading edge and inactive ones are found at the trailing portion of migrating cells (Serrador et al., 1999). In addition to integrin cross-talk mechanisms that allow some integrins to inhibit others (Porter and Hogg, 1997; Blystone et al., 1999), a number of intracellular molecules can negatively regulate integrin-mediated cell adhesion. Overexpression of integrin-linked kinase (ILK), a molecule that was initially identified based on its association with integrin subunit cytoplasmic domains, decreases β1 integrin-mediated adhesion of human epithelial cells (Hannigan et al., 1996). A novel expression cloning strategy has also demonstrated a role for active H-ras in suppressing integrin function (Hughes et al., 1997). More recently, expression of PTEN, a tumor suppressor with lipid and protein phosphatase activity, was shown to inhibit cell spreading, focal adhesion formation and migration (Tamura et al., 1998). The role of these molecules in regulating T cell motility has not been extensively investigated. In hematopoietic cells, tyrosine phosphatases play a central role in negatively regulating integrin-mediated adhesion. T cell lines deficient in expression of the CD45 tyrosine phosphatase show enhanced β1 integrin-dependent adhesion to fibronectin (Shenoi et al., 1999), and studies with macrophages deficient in expression of the SHP-1 phosphatase show that SHP-1 is required for detachment from adhesion mediated by the αMβ2 integrin (Roach et al., 1998).
Regulation of ECM-degrading Proteases by Integrins
Not only must T cells squeeze through the endothelial monolayer, they must also have mechanisms for breaking through the underlying basement membrane. This complex of various ECM proteins, such as laminin and type IV collagen, constitutes a rigid barrier. It has been shown in a variety of cell types that integrins have the additional function of inducing the expression of specialized proteases on the surface of migrating cells (Romanic and Madri, 1994; Huhtala et al., 1995; Brooks et al., 1996). These proteases are of the matrix metalloproteinase (MMP) family, which are specifically designed for ECM protein degradation (Huhtala et al., 1995). Interaction of T cells with VCAM-1 via α4β1 results in surface expression of the MMP family member, 72 kD gelatinase (Romanic and Madri, 1994). In addition, the 72 kD gelatinase inhibitor, TIMP-2, blocks in vitro T cell transmigration, suggesting a critical role for α4β1 integrin-mediated induction of expression of this protease in T cell migration.
Interestingly, there is evidence in fibroblast studies of a role for integrin cross-talk in MMP induction.
Engagement of α5β1 results in increased MMP expression, while α4β1 stimulation produces only low basal expression. Simultaneous engagement of α5β1 and α4β1 also results in low expression of MMPs (Huhtala et al., 1995). Whether these specific integrin roles, or the general cross-talk phenomenon, applies to surface MMP induction in T cells remains to be seen. The activity of membrane-bound proteases must be subject to strict regional control. The active protease must be localized to the leading edge of the cell, present only in this area, and act in a matrix protein-specific manner. In addition to regulating protease expression, integrins also participate in regional control of protease expression on the cell surface. Studies with CS-1 melanoma cells have revealed a colocalization of αVβ3 integrin and MMP-2 on the cell surface that is mediated by direct binding of MMP-2 to αVβ3 itself (Brooks et al., 1996). This association implies a distinct and directed method by which tumor cells invade specific tissues. The intriguing possibility of similar MMP-integrin juxtapositions in T cells is suggested.
INTEGRINS AND ANTIGEN-SPECIFIC T CELL ACTIVATION
Although T lymphocytes exhibit high rates of migration (Serrador et al., 1999), antigen recognition by T lymphocytes in lymphoid tissue requires that antigen-specific T cells be detained long enough within the tissue site for activation and differentiation to occur. Integrins also participate in this complex process of antigen-specific T cell activation, and many of the same regulatory events that govern cell migration also govern the changes that occur in T cells as a result of encounter with antigen.
Stop Signals
The conversion from a migratory to a more stationary phenotype in T cells is mediated by engagement of the antigen-specific T cell receptor (TCR) with peptide antigen presented by a self-MHC protein on a tissue-resident APC. In vitro studies have shown that TCR stimulation results in two distinct changes in integrin-dependent function. One is a rapid, but transient increase in β1 and β2 integrin-mediated adhesion of T cells to ECM proteins such as fibronectin and laminin, as well as cell surface counter-receptors such as ICAM-1 and VCAM-1 (Dustin and Springer, 1989; van Kooyk et al., 1989; Shimizu et al., 1990b). This effect of TCR stimulation is similar in some respects to the effects of chemokines on integrin function under conditions of shear flow. TCR-induced increases in β1 integrin-mediated adhesion of T cells to ECM proteins may be particularly critical in providing a mechanism by which to retain antigen-reactive T cells at the site of antigen encounter. A second effect of TCR stimulation is to block T cell migration on purified ICAM-1. Treatment of T cells with antibodies that induce the high affinity conformation of LFA-1 can also induce this "stop signal", suggesting that TCR stimulation induces this block in migration by inducing an increase in LFA-1 affinity. However, an ability of TCR stimulation to induce changes in LFA-1 affinity has not been uniformly observed (Stewart et al., 1996). Nevertheless, initial TCR engagement leads to changes in integrin function that result in dramatic effects on T cell adhesion and migration.
Morphological and Cytoskeletal Changes Upon APC Engagement
As the TCR recognizes its peptide antigen in the clutches of the appropriate MHC receptor on the APC, it turns its full attention to the site of recognition. The accessory proteins CD4/8 cluster about the engaged TCR, stabilizing the interaction. CD4 and CD8 recognize conserved sites on the MHC protein (class II or I) and recruit critical cytoplasmic signaling proteins (e.g., p56lck) into the vicinity of the TCR. Co-receptors that provide amplifying signals to TCR activation, such as CD28, also likely are recruited to the site of contact between the TCR and APC. Early studies with blocking antibodies against LFA-1 and CD2 demonstrated a critical role for these adhesion molecules in mediating conjugate formation between T cells and target cells (Shaw et al., 1986). During this process of APC engagement, signals provided by the TCR and co-receptors, such as CD2 and CD28, serve to stabilize the T cell-APC interaction by enhancing the functional activity of β2, as well as β1, integrins. Stimulation of CD2, CD28 or CD7 can enhance integrin-mediated adhesion, even in the absence of simultaneous engagement of the TCR (van Kooyk et al., 1989; Shimizu et al., 1990b). This suggests that a critical function of co-receptor signaling during T cell activation is to enhance integrin-mediated adhesive forces that are necessary for effective stimulation of T cells (Zell et al., 1998a). Receptors that activate integrin-mediated adhesion also activate PI 3-K, and studies with pharmacological and genetic inhibitors of PI 3-K show a clear role for PI 3-K in the activation of integrin function by the TCR, CD2, CD7, and CD28 (Nagel et al., 1998; Chan et al., 1997; Shimizu et al., 1995; Zell et al., 1998b; Kivens et al., 1998; Zell et al., 1996). For LFA-1, TCR-induced increases in LFA-1-mediated adhesion to ICAM-1 may involve an intracellular protein, cytohesin-1, that associates with the β2 integrin cytoplasmic domain and is a downstream target of PI 3-K (Nagel et al., 1998; Kolanus et al., 1996).
Recent studies of APC engagement have vividly demonstrated the formation of a specialized structure in the T cell at the point of contact with the APC termed a "supramolecular activation cluster" (SMAC) (Monks et al., 1998) or "immunological synapse" (Shaw and Dustin, 1997). This bull's eye-shaped structure consists of an inner circle, or central SMAC (cSMAC), that contains a tight cluster of specific and co-operative signaling molecules, such as CD4 and the TCR on the surface, and p56lck, p59fyn and PKCθ in the proximal cytoplasm (Monks et al., 1998). Complementary studies utilizing an alternative approach with purified adhesion molecules suggest that CD2 is also found in the cSMAC (Dustin et al., 1998). The outer ring, the peripheral SMAC (pSMAC), is defined by the presence of LFA-1 and the integrin-cytoskeletal linker protein, talin (Monks et al., 1998). Based on the extracellular "heights" of these proteins (shorter in the middle, longer at the edge), a concave 3D structure is formed (Shaw and Dustin, 1997).
Thus, LFA-1, which provides much of the adhesive force between T cells and APCs during this process, forms a ring of adhesion around smaller receptors that mediate lower affinity interactions. Complementing the SMAC arrangement on the surface is a strikingly similar cytoskeletal rearrangement. Upon TCR engagement, the microtubule organizing center (MTOC) moves from the vicinity of the uropod to the point directly below the TCR (Serrador et al., 1999; Sedwick et al., 1999). Simultaneously, the actin cytoskeleton rearranges to form an asymmetric cap structure centered around the MTOC (Sedwick et al., 1999; Holsinger et al., 1998; Serrador et al., 1999). Interestingly, the translocation of actin and the microtubule structures appears to be independent of each other, and dependent upon signals from different cell surface receptors. Despite evidence for actin cap formation by T cells placed on anti-CD3 coated plates (Holsinger et al., 1998), more recent data suggests that MTOC relocation is the distinct result of TCR engagement, while actin capping (as measured by localization of the actin binding protein talin) is the result of LFA-1 binding. This independence was demonstrated in a novel experiment in which T cells were exposed to antigen-free, ICAM-expressing APC and anti-CD3 coated beads from opposite poles (Sedwick et al., 1999). Talin and actin polarized at the site of LFA-1/ICAM binding at the APC, whereas the MTOC translocated to the site of bead-cell contact.
In many respects, SMACs are strikingly similar to focal adhesions created by integrins in adherent cell types (Guan, 1997) (Figure 2). Both SMACs and focal adhesions are marked by extensive receptor clustering, which leads to the initiation of diverse intracellular signaling events. In addition, the cytoskeleton plays a central role in the formation of both structures. In the case of SMACs the interaction of talin with the β2 cytoplasmic domain may be particularly important, since mutations in the β2 cytoplasmic domain have dramatic effects on LFA-1 function (Hibbs et al., 1991). However, it has not yet been demonstrated that LFA-1 interactions with talin are required for SMAC formation. Studies of SMAC formation with T cells lacking LFA-1 may be particularly informative regarding the precise role of integrins in the formation and maintenance of SMACs during T cell activation.
Lipid Rafts and SMACs?
"Lipid rafts" is a term used to define regions of the T cell membrane that consist of detergent-resistant zones enriched in cholesterol and sphingolipids, as well as a variety of key signaling proteins (Xavier et al., 1998; Moran and Miceli, 1998). Intact lipid rafts are required for efficient T cell signal transduction (Moran and Miceli, 1998; Xavier et al., 1998; Stulnig et al., 1999), and T cell stimulation with beads containing anti-CD3 and anti-CD28 mAbs results in polarization of lipid rafts to the point of contact between the T cell and the bead (Viola et al., 1999). Src family tyrosine kinases, certain PI 3-K isoforms, and adapter proteins such as LAT (linker for activation of T cells) are enriched in the lipid rafts (Xavier et al., 1998; Harder and Simons, 1999), consistent with a role for these structures in T cell activation. Recently, it has been suggested that aggregation of lipid rafts at the T cell-APC zone of contact is associated with actin cytoskeleton reorganization, as disruption of the rafts inhibits the association of signaling proteins, such as TCRζ, with the cytoskeleton. However, the mechanism promoting this association remains unknown (Moran and Miceli, 1998). Because lipid rafts also localize to the T cell-APC contact zone and have a provocative connection to the cytoskeleton, they have a striking resemblance both to SMACs and focal adhesions. However, the relationship between these biochemically defined raft regions of the T cell plasma membrane and the microscopically defined SMACs remains unclear. In particular, the localization of integrins to lipid rafts remains an unexplored area.
TCR Signaling and Cytoskeletal Reorganization
Signals transduced by the TCR have now been linked to intracellular events that lead to reorganization of the cytoskeleton. Tyrosine phosphorylation of the ζ tail by p56lck induces the association of TCRζ with the actin cytoskeleton (Rozdzial et al., 1995; Rozdzial et al., 1998). In addition, the immunoreceptor tyrosine-based activation motifs (ITAMs) in the TCRζ cytoplasmic domain are important in TCR-driven reorientation of the microtubule organizing center and polymerization of the actin cytoskeleton (Lowin-Kropf et al., 1998; Rozdzial et al., 1998). Since tyrosine phosphorylation of ITAMs results in the association and activation of the ZAP-70 tyrosine kinase, a role for ZAP-70 and its downstream substrates in regulating the cytoskeleton upon TCR stimulation would be predicted. Recent studies have confirmed this hypothesis. Overexpression of a dominant negative form of ZAP-70 in T cells prevents MTOC reorganization (Lowin-Kropf et al., 1998). In addition, ZAP-70-mediated tyrosine phosphorylation of the adapter protein SLP-76 is critical to the formation of a trimolecular complex consisting of tyrosine phosphorylated SLP-76, Vav and Nck (Wardenburg et al., 1998). This complex results in the recruitment via Nck of p21-activated protein kinase 1 (PAK1), a kinase that has been implicated in actin polymerization and that is activated by GTP-bound Rac and Cdc42 (Sells et al., 1997; Adam et al., 1998). Since Vav functions as a GDP-GTP exchanger for Rho family proteins, including Cdc42 and Rac, PAK is activated in this complex due to its proximity to GTP-bound Rac and Cdc42. The ability of dominant-negative forms of SLP-76, Vav and Nck to inhibit TCR-induced polymerization of the actin cytoskeleton (Wardenburg et al., 1998) is consistent with a role for this trimolecular complex in regulating TCR-driven cytoskeletal rearrangement, possibly via PAK1 or another protein that interacts with Nck, such as the Wiskott-Aldrich syndrome protein (Ramesh et al., 1999; Bi and Zigmond, 1999). Independent studies with Vav-deficient T cells have also demonstrated a role for Vav in the induction of actin caps following stimulation with immobilized anti-CD3 mAbs (Holsinger et al., 1998; Fischer et al., 1998; Cantrell, 1998). However, the precise relationship of these biochemical events to SMAC formation or polarization of lipid rafts remains to be determined.
Co-receptor Signaling and the T Cell Cytoskeleton
The role of integrins in regulating the cytoskeleton during the process of antigen-specific T cell activation remains poorly characterized. However, both β1 and β2 integrins can enhance TCR-driven T cell proliferation (Shimizu et al., 1990a; van Seventer et al., 1990; Udagawa et al., 1996; Abraham et al., 1999), suggesting that integrin signaling might contribute to the cytoskeletal rearrangements that are required for T cell activation. Recent studies demonstrating a role for LFA-1-dependent cell spreading in facilitating TCR-driven activation of MAPK are consistent with this notion (Geginat et al., 1999). However, the spatial segregation in SMACs of LFA-1 from other receptors that participate in T cell activation, such as the TCR itself, CD2 and CD28, suggests that there may be unique features of integrin signaling as it relates to its downstream effects on T cells. Other co-receptors that promote T cell proliferation clearly can initiate signals that result in cytoskeletal rearrangements. CD28 stimulation leads to activation of PAK1 that can be enhanced by simultaneous engagement of the TCR (Kaga et al., 1998a), and CD28 signaling also leads to an increase in F-actin content in T cells (Kaga et al., 1998b). Engagement of CD2 leads to reorientation of the MTOC that is regulated by a novel intracellular protein, CD2AR, that associates with the CD2 cytoplasmic domain (Dustin et al., 1998).
Localization of β1 Integrins During T Cell Activation

Although LFA-1 has been localized to pSMACs, the redistribution of β1 and β7 integrins during antigen-specific T cell activation remains unknown. This is a critical issue for several reasons. First, VCAM-1 has been detected on certain antigen-presenting cells, including follicular dendritic cells (Ogata et al., 1996; Gao et al., 1997). Thus, the potential exists for α4β1, and possibly α4β7, to mediate adhesion between antigen-specific T cells and certain APCs. Second, signals provided by the ECM via β1 integrins can enhance TCR-induced T cell proliferation and induction of gene expression (Shimizu et al., 1990a; Udagawa et al., 1996). The localization of β1 integrins in either SMACs or lipid rafts is likely to provide important insights into the biochemical basis and functional outcomes of ECM-mediated signals that impinge on antigen-specific T cell activation.
TERMINATING THE T CELL-APC CONTACT
The mechanistic basis for the termination of the interaction between an antigen-specific T cell and an antigen-laden APC remains unclear, although this process of termination is clearly critical to the dissemination of effector T cells through the body following antigen challenge. Biochemical mechanisms that downregulate integrin function are likely to play a critical role in this termination event. Processes that downregulate integrins during cell motility might also participate in downregulating integrin function during T cell activation. Recent findings that CD45 and other tyrosine phosphatases may have a negative regulatory effect on integrins are particularly intriguing (Shenoi et al., 1999;Roach et al., 1998), given the vital role that CD45 plays in T cell activation in general.
IL-2 and T Cell Migration
Although the role of IL-2 in promoting T cell proliferation is well appreciated, this cytokine may also play an important role in regulating T cell motility during antigen challenge. IL-2 stimulation leads to the transcription of a number of cytoskeletal protein genes, including β-catenin, β-actin and α-tubulin. These transcriptional events may be related to the increase in T cell size that occurs during T cell activation (Herblot et al., 1999). IL-2 also enhances T cell adhesion to fibronectin, laminin and type-IV collagen, as well as fibronectin-dependent migration in vitro (Ariel et al., 1998). Interestingly, naturally occurring IL-2 fragments produced by the neutrophil enzyme, elastase, can alter these pro-adhesion and pro-migration abilities (Ariel et al., 1998). However, the prevalence of these IL-2 fragments in vivo is still uncertain. IL-2 also induces actin polymerization and membrane ruffling in human and mouse T cells (Arrieumerlou et al., 1998). Many intracellular signaling events that regulate integrin function, such as activation of PI 3-K and phosphorylation of the phosphatase SHP-2, occur upon engagement of the IL-2 receptor (Arrieumerlou et al., 1998; Brennan et al., 1997; González-García et al., 1997). How these IL-2 receptor-mediated signals integrate with signals provided by the TCR, integrins and other co-receptors to regulate T cell adhesion and motility remain important areas of future investigation.

CONCLUSION

Each step of a T cell's journey through lymphoid and remote tissues depends on integrin activity. Not only does this activity exert concerted homing effects, it is also a critical component of the cytoskeletal regulation physically necessary for motion through diverse environments. Although much of our knowledge of how integrins and the cytoskeleton influence cell motility results from studies in non-lymphoid cells, it is becoming clear that lymphocytes utilize many of these same regulatory mechanisms. In addition, it is apparent that integrins participate in the development of novel specialized structures, such as SMACs, that are critical for antigen-specific T cell activation. Our understanding of T lymphocyte action will continue to be enriched by further examination of the dynamic, integrin-mediated actions immediately preceding and distantly following antigen recognition.
Sustainable Agriculture Transformation for The Nation’s Welfare of Indonesia and Malaysia
FOREWORD OF EDITOR IN CHIEF FOR PROCEEDING OF AGRICULTURAL FORUM IPIMA 2017 MALAYSIA We would like to express our sincere gratitude to God Almighty for making it possible for us to complete and publish this IPIMA (Association of Indonesia-Malaysia Professors) proceeding. These proceedings cover the results of the IPIMA Conference, held in Kuala Lumpur, Malaysia on 6–9 November 2017. The Conference raised the theme of "Sustainable Agriculture Transformation for The Nation's Welfare of Indonesia and Malaysia". The Conference was the second meeting of Professors from Indonesia and Malaysia, who are members of the Association of Indonesia-Malaysia Professors (IPIMA). Papers in these proceedings are divided into four groups, i.e. (1) Food and Agriculture, (2) Forestry and Environment, (3) Health, Animals and Fisheries, and (4) Economic and Policy.
FOREWORD OF EDITOR IN CHIEF FOR PROCEEDING OF AGRICULTURAL FORUM IPIMA 2017 MALAYSIA
We would like to express our sincere gratitude to God Almighty for making it possible for us to complete and publish this IPIMA (Association of Indonesia-Malaysia Professors) proceeding. We thank all the authors, reviewers, and editors who have contributed to the timely completion of this proceeding. We also thank the organizing committee of the IPIMA Conference for all of their support, which enabled the completion of this proceeding. The conference and round table discussion developed new fruitful collaboration activities among Indonesian and Malaysian Professors in research, academic and professional activities in the field of agriculture and related fields. The conference theme was "Sustainable Agricultural Transformation for The Nation's Welfare of Indonesia and Malaysia". There were 60 papers presented orally and 18 presented as posters. The papers in this proceeding were presented and selected by an editor team led by Prof. E.K.S Harini Muntasib.
The conference particularly encouraged the interaction of professors and lecturers, students, and professionals in an informal setting to present and to discuss new and current work. Their contributions helped to make the conference as outstanding as it has been. The 50 papers in this proceedings contributed significantly the most recent scientific knowledge known in the field of food and agriculture, forestry and environment, health, veterinary, medicine and fisheries, economy and policy.
This Proceedings will furnish the scientists between two countries (Indonesia and Malaysia) with an excellent reference book. I trust also that this will be an impetus to stimulate further excellent collaboration of study and research activities in the area of agriculture and related fields.
We thank Asosiasi Profesor Indonesia (API), Majelis Professor Negara - Malaysia (MPN), Bogor Agricultural University, Universiti Putra Malaysia, and all authors, editors, steering committees, organizing committees, and participants for their excellent and remarkable contributions. The very encouraging response, especially among Indonesian authors, to publishing their papers in this Proceedings reflects their integrity and intellect. I believe that with the spirit and commitment shown by all parties involved in the publication of this Proceedings, the agricultural sector will continue to develop and grow into one of the major contributors to Indonesia's and Malaysia's economic growth.
The papers published in this Proceedings will certainly be an important reference for policy makers, practitioners in the agricultural industry, academicians, researchers and students of higher learning institutions related to the fields of knowledge they seek. In fact, the knowledge gained from rigorous research will ease the daily tasks of any individual who needs certain information, and avoid repetition or redundancy.
The publication of such proceedings, if done in series, is also one of the strategies for sharing knowledge and information between Malaysians and Indonesians, as recommended in the Malaysia-Indonesia bilateral discussions at the Agricultural Forum IPIMA 2017.
As a conclusion, I would like to once again congratulate all those involved in the successful publication of the Proceedings of the Agricultural Forum IPIMA 2017.
Simulation model of single phase PWM inverter by using MATLAB/Simulink
Received Mar 7, 2020 Revised Jan 20, 2021 Accepted Feb 1, 2021 This work presents a simulation model of a single-phase PWM inverter using MATLAB/Simulink. Many researchers have worked in this field in different ways, because it is an important field with many applications. Converting DC power to AC power for any system requires a power electronic device, the inverter. The inverter is used when the source supplies DC power and the load requires AC power. In this work, the simulated system includes a 300 V DC source, an inverter, an LC filter and a resistive load (R). The simulation results show the waveforms of all parts of this system, including the input and output current and voltage.
INTRODUCTION
In power electronics, there are four kinds of power converter: AC-AC, DC-DC, AC-DC and DC-AC [1]-[5]. The AC-AC and DC-DC devices are called converters, the AC-DC device is called a rectifier, and the DC-AC device is called an inverter, which is the device used in this work [6]-[9]. An inverter converts DC power to AC power for any system [10]-[12]. The inverter is one of the parts of renewable energy systems (wind energy or photovoltaic systems) and has many industrial applications such as UPS [13]-[15]. Pulse width modulation is adopted to control the output voltage of the inverter through the stimulation pulses of the inverter switch gates [16]-[21]. The inverter voltage changes with the change in load, so a control process is needed to stabilize the voltage by controlling (correcting and setting) the inverter output according to the load value [22]-[26]. The current study includes an inverter with pulse width modulation, and it uses a computer program that helps simulate the system designed for the study. Different criteria are set to study different cases and determine the response of the system, to address instabilities within the design boundaries of the system used, and to reach the best model that can give a high response in a short time.
THE SIMULATION MODEL
The simulation model of the single-phase PWM inverter built in MATLAB is shown in Figure 1. It includes a voltage source (V_DC = 300 V), an LC filter (L = 2 mH and C = 11 µF), a load resistance (R = 1 ohm), the PWM generator shown in Figure 2 and the inverter shown in Figure 3. The PWM simulation model has inputs (a sine wave, a sawtooth and a comparator) and outputs the gate pulses. The simulation model of the single-phase inverter itself is shown in Figure 3.
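As a rough illustration of the comparator logic described above, the following Python sketch (an illustrative stand-in for the Simulink blocks, not the authors' model; the 50 Hz reference frequency and 0.8 modulation index are assumptions) generates SPWM gate pulses by comparing a sine reference against a 1 kHz sawtooth carrier.

```python
# Minimal sketch of sine-sawtooth (SPWM) gate-pulse generation,
# mirroring the comparator arrangement described in the paper.
import numpy as np

fs = 200_000                      # simulation sample rate [Hz]
t = np.arange(0, 0.04, 1 / fs)    # two cycles of the 50 Hz reference

f_ref, f_carrier = 50.0, 1000.0   # reference and 1 kHz carrier frequencies
m_a = 0.8                         # amplitude modulation index (assumed)

reference = m_a * np.sin(2 * np.pi * f_ref * t)
# Sawtooth carrier rescaled to [-1, 1], as in the model's sawtooth block
carrier = 2.0 * (f_carrier * t % 1.0) - 1.0

# Comparator: gate pulse is high when the reference exceeds the carrier
pulses = (reference > carrier).astype(float)

# Idealized full-bridge output before the LC filter, with a 300 V DC link
v_dc = 300.0
v_inverter = v_dc * (2 * pulses - 1)   # switches between +300 V and -300 V
```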
SIMULINK RESULTS
The Simulink results have several parts: the input voltage and current of the system, shown in Figure 4; the filter input, shown in Figure 5; and the output voltage and current of the system, shown in Figure 6.
Simulink voltage and current input system
In this part, the simulated input voltage and current of the system are shown in Figure 4, using a 300 V DC input to produce a 100 V AC output. The test of the proposed system adopts a circuit fed from a 300 V DC voltage source; the experimental result in Figure 4 shows the input voltage Vin = VDC = 300 V and the input current Iin (A).
Simulink voltage of input filter
In this part, the filter input is shown in Figure 5. The second stage is the filter, whose input is the output of the first stage; its behavior can be seen in the results shown in Figure 5, which show the output voltage before the filter.
Simulink voltage and current output system
In this part, the simulated output voltage and current of the system are shown in Figure 6, using a 300 V DC input to produce a 100 V AC output (DC to AC PWM), an LC filter (L = 2 mH and C = 11 µF) and a resistive load (R) in ohms.
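Continuing the SPWM sketch above, the following toy integration (explicit Euler at the same sample rate, an illustrative simplification rather than the Simulink solver) shows how the paper's LC filter values smooth the switched ±300 V waveform into a near-sinusoidal output across the resistive load.

```python
# Toy state-space integration of the LC filter (L = 2 mH, C = 11 uF)
# driving a resistive load. Component values are the paper's; the
# explicit-Euler step and ideal switches are simplifications.
L, C, R = 2e-3, 11e-6, 1.0        # henry, farad, ohm
dt = 1 / fs
i_L, v_C = 0.0, 0.0               # inductor current, capacitor voltage
v_out = np.empty_like(v_inverter)

for k, v_in in enumerate(v_inverter):
    di = (v_in - v_C) / L              # inductor: L di/dt = v_in - v_C
    dv = (i_L - v_C / R) / C           # capacitor: C dv/dt = i_L - i_load
    i_L += di * dt
    v_C += dv * dt
    v_out[k] = v_C

v_rms = np.sqrt(np.mean(v_out[len(v_out) // 2:] ** 2))
print(f"filtered output RMS ≈ {v_rms:.0f} V")
# The exact RMS depends on the assumed modulation index; the qualitative
# point is that the 1 kHz switching ripple is strongly attenuated.
```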
Simulink pulses and carrier signal
In this part of the simulation of the DC to AC PWM inverter, the model consists of four MOSFETs with 5 V amplitude gate pulses and a 1 kHz switching frequency. The gate pulses are shown in Figure 7, the sine wave in Figure 8 and the carrier (sawtooth) signal in Figure 9.
CONCLUSION
The characteristics of this system include, first, the parameters of the DC-AC inverter, such as the switching device type (diode, transistor or thyristor); second, the characteristics of the PWM, such as amplitude and frequency; and finally, the filter and load characteristics and types, such as the LC filter and the resistive load. In the simulation of the DC to AC PWM inverter, MOSFETs with a 1 kHz switching frequency were selected for a 300 V DC input producing more than a 100 V AC output. The simulation of this proposed model for voltage regulation was conducted with the aim of validating the system, and the simulation results show that the proposed system can be used effectively in many applications that fit its specifications.
The Physical Fitness Gap between Strikers and Defenders in Football Extracurricular Programs
This research can be categorized as a comparative descriptive study conducted through a survey to measure physical fitness. The subjects of this study were the students who joined the football extracurricular program in State Junior High School 3 Sleman, consisting of 14 strikers and 23 defenders. The research instrument was the physical fitness test. The data analysis technique employed the t-test, which required a normality test and a homogeneity test at the 5% significance level. The results showed a t count of 3.956 against a t table value of 2.027 (3.956 > 2.027). This means that there is a significant physical fitness difference between strikers and defenders in the football extracurricular program. The average physical fitness score of the strikers was 14.57, while that of the defenders was 12.04. Based on the research findings, it can be concluded that there is a significant physical fitness difference between the strikers and the defenders in the football extracurricular program in State Junior High School 3 Sleman.
INTRODUCTION
Football is listed as the most popular sport among children to adults. Sometimes it is called soccer, and it has received the most attention in terms of being used to enhance mental health, perhaps because it is the most popular sport (The FA, 2015; Sport England, 2018). It is a team game played by two groups of eleven players, played by kicking the ball in various directions (Sukiyani, 2013). The 11 players on each team are divided into several main positions, each with its own duties: goalkeeper, defenders, midfielders and forwards. Each position has different roles and functions within three major groups, namely defenders to control the defensive line, midfielders to manage the midfield line, and forwards or strikers to score goals (Hadjar Kh. M. et al., 2016).
In this game, the quality of physical fitness is crucial during training and competition. Age, morphology and physical fitness are influential parameters of football performance in elite-level players, and playing position decisively dictates absolute performance loads and the intensity of fast movements during matches (Bujnovky et al., 2019). Fitness refers to a person's ability to adjust to physical demands, performing effective and efficient work and daily tasks without excessive fatigue (Muhajir, 2007). The physiological factors that cause fatigue include problems in the energy system, accumulation of lactic acid, mechanical failure of muscle contraction, and changes in the nervous system (Kusnanik, Nasution, & Hartanto, 2011). In modern football, the defensive players sometimes push forward to attack, just as the strikers drop back to help the defense in certain situations. This means good physical fitness needs to be possessed by all players, especially the strikers and defenders, to maintain good playing performance over 90 minutes. This condition also applies in State Junior High School 3 Sleman, which conducts a football extracurricular program twice a week.
This school has been one of the participants of the Indonesian Students League (Liga Pelajar Indonesia/LPI) since 2012 and has won first place in Sleman regency, Yogyakarta. However, there is a problem: the players who play as strikers are rarely involved in assisting the defense when the team is under pressure. Moreover, the defensive players are also trained to be active in joining the attacking role.
Based on this information, it is important to assess the level of physical fitness required for the different positions. Therefore, it will be beneficial to test the assumption regarding the differences in physical fitness between the strikers and defenders among the students who join football extracurricular activities in State Junior High School 3 Sleman.
METHODS
This study can be categorized as a comparative study using a survey method. This research model investigates existing symptoms without investigating why these symptoms exist, so it does not need to take into account relationships among variables. The population is the whole subject of the research (Arikunto, 2014). The population of this study was the 50 students of State Junior High School 3 Sleman who participated in football extracurricular activities. The sampling technique was purposive, yielding 37 students comprising strikers and defenders.
The research instrument used to collect the data was the Indonesian physical fitness test (Depdiknas, 2010), with a validity of 0.950 and a reliability of 0.960. The Indonesian physical fitness test series for 13-15 year olds covers the 50-meter run, the 60-second pull-up, the 60-second sit-up, the vertical jump and the 1000-meter run. Before joining the series of tests, the students had the sequence of tests explained to them. After that, they had a sufficient warming-up session to lower the injury risk.
Having collected the data, the next step was converting it into the categorization norms using descriptive analysis with percentages. The data obtained from the research was entered into the Indonesian Physical Fitness Test (Tes Kebugaran Jasmani Indonesia/TKJI) table. After the values of the five test items were known, they were summed and checked against the TKJI table for boys and girls aged 13-15 years. The results were classified into the physical fitness levels of "Very Good", "Good", "Moderate", "Bad" or "Very Bad".
After the data was obtained, the next step was analyzing it to draw the conclusions of the research. To find out the difference in the level of physical fitness between the strikers and defenders, the t-test was used at the 5% significance level. A difference exists between the two variables if the test criterion t count is bigger than t table.
Before conducting the data analysis, the analysis requirements, namely the normality and homogeneity tests, were checked. The hypothesis testing then used the t-test to look for differences between the two groups at the 5% significance level. The difference between two uncorrelated groups can be tested with an uncorrelated (independent-samples) t-test; a difference between the two variables exists if the t-test criteria are met (Hadi, 2004).
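As a check on the arithmetic, the reported statistic can be reproduced from the summary statistics alone. The short Python sketch below (not part of the original study; scipy is assumed to be available) recomputes the pooled-variance independent-samples t-test the authors describe, using the group sizes, means and standard deviations reported in the Results.

```python
# Re-computation of the independent-samples t-test from the paper's
# summary statistics (14 strikers, 23 defenders).
from scipy.stats import ttest_ind_from_stats, t as t_dist

n_strikers, mean_strikers, sd_strikers = 14, 14.57, 1.95
n_defenders, mean_defenders, sd_defenders = 23, 12.04, 1.85

t_stat, p_value = ttest_ind_from_stats(
    mean_strikers, sd_strikers, n_strikers,
    mean_defenders, sd_defenders, n_defenders,
    equal_var=True,   # pooled variance, consistent with the homogeneity test
)

df = n_strikers + n_defenders - 2
t_crit = t_dist.ppf(0.975, df)   # two-sided test at the 5% level

print(f"t = {t_stat:.3f}, critical t = {t_crit:.3f}, p = {p_value:.4f}")
# -> t ≈ 3.95, close to the paper's reported 3.956 (small rounding differences)
```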
RESULT
The research variables represented the physical fitness of the strikers and of the defenders. The data description shows the maximum value, minimum value, mean, standard deviation, median and mode, which were then compiled into the frequency distribution based on the guidelines of the Indonesian Physical Fitness Test. The following is the description of the data obtained from the research subjects. The physical fitness scores of the strikers had a minimum value of 11.00 and a maximum value of 18.00; the mean was 14.57, the standard deviation 1.95, the mode 14.00 and the median 14.50. The frequency distribution based on the Indonesian Physical Fitness Test norms is presented in Table 1.
The physical fitness scores of the defenders had a minimum value of 8.00 and a maximum value of 15.00, with a mean of 12.04, a standard deviation of 1.85, a mode of 12.00 and a median of 12.00. Furthermore, the frequency distribution based on the Indonesian Physical Fitness Test norms is presented in Table 2.
Meanwhile, the normality test with chi-squares in this study measured whether the sample came from a normally distributed population. The hypothesis that the samples came from normally distributed populations was accepted (4.857 < 11.070 and 5.913 < 12.592). The homogeneity test used the F-test; based on the two data sets, the significance value was 0.969 > 0.05, so it can be concluded that the population variance was homogeneous. The results of the normality and homogeneity tests indicated that the distribution was normal and the variance homogeneous, so the data could be further analyzed for hypothesis testing. The technique to test the difference between the two populations was the t-test for two uncorrelated samples. Based on these results, t count = 3.956 was bigger than t table = 2.027 at the significance level of 1 − ½α (0.975). This indicates that there is a significant difference between the physical fitness of the strikers and of the defenders among the students who join football extracurricular activities in State Junior High School 3 Sleman. The mean score of each group was 14.57 for the strikers' physical fitness and 12.04 for the defenders'.
DISCUSSION
Based on the results of the study, the hypothesis testing showed that t count was bigger than t table, meaning that there is a significant difference in physical fitness between the strikers and the defenders among the students who joined football extracurricular activities in State Junior High School 3 Sleman. The mean physical fitness score of the strikers was 14.57, while that of the defenders was 12.04. The standard deviations of the two groups were not much different: 1.95 for the strikers and 1.85 for the defenders.
These results can be used as a beneficial reference to create exercise models, for example improving dribbling through small-sided games (Kusuma et al., 2018), to decrease the differences so that the team's performance remains stable until the game ends. The attacking players are advised to maintain their physical condition, while the defenders should increase their training load, especially for physical fitness, to become equal to the strikers so that the team's performance can be balanced.
The attackers or strikers have the task of scoring goals, but in modern football the task of scoring is not only for the strikers. Moreover, the modern attacking player should not only be able to score goals but also to create space that allows other players to score. With increasingly intense competition in the opponent's defensive area, the attacking player must always be active in taking good positions to score as many goals as possible (Salim, 2008). Therefore, the forward must have above-average dribbling ability, fast running speed and an accurate kick. Dribbling is important because it can develop sensory perceptions that lead to improvements in ball control, which are the basis for individual breakthrough skills (Taga, Ken & Asai, Takeshi, 2012). This means they must be supported by adequate physical abilities and good physical fitness.
Meanwhile, the defender's task is to prevent the attacks built by the opposing team, whether by cutting off passes or seizing the ball from the opposing attacker, especially if that attacker is considered very dangerous. Moreover, additional defensive performance indicators should be considered, such as the areas where defending teams apply pressure, or the time required to recover ball possession (Vogelbein, Nopp, & Hökelmann, 2014). Therefore, a defender must also have good physical fitness in order to prevent the efforts made by the opposing team to score goals.
The forward has the opportunity to score, while the defender is in charge of the defensive area, but in modern football many defenders push forward to join attacks and score goals. Likewise, the attacking players also help to defend in certain conditions. This means the physical condition greatly affects the game results: if more players are in bad physical condition during the match, it will decrease the team's achievement as a whole (Abdulullah, 1981). Therefore, football players should also have a fast recovery process, in which active recovery is one of the most effective recovery methods for increasing the speed of blood flow through the working muscle system (Mota, Elias, Oliveira-Silva, Sales, & Sotero, 2017). Besides, active and combined recovery can reduce the level of fatigue in football athletes (Kurniawan, R., & Elfarabi, A, 2018).
The quality of physical fitness is very important for every football player: a player with good physical fitness will not experience excessive fatigue during matches. In modern football, good physical fitness is essential for every player, not only the forwards. A defender sometimes helps to attack, just as front players drop back at any time to help the defense in certain situations. Therefore, good physical fitness needs to be possessed by all players in order to maintain good playing performance for 90 minutes and secure a victory. Moreover, physical fitness not only promotes students' participation in outdoor activities, but also has great significance for the development of adult sports habits and health (Ortega, 2008).
CONCLUSION
Based on the research findings, it can be concluded that there is a significant difference in physical fitness between the strikers and the defenders among the students who joined football extracurricular activities in State Junior High School 3 Sleman. These results can be used as an evaluation to determine the training menu so that the differences that occur are minimized and the team's performance remains stable until the game ends. The attacking players are advised to maintain their physical condition, while the defenders should increase their training load, especially for physical fitness, to become equal to the strikers so that the team's performance can be balanced.
Lower Ricci Curvature and Nonexistence of Manifold Structure
It is known that a limit $(M^n_j,g_j)\to (X^k,d)$ of manifolds $M_j$ with uniform lower bounds on Ricci curvature must be $k$-rectifiable for some unique $\dim X:= k\leq n = \dim M_j$. It is also known that if $k=n$, then $X^n$ is a topological manifold on an open dense subset, and it has been an open question as to whether this holds for $k<n$. In this paper we answer this question in the negative. Let $(X^4,h)$ be a smooth complete manifold with $\text{Ric}>\lambda$ for some $\lambda\in \mathbb{R}$. Then for each $\epsilon>0$ we construct a complete $4$-rectifiable metric space $(X^4_\epsilon,d_\epsilon)$ with $d_{GH}(X^4_\epsilon,X^4)<\epsilon$ such that the following hold. First, $X^4_\epsilon$ is a limit space $(M^6_j,g_j)\to X^4_\epsilon$ where $M^6_j$ are smooth manifolds with $\text{Ric}_j>\lambda$ satisfying the same lower Ricci bound. Additionally, $X^4_\epsilon$ has no open subset which is topologically a manifold. Indeed, for any open $U\subseteq X^4_\epsilon$ we have that the second homology $H_2(U)$ is infinitely generated. Topologically, $X^4_\epsilon$ is the connect sum of $X^4$ with an infinite number of densely spaced copies of $\mathbb{C} P^2$. In this way we see that every $4$-manifold $X^4$ may be approximated arbitrarily closely by $4$-dimensional limit spaces $X^4_\epsilon$ which are nowhere manifolds. We will see there is an, as of now imprecise, sense in which generically one should expect manifold structures to not exist on spaces with higher dimensional Ricci curvature lower bounds.
Let us begin with a historical discussion of measured Gromov-Hausdorff limit spaces and their structure. That metric space limits even exist in this context was the result of Gromov [Gro99, Th. 5.3]. Structural results for the limits $X$ began in earnest with the almost rigidities of Cheeger-Colding [CC96, Th. 6.62]. Their almost splitting theorem allowed them to show that $X$ was the union of rectifiable pieces of various dimensions [CC97], and they conjectured that the dimension is locally constant and hence unique. Colding-Naber resolved this conjecture in [CN12] and showed that the limit $X^k$ is $k$-rectifiable for a unique $k$. More recently, an example of Pan-Wei [PW22] has shown that while $X^k$ is $k$-rectifiable, its Hausdorff dimension might be larger than $k$. More specifically, it is possible for the singular part of $X$ to have larger dimension with respect to the Hausdorff measure than it does with respect to the limit $\nu$-measure.
In the context where (1) is noncollapsed, which is to say $\text{Vol}(B_1(p_j)) > v > 0$, one can say quite a bit more. In this case one has that $k = n$, and by volume convergence [CC97, Th. 5.9] the limit measure $\nu$ is the $n$-dimensional Hausdorff measure on $X$. The starting point for a more refined analysis of $X$ in the noncollapsed context is another almost rigidity of Cheeger-Colding [CC96, Th. 4.85]. This time one considers the monotone quantity $$\theta_r(x) := \frac{\text{Vol}(B_r(x))}{\omega_n r^n}$$ and shows that in the limit $X$ it is constant in $r$ iff $B_r(x)$ is a metric cone. This opens the door to the techniques of Federer [Fed70], which have been applied to many nonlinear equations. In particular, one can decompose $X = \text{Reg}(X) \cup \text{Sing}(X)$ into a regular and singular part and stratify the singular part $S^0(X) \subseteq \cdots \subseteq S^{n-1}(X) = \text{Sing}(X)$. Cheeger-Colding were able to then use the Federer dimension reduction to prove the dimensional estimates $\dim S^\ell \leq \ell$. More recently, the work of Cheeger-Jiang-Naber [CJN21] was able to prove that $S^\ell(X)$ is $\ell$-rectifiable. This result is sharp for $\ell \leq n-2$ by an example of Li-Naber [LN20], who built examples whose singular strata $S^\ell(X)$ are $\ell$-rectifiable, $\ell$-cantor sets.
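For the reader's convenience, the easy direction of this rigidity is the following one-line scaling computation; this is standard background and not specific to the present paper. On a metric cone $C(Y)$ with vertex $p$, the dilations $x \mapsto sx$ scale distances by $s$, so $\text{Vol}(B_r(p)) = r^n\,\text{Vol}(B_1(p))$ and hence $$\theta_r(p) = \frac{\text{Vol}(B_r(p))}{\omega_n r^n} = \frac{r^n\,\text{Vol}(B_1(p))}{\omega_n r^n} = \frac{\text{Vol}(B_1(p))}{\omega_n},$$ which is independent of $r$. The content of the almost rigidity is the converse: if $\theta_r(x)$ is almost constant on a range of scales, then the corresponding ball is Gromov-Hausdorff close to a metric cone.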
For the regular set Reg(X) of a noncollapsed limit one can say even more. A Reifenberg type result of Cheeger-Colding [CC97, Th. 5.14] allows one to show that there is an open dense subset on which X is a topological manifold. In the case where the $M_j$ are boundary free it was shown in [CC97, Th. 6.1] that X is a manifold away from a codimension 2 set. More recently, it was shown by Brué-Naber-Semola [BNS22] that even in the boundary case one has a manifold structure away from a codimension 2 set. That is, the top stratum of the singular set $S^{n-1}(X)$ is itself a manifold away from a codimension 2 set, and thus X is a topological manifold with boundary away from a codimension 2 set.
1.1. Main Result on Topological Structure. It has remained an open question as to whether in the collapsed case X needs to have a topological manifold structure on some open dense subset. The main result of this paper is to answer this question in the negative:

Theorem 1.1. Let $(X^4, h)$ be a smooth complete manifold with $\mathrm{Ric}_X > \lambda$, where $\lambda \in \mathbb{R}$. Then for every $\epsilon > 0$ there exists a metric space $(X^4_\epsilon, d_\epsilon)$ such that
(1) $d_{GH}(X^4, X^4_\epsilon) < \epsilon$,
(2) $X^4_\epsilon$ is $4$-rectifiable,
(3) there exist $(M^6_j, g_j) \xrightarrow{GH} (X^4_\epsilon, d_\epsilon)$ with $\mathrm{Ric}_{g_j} > \lambda$,
(4) for every open set $U \subseteq X^4_\epsilon$ the homology group $H_2(U)$ is infinitely generated.
Consequently, every open set $U \subseteq X^4_\epsilon$ is noncontractible and therefore not homeomorphic to Euclidean space.
We will see that it is possible to build many such $X^4_\epsilon$. In short, for each countable dense subset $C = \{x_j\} \subseteq X$ and each collection of sufficiently decaying constants $\epsilon \ge \epsilon_j \to 0$ we will build $X_\epsilon$ by connect-summing $X^4$ with a $\mathbb{C}P^2$ of size $\approx \epsilon_j$ at the collection of points $C$. The topological picture will be similar to a complex algebraic blow up, where we replace a point $x_j$ with a 2-sphere $S^2_{\epsilon_j}$, though it is important to note that this is purely topological and there is no complex structure being preserved in this process. In particular, the geometric properties of the blow down map that sends the newly added $S^2_{\epsilon_j}$ to the chosen $x_j$ will be important to the construction. These blow down maps will explain how the rectifiable charts of the space $X^4_\epsilon$ collapse each of our added 2-spheres to points, which form a set of measure zero. Most examples of rectifiable structures which are not manifolds arise by allowing for holes in the space. The blow down picture here explains the ability to build a rectifiable space which is nowhere a manifold, but also has no holes. By choosing these points $\{x_j\}$ and scales $\epsilon_j$ fairly freely we see that a smooth structure is actually quite hard to obtain under higher dimensional lower Ricci curvature bounds, and in a certain generic sense we should expect limit spaces to not have manifold structures.
The above raises the question of whether, if we assume bounds on topology, we might obtain more. There are two versions of such a question:

Question 1.1. Let $(M^n_j, g_j, \nu_j, p_j)$
GEOMETRIC OUTLINE OF CONSTRUCTION
Let us turn our attention to the construction of the smooth manifolds $(M^6_j, g_j) := (X^4_j \times S^2, h_j + f_j^2 g_{S^2})$. We will often write $X_j \times_{f_j} S^2$ to represent that we are geometrically considering the product of two spaces with a warping factor $f_j$. We can view $X^4_j$ as the blow up of $X^4$ at an increasingly dense sequence of points. That is, to construct $X^4_j$ we will effectively take a collection of points $\{x^a_j\} \subseteq X^4$ and replace these points with 2-spheres. Each time we blow up a point, we are introducing a new noncontractible $S^2$ into the space. Geometrically, each $S^2$ being introduced will be of size at most $\epsilon$, but their sizes will decrease quickly as the sequence continues. We will additionally alter the warping factor on the $S^2$ factor of $M^6_j$ in order to preserve the strict Ricci curvature lower bound. As our collection of blow up points becomes dense, we will arrive at our limit $X^4_\epsilon$.
Our construction will be inductive. That is, given M 6 j which satisfies a handful of inductive properties, we will explicitly construct from it M 6 j+1 with similar inductive properties. Our goals in this section will first be to prove Theorem 1.1 given our inductive sequence, which we will do in Section 2.0.1. We will then focus on the inductive construction itself, which will be broken down into steps. Each step will consist of a main Inductive Lemma. The Inductive Lemmas will be proved later in the paper, but in the meantime we will finish the inductive construction in Section 2.3 based on these Lemmas. As such, our goal for this Section is to complete the proof of Theorem 1.1 modulo the proof of the Inductive Lemmas.
In order to state the conditions of our inductive construction, let us introduce the correct notion of regularity scale for this paper.
Definition 2.1 (Regularity Scale). Let $(M^n, g) = (X^{n-2} \times S^2, h + f^2 g_{S^2})$ be a smooth manifold with $x \in X$, and $0 < \eta < 1$ a constant. Then we define the regularity scale $r_x$ at $x$ by (2).

Remark 2.1. The definition depends on a constant $0 < \eta < 1$, though this constant may be fixed somewhat arbitrarily. Pictorially, when $\eta$ is small we are increasingly close to $\mathbb{R}^{n-2} \times S^2$ with a product metric.
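Purely for orientation, one candidate form of such a regularity scale, consistent with Remarks 2.1-2.3 below, would be
\[
r_x := \max\Bigl\{\, 0 < r \le 1 \;:\; \mathrm{inj}_h(x) \ge r \ \text{ and } \ \sup_{B_r(x)} \Bigl( r^2\,|\mathrm{Rm}_h| + \sum_{k=1}^{3} r^k\,\bigl|\nabla^k \ln f\bigr| \Bigr) \le \eta \,\Bigr\} ,
\]
where the exact thresholds and norms are an illustrative assumption on our part rather than the authors' definition (2); what matters below is only that on scales below $r_x$ the base metric is nearly Euclidean and $\ln f$ is nearly constant.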
Remark 2.2. It follows that if $r_x > 2r$ then topologically $B_r(x) \subseteq X$ is contained in a Euclidean ball.

Remark 2.3. If $r_x > 2r$ then we can write the metric $h$ in exponential coordinates on $B_r(x)$ so that $h_{ab}$ is $C^2$ close to the identity and $\ln f$ is $C^3$ close to a constant. See Lemma 3.2.
Now let $(X^4, h)$ be our chosen manifold from Theorem 1.1 with $\mathrm{Ric}_X > \lambda$. We will go ahead and assume $X^4$ is compact, but it will be clear that this is not needed as all constructions are purely local. Primarily, this allows us to discuss the construction in terms of some global parameters instead of choosing them locally. Let us define $\lambda_+$ to be the largest constant with $\mathrm{Ric}_X \ge \lambda_+$, and observe that as $X$ is compact we have $\lambda_+ > \lambda$. There is no loss in generality in then assuming that the constant $\epsilon > 0$ from Theorem 1.1 is small relative to $\lambda_+ - \lambda$. Our inductive construction of $(M^6_j, g_j)$ will produce a choice of parameters, where $0 < \delta \ll 1$ will later be chosen sufficiently small. We will let our base step of the induction be represented by the space $M^6_0 := X^4 \times S^2$ with product metric $g_0 = h + \delta^2 g_{S^2}$. Our inductive assumptions (I1)-(I4) include, in particular, that the preimages $\phi_j^{-1}(x^a_{j-1}) \cong S^2$ are totally geodesic, round 2-spheres of radius $\le \delta_j r_j$ in $X_j$, that $|D\phi_j| < C(6)$, and that if $x, y \in X_j$ with $d(x,y) > \delta_j r_j$ then $(1-\delta_j)\, d(x,y) < d(\phi_j(x), \phi_j(y)) < (1+\delta_j)\, d(x,y)$.
The notation C(6) above tells us that |Dφ| is uniformly bounded by a dimensional n = 6 constant, which in this case is just a uniform constant.
Let us discuss some of the above properties and their implications. Condition (I1) tells us that from a geometric standpoint M 6 j is globally a warped product over X 4 j , and that geometrically the S 2 factor is disappearing in the limit. Condition (I2) is an enumeration of the points we will be doing surgery around to move from X j to X j+1 . The important point to observe is that necessarily this set is becoming increasingly dense by Condition (I2.a), as the points are maximal subsets inside the set whose regularity scale is too large.
To move from $X_j$ to $X_{j+1}$ we will be performing surgeries on the balls $B_{r_j}(x^a_j)$. Condition (I3) is telling us that the surgery is topologically a connect sum with $\mathbb{C}P^2$, where we are replacing each $x^a_j$ with a 2-sphere. Near $x^a_j$ this has the effect of replacing the diffeomorphic ball $B_{r_j}(x^a_j)$ with the total space $E \to S^2$ of the generating line bundle over $S^2$. Note that the unit sphere bundle in $E$ is $S^3$, and hence this is the right object for gluing. From a topological perspective, moving from $X_j$ to $X_{j+1}$ adds a second homology generator for each $x^a_j$. Condition (I2.b) is telling us that our surgeries never intersect the previously added 2-spheres.
Condition (I4) is explaining the geometric properties of our blow down maps $\phi_j : X_j \to X_{j-1}$. These will each be smooth mappings, indeed uniformly Lipschitz, and will be Riemannian isometries away from some small neighborhoods of the blow up points. We will see that the limit map $\Phi := \lim_{j\to\infty} \phi_{j1} : X^4_\epsilon \to X^4$ is uniformly Hölder and locally bi-Lipschitz away from a set of measure zero. In particular we will have a single rectifiable chart of $X^4_\epsilon$ over $X^4$, that is, an a.e. defined locally bi-Lipschitz map onto a full-measure subset of a smooth manifold. The blow up 2-spheres will all be collapsed to single points under this mapping.
2.0.1. Proving Theorem 1.1 given the Inductive Spaces $M^6_j$. Before focusing on the inductive construction itself, let us see how to use (I1)-(I4) in order to finish the proof of Theorem 1.1.
Let us begin by studying properties of the spaces $X^4_j$. For each $i < j$ let us denote $\{S^2_{jia}\}_a := \{\phi_{ji}^{-1}(x^a_i)\}_a$. Notice by conditions (I4) and (I2.b) that these are disjoint totally geodesic round 2-spheres inside of $X^4_j$. Additionally, by (I3) and a Mayer-Vietoris sequence we have that $H_2(X^4_j) \cong H_2(X^4) \oplus \bigoplus_{i<j,\,a} \mathbb{Z}\langle [S^2_{jia}] \rangle$. In particular, the rank of the second homology is growing in step with these 2-spheres.
Let us now flesh out the geometric properties implied by (I4) more completely. From (I4), we expect the Lipschitz constant to be uniformly bounded, but not necessarily close to 1. On the other hand, (I4) also tells us that the Lipschitz constant is close to 1 away from a small neighborhood of the diagonal in $X^4_j \times X^4_j$. Consequently, we have the bound: if $x, y \in X_j$ with $d(x,y) \le \delta_j r_j$ then $d(\phi_j(x), \phi_j(y)) < C\, d(x,y)$ for $C = C(6)$. (7) Thus the Lipschitz constant is close to 1 provided the distance between the two points is not too small. This in particular implies the much weaker estimate (8). By composing, we see that $X^4_j$ is always $C\delta < \epsilon$-GH close to $X^4$ for small enough $\delta$. The importance of these estimates is that our real goal is to obtain uniform continuity for the maps $\phi_{ji} : X_j \to X_i$ from (I2.b). It is too much to ask that they be uniformly Lipschitz. However for $\delta < \delta(C) = \delta(6)$ in the construction, condition (I4) will now allow us to show that the $\phi_{ji}$ are uniformly $C^\alpha$ maps. Indeed, for any fixed $x, y \in X_j$ write $r := d(x,y)$, and then using (7) we can estimate the composed distortion. To estimate the above we will use that $\delta_j \le \delta^{1+j}$, which gives us the Hölder bound (10), where $\alpha(\delta) = 1 + \ln(C)/\ln(\delta/2) \to 1$ as $\delta \to 0$.
Now recall by (8) that $\phi_j$ is a $C\delta_j \le C2^{-j}\delta$-Gromov Hausdorff map. Consequently we have that $\phi_{ji} : X_j \to X_i$ is a $C\sum_{j \ge k \ge i+1} \delta_k \le C2^{-i}\delta$ Gromov Hausdorff map. This tells us that $\{X_j\}$ is a Cauchy sequence and so converges. Note that this is not a subsequential convergence but an actual convergence by the Cauchy condition. It follows from (I1) that $M_j \xrightarrow{GH} X^4_\epsilon$ as well, since $|f_j| \le \delta_j r_j \to 0$. As the maps $\phi_{ji}$ witness the GH-Cauchy condition, we can take limits $\Phi_i := \lim_{j\to\infty} \phi_{ji} : X^4_\epsilon \to X_i$, where the $\Phi_i$ are also $C2^{-i}\delta$ Gromov Hausdorff maps. It follows from (10) that the $\Phi_i$ are $C^\alpha$-Hölder maps¹, or more precisely that the estimate (10) passes to the limit. Importantly, we have that the $\Phi_i$ are continuous maps.
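To spell out the elementary summation behind the Cauchy claim (a routine check under the bound $\delta_k \le 2^{-k}\delta$ used just above):
\[
d_{GH}(X_j, X_i) \;\le\; C \sum_{k=i+1}^{j} \delta_k \;\le\; C\,\delta \sum_{k=i+1}^{\infty} 2^{-k} \;=\; C\,2^{-i}\delta \;\longrightarrow\; 0 \quad (i \to \infty) ,
\]
so the sequence $\{X_j\}$ converges in the Gromov-Hausdorff sense without passing to a subsequence.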
Now recall from (I3) and (I4) that $\{S^2_{jia}\} \subseteq X^4_j$ are totally geodesic round 2-spheres in $X^4_j$. Note also that for $i < j < k$ we have by (I2.b) and (I3) that $\phi_{kj}|_{S^2_{kia}} : S^2_{kia} \to S^2_{jia}$ is an isometry. Consequently, we can limit our sequences of 2-spheres to get round 2-spheres $S^2_{ia} \subseteq X^4_\epsilon$. Note that $\Phi_j|_{S^2_{ia}} : S^2_{ia} \to S^2_{jia}$ is an isometry for every $j > i$, and in particular the radius of each 2-sphere $S^2_{ia}$ is at most $r_i \delta_i$. Now we claim that each $S^2_{ia} \subseteq X^4_\epsilon$ is a nontrivial generator in the second homology group as well. Indeed, assuming this is not the case, there must be a continuous 3-chain $\psi : \Delta^3 \to X^4_\epsilon$ with boundary $\partial\psi = S^2_{ia}$. If we compose with $\Phi_j$ then this gives us a continuous chain $\Phi_j \circ \psi : \Delta^3 \to X^4_j$ whose boundary is the 2-sphere $S^2_{jia}$. However, as we know that $S^2_{jia}$ is a nontrivial generator in the second homology group, this is not possible. A similar argument shows that each $\{S^2_{ia}\}$ generates an independent factor in the second homology group.
Finally, let us show more carefully that the 2-spheres $\{S^2_{ia}\}_{i,a}$ are dense in $X_\epsilon$. Fix any $x \in X_\epsilon$ and $\epsilon' > 0$, and suppose toward a contradiction that $B_{\epsilon'}(x)$ contains none of the $S^2_{ia}$. Choosing $i$ with $C2^{-i}\delta + r_i < \epsilon'$, it follows that the ball $B_{\epsilon' - C2^{-i}\delta - r_i}(\Phi_j(x))$ contains no blow-up point $x^a_j$ for any $j \ge i$. Thus the ball $B_{\epsilon' - C2^{-i}\delta - r_i}(\Phi_j(x))$ has the same Riemannian metric for all such $j$. In particular, $\Phi_j(x)$ will have the same regularity scale for all such $j$ (note that the regularity scale is invariant under scaling of the warping function $f$). Now for $j$ large enough, a point in $B_{\epsilon' - C2^{-i}\delta - r_i}(\Phi_j(x))$ has to be blown up since the collection of blow-up points is maximal. This is a contradiction.
¹ In fact, the $\Phi_i$ are rectifiable charts. Indeed, the $\phi_j$ are $(1+\delta_j)$-bi-Lipschitz away from the bubbles $\phi_j^{-1}(B_{\delta_{j-1} r_{j-1}}(x^a_{j-1}))$, and uniformly locally bi-Lipschitz away from the added 2-spheres. Composing these estimates as in (10) shows that for any $j > 0$, the $\Phi_i$ are $C(i,j)$-bi-Lipschitz away from $\bigcup_{k>i,\,a} B_{r_k \delta_{j\wedge k}}(S^2_{ka})$.
Thus we have our limit space $X^4_\epsilon$ and a dense collection of 2-spheres $\{S^2_{ia}\}$ which are all generators in the second homology group, as claimed. This finishes the proof of Theorem 1.1 under the assumption that we have built our inductive sequence $M^6_j$ satisfying (I1)-(I4).

2.1. Step 1: The Gluing Block $B(\epsilon, \alpha, \delta)$. In order to prove Theorem 1.1 we are therefore left with showing how to build $M_{j+1} = X_{j+1} \times_{f_{j+1}} S^2$ from $M_j = X_j \times_{f_j} S^2$ in the inductive construction. The first step of the construction will build what is our main gluing block. When we move from $X_j$ to $X_{j+1}$ we will take our appropriately dense collection of points $\{x^a_j\}$ and replace a small neighborhood of each with our gluing block $B$. From a topological perspective, it will be a connect sum with a copy of $\mathbb{C}P^2$ near each $x^a_j$, so that we are blowing up the points $\{x^a_j\}$ and replacing them with 2-spheres.
Note that $B \setminus U$ looks very pGH-close to $\mathbb{R}^4 \times S^2$, where the degree of closeness is measured by $\epsilon, \alpha, \delta$ in a quantitative manner.
2.2. Step 2: Adding Cone Singularities. The second step of the construction involves changing the geometry near each $x^a_j$ so that the gluing blocks $B$, which have a very specific geometry at infinity, may be isometrically glued along an annulus into $X_j$. Let us begin with a broader discussion before stating the main constructive Lemma.
Let us start with a discussion of the density of singularities. It is known, see for instance [OS94, Ex. 2], that one can build examples $(X, h)$ with $\mathrm{Ric}_h > 0$ for which the singular set is dense. In fact, the example in [OS94] has positive sectional curvature $\sec_h > 0$ and even a fully positive curvature operator $\mathrm{Rm}_h > 0$. What is at first counter-intuitive is that the constructions of [OS94] not only produce but essentially rely on these stronger curvature conditions. That is, the construction of singularities with $\mathrm{Rm}_h > 0$ in [OS94] is very analogous to the construction of convex functions with nonsmooth points.
Let us now consider what is almost the reverse direction. Begin with a smooth space (X, h) with some form of lower bound on the curvature and ask about adding cone singularities near any point x ∈ X without destroying the lower curvature bound. If for instance sec h > λ > 0 then one can accomplish this by performing a C 0 gluing in the spirit of [Per97,Sec. 4]. Namely, one can remove a sufficiently small neighborhood of x ∈ X and isometrically glue in a rescaled spherical suspension of the boundary. There will be what is essentially distributional curvature added along the gluing, but the sec h > λ assumption will allow us to guarantee that these distributional curvatures all have the right sign. In particular, one can smooth near the gluing region and preserve the sec h > λ > 0 condition.
The procedure described above of adding cone singularities near any point does not work if we are only assuming Ric h > λ. In short, the distributional curvature added from the C 0 gluing is due to the difference in second fundamental forms of the boundary on the two sides of the gluing. This second fundamental form in turn is closely related to sectional curvature, and Ricci curvature control is not sufficient. It turns out that we need to exploit better the local geometry near x ∈ X in order to control the Ricci curvature.
In Step 2.1 of Section 5 we will see how to resolve this problem and add such cone singularities to arbitrary spaces satisfying $\mathrm{Ric}_h > \lambda$ without destroying the lower Ricci curvature bound. The inductive Lemma of this Step of the construction is a generalization of this discussion and will allow us to also add conical singularities with a fixed warping structure near every point. This extra warping control is necessary in order to use the inductive Lemma of Step 1 to add our desired topology. We will focus on the 6-dimensional case $M = X^4 \times S^2$ of interest, though it is clear dimension is not a relevant constraint. Precisely, we have the following local gluing Lemma, which focuses on a ball with controlled regularity scale:

Lemma 2.3 (Inductive Step 2). Consider a warped product space $(B^4_2(p) \times S^2, g)$ with metric $g = g_B + f^2 g_{S^2}$ satisfying $\mathrm{inj}_{g_B}(p) > 2$ and $\mathrm{Ric}_g > \lambda$. Write $r := \mathrm{dist}_{g_B}(\cdot, p)$. Then for all choices of parameters $0 < \epsilon < \epsilon(|\lambda|)$, $0 < \alpha < \alpha(\epsilon)$, $0 < \tilde r < \tilde r(\alpha, \lambda, \epsilon)$, and $0 < \tilde\delta < \tilde\delta(\lambda, \alpha, \epsilon, \|f\|_{L^\infty}, \tilde r)$, there exists $0 < \delta = \delta(\tilde\delta \|f\|_\infty \mid \alpha, \lambda, \epsilon)$ and a warped product metric $\hat g = \hat g_B + \hat f^2 g_{S^2}$ such that:
(1) The Ricci lower bound $\mathrm{Ric}_{\hat g} > \lambda - C(4)\epsilon$ holds for $\tilde r/2 \le r \le 2$,
(2) $\hat g = g_B + \tilde\delta^2 f^2 g_{S^2}$ is unchanged up to scaling the warping factor $f$ by $\tilde\delta$ for $1 \le r \le 2$,
(3) $\hat g$ has the cone warping structure $C(S^3_{1-\epsilon}) \times_{\delta r^\alpha} S^2$ for $r \le \tilde r$,
(4) the identity map $\mathrm{Id} : (B_2(p), \hat g_B) \to (B_2(p), g_B)$ is $(1+2\epsilon)$-bi-Lipschitz.

Remark 2.4. Our notation for the constant dependence $\delta(\tilde\delta \|f\|_\infty \mid \alpha, \lambda, \epsilon)$ means that $\delta \to 0$ as $\tilde\delta \|f\|_\infty \to 0$ with the other constants $\alpha, \lambda, \epsilon$ fixed.
Remark 2.5. The caveat in (1) that $\mathrm{Ric}_{\hat g} > \lambda - C(4)\epsilon$ holds only away from a small neighborhood of the cone point, e.g. $B_{\tilde r/2}(p) \times S^2$, cannot be improved. This is simply due to the fact that the fixed warping structure $C(S^3_{1-\epsilon}) \times_{\delta r^\alpha} S^2$ has infinite negative curvature at the pole; in particular it has $\mathrm{Ric}|_{TS^2} \to -\infty$ as $r \to 0$. In practice, $B_{\tilde r/2}(p) \times S^2$ will be replaced by a rescaled bubble $B$ obtained from Lemma 2.2, which does have the appropriate Ricci lower bound.
In practice the above works as follows. If $M = X \times_f S^2$ is a smooth manifold with a lower Ricci curvature bound and $x \in X$, then the above tells us we can find a potentially very small neighborhood $B_{2\rho}(x) \times S^2$ in which we can change the geometry of $M$. Specifically, after shrinking the warping factor by $\tilde\delta$, we can alter the metric so that the ball $B_{\tilde r \rho}(x) \times S^2$ will be isometric to that of the warped cone $C(S^3_{1-\epsilon}) \times_{\delta\rho(r/\rho)^\alpha} S^2$.
In the case of no warping factor, as per the discussion before the statement of the above Lemma, we can repeat this process indefinitely in order to produce a dense set of singularities. In the case of a warping factor we can similarly repeat this process indefinitely, but also combine with the Inductive Lemma 2.2 in order to glue topology in at each step. We will discuss this construction more carefully in the next step.
2.3. Step 3: Constructing $M_{j+1}$. Let us now see how to use the Inductive Lemmas 2.2 and 2.3 in order to complete the inductive step of the construction and build $M_{j+1}$ from $M_j$. Thus let us assume we have built $M_j$ satisfying (I1)-(I4). Due to the unfortunate number of constants being accounted for in the construction, it is helpful to briefly remark on what will happen. The goal is to construct $M_{j+1}$ in two steps. First we will take each ball $B_{r_j}(x^a_j) \subseteq X_j$ and apply the Inductive Lemma 2.3 in order to add a warped cone singularity on $B_{\tilde r r_j}(x^a_j)$. This space is not smooth at the cone point $x^a_j$ of course, but will have appropriately positive Ricci at least outside of $B_{\tilde r r_j/2}(x^a_j)$. We will then apply the Inductive Lemma 2.2 in order to replace each warped cone $B_{\tilde r r_j}(x^a_j)$ with the smooth bubble metric $B$ of positive Ricci. This will produce $X_{j+1}$, and if we choose the various constants sufficiently small at each stage, we can do this while keeping control of both the space and its relationship to $X_j$. That is, we can show the inductive hypotheses (I1)-(I4) are satisfied.
Let us now describe the modifications to each disjoint ball $B_{2r_j}(x^a_j) \times S^2$ in more detail. It will in fact be more convenient to describe the process on a single rescaled ball $B_2(p) \times S^2$. Observe that the ball $B_2(p) \times S^2$ satisfies the criteria of Lemma 2.3, with Ricci lower bound $r_j^2 \lambda_j$. Therefore let us apply Lemma 2.3 with input parameters $\epsilon', \alpha, \tilde r, \tilde\delta$, and output parameter $\delta_I$. We choose $\epsilon', \alpha, \tilde r$, and $\tilde\delta$ to satisfy the conditions of Lemma 2.3 and Lemma 2.2, but we may further shrink these constants later.
We now seek to replace the original ball $B_{2r_j}(x^a_j) \times S^2 \subseteq M_j$ with the rescaled $r_j B$. By construction, $r_j B$ is a warped product over a compact manifold with boundary, with a collar neighborhood of its boundary isometric to $(A_{r_j, 2r_j}(p) \times S^2, h_j + (\hat\delta f_j)^2 g_{S^2})$, where $\hat\delta := \tilde\delta \min(\delta_I, \delta_{II})/\delta_I$. If we multiply the warping factor of $M_j$ by $\hat\delta$ (which is independent of the choice of point in $\{x^a_j\}$!), then we see that $B_{r_j}(x^a_j) \times S^2$ can be replaced by $r_j B$, glued isometrically along $A_{r_j, 2r_j}(p) \times S^2$. We similarly replace $B_{2r_j}(x^a_j) \times S^2$ for each other $a$ to form $(M_{j+1}, g_{j+1})$.
We have almost completed our construction. For any choices of $\epsilon' > 0$ and $\tilde r > 0$ appropriately small our construction above holds, and we need only make choices. First observe that we can define $\phi_{j+1} : X_{j+1} \to X_j$ to be the identity outside of $\bigcup_a B_{r_j}(x^a_j)$. On the union of annuli $\bigcup_a (B_{r_j}(x^a_j) \setminus B_{\tilde r r_j}(x^a_j))$ we have by Lemma 2.3 that $|D\phi_{j+1} - I| < C\epsilon'$ if we set $\phi_{j+1} := \mathrm{Id}$ setwise on this region. We can use the second part of Lemma 2.2 in order to extend $\phi_{j+1}$ to each glued bubble $r_j B$, so that $|D\phi_{j+1}| < C(6)$ on the balls $B_{\tilde r r_j}(x^a_j)$. If we now choose $\epsilon' < \epsilon'(\delta_{j+1})$ and $\tilde r < \delta_{j+1}/2$ sufficiently small, then we have that (7) holds. This finishes the construction of $M_{j+1}$.
PRELIMINARIES
3.1. Ricci Curvature of Warping Geometry. The underlying ansatz for all constructions going forward is the warped product with S 2 . We have several formulas for the curvature of such spaces which will be used in this paper. In this section we collect together some elementary remarks and formulas about such constructions. These will be the starting point for many of the other formulas computed in this paper.
Let us now be more precise: consider the data of a smooth Riemannian manifold $(X, h)$ with a positive function $f : X \to (0, \infty)$, from which we form the warped product $M := X \times_f S^2$. That is, $X \times_f S^2$ is a Riemannian manifold that topologically has the structure of a (trivial) sphere bundle over $X$; the fibers $\pi^{-1}(x)$, $x \in X$, of the projection map $\pi$ are metrically round spheres of radius $f(x)$, and are orthogonal to the natural sections $X \times \{\omega\}$, $\omega \in S^2$. We will therefore use the orthogonal splitting to make the identification $TM \cong TX \oplus TS^2$. We obtain the following concise formulas for the Ricci curvature of $M$ in the complementary directions (with the analyst's sign convention for $\Delta$):
\[
\mathrm{Ric}_M\big|_{TX} = \mathrm{Ric}_X - \frac{2}{f}\,\nabla^2 f , \qquad
\mathrm{Ric}_M\big|_{TS^2} = \bigl(1 - f\,\Delta f - |\nabla f|^2\bigr)\, g_{S^2} .
\]
Let us make a couple of remarks about the general form of these identities, which we will use without further comment throughout the rest of the paper:

Remark 3.1. There is no cross-term in $\mathrm{Ric}_M$, i.e. $\mathrm{Ric}_M(v, w) = 0$ for $v \in TX$ and $w \in TS^2$. In practice, this splitting of the Ricci curvature into orthogonal blocks will allow us to subdivide the problem of lower bounding $\mathrm{Ric}_M$ into two distinct steps.
Remark 3.2. The scaling action $f \to \lambda f$ for $\lambda > 0$ leaves the $TX$ directions $\mathrm{Ric}_M|_{TX}$ invariant, and acts on the $TS^2$ directions by $\mathrm{Ric}_M|_{TS^2} \to \lambda^2 \mathrm{Ric}_M|_{TS^2} + (1 - \lambda^2)\, g_{S^2}$. Thus, $\mathrm{Ric}_M$ is non-decreasing under the scaling $f \to \lambda f$ when $\lambda \le 1$. In practice, this will mean that $\mathrm{Ric}_M|_{TS^2}$ can be made as large as desired, without disrupting $\mathrm{Ric}_M|_{TX}$, by multiplying $f$ by a suitably small positive number.
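As a quick sanity check of Remark 3.2 (a routine verification, assuming the warped-product identities displayed above rather than any formula specific to this paper), the $TS^2$ block transforms under $f \mapsto \lambda f$ by
\[
1 - (\lambda f)\,\Delta(\lambda f) - |\nabla(\lambda f)|^2
= \lambda^2 \bigl( 1 - f\,\Delta f - |\nabla f|^2 \bigr) + (1 - \lambda^2) ,
\]
which is exactly $\mathrm{Ric}_M|_{TS^2} \mapsto \lambda^2\, \mathrm{Ric}_M|_{TS^2} + (1-\lambda^2)\, g_{S^2}$, and is manifestly non-decreasing for $\lambda \le 1$ since $1 - \lambda^2 \ge 0$.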
3.2. $C^1$ Gluing Lemma for Warping Geometry. We state in this subsection a $C^1$ gluing lemma. It is a slight generalization of a result of Menguy [Men00, Lem. 1.170], and its proof is essentially verbatim. We will make use of it in both steps of the construction:

Lemma 3.1. Let $g = h + f^2 g_{S^k}$ be a warped product metric on $M \times S^k$ which is $C^1$, smooth away from a closed subset of an open set $U$, and satisfies $\mathrm{Ric}_g > \lambda$ on its smooth part. Then for every open set $N \subseteq U$ containing the nonsmooth set and every number $\epsilon > 0$ there exists a smooth metric $g_U = h_U + f_U^2 g_{S^k}$ on $M \times S^k$ such that $\mathrm{Ric}_{g_U} > \lambda$ and $g_U = g$ outside of $U \times S^k$. Moreover, one can arrange that $\|g - g_U\|_{C^1(U \times S^k)} < \epsilon$.

Remark 3.3. It is enough to assume $g$ is $C^4$ on $M \setminus N$.
Remark 3.4. The verbatim result is true for more general warping factors other than spheres.
3.3. Regularity under the Exponential Map. The following is relatively standard, and the proof goes through a series of Jacobi field estimates; however it is surprisingly difficult to find a precise reference for it. For the convenience of the reader we state the result precisely below:

Lemma 3.2 (Regularity of Exponential Map). Let $0 < \eta < 1$ be a number and $(B_1(p), g)$ a metric ball which satisfies the regularity scale estimates (19). Given an orthonormal basis $\{\partial_a\}$ of $T_pM$ let $g_{ab} = \exp^* g$ be the metric in exponential coordinates. Then we can estimate $g_{ab}$ in these coordinates, with $h_{ab}$ being $C^2$ close to the identity and $\ln f$ being $C^3$ close to a constant, cf. Remark 2.3.

Remark 3.5. Let us say a few words about how these estimates can be proved. One rewrites the metric derivatives in terms of the Jacobi vector fields $J_a := r\partial_a$ along radial geodesics passing through $p$.
To obtain estimates on the iterated covariant derivatives $J_{a_1,\ldots,a_k,b} := \nabla_{J_{a_1}} \cdots \nabla_{J_{a_k}} J_b$, one uses the inhomogeneous Jacobi-type equation that they solve, with inhomogeneous term $E_{a_1,\ldots,a_k,b}$. The inhomogeneous term $E_{a_1,\ldots,a_k,b}$ depends only on lower-order covariant derivatives $J_{c_1,\ldots,c_\ell,d}$, $\ell < k$, which inductively have already been estimated, the base case $\ell = 0$ being the standard estimates for Jacobi vector fields as in [Jos17, Ch. 6.5]. One can then proceed to estimate solutions of this inhomogeneous ODE, e.g. by Duhamel's principle.
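For orientation, the classical relations behind this sketch are the following (standard facts about exponential coordinates, recorded for convenience rather than quoted from the paper): the metric components satisfy
\[
g_{ab}(r, \omega) = \frac{1}{r^2}\,\langle J_a, J_b \rangle , \qquad J_a := r\,\partial_a ,
\]
where each $J_a$ is a Jacobi field along the radial geodesic $\gamma_\omega$ with $J_a(0) = 0$, solving
\[
\nabla_{\dot\gamma}\nabla_{\dot\gamma} J_a + R(J_a, \dot\gamma)\dot\gamma = 0 ,
\]
so that curvature bounds on $B_1(p)$ translate into growth bounds on the $J_a$ and hence into $C^k$ control on $g_{ab}$.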
Remark 3.6. Let us briefly compare this regularity estimate and proof sketch with another possible approach. First, one switches to harmonic coordinates and uses the assumed injectivity radius and curvature bounds (19) to obtain C k+1,α control on the metric in these coordinates for any 0 < α < 1, following [And90]. Then, one converts this into C k−1,α control on the metric in exponential coordinates by [DK81, Th. 2.1]. The loss of (1 − α) derivatives compared to Lemma 3.2 is immaterial in our context, where we are only ever working on the regularity scale in a smooth manifold.
Our primary use of the above will be to view the metric $g$ as a form of twisted cone. Namely, in the context where $\mathrm{inj}(p) > 1$ as above we can use exponential coordinates to write the metric $g$ on $B_1(0) \subseteq C(S^{n-1}) = \mathbb{R}^n$ as $g = dr^2 + r^2 g_r$, where $g_r$ is a smooth family of metrics on $S^{n-1}$. The estimates above can then be understood as estimates on this family $g_r$:

Corollary 3.3 (Cone Regularity of Exponential Map). Let $(B_1(p), g)$ be a metric ball which satisfies the regularity scale estimates (19) with $0 < \eta < 1$. Let us use exponential coordinates to write $g = dr^2 + r^2 g_r$ as in (21), where $g_r$ is a family of metrics on $S^{n-1}$. Then we have the estimates
\[
\|g_r - g_{S^{n-1}}\|_{C^2(S^{n-1}, g_r)} \le C(n)\,\eta r^2 , \qquad
\|g_r'\|_{C^1(S^{n-1}, g_r)} \le C(n)\,\eta r , \qquad
\|g_r''\|_{L^\infty(S^{n-1}, g_r)} \le C(n)\,\eta .
\]
STEP 1: THE GLUING BLOCK
In this Section we build the gluing block of Step 1 for our construction. Our 4-manifold of interest for this gluing block is the generating line bundle $E \to S^2$, which we can topologically also view as $\mathbb{C}^2$ blown up at the origin. We will build a metric of positive Ricci curvature on $E \times S^2$ which has the property that at infinity it looks roughly like $\mathbb{R}^4 \times S^2$. More precisely, it will be isometric to the warped product $C(S^3_{1-\epsilon}) \times_{\delta r^\alpha} S^2$ near infinity. The precise Lemma is the following:

Lemma 4.1 (Inductive Step 1). For every $0 < \epsilon < \frac{1}{10}$, $0 < \alpha < \alpha(\epsilon)$ and $0 < \delta < \delta(\epsilon, \alpha)$ there exists a smooth Riemannian manifold $B(\epsilon, \alpha, \delta)$ with $\mathrm{Ric}_B > 0$ which, outside a compact set, is isometric to the warped cone $C(S^3_{1-\epsilon}) \times_{\delta r^\alpha} S^2$. Further, $(B, g)$ has a warped product structure $g := g_E + f^2 g_{S^2}$ for some smooth $f : E \to \mathbb{R}^+$. Further, there exists $\phi : (E, g_E) \to C(S^3_{1-\epsilon})$ such that
(1) $\phi$ is a diffeomorphism away from the cone point $0 \in C(S^3_{1-\epsilon})$, with $\phi^{-1}(0) \cong S^2$ an isometric sphere,
(2) $|D\phi| \le C$ is uniformly bounded with $\phi$ an isometry away from $U_E$.

Remark 4.1. As usual our use of the notation $C(S^3_{1-\epsilon}) \times_{\delta r^\alpha} S^2$ means we are looking at the warped product metric on $C(S^3) \times S^2$ given by $g := dr^2 + (1-\epsilon)^2 r^2 g_{S^3} + \delta^2 r^{2\alpha} g_{S^2}$.
The proof of the above Lemma is broken down over the remainder of this Section. In Step 1.1 of Section 4.1 we begin by writing down a metric on (E, g E,1 ) with nonnegative Ricci curvature, and which looks like a cone at infinity. This cone however may not be close to R 4 at this stage.
In Step 1.2 of Section 4.2 we will write down a metric on $E \times S^2$ of the form $g_2 = g_{E,2} + f_2^2 g_{S^2}$. The base metric $g_{E,2} := g_{E,1}$ will simply be the metric from the first step; however we will now equip the metric with a warped $S^2$ factor $f_2(r) := \delta_2 (1 + r^2)^{\alpha_2/2}$. The polynomial growth of the warping factor will add extra curvature which will be useful in flattening out the cone in the third step.
In Step 1.3 of Section 4.3 we will use the extra curvature provided by the warping factor to slowly increase the cone angle of $g_E$ until it is close to Euclidean. In Step 1.4 we will fix the warping factor so that our space becomes isometric to the warped cone $C(S^3_{1-\epsilon}) \times_{\delta r^\alpha} S^2$ near infinity. At several steps we will only build geometries which are globally $C^1$, but we will end the construction of Lemma 4.1 by applying the $C^1$ smoothing Lemma 3.1 in order to fix this issue.
4.1. Step 1.1: Bubble Metric with Positive Ricci. Consider the generating line bundle $E \to S^2$ and let us observe that the unit sphere bundle is diffeomorphic to $S^3$. In particular, if we remove the zero section then $E \setminus S^2$ is diffeomorphic to $\mathbb{R}^+ \times S^3$. We will begin by writing our metric in this degenerate coordinate system. To do so let us choose the canonical left invariant vector fields $X, Y, Z$ on $S^3$, so that they satisfy the commutator relations $[X,Y] = 2Z$, $[Y,Z] = 2X$, and $[Z,X] = 2Y$. Let $dX, dY, dZ$ denote the dual frames. We first consider a metric on $\mathbb{R}^+ \times S^3$ of the form
\[
g_{E,1} := dr^2 + A(r)^2\, dX^2 + B(r)^2 \bigl( dY^2 + dZ^2 \bigr) . \tag{23}
\]
In order for this to define a smooth metric on $E$ it is required that $A(0) = 0$ with $A^{(\mathrm{even})}(0) = 0$, and $B(0) > 0$ with $B^{(\mathrm{odd})}(0) = 0$. Our construction for this step is the following:

Lemma 4.2. Let $g_{E,1}$ be as in (23), and let $0 < m < \frac{1}{100}$ with $r_1 := 2$. Then there exist $A(r), B(r)$ such that
(1) $A(r) = B(r)$ with $A'(r) = B'(r) = m$ for $r \ge r_1$,
(2) $g_{E,1}$ defines a $C^1$ metric on $E$, smooth away from $r = r_1$, with $\mathrm{Ric} \ge 0$ on the smooth part, and $\mathrm{Ric} \ge \frac{k^2}{2} > 0$ on $U_{r_1} := \{r < r_1\}$ for some constant $k = k(m)$.
Remark 4.2. Note that for $r \ge r_1$ we have that $g_{E,1} = dr^2 + A(r)^2 g_{S^3}$ is exactly the cone metric on $C(S^3_m)$.
Proof of Lemma 4.2. Let $k > 0$ be the smallest number satisfying the relation $m = \cos(k r_1)$. Clearly we then have $\frac{\pi}{3} \le k r_1 < \frac{\pi}{2}$. Let us define $A(r)$ by the formula
\[
A(r) := \begin{cases} k^{-1} \sin(kr), & r \le r_1, \\ k^{-1}\sin(k r_1) + m\,(r - r_1), & r \ge r_1, \end{cases}
\]
and let $B(r)$ be taken so that we can find a smooth function with $B(0) > 0$, $B' = B'' = 0$ for $r < \frac{r_1}{2}$, and $B = A$ for $r \ge r_1$. Note that in this case we necessarily have $0 \le B' \le m \le A'$ and $B \ge A$. Together with $k r_1 < \frac{\pi}{2}$ and $m < \frac{1}{100}$ we can estimate the resulting elementary bounds on $A, B$ and their derivatives. Observe from the behavior of $A$ and $B$ as $r \to 0^+$ that $g_{E,1}$ indeed defines a smooth metric on $E$ near the zero section (see e.g. [Per97]).
Let us now estimate the Ricci curvatures. For $r < \frac{r_1}{2}$ we have (note that $B' = 0 = B''$ here) an expression which is clearly appropriately positive. For $\frac{r_1}{2} \le r < r_1$ we can estimate the curvature directly; again observe that for $m < \frac{1}{100}$ we have the appropriate positivity. Finally, for $r > r_1$ we have (note that $A = B$ is affine here) that, since $m < \frac{1}{100}$, $\mathrm{Ric} \ge 0$ for $r > r_1$. This completes the construction.
4.2. Step 1.2: Bubble Metric with $S^2$ Warping Factor. Recall we ended the last step by constructing a metric $g_{E,1}$ on $E \to S^2$ which has nonnegative Ricci curvature and is a cone $C(S^3_m)$ outside a compact subset. The sphere $S^3_m$ in this cone is however potentially quite small, and we will want to take the radius of this sphere closer to 1 in order to geometrically flatten out the space.
In this next step of the construction, we want to add to $g_{E,1}$ a warped $S^2$ factor. This factor will add additional curvature to the radial directions which will be used in subsequent sections to flatten out our cone structure. In this Step we will look for a metric of the form
\[
g_2 := g_{E,2} + f_2(r)^2 g_{S^2} = g_{E,1} + f_2(r)^2 g_{S^2} . \tag{31}
\]
In particular, we will not change the metric on our base $E$ in this Step. The warping factor $f_2(r)$ will be given explicitly by
\[
f_2(r) := \delta_2 (1 + r^2)^{\alpha_2/2} . \tag{32}
\]
Our main purpose in this Step is to see that $g_2$ always has positive Ricci curvature:

Lemma 4.3. Let $g_2$ be as in (31) and (32). Then for any $0 < \alpha_2 \le \alpha_2(m) \le \frac{1}{2}$ and $0 < \delta_2 < 1$, $g_2$ defines a $C^1$ metric on $E \times S^2$, smooth away from $r = r_1$, with $\mathrm{Ric} > 0$ on the smooth part.
4.3. Step 1.3: Flattening the Cone. Our goal in this Step of the construction is to look for a metric on $E \times S^2$ of the form $g_3 = g_{E,3} + f_3(r)^2 g_{S^2}$, where the warping factor $f_3(r) := f_2(r) = \delta_2 (1 + r^2)^{\alpha_2/2}$ will remain unchanged, up to further restrictions on the parameters $\delta_2$ and $\alpha_2$. The base metric $g_{E,3}$ should look like a flat cone $C(S^3_{1-\epsilon})$ outside some large radius, and will more generally satisfy $g_{E,3} = g_{E,2}$ if $r \le r_1$, where the warping function $h_3$ will be smooth on $[r_1, \infty)$ and satisfy (38)-(40). The following lemma will tell us that for $r_3$ sufficiently large the resulting metric will have positive Ricci curvature:

Lemma 4.4. Let $g_3$ satisfy (38), (39), (40) with $\alpha_2 < \alpha_2(\epsilon, m) \le \frac{1}{2}$, $r_3 \ge r_3(m, \alpha_2, \epsilon)$ and $\delta_2 < \delta_2(m)$. Then $g_3$ is smooth away from $r = r_1$, globally $C^1$, and satisfies $\mathrm{Ric} > 0$ on the smooth region.
Proof. We will focus our computations in the range $r \in [r_1, r_3]$, as in the range $r \in [r_3, \infty)$ the metric $g_{E,3}$ is again conic and the estimate will be similar to the previous subsection. Let us begin by computing the Ricci curvature of the ansatz $g_3 = dr^2 + h_3(r)^2 g_{S^3} + f_3(r)^2 g_{S^2}$ as
\[
\mathrm{Ric}_{rr} = -3\frac{h_3''}{h_3} - 2\frac{f_3''}{f_3} , \qquad
\mathrm{Ric}_{ii} = -\frac{h_3''}{h_3} + 2\frac{1 - (h_3')^2}{h_3^2} - 2\frac{h_3' f_3'}{h_3 f_3} , \qquad
\mathrm{Ric}_{\alpha\alpha} = -\frac{f_3''}{f_3} + \frac{1 - (f_3')^2}{f_3^2} - 3\frac{h_3' f_3'}{h_3 f_3} , \tag{41}
\]
where $i$ denotes unit directions tangent to $S^3$ and $\alpha$ unit directions tangent to $S^2$. Let us impose the restriction $\alpha_2 \le \frac{1}{2}$ and calculate $m r_1 \le A(r_1) \le \frac{9}{10} r_1 \le (1-\epsilon) r_1$. Then in the range $r \in [r_1, r_3]$, let us observe elementary estimates on $h_3, f_3$ and their derivatives; if we plug these estimates into (41) then we arrive at three inequalities. It follows from the first inequality that if $r_3 \ge r_3(m, \alpha_2)$ then $\mathrm{Ric}_{rr} > 0$. It follows from the second inequality that if $\alpha_2 \le \alpha_2(m, \epsilon)$ and $r_3 \ge r_3(m, \epsilon)$ then $\mathrm{Ric}_{ii} > 0$. Finally we see from the last equation that if $\delta_2 \le \delta_2(m)$ then $\mathrm{Ric}_{\alpha\alpha} > 0$.
4.4. Step 1.4: The Warped Cone Metric. In the last step of the construction we built a global metric $g_3$ on $E \times S^2$ such that outside the compact set $U_3 := \{r \le r_3\}$ the metric $g_3$ can be written as a cone over $S^3$ with vertex shifted to $R_3$, where $R_3$ solves $h(r_3) = (1-\epsilon)(r_3 - R_3)$. Observe that $R_3 > 0$ under our assumptions on the parameters. Our goal in this Step of the construction is to build a metric $g_4$ which agrees with $g_3$ for $r \le r_3$, but for $r \ge r_3$ should take the form (45), where our warping factor satisfies (46). We want our warping function to be globally $C^1$, and thus we will choose the matching constants accordingly. Our final Lemma is that for $\alpha_2$ and $\delta_2$ sufficiently small our new metric $g_4$ has positive Ricci curvature:

Lemma 4.5. Let $g_4$ satisfy (45) and (46). If we further choose $\alpha_2 \le \alpha_2(\epsilon)$ and $\delta_2 \le \delta_2(\alpha_2)$ then for $r > r_3$ we have that $\mathrm{Ric} > 0$.
Remark 4.3. Notice that after the change of coordinates $t := r - R_3$, the metric $g_4$ above takes the desired form as in Lemma 4.1.
Proof. The range $r > r_3$ corresponds exactly to $t > \frac{h(r_3)}{1-\epsilon}$, and in these new coordinates we can compute the Ricci curvature of $g_4$ directly. Notice that as $\alpha_2 \to 0$ we have that $\alpha \to 0$, and similarly (after fixing $\alpha_2, \alpha, r_3$) we have that $\delta_2 \to 0$ as $\delta \to 0$. Thus for $\alpha_2 \le \alpha_2(\epsilon)$ the second term is uniformly positive. Finally for $\delta_2 \le \delta_2(\alpha_2)$ we have that the third term is uniformly positive.
4.5. Finishing the Proof of Lemma 4.1. For $\epsilon > 0$ and $m = 10^{-3}$ fixed we can now choose $\alpha < \alpha(\epsilon)$ and $\delta < \delta(\epsilon, \alpha)$. Let us now equip $E \times S^2$ with the metric $g_4$. Recall that this metric is smooth away from $r \in \{r_1, r_3\}$, globally $C^1$, and satisfies $\mathrm{Ric} > 0$ on the smooth part. We can now apply the $C^1$ smoothing Lemma 3.1 in order to build a smooth metric $g = g_E + f^2 g_{S^2}$ on $E \times S^2$ with $\mathrm{Ric} > 0$ such that $g = g_4$ for $r \ge 2r_3$. This completes the construction of $B = B(\epsilon, \alpha, \delta)$. What remains is to define and study the projection map $\phi : (E, g_E) \to C(S^3_{1-\epsilon})$.
Recall that we have coordinates $(r, \omega)$ on $\mathbb{R}^+ \times S^3$, and we have identified $E \setminus S^2$ and $C(S^3) \setminus \{0\}$ with $\mathbb{R}^+ \times S^3$. Our mapping $\phi : E \to C(S^3)$ will then take the form $\phi(r, \omega) = (\lambda(r), \omega)$. Note that since $\lambda' \le 1$ and $\lambda(0) = 0$, we have $\lambda(r) \le r$ for all $r > 0$. Also observe that $\phi$ sends the zero section $S^2$ of $E$ to the cone point $0$ of $C(S^3_{1-\epsilon})$. It then suffices to estimate $|D\phi|$ in terms of the metrics on $E$ and $C(S^3_{1-\epsilon})$. Note that as the $C^1$ gluing lemma produces a smooth metric on $E$ which is an arbitrarily small $C^1$ perturbation, it is enough to estimate in the metric $g_4$ on $E$.
For $r < r_1$, we have (using the notation and results in Lemma 4.2) a uniform bound on $|D\phi|$; for $r_1 < r < r_3$, we have (using the notation and results in Lemma 4.4) an analogous uniform bound. Since $m = 10^{-3}$ is chosen universally, the bounds above do not depend on any other parameters.
For r > r 3 , Dφ is an isometry. This concludes the proof of Lemma 4.1.
STEP 2: ADDING CONICAL SINGULARITIES
We complete Step 2 of the construction in this Section. Namely, we want to see how to take a manifold $M = X^4 \times_f S^2$ and add a cone point in any arbitrarily small neighborhood of $X^4$ while (almost) preserving a Ricci curvature lower bound. Our primary setup, essentially after rescaling on the regularity scale of $M$, is to assume we are faced with a warped product space $(B_2(p) \times S^2, g)$ with metric $g = g_B + f^2 g_{S^2}$, under the assumptions
\[
\mathrm{Ric}_g > \lambda g , \qquad \mathrm{inj}_{g_B}(p) > 2 . \tag{54}
\]
Note that there are no assumptions about the sign of $\lambda \in \mathbb{R}$. Observe that the above hold for any warped product $M^4 \times_f S^2$ so long as we work on the regularity scale. Our main result in this Section is the following:

Lemma 5.1 (Inductive Step 2). Consider a warped product space $(B^4_2(p) \times S^2, g)$ with metric $g = g_B + f^2 g_{S^2}$ satisfying (54), and write $r := \mathrm{dist}_{g_B}(\cdot, p)$. Then for all choices of parameters $0 < \epsilon < \epsilon(|\lambda|)$, $0 < \alpha < \alpha(\epsilon)$, $0 < \tilde r < \tilde r(\alpha, \lambda, \epsilon)$, and $0 < \tilde\delta < \tilde\delta(\lambda, \alpha, \epsilon, \|f\|_{L^\infty}, \tilde r)$, there exists $0 < \delta = \delta(\tilde\delta \|f\|_\infty \mid \alpha, \lambda, \epsilon)$ and a warped product metric $\hat g = \hat g_B + \hat f^2 g_{S^2}$ such that:
(1) The Ricci lower bound $\mathrm{Ric}_{\hat g} > \lambda - C(4)\epsilon$ holds for $\tilde r/2 \le r \le 2$,
(2) $\hat g = g_B + \tilde\delta^2 f^2 g_{S^2}$ is unchanged up to scaling the warping factor $f$ by $\tilde\delta$ for $1 \le r \le 2$,
(3) $\hat g = dr^2 + (1-\epsilon)^2 r^2 g_{S^3} + \delta^2 r^{2\alpha} g_{S^2}$ has the cone warping structure $C(S^3_{1-\epsilon}) \times_{\delta r^\alpha} S^2$ for $r \le \tilde r$,
(4) the identity map $\mathrm{Id} : (B_2(p), \hat g_B) \to (B_2(p), g_B)$ is $(1+2\epsilon)$-bi-Lipschitz.

Remark 5.1. Most of the results of this section hold in general dimensions, where the constants should then include a dimensional dependence. Our notation $C(4)$ denotes that the constant only depends on the dimension $n = 4$.
The construction will be broken down into three steps. In Step 2.1 of Section 5.1 we begin by writing the metric $g_B = dr^2 + r^2 g_r$ in exponential polar coordinates, where $g_r$ is a smooth family of metrics on $S^3$ which naturally converges to the standard metric as $r \to 0$. Our primary goal in Step 2.1 is to alter the base metric $g_B$ to a metric $g_{B,1}$. The metric $g_{B,1}$ will agree with $g_B$ for $1 \le r \le 2$; however it will take the form $g_{B,1} = dr^2 + (1-\epsilon)^2 r^2 g_r$ for $r \le r_1$. This will give $g_{B,1}$ a large amount of additional positive Ricci curvature in the nonradial directions, which we will exploit in future steps. Additionally, we are able to ensure that $\mathrm{Ric}$ of the total space drops by at worst an $\epsilon$-small amount when passing from $g$ to $g_1$.
In Step 2.2 of Section 5.2 we focus on the $S^2$ warping factor and leave the base $g_{B,2} := g_{B,1}$ fixed. Our goal will be to construct a warping factor $f_2$ so that $f_2 = \tilde\delta f_1$ for $r \ge r_1$, while $f_2 = \delta r^\alpha$ for $r \le r_2$. The effect of this will be to add a large amount of Ricci curvature to the radial direction of $g_2 := g_{B,2} + f_2^2 g_{S^2}$.
In the final Step 2.3 of Section 5.3 we will use the additional positive Ricci curvature introduced in the first two steps to once again alter the base metric $g_{B,3}$, while fixing $f_3 := f_2$. We will preserve $g_{B,3} = g_{B,2}$ for $r \ge r_2$; however for $r \le r_3$ we will ensure that $g_{B,3} = dr^2 + (1-\epsilon)^2 r^2 g_{S^3}$ is the standard cone $C(S^3_{1-\epsilon})$. This will complete the construction of $\hat g$, and in Section 5.4 we will check the final bi-Lipschitz property of the construction.
5.1. Step 2.1: Decreasing the Cone Angle. Consider the metric $g_B$ on $B_2(p)$, and by using the radial function $r := d(\cdot, p)$ and exponential coordinates let us write $g_B$ as $g_B = dr^2 + r^2 g_r$, where $g_r$ is a smooth family of metrics on $S^3$. It follows from Corollary 3.3, recalling that $\eta < 1$, that we have the estimates (56), whose first line reads $(2 - C(4)\eta r^2)\, g_r \le \mathrm{Ric}_{g_r}$. Let us remark that the first line of (56) follows either from the $C^2$ estimate of Corollary 3.3 or directly from the second line of (56) by applying the Gauss equations and the identity $\frac{1}{2}(r^2 g_r)' = \mathrm{II}_{\partial B_r(p)}$.
In this subsection we will look for a metric under the ansatz
\[
g_1 = g_{B,1} + f_1^2 g_{S^2} , \qquad g_{B,1} = dr^2 + h(r)^2 r^2 g_r . \tag{58}
\]
Observe that $g_r$ is the original family of metrics on $S^3$ and that $f_1$ is a function on $B_2(p)$. For $r_1 := 1/2$, the function $h(r)$ will be chosen as any smooth function with the properties that
\[
h \equiv 1 \text{ for } 1 \le r \le 2 , \qquad h \equiv 1-\epsilon \text{ for } r \le r_1 . \tag{59}
\]
In particular, this construction implies that $g_{B,1} = g_B$ for $1 \le r \le 2$, while $g_{B,1} = dr^2 + (1-\epsilon)^2 r^2 g_r$ for $r \le r_1$. Note that on $B_{r_1}(p)$ we have introduced a cone singularity at $p$, and we will see that this introduces a scale invariant (positive) blow up of the Ricci curvature near $p$. Our main result in this subsection is that for $\epsilon$ sufficiently small, the metric $g_1$ has Ricci curvature that drops by an arbitrarily small amount from that of $g$:

Lemma 5.2. Let $g$ satisfy the assumptions of Lemma 5.1 with $g_{B,1}$ defined as in (58) and (59). Then for any $0 < \epsilon < \epsilon(|\lambda|)$, we have that $\mathrm{Ric}_{g_1}|_{TX} > \lambda - C(4)\epsilon$, together with a two-term lower bound on $\mathrm{Ric}_{g_1}|_{TS^2}$ (see Remark 5.3).

Remark 5.3. The reader may wonder at the disagreeable $\mathrm{Ric}_{g_1}|_{TS^2}$ estimate. This is simply an artifact of the division of the proof of Lemma 5.1 into distinct steps. As soon as the radius $r_2$ is chosen in Step 5.2, we will multiply $f$ by the small but positive number $0 < \tilde\delta = \tilde\delta(r_2, \ldots)$ so that the first term of the above $\mathrm{Ric}_{g_1}|_{TS^2}$ lower bound dominates the second in the range $r_2 \le r \le 2$.
5.2. Step 2.2: The $S^2$ Warping Factor. We now want to alter the metric $g_1$ in the range $r_2 \le r \le r_1 = \frac{1}{2}$. We are looking for a metric $g_2$ of the form $g_2 = g_{B,2} + f_2^2 g_{S^2}$ with $g_{B,2} := g_{B,1}$; in particular, we will not alter the base metric $g_{B,1}$ in this step. Our warping function $f_2 : B_2 \to \mathbb{R}^+$ is not a radial function everywhere, though one of our goals will be to make it radial on small radii. We will want $f_2$ to satisfy the properties listed in the lemma below. Due to the nonradial nature of $f_1$, there is some subtlety which makes a naive interpolation between $f_1$ and $\delta r^\alpha$ insufficient. Morally, this is due to the uncontrolled positivity of $\mathrm{Ric}$, which can contribute very negative terms if altered in a careless manner. The following will be the main constructive lemma for $f_2$ in this subsection:

Lemma 5.3. Let $(B_2(p), g_B)$ and $f_1$ be as in Lemma 5.2. Then for each $0 < \alpha < 1$, $0 < \tilde\delta < 1$, and $0 < r_2 < r_2(\alpha)$, there exists $f_2 : B_2(p) \to \mathbb{R}^+$ with $r_2 < r_2^+ = C(4)\, r_2 < r_1$ and $\delta = \delta(\tilde\delta \|f\|_\infty \mid r_2)$ such that
(1) $f_2 = \tilde\delta f_1$ on the region $r_2^+ \le r \le 2$,
(2) $f_2 = \delta r^\alpha$ on the region $r \le r_2$,
(3)-(4) concavity and derivative estimates of order $\alpha r^{-2}$ hold on the region $r_2 \le r \le r_2^+$ (cf. Remark 5.4),
(5) $\|f_2\|_\infty \le C(4)\,(\tilde\delta \|f\|_\infty)^{1/2}$ on the region $r_2 \le r \le r_2^+$,
(6) $f_2$ is smooth away from $r \in \{0, r_2, r_2^+\}$, and $C^1$ everywhere except $\{p\}$.
Remark 5.4. The C 1 nature of f 2 is due to (3), where we force a definite amount of radial concavity throughout the interpolation region. This will later be smoothed with a C 1 gluing lemma.
Proof of Lemma 5.3. Before diving into the proof, we establish some new notation for the sake of legibility. We label $f_+ := \tilde\delta f_1$ and $f_- := \delta r^\alpha$, where $\delta$ will be specified later in the proof. Also, instead of referring to the radii $r_2$ and $r_2^+$ directly, it will be convenient to write the interval of interpolation $(r_2, r_2^+)$ in terms of a midpoint $r_m := (r_2 + r_2^+)/2$ and a radius $\rho r_m := (r_2^+ - r_2)/2$. In particular, the choice of radii $r_2, r_2^+$ from the statement of Lemma 5.3 will instead take the form of a choice of a constant $\rho$ and radius $r_m < r_m(\alpha)$.
The remaining proof has multiple steps, which we will break down into pieces.

(Locating the Intersection Set of $f_-$ and $f_+$): We first seek to set it up so that the intersection set $\{f_- = f_+\}$ is approximately at our radius $r_m$, which requires selecting $\delta$ depending on $r_m$. We will estimate the deviation of the intersection set from this radius. These estimates will be used in the next steps of the proof.
Begin by observing that the $\nabla_{g_B} \ln f$ bound gives an estimate on how much $f_+$ can vary over the region of interest. One then checks under what conditions $f_-$ takes values in this same interval. It is at this point that we make the choice $\delta := f_+(p)\, r_m^{-\alpha} e^{2\eta r_m}$. For this choice of $\delta$, we have $(\delta^{-1} f_+(p) e^{2\eta r_m})^{1/\alpha} = r_m$, and thus when (77) holds we have $r \le 2 r_m$. In particular, the intersection set $\{r : f_+(r, \omega) = f_-(r)\}$ is nonempty for each $\omega \in S^3$ by the intermediate value theorem. We may therefore define $g(\omega) \in [r_m e^{-4\eta r_m/\alpha}, r_m]$ for each $\omega \in S^3$ to be a radius such that $f_+(g(\omega), \omega) = f_-(g(\omega))$. We can ensure that $(g(\omega), \omega)$ always lies in the interpolation region $r \in ((1-\rho)r_m, (1+\rho)r_m)$ by requiring that $r_m < r_m(\alpha)$ is small enough that $e^{-4\eta r_m/\alpha} > 1 - \rho$. It is not necessary that $g$ be continuously defined.
(Definition of $f_2$ and $C^1$ Cubic Interpolation): Our construction of $f_2$ will make use of the general notion of a $C^1$ cubic interpolation. Namely, we will ask that $f_2$ be the uniquely defined $C^1$ function satisfying the cubic interpolation conditions at the endpoints. We see from the above that $Q(r, \omega)$ is therefore well defined by the values of $f_\pm$ and $f'_\pm$ at the end points of the interval $[(1-\rho)r_m, (1+\rho)r_m]$. It will be helpful to write the form of $Q(r, \omega)$ explicitly.

(Concavity Estimates for $\ln f_2$): We can now estimate $(\ln f_2)''$ in the interpolation region $r \in ((1-\rho)r_m, (1+\rho)r_m)$. Below, we use the mean value theorem to estimate the difference, at $g(\omega)$, between $r \mapsto \ln f_\pm(r, \omega)$ and its linearization centered at $r = (1\pm\rho)r_m$. The desired concavity should arise from the first term of (82). We therefore pick the universal constant $\rho > 0$ small enough so that $-\frac{1}{2\rho} + \frac{C}{1-\rho} < -\frac{4}{1-\rho}$, and then $r_m \le r_m(\alpha)$ small enough so that $|1 - e^{-4\eta r_m/\alpha}| \le \frac{2\rho^2}{3(1-\rho)}$. Since $g(\omega) \in (r_m e^{-4\eta r_m/\alpha}, r_m)$, the constraint on $r_m$ implies that $|g(\omega) - r_m| \le \frac{2\rho^2}{3(1-\rho)} r_m$. We also require that $r_m \le r_m(\alpha)$ is small enough so that $\frac{\eta}{r_m} \le \bigl(\frac{1}{2\rho} + \frac{1}{1-\rho} + C r_m\bigr)^{-1} \frac{\alpha}{(1-\rho)^2 r_m^2}$ in order to absorb the lower order second term of (82). This yields the claimed concavity.

(Radial Zeroth and First Order Estimates on $f_2$): We begin this step by obtaining a bound on $(\ln f_2)'$. This allows us to turn the concavity estimates for $\ln f_2$ into concavity estimates for $f_2$. Additionally we will use the estimate on $(\ln f_2)'$ to obtain $L^\infty$ control for $f_2$ in the interpolation region.
(Remaining Derivative Estimates): The last remaining item is Lemma 5.3.4, which we now turn to. The constraint r m ≤ r m (α) is also finalized in this last step.
For the remaining computations, we pick coordinates $\partial_i$ on $S^3$ that are normal at $\omega \in S^3$ for the metric $g_r$, and assume that we are working in the region $r \in ((1-\rho)r_m, (1+\rho)r_m)$. Before launching into the remaining derivative estimates, we recall two basic bounds that we will need, both from Corollary 3.3: $\|g_r'\|_{L^\infty(S^3, g_r)} \le C(4)\eta r$ and $\|g_r''\|_{L^\infty(S^3, g_r)} \le C(4)\eta$.
Proof. For radii $r > 2r_3$, the Ricci lower bound follows from Lemma 5.4 and the fact that $g_2 = g_3$ in this region. Thus, we assume for the rest of the proof that $r \le 2r_3$. The main basic estimates we will need are from Corollary 3.3, namely $\|g_r - g_{S^3}\|_{C^2(S^3, g_r)} \le C\eta r^2$, $\|g_r'\|_{C^1(S^3, g_r)} \le C\eta r$, and $\|g_r''\|_{L^\infty(S^3, g_r)} \le C\eta$.
We see from these estimates that requiring $\alpha \le C\epsilon$, $r_3 \le r_3(\alpha, \epsilon, \lambda)$, and $\delta < \delta(\tilde r, |\lambda|)$ small enough guarantees that $\mathrm{Ric}_{g_3} > \lambda$ for the range $\tilde r/2 \le r \le 2r_3$ under consideration. Note that without loss of generality these requirements on $\alpha$ and $\delta$ (by way of $\tilde\delta$) held when they were chosen in Lemma 5.4.

5.4. Proof of Lemma 5.1. By construction, the metric $g_3$ with inner radius $\tilde r$ subject to $\tilde r < r_3 = r_3(\alpha, \lambda, \epsilon)$ from Lemma 5.5 satisfies $\mathrm{Ric}_{\hat g} > \lambda - C(4)\epsilon$ on $(B_2(p) \setminus B_{\tilde r/2}(p)) \times S^2$ and all the conditions of Lemma 5.1, except that it is not smooth. This metric is globally $C^1$ away from $p$, but fails to be smooth at $r \in \{0, r_2, r_2^+\}$. The singularity at $0$ is as described in the statement of Lemma 5.1, and the other two radii can be smoothed within the class of warped product metrics to a metric $\hat g = \hat g_B + \hat f^2 g_{S^2}$ while preserving the Ricci bound by the $C^1$ gluing of Lemma 3.1.
The last remaining statement of Lemma 5.1 to be proved is that the identity map $\mathrm{Id} : (B_2(p), \hat g_B) \to (B_2(p), g_B)$ is $(1+2\epsilon)$-bi-Lipschitz. As the smooth metric $\hat g$ can be made $C^1$ close to $g_3$, we can prove a slightly stronger estimate directly for $g_3$ and the result will follow for our final smooth metric as in Lemma 3.1.
Study of SOC dynamic estimation method of power lithium battery
The Partnership for a New Generation of Vehicles (PNGV) battery equivalent model is established on the basis of existing charging-state estimation methods. Based on the Kalman filtering algorithm, a state space expression built on this battery model is established. Finally, Matlab/Simulink is used for the simulation calculations. Simulation and experiment show that the selected PNGV model has high precision and can faithfully reproduce the charging and discharging characteristics of the battery, so that the SOC estimate is kept within a high-precision range.
Introduction
Power batteries are batteries that provide power for transportation vehicles [1,2]. According to the reaction principle, batteries can be divided into lead-acid power batteries, nickel metal hydride power batteries, lithium ion power batteries and so on. Compared with other power batteries, the li-ion battery has the advantages of high specific energy, high single-cell voltage, long cycle life, low self-discharge rate, no memory effect, strong adaptability to high and low temperatures and no pollution [3]. Therefore, it is one of the most promising and widely used power batteries.
Li-ion cells are generally connected together into a lithium battery pack, so a corresponding battery management system must be designed to manage them; the state of charge (SOC) is one of the most important parameters of the battery management system [4,5]. SOC is an important parameter describing the remaining capacity of a li-ion battery and plays a vital role in obtaining its best performance [6].
Common SOC estimation methods
According to the United States Advanced Battery Consortium, the SOC is defined as the percentage of the battery's remaining capacity relative to its rated capacity at a given discharge rate:
SOC = (Q_remaining / Q_rated) × 100% .
At present, battery charging state estimation methods mainly include the ampere-hour integral method, the open circuit voltage method, the internal resistance analysis method, the artificial neural network method and Kalman filtering [7]. In this paper, the PNGV equivalent circuit model of the li-ion battery and an unscented Kalman filter based on this circuit model are proposed to estimate SOC, and Matlab is then used for verification and simulation.
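As a point of reference for the ampere-hour (coulomb counting) method just mentioned, the running integral of current can be written in a few lines. The sketch below is illustrative only, with the sign convention that discharge current is positive; the function name and parameters are ours, not from the paper.

def coulomb_count(soc0, currents_A, dt_s, capacity_Ah):
    """Ampere-hour integration: SOC(k+1) = SOC(k) - I(k)*dt / (3600*Q)."""
    soc, history = soc0, []
    for i in currents_A:
        soc -= i * dt_s / (3600.0 * capacity_Ah)  # discharge (I > 0) lowers SOC
        history.append(soc)
    return history

# Example: a 100 Ah cell discharged at 100 A for one hour goes from 100% to ~0%.
print(coulomb_count(1.0, [100.0] * 3600, 1.0, 100.0)[-1])

The weakness motivating the Kalman approach is visible here: any error in the initial SOC or in the current sensor accumulates without bound, since there is no feedback from the measured voltage.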
The establishment of equivalent circuit model of li-ion battery
In the equations of state of the PNGV model, the state variables are the voltages across the two capacitors of the equivalent circuit, and their derivatives with respect to time appear on the left-hand sides.
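For concreteness, a commonly quoted form of the PNGV state equations is the following sketch; the symbols $U_b, U_p, C_b, R_p, C_p, R_0$ and the convention of positive discharge current $I$ are standard choices on our part rather than notation taken from this paper:
\[
\dot U_b = \frac{I}{C_b} , \qquad
\dot U_p = \frac{I}{C_p} - \frac{U_p}{R_p C_p} , \qquad
U_L = U_{OC} - U_b - U_p - I\,R_0 ,
\]
where $U_L$ is the terminal (working) voltage, $U_{OC}$ the open-circuit voltage, $R_0$ the ohmic resistance, $C_b$ the bulk capacitance describing the variation of open-circuit voltage with accumulated charge, and $(R_p, C_p)$ the polarization pair.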
The unscented Kalman filtering algorithm based on the PNGV battery model
Kalman filtering is an algorithm that uses the state equation of a linear system to estimate the state of the system optimally from the input and output observation data of the system. For a discrete system, the Kalman state space model is
x(k+1) = A x(k) + B u(k) + w(k) ,  y(k) = C x(k) + v(k) ,
where u is the input vector of the system (including current, SOC, internal resistance, temperature, etc.), y is the output of the system and represents the working voltage of the battery, the matrices A, B and C are determined by the parameters obtained in the experiment, w is the process noise variable and v is the observation noise variable. According to the above, the SOC estimation flow chart is shown in Fig. 2.
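To make the filtering loop concrete, here is a minimal linear Kalman sketch for SOC. It is illustrative only: the one-state model, the linearized voltage slope c and offset d, and all numbers are assumptions; the paper's unscented variant instead propagates sigma points through the full nonlinear PNGV model rather than linearizing.

import numpy as np

def kf_soc(soc0, currents, voltages, dt, Q_Ah, c, d, q=1e-7, r=1e-3):
    """One-state Kalman filter: predict SOC by coulomb counting,
    correct with a linearized terminal-voltage model y = c*soc + d."""
    x, P = soc0, 1e-2                      # state estimate and its variance
    out = []
    for I, y in zip(currents, voltages):
        x = x - I * dt / (3600.0 * Q_Ah)   # predict: coulomb counting step
        P = P + q                          # process noise inflates variance
        K = P * c / (c * c * P + r)        # scalar Kalman gain
        x = x + K * (y - (c * x + d))      # correct with voltage innovation
        P = (1.0 - K * c) * P
        out.append(x)
    return out

# Toy check: synthetic 100 Ah cell at 100 A constant discharge for one hour.
true_soc = np.linspace(1.0, 0.0, 3600)
meas = 0.8 * true_soc + 3.0 + np.random.normal(0.0, 0.005, 3600)
est = kf_soc(0.95, [100.0] * 3600, meas, 1.0, 100.0, c=0.8, d=3.0)
print(f"final SOC estimate: {est[-1]:.3f}")   # near 0 despite the wrong start

The voltage feedback is what lets the filter recover from a wrong initial SOC, which pure coulomb counting cannot do.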
Analysis of the battery SOC simulation results
The experiment is carried out in the Matlab environment. The specifications of the li-ion battery are as follows: the output current is 100 A and the battery capacity is 100 Ah, and the simulation test of the li-ion battery is carried out under constant current conditions at a room temperature of 25 °C. The curves of the experimental results and of the unscented Kalman filter SOC estimate during discharge are shown in Fig. 3. Table 1 shows the error between the unscented Kalman algorithm and the true value. From the analysis of capacity, it can be seen that the simulated discharge capacity is 96.61% of the actual discharge capacity.
It can be seen from Fig. 3 and Table 1 that under the constant current condition, the two methods show good consistency in estimating the current SOC value. The interpolated SOC in the figure shows abrupt changes at a few points, which are caused by model parameter error.
Conclusion
The prediction and estimation of SOC, as an important part of the li-ion battery management system, is of practical significance. In this paper, according to the PNGV dynamic battery equivalence model, considering the influence of temperature on the model parameter values and based on the unscented Kalman filtering algorithm for the PNGV battery model, the SOC estimation block diagram is established from the state equations and a simulation analysis is carried out. Comparing the simulated terminal voltage with the measured terminal voltage, the error is only 3.39%. The results show that the model parameters are accurate and effective, and that the PNGV model has high precision.
Depth dependence of itinerant character in Mn-substituted Sr3Ru2O7
We present a core-level photoemission study of Sr3(Ru1-xMnx)2O7, in which we monitor the evolution of the Ru-3d fine structure versus Mn substitution and probing depth. In both the Ru 3d3/2 and 3d5/2 core levels we observe a clear suppression of the metallic features, i.e. the screened peaks, implying a sharp transition from itinerant to localized character already at low Mn concentrations. The comparison between soft and hard x-ray photoemission, which provides tunable depth sensitivity, reveals that the degree of localized/metallic character of Ru is different at the surface than in the bulk.
Introduction
The change in carrier density induced by chemical doping is one of the most commonly used techniques to tailor novel properties in a variety of materials, including strongly correlated electron systems.
Transition metal oxides (TMO) are paradigmatic in this sense: they often display many different electronic phases that are quite close in energy. A comprehensive description of TMO can be reached only by disentangling such different, yet comparable, energy scales. Research on the Ru-oxide family (Sr,Ca)n+1RunO3n+1 has recently suggested a conceptually different approach, based on the substitution of 4d Ru with a 3d transition metal impurity. A central feature in the physics of Ru-oxides is the spatial extension of the 4d orbitals: the properties of ruthenates are extremely sensitive to the orbital degrees of freedom, resulting in an almost equal chance of displaying itinerant or localized behavior. The substitution of Ru 4d with more localized 3d metal atoms will strongly influence the orbital population, possibly resulting in orbital-induced novel properties. An example of this approach is the use of chromium (Cr4+) in SrRu1-xCrxO3 and CaRu1-xCrxO3 to stabilize itinerant ferromagnetism from the normal paramagnetic state [1,2]. More recently, a 5% Ru-Mn substitution in the bilayer compound Sr3Ru2O7 was proven to change the ground state from a paramagnetic metal to an unconventional, possibly Mott-like antiferromagnetic insulator [3,4]. The interplay between the extended, yet anisotropic, Ru-4d and O-2p bonds and the localized Mn 3d-impurity states was shown to be responsible for a crystal-field level inversion, with Mn not exhibiting the expected Mn4+ valence but rather acting as an Mn3+ acceptor [5], a behavior bearing interesting similarities to the dilute magnetic semiconductor Mn-doped GaAs.
Unravelling what ultimately drives the electronic and magnetic properties of the Sr 3 (Ru 1-x Mn x ) 2 O 7 doped system is strictly linked to a direct measure of the electronic charge distribution over Ru and O-ligand orbitals. Since it is well known that electron correlations in TMO, and in particular in ruthenates, are severely influenced by the surface environment (cleavage plane, electronic/structural reconstruction, defects) [6][7][8][9], a comparison with reliable bulk-sensitive probes is mandatory. Photoemission spectroscopy (PES) possesses all the necessary characteristics in elucidating the aforementioned rich physics. In particular, core-level PES probes the different electronic screening channels via the energy location and relative intensity ratio of specific peaks, in a chemical selective way. This possibility is confirmed by recent experimental results [10], where a systematic study of doping and dimensionality effects in the core-level of various ruthenates has been carried out at fixed photon energy (Al Kα radiation, 1486.7 eV).
In the present paper, we report a study of the Ru-3d core-level fine structure vs. Mn concentration in Sr 3 (Ru 1-x Mn x ) 2 O 7 by soft and hard x-ray PES (HAXPES), hence with tunable depth sensitivity. The choice of focusing on the Ru-3d core levels, as opposed for instance to the Mn-2p spectra, is dictated by their sharper nature, which allows tracking the fine satellite structure more precisely (we measured Mn-2p spectra by HAXPES, obtaining results analogous to those reported in Ref. 10; no extra satellites were observed). Already at low energy, a Mn-induced suppression of the screened metallic features is observed in both Ru-3d 3/2 and 3d 5/2, implying a transition from itinerant to localized character, in analogy with the reported metal-to-insulator transition [3,5]. HAXPES data confirm the change upon Mn substitution, with a clear indication of stronger electronic localization at the surface than in the bulk. Our results suggest a way to control, in the same material, the metallicity of the surface-interface region vs. the bulk one, by exploiting the highly sensitive response of conducting perovskites to impurities.

Figure 1. Evolution vs. time of the Ru-3d core-level spectra from Sr 3 Ru 2 O 7 collected at hν=455 eV (T=80 K) on a freshly cleaved surface and after 10 hours. In the latter case, the intense C-1s contribution located at ~286 eV BE is clearly visible, while the screened peaks on the low-BE side of both Ru-3d 3/2 and 3d 5/2 lines are almost absent.
Experimental methods
High-quality single crystals of Sr 3 (Ru 1-x Mn x ) 2 O 7 , with x = 0, 0.05 and 0.2, were grown by the floating zone technique. PES measurements were performed after fracturing the samples in UHV using two experimental setups: the APE beamline for low-energy PES (Elettra, hν = 455 eV, base pressure 1×10^-10 mbar) [11], and the VOLPE spectrometer for HAXPES (beamline ID16 at ESRF, hν = 7595 eV, base pressure 6×10^-10 mbar) [12]. The spot size in the normal emission geometry was 50×120 µm^2 in both cases, and the overall beamline-analyzer energy resolution was set to 200 meV (APE) and 350 meV (VOLPE).
The Fermi energy and overall energy resolution were estimated by measuring a polycrystalline Au foil in thermal and electric contact with the samples. Identical results have been obtained consistently on several cleaved samples. The cleanliness of the surface was checked by monitoring the C-1s and O-1s spectra.
With soft x-rays, a new cleave was needed every ~4 hours; in contrast, no traces of contamination were observed in the HAXPES measurements over two days and at any temperature.
Results and Discussion
Surface-sensitive (hν = 455 eV) Ru-3d core-level spectra from Sr 3 (Ru 1-x Mn x ) 2 O 7 are presented in Figure 1 and Figure 2. In Figure 1 we identify the Sr-3p 1/2 peak at a binding energy (BE) of about 279 eV and the Ru spin-orbit split doublet 3d 5/2 and 3d 3/2 in the 280-295 eV BE range. Both Ru-3d 5/2 and 3d 3/2 spectra display multiple components, which have already been observed in the ruthenates; it is generally agreed that the low-BE features are not induced by surface-related chemical states, and that their spectral weight increases when the system enters a metallic regime [10, 13-16]. More specifically, each spin-orbit partner comprises a low-BE peak, corresponding to the relaxed lowest-energy core-hole state and referred to as the screened state, and a broader higher-BE structure associated with the unscreened core-hole state. The remarkable sensitivity to the surface environment is observed via the evolution vs. time of the Ru-3d spectral lineshape in pure Sr 3 Ru 2 O 7 . While on freshly cleaved surfaces the various spectral components are well separated, surface contamination strongly changes the lineshapes, as evidenced by: i) intense C-1s structures at ~285 eV BE; ii) a suppression of the screened peaks; iii) the appearance of a shoulder at ~288 eV BE. We emphasize that the average photoelectron mean free path at this photon energy ranges from 4 to 8 Å [17].
In Figure 2 we present the evolution of the Ru-3d spectrum as a function of Mn substitution x. The intense screened features, for both Ru-3d 5/2 and 3d 3/2 , are severely suppressed already at x = 5% and reduce to only a weak shoulder for x = 20%; an energy shift of the screened features of up to 100 meV for 20% Mn substitution is observed, as in Ref. [10]. In addition, the difference spectra at the bottom of Figure 2 highlight the redistribution of spectral weight upon Mn substitution; their lineshape reveals their fine structure (with sizeable intensities around 281.3 and 285.5 eV BE), and clear shoulders on the high-BE side of each spin-orbit partner (centered at 287 and 283 eV BE). These broad shoulders can be ascribed to multiplet structure, which becomes more prominent upon Mn substitution, possibly suggesting the evolution from itinerant to localized character. It is interesting to note that both the 5% and 20% Mn-doped spectra display close similarities with Ru-3d core-level results from Ca 2 RuO 4 , a pure antiferromagnetic insulator [18]. This suggests that the suppression of the screened features is compatible with a metal-insulator transition induced by Mn substitution in Sr 3 (Ru 1-x Mn x ) 2 O 7 .

Figure 2. Ru-3d core-level spectra collected at hν=455 eV (T=80 K) on Sr 3 (Ru 1-x Mn x ) 2 O 7 for x=0, 0.05 and 0.2. The spectra have been normalized at the Sr-3p 1/2 peak (279 eV BE, see Figure 1). A clear suppression of the screened features located at the low-BE side of both spin-orbit partners is observed upon Mn doping. At the bottom of the figure, the difference spectra, obtained after subtraction of the Mn-doped spectrum from the undoped one ((undoped - 20%) and (undoped - 5%)), are presented, highlighting the change of spectral shape. The three bars indicate the energy positions of the three sets of doublets corresponding to the main intensities (screened, unscreened and multiplet). The three doublets have been used to fit the experimental spectra in Figure 4. A shift in the energy position upon Mn doping is observed for the peaks as well as for the multiplet contribution on the high-BE side.
As for the underlying driving mechanism of the transition, a purely electronic scenario was proposed based on the detection by REXS of an associated magnetic superstructure [4] and on the comparison of linear dichroism XAS data and density functional theory calculations [5]. While the evolution of the screened states in the present XPS study is compatible with the proposed electronic-driven transition induced by Mn impurities playing the role of Mn 3+ acceptors [4,5], PES does not allow determining the precise location of the additional holes in the Ru-oxide host [10]. One could expect that two components should be observed in the Ru core-level spectra, associated with Ru 4+ and Ru 5+ respectively. For 5 to 20% Mn substitution, at most 5 to 20% of the Ru atoms would be in a 5+ oxidation state, i.e. a very unfavorable case for measuring different contributions in the broad Ru-3d structure, where at least 80% of the intensity is of Ru 4+ character. Moreover, the induced holes would most likely be in delocalized states involving oxygen ligands, which would make their detection in XPS even less likely. One might therefore speculate about the lack of detection of a Ru 5+ oxidation state, in our data and in Ref. 10. Note also the strong suppression of the screened features already at 5% Mn doping: the 5% spectrum is indeed closer to the 20% one than one might expect based on the bulk properties, suggesting that the metal-insulator transition (MIT) is more pronounced at the surface than in the bulk. The residual intensity observed in the screened features of Figure 2 upon Mn doping can be ascribed to the gradual evolution from metallic to insulator-like character. Although the evolution of the PES intensity clearly identifies the trend, a quantitative analysis is impossible, in particular because the studied system is not a true large-gap insulator.
The extreme sensitivity of the screened features, and hence of the available screening channels, to the local environment calls for a more accurate determination of the metallic-insulating character upon Mn substitution. To this end, in Figure 3 we present HAXPES results, which guarantee a bulk sensitivity of ~8 nm [19]. The relative intensity of the Sr and Ru peaks is significantly different with respect to the soft x-ray data (compare Figure 1 and Figure 3a), which stems from the change in the p/d cross-section ratio when passing from soft to hard x-ray PES [19,20]. Zooming in on the Ru-3d region (Figure 3b), the evolution as a function of Mn concentration is confirmed, but the relative peak-height ratio between unscreened and screened features is reversed with respect to the surface-sensitive results. Based on the enhanced probing depth of HAXPES [17,19], the intensity increase of screened peaks observed on 3d-based TMO in the hard x-ray regime has recently been interpreted as a fingerprint of different screening mechanisms between surface and volume [21-25]. HAXPES results in vanadates across the metal-insulator transition also confirm the relationship between low-BE features and metallicity [26].

Figure 3. Ru-3d core-level spectra acquired at hν=7595 eV (T=20 K) on Sr 3 (Ru 1-x Mn x ) 2 O 7 for x=0 and 0.2. (a) Spectral region including the Sr-3p 1/2 core level; the Sr-3p intensity is higher than the Ru-3d one, due to the photoionization cross-section increase in the HAXPES regime. (b) Enlarged view of the Ru-3d energy range; for clarity, the spectra have been normalized to the Sr-3p 1/2 intensity. A reversed peak-height ratio between screened and unscreened features, for both 3d 5/2 and 3d 3/2 , is observed.
We stress that our data cannot provide direct evidence for an exclusively electronic-driven picture; our scenario is only compatible with the one described in Ref. 4 by Hossain et al. While soft and hard x-ray photoemission, through the relative intensity of the screened peaks, directly show that the surface is more correlated than the bulk, we cannot directly address the role and interplay of other degrees of freedom, such as charge, spin, and lattice.
To better disentangle screened and poorly screened features and also to identify any additional components, we performed a spectral decomposition by fitting routines. Each peak was represented by a symmetric function generated by a Lorentzian lineshape convoluted with a Gaussian. The Lorentzian function represents the lifetime broadening effect, while the Gaussian accounts for all other broadenings including energy resolution (350 meV for HAXPES and 200 meV for soft x-ray). The results of these spectral decompositions are presented in Figure 4, with the corresponding parameters summarized in Table 1. The open circles show the experimental spectra and the solid line shows the fitting decomposition. The spectra recorded at 455 eV photon energy could be best fit using such function with three sets of doublets (screened, unscreened and multiplet peaks), where the intensity ratio between the spin-orbit split components has been fixed at 1.5, as determined by the degeneracy ratio. The multiplet structures appearing on the higher BE side, for both 3d 5/2 and 3d 3/2 components, have been modeled with Gaussian intensities and constrained to the energy position of the structures appearing in Figure 2. The fitting procedure confirms the reduction of such multiplet contribution upon Mn substitution. For the bulk sensitive spectra recorded at hν = 7595 eV, the best fit is obtained with two sets of doublets only; in this case adding a multiplet contribution does not increase significantly the quality of the fit. This is partly due to the residual intensity at high BE arising from the Sr-3p core-level, which limits the analysis of fine details in the HAXPES spectra. The integrated intensity ratio between screened and unscreened peaks is summarized in Table 1. At each photon energy, the values are identical within the error bars, for both 3d 5/2 and 3d 3/2 components, confirming that: i) no extra intensity (hence a different ratio for different spin-orbit partners) arising from contamination is found, knowing that C 1s signal overlaps with 3d 3/2 ; ii) the evolution of the metallic/insulating behavior upon Mn-substitution is different for bulk and surface; iii) as for the surface sensitive data, the results of the fit indicate not only a shift to higher BE of the screened peaks but also a shift, in the same direction, of the unscreened contributions. All these findings point again to an important modification of the surface electronic structure compared to the bulk one. The presence and evolution of screened and unscreened features in the 4d PES spectra from ruthenates have been described in terms of a Mott-Hubbard picture within a DMFT approach [27]; cluster calculations of Ru-3d in Sr 2 RuO 4 suggest that the energy separation between screened and unscreened peaks is reminiscent of the Coulomb interaction between Ru-3d and 4d holes, and comparable with the Ru 4d-t 2g bandwidth W [28]. In particular, smaller U dd values correspond to higher screened intensity; thus, the observed difference between surface and bulk sensitive PES spectra suggests a stronger localization in the surface and subsurface region, as possibly due to the reduced coordination or surface relaxation. 
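As an illustration of this kind of spectral decomposition, the sketch below fits two Voigt doublets (screened and unscreened) on a linear background, with the 3d 5/2 : 3d 3/2 intensity ratio fixed at 1.5, to synthetic data. The peak positions, widths, spin-orbit splitting, and background slope are illustrative assumptions, not the parameters reported in Table 1.

```python
import numpy as np
from scipy.special import voigt_profile   # Lorentzian convoluted with a Gaussian
from scipy.optimize import curve_fit

def doublet(be, amp, pos, split, sigma, gamma):
    """One spin-orbit doublet: a Voigt pair with a fixed 3:2 (= 1.5) intensity ratio."""
    return (amp * voigt_profile(be - pos, sigma, gamma)
            + (amp / 1.5) * voigt_profile(be - pos - split, sigma, gamma))

def model(be, *p):
    """Two doublets (screened, unscreened) on a linear background; a third
    (multiplet) doublet could be appended in the same way for the soft x-ray data."""
    a1, e1, a2, e2, split, sigma, gamma, b0, b1 = p
    return (doublet(be, a1, e1, split, sigma, gamma)
            + doublet(be, a2, e2, split, sigma, gamma)
            + b0 + b1 * be)

# Illustrative synthetic "spectrum" standing in for the measured Ru-3d region
be = np.linspace(278.0, 292.0, 600)                         # binding-energy grid (eV)
true = model(be, 1.0, 280.3, 1.6, 281.4, 4.2, 0.15, 0.3, 0.05, 0.0)
rng = np.random.default_rng(0)
data = true + rng.normal(0.0, 0.02, be.size)

p0 = [1.0, 280.5, 1.0, 281.5, 4.0, 0.2, 0.3, 0.0, 0.0]      # rough starting values
popt, pcov = curve_fit(model, be, data, p0=p0)
print("screened / unscreened positions (eV):", popt[1], popt[3])
```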
The interpretation in terms of final-state screening properties, supported by both experimental and theoretical considerations, suggests also that the linewidth of the screened peak bears a degree of proportionality to the Ru bandwidth W [27][28][29]; due to the increase of electron localization in passing from volume to surface, one would expect a line narrowing in surface sensitive PES, reflecting the progressive reduction of W. This argument appears to be corroborated by the linewidth fit results for the screened peak in HAXPES and soft x-ray PES. Although the experimental energy resolution is similar for the different kinetic energy ranges, the Lorentzian contribution of the screened features is sharper in the soft x-ray (FWHM ~300 meV) than in the HAXPES regime (FWHM~600 meV). Note, however, that a conclusive unambiguous statement on this latter point is prevented by the broad nature of the experimental features, which makes the value of this observation purely qualitative.
Conclusions
In conclusion, the Sr 3 (Ru 1-x Mn x ) 2 O 7 metal-insulator transition has been studied by variable-probing-depth PES. The measured evolution of the Ru-3d core-level signals provides evidence for a progressive increase of electron correlations, not only upon doping but also from bulk to surface. The reduction of metallic-like screening channels, which is more pronounced in the vicinity of the surface, might also play a role at interfaces, leading to potentially new physical properties.
Emergence of polarized opinions from free association networks
We developed a method that can identify polarized public opinions by finding modules in a network of statistically related free word associations. Associations to the cue “migrant” were collected from two independent and comprehensive samples in Hungary (N1 = 505, N2 = 505). The co-occurrence-based relations of the free word associations reflected emotional similarity, and the modules of the association network were validated with well-established measures. The positive pole of the associations was gathered around the concept of “Refugees” who need help, whereas the negative pole associated asylum seekers with “Violence.” The results were relatively consistent in the two independent samples. We demonstrated that analyzing the modular organization of association networks can be a tool for identifying the most important dimensions of public opinion about a relevant social issue without using predefined constructs. Electronic supplementary material The online version of this article (10.3758/s13428-018-1090-z) contains supplementary material, which is available to authorized users.
In the present study, we aimed to use one socially prominent issue as a cue (asylum seekers, labeled as “migrants”) to capture opinions shared by a social group (Hungarians) (Abric, 1993; Moscovici, 1984; Wagner et al., 1999). As a measure of public opinion, the free association method can be viewed as a semistructured alternative between traditional questionnaires, producing highly structured data, and Web-mining algorithms, collecting large quantities of unstructured data. Hence, the free association method can overcome the predefined scope of questionnaires (Bansak, Hainmueller, & Hangartner, 2016), since respondents can freely express their opinion, yet it has the advantage of representative samples and fast data processing, as opposed to several Web-mining methods (Lazer, Kennedy, King, & Vespignani, 2014). Traditionally, free association analysis has focused on consensual meaning (i.e., the most frequent words and rankings) regarding a social object (Abric, 1993; Moscovici, 1984; Wagner et al., 1999) and has not focused on the polarization of opinions (Bradley, Mogg, & Williams, 1995; Halberstadt, Niedenthal, & Kushner, 1995; Joffe & Elsey, 2014; Niedenthal, Halberstadt, & Innes-Ker, 1999).
Different prior word association methods were introduced in order to distinguish the stable and recurrent associations from peripheral ones. Szalay and Brent (1967) developed the associative group analysis approach of free associations. In this method, the early associations in a continued association task were found to have a high probability of being produced again during a retest. Previous studies in social representation theory (Abric, 1993;Wagner, Valencia, & Elejabarrieta, 1996) have argued that frequent associations are temporally stable and they refer to the consensual meaning regarding a given social object (a.k.a. the central core of the social representation).
Alternatively, Kinsella and her coworkers (Kinsella, Ritchie, & Igou, 2015) used the prototype analysis of free associations, in which most frequent associations (above a threshold) are considered as the consensual prototype of the social object in the perception of the social group.
Despite the stable core of the representations, social issues can trigger opposite emotions, interpretations, attitudes, ideas, and beliefs in a society, which can yield a polarized structure of public opinions. With sufficient data, it is possible to organize free associations not only along a core-periphery dimension, but to identify a more detailed structure with multiple major frames of interpretation in a society. Prior research used the available up-to-date technology to analyze free associations in relation to ideology (Szalay, Kelly, & Moon, 1972) and attitude measurement (Szalay, Windle, & Lysne, 1970). Furthermore, Szalay and Deese (1978) provided an extensive summary of their pioneering factor-analytic method for word associations. Apart from these works, to the best of our knowledge, no recent data-driven studies have focused on the polarization of opinions with free associations. Therefore, we aimed to fill this methodological gap.
Method demonstration: Public opinion of “migrants”

We aimed to demonstrate our method on public opinions about the recent “migration crisis,” which had a significant political and social effect in many European countries, including Hungary. The increased number of asylum seekers made migration one of the most prominent political and societal topics in the European Union. Eastern European countries, including Hungary, were impacted by the situation since these countries lie on the continental route from the Middle East to western European countries. Similarly to these countries, in Hungary the leading political discourses labeled asylum seekers as migrants who threaten the ethnically and culturally homogeneous country. The criminalization of the asylum seekers contributed to the blurring of the terms migrant, refugee, and asylum seeker (Bansak et al., 2016; Holmes & Castañeda, 2016; Kallius, Monterescu, & Rajaram, 2016). As an opposition to negative responses, solidarity movements also emerged in order to shelter asylum seekers or help them safely cross the country (Kallius et al., 2016). According to a recent study including 15 European countries, (i) humanitarian concerns, (ii) anti-Muslim sentiments, and (iii) economic reasoning were the key factors in the perception of asylum seekers (Bansak et al., 2016).
Research goals and validation process
In this study, we aimed to demonstrate that the co-occurrence statistics of associations can identify polarized opinions in the perception of asylum seekers. For this reason, we constructed networks from free associations, in which associations were considered to reflect opinions and associations were connected based on their statistical co-occurrences (log-likelihood ratio, LLR); thus, we refer to our free association networks as networks of co-occurring opinions (CoOp networks). We constructed such CoOp networks from multiple response free associations to the cue “migrant” in the case of two independent and comprehensive samples in Hungary. Subsequently, we identified modules (densely connected subnetworks) of the CoOp networks.
We hypothesized that frequently co-occurring associations have higher emotional similarity (Hypothesis 1). To test this, respondents were asked to evaluate their own associations with emotion labels. The emotional similarity for every pair of associations was calculated on the basis of the difference in the empirical distributions of their emotional labels. We calculated the correlation between emotional similarity values and co-occurrence connection values applying a permutation method (quadratic assignment procedure; QAP).
We tested the stability of the CoOp networks (Hypothesis 2). First, we aimed to test whether the LLR values were correlated between the two samples (Hypothesis 2a). Second, we aimed to test whether the CoOp networks are more similar to each other, on the basis of normalized mutual information, than a large number of randomized networks (null models) with similar properties (Hypothesis 2b). Third, we aimed to test whether the exclusion of rare associations increases the stability of our method, due to the lower proportion of peripheral associations and the higher proportion of core associations (Hypothesis 2c).
We assumed that the modules of the CoOp network reflect different opinions. Therefore, we statistically compared the attitude values (perceived outgroup threat, POT; group malleability, GM; social dominance orientation, SDO) of participants whose associations belonged to different modules (Hypothesis 3). We assumed that explicit attitudes toward migrants (POT scores) can differentiate between modules more clearly than abstract constructs related to perceived outgroup features (GM and SDO scores).
Method
Participants and procedure For our research purposes, two nationally comprehensive samples of Hungarian participants were recruited. The samples were nationally comprehensive in terms of gender, age, level of education, and type of residence for those Hungarians who use the Internet at least once a week. The participants were selected randomly from an Internet-enabled panel including 15,000 members with the help of a market research company in June 2016 (Sample 1) and in October 2016 (Sample 2). The samples were created with a random stratified sampling method among panelists in the online panel of the market research company with the average response rate 25%. Individuals were removed from the panel if they gave responses too quickly (i.e., without paying attention to their response) and/or had fake (or not used) e-mail addresses.
The Research Ethics Committee of the local university approved this study. Data were collected via an online questionnaire. Participants were informed that the questionnaire was designed for measuring attitudes toward migrants. No other information was provided about the content and respondents could only see the actual task. All participants provided their written informed consent to participate in this study through a check-box on the online platform. The ethics committee approved this consent procedure. Respondents were assured of their anonymity and as a compensation the market research company drew gift cards among those who participated in the study.
Measures
Multiple response free association task In this study an associative task was used, based on Abric's (1994, 2003) theoretical underpinnings and on Vergès's (Vergès & Guimelli, 1994) and Flament and Rouquette's (2003) methodological assumptions. In most social representation studies, a multiple-response (a.k.a. continuous association) task is applied with a limited number (three or five) of required associations. This method can reduce association chaining effects and inhibitory effects (De Deyne & Storms, 2008b) that are more prevalent in open-ended association tasks. Furthermore, open-ended association tasks can generate a lower average number of responses than a task with a predefined number of responses (Kinsella et al., 2015).
In the present case, the respondents' task was to write five words or expressions that came to mind regarding the word “migrant.” However, in this study, we did not use the traditional methodology of social representations for identifying the central core and periphery or the density of the representations (Abric, 1994; Orosz & Roland-Lévy, 2013; Flament & Rouquette, 2003). Instead, we used a network analytic method. From the perspective of large-scale semantic network studies, multiple response free association tasks generate strong and weak associations as well (De Deyne & Storms, 2008b). Classical social representation studies and network analytic association studies are closely related in terms of the data collection procedure. The strong associations can constitute the central core of the representation and the weak associations can belong to the periphery (Abric, 1993; De Deyne & Storms, 2008b). This associative task was the first question in the questionnaire, to avoid the influence of prior topic-relevant questions.
Emotional labeling task After providing all five of the associations, respondents got back their associations one by one and were asked to provide two emotional labels for each of their own associations. We found that the negative-neutral-positive valence evaluation used in prior similar studies (Orosz & Roland-Lévy, 2013) is too constrained. Furthermore, frequently used affect measures such as the PANAS cannot be effectively used for the present goals, as they include several irrelevant items (e.g., active, strong, alert) and exclude relevant ones (e.g., antipathy, empathy, anger). For this reason we reviewed basic emotion theories (Ekman, 1992; Izard, 2013; Ortony & Turner, 1990; Robinson, 2008) to identify topic-relevant emotional labels. More precisely, the selection of the emotions was largely built on the 10 basic emotions of Izard and the 11 pairs of positive and negative emotions of Robinson. However, in a few cases basic emotions were described with synonyms to fit the cue better. We used the following 20 emotional labels (differences from the original ones can be seen in parentheses): interest-alarm (anxiety), empathy-contempt, surprise-indifference, hope-fear, gratitude-anger, joy-sadness, calmness-relief (frustration), pride-shame, generosity-envy, and love (sympathy)-hate (antipathy). Respondents could choose any two of the 20 emotional labels for each of their own associations (the labels did not appear as opposites).
Perceived outgroup threat (POT) Perceived threat from asylum seekers was assessed using seven items (Sample 1 α = .96, Sample 2 α = .96) that were translated from an implementation (Kteily et al., 2015) of the integrated threat theory (Stephan et al., 2000). The POT scale was translated to Hungarian according to protocol (Beaton, Bombardier, Guillemin, & Ferraz, 2000), and it was adapted to the contemporary Hungarian context on the basis of a preliminary study (e.g., “Migrants pose a physical threat to Hungarians”). Responses were made on 5-point Likert-type scales (1 = strongly disagree, 5 = strongly agree). Higher values indicate a higher level of perceived threat from migrants. For further details of this measure see Table S1.
Group malleability (GM) We adopted a 4-item (Sample 1 α = .95, Sample 2 α = .94) version of the questionnaire (Halperin et al., 2011) to assess respondents' implicit assumptions about whether social groups are capable of development. The GM scale was translated to Hungarian according to protocol (Beaton et al., 2000; e.g., “Groups can do things differently, but the important parts of who they are can't really be changed”). Respondents indicated their level of agreement using a 6-point Likert-type scale (1 = strongly disagree, 6 = strongly agree). Higher values indicate stronger agreement with the concept of nondeveloping groups. For further details of this measure, see Table S2.
Social dominance orientation (SDO) The Social Dominance Orientation (Pratto et al., 1994) questionnaire has eight items (Sample 1 α = .83, Sample 2 α = .83) that measure respondents' degree of preference for inequality among social groups. The SDO measure was translated to Hungarian according to protocol (Beaton et al., 2000; e.g., “Some groups of people are simply not the equals of others”). Respondents indicated their level of agreement using a 7-point Likert-type scale (1 = strongly disagree, 7 = strongly agree). Higher values indicate a stronger preference for inequality among social groups. For further details about this measure, see Table S3.
Preprocessing of associations
The preprocessing and lemmatization of the associations was carried out by four independent coders. Lemmatization is a linguistic process of grouping the inflections of a word into a single word (lemma) without conjugates. In other words, it is basically the grouping of words with the same stem. In the lemmatization process, two associations were merged in the following cases: (i) they had the same lemma (e.g., “refugee” and “refugees” were merged; Flament & Rouquette, 2003); (ii) they were semantically so close that the English translation could not distinguish between them (e.g., “stain” and “dirt”). Two associations were merged only if the coders could reach a consensus.
CoOp network construction
Statistical relations among associations were defined on the basis of their co-occurrences to identify connections. We used the log-likelihood ratio (LLR) to assess co-occurrence connections between every possible pair of associations (Dunning, 1993). For each possible pair of associations, we calculated the likelihood assuming statistical independence of their co-occurrence over the maximum likelihood of the observed co-occurrence:

$$\lambda(i,j) = \frac{L\left(j_n \cap i_n;\ i_n;\ \tfrac{j_n}{n}\right) \cdot L\left(j_n \cap \neg i_n;\ \neg i_n;\ \tfrac{j_n}{n}\right)}{L\left(j_n \cap i_n;\ i_n;\ \tfrac{j_n \cap i_n}{i_n}\right) \cdot L\left(j_n \cap \neg i_n;\ \neg i_n;\ \tfrac{j_n \cap \neg i_n}{\neg i_n}\right)},$$

where j_n and i_n denote the numbers of participants who mentioned associations j and i, ¬i_n denotes the number of respondents who did not mention association i, and n denotes the number of all participants. L(arg1; arg2; arg3) refers to the probability under a binomial distribution (L) in which arg1 successes occurred out of arg2 observations and arg3 is the success probability. More generally, the above formula measures the level of statistical dependence between i and j by testing whether the distribution of j given that i is present is the same as the distribution of j given that i is not present. The LLR is calculated from λ as LLR(i,j) = −log λ(i,j), with the sign chosen according to the direction of the deviation from independence. Therefore, the LLR between associations i and j was positive (attractive) if their observed co-occurrence number was higher than the expected one and negative (repulsive) if their observed co-occurrence number was lower than the expected one. Basically, the LLR performs the same task as the parametric χ2 test without the requirement of normality. Multiplication of our LLR values by two relates them to a χ2 distribution with the appropriate degrees of freedom.
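To make the computation concrete, the sketch below evaluates the signed LLR for one pair of associations from respondent-level indicator vectors. The helper names and the toy data are illustrative; the binomial coefficients are omitted because they cancel in the ratio.

```python
import numpy as np

def _ll(k, n, p):
    """Binomial log-likelihood kernel k*log(p) + (n-k)*log(1-p); the binomial
    coefficient cancels in the ratio and is omitted."""
    eps = 1e-12
    p = min(max(p, eps), 1.0 - eps)
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

def signed_llr(resp_i, resp_j):
    """Signed log-likelihood ratio of co-occurrence for two associations,
    given boolean per-respondent indicator vectors."""
    resp_i, resp_j = np.asarray(resp_i, bool), np.asarray(resp_j, bool)
    n = resp_i.size
    k11 = np.sum(resp_i & resp_j)          # respondents who mentioned both i and j
    k21 = np.sum(~resp_i & resp_j)         # respondents who mentioned j but not i
    n_i, n_not_i, n_j = resp_i.sum(), (~resp_i).sum(), resp_j.sum()
    p = n_j / n                            # P(j) under independence
    log_lambda = (_ll(k11, n_i, p) + _ll(k21, n_not_i, p)
                  - _ll(k11, n_i, k11 / max(n_i, 1))
                  - _ll(k21, n_not_i, k21 / max(n_not_i, 1)))
    llr = -log_lambda                      # >= 0; 2*llr relates to a chi-square with 1 d.o.f.
    expected = n_i * n_j / n
    return llr if k11 >= expected else -llr

# Toy example: 10 respondents, i and j tend to co-occur
i = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
j = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]
print(signed_llr(i, j))
```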
We constructed CoOp networks in which the nodes are the associations and the edge weights between nodes are determined by the LLR values. The nodes were the different associations from the total collection of associations. We ignored associations that occurred fewer than three times, since these are not stable parts of the perception of the social object (Abric, 1993) or are possibly related to idiosyncratic expressions; thus, they do not belong to the social representation (Sarrica, 2007). Furthermore, the removal of these nodes ensured higher robustness of the networks.
Affective similarity
Every participant chose two emotional labels from among the 20 options described above to characterize their affective relation to each of their associations. The affective similarity between every pair of associations was calculated from the difference between the empirical distributions of their emotional labels, where i and j are two different associations, e is the emotional label (ranging from 1 to 20), and E is a two-dimensional matrix in which each item E(e,i) refers to the number of times emotion e was assigned by the respondents to association i. A similarity value of 2 indicates identical emotional labels, whereas a similarity value of 0 indicates totally different emotional labels between two associations.
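A minimal sketch of one plausible implementation is given below; since the published formula is not reproduced in this text, the normalization (2 minus the L1 distance between the per-association emotion-label distributions) is an assumption chosen only to match the stated 0-2 range.

```python
import numpy as np

def affective_similarity(E, i, j):
    """Similarity between associations i and j based on their emotion-label counts.
    E is a (20 x n_assoc) count matrix: E[e, a] = times emotion e was assigned to
    association a. Returns 2 for identical label distributions, 0 for disjoint ones.
    (The exact published normalization is assumed, not reproduced.)"""
    p_i = E[:, i] / E[:, i].sum()
    p_j = E[:, j] / E[:, j].sum()
    return 2.0 - np.abs(p_i - p_j).sum()

# Toy example with 20 emotion labels and 3 associations
rng = np.random.default_rng(1)
E = rng.integers(0, 5, size=(20, 3)).astype(float)
print(affective_similarity(E, 0, 1))
```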
Module detection
Both CoOp networks were divided into nonoverlapping sets of densely linked associations (modules). A modularity maximization process (Newman & Girvan, 2004) was applied to identify the modules of the networks. The original modularity formula is generalized (Gómez, Jensen, & Arenas, 2009) to deal with both the positive (attractive) and negative (repulsive) links:

$$Q = \frac{1}{v^{+}+v^{-}} \sum_{ij}\left[\left(w^{+}_{ij}-e^{+}_{ij}\right)-\left(w^{-}_{ij}-e^{-}_{ij}\right)\right]\delta_{M_i M_j},$$

where Q denotes the modularity value of a given partition of a network, v+/v- denote the total positive/negative weights of the network, w+ij/w-ij denote the positive/negative weights between nodes i and j, e+ij/e-ij denote the chance-expected positive/negative connections between nodes i and j, and δMiMj is an indicator function that is set to 1 if nodes i and j belong to the same module. The higher the modularity of a network partition, the larger the difference between the fraction of edge weight that falls within the modules and the fraction expected to fall within the same modules in a corresponding random network. The Louvain algorithm (Blondel, Guillaume, Lambiotte, & Lefebvre, 2008) was applied to identify the modular partition with the highest possible modularity, namely the highest ratio of edge weights inside the modules and the lowest ratio of edge weights between modules. Therefore, the size and number of modules belonging to the modular partition with maximal modularity are parameter-independent and match the structure of the network. In our case, it is extremely important to determine algorithmically the number of modules (i.e., the number of opinion dimensions) that best describe the data. A drawback of modularity maximization is that the resulting modular structure can change in each iteration (Good, Montjoye, & Clauset, 2010), as the optimization process may get stuck in local maxima. Therefore, a consensus partition was determined for the sake of higher reliability (Lancichinetti & Fortunato, 2012). In the consensus-partitioning process, first the consensus matrix was determined on the basis of 5,000 independent iterations of the Louvain algorithm. The edge weight between every pair of nodes in the consensus matrix was determined by the number of times the two nodes fell into the same module. The consensus matrix was then partitioned into nonoverlapping modules 100 times by the Louvain algorithm. If the resulting 100 partitions were identical, the result was accepted as the consensus partition; otherwise, new sets of 100 partitions were generated from the consensus matrix until agreement was reached. The average modularity score of the 5,000 independent iterations and the consensus partition of the CoOp networks were determined for both samples.
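The sketch below illustrates the consensus-partitioning idea with networkx's Louvain implementation. It is a simplification in two respects: it handles only positive edge weights (the signed-modularity generalization used in the study is not implemented here), and the "repeat until agreement" step is reduced to a single re-clustering of the thresholded consensus matrix.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

def consensus_partition(G, n_runs=200, threshold=0.4, seed=0):
    """Consensus clustering sketch: run Louvain repeatedly, build a co-assignment
    matrix, threshold it, and cluster the resulting consensus graph once."""
    nodes = list(G.nodes())
    idx = {v: k for k, v in enumerate(nodes)}
    co = np.zeros((len(nodes), len(nodes)))
    for r in range(n_runs):
        parts = louvain_communities(G, weight="weight", seed=seed + r)
        for comm in parts:
            for u in comm:
                for v in comm:
                    co[idx[u], idx[v]] += 1.0       # count co-assignments
    co /= n_runs
    co[co < threshold] = 0.0                         # keep only frequent co-assignments
    C = nx.from_numpy_array(co)
    final = louvain_communities(C, weight="weight", seed=seed)
    return [{nodes[k] for k in comm} for comm in final]

# Toy example: two cliques joined by a single edge
G = nx.barbell_graph(5, 0)
nx.set_edge_attributes(G, 1.0, "weight")
print(consensus_partition(G, n_runs=50))
```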
Reproducibility test
To demonstrate stability regarding the co-occurrences of associations and the identified modular structure, we compared the LLR edges and modular structures of the two independent samples (Sample 1 and Sample 2). Since the associations were slightly different in the two samples, only the identical associations were compared in terms of LLR value and modular membership. The similarity of the LLR values between identical associations in the two samples was measured by Spearman's correlation. The significance of the correlation was determined by QAP (Simpson, 2001). A simple pairwise correlation between the LLR values of the two samples would assume the independence of the edges; however, a node in a network typically has similar connections, so multiple similar edges belonging to one node can cause spurious correlations. QAP is a permutation procedure to eliminate the effect of interdependence between network edges belonging to a common node (Simpson, 2001). First, QAP determined the similarity of the LLR values of the two networks. This was done by Spearman's correlation in our case. Second, the edges of the CoOp network in Sample 1 were randomly shuffled by permuting the rows and columns of the adjacency matrix in the same order. Third, Spearman's correlation was calculated between the LLR values of the shuffled CoOp network and the LLR values of the CoOp network from Sample 2. The second and third parts of the QAP were repeated 5,000 times, and the absolute values of the simulated correlation coefficients were saved. The level of significance (p_QAP) was equal to the proportion of simulated correlation coefficients that reached the level of the correlation coefficient from the real data.
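A compact sketch of the QAP logic is given below, assuming the two networks are given as weighted adjacency matrices over the same node set; the permutation count and the toy data are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def qap_spearman(A, B, n_perm=5000, seed=0):
    """QAP sketch: Spearman correlation between edge weights of two networks on the
    same node set, with significance from joint row/column permutations of A."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)             # use each undirected edge once
    r_obs, _ = spearmanr(A[iu], B[iu])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(A.shape[0])
        A_perm = A[np.ix_(perm, perm)]            # permute rows and columns together
        r_perm, _ = spearmanr(A_perm[iu], B[iu])
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)      # permutation p-value

# Toy example: B is a noisy copy of A
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 30)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
B = A + rng.normal(scale=0.5, size=A.shape); B = (B + B.T) / 2; np.fill_diagonal(B, 0)
print(qap_spearman(A, B, n_perm=500))
```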
The similarity of the modular structures was measured by means of normalized mutual information (nMI), computed from the entropies of the two partitions and their joint entropy, where H(M1) and H(M2) are the entropies of the modular partitions of Sample 1 and Sample 2 separately, and H(M1,M2) is the joint entropy of the two partitions (Meilă, 2007). Since 5,000 independent networks were created for both samples to determine the consensus partitions, we determined the final nMI value as the average of all pairwise comparisons of the modular structures based on LLR edge weights.
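The sketch below computes an nMI value from two module-label vectors using the marginal and joint entropies; because the exact normalization is not reproduced in this text, the 2I/(H1+H2) form is an assumption.

```python
import numpy as np

def nmi(labels1, labels2):
    """Normalized mutual information between two modular partitions of the same
    node set, from the marginal and joint entropies of the module labels."""
    labels1, labels2 = np.asarray(labels1), np.asarray(labels2)
    n = labels1.size

    def entropy(counts):
        p = counts / n
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    _, c1 = np.unique(labels1, return_counts=True)
    _, c2 = np.unique(labels2, return_counts=True)
    pairs = np.array([labels1, labels2]).T               # joint label assignments
    _, c12 = np.unique(pairs, axis=0, return_counts=True)
    h1, h2, h12 = entropy(c1), entropy(c2), entropy(c12)
    mi = h1 + h2 - h12                                    # mutual information
    return 2.0 * mi / (h1 + h2)                           # assumed normalization

# Toy example: two similar partitions of ten nodes
print(nmi([0, 0, 0, 1, 1, 1, 2, 2, 2, 2],
          [0, 0, 1, 1, 1, 1, 2, 2, 2, 0]))
```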
To determine whether the similarity between modular organizations of the two samples indicates a nonrandom similarity, we compared the nMI calculated from the similarity of the original CoOp networks with the nMI calculated from the similarity of the null models. The simplest null model is the Erdős-Rényi graph, in which the edges are randomly rewired; however, more sophisticated nullmodel generation procedures can maintain certain parameters of the original network in the random network. Here, we generated edge-, weight-, and strength-preserving random networks (Rubinov & Sporns, 2011) for both Samples 1 and 2. The generation of the null model consisted of two steps. First, the randomization of the network was done by connection-switching method (Wormald, 1999) in a way that preserved the positive and negative degrees of the nodes. Then the weights were allocated and iteratively rearranged to converge to the weight distribution of the original network (Rubinov & Sporns, 2011). A set of 5,000 null models were generated and the modular structures of the null models were determined. The similarity (nMI) of the modular structures (only identical association included) was calculated for the null models. The process resulted in a distribution of nMI values. The observed nMI value was compared to the nMI values derived from the null models. The CoOp networks of Sample 1 and Sample 2 were considered significantly similar if the observed nMI value was higher than 95% of the nMI values derived from the null model comparisons.
To demonstrate that higher numbers of observations offer a higher stability of our method, we iteratively raised the threshold of the ignored associations from the default 3 to 13. The similarity of the LLR edges and modular structures were calculated for each threshold between Samples 1 and 2.
Statistical analysis
All statistical analyses were performed with MATLAB version R2014b (The MathWorks Inc, Natick, MA). The applied network measures are all available at https://sites.google.com/ site/bctnet/ (Rubinov & Sporns, 2010). Differences of the POT scores were determined by an independent t test between Samples 1 and 2.
We calculated the correlation with a permutation test based on QAP (Simpson, 2001) to test whether cognitive attraction is related to affective similarity and cognitive repulsion is related to affective dissimilarity. In the QAP procedure, we moderated the effect of near-zero co-occurrence connection values. On one hand, many near-zero LLR values were expected between associations never mentioned together, but these association pairs could be characterized by very heterogeneous affective similarity values. On the other hand, moderating the effect of the numerous near-zero connections can generate more balanced LLR data for the correlation analysis, in which the low and high LLR values have similar sampling. Hence, co-occurrence connection values were divided into 100 equal intervals in which the values were averaged. This way, the large number of data points representing near-zero co-occurrence values were reduced into averages of a few intervals. The affective connection values were averaged for the association pairs that belonged to a given interval of the co-occurrence connection values. All correlation coefficients were calculated between these averaged values.
The final test of our method was to demonstrate that we can differentiate the modules in the CoOp networks according to the attitudes toward asylum seekers. Respondents were assigned to the CoOp module to which the majority of their associations belonged. Respondents' attitude scores were compared between every pair of modules by pairwise independent weighted t tests. Weighted attitude score means (WAM) and weighted attitude score variances (WAV) were calculated for each module M for the weighted t tests:

$$WAM_M = \frac{\sum_{i \in M} AssociationNumber_i \cdot AttitudeScore_i}{\sum_{i \in M} AssociationNumber_i},$$

with WAV_M defined analogously as the weighted variance of the attitude scores around WAM_M, where i is a respondent assigned to module M. The attitude scores of a respondent (AttitudeScore_i) were weighted by the number of their associations that belonged to the given module (AssociationNumber_i). A respondent was discarded from the attitude analysis if he or she could have been assigned to more than one module with equal maximum weight. The statistical procedure was conducted on Samples 1 and 2 separately. Since our study was exploratory, we carried out a statistical power estimation for a theoretically medium effect size (Cohen's d = 0.5), which we determined to be the indicator of a considerable opinion difference between the respondents assigned to two given modules. We concluded that .8 power could be achieved if the sample size was 64. (Power was determined for Cohen's d = 0.5 with alpha = .05. In the calculation, normal distributions were assumed, with a mean difference equal to 0.5 and a standard deviation equal to 1.)
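As an illustration, the sketch below computes weighted module means and variances and a Welch-style t statistic from them; the effective-sample-size choice, this particular weighting scheme, and the toy scores are assumptions, not the exact test used in the study.

```python
import numpy as np
from scipy import stats

def weighted_mean_var(scores, weights):
    """Weighted attitude-score mean (WAM) and variance (WAV) for one module;
    weights are the numbers of a respondent's associations in that module."""
    scores, weights = np.asarray(scores, float), np.asarray(weights, float)
    wam = np.sum(weights * scores) / np.sum(weights)
    wav = np.sum(weights * (scores - wam) ** 2) / np.sum(weights)
    return wam, wav

def weighted_t_test(scores1, w1, scores2, w2):
    """Welch-style t test on the weighted moments of two modules (illustrative)."""
    m1, v1 = weighted_mean_var(scores1, w1)
    m2, v2 = weighted_mean_var(scores2, w2)
    n1, n2 = len(scores1), len(scores2)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / np.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

# Toy example: POT-like scores for respondents assigned to two modules
s1, w1 = [4.2, 4.6, 3.9, 4.8, 4.4], [3, 4, 2, 5, 3]
s2, w2 = [2.1, 2.8, 3.0, 2.4, 1.9], [2, 3, 4, 3, 2]
print(weighted_t_test(s1, w1, s2, w2))
```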
Results
The total numbers of different associations were 1,067, in the case of Sample 1, and 1,099, in the case of Sample 2. After the lemmatization, the numbers of different associations decreased to 597, in the case of Sample 1, and 533, in the case of Sample 2. The numbers of associations mentioned at least three times-and thus that were included in the network analysis-were 156 in the case of Sample 1, and 163 in the case of Sample 2. Samples 1 and 2 had 114 identical associations. Thus, the analysis was performed on 1,966 association tokens in Sample 1 and on 2,023 association tokens in Sample 2. The POT scores showed no significant overall difference between Sample 1 (M = 3.33, SD = 1.37) and Sample 2 (M = 3.43, SD = 1.36).
CoOp connections and affective similarity (Hypothesis 1)
Significant correlations were found between the co-occurrence and affective similarity values (Fig. 1).
CoOp modules
We labeled the modules according to the two most frequent associations (see Fig. 2). The modular membership and frequency of every association are presented in Table S4 (Sample 1) and Table S5 (Sample 2).
The modularity value was .24 for the CoOp network of Sample 1, and this value was .23 for the CoOp network of Sample 2. The CoOp network of Sample 1 was divided into four modules, and the CoOp network of Sample 2 was divided into six modules. However, in Sample 2, three of the six identified modules contained only a single word, each mentioned by a few respondents (“assassination,” “unity,” and “death”). We did not include these modules in the further analyses, so the final number of modules was three in the case of Sample 2.
Reproducibility (Hypothesis 2)
To test the reproducibility of our method, we derived an edge-level and a modular-level comparison between Samples 1 and 2. The LLR-level comparison was performed by correlation of the LLR values between the identical association pairs of Samples 1 and 2. We found a significant correlation between the LLR values of the identical association pairs in Samples 1 and 2 [r_s(6439) = .36, p_QAP < .001]. The modular-level similarity was determined by the nMI value of the modular memberships of the identical associations between Samples 1 and 2. The similarity between the modular structures of the two samples was significantly higher than in the corresponding null models (nMI = .27, p < .001). More precisely, none of the 5,000 generated null models had an nMI value higher than the nMI value between Samples 1 and 2.
The changes in LLR-level and modular-level similarity between the two samples were determined by ignoring associations that occurred fewer times than a given threshold value. The threshold was iteratively raised from the default of 3 to 13. Strong and significant correlations were detected between the threshold and the LLR-level [r_s(9) = .88, p < .001] and modular-level [r_s(9) = .85, p = .002] similarities of Samples 1 and 2. Excluding sparse associations from the analysis could raise the edge- and modular-level similarities between Samples 1 and 2. Details about the edge- and modular-level similarities for every threshold are presented in Fig. 3, Table S9, and Table S10. Here we only present the LLR-level similarity [r_s(559) = .48, p_QAP < .001] and modular-level similarity (nMI = .46) for the analysis when ignoring associations that occurred fewer than 13 times.
CoOp modules and POT scores (Hypothesis 3)
In the case of Samples 1 and 2, all pairwise comparisons of the modules showed significant differences in POT scores (Fig. 4).
Figure 2. The sizes of a node and its label are proportional to the frequency of the given association. An edge means that the two associations fell into a common module in at least 40% of the consensus-partitioning iterations. The edges, with their LLR weights, are presented in Table S6 (Sample 1) and Table S7 (Sample 2). Both samples are displayed with the “Yifan Hu Proportional” layout algorithm (Hu, 2005), implemented in Gephi (Bastian, Heymann, & Jacomy, 2009). Additional information about each module is shown in a box colored identically to the corresponding module. The box contains the label of the module, referring to the two most frequent associations in the given module. The number of respondents assigned to a given module is displayed below the label in parentheses. The percentages of emotional labels for every module are presented on bar charts. The percentages of the six most frequent emotions (antipathy, anger, fear, anxiety, sadness, empathy) are shown in detail. The three most frequent emotions for a module are displayed in bold letters. (For the detailed distributions of the emotional labels in every module, see Table S8.)

In the case of Sample 1, the Violence & Fear module showed the highest POT score (M = 4.31, SD = 0.83). In the case of Sample 2, all comparisons could be considered to have a power of .8.
Similarly to the POT scores, the GM and SDO scores were compared across the modules. Detailed results about the GM and SDO analyses are presented in Tables S11 and S12. In most cases-similarly to POT-these measure could differentiate the modules. Here we only give a short overview about the few exceptions, where we did not get a significant difference or sufficient power. In the case of Sample 1, the comparisons of every module gave significant differences in the GM analysis, but the comparison of Immigrant & Stranger with Terrorism & Islam did not have sufficient power. In the case of Sample 2, all comparisons were significant with sufficient power. In the case of Sample 1, the comparison of the modules in terms of SDO scores failed to detect a significant difference between the Immigrant & Stranger and Terrorism & Islam modules, and the comparison of the Terrorism & Islam and Violence & Fear modules did not have sufficient power. In the case of Sample 2, the comparisons of the modules in terms of SDO scores all produced significant differences, although the comparison of the Immigrant & Islam and Terrorism & Violence modules did not reach sufficient power. In sum, POT, GM, and SDO showed very similar patterns in most of the cases.
Discussion
In this study, we aimed to introduce and validate a method that identifies groups of associations reflecting distinct attitudes and emotions toward a demonstrative cue: migrants. In line with Hypothesis 1, the co-occurrence of the associations (CoOp networks) reflected the emotional similarity between the associations. In line with Hypothesis 2, the modular structures of the CoOp networks showed considerable reproducibility in the two independent samples. In line with Hypothesis 3, the distinct cohesive structures of associations (CoOp modules) reflected different results on the POT, GM, and SDO measures. For example, comparisons between modules reflecting violence (Violence & Fear, Terrorism & Violence) and modules reflecting refugees (War & Refugee, Refugee & War) always demonstrated significant differences in the three measures (POT, GM, and SDO). In sum, the present results demonstrated that analyzing the modular organization of CoOp networks can be an inductive tool for identifying the most important dimensions of public opinions about relevant social issues.
CoOp networks can be seen as a subtype of large-scale semantic networks (De Deyne & Storms, 2008a; Nelson, McEvoy, & Schreiber, 2004; Steyvers & Tenenbaum, 2005). Semantic networks are built from multiple cues and organized by constant lexical relations. Our study demonstrated that co-occurrences of multiple free word associations can also follow affective similarity patterns regarding a social issue. This is in line with cognitive studies on the roles that emotions play in mental processes, for instance message acceptance/rejection and information recall (Nabi, 1999, 2003). Our results also highlight that module detection in CoOp networks yields a psychologically meaningful mapping of the context behind attitudes. The modular membership of the associations creates a context for the interpretation of each individual association. Furthermore, the jointly interpreted associations can link the attitudes to the relevant context. More generally, consistent patterns in individual association sequences can reveal the most prominent frames of opinions regarding a social issue. The polarization of opinions was consistent in the two samples, with a positive pole indicated by terms such as “Refugee,” “War,” or “Help” and a negative pole indicated by terms such as “Violence,” “Fear,” or “Terrorism.” Furthermore, modules reflecting these poles comprised the majority of all the respondents in both samples.

Fig. 3. Correlations between the reproducibility and the exclusion of rare associations from the analysis. The x-axes show the minimal numbers of occurrences of an association. Below that occurrence number, an association was excluded from the analysis. The y-axes show the LLR-level (A) and modular-level (B) similarities between Sample 1 and Sample 2. The LLR-level similarity was expressed by the Spearman correlation of the LLR values between the identical association pairs of Samples 1 and 2. The modular-level similarity was determined by the similarity of the modular memberships of Samples 1 and 2. The modular-level similarity was expressed by the nMI value. The exclusion of rare associations resulted in higher LLR similarity (A) and higher modular similarity (B) between Samples 1 and 2.

The Violence & Fear (Sample 1) and Terrorism & Violence (Sample 2) modules had the highest POT scores. These modules indicate explicit hostility (Dovidio, Kawakami, & Gaertner, 2002) such as labeling asylum seekers as morally inferior (Haslam & Loughnan, 2014; e.g., “dirt,” “lazy,” “demanding,” “freeloader” associations) or emphasizing perceived threats (e.g., “terrorism,” “crime,” “invasion” associations; Holmes & Castañeda, 2016; Kallius et al., 2016; Stephan et al., 2000). The War & Refugee (Sample 1) and Refugee & War (Sample 2) modules reflect humanitarian concerns and show the lowest POT scores, relative to the other modules. The scores and the contents of these modules indicate that considering asylum seekers as refugees who are forced to leave their homes (e.g., “war,” “famine,” “death,” “flee” associations) is linked to social solidarity (e.g., “help,” “pity” associations) (Appelbaum, 2002; Nickerson & Louis, 2008; Verkuyten, 2004).
As compared to Bansak et al. (2016), we could identify modules referring to (i) humanitarian concerns [the War & Refugee (Sample 1) and Refugee & War (Sample 2) modules] and (ii) anti-Muslim sentiment [the Terrorism & Islam (Sample 1) and Immigrant & Islam (Sample 2) modules], but we did not find modules referring to (iii) economic reasoning. Humanitarian concerns are unequivocally present in Hungarians' perceptions of asylum seekers, consistent with Bansak et al.'s results. However, our results indicate that general xenophobia and perceived threats are far more salient than economic or religious concerns.
The LLR values between the identical associations of Samples 1 and 2 showed a significant correlation, and the modular structure of the CoOp networks was relatively stable compared with the null models over a three-month interval. The differences between Samples 1 and 2 could have originated in the uncertainty of our method and also in complex influential factors related to the “migration crisis” that occurred in the three months between the collection of Samples 1 and 2 (e.g., the terror event in Nice, a national referendum on immigration, etc.). For example, the association “terrorism” can indicate possible changes in opinions between the two samples. Even before the current asylum seeker situation, “terrorism,” “violence,” and “Islam” were frequently linked by individuals (Ernst-Vintila, Delouvée, & Roland-Lévy, 2011; Sides & Gross, 2013). This is in line with Sample 1, in which “terrorism” belonged with Muslim-related stereotypes (Terrorism & Islam). However, “terrorism” belonged to a module reflecting explicit hostility (Terrorism & Violence) in Sample 2. A possible explanation is that between the two data gatherings a significant terror attack happened in France (Nice, in July 2016; BBC News, 2016), leading to an increased securitization discourse of migration in the political media (Holmes & Castañeda, 2016). Our method also showed higher reproducibility for frequent than for rare associations. From an information-theoretical point of view, these results suggest that frequent associations resulted in a more stable pattern of co-occurrences. Following this logic, one can reach the desired stability by increasing the sample size. From the social psychological point of view, frequent associations are more likely to belong to the core structure, which has a higher stability over time than rare peripheral associations (Abric, 1993; Kinsella et al., 2015). It is possible that complex influential factors such as the media are more likely to affect the peripheral elements of the representation. This is in line with Abric's (1993) description of progressive transformation in social representations. In sum, reducing the effect of influential factors and the sparsity of the data by excluding rare associations increased the stability of the results, which suggests the reliability of the applied methodological framework.
The measure of word co-occurrence and the clustering method were selected on the basis of the following considerations. First, the frequency of associations, similar to word occurrence in a corpus, followed a power law (Zipf, 1935); thus, an adequate similarity measure should handle associations that occur sparsely. The LLR was successfully used in previous text-processing designs to measure typical word co-occurrences in large corpora of sentences (Bordag, 2008; Dunning, 1993). In our case, a five-association-long response sequence was treated as a sentence, and the typical pattern of co-occurrence across the sequences was measured by the LLR. The first advantage of the LLR is that it does not depend on normality and it allows the comparison of the co-occurrence of both rare and common associations (Dunning, 1993). Second, the LLR can capture both the attraction and the repulsion of association pairs, based on the number of co-occurrences expected if the two associations were independent. In contrast, a simple co-occurrence count can only distinguish between weak and strong connections. For example, the simple co-occurrence count gave a relatively high value (i.e., a strong connection) between the Violence and Refugee associations (6/13 in Sample 1/Sample 2) as compared to the other co-occurrence values in our data. However, on the basis of the frequencies of the two associations (93/99 for Violence and 97/146 for Refugee in Sample 1/Sample 2), independence would have predicted a higher co-occurrence count (17/28 in Sample 1/Sample 2). Relating this expected count to the observed co-occurrence count in the LLR formula resulted in a high negative value (i.e., a strong repulsive connection; -7.27/-8.4 in Sample 1/Sample 2). Third, the LLR can be related to the cumulative distribution of the χ2 test with one degree of freedom; hence, one can calculate the significance of the co-occurrences. The modularity-clustering procedure can give a partitioning that matches the structure of the network without parameter selection. Most importantly, the size and number of the modules are neither predefined (as in K-means clustering) nor assigned by the researcher on the basis of a dendrogram (as in Ward's method). The parameter-free and unconstrained character of the modularity formula ensures the data-driven clustering of associations.
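To make the measure concrete, the following Python sketch (ours, not the authors' code) computes Dunning's (1993) log-likelihood ratio for a 2×2 co-occurrence table and signs it so that negative values mark repulsion. The NetworkX greedy modularity routine is used here only as a convenient stand-in for the paper's modularity clustering, and the example pairs and values are hypothetical.

import math
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def llr_2x2(k11, k12, k21, k22):
    # Dunning's (1993) G^2 for a 2x2 contingency table:
    # k11 = sequences with both associations, k12/k21 = only one, k22 = neither.
    n = k11 + k12 + k21 + k22
    r1, r2, c1, c2 = k11 + k12, k21 + k22, k11 + k21, k12 + k22
    def term(obs, exp):
        return obs * math.log(obs / exp) if obs > 0 else 0.0
    return 2.0 * (term(k11, r1 * c1 / n) + term(k12, r1 * c2 / n)
                  + term(k21, r2 * c1 / n) + term(k22, r2 * c2 / n))

def signed_llr(n_both, n_a, n_b, n_seq):
    # Negative sign marks repulsion: the observed co-occurrence falls below
    # the count expected under independence (n_a * n_b / n_seq).
    g2 = llr_2x2(n_both, n_a - n_both, n_b - n_both,
                 n_seq - n_a - n_b + n_both)
    return g2 if n_both >= n_a * n_b / n_seq else -g2

# Hypothetical usage: keep attractive (positive-LLR) pairs as weighted edges
# and partition the resulting CoOp network by modularity.
pairs = {("violence", "fear"): 9.1, ("war", "help"): 6.3}  # illustrative values
G = nx.Graph()
G.add_weighted_edges_from((a, b, w) for (a, b), w in pairs.items())
modules = greedy_modularity_communities(G, weight="weight")

Because the unsigned LLR follows the χ2 distribution with one degree of freedom, a cutoff of 3.84 corresponds to p < .05 for a single comparison.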
The major limitation is that connections of the CoOp networks were often created from relatively few observations. As a consequence of this sparsity, it is important to be careful with interpretations based on a single connection and to rely more on the modules, which proved to be meaningful indicators of different attitudes. Furthermore, the modular investigation of the CoOp network is an exploratory analysis. Therefore, a minimum number of respondents cannot be guaranteed in each module. As an example, three modules containing only one association ("assassination," "unity," and "death") were identified in Sample 2. As a consequence, we cannot provide a lower bound (holding for all comparisons) for statistical power. However, small modules can be filtered according to future study designs to achieve a desired statistical power for a given effect size.
We now provide a few recommendations for similar future studies on choosing an appropriate sample, cue, and additional questionnaires for the associations. A large and diverse sample is recommended, both to increase the stability of the method (a higher threshold for excluding rare associations increases stability) and to capture the heterogeneity of opinions in the target group. Selecting an appropriate cue for the study is crucial.
Most importantly, the respondents should have an elaborated opinion about the provided cue. For example, there should be an active group-level discourse about the topic in the target group. In our case, during data collection, migration was a prominent topic in political and media discourse for the Hungarian population. Indefinite cues should be avoided: different respondents can easily assign different meanings to such a cue, and the segregation of the CoOp modules may then merely reflect semantic differences. For instance, the cue play can refer to sport, music, or games (Lancichinetti, Radicchi, Ramasco, & Fortunato, 2011). An appropriate cue should be a single word; with compound words, some respondents may associate to the first word while others associate to the second. Further studies can also guide associations by manipulating the instructions. For example, simply presenting "climate change" as a cue may result in a CoOp module structure in which technical terms, beliefs, and associations for "climate" are segregated. If one is interested in the different beliefs about climate change, the instruction could be restricted to opinions. For the preprocessing of the associations, automated stemming and lemmatization methods are available for English responses, for instance Porter's algorithm (Porter, 1980; see the sketch after this paragraph). For the sake of higher reliability, we recommend that further studies apply additional questionnaires to test the relevance of the CoOp modules. Although we demonstrated that the co-occurrence analysis of associations alone can yield meaningful results, we tested and validated it for only a single cue. On the basis of our results, not only an explicit questionnaire about the cue (POT) but also questionnaires measuring more abstract constructs (GM and SDO) can differentiate between CoOp modules. This suggests that a broad spectrum of dependent questionnaires is appropriate for testing the modules. Emotional similarity between associations provided a validation metric for the LLR values. However, further studies could use the emotional similarity between associations to construct networks and modules. Using the labels of the associations in a similarity measure can help link associations directly to certain emotional constructs and also yields less sparse data than co-occurrence measures. It is also important to emphasize that the emotional labeling of the associations can be changed to other appropriate labels (e.g., valence, PANAS, etc.). However, we recommend applying a diverse set of potentially relevant labels to maintain the unrestricted nature of the association task.
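As a minimal illustration of such preprocessing (the pipeline below is our illustrative sketch, not the authors' exact procedure), Porter's stemmer as implemented in NLTK can collapse inflected English responses onto a common stem:

from nltk.stem import PorterStemmer  # requires: pip install nltk

stemmer = PorterStemmer()

def normalize_association(raw):
    # Lowercase, trim whitespace, and stem a single free-text response.
    return stemmer.stem(raw.strip().lower())

responses = ["Wars", "war", "Helping", "helped"]
print([normalize_association(r) for r in responses])
# -> ['war', 'war', 'help', 'help']: inflected variants merge into one node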
Future studies could investigate network-topological parameters to determine how individual associations are distributed across modules. These parameters can link the identified modules to individual response patterns. Studying the relation between individual response patterns and the higher-level structure can relate group-level opinion dynamics to cognitive processes such as biased assimilation (Lord, Ross, & Lepper, 1979) or to socio-psychological differences such as SDO or GM in our case. In future studies, the influence of a social object on association relations can be assessed by comparing these relations to a "resting state" baseline of the mental organization among lexical concepts, such as large-scale semantic networks (De Deyne & Storms, 2008a; Nelson et al., 2004; Steyvers & Tenenbaum, 2005). Furthermore, constructing questionnaires from data-driven constructs (CoOp modules) can help to align theoretical and observed dimensions regarding a social object. For example, as opposed to previous studies that found an emphasis on economic concerns when respondents' attention was explicitly directed to them, economic concerns did not appear as a governing factor in free individual opinions about asylum seekers. Cross-cultural studies can also apply CoOp network analysis to study how corresponding social objects vary across cultures and refine questionnaires according to specific cultures (Hainmueller & Hopkins, 2014).
In sum, traditional questionnaires without an inductive focus can hardly reflect the dynamic contents constituting a social object, although these contents can form a link between social constructs and actual actions (Abric, 1993). The inductive nature of the CoOp modules can contribute to the classification of the changing contents that constitute a social object, and it can provide a data-driven representation of characteristic social frames for a particular time and place.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Post-appendectomy appendicitis: A case report
Ali Safar, Abdulrahman Al-Aqeeli, Ahmad Al-Mass, Bader Al-Shaban

Introduction: Acute appendicitis is a common surgical emergency that requires intervention. The accurate diagnosis remains challenging in some cases despite advances in both minimally invasive surgery and radiology. Stump appendicitis is a rare complication after appendectomy. It is defined as the acute inflammation of the residual appendix. A small number of stump appendicitis cases have been reported. Case Report: We report a case of stump appendicitis in a 42-year-old female, nine months following a laparoscopic appendectomy. She presented with a 24-hour history of abdominal pain, which started periumbilically and then localized to the right lower quadrant. Physical examination showed tenderness in the right iliac fossa with evidence of rebound and guarding. Laboratory studies were remarkable for leukocytosis. Computed tomography scan of the abdomen and pelvis showed a remnant appendicular segment with a maximum cross diameter of about 1.2 cm, associated with local inflammatory changes and surrounding fat stranding. An open stump appendectomy was performed uneventfully. Conclusion: Stump appendicitis is a rare but serious complication of appendectomy. It can represent a diagnostic dilemma if the treating clinician is unfamiliar with this rare clinical entity. Prompt recognition is important to avoid serious complications. Proper identification of the appendicular base intraoperatively and leaving the appendix stump shorter than 5 mm decrease the risk of stump appendicitis.
INTRODUCTION
Acute appendicitis is a common surgical emergency that requires intervention. The lifetime risk of developing appendicitis is about 7% [1]. The accurate diagnosis of appendicitis remains challenging in some cases despite advances in both minimally invasive surgery and radiology [2]. One rare complication after appendectomy is stump appendicitis, which is defined as the acute inflammation of the residual appendix [3]. Although the signs and symptoms do not differ from those of acute appendicitis, the diagnosis is often not considered because of the history of previous appendectomy [4]. A small number of stump appendicitis cases have been reported [5]. We report the case of a 42-year-old female in whom stump appendicitis was diagnosed preoperatively by computed tomography scan nine months after a laparoscopic appendectomy.
CASE REPORT
A 42-year-old female presented to Mubarak Al-Kabeer hospital on 13/4/2017 with a 24-hour history of abdominal pain. The pain started periumbilically and then localized to the right lower quadrant. It was associated with nausea and episodes of chills and rigors. Her medical history was noncontributory, but surgical history was notable for a laparoscopic appendectomy that was performed nine months earlier at the same hospital.
On admission, the patient was afebrile and her vital signs were otherwise normal. Physical examination revealed tenderness in the right iliac fossa with evidence of rebound and guarding. Routine laboratory studies were remarkable for a white blood cell count of 16×10^9/L with 84% neutrophils. Urinalysis was negative.
Computed tomography (CT) scan of the abdomen and pelvis was performed with rectal and intravenous contrast, which showed a remnant appendicular segment at the base with a maximum cross diameter of about 1.2 cm (Figure 1). It also showed local inflammatory changes and surrounding fat stranding (Figure 2). A preoperative diagnosis of stump appendicitis was made on the basis of the CT study.
Surgical exploration performed after completion of the CT scan showed a 1-2 cm long inflamed appendiceal stump. An open stump appendectomy was performed uneventfully. Stump appendicitis was also confirmed on gross pathologic and histologic examination of the resected specimen. No evidence of gross perforation was present. The postoperative course was uneventful, and the patient was discharged 72 hours later.
DISCUSSION
Appendectomy is one of the most commonly performed emergent surgical procedures. The first appendectomy was performed by Claudius Amyand in 1735. The clinical features and pathological abnormalities of appendicitis were described by Reginald Fitz in 1886. In 1945, Rose was the first to describe stump appendicitis in two patients who had undergone appendectomy for acute appendicitis [6].
The appendix arises from the postero-medial wall of the cecum about 3 cm below the ileocecal valve. The base of the appendix can be misidentified intraoperatively. The variable position and subserous length of the appendix, combined with acute inflammation, may result in this misidentification. Following the teniae coli on the cecum helps in identifying the true appendicular base. Generally, an appendix stump shorter than 5 mm is associated with a lower risk of stump appendicitis [7,8].
Stump appendicitis can represent a diagnostic dilemma if the treating physician is unfamiliar with this rare clinical entity. Patients present with signs and symptoms of appendicitis or acute abdomen along with a history of previous appendectomy. The presence of an appendectomy scar does not rule out the possibility of stump appendicitis [9]. Prompt recognition is important to avoid serious complications like perforation and peritonitis [8].
Radiological evaluation by ultrasound and CT scan helps in the preoperative diagnosis of stump appendicitis [5,10]. Computed tomography scan of the abdomen is more specific than ultrasound for the accurate preoperative diagnosis of stump appendicitis because it excludes other causes of acute abdomen. Computed tomography findings include pericecal inflammatory changes, abscess formation, fluid in the right paracolic gutter, and cecal wall thickening [7]. Completion appendectomy, either by open or laparoscopic technique, is the treatment of choice for stump appendicitis [11].
CONCLUSION
Stump appendicitis is a rare but serious complication of appendectomy. Patients present with signs and symptoms of appendicitis or acute abdomen along with a history of previous appendectomy. The diagnosis can be missed or delayed if the physician is unaware of this rare clinical entity. Prompt recognition is important to avoid serious complications. Proper identification of the appendicular base intraoperatively and leaving the appendix stump shorter than 5 mm decrease the risk of stump appendicitis.
Violent experiences and neighbourhoods during adolescence: understanding and mitigating the association with mental health at the transition to adulthood in a longitudinal cohort study
Purpose Violence occurs at multiple ecological levels and can harm mental health. However, studies of adolescents’ experience of violence have often ignored the community context of violence, and vice versa. We examined how personal experience of severe physical violence and living in areas with high levels of neighbourhood disorder during adolescence combine to associate with mental health at the transition to adulthood and which factors mitigate this. Method Data were from the Environmental Risk Longitudinal Twin Study, a nationally representative birth cohort of 2232 British twins. Participants’ experience of severe physical violence during adolescence and past-year symptoms of psychiatric disorder were assessed via interviews at age 18. Neighbourhood disorder was reported by residents when participants were aged 13–14. Potential protective factors of maternal warmth, sibling warmth, IQ, and family socio-economic status were assessed during childhood, and perceived social support at age 18. Results Personal experience of severe physical violence during adolescence was associated with elevated odds of age-18 psychiatric disorder regardless of neighbourhood disorder exposure. Cumulative effects of exposure to both were evident for internalising and thought disorder, but not externalising disorder. For adolescents exposed to severe physical violence only, higher levels of perceived social support (including from family and friends) were associated with lower odds of psychiatric disorder. For those who also lived in areas with high neighbourhood disorder, only family support mitigated their risk. Conclusion Increasing support or boosting adolescents’ perceptions of their existing support network may be effective in promoting their mental health following violence exposure. Supplementary Information The online version contains supplementary material available at 10.1007/s00127-022-02343-6.
wider environment. At the community level, individuals may live in neighbourhoods where there is violence that is not directed at them or witnessed personally. This includes physical and social signs of violence, threat, and danger, collectively referred to as neighbourhood disorder [2].
Adolescence is a peak age for experiencing several types of violence including physical assault, sexual victimisation, and family violence [3]. It is also when youth begin to spend more time unsupervised in the community, potentially exposing them to a wider range of violence and facilitating a greater awareness of neighbourhood disorder. At the same time, adolescence and the transition to adulthood is a high-risk period for the onset of a variety of common psychiatric disorders [4,5] that for many will signal the start of recurrent mental illness throughout adulthood [6]. Indeed, around 75% of adult mental health disorders will have onset by the age of 18 [7]. A better understanding of how personal experiences of violence and high levels of neighbourhood disorder during adolescence are associated with the development of mental health diagnoses could inform early targeted intervention and prevention.
Evidence consistently shows that personal experiences of violence, especially during childhood and adolescence, elevate the risk for psychiatric disorders [8,9], including internalising disorders such as depression and anxiety [10,11], externalising disorders such as attention-deficit hyperactivity disorder (ADHD) [12], and psychosis [13,14]. Separately, studies of community-level violence have also shown associations with residents' mental health problems [15–17]. Although research specifically with adolescents is scarce, links between neighbourhood disorder and adolescent psychological distress [18] and psychotic experiences [19] have been evidenced.
However, different types of violence often converge, and therefore it is also important to consider the combined impact of multiple forms of exposure. From the cumulative stress perspective, the greater the number of risk factors an individual experiences, the greater their risk of suffering from mental health problems [20,21]. Alternatively, exposure to violence across multiple settings may normalise or desensitise individuals such that one exposure reduces the effects of the other [22]. Individuals who experience one type of violence also commonly experience another type [23,24], and this poly-victimisation confers even greater risk for mental health problems [9, 25–27]. Violence exposure may also converge across ecological levels; for example, people who live in neighbourhoods with high levels of disorder are more likely to have personal experience of crime [15,28]. However, unlike poly-victimisation, there has been limited investigation of how multi-level violence exposure during adolescence combines to impact mental health. One study has shown cumulative effects of adverse living conditions (including neighbourhood disorder) and crime victimisation on adolescent psychotic experiences [28]. However, studies of community-level violence and mental health have typically ignored personal experiences of violence [29] or conflated the two [30], while studies of inter-personal violence often do not consider the social context in which these experiences take place. Thus, understanding how these two levels of violence exposure operate in the context of one another in relation to mental health represents a significant gap in the literature.
Fortunately, not everyone who is exposed to violence develops mental health problems. For example, in a UK cohort, 40% of young people did not have a psychiatric disorder at age 18 despite experiencing severe childhood victimisation [31]. Understanding what factors protect against poor mental health among adolescents exposed to violence is necessary to inform interventions at the individual, family, and community levels to mitigate its effects. Existing research has primarily focused on factors that are protective following violence that is personally experienced [32], but it remains unknown what factors are protective for adolescents who are also exposed to violence at the community level.
The present study addresses these knowledge gaps using data from the Environmental Risk (E-Risk) Longitudinal Twin Study. We investigate: (i) how the prevalence of psychiatric disorder at age 18 compares between adolescents with personal experience of severe physical violence, those who lived in neighbourhoods with high levels of disorder, and those with no such exposure during adolescence; (ii) whether there is a cumulative effect of having both personal experience of severe physical violence and living in a neighbourhood with high levels of disorder during adolescence on age-18 psychiatric disorder; and (iii) whether supportive relationships (maternal warmth, sibling warmth, and perceived social support), higher IQ, and higher family socioeconomic status (SES) protect against the development of psychiatric disorder within those violence-exposed groups who are at elevated risk.
The putative protective factors that we investigate were identified during focus group discussions with a group of young people with lived experience of violence, abuse, and mental health problems (see Latham et al. [33] for details of the focus groups) and then matched to measures available in the E-Risk Study. These factors are also consistent with theoretical accounts of resilience that highlight physical, psychological, and social resources in the environment that can help individuals to sustain their wellbeing in the face of adverse circumstances [34] as well as empirical findings [32,35,36]. Involving individuals with lived experience in mental health research ensures that it is relevant, inclusive, and high quality [37,38]. Accordingly, peer researchers also partnered with the academic research team to help interpret and present the study findings.
Sample
Participants were members of the Environmental Risk (E-Risk) Longitudinal Twin Study, which tracks the development of a nationally representative birth cohort of 2232 British children. The sample was drawn from a larger birth register of twins born in England and Wales in 1994-1995 [39]. Full details about the sample are reported elsewhere [40], and in Supplementary Material. Briefly, the E-Risk sample was constructed in 1999-2000 when 1116 families (93% of those eligible) with same-sex 5-year-old twins participated in home-visit assessments. Sex was evenly distributed within zygosity (49% male).
Follow-up home-visits were conducted when the children were aged 7, 10, 12, and 18 years (participation rates were 98%, 96%, 96%, and 93%, respectively). There were 2066 E-Risk participants (47% male) who were assessed at age 18. The average age of the participants at the time of the assessment was 18.4 years (SD = 0.36); all interviews were conducted after the 18th birthday. There were no differences between those who did and did not take part at age 18 in terms of socio-economic status (SES) assessed when the cohort was initially defined (χ2 = 0.86, p = 0.65), age-5 IQ scores (t = 0.98, p = 0.33), age-5 behavioural (t = 0.40, p = 0.69) or emotional (t = 0.41, p = 0.68) problems, or childhood poly-victimisation (z = 0.51, p = 0.61).
The Joint South London and Maudsley and the Institute of Psychiatry Research Ethics Committee approved each phase of the study. Parents gave informed consent and participants gave assent between 5 and 12 years and then informed consent at age 18.
Measures
Personal experience of severe physical violence during adolescence

At age 18, participants were interviewed face-to-face about exposure to a range of adverse experiences between 12 and 18 years using the Juvenile Victimisation Questionnaire (JVQ) [41,42] adapted as a clinical interview (see Supplementary Material and Fisher et al. [24] for full details). All information from the JVQ interview was compiled into victimisation dossiers. Using these dossiers, an expert in victimology (HLF) and three other members of the E-Risk team evaluated whether each participant was exposed to any physical violence, whether in the family, by peers, or by people in the wider environment. This "any physical violence" exposure variable was rated on a 6-point scale: 0 = not exposed, then 1–5 for increasing levels of severity (see Supplementary Table S1 for coding detail). The anchor points for these ratings were adapted from the coding system used for the Childhood Experience of Care and Abuse interview (CECA) [43,44]. Consistent with previous studies using the CECA [43,45], to index the most severe experiences of violence we dichotomised this variable such that those scoring at the upper end of the severity scale (4–5) were identified as having personal experience of severe violence (coded 1: 24.3% of participants, N = 502).
Neighbourhood disorder during adolescence
Neighbourhood disorder (14 items) was assessed via a postal survey sent to residents living alongside E-Risk families when participants were aged 13–14. Survey respondents, who were typically living on the same street or within the same apartment block as the Study participants, reported on various characteristics of their immediate neighbourhood, including levels of neighbourhood disorder. Surveys were returned by an average of 5.18 (SD = 2.73) respondents per neighbourhood, and there were at least 2 responses for 95% of neighbourhoods (N = 5,601 respondents). Residents were asked whether certain problems affected their neighbourhood, including muggings, assaults, vandalism, graffiti, and deliberate damage to property. Items were coded 0 ('no, not a problem'), 1 ('yes, somewhat of a problem'), or 2 ('yes, a big problem'). Consistent with existing analyses of the E-Risk Study [28, 46–49], items were averaged to create a summary score. Scores for each E-Risk family were then created by averaging the summary scores of respondents within that family's neighbourhood. The resulting variable approached a normal distribution across the full potential range (M = 0.49, SD = 0.34, range = 0–1.93). We indexed high levels of neighbourhood disorder as those participants with above-average neighbourhood disorder scores (coded 1: 42.2% of participants, N = 908).
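A sketch of this aggregation in pandas (the data frame and column names are hypothetical illustrations of the procedure described above, not the study's actual code):

import pandas as pd

def high_disorder_flags(survey, item_cols):
    # survey: one row per resident respondent, with a 'family_id' column
    # identifying the E-Risk family whose neighbourhood they describe.
    # Items are coded 0 (not a problem), 1 (somewhat), 2 (a big problem).
    respondent_score = survey[item_cols].mean(axis=1)          # mean of 14 items
    family_score = respondent_score.groupby(survey["family_id"]).mean()
    return (family_score > family_score.mean()).astype(int)    # 1 = high disorder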
Mental health problems at age 18
At 18 years of age, participants were privately interviewed about past-year symptoms of mental disorder. Ten disorder diagnoses were organised into three domains (externalising, internalising, and thought disorders) based on a reliable latent factor structure for psychopathology previously identified within the E-Risk Study [9]. Full information on individual diagnoses is available in Supplementary Material. Participants were classified as having a research diagnosis of 'externalising disorder' when they met diagnostic criteria for ADHD, conduct disorder, alcohol dependence, cannabis dependence, or tobacco dependence. Participants were classified as having a research diagnosis of 'internalising disorder' when they met diagnostic criteria for general anxiety disorder, major depressive disorder, post-traumatic stress disorder, or presented at least 2 of 5 eating disorder symptoms from an established screening tool indicating a possible case of anorexia nervosa or bulimia nervosa [50]. Finally, a 'thought disorder' classification was based on the definite presence of at least one of seven psychotic symptoms, centred on delusions and hallucinations. A total of 280 E-Risk participants met criteria for more than one classification. Thus, from these three domain-specific classifications, an overall binary outcome for 'any psychiatric disorder' was created, denoting the presence of any externalising, internalising, or thought disorder (coded 1), or the absence of all three (0).
Maternal warmth during childhood
Maternal warmth during childhood was assessed using procedures adapted from the Five-Minute Speech Sample method [51,52]. When children were aged 5 and 10, mothers were asked to speak for 5 minutes about each of the children separately. The speech samples were audiotaped and coded by two independent raters, who had good interrater reliability (r = 0.90). The warmth expressed by the mother in their interview about the child was coded on a six-point scale from no warmth (0; complete absence of warmth) to high warmth (5; definite warmth, enthusiasm, interest in, and enjoyment of the child). To capture high levels of maternal warmth during childhood, a binary variable was created that indexed the presence of high maternal warmth (i.e., a score of 4-5) at age 5 and/or age 10 (coded 1: 71.4% of participants, N = 1466).
Sibling warmth during childhood
Sibling warmth during childhood was assessed by asking mothers a series of questions about the quality of their twins' relationship with one another when the children were aged 7 and 10 [53]. Mothers responded on a three-point scale ranging from 0 'no' to 2 'yes' to six questions (e.g., "do your twins love each other", "do both your twins do nice things for each other"). Internal consistency at age 7 was α = 0.77 and at age 10 was α = 0.80. As age-7 and age-10 scores were highly correlated (r = 0.57, p < 0.001), these were summed to create a single composite score (M = 19.92, SD = 3.35).
Perceived social support
Perceived social support was assessed at age 18 via self-reports using the Multidimensional Scale of Perceived Social Support (MSPSS), which assesses individuals' access to supportive relationships with family, friends, and significant others [54]. The 12 items comprise statements such as "There is a special person who is around when I am in need" and "I can count on my friends when things go wrong". Participants rated these statements as "not true" (0), "somewhat true" (1), or "very true" (2). We summed scores to produce an overall social support scale with higher scores reflecting greater social support (M = 20.71, SD = 4.35). In addition, the family and friend sub-scales were utilised separately to examine whether social support from either family (M = 6.98, SD = 1.78) or friends (M = 6.71, SD = 2.01) was specifically protective.
Intelligence quotient (IQ)
Intelligence Quotient (IQ) was tested at age 12 using a short version of the Wechsler Intelligence Scale for Children-Revised (WISC-R) [55] which comprised three subtests (Matrix Reasoning, Information, and Digit Span). We converted the scores into an IQ score according to Sattler [56] and then standardised to a mean of 100 and standard deviation of 15.
Family socio-economic status (SES)
Family socio-economic status (SES) was measured at age 5 using a standardised composite of parental income (i.e., total household income), education (i.e., highest parent qualification), and occupation (i.e., highest parent occupation). These three SES indicators were highly correlated (r = 0.57-0.67) and loaded significantly onto one latent factor [57]. The population-wide distribution of this latent factor was then divided into tertiles (i.e., low-, medium-, and high-SES).
Individual-and family-level covariates
Individual-level covariates included biological sex and participants' history of emotional and behavioural problems during childhood, including attention-deficit hyperactivity disorder (ADHD) diagnosis, conduct disorder diagnosis, symptoms of depression and anxiety, self-harm and suicide attempts, and psychotic symptoms. Family-level covariates included family SES and family history of psychopathology. Measurement details for all covariates are provided in Supplementary Material.
Statistical analyses
Analyses were conducted using Stata 15. We accounted for the non-independence of our twin observations in all analyses using the Huber-White variance estimator [58]. Analyses proceeded in three steps. First, we used a series of logistic regression models to examine the separate associations of (i) personal experience of severe physical violence and (ii) high levels of neighbourhood disorder during adolescence with four outcomes at age 18: (i) any psychiatric disorder, (ii) externalising disorder, (iii) internalising disorder, and (iv) thought disorder. We also conducted two sensitivity analyses: we examined the association of different types of severe physical violence (i.e., crime victimisation, maltreatment, sexual victimisation, and family violence) with the four age-18 mental health outcomes, and we tested associations using neighbourhood disorder categorised at different thresholds (above the median, above the 75th percentile, and using the full-scale continuous measure of neighbourhood disorder).
Second, to investigate the potential cumulative and interactive effects of personal experience of severe physical violence and high neighbourhood disorder, we created a 4-level categorical variable to reflect the four possible combinations of exposure: no exposure (coded 0); personal severe physical violence only (1); high neighbourhood disorder only (2); both personal severe physical violence and high neighbourhood disorder (3). We used Interaction Contrast Ratios (ICRs) to investigate whether personal experience of severe physical violence and high neighbourhood disorder during adolescence combined synergistically to increase the odds of psychiatric disorder at age 18 (indicated by a departure from additivity [59,60]). This approach uses odds ratios (ORs) derived from logistic regression models to estimate the relative excess risk as a result of synergy, i.e., ICR = OR(exposure to both) − OR(personal severe physical violence only) − OR(high neighbourhood disorder only) + 1.
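For concreteness, the sketch below (with simulated, hypothetical data; variable names are illustrative, not from the E-Risk dataset) fits the 4-level exposure in a logistic regression with cluster-robust (Huber-White) standard errors for twins nested in families, then computes the ICR from the resulting odds ratios:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "exposure": rng.integers(0, 4, size=800),   # 0=none, 1=violence only,
    "family_id": np.repeat(np.arange(400), 2),  # 2=disorder only, 3=both
})
df["disorder"] = rng.binomial(1, 0.15 + 0.12 * (df["exposure"] == 3))

res = smf.logit("disorder ~ C(exposure)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["family_id"]}, disp=False)
ors = np.exp(res.params)  # odds ratios relative to the unexposed group
icr = (ors["C(exposure)[T.3]"] - ors["C(exposure)[T.1]"]
       - ors["C(exposure)[T.2]"] + 1.0)
print(f"ICR (relative excess risk due to synergy): {icr:.2f}")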
Third, we used logistic regression to examine whether supportive relationships (maternal warmth, sibling warmth, and perceived social support including family and friend sub-scales), higher IQ, or higher family SES were associated with reduced odds of psychiatric disorder within those violence exposure groups found to be at risk in step 2. We also tested statistical interactions between significant protective factors and adolescent violence exposure in the whole E-Risk sample using logistic regression.
To test the robustness of the associations, all models were adjusted for sex, family history of psychopathology, and childhood emotional and behavioural problems. Family SES was also included as a covariate in the models in steps 1 and 2; however, because we investigated family SES as a potential protective factor, the models in step 3 were not adjusted for it. Because we tested four mental health outcomes in steps 1 and 2, and seven potential protective factors in step 3, we controlled the false discovery rate (FDR) by applying the Benjamini-Hochberg method [61] to each collection of statistical tests. All p values are presented in their uncorrected form, with an asterisk indicating that the p value remained significant after FDR correction. Missing data, which were minimal, were predominantly due to participants missing mental health and personal experience of physical violence data because they did not participate in the E-Risk Study follow-up at age 18 (see "Sample" and "Measures" descriptions). This was not associated with living in areas with high levels of neighbourhood disorder (OR = 0.99, p = 0.967, 95% CI = 0.63–1.55), and we therefore analysed complete cases.
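A minimal sketch of the Benjamini-Hochberg step-up rule follows (in practice, statsmodels.stats.multitest.multipletests with method='fdr_bh' provides an equivalent, well-tested implementation); the example p values are hypothetical:

import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    # Return a boolean mask of hypotheses rejected while controlling the
    # false discovery rate at level q (Benjamini & Hochberg, 1995).
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Find the largest k with p_(k) <= (k/m) * q; reject the k smallest p values.
    below = p[order] <= q * (np.arange(1, m + 1) / m)
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.flatnonzero(below).max()
        reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.09]))
# -> [ True  True False False False] at q = 0.05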
Results
Is personal experience of severe physical violence during adolescence associated with mental health problems at age 18?

Table 1 shows the associations between personal experience of severe physical violence during adolescence and mental health problems at age 18. Having personal experience of severe physical violence was associated with significantly elevated odds of meeting criteria for any psychiatric disorder, including externalising, internalising, and thought disorders, at age 18. These odds remained significantly elevated after adjusting for covariates. Examination of different types of personal severe physical violence (i.e., crime victimisation, maltreatment, sexual victimisation, and family violence) also showed elevated odds for any psychiatric disorder, externalising disorder, internalising disorder, and thought disorder (see Supplementary Table S2; note these are similar to findings using dimensional measures of mental health in the E-Risk Study; see Schaefer et al. [8]).

[Table 1: Association of adolescent personal experience of severe physical violence with psychiatric disorders at age 18. CI = confidence interval; OR = odds ratio. Unadj. = unadjusted associations of violence exposure and age-18 mental health. Adj. = associations adjusted simultaneously for biological sex, family socio-economic status, family history of psychopathology, and childhood emotional and behavioural problems (attention-deficit hyperactivity disorder, conduct disorder, symptoms of depression and anxiety, self-harm and suicide attempts, and psychotic symptoms). *p values marked by an asterisk remained significant after correction for the false discovery rate (FDR) using the Benjamini-Hochberg procedure. All models account for the non-independence of twin observations. Sample sizes vary slightly according to the mental health outcome and due to small numbers of participants missing some data on covariates.]
Is living in a neighbourhood with high levels of disorder during adolescence associated with mental health problems at age 18?

Table 2 shows the associations between high (i.e., above-mean) levels of neighbourhood disorder and mental health problems at age 18. Living in neighbourhoods with high levels of disorder during adolescence was associated with significantly elevated odds of meeting criteria for any psychiatric disorder at age 18. This association held after adjusting for covariates. The elevated odds of externalising, internalising, and thought disorders were no longer statistically significant after adjusting for covariates; however, the effect sizes for internalising and thought disorders were not attenuated. Sensitivity analyses using neighbourhood disorder as (i) a continuous variable or dichotomised at the (ii) median and (iii) 75th percentile revealed a similar pattern of associations (see Supplementary Table S3).

[Table 2: Association of high levels of neighbourhood disorder during adolescence with psychiatric disorders at age 18. CI, confidence interval; OR, odds ratio; Unadj., unadjusted associations of violence exposure and age-18 mental health; Adj., associations adjusted simultaneously for biological sex, family socio-economic status, family history of psychopathology, and childhood emotional and behavioural problems (attention-deficit hyperactivity disorder, conduct disorder, symptoms of depression and anxiety, self-harm and suicide attempts, and psychotic symptoms). *p values marked by an asterisk remained significant after correction for the false discovery rate (FDR) using the Benjamini-Hochberg procedure. All models account for the non-independence of twin observations. Sample sizes vary slightly according to the mental health outcome and due to small numbers of participants missing some data on covariates.]
Is there a cumulative effect of having both personal experience of severe physical violence and living in a neighbourhood with high levels of disorder during adolescence on mental health at age 18?
Of the 502 E-Risk participants with personal experience of severe physical violence in adolescence, half (51.4%, N = 258) also lived in neighbourhoods with high levels of disorder. Table 3 shows the prevalence of age-18 psychiatric disorder according to adolescents' exposure to personal severe physical violence and/or neighbourhood disorder. When both personal experience of severe physical violence and high levels of neighbourhood disorder were considered together, there was evidence that those exposed to both had the highest odds of meeting criteria for any psychiatric disorder at age 18 (Fig. 1, panel A). A similar pattern was evident for internalising and thought disorders: these outcomes were associated most strongly with exposure to both personal experience of severe physical violence and high neighbourhood disorder (Fig. 1, panels C and D). The higher odds were particularly notable for thought disorder. In contrast, the odds of externalising disorder were comparable for those adolescents with only personal experience of severe physical violence and those who also lived in high-disorder neighbourhoods (Fig. 1, panel B). Adolescents who lived in neighbourhoods with high levels of disorder but did not have personal experience of severe physical violence were no more likely to meet criteria for a psychiatric disorder at age 18 than the non-exposed group.
Interaction contrast ratios for all mental health outcomes were non-significant, showing that the combined effect of exposure to personal severe physical violence and high neighbourhood disorder was not significantly different to their summed effect.

Do supportive relationships, higher IQ, or higher family SES protect against mental health problems for those violence-exposed groups who are at risk?
Having established that adolescents' personal experience of severe physical violence-with or without high levels of neighbourhood disorder-is associated with elevated odds of mental health problems across externalising, internalising, and thought disorders, we focus here just on the overarching 'any psychiatric disorder' outcome. Despite their elevated risk, in the E-Risk sample, just over 30% (N = 70) of adolescents who personally experienced only severe physical violence and 23% (N = 59) of those exposed to both severe physical violence and high neighbourhood disorder did not meet diagnostic criteria for a psychiatric disorder at age 18. We therefore examined whether supportive relationships, higher IQ, and higher family SES were operating as protective factors in these two violence-exposed subsamples (see Supplementary Tables S4 and S5 for descriptive statistics).
The results (Table 4) show that having family support at age 18 was associated with lower odds of any psychiatric disorder in adolescents exposed only to severe physical violence and those who also lived with high levels of neighbourhood disorder (though the latter did not remain significant after correction for FDR). For those with personal experience of only severe physical violence, higher overall levels of perceived social support and support from friends specifically were also associated with lower odds of any psychiatric disorder. Interestingly, the results showed that higher IQ, sibling and maternal warmth during childhood, and family SES were not significantly protective against age-18 psychiatric disorder in the violence-exposed subsamples.
Next, we tested for interactions between perceived social support, including family and friend sub-scales and adolescent violence exposure. None of these interactions were statistically significant (all p's > 0.05, see Supplementary Table S6).
Discussion
This study examined the association of adolescent violence exposure at the inter-personal and community level with mental health at the transition to adulthood. We found elevated odds of meeting diagnostic criteria for any psychiatric disorder (including externalising, internalising, and thought disorders) for adolescents with personal experience of severe physical violence. We also found evidence of a cumulative association with internalising and thought disorders for adolescents with personal experience of severe physical violence who also lived in neighbourhoods with high levels of disorder. Higher levels of perceived support (including from family and friends) at age 18 were associated with a reduced likelihood of psychiatric disorder following personal experiences of severe physical violence, whereas only perceived support from family was related to reduced odds for those who were additionally exposed to neighbourhood disorder (though this association was not statistically significant). These results hint at a protective effect; however, perhaps due to a lack of statistical power, interactions with violence exposure were not statistically significant. Our finding that adolescents who personally experienced violence and lived in neighbourhoods with high levels of disorder had the greatest odds of internalising and thought disorders is consistent with the cumulative stress hypothesis and existing research on poly-victimisation [9]. Interestingly, there was no cumulative association with externalising disorder: living in an area with high neighbourhood disorder during adolescence did not contribute any additional risk compared to having only personal experience of violence. This is in line with previous findings by Meltzer et al. [62] that adolescents' feeling of safety in their neighbourhood was related to emotional disorders but not conduct disorder, suggesting that neighbourhood disorder may be differentially associated with internalising and externalising problems. We speculate that living in dangerous or threatening communities may promote maladaptive cognitive styles such as biased threat perception and paranoia that are implicated in disorders such as anxiety and psychosis [63,64]. However, future studies that examine potential mechanisms, and qualitative studies to better understand individuals' experiences, are needed to investigate this further.

[Fig. 1: Odds of any psychiatric disorder (panel A), externalising disorder (panel B), internalising disorder (panel C), and thought disorder (panel D) at age 18. Adj, adjusted; CI, confidence interval; ICR, interaction contrast ratio; OR, odds ratio. Odds ratios are adjusted for biological sex, family socio-economic status, family history of psychopathology, and childhood emotional and behavioural problems (attention-deficit hyperactivity disorder, conduct disorder, symptoms of depression and anxiety, self-harm and suicide attempts, and psychotic symptoms) and account for the non-independence of twin observations. *p values marked by an asterisk remained significant after correction for the false discovery rate (FDR) using the Benjamini-Hochberg procedure.]
We also found a stronger association between personal experiences of severe physical violence and mental health than between neighbourhood disorder and mental health. This may be understood in terms of the proximity of the violence to the adolescent: violence at the inter-personal level is a more proximal exposure than violence that occurs at the community level. Those who live in a neighbourhood with high levels of disorder may therefore be able to ignore it or more easily distance themselves from it, whereas personal experience of violence is likely very distressing and difficult to get respite from, especially if it is ongoing. Relatedly, because we used reports of neighbourhood disorder from near-by residents rather than the participants themselves, adolescents may not have perceived their neighbourhood in the same way [19]. While our approach has the methodological advantage of avoiding same-source bias, which may inflate associations with mental health [65], a link between the two may depend on adolescents themselves perceiving there to be a high level of neighbourhood disorder where they live. Indeed, studies that have utilised both perceptions of violence in the neighbourhood and officially reported crime statistics suggest that it is people's perceptions of their neighbourhood that are most relevant for their mental health [15,18,19].
Consistent with a wealth of existing research demonstrating the benefits of social support for health and wellbeing [66–69], we found evidence that perceived social support at age 18 helped reduce the likelihood that violence-exposed adolescents meet criteria for psychiatric disorder at the transition to adulthood. This suggests that having someone with whom adolescents can share their experiences and worries, and from whom they can seek emotional support and advice, is important for maintaining good mental health. Given the prominence of peer friendships and individuals' increasing independence from their family during adolescence, it is notable that perceived support from family also helped maintain mental health following violence exposure. In fact, for those exposed to both personal violence and neighbourhood disorder, it was higher levels of family support (not friend support) at age 18 that was associated (albeit non-significantly after FDR correction) with a reduced likelihood of psychiatric disorder. On the contrary, maternal warmth and sibling warmth assessed during childhood were not associated with a reduced likelihood of meeting criteria for psychiatric disorder. This may be because the level of sibling and maternal warmth is too low among those children who go on to experience violence in adolescence, or it may be that it is adolescents' perception of the supportive relationships currently available to them that is most valuable for maintaining mental health in the face of violence exposure.

[Table 4: Association between potential protective factors and any psychiatric disorder at age 18 among adolescents exposed to (i) personal severe physical violence only and (ii) both personal severe physical violence and high neighbourhood disorder. CI, confidence interval; IQ, intelligence quotient; OR, odds ratio; SES, socio-economic status. a Adjusted simultaneously for biological sex, family history of psychopathology, and childhood emotional and behavioural problems (attention-deficit hyperactivity disorder, conduct disorder, symptoms of depression and anxiety, self-harm and suicide attempts, and psychotic symptoms). *p values marked by an asterisk remained significant after correction for the false discovery rate (FDR) using the Benjamini-Hochberg procedure. All models account for the non-independence of twin observations.]
Strengths and limitations
Study strengths include the use of a large nationally representative sample, longitudinal study design with excellent participant retention, and inclusion of a broad range of covariates to limit alternative interpretations. Nonetheless, we also acknowledge several limitations. First, participants' mental health problems and personal experience of violence were both self-reported at age 18. Although adolescents are likely to be the most knowledgeable about their experiences, their current mental health may have impacted their reporting of violence exposure, resulting in reverse causation. For example, depressive disorder may bias recall of negative experiences [70] or improve the accuracy of reporting (so-called 'depressive realism' [71]). Cognitive avoidance strategies can also affect the retrieval of memory in individuals with post-traumatic stress [72]. However, the prevalence of violent experiences during adolescence in the E-Risk study is comparable to other UK studies, suggesting that these were not significantly under-or over-reported [24]. Similarly, perceived social support was reported at age 18 which has implications for interpreting its association with age-18 mental health. We did control for a range of earlier emotional and behavioural problems to try to rule out reverse causation, but it remains possible that the severity of adolescents' mental health symptoms during the past year impacted their perceived level of support. Second, neighbourhood disorder was measured only once during adolescence when participants were approximately 13 years old. The majority of participants (71.4%, N = 1475) remained living at the same home address between the ages of 12 and 18, though levels of disorder within their neighbourhood may have changed over time. Third, our measure of neighbourhood disorder considers the immediate environment where E-Risk participants live. However, adolescents likely also spend time in other neighbourhoods (e.g., for education, work, and leisure) which may expose them to high levels of disorder that could impact their risk for later mental health problems. Fourth, items measuring neighbourhood disorder were averaged to create a total score. This approach considers less severe forms of disorder (such as graffiti) as being equal to more severe, potentially less common, forms (e.g., assault) which may underestimate the level of disorder in a neighbourhood. There are alternative ways of aggregating items to account for their differences in severity [73]. Fifth, we focus on adolescents' personal experience of severe physical violence; therefore, our findings may not generalise to other harmful experiences, such as non-physical bullying, cyber-bullying, and emotional abuse. Our findings also do not inform about associations with mental health beyond the age of 18; some of those who did not report symptoms may go on to develop psychiatric disorders in the future. Sixth, although we focused on disorders, there are other ways of conceptualising psychopathology (e.g., symptom continuum [74]) which our results do not necessarily inform about. Finally, our findings were based on a sample of twins, and these may differ from non-twins. However, the E-Risk sample is representative of UK families in terms of socioeconomic distribution [75] and neighbourhood deprivation [76] and the prevalence of violence victimisation and mental health problems has been shown to be comparable between twins and non-twins [77].
Conclusion
Our findings reaffirm the need for early intervention to support adolescents who experience violence and highlight the vulnerability of those whose personal experience takes place in the context of community-level violence. Interventions focused on improving perceived social support by increasing the availability of supportive people or boosting adolescents' perceptions of their existing support network may be effective in protecting their mental health.
Authors' contributions RML conceptualised the research question, conducted the analysis, trained and supervised the independent researchers, and drafted the manuscript. LA directed Phase 18 of the E-Risk longitudinal study, participated in data collection, conceptualised variables, and critically reviewed manuscript drafts. BA, SB, and AC participated in interpreting the results and drafting the manuscript. TEM conceptualised and designed the E-Risk longitudinal study, participated in data collection, conceptualised variables, and critically reviewed manuscript drafts. JBN participated in the study design, conceptualised variables, and critically reviewed manuscript drafts. HLF conceptualised the research question, supervised RML, participated in data collection, conceptualised variables, and critically reviewed manuscript drafts. All authors read and approved the final manuscript.
Funding These funders played no role in study design; in the collection, analysis, and interpretation of data; in the writing of the report; nor in the decision to submit this article for publication.
Availability of data and materials
The dataset reported in the current article is not publicly available due to lack of informed consent and ethical approval for open access, but is available on request by qualified scientists. Requests require a concept paper describing the purpose of data access, ethical approval at the applicant's institution, and provision for secure data access (for further details, see here: https://sites.duke.edu/moffittcaspiprojects/concept-paper-template/). We offer secure access on the Duke University and King's College London campuses. All data analysis scripts and results files are available for review on request from the corresponding author.
Conflict of interest
The authors declare that they have no conflict of interest.
Ethics statement The Joint South London and Maudsley and the Institute of Psychiatry Research Ethics Committee approved each phase of the E-Risk study. The study has therefore been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Revisiting RGEs for general gauge theories
We revisit the renormalisation group equations (RGE) for general renormalisable gauge theories at one- and two-loop accuracy. We identify and correct various mistakes in the literature for the $\beta$-functions of the dimensionful Lagrangian parameters (the fermion mass, the bilinear and trilinear scalar couplings) as well as the dimensionless quartic scalar couplings. There are two sources for these discrepancies. Firstly, the known expressions for the scalar couplings assume a diagonal wave-function renormalisation which is not appropriate for models with mixing in the scalar sector. Secondly, the dimensionful parameters have been derived in the literature using a dummy field method which we critically re-examine, obtaining revised expressions for the $\beta$-function of the fermion mass. We perform an independent cross-check using well-tested supersymmetric RGEs which confirms our results. The numerical impact of the changes in the $\beta$-function for the fermion mass terms is illustrated using a toy model with a heavy vector-like fermion pair coupled to a scalar gauge singlet. Unsurprisingly, the correction to the running of the fermion mass becomes sizeable for large Yukawa couplings of O(1). Furthermore, we demonstrate the importance of the correction to the $\beta$-functions of the scalar quartic couplings using a general type-III Two-Higgs-Doublet-Model. All the corrected expressions have been implemented in updated versions of the Mathematica package SARAH and the Python package PyR@TE.
Introduction
Renormalisation Group Equations (RGEs) are important as they provide the necessary link between the physics at different energy scales. The two-loop RGEs for all dimensionless parameters in general gauge theories have been derived already more than 30 years ago [1][2][3][4][5][6]. More recently, these results have been re-derived by Luo et al. [7] including the β-functions for dimensionful parameters. The latter results are based on the β-functions of dimensionless couplings by applying a so called "dummy field" method [8]. However, no independent direct calculation of the two-loop β-functions for scalar and fermion masses and scalar trilinear couplings exists so far in the literature. One of the aims of this paper is to provide a more detailed (pedagogical) discussion of the dummy field method and to critically examine the β-functions for the dimensionful parameters. As a result we will correct the β-functions for the fermion masses. We also find differences for the purely scalar couplings in certain models with respect to the literature. These differences arise from not always justified assumption about the properties of the wave-function renormalisation. We provide an independent cross-check using well tested supersymmetric RGEs which confirms our results. We believe that these corrections and validations are non-trivial and important in view of the wide use of the RGEs. Still, an independent direct calculation of the dimensionful β-functions would be useful.
The general equations have been implemented in the Mathematica package SARAH [9][10][11][12][13] and in the Python package PyR@TE [14,15]. More recent results which are (partially) included in these packages, such as kinetic mixing [16] or running VEVs [17,18], will not be discussed in this paper. The overarching purpose is to present the current state-of-the-art of the two-loop β-functions and to collect the corrected expressions such that all the relevant information is at hand in one place.
The Lagrangian for a general gauge theory
In this section we review the Lagrangian for a general renormalisable field theory following [7]. The following particle content is considered:
• $V^A_\mu(x)$ ($A = 1, \ldots, d$) are gauge fields of a compact simple group $G$, where $d$ is the dimension of $G$.
• $\phi_a(x)$ ($a = 1, \ldots, N_\phi$) denote real scalar fields transforming under a (in general) reducible representation of $G$. The Hermitian generators of $G$ in this representation will be denoted $\Theta^A_{ab}$ ($A = 1, \ldots, d$; $a, b = 1, \ldots, N_\phi$). Since the scalar fields are real, the generators $\Theta^A$ are purely imaginary and antisymmetric.
The most general renormalisable Lagrangian can be decomposed into three parts,
\[
\mathcal{L} = \mathcal{L}_0 + \mathcal{L}_1 + (\text{gauge fixing} + \text{ghost terms})\,,
\]
where $\mathcal{L}_0$ is free of dimensionful parameters and $\mathcal{L}_1$ contains all terms with dimensionful parameters. Here, $\mathcal{L}_0$ is given in Eq. (2.2), where $F^A_{\mu\nu}(x)$ is the gauge field strength tensor defined in the usual way in terms of the structure constants $f^{ABC}$ of the gauge group and the gauge coupling constant $g$, Eq. (2.3). The covariant derivatives of the scalar and fermion fields are defined in the standard minimal-coupling way.
The Lagrangian $\mathcal{L}_1$ containing the dimensionful parameters comprises the fermion mass term, the scalar mass term, and the cubic scalar interaction, Eq. (2.6). Here $m_f$ is a complex matrix of fermion masses, $m^2$ is a real matrix of scalar masses squared, and $h_{abc}$ are real cubic scalar couplings. Our goal is to revisit the one- and two-loop β-functions for these dimensionful couplings, which have been derived in Ref. [7] employing the so-called "dummy field" method initially proposed in Ref. [8].
Renormalisation Group Equations
We are interested in the scale dependence of the Lagrangian parameters which, in general, is governed by RGEs. The RGEs can be calculated in different schemes. We are going to consider only dimensional regularisation with modified minimal subtraction, usually called $\overline{\text{MS}}$, for four-dimensional field theories. In this scheme the β-functions, which describe the renormalisation group running of the model parameters $\Theta_i$, are defined as the derivative of $\Theta_i$ with respect to the logarithm of an arbitrary renormalisation scale $\mu$, and $\beta_i$ can be expanded in a perturbative series in which $\beta_i^{(1)}$ and $\beta_i^{(2)}$ are the one- and two-loop contributions to the running that we are interested in (see the compact form below). Generic expressions of the one- and two-loop β-functions for dimensionless parameters in a general quantum field theory were derived in Refs. [1][2][3].
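In compact form (a restatement assuming the conventional loop-factor normalisation, consistent with the pre-factors $1/(16\pi^2)$ and $1/(16\pi^2)^2$ quoted later in the text):
\[
\beta_i \equiv \mu \,\frac{\mathrm{d}\Theta_i}{\mathrm{d}\mu}\,, \qquad
\beta_i = \frac{1}{16\pi^2}\,\beta_i^{(1)} + \frac{1}{(16\pi^2)^2}\,\beta_i^{(2)} + \ldots
\]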
The dummy field method
In principle, one could calculate the renormalisation constants for the dimensionful couplings (the fermion masses (m f ) jk , the squared scalar masses m 2 ab , and the cubic scalar couplings h abc ) and derive the β-functions directly from them. However, this is tedious and has not been attempted so far in the literature. Instead, a "dummy field" method has been employed in Ref. [7] applying an idea, to our knowledge, first mentioned in Ref. [8]. Since a detailed description of this method is lacking in the literature we provide a careful discussion of it in this section.
The idea is to introduce a scalar "dummy field", i.e. a non-propagating real scalar field with no gauge interactions. The dummy field will be denoted by an index with a hat, $\phi_{\hat d}$, and satisfies the condition $D_\mu \phi_{\hat d} = 0$. As a consequence, expressions with two identical internal dummy indices (corresponding to a propagating dummy field) have to vanish. Furthermore, since $D_\mu \phi_{\hat d} = 0$, all gauge boson-dummy scalar vertices vanish as well. Let us now consider the Lagrangian $\mathcal{L}_0$ (2.2) in the presence of the same particle content plus one extra scalar dummy field $\phi_{\hat d}$ and separate the terms with the dummy field. Using $D_\mu \phi_{\hat d} = 0$ together with the total symmetry of the quartic coupling,
\[
\lambda_{ab\hat d\hat d}+\lambda_{a\hat d b\hat d}+\lambda_{\hat d ab\hat d}+\lambda_{a\hat d\hat d b}+\lambda_{\hat d a\hat d b}+\lambda_{\hat d\hat d ab} = 6\,\lambda_{ab\hat d\hat d}\,,\quad
\lambda_{abc\hat d}+\lambda_{ab\hat d c}+\lambda_{a\hat d bc}+\lambda_{\hat d abc} = 4\,\lambda_{abc\hat d}\,,\quad
\lambda_{a\hat d\hat d\hat d}+\lambda_{\hat d a\hat d\hat d}+\lambda_{\hat d\hat d a\hat d}+\lambda_{\hat d\hat d\hat d a} = 4\,\lambda_{a\hat d\hat d\hat d}\,,
\]
one easily finds the decomposition of Eq. (4.1) (writing the sums over the scalar indices explicitly). A few comments are in order:
• The first two lines reproduce the Lagrangian $\mathcal{L}_0$ (2.2) with the original particle content without the dummy field.
• The terms in the third line reproduce the Lagrangian $\mathcal{L}_1$ (2.6) if one makes the identifications given in Eqs. (4.2) and (4.3). Note that we believe these are the correct relations, while the notation below Eq. (21) in [7] is rather sloppy.
• The terms in the fourth line of Eq. (4.1) do not spoil the relations in Eq. (4.2) or (4.3). First of all, the second-to-last term is only gauge invariant if $\phi_a$ is a gauge singlet. Furthermore, it is an effective tadpole term which can be removed by a shift of the field $\phi$. The last term is just a constant. In any case, contributions from the interactions in the fourth line to the β-functions of the other dimensionful parameters would involve at least one internal dummy line, which gives a vanishing result.
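The explicit identifications of Eqs. (4.2) and (4.3) are not reproduced here, but their structure can be sketched as follows; possible symmetry and normalisation factors (such as the relative factor of 1/2 between $\beta_{m^2}$ and $\beta_\lambda$ mentioned in Sec. 5.3) are conventions that this sketch does not fix:
\[
(m_f)_{jk} \;\longleftrightarrow\; Y^{\hat d}_{jk}\,, \qquad
h_{abc} \;\longleftrightarrow\; \lambda_{abc\hat d}\,, \qquad
m^2_{ab} \;\longleftrightarrow\; \lambda_{ab\hat d\hat d}\,.
\]
In words, the fermion mass plays the role of a Yukawa coupling to the dummy scalar, the cubic scalar coupling that of a quartic coupling with one dummy leg, and the scalar mass that of a quartic coupling with two dummy legs.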
The relations (4.3) have been used in Ref. [7] to derive the β-functions for the fermion masses from the known ones for the Yukawa interactions. Likewise, the β-functions for the scalar masses and the trilinear scalar couplings were obtained from the scalar quartic β-functions. This was achieved by removing contributions with a summation of $\hat d$-type indices and terms with $\hat d$ indices appearing on the generators $\Theta$. However, a subtlety arises due to the wave-function renormalisation of external dummy scalar lines which leads to effective tadpole contributions. Such contributions should be removed from the β-functions for the Yukawa interactions and quartic couplings but are not necessarily eliminated by just suppressing the summation over $\hat d$-indices and associated gauge couplings. For this reason, we re-examine in the following sections all the β-functions for the dimensionful parameters by verifying the dummy method on a diagram-by-diagram basis.
β-functions for dimensionful parameters
We now apply the dummy method to obtain the β-functions of the dimensionful parameters using the generic results for the dimensionless parameters given in Refs. [1][2][3]7]. In Sec. 5.1, we start with the fermion mass term. The trilinear scalar couplings will be discussed in Sec. 5.2 before we turn to the scalar mass terms in Sec. 5.3. First of all, it is necessary to introduce a number of group invariants and definitions for certain combinations of coupling constants. These definitions will be used to write the expressions for the β-functions in a more compact form.
Group invariants. $C_2(F)$ is the quadratic Casimir operator for the (in general) reducible fermion representation, with $i, j = 1, \ldots, N_\psi$. Due to Schur's lemma, $C_2(F)$ is a diagonal $N_\psi \times N_\psi$ matrix with the same eigenvalues for each irreducible representation. Similarly, $C_2(S)$ is the quadratic Casimir operator for the (in general) reducible scalar representation, with $a, b = 1, \ldots, N_\phi$. Again due to Schur's lemma, $C_2(S)$ is a diagonal $N_\phi \times N_\phi$ matrix. Furthermore, $S_2(S)$ and $S_2(F)$ denote the Dynkin index of the scalar and fermion representations, respectively, and $C_2(G)$ is the quadratic Casimir operator of the (irreducible) adjoint representation.
Coupling combinations. We start with two $N_\psi \times N_\psi$ matrices, $Y_2(F)$ and $Y_2^\dagger(F)$, formed out of the Yukawa matrices $Y^a_{ij}$, where the sum includes all 'active' (propagating) scalar indices but not the dummy index.
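For orientation, the standard forms of these objects can be sketched as follows; the symbol $t^A$ for the fermion-representation generators is an assumed notation (only $\Theta^A$ for scalars is introduced above), and the normalisations follow the conventions common in the general gauge theory literature:
\[
C_2(F)_{ij} = (t^A t^A)_{ij}\,, \quad
C_2(S)_{ab} = (\Theta^A \Theta^A)_{ab}\,, \quad
S_2(F)\,\delta^{AB} = \mathrm{Tr}\,(t^A t^B)\,, \quad
S_2(S)\,\delta^{AB} = \mathrm{Tr}\,(\Theta^A \Theta^B)\,, \quad
C_2(G)\,\delta^{AB} = f^{ACD} f^{BCD}\,,
\]
\[
Y_2(F) = Y^{\dagger a}\, Y^{a}\,, \qquad
Y_2^{\dagger}(F) = Y^{a}\, Y^{\dagger a}\,,
\]
where the scalar index $a$ runs over the active (propagating) scalars only.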
It should be noted that $Y_2^\dagger(F) \neq [Y_2(F)]^\dagger$; instead it represents the quantity $Y_2(F)$ where the Yukawa coupling $Y^a$ has been replaced by its conjugate $Y^{\dagger a}$. Furthermore, the $N_\phi \times N_\phi$ matrices $Y_2^{ab}(S)$ and $\Lambda^2_{ab}(S)$ are needed below. There is one crucial comment in order concerning the properties of these objects: in previous works it is assumed that $Y_2^{ab}(S) = Y_2(S)\,\delta_{ab}$ and $\Lambda^2_{ab}(S) = \Lambda^2(S)\,\delta_{ab}$ hold. These properties are derived from group-theoretical arguments. We agree with them as long as the considered model does not contain several scalar particles with identical quantum numbers. However, if this is the case, then these relations are no longer valid. In other words, the matrices $Y_2^{ab}$ and $\Lambda^2_{ab}$ are diagonal in the space of irreducible representations but not necessarily in the space of particles in the considered model. The consequence is that contributions from off-diagonal wave-function corrections may arise which are not included in Refs. [1][2][3][7]. This is one source for the discrepancies between our results and previous ones. This does not only affect the dimensionful parameters but also the quartic scalar couplings.
RGEs for dimensionless parameters
The β-functions for the dimensionful parameters are obtained from those of the dimensionless parameters using the dummy field method. The one- and two-loop expressions for the running of a Yukawa coupling are taken from Ref. [7] without any modifications; the definition of $H^a_{2t}$ can be found in App. A.1, and the factor $\kappa = 1/2$ for 2-component fermions while $\kappa = 1$ for 4-component fermions.
For the quartic coupling, we use the expressions given in our Eqs. (5.13) and (5.14). They differ from the results in Refs. [2,7] in the terms which are underlined. The reason is that only the possibility of a diagonal wave-function renormalisation is included in Refs. [2,7], as discussed above.
Finally, to have all RGEs in one place, we also give the β-functions for the gauge coupling, although we will not use them in the following.
Fermion mass
The β-function of the fermion mass term can be obtained from the expressions for the Yukawa coupling by considering the external scalar as a dummy field. We follow a diagrammatic approach; for each class of diagrams we provide the coupling structure and show the resulting diagram together with its expression after applying the dummy field method. In accord with the discussion in Sec. 4, the mappings of Eq. (4.3) are performed. The fermion mass insertions will be represented by black dots in the Feynman diagrams. We recall that dummy scalars neither couple to gauge bosons nor propagate. There are two generically different wave-function correction diagrams contributing to the running of the Yukawa couplings: those stemming from either external fermions or external scalars. For external fermions, the transition between the Yukawa coupling and the fermion mass term is straightforward, where the grey blob depicts all loop corrections to the external line and $x_1$ and $x_2$ are real numbers (cf. Eq. (5.12)). Thus, we find counterparts for all contributions in both cases. The wave-function renormalisation part stemming from the external scalar is completely different: after applying the replacement with dummy fields, we find only tadpole contributions. However, those are usually absorbed into a re-definition of the vacuum, i.e., they do not contribute to the β-function of the fermion mass term, and the correct replacements contain no such terms. Here we find differences compared to the results of Ref. [7], where different replacements have been made.
Figure 1: Two-loop diagram which does not contribute to the β-function of the fermion mass when replacing the external scalar by a dummy field as indicated here. The contribution depicted on the right-hand side was included in Ref. [7].
We now turn to the vertex corrections. At the one-loop level, there is only one diagram which needs to be considered; at the two-loop level, there are many more contributions. The explicit diagrams are given in Appendix A.1. While we completely agree with Ref. [7] for the one-loop vertex corrections, we also found differences at the two-loop level. Those stem from diagrams involving both wave-function corrections of scalars and vertex corrections, as depicted in Fig. 1. According to our reasoning, these diagrams are also converted into tadpole diagrams which drop out.
Summarising our results, we find that the one-loop β-function of the fermion masses has one term fewer than the expression given in Ref. [7]; the corrected one- and two-loop expressions are given in Eqs. (5.44) and (5.45), where we disagree in several terms as discussed above. The numerical impact of these differences compared to earlier results is briefly discussed for the example of a specific model in Sec. 7.
Trilinear coupling
We now turn to the purely scalar interactions. The β-functions of the cubic interactions are obtained from the expressions for the quartic couplings by replacing one external scalar by a dummy field. The translation of the wave-function contributions between both cases is straightforward; in this notation, the index $i$ is summed over all uncontracted scalar indices, and 'X' denotes the combination of group invariants multiplying $\Lambda^S_{abcd}$ in Eq. (5.14). As discussed above, we have modified the parts which involve Yukawa or quartic couplings compared to Ref. [7]. The reason is that in these cases new contributions can be present due to off-diagonal wave-function renormalisation corrections. There are three generically different vertex corrections which contribute to the RGE of the quartic interaction. However, since the dummy field does not interact with the gauge sector, those kinds of contributions do not appear in the case of the cubic interaction, and the translation at the one-loop level simplifies accordingly. The explicit form of the two-loop diagrams as well as their expressions in both cases are given in Appendix A.2. We find agreement between our results and those of Ref. [7] at the one- and two-loop level up to the differences from off-diagonal wave-function renormalisations. Thus, we obtain the corresponding β-functions at the one- and two-loop levels.
Scalar mass
Finally, we turn to the scalar mass terms, which are bilinear in the scalar fields. The procedure is very similar to the case of the cubic scalar coupling, and the wave-function corrections translate analogously from the terms appearing for the quartic scalar coupling. Again, 'X' denotes the combination of group invariants multiplying $\Lambda^S_{abcd}$ in Eq. (5.14). As before, we need to consider the three generically different diagrams which contribute to the running of the quartic couplings. The one with vector bosons in the loop vanishes due to inserting dummy fields, while for the other two diagrams additional terms arise.
The two-loop diagrams are given in Appendix A.3. We also find agreement between our results here and the ones given in Ref. [7] up to the wave-function renormalisation. One needs to be careful about a factor of $\tfrac{1}{2}$ due to $\beta_{m^2_{ab}} = \tfrac{1}{2}\,\beta_{\lambda_{ab\hat d\hat d}}$, which we have included explicitly in the definition of the β-function for $m^2_{ab}$, while it has been partially absorbed into other definitions in Ref. [7]. Thus, with our conventions we obtain the one- and two-loop β-functions.
Comparison with supersymmetric RGEs
We have now re-derived the full one- and two-loop RGEs for the dimensionful parameters. While we agree with Ref. [7] concerning the bilinear and cubic scalar interactions (up to wave-function renormalisation), we find differences in the fermion mass terms. Therefore, we want to double-check our results by comparing with those obtained using supersymmetric (SUSY) RGEs. The general RGEs for a softly broken SUSY model have been independently calculated in Refs. [8,19,20] and the general agreement between all results has been discussed in Ref. [21]. Thus, there is hardly any doubt that these RGEs are absolutely correct. Therefore, we want to test our results with a model in which we enforce SUSY relations among the parameters. After a translation from the MS to the DR scheme one should recover the SUSY results.
Since a supersymmetric extension of the SM yields many couplings which are generically all of the same form, we opt for a more compact theory. We consider a toy model with one vector superfield $\hat{B}$ of a U(1) gauge group and three chiral superfields $\hat{H}_u$, $\hat{H}_d$, and $\hat{S}$, where $Q$ denotes the electric charge of each superfield. The superpotential consists of two terms,
\[
W = \lambda\, \hat{H}_u \hat{H}_d \hat{S} + \mu\, \hat{H}_u \hat{H}_d \,, \tag{6.4}
\]
and the corresponding soft-breaking terms are included. This model contains all of the relevant generic structure we need to test. Making use of the results of Ref. [8], which are also implemented in the package SARAH, we find the expressions for the one- and two-loop RGEs for the different parts of the model: the gauge coupling, the trilinear superpotential parameter $\lambda$ (Eq. (6.11)), and the remaining superpotential and soft-breaking parameters.
Bilinear Soft-Breaking Parameters
As before, we have suppressed the pre-factors $1/(16\pi^2)$ and $1/(16\pi^2)^2$ for the one- and two-loop β-functions. With these functions, the running of all parameters at the one- and two-loop level is fixed. However, for later comparison, it will be convenient to know the β-functions for some products of parameters as well. That is done by applying the chain rule: for a product of two parameters, $\beta_{xy} = x\,\beta_y + y\,\beta_x$.
We now consider the same model written as a non-supersymmetric theory. In this case, we have one gauge boson $B$, four fermions, and the corresponding scalar fields. The full potential for this model involves a substantial number of different couplings, Eq. (6.41).
We think that this rather lengthy form justifies our approach of considering only a toy model rather than a realistic SUSY theory. We have neglected couplings that would be allowed by the symmetry of this theory but vanish as we match to the SUSY model. In particular, the CP-even and CP-odd parts of the complex field $S$ will run differently unless specific (SUSY) relations among the parameters exist. Therefore, one would need to decompose $S$ into its real components and write down all possible potential terms involving these fields. However, we are only interested in the β-functions in the SUSY limit, where no splitting between these fields is introduced. Therefore, we retain the more compact notation of (6.41). We can now make use of our revised expressions to calculate the RGEs up to two-loop order. For this purpose, we modified the packages SARAH and PyR@TE accordingly. The lengthy expressions in the general case are given in Appendix B. In order to make the connection to the SUSY case, we can make the appropriate associations between the parameters of the two models, e.g. for $\lambda_5$ (Eqs. (6.62) and (6.63)).
Scalar Mass Terms
We see that all one-loop expressions as well as the two-loop β-function of the gauge coupling agree with the SUSY expressions. The remaining discrepancies at the two-loop level are due to the differences between the MS and DR schemes. In order to translate the non-SUSY expressions to the DR scheme, we need to apply the scheme-translation shifts of Ref. [22]; these have to be applied to the expressions of the one-loop β-functions to obtain the corresponding two-loop shifts. In addition, one must take into account that for the quartic couplings and the Yukawa couplings an additional shift appears 'on the left-hand side' of the expression,
with some coefficient $c$ depending on the charges of the involved fields. We find the corresponding shifts for the different couplings (cf. Eqs. (6.103) and (6.105)). This gives complete agreement between the two-loop β-functions of both calculations. Thus, our revised results for the RGEs of a general quantum field theory are confirmed.
Numerical impact
Running of fermion mass terms
We briefly want to discuss the numerical impact of the changes in the β-function for the fermion mass term. Differences in the running will only appear in models in which the Lagrangian contains fermionic terms with a Yukawa-like coupling $Y$ between two Weyl fermions $f_1$, $f_2$ and a scalar $S$, as well as a fermion mass term $\mu$. Both terms can only be present if $S$ is a gauge singlet, $S : (1,1)_0$ (Eq. (7.4)). The one- and two-loop β-functions are computed using our corrected expressions, and the differences compared to the old results grow with the Yukawa coupling. The numerical impact of this difference is depicted in Fig. 2, where we assumed a value of 1 TeV for $\mu_T$ at the scale $Q = 1$ TeV and used different values of $Y_T$. As expected from Eq. (7.8), the discrepancy between the old and new results rapidly grows with increasing $Y_T$. Thus, the correction in the RGEs is crucial, for instance, to study grand unified theories which also predict additional vector-like fermions with large Yukawa couplings to a gauge singlet.
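To make the size of such an effect tangible, here is a minimal numerical sketch in Python; the coefficients c_old and c_new, the frozen Yukawa coupling, and the running range are illustrative assumptions only and do not reproduce the corrected β-functions referred to above:

    import numpy as np
    from scipy.integrate import solve_ivp

    LOOP = 1.0 / (16.0 * np.pi ** 2)   # one-loop factor

    def run_mu(c_yuk, y_t, mu0=1000.0):
        """Integrate d(mu_T)/d(ln Q) = LOOP * c_yuk * y_t**2 * mu_T
        from Q = 1 TeV up to 1e16 GeV, with the Yukawa coupling kept frozen."""
        rhs = lambda t, mu: LOOP * c_yuk * y_t ** 2 * mu
        sol = solve_ivp(rhs, (np.log(1.0e3), np.log(1.0e16)), [mu0], rtol=1.0e-8)
        return sol.y[0, -1]

    c_old, c_new = 3.0, 2.0   # hypothetical coefficients differing by one term
    for y_t in (0.5, 1.0, 1.5):
        mu_old, mu_new = run_mu(c_old, y_t), run_mu(c_new, y_t)
        print(f"Y_T = {y_t}: old = {mu_old:7.1f} GeV, new = {mu_new:7.1f} GeV, "
              f"relative shift = {(mu_old - mu_new) / mu_new:.1%}")

As in the full calculation, the relative shift grows rapidly with the Yukawa coupling, simply because the differing term enters the exponent of the running.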
The underlined terms stem from the off-diagonal wave-function renormalisation and are missing in the results of Refs. [1][2][3][7]. In Fig. 3 we show the numerical impact of the additional one-loop contributions on the running of the quartic couplings for two different points. The chosen sets of the quartic couplings, $\tan\beta$ and $M_{12}$ result in a tree-level Higgs mass of 125 GeV. We see that the additional terms can lead to sizeable differences already for a third-generation additional Yukawa coupling of 0.5 and small $\tan\beta = 2$. This is due to the trace of the product of the additional and the standard up-type Yukawa matrices. When this coupling is increased to 1 and $\tan\beta = 50$, this trace becomes of order one and the impact on the running couplings is tremendous.
Of course, there are also differences at the two-loop level; within the same approximation they are given in Eq. (7.25).
Conclusions
In this paper, we have revisited the general RGEs with the goal to present the current state-of-the-art and to correct some mistakes in the literature. In particular, the known expressions for the scalar quartic couplings [3,7] assume a diagonal wave-function renormalisation which is not appropriate for models with mixing in the scalar sector. We have therefore corrected and generalised the expressions for the β-functions of the quartic couplings in (5.13) and (5.14). While finalising this work, a related paper appeared on the arXiv [24] which confirms our findings concerning the couplings in the scalar sector. Furthermore, we have carefully re-examined the dummy field method and have provided a detailed description of it, which has so far been missing in the literature. We then have used this method to re-derive the β-functions for the dimensionful parameters (fermion masses, scalar masses, and the cubic scalar couplings). For cubic scalar couplings and scalar masses, the only differences to Ref. [7] are due to the aforementioned off-diagonal wave-function renormalisation. However, discrepancies for the fermion mass β-functions in [7] have been found and reconciled in (5.44) and (5.45). We have also performed an independent cross-check of our results using well-tested supersymmetric RGEs and we find complete agreement.
We have illustrated the numerical impact of the changes in the β-function for the fermion mass terms using a toy model with a heavy vector-like fermion pair coupled to a scalar gauge singlet. Unsurprisingly, the correction to the running of the fermion mass rapidly grows with increasing Yukawa coupling. Thus it is crucial to use the corrected RGEs if one wants to study, for instance, grand unified theories which predict additional vector-like fermions with large Yukawa couplings to a gauge singlet. In addition, we have demonstrated the importance of the correction to the β-functions of the scalar quartic couplings using a general type-III Two-Higgs-Doublet-Model. As can be seen in Fig. 3, the corrections to the running couplings are non-negligible and can become very large in certain regions of the parameter space.
All the corrected expressions have been implemented in updated versions of the Mathematica package SARAH and the Python package PyR@TE. We hope that this paper will be a useful resource in which all the relevant information on the two-loop β-functions is at hand in one place.
Acknowledgments
We are grateful to Dominik Stöckinger who first pointed out mistakes in the literature. KS and IS would like to thank Steven Martin for very helpful discussions. FS is supported by the ERC Recognition Award ERC-RA-0008 of the Helmholtz Association.
A The dummy field method at two-loop
In this appendix, we list all two-loop vertex corrections which are needed to obtain the β functions for dimensionful parameters.
B Full two-loop RGEs without SUSY relations
In this appendix, the full β-functions for all parameters of the non-supersymmetric toy model in Sec. 6 are listed up to two-loop order.
Maternal and Neonatal Characteristics for Late Foetal Death in Latvia between 2001 and 2014: Population-Based Study
Introduction Stillbirth is one of the most common adverse pregnancy outcomes worldwide. Late foetal death (LFD) rates are mostly used for international comparisons because of the large variations in stillbirth rates between countries. Objective To examine trends in LFD (including antepartum and intrapartum) by multiple births, birth weight, and maternal age in two time periods. Methods A retrospective cohort study was used to analyse data from the Medical Birth Register (2001–2014), divided into 2 periods of 7 years each. In total, data on 1,340 singletons were analysed. This study calculated LFD rates and rate ratios (RR). Results The overall LFD rate showed a slight but statistically significant reduction (p < 0.001) of 18% between 2001–2007 and 2008–2014. There was a slight increase in the mortality rate from multiple pregnancies (RR 1.1/1000; 95% CI 0.6-1.9). There were no major differences in the LFD rate by maternal age during the time periods. Conclusions LFD decreased (RR 0.8/1000 births), as did intrapartum LFD (RR 0.6/1000 births). Older maternal age influenced pregnancy outcomes, and higher LFD rates were observed in the age group ≥35 years. Substantial intrapartum stillbirth rates indicate problems with the quality of intrapartum care and emergency obstetric care. Further research is needed to evaluate the strategies necessary to substantially reduce the number of stillbirths in the country.
Introduction
Stillbirth is one of the most common adverse pregnancy outcomes worldwide; over 3 million deliveries annually are stillborn [1][2][3][4]. The European Perinatal Health Monitoring (PERISTAT) data analysis shows that the average reduction of stillbirths in 2010 compared to 2004 was approximately 19% (with variations among countries of up to 38%) [5,6]. Late foetal death rates are used for international comparisons because of large stillbirth rate variations between countries [7][8][9].
The number of stillbirths, which were not explicitly targeted in the Millennium Development Goals, has decreased more slowly than has infant mortality or mortality in children younger than 5 years. The Every Newborn Action Plan has a target of 12 or fewer stillbirths per 1000 births in every country by 2030. A total of 94 mainly high-income and upper-middle-income countries have already met this target, although with noticeable disparities [3].
Variations in stillbirth rates across high-income countries and large equity gaps within high-income countries persist [10]. Disadvantaged women, those with less antenatal care and those who delivered without a skilled birth attendant were at increased risk of delivering a stillbirth [3,11,12]. Each death is a tragic loss and causes much grief to the parents and extended family. These deaths matter to the mother and the family, to society, and to the health care system. Stillbirths are associated with public health challenges such as social inequalities, maternal obesity, and smoking [13].
Total stillbirth rates in Latvia have seen little change in the past 16 years. A slight decrease has been observed from 7.0/1000 births in 2001 to 5.7/1000 births in 2016 [14]. The aim of this study was to examine trends in late foetal death (including antepartum and intrapartum) by multiple births, birth weight, and maternal age in two time periods.
Methods
All births in Latvia (including stillbirths) are compulsorily reported to the registry, and notification is made by standardized medical record forms used by all maternity units across the country. Late foetal deaths were defined as stillbirths occurring after 28 completed weeks of gestation and weighing at least 500 g. In total, the data on 1,340 LFD cases were analysed from 2001 to 2014, divided into 2 periods of 7 years each (2001-2007 and 2008-2014).
Descriptive statistics for all of the continuous variables (maternal age, foetal birth weight, and gestational week) are reported as medians, indicating the 25th and 75th percentiles. Categorical data are reported as percentages and 95% confidence intervals (CI). The categorical variables were compared using chi-square tests. P values < 0.05 were considered statistically significant.
The study described and compared maternal and antenatal care factors (including complete care, lack of antenatal care, and delayed antenatal care, i.e., a first visit after the 12th gestational week (GW)) and certain complications over the 2 time periods. Antenatal care quality was classified into three groups: without care (LFD cases in which the mother was not registered for antenatal care and therefore did not receive any antenatal checks), complete care (first antenatal visit by the 12th week of pregnancy and a total of 7 obligatory antenatal visits with all necessary antenatal checks and tests as per guidelines, e.g. blood and urine tests, ultrasound screening, and genetic tests), and incomplete care (one or more conditions missing, e.g., a late first antenatal visit or fewer antenatal visits or checks).
Late foetal death rates were calculated per 1000 total births in each time period. Time trends were analysed by calculating rate ratios (RR) with 95% confidence intervals (CI), and rates in 2001-2007 were compared with those in 2008-2014.
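As an aside, the rate and rate-ratio calculation described here is simple to reproduce; the sketch below uses invented case and birth counts (not the registry figures) and the usual log-normal approximation for the confidence interval of a rate ratio:

    import math

    def rate_per_1000(cases, births):
        """Late foetal death rate per 1000 total births."""
        return 1000.0 * cases / births

    def rate_ratio_ci(cases_a, births_a, cases_b, births_b, z=1.96):
        """Rate ratio of period B versus period A with a log-normal 95% CI."""
        rr = (cases_b / births_b) / (cases_a / births_a)
        se_log = math.sqrt(1.0 / cases_a + 1.0 / cases_b)
        return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

    # Invented counts for illustration only.
    print(rate_per_1000(700, 145000), rate_per_1000(640, 140000))
    print(rate_ratio_ci(700, 145000, 640, 140000))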
The study was conducted with the approval of the Ethics Committee of the University of Latvia.
Results
A total of 74% of all stillbirths from 2001 to 2014 were late foetal deaths (n = 1,340). The median maternal age in the surveyed population was 28 years (23-33), the median birth weight was 2380 g (1620-3100), and the median gestational week was 36 (32-39). There were no differences in medians between the time periods. Preterm births accounted for 55% (n = 732). More than half were antepartum stillbirths, and their proportion increased (p < 0.001) between the 2 time periods. Intrapartum stillbirths made up quite a large proportion, although a decrease of 6.1 percentage points (p < 0.001) was observed over time (from 22.8% to 16.7%). There were no changes detected in LFD by gestational age and birth weight (Table 1).
Smoking during pregnancy was observed in a total of 20.3% (95% CI 18.2-22.5) of cases, and a slight decrease of 3.8 percentage points in smoking was observed over time; however, this difference was not statistically significant (Table 1).
Total numbers of births, live births, and LFD rates at the different study time points are given in Table 2. On average, there were approximately 20,000 births per year. The overall late foetal death rate showed a slight but statistically significant reduction (p < 0.001) of 18% between 2001-2007 and 2008-2014. A decrease was also observed in intrapartum LFD (RR 0.6/1000 births). There were no major differences in the late foetal death rate by maternal age during the time periods. A more substantial reduction was observed in the age group ≥35 years (p < 0.001).
Discussion
Nearly 2.6 million stillbirths occur globally each year, most of which are thought to be preventable. The majority of these deaths occur in developing countries. About half of all stillbirths occur in the intrapartum period, representing the greatest time of risk [1].
The results of the present study showed that an average of 80% of stillbirths are antepartum deaths. Our study findings indicate an indirect association between late and inadequate antenatal care and stillbirths. Our previous studies observed that the single largest risk factor for antepartum stillbirth is foetal growth restriction [15]. Preventive strategies need to focus on improving antenatal detection of foetal growth restriction [16,17].
The intrapartum death rate of a country is reflective of the care received by mothers and babies in labour, and a rate higher than 10% indicates problems with obstetric care quality [17]. The number of intrapartum foetal deaths that occur in high-income countries is on average 0.3-0.7/1000 births [3,4]. Our study showed a higher intrapartum LFD rate, although it decreased slightly to 0.7/1000 in the second time period. Intrapartum stillbirths are largely preventable with quality intrapartum care, including prompt recognition and management of intrapartum complications [17]. Antenatal care therefore also plays a vital role in the management of a woman's health during pregnancy, and women who have not been registered for antenatal care are at an increased risk of intrapartum stillbirth as well.
This study found that a very high proportion of LFD was related to a late first antenatal visit or incomplete care. Nevertheless, a positive decreasing tendency for late antenatal visits has been observed over the study period, from 2001 to 2014. A high proportion of LFD cases lacked antenatal care altogether, on average 15%. According to statistical data on the general population in 2016, 0.7% of live births occurred without antenatal care, but the rate of stillbirths that occurred without prenatal care was 8 times higher, at 6.2% [18]. We have limited information in the Medical Birth Register about maternal smoking habits, but the study data indicate that the proportion of maternal smoking related to LFD was on average 20%. Smoking during pregnancy showed a slight but nonsignificant decrease, from 22% to 18%. Statistical data in Latvia show that maternal smoking was associated with 7.6% of live births and 9.3% of stillbirths in 2016 [18]. Indisputably, prenatal care plays an important role in the monitoring and control of both sociodemographic and lifestyle factors, which may contribute to poor pregnancy outcomes including stillbirths [3,11,13,19]. As shown in the literature, the number of neonatal and infant deaths declined more rapidly than the number of stillbirths [1][2][3][20]. Foetal and neonatal mortality rates are highly sensitive to the inclusion criteria for threshold gestational age and birth weight, especially in comparisons with other countries [7,9]. However, it is no less important to obtain national data and trend analyses within the country, because perinatal, foetal, and neonatal mortality statistics also reflect the development of the health care system. Epidemiological studies about stillbirths and late foetal death in our country are limited. For this reason, our research aim was to evaluate trends in late foetal death (including the antepartum and intrapartum periods) by multiple births, birth weight, and maternal age in two time periods, to better understand this issue and obtain more population-based data on it.
During the study period of 2001-2014, the overall late foetal death rate declined by 18%. Similar findings were obtained in the PERISTAT data analysis; between 2004 and 2010, stillbirths declined by 17%, with a range from 1% to 39% by country [6]. The perinatal health monitoring system shows that the foetal mortality rates at or after 28 weeks of gestation ranged from lows under 2.0 per 1000 live births and stillbirths in the Czech Republic and Iceland to 4.0 or more per 1000 live births in countries such as France and Latvia [5].
A survey using the PERISTAT data indicated that stillbirth rates in European countries declined in all gestational age subgroups. Declines were lower for stillbirths at 28-31 weeks (12%) than at 32-36 weeks (19%) and 37 weeks and over (18%) [6]. In our study, there were no changes in LFD within gestational age groups by time period. A high proportion of LFD occurred at term (45%). These results underscore the importance of a focus on improving outcomes across the gestational age spectrum.
The study results show that LFD rates were higher in multiple births and in the maternal age group ≥35 years, although a more rapid decrease of 15% was observed in that age group between the 2 time periods. Other studies that analysed more risk factors also indicated that LFD rates were increased in women who were 35 years or older [3,8,12,21].
In recent years, the health of mothers and children in Latvia has been receiving increasing attention; thus, different solutions for improving the situation have been closely evaluated. Maternal and child health improvement and the reduction of mortality rates are also two of the objectives stated in the "Public Health Strategy for 2011-2017" [22] and the project document "Maternal and Child Health Improvement Plan 2018-2020" [23], developed by the Ministry of Health. The Action Plan also foresees changes in the legislative documents on screening policies, and improvements are being made in the implementation of perinatal audits in clinical practice and at the national level [23]. Quality of care includes the judgement to determine which women are at risk and require interventions. However, in addition to the quality of obstetric care, the timeliness of providing obstetric care is critical, especially to save the foetus. Tools such as perinatal audits have been shown to improve the quality of facility care and to reduce stillbirths [24,25]. A substantial proportion of intrapartum stillbirths (higher than 10%) is preventable with quality intrapartum care, and emergency obstetric care can make the greatest impact on stillbirth rates [3,17]. The study data on the intrapartum stillbirth rate highlight problems with the accessibility and quality of the health care system in the field of antenatal and obstetric care. The need for improvement in the Latvian healthcare system has also been documented by international organizations (the European Commission and the World Bank). For instance, there is a need to consider developing clinical guidelines and pathways based on clear criteria and standardized methods, to improve the quality of service provision and the coordination of services among healthcare providers, and to develop the emerging legislation and regulatory frameworks [26].
Further work should be done to analyse and audit intrapartum death cases to identify areas of obstetric care for improvement. In 2017, improvements were made in obstetric care, for instance, a defined procedure for the identification of high-risk patients and risk management, an action plan for the management of care in cases of common childbirth complications, and obligatory maternal and perinatal audits in medical institutions [27].
The main strength of the study is that the data were population-based. This kind of epidemiological data is essential for health care planning and for determining temporal trends. A limitation is the lack of a comparison group of live births, which could be useful to determine other risk factors for LFD. Future research must focus on the causes of stillbirth. The results of this study may deserve attention for policy implementation regarding strategies to improve antenatal and obstetric labour and delivery care for women, in order to substantially reduce the number of stillbirths and strengthen perinatal audits at the national level.
Conclusions
The overall LFD rate showed a slight but statistically significant reduction (p < 0.001) over the study periods (2001-2007 and 2008-2014). Intrapartum LFD also decreased slightly (RR 0.6/1000 births). Substantial intrapartum stillbirth rates indicate problems with the quality of intrapartum care and emergency obstetric care. Further research is needed to evaluate the strategies necessary to substantially reduce the number of stillbirths in the country and to provide a detailed analysis of LFD causes. Improvements in female literacy, health education, the identification of high-risk pregnancies, and periodic audits of all stillbirths can help reduce stillbirths.
Data Availability
The data used to support the findings of this study were provided by The Centre for Disease Prevention and Control (CDPC) of Latvia under license and so cannot be made freely available. Access to these data will be considered by the author upon request with permission.
Additional Points
This work was supported by the National Research Programme Biomedicine for Public Health (BIOMEDICINE). Research on acute and chronic diseases in a wide age range of children helps to develop diagnostic and therapeutic algorithms to reduce mortality, prolong survival, and improve quality of life.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Causes and Three-year Incidence of Irreversible Visual Impairment in Jing-An District, Shanghai, China from 2010-2015
Background The registry system can be used to observe the distribution trend of diseases and analyze the related data to provide useful information in a way that enables the government to take appropriate interventional measures. The purpose of this study was to determine the causes and three-year incidence of newly registered disabled patients who were blind or had low vision in Jing-An District, Shanghai, China from 2010 to 2015. Methods Data from the registration system of visual disability in Jing-An District, Shanghai from 2010 to 2015 were collected and analyzed. In this registry, only persons with permanent visual impairment (VI) were identified as certified visually impaired persons. The main causes of visual disability were obtained, the three-year incidence of visual disability was calculated, and the relationships between blindness or low vision and age, as well as those between blindness or low vision and gender, were analyzed. Results Six hundred and forty-six newly certified people with VI were registered, including 206 blind patients and 440 low vision patients. The major causes of blindness were myopia macular degeneration (MMD, 23.30%), glaucoma (20.39%), and age-related macular degeneration (AMD, 17.96%). The three leading causes of low vision were MMD (58.86%), AMD (16.36%), and diabetic retinopathy (DR, 7.27%). DR (16.0%) was the leading cause of blindness and the second leading cause of VI in patients aged 30–59 yrs. from 2010 to 2015. The three-year incidences of blindness were 32.74/100000 in 2010–2012 and 36.51/100000 in 2013–2015 (P = 0.43), remaining stable throughout this time period. However, the three-year incidence of low vision was 64.51/100000 in 2010–2012 and 83.58/100000 in 2013–2015 (P = 0.007), showing a significant increase due to the rise in low vision caused by MMD and DR (P = 0.003 and P = 0.01, respectively). Conclusions MMD, glaucoma, and AMD were the main causes of blindness, while DR was becoming a major cause of VI, especially in working-age people of Jing-An District, Shanghai, China.
Background
Vision impairments are major global public health problems. In 2002, the World Health Organization (WHO) estimated that 1% of the total global burden of disease, measured as disability-adjusted life years (DALY), was attributable to vision loss. The Global Burden of Disease study in 2010 also demonstrated that blindness and visual impairment (VI) were significant public health burdens [1][2][3]. Estimating these trends is important for several reasons, including for understanding the unmet need and the effects of interventions. Registry data can provide valuable information on the characteristics of the relevant population, as well as on details of the services provided. In many cases, these data are annually evaluated and reported, providing an updated source of information on the trends of the incidences and causes of the conditions in question [4,5]. Jing-An District is one of the nine downtown districts of Shanghai, and it consists of 5 blocks and approximately 300,000 residents. Any citizen of this area with VI can apply for identification to the local Disabled Person's Federation (DPF), and the Jing-An District center hospital is the only designated work unit for the identification of visual disability in this region. Although registration with the DPF is entirely voluntary, it entitles the registered individuals to corresponding practical and monetary benefits. The registration system was described in our previous reports [6,7]. In this study, we updated the most recent 6 years of data from the registry system with specific objectives, which include the following: (1) evaluating the causal trend of newly registered blind and low vision patients, (2) calculating and comparing the three-year incidences of VI, and (3) suggesting priorities for research and intervention strategies for VI.
Methods
The registry data for visual disability in the Jing-An District, Shanghai from 2010 to 2015 were collected and analyzed. The study was performed in accordance with the principles of the Declaration of Helsinki for research involving human subjects. The study met all of the standards for ethical approval in China, and the protocol was approved by the municipal government of Shanghai, China.
The criteria for registering blindness and low vision in this study were in accordance with the WHO categories of VI. Blindness was defined as a best spectacle corrected visual acuity (BSCVA) of less than 20/400 (3/60) in the better eye or a corresponding visual field loss to less than 10 degrees in the better eye with the best possible correction. Low vision was defined as a BSCVA of less than 20/60 (6/18) but more than 20/400(3/60) in the better eye. VI was defined as a BSCVA of less than 20/60 (6/18) in the better eye.
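A minimal sketch of how these thresholds translate into categories is given below; it assumes the better-eye BSCVA is expressed as a decimal fraction (20/60 is roughly 0.33 and 20/400 is 0.05), omits the visual-field criterion for blindness, and uses an illustrative function name that is not part of the registry protocol:

    def who_category(bscva_better_eye):
        """WHO visual impairment category from the better-eye BSCVA (decimal notation).
        The visual-field criterion (< 10 degrees) for blindness is omitted here."""
        if bscva_better_eye < 20.0 / 400.0:    # below 20/400 (3/60)
            return "blindness"
        if bscva_better_eye < 20.0 / 60.0:     # below 20/60 (6/18) but at least 20/400
            return "low vision"
        return "not visually impaired"

    for va in (0.02, 0.10, 0.50):
        print(va, who_category(va))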
All of the registered cases of visual disability needed to undergo an assessment of VI at the Jing-An District center hospital. A comprehensive ocular examination was performed on every applicant, which included uncorrected visual acuity (UCVA), BSCVA, refraction, optical coherence tomography, and color fundus photography. BSCVA was tested using a phoropter (C-200, WEIZHEN Optic Tech. Co., Ltd., China) with a tumbling-E letter chart projector at a distance of 2.5 m. Visual field tests were performed using a kinetic arc perimeter (YZ22, 66 Vision Tech. Co., Ltd., China) when the visual disability was due to glaucoma, retinitis pigmentosa (RP), or other optic nerve diseases.
The causes of blindness were classified according to the International Classification of Diseases, 10th edition [8]. The diagnoses of myopia macular degeneration (MMD), age-related macular degeneration (AMD), glaucoma, RP, corneal opacity, diabetic retinopathy (DR) and other ocular diseases have been described previously [6,7]. MMD was only considered in subjects with a refractive error exceeding −6.0 diopters in either eye and with one or more of the following ophthalmologic findings: tessellated fundus with yellowish white diffuse or grayish white patchy chorioretinal atrophy, macular hemorrhage, or posterior staphyloma. Late AMD was defined by the appearance of either exudative macular degeneration or pure geographic atrophy. Glaucoma was defined according to the International Society for Geographical and Epidemiological Ophthalmology classifications. The diagnosis of RP was based on night blindness, progressive loss of the peripheral visual field, and decreased visual acuity with age, as well as on typical signs observed under fundus examination. The diagnoses of DR, corneal opacity, and other diseases as causes of blindness followed the ophthalmology practice guidelines edited by the China Academy of Ophthalmology.
Using the best judgment, the ophthalmologist attempted to identify the disorder causing the greatest limitation of vision as the cause of blindness or low vision. The causes of blindness or low vision in the better eye were recorded. When two or more causes appeared to have contributed equally to VI for one eye, the primary cause was assigned as the cause. Some treatable eye disorders, such as cataract and refractive error, were not considered as causes of VI in the study. If cataract was regarded as the main cause of VI, the patient was referred for surgery and reassessed at least 2 months postoperatively if their visual function was restored unsatisfactorily.
The statistical analysis was performed using SPSS software version 13 (SPSS, Inc., Chicago, IL). The age-adjusted three-year incidences and the respective 95% confidence intervals (CIs) of blindness and low vision were calculated from the number of cases of VI and the population of Jing-An District in 2010-2012 and 2013-2015, using Microsoft Excel. Age was divided into three groups: 1-29 yrs., 30-59 yrs., and 60 yrs. and older. Binary logistic regression was used to analyse the factors related to the occurrence of certified blindness and low vision. Odds ratios (OR) and 95% CIs were determined to describe the influence of age and gender on the incidence of VI. A chi-squared test was used to analyse differences between the genders and to test the differences in the three-year incidence of VI caused by various pathologies between 2010-2012 and 2013-2015. A P value < 0.05 was considered statistically significant.
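For illustration, an odds ratio with its 95% CI can be computed from a 2x2 table as sketched below; the Woolf log-normal interval is used as a stand-in for the regression-based estimate, and the counts are invented rather than taken from the registry:

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio with a Woolf (log-normal) 95% CI for the 2x2 table
        [[a, b], [c, d]] = [[exposed cases, exposed non-cases],
                            [unexposed cases, unexposed non-cases]]."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
        return or_, or_ * math.exp(-z * se_log), or_ * math.exp(z * se_log)

    # Invented counts: female vs male certified low-vision cases against the remaining population.
    print(odds_ratio_ci(250, 150000, 190, 148000))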
Causes of VI
A total of 646 newly certified people with VI were registered as blind (206 patients) or as having low vision (440 patients) among the approximately 300,000 residents of Jing-An District, Shanghai, China from 2010 to 2015. The main causes of blindness were MMD (23.30%), glaucoma (20.39%), and AMD (17.96%). Of the AMD cases leading to blindness, 73.0% were wet (neovascular) and 27.0% were dry; in contrast, only 15.2% of the AMD cases leading to low vision were wet, while 84.8% were dry.
Principal causes of VI at different ages
The distribution of the principal causes of VI between the 30-59 yrs. and ≥60 yrs. age groups in Jing-An District from 2010 to 2015 is summarized in Table 1.
Two newly registered cases of patients younger than 29 years are not shown in Table 1.

Three-year incidence of VI and age-adjusted three-year incidence of VI

The three-year incidences of blindness and low vision are presented in Table 2. The increase in the three-year incidence of low vision in 2013-2015 was significant compared with that in 2010-2012 (from 64.51/100000 to 83.58/100000, P = 0.007), which was primarily due to the increase in the number of patients with MMD and DR (P = 0.003 and P = 0.01, respectively). Table 3 summarizes the age-adjusted incidences of blindness and low vision. The age-adjusted three-year incidence of low vision in 2013-2015 was significantly higher than that in 2010-2012 (81.12/100000 vs. 66.68/100000, P = 0.004). However, there was no difference between the two time periods for any of the age groups (Table 3).
Association of sex and age with VI
The association of sex and age with blindness or low vision was assessed using a logistic regression model, as illustrated in Table 4. No correlation was observed between blindness and gender (OR = 1.26, P = 0.09); however, patients with low vision were more often female (OR = 1.29, P = 0.009). The number of visually impaired persons was significantly higher in the ≥ 60 years age group than in the 30-59 years age group (OR = 3.7, P < 0.001 for blindness; OR = 2.64, P < 0.001 for low vision).
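For readers who want to reproduce this kind of analysis, here is a minimal sketch (our own code; the column names and the simulated data are placeholders, not the study data) of fitting a binary logistic regression with statsmodels and reading off ORs with 95% CIs.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical person-level data: outcome (1 = blind), sex (1 = female),
# age group (1 = >= 60 yrs, 0 = 30-59 yrs)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "blind": rng.integers(0, 2, 500),
    "female": rng.integers(0, 2, 500),
    "age60plus": rng.integers(0, 2, 500),
})

X = sm.add_constant(df[["female", "age60plus"]])
fit = sm.Logit(df["blind"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)   # OR per covariate
ci = np.exp(fit.conf_int())        # 95% CI on the OR scale
print(odds_ratios, ci, sep="\n")
```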
Discussion
The registry system, as one method of disease surveillance, is used to continuously collect disease data, observe distribution trends and analyze the relevant information, providing useful guidance for the government to take appropriate interventional measures. Registry data are an important resource for epidemiological studies, especially of rare diseases. The problems related to aging have become increasingly prominent in Shanghai. Due to urban renewal, a large number of residents have moved out of Jing-An District in recent years, leading to a decline in the population; Table 3 shows the trend of the population change in Jing-An District. It is therefore necessary to collect registry data to evaluate the trends in the causes of blindness and low vision.

The present study clearly showed that DR is becoming a major cause of VI, especially in the working-age population of Jing-An District, Shanghai, China. As far as we know, this study is the first to assess this topic in China. We observed that 11.2% of blindness was caused by DR and that DR was the fourth leading cause of blindness in 2013-2015. We also observed that 25.9% of blindness was [9]. Diabetes is also a major public health problem in China [10]. A recent national survey in 2010 reported that the overall prevalence of diabetes in China was estimated to be 11.6% [11] and that the prevalence of diabetes in Shanghai was 15.91% [12]. A recent meta-analysis including 19 studies performed in mainland China indicated that the prevalence of DR was 23% in the diabetic population [13]. The proportion of blindness caused by DR ranged from 3% to 7% in Southeast Asia and the Western Pacific region and was as high as 15-17% in developed regions, such as North America and Europe [14]. With the mounting pressure of the high prevalence of diabetes in China, the incidence of blindness attributable to DR may increase markedly in the near future. Our previous study showed that the proportion of blindness caused by DR was only 7.6-8.0% and that DR was the fifth or sixth leading cause of blindness in 2001-2009 [6]. However, the proportion of blindness caused by DR was as high as 11.2% in 2013-2015 in the present study. We therefore speculate that DR has become, or will become, the major cause of VI in the working-age population of China in the near future.

It was also observed in the present study that 23.30% of blindness cases and 58.89% of low vision cases were due to MMD, which was the leading cause of VI. This result is consistent with our previous reports [6,7], but differs from those of other studies in China. Many factors can explain this difference, such as levels of socioeconomic development, study methods, and the standards used for visual impairment. When treatable diseases, such as cataract, refractive error and posterior capsule opacity, were excluded as causes of visual impairment in those studies, high myopia emerged as a major cause of blindness and low vision. Tang et al. reported that, in Taizhou, China, 51.1% of low vision cases and 33.4% of blindness cases were caused by MMD if cataract was excluded as a cause of VI [15]. Wang et al. reported that MMD was the leading cause of permanent visual impairment (17.6%) in southern China [16]. Hu et al. also reported that MMD was the leading cause of permanent visual impairment in an aging Chinese metropolitan population (45.9% of low vision cases and 42.0% of blindness cases) [17].
Many studies also showed that high myopia was emerging as the main cause of blindness in some Asian countries, especially in China [6,7,[18][19][20]. High myopia increases the risk for pathologic ocular changes, such as cataract, glaucoma, retinal detachment, and MMD, all of which can cause irreversible vision loss. MMD has been observed to be the most frequent cause of irreversible blindness in China and some other Asian countries.
The present study showed that MMD, glaucoma and AMD were the three leading causes of blindness, and that DR was a major cause of VI in patients aged 30-59 yrs. Prevention and treatment of these diseases should be among the key tasks in public health. For most people with AMD, however, vision loss can neither be prevented nor adequately reversed, and there is a clear need for further research on the causes of and risk factors for AMD. Most patients with glaucoma can maintain sufficient visual acuity if effective treatment is given at an early stage; we therefore suggest that local health bureaus conduct glaucoma screening for early diagnosis and implement corresponding chronic disease management strategies in community health centers [21].

MMD is the chief cause of both blindness and low vision, so controlling the development of high myopia is of pressing importance. Various measures are needed to control myopia, including pharmacological, environmental, and optical interventions, and studies on the mechanisms of myopia progression are essential for decreasing the visual impairment burden among students. Fortunately, building refractive records for all students has been a focus of school public health officials in Shanghai since 2013 [22], and we hope that this program can help control myopia. DR is also a major cause of VI, and early detection of DR is the key to avoiding blindness. In 2017, screening for DR and remote medical consultation using fundus image data transmission became part of the public health program in Shanghai. We hope that this project will lead to early detection and active intervention of DR, thus reducing the incidence of VI attributable to DR.

It should be noted that our analysis was based on the definition of legal blindness, which rests principally on BSCVA and visual field data. Our findings did not take into account the incidence of impaired vision and blindness resulting from uncorrected refractive error and cataract, which are potentially important causes of VI that can be reversed by treatment. An inherent limitation of this study concerns the reliability of registry reports, which use aggregated information with rigid reporting categories, restricting the capacity for in-depth analysis. In addition, better reporting or methodological differences, such as an increase in screening awareness or in the number of screeners, could account for part of the variation in the incidence of VI, which therefore may not fully reflect the actual situation. Certainly, a small group of visually impaired individuals remains unregistered in Jing-An District [7].
Conclusions
In summary, this study identified the three-year incidence and causes of VI in Jing-An District, Shanghai, China from 2010 to 2015. It was observed that MMD was the leading cause of VI. In addition, DR is becoming a major cause of VI, especially in working-age people of Jing-An District, Shanghai, China. Presently, more attention should be paid to diabetics and people with high myopia.
Post-treatment three-dimensional voxel-based dosimetry after Yttrium-90 resin microsphere radioembolization in HCC
Background Post-therapy [90Y] PET/CT-based dosimetry is currently recommended to validate treatment planning, as [99mTc]MAA SPECT/CT is often a poor predictor of the subsequent actual [90Y] absorbed dose. Treatment planning software has become available that allows 3D voxel dosimetry, offering tumour-absorbed dose distributions and dose-volume histograms (DVHs). We aim to assess dose-response effects in post-therapy [90Y] PET/CT dosimetry in SIRT-treated HCC patients for predicting overall and progression-free survival (OS and PFS) and four-month follow-up tumour response (mRECIST). The tumour-absorbed dose, the mean percentage of the tumour volume (V) receiving ≥ 100, 150, 200, or 250 Gy, and the mean minimum absorbed dose (D) delivered to 30%, 50%, 70%, and 90% of the tumour volume were calculated from DVHs. Depending on the mean tumour-absorbed dose, treated lesions were assigned to a < 120 Gy or ≥ 120 Gy group. Results Thirty patients received 36 SIRT treatments, totalling 43 lesions. Median tumour-absorbed dose was significantly different between the ≥ 120 Gy (n = 28, 207 Gy, IQR 154-311 Gy) and < 120 Gy groups (n = 15, 62 Gy, IQR 49-97 Gy, p < 0.01). Disease control (DC) was found more frequently in the ≥ 120 Gy group (79%) than in the < 120 Gy group (53%). The optimal mean tumour-absorbed dose cut-off predicting DC was 131 Gy. Tumour control probability was 54% (95% CI 52-54%) for a mean tumour-absorbed dose of 120 Gy and 90% (95% CI 87-92%) for 284 Gy. Only D30 was significantly different between DC and progressive disease (p = 0.04). For the ≥ 120 Gy group, median OS and PFS were longer (median OS 33 months [range 8-33 months], median PFS 23 months [range 4-33 months]) than for the < 120 Gy group (median OS 17 months [range 5-33 months], median PFS 13 months [range 1-33 months]) (p < 0.01 and p = 0.03, respectively). Conclusions A higher 3D voxel-based tumour-absorbed dose in patients with HCC is associated with four-month DC and longer OS and PFS. DVHs in [90Y] SIRT could play a role in evaluative dosimetry.
Background
Selective internal radiation therapy (SIRT) has been established as a form of treatment for non-operable and locally advanced hepatocellular carcinoma (HCC) in the liver [1,2]. Both glass (TheraSphere ® , Boston Scientific Corporation, Marlborough, MA, USA) and resin microspheres with yttrium-90 (SIR-Sphere ® , Sirtex Medical Limited Australia, Sydney, Australia) are commonly used.
During treatment planning, a diagnostic liver angiography is performed with intra-arterial injection of gamma-emitting 99mTc-labelled macro-albumin aggregates ([99mTc]MAA) at the proposed arterial treatment position. This is followed by perfusion scintigraphy (SPECT/CT) to determine potential hepatopulmonary shunting and extrahepatic distribution.
Pre-operative dosimetry is used to personalize the [90Y] dosage and predict whether there will be sufficient accumulation of beta-emitting 90Y-microspheres in the target tumours. For SIRT, two options are available: dosimetry based on [99mTc]MAA SPECT/CT prior to treatment, or direct [90Y] PET/CT quantification after treatment. Two dosimetry methods are recommended to calculate the appropriate injected 90Y-activity for resin microspheres: the body surface area (BSA) method and the partition model [3]. Both methods assume homogeneity of tissue or resin distribution, limiting their objectivity. Recently, treatment planning software became available allowing 3D dosimetry at voxel level. Voxel-based dosimetry allows 3D visualization of tumour-absorbed dose distributions and evaluation of the degree of heterogeneity through dose-volume histograms (DVHs) [4,5].
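The partition-model arithmetic can be sketched as follows (our own hedged formulation, not the vendor's software): it uses the standard relation for Y-90, D[Gy] ≈ 49.67 × A[GBq] / m[kg], and splits the injected activity between tumour, normal liver and lungs via the tumour-to-normal uptake ratio (T:N) and the lung shunt fraction (LSF). All numeric inputs in the example are placeholders.

```python
def partition_model_activity(d_tumour_gy: float, m_tumour_kg: float,
                             m_liver_kg: float, tn_ratio: float,
                             lsf: float) -> float:
    """Total Y-90 activity (GBq) delivering d_tumour_gy to the tumour."""
    a_tumour = d_tumour_gy * m_tumour_kg / 49.67          # GBq deposited in tumour
    # Specific uptake scaled by T:N: A_liver/m_liver = (A_tumour/m_tumour) / TN
    a_liver = (a_tumour / m_tumour_kg) / tn_ratio * m_liver_kg
    a_hepatic = a_tumour + a_liver                        # activity in the liver
    return a_hepatic / (1.0 - lsf)                        # add the lung shunt

# Placeholder inputs: 120 Gy to a 0.15 kg tumour, 1.5 kg liver, T:N = 5, LSF = 5%
print(f"{partition_model_activity(120, 0.15, 1.5, 5.0, 0.05):.2f} GBq")
```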
Tumour response and clinical outcomes of HCC following SIRT vary considerably, ranging from no response in certain patients to excellent results in others [6]. Large phase II trials, however, found no overall survival benefit [7,8]. Potentially, this could be caused by the investigated dose-response relationships. Dose-response relationships have been demonstrated for resin microspheres [6,[9][10][11][12], resulting in tumour dose-response thresholds between 100 and 120 Gy [13], which is the current recommendation for resin microspheres [4].
Post-therapy [90Y] PET/CT-based dosimetry can validate treatment delivery, as [99mTc]MAA SPECT/CT is often a poor predictor of the subsequent actual 90Y absorbed dose [14]. SIRT treatment verification and dosimetry with [90Y] PET/CT are currently recommended [4]. We hypothesize that [90Y] PET/CT-based dosimetry predicts better treatment responses in lesions receiving more than 120 Gy tumour-absorbed dose compared with lesions receiving less than 120 Gy. Therefore, the aim of this study is to assess post-therapy dosimetry in SIRT-treated HCC patients with lesions receiving more or less than 120 Gy tumour-absorbed dose, against the responses observed with mRECIST.
Methods
Patients with unresectable HCC treated with [90Y] resin-microsphere SIRT in our institution from May 2018 to November 2020 were considered for this retrospective study. Inclusion criteria were a contrast-enhanced CT or MRI performed within 12 weeks prior to SIRT, a targeted lesion long-axis diameter of at least 2 cm, and a follow-up MRI at four months. Only patients receiving Sirtex 90Y resin microspheres were included, as resin and glass microspheres differ in general kinetics and dose calculation. Individual informed consent was not required, because studies involving a retrospective review, collection, and analysis of patient records do not fall under the scope of the Dutch Act on Medical Scientific Research involving Human Beings (WMO). For privacy, data were stored and analysed anonymously. Patient characteristics, such as age, sex, comorbidities, other risk factors and outcomes, were extracted from the electronic medical records. Overall survival (OS) and progression-free survival (PFS) were noted, as were relevant follow-up therapy and Barcelona Clinic Liver Cancer (BCLC) and Child-Pugh (CP) staging. Evaluation of treatment response to SIRT was performed according to the modified Response Evaluation Criteria in Solid Tumours (mRECIST) on the 4-month MRI [15,16].
Planning angiography and [99mTc]MAA SPECT/CT
All patients underwent angiography of the upper abdominal vessels to define the vascular anatomy and to assess optimal catheter-tip placement [4]. Following angiography, 150 MBq (4 mCi) of [99mTc]MAA (Pulmocis, Curium Pharma, Petten, the Netherlands) was administered. One hour after injection of [99mTc]MAA, planar lung and liver scans and low-dose, non-contrast-enhanced SPECT/CT acquisitions were performed using a hybrid scanner combining a dual-head gamma camera and a 2-slice SPECT/CT scanner (Symbia T2, Siemens Healthcare, Germany). Images were then reconstructed on a Siemens workstation (SyngoVia VB30, Siemens Healthcare, Germany). The amount of 90Y-microsphere activity needed during the treatment phase was determined by the partition model, provided and detailed by the manufacturer (SIR-Sphere®, Sirtex Medical Limited Australia, Sydney, Australia) [4,15].
SIRT and [90Y] PET/CT
SIRT was performed within two weeks after planning angiography. The planned activity of 90Y-loaded microspheres was injected through a microcatheter at the same position as determined during planning angiography. Within one day after SIRT, patients underwent a [90Y] PET/CT scan (Biograph mCT PET/CT, Siemens Healthcare, Erlangen, Germany) with a maximum of two bed positions and a 15-min acquisition per bed position, for treatment verification and post-treatment dosimetry. PET data were reconstructed with Siemens Ultra HD (TrueX and time of flight), using three iterations and 21 subsets with a 400 matrix size and a 9-mm Gaussian (isotropic) filter. Attenuation and scatter correction of the PET emission data were achieved by a low-dose CT scan at 120 kV and 35 mAs.
Dosimetry
For pre-treatment planning of the injected 90Y-activity, liver and tumour contours were manually delineated on CT images acquired during planning angiography, to be used in the partition model. Pre-treatment contrast-enhanced CT (Siemens SOMATOM Force CT) or gadolinium-enhanced fat-saturated T1-weighted MRI (Siemens Magnetom Skyra MRI) was used for 3D delineation of liver and tumour contours for post-treatment dosimetry. Post-treatment dosimetry contouring was performed in MIM SurePlan (v7.0.4, MIM software, Cleveland, USA). In all three planes, on every third slice, the researcher manually delineated vital liver tissue and tumours; the software then interpolated all contours to create a 3D representation. These contours were transferred to the post-therapy PET/CT by a MIM SurePlan clinical workflow ("90Y Dose Calculation") using deformable registration algorithms, and in some cases were manually translated or rotated to achieve an optimal visual fit. The 90Y dose and DVH for each tumour were calculated with the local deposition method (LDM), as previously described [16]. The mean tumour-absorbed dose (in Gy) was extracted from the DVH, where the area under the DVH (AUDVH) equals the tumour-absorbed dose [17]. V100, V150, V200, and V250 were calculated from the DVH, representing the percentage of the tumour volume receiving at least the indicated dose (in Gy). D30, D50, D70, and D90 were computed, showing the minimum absorbed dose delivered to those percentages of the tumour volume.
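The Vx and Dy metrics defined above are straightforward to compute from per-voxel absorbed doses. The following sketch (our own code, not MIM SurePlan; the simulated doses are placeholders) illustrates the definitions.

```python
import numpy as np

def dvh_metrics(voxel_doses_gy: np.ndarray) -> dict:
    """Vx and Dy metrics from the per-voxel doses inside a tumour contour."""
    d = np.asarray(voxel_doses_gy, dtype=float)
    out = {"mean_dose_gy": d.mean()}
    # Vx: percentage of tumour volume receiving at least x Gy
    for x in (100, 150, 200, 250):
        out[f"V{x}"] = 100.0 * np.mean(d >= x)
    # Dy: minimum dose received by the hottest y% of the tumour volume
    for y in (30, 50, 70, 90):
        out[f"D{y}"] = np.percentile(d, 100 - y)
    return out

# Placeholder dose distribution for demonstration only
doses = np.random.default_rng(1).gamma(shape=2.0, scale=90.0, size=10_000)
print(dvh_metrics(doses))
```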
Excluding small tumours reduced the chance of partial volume effects in the dosimetry data in relation to the PET/CT, as a sphere diameter of at least 2 cm with no filtering should give a better reading of activity according to the literature [18]. Depending on the mean tumour-absorbed dose, treated lesions were assigned to a < 120 Gy or ≥ 120 Gy group; patients who had both a < 120 Gy and a ≥ 120 Gy lesion were added to both groups. For computing OS, the time between the first (or only) treatment and death was calculated, and patients were not counted twice in group comparisons. Complete response (CR), partial response (PR), and stable disease (SD) mRECIST results were combined into a disease control (DC) group to be compared with progressive disease (PD).
Statistics
All descriptive statistics are given as numbers with percentages or as medians with interquartile ranges, unless stated otherwise. Comparisons of tumour-absorbed dose between DC and PD were performed with an unpaired t test with Welch's correction. Comparisons of mRECIST with mean tumour dose and D- and V-values were made by Kruskal-Wallis tests (with Dunn's multiple comparisons test) or two-way ANOVA. A nonlinear second-order polynomial (quadratic) least-squares fit was performed on the DVHs of the DC and PD groups. Receiver operating characteristic (ROC) analysis was performed to identify the optimal cut-off (defined by the Youden index) of tumour-absorbed dose to predict DC. By averaging the probability of DC across all patients, binned in intervals of 20 Gy, the tumour control probability (TCP) was computed and related to tumour dose using a linear-quadratic model. OS and PFS between-group comparisons were made with Kaplan-Meier analysis and the log-rank (Mantel-Cox) test. Statistical analysis was performed using SPSS version 23.0 software (SPSS Inc., Chicago, IL). p values lower than 0.05 were considered significant.
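Here is a minimal sketch of the cut-off analysis (our own code, with simulated placeholder data, not the study data): a Youden-index optimal threshold from an ROC curve, plus a TCP curve. Note that the TCP here is fitted with a plain logistic model for brevity, whereas the paper uses a linear-quadratic model.

```python
import numpy as np
from sklearn.metrics import roc_curve
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
dose = rng.uniform(30, 350, 43)                      # mean tumour dose (Gy), simulated
dc = (rng.random(43) < 1 / (1 + np.exp(-(dose - 130) / 40))).astype(int)

fpr, tpr, thr = roc_curve(dc, dose)
youden_cutoff = thr[np.argmax(tpr - fpr)]            # cut-off maximizing sens + spec - 1

tcp = LogisticRegression().fit(dose.reshape(-1, 1), dc)
p120, p284 = tcp.predict_proba([[120], [284]])[:, 1]
print(f"cut-off ~{youden_cutoff:.0f} Gy, TCP(120 Gy)={p120:.2f}, TCP(284 Gy)={p284:.2f}")
```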
Results
Thirty patients (26 male, 4 female) with unresectable HCC underwent [90Y] resin-microsphere SIRT and subsequent post-therapy [90Y] PET/CT scanning in our institution between May 2018 and November 2020. A total of 36 treatments were performed, as six patients were treated twice. A total of 104 lesions were found, of which 43 could be included (Fig. 1). Patient characteristics are summarized in Table 1 and did not differ between the two groups. For pre-SIRT liver and tumour contouring, appropriate CT and MR images were available in 11 and 25 treatments, respectively.
Dose-survival analysis
Median follow-up for OS and PFS was 27 months (range 10-40 months). For the ≥ 120 Gy group, median OS and PFS were longer (median OS 33 months [range 8-33 months], median PFS 23 months [range 4-33 months]) than for the < 120 Gy group (median OS 17 months [range 5-33 months], median PFS 13 months [range 1-33 months]) (p < 0.01 and p = 0.03, respectively; Fig. 6a and 6b). Ten patients died following SIRT, after a median of 312 days (IQR 206-317 days); nine of them were in the < 120 Gy group, and one patient with a single ≥ 120 Gy lesion died during follow-up. All deaths in our study occurred due to progression of liver disease.
Discussion
We aimed to examine mRECIST-observed responses in ≥ 120 Gy lesions compared with < 120 Gy lesions by post-therapy dosimetry. We found equal injected 90Y activity between cases receiving ≥ 120 Gy and those receiving less, while mean tumour doses were widespread in both groups. This demonstrates that the planned and actual tumour dose can differ considerably and confirms the need for quantitative dose-response analysis using post-therapy [90Y] PET/CT in the treatment of locally advanced HCC. Patients with lesions receiving ≥ 120 Gy showed longer overall and progression-free survival.
Generally, the mean tumour dose can be used to determine 90Y-SIRT efficacy [19]. As it assumes a uniform dose distribution, much attention has been given to the analysis of DVHs and to the tumour-absorbed dose metrics derived from them. Several studies on post-therapy [90Y] PET/CT in HCC with resin microspheres have been performed [4]. A study with 43 SIRT procedures found that tumour AUDVH was associated with DC, with an optimal cut-off of 61 Gy (76% sensitivity and specificity) [6]. Another study with 73 participants reported 50% TCP at 110-120 Gy [12]. Several case studies validated the feasibility of post-therapy PET with [90Y] SIRT, with one study finding that a tumour-absorbed dose of 287 Gy yielded complete remission after 6 months [20,21]. Another study suggested a relationship between higher [90Y] dose and better tumour response, noting that treatment responders had a mean tumour-absorbed dose of 215 Gy [22].

Table 2. SIRT characteristics. SIRT, selective internal radiation therapy; MBq, megabecquerel; Gy, gray; mRECIST, modified Response Evaluation Criteria in Solid Tumours. † Objective response is the proportion of treatment sessions or lesions with complete or partial response. †† Disease control is the proportion of treatment sessions or lesions with complete or partial response or stable disease.

We found that AUDVH, and thereby tumour-absorbed dose, was associated with DC. DC was optimally predicted by a mean tumour dose of 131 Gy, and 50% TCP was achieved at 110 Gy. The non-conformity of our results with previous literature could be due to the exclusion of lesions smaller than 2 cm and to a third of all included lesions being solitary HCC tumours; both can lead to higher tumour doses. Considering all available literature and our results, a trend between tumour doses above 120 Gy and better tumour response is likely. Currently, no standard exists for [90Y] DVH reporting, and comparisons between studies are difficult as a result of different dose calculation methods, response evaluations, and low numbers of patients [23]. Current international recommendations specify a mean tumour-absorbed dose of 100 to 120 Gy for HCC [4]. In the present study, the mean tumour-absorbed dose achieved with the recommended dosimetry and administration protocols, 215 Gy in the DC group compared with 134 Gy in the PD group, aligned with these suggested thresholds. Individual examination of the included tumours showed a wide variation of tumour doses. It has been proposed that a higher dose can not only target the primary tumour more effectively but can also target often undetected small satellite lesions [19]. In our study, novel post-SIRT lesions were seen in seven out of 13 PD cases. Further examinations of tumour-absorbed dose, volume over time and lesion-based response evaluations are in progress. We found that higher tumour-absorbed doses were well tolerated, as the vital liver dose was not different between groups and only one moderately severe radiation-related complication occurred.
Limitations of our study include its retrospective design and the low number of patients, although patient characteristics were generally uniform. Dose prediction based on [99mTc]MAA SPECT was not within the scope of this study, as its predictive value for the delivered dose is still a subject of debate in the literature [24]. As a result of our lesion-level analysis, several included lesions came from the same patients. No lesions were included that had been targeted by a previous SIRT session, but we cannot rule out second-degree radiation effects due to a lesion being part of the same treated liver hemisphere. Pre-therapy CT/MRI images were retrieved, but later MRI follow-up was sporadic, and only survival characteristics could be accurately determined after four months. By excluding lesions smaller than 2 cm, we aimed to reduce partial volume effects, such as breathing and resolution artefacts, in the PET data. Nevertheless, this led to some cases of mismatch between pre- and post-therapy contours; these contours needed manual alteration to achieve an optimal fit, which might lead to overestimation of the tumour-absorbed dose.
Conclusion
Resin-microsphere SIRT with post-therapy voxel-based mean tumour-absorbed doses above 120 Gy in patients with HCC is associated with four-month DC and longer OS and PFS. DVHs in [90Y] SIRT could play a role in evaluative dosimetry. These results demonstrate the need for further validation of the optimal tumour dose and dose distribution characteristics with post-therapy [90Y] PET/CT dosimetry.
A Reconfigurable Mobile Robots System Based on Parallel Mechanism
Introduction
Reconfigurable robots consist of many modules which are able to change the way they are connected. As a result, these robots have the capability of adopting different configurations to match various tasks and suit complex environments. For mobile robots, reconfiguration is a very powerful ability in tasks which are difficult for a fixed-shape robot and during which robots have to confront unstructured environments (Granosik et al. 2005; Castano et al. 2000), e.g. navigation in rugged terrain. The basic requirement for this kind of robotic system is extraordinary motion capability. In recent years considerable progress has been made in the field of reconfigurable modular robotic systems, which usually comprise three or more rigid segments connected by special joints (Rus, D. and Vona, M. 2000).

One group of reconfigurable robots, featuring interconnected joint modules, realizes locomotion by means of structure transformations performed through the cooperative movements and docking/undocking actions of the modules (Suzuki et al. 2007; Kamimura et al. 2005; Shen et al. 2002; Suzuki et al. 2006; Vassilvitskii et al. 2002). Because the modules in these robots are not able to move independently and the possible structures of the robot are limited, these kinds of robots are not suitable for field tasks. The other kind of reconfigurable robot, composed of independently movable modules, is more suitable for the field environment. The first prototype with powered wheels was designed by Hirose and Morishima in 1990 and consists of several vertical cylindrical segments (Hirose et al. 1990). The robot looks like a train; however, with a weight over 300 kg it is too heavy. Klaassen developed a mobile robot with six active segments and a head for the inspection of sewage pipes (Klaassen et al. 1999); there are twelve wheels on each module to provide the driving force. Mark Yim proposed another reconfigurable robot, PolyBot, which is able to optimize the way its parts are connected to fit the specific task (Yim et al. 2000). PolyBot adapts its shape to become a rolling type for passing over flat terrain, an earthworm type to move in a narrow space, and a spider type to stride over uncertain hilly terrain. The application of powered tracks to field robots enriches their configurations and improves their adaptability to the environment. A serpentine robot from Takayama and Hirose consists of three segments; each segment is driven by a pair of tracks, but all tracks are powered simultaneously by a single motor located in the centre segment (Takayama et al., 2000). Its special ability to adapt to irregular terrain is passive and provided by springs. The OmniTread serpentine robot (Granosik et al. 2005) for industrial inspection and surveillance was developed by Grzegorz Granosik in 2004; its active joints are actuated by pneumatic cylinders in order to compromise between strength and compliance.

However, the known robots usually have few configurations due to relatively simple docking and pose-adjusting mechanisms. For example, the Millibot Train robot consists of seven compact segments, which connect by couplers with one DOF (Brown et al. 2002). A reconfigurable mobile robot designed by M. Park is not able to change its configuration actively at all (Park et al. 2004). The robot from Université Libre de Bruxelles likewise has a one-DOF pose-adjusting mechanism and one coupler to change the configuration between neighboring modules (Sahin et al. 2002). From the mechanical point of view, the reconfiguration mechanism applied to a mobile robot is composed of the posture-adjusting and connecting mechanisms, and the most important technology is how to construct a posture-adjusting mechanism with a large workspace and high driving ability within a limited robot body. In complex field terrain, the fact that existing reconfigurable mobile robots can assume only limited configurations, due to relatively simple posture adjusting, is a ubiquitous deficiency.

The project presented in this chapter aims at developing a reconfigurable mobile multi-robot platform made highly flexible and robust by its three-DOF posture-adjusting ability. The key objective of the project is to develop a new posture-adjusting mechanism featuring a compact structure, a large workspace and powerful driving ability. As a secondary objective, the project has developed an effective connecting mechanism suited to flat terrain and synthesized it with the posture-adjusting mechanism. The expected locomotion abilities of the system are as follows.
1. The single robots in the system have an independent omni-directional locomotion ability equivalent to that of a normal outdoor mobile robot.
2. Due to the posture-adjusting mechanism, which provides strong drive and a large workspace, the robots can adjust the posture of their partners.
3. The connecting mechanism, tolerating large posture deviations on flat terrain, can link two robots in a locked connection and transmit large forces and torques between them.
4. Compared with a single robot, the connected robots are able to perform more demanding locomotion activities, such as stepping over high obstacles, crossing wide grooves, passing through narrow barriers and self-recovering from invalid postures, and other actions which are impossible for a single robot.

To reach the above targets, a novel reconfigurable mobile robot system, JL-I, based on a serial and parallel active spherical mechanism and a conic self-aligning connecting mechanism has been developed. This system is composed of three robot modules which are able not only to move independently, but also to connect to form a chain-structured group capable of reconfiguration. On flat terrain, the modules of JL-I can cooperate with each other by exchanging information to maintain high efficiency; on rugged terrain, the modules can actively adopt a reconfigurable chain structure to cope with cragged landforms that would be a nightmare for a single robot (Zhang et al. 2006; Wang et al. 2006). In this chapter, after giving an overview of JL-I, the discussion focuses on some of its special locomotion capabilities. Then the related kinematics analysis of the serial and parallel mechanism is discussed thoroughly, as well as the principle of the connecting mechanism. Based on this discussion, the mechanical realization of JL-I is introduced in detail. The prototype shows the advantage of the parallel mechanism in realizing a powerful driving force in a relatively small size. Finally, a series of successful on-site tests, such as crossing high vertical obstacles, the connecting action and self-recovery when the robot is upside-down, is presented to confirm the above principles and the locomotion abilities of JL-I.
Overview of the reconfigurable JL-I

2.1 Mechanical model of JL-I
Fig. 1. Adapting to terrains by pitching, yawing and rotating

By virtue of three uniform modules capable of docking with each other, JL-I has various moving modes which enable it to move over almost all kinds of rough terrain. The principle of terrain adaptability is shown in Fig. 1. In the connected state, JL-I can change its posture by pitching around the Y axis, yawing around the X axis and rotating around the Z axis. JL-I is endowed with the ability to adopt optimized configurations to negotiate difficult terrain, or to split into several small units to perform tasks simultaneously, through the three-DOF active spherical joints between the modules and the docking mechanism, which is capable of self-aligning within certain lateral and directional offsets. In JL-I, the yawing and pitching movements are achieved by a parallel mechanism; the third rotational DOF, around the joint's Z axis, is achieved by a serial mechanism. There are two reasons for using serial and parallel mechanisms in JL-I. Firstly, the robot can be made lightweight and dexterous while allowing for a larger payload. Secondly, the advantages of the high rigidity of a parallel mechanism and the extended workspace of a serial mechanism can be combined, thus improving the flexibility of the robotic system.
Locomotion capabilities
It can easily be imagined that the locomotion abilities of JL-I are enhanced in the connected state, e.g. climbing higher steps, spanning wider ditches and climbing stairs. Furthermore, JL-I is capable of some novel actions required in outdoor environments, e.g. self-recovery and passing through a narrow fence.
90° self-recovery
It is possible for the robot to implement a 90° recovery movement by adopting the proper configuration sequence, as shown in Fig. 2 (a scripted rendering of this sequence is sketched below):

a) The robot is lying on its side.
b) The first and last modules yaw up around the X axes of the active joints.
c) The first and last modules then rotate 90° around the Z axes.
d) After that, they pitch down around the Y axes of the active joints until they touch the ground, raising the middle module up.
e) The middle module rotates around the Z axis until it is parallel to the ground.
f) Finally, the modules pitch down around the Y axes of the active joints until all three modules touch the ground together. The robot is now in its home state again, and the process of 90° self-recovery is complete.
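Purely as an illustration of how such a sequence might be scripted, the sketch below encodes the steps as a list of joint commands; the module indices, axis names and command interface are hypothetical placeholders, not JL-I's actual API.

```python
# Illustrative only: the 90-degree self-recovery sequence as scripted commands.
RECOVERY_90 = [
    ("module1", "yaw_x", "+"), ("module3", "yaw_x", "+"),        # step b
    ("module1", "rot_z", 90),  ("module3", "rot_z", 90),         # step c
    ("module1", "pitch_y", "-"), ("module3", "pitch_y", "-"),    # step d
    ("module2", "rot_z", "until_parallel_to_ground"),            # step e
    ("module1", "pitch_y", "-"), ("module3", "pitch_y", "-"),    # step f
]

def run_sequence(seq):
    for module, axis, amount in seq:
        print(f"{module}: drive {axis} by {amount}")  # stand-in for motor commands

run_sequence(RECOVERY_90)
```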
180° Self-recovery
It is also possible for the robot to tip over and realize a 180° recovery movement, as shown in Fig. 3; in the final steps the modules pitch down around the Y axes of the active joints again until all three modules touch the ground, completing the 180° self-recovery.
Crossing a narrow fence
As shown in Fig. 4, the train-configuration robot is able to cross a fence narrower than the width of its modules:

a) The robot is in its home state, and a sensor detects the fence in the moving direction.
b) The robot stops before the fence; the first module then pitches up around the Y axis and rotates 90° about the Z axis.
c) The crossing movement continues until the first module has passed through the fence.
d) The first module rotates and pitches back into the home state, and the three modules touch the ground together again.

The following steps (e) to (k) for the second and third modules are similar to those for the first one, and the process is repeated until the robot has crossed the fence entirely. To show the principle clearly, lateral views of steps (e) and (f) are also given.
Kinematics analysis of the active spherical joint
As described above, the robot's reconfiguring abilities are achieved by the motion of the 3-DOF active spherical joints. Two of the DOF, achieved by the parallel mechanism, are yawing and pitching around the joint's X and Y axes, respectively. The third rotational DOF, around the joint's Z axis, is achieved by the serial mechanism. The required orientation of the reference frame O'X'Y'Z' on the back module is achieved by a rotation θz, a pitching angle θy and a yawing angle θx about the respective axes.
From the mechanical point of view, the pitching and yawing motions are realized by the outstretching and retracting movements of links L1 and L2 of the parallel mechanism, and the rotation θz is actuated by the serial mechanism. The reconfiguring movement has three degrees of freedom and can be described by the generalized coordinate

θ = (θx, θy, θz)^T (3)

The joint variables of the movement are denoted q, described as

q = (L1, L2, θz)^T (4)
The purpose of the kinematics analysis is to deduce the relationship between q and θ. In Fig. 5, the points A, B, C and D are described in the OXYZ coordinate frame as (5).
The homogeneous transformation matrix [T] from the world coordinate frame OXYZ to the frame O'X'Y'Z' is described as (6); following the rotation sequence above, it can be written as the composition [T] = Rot(Z, θz)·Rot(Y, θy)·Rot(X, θx). After the reconfiguring movement, A, B, C and D move to new positions A1, B1, C1 and D1, whose Cartesian coordinates follow by applying [T] to the original points, as expressed in (7) and (8).
There are constraints imposed by the mechanical structure, as shown in (9) and (10): the lengths of links L1 and L2 are equal to the distances between C1 and A1 and between D1 and B1, respectively.
Substituting these results into (9) and (10) yields the kinematic equations, and the relation between q and θ can then be concluded as (13).
The relationship between the world coordinate frame and the joint reference frame can thus be concluded; furthermore, the movements can be predicted from the joints' driving outputs.
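As an illustration of this inverse-kinematics chain, here is a small numpy sketch (our own code, under assumptions the text does not pin down: A and B are treated as fixed attachment points and C, D as platform points, and the placeholder coordinates stand in for Eq. (5), which is not reproduced here). It composes the orientation as in Eq. (6) and evaluates the link-length constraints of Eqs. (9)-(10).

```python
import numpy as np

def rot(axis: str, ang: float) -> np.ndarray:
    """Elementary rotation matrix about one axis."""
    c, s = np.cos(ang), np.sin(ang)
    return {"x": np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
            "y": np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
            "z": np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])}[axis]

def link_lengths(theta, A, B, C, D):
    """Inverse kinematics q = (L1, L2) for theta = (theta_x, theta_y, theta_z)."""
    tx, ty, tz = theta
    T = rot("z", tz) @ rot("y", ty) @ rot("x", tx)          # orientation, cf. Eq. (6)
    C1, D1 = T @ C, T @ D                                   # moved platform points
    return np.linalg.norm(C1 - A), np.linalg.norm(D1 - B)   # cf. Eqs. (9)-(10)

# Placeholder geometry in metres (stands in for Eq. (5)):
A, B = np.array([0.05, 0.04, 0.0]), np.array([0.05, -0.04, 0.0])
C, D = np.array([0.05, 0.04, -0.08]), np.array([0.05, -0.04, -0.08])
print(link_lengths((0.2, 0.1, 0.0), A, B, C, D))            # L1, L2 at a sample posture
```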
System realization

4.1 Mechanical realization
The JL-I system consists of three connected, identical modules for crossing grooves, steps and obstacles and for traveling in complex environments. The mechanical structure is flexible due to its uniform modules and special connection joints (Fig. 6a). Each module is actually an entire robot system that can perform distributed activities (Fig. 6b). A single module is about 35 centimeters long, 25 centimeters wide and 15 centimeters high. Fig. 7 shows the mechanical structure of the module, which has two powered tracks, a serial mechanism, a parallel mechanism and a docking mechanism. Two DC motors drive the tracks, providing skid-steering ability in order to realize flexible omni-directional movement. The docking mechanism consists of two parts: a cone-shaped connector at the front and a matching coupler at the back of the module. It enables any two adjacent modules to link, forming a train configuration.
Realizing the parallel mechanism
The realization of the parallel mechanism is shown in Fig. 8. Each branch of it consists of a driving platform, a Hooke joint, a lead screw, a nut slider, a ball bearing, a synchronous belt system, a DC motor and a base platform. The Hooke joint connects the driving platform and the nut slider. The lead screw is supported by a ball bearing in the base platform. The cone-shaped connector fixed on the driving platform is called a buffer head, because its rubber is used to buffer the impact during the docking process. Besides the two branches, there is a knighthead fixed on the base platform and connected to the driving platform by another Hooke joint. By revolving the two lead screws, the driving platform can be manipulated relative to the Hooke joint on the knighthead.
Fig. 8. The parallel mechanism
There are two advantages in applying the synchronous belt system:

a) When the screw revolves, it rocks around the ball bearing. By using the synchronous belt system and an elastic connector, the rocking motion of the screw is isolated from the motor.
b) The motor and the lead screw can be installed on the same side of the base platform, which decreases the dimensions of the structure.
Realizing the serial and docking mechanism
The docking mechanism consists of two parts: a cone-shaped connector at the front (shown in Fig. 8) and a matching coupler at the back of the module, as shown in Fig. 9. The coupler is composed of two sliders propelled by a motor-driven screw. The sliders form a matching funnel which guides the connector to mate with the cavity and enables the modules to self-align within certain lateral and directional offsets. After that, two mating planes between the sliders and the cone-shaped connector constrain the movement, thus locking the two modules. This mechanism enables any two adjacent modules to link, forming a train configuration. The independent module therefore has to be rather long in order to realize all necessary docking functions, and in designing this mechanism and its controls an equilibrium between flexibility and size has to be reached. A DC motor is connected to the coupler with its shaft aligned with the module's Z axis, which also passes through the center of the Hooke joint on the knighthead of the parallel mechanism. Therefore, a fully active spherical joint is formed when two modules are linked.
Control system
The control system of the robot, based on an industrial PC (IPC) and a master-slave structure, meets the requirements of functionality, extensibility and easy handling (Fig. 10). Multiple-process programming capability is guaranteed by the principle of the control structure.
The hardware consists of an SBC-X255, an independent image processing unit and a low-level driving unit (SBC 2).
The SBC-X255 is the core part of the control system. It is a standard PC/104+ compliant single-board computer with an embedded low-power Intel XScale PXA255 (400 MHz). This board operates without a fan at temperatures from -40 °C up to 85 °C and typically consumes less than 4.5 W while supporting numerous peripherals. The Ethernet port is used as a communication interface between the IPC and the image processing unit, which is in charge of searching and monitoring. The IPC is a higher-level controller and does not take part in joint motion control; its responsibilities include receiving orders from the remote controller, planning operational processes and receiving feedback information. There are two kinds of external sensors on the robot: a CCD camera and touch sensors, which are responsible for collecting information about the operational environment. The internal sensors, such as GPS, a digital compass and gyro sensors, are used to reflect the self-status of the robot. The gesture sensor sends the global locomotion information of the robot, θx, θy and θz, to the controller, which is essential for the inverse kinematics. Meanwhile, limit switches give the controller the positions of the joints; on joints where an accurate position is needed, optical encoders are used.
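To make the master-slave split concrete, here is a minimal illustrative sketch (our own code; the sensor stub, message format and inverse-kinematics placeholder are all hypothetical, not JL-I's real software): the master only plans and dispatches target postures, while the slave loop reads the gesture sensor and drives the joints.

```python
import queue
import threading

commands: queue.Queue = queue.Queue()

def read_gesture_sensor():
    return (0.0, 0.0, 0.0)  # stand-in for the onboard gesture sensor (theta_x, _y, _z)

def inverse_kinematics(target, current):
    return tuple(t - c for t, c in zip(target, current))  # toy placeholder

def drive_joints(q):
    print("joint command:", q)  # stand-in for the DC-motor drivers

def slave_loop():
    while True:
        cmd = commands.get()
        if cmd is None:        # shutdown sentinel
            break
        drive_joints(inverse_kinematics(cmd["target_pose"], read_gesture_sensor()))

worker = threading.Thread(target=slave_loop)
worker.start()
commands.put({"target_pose": (0.2, 0.1, 0.0)})  # master: plan and dispatch
commands.put(None)
worker.join()
```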
On-site tests
Relevant successful on-site tests of the mobile robot were carried out recently, confirming the principles described above and the robot's abilities. Fig. 11 shows the docking process of the connection mechanism, whose most distinctive features are its self-aligning ability and its great driving force. With the help of the powered tracks, the cone-shaped connector and the matching coupler can mate well within ±30 mm lateral offsets and ±45° directional offsets. The experimental results show that the 3-DOF active joints with serial and parallel mechanisms are able to achieve all the desired configurations. The performance specifications of JL-I are given in Table 1.
Parameters | Values
Posture adjustment angle around X-axis | ±45°
Conclusions
The modular reconfigurable robot has the ability to change its configuration, which makes it more suitable for complex environments. In contrast to conventional theoretical research, the project introduced in this chapter successfully completes the following innovative work:

a) It proposes a robot named JL-I which is based on a modular reconfiguration concept. The advantages and characteristics of the mechanism are analysed. The robot features a docking mechanism with which the modules can connect or disconnect flexibly, and the active spherical joints formed by serial and parallel mechanisms endow the robot with the ability to change shape in three dimensions.
b) A kinematics model of reconfiguration between two modules is given. The relationship between the world coordinate frame and the joint reference frame is concluded, and the movements can be predicted from the joints' driving outputs. The analysed results are important for the system design and for the design of the robot's control mechanism.
c) Experimental tests have shown that JL-I can implement a series of locomotion capabilities such as 90° recovery, 180° recovery and crossing steps. This demonstrates the mechanical feasibility, the soundness of the analysis and the outstanding movement adaptability of the robot.

Future research will focus on the following aspects:

a) developing a new docking mechanism which tolerates larger offsets on rugged terrain and can be used as a simple manipulator;
b) developing more reliable track modules with a shock-absorption function;
c) developing a new mechanism which can actively undock a disabled robot module.
Acknowledgement
The work in this chapter is supported by the National High-tech R&D Program (863 Program) of China (No. 2006AA04Z241).
Fig. 4. The sequence of crossing a narrow fence
Fig. 5. The kinematics model of the active spherical joint

To demonstrate the reconfiguring possibility, the kinematics analysis of two connected modules should be studied first. Fig. 5 shows the kinematics model of the joint between two modules, where OXYZ is the world coordinate frame fixed at the plane QEF, which represents the front, unmoving module during the reconfiguration. The origin is located at the universal joint O, the Z axis coincides with the axis of the serial mechanism, and the X axis points to the middle point of line AB. Another reference frame O'X'Y'Z' is fixed to the triangular prism OABPCD, which represents the back, movable module. O'X'Y'Z' coincides with OXYZ when the spherical joint is in its home state. Equations (1) and (2) are satisfied due to the mechanical constraints; QF is perpendicular and equal to QE:

QEF ∥ OAB ∥ PCD (1)
QEF = OAB = PCD (2)
Fig. 9. The serial and docking mechanism

This docking mechanism can compensate a position deviation within ±30 mm and a posture deviation within ±45° between two modules. The self-locking characteristic of the screw-nut mechanism ensures a reliable connection between two modules, able to endure the vibration of motion.
Fig. 10. The control system of JL-I's module
Fig. 11. The docking process

Compared with many reconfigurable mobile robots, JL-I improves its flexibility and adaptability by using novel active spherical joints between modules. The figures show the typical motion functionalities one by one, whose principles are discussed above.
Fluorescent Fatty Acid Transfer from Bovine Serum Albumin to Phospholipid Vesicles: Collision or Diffusion Mediated Uptake
Purpose: The extent of palmitate uptake by hepatocytes is dependent upon the surface charge of the extracellular binding protein. Specifically, hepatocyte uptake is greater when palmitate is bound to cationic binding proteins than when it is bound to anionic proteins. To further understand the role of protein surface charge in the uptake of protein-bound ligands, we examined the rate of transfer of fluorescent anthroyloxy palmitic acid (AOPA) to model membranes bearing different surface charge groups in the presence of anionic and cationic extracellular proteins. Method: The AOPA transfer rate in the presence of bovine serum albumin (ALB; isoelectric point pI = 4.8-5.0) or modified ALB (ALBe; pI = 7.0-7.5) to negative, positive and neutral lipid vesicles was investigated using a fluorescence resonance energy transfer assay. Results: The rate of AOPA transfer from both proteins decreased when ionic strength was increased, was directly dependent on the concentration of acceptor lipid vesicles, and was affected by both the lipid membrane surface charge and the protein-bound concentration. Conclusion: The data support the notion that AOPA transfer from binding proteins to lipid membranes occurs through two concomitant processes: aqueous diffusion of the unbound ligand (diffusion-mediated process) and a collisional interaction between the protein-ligand complex and the acceptor membrane. The contribution of diffusion-mediated transfer to the overall uptake process was determined to be 3 to 4 times less than the contribution of the collisional interaction. This study strengthens the hypothesis that charged amino acid residues on proteins are important for effective collisional interaction between protein-ligand complexes and cell membranes, through which more free ligand can be supplied for the uptake process.
INTRODUCTION
Nonesterified long-chain fatty acids (LCFA) are a major energy source for most mammalian cells. In plasma these substrates are highly protein-bound owing to their high lipophilicity. Extracellular protein binding makes understanding their uptake process very complicated; as such, the uptake of highly lipophilic ligands has been a controversial area of investigation.
Prior to entering the cell, these substrates must traverse several sequential barriers. One of the steps involved in the uptake process is dissociation from the extracellular protein-binding site. Whether the dissociation step is mediated by interactions with the surface charge groups on the outer membrane leaflet of the cell is not clear (1). Although it is generally accepted that uptake does not involve the protein-ligand complex, several reports provide strong evidence that ligand uptake may occur directly from the protein-bound fraction (2)(3)(4)(5)(6).
Evidence supporting the hypothesis that an ionic interaction between the protein-ligand complex and the cell surface likely mediates the supply of ligand from the protein-bound fraction to the cell has been reported (2). Hepatocyte uptake of [3H]-palmitate was shown to be greater when the binding protein carried a net positive charge at physiological pH than when it carried a net negative charge. The present studies were undertaken to further explore the uptake process. The objective was to determine whether an ionic interaction between extracellular binding proteins and acceptor membranes was associated with an increase in uptake rate. The transfer rate of LCFAs from albumin to model membranes was investigated using a fluorescence resonance energy transfer assay. We explored the transfer rate of anthroyloxy-labeled palmitic acid (AOPA) bound to ALB (isoelectric point, pI = 4.8) or modified ALB (pI = 7.5) to negative, positive, and neutral lipid vesicles. This technique has been successfully used to study the rate and mechanism of transfer of many important physiological substrates, such as cholesterol (7), phospholipids (8), sphingomyelin (9), short-chain fatty acids (10), and long-chain fatty acids (11,12), between various binding proteins and phospholipid vesicles.
Chemical modification of albumin
Albumin was modified by activating the carboxyl groups with carbodiimide treatment and then aminating them with ethylenediamine, as previously described (13). Briefly, anhydrous ethylenediamine (66.6 ml) was added to 500 ml distilled water. The pH of the resulting solution was adjusted to 4.75 with approximately 350 ml of 6 N HCl. ALB (2 g) and 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (0.725 g) were added to this solution. The mixture was allowed to react for approximately 10 minutes with gentle stirring, and the reaction was terminated by the addition of 30 ml of 4 M acetate buffer (pH 4.75). The modified albumin (ALBe) solution was dialyzed against distilled water at 4 °C, concentrated, lyophilized, and stored at -20 °C until use.
Determination of molecular weight
Mass measurements of ALB and ALB e , dissolved in 5% acetic acid in methanol/water (1:1 v/v), were conducted on an orthogonal injection electrospray ionization time-of-flight (ESI/TOF III) mass spectrometer (14). Prior to analysis, residual sodium, potassium, and phosphate were removed from the proteins by ultrafiltration using 25 mM ammonium bicarbonate buffer.
Analysis was performed at a declustering voltage of 190 eV. Molecular masses were obtained by deconvolution using a computer program as previously described (14).
Determination of extent of modification
Amino acid modifications of ALB e were identified by peptide mapping as follows. Tryptic digestions of ALB e and ALB (as a control) were conducted in 25 mM ammonium bicarbonate solution (1% trypsin, w/w) at 37 °C for 24 hours, and the digests were analyzed by mass spectrometry on a SCIEX prototype tandem quadrupole/TOF mass spectrometer (QqTOF) coupled to a matrix-assisted laser desorption ionization (MALDI) ion source. A 2,5-dihydroxybenzoic acid (DHB) solution (100 mg/ml in acetone) was used as the matrix. Peptide fragments for sequence identification were formed by collision-induced dissociation (CID) with argon as the collision gas and collision energies of 50-180 eV.
Determination of isoelectric point (pI)
The isoelectric points (pI) of ALB and ALB e were determined on IEF gels using a Model 111 Mini IEF cell (BIO-RAD) and BioLyte 3/10 obtained from Bio-Rad Laboratories (California, USA). The gels were stained with Coomassie blue R-250.
LUV Quality Assessment
Determination of zeta potential, mean vesicle size, and size distribution was performed using a Nicomp 380 ZLS Submicron Particle Sizer/Zeta Potential Analyzer (Particle Sizing Systems, Langhorne, PA, USA). A single zeta potential measurement was conducted, which reflected the value from combined liposome samples. Electron microscopy was used to determine the unilamellarity of the lipid vesicles.
Binding of AOPA to ALB and ALB e
Binding of AOPA to ALB or ALB e was analyzed by fluorimetric titration according to Cogan et al. (17). Briefly, fluorescently labeled AOPA was added from a concentrated ethanolic stock solution to PBS containing 1 μM ALB or ALB e . The molar ratio of AOPA to protein in PBS varied between 0.1 and 3.0. The total ethanol concentration was less than 1.5% at the end of the titration. An increase in fluorescence intensity was observed with increasing AOPA concentration at 37 °C. No correction for background levels of AOPA in PBS was necessary, since the contribution of the AOPA blank to the observed fluorescence intensity in the presence of protein was less than 5%.

Equilibrium Partitioning of AOPA Between Binding Proteins and Acceptor Vesicles

ALB or ALB e (1 mM) in PBS, pH 7.4, was incubated with 0.1 mM AOPA at 37 °C for 10 minutes and the fluorescence intensity was monitored. The protein-AOPA complex was then mixed with negative, positive, or neutral LUVs and the mixture was incubated for 20 minutes. Preliminary studies showed that this time period was sufficient to achieve equilibrium partitioning of AOPA. The remaining measured fluorescence reflected only the AOPA bound to protein. The relative partition ratio of AOPA between protein and each lipid vesicle preparation (PR AOPA , mol/mol) was calculated according to equation 1, in which the numerator is the fluorescence intensity measured after the addition of lipid vesicles divided by the fluorescence intensity measured before the addition of lipid vesicles, multiplied by 100.
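As a concrete illustration of this calculation, the short Python sketch below computes the percentage of AOPA that remains protein-bound (the numerator described above) and a partition ratio. Because the full expression of equation 1 is not reproduced here, the denominator (taken as the complementary percentage transferred to the vesicles) is an assumption, and all names and numbers are illustrative only.

```python
def percent_protein_bound(f_before, f_after):
    """Percentage of AOPA still protein-bound after vesicle addition
    (the numerator described for equation 1)."""
    return 100.0 * f_after / f_before

def partition_ratio(f_before, f_after):
    """Relative partition ratio of AOPA between protein and vesicles.

    ASSUMPTION: the denominator is taken as the complementary percentage
    transferred to the vesicles; the paper's exact expression is not shown here.
    """
    bound = percent_protein_bound(f_before, f_after)
    return bound / (100.0 - bound)

# Hypothetical intensities: 1000 counts before vesicle addition, 900 counts after
print(partition_ratio(f_before=1000.0, f_after=900.0))  # ~9, i.e. partitioning favours the protein
```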
Transfer Assay
Transfer kinetics of AOPA from ALB or ALB e to different acceptor vesicles were measured using a resonance energy transfer assay as described by Wootan et al. (18). Briefly, 1 μM ALB or ALB e was incubated with 0.1 μM AOPA at 37 °C until binding equilibrium was reached (as measured by maximum fluorescence intensity). AOPA was stored as a concentrated solution containing ethanol. The ethanol concentration in the incubation mixture was < 0.1% (v/v). Lipid vesicles were added to the protein-palmitate complex so that the final molar ratio of lipid vesicles to protein was 100. Upon mixing for less than 5 seconds, the decrease in AOPA fluorescence with time was monitored with an RF-5000 Recording Spectrofluorometer P/N 206-12400 (Shimadzu Scientific Instruments Inc., Columbia, Maryland). Excitation was at 361 nm and emission was monitored at 470 nm. The remaining unchanged fluorescence signal reflected the protein-bound AOPA. There was no detectable intensity for the binding protein alone, the vesicles alone, or unbound AOPA in PBS at the concentrations used.
Data were collected in photon-counting mode at 10-s intervals for 600 to 1000 s and analyzed by plotting the variation of fluorescence intensity versus time and fitting to a single exponential equation (18) of the form C(t) = C + (C o - C)e^(-Kt), where C o is the initial (maximum) fluorescence intensity at zero time, K is the transfer rate of AOPA from binding protein to acceptor vesicles (units of s -1 ), t is the time interval (s), and C is the fluorescence intensity of AOPA that remained bound to the protein at equilibrium. Goodness of fit was assessed by the correlation coefficient, which was > 0.9, and the absolute sum of squares, which was < 0.1.
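As an illustration of this fitting step, the following Python sketch fits a simulated fluorescence trace to the monoexponential form given above using SciPy. The sampling interval follows the description in the text, but the data values and function names are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, c0, c_eq, k):
    # Monoexponential decay from initial intensity c0 to plateau c_eq with rate k (s^-1)
    return c_eq + (c0 - c_eq) * np.exp(-k * t)

def fit_transfer_rate(times_s, intensities):
    # Fit a fluorescence time course; returns (c0, c_eq, k) and R^2 as a goodness-of-fit measure
    p0 = (intensities[0], intensities[-1], 0.01)  # crude initial guesses
    params, _ = curve_fit(single_exponential, times_s, intensities, p0=p0)
    residuals = intensities - single_exponential(times_s, *params)
    r_squared = 1.0 - np.sum(residuals ** 2) / np.sum((intensities - intensities.mean()) ** 2)
    return params, r_squared

# Hypothetical trace: readings every 10 s for 600 s
t = np.arange(0, 600, 10.0)
f = 20.0 + 100.0 * np.exp(-0.02 * t) + np.random.normal(0.0, 1.0, t.size)
params, r2 = fit_transfer_rate(t, f)
print(f"K = {params[2]:.4f} s^-1, R^2 = {r2:.3f}")
```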
Data Analysis
Data are presented as mean ± SEM unless otherwise stated. N is the number of replicates performed for each experiment. Whenever appropriate, data were subjected to non-linear least-squares regression analysis or linear regression analysis. For statistical comparison between groups, Student's t-test or one-way analysis of variance (ANOVA) was used, taking p < 0.05 as the level of significance.
RESULTS

Molecular Weight (MW) and protein isoelectric point (pI)
The molecular weight of ALB and ALB e was determined to be 66,400 and 67,300, respectively. The range in isoelectric point (pI) for ALB and ALB e was 4.8-5.0 and 7.0-7.5, respectively. Thus, ALB contained a net negative charge while ALB e was neutral at physiological pH. Peptide mapping of ALB e showed that approximately 37% of the glutamic acid residues were modified by the addition of ethylenediamine groups.
LUVs Physical Properties
Vesicle diameter and zeta potential of the lipid vesicles are shown in Table 1. There was no statistical difference in the mean vesicle size of the three lipid vesicle preparations.
Freeze-fracture electron microscopy confirmed unilamellarity of vesicles (data not shown).
AOPA Binding
Figure 1A shows the fluorescence intensity of AOPA in the presence of ALB, ALB e , and PBS. Binding of AOPA to ALB and ALB e is shown in Figure 1B and was analyzed according to the method of Cogan et al. (17). The calculated apparent dissociation constant (K d ) for ALB and ALB e was 0.024 ± 0.003 and 0.016 ± 0.001 μM (n=4), respectively. The apparent number of ligand binding sites for ALB and ALB e was calculated to be n=2. The data show that ALB and ALB e bind AOPA with comparable affinities. Based on these K d values, the protein:ligand molar ratio used in all transfer assays was 10:1, a ratio at which more than 99% of the AOPA was bound to ALB or ALB e .
Partitioning of AOPA Between Binding Proteins and Lipid Vesicles
The relative distribution of AOPA between ALB or ALB e and the lipid vesicles was calculated from the equilibrium fluorescence intensity before the addition of vesicles and the remaining fluorescence intensity after the addition of vesicles according to equation 1. Data in Table 2 are presented as the relative partitioning of AOPA between binding proteins and lipid vesicles. While the difference in partitioning of AOPA, in the presence of ALB and ALB e , between the negative and positive lipid vesicles was highly significant (p<0.0001), there was no statistical difference in the partitioning of AOPA between ALB or ALB e and the neutral lipid vesicles. AOPA had the greatest preference for positively charged lipid vesicles, followed by neutral and lastly negatively charged LUVs. In all cases, partitioning favored the binding proteins rather than the vesicles.
Effect of Lipid Vesicle Concentration on AOPA Transfer to neutral LUV
To discriminate between AOPA transfer occurring by aqueous diffusion and that occurring through a direct interaction of protein and acceptor membrane, we examined AOPA transfer from binding protein as a function of increasing acceptor membrane concentration. Figure 2 shows the transfer of AOPA from ALB or ALB e to neutral LUVs. Prior to the addition of vesicles, there was no statistical difference in the maximum fluorescence intensity between the two proteins. Over a range of phospholipid:protein (mol/mol) ratios of 20:1 to 100:1, the rate of transfer from both proteins increased linearly as the lipid vesicle concentration increased, suggesting that one possible mechanism of fatty acid transfer from proteins is a collisional interaction of the protein-fatty acid complex with the phospholipid membranes. If the transfer mechanism were solely aqueous diffusion, we should not expect to see a change in the transfer rate as the number of acceptor vesicles is increased (11). Interestingly, the Y-intercepts of the regression lines in Figure 2 were 0.013 ± 0.001 s -1 and 0.009 ± 0.0003 s -1 for ALB and ALB e , respectively (p<0.05). Transfer rates at zero lipid vesicle concentration were taken as estimates of the dissociation of AOPA into the aqueous phase (K off ) (12). The K off of AOPA-ALB e was slower than the K off of AOPA-ALB, supporting the above results showing that the K d of AOPA-ALB e is lower than the K d of AOPA-ALB.
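A minimal sketch of the extrapolation described here: the observed rates are regressed against the acceptor concentration and the intercept is read off as an estimate of K off. The numerical values below are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical observed transfer rates (s^-1) at increasing phospholipid:protein molar ratios
lipid_to_protein = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
observed_rate = np.array([0.015, 0.018, 0.020, 0.023, 0.025])

fit = stats.linregress(lipid_to_protein, observed_rate)
k_off = fit.intercept          # rate extrapolated to zero acceptor vesicle concentration
collisional_slope = fit.slope  # increase in rate per unit of acceptor concentration

print(f"K_off ~ {k_off:.4f} s^-1, slope = {collisional_slope:.2e} per mol/mol, r = {fit.rvalue:.3f}")
```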
Effect of Ionic Strength on AOPA Transfer to neutral LUV.
If transfer of AOPA from extracellular binding proteins to acceptor vesicles occurred through aqueous diffusion, then the rate of transfer may be affected by changing the aqueous solubility of the ligand (12). Conversely, if transfer occurred solely by a collisional interaction, then the transfer rate ought to be unaffected by changes in ligand solubility. Figure 3 shows that an increase in sodium chloride concentration resulted in a nonlinear decrease in the rate of AOPA transfer from ALB or ALB e . The decrease in transfer rates was linear up to 540 mM sodium chloride. At higher concentrations (>540 mM), the decrease in transfer rate appeared to reach a nadir. Although the results indicate that AOPA transfer must be occurring in part through diffusion, the possibility that the observed decrease in transfer rates was due to changes in protein binding and/or lipid vesicle integrity with increasing ionic strength cannot be ruled out (see discussion).
Effect of Phospholipid Surface Charge on AOPA Transfer.
To further explore the effect of membrane properties on the transfer rate of AOPA from ALB or ALB e , we measured the AOPA transfer rates to acceptor membranes of different surface charge. If transfer occurred solely through diffusion, then the rate should not be significantly affected by acceptor membrane characteristics. Figures 4A and 4B show a statistical difference in the transfer rate of AOPA from ALB and ALB e to negatively and positively charged vesicles. The AOPA transfer rate to negatively charged vesicles in the presence of ALB was significantly lower (0.019 s -1 , n=12) than that to positively charged vesicles (0.026 ± 0.0016 s -1 , n=8, p<0.0007). In contrast, the AOPA transfer rate in the presence of ALB e was significantly higher (0.026 ± 0.0015 s -1 , n=4) when negatively charged vesicles were used as acceptor membranes than when positively charged vesicles were used (0.0135 ± 0.00054 s -1 , n=10, p<0.0001).
Effect of ALB Concentration on AOPA Transfer Rate
To further investigate the role of ALB in AOPA transfer to model membranes, we measured the transfer rate of AOPA to positive LUVs in the presence of 1.0 and 10.0 μM ALB at a low AOPA to ALB molar ratio (0.1). At these protein concentrations and this low AOPA to ALB molar ratio, the free AOPA concentration is predicted to be unchanged and independent of the ALB concentration (19). Figure 5 shows that the transfer rate of AOPA from 10 μM ALB was significantly higher than from 1.0 μM ALB (p<0.0001). These data imply that the AOPA-ALB bound fraction is an important determinant of transfer.
DISCUSSION
The objective of the present study was to elucidate whether the initial transfer process for long-chain fatty acids bound to proteins that differ in their surface charge characteristics is affected by acceptor vesicle membrane surface charge, acceptor concentration, and the properties of the aqueous phase. One of the first barriers that cellular substrates must overcome prior to cellular entry is binding to the outer membrane leaflet, followed by transmembrane flux. Transmembrane flux of long-chain fatty acids is known to occur via diffusion and membrane transport proteins and has been well studied; however, the mechanism by which substrates gain access to and interact with the outer plasma membrane leaflet is less clear. Thus, our study examined the initial transfer of fluorescently labeled palmitic acid (AOPA) from ALB and ALB e to LUVs. The results show that: (a) the transfer is a first-order process and is best described by a monoexponential equation; (b) the mechanism of transfer involves two processes, an aqueous diffusion-mediated process and a collision-mediated process; (c) the observed kinetic rate is estimated to be the arithmetic sum of the slower dissociation rate (dissociation of AOPA from its protein binding site into solution, K off ) and the faster collisional transfer rate; and (d) AOPA transfer is mediated in part by a membrane-protein interaction, providing the possibility of regulating the movement of these substrates by changes in membrane composition and structure or in the characteristics of the plasma delivery vehicle (e.g., lipid vesicle). These two processes are independent of each other and occur at the same time. The various factors affecting both kinetic processes are discussed below.
We have previously determined the high-affinity binding constants (K a ) for ALB and ALB e using a tracer radiolabeled fatty acid, e.g., [3H]-palmitic acid, and the heptane:buffer partitioning method (19,20). In this method, the [3H]-palmitic acid to protein molar ratio was less than 0.1. Thus, the calculated binding constant (K a ) represents binding of palmitate to the high-affinity binding site on the protein. In the present study we used anthroyloxy-labeled fatty acid and fluorimetric titration (0 to 3 moles of AOPA per 1 mole of protein) to determine the equilibrium binding constants for AOPA binding to ALB or ALB e . The binding constants calculated in this study were 3-fold lower than our previously obtained binding constants for the same proteins. Thus, it may be that, in this study, two fatty acid molecules bind to the first two high-affinity binding sites or to the high-affinity and low-affinity binding sites on the protein molecule; the calculated apparent dissociation constants would then be the average value of these binding sites. Our calculated binding constants are, however, comparable to the lower values reported in the literature (21,22). To estimate valid transfer rates, it was necessary to determine the K a values for both AOPA-ALB and AOPA-ALB e in order to calculate the protein concentrations that produce the same unbound ligand concentrations. Since the measured K a values were similar, the ALB and ALB e concentrations were also similar in the transfer experiments.
Measurement of equilibrium partitioning of AOPA between ALB or ALB e and each lipid vesicle preparation shows that partitioning of AOPA favors binding protein over membrane phospholipids by an order of magnitude. Therefore, it was necessary to use higher acceptor to donor ratios in the transfer assay experiments to ensure that unidirectional transfer is monitored. Data in Table 2 show that AOPA has the greatest preference for positively charged lipid vesicles. Second preference was for negatively charged lipid vesicles and the least preference was for neutral lipid vesicles. While there is no difference in the partitioning of AOPA between ALB or ALB e and neutral LUVs, the observed significant difference in the partitioning of AOPA between both proteins and charged lipid vesicles indicates that charged surface groups might regulate the distribution of AOPA between protein and charged lipid vesicles. Our AOPA partitioning data using neutral lipid vesicles are comparable with the equilibrium distribution of [3H]-palmitic acid between albumin and lipid vesicles prepared from phosphatidylcholine (palmitate:albumin molar ratio = 2; K eq = 310) (23).
The possibility that AOPA transfer from binding protein to acceptor lipid vesicles occurs via a collisional mechanism was examined by increasing the concentration of the acceptor membranes and thereby the number of vesicles available for collisional interaction with the AOPA-protein complex. If this mechanism is important in the transfer process, then we expect an increase in the AOPA transfer rate as the acceptor membrane concentration increases (12). This is clearly the case, as seen in Figure 2. With increasing lipid vesicle concentration, there was an increase in the quenching of AOPA fluorescence, which reflects greater transfer of AOPA to the lipid vesicle membrane. The observed rates are estimated to be equal to the dissociation rates of AOPA from the binding proteins into water (K off , Y-intercepts in Figure 2) plus the collision-mediated transfer rates.
Our estimated low K off values are reasonably comparable to the lower values reported in the literature if one considers the differences in FFA:ALB molar ratios used in this study and other studies (24-26). Using albumin-agarose as an acceptor, we previously determined the albumin-palmitate dissociation rate constant to be approximately 0.07 s -1 (2,20). This value is higher than that estimated in this study. The higher K off value determined previously may be an overestimate if part of the FFA transfer occurred through direct collisional interaction between the FFA-ALB complex and the agarose beads.
To further examine the mechanism(s) through which AOPA is transferred to lipid vesicles, we altered its aqueous solubility by increasing the solution ionic strength (Figure 3). If the aqueous solubility of AOPA were not an important factor in the transfer process, we would not expect a change in the transfer rate with increasing ionic strength. The data in Figure 3 demonstrate that the transfer rate decreases nonlinearly with increasing salt concentration, suggesting that one component of the transfer process may occur via aqueous phase diffusion. Another possible explanation for the effect of high ionic concentration on the transfer rate is that hydrophobic and electrostatic protein-membrane interactions are disrupted by high salt concentrations, which may affect the transfer rate (27). The observation that the decrease in the transfer rate appears to level off with further increases in salt concentration suggests that diffusion-mediated transfer is not the only mechanism taking place and that some interaction of the ligand-protein complex with the lipid vesicles (collision-mediated transfer) may occur. We also examined the possibility that increasing salt concentration might affect the lipid bilayer structure and/or the lipid vesicle mean size. Increasing NaCl concentration caused vesicle fusion, as indicated by the increase in vesicle mean size and a broader particle size distribution (Table 1). The changes in the physical structure of the lipid vesicles may, in part, explain the decrease in the transfer rate of AOPA from ALB or ALB e . It has also been shown that fluorescence polarization of DPH incorporated into SUVs increases as ionic strength increases, indicating an increase in phospholipid acyl chain order (28). The possibility that the binding of AOPA to ALB or ALB e is affected by increasing NaCl concentration is unlikely, since the relative maximum fluorescence intensity observed at each NaCl concentration was similar to the maximum fluorescence intensity at physiological ionic strength.
The hallmark difference between transfer occurring through collision and that occurring through aqueous diffusion is the effect of acceptor membrane properties and/or the ligand-protein surface charge on the transfer rate (12,29). In order to examine the effect of lipid vesicle surface charge on the transfer rates, we used three lipid vesicle preparations with similar properties and different net surface charge. Zeta potential measurements (Table 1) confirmed the proper vesicle charge. Data in Figure 4 show that the transfer of AOPA bound to ALB (pI=4.8-5.0) was significantly faster to positive LUVs than to negative LUVs. In contrast, the transfer rate of AOPA bound to ALB e (pI=7.0-7.5) was significantly faster to negative LUVs than to positive LUVs. The observed effect of the surface charge may reflect an electrostatic interaction between charged phospholipid head groups in the membrane and opposite charges on the AOPA-protein complex. Another possible explanation is that the presence of charged head groups on the membrane phospholipids might cause secondary changes in the membrane bilayer structure, e.g., in phospholipid head group packing order in the membrane. These secondary changes could increase or decrease the accessibility of the binding protein to the lipophilic site of the membrane.
The observed differences in AOPA transfer rates to charged LUVs are unlikely due to differences in vesicle size and / or lamellarity since the particle size measurement (Table 1) and electron microscopic studies showed that the three vesicle preparations were similar.
In an attempt to elucidate the role of ALB in the transfer process, we measured the AOPA transfer rate in the presence of two low ALB concentrations at a fixed AOPA to ALB molar ratio. Under this condition, the free AOPA concentration is predicted to be unchanged for the two ALB solutions (19). If the transfer process occurred solely from the unbound AOPA fraction and independently of ALB concentration, we would not expect to see a difference in the measured transfer rates. Data in Figure 5 show that the transfer rate increased significantly with increasing ALB concentration. This result suggests that transfer of AOPA occurs from both the free and the protein-bound pools.
In summary, we provide evidence for a direct transfer mechanism of FFA bound to bovine serum albumin to lipid membranes. In contrast to the known aqueous diffusion mechanism observed for the transfer of lipophilic compounds such as cholesterol and phospholipids, bovine serum albumin functions not only as an extracellular buffer for FFA levels but also as a delivery vehicle via its direct interaction with membranes. This provides a mechanism through which efficient targeting of tightly bound substrates to cell membranes is preserved. Our results support other reports showing the effect of albumin binding on cellular uptake (30,31), as well as our previous report that electrostatic interactions between the albumin-ligand complex and hepatocyte or myocyte surfaces modulate the kinetics of free fatty acid uptake (2,20). Furthermore, our previous work showed that at a constant concentration of unbound palmitate, there is a positive relationship between hepatocyte [3H]-palmitate uptake and total albumin concentration (32). Thus, the unique ability of serum albumin to interact with the cell surface is particularly important during cell differentiation, when a change in plasma membrane composition occurs which may affect the targeting of extracellular FFA by influencing the rate of FFA transfer from albumin. Overall, the results presented in this study set the framework for future work directed at elucidating the specific structural features of drug delivery vehicles involved in targeting cells.
Figure 1. Binding of AOPA to proteins. A: Increase in fluorescence intensity of 1 μM ALB () or ALB e (□) when titrated with increasing concentrations of AOPA. Fluorescence intensity of AOPA alone in PBS is also shown (▲). B: Binding data of AOPA to ALB or ALB e were analyzed as described in Materials and Methods. Po is the total protein concentration, Ro is the total AOPA concentration, and is the fraction of free binding sites on the protein molecule. The apparent K d values for ALB and ALB e were 0.024 and 0.016 μM, respectively.
Figure 2. Effect of acceptor membrane concentration on AOPA transfer from binding proteins. The transfer of 0.1 μM AOPA from 1.0 μM ALB () or ALB e (□) to neutral acceptor vesicles. Transfer was monitored at 37 °C. The estimated K off (Y-intercept at zero lipid vesicle concentration) for ALB and ALB e was 0.013 ± 0.001 and 0.009 ± 0.0003 s -1 , respectively. Data are mean ± SEM, n = 4.
Figure 4. Effect of vesicle surface charge on AOPA transfer. Transfer of 0.1 μM AOPA from 1.0 μM ALB or ALB e to 100 μM negative LUVs (A) or to 100 μM positive LUVs (B). Rates were 0.019 ± 0.001 s -1 and 0.026 ± 0.0015 s -1 from ALB and ALB e , respectively, to negative LUVs (A), and 0.026 ± 0.0015 s -1 and 0.014 ± 0.0005 s -1 from ALB and ALB e , respectively, to positive LUVs (B). Data are mean ± SEM, n = 4 to 12.
Table 1. Physical properties of lipid vesicle preparations.
Table 2. Relative partitioning of AOPA between protein and lipid vesicles.
Effect of Immunophilin Inhibitors on Cochlear Fibroblasts and Spiral Ganglion Cells
Introduction: Loss of hair cells and degeneration of spiral ganglion neurons (SGN) lead to severe hearing loss or deafness. The successful use of a cochlear implant (CI) depends among other factors on the number of surviving SGN. Postoperative formation of fibrous tissue around the electrode array causes an increase in electrical impedances at the stimulating contacts. The use of immunophilin inhibitors may reduce the inflammatory processes without suppressing the immune response. Here, we report on in vitro experiments with different concentrations of immunophilin inhibitors MM284 and compound V20 regarding a possible application of these substances in the inner ear. Methods: Standard cell lines (NIH/3T3 fibroblasts), freshly isolated SGN, and fibroblasts from neonatal rat cochleae (p3–5) were incubated with different concentrations of immunophilin inhibitors for 48 h. Metabolic activity of fibroblasts was investigated by MTT assay and cell survival by counting of immunochemically stained neurons and compared to controls. Results: MM284 did not affect SGN numbers and neurite growth at concentrations of 4 × 10−5 mol/L and below, whereas V20 had no effect at 8 × 10−6 mol/L and below. Metabolic activity of fibroblasts was unchanged at these concentrations. Conclusion: Especially MM284 might be considered as a possible candidate for application within the cochlea.
Introduction
Immunophilins are a class of substances comprising, among others, cyclophilins (Cyps) and FK506-binding proteins (FKBP) [Barik, 2006]. Cyps play an important role in antiviral activity, cell regeneration, inflammation, and signal transduction [Wang et al., 2011; Flisiak and Parfieniuk-Kowerda, 2012]. The most abundant member of this family is the cytosolic cyclophilin A (CypA), which represents 0.4% of total cellular proteins and can be found in the human body at a concentration of 1 μg/mg [Daum et al., 2009; Flisiak and Parfieniuk-Kowerda, 2012]. Intracellular CypA is involved in cell signaling, calcium homoeostasis, and transport mechanisms, whereas it is secreted into the extracellular space by neurons [Fauré et al., 2006], inflammatory cells, and upon cell death [Heinzmann et al., 2015]. Extracellular CypA shows proinflammatory cytokine-like behavior, is a potent chemoattractant for leukocytes, and elicits inflammatory responses [Pasetto et al., 2017]. Cyps are also cytosolic receptors for the immunosuppressive drug cyclosporine A (CsA) [Edlich et al., 2006]. CsA is a small peptide that is inactive in acute inflammation but possesses a strong immunosuppressive action. Cyclophilins mediate the action of CsA by forming drug-dependent complexes [Liu et al., 1991]. The exclusive binding of CsA to the active site of CypA is the reason for many physiological effects such as mediation of immunosuppression, inhibition of the protein phosphatase activity of calcineurin, and thereby prevention of cytokine gene transcription [Edlich et al., 2006; Daum et al., 2009]. Human Cyp-CsA complexes prevent the transcription of genes involved in T-cell activation. This correlates with the specific block of the cellular immune response. CsA is used clinically as a potent immunosuppressant in the prevention of allograft rejection [Hacker and Fischer, 1993].
Cyp-CsA and FKBP-FK506 complexes bind to calcineurin [Barik, 2006]. Activation of calcineurin contributes to noise-induced hearing loss [Minami et al., 2004]. Application of CsA and FK506 was shown to decrease the threshold shift after acoustic injury [Uemaetomari et al., 2005]. At least for FK506 this effect is a combination of inhibition of calcineurin and an additional reduction in formation of reactive oxygen species [He et al., 2021]. Furthermore, FKBP12 is abundant throughout the cochlea and the dorsal cochlear nucleus [Zajic et al., 2001]. Cycloheximide (CHX) is an additional potent inhibitor of FKBP12, which additionally has neuroregenerative properties [Christner et al., 1999].
One nonimmunosuppressive CsA derivate is the immunophilin inhibitor MM284. This compound can only interact with cyclophilins extracellularly. Therefore, it reduces the recruitment of T cells and macrophages and leads to a reduction in inflammatory processes without affecting the immune system [Hacker and Fischer, 1993;Heinzmann et al., 2015].
A second immunophilin inhibitor is V20, a conjugate of CHX and CsA linked via 2,2′-(ethylenedioxy)-diethylamine. V20 is an active substance which, in complex with Cyps, shows reduced calcineurin inhibition and is therefore potentially nonimmunosuppressive. Due to its composition, this substance could potentially inhibit both cyclophilins and FKBPs.
Sensorineural hearing loss is accompanied by loss of spiral ganglion neurons (SGN). In order to ensure the best possible care for these patients, the remaining SGN can be electrically stimulated by a cochlear implant (CI). For this, a large number of vital SGN and a close nerve-electrode contact are necessary. A close nerve-electrode contact results in lower thresholds [Telmesani and Said, 2015] and less spread of excitation [Yang et al., 2020]. In addition, after implantation of the electrode into the cochlea, connective tissue forms around the electrode array, which impairs signal transmission to the nerve cells of the auditory nerve. Inflammation is one factor in the formation of fibrous tissue [Velnar et al., 2009; Fernández-Klett and Priller, 2014]. Among current approaches for a reduction of fibrous tissue growth after cochlear implantation are microstructured surfaces [Reich et al., 2008], the use of metal ions [Paasche et al., 2011], and the application of dexamethasone by mini-osmotic pumps [Vivero et al., 2008] or by elution from the electrode [Wilk et al., 2016]. Despite all efforts, only application of steroids during electrode insertion [Prenzler et al., 2020] and incorporation of dexamethasone in the silicone body of the electrode array [Briggs et al., 2020] are used clinically. The two nonimmunosuppressive immunophilin inhibitors MM284 and V20 might also provide a means to reduce inflammatory processes after cochlear implantation. This could prevent the formation of connective tissue around the CI electrode and provide an improved nerve-electrode contact. Therefore, the aim of the current study was to investigate in vitro possible toxic effects of both substances on SGN and fibroblasts and thereby to evaluate their safety for an intracochlear application in conjunction with cochlear implantation.
Statement of Ethics
The experiments were conducted in accordance with the German "Law on Protecting Animals" ( §4) and the European Directive 2010/63/EU for protection of animals used for experimental purpose and registered (no. 2016/118) with the local authorities (Lower Saxony State Office for Consumer Protection and Food Safety [LAVES], Oldenburg, Germany). Sprague-Dawley rats of different sexes (postnatal days 3-5) were used for the experiments. All rats had free access to water and food and were kept at 22 ± 2°C under 14 h/10 h light/dark cycle.
For the experiments, NIH/3T3 and primary fibroblasts were subcultivated at 80% confluence (passage 3-5) and seeded in 96-multiwell culture plates (TPP) at a density of 8 × 10 4 cells/well in 50 μL supplemented fibroblast medium. The outermost 36 wells of a plate were filled with Ca 2+ /Mg 2+ -free Hank's balanced salt solution with 0.35 g/L NaHCO 3 but without phenol red (HBSS; Biochrom). Wells B2-G2 and B11-G11 contained complemented DMEM (blank) and untreated cells as control. The residual wells (B3-G10) were used for the substances or control series with the same concentrations of the solvent DMSO. The substances or DMSO solution were added at the time of plating (NIH/3T3) or after 24 h (primary cells). After this time, the medium was removed before adding 50 μL of fresh medium and 50 μL diluted substances or DMSO solution. To control wells 50 μL of fibroblast medium was added. Each experiment was repeated 6 times with n = 3 per plate (only primary fibroblasts and V20: N = 4). Cells were incubated for 48 h.
MTT Assay
To measure metabolic cell activity, the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) test was performed according to ISO 10993-5:2009, Appendix C (DIN EN ISO 10993-5:2009). On the day of the test, a solution of 1 mg/mL MTT (AppliChem) in DMEM without phenol red was prepared, sterile filtered through a PES membrane, pore size 0.22 μm (#SLGP033RS; EMD Millipore, Billerica, MA, USA), and stored until use at room temperature under exclusion of light. After incubation of the cells with the diluted substances, the supernatant was removed and 50 μL of the MTT solution was added per well (final concentration: 50 μg MTT/well). The cells were incubated for 2 h at 37°C, 5% CO 2 . After decanting the MTT solution, addition of 100 μL isopropanol (Sigma-Aldrich, St. Louis, MO, USA) per well dissolved the formazan crystals completely while shaking. Absorption was measured at 570 nm using a Synergy H1 Hybrid Reader (BioTek, Bad Friedrichshall, Germany). For each plate, the blank values were averaged and the mean value was subtracted from the measured values of all other wells. The results for control wells of untreated cells were also averaged and taken as 100% cell activity. Values for each tested concentration were averaged and normalized to the untreated controls of the same 96-well plate before results from different plates were averaged.
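The blank subtraction and normalization described above can be expressed compactly; the following Python sketch assumes one A570 reading per well and uses hypothetical well IDs and absorbance values, which are not taken from the study.

```python
import numpy as np

def normalize_mtt(absorbances, blank_wells, control_wells):
    """Blank-correct A570 readings and express each well as % of the untreated controls.

    absorbances : dict mapping well ID -> A570 reading
    blank_wells, control_wells : lists of well IDs on the same plate
    """
    blank = np.mean([absorbances[w] for w in blank_wells])
    corrected = {w: a - blank for w, a in absorbances.items()}
    control = np.mean([corrected[w] for w in control_wells])
    return {w: 100.0 * a / control for w, a in corrected.items()}

# Hypothetical plate fragment: B2 = medium blank, B11 = untreated control, B3/B4 = treated wells
plate = {"B2": 0.05, "B3": 0.62, "B4": 0.33, "B11": 0.68}
viability = normalize_mtt(plate, blank_wells=["B2"], control_wells=["B11"])
print(viability)  # e.g. B3 -> ~90 %, B4 -> ~44 % of control activity
```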
Spiral Ganglion Cell Culture
The primary SGN were isolated from postnatal Sprague-Dawley rats (postnatal days 3-5) of different sexes. Dissection of the cochleae and dissociation of the spiral ganglia were performed according to the previously described protocol [Wefstaedt et al., 2005]. Finally, the cells were seeded at a density of 1 × 10 4 cells/well in a 96-multiwell culture plate coated with poly-D/L-ornithine (0.1 mg/mL; Sigma-Aldrich) and laminin (0.01 mg/mL; natural, from mouse; Life Technologies, Carlsbad, CA, USA). The SGN were cultivated for 48 h in complemented Panserin supplemented with BDNF (final concentration: 50 ng/mL) and the different dilutions (final concentrations: 2 × 10 −4 down to 2.56 × 10 −9 mol/L) of the immunophilin inhibitors V20 and MM284, or a control series with the same concentrations of the solvent DMSO. For each concentration and the positive control (SGN with complemented Panserin supplemented with 50 ng/mL BDNF), three wells per plate were treated and every setup was repeated six times. After 48 h, the cells were fixed with a 1 + 1 mixture of acetone (J. T. Baker, Deventer, The Netherlands) and methanol (Carl Roth, Karlsruhe, Germany) for 10 min and washed 3 times with PBS.
Data Evaluation (SGN)
Surviving neurons were defined as neurofilament-positive cells exhibiting a neurite length of at least three cell soma diameters [Gillespie et al., 2001]. All surviving neurons of each well were manually counted using a transmission light microscope (Olympus CKX41, Hamburg, Germany) with a camera (Colorview III, SIS; Olympus). For neurite length measurements, the five longest neurons in each field of view (one in the center and four around the perimeter of the well) were manually traced by using the imaging software cellSens (Olympus) [Schmidt et al., 2018]. The survival rate was calculated by the number of surviving neurons with reference to BDNF-treated controls (mean number of neurons in the positive control) of the same plate and then averaged across different plates (N = 6). The same procedure was followed for evaluation of neurite length.
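A small Python sketch of the survival-rate normalization described above, assuming raw neuron counts per well and the BDNF-treated wells of the same plate as the reference; all numbers are hypothetical and do not correspond to the study's data.

```python
import numpy as np

def survival_rate(neuron_counts, bdnf_control_counts):
    """Surviving-neuron count per well expressed relative to the mean of the
    BDNF-treated positive controls on the same plate (in %)."""
    control_mean = np.mean(bdnf_control_counts)
    return 100.0 * np.asarray(neuron_counts, dtype=float) / control_mean

# Hypothetical counts from three treated wells vs. three BDNF control wells of one plate
print(survival_rate([42, 38, 45], [50, 47, 53]))  # -> about [84, 76, 90] %
```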
Statistical Analysis
Statistical analysis was performed using GraphPad Prism version 5.02 (GraphPad, La Jolla, CA, USA). As tests for Gaussian distribution are not very meaningful for small N, nonparametric tests were used. To account for matched observations, the Friedman test followed by Dunn's posttest was used to compare results with different concentrations of a treatment group to untreated controls. To compare different treatment groups at a specific concentration, the Kruskal-Wallis test followed by Dunn's posttest was applied. p values of less than 0.05 were considered to be statistically significant.
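For illustration, the nonparametric comparisons described above could be run as follows in Python with SciPy and the third-party scikit-posthocs package (assumed to be available for Dunn's posttest); the data are invented and do not reproduce the study's results.

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # third-party package providing Dunn's posttest

# Hypothetical viability values (% of control): rows = matched plates, columns = concentrations
matched = np.array([
    [98, 95, 60, 20],
    [102, 97, 55, 25],
    [99, 93, 58, 18],
    [101, 96, 62, 22],
])

# Friedman test across matched concentrations measured on the same plates
chi2, p_friedman = stats.friedmanchisquare(*matched.T)

# Kruskal-Wallis test for independent treatment groups at one concentration
group_a = [98, 102, 99, 101]
group_b = [60, 55, 58, 62]
group_c = [20, 25, 18, 22]
h, p_kw = stats.kruskal(group_a, group_b, group_c)

# Dunn's posthoc pairwise comparisons (Bonferroni-adjusted)
posthoc = sp.posthoc_dunn([group_a, group_b, group_c], p_adjust="bonferroni")

print(p_friedman, p_kw)
print(posthoc)
```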
Effects of Immunophilin Inhibitors on Fibroblasts
Addition of DMSO to the fibroblast cultures at a concentration of 2 × 10 −4 mol/L resulted in a reduced viability (29% for NIH/3T3 and 53% for cochlear fibroblasts). At all lower concentrations, viability was above 75% of untreated controls and the solvent did not influence cell viability in a statistically relevant fashion (Fig. 1).
After plating of fibroblasts, treatment with V20 at concentrations of 2 × 10 −4 mol/L and 4 × 10 −5 mol/L resulted in no viability of NIH/3T3 fibroblasts (p < 0.05; Fig. 1a), whereas about 20% of cochlear fibroblasts survived at 4 × 10 −5 mol/L (Fig. 1b). In contrast, survival of both fibroblast types was nearly unaffected at concentrations of 8 × 10 −6 mol/L and below compared to untreated controls (p > 0.05 for all concentrations). After treatment with MM284, the metabolic activity of fibroblasts was not different to untreated controls for concentrations of 4 × 10 −5 mol/L and lower for both fibroblasts (Fig. 1). At a MM284 concentration of 2 × 10 −4 mol/L, metabolic activity was significantly (p < 0.05) reduced to about 50% of controls.
For both types of fibroblasts, cell viability was significantly larger for MM284 compared to V20 at concentrations of 2 × 10 −4 mol/L and 4 × 10 −5 mol/L. Group comparison results are summarized in Table 1.
Effects of Immunophilin Inhibitors on SGN
Incubation of SGN with different concentrations of DMSO resulted in no cell survival at a concentration of 2 × 10 −4 mol/L, whereas at lower concentrations no statistically significant differences to untreated controls were detected (Fig. 2a). DMSO did not have an effect on neurite length of surviving SGN (Fig. 2b).
At concentrations of 2 × 10 −4 mol/L, no surviving SGN were detected for both substances (Fig. 2a). The same was observed for V20 at a concentration of 4 × 10 −5 mol/L (Fig. 3a), whereas survival of SGN increased to about 70% of controls after MM284 treatment at this concentration (Fig. 3b). For both substances and compared to controls, the survival of SGN was mostly between 80% and 110% for concentrations of 8 × 10 −6 mol/L and below. No significant differences were detected. The highest survival rate (124%) was achieved with MM284 at 1.6 × 10 −6 mol/L. Comparing results from different treatment groups, cell survival was significantly reduced with V20 compared to MM284 and DMSO at a concentration of 4 × 10 −5 mol/L ( Table 1).
The neurite length of the surviving SGN for BDNF-treated controls was on average 278 μm (SD: ±37 μm). When surviving cells were detected in the cultures after treatment with immunophilin inhibitors, neurite length was unaffected by the applied substances (Fig. 2b). In control and treated cultures, surviving neurons were almost exclusively of monopolar morphology.
Discussion
When a foreign body or implant is brought into contact with body tissue, it is exposed to a large number of immune defense reactions. This can even lead to rejection of the implant [Stolle et al., 2014]. Often involved in these processes and their regulation are CHX and cyclosporines [Flisiak and Parfieniuk-Kowerda, 2012]. In addition to the inhibition of protein synthesis by CHX, the CsA derivatives have immunomodulatory properties due to the inhibition of calcineurin via the CsA-CypA complex [Hacker and Fischer, 1993]. Their derivatives, the immunophilin inhibitors, can inhibit Cyps and are therefore potential nonimmunosuppressive candidates [Heinzmann et al., 2015].
The current study investigated the immunophilin inhibitors MM284 and V20 with regard to a possible application in the inner ear as these should reduce the formation of connective tissue but not be toxic. The focus of this work was therefore put on their cytocompatibility for cells from the inner ear. Damage from the surgical insertion of the electrode can be categorized into immediate intracochlear changes and delayed components Fayad et al., 2009]. Immediate changes arise from trauma at the site of the cochleostomy or along the path of the electrode trajectory . Delayed changes arise from the host response to the electrode, which involves a tissue reaction consisting of inflammation, fibrosis, and possible new bone formation Somdas et al., 2007;Fayad et al., 2009]. As fibroblasts are a major part of the tissue formation around the CI after implantation , the influence of both substances on fibroblasts (NIH/3T3 and primary cochlear fibroblasts) was investigated. The treatment of both types of fibroblasts with MM284 and V20 demonstrated unaffected metabolic cell activity at concentrations of 8*10 −6 mol/L and below, with only V20 having affected cell viability at 4 × 10 −5 mol/L. At 2 × 10 −4 mol/L, reduced or nearly no cell viability was detected for both substances. This could potentially indicate toxic effects, but at a concentration of 2 × 10 −4 mol/L also addition of DMSO without the test substances resulted in reduced cell viability. The resulting amount of DMSO at this concentration was 0.353 mol/L. It is described in the literature that DMSO shows little toxic effects at a concentration of 0.4 mol/L and is cell toxic from a concentration of 0.7 mol/L [Miller et al., 2015;Moskot et al., 2019]. This implies that at 2 × 10 −4 mol/L, the DMSO concentration should be in a critical range, and according to our measured dose response curve, the toxic effects can most likely be attributed to the amount of DMSO in the cultures. The resulting amount of DMSO at 4 × 10 −5 mol/L was 0.0564 mol/L. According to the literature, cell viability should nearly be unaffected at this concentration [Miller et al., 2015;Moskot et al., 2019]. This was confirmed by the presented results in the current study also for the primary cells from the inner ear. MM284 can only interact with Cyps extracellularly, which is of particular importance because the application of MM284 leads to a reduction of inflammatory processes without affecting the immune system. Using DMSO to dissolve the drug could result in the drug not acting exclusively in the extracellular space due to the increase in cell permeability [He et al., 2012]. The use of DMSO was recommended by the group of Prof. Fischer, who provided the substances. As dose response curves for DMSO and MM284 were comparable regarding cell viability and no differences between results with both drugs were detected, the metabolic cell activity of cochlear fibroblasts at a concentration of 2 × 10 −4 mol/L appears still to be influenced by the solvent. Therefore, additional toxic effects of MM284 due to not acting exclusively extracellular are very unlikely. The higher cell viability with MM284 compared to V20 at 4 × 10 −5 mol/L suggests that the results with V20 at this concentration are not purely caused by DMSO, but there is some toxic effect of V20.
Here, it cannot be excluded that this effect is caused due to some interaction between DMSO and V20. The lack of significant differences between DMSO and V20 at 4 × 10 −5 mol/L is probably due to the lower number of repetitions of the tests (N = 4). Even though it is speculation, a possible trigger for the different outcome of cochlear fibroblast at 4 × 10 −5 mol/L could be lower doubling rates of these cells.
As any substance applied to the inner ear should not have adverse effects on SGN, both substances were also tested for their influence on survival and outgrowth of SGN. In these experiments, reduced cell survival was found at concentrations of 2 × 10 −4 mol/L (MM284 and V20) and 4 × 10 −5 mol/L (V20), similar to the fibroblasts, with the dose response curves for DMSO alone and MM284 being indistinguishable. The toxic threshold of DMSO for hair cells is 0.176 mol/L [Qi et al., 2008], and values for SGN are not known. Therefore, we speculate that effects at 2 × 10 −4 mol/L (DMSO concentration: 0.282 mol/L) can also be attributed to DMSO. The survival of SGN at 4 × 10 −5 mol/L after addition of MM284 and/or DMSO indicates that the complete loss of neurons at this concentration after V20 treatment might be associated with the action of V20. The neuronal survival at concentrations of 8 × 10 −6 mol/L and below supports the nontoxic effects of the substances at these concentrations. Neurons in SGN cultures of newborn mice can be of monopolar, bipolar, pseudomonopolar, or multipolar morphology [Whitlon et al., 2007]. As no changes in neuronal morphology were observed in surviving neurons, we speculate that MM284 has no adverse effects on the neurons in our study.
Inflammation as a factor for fibroblast proliferation often becomes apparent after insertion of a CI electrode. Among other reasons, it can be caused by mechanical tissue damage like the insertion trauma and lead to chronic inflammatory reaction, fibrosis, or new bone formation with a mostly disadvantageous growth of fibrous tissue on the implant surface [Wrzeszcz et al., 2014]. Therefore, the reduction in inflammation should also lead to a reduced formation of connective tissue around the electrode carrier. The current paper investigated the cytocompatibility of immunophilin inhibitors for a possible application in the inner ear and shall provide the basis for later in vivo experiments regarding the reduction of fibrous tissue formation after cochlear implantation.
In conclusion, MM284 can be considered as noncytotoxic for SGN and fibroblasts from the inner ear at concentrations of 4 × 10 −5 mol/L and below and seems therefore to be a suitable candidate for an intracochlear application in vivo to investigate a possible reduction of connective tissue around the electrode carrier. At lower concentrations, the use of V20 might also be possible, but as not much information was available on this substance, we suggest gathering more information before investigating it in vivo.
A systematic review exploring the patient decision‐making factors and attitudes towards pre‐implantation genetic testing for aneuploidy and gender selection
Pre‐implantation genetic testing for aneuploidy (PGT‐A) is in high demand worldwide, with ongoing debate among medical societies as to which patient groups it should be offered. The psychological aspects for patients regarding its use, lag behind the genomic technological advances, leaving couples with limited decision‐making support. The development of this technology also leads to the possibility for its utilization in gender selection. Despite the controversy surrounding these issues, very few studies have investigated the psychological aspects of patients using PGT‐A.
| INTRODUCTION
In vitro fertilization (IVF) is often a last resort for couples who do not achieve pregnancy, usually after 2 years of trying to conceive. Fertility treatment is stressful and is known to have a significant psychological impact on couples. 1 Pre-implantation genetic testing (PGT) for aneuploidy (PGT-A) has been available for couples undergoing IVF for nearly 20 years. 2 PGT-A involves retrieving five to ten trophectoderm cells from a blastocyst and screening for the presence of a normal number of chromosomes, with only euploid embryos selected for transfer. 3 Requests for PGT-A are now made worldwide and have presented both patients and clinicians with the decision of when to use PGT-A as an adjuvant treatment to standard IVF therapy. 4 PGT-A is now the most commonly used alternative to morphological assessment of the embryo when deciding which embryo should be selected for transfer. 5 The rationale for using PGT-A is the well-established fact that the rate of aneuploid embryos increases significantly with age. 6,7 One large study screened over 15 000 embryos and found the aneuploid embryo rate was approximately 25% in young women (30 years of age or younger), 58.2% at age 40, 75.1% at 42 and 88.2% at age 44. 8 The consequences of transferring aneuploid embryos include failed implantation, miscarriage, termination of pregnancy or the birth of a chromosomally abnormal child if prenatal screening did not identify the pregnancy to be high risk. 7 Patient education surrounding PGT-A varies between different countries and fertility clinics regarding what information patients are given, who is involved in the education process and how patient understanding is gauged. 9 Potential advantages are that PGT-A can improve implantation rates per embryo transfer, reduce miscarriage risk, minimise the risk of a resulting pregnancy with an aneuploid fetus and minimise the time to pregnancy. 10,11 PGT-A could also reduce multiple pregnancy rates by elective single embryo transfer of euploid embryos without affecting cumulative pregnancy and live birth rates compared with double embryo transfer. 11,12 However, patients must also be made aware that PGT-A cannot affect the genetic make-up of the embryo 13 , does not increase the live birth rates per egg retrieval 13 , is an invasive procedure with a small risk of damage to the embryos (<1%) 14 and may not reflect the genetic status of the whole embryo due to mosaicism. 15 As PGT-A is still not performed routinely in the majority of fertility clinics worldwide, patients should be informed that PGT-A may not be equally effective in all clinics, and, therefore, they should be made aware of the individual experiences of clinics using PGT-A prior to embarking on treatment.
The rapid advancements of PGT-A have raised numerous concerns regarding the ethical acceptability of some of its potential applications. Nevertheless, PGT-A has evolved without regulation in many of the countries in which it is used, and it is in widespread clinical use and being increasingly utilized worldwide. 16 Indeed, the American Society for Reproductive Medicine (ASRM) advises that the limited number of studies and evidence currently available leaves the value of PGT-A as a universal screening tool for IVF undetermined. 17 The Human Fertilization and Embryology Authority (HFEA) currently states that there is a conflicting body of evidence that PGT-A is beneficial to reproductive outcome and has called for further research. 18 There is ongoing debate as to who benefits most from PGT-A and to which patient groups it should be offered.
Currently, the psychology behind why patients become aware of and decide to use PGT-A lags behind the genomic technological advances, leaving couples with limited decision-making support. 19 The development of this technology also opens up the possibility of its use for gender selection, with its availability in certain countries likely to become a point of market advantage. 20 Those in favor of PGT for gender selection argue that couples should have reproductive autonomy and privacy with their reproductive choices and that it is preferable to dispose of embryos of the undesired sex, instead of testing for gender when pregnant followed by termination of pregnancy. 21,22 Those against argue that using IVF for sex selection encourages the current sexist stereotype that male offspring is preferred, presents an unnecessary physical and emotional burden on the woman undergoing potentially unnecessary procedures and goes against the ideal scenario of parents having unconditional love for their children. 23,24 Organizations such as the American College of Obstetricians and Gynecologists (ACOG) have issued statements suggesting gender selection is an inappropriate use of medical resources and perpetuates gender bias. 25 Despite this opposition, the use of PGT for sex selection is legal in a diverse number of countries, including the USA, Mexico, Thailand and Italy. Indeed, in these countries the use of embryo testing for sex selection is on the increase, with the Society of Assisted Reproductive Technology (SART) reporting that the use of PGT for gender selection in the USA increased from 9% of PGT cycles in 2005 to 22% in 2008. 26 More recent data from the USA have shown that 57 987 IVF cycles used PGT for aneuploidy and gender selection, which represents 22% of all assisted reproductive technology (ART) cycles. 27 A survey conducted in the USA found that 72.7% of ART clinics in the USA offer gender selection to their patients. 28 Although PGT for gender selection is illegal in the UK and the vast majority of European countries, clinics in the USA have reported a significant surge in fertility patients from countries such as the UK, Europe and Australia. 29 Despite the controversy and debate surrounding sex selection, surprisingly few studies have investigated the motivations and attitudes of the patients using or potentially using this option. Insight into patient perspectives of PGT and gender selection could add to this ethical debate and generate new perspectives for couples wanting to rationalise this option.
| Aims
Over 20 years since its first use, the psychological impact of PGT for aneuploidy and gender selection remains poorly defined, with no established clinical guidelines for patient education and counseling.
This systematic review aims to synthesise and update the literature regarding patient motivations, decision-making factors, attitudes and experiences of patients using PGT for aneuploidy and sex selection screening. No review has analyzed these psychological factors surrounding PGT together, and it could, therefore, illuminate any gaps in our current knowledge and improve clinical practice.
| Search strategy
Three computerized databases (PubMed, Science Direct, SciFinder) were searched systematically using PRISMA guidelines. 30 The search terms and combinations of searches used are listed in Table 1.
| Study selection
Given that PGT is a relatively recent technology, there were no restrictions placed on publication date and inclusion. Only English language peer-reviewed studies that examined the psychosocial aspects of PGT-A, including patient motivations, attitudes, experiences and decision-making process were included. This review aimed to synthesize all available data on the topic, so no studies were excluded based on study design. Because this review focused on patients directly involved in PGT-A, the following studies were excluded: those that focused on potential use of PGT-A or couples using PGT for monogenic disorders and chromosomal structural re-arrangement.
The full inclusion/exclusion criteria can be found in Table 1. Any disagreement between the reviewers was resolved by discussion until consensus was reached.
| Search strategy and study selection
After the initial search, 374 records were screened for inclusion.
The publications screened dated from 1998 to 2018. A total of 287 studies were excluded based on title alone and 87 abstracts were retained and examined. Of these, 63 abstracts were excluded that were not relevant to the research question. One study was identified after reviewing other relevant studies. Of the 25 full text publications that were examined, 10 met the inclusion criteria. 14,25,26,32-38 Of the 15 studies that were excluded, one text was not available in English, two were duplicate studies and 12 were review or opinion articles. An overview of the search results and screening process is presented in the study flow diagram (Figure 1).
| Study characteristics
The study characteristics, sample size, methods, aims, findings and conclusions can be found in Tables 2 and 3. Individual study results are discussed in detail in this section and also summarized in Tables 2 and 3. There were considerable differences in study aim, design, quality, sample size and outcome measures of the 10 studies included for review. Four of the studies investigated patients using PGT-A and six studies examined those who used PGT for gender selection. To collect the data, five of the studies used questionnaires, 25,32-35 three studies used an existing database 36-38 and two used semi-structured interviews. 14,26 Three of the studies collected data prior to treatment and seven collected their data retrospectively. The sample size range was 21-1500 patients. Of the 10 studies, eight were from the USA, one was from Australia and one was from Lebanon. Thematic analysis was used to extract key and consistent themes that emerged. 31
| Motivation and decision-making factors for patients using PGT-A
Quinn et al (2018) performed a cross-sectional survey of 191 subjects after thorough counseling on PGT-A; 61% of patients opted for PGT-A and 39% decided not to pursue this option, and these two groups were then analyzed separately. 34 Patients planning on using PGT-A rated their main motivating factor as having a healthy baby (57%), reducing the risk of birth defects (18%), reducing the risk of miscarriage (16%) and reducing the time to pregnancy (3%). 34 Another study found that the majority of their participants who elected PGT-A did so because they did not want to terminate a pregnancy, wanting to avoid having a child with a disability, as well as wanting to avoid the disappointment of miscarriage and failed embryo implantation. 14 Katz et al (2002) reported that 96% of their respondents felt that discarding genetically abnormal embryos was significantly "less wrong" than a termination of pregnancy later in pregnancy. 33 Lamb et al (2018) reported that a significant proportion of their participants opting for PGT-A had recurrent failed IVF and wanted more explanation as to why their fertility treatment was not successful. 14 One study found that of their participants who declined PGT-A in a prior IVF cycle, 69% stated they would decline PGT-A in a repeat IVF cycle, and 31% opted for PGT-A in future IVF treatments. 32 Of the patients who accepted PGT-A treatment, 75% would accept PGT-A in a repeat IVF cycle. 14 Another study found their participants opted for PGT-A because of their age and the fact they had not been pregnant before, so as to ensure the embryo was chromosomally normal. 14 One study found that 57% of respondents decided to share the decision to use PGT-A with others; of these, 85% felt strongly supported by family and friends. However, 66% of the study's participants did not feel this social support was significant. 32 Gebhart et al (2016) reported the additional cost of PGT-A would be approximately $3500 in their clinic, and overall only 21% of patients reported this to be a significant factor in their decision-making process. 32 However, of the 40% of their cohort that declined PGT-A, 67% reported cost as an important determinant, compared to 33% of those who used PGT-A, and this difference was statistically significant. 32 Another study found that 31% of their respondents declined PGT-A primarily to reduce costs of treatment. 34 This was supported by Lamb et al (2018), who reported the financial burden of IVF with PGT-A was a significant decisional factor for all of their 37 respondents, and in general they found those who declined PGT-A did so because they were not willing to pay the additional costs. 14 However, those patients who opted for PGT-A appeared to perform their own cost-effectiveness calculations relating to payment and the potential success of IVF and thought that the cost was justified. 14 For example, one respondent stated they "were not willing to pay for IVF without knowing it was a viable embryo; the cost of PGT-A was instead of transferring all those genetically abnormal embryos". 14 A significant minority of the same study were unwilling to pay for PGT-A as they felt the cost was not justified. 14 One study found that after thorough counseling, the major reasons for declining PGT-A were the fear of having no embryos to transfer (35%), the potential of delaying time to pregnancy (9%) and PGT-A potentially damaging the embryos (7%). 34
| Attitudes of patients towards PGT-A
Gebhart et al (2016) found that 69% of their 117 respondents felt the decision to use or not use PGT-A was not difficult because they were sufficiently knowledgeable. 32 In all, 69% of their respondents had no prior knowledge of PGT-A before embarking on IVF treatment, and only 9% had moderate or advanced knowledge of PGT-A. 32 However, 93% of participants felt that after counseling on PGT-A they had sufficient knowledge to accept or decline its use at the time of their IVF cycle, with 87% identifying that PGT-A identifies aneuploid embryos and 81% reporting that PGT-A identifies "normal" embryos. 32 Quinn et al (2018) reported 58% of their respondents who decided to undergo PGT-A rated themselves more knowledgeable compared with those who did not (42%, P = .02). 34 Studies have consistently reported that the clinic provider was the source of the most information about PGT-A. 32
In the Gebhart et al (2016) study, many participants felt their partner had the most influence on their decision, followed by the clinic provider (35%). 32 Close family and friends and the internet were other significant sources of information surrounding PGT-A, with the referring obstetrician and gynecologist most frequently reported as providing the least information (3%). 32 One study revealed a lack of patient understanding regarding PGT-A. 32 Studies consistently showed that patients strongly preferred discarding genetically abnormal embryos to requiring a termination of pregnancy because of a genetic problem. 14,34 Lamb et al (2018) reported one of the significant differences they identified among patients accepting or declining PGT-A was their opinion of science, with those choosing PGT-A holding a much more optimistic view that technological advancements in science can enhance reproductive outcomes. 14 A minority of respondents expressed a wish to keep the IVF process as "natural" as possible, and some questioned the reliability of the technology, with one patient concerned that the embryos would be harmed by the PGT-A process. 14 Indeed, Gebhart et al (2016) reported 51% expressed some concern that PGT-A could harm their embryos. 32 Another study reported that potential harm from the biopsy to the embryo was the most influential factor in the decision not to go ahead with PGT-A. 34
| Motivations, decision-making factors and knowledge of PGT for gender selection
Six studies investigated patients who used PGT for sex selection: four studies found there was no statistically significant difference between couples wanting a male or a female child, 25,26,35,37 whereas two studies found that their participants had a preference for male offspring. 36,39 Studies report that the majority of their participants had two (28%-44%) or three (28%-30%) existing children of the same gender. 26,38 Studies also consistently reported that couples with existing children were using gender selection technologies to have a child of the opposite sex, thus achieving gender balance in the family. 25 One study performed structured interviews with 18 participants and reported that motivations to pursue gender selection included: age-related concerns for one of the parents (78%); a desire to limit family size (72%); the gender makeup of their own families (67%); desire for a child of a particular sex (61%); desire to pass on the family name (61%); and the desire to enhance the experience of current children (50%). Couples also consistently cited a desire for same-gendered parenting experiences, meaning a father wanting the experience of raising a son and vice versa for the mother. 26 The only study from a non-western population looked at the medical and non-medical indications for the use of PGT overall in 192 couples in its first 3 years of use in Lebanon and found that motivations for PGT use were non-medical gender selection (96.3%), known parental chromosomal aneuploidy (3.1%) and known balanced translocation (0.5%). Therefore, only 3.7% of their patients were using PGT for medical reasons. Of those using PGT for gender selection, 94.1% were for the selection of a son and 5.9% for the selection of a daughter. 38 In this Lebanese cohort, if the couple using PGT for gender selection were also infertile, then 100% of couples wanted to use PGT for selection of a son. 38 Gleicher et al (2007) also reported statistically significant gender choices dependent on a couple's ethnicity, with an obvious gender bias in favor of male offspring among Chinese, Arab and Indian couples. 36 In contrast, Caucasian and Hispanic couples appeared to prefer female offspring. 36 One study found there was almost always strong agreement between the couple regarding their choice to pursue IVF and gender selection. 26 The only study to investigate any moral issues their participants had with using gender selection technologies found the major concerns to be: the potential psychological impact on their current children (67%); the creation and destruction of potentially healthy embryos (67%); negative feelings from family members (61%); financial costs (56%); and religious concerns (50%). 26 Indeed, the highly private way in which these patients conceptualized the decision to pursue gender selection was reflected by the fact that almost all couples stated they had not discussed this option with any close family or friends. 26 These findings were supported by studies whose patients were using PGT for medical reasons. Lamb et al (2018) reported that a significant minority of their participants found that knowing the sex of the baby was a positive addition to the decision-making process, although they all denied using PGT for sex selection. 14 Gebhart et al (2016) reported that 89% of respondents did not find gender selection significant and that this did not influence their decision-making process. 32
| DISCUSSION
This systematic review provides an up-to-date analysis of the psychological aspects of PGT for aneuploidy and gender selection. It also explores the motivations, attitudes and experiences around PGT-A use and reports on the patient decision-making process. To the best of our knowledge this is the first systematic review to investigate this topic.
| Patients selecting PGT-A
Understanding factors that influence couples in the decision-making process for PGT-A treatment can help providers to integrate this information and improve patient education, counseling and the shared decision-making process. It is established that patients only recall a small volume of the information that is given by medical practitioners across different specialties, 40 despite attempts to improve retention with numerous interventions. 41,42 In addition to information recall of patients being low, the information that is retained is often inaccurate or misunderstood. 43 This is concerning, considering 75% of patients selected PGT-A to have a "healthy baby" or to reduce the risk of birth defects. PGT-A certainly has the potential to reduce aneuploid live births; however, the majority of aneuploid pregnancies will fail to implant, miscarry or are detected by prenatal screening and/or testing. Indeed, the incidence of babies born with aneuploidy in developed countries is relatively rare, and this has not been studied with regard to the use of PGT-A. 46 Of more concern, Lamb et al (2018) reported a common response was that previous parental carrier screening eliminated their risk of certain genetic conditions and, therefore, PGT-A would not provide any additional genetic information and was not required. 14 An increased aneuploidy risk is not usually "carried" by a parent, and each pregnancy carries a risk for aneuploidy mainly determined by maternal age. It must also be emphasized that PGT-A does not screen for birth defects and by no means guarantees a healthy live birth. Additionally, Gebhart et al (2016) reported 81% of their participants had PGT-A to have a "normal" embryo. Therefore, it appears that improved counseling is needed to educate patients on the differences between assessing chromosomal number, single gene disorders and chromosomal rearrangement, as well as the other multiple etiologies that lead to birth defects.
In one study, 16% of respondents selected PGT-A to reduce their miscarriage risk, and the theory behind this is justifiable, since chromosomal aneuploidy is a leading cause of early pregnancy loss. Patients who selected PGT-A also rated reducing the time to pregnancy as a significant factor. 34 There is evidence that PGT-A testing significantly improves the chances of a live birth per embryo transfer, 50 but patients need to understand the difference between this and time to live birth after starting an IVF cycle. However, a recent study did report that using PGT-A reduced the time in fertility treatment by approximately 4 months. 49 Unfortunately, while time to a live birth when utilizing PGT-A is an ideal outcome measure, this has not been assessed in a randomized controlled trial. 51
| Reasons for patients not selecting PGT-A
PGT-A is an expensive procedure and, understandably, the associated costs were a consistently raised reason for deciding not to go ahead with PGT-A. 14,32,34 However, although PGT-A is associated with cost, it is unclear whether its use offsets these additional costs by increasing pregnancy rates per embryo transfer, thus reducing the number of transfers required. A recent study investigated the cost effectiveness of PGT-A for achieving a live birth in women over 37 years of age, and concluded the use of PGT-A to be a cost-effective strategy, reducing costs of fertility treatment overall. 52 Another study by Neal et al (2018) showed that for patients with more than one embryo for transfer, IVF with PGT-A reduced healthcare costs by US$931-2411 on average, shortened treatment time by 4 months, reduced the number of embryo transfers required and reduced miscarriage rates when compared with IVF alone. 49 Across studies, a minority of participants consistently reported that they declined PGT-A due to concern about harming the embryo. 14,32,34 This concern should be acknowledged; however, patients should be reassured that data do not support that PGT-A damages blastocyst-stage embryos in experienced fertility centers. 15 Indeed, a study in which one embryo was biopsied with subsequent transfer without influence from the PGT-A result found that blastocyst biopsy did not appear to impact implantation rates. 53 For those considering its use, thorough patient education and counseling must be given, addressing the current limitations of the clinical effectiveness of PGT-A. This is particularly relevant, since most studies have reported patients who participate in PGT-A have significant misconceptions and gaps in knowledge surrounding its use.

| PGT for gender selection

Couples felt the decision to use gender selection was a personal one and not a larger societal one. 26 Much of patients' moral decision-making process involved weighing moral issues for themselves, future children and their immediate families. Larger societal implications of allowing PGT for gender selection were not apparent in their personal decision-making process. However, it should be noted that couples did consistently report moral concerns about this approach to having a child, and although the majority felt the benefits outweighed the risks and, therefore, continued with PGT for gender selection, the majority continued to raise concerns about the process. For example, a significant number of couples in various studies had experienced having a child of the opposite sex to the one they had hoped for, but did not feel that child was less loved because of this initial disappointment. Patients were also ambivalent regarding their choice to disclose to close friends and family that they had used PGT for gender selection. Disposing of unused embryos created via ART was another consistent source of concern for couples. Couples also identified numerous moral concerns regarding gender selection, including the negative impact the use of gender selection could have on their other children and on their personal relationships with close family members. 26 Couples also reported their strong beliefs surrounding reproductive liberty and privacy. 26
The issues raised confirm the importance of fertility clinics in countries offering PGT for sex selection providing extensive pre-decisional and post-treatment counseling to patients using this technology, with the existing counseling models for PGT for monogenic disorders or PGT for chromosomal structural re-arrangement being appropriate models on which to base this. 56 Patients also need to be made aware during counseling that the effect of PGT-A is highly dependent on the IVF center performing the procedure. In the wrong hands, PGT-A could be detrimental, with euploid embryos incorrectly classified as aneuploid and discarded, or an increased risk of damage to the embryo during biopsy. Patients, therefore, should be made aware of the individual experience of clinics using PGT-A prior to embarking on treatment.
| Limitations of the included studies
There were some methodological limitations identified in the studies included in the systematic review. First, the majority of studies used a relatively small sample size and were single-centered studies.
Second, the majority of the studies used existing databases or closed-ended surveys to gather their data. Although subjects in the studies using questionnaires were invited to write in additional factors that influenced their decision-making process, it is possible that respondents would not have volunteered this information in a questionnaire tool. This could mean that additional factors might have played a role and are unreported. It is also likely that some of the survey content in the studies was misunderstood by patients; for example, using PGT-A to "have a healthy baby" was commonly selected, but this cannot be guaranteed by PGT-A.
| CONCLUSION
Fertility patients are especially vulnerable to treatments and technologies that have the potential to improve their IVF outcome. Screening embryos for chromosomal aneuploidy offers many theoretical benefits. However, until randomized controlled trials show a definite positive outcome for certain populations using this technology, fertility clinics should ensure adequate counseling to allow better patient interpretation. Regarding the use of PGT for sex selection, the full range of issues raised has not yet been examined adequately enough for the numerous regulating authorities internationally to make sweeping ethical statements concerning its use. Studies consistently report that it is naïve to suggest couples use this simply to have a child of a particular sex; instead, they make decisions based on a diversity of moral values and cultural perspectives. These perspectives allow a more practically orientated discussion surrounding ethical considerations and the use of ART for gender selection. An increasing number of women are choosing to have preimplantation genetic testing of embryos globally, and patient preferences are likely to remain significant where a potential clinical balance of interests exists. There is a need to develop decision support tools for couples for the increasing genetic testing options available.
Tantalum nitride nanotube photoanodes: establishing a beneficial back-contact by lift-off and transfer to titanium nitride layer
In this work we introduce the use of TiN/Ti2N layers as a back contact for lifted-off membranes of anodic Ta3N5 nanotube layers. In photoelectrochemical H2 generation experiments under simulated AM 1.5G light, a shift of the onset potential for anodic photocurrents to lower potentials is observed, as well as a higher magnitude of the photocurrents compared with conventional Ta3N5 nanotubes (~0.5 VRHE). We ascribe this beneficial effect to the improved conductive properties of the TiNx-based back contact layer, which enables facilitated electron transport from the tantalum nitride-based material to the conductive substrate.
Introduction
Ta3N5 is considered to be one of the most important semiconductor materials for photoelectrochemical (PEC) water splitting. It has a visible-light active band gap of ~2.1 eV that straddles the water reduction and oxidation (H2/O2) potentials, and thus may yield a 15.9% theoretical maximum solar-to-hydrogen (STH) conversion efficiency [1-9]. Extensive research has focused on nanostructuring Ta3N5 photoelectrodes in the form of nanoparticles, nanorods or nanotubes [10-15]. One-dimensional (1D) nanorod/nanotube structures provide the advantage not only of a high surface area, but also of directional and controllable light and electron management compared with nanoparticles. For the synthesis of Ta3N5 nanostructures, usually Ta2O5 structures are synthesized first and then converted to Ta3N5 by a thermal annealing treatment in NH3. However, thick nanoparticle-based Ta3N5 photoelectrodes (e.g. deposited on conductive layers (FTO) by electrophoretic deposition) suffer from an inefficient electron transfer across the interface between the Ta3N5 and the conductive substrate [10,14]. To overcome this issue, a refined necking treatment (i.e. additional TaCl5 decoration and annealing) was introduced, which improves the charge transfer not only between particles but also at the particle/substrate interface [10,11].
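For orientation, the 15.9% figure quoted above follows from the standard STH efficiency relation. The sketch below assumes the commonly cited AM 1.5G-limited photocurrent of ~12.9 mA cm-2 for a 2.1 eV absorber, a value taken from the general Ta3N5 literature rather than from this work:

\eta_{STH} = \frac{j_{ph} \times 1.23\,\text{V}}{P_{in}} \approx \frac{12.9\ \text{mA cm}^{-2} \times 1.23\ \text{V}}{100\ \text{mW cm}^{-2}} \approx 15.9\%

Here j_{ph} is the maximum attainable photocurrent density, 1.23 V is the thermodynamic water-splitting potential, and P_{in} is the incident AM 1.5G power density (100 mW cm-2).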
Recently, excellent mechanical and electric contacts were reported by using a transferred particle layer on a Ti/Ta-metal contact [12,13]. More recently, a film transfer method based on Ta2O5 films on silicon was reported for Ta3N5 photoelectrodes to achieve control of the film thickness and to establish a defined back contact [14]. These data demonstrate that various metallic (Nb, Ti, Ta) and semiconductor (NbNx, CdS) layers can be used as back contacts to improve the PEC performance of Ta3N5 photoelectrodes.
For Ta3N5 photoelectrodes grown on or from Ta foils, either as 1D nanostructures such as nanorods and nanotubes or as compact films, an even higher PEC water splitting performance was reported compared with nanoparticle-based photoelectrodes. Also for these structures, back contacts are a crucial factor for the quality of the electrode-substrate connection [7,15-20]. Previous work has shown that the formation of Ta-subnitride layers (Ta2N, TaN) underneath anodic Ta3N5 nanotubes can enhance their efficiency due to a higher conductivity than Ta3N5, and therefore improve the charge transfer to the back contact [7,15]. However, to build photoanodes based on nanotubular Ta3N5 and explore different back contact layers, the formation of free-standing membranes is needed. Similarly to the procedure for obtaining TiO2 nanotube membranes, under optimized conditions Ta2O5 nanotube membranes can be obtained, which can then be converted to Ta3N5. The methods used for obtaining membranes include either voltage pulses or an annealing-anodization-etching sequence [21] to separate the layer from the Ta metal substrate. In the present work, we use the latter: after nanotube growth, the layers are crystallized (air annealing) and anodized again to grow an amorphous nanotubular oxide layer underneath, which can then be selectively dissolved in an aqueous HF solution. This procedure leads to Ta3N5 nanotube membranes and thus enables transferring them to desired conductive layers as back contacts for 1D Ta3N5 nanotubular structures. Ideal ohmic back contacts should be highly conductive and have a work function corresponding to the Fermi level of the semiconductor under flat-band conditions.
For this purpose, in the current work we use TiN/Ti2N layers as a back contact. They seem ideal as they show a virtually metallic behavior and provide an excellent match of the work functions (WF(TiNx) ≈ 3.5-4.4 eV) [22]. Here we show that, indeed, by using this back contact the anodic photoresponse under simulated AM 1.5G light shifts to significantly lower onset potentials. We ascribe this to the improved charge transfer properties of the back contact layer, which enables facilitated electron transport to the substrate.
Experimental
The Ta2O5 NTs were prepared by anodizing Ta foil (99.9%, 0.1 mm, Advent) in a two-electrode electrochemical cell with a Pt counter electrode. The anodization experiments were performed at 60 V (with a maximum current density set at 0.1 mA cm-2) in a sulfuric acid (H2SO4, 98%) electrolyte containing 0.8 wt.% NH4F and 13.6 vol% DI water. The anodization times were 5, 10 and 15 min, respectively. The samples were then immersed in ethanol for 5 min and dried in N2. For producing Ta2O5 nanotube membranes, the as-prepared Ta2O5 nanotube layers were annealed in air at 450 °C for 1 h, followed by anodizing at 80 V in the same fresh electrolyte. The layers were then detached from the Ta substrate by immersion in an aqueous 5% HF solution for 30 min at room temperature. For the preparation of membrane photoelectrodes, the Ta2O5 NT membranes were connected by doctor blading to a Ti nanoxide paste (Solaronix SA) layer on Ti foils, followed by annealing in a NH3 atmosphere at 950 °C to convert them to Ta3N5/TiNx (TiNx from the Ti nanoxide). The temperature was ramped up with a heating rate of 10 °C min-1, kept at the desired temperature for 1 h, and finally the furnace was cooled down to room temperature.
The photoelectrochemical experiments were carried out under simulated AM 1.5G (100 mW cm-2) illumination provided by a solar simulator (300 W Xe with optical filter, Solarlight) at room temperature. A 1 M KOH aqueous solution was used as the electrolyte. Prior to the measurements, the Ta3N5 layers were coated with a Co(OH)x co-catalyst. A field-emission scanning electron microscope (Hitachi FE-SEM S4800, Japan) was used for the morphological characterization of the electrodes. X-ray diffraction (X'pert Philips MPD with a Panalytical X'celerator detector, Germany) was carried out using graphite-monochromatized Cu Kα radiation (wavelength 0.154056 nm).
Chemical characterization was carried out by X-ray photoelectron spectroscopy (PHI 5600 spectrometer, USA) using monochromatized Al Kα radiation; peaks were calibrated to the C 1s signal at 284.8 eV.
Results and discussion
Ta2O5 nanotube (NT) membranes were grown by electrochemical anodization of Ta and further processed to membranes as described in the experimental part. The resulting NT layers, open at both ends, consist of aligned nanotubes with an individual diameter of ~50 nm and a length of 10-12 μm (Fig. 1a-c). The thickness of such Ta2O5 NT membranes can be adjusted between 10 and 20 μm by increasing the anodization time from 5 to 15 min (~15 μm and ~20 μm for 10 min and 15 min, respectively). Nevertheless, membranes with a thickness lower than 7 μm cannot be separated from the Ta substrate without breakage. The oxide membranes were then connected by doctor blading to a TiO2 nanoparticle layer (~10 μm) on a Ti foil (Ta2O5/TiO2, Fig. 1d-f); in such electrodes, ~10 nm diameter TiO2 nanoparticles can be observed underneath the NT membrane layer (Fig. 1d,e) and in between the nanotubes (Fig. 1f). The Ta2O5/TiO2 structure is then subjected to nitridation at 950 °C (optimized conditions) and a Ta3N5/TiNx electrode is obtained (Fig. 1g-i).
The nanotube diameter decreases to ~40 nm and the layer thickness to ~8 μm, due to the volume contraction when converting Ta2O5 to Ta3N5. Additionally, well-connected nano-sized porous TiNx particles are observed underneath the NT membrane (Fig. 1h,i). The thickness of the TiNx layer is about 10 μm (inset Fig. 1i). The TiNx nanoparticles and the Ti foil act as a back contact and a conductive substrate for the Ta3N5 membrane, respectively.
To determine the phase and chemical composition of the as-formed and nitrided nanotubes and membranes, XRD and XPS investigations were performed. Fig. 2a shows the XRD patterns of Ta3N5 NTs on Ta foil (Ta3N5 NT) and of the membrane with the TiNx back contact layer (Ta3N5/TiNx). Both patterns show monoclinic Ta3N5 phases, while for the Ta3N5/TiNx electrode peaks corresponding to TiN and Ti2N are also observed. For the Ta3N5 NT, in addition, Ta4N peaks at 41° and 42° can be identified, which are generally attributed to over-nitridation [7]. XPS measurements of the Ta2O5/Ta3N5 nanotubes and membranes are shown in Fig. 2b-f. The Ta 4f7/2 peak is shifted from 27 eV for Ta2O5 to 25 eV for Ta3N5 (Fig. 2b), and for the nitrided nanotubes on both Ta and Ti foil, clear N 1s peaks at 397 eV are observed (Fig. 2c). In addition, for the TiNx back contact layer, the Ti 2p and N 1s peaks (Fig. 2d,f) confirm the formation of TiNx. The photocurrent onset potential of the Ta3N5/TiNx membrane electrodes is markedly lower than that reported for conventional Ta3N5 photoanodes (~0.5 VRHE [7,15,17,18]), which indicates a preferable band alignment in the electrode to reduce the onset for PEC oxygen evolution from water. Additionally, we investigated the influence of the nanotube membrane thickness on the photoresponse of the Ta3N5 electrodes as shown in Fig. 3c. The higher thicknesses of the NT membranes, i.e. ~15 μm and ~20 μm, lead to a lower photocurrent due to an increased recombination of charge carriers. However, the ~15 μm NT membrane (anodization at 60 V for 10 min) still shows a low onset potential of 0 VRHE.
The electrical properties of the nanotubes and the membrane were evaluated by two-point conductivity measurements, which show a roughly five-fold decrease in the resistance of the layers and an increased electric conductivity of the membrane on the TiNx back contact layer (Fig. 3d). Please note that TiNx shows virtually metallic conductivity compared with the TaNx phase (inset of Fig. 3d). Moreover, electrochemical impedance spectroscopy measurements confirm a lower charge transfer resistance of the membrane on the TiNx back contact.
Summary
In the current work we successfully form a photoanode consisting of a Ta3N5 nanotube membrane transferred onto a TiN/Ti2N back contact layer. This back contact provides improved electron transport and shifts the photocurrent onset to lower potentials compared with conventional Ta3N5 nanotube layers grown on Ta.
Consumers’ decisions to access or avoid added sugars information on the updated Nutrition Facts label
The Nutrition Facts (NF) label was recently updated and now includes the added sugars content in an effort to reduce added sugars consumption. This study investigated whether consumers wanted to access or avoid the added sugars content using an online experiment and five product categories (yogurt, cereal, fruit juice, snack bar, ice cream). We recruited a sample of 490 U.S. adults (49% female; 73% White/Caucasian). Respondents were randomly assigned to an information treatment (simple or full) before making decisions on whether to access or avoid the added sugars content. The simple information treatment explained that added sugars information was now available on the NF label, while the full information treatment included additional details (e.g., how to interpret the added sugars content and associated diseases). After making the access or avoid decisions for each product category, respondents rated their likelihood of purchase for ten products (two per category). Rates of information avoidance were much lower than what has been observed in previous studies, and rates of avoidance did not vary by information treatment. The majority of respondents (75–87% across the five product categories) preferred to access the added sugars content. Still, we found some consumers preferred to avoid this information, with higher rates of avoidance for the ice cream product category. Additionally, we found significant differences in likelihood of purchase ratings between information accessors and avoiders. Respondents who chose to access the added sugars information exhibited healthier purchasing behaviors for all product categories; they were more likely to purchase low added sugars products and less likely to purchase high added sugars products relative to information avoiders. Given consumers’ demonstrated interest in accessing the added sugars content, it is important that the new changes to the NF label be broadly communicated to promote healthy eating behaviors.
Introduction
Obesity is a prevalent health problem in the U.S.; almost 75% of US adults are classified as overweight or obese [1,2]. Added sugars, which are found in products like sweetened beverages, bakery products, and ice cream, have been identified as a key contributor to obesity in the U.S. [3]. They are defined as sugars added to foods during processing, preparation, and at the table, like sucrose, brown sugar, high fructose corn syrup, or honey [4]. The 2020-2025 Dietary Guidelines for Americans (DGA) recommends that added sugars comprise no more than 10% of total daily energy intake. Yet, on average, Americans currently exceed this recommendation [5]. The scientific literature has consistently found a causal link between excessive consumption of added sugars and obesity as well as diabetes and cardiovascular disease [6,7]. As a result, several studies have called for the inclusion of added sugars content in the Nutrition Facts (NF) label.
In 2016, the U.S. Food and Drug Administration (FDA) released an updated NF label, which required food manufacturers to provide the added sugars content, among other changes (large firms with sales of $10 million or more had to comply by January 1, 2020, and small firms have until January 1, 2021). In the updated NF label, the added sugars content is now provided as a sub-component below total sugars; it is displayed in grams with the accompanying Percent Daily Value [8]. For more information on how this format was selected by FDA, see the proposed and final rules on the provision of added sugars in the updated NF label [8,9].
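As a rough orientation (a worked example, not a figure taken from the studies cited here), the label's Percent Daily Value for added sugars is anchored to the same 10%-of-energy logic, assuming the standard 2,000 kcal reference diet and roughly 4 kcal per gram of sugar:

0.10 \times 2000\ \text{kcal} = 200\ \text{kcal}, \qquad 200\ \text{kcal} \div 4\ \text{kcal/g} = 50\ \text{g of added sugars per day}

so a product supplying 10 g of added sugars per serving would be labeled as 20% of the Daily Value.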
Before the NF label was updated, few studies had explored consumers' perceptions of added sugars. Multiple studies found that consumers struggled to accurately interpret the added sugars information [10][11][12]. In a more recent study, Khandpur, Rimm, and Moran found that consumers' comprehension of added sugars content was improved under the updated NF label and that generally consumers supported disclosure of the added sugars information [13].
Providing consumers with additional nutrition information has the potential to improve their food selection and/or consumption behaviors. Previous studies have found that NF label use is associated with food choices that are lower in cholesterol [14], sugar, total fat, saturated fat [15], and added sugar [16]. Other research suggests some consumers may actively avoid nutrition information [17,18].
For added sugars, it is unclear whether consumers will seek or avoid this information. There are a few potential explanations for why consumers may avoid this information. First, consumers may want to avoid any guilt or regret associated with consuming foods with added sugars that may be viewed as unhealthy. Second, consumers may want to avoid added sugars information for products they believe to be healthy in the event that acquiring the information would be inconsistent with their beliefs [18]. If consumers willfully ignore the added sugars information, the intended effects (e.g., improved food choice and, ultimately, health outcomes) of including this information may not be realized.
The primary goal of this study was to investigate whether consumers wanted to access the newly included added sugars information in the updated NF label when purchasing food products. While recent research indicates that including added sugars information does not affect food choice [19], choosing to acquire that information is a necessary condition for changing behavior. In this study, we explored two factors that may influence information acquisition. The first factor relates to consumers' understanding of added sugars and why this information is important. Some consumers may struggle to correctly identify what added sugars are and what products they are in [13,20,21] and may not know which diseases are associated with the overconsumption of added sugars [22]. These knowledge gaps may decrease the likelihood that consumers choose to acquire the added sugars information. In this study, we randomly assigned respondents to either a simple or full information treatment that varied the amount of information provided on added sugars. We hypothesized that respondents who received the full information treatment (which addresses the knowledge gaps identified in the literature) would be more likely to acquire the added sugars information. The second factor that may influence information acquisition is product type. Grebitus and Davis found that attention to the NF label is affected by the healthfulness of a food product category, possibly because nutrition information is a source of disutility when selecting and consuming some foods [23]. Similarly, we hypothesized that consumers were more likely to avoid the added sugars information for high-sugar products.
Sample recruitment
To test consumers' willful avoidance of added sugars information on the updated NF label, an online experiment was conducted using the Qualtrics survey platform in April, 2020. The study was approved by the University of Illinois at Urbana-Champaign Institutional Review Board (IRB #20576). We recruited a sample of 490 U.S. residents who were over the age of 18. Given our group sizes (n = 243 for simple and n = 247 for full information treatments, respectively), we had sufficient power (0.80) to detect a 12-percentage point difference in the rate of information avoidance between the two treatments at a 95% confidence level. Our sample was recruited using Prolific, which is an online crowdsourcing platform that provides a higher transparency about the subject pool for research than other online platforms [24]. Prolific outperforms other online panels in terms of data quality [25]. On average, Prolific recruits more diverse, more naïve, and less dishonest participants relative to Amazon MTurk [25]. Respondents were compensated $1.30 for completion of the study.
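As an illustration of how such a power statement can be checked, the following Python sketch uses statsmodels to compute power for a two-proportion comparison with the reported group sizes. The baseline avoidance rate is not stated in the text, so the 15% value below is an assumption chosen for illustration; for baseline rates in the range actually observed in this study (roughly 13-25%), the computed power comes out at or above the reported 0.80.

# Illustrative power check for detecting a 12-percentage-point difference
# in avoidance rates between the simple and full information treatments.
# The baseline avoidance rate is an assumption; it is not reported in the text.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

n_simple, n_full = 243, 247                  # group sizes reported in the study
baseline_rate = 0.15                         # assumed avoidance rate (illustrative)
alternative_rate = baseline_rate + 0.12      # 12-percentage-point difference

effect_size = proportion_effectsize(baseline_rate, alternative_rate)  # Cohen's h
power = NormalIndPower().power(effect_size=effect_size, nobs1=n_simple,
                               ratio=n_full / n_simple, alpha=0.05,
                               alternative='two-sided')
print(f"Approximate power: {power:.2f}")

Varying baseline_rate shows how sensitive the power estimate is to that assumption.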
Experimental design
After providing consent, respondents were first asked to rate the healthfulness of seven food categories, including yogurt, fruit juice, fresh fruit, ice cream, snack bar, soda, and cereal on a 7-point scale (1 = very unhealthy to 7 = very healthy) as an opening question to ensure respondents perceived differences in the healthfulness of products. Fresh fruit and soda were included to assess how respondents rated their healthfulness in relation to the target products used in this study. The researchers selected these products as polar examples of the healthfulness scale, and respondents rated them as expected. Respondents were thereafter randomly assigned into a simple or full information treatment (see Fig 1). Fig 2 displays the added sugars information provided for each treatment. The simple information treatment informed respondents that the added sugars content was now included in the NF label and showed an exemplar of the NF label. Respondents in the full information treatment were provided with more information including the definition of added sugars, the recommended daily amount of added sugars intake, example foods that contain added sugars, diseases associated with overconsumption of added sugars, and how to interpret the added sugars information in the NF label.
Regardless of information treatment, all respondents were then asked whether or not they would like to receive the added sugars content for five product categories to assess individuals' avoidance of added sugars information. We chose yogurt, cereal, fruit juice, snack bar, and ice cream as our target product categories. We selected products that were identified as primary sources of added sugars in the 2015-2020 DGA (note: in the fruit juice category, 100% fruit juice does not contain added sugars, but fruit juice cocktails do; we included both in the present study). To be clear, information acquisition or avoidance decisions were made by respondents for each product category to allow for heterogeneity in acquisition and avoidance behavior.
After making their information acquisition or avoidance decisions, respondents were asked to rate the likelihood of purchase for 10 products (two products per category; one product with a low level of added sugars and another with a high level of added sugars) on a 7-point scale (1 = Not at all likely to 7 = Extremely likely). Low and high levels of added sugars varied by product category. The researchers did not set objective thresholds (e.g., a product with no more than X grams of added sugars is considered low), primarily because it was difficult to find products that met such thresholds across all five categories. Rather, the researchers tried to select products that contained a low or high level of added sugars, by proportion, to the total sugars content. See S1 Table for a list of the 10 products and their added sugars content. Product order was randomized across respondents, and respondents were shown each product individually. The high-and low-sugar versions of a product within the same category were not shown side by side nor were they evaluated consecutively (unless randomized that way by chance). Respondents who chose to see the added sugars information received the added and total sugars information (in grams) as displayed in the updated NF label in addition to an image of the product. Respondents who chose to avoid the information only saw the product image. Product images were only shown when respondents were asked to rate their likelihood of purchase for each product.
Subjects were then asked to provide the main reason for wanting or not wanting the added sugars information for each product category. We adapted the responses from Thunström [26], who investigated the avoidance of calorie information in a restaurant setting. Similar to the Thunström paper [26], there were more potential reasons for information avoidance than for information acquisition (eight and five, respectively). Example reasons for information avoidance included 'I don't want to think about added sugars when purchasing this product'; 'I would enjoy this product less if I knew the added sugars content'; and 'I would not want to know the added sugars content because it would not matter to my food choice anyway'.
Data analysis
To estimate determinants of consumers' willful avoidance of added sugars information, we employed the following model:

avoid^{*}_{ij} = \beta_{0j} + \beta_{1j}\,Full_i + X_i'\gamma_j + \varepsilon_{ij}, \qquad avoid_{ij} = 1[avoid^{*}_{ij} > 0], \qquad j = 1,\dots,5 \qquad (1)

where avoid_ij was coded as one if subject i chose to avoid the added sugars information for product category j (1 = yogurt, 2 = cereal, 3 = fruit juice, 4 = snack bar, and 5 = ice cream). Full_i was an indicator variable equal to one if subjects were randomized into the full information treatment. A negative Full_i coefficient could be interpreted as an educational effect on consumers' willful avoidance of added sugars information. This means that consumers provided with the full information treatment were less willing to avoid the added sugars information compared to those who received the simple information treatment. A vector of demographic variables, X_i, included primary shopper, frequency of grocery shopping, sex, age, household income, household size, living with a child(ren) under 18 years old, race, whether on a special diet, family history of diet-related diseases, BMI classification, and label use behavior.
The study estimated determinants of avoidance of added sugars information using a multivariate probit model. The multivariate probit allowed for simultaneous estimation of Eq (1) for all five product categories and included the estimation of pairwise correlations across the errors of the five equations [27]. The multivariate probit model estimated a set of probabilities depending on whether the subject i wanted to access or avoid the added sugars for one product category and their desire to access or avoid it for the other categories. We tested the assumption that the error terms across equations were uncorrelated (null hypothesis in the multivariate probit model) using a Likelihood Ratio test.
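The paper reports estimating this system in Stata; as a rough, non-authoritative sketch of the idea, the Python snippet below fits independent probit models for each product category with statsmodels. This approximates Eq (1) equation by equation but does not capture the cross-equation error correlations that the multivariate probit accounts for; the data file and column names are hypothetical, and all covariates are assumed to be numeric or dummy-coded.

# Sketch: independent probits per product category, approximating Eq (1).
# This ignores the correlated errors handled by the multivariate probit
# (estimated in Stata in the study). Data frame and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

categories = ["yogurt", "cereal", "fruit_juice", "snack_bar", "ice_cream"]
covariates = ["full_treatment", "primary_shopper", "shopping_frequency", "female",
              "age", "income", "household_size", "child_under18", "special_diet",
              "family_history", "bmi_class", "label_use"]   # numeric/dummy-coded

df = pd.read_csv("added_sugars_survey.csv")      # hypothetical data file
X = sm.add_constant(df[covariates])

for cat in categories:
    y = df[f"avoid_{cat}"]                       # 1 = chose to avoid the information
    result = sm.Probit(y, X).fit(disp=0)
    print(cat, round(result.params["full_treatment"], 3),
          round(result.pvalues["full_treatment"], 3))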
Another interest of this study was to test how consumers' avoidance of added sugars information related to their likelihood of purchase. One-way analysis of variance (ANOVA) was used to examine mean differences in the likelihood of purchase for each product, comparing information accessors and information avoiders. All analyses were conducted using the statistical software package STATA version 16.0.

Sample characteristics

Table 1 summarizes the sample characteristics. The majority of respondents (84%) served as the primary shopper in their household. Our sample was comparable to the U.S. population in terms of sex; however, our respondents were younger and more educated relative to the U.S. population [28].
Information avoidance behavior
First, we examined consumers' decisions to access or avoid the added sugars information. Fig 3 presents the shares of added sugars information accessors and avoiders for each product category by information treatment and for all participants combined. While we hypothesized that consumers in the full information treatment would be more likely to access the added sugars information, there were no significant differences in the rates of access/avoidance across the two information treatments (Fig 3). However, consumers' information avoidance behavior varied by product category as expected. In particular, the rate of avoidance of added sugars information for ice cream (25.1%) was significantly higher (all p-values from t-tests < 0.001) than that of yogurt (14.7%), cereal (12.9%), fruit juice (13.5%), and snack bar (14.9%). There were no significant differences in avoidance across non-ice cream categories.
Respondents reported their primary reasons for wanting to access or avoid the added sugars content for each product category. See S2 and S3 Tables for access and avoidance results, respectively. For all categories, the majority of respondents who wanted the information (47-66% across the five product categories) stated that the added sugars content would matter to their food choices. A smaller share of respondents (22-38% across the five product categories) stated that they would be interested to know the added sugars content but indicated it would not affect their food choice. Less than 10% of respondents indicated that they would enjoy the product more if they knew the added sugars content.
There was some heterogeneity in the reasons for avoiding information across product categories. For yogurt, snack bar, and ice cream, a little less than one-third (29%) of respondents reported avoiding the information because it would not matter to their food choice anyway. For ice cream, an additional 23% (12-19% for other product categories) stated that they didn't want to think about added sugars when purchasing this product. Across all products, approximately 10-20% of respondents stated they chose to avoid the information because a) it would make them feel guilty or b) they would enjoy the product less if they knew the added sugars information.

Table 2 reports the results of the multivariate probit estimation. The correlation coefficients at the bottom of Table 2 were all positive and significant. The null hypothesis of uncorrelated error terms across the five equations was rejected, meaning the multivariate probit model was preferred to separate estimation of individual probit models for each product category. The positive coefficients implied potential complementarities across the five product categories.
Consumers who avoided added sugars information for yogurt, for example, were shown to also be more likely to avoid it when purchasing cereal, fruit juice, snack bar, or ice cream. Socio-demographics had little explanatory power for information avoidance decisions. While there were few consistently significant findings across the majority of products, we found that respondents who frequently use the NF label when purchasing new products were less likely to avoid the added sugars information (significant for yogurt and snack bar). Additionally, older respondents were shown to be more likely to avoid the added sugars information (significant for yogurt and fruit juice).
We found that those who had participated in any nutrition assistance program were more likely to avoid the added sugars content (significant for fruit juice and ice cream). In addition, individuals with a family history of added sugars-related diseases were less likely to avoid the added sugars information, except for the ice cream product category, which had a positive and significant coefficient.

Table 3 compares the likelihood of purchase ratings for the 10 products between information accessors and information avoiders. For all product categories, information accessors exhibited higher likelihood of purchase ratings, on average, for low-added sugars products compared to information avoiders (all significantly different except low-added sugars yogurt and snack bar). Further, we observed the opposite for high-added sugars products. Information accessors reported lower likelihood of purchase ratings, on average, than information avoiders (all significantly different).
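As a non-authoritative illustration of the comparison behind Table 3, the sketch below runs a one-way ANOVA (equivalent to a two-sample t-test when there are only two groups) on ratings for a single product; the data file and column names are assumptions.

# Sketch: compare 1-7 likelihood-of-purchase ratings between information
# accessors and avoiders for one product (here: the high added sugars ice cream).
# Data frame and column names are hypothetical.
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("added_sugars_survey.csv")      # hypothetical data file
accessors = df.loc[df["avoid_ice_cream"] == 0, "purchase_ice_cream_high"]
avoiders = df.loc[df["avoid_ice_cream"] == 1, "purchase_ice_cream_high"]

f_stat, p_value = f_oneway(accessors, avoiders)  # with two groups, F equals t squared
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

Repeating this comparison for each of the 10 products would reproduce the structure of the comparisons reported in Table 3.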
Discussion
The new added sugars information is included in the updated NF label to nudge consumers toward reducing their added sugars consumption and, ultimately, improve health outcomes. Acquiring the information, however, is likely a necessary condition for influencing selection and consumption decisions. While classical economic theory assumes that consumers are better off when they acquire free information as it helps them to make better decisions, a growing literature suggests there may be an incentive to avoid such information [17,18]. To the best of our knowledge, this was the first study to experimentally test whether and under what conditions consumers wanted to access or avoid the added sugars information on the NF label. The majority of consumers preferred to access the added sugars information rather than avoid it, which was consistent with the high levels of support for the disclosure of added sugars information observed by Khandpur, Rimm, and Moran [13]. However, a subset of consumers may actively avoid the new added sugars information, particularly for less healthy products like ice cream. The rates of added sugars information avoidance in this study were much lower than the rates of calorie information avoidance (58% preferred to avoid) found by Thunström et al. [29]. One possible explanation might be the difference in setting. In the Thunström et al. study [29], participants were asked about whether they would like calorie information for a restaurant meal. Food away from home may be viewed as more hedonic in nature (a 'treat' or indulgence); in such cases, consumers may want to prioritize personal enjoyment of the food and eating experience over nutritional considerations. Conversely, nutrition may be a higher priority when purchasing foods for at-home consumption.
Contrary to our hypothesis, there was no significant difference in consumers' decisions to access or avoid the added sugars information based on the information treatment received. One potential explanation could be that the added sugars information is relatively new to most consumers. This novelty may contribute to the high rates of accessing the added sugars information across both treatments. As discussed above, it is also possible that rates of accessing the information were higher because this study focused on products that are typically consumed at home instead of away from home. In general, consumers may be more motivated to access the nutrition facts for at-home purchases relative to away-from-home purchases, especially if they are the primary shoppers for their households and responsible for feeding others in the home. Lastly, the term added sugars on its own may have a negative connotation such that consumers were interested in learning more, even without fully understanding what added sugars are or diseases associated with their overconsumption. In this case, the provision of this additional information may have little impact on one's decision to access/avoid the added sugars content.
The primary reason consumers reported for wanting to access the added sugars information was that it would matter to their food choices, which is consistent with results from a previous study that investigated consumer preferences for calorie labeling [26]. This finding suggests the added sugars information could help adjust food selection and/or consumption. Very few respondents indicated that they would enjoy the product more if they knew the added sugars content, implying that the added sugars information may function as an 'emotional tax' (in that the information evokes negative emotions) for some consumers [26].
Reasons for avoiding the added sugars information were more varied. Some consumers indicated the information would not matter for their food choice or that they didn't want to think about added sugars when purchasing a particular product, while others stated knowing the information would make them feel guilty or enjoy the product less. Collectively, these results suggested that for a subset of consumers, the added sugars information had the potential to reduce the utility of their consumption experience. We also observed a subset of respondents who avoided the information because they reported knowing the added sugars content already. While we do not assess whether respondents' knowledge is accurate, the new information may not really be "new" for some consumers.
We found that older consumers were more likely to avoid the added sugars information, which was consistent with the results of Thunström et al. [29]. A more surprising result was that respondents who had participated in a nutrition assistance program such as SNAP or WIC were more likely to avoid the added sugars information, particularly for the fruit juice and ice cream product categories. One possible explanation could be low health literacy of the NF label among SNAP-eligible respondents. Speirs et al. found that only 37% of SNAP-eligible adults had adequate health literacy, which was assessed based on their ability to answer questions using the NF label [30]. A limited understanding of the NF label may result in limited interest for the newly included added sugars information. It is also possible that SNAP recipients purchase less food from restaurants [31], so the setting where they make hedonic purchases (e.g., grocery store) may look different from households with more financial resources.
Our results also suggested that accessing the added sugars information is associated with healthier food choices. We found that information avoiders were more likely than information accessors to purchase unhealthy products (in terms of their added sugars levels). Our findings were consistent with Thunström et al. [29], who found that participants who avoided calorie information exhibited higher calorie intake, on average. We acknowledge that we cannot infer causality in this case, as participants who chose to access the information may have been more likely to select healthier products regardless of information.
While this study makes many contributions to the literature, there are some limitations to acknowledge. First, the use of an online survey limited our ability to observe actual purchasing behavior, so there was some potential for hypothetical bias in our likelihood-of-purchase ratings. Future research should focus on how the inclusion of this information on the NF label influences non-hypothetical food purchases. Second, while we explored information avoidance for several product categories, more research is needed to determine if our findings generalize to other product categories, including the presence or absence of heterogeneity in avoidance behavior. Future research should also investigate the potential impact of variation in the levels of added and total sugars within brands and across similar flavors on consumer behaviors. Third, it should be noted that information access or avoidance decisions may also be influenced by brand. In this study, we held brand constant across the low and high added sugars products in each category; however, it is possible that some brands exhibit "health halos" that could impact consumers' decision to access or avoid added sugars information. Lastly, while this study isolated the impact of the added sugars information, it should be noted that other information on the NF label, like fat or protein, and other product attributes, such as price, may influence consumers' purchase intentions. Future research should explore how purchase intentions or actual purchasing behaviors change when consumers have the full NF label to consider in addition to the added sugars information.
The inclusion of added sugars information was one of the major changes in the updated NF label. We found that most consumers were interested in acquiring this information, and they exhibited healthier purchasing behaviors than information avoiders across all product categories. Therefore, from a health policy and promotion standpoint, it is imperative to communicate to consumers that the added sugars information is now available on food products and to emphasize the importance of the information for making healthier choices. The FDA acknowledged the need for consumer education when it published the final regulations to update the NF label, particularly for the new added sugars information [32]. Educational efforts will also need to address the primary reasons individuals choose to avoid or access information in order to tailor messages that will resonate with consumers. Special attention and consideration should be given to more hedonic products, like ice cream, for which individuals indicated a stronger preference for information avoidance. For these types of products, nutrition educators and dietary interventions may require additional emphasis on portion-control strategies to promote healthful eating behaviors.
Supporting information S1
Giant fluctuations of local magnetoresistance of organic spin valves and non-hermitian 1D Anderson model
Motivated by recent experiments, where the tunnel magnetoresitance (TMR) of a spin valve was measured locally, we theoretically study the distribution of TMR along the surface of magnetized electrodes. We show that, even in the absence of interfacial effects (like hybridization due to donor and acceptor molecules), this distribution is very broad, and the portion of area with negative TMR is appreciable even if on average the TMR is positive. The origin of the local sign reversal is quantum interference of subsequent spin-rotation amplitudes in course of incoherent transport of carriers between the source and the drain. We find the distribution of local TMR exactly by drawing upon formal similarity between evolution of spinors in time and of reflection coefficient along a 1D chain in the Anderson model. The results obtained are confirmed by the numerical simulations.
Organic spin valves (OSVs), being one of the most promising applications of organic spintronics, are actively studied experimentally 1-9. The organic active layer of an OSV is sandwiched between two magnetized electrodes. Due to long spin-relaxation times of carriers in organic materials, the net resistance of an OSV is sensitive to the relative magnetizations of the electrodes. Among the many advantages that OSVs offer are wide tunability, due to e.g. chemical doping, and enormous flexibility. The processes that limit the performance of OSVs can be conventionally divided into two groups: (i) interfacial, which take place at the interfaces between the electrodes and the active layer [11-18], and (ii) intralayer, which exist even if the interfaces are ideal. 19,20 Due to the latter processes the injected polarized electrons, Fig. 1, lose memory of their initial spin orientation while traveling between the electrodes. One of the most prominent mechanisms of this spin-memory loss is the precession of a carrier spin in the random hyperfine fields of hydrogen nuclei 5,19,20.

The effectiveness of the OSV performance is quantified by the tunnel magnetoresistance (TMR), given by a so-called modified Julliere's formula 22, see e.g. the review Ref. 21,

TMR = 2 P_1 P_2 Q / (1 − P_1 P_2 Q),  Q = exp(−d/λ_s),   (1)

where P_1, P_2 stand for the polarizations of the electrodes. The difference from the original Julliere's formula 22 is the exponential factor Q = exp(−d/λ_s), describing the spin-memory loss over the active layer of thickness d. Processes (i) can be incorporated into Eq. (1) by appropriately modifying P_1, P_2. For example, in Ref. 11 the replacement of P_1, P_2 by "effective" spin polarizations reflects the relative position of the Fermi level with respect to the interfacial donor (acceptor) level. In this way, the "effective" polarization depends on bias, which might explain the sign reversal of TMR [11-18]. Processes (ii), on the other hand, are reflected in Eq. (1) via the factor Q = exp(−d/λ_s), where λ_s is the spin diffusion length. The meaning of Q is the polarization of electrons at the drain; implicit in Eq. (1) is the assumption that the spin polarization of electrons falls off homogeneously and monotonically with coordinate x, see Fig. 1.

The prime message of the present paper is that strong local fluctuations of TMR, including the local sign reversal, are a generic property of the OSV even with ideal interfaces. In other words, the factor Q captures the spin-memory loss only on average. The local value of Q fluctuates strongly from point to point and takes values in the domain −1 < Q < 1.
On the physical level, the local value of TMR in the absence of interfacial effects is the fingerprint of the hyperfine-field configuration along a given current path.
The origin of the strong local fluctuations of TMR is quantum-mechanical interference of the amplitudes 23 of subsequent spin rotations accompanying the inelastic hops of the electron, an interference that has been routinely neglected in earlier studies. Formally, this interference, in the course of the time evolution of a spin in a random hyperfine field, can be mapped onto the spatial propagation of an electron along a 1D disordered chain. [26-28,30] In this regard, it is important to realize that, as an electron enters the OSV, its spatial coherence is lost after a single inelastic hop. At the same time, the spin evolution of a given electron remains absolutely coherent all the way between the electrodes.
Experimental relevance of the local TMR, which motivated our study, was demonstrated in a recent scanning-tip experiment (Ref.), where the role of the active layer was played by isolated C60 molecules attached to the substrate. By scanning the tip, the authors were able to recover the surface map of the conductance through a single molecule, and its evolution with bias. In this way, the sign reversal of TMR was demonstrated on the local level.

[Fig. 3 caption fragment: the markers, which are the averages of the histograms (green), are specific for the configuration of the hyperfine field. On the contrary, the histograms in the right panel approach the theoretical result, Eq. (11), shown with a solid line. These histograms would represent the evolution of the local TMR when the field configuration slowly rotates due to, e.g., spin-spin interaction.]

Recurrence relation for the spin transport. We will illustrate our message using the simplest model 19,20,23,24 depicted in Fig. 1. As shown in this figure, the electron hops along parallel chains. The waiting times, τ_n, for each subsequent hop are Poisson-distributed as F(τ_n) = (1/τ*) exp(−τ_n/τ*). While residing on a site, the electron spin precesses around a local hyperfine field. The hyperfine fields are random; their Gaussian distribution is characterized by the rms value b_0.
In the course of hopping, the values of b_⊥ change abruptly after each time interval τ_n. The evolution of the amplitudes a_1, a_2 of the ↑ and ↓ spin projections is described by the unitary evolution matrix defined as

U_n = (  Υ e^{iχ}     R e^{iφ}
        −R e^{−iφ}   Υ e^{−iχ} ).   (2)

Microscopic expressions for R, Υ, and the phases χ and φ are elementary: R = |b_{n,⊥}| τ_n / 2, Υ = √(1 − R²), χ = b_z τ_n / 2, and φ = tan⁻¹(b_{n,y}/b_{n,x}). Here b_z and b_⊥ = (b_x, b_y) are the tangential and normal (with respect to the initial spin orientation) components of the hyperfine field.
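As a concrete illustration, here is a minimal Python sketch of the single-hop matrix built from the parameters of Eq. (2); the overall sign and phase conventions of the reconstructed matrix, and the function name, are assumptions for illustration rather than the authors' code.

```python
import numpy as np

def step_matrix(b, tau):
    """Single-hop unitary U of Eq. (2) in the weak-rotation limit R << 1,
    for hyperfine field b = (bx, by, bz) and waiting time tau."""
    bx, by, bz = b
    R = np.hypot(bx, by) * tau / 2          # off-diagonal magnitude
    ups = np.sqrt(1.0 - R**2)               # Upsilon = sqrt(1 - R^2)
    chi = bz * tau / 2                      # phase from the longitudinal field
    phi = np.arctan2(by, bx)                # azimuth of the transverse field
    return np.array([[ups * np.exp(1j * chi),   R * np.exp(1j * phi)],
                     [-R * np.exp(-1j * phi), ups * np.exp(-1j * chi)]])
```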
Coherent evolution of the electron spin over n steps is described by the product, ∏_{i=0}^{n} U_i, of matrices of the form Eq. (2). Naturally, after n steps this product can itself be reduced to the form Eq. (2), with Υ replaced by some effective Υ_n. This observation suggests that Υ_n and Υ_{n+1} are related via a recurrence relation, which we choose to cast into the form of Eq. (3).
Mapping on a 1D Anderson model. Consider now a different physical situation: a spinless electron propagates coherently along a line of impurities randomly positioned at points x_n, see Fig. 1c. As shown in the figure, the energy-conserving wave function on the interval (x_n, x_{n+1}) is a combination of two counterpropagating waves. Denote with t the amplitude transmission coefficient of a single impurity. Then the net transmission coefficient, t_n, of the system with n impurities satisfies the famous Fabry-Perot-like recurrence relation

t_{n+1} = t t_n e^{iη} / [1 − √((1 − t²)(1 − t_n²)) e^{2iη}],   (4)

where η is the phase accumulated upon passage through the interval (x_n, x_{n+1}).
At this point we make our main observation: the recurrence relations Eq. (4) and Eq. (3) map onto each other upon the replacement Υ⁻¹ ↔ t. On the other hand, it is known that the distribution function of t_n can be found exactly. In particular, the average ⟨ln t_n²⟩ decreases linearly 27 with n, which is the manifestation of Anderson localization in 1D. Anderson localization is the result of quantum interference of multiply-scattered waves 26. The very existence of the mapping of Eq. (3) onto Eq. (4) suggests that interference effects are equally important for the temporal evolution of spin. We will see, however, that the replacement of t_n by 1/Υ_n rules out Anderson localization but causes giant fluctuations of Υ_n with n. In addition, the mapping allows one to employ well-developed techniques, see e.g. the review Ref. 28, to describe these fluctuations analytically.
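To make the localization statement concrete, here is a minimal Monte Carlo sketch of the recurrence Eq. (4) with random inter-impurity phases; the single-impurity transmission and sample sizes are illustrative assumptions, and the reflection phases are absorbed into the uniform random η.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmission(n, t1=0.9):
    """Compose n identical scatterers via the Fabry-Perot-like recurrence,
    Eq. (4); random phases eta model random impurity spacings."""
    t_n = t1 + 0j
    r1 = np.sqrt(1 - t1**2)                 # single-impurity reflection magnitude
    for _ in range(n - 1):
        eta = rng.uniform(0, 2 * np.pi)
        r_n = np.sqrt(1 - abs(t_n)**2)      # reflection phases absorbed into eta
        t_n = t1 * t_n * np.exp(1j * eta) / (1 - r1 * r_n * np.exp(2j * eta))
    return abs(t_n)**2

# <ln t_n^2> decreases linearly with n: 1D Anderson localization
for n in (10, 20, 40):
    print(n, np.mean([np.log(transmission(n)) for _ in range(500)]))
```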
Distribution of local spin polarization after n steps. In the mapping of Eq. (3) onto Eq. (4), the randomness of the impurity positions, x_n, is taken over by the random azimuthal orientations of the hyperfine fields. Following Ref. 27, this randomness allows one to write down a recurrence relation for the distribution function of the effective transmission coefficient. In our case it is more convenient to analyze the distribution of the related quantity Q = 2Υ² − 1 = 1 − 2R², which is the local spin polarization, as mentioned in the Introduction. The functional recurrence relation then takes the form of Eq. (5). An immediate consequence of Eq. (5) is the relation ⟨Q_{n+1}⟩ = (1 − 2R²)⟨Q_n⟩ between the averages. This, in turn, implies that, on average, the spin-memory loss follows the classical prediction exp(−n/λ_s), where λ_s⁻¹ = −ln(1 − 2R²) ≈ 2R².

From now on we consider the limit of large n and small R. The latter allows us to expand the denominator in Eq. (5) to first order in R², which, upon integration by parts, yields the following Fokker-Planck equation

∂F(x, Q)/∂x = ∂/∂Q [ (1 − Q²) ∂F(x, Q)/∂Q ],   (6)

where x = nR² is assumed to be a continuous variable. It is not surprising that Eq. (6) is exactly the Fokker-Planck equation for 1D localization. The important difference, however, is that for spin evolution it should be solved in the domain |Q| < 1 rather than in the domain Q > 1 relevant for the Anderson model 28. The latter is a direct consequence of the mapping t ↔ Υ⁻¹. For the restricted domain |Q| < 1, separation of variables in Eq. (6) reveals that the eigenfunctions with respect to Q are the Legendre polynomials, P_m(Q); the corresponding eigenvalues, m(m + 1), define the x-dependence, exp[−m(m + 1)x], for a given m. The coefficients in the linear combination of the Legendre polynomials are fixed by the "initial" condition F(0, Q) = δ(Q + 1), which corresponds to full polarization at x = 0. This yields the following solution

F(x, Q) = Σ_{m=0}^{∞} (−1)^m (m + 1/2) P_m(Q) exp[−m(m + 1)x].   (7)

Summation over m in Eq. (7) can be performed explicitly by using an integral representation, Eq. (8), together with an identity, Eq. (9), which can be easily derived from the generating function for the Legendre polynomials. Substituting ζ = −exp(x − 2iκ) and integrating by parts leads to the final result, Eq. (10). The imaginary part of the integrand is odd in κ; therefore, we can ultimately present Eq. (10) as a purely real integral, Eq. (11). The difference between Eq. (11) and its counterpart 28 in the 1D Anderson model stems from the fact that the denominator in the identity Eq. (9) is, in our case, complex.

Numerical results and analysis. The parameter x = nR² in the argument of the distribution Eq. (11) is related to the sample thickness, d, and the classical spin-diffusion length as x = d/2λ_s. It is seen from Fig. 2 that, as x passes through x ∼ 1, the distribution evolves from a δ-function (at x ≪ 1) to linear and, eventually, to flat. The flat distribution manifests complete spin-memory loss. But even when this loss is small on average, a sizable part of the distribution lies in the domain Q > 0, which corresponds to negative TMR. Note that, upon neglecting interference in Eq. (5), the distribution becomes δ(Q + e^{−2x}), i.e. infinitely narrow.
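The evolution of the distribution of Q can be checked directly by multiplying random single-hop matrices. Below is a minimal, self-contained Python sketch (not the authors' code) that histograms the local polarization Q over hyperfine-field realizations; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_polarization(n_steps, b0=1.0, tau_star=0.1):
    """Evolve a fully polarized spinor through n_steps hops; each hop has a
    Poisson waiting time and an exact precession about a random Gaussian field."""
    a = np.array([1.0, 0.0], dtype=complex)
    for _ in range(n_steps):
        tau = rng.exponential(tau_star)
        b = rng.normal(0.0, b0, size=3)
        theta = np.linalg.norm(b) * tau                 # total rotation angle
        nx, ny, nz = b / np.linalg.norm(b)
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        U = np.array([[c - 1j * s * nz, -1j * s * (nx - 1j * ny)],
                      [-1j * s * (nx + 1j * ny), c + 1j * s * nz]])
        a = U @ a
    return abs(a[0])**2 - abs(a[1])**2                  # Q = |a_up|^2 - |a_down|^2

Q = np.array([final_polarization(100) for _ in range(2000)])
print("mean Q:", Q.mean())                              # decays as exp(-2x) on average
```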
Until now we have neglected the effects caused by the randomness of the waiting times, τ_i. With regard to the distribution F(x, Q), this randomness amounts to the replacement of R² by ⟨R²⟩_{τ_i} in the parameter x. A much more delicate issue is whether or not the randomness in τ_i affects the local value of the TMR. Naturally, the TMR measured by a local probe is the average over all τ_i. Then the question arises whether this averaging washes out the difference between the points at which the TMR is measured, i.e. replaces the local Q by exp(−d/λ_s), or whether, on the contrary, the averaged TMR is a unique signature of the actual realization of the hyperfine fields along a given current path. We argue that the second scenario holds. Our argument is two-fold. Firstly, we performed a direct numerical simulation of the local spin polarization along a given path with the randomness in τ_i incorporated 29. The results, shown in Fig. 3, demonstrate that, while this randomness broadens the histograms, their center, which is the observable quantity, depends dramatically on the actual orientations of the hyperfine fields along the path. Secondly, our analytical calculation 29 demonstrates that, while the correlator of the random fields averaged over hyperfine-field realizations is short-ranged, the same correlator calculated for a given hyperfine-field realization but with random τ_i falls off very slowly, as a power law. On the basis of these two arguments we conclude that, at time scales over which the nuclear spin-spin interaction does not rearrange the hyperfine-field configuration, the TMR remains specific for this configuration.

Concluding remarks. Our theory applies to OSVs with thin inhomogeneous active layers, depicted in Fig. 1, in which the transport can be modeled with directed noncrossing paths 9.
In this paper we treated the time evolution of the amplitudes (a_1, a_2) in terms of a product of matrices. An alternative approach would be to start from the Schrödinger equations, namely, i ȧ_1 = ½ b_⊥(τ) a_2(τ) and i ȧ_2 = ½ b_⊥*(τ) a_1(τ). These two equations can be reduced to a single second-order equation for, say, a_1. This equation can then be recast in a Schrödinger-like form. This procedure would formally demonstrate why the spin evolution maps onto a non-hermitian 1D Anderson model: the effective potential in the Schrödinger equation, built from ½ b_⊥(τ), turns out to be complex 30.
A. Distribution of off-diagonal element of the evolution matrix
The expression R = |b_{n,⊥}| τ_n / 2 for the off-diagonal element of the evolution matrix applies in the limit of weak rotation, R ≪ 1. The spread in the local values of R originates from the randomness of b_{n,⊥} = (b_x, b_y) as well as from the randomness of the waiting times, τ_n. Therefore the calculation of the distribution function of R involves averaging over three random variables: the Gaussian-distributed components b_x, b_y and the waiting time τ, where F(τ) = (1/τ*) exp(−τ/τ*) is the Poisson distribution. Introducing the dimensionless variable x = b/b_0 and integrating over τ with the help of the δ-function yields an integral over x alone. For R > b_0 τ* this integral can be calculated using the steepest-descent method, with the result Eq. (14). It turns out that Eq. (14) provides an excellent approximation for all values of R. For example, for R = 0 the difference between the exact value and Eq. (14) amounts to a factor 2/√3. We checked numerically that, with the latter distribution, the histograms of the local polarization do not differ from those for a box-like distribution.
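The distribution of R is easy to sample directly; the following short sketch (parameter values illustrative) draws R = |b_⊥|τ/2 from Gaussian transverse fields and Poisson-distributed waiting times.

```python
import numpy as np

rng = np.random.default_rng(2)
b0, tau_star, n = 1.0, 0.1, 100_000            # illustrative parameters
bx = rng.normal(0.0, b0, n)
by = rng.normal(0.0, b0, n)
tau = rng.exponential(tau_star, n)             # Poisson-distributed waiting times
R = np.hypot(bx, by) * tau / 2                 # weak-rotation off-diagonal element
hist, edges = np.histogram(R, bins=200, density=True)   # empirical H(R)
print("mean R:", R.mean())
```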
Another effect of the randomness in the waiting times originates from the phase χ = b_z τ / 2 in the matrix Eq. (2). Thus a rigorous account of the spread in τ_i requires generating random χ_i and R_i from their joint distribution H̃(R, χ). Since typical R and χ are of the same order, we again used in the simulations R_i-values uniformly distributed between 0 and R and χ_i-values uniformly distributed between −R/2 and R/2. The results are shown in Fig. 3.

B. Temporal correlators of the random fields

Consider a hopping chain containing N ≫ 1 sites. For concreteness we will consider only the correlation of the x-projections of the hyperfine fields. In the course of transit between the electrodes, the carrier spin "sees" this projection in the form of a telegraph signal, where θ(τ) is a step function, b_i is the x-projection on site i, and τ_i are the random waiting times for the hop i → (i + 1). As was mentioned in the main text, there are two correlators, ⟨b_x(τ) b_x(τ + T)⟩, relevant for the TMR. The first, K_1, is for a fixed realization, {b_i}, with the randomness coming only from the Poisson distribution of the τ_i. The second correlator, K_2, is K_1 averaged over all possible realizations of the hyperfine fields. It is easy to see that K_2(T) has the simple form

K_2(T) = b_0² exp(−T/τ*),   (19)

and decays on the time scale of a single hop, ∼ τ*. On the other hand, as we will see below, K_1(T) persists at much longer times. The result, Eq. (19), can be established from simple reasoning: the product b_x(τ) b_x(τ + T) contains terms of the type b_i² and terms b_i b_j with j ≠ i. The latter vanish upon configurational averaging. The terms b_i² are nonzero only if T is smaller than τ_i. The corresponding probability can be expressed as θ(τ_i − T). Subsequent averaging over τ_i leads to Eq. (19).
Turning to the correlator K_1, in order to perform the averaging over τ_i in Eq. (17) we use the integral representation of the θ-function and cast b_x(τ) in the form of Eq. (20). In a similar way, the product b_x(τ) b_x(τ + T) can be presented as a double integral, Eq. (21). The advantage of this representation is that it allows averaging over τ_i in the integrand using the relation Eq. (22). Obviously, the average ⟨b_x(τ) b_x(τ + T)⟩ does not depend on τ, since it should be understood as a long-time limit. Then the integration over τ sets ω = −ω′. The coefficient in front of the b_i b_j term in the product Eq. (21) is given by Eq. (23). Assume that j is smaller than i; then all terms with k < (j − 1) do not enter into Eq. (23). As a result, the averaging over the remaining i − j + 1 random times leads to an explicit result for the coefficient in front of b_i b_j. The remaining step is the integration over ω in Eq. (21). This integration is carried out straightforwardly by closing the contour in the bottom half of the complex ω-plane. The ω-integral is nonzero only for j ≤ i.

Thus the final result for the correlator K_1 acquires the form of Eq. (26). Note that if we perform averaging over the b_i, the second term vanishes, and we recover the expected result Eq. (19). For non-averaged K_1, the term in the square brackets restricts the domain of summation over (i − j) to i − j ≳ T/τ*. Therefore, if the length of the chain, N, is smaller than T/τ*, this condition is never satisfied, and K_1 falls off exponentially with T, with characteristic decay time τ*. In the opposite limit, the summation over i, j within the allowed domain eliminates the T-dependence from K_1(T). We can restate this observation as follows: K_1(T) depends weakly on T for T < Nτ*, and decays exponentially with T for T > Nτ*. Since the transport of an electron between the electrodes takes the time Nτ*, we conclude that the realization of the hyperfine field does not change during this time interval. Finally, to estimate the magnitude of K_1(T) for T < Nτ*, we calculate the quantity K_1² and average it over the hyperfine fields. This averaging can be performed analytically; the result is conveniently expressed through the modified Bessel function, I_0, as

⟨K_1(T)²⟩ = b_0⁴ e^{−2T/τ*} I_0(2T/τ*).   (27)

For T > τ*, Eq. (27) simplifies to ⟨K_1(T)²⟩ = b_0⁴ / √(4πT/τ*).
C. Broadening of Classical Distribution
As was mentioned in the main text, neglecting the interference in Eq. (5) leads to the infinitely sharp distribution F̃(x, Q) = δ(Q + e^{−2x}). This conclusion, however, implies that the magnitudes of R are the same on each site. In reality the magnitudes of R are distributed according to Eq. (14). This causes a broadening of the classical distribution function, F̃(x, Q), which we estimate below.
We begin with the recurrence relation for F̃_n(Q),

F̃_{n+1}(Q) = ∫ dR H(R) (1 − 2R²)⁻¹ F̃_n(Q/(1 − 2R²)).   (28)

The explicit form of F̃_n(Q) can be found exactly for an arbitrary distribution H(R). For this purpose we introduce a new variable z = ln|Q| and rewrite Eq. (28) in terms of the function G(z) = e^z F̃(e^z),

G_{n+1}(z) = ∫ dR H(R) G_n(z − ln(1 − 2R²)).   (29)
The right-hand side of Eq. (29) is a convolution and turns into a product upon Fourier transform. This readily yields

G_n(k) = G_0(k) [∫ dR H(R) exp(2ikR²)]^n.   (30)
For large n the form of G_n(k) and, correspondingly, the form of the distribution G_n(z) approaches a Gaussian. Thus the distribution F̃_n(Q) is essentially log-normal,

F̃_n(Q) = (1 / (|Q| √(π n σ_{R²}))) exp[−(ln|Q| + n⟨R²⟩)² / (n σ_{R²})],   (31)

where σ_{R²} = ⟨R⁴⟩ − ⟨R²⟩². It follows from Eq. (31) that the center of the distribution F̃_n(Q) moves linearly with n, which is the same as the average of the quantum distribution, while the width slowly grows with n as δQ = √(n σ_{R²}) exp(−n⟨R²⟩). Strong local quantum fluctuations of the TMR persist up to nR² ∼ 1. For such n the width of the classical distribution remains smaller than R. Note also that the probabilistic treatment of the spin rotation encoded in Eq. (28) forbids negative TMR, i.e. it restricts the domain of F̃_n(Q) to negative Q. If we, however, proceed from the classical limit of Eq. (5) to the Fokker-Planck equation, then the classical limit of the Fokker-Planck equation would correspond to neglecting Q² on the right-hand side of Eq. (6). Note that by doing so we also remove the restriction that Q is negative. The Fokker-Planck equation in this limit reduces to a heat equation and, similarly to the quantum result, yields a flat distribution at large x. This corresponds to "temperature equilibration" at long times. Even though the classical and quantum Fokker-Planck results share this limiting behavior and have the same average at all times, their shapes are visibly distinct.
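The log-normal broadening can be verified by direct sampling of the classical recursion; a minimal sketch follows, with an illustrative uniform choice for H(R).

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical (interference-free) evolution: ln|Q_n| = sum_i ln(1 - 2 R_i^2)
# is a random walk, so the distribution of Q_n approaches a log-normal.
n, trials, Rmax = 200, 50_000, 0.1     # illustrative choices; H(R) uniform on [0, Rmax]
R = rng.uniform(0.0, Rmax, size=(trials, n))
lnQ = np.log(1.0 - 2.0 * R**2).sum(axis=1)
print("drift:", lnQ.mean(), "spread:", lnQ.std())   # linear drift, sqrt(n) spread
```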
In this subsection we have demonstrated that there are two different classical limits of the quantum spin evolution. They predict two dramatically different shapes for the distribution of spin polarization.
Cosmological Simulations with Self-Interacting Dark Matter II: Halo Shapes vs. Observations
If dark matter has a large self-interaction scattering cross section, then interactions among dark-matter particles will drive galaxy and cluster halos to become spherical in their centers. Work in the past has used this effect to rule out velocity-independent, elastic cross sections larger than sigma/m ~ 0.02 cm^2/g based on comparisons to the shapes of galaxy cluster lensing potentials and X-ray isophotes. In this paper, we use cosmological simulations to show that these constraints were off by more than an order of magnitude because (a) they did not properly account for the fact that the observed ellipticity gets contributions from the triaxial mass distribution outside the core set by scatterings, (b) the scatter in axis ratios is large and (c) the core region retains more of its triaxial nature than estimated before. Including these effects properly shows that the same observations now allow dark matter self-interaction cross sections at least as large as sigma/m = 0.1 cm^2/g. We show that constraints on self-interacting dark matter from strong-lensing clusters are likely to improve significantly in the near future, but possibly more via central densities and core sizes than halo shapes.
INTRODUCTION
The nature of dark matter is one of the most compelling mysteries of our time. On large scales, the behavior of dark matter is consistent with what cosmologists of yore called "dust" (e.g., Tolman 1934), meaning its behavior is consistent with being collisionless and non-relativistic ("cold") for the vast majority of the Universe's history (Reid et al. 2010). This consistency has been of great interest to the particle-physics community because the most popular candidate for dark matter, the supersymmetric neutralino, displays exactly this behavior (Steigman & Turner 1985; Griest 1988; Jungman, Kamionkowski & Griest 1996). While the supersymmetric neutralino paradigm is attractive in many ways, there are two outstanding problems with it. First, astroparticle searches have yet to turn up evidence for the existence of the neutralino, though searches are rapidly increasing their sensitivity to interesting neutralino parameter space (Geringer-Sameth & Koushiappas 2011; Ackermann et al. 2011; Cotta et al. 2012; Fox et al. 2011; Bertone et al. 2012; Atlas Collaboration 2012; Koay 2012; Baudis 2012; Baer, Barger & Mustafayev 2012; XENON100 Collaboration et al. 2012). Second, there are predictions for the structure of dark-matter halos that have not been observationally verified. Dark-matter self-interactions offer a way to alter structure on small scales (on the scales of individual dark-matter halos) while leaving the large-scale successes of CDM intact. In this paper we revisit this basic class of self-interacting dark matter (SIDM) using cosmological simulations to explore its effect on dark-matter halo shapes as a function of cross section. In a companion paper (Rocha et al. 2012) we investigate implications for dark-matter halo substructure and density profiles.
We are reinvestigating this simple SIDM model, which had been decreed "uninteresting" in several studies a decade ago, for two primary reasons. First, we suspected that the constraints indicating that the SIDM cross section was too small to meaningfully alter the morphology of dark-matter halos were not as tight as claimed. Second, there is a wealth of new data (e.g., from near-field cosmology and lensing studies of galaxies and clusters) that may be better places to either look for SIDM or constrain its properties. In this paper and our companion paper, Rocha et al. (2012), we reevaluate past constraints on SIDM and suggest several new places to look for the effects of SIDM on halo structure.
The most stringent constraints on dark-matter models with large isotropic, elastic self-scattering cross sections emerged from the shapes of dark-matter halos, in particular from lens modeling of the galaxy cluster MS 2137-23 by Miralda-Escudé (2002). This massive galaxy cluster has a number of radial and tangential arcs within ∼ 200 kpc of the halo center (Mellier, Fort & Kneib 1993; Miralda-Escude 1995). Miralda-Escudé (2002) argued that self-interactions should make dark-matter halos round within the radius r where the local per-particle scattering rate equals the Hubble rate, Γ(r) = H_0, or equivalently, where each dark-matter particle experiences one interaction per Hubble time. The scattering rate per particle as a function of r in a halo scales in proportion to the local density and velocity dispersion,

Γ(r) ≃ ρ(r) v_rms(r) (σ/m),   (1)

where ρ is the local dark-matter mass density and v_rms is the rms speed of dark-matter particles. Using the fact that the lens model needs to be elliptical at 70 kpc, Miralda-Escudé (2002) set a constraint of σ/m ≲ 0.02 cm²/g on the velocity-independent elastic scattering cross section. This constraint is one to two orders of magnitude tighter than other typical constraints on velocity-independent scattering (Yoshida et al. 2000b; Gnedin & Ostriker 2001; Randall et al. 2008). It rendered velocity-independent scattering far too small to form cores in low surface brightness galaxies and other small galaxies (Kuzio de Naray, McGaugh & de Blok 2008; de Blok et al. 2008). This is unfortunate because the main reason SIDM was interesting at the time was that it was a mechanism to create cores in such galaxies (Spergel & Steinhardt 2000).
Velocity-dependent self-interactions, by contrast, can significantly alter dwarf-scale or smaller dark-matter halos while leaving cluster-mass halos largely untouched (Feng et al. 2009; Buckley & Fox 2010; Loeb & Weiner 2011; Vogelsberger, Zavala & Loeb 2012). In recent times, such velocity-dependent interactions have arisen in hidden-sector models designed to interpret some charged-particle cosmic-ray observations as evidence for dark-matter annihilation (Pospelov, Ritz & Voloshin 2008; Fox & Poppitz 2009; Arkani-Hamed et al. 2009; Feng, Kaplinghat & Yu 2010). Constraints on other hidden-sector dark-matter models have been made using X-ray isophotes of the gas in the halo of the elliptical galaxy NGC 720 (Buote et al. 2002; Feng et al. 2009; Feng, Kaplinghat & Yu 2010; Buckley & Fox 2010; Ibe & Yu 2010; McDermott, Yu & Zurek 2011; Feng, Rentala & Surujon 2012). However, as we show below, reports of the death of isotropic, velocity-independent elastic SIDM are greatly exaggerated. In this paper, we show that these earlier studies did not correctly account for the fact that the observed ellipticity (of the mass in cylinders or the projected gravitational potential) gets contributions from mass well outside the core, and the region outside the core retains its triaxiality. We also show that for ellipticity estimators that are relevant observationally, there is a significant amount of scatter, and the overlap between CDM and SIDM ellipticities is substantial even for σ/m = 1 cm²/g. Lastly, we find that in the regions where SIDM particles have suffered (on average) about one or more interactions, the residual triaxiality is larger than previously estimated (Davé et al. 2001). Along with the analysis in Rocha et al. (2012), we find that studies of the central densities of dark-matter halos are likely to yield tighter constraints on the SIDM cross section than the morphology of the halos.
We briefly summarize our simulations in Sec. 2. We present results on the three-dimensional shapes of SIDM dark-matter halos compared to their CDM counterparts in Sec. 3. We reexamine the previous SIDM constraints based on halo shapes in light of our simulations in Sec. 4. In particular, we reexamine the Miralda-Escudé (2002) constraint in Sec. 4.2 and the constraint from the shapes of the X-ray isophotes of NGC 720 (Buote et al. 2002) in Sec. 4.3. In Sec. 4.4, we show how other lensing data sets may constrain SIDM in the future. We summarize the key points of this paper and present a few final thoughts in conclusion in Sec. 5.
SIMULATIONS
We modeled self-interactions by direct-simulation Monte Carlo with a scattering algorithm derived in Appendix A of Rocha et al. (2012) and implemented within the GADGET-2 (Springel 2005) cosmological N-body code.

[Figure 1 caption: Surface density of a halo of mass M_vir = 1.2 × 10^14 M_⊙ projected along the major axis of the moment-of-inertia tensor, the orientation that dominates the lensing probability. The left column shows the halo for CDM, while the middle and right columns show the same halo simulated using SIDM with σ/m = 0.1 cm²/g and 1.0 cm²/g, respectively. The bottom row shows the same information, now zoomed in on the central region. The surface density stretches logarithmically from ≈ 10⁻³ g/cm² (blue) to ≈ 10 g/cm² (red).]

Once the code passed accuracy tests, we performed cosmological simulations of CDM and SIDM with identical initial conditions for cubic boxes of 25 h⁻¹ Mpc on a side and 50 h⁻¹ Mpc on a side, each with 512³ particles. For the SIDM runs we explored cross sections of σ/m = 1, 0.1, and 0.03 cm²/g, though the lowest cross-section run (with σ/m = 0.03 cm²/g) produced results so similar to CDM that we have not included them in any of the figures below. The initial conditions were generated using the MUSIC code (Hahn & Abel 2011) at z = 250 with a WMAP seven-year cosmology (Komatsu et al. 2011): h = 0.71, Ω_m = 0.266, Ω_Λ = 0.734, Ω_b = 0.0449, n_s = 0.963, σ_8 = 0.801. A summary of our simulation parameters, including particle mass and resolution, is provided in Table 1. We adopt a naming convention in which the simulations denoted SIDM0.1 and SIDM1 have subscripts corresponding to their cross sections in units of cm²/g. In all cases the self-interaction smoothing length, as defined in Rocha et al. (2012), was set to 2.8 times the force softening.
We locate and characterize halos using the publicly available Amiga Halo Finder (AHF; Knollmann & Knebe 2009) package. The total mass of a host halo, M_vir, is determined as the mass within a radius r_vir using the virial overdensity as defined in Bryan & Norman (1998). Though most of our analysis focuses on distinct (field) halos, we also explore subhalo shapes. For these objects, the masses are measured within the radius at which the radial density profile of the subhalo begins to rise again because of the presence of the host.
We present results on the shapes of halos at z = 0 in the radial range r_min to r_vir, where r_min is the minimum radius within which we trust the shape measurements. Since our two sets of simulations have different resolution, we can check the convergence of our shape estimates. For integral measures of shape (e.g., the moment-of-inertia tensor for all particles within a given radius), we find that although the density profiles look largely converged outside of the numerical-relaxation radius r_relax defined in Power et al. (2003), the shapes do not converge until at least r_min = 2 r_relax. This radius is roughly r_min ≈ 20 kpc for the halos in the 50 h⁻¹ Mpc-sized simulations, with only modest dependence on halo mass and scattering cross section, and r_min ≈ 10 kpc for the 25 h⁻¹ Mpc boxes. Below r_min we find that the halo shapes are systematically too round. However, shapes are more robust if found in shells (either spherical or ellipsoidal) because they are less contaminated by the effects of the overly round and numerically relaxed inner regions. Shape estimates, especially integral estimates, are most reliable if there are at least ∼ 10⁴ particles within the virial radius (or tidal radius for subhalos), consistent with what has been found in earlier work (e.g., Allgood et al. 2006; Vera-Ciro et al. 2011).

[Figure 3 caption: Host halo shapes in shells of radius scaled by the virial radius in three virial-mass bins as indicated. The black solid lines denote the 20th percentile (lowest), median (middle), and 80th percentile (highest) value of c/a at fixed r/r_vir for CDM. The blue dashed lines show the median and 20th/80th percentile ranges for σ/m = 1 cm²/g, and the green dotted lines show the same for σ/m = 0.1 cm²/g. There are 440, 65, and 50 halos in each mass bin (lowest mass bin to highest).]
Preliminary Illustration
Before presenting a statistical comparison of the CDM and SIDM halo populations, we provide a pictorial illustration of how an individual halo changes shape as we vary the cross section. In Fig. 1, we project the halo along the major axis of the moment-of-inertia tensor; in Fig. 2, we project the halo along the intermediate axis, which maximizes the deviation of the surface density from axisymmetry. This particular halo is one of the most massive halos identified in the 50 h⁻¹ Mpc box runs, with M_vir = 1.2 × 10^14 M_⊙ and r_vir = 1.27 Mpc. The top row in each figure shows the surface density on the scale of the virial radius, while the lower row shows the inner 300 h⁻¹ kpc of the halo (side-to-side). The surface-density stretch is the same between the two figures. The major and minor axes for the projections in these figures were determined using the moment-of-inertia tensor of all particles within a sphere of radius r_vir in the halo. If modeling the mass distribution as an ellipsoid, the principal axes a (major) > b (intermediate) > c (minor) are the square roots of the eigenvalues of this tensor.
In comparing Figs. 1 and 2, note that Fig. 1 is the most relevant for strong-lensing studies (Sec. 4) and shows the smallest differences, especially at large radii. Indeed, only the zoomed view of the σ/m = 1 cm 2 /g run is visibly rounder than the CDM case. Even for the intermediate projection ( Fig. 2), which maximizes the visual difference, the inner regions of the halo are only slightly rounder and less dense than their CDM counterparts for σ/m = 0.1 cm 2 /g. The σ/m = 1 cm 2 /g case is indeed less dense and rounder within ∼ 100h −1 kpc, but even in this case, some ellipticity is clearly evident.
A final point of interest in these visualizations concerns the substructure. The subhalos apparent in the CDM halo are similarly abundant in the SIDM cases, and even approximately match in their positions. There are minor differences in substructure densities and locations (especially in the central regions) but overall it is difficult to distinguish among the runs by comparing their substructure content (see Rocha et al. 2012 for a more quantitative comparison of substructure).
Three-dimensional halo shapes
We quantify halo shapes by examining ellipsoidal shells centered on the radial slices identified by AHF (Knollmann & Knebe 2009) for profile measurements. We use shells instead of enclosed volumes because shells are less sensitive to numerical relaxation effects at the center and because they are a better estimate of the effects of local dark-matter scattering. In each shell of material, we calculate a modified moment-of-inertia tensor (defined and used in Allgood et al. 2006) in ellipsoidal shells,

Ĩ^ell_ij = Σ_n x_{i,n} x_{j,n} / r̃_n²,  with  r̃_n² = x²_{1,n} + x²_{2,n}/(b/a)² + x²_{3,n}/(c/a)²,   (2)

where (x_{1,n}, x_{2,n}, x_{3,n}) are the coordinates of the nth particle in the frame of the principal axes (major, intermediate, minor) of this tensor. This is the same moment-of-inertia tensor from which shapes are inferred in Dubinski & Carlberg (1991) and Davé et al. (2001). The principal axes (a, b, c) are computed as the square roots of the eigenvalues of Ĩ^ell_ij. The weighting of the moment-of-inertia tensor is chosen such that the outermost particles in the shell do not dominate the shape estimate. We begin by finding the moment-of-inertia tensor in a spherical shell, setting a = b = c = 1 for this initial estimate of Ĩ^ell_ij, and iterate to find Ĩ^ell_ij with convergent (a, b, c) values. In each iteration, the ellipsoidal shell volume is defined using the (a, b, c) found in the previous iteration. We experimented with either keeping the semi-major axis a of the shell fixed between iterations or allowing a to float such that the volume in the shell remains fixed as we iterate to find Ĩ_ij(a), but found that c/a is insensitive to these choices. Throughout this section, we show c/a for fixed volume in the shell. We only show results for c/a because the trends for b/a are similar but less informative, since c/a indicates the deviation of the halo shape from sphericity.

In order to understand trends, we split our analysis into host dark-matter halos and subhalos, and bin halos by virial (or tidal) mass. Host halos are those whose centers do not lie within the virial radius of a more massive halo. In Fig. 3, we show the minor-to-major axis ratio c/a as a function of radius normalized by the halo virial radius, r/r_vir, for host halos in three mass bins. Larger values of c/a imply more spherical halos. For the two lower mass bins, we used halos selected from the 25 h⁻¹ Mpc boxes, since these have the higher resolution. For the highest mass bin, we used halos identified in the 50 h⁻¹ Mpc boxes in order to gain better statistics. We checked to make sure that the results were convergent between boxes where the relevant mass resolutions overlap. The shaded region corresponds to the 20th to 80th percentile of c/a for the halo population at fixed r/r_vir, and the central line shows the median value of c/a. The black solid lines and yellow shaded regions denote shapes of CDM halos, the blue dashed lines and regions correspond to σ/m = 1 cm²/g, and the green dotted lines and regions correspond to σ/m = 0.1 cm²/g. The regions extend down in r/r_vir to the largest value of r_min/r_vir in the given mass bin.
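To make the iteration concrete, here is a minimal Python sketch of the shell-shape algorithm; the function and parameter names, the shell width, and the iteration count are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def shell_axis_ratios(pos, a_shell, width=0.1, n_iter=20):
    """Iteratively estimate axis ratios (b/a, c/a) of an ellipsoidal shell
    from particle positions pos of shape (N, 3), using the reduced
    moment-of-inertia tensor of Allgood et al. (2006), Eq. (2)."""
    q, s = 1.0, 1.0                      # b/a, c/a; start from a spherical shell
    axes = np.eye(3)                     # current principal-axis frame
    for _ in range(n_iter):
        x = pos @ axes                   # rotate into the current frame
        # ellipsoidal radius of each particle in units of the semi-major axis
        r_ell = np.sqrt(x[:, 0]**2 + (x[:, 1] / q)**2 + (x[:, 2] / s)**2)
        in_shell = np.abs(r_ell / a_shell - 1.0) < width
        xs, w = x[in_shell], r_ell[in_shell]**2
        # reduced tensor: 1/r_ell^2 weights keep outer particles from dominating
        I = (xs[:, :, None] * xs[:, None, :] / w[:, None, None]).sum(axis=0)
        evals, evecs = np.linalg.eigh(I)
        order = np.argsort(evals)[::-1]  # major, intermediate, minor
        a, b, c = np.sqrt(evals[order])
        q, s = b / a, c / a
        axes = axes @ evecs[:, order]    # update the principal-axis frame
    return q, s
```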
We reproduce the well-known trend that galaxy-mass halos in CDM are more spherical than cluster mass halos and that CDM halos become more spherical in their outer parts (Allgood et al. 2006). SIDM halos deviate most strongly from CDM at smaller radii, where the scattering rates are highest for a fixed cross section. For SIDM1, halos are actually more spherical in their centers than their edges, with c/a rising with decreasing r for r/rvir < 0.5. For SIDM0.1, differences from CDM are only apparent for r/rvir < 0.1.
In Fig. 4, we compare the shapes of subhalos (thick lines) and host halos (thin lines) of similar mass by plotting the median axis ratio c/a as a function of r/r_outer, where r_outer = r_tidal for subhalos (as defined by the AHF halo finder) and r_outer = r_vir for host halos. All halos have masses within r_outer between 10^11 h⁻¹ M_⊙ and 10^12 h⁻¹ M_⊙. Though not shown, the 20th to 80th percentile ranges are similar in size to those in Fig. 3. We find that the subhalo interiors in the σ/m = 1 cm²/g cosmology are systematically rounder than those of host halos. We speculate that there are at least three effects that drive this trend. First, subhalos are typically more evolved than field halos of the same mass, with fewer recent mergers. Allgood et al. (2006) find that halos that form earlier are more spherical than halos that form later, which is attributed to directional merging, and more generally to the highly non-spherically-symmetric way in which halos form and accrete. Second, these subhalos are the remains of more massive halos, which are more susceptible to the effects of self-interactions at fixed r/r_vir, as we showed in Fig. 3. Moreover, the outer radius of the subhalos is truncated with respect to its virial value, thus increasing r/r_outer for fixed r. Since CDM halos tend to be rounder in their outer parts than in their interiors, this boosts the initial c/a at fixed r/r_vir, beyond which SIDM boosts c/a even more. We see this trend for the CDM and σ/m = 0.1 cm²/g halos in Fig. 4.

To see how the shape of dark-matter halos changes as a function of the typical local scattering rate (Eq. 1), we plot c/a as a function of ρ(r) v_rms(r) ∝ Γ(r)(σ/m)⁻¹ in Fig. 5. The proportionality constant relating ρ v_rms to the scattering rate is O(1) and depends on the distribution function of dark-matter particles. Thus it is reasonable to use ρ v_rms as a proxy for the local scattering rate modulo the actual cross section. To simplify the interpretation further, we multiply this quantity by 10 Gyr cm²/g in Fig. 5. In these units, if ρ(r) v_rms(r) > 1 for σ/m = 1 cm²/g, most particles will have scattered after 10 Gyr. For σ/m = 0.1 cm²/g, this quantity needs to be 10 times larger to achieve the same scattering rate. Generally, ρ v_rms increases as one goes in toward the halo center, so particles tend to scatter more frequently in the core than in the outer parts of the halo, where interactions are uncommon over a Hubble time. Fig. 5 shows that deviations of the halo shape from CDM begin when ρ v_rms (σ/m) × (10 Gyr) ∼ 0.1, independent of halo mass. This corresponds to approximately 10% of particles having scattered over a Hubble time at this radius. However, the changes are small compared to those where Γ × (10 Gyr) ≳ 1. We note here that the most massive halos in Davé et al. (2001) also seem to show the same qualitative behavior. However, even for large values of ρ v_rms, the deviation from sphericity is significant, a fact that is in some disagreement with the simulation results of Davé et al. (2001), where c/a ≳ 0.9 for their most massive halo. We speculate that part of this could be due to differences in the way the ellipticity was estimated, and part could be due to the smaller box run by Davé et al. (2001), which implies a quieter merger history. It is well known that CDM halos have anisotropic velocity ellipsoids and elongated shapes that are driven partially by directional merging (Allgood et al. 2006).
These mergers provide a source of anisotropy that needs to be overcome by scattering in order for halos to reach sphericity. We also note that the energy transfer facilitated by self-interactions would lead to an isotropic velocity dispersion tensor and that does not necessarily imply a rounder halo. To make this connection between isotropic velocity dispersion tensor and a rounder halo, previous analytic estimates have relied on the simulations of Davé et al. (2001).
The difference may also be a numerical artifact: we find that halo shapes only converge if there are at least 10⁴ particles in the halo and only for radii r > 2 r_relax (see Sec. 2). Most of the halos used for shape estimates in Davé et al. (2001) have only ∼ 10³ particles within the virial radius. Their largest halo does have more than enough particles for reliable shape estimates. However, for this massive halo, Davé et al. (2001) show shape measurements at radii much smaller relative to the convergence radius than we do. In our simulations, we find that halos appear artificially round below the convergence radius. Our shape measurements are consistent with those for Davé et al. (2001)'s massive halo at radii above the convergence radius.
The other major effect of energy transfer due to scattering is to create a core (see Rocha et al. 2012), and deep inside a constant-density core we must have c/a → 1. Hence we expect the log slope of the density profile to correlate strongly with the halo shape. Thus, instead of the density profile scaling as ρ ∼ r⁻¹ in the interior, as expected for CDM (Navarro, Frenk & White 1997; Navarro et al. 2004, 2010), the density profile of SIDM halos plateaus close to the center. We use the following proxy for the negative log slope of the density profile:

γ(r) ≡ 3 − d ln M(r) / d ln r,   (3)

where M(r) is the mass enclosed within radius r; for a power-law density profile this reduces to γ = −d log ρ / d log r. Based on the previous figures, we expect halo shapes to become increasingly round as γ ≲ 1, and the CDM and σ/m = 0.1 cm²/g halos should not dip below γ ≈ 1. In Fig. 6, we show c/a as a function of the log slope of the density profile, as approximated using Eq. (3), for host halos in our highest mass bin. We obtain similar results for halos of smaller mass, but only show the highest mass bin because these halos are the best resolved.
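A proxy of this kind is simple to evaluate from simulation particle data; the short sketch below (function name and the finite-difference step are illustrative assumptions) estimates γ(r) from the enclosed-mass profile.

```python
import numpy as np

def log_slope_proxy(r_part, r_eval):
    """Proxy for gamma(r) = 3 - dlnM/dlnr of Eq. (3), estimated by a
    finite difference of the enclosed particle count (equal-mass particles)."""
    r_part = np.sort(r_part)
    M = lambda r: np.searchsorted(r_part, r)          # particle count within r
    eps = 0.05                                        # illustrative half-step in ln r
    dlnM = np.log(M(r_eval * np.exp(eps)) / M(r_eval * np.exp(-eps)))
    return 3.0 - dlnM / (2 * eps)
```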
We find that, indeed, SIDM1 halos become significantly rounder as γ drops below 1 and become almost completely round when γ gets much smaller than 0.5. Interestingly, we also see that c/a deviates strongly from CDM even for relatively large values of the log slope, in regions of the halo in which the scattering is not efficient at changing the radial density profile. This is a consequence of the fact, as shown in Fig. 5, that it does not take a lot of scatters to start rounding out the halos, although it takes multiple scatters for the halos to acquire c/a axis ratios in excess of 0.8. Thus, the effects of scattering are apparent for σ/m = 1 cm²/g even when ρ ∼ r⁻² and the density profile is unaffected by scatterings. However, the observational importance of this behavior is mitigated by two factors: the change for γ ≳ 1 is mild, and it is easily within the scatter in ellipticities seen in CDM.
In summary, we find that it only takes a modest local scattering rate per particle, Γ(r) ≳ 0.1 H_0, to start changing the three-dimensional halo shape within radius r with respect to CDM. We find that SIDM1 halos get significantly rounder compared to CDM predictions when the negative log slope of the density profile satisfies γ ≲ 1, in regions where dark-matter particles have (on average) had at least one interaction in a Hubble time (Γ × (10 Gyr) > 1). Our results show that even in the limit of one or more scatterings, the halo shapes retain some of their initial triaxiality.
Defining Observables
There are a number of ways of quantifying deviations of mass distributions from spherical or axial symmetry. In the previous section, we quantified the deviations in terms of c/a, the ratio of the semi-minor to semi-major axes determined from the modified three-dimensional moment-of-inertia tensor, Eq. (2). This is rarely a practical shape estimate observationally. Instead, there is a more suitable measure of halo ellipticity or triaxiality for each type of observation. The relationship between these measures is non-trivial, so care must be taken to compare theory to observation appropriately. From paper to paper, the definition of "ellipticity" can change significantly. In order to facilitate these comparisons, we define three distinct measures of asymmetry in this section and go on to use them for specific comparisons to observational studies in Secs. 4.2, 4.3, and 4.4. The symbols we use for these shape definitions are summarized in Table 2.

For strong lensing, which we discuss in Secs. 4.2 and 4.4, what matters is the deviation of the convergence from axial symmetry. The convergence is κ = Σ/Σ_cr, where Σ_cr is the critical surface density for creating multiple images. For an axially symmetric system the convergence will be κ(θ), depending only on the angular distance on the sky, θ, from the lens center. Once axial symmetry is broken, one must consider κ(θ, φ), where φ is the azimuthal angle that rotates on the sky. Miralda-Escudé (2002) used the following quadrupole approximation to fit the surface density of MS 2137-23,

κ(θ, φ) = κ_0(θ) [1 + ε cos(2φ)],   (4)

where κ_0(θ) is the convergence averaged over azimuthal angle and ε quantifies the amplitude of the deviation of the convergence from axial symmetry. Since the normalization of κ and the angle θ depend on the source, lens, and source-to-lens distances, which generically vary, we quantify deviations from axial symmetry in terms of the surface density Σ(R, φ), where R is the two-dimensional physical radius in projection. Using Eq. (4), we define the measure of ellipticity

e(R, φ) ≡ [Σ(R, φ) − Σ_0(R)] / Σ_0(R),   (5)

which should be equivalent to ε cos(2φ) if the quadrupole expansion of the two-dimensional surface density is approximately correct. Here, Σ_0(R) is the azimuthally averaged surface density.

The second type of measure used to quantify deviations from spherical or axial symmetry in lensing arises from the extension of the double pseudo-isothermal sphere,

ρ(r) = ρ_0 / [(1 + r²/r²_core)(1 + r²/r²_cut)],

to allow for deviations of the surface-mass density from axial symmetry (double pseudo-isothermal elliptical, or dPIE; see Richard et al. 2010), with ellipticity parameter e defined in Eq. (8). The dPIE profile has the properties that ρ ∼ const for r ≪ r_core, ρ ∝ r⁻² for r_core ≪ r ≪ r_cut, and ρ ∝ r⁻⁴ for r ≫ r_cut, so that the total mass is finite. The center of the halo in projection is denoted by (x_c, y_c), with the x-direction aligned with the major axis of the distribution. This surface-density profile is often used in fits of the shapes of galaxies or dark-matter halos in clusters that strongly lens background galaxies.

The third way we will quantify halo shapes observationally is in terms of similar spheroids (see, e.g., Section 2.5 of Binney & Tremaine 2008). For spheroids, we can define an ellipticity parameter

ε = 1 − b/a,   (9)

where b is the semi-minor and a the semi-major axis. If the z-axis is the symmetry axis, the semi-major axis is given by a² = x² + y² + z²/(1 − ε)² for oblate spheroids, and a² = z² + (x² + y²)/(1 − ε)² for prolate spheroids. Similar spheroids are those for which the density profile may be described in terms of ρ(a), with ε fixed throughout the body. The isopotential surfaces of such spheroids are rounder at large distances if the body is more centrally concentrated (see the discussion in Sec. 2.5 of Binney & Tremaine 2008).

Table 2. Shape definitions used in this paper:
  e  (Eq. 5)  —  deviation from axial symmetry in lensing convergence maps (Fig. 7)
  e  (Eq. 8)  —  ellipticity in dPIE surface-density fits to the lensing signal (Fig. 11)
  ε  (Eq. 9)  —  ellipticity of a similar spheroid, used in X-ray studies

As summarized in Table 2, e is the surface-density shape definition we use in Sec. 4.2, ε is the X-ray-motivated shape definition we use in Sec. 4.3, and e is the lensing-fit shape definition used in Sec. 4.4.
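As an illustration of Eq. (5), the quadrupole amplitude can be estimated from projected particle data with a mass-weighted Fourier coefficient. The sketch below assumes positions already rotated so the major axis lies along x; the function name and bin edges are illustrative assumptions.

```python
import numpy as np

def quadrupole_amplitude(x, y, mass, R_edges):
    """Estimate eps(R) of the quadrupole expansion Eq. (4)-(5):
    if Sigma(R, phi) = Sigma_0(R)[1 + eps cos(2 phi)], then the mass-weighted
    average <cos 2phi> in an annulus equals eps/2."""
    R = np.hypot(x, y)
    phi = np.arctan2(y, x)
    eps = []
    for lo, hi in zip(R_edges[:-1], R_edges[1:]):
        sel = (R >= lo) & (R < hi)
        m = mass[sel]
        eps.append(2.0 * np.sum(m * np.cos(2 * phi[sel])) / np.sum(m))
    return np.array(eps)
```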
Revisiting Miralda-Escudé (2002)
Galaxy clusters are great places to look for the effects of velocity-independent SIDM because one may typically achieve much higher values of ρ(r) v_rms(r) at fixed r/r_vir. In addition, there are many different probes of the mass distribution of clusters that span an enormous dynamic range of radial scale: stellar kinematics and strong lensing toward the center of the cluster, weak lensing and X-ray gas distributions throughout the halo volume, and weak + strong lensing maps of the matter distribution around individual galaxies in the cluster (Sand et al. 2008; Newman et al. 2009, 2011; Kneib & Natarajan 2011). It is no surprise that the tightest constraints on velocity-independent SIDM emerged from cluster studies, and a revisit of the tightest of these constraints is the subject of this section.
The strongest published constraint on velocity-independent SIDM, σ/m ≲ 0.02 cm²/g, came from Miralda-Escudé (2002)'s study of the galaxy cluster MS 2137-23. This cluster has an estimated virial mass of ∼ 8 × 10^14 M⊙ (Gavazzi 2005), and its mass distribution has also been studied by Fort et al. (1992), Mellier, Fort & Kneib (1993), Miralda-Escude (1995), Gavazzi et al. (2003), and Sand et al. (2008). There are two strongly lensed galaxies that produce a total of five distinct images: one source has a radial image at θ ∼ 5″ from the center of the brightest cluster galaxy and an arclet at θ = 22.5″. The other source has a large tangential arc and two arclets, all at about θ = 15″ from the brightest cluster galaxy center, which corresponds to 70 kpc. In order to reproduce both the relative magnifications and the alignments of the images on the sky, the surface density must deviate from axial symmetry at 70 kpc. Quantitatively, this means that the parameter ε in Eq. (4), which corresponds to the amplitude of e_Σ given in Eq. (5), must be ε ≈ 0.2 at R = 70 kpc. This value is largely driven by the tangential arc and associated arclets (Miralda-Escude 1995). Based on the ellipticity at 70 kpc and an argument that the dark-matter surface density should be approximately axially symmetric for a typical particle collision rate Γ ≳ H0, Miralda-Escudé (2002) asserts that Γ(70 kpc) ≲ H0. Using the fact that the tangential arc should lie approximately where the mean interior convergence κ̄ = 1 (or an estimated critical density Σcr = 1 g/cm²) and the rough approximation ρ(r) ∼ Σ(r)/r, Miralda-Escudé estimates the three-dimensional density ρ(70 kpc). Using the velocity dispersion of the brightest cluster galaxy at the center of the halo as a proxy for vrms, Miralda-Escudé uses Eq. (1) to determine a limit on σ/m, which is found to be σ/m ≲ 0.02 cm²/g.
We get a sense that this line of reasoning may be flawed when we examine the surface-density plots of one of our most massive halos in Figs. 1 and 2 and our findings of Sec. 3. First, recall that our results show (cf. Fig. 5) that the inner halo shape retains some triaxiality even when Γ ≳ H0. Second, the surface density includes all matter along the line of sight, not just the material within r < R. Thus, the surface density at small R/rvir includes a lot of material with large r, far out in the halo where SIDM scatterings are unimportant. This material is still quite triaxial. Moreover, SIDM also creates cores, which means that the outskirts of the halo have an even greater weight in the total surface density than if the halo were still cuspy at the center. Empirically, we see that the simulated surface densities in Figs. 1 and 2 are quite elliptical. This point becomes more and more important as the size of the core becomes smaller. All of these things suggest that the constraint reported in Miralda-Escudé (2002) is far too stringent.
When attempting to quantify the constraints on SIDM from MS 2137-23 using our simulations, we run into the following problem: the largest halo in our simulations has a virial mass Mvir = 2.2 × 10^14 M⊙, a factor of approximately four smaller than the estimated virial mass of MS 2137-23. Moreover, we do not know the orientation of the principal axes of the cluster with respect to the line of sight. In order to make the comparison, we do two things. First, we use virial scaling relations to estimate the radius at which ρ(r)vrms(r), a proxy for the scattering rate (see Fig. 5), has the same value as it would at a radius of 70 kpc in MS 2137-23; in other words, we look for the radius at which the SIDM scattering rate should be comparable to the rate at the radius at which Miralda-Escudé (2002) derives the constraint for MS 2137-23. For our Mvir ∼ (1−2) × 10^14 M⊙ halos, r = 35 kpc is roughly the point at which the scattering rate is similar to that at 70 kpc in MS 2137-23. This is outside rmin for these halos, so we trust the shape measurements. Second, we look at several projections of the halos. We calculate Σ0(R), Σ(R, φ), and hence e_Σ(φ) for the various projections of the halos.
We show an example of e_Σ(φ) curves for lines of sight along the principal axes of the halo moment-of-inertia tensor of one of our largest halos in Fig. 7, the same halo shown in Figs. 1 and 2. As in previous figures, the solid black line denotes the CDM result, the blue dashed line the result for σ/m = 1 cm²/g, and the green dotted line the result for σ/m = 0.1 cm²/g. We do not show the curve for σ/m = 0.03 cm²/g, even though it is the cross section closest to the Miralda-Escudé (2002) constraint, because it is indistinguishable from the CDM line. The red dotted line shows the minimum amplitude of e_Σ(φ) required by the lens model of MS 2137-23. The e_Σ(φ) curves of the other massive halos look similar to the curves for the halo shown in Fig. 7. What we find is that even the σ/m = 1 cm²/g curve generally satisfies the MS 2137-23 constraint. Therefore, we find that the Miralda-Escudé (2002) constraint is in fact overly constraining by two orders of magnitude. While we do not simulate cosmologies with σ/m > 1 cm²/g, and thus cannot set a quantitative upper limit on the SIDM cross section, we may conclude that σ/m = 1 cm²/g is not ruled out by MS 2137-23.
There are a few caveats to this conclusion, which we do not believe will significantly alter our claim. First, none of our simulated clusters is as massive as MS 2137-23. This precludes us from doing a detailed comparison of the projected densities in the σ/m = 1 cm²/g model to that required to explain the arcs in MS 2137-23. It is, however, informative to do a simple calculation to gauge the importance of this effect. We appeal to the results in the companion paper (Rocha et al. 2012), which show that the density profile in SIDM for σ/m = 1 cm²/g is well fit by a Burkert profile (Burkert 1995), and that this profile deviates from CDM density profiles at radii smaller than half the Burkert scale radius. At this point, rb/2 or 0.35 (rmax/21.6 kpc)^−0.08, the density profile becomes almost constant. We use the Vmax−rmax relation seen in our CDM simulations and the NFW profile to compute the projected mass within 70 kpc as the CDM prediction, and compare that to the SIDM prediction by computing the same projected mass but assuming that, for all r < rb/2, ρSIDM(r) = ρCDM(rb/2). This computation reveals that the projected density in the σ/m = 1 cm²/g model should be about 30% lower than in CDM for the median (1−5) × 10^15 M⊙ halos. Even for the CDM case, however, the projected density is about a factor of 2 smaller than the estimated Σcr = 1 g/cm². Clearly, the estimated projected density (as opposed to the shape) could be a significant constraint on the σ/m = 1 cm²/g model. This simple computation motivates further work along these lines with more realistic SIDM density profiles, inclusion of scatter, and the uncertainty in the halo virial mass.
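The spirit of this back-of-the-envelope estimate can be reproduced with a few lines of numerical integration. The sketch below is illustrative only: the NFW parameters, the adopted value of rb/2, and the integration limits are placeholders rather than the halo-catalog medians actually used in the text.

```python
import numpy as np
from scipy import integrate

# Placeholder parameters (NOT the halo-catalog medians used in the text).
rho_s, r_s = 1.0e6, 400.0      # NFW scale density [Msun/kpc^3] and radius [kpc]
r_half_b = 150.0               # half the Burkert scale radius r_b/2 [kpc], assumed

def rho_cdm(r):
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def rho_sidm(r):
    # The text's approximation: constant density inside r_b/2, NFW outside.
    return rho_cdm(r_half_b) if r < r_half_b else rho_cdm(r)

def projected_mass(rho, R_ap=70.0, z_max=3000.0):
    """Mass in a cylinder of radius R_ap: 2*pi * int_0^R_ap R Sigma(R) dR,
    with Sigma(R) = 2 * int_0^z_max rho(sqrt(R^2 + z^2)) dz."""
    sigma = lambda R: 2.0 * integrate.quad(
        lambda z: rho(np.hypot(R, z)), 0.0, z_max)[0]
    return 2.0 * np.pi * integrate.quad(lambda R: R * sigma(R), 0.0, R_ap)[0]

ratio = projected_mass(rho_sidm) / projected_mass(rho_cdm)
print(f"SIDM/CDM projected mass within 70 kpc: {ratio:.2f}")
```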
We have started much larger-scale simulations in order to study clusters in more detail, which will include an investigation of strong-lensing cross sections as well as ellipticity. In terms of the ellipticity function e_Σ(φ), it is not clear in which direction our results will go for simulated 8 × 10^14 M⊙ halos. On one hand, large halos are more triaxial than small halos, so we would expect that, if anything, we are underestimating the degree of ellipticity in the surface-density distribution. On the other hand, large halos have larger and rounder cores for velocity-independent SIDM compared to their lower-mass cousins. This might drive the constraints the other way, although a larger core implies that the outskirts of the halo are weighted more heavily in the line-of-sight integral of the density for the surface-density calculation. Finally, several authors have noted that MS 2137-23 is actually unusually round for a galaxy cluster (Gavazzi 2005; Sand et al. 2008). And intriguingly, the modeling of both Miralda-Escude (1995) and Sand et al. (2008) indicates that a cored dark-matter radial density profile is preferred over a strongly cuspy, CDM-like profile for the dark-matter halo. Such a cored profile is more in line with what we would expect for SIDM. Tighter constraints on σ/m, or a measurement should it be non-zero, will come from an ensemble of clusters, including the most triaxial of them, a point to which we return in Sec. 4.4.

Figure 8. Distribution of halo ellipticity ϵ as defined in Eq. (9) from fits for halos with M = (3−10) × 10^12 M⊙. The black histograms indicate ellipticities for CDM halos, the blue shaded histograms show the ellipticities for halos with σ/m = 1 cm²/g, and the green dashed histogram shows ellipticities for σ/m = 0.1 cm²/g. The dark blue line with arrow shows the center and width of the best fit and uncertainty from Buote et al. (2002). The shapes are found using the Dubinski & Carlberg (1991) weighted moment-of-inertia method described in Sec. 2 and approximated as oblate or prolate spheroids. The initial minimum radius of the shell for the shape measurement is set to rmin = 8.5 kpc, and the maximum radius to rmax = 14 kpc. See text for details.
Shapes from X-ray observations of elliptical galaxies
Constraints on dark-matter scattering have also been made using the shapes of significantly smaller dark-matter halos, Mvir ∼ 10^12−10^13 M⊙ (Feng et al. 2009; Feng, Kaplinghat & Yu 2010; Buckley & Fox 2010; McDermott, Yu & Zurek 2011). In this case, the observations consist of X-ray studies of the hot gas halos of elliptical galaxies (Buote et al. 2002).
One may use the shape and twisting of the X-ray isophotes to learn about the three-dimensional shape of the dark-matter halo. Interestingly, halo shapes may be constrained with imaging data alone and do not rely on the temperature-profile modeling that is required for mass-density determinations. This "geometric argument" was first made by Binney & Strimpel (1978) and subsequently applied by a number of authors in the study of elliptical galaxies and galaxy clusters (Fabricant, Rybicki & Gorenstein 1984; Buote & Canizares 1994, 1998b; Buote et al. 2002). The geometric argument is the following: for a single-phase gas in hydrostatic equilibrium, the three-dimensional surfaces of constant gas temperature T, gas pressure pg, gas density ρg, and gravitational potential Φ all have the same shape (Binney & Strimpel 1978; Buote & Canizares 1998a). Surfaces of constant emissivity jX ∝ ρg² therefore have the same three-dimensional shape as the isopotential surfaces.
On the other hand, if spectral data are available (as they typically are with the Chandra telescope), then the temperature-profile data may be used with the assumption of hydrostatic equilibrium to fit the X-ray isophotes. Buote et al. (2002) use both approaches to model NGC 720. X-ray isophotes are a good probe of the shape of the local matter distribution for the following reasons. First, the total mass profile (galaxy + gas + dark-matter halo) is nearly isothermal (ρ ∼ r^−2) for elliptical galaxies (Humphrey et al. 2006; Gavazzi et al. 2007). This means that the shape of the isopotential contours reflects the shape of the matter distribution at the same radius, although typically the isopotential surfaces are significantly rounder than the isodensity contours (see, e.g., Binney & Tremaine 2008). Second, the density profile of the gas tends to be fairly cuspy (ρg ∼ r^−1.5). Since the emissivity goes as jX ∝ ρg², and the morphology of jX traces that of the gravitational potential (see the discussion of the geometric argument above), most of the X-ray emission along the line of sight is concentrated at radii close to the projected radius. Thus, the X-ray isophotes indicate the shape of the matter distribution at radii similar to the projected radius.
In this section, we focus on the shape measurement of the X-ray emission around the elliptical galaxy NGC 720. The inferred shape of the matter distribution of this system has been used to set constraints on self-interacting dark matter in the recent past (Feng et al. 2009; Feng, Kaplinghat & Yu 2010; Buckley & Fox 2010; McDermott, Yu & Zurek 2011). The data set used by Buote et al. (2002) for the shape measurement of NGC 720 is a 40 ks exposure of the inner 5′ of the galaxy with the ACIS-S3 camera on the Chandra telescope. The data included in the fit are contained within a 35″−185″ annulus from the center of the galaxy, which corresponds to ≈ 4.5−22.4 kpc. Buote et al. (2002) estimate the three-dimensional isopotential shapes in the following way. First, they investigate a mass-follows-optical-light (M ∝ L∗) mass profile for the gravitating mass using the geometric argument. They use a spheroidal Hernquist (1990) model for the stellar mass of the galaxy, with structural parameters determined by deprojecting the optical image to three dimensions. This is to test whether the stars in the galaxy may be sufficient to provide a gravitational potential for the gas. They find that the predicted isophotes are rounder than observed beyond about Re (≈ 50″, or 5 kpc). This suggests that there must be significant ellipsoidally distributed mass extending well beyond the effective radius of the stars, since isopotential contours become round as the distance from the main ellipsoidal mass distribution increases.
Next, Buote et al. (2002) use the fact that they find the temperature profile of the halo gas to be isothermal to obtain the three-dimensional X-ray emissivity distribution (jX ∝ ρg²) directly from the equation of hydrostatic equilibrium:

ρg(r) ∝ exp[−µ mp Φ(r)/(kB T)],

where µ is the molecular weight of the gas atoms and mp is the proton mass. They used three similar-spheroid models for the mass distribution of the galaxy (baryons plus dark matter) to find the gravitational potential Φ(r). The three models were pseudo-isothermal (ρ(a) ∝ 1/[1 + (a/ap)²], where a is defined in Eqs. (10)-(11)), Navarro-Frenk-White (Navarro, Frenk & White 1997), and Hernquist (Hernquist 1990). They fit both oblate and prolate spheroids, and assumed that the symmetry axis was in the plane of the sky, as this leads to the most elliptical isophotes for a given spheroid. Once the potential was found, they calculated the emissivity distribution in three dimensions and integrated along the line of sight. A χ² statistic was used to test the quality of the model fits to the X-ray surface-brightness map. The best-fit models were the pseudo-isothermal ones, with the oblate and prolate spheroids having nearly identical goodness-of-fit. The best-fit ellipticity is ϵ = 0.37 ± 0.03 for the oblate spheroid and ϵ = 0.36 ± 0.02 for the prolate spheroid. The flatness of the ellipticity as a function of radius in the region of interest drives the best-fit density profile toward isothermality: only for a similar isothermal spheroid are the X-ray isophotes so nearly constant in ellipticity, assuming density profiles that are similar spheroids.
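The chain of the geometric argument, from an assumed potential to an X-ray surface-brightness map via the isothermal hydrostatic relation and jX ∝ ρg², can be sketched numerically. A toy flattened logarithmic potential stands in here for the spheroidal-mass models actually fit by Buote et al. (2002), and every parameter value is an assumption for illustration.

```python
import numpy as np

# Toy flattened logarithmic potential as a stand-in for the spheroidal-mass
# potentials actually fit by Buote et al. (2002); all values are assumptions.
v0, q_phi = 400.0, 0.8            # km/s; axis ratio of the isopotentials
kT_over_mu_mp = 300.0 ** 2        # k_B*T/(mu*m_p) in (km/s)^2, isothermal gas

def potential(x, y, z):
    return 0.5 * v0**2 * np.log(1.0 + x**2 + y**2 + (z / q_phi) ** 2)

# Geometric argument for an isothermal gas in hydrostatic equilibrium:
# rho_g ∝ exp(-Phi/(kT/(mu*m_p))), and the emissivity is j_X ∝ rho_g^2.
g = np.linspace(-50.0, 50.0, 120)                    # kpc
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
j_x = np.exp(-2.0 * potential(X, Y, Z) / kT_over_mu_mp)

# X-ray surface brightness: integrate j_X along the line of sight (y here).
surface_brightness = j_x.sum(axis=1)
print(surface_brightness.shape)   # (120, 120) isophote map in the x-z plane
```

Fitting ellipses to contours of the resulting map would then yield the isophote ellipticities that are compared with the data.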
We compare our simulations to the Buote et al. (2002) results by using the weighted moment-of-inertia-tensor ellipsoidal shape estimator of Sec. 2. This method weights particles in the region of interest equally in estimating the shape of the mass distribution, and does not preferentially weight particles near the edge as the true moment-of-inertia tensor does. The initial region in which the ellipsoidal shapes are estimated is a spherical shell of radius rmin < r < rmax, where rmin is our numerical limit on the shape convergence radius and rmax = 14 kpc. We choose this value of rmax as a compromise between finding the ellipticity at small radii, where differences with CDM are largest, and having enough particles in the region of interest for a robust and unbiased shape estimate. The shape of the region of interest is deformed in each iteration of the weighted moment-of-inertia calculation to reflect the principal axes of the tensor at that iteration, keeping the region of interest ellipsoidal and fixing its volume. This region lies within the core radii of the σ/m = 1 cm²/g halos, and is our best approximation to the shape of the innermost part of the dark-matter halos, the parts relevant to the study of NGC 720. For reference, the 10^12 and 10^13 M⊙ halos have median core radii of 16 and 43 kpc, respectively (Rocha et al. 2012).
In order to find the spheroidal ellipticity (described in Eq. (9)) from these ellipsoidal fits, we use the ellipsoidal axis ratios to decide whether the halo is oblate or prolate. If prolate, we take the axis ratio of the spheroid, 1 − ϵ (cf. Eqs. (10) and (11)), to be √(bc)/a, i.e., the spheroidal semi-minor axis is the geometric mean of the ellipsoidal minor and intermediate axes. If oblate, we set 1 − ϵ = c/√(ab), i.e., the semi-major axis of the oblate spheroid is set to the geometric mean of the major and intermediate axes of the ellipsoid. Both choices set the spheroidal volume equal to the ellipsoidal volume. A more realistic analysis would compute the integrated jX directly from the potential of the simulated halo and compare that to the observations. However, there are significant other uncertainties (discussed below) that argue against such an approach being more fruitful.
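A minimal sketch of this procedure, in the style of a reduced (weighted) inertia tensor, is given below. The iteration count, the 1/q² weighting that gives each particle equal weight, and the simple prolate/oblate decision rule are our assumptions; the text does not specify these details.

```python
import numpy as np

def ellipsoid_axes(pos, r_min, r_max, n_iter=20):
    """Iterative shape of the mass in a deformed shell; returns axes a >= b >= c.

    pos : (N, 3) particle positions relative to the halo center (kpc).
    Each iteration measures the reduced inertia tensor inside the current
    ellipsoidal shell, then deforms the shell to match it at fixed volume.
    """
    axes, rot = np.ones(3), np.eye(3)
    for _ in range(n_iter):
        p = pos @ rot                                  # principal-frame coords
        q = np.sqrt(((p / axes) ** 2).sum(axis=1))     # ellipsoidal radius
        sel = (q > r_min) & (q < r_max)
        w = 1.0 / q[sel] ** 2                          # equal weight per particle
        tensor = (pos[sel].T * w) @ pos[sel]
        vals, vecs = np.linalg.eigh(tensor)
        order = np.argsort(vals)[::-1]                 # major to minor
        rot, axes = vecs[:, order], np.sqrt(vals[order])
        axes /= np.prod(axes) ** (1.0 / 3.0)           # fix the shell volume
    return axes

def spheroid_epsilon(a, b, c):
    """Reduce ellipsoid axes to the spheroid ellipticity used in the text."""
    if b / a < c / b:                 # one long axis: treat as prolate
        return 1.0 - np.sqrt(b * c) / a
    return 1.0 - c / np.sqrt(a * b)   # otherwise oblate
```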
In Fig. 8, we show the ellipticity distribution for all halos in our 25 h⁻¹ Mpc boxes for CDM and SIDM1 that have a virial mass within the 1σ uncertainties of the mass modeling in Humphrey et al. (2006) for NGC 720, Mvir = 6.6 (+2.4/−3.0) × 10^12 M⊙. Note that we do not weight the distribution by the error distribution for the virial mass presented in Humphrey et al. (2006). However, the ϵ distribution in our simulated halos is not a strong function of virial mass in this mass range. We show the CDM distributions with the black histograms, the σ/m = 1 cm²/g distributions with the cyan shaded histograms, and σ/m = 0.1 cm²/g with the dashed green histograms. We find that the inferred ellipticities are approximately independent of rmax out to approximately 25 kpc, with a small tail at higher ϵ for the σ/m = 1 cm²/g halos, since the core radii are in the 20-40 kpc range and halo shapes are less affected by scatterings at radii larger than roughly the core radius. For σ/m = 0.1 cm²/g, the core sizes are of order rmin or smaller, so the shape measurements are relatively insensitive to the rmax range. A quick review of Fig. 3 also shows that the relative independence of the ellipticity estimate out to 0.1 rvir is to be expected in each cosmology. For the σ/m = 1 cm²/g halos, we exclude those halos for which either poor centering of the halo or ongoing merging makes the halo appear artificially flattened. For the other cosmologies, the relative cuspiness of the central density leads to more accurate halo centering.
Although there are significant differences in the ellipticity distributions of the CDM and SIDM halos, the observed ellipticity (downward arrow) is just within the ellipticity distribution for σ/m = 1 cm²/g. The fact that this magnitude of ellipticity is still in the distribution for σ/m = 1 cm²/g is also apparent from the middle panel of Fig. 3, which shows that the three-dimensional axis ratios at r/rvir ≈ 0.04 (approximately the radius corresponding to the 2D region of interest for NGC 720) can accommodate significant ellipticity. Note that this radius is also close to the typical core radius, where we expect roughly one interaction per particle on average. This implies that NGC 720 could be an extreme outlier in the SIDM1 model and thereby consistent with observations. However, there are some issues with this interpretation. One issue is that the ellipticity measured for the dark-matter halo seems to be constant down to roughly ∼5 kpc in the data, which is about 1% of the virial radius for the mean virial mass of NGC 720 (Humphrey et al. 2006). The middle panel of Fig. 5 shows a sharp rise in axis ratio when the number of interactions gets above unity, and hence our results would likely change if we were able to push deeper into the core with higher-resolution simulations. A second issue is that the measured dark-matter mass within 10 kpc (Humphrey et al. 2006) is a factor of 2-3 larger than even the median mass in our CDM simulations for halos with virial mass in the range preferred by the Humphrey et al. (2006) fits. Thus we should also cut the histogram in Fig. 8 based on, say, the mass within 10 kpc (or some other measure of the concentration). These halos, by virtue of their larger densities, will also be rounder. The discrepancy in average dark-matter density within 10 kpc may also be a sign that the inner part of the dark-matter halo has been compressed as a consequence of the assembly of the elliptical galaxy, a compression that has been observed in other studies of elliptical galaxies (Schulz, Mandelbaum & Padmanabhan 2010; Dutton et al. 2011). Moreover, the presence of the stars and gas increases the velocity dispersion of the dark matter in the central parts and increases the scattering rate, an effect that is not captured in our SIDM simulations.
The resolution of these issues lies in a better comparison to the data, which in turn requires a bigger-box, higher-resolution simulation to probe deeper into the halo and gain more statistics. It will be important to include baryons to see how their presence may affect halo properties. With such a halo catalog in hand, it will be interesting to do a more careful comparison to the X-ray data of NGC 720 and other large nearby ellipticals. The addition of other ellipticals would be crucial: with only one object, the spread we see in ellipticities may be hard to overcome (although it may be smaller if we also cut on concentration, as discussed above). If we had an ensemble of shape measurements, we would be able to set tighter limits on the SIDM cross section.
While our comparison to Buote et al. (2002) is not sufficiently sharp, the weight of the arguments suggests that σ/m = 1 cm²/g is not likely to be consistent with the measured shape of NGC 720. However, based on our existing simulations, σ/m = 0.1 cm²/g is as consistent as CDM with the shape of the NGC 720 isophotes. It is interesting to note in this regard that there is no hint of a large core (∼30 kpc) in the results of Buote et al. (2002) or Humphrey et al. (2006). The core sizes for σ/m = 0.1 cm²/g are smaller, ∼7 kpc, for the same virial mass range (Rocha et al. 2012), comparable to the effective radius of the stars in NGC 720. Thus the inferred dark-matter density profile in NGC 720 may be a better way to search for effects of self-interactions.
To amplify the point about the dark-matter density further, we note that the median central (maximum) density for SIDM with σ/m = 0.1 cm²/g is 0.05 M⊙/pc³, while Humphrey et al. (2006) infer an average density of 0.04 M⊙/pc³ within 10 kpc. At 5 kpc, the inferred average density is 0.1 M⊙/pc³, still within a factor of two (expected from scatter) of the predictions. This lends credence to the argument that an analysis focused on SIDM predictions for an ensemble of nearby X-ray-detected elliptical galaxies could be a fruitful way to look for signatures of, or constrain, SIDM. It is also worth noting that the shape distribution for σ/m = 0.1 cm²/g is visibly different from CDM, and perhaps an ensemble of X-ray shape measurements could resolve the differences.
Our analysis argues for the conclusion that SIDM cross sections with σ/m ≲ 0.1 cm²/g can be hidden in X-ray data, and that previous constraints on SIDM using these data are overly stringent. Feng, Kaplinghat & Yu (2010) assumed that the Buote et al. (2002) ellipticities described the halo shape at the inner radius of R = 4.5 kpc and that Γ ∼ H0 was required to make the halo spherical, as indicated by the results of Davé et al. (2001). Our results indicate that this interpretation is flawed because (a) SIDM halos retain significant triaxiality in the region where Γ ∼ H0 and (b) the scatter in SIDM halo ellipticities is large. In order to use analytic arguments to constrain self-interacting dark-matter models, they should be tuned to reproduce the distribution of axis ratios seen in simulations.
The future of cluster lensing constraints
In the future, far better constraints on SIDM will come from statistical studies of galaxy-cluster lens samples rather than the analysis of individual objects. In this section, we focus on statistical studies of the shapes of relaxed clusters.
There are a number of ongoing and future observational programs designed to characterize the mass function of and mass distribution within galaxy clusters (e.g., LSST Science Collaborations 2009; Gill et al. 2009; Plagge et al. 2010; Richard et al. 2010; Planck Collaboration et al. 2011a,b,c; Pillepich, Porciani & Reiprich 2012; Marriage et al. 2011; Viana et al. 2012; Postman et al. 2012). Modulo the effects of baryons, a smoking-gun sign of SIDM would be for the mass function of galaxy clusters to look identical to CDM, but with lower mass density and rounder surface-mass distributions at the centers of the clusters. One would use the ensemble of galaxy-cluster data to compare with simulations of clusters in CDM and SIDM (with various elastic scattering cross sections).
In this study, we consider shape-based constraints from the initial results of the Local Cluster Substructure Survey (LoCuSS; PI: G. Smith), a multi-wavelength follow-up program of 165 low-redshift clusters selected from the ROSAT All-Sky Survey catalog (Ebeling et al. 2000; Böhringer et al. 2004; Richard et al. 2010). Twenty clusters were used for the first mass-modeling study (Richard et al. 2010). These were selected because they had been observed with the Hubble Space Telescope (HST), could be followed up spectroscopically at Keck, and were confirmed to have strongly lensed background galaxies. The details of the mass modeling of these clusters are presented in Sec. 3 of Richard et al. (2010). For our purposes, the key fact is that the surface density of the cluster-mass dark-matter-halo component of the lens model was parametrized in terms of the dPIE profile, Eq. (7). This study should be taken as an example of the power of using statistical studies of clusters for SIDM constraints, with an emphasis on the constraints on halo shapes.
In order to compare the LoCuSS clusters to simulations, we fit the surface densities of our most massive clusters in the 50 h⁻¹ Mpc boxes for our CDM, SIDM0.1, and SIDM1 runs using the dPIE surface-density profile, Eq. (7). We fit the surface densities of the halos projected along the principal axes of the moment-of-inertia tensors of the mass within the virial radius, and perform the fits within annuli Rmin < R < Rmax. The inner radius of the annuli is chosen to be Rmin = 20 kpc/h, since this is the three-dimensional shape convergence radius (see Sec. 2). We set the outer radius to Rmax = 50 kpc/h, since most of the lensing arcs in LoCuSS lie in the range 20 kpc < R < 100 kpc when projected onto the plane of the sky at the position of the galaxy clusters. We fix rcut = 1000 kpc, which is what is typically done with the LoCuSS clusters. In practice, the parameter constraints are insensitive to this choice, as all the data lie well within this radius when projected onto the sky. The free parameters that we fit are: σ0, the characteristic velocity dispersion of the cluster; rcore, the pseudo-isothermal core radius; e, the cluster ellipticity (as defined in Eq. (8)); and the position angle θ. We fix the center of the cluster to that inferred by AHF. The surface-density parameters were fit using the downhill simplex algorithm in the scipy python module, using the likelihood

L = P(Nobs | N(σ0, rcore, e, θ)) ∏i Li,

where P(Nobs | N(σ0, rcore, e, θ)) is the Poisson probability of finding Nobs simulation particles within the annulus given a dPIE model with parameters (σ0, rcore, e, θ), for which N simulation particles are expected, and Li is the probability of finding simulation particle i at position (xi, yi) given those same parameters, Li ∝ Σ(xi, yi | σ0, rcore, e, θ), where Σ(x, y) is defined in Eq. (7). Examples of the dPIE fits are shown in Figs. 9 and 10, in which we show the surface densities of one massive halo (Mvir = 1.8 × 10^14 M⊙) along the major and intermediate principal axes of the halo moment-of-inertia tensor, as well as the best-fit dPIE surface densities. The central regions of the halos are masked for R < rmin, the region where the shape profiles in three dimensions are not converged. The halo appears rounder and denser when projected along the major axis rather than the intermediate axis, just as we saw for another halo in Figs. 1 and 2. While there are noticeable differences between the CDM and σ/m = 1 cm²/g surface densities at small projected radii, the differences between CDM and σ/m = 0.1 cm²/g are more subtle.
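The unbinned fit just described can be sketched as follows; scipy.optimize.fmin is the downhill simplex routine referred to in the text. The elliptical-radius convention inside the dPIE form, the numerical normalization of the expected count, and the starting parameters are all assumptions on our part.

```python
import numpy as np
from scipy.optimize import fmin

def dpie_sigma(x, y, norm, r_core, e, theta, r_cut=1000.0):
    """dPIE-like surface density with an assumed elliptical-radius convention
    (lens-modeling codes differ in how e enters; this is an illustration)."""
    c, s = np.cos(theta), np.sin(theta)
    xr, yr = c * x + s * y, -s * x + c * y             # rotate to the major axis
    r2 = (xr * (1.0 - e)) ** 2 + (yr * (1.0 + e)) ** 2
    return norm * (1.0 / np.sqrt(r_core**2 + r2) - 1.0 / np.sqrt(r_cut**2 + r2))

def neg_log_like(params, x, y, r_min=20.0, r_max=50.0, n_grid=200):
    norm, r_core, e, theta = params
    if norm <= 0.0 or r_core <= 0.0 or not 0.0 <= e < 1.0:
        return np.inf
    # Expected count mu: integrate the model intensity over the fit annulus.
    g = np.linspace(-r_max, r_max, n_grid)
    gx, gy = np.meshgrid(g, g)
    ann = (np.hypot(gx, gy) > r_min) & (np.hypot(gx, gy) < r_max)
    mu = dpie_sigma(gx, gy, norm, r_core, e, theta)[ann].sum() * (g[1] - g[0]) ** 2
    # Extended (Poisson) unbinned negative log-likelihood, constants dropped.
    return mu - np.log(dpie_sigma(x, y, norm, r_core, e, theta)).sum()

# Demo on synthetic points in the annulus (a real fit would use halo particles).
rng = np.random.default_rng(1)
pts = rng.uniform(-50.0, 50.0, size=(20000, 2))
r = np.hypot(pts[:, 0], pts[:, 1])
x, y = pts[(r > 20.0) & (r < 50.0)].T
print(fmin(neg_log_like, [1.0, 30.0, 0.1, 0.0], args=(x, y), disp=False))
```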
To make a quantitative comparison between our simulations and the LoCuSS observations, we examine only those LoCuSS clusters that have σ0 and rcore in a similar range as the fits to the five most massive halos in our simulations. This restricts the number of relevant LoCuSS clusters to five. In Fig. 11, we show the ellipticity e distribution of the five most massive clusters in the CDM (black), σ/m = 1 cm²/g (cyan), and σ/m = 0.1 cm²/g (green hatched) simulations for lines of sight along the three principal axes of the moment-of-inertia tensors. The dark blue points with error bars show the central values and 1σ uncertainties in e for the cluster halos in the lens-model fits for the five similar LoCuSS clusters.

Figure 9. Top row: surface densities of one massive halo in the CDM (left), σ/m = 0.1 cm²/g (center), and σ/m = 1 cm²/g (right) cosmologies. The central regions in which the projected radius R < rmin are masked, as they are in the fits. These surface densities result from viewing the halo along the major axis of the moment-of-inertia tensor of all particles in the halo. Bottom row: best-fit dPIE surface-density fits for the surface densities above. The ellipticities of the dPIE fits are: CDM, e = 0.28; σ/m = 0.1 cm²/g, e = 0.25; σ/m = 1 cm²/g, e = 0.29.

As expected, the ellipticities are highest for the intermediate-axis line of sight. In this instance, the ellipticities of the CDM and σ/m = 0.1 cm²/g halos are more consistent with the LoCuSS sample than the σ/m = 1 cm²/g halos. However, the lensing probability is highest for lines of sight closely aligned with the major axis (van de Ven, Mandelbaum & Keeton 2009), and in this case even the CDM halos appear slightly rounder than the LoCuSS clusters. The σ/m = 1 cm²/g halos are definitely too round. Based on this initial set of LoCuSS halos, we believe it is safe to say that σ/m = 0.1 cm²/g is at least as consistent with observations as CDM, but that there is significant tension with σ/m = 1 cm²/g. There are several things that preclude us from making any statements stronger than this. First, we have a small sample of both simulated and observed galaxy-cluster halos. Second, we do not know the virial mass or alignment of the LoCuSS galaxy clusters, nor do we have a good handle on the selection function of the survey. This means that we cannot make a direct comparison between the simulations and the observations. Third, we do not have any galaxy-cluster-mass halos with Mvir > 2.2 × 10^14 M⊙ in our simulations, and thus our ellipticity probability distributions for the lowest-σ0 LoCuSS clusters are almost certainly biased, although it is not clear in which direction that bias goes. Based on the X-ray data, it appears likely that several of the LoCuSS clusters in our subsample are more massive than any simulated cluster, even though σ0 and rcore are similar to those of the simulated clusters (Richard et al. 2010). Given that the dPIE fits are made based on the very central regions of the clusters, it is not unexpected that the virial masses of the clusters can be very different even if the inner regions look similar. Finally, we do not include baryons in our simulations; it remains unclear how the presence of baryons alters the density profile and shape of galaxy clusters (Scannapieco et al. 2012).
However, it is fair to say that this ensemble of observed galaxy clusters already places stronger constraints on the SIDM cross section than the other constraints we considered in this section, in particular the constraint from MS 2137-23. While σ/m = 1 cm²/g is easily allowed by our reanalysis of the MS 2137-23 constraint (modulo the uncertainty in the normalization of the convergence), this value of the SIDM cross section is in some tension with the LoCuSS cluster sample. A more quantitative limit, though, will only be possible with better theoretical predictions for the shapes of cluster-mass halos and a careful analysis of observed cluster selection functions. In the future, we will simulate larger dark-matter halos and perform mock observations of them to find a better quantitative mapping between observations and SIDM cross-section limits.
It may be difficult to probe cross sections as small as σ/m = 0.1 cm²/g based on cluster halo shapes alone, though. We get a sense of how hard it may be to use strong lensing to probe halo shapes, and hence small self-interaction cross sections, from Fig. 9, in which the shapes of the simulated CDM and σ/m = 0.1 cm²/g halos do not look very different on the scales to which strong lensing is sensitive. Moreover, in Rocha et al. (2012), we estimated that core sizes for cluster-mass halos should be ≲ 20 kpc if σ/m = 0.1 cm²/g. These sizes are similar to the effective radii of the brightest cluster galaxies (BCGs) at the centers of halos. There are several implications of this fact. First, it means that stellar kinematics of the BCGs will be important for probing the dark-matter halo on scales for which scattering matters. Strong lensing is less sensitive to both the density profile and halo shapes at such small scales (see, e.g., Fig. 3 in Newman et al. 2011). Second, it means that any inference on the dark-matter halo on such scales depends on careful and accurate modeling of the BCGs in the data analysis. Third, we would have to model the behavior of SIDM in the presence of a significant baryon-generated gravitational potential, and to explore the coevolution of dark-matter halos and BCGs. In particular, simulations of isolated disky galaxies indicate that the presence of baryons tends to make the dark-matter distribution more spherical, although it is not clear how much the dark-matter distribution changes if the central galaxy is elliptical instead (Debattista et al. 2008). We note that while these issues also have implications for SIDM constraints based on radial density profiles or central densities, they may be more serious for the shape-based constraints because the shapes of σ/m = 0.1 cm²/g halos are already so similar to those of CDM halos at small radii, even in the absence of baryons.

Figure 11. Ellipticity e (Eq. 8) of halos fit with dPIE profiles for three different projections. The solid black histograms show e values for the five most massive CDM halos in the 50 h⁻¹ Mpc simulation, the cyan histogram shows the same halos in the SIDM1 simulation, and the green hatched histogram is for SIDM0.1. The dark blue points with uncertainties show the best-fit ellipticities and their 1σ uncertainties from dPIE modeling of the five LoCuSS clusters with σ0 and rcore similar to those of the simulated clusters (Richard et al. 2010).
CONCLUSIONS
The takeaway message of this work is that mapping observations to constraints on the self-interaction cross section of dark matter is significantly more subtle than previously assumed, and as such, constraints based on halo shapes are, at present, one to two orders of magnitude weaker than previously claimed.
There are three primary reasons contributing to this conclusion. First, the observational probes (gravitational lensing and X-ray surface brightness) of halo shapes are actually probes of some moment of the mass distribution. For lensing, the observational probes are also sensitive to all material along the line of sight. While SIDM makes the three-dimensional density distribution significantly rounder within some inner radius r, the surface density will in general not be axially symmetric at a projected radius R = r. The surface densities are affected by material well outside the core set by scatterings, where material is still quite triaxial (Fig. 3). Previous constraints were made under the assumption that the observations tracked the three-dimensional halo shape at fixed projected radius. This is a less troublesome assumption for X-ray isophotes, since the emission is weighted by the square of the gas density, and hence is sensitive to the central regions. The shapes measured should be related most closely to the shapes of the enclosed mass profile. So, to probe cross sections as small as σ/m = 0.1 cm²/g, one needs to get down to O(10 kpc) from the centers of the halos. The contribution of stars in this region makes it difficult to robustly estimate the shape of the dark-matter profile, and it also makes it difficult to assemble a large ensemble of galaxies for this study.
Second, there is a fair bit of scatter added by assembly history to the observed shapes, and the scatter is large enough that it precludes using a small number of objects to set constraints on SIDM cross sections. Finally, although we find that the three-dimensional shapes of halos begin to become more spherical than CDM at radii where the local interaction rate is fairly low, Γ(r) ≈ 0.1 H0, there is a fair amount of triaxiality even when Γ(r) ≈ H0, a fact that was not appreciated in earlier studies (e.g., Miralda-Escudé 2002; Feng, Kaplinghat & Yu 2010).

We find that the convergence map of MS 2137-23 (Mvir ∼ 10^15 M⊙) allows a velocity-independent SIDM cross section of σ/m = 1 cm²/g. The X-ray isophotes of NGC 720 (Mvir ∼ 10^13 M⊙) likely rule out σ/m = 1 cm²/g, but are consistent with σ/m = 0.1 cm²/g at radii where we can resolve shapes in our simulations. Based on a preliminary comparison to lensing models of LoCuSS clusters, we conclude that σ/m = 0.1 cm²/g is as consistent with observations as CDM, but that σ/m = 1 cm²/g is likely too large to be consistent with the observed shapes of those clusters.
Cross sections in this range are very interesting. In Rocha et al. (2012), we show that a cross section in the neighborhood of σ/m = 0.1 cm²/g could solve the "Too Big to Fail" problem for the Milky Way dwarf spheroidals (Boylan-Kolchin, Bullock & Kaplinghat 2012), the core-cusp problem in LSB galaxies (Kuzio de Naray et al. 2010; de Blok 2010), as well as the shallow density profiles of the galaxy clusters in Sand et al. (2008), Newman et al. (2009), and Newman et al. (2011), while not undershooting their central densities or overshooting the core sizes. Cross sections in this range are also consistent with other density-profile-based and subhalo-based constraints (Yoshida et al. 2000b; Gnedin & Ostriker 2001).
Since the current set of observations appears to be consistent with a SIDM cross section of σ/m = 0.1 cm²/g, there are two relevant questions for shape-based SIDM constraints. Will shape-based constraints be competitive with other types of SIDM constraints? And what will it take to get down to σ/m ∼ 0.1 cm²/g with shapes?
Upon closer inspection, our view is that constraints using existing data could be pushed below σ/m = 1 cm²/g, but it is not yet clear that we can get to σ/m ∼ 0.1 cm²/g. While X-ray isophotes of the elliptical galaxy NGC 720 are consistent with σ/m = 0.1 cm²/g, there are some differences, and a larger ensemble of elliptical galaxies may be able to test them. There are a number of other elliptical galaxies for which high-resolution X-ray data exist (e.g., Humphrey et al. 2006), but they lack the detailed shape measurements of NGC 720, so better constraints could result from X-ray shape analyses of these galaxies. For clusters, based on our quick pass through the LoCuSS results, lensing-based shape constraints on SIDM could also extend well below σ/m = 1 cm²/g if simulations are performed for a statistically significant number of massive galaxy clusters. However, in studies of both galaxies and clusters, it is likely that the measured densities in the inner regions will be a better way to test for signatures of self-interacting dark matter.
Angiotensin II system in the nucleus tractus solitarii contributes to autonomic dysreflexia in rats with spinal cord injury
Background: Autonomic dysreflexia (AD) is a potentially life-threatening complication of spinal cord injury (SCI), characterized by episodic hypertension induced by colon or bladder distension. The objective of this study was to determine the role of impaired baroreflex regulation by the nucleus tractus solitarii (NTS) in the occurrence of AD in a rat model.

Methods: A T4 spinal cord transection model was used in 40 male rats. Colorectal distension (CD) was performed six weeks after the operation to assess AD and to compare the changes in BP, HR, and BRS. SCI rats in which AD was successfully induced were then selected. Losartan was microinjected into the NTS of these SCI rats and, 10, 30, and 60 minutes later, CD was performed to measure the changes in BP, HR, and BRS, in order to determine whether the Ang II system is involved in the occurrence of AD. Ang II was then infused intracerebroventricularly in sham-operated rats subjected to CD to mimic the activation of the Ang II system in AD. Finally, the level of Ang II in the NTS and the colocalization of AT1R and the NMDA receptor within NTS neurons were examined in SCI rats.

Results: Compared with sham operation, SCI significantly aggravated the elevation of blood pressure (BP) and impaired the baroreflex sensitivity (BRS) induced by colorectal distension; both changes were significantly improved by microinjection of the angiotensin receptor type 1 (AT1R) antagonist losartan into the NTS. The level of angiotensin II (Ang II) in the NTS was significantly higher in SCI rats than in sham-operated rats. Intracerebroventricular infusion of Ang II also mimicked the changes in BP and BRS induced by colorectal distension. Blockade of the baroreflex by sinoaortic denervation prevented the beneficial effect of losartan on AD.

Conclusion: Activation of the Ang II system in the NTS may impair the blood pressure baroreflex and contribute to AD after SCI.
Introduction
Autonomic dysreflexia (AD) commonly occurs in patients with spinal cord injury (SCI) at or above the level of the sixth thoracic segment (T6) [1]. An episode of AD is characterized by a large elevation of blood pressure (BP) with bradycardia or, sometimes, tachycardia. It is generally accepted that an increase greater than 20-30 mmHg in systolic BP can be considered AD [2]. In a subgroup of SCI patients, especially those with cervical and high thoracic SCI, a dysreflexic episode may be missed because BP rises only into the normal or slightly elevated range, as the resting BP in these patients is often lower than that in normal individuals [3]. The episodic hypertension is often triggered by urinary bladder or colon irritation [2,4].
Although AD usually presents as mild discomfort, it can also cause intracranial hemorrhage, coronary artery constriction, retinal detachment, pulmonary edema, etc. [5,6], and can become a devastating event. The incidence and severity of AD are affected by several factors, such as the level of injury, the completeness of injury, and the time after injury [2]. AD is reported to occur in up to 70% of patients with quadriplegia and high paraplegia in the stage of chronic SCI [7,8], and is a primary cause of morbidity and mortality [9,10].
Several hypotheses have been proposed for the development of AD. Plastic changes within the spinal cord and peripheral autonomic circuits are thought to be the main cause of the autonomic instability that leads to elevated BP [11]. The loss of supraspinal input contributes to this situation [12]. Altered sensitivity of peripheral alpha-adrenergic receptors is another possible explanation [13]. All these proposed mechanisms focus on changes in the peripheral nervous system. Although a few studies have reported that propriospinal changes and alterations in interneuronal sensitivity following spinal cord injury [14,15], interruption of tonically active medullo-spinal pathways [16], and sprouting of primary afferent fibers [17] are probably involved, the role the central nervous system plays in the progression of AD remains largely undetermined.
It is well known that the BP baroreflex, a negative feedback loop, rapidly responds to elevated BP in a healthy body. The reasonable inference that baroreflex sensitivity (BRS) is impaired in patients with SCI has been confirmed [18]. The nucleus tractus solitarii (NTS), a relay station of baroreflex transmission, plays a key role in the tonic and reflex control of cardiovascular activity. Abnormalities in the NTS have been found to inhibit BRS and lead to severe hypertension [19,20], and an increase of angiotensin II (Ang II) in the NTS is relevant to this process [21,22]. Activation of the central Ang II system in the NTS has been reported to participate in this hypertension [23,24]. Microinjection of Ang II into the NTS has been found to depress baroreflex function, whereas microinjection of the antagonist of the primary Ang II receptor, angiotensin II receptor type 1 (AT1R), facilitates the BRS [25].
In the present study, the principal goal was to clarify whether blunted BRS function is involved in the hypertension induced by AD, and to examine the relationship between AD and the Ang II system in the NTS. We used injections of sodium nitroprusside and phenylephrine to measure BRS in SCI rats at baseline and during AD triggered by colon distension. We also investigated the effect of bilateral microinjection of the AT1R blocker losartan into the NTS on the elevated BP of AD. Furthermore, the experiments were repeated after sinoaortic denervation in SCI rats and after intracerebroventricular administration of Ang II in intact rats to test and verify our hypothesis.
Animals
All studies conformed to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1996) and were approved by the Institutional Care and Use Committee of the Second Military Medical University. A total of 40 male Sprague-Dawley rats (Sino-British SIPPR/BK Laboratory Animal Ltd, Shanghai, China) weighing between 300 and 350 g were enrolled in the study and housed in light- and temperature-controlled cages with food and water available ad libitum.
Model preparation
A T4 spinal cord transection model was used in this study, in accordance with previous studies [26]. Rats were anesthetized with 3% isoflurane. After a median incision of the back and dissection of the fascia and muscles in layers, a T3 laminectomy was performed to expose the T4 spinal cord. In the SCI group, both the dura mater and the spinal cord were completely transected with micro-scissors. The completeness of the transection was verified by visual inspection of the lesion site. Absorbable gelatin sponge was inserted into the gap to reduce bleeding and was laid over the vertebral canal to protect the spinal cord. In the control group, the T4 spinal cord was exposed but not transected. All operative procedures were performed with aseptic surgical technique. Penicillin was administered for three days after the operation. During the recovery period (about one week), all rats were attended to three times a day. Physical manipulations were performed to prevent pressure sores, and gentle manual compression was used to empty the bladder until the rats regained automatic micturition. All rats were allowed to recover for six weeks.
Surgery and experimental protocol six weeks after SCI
Six weeks after T4 transection, 30 SCI and 20 sham-operated rats were anesthetized with intraperitoneal α-chloralose (40 mg/kg) and urethane (800 mg/kg). The trachea was cannulated and the rats were artificially ventilated with a mixture of 100% oxygen and room air for assistance. The left femoral artery was cannulated using polyethylene tubing (PE10; Smiths Medical, UK) to record BP and heart rate (HR) with a PowerLab/8SP system (ADInstruments, Australia). Both the left and right femoral veins were cannulated using polyethylene tubing (PE50) for fluid and drug infusion. Rats were placed in a stereotaxic frame (Narishige, Japan), and the dorsal surface of the medulla oblongata was exposed by removing part of the occipital bone and dura after incising the atlanto-occipital membrane. Body temperature was maintained at about 37˚C with a temperature controller (World Precision Instruments, USA). Colorectal distension (CD) was performed in SCI and sham-operated rats to assess AD and to compare the changes in BP, HR, and BRS between the two groups. SCI rats in which AD was successfully induced were then selected for the subsequent experiments. Losartan was microinjected into the NTS of SCI rats and, 10, 30, and 60 minutes later, CD was performed to measure the changes in BP, HR, and BRS, in order to determine whether the Ang II system is involved in the occurrence of AD. Ang II was then infused intracerebroventricularly in sham-operated rats subjected to CD to mimic the activation of the Ang II system in AD. To further examine the function of the baroreflex in the occurrence of AD, sinoaortic denervation was performed in SCI rats. Finally, the level of Ang II in the NTS and the colocalization of AT1R and the NMDA receptor within NTS neurons were examined in SCI rats.
Assessing AD triggered by colorectal distension

AD was assessed in rats six weeks after SCI or sham operation [4]. After anesthesia, CD was performed using a latex balloon-tipped Swan-Ganz catheter (Edwards Lifesciences, USA). The catheter was inserted 2 cm into the rectum, fixed to the tail with tape, and left in place for about 10 min until BP became stable. A volume of 2 ml of air was injected to inflate the balloon catheter for about 1 min to mimic noxious colorectal distension [27]. This volume of air generated a pressure of about 35 mmHg, measured through a side arm of the catheter. Colorectal distension was performed twice with an interval of 10 min, and the BP values before and after colorectal distension were each recorded twice and averaged. An elevation greater than 20% above baseline was considered AD [2]; rats in which AD could not be induced were excluded from the subsequent experiments.
NTS microinjections
The effect of microinjection of losartan (an AT1R antagonist) into the NTS on the changes in BP and BRS was explored in SCI rats in which AD could be observed. NTS microinjection (volume of 50 nl) was performed with three-barrel micropipettes (20-30 μm tips) using a pneumatic pressure injector (World Precision Instruments, USA) 10 minutes after anesthesia. The NTS was located 0.4-0.5 mm rostral, 0.5-0.6 mm lateral, and 0.4-0.5 mm deep to the calamus scriptorius, and was functionally identified by a rapid depressor response (>25 mmHg) to injection of 1 nmol L-glutamate [28]. An interval of 60 s was allowed between bilateral injections. After identification of the NTS and a 10-minute washout, 1.6 nmol losartan was injected for the follow-up experiments. L-glutamate and losartan were dissolved in artificial cerebrospinal fluid (aCSF). Vehicle injections were performed as a control to exclude nonspecific effects. The injection sites were confirmed by microinjection of 50 nl of 2% pontamine sky blue for histological analysis at the end of each experiment.
Measurement of baroreflex sensitivity
The most common method for quantitatively assessing BRS is to alter BP pharmacologically, based on the Oxford and modified Oxford techniques [29]. In our study, sodium nitroprusside (100 μg/kg) was used to decrease BP (to 40-50 mmHg), which was then increased (to 140-150 mmHg) with phenylephrine (80 μg/kg) [28]. The BRS was calculated as the ratio of the change in HR (beats/min) to the change in MAP (mmHg), i.e., ΔHR/ΔMAP.
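For reference, a minimal sketch of the ΔHR/ΔMAP calculation from recorded traces follows; the sampling rate, the averaging windows, and the toy step responses are assumptions made purely for illustration.

```python
import numpy as np

def brs(map_trace, hr_trace, fs, base_win=(0, 30), resp_win=(60, 90)):
    """BRS = delta HR / delta MAP (beats/min per mmHg) between two time windows.

    map_trace, hr_trace : arrays of MAP (mmHg) and HR (beats/min), sampled at fs (Hz)
    base_win, resp_win  : (start, end) seconds for baseline and drug-response windows
    """
    def window_mean(x, win):
        i0, i1 = int(win[0] * fs), int(win[1] * fs)
        return x[i0:i1].mean()

    d_map = window_mean(map_trace, resp_win) - window_mean(map_trace, base_win)
    d_hr = window_mean(hr_trace, resp_win) - window_mean(hr_trace, base_win)
    return d_hr / d_map

# Example: a phenylephrine challenge raising MAP by ~40 mmHg with reflex bradycardia.
fs = 100.0
t = np.arange(0, 90, 1 / fs)
map_trace = 100 + 40 * (t > 45)        # toy step instead of a real pressure ramp
hr_trace = 380 - 20 * (t > 45)
print(brs(map_trace, hr_trace, fs, base_win=(0, 40), resp_win=(50, 90)))  # -0.5
```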
Intra-cerebroventricular infusion
This experiment was performed to verify the effect of central Ang II on the MAP elevation induced by colorectal distension. After anesthesia, the atlanto-occipital membrane was exposed and the fourth ventricle was punctured with a stainless-steel cannula, with correct placement verified by effusion of cerebrospinal fluid. The cannula was connected to a 0.5 ml syringe via a 30 cm flexible tube. Ang II was infused at a rate of 300 μl/h for one hour at a concentration of 150 pmol/100 μl (a total dose of 450 pmol). The MAP and HR responses to colorectal distension were observed 30 min and 60 min after the start of the central infusion of Ang II.
Sinoaortic denervation (SAD)
SAD eliminates the baroreflex by removing afferent input from the arterial baroreceptors [30]. After anesthesia of rats in which AD had been observed, atropine (0.5 mg/kg) was administered to prevent salivary secretion. Following a midline cervical incision, the bilateral superior laryngeal nerves were sectioned. The connective tissue of the carotid bifurcation regions was stripped, and the area was painted with 10% phenol in ethanol. After the SAD surgery, baroreceptor denervation was considered complete if the decrease in HR in response to phenylephrine was no more than 6 bpm [31].
Measurement of Ang II
Rats were euthanized with an overdose of pentobarbital sodium (200 mg/kg), and the brains were removed and rapidly frozen in liquid nitrogen. NTS tissue was punched out in a cryostat according to a rat brain atlas, lysed in lysis buffer, and sonicated. After centrifugation, the supernatants were collected for measurement of Ang II. The level of Ang II was measured with an ELISA kit (no. F15050, Shanghai Westang Biotech Co., Ltd) according to the manufacturer's instructions. In brief, protein samples were added to coated wells for 40 minutes at 37˚C and the wells were washed 3 times; biotinylated antibody was added for 20 minutes at 37˚C and the wells were washed 3 times; the samples were then reacted with horseradish peroxidase-conjugated secondary antibody and incubated with substrate (TMB solution). The concentration of Ang II was determined from the absorbance at 450 nm with an automated microplate reader and calculated according to the standard curve. The level of Ang II in the NTS was expressed as the ratio of the concentration of Ang II to the concentration of total protein in the sample.
Immunohistochemistry
To detect the colocalization of AT1R and the NMDA receptor within NTS neurons, double-staining immunohistochemistry was performed. The rats were euthanized with an overdose of pentobarbital sodium (200 mg/kg) and perfused through the aorta with 0.9% NaCl solution followed by 4% paraformaldehyde in 0.1 mol/L phosphate buffer. The brains were removed and post-fixed overnight in 4% paraformaldehyde in 0.1 mol/L phosphate buffer. The brain blocks were then transferred to 20% sucrose in phosphate-buffered saline (PBS) and kept in the solution until they sank to the bottom, after which they were rapidly frozen. Sections of 20 μm thickness were cut in a cryostat and floated in PBS. The sections were pre-incubated in antibody dilution solution (5% bovine serum albumin fraction V, 0.2% Triton X-100, and 0.05% sodium azide in PBS) for 30 min at room temperature (RT), followed by incubation with the first primary antibody, an NMDAR1 antibody (Abcam), overnight at 4˚C. Subsequently, the sections were incubated with FITC-conjugated IgG. The sections were then incubated with the second primary antibody, against AT1R (Sigma-Aldrich, Anti-AGTR1/AT1), overnight at 4˚C, and subsequently with TRITC-conjugated IgG. All incubations and reactions were separated by washes in PBS (3 times, 5 min each). Finally, the sections were mounted on slides and embedded. Images were taken with an Olympus DP72 digital camera (Olympus, Japan) attached to an Olympus microscope (IX71, Olympus, Japan).
Statistical analysis
All data are presented as mean ± SE. Paired t-tests were used to compare the changes in MAP, HR, and BRS before and after colorectal distension. The changes in MAP, HR, and BRS in response to losartan and Ang II injections at different time points were analyzed using repeated-measures one-way ANOVA with post hoc Student-Newman-Keuls tests. Differences were considered significant at p < 0.05.
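For readers reproducing these analyses, the tests described map onto standard Python routines as sketched below. The numbers are toy values rather than study data, and statsmodels' AnovaRM is used for the repeated-measures ANOVA; a dedicated Student-Newman-Keuls post hoc is not available in scipy, so that step is omitted here.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Paired t-test: MAP before vs. after colorectal distension in the same rats.
map_before = np.array([98.0, 102.0, 95.0, 100.0, 97.0])   # toy values only
map_after = np.array([130.0, 128.0, 127.0, 135.0, 126.0])
t, p = stats.ttest_rel(map_before, map_after)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")

# Repeated-measures one-way ANOVA: MAP change at 10, 30, 60 min after losartan.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rat": np.repeat(np.arange(5), 3),
    "time": np.tile(["10min", "30min", "60min"], 5),
    "d_map": rng.normal(loc=[15.0, 18.0, 28.0] * 5, scale=3.0),
})
print(AnovaRM(df, depvar="d_map", subject="rat", within=["time"]).fit())
```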
Effect of colorectal distension on BP and HR in SCI rats
No significant difference was found in baseline mean arterial pressure (MAP) or HR between the SCI and control groups (n = 5) six weeks after SCI or the sham procedure. In SCI rats, MAP increased by 31 ± 6 mmHg after colorectal distension, indicating that AD was successfully induced. In the control group, MAP increased by only 5 ± 4 mmHg after colorectal distension, significantly less than in the SCI group (p < 0.05). Similarly, the decrease in HR induced by colorectal distension was significantly (p < 0.05) greater in the SCI group than in the control group (19 ± 5 vs. 4 ± 3 bpm) (Fig 1 and Table 1). After these tests, the SCI rats in which AD was successfully induced by colorectal distension were selected for the subsequent experiments.
Effect of colorectal distension on BRS
The BRS was measured at rest and in response to colorectal distension in SCI rats in which AD had been successfully induced and in sham-operated rats. BRS in the SCI group was significantly reduced compared with the control group (-0.47 ± 0.08 vs -0.85 ± 0.14 bpm/mmHg), and it was further (p < 0.05) blunted by colorectal distension (-0.29 ± 0.07 bpm/mmHg) (Fig 2).
Effects of Ang II in NTS on MAP and BRS in response to colorectal distension
The changes in MAP and BRS in response to CD were measured 10, 30, and 60 min after bilateral microinjection of losartan into the NTS (n = 5). The MAP elevation induced by colorectal distension was significantly attenuated, and the CD-induced decrease in BRS was significantly improved, 10 and 30 min after NTS injection of losartan (Fig 3). The vehicle effect of aCSF was also examined, and no difference was detected in the changes in MAP and BRS in response to CD after the injection of aCSF. In addition, we found that the level of Ang II in the NTS was significantly increased (0.39 ± 0.05 vs 0.19 ± 0.02 ng/mg) in rats after SCI (n = 5). Control rats (n = 5) received central infusion of Ang II into the fourth ventricle to determine the effect of AT1R activation on MAP and BRS in response to colorectal distension. After Ang II treatment (30 and 60 min), the elevation of MAP and the decrease in BRS in response to colorectal distension were significantly (p < 0.05) amplified (Fig 4).
Effects of SAD on MAP and BRS change by colorectal distension
SAD was performed in SCI rats to further verify the important role the baroreflex plays in AD and to explore whether this effect could be blocked by microinjection of losartan into the NTS (n = 5). After SAD, MAP increased and BRS decreased significantly (p < 0.05), indicating successful denervation [29]. In these SAD rats, microinjection of losartan into the NTS failed to attenuate the MAP response to colorectal distension (Fig 5).
Colocalization of AT1R and NMDAR1 receptor in the NTS neurons
As shown in S1 Fig, the glutamate receptor subtype N-methyl-D-aspartic acid receptor (NMDAR) and AT1R were colocalized in the same NTS neurons.
Discussion
AD is a severe complication in the chronic stage of SCI, and its underlying mechanisms have not been fully elucidated. Current explanations of the hypertension induced by AD focus mostly on the peripheral system, such as elevated tonic activity of arterial smooth muscle and malfunction of nerve conduction at the spinal level [32–34]. It is rarely considered that the central nervous system might play a role in AD, because sensory information seemingly cannot be transmitted from the site of irritation (for example, the colon) to the brainstem if the spinal cord is completely transected. However, this does not necessarily mean that the central nervous system has no impact on the onset and progression of AD. The most obvious candidate is baroreflex control of BP. After T4 spinal cord transection, nociceptive transmission in the spinothalamic tract and motor transmission in the corticospinal tract are cut off. Although the anatomic structures of the baroreflex and the cardiac sympathetic system are retained, the autonomic nervous system also undergoes adaptive changes over time. Therefore, we aimed to test the hypothesis that the NTS might be involved in AD after SCI.
In our study, we found that microinjection of the AT1R antagonist losartan into the NTS effectively attenuated the BP elevation in response to AD triggered by colorectal distension and improved the AD-induced blunting of BRS. We also demonstrated that AT1R activation by central infusion of Ang II in control rats induced MAP and BRS changes similar to those evoked by colorectal distension after SCI. These findings indicate that the Ang II system in the NTS may participate in AD. Furthermore, SAD was performed to verify whether the baroreflex mechanism played a role in the above effect in SCI rats during AD.
It is known that the NTS is the relay station of blood pressure regulation. When blood pressure rises, the baroreceptors transfer impulses to the NTS via the sinus nerve (a branch of the vagus nerve) and then to the dorsal vagal nucleus, which transfers impulses to vagal efferent fibers, slowing the heart rate and dilating peripheral vessels; both of these changes lower blood pressure. When blood pressure falls, the opposite adjustments occur. Ang II is a major vascular-neural transmitter in the NTS. Recent studies have shown that activation of the renin-angiotensin system is an important factor in the development of neurogenic hypertension. After injection of Ang II into the NTS, endothelial nitric oxide synthase (eNOS), which is important for regulating blood pressure and heart rate, is inhibited, and the release of the inhibitory neurotransmitter GABA is increased. All of these effects shunt the neural and electrical signals of the baroreceptors entering the NTS, inhibit signal transduction in the vascular pressure feedback pathway, and lead to up-regulation of the blood pressure set point [35,36]. The AT1R is reported to be expressed in the NTS and to be involved in central control of BP and baroreflex transmission [37]. Increased Ang II is an important mechanism responsible for cardiovascular dysfunction in hypertension and heart failure. In this study, we also found that the Ang II level in the NTS was increased in SCI rats. Our results support the hypothesis that activation of AT1R in the NTS contributed to the elevation of BP during AD through a reduction of BRS, and blockade of AT1R by losartan significantly blunted this effect. However, the exact mechanism by which the Ang II system in the NTS changes in the setting of SCI is not clear. A limitation of this work is that only losartan injection was used to block AT1R in the NTS; gene knockdown of AT1R by retroviral or lentiviral shRNA would be more specific for determining the chronic contribution of AT1R to the SCI-induced changes in cardiovascular function. Ang II system activation can be amplified or prolonged by several factors, such as increased generation of Ang II, decreased degradation of Ang II, and up-regulation or supersensitivity of AT1R [25,28,38–40]. It is well known that the transmitter glutamate in the NTS plays an important role in mediating resting BP and baroreflex transmission. We found that the glutamate receptor subtype NMDA and AT1R were coexpressed in NTS neurons, suggesting that functional changes in AT1R induced by SCI may affect excitatory synaptic transmission in the NTS. It is possible that SCI activates the renin-angiotensin system and thereby affects the maintenance of resting MAP and its sensitivity to AD. In the acute stage of SCI, the lower level of BP may cause an oxygen deficit in neurons of the brain, including the medulla oblongata, which may require overactivation of the Ang II system in the NTS to keep the resting blood pressure at a normal level. We confirmed that the expression of AT1R in the NTS was upregulated in rats with SCI. It is reported that Ang II in the NTS attenuates baroreflex function, whereas losartan improves its sensitivity. It has also been demonstrated that the BRS is severely impaired after SCI [41]. Therefore, when AD occurred, the Ang II system was activated with a bolus of Ang II release, which resulted in further blunting of BRS.
In conclusion, activation of the Ang II system in the NTS may impair the blood pressure baroreflex and contribute to the occurrence and deterioration of AD in SCI rats.
A network-centric approach to drugging TNF-induced NF-κB signaling
Target-centric drug development strategies prioritize single-target potency in vitro and do not account for connectivity and multi-target effects within a signal transduction network. Here, we present a systems biology approach that combines transcriptomic and structural analyses with live-cell imaging to predict small molecule inhibitors of TNF-induced NF-κB signaling and elucidate the network response. We identify two first-in-class small molecules that inhibit the NF-κB signaling pathway by preventing the maturation of a rate-limiting multiprotein complex necessary for IKK activation. Our findings suggest that a network-centric drug discovery approach is a promising strategy to evaluate the impact of pharmacologic intervention in signaling.
A dynamic and complex network of interacting proteins regulates cellular behavior. Traditional target-centric drug development strategies prioritize single-target potency in vitro to modulate key signaling pathway components within the network and produce a desired phenotype. Target-centric strategies use biochemical assays to optimize specificity and affinity of small molecules for a protein class, such as protein kinases, or a specific enzyme. In some cases, an effective inhibitor is comparable with gene knockdown (KD) that reduces or completely removes the target protein from the network. However, given that pleiotropy is prevalent among disease-associated proteins, compounds that disrupt specific protein-protein interactions (PPI) while leaving others intact are attractive, especially when complete disruption is detrimental to the cell 1,2. Small molecules are a promising class of PPI inhibitors to perturb signaling networks in vivo, but they are technically difficult to identify and assess. Instead, many PPI inhibitors are derived from competitive peptides with challenging cell permeability and pharmacokinetic properties 3.
Tumor necrosis factor (TNF)-induced nuclear factor (NF)-κB signaling is an example of a tightly regulated and therapeutically relevant pathway that has resisted target-centric drug discovery. TNF is an inflammatory cytokine that initiates dynamic intracellular signals when bound to its cognate TNF receptor (TNFR1). In response to TNF, the IκB-kinase (IKK) complex is rapidly recruited from the cytoplasm to polyubiquitin scaffolds near the ligated receptor where it is activated through induced proximity with its regulatory kinase, TAK1 4–10. When fully assembled, the mature TNFR1 complex (Fig. 1a) is a master regulator of inflammation-dependent NF-κB signaling. NF-κB inhibitor proteins (IκB) are degraded soon after phosphorylation by activated IKKs, and the NF-κB transcription factor accumulates in the nucleus to regulate TNF-induced transcription. Since changes in the subcellular localization of IKK and NF-κB transmit stimulus-specific information 11–14, these dynamic features can be used to demonstrate pharmacologic alterations to inflammatory signaling 15.
Chemicals that modulate inflammation-dependent IKK and NF-κB signals are of considerable therapeutic interest. Activated NF-κB regulates the expression of hundreds of genes that mediate signals for inflammation, proliferation, and survival 16–21, and its deregulation is linked to chronic inflammation as well as the development and progression of various cancers 22–25. As pleiotropic proteins, IKK and NF-κB are poor targets for inhibitors because they provide basal activity as survival factors independent of inflammatory signaling 26 and their genetic disruption can be lethal 27,28. The complexity of the pathway and the difficulty of modulating specific PPIs in vivo exacerbate the challenges of drugging this pathway in the cell 29. Not surprisingly, there are no clinically approved small-molecule inhibitors of NF-κB pathway components.
An alternative network-centric strategy is to predict small molecules that act on rate-limiting PPIs in the signaling pathway in silico and screen them for phenotypes associated with pathway disruption in vivo. Although complete disruption of IKK and NF-κB can have damaging effects on the cell, their dynamics in response to disease-associated inflammatory signals are influenced by >50 other proteins. Thus the broader NF-κB network contains numerous entry points for chemicals to impinge on the pathway. Here we use machine learning with gene expression (GE) data to provide a synoptic list of likely small-molecule inhibitors of the NF-κB pathway. For a well-defined molecular network, we show that pathway-specific inhibitors can be predicted from transcriptomic alterations that are shared between (i) exposure to small molecules and (ii) genetic KDs of the pathway components. Through molecular docking, we reduce the list of predicted compounds and suggest a mechanism of action, evaluating bioactivity using live-cell experiments that monitor signaling dynamics in single cells. We find two first-in-class small molecule inhibitors of the pathway that limit PPIs upstream of IKK recruitment and inhibit TNF-induced NF-κB activation. Our results combine to demonstrate a valuable network-centric systems biology approach to drug discovery.
Results
Identifying candidate inhibitors of NF-κB signaling. To demonstrate a network-centric strategy for targeting TNF-induced NF-κB signaling, we focused on differential GE signatures from the NIH Library of Integrated Network-Based Cellular Signatures (LINCS) L1000 dataset 30. We compared transcriptional profiles between genetic KDs of proteins in the NF-κB signaling pathway and responses of the same cell types to thousands of distinct bioactive compounds. Using a random forest classification model trained using Food and Drug Administration (FDA)-approved drugs, we identified compounds whose transcriptomic perturbations resembled genetic disruption. For each compound, the probability of a compound-protein interaction was evaluated in terms of several attributes, including direct correlation with the KD signatures and indirect correlations with KD signatures of other proteins in the network for ≥4 cell lines (see ref. 31 for a detailed explanation). In the context of a protein interaction network, disruption of a physical target by a drug can cause GE profiles similar to those produced by inhibition of downstream or upstream genes in the same subnetwork. Hence, a compound that disrupts TRADD or TRAF2 in Fig. 1a might have signatures similar to the KD of genes in the pathway such as TNFR1, UBC, or NEMO (see below). Here we leverage this guilt by association, which suggests that chemical inhibition acts broadly within a signaling subnetwork (Supplementary Fig. 1), to drug the NF-κB signaling pathway.
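The core of this guilt-by-association screen can be expressed in a few lines of NumPy: correlate a compound's differential-expression signature with each pathway knockdown signature and rank candidates by the mean correlation. The sketch below uses random vectors in place of real L1000 signatures, so the gene count, gene names, and values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes = 978  # the L1000 assay measures ~978 landmark genes

# Illustrative differential-expression signatures (z-scores) per perturbation
compound_sig = rng.normal(size=n_genes)
kd_sigs = {g: rng.normal(size=n_genes)
           for g in ["TRADD", "TRAF2", "RIPK1", "IKBKG", "UBC"]}

# "Direct correlation": Pearson r between compound and each knockdown signature
direct = {g: np.corrcoef(compound_sig, sig)[0, 1] for g, sig in kd_sigs.items()}

# Rank a candidate inhibitor by its mean correlation across pathway knockdowns
mean_r = float(np.mean(list(direct.values())))
print(sorted(direct.items(), key=lambda kv: -kv[1]), mean_r)
```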
A PPI inhibitory peptide that competes with recruitment of catalytic IKK subunits at ubiquitin scaffolds was previously shown to inhibit inflammatory NF-κB activation and disease progression in a murine model for inflammatory bowel disease 26,32. We reasoned that any compounds that disrupt the mature TNFR1 complex, particularly at the level of TRADD, TRAF2, and RIP1, will prevent TNF-mediated IKK recruitment and nuclear translocation of NF-κB. Transcriptional signatures for 717 unique compounds showed strong correlations with genetic KDs of TRADD, TRAF2, and RIP1. From this initial set, we identified potential pathway inhibitors as compounds that also correlated with genes in the mature TNFR1 complex (Fig. 1a). Specifically, we ranked candidate inhibitors by their mean Pearson correlation with NF-κB-pathway KDs to assist selection of compounds for additional screening (Supplementary Fig. 2).
Targeting core PPIs in the mature TNFR1 complex. Molecular docking was used to further refine the list of candidate compounds and predict mechanisms of action against proteins in the TNFR1 signaling complex. The 717 candidate molecules described above were docked with domain structures available in the PDB for TRAF2, TRADD, and RIPK1. TRAF2 emerged as a promising target because, contrary to the other proteins, co-crystal structures of TRAF2 are available. Namely, the PPIs between TRAF2 and both TRADD (PDB code 1F3V 33) and a TNFR2 peptide (PDB code 1CA9 34) have been characterized. Both co-crystals indicate a well-defined binding site, which was used to visually screen the top-scoring compounds based on both Pearson correlation and binding scores (n = 180 compounds; see Supplementary Fig. 2). Three compounds whose binding modes replicate native contacts in the TRADD-TRAF2 protein complex were selected for testing: (1) BRD-K43131268, (2) BRD-K95352812, and (3) BRD-A09719808. For compounds 1, 2, and 3, respectively, predicted targets from our genetic KD GE dataset 31 included: TRAF2, UBC, NFKB1, and RIP1; TRAF6, NEMO, TRAF2, NFKB1, UBC, TAB2, and IKKβ; and NFKB1, TRAF2, UBC, UBB, and NEMO. Furthermore, compounds 2 and 3 showed significant correlations with both HOIL, TAK1, cIAP1/2, and UbcH5 KDs (Fig. 1a) and the corresponding transcriptional profiles for gene KDs in the NF-κB pathway (Fig. 1b). Compounds 2 and 3 also have similar chemical structures (Fig. 1c), strongly suggesting a similar mechanism of action. Compounds 2 and 3 formed hydrogen-bond contacts with TRAF2 residues S453, S454, S455, and S467, which are predicted to compete with TRADD interface residues Q143, D145, and R146 based on the co-crystal (Fig. 1c). Compound 3 is predicted to bind more strongly due to the extra hydrogen bond formed by its amide group with TRAF2 residue G468. Of note, all of these TRAF2 residues are conserved in TRAF5. Competitive binding should disrupt the native TRADD-TRAF2/5 PPI interface and could prevent maturation of the full TNFR1 signaling complex by promoting dissociation or allosteric stabilization of a non-native conformation. The predicted binding mode of compound 1 is less specific and did not form any of the contacts described above (Supplementary Fig. 3).
To test whether the compounds interact with TRAF2 in vitro, we measured the thermal stability of purified TRAF2 in the presence of each compound. Thermal shift assays showed that compounds 2 and 3, respectively, exert a subtle-to-moderate dose-dependent stabilizing effect on full-length TRAF2 (Fig. 2a, b), suggesting direct compound-protein binding. In contrast, compound 1 did not show a clear trend (Supplementary Fig. 4). We note that the observed thermal shifts are consistent with the relatively small stabilizing effect that the compounds are expected to exert on the stable trimer formed by the soluble full-length TRAF2 protein 34. Together, these data suggest that compounds 2 and 3 may impinge on TNF-induced signaling.
Small molecules disrupt TNF-induced NF-κB dynamics. We set out to determine whether the compounds are effective inhibitors of NF-κB signaling in living cells. For this, the endogenous gene locus for the transcriptionally active RelA subunit of NF-κB was modified using CRISPR/Cas9 to encode a fluorescent protein (FP) fusion in U2OS cells (Supplementary Fig. 5). TNF-stimulated cells showed dynamic nuclear accumulation of FP-RelA, consistent with previous reports 11,14,35. When cells were pretreated with compounds 2 and 3 before exposure to TNF, nuclear mobilization of NF-κB was reduced with increasing concentration of the inhibitory compound (Fig. 3b).

[Fig. 1 caption: Transcriptional responses to compounds correlate with knockdowns of NF-κB pathway genes. a Schematic of the mature tumor necrosis factor (TNF) receptor 1 (TNFR1) complex, a cytoplasmic multi-protein complex that assembles following ligation of TNF to TNFR1. The color for each protein species in the complex is the average Pearson correlation between gene expression profiles for the species' genetic knockdown and the transcriptional response to compounds 2 and 3. b Correlation between transcriptomic perturbations by compounds 1, 2, and 3 and the knockdown of genes functionally involved in NF-κB according to the KEGG PATHWAY Database. Pearson correlation color scale is shown (right). c Unbiased molecular docking predicts binding of compounds 2 (yellow) and 3 (magenta) to the TRADD-binding interface of TRAF2. Hydrogen bonds with key TRAF2 interface residues are indicated by dotted lines. Source data are provided as a Source Data file.]
To quantify the compounds' effects on NF-κB dynamics, each single-cell trajectory was decomposed into a series of descriptors (Fig. 3c) that transmit information within the cell about extracellular cytokine concentrations 14. Descriptors of NF-κB dynamics that transmit the most information about TNF, including the area under the fold change curve (AUC) and the maximum fold change (Max) 14, were significantly lower when cells were pretreated with 10 µM of compound 2 or 3 before addition of TNF (Fig. 3d). Other descriptors showed a similar pattern of inhibition when cells were exposed to 10 µM of either compound prior to TNF stimulation (Supplementary Figs. 6 and 7). By contrast, aside from subtle alterations to the rates of nuclear NF-κB mobilization, compound 1 did not significantly alter the overall TNF-induced dynamics of nuclear NF-κB (Supplementary Fig. 8). These data suggest that compounds 2 and 3 restrict the signaling network upstream of NF-κB activation with low micromolar potency (Supplementary Fig. 9).
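As an illustration of how such descriptors can be computed, the sketch below derives the AUC and maximum fold change from a synthetic single-cell trajectory. The trajectory shape, sampling interval, and normalization are invented for the example and do not reproduce the authors' analysis code.

```python
import numpy as np

# Synthetic nuclear FP-RelA trajectory for one cell (5-min frames over ~2 h)
t = np.arange(0, 125, 5)                         # time in minutes
nuc = 1 + 2.2 * np.exp(-((t - 30) / 25) ** 2)    # transient nuclear signal

fold = nuc / nuc[0]        # fold change relative to the pre-stimulus frame
auc = np.trapz(fold, t)    # area under the fold-change curve (AUC)
max_fc = fold.max()        # maximum fold change (Max)

print(auc, max_fc)
```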
Compounds 2 and 3 also showed significant correlations (Fig. 1b) with ubiquitination machinery and kinases, including IKK, that are common to basal cellular processes and inflammatory responses 36. Interleukin-1 (IL-1) is one such inflammatory cytokine that activates NF-κB via the functional IKK complex but independently of interactions between TRADD and TRAF2. Instead, IL-1 utilizes TRAF6, which does not share any of the four serine residues (S453, S454, S455, and S467; Fig. 1a) identified as the predicted binding site of our compounds. Consistent with this observation and in contrast with the TNF response, IL-1-induced dynamics of nuclear NF-κB were indistinguishable between cells pretreated with compounds 2 or 3 and IL-1-only control cells (Fig. 4 and Supplementary Fig. 10). Furthermore, cytotoxicity analysis and assessment of IKKβ kinase activity in vitro demonstrated that compounds 1, 2, and 3 have low cytotoxicity and no direct inhibitory activity against IKKβ at the concentrations used in this study (Supplementary Figs. 11 and 12). Together, our results demonstrate that the IKK and NF-κB systems are intact in cells exposed to the compounds and suggest that the mode of action for both compounds is directed specifically at the level of the mature TNFR1 complex.
Small molecules prevent formation of the mature TNFR1 complex. Induced proximity between IKK and other regulatory factors within the mature TNFR1 complex is essential for TNF-induced NF-κB activation and may be perturbed in cells exposed to compounds 2 and 3. To test this hypothesis, and directly observe the penultimate recruitment of IKK to the TNFR1 complex, we used CRISPR/Cas9 to target the γ-subunit of IKK (also known as NEMO) for FP fusion and live-cell imaging in U2OS cells (Supplementary Fig. 13).
FP-IKK was diffuse within the cytoplasm of unstimulated cells and rapidly localized to punctate structures near the plasma membrane after exposure to TNF (Fig. 5a; Supplementary Movie 2). Because a key role of the TNFR1 complex is to recruit and activate IKK at ubiquitin scaffolds 10, detection of FP-IKK puncta can be used to measure maturation of the complex in living cells. The number of FP-IKK puncta in single cells peaked at 15 min and dissolved within an hour of TNF stimulation (Fig. 5b). Although the recruitment and dissolution dynamics of FP-IKK are prolonged when compared with a previous study that overexpressed a fusion of mouse IKKγ in U2OS cells 13, they are otherwise qualitatively similar. Consistent with our observations for NF-κB, the number of TNF-induced puncta was greatly reduced in cells that were pretreated with compounds 2 or 3 before exposure to TNF (Fig. 5b). Unexpectedly, the compounds also reduced the overall expression level of IKKγ (Supplementary Fig. 14) through an unknown mechanism that may relate to TRAF-dependent ubiquitination cascades that regulate the ambient stability of other NF-κB-inducing kinases 37. Overall, the absence of IKKγ mobilization in TNF-stimulated cells indicates that micromolar concentrations of compounds 2 and 3 prevent a key proximity-induced mechanism otherwise provided through assembly of the mature TNFR1 complex.
Discussion
Taken together, our results show that compounds 2 and 3 inhibit the TNF-induced NF-κB signaling pathway by limiting the formation of the mature TNFR1 complex. The mode of action of the compounds is specific to the TNF response, leaving intact core molecular components of the NF-κB pathway that are co-opted by other biological processes including responses to other inflammatory stimuli. We also highlight the broader effects of disrupting a pathway component within the larger network, including the downregulation of IKKγ protein expression, and the limitations of single-target molecular modeling as a basis for drug design. The regulatory complexity of the NF-κB signaling pathway, which enables highly specific and stimulus-dependent transcriptional responses, also confounds drug discovery efforts that do not account for network-scale responses to chemical disruption. Consequently, successful therapeutic intervention in complex signaling pathways may require a network-centric strategy guided explicitly by a compound's anticipated effects on signaling dynamics as a pharmacologic target 15.
Correlations in GE signatures and single-cell experiments can be used, respectively, to predict and validate the network effects of bioactive compounds, and structural analysis can further inform on their mechanism of action. Here our models suggest that compounds 2 and 3 destabilize interactions between TRADD and TRAF family proteins. Mechanistically, disruption at this upstream junction will preclude ubiquitin scaffold assembly, which rationalizes our data. In addition to the live-cell data, these include the correlations observed between the compounds and KDs of UBB, UBC, and other signaling proteins that are recruited to these polyubiquitin chains (see Fig. 1a), such as IKK and other upstream regulators. A limitation is that the identified compounds may also have alternative effects on other signal transduction pathways that are not explicitly considered here. Nevertheless, the identification of two independent compounds that converge on similar genomic and functional phenotypes strongly suggests that the NF-κB pathway is specifically disrupted. Although the LINCS dataset does not explicitly report the transcriptional response of cells to TNF in the presence of chemical or genetic perturbations, compounds that impinge on TNF-induced dynamics could still be inferred using a machine learning algorithm with prior knowledge of the signaling network. This pipeline therefore represents an alternative strategy to single-target-based drug discovery that can be applied more generally to discover novel inhibitors of protein subnetworks in a variety of signaling pathways. Because the mechanism of action is not constrained a priori, it is possible to discover a chemical agent that disrupts multiple points in the same protein subnetwork or to predict chemical combinations that produce specific network-level responses. It is unlikely that the magic-bullet drug discovery paradigm will uncover the full therapeutic potential of compounds that modulate PPIs and dynamic intracellular signals, such as the TNF-induced NF-κB signaling pathway. Rather, more effective drug development efforts may require approaches like the one presented here that embrace the complexity of regulatory networks and the dynamic phenotypes associated with their disruption.
Methods
Analysis of GE data. Gene KD and compound treatment GE signatures were extracted from the NIH LINCS L1000 Phase I and Phase II datasets (Gene Expression Omnibus accession IDs: GSE70138 and GSE92742). We collected signatures for the 1680 small molecules and 3104 gene KD experiments that had been performed in at least 4 of the 7 most common LINCS cell lines (A549, MCF7, VCAP, HA1E, A375, HCC515, HT29). We hypothesized that compounds that disrupt the TNF-inducible NF-κB signaling pathway should produce similar network-level effects, and thus similar differential GE signatures, to genetic KDs of proteins in the pathway. Thus, for each compound-KD signature pair in our dataset, we computed several cell-specific quantitative features, most importantly: direct correlation is the Pearson correlation coefficient between the compound treatment and the gene KD expression signatures in the given cell line, and indirect correlation is the fraction of the KD protein's interaction partners, as defined by BioGRID 38, whose respective KD signatures were highly correlated with the compound signature. Three additional features, quantifying baseline drug activity in the cell and the maximum and average compound-induced differential expression levels of NF-κB pathway proteins 31, were also calculated and used in the subsequent classification.
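A minimal sketch of the indirect-correlation feature is shown below, assuming a toy interaction map in place of BioGRID and random vectors in place of L1000 signatures; the 0.3 correlation threshold is an arbitrary placeholder rather than the study's cutoff.

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes = 978

compound_sig = rng.normal(size=n_genes)
kd_sig = {g: rng.normal(size=n_genes) for g in
          ["TRAF2", "TRADD", "RIPK1", "BIRC2", "TNFRSF1A", "MAP3K7"]}

# Hypothetical interaction partners of TRAF2 (in practice taken from BioGRID)
partners = {"TRAF2": ["TRADD", "RIPK1", "BIRC2", "TNFRSF1A", "MAP3K7"]}

def indirect_correlation(compound, protein, threshold=0.3):
    """Fraction of the protein's partners whose KD signatures correlate with
    the compound signature above a chosen threshold."""
    rs = [np.corrcoef(compound, kd_sig[p])[0, 1] for p in partners[protein]]
    return float(np.mean([r > threshold for r in rs]))

print(indirect_correlation(compound_sig, "TRAF2"))
```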
Using a Random Forest (RF) classifier trained on the expression signatures of 152 FDA-approved drugs with known mechanism(s) of action 31, features for every compound-KD pair (n = 5,214,720) were used to predict the probability that the compound would inhibit the KD protein's interaction network. The top 100 predicted interactions for each compound were extracted, and compounds whose predicted targets were enriched in TNF-induced NF-κB signaling genes were collected for structural analysis.
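In outline, the classification step could look like the scikit-learn sketch below. The feature matrix, labels, and forest size are placeholders for illustration; the actual pipeline (ref. 31) is more elaborate, so this is a sketch of the idea rather than the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Features per compound-KD pair: direct correlation, indirect correlation,
# baseline drug activity, max and mean differential expression of pathway genes
X_train = rng.normal(size=(1000, 5))     # pairs involving FDA-approved drugs
y_train = rng.integers(0, 2, size=1000)  # 1 = known drug-target interaction

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

# Probability that each unlabeled compound-KD pair is a true interaction
X_all = rng.normal(size=(5000, 5))
proba = clf.predict_proba(X_all)[:, 1]
top100 = np.argsort(proba)[::-1][:100]   # top 100 predicted interactions
```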
Structural analysis. For structural docking of RF-predicted inhibitors, representative crystal structures of TNF-inducible NF-κB signaling proteins (Supplementary Fig. 1) were mined from the PDB 39, optimizing for sequence coverage, structural resolution, and structural diversity. Domain structures were available for all proteins in Fig. 1a with the exception of IKKα. Potential small-molecule binding sites on each protein structure were identified by clustering the output of the computational solvent mapping software FTMap 40. RF-predicted inhibitors were docked to predicted binding sites on each protein structure using smina 41 and a prospectively validated pipeline 42,43. Generic versions of the three promising candidate inhibitors of TRAF2, which showed both biophysical complementarity and broad-spectrum transcriptomic correlations with KDs in the pathway, were purchased from MolPort for experimental validation. MolPort IDs were MolPort-000-763-757, MolPort-004-495-831, and MolPort-004-588-414 for compounds 1, 2, and 3, respectively. Notably, because of commercial availability, the MolPort versions of compounds 2 and 3 had minor modifications (see Supplementary Fig. 15) that do not alter their predicted binding profiles.
Thermal shift assay and analysis. TRAF2-compound interactions were measured by fluorescence-based thermal shift using an Applied Biosystems QuantStudio 6 Flex system. All assay experiments used 1 μM GST-TRAF2 (Rockland) per well and 2× SYPRO Orange (Invitrogen) in a buffer containing 50 mM HEPES, pH 7.5, 150 mM NaCl, in a total reaction volume of 15 μL in 384-well plates. Compounds were diluted with dimethyl sulfoxide (DMSO), and each reaction had a final DMSO concentration of 1.5%. PCR plates were covered with optical seals, shaken, and centrifuged after protein and compounds were added. The instrument was programmed in Melt Curve mode with the Standard run speed. The reporter was set to ROX and the quencher to None. Each melt curve was programmed as follows: 25°C for 2 min, followed by a 0.05°C increase per second from 25°C to 99°C, and finally 99°C for 2 min. Fluorescence intensity was collected continuously. In the Melt Curve Filter section, X4 (580 ± 10)-M4 (623 ± 14) was selected for the Excitation Filter-Emission Filter. The raw data were exported in MS Excel format. Each melt curve was normalized between 0 and 1, and the midpoint of the curve was used to determine the melting temperature.
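The melt-curve analysis reduces to normalizing each curve between 0 and 1 and reading off the temperature at its midpoint, as in the sketch below. The sigmoidal curve, its true Tm, and the transition width are synthetic; a thermal shift is then simply the difference in Tm with and without compound.

```python
import numpy as np

# Synthetic melt curve: fluorescence vs. temperature from 25 to 99 degrees C
temps = np.linspace(25, 99, 200)
tm_true = 62.0
fluor = 1 / (1 + np.exp(-(temps - tm_true) / 1.5))  # sigmoidal unfolding

# Normalize between 0 and 1, then take the temperature at the curve midpoint
norm = (fluor - fluor.min()) / (fluor.max() - fluor.min())
tm = np.interp(0.5, norm, temps)  # norm must be monotonically increasing

print(tm)  # compare Tm across compound concentrations to get the shift
```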
Establishing EGFP-RELA/IKKγ CRISPR knock-in cells. The RelA repair template consisted of DNA sequences for a left homology arm (LHA, −544 bp, chromosome 11_65663376-chromosome 11_65662383) followed by an enhanced green fluorescent protein (EGFP) coding sequence with a start codon but no stop codon and a sequence encoding a 3x GGSG linker, followed by a right homology arm (RHA, +557 bp, chromosome 11_65662829-chromosome 11_65662276), assembled from plasmids synthesized by GeneArt. Synonymous mutations that are not recognized by the guide RNAs were introduced into the repair template to prevent its cleavage. U2OS cells (ATCC HTB-96) were seeded in 6-well plates (2 × 10⁵ cells per well) in complete growth medium. The following day, the pSpCas9n(BB)-2A-Puro-RELA/IKKγ_gRNA and repair template donor plasmids were linearized using BglII, and cells were transfected using FuGENE HD (Promega) with a transfection reagent to DNA ratio of 3.5 to 1 and a total DNA amount of 4 μg. After 2 weeks, cells were subjected to single-cell sorting into 96-well plates using a Beckman Coulter MoFlo Astrios high-speed sorter. Cells underwent clonal isolation, and a positive clone was identified via western blot and confirmed by live-cell imaging.
Western blot analysis. U2OS cells (parental and expressing EGFP-RelA/IKKγ via CRISPR knock-in) were cultured for 24 h in complete growth medium. After treatments, cells were lysed in a sodium dodecyl sulfate (SDS)-based lysis buffer consisting of 120 mM Tris-Cl, pH 6.8, 4% SDS, supplemented with protease and phosphatase inhibitors, at 4°C for 30 min. Protein extracts were clarified by centrifugation at 4°C at 12,000 × g for 10 min. Lysate protein levels were quantified by BCA assay (Pierce). Samples were separated by SDS-polyacrylamide gel electrophoresis, 25 μg total protein per lane, then transferred to polyvinylidene difluoride membranes. Blocking was done in 5% milk in TBS for 1 h. Primary antibodies directed at RelA and β-actin (#4764 and #3700, respectively; Cell Signaling Technology) and at IKKγ and GAPDH (sc-8330 and sc-25778, respectively; Santa Cruz) were diluted (1:1000 for all primary antibodies) in 5% milk in TBS-T and incubated overnight at 4°C. Conjugated secondary antibodies (LI-COR; 1:10,000 dilution) were used in combination with an Odyssey (LI-COR) scanner for detection and quantification of band intensities. Uncropped scans of all western blots are available in the Source Data file.
Live-cell imaging and analysis. Live cells were imaged in an environmentally controlled chamber (37°C, 5% CO₂) on a DeltaVision Elite microscope equipped with a pco.edge sCMOS camera and an Insight solid-state illumination module (GE). Cells were pretreated with the indicated concentration of compounds for 2 h before exposure to 100 ng/mL TNF. Wide-field epifluorescence and DIC images were collected using a ×60 LUCPLFLN objective. For all treatments, cytokine mixtures were prepared and prewarmed so that the addition of 120 μL to wells resulted in the final concentration indicated. Time-lapse images were collected over at least 4 fields per condition with a temporal resolution of 5 min per frame. Quantification of nuclear FP-RelA localization and of the formation of IKKγ puncta from flat-field- and background-corrected images was performed using customized scripts in MATLAB and ImageJ.
Fixed-cell immunofluorescence and analysis. For fixed-cell measurement of endogenous RelA (Supplementary Fig. 5), U2OS cells were seeded into plastic-bottom 96-well imaging plates (Fisher) at 6000 cells/well 24 h prior to treatment. On the day of the experiment, medium containing TNF was prepared at 15× the desired concentration for each well. The timing of TNF treatment was planned such that fixation occurred at the same time for all time points (0, 10, 30, 60, 90, 120 min). Prewarmed 15× cytokine mixture was spiked into wells and mixed. Between treatments, the cells remained in environmentally controlled conditions (37°C and 5% CO₂). At time zero, medium was removed from the wells, 185 μL of phosphate-buffered saline (PBS) was used to wash the wells, and wells were incubated at room temperature in 120 μL of 4% paraformaldehyde (PFA) in 1× PBS for 10 min. Wells were then washed 3× for 3 min with 185 μL 1× PBS and then incubated in 120 μL 100% methanol for 10 min at room temperature. Next, wells were washed 3× for 3 min in PBS-T (1× PBS, 0.1% Tween 20), followed by 120 μL of primary antibody solution (3% bovine serum albumin (BSA) in PBS-T, 1:200 dilution of NF-κB p65 F-6 (sc-8008; Santa Cruz)). Plates were wrapped in Parafilm and left to incubate at 4°C overnight. The following morning, wells were washed 3× for 5 min in 185 μL PBS-T, followed by incubation for 1 h in 120 μL of the secondary antibody solution (3% BSA in PBS-T, 1:2000 Goat anti-Mouse IgG Alexa Fluor 647 (Cat#A21235, Thermo Fisher)). The wells were then washed with 185 μL PBS-T for 5 min and incubated in 120 μL Hoechst solution (PBS-T, 200 ng/mL Hoechst) for 20 min. Finally, wells were washed for 5 min with PBS-T, and then 185 μL PBS was used to fill the wells and keep the cells hydrated during imaging. Cells were imaged using a DeltaVision Elite imaging system at ×20 magnification with a LUCPLFLN objective (0.45 NA; Olympus). Analysis was done using CellProfiler to segment cells and quantify median nuclear intensity values. Further analysis was performed using custom scripts in MATLAB.
Permutation tests to assess statistical significance. For permutation tests, data from the TNF-only and the indicated experimental condition were combined and randomly distributed into Permuted control and Permuted experimental bins without replacement, preserving the sizes of the original control and experimental data sets. 10⁶ permutations were performed, and the difference between the means of the Permuted control and Permuted experimental data was calculated for each permutation to generate a histogram. Two-tailed p values were determined by computing the fraction of permuted datasets where |Δmean permuted| ≥ |Δmean unpermuted| (Supplementary Figs. 7, 8, and 10).
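This procedure maps directly onto a short function. The NumPy sketch below implements the two-tailed test as described; the sample data and the reduced permutation count in the example call are illustrative, and a vectorized implementation would be faster for 10⁶ permutations.

```python
import numpy as np

def permutation_test(control, experimental, n_perm=1_000_000, seed=0):
    """Two-tailed permutation test on the difference in means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([control, experimental])
    n_ctrl = len(control)
    observed = experimental.mean() - control.mean()

    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)  # redistribute without replacement
        d = perm[n_ctrl:].mean() - perm[:n_ctrl].mean()
        if abs(d) >= abs(observed):
            count += 1
    return count / n_perm  # two-tailed p value

# e.g., AUC descriptors from TNF-only vs. TNF + compound cells (made-up data)
rng = np.random.default_rng(1)
p = permutation_test(rng.normal(1.0, 0.2, 50), rng.normal(0.7, 0.2, 50),
                     n_perm=10_000)
print(p)
```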
In vitro IKKβ kinase assay. We used recombinant activated IKKβ and the IKKtide substrate (Promega, V4502) with the ADP-Glo bioluminescence assay (Promega, V7001) to evaluate the effects of compounds 1, 2, and 3 on IKKβ kinase activity. 1× kinase buffer A (40 mM Tris-HCl pH 7.4, 20 mM MgCl₂, 0.1 mg/mL BSA, supplemented with 2 mM MnCl₂, 2 mM dithiothreitol, and 100 μM sodium vanadate) was used to prepare all components of the reaction. All components were prepared in a 96-well plate and transferred to every other well of a 384-well opaque plate (Sigma-Aldrich, CLS3825-10EA) using a multichannel pipet. We prepared a 2.5× ATP/IKKtide substrate mix (62.5 μM ATP mixed with 0.5 μg/μL IKKtide) and a 5× concentration of the indicated concentration of compounds in 0.5% DMSO, maintaining a final DMSO concentration of 0.1% in all reactions. The components of the kinase reaction were added to each well in the following order: 1 μL of 5× compound or buffer only, 2 μL of 100 ng/μL IKKβ kinase or buffer, and 2 μL of 2.5× ATP/IKKtide substrate mix. The plate was briefly spun, and the reaction was incubated at room temperature for 1 h. Next, 5 μL of ADP-Glo reagent was added to each well, spun, and incubated for 40 min at room temperature. Finally, 10 μL of Kinase Detection Reagent was added to each well and incubated for 30 min at room temperature. Luminescence from each well was measured using an integration time of 500 ms in an M4 microplate reader (SpectraMax). Data from triplicate reactions were extracted and plotted.
Compound toxicity comparison. We compared the cytotoxicity of the three compounds with that of Bay 11-7082 (Cayman, 10010266), an inhibitor of the NF-κB pathway at working concentrations of 1-10 μM, using the LIVE/DEAD Cell Imaging Kit (488/570) (Invitrogen, R37601). For each condition, 15,000 U2OS cells were seeded in 200 μL of growth medium in each well of a 96-well plate 48 h before microscopy. Next, the medium was changed to medium containing DMSO, 10 μM of the indicated compound, or 10 μM of Bay 11-7082 for the indicated duration (2, 16, or 24 h). Before imaging, the medium was changed to phenol red-free FluoroBrite Dulbecco's modified Eagle's medium (Gibco, A18967-01) containing 300 ng/mL of Hoechst 33342 and 1:10,000 of both the Live Green and Dead Red dyes of the LIVE/DEAD Cell Imaging Kit. Cells were incubated for 60 min and imaged on the DeltaVision Elite imaging system at ×20 magnification with a LUCPLFLN objective (0.45 NA; Olympus). Analysis was done using CellProfiler to segment cells and quantify median nuclear intensity values. Further analysis was performed using custom scripts in MATLAB. Data from biological triplicates were plotted as mean ± SD.
Reporting summary. Further information on experimental design is available in the Nature Research Reporting Summary linked to this article.
Code availability. Code used to analyze datasets in the current study are available from the corresponding authors on reasonable request.
Predictors of adherence to wearing therapeutic footwear among people with diabetes
Aims People at increased risk of developing diabetic foot ulcers often wear therapeutic footwear less frequently than is desirable. The aims were to identify patient groups prone to nonadherence to wearing therapeutic footwear and modifiable factors associated with adherence. Materials and methods A questionnaire was mailed to 1230 people with diabetes who had been fitted with therapeutic footwear. Independent variables were categorized into five domains. For each domain, variables that were associated with adherence in a univariate regression analysis were entered into a multiple regression analysis. Results A total of 429 (34.9%) questionnaires were analyzed. Multiple regression analyses showed significant associations (p < 0.05) between higher adherence and paid employment, current foot ulcer, previous foot ulcer, satisfaction with follow-up, self-efficacy, understanding of lost/reduced sensation as a risk factor for foot ulcerations, visible storage of therapeutic footwear at home, storage of conventional footwear out of sight, consistent choices about which footwear type to wear, and a belief that therapeutic footwear promotes ulcer healing. The five multivariate models explained 2–28% of the variance in adherence, with the strategies for footwear use domain explaining the most. Conclusions Patients without paid employment or without foot ulcer experience are more prone to nonadherence. To improve adherence, clinicians should advise patients to store therapeutic footwear in a visible place at home and put conventional footwear away and encourage patients’ self-efficacy and habitual use of therapeutic footwear. Future studies should investigate this topic further and explore ways to promote changes in habits. A study limitation was that all variables were self-reported.
Introduction
Diabetic foot ulcers affect 19-34% of people with diabetes during their lifetimes and are associated with increased mortality and risk of amputation [1]. Although most ulcers heal, recurrence rates are alarmingly high: approximately 40% of patients develop a new ulcer within 1 year after healing, and this figure increases to 60 and 65% after 3 and 5 years, respectively [1]. Evidence-based guidelines and systematic reviews recommend that people with previous plantar foot ulcers wear therapeutic footwear to prevent reulceration [2,3], but the level of adherence to wearing therapeutic footwear is often lower than desirable [4–7]. A review from 2016 [8] identified only six quantitative observational studies that investigated footwear adherence and did not find strong evidence for the ability of any factor to predict adherence. Two studies in the review found that perceiving the benefits of wearing therapeutic footwear was associated with a higher degree of adherence [4,6]. Several other factors were associated with adherence in some studies but not in others (age [4,6,7,9], body mass index [4,7], diabetes type [4,7,9], foot deformity and minor amputation [4,7,9], perceived severity of the foot condition [6,9], and therapeutic footwear appearance [4,7,9]). After searching in the relevant databases for more recent publications, we found only one quantitative observational study. In that study, adherence was not significantly different between men and women [10]. Interventional studies to improve adherence are even more rare. We are aware of only one small experimental study, which found that motivational interviewing improved adherence in the short term but that adherence returned to the low baseline level in the long term [11]. Thus, it is still uncertain which patient groups are in most need of interventions to improve adherence and what variables should be the targets for such interventions. The aims of the study were to identify patient groups prone to nonadherence to wearing therapeutic footwear and modifiable factors associated with adherence.
Materials and methods
A questionnaire was constructed based on the Health Belief Model, which includes perceived seriousness of the health condition, perceived susceptibility of developing the condition, perceived benefits and barriers to engaging in the health behavior, cues to action, and self-efficacy [12]. The model predicts that people who have better adherence to wearing therapeutic footwear have the following perceptions: foot ulcers are serious and they are susceptible to developing diabetic foot ulcers, there are substantial benefits and few obstacles to wearing therapeutic footwear, and they have a high degree of self-efficacy and experience some cue prompting them to wear the therapeutic footwear. The model has previously been used to study diabetic foot self-care adherence [13] but not to study adherence to wearing therapeutic footwear. The questionnaire included items covering demographics, diabetes type, foot complications, satisfaction with services, understanding of sensory neuropathy as a risk factor for foot ulcers, internal locus of control, belief in the efficacy of therapeutic footwear, concerns about the prevention and healing of foot ulcers, self-efficacy, general health, depression, attitudes towards therapeutic and conventional footwear, footwear adherence, reminders to use therapeutic footwear, and social support. Some items were constructed for this study, while the majority of the items were copied or adapted from existing validated instruments [14–22]. The questionnaire was pilot tested on five people with experience with diabetic foot ulcers [5]. They answered the questionnaire and were subsequently interviewed individually on their understanding of the content. After minor revisions, the questionnaire was mailed to all people (n = 1230) fulfilling the criteria for inclusion in the study in May and June 2017. The inclusion criteria were that the person should be at least 18 years old (on January 1, 2016), have diabetes mellitus, have been prescribed therapeutic footwear at some point and have visited one of the two participating prosthetic and orthotic clinics from January-December 2016. Bilateral major amputation was the only exclusion criterion. For those who had not responded to the questionnaire 1 month later, a reminder letter was sent. The study was approved by the Regional Ethics Committee Review Board of Uppsala (reference number 2016/528).
Statistical methods
The respondents' age and sex distributions were compared to those of the rest of the sample using a two-sided t-test and a χ²-test, respectively, with p < 0.05 indicating a statistically significant difference. The dependent variable, adherence to wearing therapeutic footwear, was measured with two questions adapted from the Questionnaire for Persons with a Transfemoral Amputation [16]. The questions asked about the time spent wearing therapeutic footwear in terms of both the number of days per week (scale: 0 to 7) and the number of hours per day (scale: 0-3, 4-6, 7-9, 10-12, 13-15, and more than 15). An index was calculated by multiplying the number of days/week by the number of hours/day and dividing this value by 108.5 (the number of waking hours per week, assuming 15.5 waking hours per day). For example, for a person wearing therapeutic footwear 7 days per week and 10-12 h per day, the index would be 7 × 11/108.5 = 0.71, indicating that the person wears therapeutic footwear 71% of the waking day.
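Because the index is simple arithmetic, it can be made explicit in a few lines; in the sketch below the function and variable names are ours, not the questionnaire's.

```python
def adherence_index(days_per_week: int, hours_midpoint: float) -> float:
    """Fraction of waking hours spent wearing therapeutic footwear.

    hours_midpoint is the midpoint of the chosen hours-per-day category
    (e.g., 11 for the '10-12 hours' category).
    """
    waking_hours_per_week = 7 * 15.5  # = 108.5, assuming 15.5 waking h/day
    return days_per_week * hours_midpoint / waking_hours_per_week

print(adherence_index(7, 11))  # ~0.71, i.e., 71% of the waking day
```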
The independent variables were grouped into five domains. The first domain consisted of variables related to demographics, health and social support with the aim of identifying patient groups prone to nonadherence. The second to fifth domains consisted of variables related to health care services, attitudes towards foot ulcers, strategies for footwear use and attitudes towards footwear. For the variables in these domains, the aim was to identify modifiable factors that were associated with adherence. Linear regression was used to test the associations between each variable and therapeutic footwear adherence. For each domain, variables with a p-value < 0.10 were entered into a forward linear multiple regression analysis, in which p-values < 0.05 were considered statistically significant. In addition, explorative secondary analyses were conducted on variables in the domain that explained most of the variance in adherence. In these analyses, adherence levels were compared between response categories for each item using one-way analysis of variance (ANOVA) with Fisher's least significant difference (LSD) test as a post hoc test. P-values < 0.05 were considered statistically significant. IBM SPSS Statistics for Windows, version 25.0 (Armonk, NY: IBM Corp.) was used for all statistical analyses.
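One way to reproduce this two-stage screen is sketched below with statsmodels: a univariate screen at p < 0.10 followed by manual forward selection at p < 0.05, run on synthetic data. The variable names and data are invented, and SPSS's built-in forward method may differ in detail, so this is an approximation of the procedure, not the study's code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(size=(429, 4)),
                  columns=["self_efficacy", "tf_storage", "cf_storage", "habit"])
df["adherence"] = (0.3 * df["self_efficacy"] + 0.2 * df["habit"]
                   + rng.normal(0, 1, 429))

# Step 1: univariate screen, keeping variables with p < 0.10
candidates = []
for var in ["self_efficacy", "tf_storage", "cf_storage", "habit"]:
    fit = sm.OLS(df["adherence"], sm.add_constant(df[var])).fit()
    if fit.pvalues[var] < 0.10:
        candidates.append(var)

# Step 2: forward selection, adding the best variable with p < 0.05 each round
selected = []
while True:
    best = None
    for var in (c for c in candidates if c not in selected):
        fit = sm.OLS(df["adherence"], sm.add_constant(df[selected + [var]])).fit()
        if fit.pvalues[var] < 0.05 and (best is None or fit.pvalues[var] < best[1]):
            best = (var, fit.pvalues[var])
    if best is None:
        break
    selected.append(best[0])

final = sm.OLS(df["adherence"], sm.add_constant(df[selected])).fit()
print(selected, final.rsquared)  # R-squared = variance in adherence explained
```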
Results
In total, 469 valid questionnaires were returned, but 26 were excluded because the respondents stated that they did not own therapeutic footwear, and 14 were excluded because they had missing answers to the questions on therapeutic footwear adherence (dependent variable). Thus, 429 questionnaires were included in the analysis, for a response rate of 34.9%. The sex distributions of the respondents (n = 429) and the rest of the sample (n = 801) were not significantly different (66.7% men vs 62.1% men, p = 0.112). Mean age also did not significantly differ between respondents (mean 69.1 years, standard deviation 10.6) and the rest of the sample (mean 69.6 years, standard deviation 13.3; p = 0.519). The majority of the respondents were men, retired, had type 2 diabetes, were ulcer-free at the time of the survey and had a history of foot ulcers (Table 1). A minority of the respondents had amputations. On average, the respondents wore their therapeutic footwear 50.3% of the waking day (standard deviation 32.8%).
Domain 1. Demographics, health and social support
In the univariate regression analyses, being retired, having paid employment, having type 1 diabetes, having type 2 diabetes, having a current foot ulcer, having a previous foot ulcer, having a partial foot amputation, and having an amputation through or above the ankle had p-values less than 0.10 (Table 1). In the multiple regression analysis, having paid employment, having current foot ulcers and having previous foot ulcers were significant (p < 0.05). The model explained 6% of the variance in adherence.
Domain 2. Health care services
All three items in this domain (responsiveness of clinic staff, partnership of the patient in clinical decision making, and satisfaction with the follow-up of footwear) had p-values less than 0.10 ( Table 2). Only satisfaction with follow-up was significant in the multiple regression analysis, which explained 5% of the variance in adherence.
Domain 3. Attitudes towards foot ulcers
Understanding that lost or reduced sensation in the feet increases foot ulceration risk and being worried about developing new foot ulcers had p-values less than 0.10 in the univariate regression analyses, but only the former was significant (p < 0.05) in the multiple regression analysis ( Table 2). The model explained 2% of the variance.
Domain 4. Strategies for footwear use
All variables in this domain had p-values less than 0.10 in the univariate regression analyses ( Table 2). In the multiple regression analysis, four variables were significantly associated with adherence: self-efficacy (being confident about being able to wear therapeutic footwear all the time), storage of therapeutic footwear, storage of conventional footwear and approach to making choices about what footwear type to wear. The model explained 28% of the variance in adherence.
Domain 5. Attitudes towards footwear
All but one variable in this domain had p-values less than 0.10 in the univariate regression analyses (Table 3). Only perception of footwear's effect on ulcer healing was significant in the multiple regression analysis, which explained 11% of the variance in adherence.
Secondary analyses
Secondary analyses were performed on the four items that were significant in the multiple regression analysis of the Strategies for footwear use domain, as this domain explained the largest amount of variance (Table 4). In a comparison of the highest and lowest response categories for each item, the mean adherence differed by between 0.25 and 0.35 across items. When the two items about footwear storage were combined for people who owned both therapeutic and conventional footwear, adherence was highest (0.61) among people who kept their therapeutic footwear visible at home and put their conventional footwear away. This value was more than six times higher than that for people with the lowest adherence.
Discussion
The first aim of the study was to identify patient groups who were prone to nonadherence. People with paid employment had higher adherence than those without paid employment, which may be because employed people spend more time away from home, where adherence is often higher than at home [7]. Furthermore, the level of adherence was higher among people with current or previous foot ulcers, suggesting that active foot complications may serve as a wake-up call to patients [23] and that additional clinical attention should be paid to people without personal experience of foot ulcers. However, differences in adherence were small between those with and without paid employment and between those with and without ulcer experience (Table 1). This may also explain why previous studies with smaller sample sizes have not found employment [7,9] or previous foot ulcer experiences [7,9,24] to be significantly associated with adherence. Previous amputations were associated with adherence in the univariate analysis but not in the multiple regression analysis. This result may be explained by the fact that most amputations are preceded by a foot ulcer [25], and thus, the ulcer and amputation variables may have overlapped. The majority of the respondents in this study were men, which is consistent with several studies that have found men to be at higher risk than women for developing foot ulcers [26]. Sex, age, education level and diabetes type were not associated with adherence. This finding is consistent with most previous research [4,6,7,9,24,27] and suggests that basic demographic and diabetes-related characteristics are not useful for identifying nonadherent patient groups.

[Table 4 footnote: SD, standard deviation. † When the one-way analysis of variance was significant (p < 0.05), pairwise post hoc comparisons were conducted using Fisher's least significant difference (LSD) test. Different letters (a, b and c) denote that adherence was significantly different (p < 0.05), and the same letters denote that adherence was not significantly different.]

The second aim was to identify modifiable factors that are associated with adherence to wearing therapeutic footwear. The greatest proportion of variance was explained by the Strategies for footwear use domain. This domain explained 28% of the variance, which is more than the variance explained in a previous study on footwear adherence [7]. The secondary analyses demonstrated substantial differences in adherence between people who stored their footwear in different ways, between people with high and low self-efficacy, and between people who did or did not make consistent choices about footwear. Although these aspects have not previously been investigated in quantitative studies, the results are consistent with qualitative studies that have found that the formation of new habits is important for adherence to diabetic foot self-care and footwear use [28,29]. The final regression model consisted of four variables (self-efficacy, conventional footwear storage, therapeutic footwear storage and habitual footwear choice) that explained a moderate proportion of the variance in adherence. This finding supports the notion that adherence is a multifactorial phenomenon [8,28,30]. It also indicates that the observed variables are of importance but that there still is a substantial amount of unexplained variance in adherence to therapeutic footwear.
This unexplained variance may be due to other independent variables that were not measured in the present study as well as to errors in the measurement of adherence. Thus, future research should investigate additional variables, such as body mass index [7] and patients' acceptance of their disease and need of therapeutic footwear [31], as well as use objective measures of adherence, such as temperature and activity monitors [32].
Studies on footwear adherence have been criticized for not defining the conceptual framework, resulting in high heterogeneity, and for focusing on a narrow range of factors, typically those related to the patient, therapy and health condition [32]. This study included a wide range of factors and is the first study to use the entirety of the Health Belief Model to study the predictors of footwear adherence. However, perceived benefits of therapeutic footwear, self-efficacy and cues to action were the only factors from the model that were significantly associated with adherence in the multiple regression analysis. This would suggest that the model may need to be revised or replaced by another model to better understand footwear adherence.
Although the conclusions are preliminary, the study has some clinical implications. First, patients should be advised to store their conventional footwear out of sight to eliminate the visual cue (temptation) to wear it and increase the effort to choose it by needing to go and get it from somewhere else. Second, patients should be advised to keep their therapeutic footwear visible at home to provide a visual cue (reminder) to wear it and reduce the effort to choose it. Third, clinicians should discuss with patients how to form new footwear habits, that is, how therapeutic footwear can become the new default option that is chosen without conscious effort. Strategies to create such new habits are an important avenue for future research and may include advice on how to store footwear and patient education to support self-efficacy [33], which was the factor that most strongly correlated with adherence in this study. Other suggestions for clinicians to improve adherence are to follow up on therapeutic footwear prescriptions and to educate patients on how peripheral neuropathy increases the risk of ulcerations. Additionally, patients should be educated with the aim of strengthening their belief in the efficacy of therapeutic footwear in healing and preventing ulcers. For instance, education could include a visualization of plantar pressure measurements to compare the amounts of pressure when one wears therapeutic footwear, wears conventional footwear and walks barefoot [34]. Finally, clinicians prescribing therapeutic footwear should acknowledge that improving certain footwear attributes (e.g., fit, pain and walking difficulties) may improve adherence. However, these suggestions are preliminary, and future research is needed to explore this further. This is the largest study on therapeutic footwear adherence to date, and the results revealed that adherence was explained to the largest extent by variables that have not previously been investigated. Thus, a strength of the study was that a wide range of potential predictors were included. Some limitations of the study were that it was observational and cross-sectional, which limited the possibility of inferring causality. For example, certain ways to store footwear, high self-efficacy and the habitual use of therapeutic footwear may improve adherence, but it is possible that these variables reflect adherence rather than cause it. Thus, future studies should use interventions to modify these variables to investigate whether they actually are causes of adherence and potential targets for clinical interventions to improve adherence. In addition, studies should be conducted in other countries to test the generalizability of the results. Other limitations were that we only had data on sex and age for the non-respondents, which means that we cannot know if the sample was representative of the full population. Furthermore, all data were self-reported, and the questionnaire's content validity and psychometric properties were not investigated.
Conclusions
Patients without paid employment or without experience of foot ulcers are more prone to nonadherence than those with employment or with foot ulcer experience.
To improve adherence, clinicians should advise patients to store therapeutic footwear in a visible place at home and put conventional footwear away and encourage patients' self-efficacy and habitual use of therapeutic footwear.
Abbreviations ANOVA: One-way analysis of variance; LSD test: Fisher's least significant difference test
MODELING DRIVER BEHAVIOR FOR TWO AND THREE LANE SECTIONS IN IRAQI RURAL ROADS
Modeling driver behavior is the cornerstone of any traffic simulation model. Driver behavior is complex, and mimicking it realistically in a simulation model is a difficult task. This study has focused on collecting field data from several rural road sites. These data include lane utilization, lane changing and headway. A simulation model has then been developed for representing driver behavior on rural roads. The car-following model developed in this study is a safety-distance one. A hybrid lane changing model has then been developed according to the assumptions suggested by previous studies and the collected field data to match real behavior. The gap acceptance model has been adopted from previous studies and shows good consistency with real driver behavior when compared against other characteristics such as lane changing and lane utilization. The developed model has been calibrated with field data and showed encouraging results.
INTRODUCTION
The need to model the behavior of the driver comes from the desire to improve the efficiency of the driver's interaction with different types of modern automated systems once the model is complete. These systems are able to predict future procedures and to solve many problems that occur at different levels of traffic conditions. Moreover, these systems operate flexibly so that they can be tuned to suit different driver behaviors [1].
The behavior of the driver during driving is complex and varies according to the condition of the driver as well as the surrounding circumstances, which arise either from other users of the road or from the characteristics of the road itself. Therefore, studying the characteristics of the driver and the characteristics of the vehicle comes first in understanding the interaction that occurs between the driver and the vehicle on the one hand and between vehicles on the other [2,3]. Interaction between vehicles can be defined in three terms: the car-following model, the lane change model, and the implicit model between them, which is the gap-acceptance model [4]. Details of the models will be explained in the following sections.
PREVIOUS STUDIES
It is necessary to review some of the previous studies with regard to the three models (i.e. car-following, lane change, and gap-acceptance models) to further understand what these models are and to establish which algorithm of these models can be used to model driver behavior so as to suit reality. The following sections briefly review previous studies on the three models.
CAR-FOLLOWING
The car-following model (CF) plays a major role in the theory of traffic flow modeling, as it describes the interaction between one vehicle and another in a single lane. The introduction of this model dates back to the 1960s. Several stages of development and calibration have been carried out to make these models more reflective of reality. The safety distance model is among the most widely developed, by many researchers including Gipps [7], Benekohal and Treiterer [8], Yousif [9], and Al-Jameel [10]. The basic principle of this model is to avoid collisions between vehicles at any time of traffic, and this is achieved by leaving enough space between successive vehicles.
LANE CHANGE
The lane change model (LC) comes in second place after the car-following model in traffic flow modeling and has a close relationship with the driver's behavior. It describes changing the location of the vehicle from one lane to another to achieve more comfortable driving conditions or as a result of circumstances that force the driver to change lanes; these conditions arise either from other users of the road or from the effects of the road characteristics [11]. The lane change model is divided into mandatory lane change (MLC) and discretionary lane change (DLC). MLC occurs when the driver is forced to change the lane or changes the lane to reach the intended destination, while DLC occurs in order to reach desirable driving conditions such as overtaking a slow vehicle [12].
GAP-ACCEPTANCE
In order to perform a lane change, a motive for the change must exist, and then an examination is made of whether there is an acceptable gap to achieve it. The gap acceptance model (GA) is responsible for, and controls, whether a lane change will occur or not. Among the models that have been developed is the one proposed by Wang [13], which is more reasonable with respect to reality. In the present study, the model developed by Al-Jameel [14], which is based on the Wang [13] model, was used; Equations (1 and 2) give the critical gaps for the lead and lag vehicles. The critical gap is the minimum time that, if available, gives the vehicle driver sufficient time to maneuver and change from his lane to another lane [15]. Figure 1 shows the lane changing process and the critical gap.
DATA DESCRIPTION (DATA COLLECTION AND ANALYSIS)
Data collection is a basic step for the calibration of the developed model and is also used to find some values of the parameters involved in the evolution of the model prior to calibration.
One of the most widely used methods of collecting traffic flow data is video recording. This method is still used to collect different data, including traffic volumes, time headway, frequency of lane change and other data. The stage prior to data collection is to select suitable sites for the case study. Several criteria were adopted for choosing sites: the site must be free of curvature and grade, free of roadworks, and allow data to be collected easily; in addition, there should be a suitable place (e.g. footbridges or bridges) to erect a recording camera away from the drivers' sight. Table 1 describes the data collected from four sites of rural roads with normal sections. The two sites with three lanes for each direction are: the first site is Expressway No.1 Section R4 linking Baghdad and Hilla, and the second site is the road between Baghdad and Mahmudiyah. The two sites with two lanes for each direction are: the first site is the road linking Najaf and Karbala, and the second site is the road between Baghdad and Al-Kut, as shown in Figure 2. In addition to using the camera to record traffic, a radar speed gun was used to measure the desired speed; speed measurements were conducted at traffic volumes of less than 300 veh/hr [16]. Data analysis also took a rather long time because most data collection was done by camera recording. Vehicles were classified into two groups: cars, which include passenger cars, motorcycles, taxis, small vans, and small pickups; and heavy goods vehicles (HGVs), which include buses, lorries, large vans, and trailers. Figure 3 shows the traffic volume for both the total traffic volume and HGVs, whereas Figure 4 shows the traffic volume for each lane.
Time headway
The characteristics of both vehicle and driver should be determined either from previous studies or from on-site investigations. The characteristics are time headway, vehicle length, acceleration/deceleration rate, buffer space, desired speed, and driver reaction time.
By using a recording camera and a stopwatch program, the time headway was calculated; it is the difference in time between two consecutive vehicles as they pass the same point in the same lane. Determining the theoretical distribution that fits the distribution of the field time headway data is important for the process of generating vehicles into the system when building a microsimulation model [17].
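The text does not reproduce the headway distribution that was actually fitted, so the sketch below is only illustrative: it assumes a shifted (negative) exponential headway model and shows how a fitted distribution can then be used to generate vehicle arrivals in a microsimulation. The function names and the sample headway values are hypothetical.

```python
# Illustrative sketch (not the paper's actual procedure): fit a shifted
# exponential distribution to measured time headways and use it to generate
# vehicle arrival times for a microsimulation model.
import numpy as np
from scipy import stats

def fit_headway_distribution(headways_s):
    """Fit a shifted exponential; returns (minimum headway shift, mean of tail)."""
    loc, scale = stats.expon.fit(np.asarray(headways_s, dtype=float))
    return loc, scale

def generate_arrivals(loc, scale, horizon_s, rng=None):
    """Generate cumulative vehicle arrival times until the horizon is reached."""
    rng = np.random.default_rng() if rng is None else rng
    t, arrivals = 0.0, []
    while True:
        t += loc + rng.exponential(scale)   # headway = shift + exponential part
        if t > horizon_s:
            return np.array(arrivals)
        arrivals.append(t)

# Example: field headways (seconds) measured from video; values are illustrative
observed = [2.1, 3.8, 1.6, 7.2, 4.4, 2.9, 10.5, 1.9, 5.1, 3.3]
loc, scale = fit_headway_distribution(observed)
arrival_times = generate_arrivals(loc, scale, horizon_s=300.0)
print(f"shift = {loc:.2f} s, mean tail = {scale:.2f} s, vehicles generated = {len(arrival_times)}")
```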
Vehicle lengths
The distribution of the length of vehicles on the road can be calculated by sensors implanted in the road surface. Due to the lack of these sensors on Iraqi roads, the task of calculating the length of vehicles is not possible at the present time. Therefore, it was based on a previous study conducted by Al-Hanna (1974) in the UK, and Table 3 shows the mean length of vehicles for both cars and HGVs; Al-Hanna found that the distribution of vehicle lengths fits a normal distribution.
Acceleration/deceleration rate
With regard to the acceleration/deceleration rate, two types of acceleration are used according to ITE (1999): the normal acceleration (i.e. comfortable acceleration) and the maximum acceleration rate. The normal acceleration is used, for example, to reach the desired speed, while the maximum acceleration rate is used in the case of stopping the vehicle suddenly.
Buffer space
The buffer space (BUF_S) is the distance between stopped vehicles under congested conditions, as shown in Figure 1. A value of 1.7 m was adopted as its initial value, which was subsequently adjusted by the calibration process.
Desired speed
The desired speed measurement was done by using a speed gun. Table 6 shows the speeds for a three-lane section; the data distribution was tested using the chi-square test and found to fit a normal distribution. Figure 7 shows the normal distribution of desired speed for the second lane (cars and HGVs).
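As a hedged illustration of the chi-square goodness-of-fit check described above (the paper's actual binning and speed data are not reproduced here), the following sketch tests a sample of desired speeds against a fitted normal distribution using equal-probability bins; the speed values are simulated only for the example.

```python
# Illustrative chi-square goodness-of-fit test of desired speeds against a
# fitted normal distribution (not the paper's exact procedure or data).
import numpy as np
from scipy import stats

def chi_square_normal_fit(speeds, n_bins=6):
    x = np.asarray(speeds, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    # Interior bin edges chosen as quantiles of the fitted normal, so that
    # every bin has equal expected probability; outer bins are open-ended.
    interior = stats.norm.ppf(np.linspace(0, 1, n_bins + 1)[1:-1], mu, sigma)
    idx = np.searchsorted(interior, x)            # bin index 0 .. n_bins-1
    observed = np.bincount(idx, minlength=n_bins)
    expected = np.full(n_bins, len(x) / n_bins)   # equal-probability bins
    # ddof=2 because the mean and standard deviation were estimated from the data
    chi2, p = stats.chisquare(observed, expected, ddof=2)
    return chi2, p

rng = np.random.default_rng(0)
speeds = rng.normal(95.0, 8.0, 200)               # km/hr, illustrative sample
print(chi_square_normal_fit(speeds))
```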
Reaction time
The reaction time of the driver in both cases (i.e. surprised and alerted) was adopted from a previous study as suggested by Johansson and Rumar [19] and Figure 8 shows the cumulative distribution of reaction time.
Traffic distribution
As for the distribution of vehicles on each available lane, the traffic volume of each lane was calculated. Due to the absence of loop detectors for collecting enough data to determine the distribution of vehicles on each lane correctly, the data were collected by camera recording to determine the distribution of vehicles on each lane. Therefore, all data obtained were combined into a single data set for each of the three-lane and two-lane sections. Figure 9 shows the distribution of vehicles for a three-lane section, where the distribution of vehicles on the second and third lanes is higher than that on the first lane. Most drivers avoid using the first lane because of the large number of surface defects [20], which increases the traffic volume on the second and third lanes.
As for the two-lane section, the distribution of vehicles on the second lane is higher than that on the first lane, as shown in Figure 10, because of the behavior and desire of the driver. Table 7 summarizes the equations obtained by the Excel program, noting that the data were filtered from some abnormal data to strengthen the relationship. As for the distribution of HGVs on each lane, at this stage of the research and based on the site investigations, a fixed percentage of the total traffic flow was used (HGVs% = 0.20).
MODEL RULES
In this section, sub-models and hypotheses used in the development of the model are identified. The model developed consists of three sub-models, namely the car-following, lane change and gap-acceptance. Some of these concepts are explained in the literature review section, but in this section, each sub-model will be addressed individually and the most important features of each model will be defined.
Car-following rules
For the car-following sub-model, the safe distance model assumes that there is sufficient distance between one vehicle and another to prevent collisions in all traffic flow situations. It is highly efficient in representing reality because it has been developed in many stages, for different geometric road sections (for example, roadworks sections, weaving sections, and narrow lane sections) and at the microsimulation level [9,14,21,22]. Therefore, this study depends on the model developed by Al-Jameel [14], who based his model on Benekohal [23]. This model consists of five cases of acceleration controlling the longitudinal movement of the vehicle along the road; these accelerations are:
a. Acceleration of comfortable conditions.
b. Acceleration of the mechanical capability of the vehicle.
c. Acceleration for moving from stationary conditions.
d. Acceleration from slow moving conditions.
e. Acceleration of stopping distance conditions.
One of these accelerations is used according to the situation in which the vehicle finds itself, and the appropriate acceleration is chosen according to a flowchart, as Al-Jameel [24] notes.
Equations (4 and 5) are then applied to update the location and speed of the vehicle every 0.5 sec (a scan time of 0.5 sec was adopted according to Alterawi [22]), where t is the scanning time interval (sec) and ACL_F is the acceleration (or deceleration if negative) rate of the following vehicle (m/sec2).
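Equations (4) and (5) themselves are not reproduced in the extracted text, so the sketch below assumes the standard constant-acceleration kinematic update over the 0.5 sec scan interval; it is an illustration of the update step, not necessarily the paper's exact formulation.

```python
# Minimal sketch of the per-scan vehicle state update, assuming a standard
# constant-acceleration update over the 0.5 s scan interval.
SCAN_TIME = 0.5  # seconds, as adopted from Alterawi

def update_vehicle(position_m, speed_ms, accel_ms2, dt=SCAN_TIME):
    """Advance one vehicle by one scan interval.

    accel_ms2 plays the role of ACL_F: the acceleration (negative for
    deceleration) chosen by the car-following rules for the current case.
    """
    new_speed = speed_ms + accel_ms2 * dt
    if new_speed < 0.0:                       # vehicle stops within the scan
        t_stop = speed_ms / -accel_ms2
        return position_m + 0.5 * speed_ms * t_stop, 0.0
    new_position = position_m + speed_ms * dt + 0.5 * accel_ms2 * dt ** 2
    return new_position, new_speed

# Example: a following vehicle at 60 m travelling 20 m/s while braking at 1.5 m/s^2
print(update_vehicle(60.0, 20.0, -1.5))
```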
Table 7 (excerpt): lane utilization equations and R2 values for the three-lane and two-lane sections (the fitted equations are not recoverable from the extracted text).
Lane change rules
As for the sub-model of changing the lane, several hypotheses have been adopted to develop a model suitable for the behavior of the driver when changing the lane. As mentioned earlier in the literature review, the model of LC is divided into MLC and DLC, and since the study area is a normal section, the LC is the type DLC. Therefore, the type of DLC will be developed only in this study. The change of the lane either towards the right lane (i.e. the slower lane) or towards the left lane (i.e. the faster lane) will occur according to the driver's interaction and desire.
LC to the left lane
The driver changes his/her lane to the left lane to achieve a higher speed or to reach the desired speed when preceded by a slow vehicle, and this happens if one of the following conditions is met: the speed of the driver's vehicle is higher than the speed of the preceding vehicle by the value of R (the value of R was suggested by Ferrari [25]).
R = 1040 / SP_Fd
where SP_Fd is the desired speed (km/hr) of the following vehicle.
• If the speed of the vehicle is lower than the desired speed by the value of R, the rules of the CF sub-model do not allow the speed to increase, and there is sufficient distance between the target vehicle and the new leader vehicle.
LC to the right lane
A change to the right lane occurs, for example, when the driver wants to return to the right lane after completing an overtaking manoeuvre, and this is achieved if: the driver of the vehicle prefers to change to the right lane when his/her speed is lower than the speed of the following vehicle by the value of R, provided that the speed of the vehicle is less than or equal to the desired speed.
A change may occur towards the right lane when the right lane is empty for more than 300m [26].
According to the field data, a percentage of drivers prefer to stay in the second lane, even if the first lane is free of vehicles for a long distance. This percentage ranges from 15 to 20 percent of users of the second lane, whether in the section with three lanes or two lanes. Figure 11 illustrates the flowchart of the process of changing the lane of the type DLC.
The change to another lane is made if the previous assumptions are satisfied and there is a motive for the driver to change his/her lane. The process of changing the lane is controlled by the critical gap; for the driver to change lanes safely, this gap must be available, as shown in Equation 7. To calculate this gap, either for the lead or the lag vehicle, Equations 1 and 2 mentioned in the literature review are used.
LG_g ≥ LG_gmin and LD_g ≥ LD_gmin (7)
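The following sketch illustrates, under stated simplifications, how the DLC incentive check and the gap-acceptance test of Equation (7) can be combined in code. The critical lead and lag gaps of Equations (1) and (2) are not reproduced in the text, so they are passed in as pre-computed values, and the incentive rule is reduced to the speed-difference threshold R = 1040/SP_Fd; the additional checks in the text (the CF-rule restriction and the spacing to the new leader) are omitted. All names and numbers are illustrative.

```python
# Illustrative sketch of the discretionary lane change (DLC) decision.
def incentive_to_move_left(speed, desired_speed, leader_speed):
    """Simplified DLC incentive: either text condition is met.

    speed and desired_speed are in km/hr; the car-following and spacing
    checks described in the text are intentionally omitted here.
    """
    r = 1040.0 / desired_speed                  # R suggested by Ferrari
    return (speed - leader_speed) > r or (desired_speed - speed) > r

def gap_is_acceptable(lead_gap, lag_gap, critical_lead_gap, critical_lag_gap):
    """Equation (7): both the lead and the lag gap must reach their critical values."""
    return lead_gap >= critical_lead_gap and lag_gap >= critical_lag_gap

def attempt_left_lane_change(vehicle, target_lane_gaps, critical_gaps):
    """Combine incentive and gap acceptance; True means the DLC is executed."""
    if not incentive_to_move_left(vehicle["speed"], vehicle["desired_speed"],
                                  vehicle["leader_speed"]):
        return False
    return gap_is_acceptable(target_lane_gaps["lead"], target_lane_gaps["lag"],
                             critical_gaps["lead"], critical_gaps["lag"])

# Example call with illustrative numbers (speeds in km/hr, gaps in seconds)
vehicle = {"speed": 80.0, "desired_speed": 100.0, "leader_speed": 60.0}
print(attempt_left_lane_change(vehicle,
                               target_lane_gaps={"lead": 3.0, "lag": 2.5},
                               critical_gaps={"lead": 2.0, "lag": 2.0}))
```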
CALIBRATION OF THE DEVELOPED MODEL
After completing the development of the model, it is important to conduct a calibration process in order to ensure that the model has the ability to represent reality. Calibration includes checking the results of the model, comparing them with field data and correcting the parameters. This process is repeated until the error in the model results is reduced to the minimum allowed [27]. The car-following sub-model does not need to be calibrated separately at this stage of the research; instead, calibration is done by comparing the traffic flow of the model results with the field data and also by calibrating the lane change sub-model.
The outputs of the model are the traffic flow and the frequency of lane change, and many other outputs can be produced as needed; these outputs can be measured in the model by installing a virtual loop detector on a section of the road. Figure 12 shows the comparison between the traffic flow results of the model and the field data for a whole section of the road, and Table 8 shows some statistical tests (see Equations 8 and 9). Figure 13 shows a comparison between the modeling results and the field data for each lane, and Table 9 confirms this statistically.
The value of GEH is acceptable when it is less than 5, and the RMSEP value is satisfactory when it is less than or equal to 15 (as mentioned by Alterawi (2014)).
Here, n is the number of time intervals, and the remaining symbols in Equations 8 and 9 denote the observed data and the simulated data at time interval i, respectively.
As for the calibration of the lane change sub-model, the frequency of lane change was used. By means of the gap-acceptance parameters (i.e. β1, β2, β3, and β4), the best agreement between the modeling results and the field data can be achieved. A range from (0.1, 0.1, 0.1, 0.1) to (1, 1, 1, 1) was used for these parameters, with several runs and resetting of the parameters until the best required values were achieved.
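Equations (8) and (9) are not reproduced in the extracted text, so the sketch below uses the commonly cited forms of the GEH statistic and the root mean square error of percentage (RMSEP); if the paper's exact definitions differ, the functions would need to be adjusted accordingly. The flow values are illustrative.

```python
# Sketch of the two calibration statistics, using their commonly cited forms.
import math

def geh(model_flow, observed_flow):
    """GEH statistic for one hourly flow pair; values below 5 are acceptable."""
    return math.sqrt(2.0 * (model_flow - observed_flow) ** 2
                     / (model_flow + observed_flow))

def rmsep(simulated, observed):
    """Root mean square error of percentage over n time intervals (in %)."""
    terms = [((s - o) / o) ** 2 for s, o in zip(simulated, observed)]
    return 100.0 * math.sqrt(sum(terms) / len(terms))

# Example: simulated vs observed flows (veh/hr) for five time intervals
sim = [820, 790, 905, 1010, 760]
obs = [800, 810, 880, 990, 770]
print([round(geh(s, o), 2) for s, o in zip(sim, obs)])   # per-interval GEH
print(round(rmsep(sim, obs), 2))                         # overall RMSEP in %
```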
VALIDATION OF THE DEVELOPED MODEL
After completing the calibration process, the model needs an additional testing process with other data. The validation process is the stage of ascertaining the viability of the model for several applications. Figure 15 shows the comparison between the flow results of the model and the actual data for the whole model, and Table 10 shows that the modeling results match reality very acceptably according to the statistical tests. As for the speed results, the model has the ability to calculate the speed for each lane, but the real speeds cannot be measured with acceptably high accuracy for all vehicles by the speed gun, since the speed gun can measure the speed of only one vehicle during each run of the device. Therefore, the calibration and validation were conducted for the traffic flow only. Figure 16 shows the comparison of the frequency of lane change (FLC) of the model with the FLC of the field data.
Research of work stability of diamond detectors used in SCR DDIR
In this work we study the influence of various factors on the stability of ionizing radiation detectors installed in the cosmic ray spectrometer (SCR) based on diamond detectors of ionizing radiation (DDIR). The diamond detectors for the SCR are made of single crystals of synthetic type IIa diamond. The diamond detectors were studied successively in three different experiments. The first experiment checked detector stability with the ambient temperature increased up to 70 degrees Celsius. Next, we changed the geometry of detector irradiation by rotating a nuclear source around it and measuring the changes in the detector count rate. The last experiment checked the phenomenon of polarization by prolonged irradiation of the detector with ionizing radiation of various types and energies. The study revealed a strong influence of the polarization effect on the operation of diamond detectors when registering ionizing particles with a short mean free path (in our experiment, the alpha particles of 238Pu). In this work, the correspondence of the experimental results of "rotating" the source around the detector with the data obtained by simulation in GEANT-4 was shown.
Introduction
The detectors of ionizing radiation (IR) based on the diamond sensitive elements are getting more popular in various spheres of human activity in recent years. The idea to use diamond as a sensitive element for the detectors of ionizing radiation appeared in the 1960s, and has been determined by such properties of diamond as high radiation, temperature and chemical resistance [1]. However, difficulty with choosing the crystals with desired properties (mainly due to the different degrees of natural diamond purity) has impeded the intensive development of this direction, and therefore use of diamonds was very costly. Currently advances in growing the artificial diamond single crystals by chemical vapour deposition process (CVD) led to simplification of using diamonds in various applications in science and technology. For example, thanks to its high velocity of charge carriers and radiation resistance, diamond is now widely used as a replacement for silicon sensitive elements in devices monitoring the position of the beam of charged particles in accelerators, preventing the deflection of a beam from a predetermined path [2]. High radiation resistance and its stability at temperatures up to 200 degrees Celsius enabled to use diamond detectors in the vertical neutron chamber in the International Thermonuclear Experimental Reactor (ITER) [3]. This work is devoted to the use of diamond detectors of ionizing radiation in space technology, in particular in the Cosmic Ray Spectrometer developed by the Industrial -Technology Center "UralAlmazInvest". It is expected that the spectrometer will allow to study the cosmic radiation of following types: electrons, protons and heavy ions.
Experiment setup
In this work we studied the DDIR ELSC-1, based on artificial type IIa diamond, to verify the stability of its characteristics under various conditions. The following conditions for the detector were considered: increased ambient temperature; irradiation of the detector at different angles of incidence of the IR on the surface of the DDIR (which was checked against a simulation of the diamond detector made in GEANT-4); and a long irradiation time of the detector with different types of IR (checking the influence of polarization). A brief description of the ionizing radiation sources used in this study is given in Table 1.
Table 1. Radiation sources.
Name of the nucleus and type of radiation: energy spectrum.
238Pu, alpha particles: one line with energy near 5499.9 keV.
90Sr-90Y, beta particles: continuous energy range with a maximum value near 2.28 MeV and an energy of maximum activity near 545.9 keV.
3. The heating of the detector and changing the geometry of irradiation
The ELSC-1 with a 238Pu source was placed in a special heater, where the temperature was raised up to 70 degrees Celsius. Then twelve spectra (300 seconds in duration each) were recorded with a PC-compatible board while the detector and source temperature was decreased in steps of 5 degrees Celsius. We studied the changes in the total count rate of the detector and in the count rate at the peak of total absorption. The results of the data obtained during several series of such measurements are shown in figure 1. The statistics of the collected pulses made the main contribution to the measurement error. As we can see, there are no changes in the count rate of ELSC-1 when detecting the ionizing radiation of 238Pu, even with the ambient temperature near 70 degrees Celsius.
The study of ELSC-1 detector behavior in a changing geometry of irradiation was the next experiment. The angle between the incident flux of IR and the surface of ELSC-1 varied in increments of 10 degrees, and after each change the spectrum of ionizing source 90 Sr-90 Y was recorded. Also, a mathematical model was created in the GEANT-4 environment, which imitates a similar experiment. Comparison of the results is shown in figure 2.
As seen from the experimental results, there is a slight decrease in the count rate when the angle of incidence on the detector surface reaches the value of 70 degrees. This means that the decrease in detection efficiency due to the reduction of the effective area of the detector exposed to radiation prevails over the increase in efficiency caused by the increase in the effective thickness of the detector. Comparison of the experimental results with the mathematical model showed similar results, and in this connection we can draw one conclusion: the mathematical model of the diamond detector created in GEANT-4 is fully confirmed by the behavior of the detector in the real world.
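A toy geometric model (not the GEANT-4 simulation used in the work) can illustrate the competition described above between the shrinking effective area (proportional to cos θ) and the growing path length through the sensitive volume (proportional to 1/cos θ). The detector thickness and attenuation length below are assumed values chosen only for illustration.

```python
# Toy geometric model of the angular dependence of the count rate; the
# thickness and attenuation length are assumed, illustrative parameters.
import math

def relative_count_rate(theta_deg, thickness_um=500.0, atten_length_um=2000.0):
    """Count rate relative to normal incidence for a parallel beam."""
    theta = math.radians(theta_deg)
    path = thickness_um / math.cos(theta)                  # longer path at angle
    interaction_prob = 1.0 - math.exp(-path / atten_length_um)
    norm_prob = 1.0 - math.exp(-thickness_um / atten_length_um)
    return math.cos(theta) * interaction_prob / norm_prob  # area loss vs path gain

for angle in (0, 30, 50, 70):
    print(angle, round(relative_count_rate(angle), 3))
```

With these assumed parameters the area term wins, giving a modest drop in the relative rate at 70 degrees, which is qualitatively consistent with the slight decrease reported above.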
Checking the polarization phenomena
The most significant drawback of diamond detectors today is their susceptibility to the influence of the polarization effect. This effect is manifested as a decrease in the charge collection efficiency of the detector (which reduces the counting rate and distorts the spectrum [4]). Checking of the polarization effect in the ELSC-1 detector was carried out by long irradiation of the detector with two types of ionizing radiation. Twelve spectra were recorded continuously, 300 seconds in duration each (total measurement time of one hour). Comparison of the ELSC-1 readings at the beginning and at the end of the measurement cycle allows us to see whether there is any polarization. The comparison of the results for different radiation sources is shown in figures 3 and 4.
We saw a noticeable effect of polarization in the case of registration of radiation with a short mean free path. We investigated some ways to eliminate polarization (the currently known methods of turning off the power supply of the detector and of changing the polarity of the bias voltage [5]). It was found that turning off the power supply of the ELSC-1 for 30 seconds is a better way to eliminate the polarization than reversing the polarity of the bias voltage for the same time. In this regard, a special scheme for the CRS was made that allows the spectrometer to work in the following measurement mode: 60 seconds of measurement, then 10 seconds of depolarization by turning off the bias voltage. Currently, the CRS is at the final stage of manufacturing.
Conclusion
It has been found that the DDIR ELSC-1 remains stable when the ambient temperature increases up to 70 degrees Celsius. We have found a slight change in the counting rate when increasing the angle of incidence of electron radiation on the surface of a diamond detector from 0 to 70 degrees, which confirmed the legitimacy of the use of the mathematical model of the detector set up in GEANT-4. It is shown that with the help of periodic power outages of the detector for short periods of time (about 30 seconds), the instability of the diamond detector associated with the effect of polarization is almost entirely eliminated.
Ribosomal RNA Modulates Aggregation of the Podospora Prion Protein HET-s
The role of the nucleic acids in prion aggregation/disaggregation is becoming more and more evident. Here, using HET-s prion from fungi Podospora anserina (P. anserina) as a model system, we studied the role of RNA, particularly of different domains of the ribosomal RNA (rRNA), in its aggregation process. Our results using Rayleigh light scattering, Thioflavin T (ThT) binding, transmission electron microscopy (TEM) and cross-seeding assay show that rRNA, in particular the domain V of the major rRNA from the large subunit of the ribosome, substantially prevents insoluble amyloid and amorphous aggregation of the HET-s prion in a concentration-dependent manner. Instead, it facilitates the formation of the soluble oligomeric “seeds”, which are capable of promoting de novo HET-s aggregation. The sites of interactions of the HET-s prion protein on domain V rRNA were identified by primer extension analysis followed by UV-crosslinking, which overlap with the sites previously identified for the protein-folding activity of the ribosome (PFAR). This study clarifies a missing link between the rRNA-based PFAR and the mode of propagation of the fungal prions.
Introduction
Prions are infectious proteins that can self-propagate and transmit their fibrillary amyloid conformation to normal indigenous prion proteins [1]. Prions can cause fatal neurodegenerative diseases that affect both humans and other animals. These diseases, called in general Transmissible Spongiform Encephalopathy (TSE), include Bovine Spongiform Encephalopathy (BSE or mad cow disease), Scrapie in sheep, Creutzfeldt-Jakob disease and Kuru in humans [1,2], etc. These diseases are caused by aggregation of the prion protein PrP in the amyloid form, which is a product of the PRNP gene in humans [3][4][5][6]. Besides mammalian prions, several prion-forming proteins were identified in fungi as well. While the biological significance of prion-forming proteins in fungi is somewhat unclear [7], most yeast prions are functional prions having a fibrillar structure similar to the mammalian prions [8], although, so far, there is no evidence for cross-nucleation of the mammalian prions with fungal prions. Thus, fungal prions have provided suitable and safe models for understanding the folding, aggregation and propagation mechanisms of disease-forming mammalian prions.
HET-s is a prion protein that corresponds to the [Het-s] prion system in the filamentous fungi Podospora anserina (P. anserina) [9]. It was initially identified as a part of a non-self-recognition process [10,11] that controls vegetative incompatibility in this fungus [12]. There are two antagonistic allelic variants of this protein: HET-s and HET-S, which are of the same length (289 amino acids) but differ in the sequence for 13 amino acids. The HET-s is a two-domain protein with a C-terminal domain measuring Rayleigh light scattering at 402 nm. While freshly diluted HET-s showed only background level, a significant increase in light scattering was seen after overnight incubation of HET-s, indicating large aggregate formation in the HET-s sample. In order to study whether RNA and in particular rRNA influences HET-s aggregation, overnight incubation of HET-s was conducted in the presence of different RNA samples at a fixed concentration. These included bulk tRNAs isolated from E. coli MRE600, in vitro transcribed mRNAs from two unrelated proteins-(i) E. coli dihydrofolate reductase (DHFR) and (ii) human carbonic anhydrase I (HCA), and different domains of E. coli 23S rRNAs. Since full-length 23S rRNA produces high background scattering, individual domains of E. coli 23S rRNA were transcribed and subjected to this assay.
As shown in Figure 1A,B, addition of mRNAs or tRNAs did not show any significant change in light scattering suggesting that they do not influence HET-s aggregation. In contrast, a moderate to significant decrease in light scattering was seen with different domains of rRNAs. Domain V of 23S rRNA showed the highest reduction in light scattering, which suggests that this highly conserved rRNA domain strongly inhibits HET-s aggregation. Domains IV and II of 23S rRNA were also quite effective in reducing HET-s aggregation. It is worth mentioning that the domain V of 23S rRNA hosts peptidyl transferase center as well as the active sites for PFAR and the domains IV and II are closely associated with it.
Next, in vitro transcribed domain V rRNAs from large ribosomal subunit from various prokaryotic (E. coli, Bacillus subtilis (B. subtilis)) and eukaryotic (yeast S. cerevisiae, human, human mitochondria) sources were tested in HET-s aggregation assay. As shown in Figure 1C,D, all domain V rRNAs showed a comparable level of reduction in light scattering. This result suggests that in addition to peptidyl transfer and PFAR, prevention of protein aggregation is likely another conserved function of the domain V of 23S/25S/28S rRNA, irrespective of prokaryotic or eukaryotic origin. Further, upon titration of domain V rRNA (E. coli) gradual reduction of light scattering was observed in a concentration-dependent manner ( Figure 1E,F). However, unlike PFAR, where a 1:1 molar ratio of the sample protein and domain V of rRNA is required for the highest extent of protein-folding, substoichiometric or relatively lesser concentration of domain V rRNA than the protein HET-s was sufficient for largest reduction of its aggregation. Our results indicate that rRNA especially the domain V of 23S rRNA can prevent the formation of large aggregates of the HET-s prion.
ThT Binding Demonstrates That rRNA Prevents Fibrillar Aggregation of HET-s
ThT binding assay is frequently used for the quantitative determination of amyloid fibril formation as its fluorescence increases specifically by binding to mature β-sheet enriched amyloid fibrils [39,40]. We have probed HET-s aggregation with ThT fluorescence under conditions identical to light scattering measurements. ThT was added just before fluorescence measurement, which excluded any effect of ThT on HET-s aggregation process.
As shown in Figure 2, ThT fluorescence increased significantly upon HET-s aggregation. This observation suggests that most likely HET-s aggregates to fibrillar amyloids, which is in good agreement with earlier reports [41]. Further, we analyzed the effect of the 23S rRNA domains (Domains II, IV, V of 23S rRNA from E. coli) in the ThT binding assay since these RNAs caused a significant reduction in light scattering by HET-s aggregates (Figure 2, inset). The bulk tRNAs isolated from E. coli, which did not have a pronounced effect in the light scattering assay was used as a control. Larger RNAs could not be used in this assay due to high background fluorescence of ThT with just RNAs. As expected, the bulk tRNAs did not reduce ThT fluorescence. However, all 23S rRNA domains lead to a decrease in ThT fluorescence (Figure 2, inset). Again, the highest reduction was seen with domain V of 23S rRNA. Moreover, the degree of reduction in ThT fluorescence with different 23S rRNA domains (Figure 2, inset) followed the same trend as seen in the light scattering assay ( Figure 1E,F). This result confirms that rRNAs, especially the domain V of 23S rRNA, prevent amyloid aggregation of the HET-s prion.
HET-s Aggregates Show Different Morphology with or without RNAs as Seen by TEM
We have analyzed the aggregation morphology of HET-s proteins by TEM. As expected, after overnight aggregation reaction, the free HET-s protein forms typical amyloid fibrils ( Figure 3A). Interestingly, when the aggregation reaction was done in the presence of domain V of 23S rRNA we have noticed, not only, a change in the fibril morphology, but also, a vast decrease in the amount of HET-s aggregation ( Figure 3B). Our results showed that HET-s alone forms long amyloid fibrils as well as clustered aggregates from which the long fibrils emerge. In contrast, in the presence of domain V of 23S rRNA (E. coli), the fibrillar structures disappeared; instead, scattered smaller aggregates with spherical and branched structures were observed. Interestingly, RNAs could be seen as black dots among the HET-s aggregates as they bind better to uranyl ions supplied in the reaction as acetate salt. When other RNAs (e.g., DHFR mRNA, HCA mRNA) were tested with HET-s, various aggregate morphologies could be seen, which were different from the HET-s alone, but not free from fibrillar structures ( Figure 3C,D). They caused accumulation of several intermediate aggregate structures, but the reduction of aggregation was not to the same extent as with domain V of 23S rRNA. This result visibly confirms that domain V rRNA effectively reduces and alters HET-s aggregation morphology.
Cross-Seeding Assay Demonstrates That Domain V of 23S rRNA Aids in Formation of The HET-s "Oligomeric Seeds"
To determine how domain V rRNA affects the HET-s fibril formation, aggregation kinetics of HET-s was followed by Rayleigh light scattering (Em. 402 nm, Ex. 400 nm) for HET-s alone or with domain V of 23S rRNA (E. coli; Figure 4). HET-s alone started aggregation immediately after dilution of the denaturant and light scattering increased with time suggesting gradual aggregation of the protein. When domain V rRNA was added with HET-s, both the rate and the amplitude of light scattering decreased dramatically (Figure 4), suggesting that significantly smaller amounts of large HET-s aggregates populated in the presence of the domain V rRNA. This result is in full agreement with end-point measurements presented in Figure 1A,B. However, to test whether the domain V rRNA aids in the formation of the oligomeric amyloid "seeds", as also seen for murine rPrP [37], we designed a cross-seeding assay. For that, we first set HET-s aggregation reactions without and with domain V of 23S rRNA by incubating overnight at 37 • C. Then, the supernatant was collected after centrifuging down the large HET-s aggregates. A small volume of the cleared supernatant (10 µL) was added into a fresh HET-s aggregation reaction (500 µL) and then HET-s aggregation was followed with time using Rayleigh light scattering as described above. In both cases, we observed a pronounced and faster increase in light scattering as would be expected from seeded aggregation reactions ( Figure 4). The addition of supernatant from the HET-s with domain V of 23S rRNA reaction showed the fastest and highest increase in light scattering suggesting that it contained "seeds" for HET-s aggregation. Thus, it could be concluded that domain V rRNA blocks the formation of large aggregates of HET-s, but facilitates the accumulation of small soluble aggregates. These can work as "seeds" to induce large aggregation in fresh HET-s solution.
The Interaction Map of HET-s on Domain V rRNA
Domain V of 23S rRNA was found to be an effective suppressor of fibrillary aggregation of HET-s. Here, we have mapped the interaction sites of HET-s with domain V rRNA using UV cross-linking followed by primer extension (by reverse transcription) assay. As shown in Figure 5, UV cross-linking of HET-s immediately after dilution of the denaturant produced reverse transcription "road-blocks" on almost the same nucleotides on domain V rRNA as other control protein substrates, namely HCA, DHFR and bovine carbonic anhydrase (BCA). The main interaction sites were U2474-A2476, U2492-G2494, G2553-C2556, A2560-A2564 and U2585-G2588. Interestingly, the same sites were reported earlier for PFAR [28]. Thus, this observation suggests that PFAR might be involved in the prevention of HET-s aggregation. To test that, we used a mutant variant of domain V rRNA and tested it in the HET-s aggregation assay.
Effect of Mutations in Domain V rRNA in HET-s Aggregation
HET-s protein produced a strong block in the residues UAG2586-88 (E. coli numbering) on domain V rRNA. These residues are highly conserved and mutation in these bases showed defects in PFAR. We tested the effect of UAG2586-88CCA mutant domain V rRNA on HET-s aggregation by light scattering and ThT binding assays. In both assays, UAG2586-88CCA domain V rRNA showed less efficiency in reducing HET-s aggregation compared to the wild-type domain V rRNA ( Figure 6B,C), suggesting that interaction with these bases of domain V rRNA might have an impact on reduction of HET-s aggregation. However, it should be mentioned that PFAR was completely lost with this mutant domain V rRNA, while in case of HET-s aggregation it only caused partial reduction. Further, we tested this mutant domain V rRNA in UV cross-linking followed by primer extension assay. No binding was seen on the altered bases while other interaction sites remained unchanged ( Figure 6A). This result suggests that HET-s interacts with domain V rRNA in a sequence-dependent manner. However, whether this interaction is directly causative for the reduction of HET-s aggregation or not remains to be answered.
Discussion
How newly synthesized polypeptide chains are folded in the living cells is one of the major questions in biological science. Several molecular chaperones were shown to be part of the process, which suggests that the cells have evolved multiple processes to ensure protein folding under various circumstances. Ribosomes from all three kingdoms of life were shown to have activity in refolding denatured proteins to their active state [21]. This activity, commonly called PFAR, was assigned to the large subunit of the ribosome, and more precisely to the domain V of the largest rRNA, which belongs to the large ribosomal subunit and holds the peptidyl transferase center [42]. A recent study demonstrated that ribosomes can also disaggregate various folding intermediates [38]. However, whether PFAR and protein disaggregation are two sides of the same coin remains elusive.
The interaction of the prion proteins with RNAs, resulting in modulation of their folding and aggregation pathways is an established fact [37,43,44]. As mentioned in the introduction, earlier results with the antiprion compounds 6AP and GA indicated a close involvement between PFAR and prion processes. These compounds were primarily identified by red/white screening in yeast [PSI+] system and further confirmed in the mammalian prion system [45,46]. The red/white colony screening method is based on the principle that in [PSI+] cells, most of the Sup35 protein, a subunit (also called eRF3) of the eukaryotic release factor, is sequestered into protein aggregates and thus unavailable to function in translation termination. As a result, [PSI+] causes an increased tendency to read through the stop codons. The ade1-14 allele contains an opal stop codon in the open reading frame of ADE1. When Sup35p is in its aggregated, prion conformation ([PSI+] cells), ribosomes read through this opal codon, which allows cells to grow on adenine-deficient medium (SD-Ade) and produce regular white colonies. However, when Sup35p is in its normal soluble form ([PSI-] cells), translation of the ade1-14 allele terminates at the opal codon preventing cells from growing on SD-Ade and leading to red colonies due to the formation of a metabolic byproduct. The treatment of [PSI+] cells with 6AP and GA results in the formation of red [PSI-] daughter colonies, suggesting that the prion phenotype was reversed. This would mean that either, in those 6AP/GA-treated cells, Sup35p could not aggregate to the inactive large amyloid form or alternatively, that prion propagation to daughter cells by means of small aggregates or "seeds" formation was blocked. Since 6AP and GA bind specifically to rRNA and the binding is sensitive to mutations on domain V rRNA [28,46], PFAR was already implied in prion processes in vivo. However, the question remained whether PFAR is involved in "seeds" formation and thus in prion propagation, or alternatively-in large prion fibril formation.
In coherence with earlier reports [31,32], our results with HET-s prion protein as a model system shed light on the involvement of the ribosome in the prion propagation processes. We find that rRNA, especially domain V of 23S/25S/28S rRNA, can prevent spontaneous aggregation of the HET-s prions into large, amyloid fibrils. Instead, it can facilitate the accumulation of the small oligomeric aggregates, which like prion "seeds", can induce de novo fibrillar aggregation of HET-s. Our primer extension data presented in Figure 5 demonstrate that HET-s interacts with domain V of 23S rRNA using the nucleotides, which were identified for PFAR in relation to other nonprionogenic proteins [29,30,47]. Moreover, mutation of those nucleotides abolishes or diminishes the interaction (Figure 6A), also similar to what was seen earlier for other proteins [28]. This leads to the conclusion that HET-s interaction with rRNA is associated with PFAR. Thus, combining our observations together with earlier reports we propose that most likely, PFAR prevents misfolding of HET-s proteins and thereby blocks the formation of large, fibrillar and amorphous aggregates (Figure 2). However, PFAR does not inhibit HET-s aggregation completely, leading to the formation of the oligomeric HET-s "seeds". Our analyses are presented in a simple model in Figure 7. Our in vitro biochemical results can be extrapolated to explain the in vivo results of 6AP and GA action in [PSI+] yeast cells. In full agreement with the results and analyses presented by Voisset et al., we propose that PFAR is involved in the propagation of the [PSI+] prions by oligomeric "seeds" formation [32]. 6AP and GA primarily inhibit PFAR by binding to the domain V of 25S rRNA [28]. As a consequence, "seeds" formation diminishes and hence, prion propagation stops. Combining evidence from our current results and earlier works, we conclude that the rRNA-based PFAR governs yeast prion propagation by mediating a subtle balance between fibrillar (insoluble) and (soluble) oligomeric aggregates. The universality of this mechanism remains to be tested in other prion systems. However, given the highly conserved sequence, structure and functions of the domain V of the major rRNA of the large ribosomal subunit from all kingdoms of life, it will not be surprising if such a universal mechanism exists. This will, undoubtedly, be of fundamental scientific and therapeutic interest in the field of prion and neurodegenerative diseases.
Figure 7 (caption): Monomeric HET-s spontaneously undergoes a conformational change to form prefibrillar oligomers, which eventually aggregate to large insoluble amyloid fibrils. However, when it interacts with domain V of 23S rRNA (or other active rRNA components), PFAR prevents misfolding of HET-s and thereby blocks the formation of large, fibrillar (and amorphous) aggregates. Instead, an alternative pathway comes into action and HET-s folds to form soluble "oligomeric seeds" capable of promoting de novo prion propagation.
Chemicals and Buffers for Experiments
The analytical grade chemicals were purchased from Sigma-Aldrich (Saint Louis, MO, USA) and Merck (Kenilworth, NJ, USA). Talon Resin (CLONTECH) was purchased from TaKaRa Bio Europe AB (Göteborg, Sweden). The reagents for in vitro transcription, primer extension assay and extraction of RNAs were purchased from Macherey-Nagel (Dueren, Germany) and ThermoFisher Scientific (Uppsala, Sweden).
HET-s Protein Expression and Purification
The pET21 clone of full-length HET-s with C-terminal histidine-tag was kindly provided by Sven J. Saupe (University of Bordeaux, Bordeaux, Aquitaine, France). The plasmid was transformed into E. coli BL21(DE3) pLysS cells. Bacteria were grown to 0.5 OD in 2× YT medium and then induced by the addition of 1 mM isopropyl-β-d-thiogalactoside. Four hours after induction, the cells were harvested by centrifugation and either stored at −80 • C or proceeded with purification. Cells were lysed in lysis buffer (100 mM potassium phosphate buffer, pH 8.0). The lysate was centrifuged for 20 min at 20,000× g. The pellet was washed in lysis buffer and resuspended in denaturing buffer (8 M guanidinium-HCl (Gdn-HCl) in lysis buffer). The lysate was incubated with Talon Resin (CLONTECH) for 1 h at 20 • C, and the resin was washed with washing buffer (8 M urea in lysis buffer). The HET-s protein was eluted from the resin in the denatured state with elution buffer (200 mM imidazole in washing buffer) and stored at 4 • C.
In Vitro Transcription and Extraction of Various RNAs
Plasmids containing DNA sequences for domain V of the large rRNA from different species (human, human mitochondria, the bacterium B. subtilis, yeast S. cerevisiae) and PCR products containing sequences of 23S rDNA from E. coli and the mRNAs of HCA (783 nucleotides (nt)) and DHFR (498 nt) were used as DNA templates for transcription. The in vitro transcriptions were done using T7 RNA polymerase according to [30] and the RNAs were purified from free nucleotides by using an RNA purification kit (Macherey-Nagel). Bulk tRNAs were isolated from E. coli MRE600 by phenol-chloroform treatment [48]. The quality of the RNAs was checked by running them on a denaturing urea polyacrylamide gel. The lengths of the 23S rRNA domains were as follows: domain V (595 nt), domain IV (360 nt) and domain II (725 nt).
Light Scattering Assay
Rayleigh light scattering is often used to monitor protein aggregation since the intensity of the scattered light increases with the increase in the size and density of the particles. For studying HET-s aggregation, 8 M urea denatured HET-s was diluted 50 times in 50 mM Tris-HCl buffer (pH 7.5) to a final concentration of 5 µM and incubated without or with different RNA samples overnight at 37 • C. Then, Rayleigh light scattering from the samples was measured at 402 nm (excitation 400 nm, excitation and emission slit 2.5 nm) with a HITACHI F-7000 steady-state fluorescence spectrophotometer (Tokyo, Japan) at 25 • C. For kinetics of HET-s aggregation light scattering at 402 nm was followed with time with the same setup as described above, alone or with various RNAs/"seeds" from previous HET-s aggregation reactions. All measurements were performed at least in triplicates and the data represent the average of three to five independent experiments.
ThT Binding Assay
The HET-s samples with/without RNAs were treated in the same way as in the light scattering assay. ThT (obtained from Sigma) solutions were prepared in double-distilled water and filtered through a 0.22 µm syringe filter. ThT was added to the overnight incubated HET-s samples at a ThT:HET-s ratio of 20:1 and incubated for 3 min at 25 °C, and then ThT fluorescence (between 465 and 565 nm, excitation and emission slit width 5 nm) was recorded using a fluorescence spectrophotometer (HITACHI, F-7000) with excitation at 450 nm. All measurements were done in triplicates and the data represent the average of all three experiments after background subtraction.
Primer Extension Assay for Detecting HET-s Binding Sites on Domain V rRNA
30 µM HET-s protein stored in urea was diluted 100 times in refolding buffer containing domain V variants of 23S rRNA from E. coli (300 nM), and UV cross-linking was performed immediately in a Bio-Rad GS Gene Linker TM instrument (Hercules, CA, USA), with 254 nm UV irradiation (600 mJ) [49]. For comparison, three unrelated proteins (BCA, HCA and DHFR) were denatured with 6 M Gdn-HCl and subjected to UV cross-linking immediately after dilution of the denaturant. The samples were kept on ice during irradiation to prevent heat damage to the RNA. The irradiated samples were precipitated by salt/ethanol and washed with 70% ethanol for primer extension. The primer 5'-ACCCCGGATCCGCGCCCACGGCAGATAGG-3' was labeled with [γ-32P] dATP at 37 °C using T4 polynucleotide kinase for 1 h by the 5'-end-labeling method [50]. The primer extension was done using the same procedure as described in [28].
TEM
HET-s protein stored in urea was diluted to 5 µM and incubated in 50 mM Tris-HCl pH 7.5 at 37 • C for a day without or with 1 µM in vitro transcribed different RNAs. For morphological analysis of aggregates formed in vitro, samples were diluted 1:4 in 50 mM pH 7.5 Tris-HCl. A solution of each sample (10 µL) was applied to a carbon-coated copper grid and negatively contrasted with 2.5% uranyl acetate in 50% ethanol. Samples were studied at 75 kV in a Hitachi H-7100 transmission electron microscope (Tokyo, Japan), and images were obtained with Gatan 832 Orius SC1000 (Gatan Inc., Pleasanton, CA, USA).
Cross-Seeding Assay
First, 8 M urea-denatured HET-s was diluted 50 times in 50 mM Tris-HCl buffer (pH 7.5) to a final concentration of 5 µM and incubated without or with domain V of 23S rRNA (1 µM) overnight at 37 °C to induce aggregation. The overnight samples were centrifuged at 14,000 rpm for 30 min at room temperature, and the supernatant was separated from the aggregated pellet. Then, 10 µL of the cleared supernatant from each reaction was added as "seeds" to fresh dilutions of 8 M urea-denatured HET-s (500 µL). The aggregation kinetics was followed by monitoring Rayleigh light scattering at 402 nm (excitation 400 nm; excitation and emission slits 2.5 nm) with a HITACHI F-7000 steady-state fluorescence spectrophotometer, as described under "Light Scattering Assay". As controls, we also followed the kinetics of HET-s aggregation with and without domain V of 23S rRNA (1 µM). The fluorescence data are plotted against time to follow the time course of HET-s aggregation with/without "seeds". All experiments were done in triplicate.
Author Contributions: S.S. designed the study and the experiments; Y.P. performed all experiments; Y.P., P.K. and S.S. performed the data analyses; Y.P. and S.S. wrote the manuscript; and P.K. contributed with comments and the finalizing of the discussion. All authors have read and agreed to the published version of the manuscript.
Wildfire Risk Levels at the Local Scale: Assessing the Relative Influence of Hazard, Exposure, and Social Vulnerability
Wildfire risk assessment provides important tools for fire management by analysing and aggregating information regarding multiple, interactive dimensions. The three main risk dimensions (hazard, exposure, and vulnerability, the latter considered in its social dimension) were quantified separately at the local scale for 972 civil parishes in central mainland Portugal and integrated into a wildfire risk index. The importance of each component in the level of risk varied, as assessed by a cluster analysis that established five different groups of parishes, each with a specific profile regarding the relative importance of each dimension. The highest values of wildfire risk are concentrated in the centre-south sector of the study area, with high-risk parishes also dispersed in the northeast. The wildfire risk level is dominated by the hazard component in 52% of the parishes, although with contrasting levels of magnitude. Exposure and social vulnerability dominate together in 32% of the parishes, with the latter being the main risk driver in only 17%. The proposed methodology allows for an integrated, multilevel assessment of wildfire risk, facilitating the effective allocation of resources and the adjustment of risk reduction policies to the specific reality in each parish, which results from distinct combinations of the wildfire risk dimensions.
Introduction
Wildfires are becoming more harmful, with recent events in Southern Europe, South America, the USA and Australia showing their potential destructive power [1-3]. In Portugal, wildfire is one of the most impactful hazards, with the extreme events of 2017 causing the most devastating consequences ever recorded, including the loss of over 100 human lives [4,5]. Especially in the inner part of the territory, the combination of abundant flammable forest and shrub-dominated land cover, the warm and dry summers typical of Mediterranean-type climates, and the irregular topography creates a particularly challenging fire-prone landscape [6-9]. Historical data also show that, between 1980 and 2018, Portugal had the highest average number of annual wildfires and the second largest annual burnt area among the top affected countries of southern Europe (Portugal, Spain, France, Italy and Greece), despite having the smallest territory [5]. Most of the damage occurs in the summer months as the consequence of a relatively small number of large fires [9-12].
Like other natural hazards, wildfires can be approached from a disaster risk reduction (DRR) perspective [13-15]. The DRR approach conceives risk as a multidimensional phenomenon that not only includes the characteristics of natural hazards and of the environment in which they occur, but also the degree to which populations, infrastructure and livelihoods are exposed to these hazards, as well as their level of vulnerability to their destructive and disruptive effects [15]. This integrated perspective enables organizations of local to global scope to act upon specific dimensions of risk.

In Portugal, Parente and Pereira quantified wildfire risk at the national scale, considering only damage to vegetation [30]. Using raster data, wildfire hazard was estimated as the combination of wildfire probability (quantified for each pixel as the percentage of years from the study period in which that pixel burned) and terrain susceptibility (defined as the propensity of the terrain to be burned as a function of its inherent properties, such as land cover or slope). The potential damage (corresponding to the dimensions of exposure and vulnerability) was quantified using the economic value per hectare of existing vegetation types and their expected degree of loss in case of burning. Antunes et al. [44] used a similar approach to calculate wildfire risk for a single municipality in central-north Portugal, additionally assessing risk with a focus on scenically valuable landscape units. More recently, Oliveira et al. [28] assessed wildfire risk specifically for human settlements (villages) within a civil parish in central Portugal, combining burn probability scenarios with exposure and vulnerability levels. The latter was based on a cluster analysis of the social characteristics of the resident population; in addition, coping capacity factors were also integrated, namely the time required to reach a potential fire shelter and the distance of each village to the nearest fire station.
In this work, we employ a new, detailed parish-scale approach to characterize a regional-sized study area in central mainland Portugal with respect to the three dimensions of wildfire risk: hazard, exposure, and vulnerability, the latter considered in its social dimension. We then combine the three individual dimensions into an integrated wildfire risk index, based on an adaptation of the INFORM framework [39]. This adaptation was recently applied with success by Santos et al. [45] and Pereira et al. [46], albeit to other hazards (floods and landslides, respectively), and was chosen due to its simple structure and its versatility, being applicable with varying degrees of complexity depending on the availability of data regarding each of the dimensions of risk. Cluster analysis is subsequently used to aggregate the 972 parishes into groups sharing similar wildfire risk dimensions, allowing for a nuanced perspective over the study area. Finally, we discuss the limitations of the index, as well as its potentialities in a risk management context. Our objectives are thus threefold: (1) to characterize the parishes in the study area in terms of wildfire hazard, exposure, and social vulnerability; (2) to quantify wildfire risk within the study area by means of an integrated index; (3) to identify wildfire risk profiles within the study region, by investigating the combination patterns of the components of wildfire risk among the different parishes.
Study Area
The study area was the NUTS 2 region "Centro", which covers a total area of 28,199 km² in central-north mainland Portugal (Figure 1). It comprises 100 municipalities, further subdivided into 972 civil parishes, which were the units of analysis adopted in this study and correspond to the smallest administrative unit in the country. The parishes vary in area from 2.0 km² to 373.5 km². Elevation ranges from sea level to the highest point in mainland Portugal, at 1993 m in the Estrela mountain in the east (Figure 2A), with landforms varying from coastal plains in the west to mountain ranges, and further to plateaus at different elevation levels in the east. Land cover also presents much variability (Figure 2B). It is dominated by different forest types, mainly eucalyptus (Eucalyptus globulus) and maritime pine (Pinus pinaster), concentrated in a N-S swath across the centre of the study area and along a narrow coastal fringe. Elsewhere, forests occur interspersed with agroforestry, with the latter dominating at the SW and SE limits. In the NE, agroforestry occurs interspersed mostly with oak forests and shrubland. The SW-NE-oriented Central Mountain Range (Cordilheira Central) is marked by large patches of scrub and unvegetated or sparsely vegetated terrain. Annual rainfall ranges from a minimum of 600 mm in the extreme NE of the study area up to 2800 mm in the highest areas of the Central Mountain Range [47]. Regarding wildfire incidence, the spatial pattern of burnt areas shows a remarkable correspondence with land cover, with burnt areas corresponding broadly to the distribution of forest and scrub; Figure 2C shows the number of times burnt between 1975 and 2018, calculated with reference to a 25 m pixel. Nonetheless, a spatial distinction is evident, with a large part of the central sector having burned 1 to 4 times, but with patches that have burned a greater number of times (up to a maximum of 15) occurring in a dispersed pattern in mountainous areas of the N and NE sectors. This high recurrence is mostly related to the use of fire for pasture renovation [11]. In contrast, the central-south sector of the study area is characterized by less frequent, but much more extensive wildfires, promoted by continuous forest patches interspersed with scrub patches corresponding to different stages of post-fire succession (Figure 2B) [11]. Figure 2D shows the cumulative percentage of area burnt by parish for the period 1975-2018, further illustrating the variability in wildfire patterns within the study area. Most of the central and NE sectors of the study area are dominated by parishes in which more than 97% of the area has burned during this 44-year period, with values above 200% being frequent. In the most extreme cases, the whole area of the parish burned between three and nearly five times during the considered period. In contrast, all the coastal region, as well as the S and SE limits, are dominated by parishes in the lower classes (less than 50% burned).
Population density also shows a marked variation within the study area, decreasing generally away from the coast. Its values are below 100 inhabitants/km² in most of the study area, reaching values over 500 inhabitants/km² only in and around the larger urban centres such as Leiria, Coimbra, Aveiro, Viseu or Guarda [47] (Figure 1).
Methodology
The general structure of the methodology used to calculate the wildfire risk index (WRI) is shown in Figure 3. It followed the adaptation of the INFORM framework [39] recently applied to floods and landslides [45,46]. The three dimensions of risk and their integration are described in the following sections. A 25 m pixel was adopted for all spatial data, and ArcGIS 10.7.1 software (ESRI Inc., Redlands, CA, USA) was used for all spatial analysis operations.
Wildfire Hazard
Wildfire hazard was calculated using the methodology described by Oliveira et al. [21] (summarized in Figure 4). According to the underlying conceptual framework, wildfire hazard is calculated as the product of susceptibility (the terrain's propensity to suffer a wildfire or to support its spreading as dictated by its intrinsic characteristics such as elevation, slope and vegetation cover) by wildfire probability (the unconditioned probability that a given spatial unit will burn on any given year). This methodology has been previously adopted in wildfire studies [20,30,48,49] and is officially used by the Portuguese state agency for the conservation of nature and forests (ICNF) for producing yearly wildfire hazard maps for mainland Portugal [50].
For each pixel, susceptibility values are the result of the sum of the likelihood ratios (LR) associated with the variables elevation (in m), slope angle (in degrees) and land cover, obtained by cross-tabulating each of these classified variables with past burnt areas. Aspect was not considered, as this variable does not have a clear spatial relationship with burned area in mainland Portugal and has been shown not to increase the predictive capacity of wildfire hazard models [21].
Burnt area data were obtained from the Portuguese Institute for Conservation of Nature and Forests (ICNF). Topographic data were obtained from the European Environmental Agency's Digital Surface Model, with a 25 m pixel (https://www.eea.europa.eu/data-and-maps/data/copernicus-land-monitoring-service-eu-dem; accessed on 1 March 2021). Land-cover data were obtained from the Portuguese General-Directorate of the Territory (Direção-Geral do Território).

For each class i of each variable, the LR score Lr_i is calculated as [51]:

Lr_i = (S_i / N_i) / (S / N)    (1)

where S_i is the number of burnt terrain units (pixels) corresponding to class i of variable Y, S is the total number of burnt terrain units, N_i is the number of terrain units associated with class i of variable Y, and N is the total number of terrain units. For a total of n predisposing variables, the total LR score of each terrain unit (Lr_j) is calculated as:

Lr_j = Σ X_ij × Lr_i (summed over the n variables)    (2)

where X_ij equals 1 for the classes of the variables that are present and 0 for all others.

Yearly burnt areas between 1975 and 2018 were used to derive LR scores for elevation and slope angle. As land cover mapping is only available since 1995, with maps existing for 1995, 2007, 2010, and 2015, LR scores were calculated for each class considering the specific timeframe represented by each land-cover map. Likelihood ratio scores were, therefore, calculated for the 1995 map using annual burnt areas for the years 1995-2006 (12 years), for the 2007 map using annual burnt areas for the years 2007-2009 (3 years), for the 2010 map using annual burnt areas for the years 2010-2014 (5 years), and for the 2015 map using burnt areas for the years 2015-2018 (4 years). The final LR score for each land cover class was calculated as the weighted average of the scores across the successive land-cover maps, with the number of years covered by each map used as weight.
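To make the scoring concrete, the following minimal sketch applies Equations (1) and (2) to toy rasters; the function and variable names are illustrative, not taken from the original study:

```python
import numpy as np

def lr_scores(classified: np.ndarray, burnt: np.ndarray) -> dict:
    """Equation (1): Lr_i = (S_i / N_i) / (S / N) for each class i."""
    S, N = burnt.sum(), classified.size
    scores = {}
    for i in np.unique(classified):
        mask = classified == i
        scores[int(i)] = (burnt[mask].sum() / mask.sum()) / (S / N)
    return scores

def susceptibility(variables: list, burnt: np.ndarray) -> np.ndarray:
    """Equation (2): per-pixel sum of the Lr of the class each pixel
    falls in, over all predisposing variables."""
    total = np.zeros(burnt.shape, dtype=float)
    for var in variables:
        for i, lr in lr_scores(var, burnt).items():
            total[var == i] += lr
    return total

# Toy example: a 4x4 "land cover" raster and a burnt mask.
landcover = np.array([[1, 1, 2, 2]] * 4)
burnt = np.zeros((4, 4), dtype=bool)
burnt[:, 2:] = True
print(lr_scores(landcover, burnt))   # {1: 0.0, 2: 2.0}
```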
Wildfire hazard was obtained by multiplying the susceptibility score of each pixel by its probability of burning in any given year, obtained as the ratio between the number of times that a given pixel burnt (between 1975 and 2018) and the total number of years within this period (44 years). The resulting map was classified into five classes (very low; low; medium; high; very high), as required by the Portuguese Forest Authority, according to the breaks of the success-rate curve and the predictive capacity of the hazard model [21].
Finally, wildfire hazard was quantified for each of the 972 parishes as the percentage of the parish area classified in the two highest hazard classes (Figure 5).
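The last two steps, multiplication by annual burn probability and per-parish aggregation, could be chained as in the sketch below; `parish_ids` is a hypothetical raster of parish codes aligned with the hazard grid:

```python
import numpy as np

def hazard(susceptibility: np.ndarray, times_burnt: np.ndarray,
           n_years: int = 44) -> np.ndarray:
    """Hazard = susceptibility x annual burn probability, the latter
    estimated as times burnt (1975-2018) divided by 44 years."""
    return susceptibility * (times_burnt / n_years)

def parish_hazard_pct(hazard_class: np.ndarray, parish_ids: np.ndarray) -> dict:
    """Percentage of each parish's area in the two highest of the five
    hazard classes (here coded 4 = high, 5 = very high)."""
    return {int(p): 100.0 * (hazard_class[parish_ids == p] >= 4).mean()
            for p in np.unique(parish_ids)}
```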
Exposure
We approached this dimension of wildfire risk in terms of two complementary subcomponents. The first was the existing number of inhabitants and residential buildings in each parish, and thus exposed to potential damage. The second was related to the spatial pattern of human occupation and expresses the degree to which the inhabitants and buildings in each parish are located outside the boundaries of the consolidated urban area (or the central area of villages and towns). The underlying assumption is that the degree of exposure of a parish increases with the increasing spatial dispersion of buildings and people within the parish, as this spatial pattern reflects a stronger intermix between urban features and forest/natural areas. Urban areas were defined by extracting all areas classified as artificialized from the Portuguese government's 2018 Land Cover Map (Carta de Ocupação do Solo, produced by the Directorate-General of the Territory), with the exception of roads. Individual residential buildings were obtained in the form of a point dataset from the Geographical Database of Buildings (Base Geográfica de Edifícios), produced by Statistics Portugal (Instituto Nacional de Estatística, 2011).
Residents in each building were estimated using the approach employed by Garcia et al. [52]. Knowing the number of lodgings within each building (included in the Geographical Database of Buildings) and the number of residents within each statistical subsection (the smallest spatial statistical unit for which data are available; obtained from Statistics Portugal, 2011), we estimated the average number of residents within the lodgings of the buildings in each statistical subsection. As an example, if a statistical subsection has a total of 100 residents and 20 lodgings, each lodging will have on average 5 residents. If a building within that statistical subsection has 3 lodgings, this building will be estimated to have 15 residents.
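This proportional allocation reduces to a few lines of code. The sketch below reproduces the worked example above; the column names are illustrative, not those of the Portuguese datasets:

```python
import pandas as pd

# One row per building (with its statistical subsection and lodging count)
# and one row per subsection (with its resident count).
buildings = pd.DataFrame({"subsection": ["A", "A", "A"],
                          "lodgings": [3, 7, 10]})
subsections = pd.DataFrame({"subsection": ["A"], "residents": [100]})

# Average residents per lodging in each subsection: 100 / 20 = 5 here.
per_lodging = (subsections.set_index("subsection")["residents"]
               / buildings.groupby("subsection")["lodgings"].sum())

# Estimated residents per building, e.g. 3 lodgings -> 15 residents.
buildings["est_residents"] = (buildings["lodgings"]
                              * buildings["subsection"].map(per_lodging))
print(buildings)
```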
Using this approach, a total of ten variables were calculated for each of the 972 parishes as potential descriptors of the two components of exposure identified above: the total number of residents, the number of residents within urban areas, the number of residents outside of urban areas, the difference between the numbers of urban and non-urban residents, the percentage of total residents outside of urban areas, and the same five variables calculated for buildings instead of residents. A Pearson correlation analysis was then performed to understand the relations between all variables (Table 1). The values confirmed the two components of exposure and their mutual independence: the quantity of residents and buildings exposed (values highlighted in light grey in Table 1) and their degree of dispersion outside the urban area (highlighted in dark grey). Based on the collinearity between variables, two final variables were selected to express exposure for each parish: the total number of residents (Figure 6A) and the percentage of the total residents outside of urban areas (Figure 6B).

Table 1. Pearson correlation coefficients between the potential exposure descriptors. Values ≥ 0.7 highlighted in grey. The two grey tones are intended to differentiate groups of collinear variables, considered to express different dimensions of exposure. TotBui = total number of buildings; UBui = number of urban buildings; NUBui = number of non-urban buildings; DiffBui = difference between urban and non-urban buildings; PerNUBui = percentage of total buildings that are non-urban; TotRes = total number of residents; URes = urban residents; NURes = non-urban residents; DiffRes = difference between urban and non-urban residents; PerNURes = percentage of total residents that are non-urban. ** Significant at the p = 0.01 level.
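The collinearity screen itself is a one-step operation once the descriptors are assembled; a minimal sketch with synthetic data for three of the ten descriptors, using the |r| ≥ 0.7 threshold of Table 1:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 972                                        # number of parishes
tot_res = rng.lognormal(7, 1, n)               # total residents (synthetic)
nu_res = tot_res * rng.uniform(0.2, 0.9, n)    # non-urban residents
df = pd.DataFrame({"TotRes": tot_res, "NURes": nu_res,
                   "PerNURes": 100 * nu_res / tot_res})

corr = df.corr(method="pearson")               # Pearson r matrix
collinear = (corr.abs() >= 0.7) & (corr.abs() < 1.0)
print(corr.round(2))
print(collinear)                               # flags collinear pairs
```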
To combine both variables into one, each was normalized to the 0-1 scale using the min-max technique. Following this technique, each value x of a variable j, with minimum value Min_j and maximum value Max_j, is re-scaled into x_res using the formulation:

x_res = (x - Min_j) / (Max_j - Min_j)    (3)

After both variables had been re-scaled, the mean of the two values was calculated for each parish.
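In code, Equation (3) and the subsequent averaging amount to the following (a minimal sketch; the input arrays hold one value per parish):

```python
import numpy as np

def min_max(x: np.ndarray) -> np.ndarray:
    """Equation (3): x_res = (x - Min_j) / (Max_j - Min_j)."""
    return (x - x.min()) / (x.max() - x.min())

def exposure(total_residents: np.ndarray,
             pct_non_urban: np.ndarray) -> np.ndarray:
    """Parish exposure: mean of the two re-scaled descriptors."""
    return (min_max(total_residents) + min_max(pct_non_urban)) / 2.0
```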
Social Vulnerability
We adopted the social vulnerability methodology originally proposed by Mendes et al. [53] and further developed and applied at different spatial scales by Tavares et al. [54] and Mendes et al. [33]. This approach defines social vulnerability in terms of two dimensions: criticality and support capability. Criticality expresses individual characteristics that are related to vulnerability and to the potential for recovery (for example age, employment, housing conditions, and mobility). Support capability describes the collective equipment and infrastructure (whether public or private) held by a particular territory that contribute to the contingency of activities, the collective and individual recovery and rehabilitation, and the consequential decrease in the impact caused by a disastrous event [45].
Principal Component Analysis (PCA) was employed for the quantification of both dimensions. This technique has often been applied to social vulnerability with the purpose of reducing a relatively large set of potentially influencing variables into a smaller set of underlying dimensions [32,[54][55][56].
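As an indication of how such an extraction can be set up with open-source tools, the sketch below standardizes a parish-by-variable matrix and retains six components, as was done for criticality; the data here are random placeholders:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(972, 20))           # stand-in for the criticality variables

Z = StandardScaler().fit_transform(X)    # standardize prior to PCA
pca = PCA(n_components=6).fit(Z)         # six PCs retained for criticality
scores = pca.transform(Z)                # 972 x 6 matrix of parish scores
weights = pca.explained_variance_ratio_  # share of variance explained per PC
```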
Criticality was defined using an initial set of 25 variables describing social and demographic characteristics of the population and properties of the built environment (Table 2; the conceptual justification is shown in Table A1). All were obtained from the most recent national census (2011) at the scale of the individual parish, and all values were standardized prior to use in the analysis. PCA allowed the extraction of 6 principal components (PC) from 20 variables out of the initial dataset, with a KMO (Kaiser-Meyer-Olkin) value of 0.874 and explaining 73% of the total variance. Support capability was defined from an initial dataset of 14 variables, obtained from different sources and varying in spatial scale from the parish to the municipal level (Table 3; the conceptual justification is shown in Table A2). All were standardized prior to use. Of the initial 14-variable dataset, 11 variables were used to extract 4 PC, with a KMO of 0.773, explaining 67% of the total variance.

For each of the two dimensions of social vulnerability, the extracted PC were interpreted, and any necessary changes were made to the cardinality of the PC scores. Each parish's criticality was quantified as the sum of its scores in each criticality PC, weighted by its proportion of explained variance. Similarly, each parish's support capability was defined as the sum of its scores in each of the four PC describing this dimension, weighted by the proportion of explained variance of each. Finally, social vulnerability (SV) was calculated for each parish by integrating its values of criticality (CR) and support capability (SC) using formulation (4) [45,54]. This formulation ensures that high values of criticality and low values of support capability will result in increased social vulnerability (higher value):

SV = CR × (1 - SC)    (4)

The spatial distribution of the resulting values is shown in Figure 8.
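The two integration steps, the variance-weighted summation of PC scores and Equation (4), can be sketched as follows (the function names are ours, not from the original study):

```python
import numpy as np

def weighted_index(scores: np.ndarray, expl_var: np.ndarray) -> np.ndarray:
    """Sum of each parish's PC scores, weighted by the proportion of
    variance explained by each PC (used for both CR and SC)."""
    return scores @ expl_var

def social_vulnerability(cr: np.ndarray, sc: np.ndarray) -> np.ndarray:
    """Equation (4): SV = CR * (1 - SC); high criticality and low
    support capability both raise social vulnerability."""
    return cr * (1.0 - sc)
```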
Wildfire Risk Index
Following the adaptation of the INFORM methodology [39] employed in [45,46], the three components of wildfire risk were re-scaled to values between 0 and 1 using the min-max technique (Equation (3)). It should be noted that both sub-components of exposure had already been individually re-scaled (see Section 3.2) prior to their combination by averaging. As the final values varied only between 0.005 and 0.507, they were re-scaled again so as to have a range of variation equal to that of the hazard and social vulnerability values.
Finally, the three components hazard (H), exposure (E) and social vulnerability (SV) were combined into the final wildfire risk index (WRI) using the formulation [39,45]:

WRI = H^(1/3) × E^(1/3) × SV^(1/3)    (5)

In practice, the result corresponds to the geometric average of the three dimensions, with equal weights [39]. As Pereira et al. [46] noted, the exponentiation of the factors to 1/3 highlights the differences among parishes, especially the ones with lower scores, while keeping their ranking/hierarchy.
It is worth noting that, given the multiplicative structure of the formula, any null value in any of the three components would result in a parish having null wildfire risk. To avoid such an outcome, all null values in each driver were converted to a positive, albeit insignificant value. To do so, the smallest positive value among all three components was determined (0.00005 for wildfire hazard). Then, each null value in each component was replaced by a value lower than the lowest positive value in that dimension by a unit of this same order of magnitude (0.00001).
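Both the null-value guard and Equation (5) are straightforward to express in code; a minimal sketch, with the 0.00001 offset following the procedure described above:

```python
import numpy as np

def guard_zeros(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Replace null values with a value just below the smallest positive
    value in the same dimension, so no parish ends up with a null WRI."""
    out = x.astype(float)
    out[out == 0] = out[out > 0].min() - eps
    return out

def wri(h: np.ndarray, e: np.ndarray, sv: np.ndarray) -> np.ndarray:
    """Equation (5): geometric mean of the three re-scaled dimensions."""
    return h ** (1 / 3) * e ** (1 / 3) * sv ** (1 / 3)

# Example: risk = wri(guard_zeros(h), guard_zeros(e), guard_zeros(sv))
```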
Cluster Analysis
Hierarchical cluster analysis was performed on the 972 parishes with the purpose of aggregating them into homogeneous groups regarding the three main risk components of hazard, exposure, and social vulnerability. This approach was previously applied to landslide and flood hazards [45,46]. Clustering was performed using SPSS (IBM Corp., Armonk, NY, USA) and following Ward's method, with squared Euclidian distance as the measure of distance between cluster centres. This clustering method consists of a bottom-up approach, in which the criterion for selecting the pair of clusters to merge at each step is based on the minimum increase in the total within-cluster variance. The range of solutions tested varied from 2 to 10 clusters. The optimal number of clusters was evaluated through Schwarz's Bayesian Criterion (BIC), the Akaike Information Criterion (AIC) and expert judgment. The BIC suggested 2 clusters and the AIC 4 clusters. The 2-cluster solution was considered not to describe adequately the diversity of combinations between the three risk components within the study area. The 4-cluster solution enabled a better representation of the wildfire risk profiles. However, it grouped too broadly all the parishes with high hazard in the same cluster (356 cases out of 972 in cluster 3), instead of differentiating among them. A 5-cluster solution was therefore chosen, allowing for a more nuanced perspective over the variability of wildfire risk patterns in the study area.
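A close open-source analogue of this procedure is SciPy's Ward linkage; the sketch below uses synthetic values in place of the three risk components:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.uniform(size=(972, 3))        # hazard, exposure, social vulnerability

# Ward's method: each merge minimizes the increase in total
# within-cluster variance, computed from Euclidean distances.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")   # cut at five clusters
print(np.bincount(labels)[1:])        # parishes per cluster
```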
The Dimensions of Wildfire Risk
Parish-aggregated wildfire hazard, representing the percentage of parish area with high and very high hazard levels, varied between 0 (23 parishes) and 100% (2 parishes), with a mean value of 47.62%. The frequency distribution of the values (Figure 9) shows a remarkable contrast with the other dimensions of the wildfire risk index, with hazard values strongly deviating from a normal distribution (kurtosis = −1.34). The relative abundance of parishes with very low and very high hazard can be seen across the study area in Figure 10A. The highest values occur within the central, N, and NE sectors of the study area. The central sector is homogeneously characterized by very high values, whereas the N and NE sectors show more variability. In contrast, the western and SW portions of the study area, as well as its extreme southern limit, are characterized by mostly low wildfire hazard values. A few parishes along the seacoast show high wildfire hazard, in contrast with the predominantly low levels that dominate that sector of the study area. As expected, the spatial distribution of wildfire hazard follows closely that of its input factors, namely elevation (and its derivative, slope) (Figure 2A), land cover (shown for 2018 in Figure 2B) and wildfire probability (proportional to the wildfire recurrence map in Figure 2C).
Exposure values vary between 0.005 and 0.507, with a mean value of 0.132. The frequency distribution is positively skewed (skewness = 1.63) and strongly leptokurtic (k = 4.42) (Figure 9). Correspondingly, the spatial distribution of this variable is marked by a predominance of relatively low values (Figure 10B), with the parishes with the highest values forming a narrow arch along the southern sector of the study area. In contrast, the eastern limit of the study area is marked by homogeneously low values. The maps of the two input variables behind exposure show contrasting patterns. The total number of residents per parish is highest along the coast, diminishing inland, whereas the percentage of resident population living outside of urban areas shows a concentration of high values, elongated from W to E in the southern sector of the study area (Figure 6B). The centre-north and northeast sectors have a heterogeneous combination of parishes with high and low exposure levels, whereas the parishes along the seacoast show homogeneously low values. The averaging of these two contrasting maps generates a final exposure map showing predominantly low values and little spatial contrast (Figure 10B).

Regarding the two dimensions of social vulnerability, the results of the principal component analysis allowed the driving components of criticality (Table 4) to be interpreted, by order of importance, in the following manner: PC1, social and demographic dynamics (31.18% of all variance explained); PC2, professional qualification and urban/rural contrasts (11.81%); PC3, uprooting and long-term mobility (8.41%); PC4, conditions of the built environment (8.15%); PC5, habitational conditions (7.02%); and PC6, daily commuting (6.83%). The final criticality values varied between 0.007 and 0.953, with a mean of 0.463. The distribution is slightly positively skewed (skewness 0.164) with nearly null kurtosis (k = 0.08). The corresponding criticality map (Figure 7A) shows a clear spatial tendency, with values generally increasing from the seacoast to the interior, and the highest values occurring in the easternmost limit of the study area, as well as in a small cluster of parishes in its centre-south. This distribution implies a general increase in the individual and household-level potential for loss and a decrease in the potential for recovery as one progresses from the seacoast inland, resulting from characteristics such as an ageing (AGE) and less educated (ILLIT) population, and a larger proportion of elderly people living alone (SING65). Likewise, characteristics such as a larger proportion of smaller residential buildings (FLOORS) and single-accommodation buildings (SINGACCO), or a larger proportion of seasonally used homes (SEASON), contribute to more isolated households and make mutual aid more difficult (Table 4).

With respect to support capability, four principal components were extracted, which can be interpreted in terms of their effect on the capacity to resist and recover from disaster, by decreasing order of importance (Table 5): PC1, economy and emergency resources (30.90% of variance explained); PC2, existing infrastructure (14.42%); PC3, quality of the habitational setting I (10.64%); and PC4, quality of the habitational setting II and accessibilities (10.62%). The final values varied between 0.084 and 0.959, with a mean of 0.400. The elevated positive skewness (1.41) and kurtosis (2.35) imply that most of the study area is characterized by a relatively low support capability (Figure 7B), with the lowest values occurring in a rather dispersed pattern in the centre-south and SE sectors. In contrast to the prevailing pattern, the parishes along the seacoast show relatively high values, with some well-defined agglomerations of contiguous parishes with similar values. It should be noted that some of the variables used for quantifying support capability were only available at the municipal scale (Table 3), which may account for this pattern. A reflection on the principal components used to quantify this dimension of social vulnerability (Table 5) indicates that it is among the more urbanized (ROOMS, URBWAST, RESOUT, ROAD), economically dynamic (GVA, MEDSALEV, REPDEGR, AGEBUILD) seacoast parishes, with their relative abundance of services and infrastructure (FIREF, NURSES, WHEELCH), that the capacity to reduce the impacts of wildfires and to recover and rehabilitate in their wake is highest.

The integration of criticality and support capability produced a distribution of social vulnerability values varying from 0.0003 to 0.791, with a mean of 0.291 (Figure 8). The distribution is slightly positively skewed (0.275) with a slightly negative kurtosis (−0.133). It was rescaled from 0 to 1, similarly to the other components of wildfire risk. The spatial distribution of the final values (Figure 10C) shows two essential patterns. The coastal and centre-north sectors are characterized by parishes with relatively low social vulnerability, some forming homogeneous clusters. Contrarily, the parishes in the eastern and centre-south sectors are characterized by a predominance of fairly high social vulnerability, with a few dispersed parishes showing the highest values.
Wildfire Risk Index
The final wildfire risk index varied between 0.005 and 0.839, with a mean value of 0.308. The distribution is slightly positively skewed (skewness = 0.18) with a slightly negative kurtosis (k = −0.247) (Figure 9). Although the formulation of the index gives equal consideration to its three components, the final values are more closely linearly correlated with hazard (r = 0.801) than with exposure or social vulnerability (0.408 and 0.671, respectively; all correlations are significant at the 0.01 level). The spatial distribution of the values (Figure 10D) suggests four main spatial patterns. The first characterizes the coastal region and is marked by the dominance of low to very low-risk parishes, with occasional isolated parishes with higher values in the middle range. The second pattern occurs in the centre-south of the study area and is marked by a homogeneous concentration of medium to very high-risk parishes. A third pattern can be associated with the centre-north and NE of the study area, where parishes with contrasting levels of wildfire risk are distributed in a heterogeneous pattern. Finally, a fourth spatial pattern can be defined in the southeast, the extreme centre-south, and the narrow NE limit of the study area, where there is a relatively homogeneous predominance of medium-to-low risk parishes.
Risk Profiles and Cluster Analysis
Five clusters of parishes with similar characteristics were identified (Figure 11A). The relations between the clusters and the risk components (Figure 11B) show contrasting degrees of component variability throughout the study area, with wildfire hazard values varying much more than exposure or social vulnerability, which fluctuate to a similar degree. The figure also shows that the relative importance of the components of wildfire risk varies among the five defined clusters, with hazard dominating in clusters 2, 3 and 4, but not in cluster 1 (where exposure and social vulnerability are more relevant) nor in cluster 5 (where social vulnerability is more important).
Cluster 1 includes 306 parishes, and it is the largest, closely followed in size by cluster 3. It is characterized by the lowest average wildfire hazard and social vulnerability values ( Figure 11B), having an intermediate level of exposure in comparison with the other clusters. It is spatially distributed on a homogeneous N-S swath along the coastline ( Figure 11A), also including a few isolated groups of parishes throughout the study area, namely in the centre-north, the extreme south, and the SE.
Cluster 2 includes 147 parishes. It presents levels of exposure and social vulnerability similar to those of cluster 1, but with a higher wildfire hazard level ( Figure 11B). Spatially, it occurs mostly in association to cluster 1 in a broad N-S swath along the coastline, also appearing as a group of parishes in the centre-north (again in association with cluster 1) and more isolated in the NE and SE of the study area ( Figure 11A).
Cluster 3 includes 303 parishes, the second largest group. It is characterized by relatively low exposure levels, only surpassed by cluster 5 which has the lowest exposure; it includes intermediate levels of social vulnerability, and relatively high wildfire hazard levels (near to the highest values found in cluster 4) ( Figure 11B). Spatially, cluster 3 dominates the centre and NE sectors of the study area, also including a few isolated parishes within the N-S coastal swath dominated by clusters 1 and 2 ( Figure 11A). Cluster 4 is the smallest, including only 53 parishes. It stands out by having the highest values in all the three components of wildfire risk ( Figure 11B). Spatially, it occurs mostly as a homogeneous concentration of parishes in the centre-south of the study area, also appearing in isolated manner in the S, extreme N, and NE ( Figure 11A).
Finally, cluster 5 includes 163 parishes. It is characterized by relatively low hazard levels (the second lowest, following cluster 1), the lowest exposure levels, and relatively high social vulnerability levels (the second highest, after cluster 4) ( Figure 11B). Spatially, it dominates the SE of the study area, with minor concentrations of parishes in the NE and the centre-south, and isolated parishes occurring throughout the study area ( Figure 11A).
Regarding the percentage of parishes by main wildfire risk dimension, hazard dominates in 51.7% of all parishes (clusters 2, 3 and 4), with exposure together with social vulnerability dominating in 31.5% (cluster 1), and social vulnerability alone dominating in only 16.8% (cluster 5).

Figure 11. (A) Division of the parishes in the study area into five clusters based on hazard, exposure, and social vulnerability levels; (B) distribution of hazard, exposure, and social vulnerability levels among the five clusters. Circles identify potential outliers, defined as situated between 1.5× and 3× the interquartile range below the 1st quartile or above the 3rd quartile. Asterisks identify potential extreme outliers, exceeding 3 times the interquartile range below the 1st or above the 3rd quartile.
Discussion
The proposed risk index (Figure 10D) allows for a general and integrative perspective on the spatial patterns and variations of wildfire risk throughout the study area. This perspective is invaluable in a context of regional-level to country-level spatial planning and risk management. However, the applicability and value of this index can only be fully grasped in relation to its hierarchical structure in three increasing levels of detail and specificity: the final integrated level, the level of the individual dimensions of wildfire risk (hazard/exposure/social vulnerability), and the level of their individual sub-components (in the cases of exposure and social vulnerability) (Figure 3). Organizations and individuals responsible for risk management at municipal and sub-municipal scales can implement measures adjusted to the dimensions that influence wildfire risk levels in their areas, avoiding untailored, generalist and less efficient approaches. For instance, a risk manager in a municipal administration can allocate financial and human resources to early detection and suppression of wildfires in hazard-dominated parishes within the municipality (such as those in clusters 2, 3 and 4; Figure 11A), while privileging measures such as the promotion of neighbour support networks or rapid evacuation capabilities in social vulnerability-dominated parishes (such as those in cluster 5). Prior studies have shown the importance of identifying priority measures in exposed areas, regarding fuel and fire management options [28,57], as well as of engaging in proactive and collaborative management to prevent wildfire losses [58]. Other studies have shown that people's characteristics and social context are paramount to understanding their perception of wildfires, and how it influences their relations with fire occurrence and their ability to apply protective measures [28,59]. Moreover, social context and local conditions are crucial to defining suitable mitigation and adaptation strategies to increase communities' safety and resilience [60,61].
Furthermore, in the case of parishes where the main driving dimensions result from the combination of more than one component (exposure and vulnerability), risk managers can resort to the third level of detail, that of the sub-components, to support their decision-making. For example, it is to be expected that similar social vulnerability values among parishes will result in some cases from a particularly high criticality, and in others from a particularly low support capability. A consideration at this third and most detailed level would allow risk managers to focus their policies and measures on the more relevant constituents of the more relevant dimension of risk within each parish.
Regarding spatial scale, the use of the individual parish as the unit of analysis allowed for a high level of detail in representing wildfire risk and its dimensions, which can be either directly used or adapted to any level of territorial management. At the municipal level, the results allow risk managers to differentiate parishes within a given municipality, thus informing their planning decisions. At higher levels of spatial planning (e.g., region, association of municipalities), results can be adjusted to a municipal scale of representation, for example by using area-weighted averages of the values of the parishes within each municipality. Our choice of the parish as the spatial unit of analysis is in accordance with the considerations put forward by the authors of the INFORM index [39], which indicate that the index can be applied at any spatial scale for which information is available. In our case, spatial data were available at a 25 m pixel resolution, and most of the statistical data were available at the parish level. The exceptions were nine of the variables used as input for quantifying support capability, which were only available at the municipal scale (Table 3). Nevertheless, the index can be applied to spatial units of any level: municipal, regional, or national (for multi-country assessments).
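As a worked illustration of the area-weighted aggregation mentioned above, the following minimal sketch (function and variable names are illustrative, not taken from the study) averages parish-level index values up to the municipal scale.

```python
import numpy as np

def municipal_average(parish_values, parish_areas):
    """Area-weighted average of parish-level index values within one
    municipality, as suggested in the text for coarser scales of analysis."""
    values = np.asarray(parish_values, dtype=float)
    areas = np.asarray(parish_areas, dtype=float)
    return float(np.sum(values * areas) / np.sum(areas))

# Example: three parishes with risk values 0.2, 0.5, 0.8 and areas 10, 30, 60 km^2
print(municipal_average([0.2, 0.5, 0.8], [10, 30, 60]))  # -> 0.65
```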
A similar consideration can be made regarding temporal scale. Although we employed a structural approach to assess wildfire hazard by using wildfire factors that change only on a multi-year scale [21], the applied wildfire risk methodology could be focused on summer-specific wildfire risk, if summer-specific wildfire hazard data were available. In this respect, this index could be combined with a seasonal approach to wildfire hazard such as that recently proposed by [20].
In parallel to the potential advantages of using this wildfire risk index, some limitations of our approach also need to be considered. The dimension of exposure was expressed using only two variables: the total number of residents per parish, and the percentage of the resident population outside of urban areas by parish, due to the high collinearity with other variables focused on residential buildings. In practice, the first variable quantifies both the number of people and the residential infrastructures exposed to wildfires, whereas the second expresses the degree of isolation that these elements are subject to in each parish, adding to their level of exposure. Given the variety of elements that can be at risk within the territory besides people and buildings, our approach excluded elements such as non-residential structures (e.g., cultural, industrial, collective equipment such as hospitals) and economic activities, as well as agricultural lands, forest areas and ecosystems. It also excluded the temporary residents or seasonal visitors that are present only during summertime, when wildfires are more frequent. Future work should be dedicated to diversifying the elements represented within the exposure dimension. A good example in this respect is the HANZE exposure database [62], which included land cover classes and the estimation of their economic value. Further examples are the works of Salis et al. [25] and Thompson et al. [26], in which diverse types of exposed elements (e.g., wildland-urban interfaces and vineyards) were considered.

Like exposure, the dimension of vulnerability should also be made more comprehensive in future work. Our approach was focused solely on its social component, and therefore on the characteristics of the residents. Features such as the expected level of destruction of physical structures [63] and land use parcels (e.g., forests), as well as their estimated economic value [30,44], would make the quantification of vulnerability more realistic. Such changes would require the consideration of expected wildfire severity in the methodology, as well as reliable and accessible estimations of the economic value and potential recovery costs of the different elements, which may vary depending on the country or region. Risk is inherently multidimensional, and any application of a risk index will be more effective the more detailed and exhaustive the available data are for each of its dimensions and sub-dimensions.
Conclusions
A comprehensive wildfire risk index was proposed and applied to a region in central Portugal. The index was complemented with the division of the 972 parishes studied, by means of cluster analysis, into groups characterized by similar relations between the three wildfire risk dimensions: hazard, exposure, and social vulnerability.
The hierarchical structure of the index, which is based on the INFORM framework, allows approaching wildfire risk management at different levels. At the most generalized level, the final index values allow for a general perspective on the distribution of wildfire risk throughout the study area. Results suggest four distinct spatial patterns, with the highest risk parishes being evidently concentrated in the centre-south of the study area, where mitigation measures should be applied first. At the level of the three dimensions of risk, results can inform the decisions of wildfire risk managers, allowing them to more efficiently allocate resources to the dimension (or dimensions) that are more relevant in each parish. In this respect, the five defined clusters illustrate different risk profiles, with three of them being dominated by hazard (although with values of differing magnitude), and the other two being dominated, respectively, by exposure together with social vulnerability, and by social vulnerability only. At the most detailed, sub-dimension level, available only in the cases of exposure and social vulnerability, risk managers can focus their attention on the most relevant factors behind these dimensions, further adjusting policies and measures to the specific reality within each parish.
The proposed index provides an integrated and spatially detailed perspective of wildfire risk that is relevant for disaster risk reduction approaches. It can be easily applied to other study areas, using any spatial unit for which spatial and statistical data are available.
Conflicts of Interest:
The authors declare no conflict of interest.

Table A1. Conceptual justification for the variables adopted for expressing criticality.
Variable(s): Illiteracy rate (%); Proportion of the resident population with university degree (%); School dropout rate (%)
Relation to criticality: Education is linked to socioeconomic status, with higher educational attainment resulting in greater lifetime earnings. Lower education constrains the ability to understand warning information and access to recovery information [29,32,64].

Variable(s): Proportion of socially more valued professionals (%)
Relation to criticality: Socially valued professions are associated with higher income and education level and are likely to be associated with a greater capacity to resist and recover from wildfire events.

Variable(s): Proportion of single-member families constituted by people with 65 or more years of age (%); Mean age of resident population (years); Proportion of the resident population with 14 or less years of age (%)
Relation to criticality: Extremes of the age spectrum affect the movement out of harm's way. Parents lose time and money caring for children when daycare facilities are affected; the elderly may have mobility constraints or mobility concerns, increasing the burden of care and lack of resilience [32,64].

Variable(s): Proportion of lodgings formed by couples with children (%)
Relation to criticality: Families with children will have to allocate time and resources to care for them, which may affect their resilience and capacity to recover from hazards.

Variable(s): Mean commuting time of the working or studying resident population (min); Proportion of the resident population working or studying in another municipality (%); Proportion of seasonally used classic family lodgings (%)
Relation to criticality: The greater the amount of time a resident is absent from home on a regular basis, the less likely he/she is to be able to react quickly in case of wildfire, and the more difficult the recovery will be. This will be especially acute in the case of seasonally used homes.

Variable(s): Proportion of the resident population that resided in another municipality 5 years before (%); Proportion of the resident population of foreign nationality (%)
Relation to criticality: New residents and foreign nationals will be less likely to have established consolidated networks of social connections, and thus be less likely to benefit from help from neighbours and more likely to be unaware of warning information. In the case of foreigners, the language barrier may constrain disaster preparedness and resilience [29], and cultural barriers may affect access to post-disaster relief initiatives.

Variable(s): Female activity rate (%); Female proportion of the population (%)
Relation to criticality: Women can have a more difficult time during recovery than men, often due to sector-specific employment, lower wages, and family care responsibilities [32].

Variable(s): Proportion of self-owned lodgings that include expenses (%)
Relation to criticality: Home expenses can be a major component of the household budget and impact the capacity to invest in resilience prior to a disaster, as well as the capacity to recover from it.

Variable(s): Proportion of family lodgings lacking at least one basic infrastructure (%); Average age of buildings (years); Proportion of buildings built within the previous ten years (%); Proportion of non-classical lodgings (%)
Relation to criticality: The quality of residential construction affects potential losses and recovery [32]. Older buildings, those lacking basic infrastructures, or mobile or improvised habitations are likely to be more vulnerable to the effects of wildfire [29].

Variable(s): Proportion of rented or subleased classic lodgings (%)
Relation to criticality: People that rent do so because they are either transient or do not have the financial resources for home ownership. They often lack access to information about financial aid during recovery. In the most extreme cases, renters lack sufficient shelter options when lodging becomes uninhabitable or too costly to afford [32].

Variable(s): Proportion of single-lodging buildings (%)
Relation to criticality: People in rural areas tend to have limited access to emergency and contingency-related resources, goods and services. Their rehabilitation potential is also reduced compared to urban areas [56].

Variable(s): Proportion of overcrowded lodgings (%)
Relation to criticality: Overcrowding may be associated with financial constraints, also making evacuation more difficult [29,64].

Variable(s): Proportion of the population using automobile for dislocations (%)
Relation to criticality: Residents with access to automobiles will be more mobile, which will facilitate getting out of harm's way [29], as well as the capacity to recover from a wildfire.

Table A2. Conceptual justification for the variables adopted for expressing support capability.
Variable(s): Ageing ratio of buildings (%); Proportion of buildings in need of major reparations or very degraded (%)
Rationale: Infrastructure that is old or degraded will likely be more vulnerable to wildfire damage, while possibly constraining the efficiency of response on the part of authorities. Additionally, the state and age of constructions is an indicator of the economic health of a parish (see the economic indicators below).

Variable(s): Proportion of buildings having wheelchair accessibility (%)
Rationale: An indicator of the capacity of residents with impaired mobility to efficiently evacuate in case of wildfire, either with or without assistance.

Variable(s): Proportion of the resident population living outside of urban centres (%)
Rationale: Population that is dispersed across the parish territory will likely be harder to assist by authorities in the case of disaster. Additionally, rural residents may be more vulnerable due to lower incomes and be more dependent on locally based resource extraction economies [32].

Variable(s): Road network density (km/km²); Firefighter corporations (Nº); Firefighters (Nº)
Rationale: The greater the number of corporations and firefighters, the greater the capacity of authorities to respond in case of wildfire [36]. Road density will promote overall accessibility, and therefore promote the efficiency of this response [36]. High road density will also facilitate evacuation in case of disaster.

Variable(s): Pharmacies and mobile pharmaceutical posts (Nº); Nurses by workplace (Nº)
Rationale: The numbers of nurses and pharmacies are likely indicators of the overall capacity for efficient medical response in case of wildfire, decreasing its impacts and promoting recovery.

Variable(s): Rooms in tourist accommodation establishments (Nº); Urban waste collected by inhabitant (kg); Gross Value Added of enterprises (EUR) (note: does not include the financial sector); Median sale value by m² of family accommodations; ATM machines (Nº)
Rationale: All these variables were adopted as indicators of the overall economic health and vitality of parishes. Wealth enables communities to absorb and recover from losses more quickly due to insurance, social safety nets, and entitlement programs [32,64].
|
2022-10-19T15:17:53.925Z
|
2022-10-14T00:00:00.000
|
{
"year": 2022,
"sha1": "d8aabb6a1846b95d54cfae896f289387dcb39239",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2571-6255/5/5/166/pdf?version=1665739747",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7da8cf531ce2d1e528c88730a3f76aa604733b9b",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
}
|
247940090
|
pes2o/s2orc
|
v3-fos-license
|
Near infrared and optical emission of WASP-5 b
CONTEXT: Thermal emission from extrasolar planets makes it possible to study important physical processes in their atmospheres and derive more precise orbital elements. AIMS: By using new near infrared and optical data, we examine how these data constrain the orbital eccentricity and the thermal properties of the planet atmosphere. METHODS: The full light curves acquired by the TESS satellite from two sectors are used to put an upper limit on the amplitude of the planet's phase variation and estimate the occultation depth. Two already published and one yet unpublished followup observations in the 2MASS K (Ks) band are employed to derive a more precise occultation light curve in this near infrared waveband. RESULTS: The merged occultation light curve in the Ks band comprises 4515 data points. The data confirm the results of the earlier eccentricity estimates, suggesting a circular orbit: e=0.005+/-0.015. The high value of the flux depression of (2.70+/-0.14) ppt in the Ks band excludes simple black body emission at the 10 sigma level and disagrees also with current atmospheric models at the (4-7) sigma level. From the analysis of the TESS data, in the visual band we found tentative evidence for a near noise-level detection of the secondary eclipse, and placed constraints on the associated amplitude of the planet's phase variation. A formal box fit yields an occultation depth of (0.157+/-0.056) ppt. This implies a relatively high geometric albedo of Ag=0.43+/-0.15 for fully efficient atmospheric circulation and Ag=0.29+/-0.15 for no circulation at all. No preference can be seen either for the oxygen-enhanced or for the carbon-enhanced atmosphere models.
Introduction
The year 2005 marks the first direct detection of the light radiated by an extrasolar planet (Charbonneau et al. 2005; Deming et al. 2005). The observations were made by the Spitzer space telescope (Werner et al. 2004) at 4.5, 8 and 24 µm, not easily accessible by ground-based instruments. Although it was quite expected that a similar measurement in the near infrared could also be possible with ground-based 4 m-class telescopes, two years passed until the first tentative observation of that kind (Snellen & Covino 2007). Since then, secondary eclipse (occultation) observations in the 2.2 µm (2MASS K, or Ks) band have remained in the realm of ground-based instruments, due to the lack of space instruments at this wavelength (e.g., Croll 2015; Zhou et al. 2015; Martioli et al. 2019). The 2MASS bands are especially suitable for the observation of hot extrasolar planets, due to the expected peaking of the black-body flux at ∼1−2 µm for temperatures between 1500 and 2000 K, i.e., for the characteristic equilibrium temperatures of extrasolar planets (e.g., Alonso 2018).
Here we revisit WASP-5, an "ordinary" extrasolar planetary system, discovered by the SuperWASP collaboration (Anderson et al. 2008). The system harbors a single planet with a main sequence host, akin to our Sun. So far, no other planets have been reported in the system, although there are contradictory results concerning the origin of the transit time variation of planet b (i.e., Fukui et al. 2011; Hoyer et al. 2012). Based on the followup work of Gillon et al. (2009), the main system parameters are as follows: R_s/R_⊙ = 1.029, M_s/M_⊙ = 0.960, T_eff = 5700 K, a = 0.0267 AU, R_p/R_J = 1.087, M_p/M_J = 1.58. These parameters imply an equilibrium temperature (assuming zero albedo and full heat redistribution) of 1740 K (Chen et al. 2014). The orbital period is 1.6284300 d, derived in this paper from the combination of the earlier epochs and those resulting from the analysis of the data from the TESS satellite (Ricker et al. 2015).
Occultation observations in the Ks band have already been carried out by Chen et al. (2014) and Zhou et al. (2015). Here, we combine these data with our unpublished observations made by the 6.5-m Walter Baade Telescope at the Las Campanas Observatory. Our main goal is to increase the precision of the estimation of the occultation depth, an important ingredient for a more reliable modeling of the planet's atmosphere.
Datasets
Two occultation light curves in the near infrared Ks band have been published so far on WASP-5 b. Chen et al. (2014) used the MPG/ESO 2.2 m telescope to observe the target in all three 2MASS bands. In spite of the substantial instrumental systematics, they clearly detected the event after applying corrections due to positional and image quality dependences. Zhou et al. (2015) performed a survey of seven hot Jupiters by using the Anglo-Australian Telescope (AAT). Their survey also included WASP-5, yielding a long-stretched coverage, allowing a sufficient baseline in the eclipse modeling. Details of the observational settings and the methods used are described in the corresponding papers.
The third dataset comes from our single-night observations on 9 November, 2011 (UT). The four-chip camera of the FourStar infrared imager attached to the 6.5-m Walter Baade telescope was used to gather high cadence Ks images on the 10.9′ × 10.9′ field hosting WASP-5. An integration time of 4.4 s was used, yielding a ∼7 s overall sampling interval. For better photometric accuracy, the telescope was slightly defocused, resulting in stellar images of ∼10″ diameter. All images were taken in a simple staring mode, without dithering. Unfortunately, the sky was not photometric throughout the night, due to intermittent clouds. This has led to losing some 300 data points, primarily after the ingress, affecting ∼25% of the full observing run. To obtain the photometric fluxes, we employed both the classical iraf routines and those of the fitsh package by Pál (2012). The two methods have led to very similar results, so we decided to use our earlier reduction made by iraf.
First we performed the standard reduction steps of bias, dark and flat corrections, including a treatment for the overall infrared sky emissivity variation by a nonlinear iterative multi-step method, the nebulosity filtering algorithm of Irwin (2010). Then, we tested several aperture sizes to select the one that yielded the least scatter in the corresponding ensemble light curves (LCs). It turned out that nearly all apertures yield the same quality LCs, with a slight preference toward the mid-sized apertures. Finally, we selected the one with an aperture radius of 30 pixels (4.8″), with the outer annulus starting at a pixel radius of 40 and ending at 50, to assess the temporal background level.
Footnotes: iraf is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. fitsh: https://fitsh.net/
In deriving the final ensemble LC (i.e., the target flux divided by the simple sum of the fluxes of the comparison stars), we decided not to use any comparison star from chips other than chip 1, which hosts the target. On this chip (see Fig. 1) we have two bright comparison stars (No. 2 and 3) and a fainter one (No. 4). We found that adding the fainter star slightly increases the noise; therefore, we settled with the ensemble of the two brightest stars only.
Merging the three Ks light curves
Before some of the peculiarities of the merging process are detailed, we describe the steps leading to the ensemble LC of the FourStar/Baade data (set-2 in Table 1).
The FourStar/Baade light curve
As mentioned, the biggest issue with the data is the temporal cloudiness during some part of the first half of the observation. The top three panels of Fig. 2 show the flux variation for the entire run, including the target and the two comparison stars. It is worth noting that for the better visibility of the part of the flux variation that is dominated by the non-outlying points, we limited the plots at the 4% flux drop. Several data points reach as much as 60-80% drops.
Although the comparison stars serve as an excellent diagnostic of the environmental origin of the harsh variations seen in the target, and the ensemble flux ratio cures most of the variation, we see in the bottom panel that the large drops in the flux could not be filtered out at the level required by the small signal we are searching for. Nevertheless, the binned LC strongly suggests the presence of an underlying occultation signal. We note that in constructing the binned LC, we used overlapping bin sets with a shift of half of the bin width. In this way we can test the dependence of the binned LC on the bin distribution, which is an important piece of information on the sensitivity of any conclusion to be drawn from the binned LC, even if the conclusion is only preliminary.
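A minimal sketch of the overlapping binning just described, i.e., two interleaved bin sets shifted by half a bin width; the function name and array layout are illustrative assumptions, not code from the paper.

```python
import numpy as np

def overlapping_bins(t, f, bin_width):
    """Bin a light curve with two interleaved bin sets, the second shifted
    by half a bin width. Returns the merged, time-ordered bin centers and
    bin means."""
    centers, means = [], []
    for offset in (0.0, 0.5 * bin_width):
        edges = np.arange(t.min() + offset, t.max() + bin_width, bin_width)
        for lo in edges[:-1]:
            mask = (t >= lo) & (t < lo + bin_width)
            if mask.any():
                centers.append(lo + 0.5 * bin_width)
                means.append(f[mask].mean())
    order = np.argsort(centers)
    return np.asarray(centers)[order], np.asarray(means)[order]
```

Comparing the two interleaved bin sets gives a direct visual check of how much the binned curve depends on the (arbitrary) placement of the bin edges.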
In processing further the set-2 data, we observe the following: 1) there are outlier data points that are concentrated in a sufficiently broad section of the full time series and, therefore, might seriously bias the derived eclipse parameters; 2) likely because of (however small) differential extinction, there is a significant downward trend in the ensemble LC, which should also be filtered out; 3) a closer inspection of the ensemble LC in the bottom panel of Fig. 2 reveals that roughly in the middle of the cloudy period the flux suddenly jumped by a small fraction, enough to make a visible effect on the expected shallow eclipse. The most likely cause of this jump is a short-time change in the telescope pointing, leading to a sudden variation in the ensemble of pixels used in the flux evaluation. Leaving this jump in the ensemble LC would bias the occultation depth.
Table 1 (fragment): Zhou et al. (2015), IRIS2 / AAT 3.9 m / Siding Spring (Australia). Notes: All data were taken in the 2MASS K color, except for set-1, where the custom-made K filter of the GROND instrument was used (with a transmission curve very close to that of the 2MASS K band). Technical assistance provided by Timo Anguita (see Chen et al. 2014); assisted by Markus Rabus. Typical integration time.
Fig. 2 caption (fragment): [Fluxes of the target and the comparison stars (see] Table 1) and the resulting ensemble light curve (target flux over reference flux, ∼F1/(F2 + F3), without outlier correction, normalized to its average). Star-2 and 3 (see Fig. 1) served as comparison stars. The 11th-order polynomials, robustly fitted to the fluxes to handle outliers, are shown by green lines. Shaded area in the bottom panel indicates the period of intermittent clouds. The binned light curve (with overlapping bins, see text) is shown by yellow dots. The time axis is shifted to the moment of the first data point (BJD1).
By following the principle of 'least data massaging', we proceeded as follows. For the treatment of outliers (issue 1) we robustly fitted 11th-order polynomials to the fluxes of the target and the comparison stars. After the fit we employed a 3σ clipping for the outliers and replaced these items by the corresponding polynomial values for the respective fluxes. In this way we naturally ended up with an ensemble LC that had no outliers; however, it showed some trace of the 'trimming' made. Fig. 3 displays where the polynomial replacements of the original data points were made (green dots). When both the target and all of the comparison stars had to be corrected, we see a continuous sequence of points. In all other cases the corrected points scatter around the ridge, represented by the binned LC (yellow points).
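A compact sketch of this clip-and-replace scheme, assuming a plain least-squares polynomial fit inside the sigma-clipping loop (the paper's robust fit may differ in detail); all names are illustrative.

```python
import numpy as np

def clean_flux(t, f, order=11, nsigma=3.0, niter=5):
    """Iteratively fit a polynomial, flag >nsigma outliers, and finally
    replace the flagged points by the polynomial value, mimicking the
    outlier treatment described in the text."""
    f = np.asarray(f, dtype=float).copy()
    t = np.asarray(t, dtype=float)
    good = np.ones(len(f), dtype=bool)
    for _ in range(niter):
        coef = np.polyfit(t[good], f[good], order)
        model = np.polyval(coef, t)
        resid = f - model
        sigma = resid[good].std()
        good = np.abs(resid) < nsigma * sigma
    f[~good] = model[~good]  # replace clipped points by the polynomial
    return f, good
```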
The linear trend and the jump in the ensemble LC (issues 2 and 3) were treated within an iterative process by filtering out these systematics, fitting the cleaned LC to an eclipse model and then subtracting this eclipse model from the starting dataset to get the next approximation for the systematics.
The systematics were represented by a linear function for the trend and a jump function to handle the discontinuity mentioned above:

F_sys(t) = c_0 + c_1 t + c_2 H(t − t_jump) ,    (1)

where F_sys approximates the non-eclipse variation of the observed flux F, t is the time measured from the first data point, and H is the Heaviside function with unit step at t_jump = 0.068 d. The jump position was fixed throughout the fit. Because the star blocks all radiation from the planet, the trapezoidal approximation for the occultation light curve suits perfectly:

F_ecl(t) = 1 − δ (t − t_1)/∆t   for t_1 ≤ t < t_2 ,
F_ecl(t) = 1 − δ                for t_2 ≤ t ≤ t_3 ,    (2)
F_ecl(t) = 1 − δ (t_4 − t)/∆t   for t_3 < t ≤ t_4 ,
F_ecl(t) = 1                    otherwise,

where t_1, t_2, t_3 and t_4, respectively, are the moment of ingress (first contact), the start and end of the total eclipse, and the moment of egress. The lengths of the ingress and egress phases are assumed to be equal: t_2 − t_1 = t_4 − t_3 = ∆t. Except for the eclipse depth δ, all these parameters are scanned for the best fit within the framework of robust least squares. For any given set of {t_i} the eclipse depth was fitted in one step, due to the linear nature of the parameter. The systematics parameters {c_i} were fitted in the same manner. The final light curve for set-2 is shown in Sect. 3.2.
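The two models above translate directly into code. The sketch below is a hypothetical NumPy rendering (parameter defaults and names are assumptions, not the paper's implementation) of the trend-plus-jump systematics and the trapezoidal eclipse.

```python
import numpy as np

def trapezoid(t, t1, t2, t3, t4, depth):
    """Trapezoidal occultation model as reconstructed above: unity outside
    the event, 1 - depth during totality, linear ingress and egress."""
    t = np.asarray(t, dtype=float)
    f = np.ones_like(t)
    ing = (t >= t1) & (t < t2)
    tot = (t >= t2) & (t <= t3)
    egr = (t > t3) & (t <= t4)
    f[ing] = 1.0 - depth * (t[ing] - t1) / (t2 - t1)
    f[tot] = 1.0 - depth
    f[egr] = 1.0 - depth * (t4 - t[egr]) / (t4 - t3)
    return f

def systematics(t, c0, c1, c2, t_jump=0.068):
    """Linear trend plus a Heaviside jump at t_jump (days from the first
    data point), mirroring the trend-plus-jump model above."""
    t = np.asarray(t, dtype=float)
    return c0 + c1 * t + c2 * (t >= t_jump)
```

Since the model is linear in δ and in the {c_i} for any fixed set of contact times, those parameters can indeed be solved in a single least-squares step while the {t_i} are scanned on a grid.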
The three light curves
In trying to treat all three datasets in the same way, i.e., by starting from the simple ensemble light curve and employing the "minimum massage" post-processing step, we found that the case of set-1 (Chen et al. 2014) is different. Since the ensemble light curve suffers excessively from systematics (see Fig. 2 of that paper), we decided to use their processed light curve that was obtained by applying carefully chosen external parameters (such as stellar position and image size) to separate systematics. On the other hand, for set-3 of Zhou et al. (2015) we used their simple ensemble light curve, even though there is also a substantial nonlinear trend in the data. In spite of this, we decided not to use an airmass or some polynomial correction (as given in the original paper), since this may introduce unpredictable changes in the eclipse and significantly depress the depth of the occultation. All three light curves (sets-1 and -3 as above, set-2 as given in Fig. 3) serve as the input time series, to be fitted individually by the eclipse model and a linear trend (extended by a jump function for set-2). The result of this procedure is shown in Fig. 4.
Fig. 4 caption (fragment): [Light curves of the datasets listed in] Table 1 in the pre-merging phase. All light curves are filtered out from time-dependent linear trends and normalized by the total (star+planet) flux F_t. Black dots are the binned values, yellow lines are the trapezoidal models fitted to the original (unbinned) data shown by deep gray dots. For better visibility we increased the point size for set-1.
Merging the three light curves
Before constructing the merged light curve, we need to check if such a merging is possible, i.e., if there is a unique orbital period that matches all three light curves within the observational errors. Evaluation of the updated orbital period by using the primary transit observations from the TESS satellite and combining the ephemerides with earlier followup data will be given in Sect. 4. Here we merely use the orbital period and the moment of the transit derived from that analysis. By fitting the individual folded light curves we can examine if the data suggest strong discrepancies, signaling warnings to be considered during the merging process. Figure 5 shows the individual phase-folded fits, indicating that the three datasets are in reasonable agreement, even if we consider set-1, the most discrepant of all. Set-1 contains the least number of data points (699 vs 2084 and 1732 for set-2 and 3, respectively), and also has the largest residual (data minus fit) scatter (in relative flux units: σ = 0.0036, vs 0.0030 and 0.0033). In spite of these differences, all three datasets yield remarkably close egress phases. The cause of this is not entirely clear at this moment. In some cases, it might simply be the sign of more stable sky conditions in the second part of the run (i.e., for set-2 this was indeed the case).
In the final step of the merging process, we packed all data points in a single phase-folded dataset, by discarding the relatively small differences in data quality (i.e., weighting all data points from all sets equally). The phase-folded light curve, containing all the 4515 data points, was robustly fitted by the trapezoidal model. The resulting binned light curve and the best-fit trapezoidal are shown in Fig. 6. The fitted parameters are listed in Table 2. The errors were computed from simple Monte Carlo simulations, whereby the binned light curve (mapped back to all the 4515 phase points) was perturbed by a bin-dependent Gaussian noise. We opted to use the binned time series rather than the trapezoidal fit, because of the remaining systematics, especially before/after the ingress/egress. We generated 500 mock time series, fitted trapezoidals to each realization, and, after completion, we computed the standard deviations of the parameters. We refer to these standard deviations as the 1σ errors of the respective parameters. In Appendix A we give further details of the error calculation and the improvement of the parameters by using the merged data as compared with the fits to the individual datasets.
Table 2 (fragment): ∆t = 0.00528 ± 0.00070; δ = 0.00270 ± 0.00014. Notes: All eclipse times are in units of the orbital phase. Eclipse depth δ is the relative flux depression. The ingress and egress times, T_1 and T_4, can be converted into Barycentric Julian Date (TDB standard) by using the following formula, e.g., for the ingress: T_ing[BJD] = T_cen + P × (n + T_1), where n is the epoch number of the event of interest, T_cen = 2458355.50805 is the moment of the transit center and P = 1.6284300 d is the orbital period. Epochs are 'as observed', i.e., no correction was made due to the orbital light time effect of 27 s.
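A sketch of the Monte Carlo error recipe described above; `fit_func` stands in for the robust trapezoidal fitter (not shown here), and all names are illustrative.

```python
import numpy as np

def mc_errors(phase, flux_binned, sigma_binned, fit_func, n_mock=500, seed=None):
    """Perturb the binned light curve with bin-dependent Gaussian noise,
    refit each mock realization, and return the standard deviation of every
    fitted parameter as its 1-sigma error."""
    rng = np.random.default_rng(seed)
    params = []
    for _ in range(n_mock):
        mock = flux_binned + rng.normal(0.0, sigma_binned)
        params.append(fit_func(phase, mock))  # e.g., a trapezoidal fitter
    params = np.asarray(params)
    return params.std(axis=0)
```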
Analysis of the TESS data
We use the light curves acquired by the full sky survey satellite TESS for: a) updating the ephemeris of the transit (since the occultation and the transit data were acquired in different epochs, we need a precise ephemeris to predict the transit phase right before the occultation occurred, if we want to make an estimation on the eccentricity); b) measuring the emission in the optical, for which we need space-based data, because of the high precision needed to detect the signature of the planet at this wavelength in the phase of occultation; the thermal emission in the optical is small, due to the planet's low temperature and, in general, the albedos of the gas giants are also small (e.g., Wong et al. 2020, 2021). WASP-5 was observed by TESS in sector 02 between August 22 and September 20, 2018. Then, the object was revisited while scanning sector 29 between August 26 and September 22, 2020. The two segments comprise altogether over 30000 data points at the short cadence (2 min) sampling rate. We note that Wong et al. (2020) have already performed an analysis of the sector 02 data and ended up with conclusions similar to ours, as detailed in the subsections below. Figure 7 shows the light curves from the above two sectors after employing the Presearch Data Conditioning (PDC) method of Smith et al. (2012) and Stumpe et al. (2012) implemented in the TESS pipeline. The Simple Aperture Photometric (SAP) time series served as the input for the PDC filter. Both types of data were downloaded from the STScI MAST site. We filtered the data further by using a 36th-order robust polynomial fit to minimize the effect of the remaining systematics and possible stellar variability. The effect of this filtering is discussed in Appendix B.
Because sector 02 SAP data suffer from a large number of outlying data points, to make the analysis uniform, we performed an iterative 3σ clipping for all datasets. The clipping was made relative to the transit model and the clipped values were set equal to the corresponding model values.
Updating transit ephemeris
To derive transit light curves free from other variations, we employed the same type of robust iterative method as briefly described in Sect. 3.1. The input data were the PDC/SAP time series as mentioned above. The model time series constituted two multiplicative parts: the transit and a 36th-order polynomial. For the transit we adopted the simple model of Kovacs (2020), representing the ingress/egress phases as linear flux depressions with the same steepness and duration. The limb darkening was modeled by a scalable U-shaped function. We found this model quite satisfactory at the level of the accuracy of the data analyzed. The transit parameters for the various time series are shown in Table 3. After combining these with the transit parameters obtained from the five followup observations of Baluev et al. and from the discovery data of Anderson et al. (2008), we found that the orbital period of Fukui et al. (2011) should be decreased by 0.123 s to properly match the published epochs. By choosing the sector 02 timing as a reference, the final ephemeris is given in Table 4. The error of the epoch was computed from 50 simple Monte Carlo simulations by using the PDC data and is equal to the standard deviation of the epochs obtained from the 50 realizations. The error on the period was calculated from (σ_1² + σ_2²)^{1/2}/2444, where σ_1 = 0.00019 d as given in Fukui et al. (2011), σ_2 is the epoch error as given in Table 4, and the integer in the denominator is the elapsed epoch number between the two epochs. It is worth noting that the currently published ephemerides by Ivshina & Winn (2022) are in complete agreement with ours: there are 7 ms and 17 s differences between the periods and transit centers, respectively, corresponding to an agreement within 1−2σ.
Table 4 (fragment): epoch error ±0.00013 d. Notes: T_cen resulted from the analysis of the current (2018 and 2020) TESS visits; the period was derived from the combination of these TESS data and earlier followup observations dating back to the discovery (Anderson et al. 2008) of WASP-5.
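The quoted period error follows from combining the two epoch errors in quadrature over the elapsed 2444 cycles. The two-line check below (treating the ±0.00013 d table fragment as the Table 4 epoch error, which is an assumption) reproduces the order of magnitude.

```python
import math

sigma1 = 0.00019   # epoch error of Fukui et al. (2011), in days
sigma2 = 0.00013   # epoch error read from the Table 4 fragment, in days (assumed)
sigma_P = math.sqrt(sigma1**2 + sigma2**2) / 2444  # 2444 elapsed cycles
print(f"sigma_P = {sigma_P:.2e} d")  # -> ~9.4e-08 d, i.e. roughly 8 ms
```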
Search for occultation and phase variation
To test the dependence of a possible detection of these delicate features on the data processing methods, we used four data types: the SAP light curve with or without robust polynomial correction (see Sect. 4.1), and the PDC light curve with the same options. After prewhitening by the transit, we performed a bin signal search in the light curve folded by the orbital period. To account for the possible other variations, we employed a fully binned analysis, where the out-of-eclipse region was also divided into bins of the same size as the eclipse duration. After the bin with the largest flux depression was identified, we used a simple statistic to characterize its significance. Similarly, the phase variation was studied simply by a single-component Fourier fit to the original (i.e., not binned) phase-folded light curve. Significance tests were performed by using injected signals in pure Gaussian time series. Further details on the secondary eclipse and phase variation searches, together with the supplementary statistical tests, are given in Appendix B. Here we summarize the constraints derived in that appendix.
Footnote: When using the period of Fukui et al. (2011) we get an overall difference of ∼4 min, whereas with the 0.123 s lower period the differences are below 1 min and mostly 0.5 min.
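For the single-component Fourier fit mentioned above, a minimal linear least-squares sketch (names are illustrative; the paper's implementation is not shown) returns the semi-amplitude and the phase of maximum:

```python
import numpy as np

def cosine_fit(phase, flux):
    """Least-squares fit of f = a0 + a*cos(2*pi*phase) + b*sin(2*pi*phase);
    returns the semi-amplitude A and the phase of maximum brightness."""
    phase = np.asarray(phase, dtype=float)
    X = np.column_stack([np.ones_like(phase),
                         np.cos(2.0 * np.pi * phase),
                         np.sin(2.0 * np.pi * phase)])
    a0, a, b = np.linalg.lstsq(X, np.asarray(flux, dtype=float), rcond=None)[0]
    amp = np.hypot(a, b)                                 # semi-amplitude A
    phi_max = (np.arctan2(b, a) / (2.0 * np.pi)) % 1.0   # phase of maximum
    return amp, phi_max
```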
First, for the illustration of the data quality at the expected level of reflected light variation, we show the transit- and polynomial-filtered, PDC-processed light curve in Fig. 8. The blue dots resulted from overlapping binning (see Sect. 3.1) with 200 bins (400 points altogether). The bin model has a bin width equal to the transit length, yielding 17 bins. Although this figure gives little indication for the presence of the type of signals we are searching for, as shown in Appendix B, the parameters fitted to the data of the various processing levels remain remarkably stable. This leads to the following average values of the secondary eclipse depth and phase variation amplitude: δ = 0.157 ± 0.056 ppt, 2A = 0.113 ± 0.041 ppt. With additional statistical tests we found that, at the observed amplitude of the cosine component, there is only a 0.3% probability that the underlying phase variation has a total (peak-to-peak) amplitude greater than 0.20 ppt. Also, for a boxy eclipse of the same depth, the probability that the bin model yields a phase solution outside the expected secondary eclipse phase is less than 10%. With the observed correct location of the main dip for all four datasets, this suggests that we may have found a signature of the underlying signal.
Although the phase variation seems to yield a more stringent limit on the eclipse depth, the discordant phase of the cosine fit keeps us from relying too much on the result suggested by this fit. (The cosine fit exhibits considerably lower phase stability than the bin fit even for simple white noise, see Appendix B. In addition, we may also have other sources, e.g., stellar variability or instrumental systematics, that interfere with the phase variation but, because of the different time scales, leave the secondary eclipse relatively intact.) Therefore, we use the eclipse depth quoted above as our best guess at present for the real secondary eclipse depth in the TESS waveband.
Search for additional transit components
Although hot Jupiters systematically avoid close planetary companions (Poon et al. 2021), it is still a matter of interest if WASP-5 is one of those rare systems. Unfortunately, the short time spans of the TESS observations make the search for the more common longer period companions less trivial, leading to lower observed multiple system rates from the TESS data (Otegi et al. 2021).
After prewhitening by the transit, we performed BLS searches (Kovacs et al. 2002) in the frequency interval [0.01, 10] c/d. The time series contains two dense tracks, separated by ∼ 730 days, comprising 36778 SAP data points altogether. We tested all four data type combinations (SAP, PDC with or without polynomial filtering). All data types show an increasing power excess from 1 c/d down to 0.01 c/d with no prominent peak in this frequency interval. The spectra are flat in [1, 10] c/d, without any dominant peak superposed on the white noise background.
To test the detection limit in the potentially interesting frequency interval of low-order resonance, we injected a transit signal in the original SAP time series. Then we performed a polynomial filtering as mentioned earlier in this section. The injected signal had a period of half of the orbital period of planet b and a transit depth of 0.3 ppt (corresponding to 1.8 Earth radii). We used a boxy transit with the same duration as that of planet b. The result is shown in Fig. 9.
Fig. 9 caption: Injected transit test of the full TESS dataset. We used the SAP light curve to inject the signal and then robust polynomial filtering was employed to lower the red noise. We show the frequency spectrum of the so-derived time series, after subtracting the transit signal of planet b. Red arrow indicates the peak due to the injected signal with a transit depth of 0.3 ppt. The inset shows the close neighborhood of the test signal.
From the structure of the spectrum, it is clear that 0.3 ppt transit depth is close to the low limit of a transit signal we can hope to detect in the available dataset. This limit is changing as a function of dataset and frequency, and is obviously higher for signals with periods longer than one day.
The eccentricity
With the occultation ephemeris derived in Sect. 3 and with the updated transit ephemeris obtained by using the TESS data in Sect. 4, we can easily compute the two components of the eccentricity. For an easier reference, the components are as follows (Winn 2014):

e cos ω = (π/2) (ϕ_obs − ϕ_cal) ,
e sin ω = [T_14(oc) − T_14(tr)] / [T_14(oc) + T_14(tr)] ,    (3)

where ϕ_obs and ϕ_cal, respectively, are the observed (corrected for light-time effect) and calculated phases of the occultation centers (the latter with the assumption of a circular orbit), and T_14(oc) and T_14(tr) are the total (first-to-fourth contact) durations of the occultation and the transit. The argument of periastron is denoted by ω. As described in Sect. 3.3, the errors of the occultation signal in the Ks band were computed from a simple Monte Carlo simulation, based on the binned version of the merged data from the three data sources. The observational noise was considered to be multiplicative and nonstationary, according to the standard deviations around the bin means. For the 500 realizations we calculated the eccentricity components from the fitted trapezoidal occultation parameters.
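These relations are straightforward to evaluate; the helper below is a hypothetical convenience function (names chosen for illustration) returning both components together with e and ω:

```python
import numpy as np

def eccentricity_components(phi_obs, phi_cal, t14_oc, t14_tr):
    """Eccentricity components from Eq. (3): the occultation timing offset
    and the occultation/transit duration asymmetry."""
    ecosw = 0.5 * np.pi * (phi_obs - phi_cal)
    esinw = (t14_oc - t14_tr) / (t14_oc + t14_tr)
    e = np.hypot(ecosw, esinw)        # |e| from the two components
    omega = np.arctan2(esinw, ecosw)  # argument of periastron, radians
    return ecosw, esinw, e, omega
```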
As is obvious from Eq. (3), the two components are not independent. Furthermore, e cos ω is expected to be less noisy, as several authors have noted previously (e.g., Winn 2014). Indeed, Fig. 10 clearly shows both the correlation and the considerably tighter behavior of e cos ω. We also observe that the e sin ω component is shifted to more positive values. This is because noise makes the ingress/egress parts shallower, leading to the preference of longer eclipse durations in the best-fit search.
Fig. 10 caption: Eccentricity components obtained from the Monte Carlo simulations as described in the text. Gray and black dots, respectively, denote the e sin ω and e cos ω components. The inset shows the correlation between the two components, leading to a smaller error on the eccentricity as compared to that of the e sin ω component alone.
From these simulations we obtained the errors also for the eccentricity components and, finally, for the eccentricity. For simple reference, we summarized these parameters in Table 5. The correlation between the two eccentricity components is also exhibited by the lower error on the eccentricity than that on the e sin ω component.
It is instructive to compare our eccentricity values with those derived from the Spitzer data by Baskin et al. (2013). First we checked if there was any difference between using our transit ephemerides vs those employed by Baskin et al. (2013). We found that for the two epochs Baskin et al. (2013) published in their Table 1, our ephemerides predicted an average offset 1 min greater than the one calculated from the ephemerides of Fukui et al. (2011) (3.7 min vs 4.7 min). Their offset time implies e cos ω = 0.0025 ± 0.0012. Because the agreement between their value and ours is at the ∼1σ level, we can average them out and arrive at a value of 0.0020 ± 0.0016, not implying anything different from zero eccentricity. (Baskin et al. 2013 did not publish e sin ω values, so we cannot compare the eccentricities directly.)
The emission spectrum
Here we examine how the more accurate occultation depth in the near infrared (Sect. 3.3) and our preliminary estimate of the same quantity in the visible from the TESS data (Sect. 4.2) can constrain the atmospheric properties of WASP-5 b. The secondary eclipse analysis was presented in Sect. 3, where we derived an occultation depth of δ(occ, Ks) = (2.70 ± 0.14) ppt in the near infrared.
In the visible, corresponding to the wide-band filter of TESS (λ_eff = 0.746 µm, W_eff = 0.390 µm; see http://svo2.cab.inta-csic.es/theory/fps/), we use the average of the eclipse depths obtained from the four types of datasets: δ(occ, vis) = (0.157 ± 0.056) ppt. Although we could also use the value obtained from the estimation of the phase variation, we opted not to use this value for the reasons discussed in Sect. 4.2. Baskin et al. (2013) measured the planet's emission at 3.6 µm and 4.5 µm with the Spitzer infrared satellite. Although we do not make any model fitting in this paper (since we use the same models as given by Chen et al. 2014), we found it instructive to display all currently available data on the same plot.
The atmospheric models presented by Chen et al. (2014) are based on the plane-parallel equilibrium models of Madhusudhan & Seager (2009, 2010), employing a free pressure-temperature profile and chemical composition. Figure 11 shows these theoretical spectra and the black body lines for fully efficient and zero circulation (i.e., lack of heat exchange between the day and night sides of the planet; see Cowan & Agol 2011). The atmospheric models have a monotonic pressure-temperature profile (i.e., no temperature inversion). The depth of the atmosphere was chosen to fit the brightness temperatures corresponding to the J, H, K data of Chen et al. (2014).
There are two essential conclusions we can draw from the positions of the new data points with respect to these models. First, the lower error bar on Ks increased the significance of the higher Ks flux and suggests strong emission in this waveband. This can be realized by additional emitters at a deeper level of the atmosphere (corresponding to temperatures higher than 2700 K).
Second, even though our estimate for the secondary eclipse depth is only tentative in the TESS band, it still yields a useful piece of information. This is because of the relatively small error on the data with respect to the spectral features in the optical. In particular, the current value of the emission in the visible corroborates what the Spitzer data may also indicate, i.e., no strong preference for any of the models used. From the optical occultation depth we can also estimate the geometric albedo. For an easier reference, here we repeat the necessary formulae presented by Cowan & Agol (2011) and, e.g., by Daylan et al. (2021). The observed occultation depth consists of two parts: the thermal radiation by the planet and the reflected light of the host star,

δ_obs = δ_therm + δ_refl .    (4)
Assuming a circular orbit, the reflected light is directly related to the geometric albedo A_g:

δ_refl = A_g (R_p/a)² ,    (5)

where R_p is the planet radius and a is the semi-major axis. The dayside thermal emission can be parameterized as follows:

δ_therm = (R_p/R_s)² F_p(λ)/F_s(λ) ,    (6)

where R_s is the stellar radius, and F_p and F_s are the wavelength (λ) dependent fluxes of the planet and the star, respectively. In the black body approximation, parameter α is used to relate the substellar temperature T_0 = T_eff (R_s/a)^{1/2} to the dayside temperature T_day:

T_day = α^{1/4} T_0 .    (7)

The single parameter α comprises the Bond albedo A_b and the atmospheric circulation parameter ε. For the origin of the coefficients in the expression of α we refer to Burrows et al. (2008) and Cowan & Agol (2011). Obviously, the separation of A_b and ε is not possible by using the occultation alone. However, by measuring the phase curve, one may attempt to derive the night side emissivity that depends solely on ε (Singh et al. 2021). Because of the lack of the very high quality data required by this method, we opted to parameterize the derived geometric albedo depending on the extreme limits of ε, omitting the negligible temperature decrease due to the expectedly small Bond albedo (e.g., Mallonn et al. 2019). Furthermore, as usual, we assumed pure black body radiation both for the star and for the planet in the waveband of interest.
Fig. 12 caption: Share of the thermal and reflected light in the total flux change during occultation with varying geometric albedo. The dots are for two extremes of planetary atmosphere dynamics with negligible Bond albedo: α = 2/3, complete lack of circulation; α = 1/4, fully efficient circulation. The 1σ error of the observed value is indicated by the gray-shaded stripe. The vertical stripes show the resulting geometric albedos.
The result is shown in Fig. 12. Although with fully efficient circulation (α = 1/4) the geometric albedo can be as high as A_g = 0.43 ± 0.15, based on the Ks occultation data (e.g., Kovács & Kovács 2019) and several other, more direct studies (i.e., those based on full phase curve analyses, such as Keating et al. 2019), it is highly unlikely that WASP-5 b stands out from the other hot Jupiters, which mostly have low circulation efficiency. Therefore, it is quite reasonable to assume that the true value of A_g is closer to the no-circulation limit of A_g = 0.29 ± 0.15.
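As a numerical cross-check of Eqs. (4)-(7), the sketch below evaluates the black body thermal depth at the TESS effective wavelength only, rather than integrating over the band, which is a simplification; the system values in the usage comment are the approximate ones quoted in Sect. 1, and all function names are illustrative.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants (approximate)

def planck(lam, temp):
    """Black body spectral radiance at wavelength lam [m], temperature temp [K]."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp))

def geometric_albedo(delta_obs, rp_a, rp_rs, rs_a, t_eff, alpha, lam=0.746e-6):
    """Solve Eq. (4) for A_g after subtracting the black body dayside
    emission of Eqs. (6)-(7), evaluated at a single wavelength."""
    t_day = alpha**0.25 * t_eff * np.sqrt(rs_a)                  # Eq. (7)
    delta_therm = rp_rs**2 * planck(lam, t_day) / planck(lam, t_eff)  # Eq. (6)
    return (delta_obs - delta_therm) / rp_a**2                   # Eq. (5) inverted

# With R_p/a ~ 0.0195, R_p/R_s ~ 0.109, R_s/a ~ 0.179 and the TESS depth:
# geometric_albedo(0.157e-3, 0.0195, 0.109, 0.179, 5700.0, 0.25) -> ~0.40,
# close to the band-integrated A_g ~ 0.43 quoted for alpha = 1/4.
```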
Conclusions
In this work we dealt with the secondary eclipse (occultation) light curve of the hot Jupiter WASP-5 b. Our goal was twofold: i) derive an accurate occultation light curve in the 2MASS Ks band, and ii) use the latest TESS data to obtain the first estimate of the occultation depth in the optical. For goal i) we used the already published Ks photometry of Chen et al. (2014) and Zhou et al. (2015) and combined these with the so far unpublished observations made by the FourStar infrared imager of the Baade 6.5 m telescope in 2011. Following the principle of "minimum data massage" we ended up with a high precision occultation light curve. The relative flux depression (planet vs star) is 2.70 ± 0.14 ppt, which places WASP-5 b among the top few extrasolar planets with a Ks occultation light curve of such high relative precision.
We attempted to find the signature of secondary eclipse and phase variation in the visible by using the currently available TESS data from two visits. Based on the statistical tests presented in Appendix B we found convincing pieces of evidence that the underlying signals are unlikely to have total variation greater than 0.20 ppt. As a result, we accepted the eclipse depth derived from the formal fit, i.e., (0.157 ± 0.056) ppt.
Using these values, our main conclusions on the atmospheric properties of WASP-5 b are as follows.
- Simple black body radiation fails to reach the observed Ks emission at the level of 10σ. A similar statement, with a somewhat lower significance of 4−7σ, is also true for the band-averaged values of the adopted atmospheric models. Detailed atmospheric modeling with strong emission features in the Ks band is required to fit the high observed emission.
- The value derived for the emission in the TESS waveband shows no preference for any of the adopted models with oxygen or carbon enhancements. The observed emission value is ∼2σ apart from both models.
- From the TESS eclipse depth we found that, depending on the circulation model, the geometric albedo A_g is likely in the range of 0.29−0.43. This places WASP-5 b among the most reflective extrasolar planets, but with the caveat of the preliminary nature of the detection of the secondary eclipse in the optical.
It is also worth mentioning that the ephemeris of the Ks occultation light curve further confirms the low (likely zero) eccentricity of the orbit, namely e = 0.005 ± 0.015. Furthermore, the TESS data do not suggest the presence of any additional transiting planet larger than ∼2 Earth radii with a period between 0.1 and 30 days.
The Ks waveband is within relatively easy reach for most of the ground-based telescopes with near infrared capabilities. However, the time scale and the signal level still make extrasolar occultation measurements challenging, due to the combination of these two properties with the local conditions, exhibited via the red noise component of the observed photometric time series. Perhaps the best way of handling red noise (if there are no other ways to filter it out) is to take multiple samplings. The careful combination of these samples will reduce both the white and the red noise components. Due to the sparse sampling from the side of the available data points in the different wavebands, measurement accuracy is crucial for spectral retrieval. Although this spectral band is (or will be) available in various space missions (JWST now and ARIEL by the end of the decade; see Tinetti et al. 2018), the expected high demand (in particular for JWST) makes ground-based observations still very important in supplying high quality data for more reliable extrasolar planet atmosphere modeling.
Acknowledgements. Constructive comments by the referee are appreciated, in particular those that have led to a deeper investigation of the significance of the secondary eclipse in the TESS band. We thank Nikku Madhusudhan for the valuable comments regarding the atmospheric modeling of WASP-5b. This paper includes data collected with the TESS mission, obtained from the MAST data archive.
Appendix table notes: The ingress and egress phases (T_1, T_4) can be converted into Barycentric Julian Date (TDB standard) by the following formula: T_ing,egr[BJD] = T_cen + P × (n + T_{1,4}), where n is the epoch number of the event of interest, T_cen = 2458355.50805 is the moment of the transit center and P = 1.6284300 d; see Sect. 4.1 for more details. The epochs are without correction for the orbital light time effect.
Notes to Table B.1: Pol: 1/0 for with or without polynomial filtering; σ: standard deviation of the input time series; ∆ϕ = ψ − ϕ_0 for the bin model and ∆ϕ = φ_0 − ϕ_0 for the cosine model; δ: eclipse depth; DSP: dip significance parameter; A: amplitude of the fitted cosine; SNR: signal-to-noise ratio for the cosine fit. See text for additional information on the symbols.
Fig. B.1 caption (fragment): The width of the bins is equal to the eclipse duration of WASP-5, and the bins are distributed according to the eclipse phase (centered arbitrarily at phase zero). In both cases one realization is shown by a black line to follow bin-by-bin fluctuations more easily. To avoid unnecessary jamming, we show only realizations with the largest negative bin average at the eclipse center. The difference in the overall eclipse depth in the two cases is due to the different DSP cutoffs used.
It is important to address the statistical significance of these obviously near noise-level detections. To do so, we performed simple Monte Carlo simulations by generating the following type of signals:

x(i) = S(i) + G(i) ,

where G(i) is uncorrelated Gaussian noise with the standard deviations shown in Table B.1. The function S(i) is either a simple box or a cosine function, representing the secondary eclipse or the phase variation. The box model is zero out of the eclipse and −δ within the eclipse, with the box centered at the phase of the assumed eclipse, ϕ_0 (0.5 phase after the transit). Probability density functions (PDFs) and cumulative distribution functions (CDFs) were computed for the phase differences (i.e., ∆ϕ = ψ − ϕ_0), DSP and A values.
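A sketch of this mock-signal generator (parameter defaults and names are illustrative, not the paper's settings):

```python
import numpy as np

def mock_lightcurve(phase, sigma, kind="box", depth=2e-4, phi0=0.5,
                    width=0.06, seed=None):
    """Generate a mock folded light curve x = S + G as described above:
    Gaussian noise plus either a box eclipse of the given depth centered
    at phi0, or a cosine phase variation of semi-amplitude depth/2."""
    phase = np.asarray(phase, dtype=float)
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=len(phase))
    if kind == "box":
        signal = np.where(np.abs(phase - phi0) < 0.5 * width, -depth, 0.0)
    else:
        signal = -0.5 * depth * np.cos(2.0 * np.pi * (phase - phi0))
    return signal + noise
```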
The primary goal of this investigation is to pose an upper limit on the underlying signal by using the near noise-level values derived above. Therefore, we computed two basic quantities. In the presence of injected signals of varying amplitudes and box depths we computed: a) the occurrence rates of box depths and cosine amplitudes less than the observed values; b) the occurrence rates of the dip center and cosine maximum phases within the proximity of the predicted secondary eclipse.
As an example, for three different injected box signals, Fig. B.2 shows the PDF of the phase differences and the CDFs of the DSP values. The phases seem to converge quite quickly (i.e., even for boxes as shallow as 0.1 ppt, more than half of the cases hit the near proximity of the expected phase). It is interesting to note that there is a surplus of occurrences at the edges of the phase distribution. It is especially visible in the pure noise (δ = 0.0) simulations. The reason for this excess is the smaller number of data points in the bins at the end-phases of the folded time series (the bin occupancy depends on the positioning of the occultation and the width of the eclipse). This leads to more fluctuating values at the edges, and therefore a higher chance of being selected as the "best fit" for the box model.
The DSP values are less sensitive to a small underlying signal. They are still near the pure noise values and they become more distinctive from these only if the box depths become deeper than 0.15 − 0.20 ppt.
A similar test performed on the same type of dataset with injected cosine signals indicates the opposite effect (see Fig. B.3). The phase settles at a far lower pace, but the amplitude of the signal becomes more quickly detectable. For example, if the underlying cosine signal had an amplitude of 0.1 ppt (corresponding to an eclipse depth of 0.2 ppt), then the probability that we can detect a component as low as observed is less than 0.003.
Fig. B.2 caption (fragment): [...] Table B.1, row 4. Left column: PDF of the phase difference between the test box center and the calculated secondary eclipse center from the bin model, for the same box depths as shown in the corresponding panel on the right. The bin width is the total transit duration, i.e., 0.030 in phase units.
We can assess the likelihood of the various underlying signals in the currently available TESS data by using the data settings (i.e., data point distribution, N and σ) for the two extreme data types shown in Table B.1. These settings correspond to the PDC and SAP fluxes, respectively, with and without polynomial detrending. We utilize the detection power of the phase of the bin search and the amplitude sensitivity of the Fourier fit. The upper panel of Fig. B.4 shows the occurrence rate of the observed total amplitude as a function of the underlying (injected) signal amplitude. The lower panel exhibits the likelihood that the best-fitting bin phase is not in the close proximity of the predicted phase.
These plots suggest that the underlying phase variation should have a total amplitude less than 0.20 ppt with a probability of more than 99.7%, because otherwise it would have a probability of less than 0.3% to yield a cosine amplitude as small as given in Table B.1. Although at a somewhat lower level of significance, this result is corroborated by the frequency of the correct phase hits in the box test. Depending somewhat on the data type, the probability of not hitting the correct phase for an underlying boxy eclipse signal of depth 0.20 ppt is between 4% and 9%. We note that the average of the eclipse depths derived from the observed data and the associated 1σ formal error yield an upper limit of 0.157 + 0.056 = 0.213 ppt, close to the high-significance limit obtained above. This, together with the stability of the observed eclipse phases, lends further support to the tentative detection of the secondary eclipse of WASP-5b from the TESS data.

(Caption residue of Fig. B.4. Upper panel: occurrence rate of the detection of cosine amplitudes δ/2 less than δ0/2 in the presence of injected cosine amplitudes d_inj/2; the Gaussian components of the mock signals were generated using the standard deviations of the SAP and PDC time series (first and last rows in Table B.1), and the δ0 values refer to the respective total amplitudes in the same table (i.e., 0.086 and 0.106 ppt for the SAP and PDC data, respectively). Lower panel: occurrence rate of the phase difference ∆ϕ for the same noise models, injected with box signals of depths d_inj and widths of 0.060 in phase units.)
|
2022-04-05T01:16:06.433Z
|
2022-04-03T00:00:00.000
|
{
"year": 2022,
"sha1": "3d02927ff262903f242e158e2b75792cc5d93fc8",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/forth/aa43131-22.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8570713e16cc1ec6c2e92cb54997aa4fc7cd0a3c",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
231595187
|
pes2o/s2orc
|
v3-fos-license
|
Epidemiological Analyses Reveal a High Incidence of Breast Cancer in Young Women in Brazil
PURPOSE Breast cancer screening is not recommended for young women (< 40 years old); therefore, those diagnosed are more likely to have advanced and metastatic disease, reducing treatment outcomes. This study aimed to investigate breast cancer epidemiology among young women in Brazil. METHODS Data from three publicly available databases and a cohort from a university hospital in Brazil were analyzed in a retrospective study. Descriptive statistics were computed on disease prevalence and stage distribution across age groups. Incidence was estimated using the age-standardized incidence ratio. The impact of age on disease-specific survival was also analyzed. RESULTS Invasive breast cancer prevalence data by age group revealed that 4.4% and 20.6% of patients were < 35 and < 45 years old, respectively. In the United States, this prevalence was 1.85% and 11.5%, respectively (odds ratio [OR], 2.2; P < .0001). The percentages of regional and metastatic disease were higher in São Paulo State (Fundação Oncocentro de São Paulo [FOSP]) compared with the United States (45% and 9.8% v 29% and 5.7%, respectively; P < .0001). In FOSP, regional and metastatic disease prevalence were higher among young patients (53.5% and 11.3%, respectively). The median tumor size in patients < 40 years old was larger (25.0 mm v 20.9 mm; P < .0001), young patients had a higher risk of being diagnosed with positive lymph nodes (OR, 1.5; P = .004), and they had a higher proportion of luminal-B and triple-negative (TNBC) tumors. Young patients have poorer disease-specific survival because of late-stage diagnosis and more aggressive breast cancer subtypes (human epidermal growth factor receptor 2-enriched and TNBC) (P < .0001). CONCLUSION In Brazil, breast cancer prevalence among young patients and late-stage incidence during this age span are higher. Advanced disease and more aggressive subtypes lead to a significant impact on breast cancer-specific survival in young patients.
INTRODUCTION
Breast cancer is the leading malignancy affecting women worldwide. 1 The probability of developing breast cancer increases with age, and the incidence of the disease is reported as uncommon in women younger than 40 years. 2 According to the US SEER database staging system, the median age at diagnosis is 62 years, and the prevalence in patients < 35 years and between 35 and 45 years of age is 1.9% and 8.4%, respectively. 3 Although stage has been described as the main prognostic factor in breast cancer, the disease is considered heterogeneous. 4 Many studies have reported that molecular classification based on whole gene expression profile or immunohistochemistry could stratify patients into distinct subtypes in terms of biological behavior and prognosis. 5 Moreover, young patients with breast cancer are more likely to be diagnosed with advanced disease, with the prevalence of aggressive tumor subtypes being higher compared with older patients. 6 This scenario may influence the survival rate and the cost of treatment. 7 Additionally, breast cancer in young patients has other critical implications; these patients are at high risk of developing a new breast cancer in the residual ipsilateral or the contralateral mammary gland because of the longer remaining lifespan and the social and economic impacts of the treatment for those at working and reproductive ages. 8 Breast cancer incidence is reportedly increasing in several low- and middle-income countries. 9,10 This increase has been affecting all age groups; hence, determining the actual prevalence of breast cancer in women under screening ages is crucial for making public health policy decisions. In this study, we aimed to analyze the prevalence and incidence of breast cancer in young patients and its impact on disease-specific survival in a Brazilian population.
Study Design
This was a retrospective observational study based on the analysis of three publicly available databases (Fundação Oncocentro de São Paulo [FOSP] cohort, the Instituto Nacional de Câncer [INCA] cohort, and the SEER Program [SEER cohort]) and a cohort from Hospital das Clinicas-Ribeirão Preto School of Medicine-University of São Paulo, Brazil (HCRP cohort). It was approved by the local ethics committee (#2.638.453/2018). The public databases were used to estimate breast cancer prevalence and incidence in Brazil and the United States. The HCRP database was used to analyze disease-specific survival in young patients with breast cancer. All patients included in the three Brazilian data sets had their treatment covered by the public health system (Sistema Único de Saúde).
Population and Exclusion Criteria
A total of 114,936, 218,053, 1,632,850, and 1,970 patients were classified as International Classification of Diseases code 50 in the FOSP, INCA, SEER, and HCRP cohorts, respectively. Data curation was based on histological diagnosis, year of diagnosis, sex, and registry duplication. The exclusion criteria were as follows: (1) duplication, (2) misclassification, (3) misdiagnosis, (4) diagnosis of noninvasive disease, (5) malignant tumor not arising from mammary epithelial cells, (6) diagnosis in males, and (7) year of diagnosis before the year 2000. Records without the year of diagnosis and years with poorly represented data (< 40% of the median) were also excluded. Table 1 presents the final cohort population.
Statistical Analyses
Patients were classified into age groups (5-year ranges). Descriptive statistics were computed. Disease prevalence and stage distribution across age groups were compared using the χ2 test. We used 2010 population estimates for São Paulo, Brazil, and for the United States as the base for population incidence estimation. 11 The age-standardized incidence ratio and the cumulative risk were estimated as previously described. 12 To maintain stage classification uniformity, we used the SEER nomenclature: localized (stage I and II N0), regional (stages II with N1 and III), and distant (stage IV). 13 For survival analysis, patients were divided into two groups as defined in the European Society for Medical Oncology 3rd international consensus: young (< 40 years) and standard (≥ 40 years). 14 The breast cancer subtypes in the HCRP data set were considered to be (1) luminal A if estrogen receptor (ER)-positive and/or progesterone receptor (PR)-positive, human epidermal growth factor receptor 2 (HER2)-negative, and grade 3; (2) luminal B if ER-positive while PR- and HER2-negative; (3) luminal/HER2 if ER-positive and/or PR-positive while HER2-positive; (4) HER2 if ER- and PR-negative while HER2-positive; and (5) triple-negative (TNBC) if ER-, PR-, and HER2-negative. 15 The difference in primary tumor size was analyzed using the Wilcoxon test. Chi-squared or Fisher's exact tests were performed to analyze associations among qualitative variables and to estimate odds ratios (OR). Univariate survival analyses were based on Kaplan-Meier curves and tested for significance using the log-rank test. A Cox multivariate regression model including the significant variables from the univariate analysis was used for hazard ratio estimation. The level of significance was set at 5%. We conducted all analyses using R version 3.6.1. 16
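To make the survival workflow described above concrete, here is a minimal sketch of the kind of analysis involved (Kaplan-Meier estimates compared with a log-rank test, followed by a multivariate Cox model). It uses the Python lifelines package rather than the authors' R code, and the data frame, column names, and values are illustrative assumptions, not the HCRP data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort table: follow-up time (years), death from breast cancer (1/0),
# age group and stage indicators -- the column names are assumptions, not the HCRP schema.
df = pd.DataFrame({
    "time":     [2.1, 3.0, 11.2, 5.0, 9.7, 3.3, 12.5, 6.8, 10.5, 8.0],
    "event":    [1,   1,   0,    1,   0,   1,   0,    1,   0,    1],
    "young":    [1,   0,   0,    1,   0,   1,   0,    1,   1,    0],   # 1 if < 40 years
    "regional": [1,   1,   1,    1,   0,   0,   0,    1,   0,    0],
})

# Univariate comparison: disease-specific survival, young vs. standard age group
young, std = df[df.young == 1], df[df.young == 0]
kmf = KaplanMeierFitter()
kmf.fit(young["time"], event_observed=young["event"], label="young (<40)")
print(kmf.median_survival_time_)
print(logrank_test(young["time"], std["time"],
                   event_observed_A=young["event"],
                   event_observed_B=std["event"]).p_value)

# Multivariate Cox model with the covariates retained from the univariate step
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios (exp(coef)) with confidence intervals
```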
CONTEXT Key Objective
This study aimed to explore the incidence, characteristics at presentation, and outcomes of breast cancer in young women in Brazil. We performed epidemiological data analysis of three breast cancer databases from Brazil and have made comparisons with the US SEER database staging system.
Knowledge Generated
We observed a higher prevalence of breast cancer in young women (< 40 years old) in Brazil. Based on age-adjusted incidence, our results support the fact that the higher prevalence is because of a higher incidence. Young patients had more aggressive subtypes and higher risk of being diagnosed with advanced and metastatic tumors. Late diagnosis and the higher prevalence of triple-negative and human epidermal growth factor receptor 2 (HER2)-positive tumors in young patients are associated with worse disease-specific survival.
Relevance
Clinical-pathological characteristics of breast cancer in young women lead to significant impact on disease-specific survival.
These findings add important information for public health planners and reinforce the need to identify specific surveillance and screening policies in young women.
Prevalence of Invasive Breast Cancer

According to the SEER data, the median number of cases per year was 57,202 (IQR, 7,142.25), and the mean age at diagnosis was 61.7 ± 14.0 years. The age at diagnosis is significantly higher in the SEER data set compared with the Brazilian data (P < .0001). The prevalence of patients with invasive breast cancer was 1.85% and 11.5% for women < 35 and < 45 years, respectively. Breast cancer among young women accounts for 10.5% and 5% of cases in the Brazilian and SEER cohorts, respectively. There is a significant difference in prevalence among young patients between the two data sets (OR, 2.2; 95% CI, 2.20 to 2.28; P < .0001).
To correct for population density in the distribution data, we used the SEER and FOSP data and estimated the age-standardized incidence ratios. The overall incidence ratio was higher in the SEER cohort (36.7 v 26.7 cases/100,000 women/year), with a cumulative risk by the age of 40 years of 0.13% (SEER cohort) and 0.15% (FOSP cohort). Figure 2 illustrates the age-standardized incidence ratios in both databases. Note that the incidence ratio by the age of 50 years is higher in the FOSP cohort.
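The direct age-standardization referred to above can be sketched as follows. The age bands, case counts, person-years, and standard-population weights below are made-up numbers for illustration only, not the SEER or FOSP figures.

```python
import numpy as np

# Hypothetical age-specific data: new cases and person-years at risk per 5-year band
age_bands   = ["20-24", "25-29", "30-34", "35-39", "40-44"]
cases       = np.array([3, 8, 21, 55, 120])
person_yrs  = np.array([250_000, 240_000, 230_000, 220_000, 210_000])

# Weights of a reference ("standard") population over the same bands (assumed)
std_weights = np.array([0.22, 0.21, 0.20, 0.19, 0.18])
std_weights = std_weights / std_weights.sum()

age_specific_rates = cases / person_yrs                     # per person-year
asr = np.sum(age_specific_rates * std_weights) * 100_000    # per 100,000 women/year
cum_risk_by_40 = 1 - np.exp(-5 * age_specific_rates[:4].sum())  # 5-year bands up to age 40

print(f"ASR = {asr:.1f} per 100,000 women/year; cumulative risk by 40 = {cum_risk_by_40:.2%}")
```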
Stage Distribution Among Young Patients
We analyzed the distribution of tumor stages at diagnosis according to the age at breast cancer onset using data from the SEER and FOSP databases. Overall, the percentages of regional and metastatic diseases are higher in the FOSP cohort (45% and 9.8% v 29% and 5.7%, respectively; P < .0001). The prevalence of regional and metastatic diseases is even higher among young patients. In the FOSP cohort, 53.5% and 11.3% of young patients were diagnosed with regional disease and metastatic disease, respectively. Figure 3 illustrates the distribution of stages according to age groups in both databases.
Impact of Age in Breast Cancer Subtypes and Survival
We analyzed the association of age group and breast cancer subtype with overall and disease-specific survival using data from the HCRP database. Although overall survival was not affected by age (P = .6), young patients had a significant reduction in disease-specific survival. The median survival time was 8.1 years (95% CI, 6.9 to 11.0) for young and 11.5 years (95% CI, 10.6 to 13.3) for standard patients (P = .004). Figure 4 shows the Kaplan-Meier curves for disease-specific survival.
Based on the univariate analyses, disease stage and tumor subtype significantly influenced the disease-specific survival. The distribution of breast cancer subtypes and stages are significantly different in young patients. Table 2 summarizes patient characteristics according to age group.
Young patients presented with more advanced disease. The median tumor size in the young and standard groups was 25.0 mm (IQR, 25) and 20.9 mm (IQR, 16), respectively (P < .0001). Axillary involvement was associated with primary tumor size (P < .0001), and young patients had a higher risk of being diagnosed with positive lymph nodes (OR, 1.5; 95% CI, 1.1 to 2.1; P = .004). Additionally, there was a higher proportion of luminal-B and TNBC tumors among young versus standard patients (11% v 7.3% and 20% v 15%, respectively; P = .03).
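As a quick illustration of how an odds ratio with a confidence interval, such as the lymph-node result above, can be obtained from a 2×2 table, the sketch below uses hypothetical cell counts (they are not taken from the paper).

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 2x2 table: rows = young / standard, columns = node-positive / node-negative
a, b = 120, 180   # young: positive, negative (made-up counts)
c, d = 400, 900   # standard: positive, negative (made-up counts)

or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf standard error of log(OR)
z = norm.ppf(0.975)
ci_low, ci_high = np.exp(np.log(or_hat) + np.array([-z, z]) * se_log_or)
print(f"OR = {or_hat:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```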
To correct for age, stage, and tumor subtype in the hazard ratios, we used the Cox proportional hazards model to assess the effect of these multiple variables on disease-specific survival. According to the multivariate analysis, the significant variables affecting disease-specific survival were tumor subtype (HER2-enriched and TNBC) and tumor stage (Table 3).
DISCUSSION
Breast cancer diagnosis and treatment in young patients is challenging because aggressive subtypes are more frequent and most patients are diagnosed with advanced disease. 17,18 In turn, this leads to a high cost of treatment; a massive impact on familial, sexual, social, and economic aspects; and a reduced survival rate. 8,19 Although several data support the observation of a low breast cancer prevalence among young patients, we have demonstrated that the prevalence among the Brazilian population is not irrelevant. We identified 10.5% of diagnosed breast cancers in patients < 40 years. Additionally, the estimated age-standardized incidence suggested that our population is at higher risk of developing breast cancer before age 50 years. More than 60% of young patients were diagnosed with locally advanced or metastatic disease, and the prevalence of TNBC was also higher. These factors led to a significant decrease in disease-specific survival.
Breast cancer screening programs do not cover young women under average risk. There is no evidence that screening young women leads to a significant reduction in breast cancer mortality, 20,21 added to the potential increase in overdiagnosis and the radiation-induced cancer. 21 Furthermore, the screening mammogram's low sensitivity in dense breasts, in addition to the low disease prevalence, leads to an unacceptable predictive positive value in young women. 22 Thus, in this age group, breast cancer is mainly diagnosed on symptomatic women.
Some studies have estimated the natural progression of breast cancer. 23,24 Plevritis et al 25 reported that the median tumor size during transition from local to regional disease (axillary involvement) is 2.5 cm. In the HCRP cohort in our study, 50% of young patients were diagnosed with tumors > 2.5 cm, corroborating the observation that most young Brazilian patients are diagnosed with regional and distant disease. Moreover, a previous study has shown that the difficulty in patient flow through public health care and the consequent time between diagnosis and treatment are important factors associated with advanced clinical stage at diagnosis. 26 Socioeconomic conditions, formal education, and race contribute considerably to this scenario, indicating that breast cancer awareness and access to the public health system are the main aspects that the government has to deal with. 27 These observations are highly relevant for decision makers on public health policies.
In addition to the investment in cancer awareness and healthcare access, identifying people at high risk of developing breast cancer may be another strategy to optimize breast cancer care in young women. 21,28 This approach aims to improve early diagnostic rates, based on increased surveillance and awareness, and to apply risk reduction strategies for high-risk women. 29 A cross-sectional study, using three breast cancer risk assessment tools (Gail, Tyrer-Cuzick, and BRCAPRO) and including 382 women 35-69 years of age in the South region of Brazil, has demonstrated that the tools can identify up to 8.8% of women 35-39 years of age as having high risk for breast cancer. 30 These tools can also identify individuals at risk for Hereditary Breast and Ovarian Cancer (HBOC) syndrome. One study has demonstrated a prevalence of BRCA1 or BRCA2 mutations of 21.5% among 349 unrelated individuals in Brazil with personal or familial high risk for HBOC syndrome. 31 Integrating genetic cancer risk assessment in primary care with a genetic counselor is a potential opportunity for systematically screening high-risk families and individuals, 32,33 which seems to be a rational measure for populations with high breast cancer incidence and prevalence.
Another point we would like to address is the higher incidence of breast cancer among young women in Brazil than in the United States, which is also supported by reports from Brazil and from other developing countries. 27,34,35 Differences in disease incidence among young individuals may be attributable not only to inherited genetic factors but also to modifiable risk factors, such as smoking and alcohol, and to exposure to carcinogens. Although the etiological factors responsible for such differences are unknown, 17,27 chronic exposure to endocrine-disrupting and carcinogenic pesticides may be a concern. Recent studies suggest that exposure to organochlorine pesticides may be associated with a higher incidence of breast cancer. 36 Furthermore, toxicoproteomics analysis has demonstrated that women exposed to pesticides are more prone to develop ER-negative breast cancer. 37 In terms of absolute numbers, Brazil is the largest consumer of pesticides worldwide. 38 Therefore, future studies on population genomics and exposure to environmental carcinogens are necessary to elucidate these factors.
A recent study of the Brazilian population has already demonstrated unfavorable clinicopathological features for breast cancer among young women. 27 However, this study did not analyze how these factors may influence the overall and disease-specific survival. In our study, we were able to explore the impact of molecular subtyping and stage in a long-term follow-up breast cancer cohort. Our data demonstrated that disease-specific survival is clearly affected by late diagnosis as well as TNBC and HER2-enriched breast cancer prevalence among the young population. The diagnosis of regional disease in young women increases the risk of dying because of breast cancer by 3.5 times. Such an observation reinforces the need to establish a program for the identification and surveillance of high-risk populations.
Our study has some limitations. First is the inherent limitation of retrospective studies, as well as the possibility of some missing data. Second is the lack of follow-up data among the FOSP, INCA, and SEER databases. Nevertheless, the follow-up information on the breast cancer patients under the HCRP cohort (n = 1,823) is highly consistent. Additionally, the age and stage distributions of the HCRP data set are highly similar to that of the FOSP and INCA databases, making this cohort a reliable representation of the Brazilian population.
In conclusion, we have demonstrated that the incidence of breast cancer in young women is higher in Brazil. The prevalence of breast cancer in women younger than the screening age is also high. Young patients with breast cancer are more likely to be diagnosed at the regional and distant stages, with a higher prevalence of more aggressive breast cancer subtypes, hence leading to a significant impact on disease-specific survival. Strategies to identify high-risk individuals in our population may be a starting point to drive specific surveillance and screening policies for breast cancer care and awareness for young women.
AUTHORS' DISCLOSURES OF POTENTIAL CONFLICTS OF INTEREST
The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated unless otherwise noted. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc or ascopubs. org/go/authors/author-center. Open Payments is a public database containing information reported by companies about payments made to US-licensed physicians (Open Payments).
No potential conflicts of interest were reported.
|
2021-01-14T06:16:23.344Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "1e3573cff7d7cd4944fbb55d7b64bc3afbcf7117",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1200/go.20.00440",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51220ca71567c9bae05052ae8f07ea8bfed24538",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
23051800
|
pes2o/s2orc
|
v3-fos-license
|
MINIREVIEWS Hepatitis C infection and lymphoproliferative disease: Accidental comorbidities?
Chronic hepatitis C virus (HCV) infection has been associated with liver cancer and cirrhosis, autoimmune disorders such as thyroiditis and mixed cryoglobulinemia, and alterations in immune function and chronic inflammation, both implicated in B-cell lymphoproliferative diseases that may progress to non-Hodgkin lymphoma (NHL). HCV bound to B-cell surface receptors can induce lymphoproliferation, leading to DNA mutations and/or lower antigen response thresholds. These findings and epidemiological reports suggest an association between HCV infection and NHL. We performed a systematic review of the literature to clarify this potential relationship. We searched the English-language literature utilizing Medline, Embase, Paper First, Web of Science, Google Scholar, and the Cochrane Database of Systematic Reviews, with search terms broadly defined to capture discussions of HCV and its relationship with NHL and/or lymphoproliferative diseases. References were screened to further identify relevant studies and literature in the basic sciences. A total of 62 reports discussing the relationship between HCV, NHL, and lymphoproliferative diseases were identified. Epidemiological studies suggest that at least a portion of NHL may be etiologically attributable to HCV, particularly in areas with high HCV prevalence. Studies that showed a lack of association between HCV infection and lymphoma may have been influenced by small sample size, short follow-up periods, and database limitations. The association appears strongest with the B-cell lymphomas relative to other lymphoproliferative diseases. The mechanisms by which chronic HCV infection promotes lymphoproliferative disease remain unclear. Lymphomagenesis is a multifactorial process involving genetic, environmental, and infectious factors. HCV most probably has a role in lymphomagenesis, but further study to clarify the association and underlying mechanisms is warranted.
INTRODUCTION
Chronic hepatitis C virus (HCV) infection is an insidious form of liver disease that can silently progress to cirrhosis over a period of 10-30 years in 20% of cases. Chronic HCV infection has also been associated with extrahepatic manifestations, including membranoproliferative glomerulonephritis, various autoimmune disorders, and idiopathic pulmonary fibrosis.
HCV is a single, positive-strand RNA hepatotropic virus with marked genetic variability [1]. It exhibits lymphotropism, the ability to replicate in peripheral blood mononuclear cells, which may underlie the association between HCV infection and the development of lymphoproliferative disorders [2]. HCV is a possible cause of B-cell dysregulation and B-cell lymphoproliferative disorders that may progress to non-Hodgkin lymphoma (NHL) [3], and it has been identified as a principal cause of mixed cryoglobulinemia [4,5].
Infectious agents have been associated with the development of B-cell lymphoma [1] , and several studies have suggested that infection with HCV is a risk factor for the development of B-cell NHL [1][2][3] .
Oncogenesis is a multifactorial process in which many viruses, including Epstein-Barr virus, human papilloma virus, and human T-cell leukemia/lymphoma virus, play a well-known role [1]. Epidemiological studies have demonstrated an association between HCV infection and NHL [6][7][8][9][10], suggesting that HCV infection plays a role in the development of premalignant and malignant hematologic disorders [2,6,11,12]. Other studies have shown that treatment of NHL with antiviral therapy in patients infected with HCV can lead to regression of lymphoproliferative disease [13,14]. In contrast, other studies have shown a lack of association between HCV infection and the development of lymphoma [15,16]; thus, debate regarding a potential relationship between HCV and NHL continues.
RESEARCH
We searched the English-language literature utilizing Medline, Embase, Paper First, Web of Science, Google Scholar, and the Cochrane Database of Systematic Reviews, with search terms broadly defined to capture discussions of HCV and its relationship with NHL and/or lymphoproliferative diseases. References were screened to further identify relevant studies and literature in the basic sciences.
Findings supporting the association between HCV and NHL
In one study [3] of HCV-positive patients, 13.7% (32/233) had monoclonal gammopathy, compared with an incidence in the general population of 1%. Of the 32 cases, 24 (75%) were benign and were not associated with a malignant disorder; however, the other eight (25%) were associated with a malignant lymphoproliferative disorder or plasma cell disorder, and two additional subjects without monoclonal gammopathy were diagnosed as having a malignant lymphoproliferative disease. In this study, the overall prevalence of malignant lymphoproliferative disease/plasma cell disorder in individuals with HCV infection was 4.3%. Monoclonal gammopathy was found in 14% of the patients with chronic HCV infection and was associated with malignant lymphoproliferative disease in more than 25% of such patients.
Another study [17] showed that the prevalence of HCV among lymphoproliferative disease patients was 7.8% compared with a prevalence of 1.19% in the group with myeloproliferative and myelodysplastic disorders, and 0.64% in the general population. After subtype analysis, the strongest association between HCV and lymphoproliferative disease was found in patients with diffuse large B-cell lymphoma. The authors proposed that an anti-HCV antibody test should be performed routinely in lymphoma patients.
In another study, HCV prevalence was 17.5% (70/400) in lymphoma patients and 5.6% (22/396) in controls. The highest prevalence rates were found among patients with lymphoplasmacytic lymphoma and marginal zone lymphoma (30% and 26.6%, respectively), both of which are considered indolent types of B-cell NHL. Among the two largest subgroups of B-cell NHL, the HCV prevalence was more elevated among patients with large B-cell lymphoma (19.0%), an aggressive form of B-cell NHL, compared to patients with follicular B-cell NHL (13.9%), an indolent form of lymphoma. This study suggests that the association between B-cell NHL and HCV infection is not genotype-specific because no difference in genotype distribution was found between the control group and the study group [18] .
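As a side note, prevalence differences of this kind (70/400 in lymphoma patients versus 22/396 in controls) can be checked with a standard 2×2 analysis. The snippet below only illustrates the calculation on the reported counts; it is not a re-analysis performed in the original study.

```python
from scipy.stats import fisher_exact

# 2x2 table from the reported counts: [HCV-positive, HCV-negative]
# first row = lymphoma patients, second row = controls
table = [[70, 400 - 70],
         [22, 396 - 22]]
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, Fisher exact p = {p_value:.2e}")
```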
In contrast to the findings described above, several other studies have found that genotype 2a was more frequent among patients with monoclonal gammopathy than among those without [19,20]. Furthermore, genotypes 1b and 2a have been suggested to be risk factors for developing lymphoma in HCV patients [21].
Other studies have compared the prevalence of HCV infection among patients with different types of lymphoproliferative disorders, including Hodgkin's disease, acute lymphoblastic leukemia, multiple myeloma, T-cell NHL and B-cell NHL. A total of 537 patients were tested for HCV infection and compared to the general population. Among all lymphoproliferative disorders, the prevalence and the relative risk (RR) of being infected with HCV was increased only in those with B-cell NHL and, specifically, in the subgroup of immunocytomas, whereas the histologic types of other patients were only occasionally associated with HCV infection [22] . Likewise, a recent meta-analysis of 15 case control studies and three prospective studies found a higher RR for NHL among HCV-positive individuals and the etiologic fraction of NHL attributable to HCV was more than 10% in areas with high HCV prevalence [23] .
Finally, in a seminal study, response to interferon therapy (interferon α and ribavirin, alone and in combination) was compared for nine patients having splenic lymphoma with villous lymphocytes and HCV infection versus a control group of six similarly treated patients having splenic lymphoma with villous lymphocyte who tested negative for HCV infection. All nine patients who tested positive for HCV had a remission of their lymphoma after the loss of detectable HCV RNA in the blood, and one patient had a relapse when HCV RNA became detectable again. In contrast, none of the six HCV-negative patients had a response to interferon therapy [14] . Similar findings were published in a subsequent study [24] . These studies comprise one of the strongest arguments supporting the association of HCV infection with NHL.
Mechanism of pathogenesis
The mechanism by which chronic HCV infection promotes lymphoproliferative disease remains unclear. Some studies have shown that treatment of HCV infection with viral elimination methods reduces the incidence of malignant lymphoma in patients infected with HCV [14, [25][26][27] . Over the years, several theories involving different mechanisms have been proposed. One such idea is that hypermutation induced by HCV infection of the immunoglobulin genes in B cells can be the cause of lymphomagenesis [28,29] . Another proposed theory is that HCV infection-induced hypermutation causes genetic instability and chromosomal aberrations, possibly resulting in neoplastic transformation [30] . Soluble interleukin 2 receptor (sIL-2R) has been shown to take part in some cancers, including T-cell lymphoma, nasopharyngeal carcinoma, lung and breast cancer, epithelial ovarian cancer, renal cell carcinoma, and hepatocellular carcinoma in Egyptian patients [31][32][33][34][35][36][37] . In splenocytes of HCV transgenic mice that express the full HCV genome in B cells (RzCD19Cre mice), the level of IL-2R was higher than levels in splenocytes derived from mice that express Cre under the transcriptional control of the B lineage-restricted CD19 gene. (CD19Cre mice). Furthermore, serum concentrations of sIL-2R in RzCD19Cre mice that developed B-cell lymphomas were higher. Serum sIL-2Rα levels above 1000 pg/mL were highly suggestive for the development of B-cell lymphomas in RzCD19Cre mice [38] .
Chronic antigenic stimulation is thought to be important in the pathogenesis of HCV-related B-cell NHL .The immune response that occurs in HCV-positive patients against one HCV antigen, the E2 envelope glycoprotein (E2), suggests that the restricted V gene observed in lymphoproliferative disorders may be linked to this antigen. V-region genes from human anti-E2 antibodies derived from B cells of HCV-infected individuals show a similar V gene bias to that observed in HCV-associated mixed cryoglobulinemia and NHL [39,40] . These studies implicate the specific immune response against the E2 antigen in the pathogenesis of B-cell lymphoproliferative diseases and, potentially, in HCV-associated lymphomas. The E2 protein may be the antigen involved in driving B cell proliferation. B cells bind E2 via their specific B-cell receptor and could engage both B-cell receptor and CD19/CD21/ CD81 (tetraspanin) signaling complexes simultaneously [41,42] . Furthermore, the HCV core protein induces IL-10 expression in mouse splenocytes [43] and IL-10 upregulates expression of IL-2R (Tac/CD25) in normal and leukemic B lymphocytes [44] . Therefore, IL-10 and IL-2R might induce B cell transformation and B-cell NHL. The induction of IL-2, IL-10, and Bcl-2 by the HCV core protein and the induction of IL-12 by E2 have been proposed as mechanisms that play a major role in lymphomagenesis [43] .
Striking geographic differences in the prevalence of lymphoproliferative disease have been found, suggesting that genetic and/or environmental factors are also involved in the pathogenesis of this disorder. The prevalence is higher in Italy, the United States, Brazil, and Japan, but not in other countries where the prevalence of HCV infection in the general population is lower, such as in Canada. A total of 100 patients with B-cell lymphoma (10 high grade 46 intermediate grade 44 low grade) and 100 controls with non-hematological malignancies were studied in North America. None of the controls or lymphoma patients had antibodies to HCV, suggesting that HCV is unlikely to play an important role in the pathogenesis of B-cell lymphoma in North America. In contrast, a study of another general population was performed in patients of second-generation Danish-Swedish origin, and it was found that the presence of HCV infection was a risk factor for the development of NHL despite the low prevalence of HCV infection in this population [10,18,[45][46][47][48][49][50][51] . Factors that can contribute to geographic differences include the prevalence of HCV infection in the general population-HCV appears to be associated with B-cell NHL mainly in countries such as Italy, where HCV is highly prevalent [52] ; HCV genotype [53] ; and the type of virology testing used, since the detection of HCV antibodies without testing for HCV RNA may underestimate the true rate of HCV infection in NHL patients [54] .
Findings against the association between HCV and NLH
In addition to the findings described above, data have been collected in recent years that argue against an association between HCV infection and lymphoproliferative disorders. One study from India showed that the incidence of HCV infection was 1% in patients with NHL and 0% in patients with chronic lymphocytic leukemia [55]. In another case-controlled study, from Pakistan, the prevalence of HCV infection was not significantly different in 143 lymphoma patients compared with 29 controls (5% vs 3.4%, respectively) [56]. A prospective case-controlled study assessed whether HCV infection preceded the development of NHL by examining the serum of 95 subjects with NHL diagnosed a mean of 21 years after screening for anti-HCV antibodies and HCV RNA. In this study, samples from four of 95 case subjects and one of 95 matched control subjects had repeatedly reactive HCV enzyme-linked immunosorbent assay results. None of these cases, however, were confirmed HCV-seropositive by third-generation strip immunoblot assay (RIBA-3). Furthermore, none of the serum samples from case subjects were found to be HCV RNA-positive by real-time PCR (RT-PCR). Based on this study, HCV infection acquired during early adulthood was determined not to be a risk factor for subsequent B-cell neoplasia over the course of a patient's lifetime [49].
Another study exploring the association between HCV and HBV infections with lymphoproliferative diseases [18] showed no significant association between the presence of anti-HCV antibodies and the risk of developing NHL, multiple myeloma, or Hodgkin's lymphoma. This study did find that chronic HBV infection may increase the risk of lymphoid malignancies.
In a prospective study of 2162 patients with HCV infection in Japan, the incidence of NHL was not considerably elevated (only four of the 2162 patients with HCV had NHL); however, the median follow-up time was less than 6 years. This short follow-up period may explain the low rate of NHL in this study since NHL may typically develop only after long-standing HCV infection [57] . In another study of 2,533 female patients infected with HCV, an association between HCV infection and NHL was not observed [58] .
Several studies have failed to find an association between HCV infection and lymphoma subtypes such as mucosa-associated lymphoid tissue lymphoma of the stomach [48,[59][60][61][62] . Furthermore, a study of HIV-positive patients with concurrent HCV infection revealed that HCV infection was not associated with an increased risk of developing NHL. In addition, no relationship between NHL risk and anti-HBcAg or anti-HBsAg was found [19] .
CONCLUSION
In conclusion, lymphomagenesis is a multifactorial process in which genetic, environmental, and infectious factors can all be involved.
The above review described some studies that found a strong association between HCV infection and the development of lymphoma, particularly non-Hodgkin lymphoma. One of the reasons for this strong association between HCV and lymphoproliferative disease may be the overrepresentation of lymphoplasmacytoid lymphomas/immunocytomas relative to other B-NHL histotypes in several studies, which is likely the result of a selection bias towards the inclusion of patients with type Ⅱ MC in these centers; this might account for the apparently high prevalence of HCV infection in B-NHL patients in these series [63]. The studies presented here that showed a lack of association between HCV infection and lymphoma may have been influenced by small sample size, short follow-up periods, and geographic variation. We believe this field of study would benefit from further prospective trials.
|
2018-04-03T03:00:52.765Z
|
2014-11-21T00:00:00.000
|
{
"year": 2014,
"sha1": "337c05ed184848aee1a81a722ba619cb69e6a00e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v20.i43.16197",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "ac0c4ab9ce65905c9e3c7040ed3fd977b6750b89",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
13820444
|
pes2o/s2orc
|
v3-fos-license
|
How to survey classical swine fever in wild boar (Sus scrofa) after the completion of oral vaccination? Chasing away the ghost of infection at different spatial scales
Oral mass vaccination (OMV) is considered as an efficient strategy for controlling classical swine fever (CSF) in wild boar. After the completion of vaccination, the presence of antibodies in 6–12 month-old hunted wild boars was expected to reflect a recent CSF circulation. Nevertheless, antibodies could also correspond to the long-lasting of maternal antibodies. This paper relates an experience of surveillance which lasted 4 years after the completion of OMV in a formerly vaccinated area, in north-eastern France (2010–2014). First, we conducted a retrospective analysis of the serological data collected in 6–12 month-old hunted wild boars from 2010 up to 2013, using a spatial Bayesian model accounting for hunting data autocorrelation and heterogeneity. At the level of the whole area, seroprevalence in juvenile boars decreased from 28% in 2010–2011 down to 1% in 2012–2013, but remained locally high (above 5%). The model revealed the existence of one particular seroprevalence hot-spot where a longitudinal survey of marked animals was conducted in 2013–2014, for deciphering the origin of antibodies. Eleven out of 107 captured piglets were seropositive when 3–4 months-old, but their antibody titres progressively decreased until 6–7 months of age. These results suggest piglets were carrying maternal antibodies, few of them carrying maternal antibodies lasting until the hunting season. Our study shows that OMV may generate confusion in the CSF surveillance several years after the completion of vaccination. We recommend using quantitative serological tools, hunting data modelling and capture approaches for better interpreting serological results after vaccination completion. Surveillance perspectives are further discussed. Electronic supplementary material The online version of this article (doi:10.1186/s13567-015-0289-6) contains supplementary material, which is available to authorized users.
Introduction
Classical Swine Fever (CSF) is one of the diseases entailing strong economic impact on the pig industry in the European Communities [1]. Eradicated from domestic pigs in Western Europe, CSF has remained endemic in some populations of wild boar (Sus scrofa) for more than 20 years. Thus, free ranging populations of European wild boar are regarded as potential reservoirs of CSF [2,3] and their monitoring and management is compulsory in the European Communities (Directive 2001/89/EC). Oral immunisation has appeared as an effective management strategy for controlling CSF outbreaks in wild boar in contrast with conventional control measures (e.g., increase of hunting pressure and hunting of young animals) [4][5][6]. The live C-strain, an attenuated CSF virus, has been repeatedly delivered by mean of baits to wild boars pre-baited on feeding stations [4]. In facilities, a satisfying level of neutralizing antibodies, which may last lifelong, was observed already after a single vaccination dose was orally administrated [7]. Repeated vaccination treatments (in facilities) increased the individual concentration of neutralizing antibodies [8] and increased high herd immunity in natural populations (such as detailed in [5] or [6]). Nevertheless, re-emergence of CSF was sometimes reported after long periods of apparent remission during which the disease was supposed to be eradicated [3], which has pinpointed the importance of maintaining the monitoring after the implementation of oral mass vaccination (OMV).
In the absence of vaccination, both direct observation of infection (through RT-PCR and virus isolation) and indirect through the detection of antibodies in young wild boars are good indicators of the recent circulation of CSF. Seroprevalence in juveniles from 6 to 12 monthsold is particularly useful when using hunting data since the viremia is generally short while wild boar recovering from infection will keep antibodies for life [3,9]. However, this indicator is compromised during OMV based on an attenuated live virus not modified since diagnostic methods cannot differentiate antibodies targeting the wild strain from antibodies targeting the vaccine strain [10]. During the conduction of OMV, the surveillance is therefore only based on virus detection results, but viroprevalence is very low [3,5,11] and RT-PCR methods have to be adapted to distinguish between vaccine and wild strains to avoid inconclusive results in the presence of the vaccine-strain [12,13]. Due to the confusing effect of OMV, the absence of viral detection for a long period (i.e., up to 1 year) is recommended before the completion of OMV [3]. During the 3 years after the completion of OMV, the examination of antibodies in 6-12 monthold wild boars is recommended for monitoring the risk of CSF virus persistence or re-emergence [14], with a particular attention given to the hot spot areas exhibiting seroprevalence above 5% [3]. Nevertheless, until now no study has detailed the post-vaccination monitoring of serological responses. In particular no study has yet discussed the confusing effect of repeated vaccination treatments on the performance of a surveillance design based on serological results. Different mechanisms may complicate a monitoring based on seroprevalence in 6-12 month-old wild boar as an indicator of recent CSF infection. Maternal antibodies transmitted by immune sow to their offspring generally do not persist more than 3 months post-partum [15] while juvenile wild boar are generally not hunted before 6 months of age [3,14]. However, 70% of adult wild boars remained immune during the implementation of OMV [6,16] and it has been experimentally demonstrated that maternal antibodies may persist up to 1 year in some piglets when the level of neutralizing antibodies is very high in sows [17]. Even though this phenomenon is not very frequent in natural conditions (unknown percentage), it is reasonable to assess long-lasting maternal antibodies may happen at the level of a population comprising at least 20 000 wild boars. The vaccine strains rarely survive for more than a few days at room temperature [18], so wild boar immunization with baits remaining in the environment after OMV was stopped seems a less probable scenario. Nevertheless, we did not fully reject that hypothesis since millions of baits were delivered at a large scale and exceptional survival of vaccine cannot be excluded [16]. Thus antibodies in 6-12 month-old hunted wild boar after the completion of OMV may finally correspond to three situations: (1) wild boar infection and recovery during the current year, (2) immune sows having transmitted a high amount of antibodies to their offspring through the colostrum leading to exceptional seropositive results in piglets over 6 months of age, or (3) piglets having eaten a viable vaccine-bait remaining in the environment after vaccination was stopped.
In the present study we aimed at improving CSF surveillance in wild boar after the completion of OMV. We focused our study in the Vosges du Nord area, northeastern France, where two consecutive CSF outbreaks had been previously described during the 1990s and the 2000s [19][20][21] and where OMV using C-strain livevaccine had been implemented from August 2004 up to June 2010 [6]. We adopted a two step surveillance approach combining two spatial scales of data collection. First, we fitted a "disease" mapping model of the serological data collected in 6-12 month-old hunted wild boars during the 3 years following the completion of OMV for identifying the hot spots of seroprevalence in that age class. Secondly, repeated captures of wild boar were organized within the identified hot spot areas for examining the individual kinetics of neutralizing antibodies and CSF infection in 2-18 month juvenile wild boar and in adult sows (i.e., over 24 months of age) from the same social groups. We were able to discuss the origin of antibodies in juvenile wild boars after the completion of OMV and to propose ad hoc perspectives for CSF surveillance.
Study area
The Vosges du Nord area is located within the Moselle and the Bas-Rhin administrative departments, northeastern France (48°50N and 7°30E) [19,20]. The study area covers 3000 km 2 comprising 1200 km 2 of forests and is uninterrupted with the Palatinate forest through Germany ( Figure 1). CSF virus has been reported in either hunted or wild boars found dead from April 2003 up to May 2007 [6]. Vaccination was implemented from August 2004 according to the process recommended by Kaden et al. [4] using the Riems C-live-vaccine strain included in baits according to the field process detailed by Calenge and Rossi [16]. No virus positive wild boars had been observed since May 2007, the completion of the vaccination strategy was thus adopted by June 2010 (i.e., 3 years after the last viropositive result). Exhaustive monitoring of hunted and wild boars found dead was requested by the French ministry of agriculture from July 2010 up to October 2013, following the recommendations from European and National food safety agencies [3,22].
Source of diagnostic samples
In the study area, every wild boar shot or found dead had been examined for serological and virological detection during the 3 years after the completion of vaccination, i.e., from July 2010 up to June 2013 [23,24]. We more particularly focused on the occurrence of seropositivity in 6-12 month-old wild boars shot by hunters after the stop of OMV, i.e., between September 2010 up to June 2013. The data set was split into three time periods corresponding to the three successive generations of piglets born after vaccination (2010-2011, 2011-2012, 2012-2013). Each year was defined from October of the year "N" to June of the following year "N + 1", because during that period animals less than 12 months-old were easily distinguished from older individuals according to their body mass and shape [19], while in the summer (July-August N + 1) a confusion might happen between animals more or less than 12 months-old [16].
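To illustrate the year-splitting rule described above, the helper below assigns a sample date to one of the October-to-June "hunting years"; the function name and the handling of summer samples (returning None, since the paper treats the July-August period separately because of age-class confusion) are illustrative choices, not the authors' code.

```python
from datetime import date

def hunting_year(sample_date: date):
    """Assign a sample to a 'hunting year' running from October of year N to June of N+1.
    Samples from the summer months fall outside this window and return None here."""
    if sample_date.month >= 10:
        return f"{sample_date.year}-{sample_date.year + 1}"
    if sample_date.month <= 6:
        return f"{sample_date.year - 1}-{sample_date.year}"
    return None

print(hunting_year(date(2011, 11, 15)))   # '2011-2012'
print(hunting_year(date(2012, 3, 2)))     # '2011-2012'
print(hunting_year(date(2012, 8, 1)))     # None (summer, outside the October-June window)
```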
Diagnosis
Detection of antibodies to CSFV was carried out using commercially available ELISA kits (Idexx CSF Ab) according to the manufacturer's instructions by two local laboratories. All the sera with positive or doubtful results using ELISA collected from December 2012 up to June 2013 in young hunted wild boars were subsequently analysed using a differential virus neutralization test (VNT) in order to confirm the specificity of the ELISA results and to assess the titre of neutralizing antibodies. Specific neutralizing antibodies (NAb) against the CSFV Bas-Rhin strain versus the Aveyron strain of the border disease (BD) virus (Pestivirus of small ruminants) were assayed using the VNT on PK15 cells according to the OIE's manual of diagnostic tests [25] by the National Reference Laboratory.
"Disease" risk mapping
"Disease" risk mapping for each time period was based on the serological results collected in the Vosges du Nord area through hunting at the level of 331 French administrative territories (village or town) that we call hereafter municipalities. The serological status was likely to be similar between wild boars inhabiting the same municipality by sharing the same antigenic background (i.e., virus circulation or vaccination treatment) and even belonging to the same family groups (and maternal antibody source). The serological status was also likely to be similar between wild boars inhabiting neighbouring municipalities, either because CSF is a contagious disease or because individual wild boars may be shot in a given municipality while the largest part of its home range may be included in a neighbouring one [26]. The occurrence of similar serological status within one municipality generates "heterogeneity", while the occurrence of similar serological results between neighbouring municipalities generates "autocorrelation". The geographical representations of raw seroprevalence (without data modelling) at the scale of administrative units could lead to misinterpretations of the observed clusters and consecutive inaccurate management decisions [27]. In order to avoid these problems, we took into account the probable heterogeneity and autocorrelation of serological results using hierarchical spatial Bayesian models [28,29] (model detailed in Additional file 1). Data were encoded as "0" and "1" depending on their ELISA results; only positive or negative results were considered (i.e., inconclusive or doubtful results were removed). Seroprevalence per municipality was modelled year by year according to a hierarchical Bayesian approach proposed by Abrial et al. [30]. The probability to observe seropositive results in a given municipality "i" was modelled according to a Poisson distribution. The variation of data was modelled according to a local spatial component (U i ) accounting for seroprevalence similarities between neighbour municipalities (i.e., spatially structured variation or autocorrelation) and a global one (H i ) accounting for seroprevalence unstructured heterogeneity between municipalities. The conservation of the spatial pattern from year-year was also tested by considering the average seroprevalence predicted by the model retained for the previous year as a potential factor of seroprevalence within the model for the current year (Risk i ). We retained for each year the most parsimonious model having the smaller Deviance Information Criterion (DIC; [31]). Simulations were calculated using the BRugs package [32], which constitutes the interface between the software OpenBugs and the R statistical environment (R Development Core Team [33]
Capture-mark-recapture study Capture and sampling process
In order to maximize the chances of capturing seropositive piglets, we targeted the areas exhibiting the highest seroprevalence in young wild boars (i.e., the hot spot municipalities that were identified by the hierarchical spatial Bayesian models). The capture area spread over 3000 hectares mainly comprising the municipality of Baerenthal and the neighbour ones (Figure 1). Physical captures were performed from the 2 nd of July up to the 30 th of August 2013. Twelve mobile traps [35] were deployed and baited daily from the 4 th June 2013 until the end of August. Each animal was initially identified using numbered ear-tags, and animals captured at the same time and place were marked the same way (colour and shape of tags) as a first indication of the family group. The assignation to family groups was further confirmed using video-cameras on feeding capture sites (such as described by Rossi et al. [36]). Each wild boar was bled at each capture, with a limit of one sampling per week and individual in order to limit an animal's stress at handling. Ethic and authorization rules were the same as described by Rossi et al. [9,36]. An intensive information campaign (i.e., using mailings, local newspapers, meetings, telephone calls) was conducted in order to encourage hunters from the whole Vosges du Nord area to notice and sample systematically marked animals from September 2013 up to December 2014.
Samples and diagnosis
Serological responses were measured in sera (collected with dry tubes centrifuged within a few hours after capture), and virological examinations were performed on whole blood from captured animals (using tubes with EDTA) or on spleen in hunted ones. Sample sera were first tested by ELISA (Idexx CSF Ab) according to the manufacturer's instructions. Then, positive or doubtful samples were analyzed by differential VNT. Two CSF virus strains were considered here for VNT: the Bas-Rhin strain (i.e., the wild type of the virus isolated from 2003 up to 2007 in the study area [21]) and the Alfort strain (i.e., genetically related to the vaccine C-strain). RNA was purified from whole blood using the RNeasy mini kit (Qiagen, Courtaboeuf, France). The CSF genome was first amplified by real-time RT-PCR (rRT-PCR) using a commercial kit (LSI VetMAX™ classical swine fever by LSI-Life Technologies or Adiavet CSF Real Time by Adiagene-Biomerieux) according to the manufacturer's instructions. To confirm that piglets were viremic at the time of capture (i.e., carrying viral particles in their blood), virus isolation was performed on the PCR-positive samples, on PK15 cells, according to the OIE's manual of diagnostic tests [25].
Surveillance of hunted wild boar Raw serological data
From July 2010 to June 2013, 9331 6-12 month-old hunted wild boars were sampled and exhibited a conclusive serological ELISA result. On average, the seroprevalence dramatically decreased from 27.8% in July-September 2010 (±1.0%) down to 1% in April-June 2013 (±0.8%). However, raw seroprevalence remained high (i.e., with local peaks above 5%) at the level of some municipalities (Figure 2). In 2012-2013, 46 out of 56 (~80%) sera having a positive doubtful result to the ELISA and subsequently tested by VNT had detectable levels of neutralizing antibodies targeting specifically the CSF not the BD virus. The remaining 20% sera with ELISA positive or doubtful results exhibited undetectable levels of neutralizing antibodies against both CSF and BD viruses, possibly as a result of low CSF antibody titres or unspecific ELISA results. Table 1. During the whole study period, the model best fitting the observed data included the local autocorrelation component "U i " but not the global one "H i " (i.e., the DIC of model M3 was lower than the DIC of model M4 in Table 1). In 2012-2013, the best model also retained the effect of the seroprevalence predicted by the model of the previous year "Risk i " (i.e., the DIC of model M5 was the lowest in Table 1). We represented the predicted seroprevalence according to the best model for each year (Figure 3). According to these predictions, only some municipalities exhibited higher seroprevalence compared to the average seroprevalence (i.e., white surrounded municipalities in Figure 3). The Baerenthal municipality exhibited a higher risk compared to other municipalities during the whole study period (Figure 3). adult females and piglets, nine groups comprising only subadults of both sexes, one single adult sow without piglet, and one single adult male. From the 23 rd July 2013 up to the 20 th December 2014, 51/134 marked animals were found at death time (38%): 47/51 had been shot during hunting, 2/51 had been found sick, and 2/51 had been found dead after a road accident. Dead animals comprised 2/8 adults, 12/19 subadults and 37/104 piglets. The place of death was confirmed for 49/51 individuals: 38/49 in the municipality of capture, 9/49 in a neighbouring municipality (i.e., less than 5 km from their capture site) and 2/49 up to 10 km from their captured site.
Serological and virological results
None of the captured wild boars were found positive by rRT-PCR, but 12/134 animals exhibited at least one positive ELISA result: 11 piglets (11/107) belonging to five groups, and one adult sow older than 30 months belonging to the same group as three of the seropositive piglets (Table 2). The two sick individuals that died during summer 2013 were necropsied and exhibited general weakness associated with respiratory distress and heavy worm loads (genus Metastrongylus) in the lungs (bronchial tubes), but both were negative by serology and PCR. The antibody kinetics of the twelve seropositive individuals are detailed in Table 2.
The neutralizing antibody titres observed in piglets were low (20) compared to the adult sow (360). Among the piglets recaptured from July to August 2013, either ELISA results became negative or VNT antibody titres decreased. Among the remaining piglets, all but one could be sampled during hunting; of these, only one, shot on 10 November 2013, showed a positive ELISA result at around 6-7 months of age and a low neutralizing antibody titre (7.5). No other seropositive result was reported in the marked animals until December 2014 (Table 2).
Discussion
At a global level, the average seroprevalence observed in 6-12 month-old hunted wild boars continuously decreased during the 3 years following the completion of OMV. However, at a local level, seroprevalence peaks above 5% raised the question of a residual circulation of CSF or a related virus. Most ELISA-positive results were associated with detectable titres of CSF neutralizing antibodies (~80%), confirming that positive ELISA results corresponded to antibodies targeting the CSF virus. An approach conducted at two different spatial levels (i.e., inter- and intra-municipality) was adopted to clarify the situation. At the inter-municipality level, the hierarchical Bayesian approach was useful before mapping seroprevalence [27]: the maps predicted by the models (Figure 3) were indeed quite different from the raw seroprevalence maps (Figure 2). The spatial modelling of hunting data accounted for an important autocorrelation of seroprevalence between adjacent municipalities at the scale of the whole area (i.e., the local spatial component "U i " led to a dramatic decrease in the models' DIC), especially during the first two years. Autocorrelation could correspond to a contagious phenomenon, i.e., the active circulation of CSF virus between municipalities. The capture study showed that a significant proportion of wild boars captured in a given municipality were shot elsewhere (~23%), mainly in neighbouring municipalities. The poor match between municipality boundaries and wild boar home ranges could also contribute to the autocorrelation of seroprevalence. We did not detect an important heterogeneity at a global scale (i.e., the global spatial component "H i " led to no improvement in the models' DIC). Nevertheless, some isolated municipalities located in the heart of the forested area exhibited a higher seroprevalence than others, and this spatial structure was largely conserved from 2011-2012 to 2012-2013 (i.e., the Baerenthal municipality being more at risk during the 3 years of the study). Identifying seroprevalence hot spots was useful for targeting surveillance efforts, but could not answer the question of the origin of antibodies in 6-12 month-old wild boars (i.e., CSF infection, vaccination, or persistence of maternal antibodies). We therefore used a longitudinal approach at a finer scale to clarify the epidemiological situation.
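As an illustration of the model-selection criterion used here, the following minimal Python sketch (not the authors' code; function and variable names are hypothetical) computes the DIC from posterior draws, with lower values indicating a better-supported model:

```python
import numpy as np

def dic(loglik_draws, loglik_at_posterior_mean):
    """Deviance Information Criterion: DIC = D_bar + pD (lower is better).

    loglik_draws: array of total log-likelihoods, one per posterior draw.
    loglik_at_posterior_mean: log-likelihood evaluated at the posterior
    mean of the parameters.
    """
    d_bar = -2.0 * np.mean(loglik_draws)       # posterior mean deviance
    d_hat = -2.0 * loglik_at_posterior_mean    # deviance at the posterior mean
    p_d = d_bar - d_hat                        # effective number of parameters
    return d_bar + p_d

# Hypothetical comparison of two spatial models, e.g. with ("M3") and
# without ("M4") the local autocorrelation component U_i:
# dic_m3 = dic(loglik_m3_draws, loglik_m3_at_mean)
# dic_m4 = dic(loglik_m4_draws, loglik_m4_at_mean)
# best_model = "M3" if dic_m3 < dic_m4 else "M4"
```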
The capture-mark-recapture survey performed at the level of the seroprevalence hot spot (i.e., Baerenthal and neighbouring municipalities) showed no virus-positive animal. This result could correspond either to CSF eradication or to the low number of sampled wild boars (134 individuals, allowing the detection of a viroprevalence of about 3%); we therefore looked at the semi-quantitative serological results. During July-August 2013, about 10% of the 3-5 month-old piglets, and one out of the seven adult sows above 30 months old (i.e., possibly born before the completion of vaccination), were found seropositive. Seropositive piglets had much lower neutralizing antibody titres than the seropositive adult sow, and they progressively lost their antibodies from July 2013 to November 2013 (Table 2). A single individual out of the eleven piglets seropositive at 3 months of age was still seropositive during hunting, i.e., when 6-7 months old. This pattern is consistent with previous experiments conducted in pigs or wild boar showing maternal antibody transmission by immune sows to their offspring several years after infection challenge [15], and with the rare occurrence of long-lasting maternal antibodies in animals beyond their 6th month [17]. An active immunization of piglets, caused by a persisting natural infection, would have induced higher titres of neutralizing antibodies and the subsequent persistence of antibodies [37]. Our capture-mark-recapture study thus suggests that the local peaks of seroprevalence (>5%) observed at the level of municipalities were generated by the survival of immunized sows more than 3 years after the completion of vaccination. The detection of antibodies in young animals above 6 months of age possibly occurred because some seropositive sows had been repeatedly vaccinated before June 2010 and conserved high antibody titres many years after the last vaccination campaign. In that area, about 600 000 vaccine baits had been delivered per year for 6 years [6,16], and baits were mainly consumed by adult animals [38], so one may assume that the intensity of the vaccination treatment was the main cause of the high antibody titres in the sows' offspring. The progressive disappearance of vaccinated sows due to natural mortality and hunting, together with the probable decrease of antibody titres from year to year, could finally account for the progressive decrease of juvenile seroprevalence over time.
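The ~3% detection limit quoted above follows from a standard sample-size calculation: with n all-negative samples, the smallest prevalence detectable with confidence c is p = 1 - (1 - c)^(1/n). A minimal sketch (the confidence level is an assumption, as the source does not state it; a perfect test and a large population are also assumed):

```python
def min_detectable_prevalence(n, confidence=0.95):
    """Smallest true prevalence for which at least one positive would be
    expected among n sampled animals with the given probability."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

# For the 134 marked wild boars:
print(round(min_detectable_prevalence(134, 0.95) * 100, 1))  # ~2.2%
print(round(min_detectable_prevalence(134, 0.98) * 100, 1))  # ~2.9%, close to the ~3% quoted
```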
In the present paper we investigated the reasons for the presence of seropositive juvenile wild boars after the completion of oral mass vaccination (OMV), combining large-scale hunting data and a local longitudinal survey of marked animals. Because of autocorrelation in the hunting data, a hierarchical Bayesian approach was implemented to decipher the spatial structure of seroprevalence. The models revealed the presence of localized seroprevalence hot spots at the level of some municipalities. However, only the longitudinal survey of marked individuals from the hot-spot areas could disentangle the possible sources of antibodies in 6-12 month-old wild boars. In the present case, the progressive disappearance of antibodies in repeatedly captured piglets suggests that hyper-immunized sows having transmitted maternal antibodies to their offspring could explain seropositivity in young wild boars during the 4 years following the completion of OMV. Our approach offers an objective way to interpret surveillance data and to adapt the surveillance process after the completion of OMV. This study also pinpoints the difficulty of interpreting seropositivity in young wild boars post-vaccination by simply examining transversal and qualitative serological results. Contrary to what was initially expected by European experts [3,14], the "ghost of vaccination" may haunt the antibody response of 6-12 month-old wild boars several years after the completion of vaccination, which strongly interferes with the efficacy of surveillance based on hunting data.
In that context, we recommend combining different surveillance approaches post-vaccination. First, the passive surveillance of dead or sick animals should be strengthened (i.e., reinforced collection of carcasses), since CSF re-emergence is expected to cause morbidity and mortality in naïve populations [3]. Nevertheless, the low probability of carcass detection in dense forested areas and the presence of scavengers sometimes limit the efficacy of passive surveillance for early detection ([23], Rossi, unpublished observations). Thus, we also recommend maintaining an active surveillance based on young wild boar serology, combined with data modelling to identify seroprevalence hot spots above 5%. When such hot spots are detected, we recommend targeted longitudinal surveys using quantitative serology (VNT) and virus genome detection. In the future, the use of marker vaccines and companion tests could also be considered as a possible tool for better disentangling the origin of antibodies during and after vaccination [39].
The modified TIME-H scoring system, a versatile tool in wound management practice: a preliminary report
Background and Aims: The concept of WBP (wound bed preparation) has revolutionized the way to diagnose and correctly identify the best therapeutic path for the widespread clinical problem of difficult wounds. Starting from the modified TIME-H, the authors conducted a preliminary study with the aim of assessing the impact of skin and soft tissue lesions on the surgical patient. Materials and Methods: 38 patients were preliminarily evaluated. The patients were classified according to the lesion examined, in particular distinguishing those with an infectious or vascular etiology (SSTIs) from patients with surgical site lesions (SSI), and assigned to one of three prognosis categories: favorable (healing expected within 12 weeks) (0-3A, 0-1B), intermediate (healing expected over 12 weeks) (4-6A, 2-4B) and uncertain healing (7-8A, 5-8B). Results: At the end of the one-year observation period, the authors established the healing prediction rate among the studied lesions: surgical site lesions presented the highest predictivity (88%), followed by mixed etiology (72%) and infectious/vascular injuries (63%). Conclusion: This modified TIME-H can be considered a versatile and useful scoring tool that should be used in daily clinical practice for the study and treatment of chronic wound diseases.
Background and Aims
Lesions of the skin and soft tissues, especially those with an infectious etiology (SSTIs) and those affecting the surgical site (SSI), represent the most frequent postoperative complication in the hospital environment (1). Several studies and meta-analyses have tried to quantify the extent of the problem (2): it is estimated that approximately 2.8% to 7% of surgical patients develop SSTIs (3). In recent years technological innovation has offered new and powerful prospects in the treatment of skin lesions with an infectious etiology. Despite the introduction of therapies based on new antibiotic molecules, less and less invasive surgery and advanced dressings, a simple, replicable decision-making algorithm to guide the clinician from the preparation of the wound bed to complete healing is still lacking. The morphological evaluation of the lesion and the systemic health conditions of the patient represent the first step in establishing a correct healing prognosis. The concept of WBP (wound bed preparation) has revolutionized the way to diagnose and correctly identify the best therapeutic path for the widespread clinical problem of difficult wounds. It is precisely in this perspective that the TIME protocol was introduced, which stands for "Tissue, Inflammation/Infection, Moisture, Edge/Epithelialisation", with the aim of promoting the acceleration of the wound repair process (4). As widely debated in the literature, especially by Ligresti et al. (5), the TIME protocol had some limitations in its field of application, given that it was unable to provide a fundamental answer to patients suffering from chronic ulcers: the quantification of the prognosis in terms of healing time. The TIME-H scoring system was thus proposed, which included a healing score based on the patient's general health and topical wound conditions. A healing score was calculated that indicated the expected time of wound closure, in order to elaborate a personalized therapy protocol. The system involved assigning a numerical value to each parameter, and it has been modified several times in the literature, as proposed by Conduit et al. (6). Starting from the modified TIME-H proposed by Lim et al. (7), the authors conducted a preliminary study with the aim of assessing the impact of skin and soft tissue lesions on the surgical patient.
Materials and Methods
This preliminary study was conducted at the Surgical Division and at the local referral Center of Wound Care of Parma Hospital (Parma, Italy). Starting from the modified TIME-H score (7), 38 patients were preliminarily evaluated. The patients were classified according to the lesion examined, in particular distinguishing those with an infectious or vascular etiology (SSTIs) from patients with surgical site lesions (SSI), and assigned to one of three prognosis categories: favorable (healing expected within 12 weeks) (0-3A, 0-1B), intermediate (healing expected over 12 weeks) (4-6A, 2-4B) and uncertain healing (7-8A, 5-8B). This work was approved by the local Ethics Committee of Emilia Romagna (AVEN) and all the patients gave their informed consent before enrollment. The authors included patients with at least one chronic lesion (present ≥ 3 months) or a dehiscence of a surgical wound that appeared in the immediate post-operative period (21 days), patients available to undergo the subsequent follow-up of the study, and patients capable of providing informed consent. The authors excluded patients undergoing surgical revision of the lesion, patients not available to undergo the subsequent follow-up of the study, and patients unable to provide informed consent. Once a score was assigned to a patient, the expected result was documented in a database. The patients were then treated following a therapeutic protocol based on an appropriate standard for etiology and wound conditions, choosing in this phase between traditional dressings and advanced dressings (according to the international, national and Emilia-Romagna Region guidelines in both cases). At each subsequent follow-up the same lesion was re-evaluated and the TIME-H score, based on the state of the current wound, was then updated. For the purpose of this study, patient follow-up was continued until complete healing of the wound or the end of the study period, whichever occurred first. The authors also collected other information for each patient, always based on the TIME-H score, including the percentage of healed wounds, the duration (in weeks) of wound healing, the duration (in weeks) of the hospital stay and of the subsequent outpatient evaluation, the rate of change in wound size (cm²/month), and the final outcome of the healing process at the end of the study period. The medians were then compared across the different categories of lesions and therapeutic strategies. The Mann-Whitney U test was used to analyze the data (expressed as mean ± standard deviation [SD]) and to compare values. All changes with a P value of .05 or less were considered statistically significant.
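To make the scoring rule and the statistical comparison concrete, the sketch below (illustrative only; helper names and the example durations are hypothetical, while the A/B score thresholds are those reported above) assigns a prognosis category and runs a Mann-Whitney U test with SciPy:

```python
from scipy.stats import mannwhitneyu

def prognosis_category(score_a, score_b):
    """Map modified TIME-H scores to a prognosis category.
    Thresholds as reported: favorable (0-3A, 0-1B), intermediate
    (4-6A, 2-4B), uncertain (7-8A, 5-8B)."""
    if score_a <= 3 and score_b <= 1:
        return "favorable"      # healing expected within 12 weeks
    if score_a <= 6 and score_b <= 4:
        return "intermediate"   # healing expected over 12 weeks
    return "uncertain"

# Hypothetical healing durations (weeks) for two lesion categories:
ssi_weeks = [6, 8, 9, 10, 12]
sstis_weeks = [10, 14, 15, 18, 20]
stat, p = mannwhitneyu(ssi_weeks, sstis_weeks, alternative="two-sided")
print(f"U={stat}, p={p:.3f}")  # p <= .05 considered significant in the study
```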
Results
38 patients were enrolled for this preliminary report over a one-year period (from March 2019 to February 2020) (Table 1). Of these, 7 were excluded (lost during follow-up). Of the remaining 31 patients, 16 (52%) were male, aged between 67 and 86 years old (median age 77). 15 patients (48%) were female, aged between 64 and 88 years old (median age 76.7). The evaluated injury types were classified as follows: 13 surgical site lesions (40%); 9 of infectious or vascular etiology (SSTIs) (30%); and 9 ulcers of mixed etiology (30%). The studied subjects reported their chronic lesions to have been present for a median of 6 months before the first evaluation. The modified TIME-H score questionnaire also allowed calculation of the median wound size (6.8 cm²), with a total median score of 4.0 (range 3.0-5.0). After the first evaluation, 6 patients (19.35%) were classified in the certain healing category, 16 patients (51.60%) in the uncertain healing category, and 9 patients (29.05%) in the difficult healing category. A total of 5 of the six patients in the certain healing category achieved effective total healing; 12 of 16 patients in the uncertain healing category and 4 of the 9 in the difficult healing category were correctly classified according to the original prognosis. At the end of the one-year observation period, the authors established the healing prediction rate among the studied lesions: surgical site lesions presented the highest predictivity (88%), followed by mixed etiology (72%) and infectious/vascular injuries (63%). The authors also evaluated the duration of specialist intervention and the reduction in wound size for the three categories (Table 2).
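The classification accuracy per prognosis category follows from a simple ratio of correctly classified to enrolled patients; a minimal sketch using the counts reported above (the per-lesion-type counts behind the 88/72/63% figures are not given in the text and are not reproduced here):

```python
def prediction_rate(correct, total):
    # Share of patients whose outcome matched the original prognosis.
    return 100.0 * correct / total

# Counts per prognosis category from the text:
for name, correct, total in [("certain", 5, 6),
                             ("uncertain", 12, 16),
                             ("difficult", 4, 9)]:
    print(f"{name}: {prediction_rate(correct, total):.0f}%")
```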
Discussion
The management of chronic lesions of the skin and soft tissues, especially those with an infectious etiology (SSTIs) and those affecting the surgical site (SSI), represents an important postoperative challenge in the hospital environment (8-10). If primary intention closure is not suitable, or if part of a wound closed by this method requires secondary intention closure, the most important goal is to select the most appropriate treatment for the wound. The available treatment options depend on the findings of the wound and global patient assessment and on the local situation of the wound at the given time. Surgical, sharp and autolytic debridement represent several ways to remove dead and devitalised tissue from the wound bed (e.g. necrosis, gangrene, slough) or infected tissues (11). Topical negative pressure is a method of wound healing that can only be used once the wound is free from dead and devitalised tissues. Negative pressure is applied to the wound bed, which promotes an increase in the blood supply to the wound bed. This increases the rate of angiogenesis and therefore the growth of granulation tissue (12, 13). It removes excess exudate, thereby maintaining a moist wound healing environment. As it removes the exudate it keeps bacterial levels on the wound bed minimal, reducing the risk of wound infection whilst it is in operation. Healing rates with this method are usually quicker than with traditional methods of healing (14, 15). Topical negative pressure is also known as vacuum-assisted closure (VAC). TIME (an acronym for Tissue, Inflammation/Infection, Moisture, Edge/Epithelialisation) is a protocol developed on the basis of the wound bed preparation (WBP) concept, in order to promote an acceleration of the healing process. The modified TIME-H version was later developed, with the addition of a healing score (H) based on the wound conditions, the systemic state of the patient and the associated chronic pathologies. The authors started from the modified TIME-H proposed by Lim et al. (7), in order to clearly quantify the prognosis of chronic wounds and improve patients' satisfaction. This preliminary report was conducted at the Surgical Division and at the local referral Center of Wound Care of Parma Hospital (Parma, Italy). The authors studied the 38 enrolled patients prospectively, involving individuals at different levels of health in determining the modified TIME-H score for chronic lesions. The authors found that, by scoring lesions with the modified TIME-H system, a higher proportion of patients in the certain healing category could be predicted to achieve complete healing, with a higher rate of wound size reduction and a shorter duration of clinical follow-up, compared with the other categories of predicted outcomes. This modified TIME-H scoring system should be considered a ready-to-use daily assessment tool, easily applicable even when the prognosis of patients is not favorable. This is the first report to establish healing predictivity rates across several wound types. At the end of the one-year observation, surgical site lesions presented the highest percentage of predictivity, followed by mixed etiology lesions and infectious/vascular injuries.
The limitations of this study are the relatively small number of enrolled patients and the short follow-up. The authors suggest additional multicenter studies with a larger population and longer follow-up to confirm the validity of this modified TIME-H scoring system.
Conclusion
This preliminary report showed that this modified TIME-H score can be regarded as a versatile and useful scoring tool that should be used in daily clinical practice for the study and treatment of chronic wound diseases. The current standards of correct clinical practice cannot ignore the growing economic and social impact of chronic wounds, which is why reducing the treatment period is a valuable target: the authors found that, by applying the described method, the average healing time was considerably reduced.
Identification and Characterization of RPK118, a Novel Sphingosine Kinase-1-binding Protein*
Sphingosine kinase (SPHK) is a key enzyme catalyzing the formation of sphingosine 1 phosphate (SPP), a lipid messenger that is implicated in the regulation of a wide variety of important cellular events through intracellular as well as extracellular mechanisms. However, the molecular mechanism of the intracellular actions of SPP remains unclear. Here we have cloned a novel sphingosine kinase-1 (SPHK1)-binding protein, RPK118, by yeast two-hybrid screening. RPK118 contains several functional domains whose sequences are homologous to other known proteins including the phox homology domain and pseudokinase 1 and 2 domains and is shown to be a member of an evolutionarily highly conserved gene family. The pseudokinase 2 domain of RPK118 is responsible for SPHK1 binding as judged by yeast two-hybrid screening and immunoprecipitation studies. RPK118 is also shown to co-localize with SPHK1 on early endosomes in COS7 cells expressing both recombinant proteins. Furthermore, RPK118 specifically binds to phosphatidylinositol 3-phosphate. These results strongly suggest that RPK118 is a novel SPHK1-binding protein that may be involved in transmitting SPP-mediated signaling into the cell.
Sphingosine kinase (SPHK) is a key enzyme catalyzing the formation of sphingosine 1-phosphate (SPP), a lipid messenger that is implicated in the regulation of a wide variety of important cellular events including cell growth, survival, motility, cytoskeletal changes, and the release of calcium from intracellular stores (1, 2), acting both as an extracellular agonist and an intracellular messenger (3). The extracellular effects of SPP are mediated by the recently identified endothelial differentiation gene (EDG) receptors, novel members of the G-protein-coupled heptahelical receptor family (4). For example, the binding of SPP to HEK293 cells stably expressing EDG-1 induced the inhibition of cAMP accumulation and the activation of extracellular signal-regulated kinase (ERK) in a pertussis toxin-sensitive manner (5). On the other hand, the following findings were important clues to a specific intracellular action of SPP. First, the activation of various plasma membrane receptors such as the platelet-derived growth factor receptor (6, 7) and FcεRI (8) was found to rapidly increase intracellular SPP production through the stimulation of SPHK. Second, microinjected SPP mobilized Ca2+ from the internal stores of cells that had been pretreated with pertussis toxin to inactivate Gi- or Go-coupled receptor signaling (9). Third, the manipulation of intracellular SPP content in yeast cells, which lack a cell surface receptor for SPP, by the overexpression or deletion of genes that encode SPHK has revealed an important role for SPP in yeast survival and proliferation during exposure to heat or nutrient-deprivation stress. However, the intracellular site of action of SPP remains unknown.
The present studies were designed to determine the intracellular site of action of SPP by identifying molecules interacting with sphingosine kinase-1 (SPHK1) using a yeast two-hybrid assay. Here we have identified a novel, evolutionarily conserved SPHK1-binding protein, RPK118. The molecular and biochemical characteristics of this new molecule are described herein.
EXPERIMENTAL PROCEDURES
Yeast Two-hybrid Screening and cDNA Cloning-Full-length mouse sphingosine kinase-1a cDNA (DDBJ/EMBL/GenBank™ accession no. AF068748) was cloned into pGBKT7 (CLONTECH) in-frame with the GAL4 DNA-binding domain. The bait was transformed into the yeast strain AH109 together with the rat brain cDNA library (CLONTECH) fused to the GAL4 activation domain in pGADT7 according to the manufacturer's instructions (CLONTECH). DNA from positive clones was prepared from yeast and transformed into competent DH5α (TaKaRa, Otsu, Japan) according to standard protocols. From partial RPK118 cDNA sequence information, a complete human RPK118 cDNA was obtained (DDBJ/EMBL/GenBank™ accession no. AB070706) by using a 5′-rapid amplification of cDNA ends (5′-RACE) method as described (10) with total human brain cDNA reverse transcribed from fetal human brain mRNA (Invitrogen).
Mammalian Expression Vectors-The full-length RPK118, a PSK2 fragment, or RPKΔPSK were each subcloned into the mammalian expression vector pCMV5 with a FLAG-epitope tag to express N-terminal FLAG-tagged fusion proteins in mammalian cells. RPK118 was also cloned into pEGFP-C1 (CLONTECH) for N-terminal green fluorescent protein (GFP)-fused expression. For the transient expression studies, COS7 cells were grown in 60-mm tissue culture dishes to ~70% confluency. The cells were transfected with 3 μg of plasmid DNA per dish using 12 μl of FuGENE 6 reagent according to the manufacturer's instructions (Roche Molecular Biochemicals). The cells were grown for 36-48 h and then assayed.
In Vitro Kinase Assay-COS7 cells expressing FLAG-RPK118 or FLAG-ribosomal S6 kinase 3 (RSK3) were lysed in cold lysis buffer (20 mM Tris-HCl, pH 7.4, 150 mM NaCl, 1% Nonidet P-40, 0.5% Triton X-100, 1 mM Na3VO4, and protease inhibitors). FLAG-tagged proteins were immunoprecipitated using anti-FLAG antibody M2 beads. The immunoprecipitates were washed and measured for in vitro protein kinase activity without or with 250 μM S6 peptide (Upstate Biotechnology) as described (11). The incorporation of 32P into the S6 peptide absorbed on P-81 paper was determined by Cerenkov counting. For autophosphorylation, the immunoprecipitates were separated by SDS-PAGE followed by quantification using a Fujix Bio-Imaging Analyzer BAS2000 (Fuji Photo Film).
Northern Blotting Analysis-Poly(A)+ RNA blots containing 1 μg of poly(A)+ RNA per lane from multiple human tissues (CLONTECH) were hybridized with the 32P-labeled 900-bp BamHI fragment of pCMV5-human RPK118. Hybridization was carried out according to the manufacturer's protocol. Bands were visualized using a Fujix Bio-Imaging Analyzer BAS2000.
Confocal Microscopy-Transfected COS7 cells were seeded on coverslips (Nalge Nunc) at a density of 10⁶ cells ml⁻¹, fixed with 3% paraformaldehyde in phosphate-buffered saline for 15 min, permeabilized with 0.2% Triton X-100 for 10 min, and blocked for 10 min in phosphate-buffered saline containing 1% bovine serum albumin. The cells were incubated with primary antibodies for 1 h at 25°C, washed three times, and incubated with fluorochrome-conjugated secondary antibodies for 1 h at 25°C. Microscopy was performed with a confocal microscope (Bio-Rad MRC1024). For double staining, control scans confirmed that no bleed-through was detectable under the conditions used. No signal was obtained if the first antibody was omitted. Alexa 488-conjugated goat anti-mouse IgG and Alexa 594-conjugated goat anti-rat or anti-mouse IgG were from Molecular Probes (Eugene, Oregon). The early endosomal antigen 1 (EEA1) antibody was from BD Transduction Laboratories (San Jose, CA).
Dot-blot Overlay Assay-Recombinant glutathione S-transferase (GST)-RPK118 generated in the Sf9 cell expression system as described previously (10) was further purified by glutathione-Sepharose 4B (Amersham Biosciences). The purified GST-RPK118 was used to probe bovine serum albumin-blocked nitrocellulose filters (Echelon Research Laboratories, Salt Lake City, UT) on which various phospholipids had been spotted. The filters were washed, and GST-RPK118 bound to the filters was detected using the anti-GST antibody (Amersham Biosciences).
RESULTS
Identification of a Novel SPHK1-interacting Molecule, RPK118, with the PX, ESP, PSK1, and PSK2 Domains-In an effort to identify proteins that may participate in the recruitment of SPHK to the right destination where intracellular SPP will accumulate, we conducted a yeast two-hybrid screen of a rat brain cDNA library using full-length mouse SPHK1 cDNA as bait. Four independent clones encoding amino acid sequences almost identical to the C-terminal region of a previously reported "human ribosomal S6 kinase" gene product (RSK52) with a calculated molecular mass of 52 kDa (12) were isolated (Fig. 1A). We assumed that the four clones were parts of a rat homologue of human RSK52. We then isolated the human RSK52 gene from fetal human brain cDNA and used it for transient transfection studies. However, for unknown reasons we failed to express this human RSK52 protein when the cDNA encoding human RSK52 was transfected into COS7 cells (data not shown). Upon further analysis, this sequence turned out to be incomplete because of a frameshift misreading. We determined the full-length sequence of the cDNA by using the 5′-RACE method (Fig. 1B). The identified 3201-bp cDNA contains an open reading frame encoding a novel 1066-amino acid protein with a calculated molecular mass of 118 kDa. By using the BLAST algorithm, we identified four domains homologous to previously characterized proteins: a phox homology (PX) domain (13); a Saccharomyces cerevisiae End13/Vps4, sorting nexin 15, and Emericella nidulans PalB-homology (ESP) domain (14); and pseudokinase 1 (PSK1) and 2 (PSK2) domains (Fig. 2, and see below). The PX domain, the function of which is as yet unknown, is an evolutionarily conserved sequence that is present in a number of proteins with diverse functions, including proteins involved in vesicular trafficking (15-18). The ESP domain, whose function is also unclear at present, may be involved in the association of the Vps4p protein with endosomal membranes (19). The C-terminal half of RPK118 contains two conserved sequences arranged in tandem that show homology to the regions essential for the catalytic activity of RSK3 (20). The first sequence (residues 340-426) corresponds to kinase subdomains I to V (Fig. 3A), whereas the second sequence (residues 906-1066) corresponds to kinase subdomains VIA to XI (Fig. 3B), with a large unrelated insert between the sequences (Fig. 1B). The GXGXXG motif essential for ATP binding in the first sequence, which corresponds to kinase subdomain I of RSK3, and the DFG motif important for conferring Mg2+ sensitivity to the enzyme in the second sequence, which corresponds to kinase subdomain VII of RSK3, were both mutated as shown in Fig. 3, A and B, suggesting that the protein is defective in phosphotransferase activity.

FIG. 1. SPHK1-interacting molecules and amino acid sequence of human RPK118. A, the SPHK1-interacting molecules, including full-length RPK118 and RPK118-derived clones isolated from the yeast two-hybrid screens, are shown. cDNA from a C4 clone encodes a protein identical with the PSK2 fragment. RSK52 is also included. B, the deduced amino acid sequence of human RPK118 is shown in single-letter code and numbered on the left. The PX domain (residues 9-128) is highlighted in gray, the ESP domain (residues 239-307) is boxed, and the PSK1 (residues 340-426) and PSK2 domains (residues 906-1066) are highlighted in dark gray and black, respectively.
Indeed, the immunoprecipitated 118-kDa protein showed no kinase activity toward either itself or the exogenous substrate despite good expression of the 118-kDa protein and fair kinase activity of RSK3 (Fig. 4, A and B), consistent with its structural features (Fig. 3). Therefore, we have designated these regions of homology as the PSK1 and PSK2 domains, respectively, after pseudokinase. We have also termed this protein RPK118, after ribosomal S6 kinase-like protein with two PSK domains. A database search identified orthologues of RPK118 in Drosophila melanogaster and Caenorhabditis elegans, demonstrating that RPK118 is a member of a novel and highly conserved gene family (Fig. 2). These cDNA sequences were from putative open reading frames identified in D. melanogaster and C. elegans. A comparison of these sequences reveals the presence of all characteristic homology domains, including the PX, ESP, PSK1, and PSK2 domains, in these homologues. There also exists a human gene encoding a putative protein, RPK60 (with the ESP, PSK1, and PSK2 domains but devoid of a PX domain), which was originally reported as an "unknown kinase" (GenBank™ accession no. AAD30182) and whose function is currently unknown.
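The motif analysis described above can be reproduced in outline with a simple pattern scan; the sketch below (illustrative only, not the authors' pipeline; the toy sequence is hypothetical) locates the GXGXXG and DFG motifs in a one-letter-code protein sequence:

```python
import re

def scan_kinase_motifs(seq):
    """Return 1-based positions of the GXGXXG motif (kinase subdomain I,
    ATP binding) and the DFG motif (subdomain VII, Mg2+ sensitivity)."""
    gxgxxg = [m.start() + 1 for m in re.finditer(r"G.G..G", seq)]
    dfg = [m.start() + 1 for m in re.finditer(r"DFG", seq)]
    return {"GXGXXG": gxgxxg, "DFG": dfg}

# Toy fragment (not the RPK118 sequence) containing both motifs:
print(scan_kinase_motifs("MLGAGSFGKVDFGLA"))
# {'GXGXXG': [3], 'DFG': [11]}
```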
Biochemical Characterization of RPK118-The tissue distribution of RPK118 mRNA in human tissues was analyzed by Northern blotting (Fig. 5). RPK118 was ubiquitously distributed among the various tissues tested, with the highest levels of mRNA detected in skeletal muscle, brain, heart, placenta, kidney, and liver. Next, the binding results obtained from the yeast two-hybrid screening were further confirmed by documenting the interaction between RPK118 and SPHK1 directly. We conducted immunoprecipitation experiments using FLAG-RPK118, the FLAG-PSK2 fragment, another deletion mutant FLAG-RPKΔPSK that is devoid of the C-terminal half of the sequence (from residues 314 to 1066, including the PSK1 and PSK2 domains), and HA-SPHK1. These epitope-tagged proteins were expressed in COS7 cells, and the interaction of these proteins was analyzed. HA-SPHK1 was specifically co-immunoprecipitated with FLAG-RPK118 (Fig. 6). The FLAG-PSK2 fragment also interacted with SPHK1, confirming the results from the yeast two-hybrid analyses showing that the binding site for SPHK1 is localized within the PSK2 domain of RPK118 (Fig. 1A). FLAG-RPKΔPSK showed no interaction with SPHK1.

RPK118 Co-localizes with SPHK1 in COS7 Cells-We investigated the interaction of RPK118 with SPHK1 in intact cells using immunofluorescence techniques. For the characterization of the subcellular localization of RPK118, we constructed RPK118 fused with GFP. The utility of GFP-RPK118 was verified by demonstrating co-localization of GFP-RPK118 and FLAG-RPK118 in COS7 cells expressing both proteins. These proteins distributed diffusely in the cytosol and in some small dot-like or ring-shaped structures where the proteins showed exact co-localization (Fig. 7, A-C, arrows). In contrast, GFP itself was distributed diffusely throughout COS7 cells expressing the GFP vector alone (data not shown). When EEA1, an endogenous marker of early endosomes, was immunostained, EEA1 displayed good co-localization with GFP-RPK118 (Fig. 7, D-F, arrows), indicating that the RPK118-positive dot-like or ring-shaped structures were in fact early endosomes. Next, when GFP-RPK118 and HA-SPHK1 were expressed simultaneously in COS7 cells, GFP-RPK118 was distributed diffusely in the cytoplasm (except nuclei) and also in putative early endosomes (Fig. 7G). A similar pattern was observed when this protein alone was expressed in COS7 cells (Fig. 7, A-D), whereas HA-SPHK1 was distributed in a fine reticular pattern in the cytoplasm with early endosomal distribution (Fig. 7H). Some but not all endosomal structures were co-labeled with both GFP-RPK118 and HA-SPHK1 (Fig. 7I, arrows). The dot-like or ring-shaped endosomal localization pattern of RPK118 may not be a consequence of high levels of protein expression, because this pattern was also observed in cells expressing the protein at a relatively low level (data not shown). Interestingly, in cells expressing HA-SPHK1 but not GFP-RPK118 (Fig. 7H, arrowhead), HA-SPHK1 staining of putative early endosomal structures was hardly observed.

FIG. 4. In vitro protein kinase assay showing that RPK118 has no protein kinase activity. An in vitro protein kinase assay was performed using endogenous (A) and exogenous (B) substrates. FLAG-tagged RPK118 or FLAG-tagged RSK was immunoprecipitated and assayed for protein kinase activity. Aliquots of immunoprecipitates were subjected to SDS-PAGE followed by immunoblot analyses with anti-FLAG antibody (inset). A representative of three separate experiments is shown.
To demonstrate the direct involvement of RPK118 in the recruitment of SPHK1 to the early endosomes, the effect of PSK2 (an SPHK1-binding fragment of RPK118 as suggested by Figs. 1A and 6) on SPHK1 distribution was tested. When the PSK2 fragment was expressed together with GFP-RPK118 and HA-SPHK1 in COS7 cells, HA-SPHK1 was distributed both in the cytoplasm and the peripheral area without any endosomal labeling (Fig. 7L), leaving the staining pattern of GFP-RPK118 almost unchanged (Fig. 7K). Thus, the PSK2 fragment may function as a dominant negative by competing with RPK118 for SPHK1 binding. These results clearly demonstrate the importance of RPK118 in the recruitment of SPHK1 to early endosomes.
RPK118 Interacts Specifically with Phosphatidylinositol 3-Phosphate (PtdIns(3)P) through Its PX Domain-Recent observations that proteins containing the PX domain specifically recognize PtdIns(3)P at specific membrane surfaces (15-18, 21) prompted us to ask whether RPK118 also binds to PtdIns(3)P through its PX domain. To study the phosphoinositide binding specificities of RPK118, we employed a dot-blot overlay assay. As shown in Fig. 8, RPK118 exhibited specific binding to PtdIns(3)P compared with other phosphoinositides. RPK118 interacted weakly with phosphatidylinositol 5-phosphate but not at all with phosphatidylinositol 4,5-bisphosphate or phosphatidylinositol 3,4,5-trisphosphate. A deletion mutant, RPK118ΔPX, which lacks the PX domain of RPK118, failed to bind to PtdIns(3)P on the filters (data not shown).

DISCUSSION

We have shown here that a newly identified protein, RPK118, can bind to and co-localize with SPHK1 in COS7 cells. The PSK2 domain of RPK118 is required for binding to SPHK1, as demonstrated by the results from immunoprecipitation analyses (Fig. 6) as well as yeast two-hybrid screening (Fig. 1A). The overexpression of RPK118 in COS7 cells did not cause any change in the intracellular content of SPP in repeated experiments, and RPK118 binding to SPHK1 did not alter the enzymatic activity of SPHK1 in vitro (data not shown), suggesting that RPK118 may function only as an adaptor molecule for SPHK1.
Analysis of the subcellular distribution of SPHK will provide important clues to understanding the mechanism of the intracellular action of SPP. It has been reported (22) that SPHK1 expressed in HEK293 cells was detected in both cytosol and membrane fractions. More detailed subcellular studies using density gradient centrifugation have also shown membrane-associated SPHK activities, especially in vesicles derived from the endoplasmic reticulum and the plasma membrane in rat tissues (23). Our present studies indicate that SPHK1 distributes in a fine reticular pattern in the cytoplasm and that it co-localizes with RPK118-positive ring-shaped early endosomes when RPK118 is co-expressed with SPHK1 (Fig. 7, G-I). We also show that the ring-shaped endosomal pattern of SPHK1 distribution was completely altered to a nearly homogeneously diffuse pattern by the expression of the PSK2 fragment, the SPHK1-binding site of RPK118 (compare Fig. 7, H and L). This suggests that the intracellular localization of SPHK1 may vary depending on the functional state of the cell. Indeed, while this manuscript was under review, Melendez and Khaw (24) reported that antigen stimulation of human mast cells induced a rapid translocation of SPHK1 from the cytosol to the "nuclear-free membrane fractions." Our present results strongly suggest that RPK118 may at least in part determine the endosomal localization of SPHK1. The mechanism of stimulation-induced translocation of SPHK1 to the appropriate cellular destination through RPK118 remains to be elucidated. The RPK118-positive ring-shaped structures putatively identified as early endosomes based on EEA1 labeling were also co-labeled with the early recycling endosomal marker Rab4 (data not shown). Further studies are necessary for understanding the physiological relevance of SPHK1 recruitment to early endosomes by RPK118.
From the structural analysis, it is obvious that RPK118 has the highest sequence homology with sorting nexin 15 (SNX15) (14). Sorting nexins are an emerging family of proteins with a PX domain that are involved in regulating vesicular transport (13, 14, 25, 26). The sequence of the PX domain of human SNX15 is 76.5% similar (55.3% identical) to that of RPK118. SNX15 also contains the ESP domain but not the PSK domains where SPHK1 binds. The exact membrane destinations to which the PX domains of RPK118 and SNX15 bind remain to be identified. Recently, Xia et al. (27) reported that SPHK interacts with tumor necrosis factor-α receptor-associated factor 2 (TRAF2). However, there is no sequence homology between RPK118 and TRAF2.
The effectiveness of preplant seed bio-invigoration techniques using Bacillus sp. CKD061 in improving seed viability and vigor of several local upland rice cultivars of Southeast Sulawesi
This research aimed to evaluate bio-invigoration techniques using Bacillus sp. CKD061 for improving seed viability and vigor of local upland rice. The research was arranged factorially in a completely randomized design (CRD). The first factor was the upland rice cultivar, consisting of 11 cultivars: Pae Tinangge, Pae Rowu, Pae Uwa, Pae Tanta, Pae Waburi-Buri, Pae Mornene, Pae Indalibana, Pae Lawarangka, Pae Huko, Pae Wagamba and Pae Momea. The second factor was the seed bio-invigoration technique, consisting of 5 treatments: without seed bio-invigoration (B0), NaCl + Bacillus sp. CKD061 (B1), KNO3 + Bacillus sp. CKD061 (B2), ground burned-rice husk + Bacillus sp. CKD061 (B3), and ground brick + Bacillus sp. CKD061 (B4). The results showed that seed bio-invigoration using Bacillus sp. CKD061 affected seed viability and vigor. The interaction of seed bio-invigoration and upland rice cultivar improved seed viability and vigor. Seed bio-invigoration treatment using ground brick + Bacillus sp. CKD061 was the best treatment and improved the viability and vigor of Pae Waburi-Buri, Pae Mornene and Pae Indalibana. The treatment increased the vigor index by 133% in Pae Waburi-Buri and by 127% in Pae Mornene and Pae Indalibana compared with the control.
Introduction
Rice (Oryza sativa L.) is a very important food crop because rice remains the main staple food in Indonesia. The demand for rice increases every year. Various efforts have been made to increase rice production, such as increasing productivity, breeding new varieties [1] and developing rice under shade [2]. Another program is decreasing the level of rice consumption and promoting local staple foods such as sago [3], cassava or corn [4], but the results are not yet optimal. One effort to increase rice production is through the development of upland rice on dry land [4].
Constraints encountered include limited input of cultivation technology, especially the use of quality seeds and techniques for controlling plant-disturbing organisms. In addition, the decline in rice production is also attributed to the decrease in productive wetland due to the conversion of land to industry, housing and other non-agricultural uses. The development of upland rice on dry land can be a solution to increase production. However, upland rice has received less attention because of its low productivity. The development of upland rice (especially local upland rice) in Southeast Sulawesi is still limited, owing to marginal land issues as well as to the implementation of upland rice cultivation techniques, especially the use of quality seeds. The use of high-quality seeds is an important prerequisite for generating economically profitable crop production. Therefore, preparation and treatment of seeds to improve their quality is very important, especially given the physiological dormancy (after-ripening) problem in post-harvest upland rice seed in the field. An alternative to overcome these problems is seed invigoration technology integrated with biological agents of the rhizobacterial group, microorganisms capable of acting as biofertilizers and biopesticides [5]. Invigoration is a way to improve the physiological quality of seeds, especially seed vigor, through physical or chemical treatment. High-vigor seeds are able to demonstrate good performance in germination processes under diverse environmental conditions [6]. Seed invigoration comprises physiological and biochemical improvements associated with synchronous germination, germination speed, and increased seed germination, using low-water-potential solid matrices or low-osmotic-potential solutions. This treatment, known as seed matriconditioning or osmoconditioning, can be integrated with rhizobacterial applications, called bio-matriconditioning [7, 8] or bio-osmoconditioning. The treatment aims to improve seed viability and vigor and plant growth and yield [5, 9], and has also proved able to protect the seed from seedborne and soilborne fungi at an important phase at the beginning of its growth [10]. The treatment can be recommended as a growth promoter for local rice crops of Southeast Sulawesi.
Materials and Methods
The research was conducted in the Agrotechnology Laboratory, Faculty of Agriculture, Halu Oleo University, from September 2015 to March 2016. The research was arranged factorially in a completely randomized design (CRD). The first factor was the local upland rice cultivar, consisting of 11 local upland rice cultivars of Southeast Sulawesi: V1 = Pae Tinangge, V2 = Pae Rowu, V3 = Pae Uwa, V4 = Pae Tanta, V5 = Pae Waburi-Buri, V6 = Pae Mornene, V7 = Pae Indalibana, V8 = Pae Lawarangka, V9 = Pae Huko, V10 = Pae Wagamba and V11 = Pae Momea. The second factor was the bio-invigoration treatment with Bacillus sp. CKD061, consisting of 5 treatments: B0 = without bio-invigoration treatment (control), B1 = NaCl + Bacillus sp. CKD061, B2 = KNO3 + Bacillus sp. CKD061, B3 = ground burned-rice husk + Bacillus sp. CKD061 and B4 = ground brick + Bacillus sp. CKD061. Each treatment was replicated 3 times; therefore, overall there were 165 experimental units.
The effects of seed bio-invigoration on the seed viability and vigor were evaluated by measuring their germination percentage, vigor index, relative growth rate, and growth uniformity.
1. Germination percentage (GP), depicting seed potential viability [11], was measured based on the percentage of normal seedlings (NS) at the first (i.e., 5 days after planting (dap)) and second (i.e., 7 dap) counts, using the following formula: GP (%) = ((NS at first count + NS at second count) / number of seeds sown) × 100.

2. Relative growth rate (RG-r), depicting seed vigor, is the ratio of the observed growth rate to the maximum growth rate. The maximum growth rate itself was obtained from the assumption that, at the first observation, normal seedlings had reached 100%.
3. Seed uniformity, depicting seed vigor, was measured based on the percentage of normal seedlings (NS) on the day between the first (5 dap) and second (7 dap) counts.
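A minimal sketch of how the first two measures can be computed from seedling counts (helper names and the example counts are hypothetical; the germination formula is the standard two-count form given above):

```python
def germination_percentage(ns_first, ns_second, seeds_sown):
    # GP (%) = normal seedlings at the first (5 dap) and second (7 dap)
    # counts, relative to the number of seeds sown.
    return 100.0 * (ns_first + ns_second) / seeds_sown

def relative_growth_rate(observed_rate, max_rate):
    # RG-r: observed growth rate relative to the theoretical maximum,
    # assuming normal seedlings reach 100% at the first count.
    return observed_rate / max_rate

# Hypothetical counts for one experimental unit of 25 seeds:
print(germination_percentage(ns_first=15, ns_second=7, seeds_sown=25))  # 88.0
```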
The effects of seed bio-invigoration treatment on vigor index of several local upland rice cultivars of Southeast Sulawesi.
Cultivars Pae Waburi-Buri, Pae Mornene, and Pae Indalibana responded best to the ground brick + Bacillus sp. CKD061 treatment, with vigor index values of 93.33%, 90.67% and 90.67%, respectively. Consistently, seed bio-invigoration using NaCl + Bacillus sp. CKD061 was able to break the dormancy of the Pae Tanta cultivar, increasing the vigor index by 817% (Table 3).
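The relative increases reported here are plain percentage changes over the control; a short sketch (the vigor-index values are hypothetical, chosen only to reproduce the 817% figure):

```python
def percent_increase(treated, control):
    # Relative increase of a treated value over the control, in percent.
    return 100.0 * (treated - control) / control

# Hypothetical illustration: a control vigor index of 1% rising to 9.17%
# under NaCl + Bacillus sp. CKD061 corresponds to the reported 817%.
print(round(percent_increase(9.17, 1.0)))  # 817
```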
Discussion
Seed bio-invigoration treatment using Bacillus sp. CKD061 integrated with ground burned rice husk, ground brick or KNO3 gives better results in enhancing the viability and vigor of local upland rice seed compared with NaCl and the control. Bacillus spp. are a group of PGPR (Plant Growth Promoting Rhizobacteria) that have been shown to be effective in increasing plant growth and yield [12]. The role of PGPR in increasing plant growth and production is presumably due to the ability of rhizobacteria to produce IAA [9] and gibberellins [13] and to dissolve phosphate [5, 14]. In general, the utilization of Bacillus sp. CKD061 integrated with matriconditioning using ground burned rice husk or ground brick was more effective in increasing the viability and vigor of local upland rice seed.
Conclusions
It is concluded that seed bio-invigoration treatment using Bacillus sp. CKD061 integrated with a medium of ground burned rice husk, ground brick or KNO3 solution is better able to increase the viability and vigor of local upland rice seed compared with the other treatments and the control. Cultivars Pae Waburi-Buri, Pae Mornene, and Pae Indalibana responded best to the ground brick + Bacillus sp. CKD061 treatment, with vigor index increases of 131%, 100% and 100%, respectively. Seed bio-invigoration treatment using NaCl + Bacillus sp. CKD061 was able to overcome the dormancy of local upland rice seed cv. Pae Tanta.
Developing New Product Using Diary Study and Concept Testing Analysis Based on The Customer Needs for Pasta Product (Case: Très)
Très is a business engaged in the food and beverage industry. Très introduces itself as a pasta specialist with a signature product called Fusion Pasta Brûlèe. This product combines pasta, originating from Italy, with innovative Indonesian flavor combinations. As a newcomer to the business sector, the experience of fusion and original taste is a core value for the Très business. The food and beverage industry plays an important role in the Indonesian economy, but the food and beverage business is often not sustainable, especially given changing times and technological advances. A food business can last a long time if it is able to balance growth, development and innovation in every product presented. This research aims to identify customer needs for developing a new product so that the business can continue to operate sustainably and follow developments. Knowing the needs of customers allows Très and its team to plan products in accordance with the needs of the market. Très and its team first analyzed a diary study to determine customers' eating activities and their experiences with food. Customers understand the importance of nutrition in the body and how it affects their activities. Très also collected suggestions about the healthy food customers will need in the future. In most of the diary study results, customers want food that is healthy but also tastes good and is time-efficient to consume, which aligns with their desire to feel fulfilled by the food. Following the diary study, Très ran a concept testing analysis to learn whether the product can be accepted by the community as a product that matches customer needs. Open innovation and product collaboration can then be applied to maintain the sustainability of the product while continuing to analyze consumer desires.
I. INTRODUCTION
Businesses in the Food & Beverage industry are now growing rapidly over time. This business is very promising for the community, especially because of the many innovations that can be found in it. Another factor supporting the growing Food & Beverage industry in Indonesia is its contribution of more than 7% of total GDP, as well as of total industrial manufacturing output. The industry is dominated by the presence of large companies, but also by international and foreign companies (EIBN, 2015).
Small and medium enterprises (SMEs) have an important role in the Indonesian economy (Iriyanti and Aziz, 2012; Pawitan, 2012; Setyaningsih, 2012). In Indonesia, small and medium-sized enterprises (SMEs) represent 90% of all businesses and are responsible for over 58% of GDP. The SME sector comprises micro-businesses (94%), small businesses (5%) and medium-sized businesses (1%). A significant number of small and medium-scale food processing operations are regionally based (Tambunan, 2011).
The food and beverage industry is one of the important industrial sectors in Indonesia and shows positive development every year. In 2019, the sector grew by 7.78%, higher than the growth of the overall non-oil and gas sector, which was only 4.34%. The food and beverage contribution to GDP increased in the first quarter of 2020. A first challenge is the requirement to enter markets, which relates to SME regulations such as food safety, market access, production, on-time delivery, and sustainability. The local industry is also expected to adopt innovative production technology and improve the logistics chain (Directorate General of National Export Development, 2020).
A. Social Entrepreneurship
Social entrepreneurship is a business-oriented field whose purpose is to efficiently provide basic human needs where customers, existing markets and institutions have failed to do so. The food & beverage industry is currently a main topic in business for helping the community and the country. The main problem is that food & beverage products usually do not last very long, either as a business or as a product, especially hyped foods that enter society quickly but do not survive and develop for long. The term "social entrepreneurship" refers to actions that combine economic and social approaches to produce something new. Social entrepreneurship focuses on the creation of social impact, social change and social transformation (Nicholls, 2006; Mair and Noboa, 2006). Its key elements include innovation in entrepreneurship and a drive towards specific goals through the product, with major changes and innovations along the way to achieving social entrepreneurship.
The concept of social entrepreneurship can be viewed as a process of creating value by combining resources in new ways. Social entrepreneurship remains a reference for determining the value of a company. These resource combinations are intended primarily to explore and exploit opportunities to create social value by stimulating social change or meeting social needs. When viewed as a process, social entrepreneurship involves the offering of services and products but can also refer to the creation of a new company.
The Très company focuses on developing and expanding its products. What this business does is make food innovations that are sustainable in the future and reach all layers of society in Indonesia. In the process of its journey, Très has developed many products from outside Indonesia that can ultimately suit the taste of the Indonesian people. Several products have been produced, and while developing these products, Très continues to implement the vision and mission that were built to survive in the Food & Beverage industry.
Très consistently applies quality to its products, which makes this company run and move well and have customers throughout Indonesia. Its products are also supported by packaging specifically designed to follow and support the movement to protect the environment. Not only that, this business also helps SMEs that make reusable packaging, which is ultimately used by this company as its permanent packaging.
B. Food and Beverage Market
The food and beverage market was assessed by considering the condition of business in Indonesia. With Covid-19 in every country, the food and beverage industry faces many challenges. As the Covid-19 pandemic starts to recover, Indonesia keeps continuing the development of national food and beverage products in terms of import and export activity (Ministry of Trade, 2020). Indonesia's food and beverage sector managed to grow by 3% in 2020 (TheInsiderStories, 2020). The food industry is expected to reach USD 2,517 million with an annual growth rate of 10.8% from 2021 to 2026, and the food and beverage market keeps increasing, projected to reach a market volume of USD 3,724 million by 2025 (Statista, 2021).
The food and beverage sector offers huge investment opportunities. In this situation, food and beverage is currently one of the highest-income sectors and helps other industries in Indonesia, as it recorded positive growth during the pandemic. Based on data released by Bank Indonesia, the F&B industry grew by 0.2% year-on-year (YoY) in 2020, well below the 7.8% YoY of 2019, yet higher than Indonesia's gross domestic product, which contracted by 2.1% YoY in 2020.
C. Food and Beverage Development
The Covid-19 pandemic has changed consumer behavior in all activities. Consumers are shifting from grocery shopping to cooking at home, ordering through online platforms out of fear of the virus. In particular, this encourages innovation in the food and beverage industry, with an increased focus on immune health and greater demand for supply chain transparency. The growth and development of this sector is real. The data mentioned in several previous journals also state that the growth of this sector was most significant during the Covid-19 pandemic.
Health and immunity is one of the top five global food and beverage trends for 2021, with approximately 31% of consumers saying they buy more products that help stabilize and boost their immunity, and 50% preferring food and drink products which naturally contain ingredients that are beneficial to their health and body. Indonesia is one of the largest producers of raw materials for such products, such as palm oil, fish, cocoa and coffee, exporting its surplus production abroad. On the other hand, Indonesia imports products that cannot be produced (either at all or in sufficient quantities) locally, such as wheat, milk, or processed food products. Indonesia has adopted good policies to reduce the country's dependence on imports, which also benefits the F&B sector (Hendriadi, 2016). The food service segment in Indonesia is very diverse. Indonesia's food service market is expected to grow by 5% in the three years following Covid-19, with strong growth driven by continued urbanization, rising incomes and tourism. The thriving e-commerce sector is also driving the food service market; Gojek and Grab are the leading players expanding home delivery services.
D. Food and Beverage Growth
A concentration measure for the food and beverages industry in Indonesia in 2010 was 89.16%, which indicates that the industry kept gaining up to 2022. This shows that the food and beverages industry in Indonesia in 2015 was a very tight industry, getting more concentrated and less competitive. The Minimum Efficiency of Scale (MES) of the food and beverages industry in Indonesia in 2010 was 65.23%, which means the entry barrier to the food and beverages industry in Indonesia is quite big, although it shows a decreasing trend.
This food and beverage sector also introduces differentiation strategies, new product development, and product innovation, which will increase producers' profits by widening their market segments through the advantages of their products. However, if the business plan and the strategies no longer fit together, then the business risks losses. The sustainability plan should therefore be executed based on the timeline that was created. In the food and beverages industry, producers act as price takers, which means the prices they set for their products are heavily influenced by the market price. If one competitor decreases its product's price, it is almost guaranteed that other companies will follow so they can remain competitive in the market.
This study focuses only on the next steps for how Très can build a long-term and sustainable food & beverage business strategy following customer behavior and customer wellbeing. Other problems such as finance, human resource management and operational management are not discussed in depth in this study.
A. Open Innovation
As already indicated, open innovation was introduced by Henry Chesbrough and has received much attention from academics and practitioners. The concept is the opposite of the traditional, closed innovation system. Chesbrough (2006) himself defines open innovation as the use of inflows and outflows of knowledge to accelerate new innovation and expand the markets. Innovation can be for new products or for the future of the company. Open innovation means the company should make use of external as well as internal resources when trying to advance. This approach has become well known among fast-growing, technology-intensive industries, while evidence of its application within more mature industries remains scarce (Sarkar & Costa, 2008, p. 574).
Open innovation is the exchange of knowledge through the inflow and outflow of insights and knowledge within a company (Chesbrough, 2003). In particular, small and medium-sized enterprises (SMEs) benefit from implementing an open innovation strategy and participating in networks to achieve access to a wide range of new knowledge created beyond the boundaries of their company, but which is necessary to innovate successfully (Gellynck et al., 2007; Omta, 2002). Open innovation has been found to be applicable in the food sector (Sarkar & Costa, 2008). However, studies focusing on SME networks are still limited and are particularly concerned with the role of networks in knowledge exchange and innovation. For example, Sarkar and Costa (2008) therefore conclude that case studies and more focused empirical research are needed.
Taken together, open innovation is neglected in the F&B industry and in the manufacturing sector in general. Open innovation in the F&B industry is discussed especially for the collaboration of large companies and their value chain partners, i.e. suppliers and clients, but interactions with other potential innovation actors are seldom discussed (Bigliardi & Galati, 2013). First, and most clearly, there is the idea that in real life consumers and society are driving many of the innovations that take place in the Food & Beverage industry, e.g. demands for a healthy lifestyle (Bigliardi & Galati, 2013b). Second, continuing to strengthen open innovation can also be one of the stages towards new product development. Open innovation really helps business owners to open their minds about how to sustain their business. On the other hand, open innovation also helps business owners to show the character of each product that will be used for innovation in the future.
There are other relevant trends that have the potential to shape the way companies innovate in this and other industries. First, there are megatrends, which are considered to be transformative global forces that define the world of tomorrow with far-reaching impacts on business, society, economy, culture and personal life (Naisbitt, 1982). Francisco et al. (2014) provide a comprehensive report on the future of food manufacturing through the convergence of food technology with industrial technology. For example, they mention eight general trends affecting the Food & Beverage industry: urbanization, aging population, increasing costs of consumption, energy and environment, food safety and security, big data and analytics, smart cities, and taste preferences.
III. RESEARCH METHODOLOGY
Qualitative research was conducted in this study. Qualitative research methods can be used to gather insight into a problem or generate new ideas for research. By using qualitative research methods, researchers can collect more information and gain a more detailed understanding of business problems and opportunities (Arora & Stoner, 2009: 275). This research applies qualitative methods through diary studies of customers, root cause analysis and concept testing. A diary study is an exploratory research step undertaken to derive customer needs from their stories; Très needs to know their activities and eating habits. After that, the researchers identified the problem and included it in a root cause analysis using a fishbone diagram. Once the problem is found from the root cause of the fishbone, a concept test can be carried out with customers, which will then be developed together with open innovation to ensure whether the product is accepted by the customer or not.
Primary data were collected directly from respondents through diary studies and concept tests. The primary data needed in this research are used to test customer needs through open innovation and new product development. This research is expected to be a stepping stone for the business in developing innovative products that are in accordance with the wishes of customers.
A. Research Flow
The research flow is used as a reference to determine the concept of the strategy and the solution to the problem faced by Très. The research flow is also used, together with a literature study and observation, to analyze the method, what factors are related, and the right strategy to solve the problem. In this research, the research flow is utilized as a guideline for thoroughly analyzing the problem statement based on observation and analysis methods to develop initiative solutions. The company always conducts product surveys when selling to customers. The research flow of this research is shown in the figure below.
B. Internal Analysis
1) 7P Marketing Mix
The 7P marketing mix is divided into seven marketing elements: product, price, promotion, place, people, process and physical evidence. As mentioned in Table I, the products that have been prepared by Très follow the needs of consumers, identified by distributing questionnaires about the food sold. Next is price: in this business case, the focus of the price element is brand value. The value contained in this business is highly emphasized, such as taste, packaging and sustainability. From that, this business can move to other activities such as collaborating with other products or other businesses to make bundling packages. For promotion, what this business and the owner do is apply social media strategies. For advertising, this business uses Instagram ads and TikTok ads, and distributes questionnaires about the products to adjust the marketing segment. Très distributes a questionnaire in the form of a barcode for one month with every product purchase. From these activities, consumers can give an assessment of the products sold. These are the promotional activities carried out.
For the place section of the marketing mix, this business is still focused on market research. The owners and employees always conduct market research once a month to find out which new products are currently popular in the community. On the other hand, the market research conducted is only related to the product, not to the consumer. This business still sells on online platforms (Instagram and WhatsApp) and will develop an offline store in the future. People in the marketing mix of this business are addressed through brand personality and CRM; brand personality here means introducing the public and customers to the products sold through the brand that is presented.
In the process part of the marketing mix, there are several process activities in this business, especially flow activities from producers to consumers, from scratch to the shipping stage. For as long as this business has been running, the owners and employees try their best to keep it stable. In the physical evidence section, this business prioritizes how the company presents itself to customers through its products. In this business, the focus is on packaging as the physical evidence that can be superior. Basically, the process of making the packaging used for this product is part of the physical evidence. The packaging used is an eco-friendly product made from easily biodegradable paper materials. The packaging can also be reused because it is shaped like a box, so it can be used again for other items.
After conducting an internal analysis using the 7P marketing mix, and after going through several analyses, the 7P marketing mix will be re-applied to the new product development.
2) Effectiveness, Efficiency, Experience
Effectiveness is about doing the right things. Très designs the effectiveness of its product around the value of the product, based on customers' comments. The comments and opinions on products that are sold to customers are used to increase the effectiveness of the products.
Efficiency means ensuring that the work flows well, preventing delays and the costs they cause in getting products and services to the customer. Très' efficiency shows when the customer has finished the payment: Très will send the product on the same day to anywhere in Indonesia with a special delivery courier.
Experience is how efficiency and effectiveness connect to the customer's needs. Consumers definitely want an experience that will make them believe, according to their own experience. From there, the customer will have a desire to buy the product because the company already has experience. In this business, experience is generated from the way of cooking, the packaging and the shipping system.
C. External Analysis
1) Diary Studies Analysis
A diary study is a research method used to collect qualitative data about customers' behaviors, activities, and experiences over time. In a diary study, data are self-reported by participants longitudinally. In this analysis, the researcher used a longitudinal window of 7 days of entries, which had to be filled in within two weeks; respondents were free to fill it in at any time. That is, the reporting takes place over an extended period of time that can range from a few days to a week or a month. During the defined reporting period, study participants are asked to keep a diary and record specific information about the activities being studied. To help participants remember to fill in their diary, they are sometimes periodically prompted.
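A diary study of this kind essentially produces a time-stamped stream of self-reports per respondent. One possible, purely illustrative way to structure such data for later analysis, with field names of our own choosing, is sketched below in Python:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DiaryEntry:
    respondent_id: str   # e.g. "P1" .. "P10"
    logged_at: datetime  # when the self-report was written
    meal: str            # e.g. "breakfast", "dinner", "snack"
    note: str            # free-text description of the eating experience

def entries_by_respondent(entries):
    """Group longitudinal entries so each respondent's week reads in order."""
    grouped = {}
    for e in sorted(entries, key=lambda e: e.logged_at):
        grouped.setdefault(e.respondent_id, []).append(e)
    return grouped
```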
The intended respondents for this analysis are people who have busy daily activities. On average, they have sufficient income to fulfill their eating activities. After finding 10 respondents who wanted to talk about their usual daily activities, especially with regard to food, their summarized responses were as follows:

P1: "Always skip dinner, need no flour and dairy product, always happy about food, fast and hassle free, will choose healthy but the taste still good."
P2: "Always skip breakfast, patients with gerd and cholesterol, need a clean food, prefer delicious but less healthy, easy and makes activities easier."
P3: "Love dessert, need food high in fiber, allergies with goat and clams, really interested with healthy food concept, no matter how busy I still eat."
P4: "Prefer anything that easy to digest, very aware with health and nutrition, need food that rich of nutrition but still taste good."
P5: "Reduce carbo in night, balance every food because I do like workout, food will taste better when I watch movie, time efficient and clean I am too lazy to cook."
P6: "Always skip breakfast, really love spicy food but had a seafood allergies, must consume vegies every day, good food but healthy too, knows what the ingredients inside food."
P7: "I have a hard time eating, love strong taste, need food contains vitamins, I prefer to eat processed food, the content of milk it will be good for the body."
P8: "Full of work activities, have too much carbo will be lazy and sleepy, I have been cooking everything, I have lactose intolerant."
P9: "I like eating when I am stressed, I love food that more savory and spicy, important that the food is filling, i am flexible about the healthy products."
P10: "I eat regularly, Participated in catering healthy food the taste is good but not blend, must be delicious according to taste and nutrition."

The results of the diary study show that everyone has different activities and desires regarding food. These results show that customers have their own needs, in terms of taste, form and health. The results also show that customers are concerned about the after-effects of a food, arguably as a future investment in their health. There are customers who carefully choose what to eat, while others are not too picky. Many have congenital conditions such as cholesterol problems and allergies, which greatly affect their daily activities. After knowing the customers' wishes and the reasons behind them, the findings can be summarized in one root cause analysis; these summaries also form the personas of the 10 respondents. The diary study is expected to help find out the customers' wishes related to product development. The results are very diverse and the respondents have different activities: they want healthy food that does not make things complicated, especially since they have quite solid schedules. In addition, some respondents also narrowed down the food they wanted; for example, some really like spicy food but cannot eat seafood because of allergies. The respondents who took part in the diary study therefore also hoped that the researchers would be able to make an effective product for their consumption.
Customers need products that have great potential for future value. According to the results of the diary study in Table II, customers have busy activities from morning to night, so their eating activities are not regular. The customers really hope that, following the results of the diary study, there will be food that can overcome some of the problems mentioned in the table. Indonesia's economy is largely driven by rising household consumption, and one industry that thrives on this like no other is food and beverages. Sales growth is fuelled by rising personal incomes and increased spending on food and drink, especially from the growing number of middle class consumers.
2) Root Cause Analysis
Root cause analysis using the fishbone method aims to scale up the business and resolve its problems. After getting the results from the diary study analysis, the root cause is defined. The analysis helps in seeing what the customer needs are and which innovation should follow for the product development. From the root cause, several important aspects of food and beverage products can be seen, such as taste, ingredients, health and nutrition, eating activity, behavior when buying food, and price. After that, the activity turns to finding new products to support innovation following the customer's needs from the root cause. It turns out that the customer is very selective in determining the food to be purchased. Customers are also very concerned about the food in terms of taste and content. From the results of this study, many customers are now looking for healthy food that is still time efficient in its preparation. Moreover, customers are very focused on their future health, because some respondents have conditions such as GERD, cholesterol and lactose intolerance. Customers always need something simple and useful when deciding to buy a product, which in the future will become one of their permanent choices.
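Since the fishbone diagram is essentially a grouping of observations under the six aspects named above, a small illustrative sketch can make the structure concrete. The category labels follow the text; the sample findings are paraphrased from the diary study:

```python
# The six "bones" named in the text, used as grouping keys (illustrative only).
FISHBONE_CATEGORIES = [
    "taste", "ingredients", "health & nutrition",
    "eating activity", "buying behavior", "price",
]

def group_findings(findings):
    """Sort (category, observation) pairs from the diary study under each bone."""
    diagram = {c: [] for c in FISHBONE_CATEGORIES}
    for category, observation in findings:
        diagram.setdefault(category, []).append(observation)
    return diagram

sample = [
    ("taste", "wants healthy food that still tastes good"),
    ("eating activity", "skips breakfast because of a busy schedule"),
    ("health & nutrition", "GERD, cholesterol or lactose intolerance"),
]
print(group_findings(sample))
```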
IV. RESULTS AND DISCUSSION
After conducting the diary study analysis and deriving the root cause from several respondents, a new product development was made that follows the needs of consumers. Companies developing new products are in a better position to survive, grow, and prosper in business sustainably (Bhuiyan, 2013; Mu et al., 2009). The importance of new products to the success of a company has resulted in dramatic increases in the number of new products being introduced in the last few decades (Bhuiyan, 2013). In addition, a company will look for future market opportunities to make new product development a strategy, so that the company will have long-term competitive advantages (Kahn et al., 2012). Schumpeter (1939) defines innovation as the setting up of a new production function, where production means combining products and services.
New product development can mean a new commodity, a new form of organization such as a merger, or the opening up of a new market. More recently, however, the Organization for Economic Co-operation and Development (OECD, 2005) has given a much broader definition of innovation to reflect the many roles innovation plays in modern-day business. It defines innovation as the implementation of a new or significantly improved product (good or service) or process, a new marketing method, or a new organizational method in business practices, workplace organization or external relations.
There are three new products that will be produced following the results of the diary study and root cause analysis. The products are:
• Tuna Veggies Aglio O-Lio with Gluten Pasta;
• Tuna Creamy Spinach Pasta with Gluten Free Pasta & Non-Dairy Products;
• Creamy Mushroom Pasta with Gluten Free Pasta & Dairy Products.
In order to be accepted by customers, the three new products will then go through concept testing analysis.
A. Concept Testing Analysis
Concept testing is defined as a research method that involves asking customers questions about a company's concepts and ideas for a product or service before actually launching it. It is used to assess customers' acceptance and their willingness to buy, and therefore to make critical decisions before the launch. It is also important to understand the benefits and different methods of concept testing, and to decide which method is best suited for the research. Crawford and Di Benedetto (2010) propose a simple new product process: according to the authors, if this combination of activities is performed well, it will churn out the new products the company needs.
The analysis of concept testing and new product development used here follows Ulrich and Eppinger. The concept tests used in this research are: 1. Comparison Testing; 2. Sequential Monadic Testing.
1) Comparison Testing
In comparison testing, two or more products are presented to the respondents. The respondents compare these concepts by using rating questions or by being asked to select the best concept displayed. Comparison tests give clear and easily understandable results: it is easy to determine which concept is the winner. However, the results lack context; there is no way to tell why the respondents chose one concept over the others. It is therefore essential to understand these details, through ratings and comments, before successfully launching a product. The researchers developed three new products that are in accordance with the consumer needs in the diary study results table (Table II). These three products can be used as benchmarks for the researchers to create new products, as in this concept test a comparison will be made between the three products.
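Tallying a comparison test is simple: average each concept's ratings and pick the highest. The sketch below is illustrative only; the scores are invented, not the study's data:

```python
def comparison_winner(ratings):
    """ratings: {concept: [scores]} -> (winning concept, mean score per concept).
    Note the method's limitation mentioned above: we learn which concept wins,
    but not why respondents preferred it."""
    means = {concept: sum(s) / len(s) for concept, s in ratings.items()}
    return max(means, key=means.get), means

winner, means = comparison_winner({
    "Tuna Veggies Aglio O-Lio": [4, 5, 3, 4],
    "Tuna Creamy Spinach":      [5, 4, 4, 5],
    "Creamy Mushroom":          [3, 4, 4, 3],
})
print(winner, means)
```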
2) Sequential Monadic Testing
In sequential monadic tests, the target audience is split into multiple groups. However, instead of showing one concept in isolation, each group is presented with all the concepts. The order of the concepts is randomized to avoid bias. The respondents are asked the same set of follow-up questions for each of the concepts to get further insights. Since each group of respondents sees all concepts, this concept testing method is ideal for research with budget constraints or when only a small target audience is available. In this concept test research, consumers are divided into two groups who try the same three products, in the same amounts and with the same taste. The researchers present the same products to two different groups with different activities. From here, a more varied and more diverse assessment can be made of the three products.
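The randomization of concept order mentioned above can be sketched in a few lines; the seed, function name and printout are illustrative assumptions, not part of the study's actual protocol:

```python
import random

CONCEPTS = ["Tuna Veggies Aglio O-Lio", "Tuna Creamy Spinach", "Creamy Mushroom"]

def presentation_order(respondent_id, seed=42):
    """Shuffle the full concept list independently per respondent, so no
    concept systematically benefits from being tasted first."""
    rng = random.Random(seed + respondent_id)
    order = CONCEPTS[:]
    rng.shuffle(order)
    return order

for rid in range(1, 9):  # the 8 respondents, split into two groups of 4
    print(rid, presentation_order(rid))
```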
B. Concept Testing Implementation
The three products are new innovation products, which will be sorted during concept testing according to which product should be tried first. After the distribution of the products to be tried by the respondents, the respondents are divided into two groups. The groups are divided based on occupation and income. In group 1, the average respondent has a permanent job and a full schedule; their monthly income follows their activities because they are, on average, company owners and students. In group 2, the average respondent is someone who has a fixed monthly income, such as housewives and office workers, whose activities follow a schedule. The questions and topics covered in the concept testing analysis are as follows:
• Background of the respondent;
• Impression of each product;
• Taste of each product;
• Feeling about each product;
• Impression of the packaging;
• About the pasta products;
• About the healthy pasta products;
• About the sauce and the herbs;
• Rating of each product;
• Willingness to buy within the given price range;
• Willingness to buy if the product focuses on health and nutrition;
• General reasons why they want to buy these products;
• Rating of the products in general.
The respondents for the concept testing are described below. As explained in Fig. 6, the concept testing used two methods, namely comparison testing and sequential monadic testing; the 8 respondents were divided into two groups to reduce results that would otherwise be biased. The two groups are distinguished by occupation and monthly income. The respondents of this concept test are people who have high activity and understand food. The community members who became respondents are expected to be a benchmark for the researchers to determine whether the product development meets the customers' needs as expected. The respondent criteria for the concept test are people who have high activity: workers and housewives who have problems in the food preparation process. Moreover, because the respondents have high activity, there is no time to think about healthy and beneficial foods for the body, and their activities are very diverse. The average respondent persona is someone who is very busy. The average age of the respondents was around 23-55 years. Most of the respondents are married and have a fixed monthly income averaging around Rp. 1,500,000 - Rp. 10,000,000. The respondents are domiciled in the Jabodetabek and Bandung areas. The researcher looked for these respondents following the place where this business will be run and opened, and considering the activities and income of each respondent.
The concept testing analysis process was carried out on different days and at different times. In the first group, three new, freshly made development products were presented, prepared according to the development criteria so that the taste and quality tried by the respondents were balanced. In group 2, the same procedure was followed as with group 1. The respondents gave comments on each new development product that was presented. The comments were made by answering the questions presented, which aim to get product validation results from consumers.
The respondents answered these questions honestly and according to their expectations. In Table IV, the results of the consumers' comments on the new product development are presented. According to these results, some of the consumers accept the existence of the derivative products, which means that the products are suitable and will be accepted by the community. However, consumers expect an improvement in taste and in the physical appearance presented to consumers, such as the packaging.
From the results of the concept testing, the assessment of the products is very diverse, based on what the respondents felt and saw. The responses from the respondents regarding the innovation of healthy pasta products were positive. Even so, the researchers must continue to improve and execute some important points of the product. The results of the concept testing were not biased, because the respondents were divided into two groups, each group having different activities and occupations. Consumers focus on the taste and the ingredients served, especially on the texture of pasta made from gluten-free ingredients. Consumers are very happy with the innovation of healthy pasta, because not many businesses have realized this derivative product. People do various things, for example adjusting a customized diet: providing the body with a mix of carbohydrates, protein, and other elements requires a certain technique so that the balance of the composition can be digested. Food combining is one way to achieve a healthy lifestyle through a healthy diet. The core of food combining is fresh and natural foods, consuming combinations of food by following the metabolic cycle and maintaining the acid-base balance of the body, without needing to measure the amount of food consumed. Thus, increasing knowledge of and attitudes about vulnerability and the efficacy of treatment can affect a person's decisions on health behavior.
If the concept testing result is implemented in the 7P marketing mix from the internal analysis, the previous analysis strongly supports the innovation to be made. Product, price, promotion, place, people, process and physical evidence have all been addressed in the results of the concept test. On the product side, after knowing what the customers want and the activities they undertake, the assessment is that this product is quite interesting. With a product like this, the price is very reasonable because the ingredients used are of the best quality. The promotion to be carried out is stated in the plan for the next year: penetrating the market with this new product as a healthy product, through the online platform Instagram with interesting content.
Très will continue to sell on the online platform first, and will register with Gojek and Grab Food for partner collaboration. For people, Très will continue to use the Customer Relationship Management (CRM) method, which will make it easier to reach the community with the new product development. In the process, especially within the company's operations, Très will continue to use its former suppliers and producers as sources of raw materials for its products. For physical evidence, Très keeps using environmentally friendly packaging, which will be improved. In summary, the implementation plan for the remaining marketing mix elements is as follows:
• Promotion: will be carried out to raise the value of the new product development.
• Place: will remain on the online platform, while preparing to enter the delivery service applications.
• People: the brand personality will be taken seriously, still using Customer Relationship Management (CRM).
• Process: the raw materials for the newly developed product will still come from the same producers.
• Physical Evidence: the packaging of the product was accepted by the customers; more information about the nutrition is needed.
V. CONCLUSION
The researcher discovered that the customers are people who have full and solid activities. Many of the customers forget to eat breakfast and dinner because of their busy schedules. Customers also want healthy products that can help them live healthily even though they are busy with their activities. Several things become reference points for the researchers regarding this problem in the future.
A. Focus on Customer Satisfaction
Très always seeks to find out what the community wants regarding the products consumers desire. During the analysis, Très focused on customer wellbeing for a food product. From here, the researchers understood how customers go about their daily activities. The researcher needs to know not only their activities, but also the customers' experience with food; eating activities should also be explained by the customers. The customers' desire, identified through the diary study analysis, is a time-efficient and healthy product: a product that can fulfill the customers' eating activities without taking a lot of time, yet remains healthy, because the lifestyle of today's society does not strongly support the health aspect.
B. Available to Reach Customer
Customers really want food that is healthy but still delicious. The product development carried out by Très is a derivative of the previous brulee pasta: healthy pasta products with the addition of non-dairy ingredients and gluten-free pasta. This new product innovation is a food product that still prioritizes the taste that has become the company's value, but remains healthy, so customers do not feel too guilty when consuming it. This innovative product has gone through the concept testing stage with 8 customer respondents who were divided into 2 groups with different activities and problems.
C. How Très presents The Products to The Customer
Customers want a product that still supports all their activities. From the results of the diary study analysis, customers really want food products that are easy to carry. The product must be clean, so the packaging presented to the customer must also be attractive. Some customers really like to eat while watching or working. Some customers are very busy with their activities, so they need a food product innovation that is time efficient but still delicious and healthy. Très stays focused on eco-friendly and sustainable product packaging.
VI. RECOMMENDATION AND IMPLEMENTATION PLAN
The researcher is going to explain the stages and steps of the implementation plan. The implementation plan consists of the action plan with the time plan laid out as a monthly plan (Gantt chart). The implementation includes the activities and KPIs of the new product development. The objective keys of the action plan consist of: 1) Searching the Customer Need & Activity, 2) New Product Development, 3) Concept Testing Analysis to the Customer, 4) Launching the New Product Development, 5) Direct Marketing and Sales Promotion, 6) Promotion Through Instagram and TikTok, 7) Sign Up for GoFood and Grab Food. This action plan will be held from August 2022 to May 2023.

Conflict of Interest
The authors declare that they do not have any conflict of interest in conducting this research on the diary study and concept testing for Très.
Morphology, Growth and Architecture Response of Beech (Fagus orientalis Lipsky) and Maple Tree (Acer velutinum Boiss.) Seedlings to Soil Compaction Stress Caused by Mechanized Logging Operations
The Caspian forests of Iran were monitored and evaluated for forest natural regeneration after logging activities for more than a decade. This large area has a substantial ecological, environmental and socio-economic importance. Ground-based skidding is the most common logging method in these forests, and soil compaction is the most critical consequence of this method. One of the current main topics and important emerging issues of the last decade in forest research is discussed in this study. Soil compaction has major influences on the growth and/or mortality rates of forest seedlings. This study has lasted for over ten years so as to give a clear overview of forest natural regeneration after logging activities. We monitored and evaluated physical soil properties (bulk density, penetration resistance and total porosity) and their effects on maple and beech seedlings on 10-year-old skid trails.
Previous studies in the north of Iran found that soil properties on skid trails were affected more than in the controlled area after using a wheeled skidder for skidding during selection cutting. Williamson and Neilsen [6] found that a single pass by a rubber-tired skidder increased the soil bulk density of the top 10 cm by 22% in the Tasmanian forests. Meek et al. [23] reported a reduction in infiltration rates of 54% when the soil was compacted from 1.6 g/cm³ to 1.8 g/cm³. Naghdi et al. [17] studied the influence of ground-based skidding on the physical and chemical properties of forest soils and their effects on maple seedling growth in the Caspian forests of Iran. Their results indicated significant differences between undisturbed areas and machine trail areas in bulk density (0.75 g/cm³ vs. 1.26 g/cm³) and total porosity (70.6% vs. 50.4%), which were strongly related to the level of traffic frequency and to the trail gradient. Physical and chemical soil properties are often significantly impacted by skidding operations, depending on trail gradient and traffic frequency, which results in decreased seedling growth. Jourgholami et al. [24] studied the effects of soil compaction on seedling morphology, growth, and architecture of chestnut-leaved oak (Quercus castaneifolia C.A. Mey.) in the Caspian forests of Iran. Their results indicated that both above- and below-ground seedling characteristics, including size and biomass, were negatively affected by soil compaction. At the highest intensity of compaction, size and growth were reduced by 50% compared to controls; negative effects were typically more severe on below-ground responses (i.e., the length and biomass of the root system) than on above-ground responses. Soil compaction reduces macropores and total porosity, infiltration capacity and permeability to water, and increases penetration resistance [9]. Increased soil penetration resistance can reduce overall seedling growth performance [24,25]. Tree seedlings tend to be particularly sensitive to increases in soil penetration resistance [26]. Sustainable timber production in natural high forests requires the continuous establishment of natural tree regeneration and adequate growth of seedlings [27].
The Caspian forests of Iran have an area of about 2 million hectares and extend from the north of the country to the southern coasts of the Caspian Sea in the northern parts of the Alborz mountain belt, from sea level to 2800 m a.s.l. These forests are the most valuable forests in Iran. The main natural characteristics of these forests are the large number of hardwood species, the biological diversity, the richness of endemic and endangered species and the large number of ecological niches. These forests have been managed by single-tree selection cutting silviculture as a close-to-nature method since 2015. Since then, timber harvesting from these forests has been limited to the harvest of damaged trees, including broken, fallen, uprooted, infested and diseased trees [28,29]. Logging operations in these forests are generally performed using a ground-based skidding system. Chainsaws and cable skidders are the two main logging machines for wood harvesting. Temporary skid trails at a distance of 120 m from each other are the most common type of forest road designed and constructed for short-term timber harvesting in the Caspian forests of Iran [30].
Due to the steep slopes of the Caspian forests, most of the skid trails are built by excavator. Although oriental beech (Fagus orientalis Lipsky) forest sites are highly suitable for timber production [31], these sites are also located in steep mountainous terrain on marls (35% lime and 65% clay), which have a low infiltration capacity and are susceptible to intense runoff and erosion after heavy rainfall [32]. Seedling growth and survival following skidding are of particular concern in the Caspian oriental beech forests. Mixed and pure beech stands occupy about 20% of these forests and produce more than 35% of the total wood stock volume of the Caspian forests [33]. Maple tree (Acer velutinum Boiss.) is one of the first woody species to establish naturally a few years after selective logging and grows on the skid trails in the Caspian forests [17].
Timber or biomass extraction and forest logging affect the forest soil, and natural regeneration is particularly crucial for the maintenance of biodiversity. Monitoring the effects of forest operations is a requirement for sustainable forest management.
Beech is the most frequent species in mixed forests, and forestry interventions are aimed at its affirmation in both pure and mixed stands. The maple tree and beech are both of commercial interest in these forests, and the seedlings of the maple tree and beech were selected as the experimental objects in this study. Maple is a pioneer, shade-intolerant species, suitable for restoring compacted soils; hence it is the preferred species for revegetating machine operating trails and landings in the Caspian forests. Beech is a shade-tolerant tree with significant economic importance.
The aims of this study were: (1) to determine soil physicochemical properties 10 years after the mechanized logging operation, (2) to determine the effects of soil compaction on seedling morphology, growth and architecture, (3) to identify the growth variables most responsive to the soil compaction level, and (4) to obtain models of the growth variables of seedlings in natural sites.
Study Area
The study was conducted in the Iranian Caspian forests. The study area is located in the Nav forests (latitude: 37°38′34″ to 37°42′21″ N; longitude: 48°48′44″ to 48°52′30″ E) in the Guilan province, north of Iran. The elevation in the study area is approximately 1450 m a.s.l. and the site is oriented towards the north. The mean annual precipitation is approximately 950 mm and the mean annual temperature is 9.1 °C.
According to the United States Department of Agriculture (USDA) soil taxonomy, the soils are Alfisols (brown forest soil) and the soil texture is sandy clay loam. The bedrock type is siltstone and limestone, which belong to the upper Jurassic and lower Cretaceous periods. The average depth of soil to the bedrock ranged from 60 cm to 90 cm, and the soil is well-drained.
During December and January of 2007, marked trees (10.7 trees/ha) were felled using motor-manual felling and topped at a merchantable height of 20 cm DUB (Diameter Under Bark). Due to the high soil moisture in winter, and to prevent future damage to the soil, the logs were winched onto the constructed skid trails during May and June of 2007 and skidded as long logs to roadside landings using a Timberjack 450C wheeled skidder. The weight of the skidder was 9.8 Mg (55% on the front and 45% on the rear axle), and its width and length were 3.8 m and 6.4 m, respectively, with an engine power of 177 hp (132 kW). It was equipped with a blade for light pushing of obstacles and stacking of logs. The skidder was fitted with size 24.5-32 tires inflated to 220 kPa on both front and rear axles.
Sampling Design and Data Collection
In June 2017 (10 years after logging), about 100 m of skid trails were selected to compare physical soil properties and beech (Fagus orientalis Lipsky) and maple (Acer velutinum Boiss.) seedling characteristics between the skid trail and the nearby undisturbed forest area (control). The sampled skid trails represented about 30% of the total trail length and had an average longitudinal gradient of 35%. Five sample plots (10 m × 4 m) with random starting points and a regular distance of 5 m intervals were taken on the skid trails. The sample plots were placed to cover the width of the skid trail (4 m) and 10 m along the skid trail. For each skid-trail sample, three samples (2 m × 2 m) were taken at a 50 m distance from the skid trail in the undisturbed interior stand as control plots. Three main physical soil properties were measured in the plots: bulk density, penetration resistance and total porosity (Figure 1). On each skid-trail sample, 9 soil core samples (3 samples on the left wheel tracks, 3 samples on the right wheel tracks, and 3 core samples on the log skidded routes, between the two wheel tracks) were taken to measure soil bulk density. Also, one core sample was taken from the center of each control plot. In total, 30 soil core samples were taken from wheel tracks, 15 soil core samples from log skidded routes, and 15 soil core samples from undisturbed (control) sites.
Soil parameters were derived using ASTM soil laboratory measurement standards. Soil samples 10 cm deep were collected with a soil hammer and rings (diameter 5 cm, length 10 cm), put in polyethylene bags, and immediately labeled. Surface litter and duff were removed before sampling. The soil samples were dried in an oven at 105 °C for 24 h to obtain the dry bulk density [34].
Sample volumes and weights were corrected for large roots, wood, or gravel. The dry bulk density was calculated from Equation (1):

ρd = (Wd − Wc) / Vc (1)

where ρd is the soil dry bulk density (g/cm³), Wd is the dry weight of the sampler complete with the soil sample (g), Wc is the weight of the cylinder sampler (g), and Vc is the volume of the cylinder sampler (cm³). On each skid-trail sample, the soil penetration resistance (PR) was measured at 18 points (6 points on the left wheel track, 6 points on the right wheel track, and 6 points on the log skidded routes, between the two wheel tracks) with a hand-held penetrometer (Eijkelkamp, Zevenaar, The Netherlands) at depths of 0 cm, 5 cm, and 10 cm. PR was also measured at 8 points in each control plot. In total, the numbers of PR measurements on the wheel tracks, log skidded routes and control areas were 60, 30 and 120, respectively.
The total soil porosity was calculated as Equation (2):

AP = (1 − ρd / 2.59) × 100 (2)

where AP is the total porosity (%), ρd is the dry bulk density (g/cm³), and 2.59 g/cm³ is the particle density measured by a pycnometer on the same soil samples used to determine the bulk density [1]. Soil pH was determined using an Orion Model 901 pH meter (Orion Research, Cambridge, MA, USA) in a 1:2.5 soil/water solution, soil organic C (OC) was determined using the Walkley-Black technique, and total N (TN) using a semi-micro-Kjeldahl technique [1].
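Equations (1) and (2) are straightforward to verify numerically. A minimal Python sketch follows, with hypothetical core weights and volume chosen only for illustration:

```python
PARTICLE_DENSITY = 2.59  # g/cm^3, pycnometer value used in Equation (2)

def dry_bulk_density(w_dry_g, w_cylinder_g, v_cylinder_cm3):
    """Equation (1): oven-dry soil mass divided by the core volume."""
    return (w_dry_g - w_cylinder_g) / v_cylinder_cm3

def total_porosity(rho_d):
    """Equation (2): share of the core volume not occupied by solids, in %."""
    return (1.0 - rho_d / PARTICLE_DENSITY) * 100.0

# Hypothetical core: 171 g dry sampler + soil, 50 g empty ring, 100 cm^3 volume.
rho = dry_bulk_density(171.0, 50.0, 100.0)           # -> 1.21 g/cm^3
print(round(rho, 2), round(total_porosity(rho), 1))  # 1.21, 53.3
```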
Seedling response to different soil properties was measured by several variables. The traditional measure is height growth; forms of below-ground growth and biomass are also significant measures of growth response [35]. In each treated area (wheel track, log skidded, and control), 27 normal seedlings of maple tree (nearest to the soil bulk density samples) with stem lengths between 30 cm and 60 cm and an age of 7 years were selected. Seedling age was determined by phyllotaxy, and the following parameters were measured for each seedling:
Morphological parameters: stem length (SL), stem diameter (SD), main root length (MRL), main root diameter (MRD), lateral root length (LRL), and root penetration depth (RPD).
Growth parameters: total dry biomass (TDB), stem dry biomass (SDB), and root dry biomass (RDB).
Architectural parameters: ratio of lateral to main root length (RLM), root mass ratio (RMR; ratio of RDB to TDB), stem mass ratio (SMR; ratio of SDB to TDB), ratio of main root length to stem length (RRS), and ratio of root penetration to main root length (RPL).
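Because the architectural parameters are plain ratios of the measured variables, their definitions can be made explicit in a few lines of code. The following Python sketch is illustrative only; the function name and the sample values are ours, not the paper's:

```python
def architecture_ratios(SL, SDB, MRL, LRL, RPD, RDB, TDB):
    """Allocation rates as defined in the text; all inputs are per-seedling
    measurements (lengths in cm, biomasses in g), so the ratios are unitless."""
    return {
        "RLM": LRL / MRL,   # ratio of lateral to main root length
        "RMR": RDB / TDB,   # root mass ratio
        "SMR": SDB / TDB,   # stem mass ratio
        "RRS": MRL / SL,    # main root length to stem length
        "RPL": RPD / MRL,   # root penetration to main root length
    }

# Hypothetical seedling: 45 cm stem, 30 cm main root, 20 cm lateral roots,
# 18 cm penetration depth, 6 g stem / 4 g root / 10 g total dry biomass.
print(architecture_ratios(SL=45, SDB=6, MRL=30, LRL=20, RPD=18, RDB=4, TDB=10))
```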
Stem and root diameter were measured with a vernier caliper (Insize Model 1205, INSIZE, Queensland, Australia), 5 cm above and below the soil surface, respectively. The vertical distance from the tip of the root to the soil surface was measured with a metal ruler as the root penetration depth. Dry weights (biomass) of the seedlings were obtained after drying at 70 °C until a constant weight was reached.
Data Analysis
The means of soil bulk density (BD), AP and PR, and the means of the seedling parameters in the three treatment areas were compared by one-way analysis of variance (ANOVA) and the Duncan test at a significance level of 0.05. The relationships between bulk density (BD) and stem length (SL), stem diameter (SD), main root length (MRL), lateral root length (LRL), root penetration depth (RPD) and total dry biomass (TDB); between seedling stem length (SL) and MRL, LRL, RPD, RDB, and TDB; and between MRL and RPD, were analyzed by correlation and regression in the three treatment areas. The nMDS (non-metric multidimensional scaling) approach was used to analyze the differences in the main indicators of seedling performance among the three areas. All analyses were performed using SPSS 19 (IBM, New York, NY, USA).
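For readers who want to reproduce this kind of analysis outside SPSS, a minimal sketch in Python with SciPy is shown below. The samples are synthetic, and Duncan's post-hoc test and nMDS are omitted because they are not part of SciPy:

```python
import numpy as np
from scipy import stats

# Synthetic bulk-density samples (g/cm^3) for the three treatment areas.
control     = np.array([0.95, 1.00, 0.92, 0.98])
log_skidded = np.array([1.10, 1.15, 1.08, 1.12])
wheel_track = np.array([1.30, 1.34, 1.28, 1.31])

# One-way ANOVA across the three areas at the 0.05 significance level.
f_stat, p_value = stats.f_oneway(control, log_skidded, wheel_track)

# Simple linear regression of a (synthetic) seedling trait on bulk density.
bd = np.concatenate([control, log_skidded, wheel_track])
trait = 60 - 25 * bd + np.random.default_rng(0).normal(0, 1.5, bd.size)
fit = stats.linregress(bd, trait)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}, R^2 = {fit.rvalue ** 2:.3f}")
```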
Soil Environment
The impact of skidding on the three physical soil parameters was significant, with a higher level of impact in the wheel track and a slightly lower one in the log skidded area compared to the control (Table 1). Ten years later, the larger number of tractor passes had increased bulk density by 12.6% to 36.1% and penetration resistance by 68.0% to 220.0%, while porosity had declined by 12.8% to 30.9% in the winching corridors and tire tracks (Table 1). The amounts of organic C and total N in the tire track and winching corridor were significantly lower than in the control. The pH value in the control was significantly lower than in the tire track and winching corridor.
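The ranges above are relative changes with respect to the undisturbed control; a small sketch shows the arithmetic (the input values are hypothetical, picked only to reproduce the reported upper bounds):

```python
def percent_change(treatment, control):
    """Relative change of a soil property on the trail vs. the undisturbed stand."""
    return (treatment - control) / control * 100.0

# Hypothetical values chosen only to reproduce the reported upper bounds:
print(round(percent_change(1.36, 1.00), 1))  # +36.0 %  bulk density, wheel track
print(round(percent_change(3.20, 1.00), 1))  # +220.0 % penetration resistance
```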
Morphology
Considering the growth mode and the plagiotropic behavior of beech seedlings, both stem length and height were measured. However, seedlings of both beech and maple did not show statistical differences from the control area (Table 2) with regard to height or stem length, while clear statistical differences were found for all other parameters. The length and diameter of the main root were significantly reduced by soil compaction, lateral root length in the compacted soils was significantly shorter than in the control, and root penetration depth decreased from the control area to the wheel track.
In the skid trail soil, root penetration depth decreased by 56% compared to the forest soil, for both beech and maple seedlings. A lower reduction was observed in the winching corridor: 41% and 39%, respectively, compared to the control.
Growth (Biomass)
Soil compaction had a significant effect on the reduction of stem and root dry biomass of beech (Table 2, Beech). Seedling biomass in the compacted soils (B = −24.2% and C = −35.0%) was significantly lower than in the control (A). Dry stem biomass in the tire track (C) and winching corridor (B) was 34.3% and 20.8% lower, respectively, than in the control (A). Dry root biomass in the tire track (C) and winching corridor (B) was 36.1% and 29.3% lower, respectively, than in the control (A).
Regarding maple seedling growth (Table 2, Maple), only dry root biomass showed a statistical difference; the value decreased from the control area to the wheel track (C = −35.7%), with an intermediate value in the log skidded area (B = −20.26%), in accordance with the decreases in main root length, lateral root length and root penetration depth. Dry stem biomass (B = +4.5% and C = +5.4%) did not show a statistical difference from the control area, while total dry biomass, with a p-value of 0.081, was considered borderline, with the value decreasing from the control area to the wheel track (C = −15.55%), with an intermediate value in the log skidded area (B = −7.3%), but without statistical significance.
Architecture (Allocation Rates)
Soil compaction had a significant effect on all the seedling architecture parameters of beech. Root penetration/main root length (RPL) showed that, at a higher compaction level, the main root has difficulty penetrating. Both main root length/stem length (RRS) and root penetration/main root length (RPL) showed differences at each soil compaction level, as the Duncan test highlighted. Referring to the seedling architecture of maple, all the characteristics showed clear statistical differences from the control area; in particular, a decreasing trend was shown for RLM, RMR, RRS and RPL from the control area to the wheel track, with an intermediate value in the log skidded area, while SMR showed an increasing trend.
Relationship between Bulk Density and Seedlings Morphology and Biomass
The effect of the soil bulk density increase on root and stem parameters was tested (Table 3). A negative statistical correlation was observed between bulk density and root length (for beech: F = 340.84, p < 0.001, R² = 0.7437; for maple: F = 185.60, p < 0.001, R² = 0.5451) (Figure 2); the lateral roots suffered more than the main roots from the increase in soil compaction. In beech seedlings, a negative statistical correlation was also observed between bulk density and the diameter of the root (F = 29.65, p < 0.001, R² = 0.4516) and of the stem (F = 65.77, p < 0.001, R² = 0.474) (Figure 2); both root and stem diameter decreased as bulk density increased. In contrast, root and stem diameter of maple seedlings increased with increasing bulk density, although these relationships were not statistically significant. Root penetration depth was significantly correlated with soil bulk density (for beech: F = 356.12, p < 0.001, R² = 0.8299; for maple: F = 301.14, p < 0.001, R² = 0.7432) (Figure 2); at higher bulk density values, root penetration decreased. A negative statistical correlation was observed between bulk density and seedling biomass (Figure 2). In both seedling species, total biomass was negatively correlated with bulk density (for beech: F = 166.05, p < 0.001, R² = 0.7847; for maple: F = 110.47, p < 0.001, R² = 0.4717) (Figure 2).
Beech Seedling Performance
In beech seedlings, the ratio of lateral root length to main root length (RLM) showed a trend that first increased and then decreased, but the R 2 value was low, while the ratio of main root length to stem length (RRS) and the ratio of root penetration to main root length (RPL) showed a negative trend with increasing bulk density (Figure 3).
Maple Seedling Performance
The relationship between stem length and main root length was tested for the three areas. In this case, only for area C was the regression analysis statistically non-significant, with a very low R 2 value (Figure 4). In areas A (control) and B (log-skidded) the regression analysis was statistically significant, with a positive relationship between the two variables and a high R 2 value of about 0.8.
Figure 3. Relationship between RLM (ratio of lateral to main root length), RRS (ratio of main root length to stem length) and RPL (ratio of root penetration to main root length) of beech seedlings with soil bulk density.

The relationship between stem length and lateral root length was tested for the three areas. In this case, for all three areas the regression analysis was statistically significant, and the R 2 value was low only for area C (Figure 5). In areas A and B, the regression showed a positive relationship between the two variables, with a good R 2 value of about 0.8.

The relationship between stem length and main root penetration depth was tested for the three areas (Figure 6); for all three areas the regression analysis was statistically significant, but the R 2 value was low. In areas A and B, the regression showed a slightly positive relationship between the two variables, with R 2 values ranging from 0.35 to 0.69.

The relationship between stem length and main dry root biomass was tested for the three areas. In this case, for all three areas the regression analysis was statistically significant, with an R 2 value of about 0.7, and the regression showed a slightly positive relationship between the two variables (Figure 7).

The relationship between stem length and total dry biomass was tested for the three areas: for all three areas the regression analysis was statistically significant, with a very good R 2 value of about 0.9, and the regression showed a positive relationship between the two variables (Figure 8).
The relationship between main root length and root penetration depth was tested for the three areas, and only for area C was the regression analysis statistically non-significant, with a very low R 2 value (Figure 9). In areas A and B, the regression analysis was statistically significant, with a positive relationship between the two variables and good R 2 values ranging from 0.6 to 0.8.

Finally, the relationship between stem length and R/S biomass was tested: for area B the regression analysis was statistically non-significant, and the R 2 value was very low (Figure 10). In areas A and C, the regression analysis was statistically significant, with a negative relationship between the two variables.
The nMDS diagram of the main indicators of the maple seedling performance matrix (Figure 11) showed a negative relationship between seedling characteristics, functionality and soil disturbance: the most impacted (and thus most compacted) soils showed an abnormal growth in the root diameter of the seedlings, a stiff growth in height and a limited length and distribution of the root system. The main variability was expressed by the stem mass ratio (SMR) and root mass ratio (RMR), while root penetration/main root length (RPL) showed a clear differentiation between groups A and B, and area C.
Effect on Seedling Quality
The morphological features of the seedlings were used to compute the quality parameters in order to compare the three cases (Table 4). The sturdiness quotient (SQ) of seedlings on the tire track (maple: SQ = 10.7; beech: SQ = 13.1) and in the log-skidded area (maple: SQ = 9.3; beech: SQ = 12.3) was significantly higher than on the un-compacted soil (A, control) (maple: SQ = 8.5; beech: SQ = 11.1). The root-shoot ratio (RS) of maple seedlings in the control (RS = 0.96) was significantly higher than on the tire tracks (RS = 0.60) and in the log-skidded area (RS = 0.73). On the contrary, the root-shoot ratio (RS) of beech seedlings did not show statistically significant differences among the three areas.
Discussion
In this research, the long-term effects of skidding on the physical and chemical properties of soil (bulk density, penetration resistance, total porosity, organic C, total nitrogen and pH) and on the morphology, growth and architectural characteristics of beech and maple seedlings were investigated in the Caspian forests of Iran.
Soil Environment
The results of our research demonstrated that, 10 years after the operation, skid trails and winching corridors still showed significant differences in physical and chemical properties compared to undisturbed soils.
In the winching corridors, soil disturbance was due to dragging the logs along the trails to the skidder. The higher values of bulk density reflected the compacting action of the trunk load and of the coupled load of trunk and vehicle. Jourgholami et al. [38] observed that bulk density significantly increased with the number of vehicle passes in a mixed forest in Iran characterized by a well-drained brown forest soil (Alfisols). Similar results were obtained in other studies in the Hyrcanian forest [22,39,40]. Important soil structural characteristics were modified, and physical parameters worsened more in the highly disturbed soil than in the winching corridor. The USDA Forest Service suggested that a bulk density increase of more than 15% is detrimental to the soil ecosystem [41]. Compaction alters the moisture regime of the soil and can impede the growth of roots; hence the tree is not able to draw water or nutrients at depth; poor root development can also make mature trees more susceptible to wind-throw [42]. Dickerson [43] found that wheel-rutted soils required about 12 years to recover and log-disturbed soils about 8 years after tree-length skidding. A study by Naghdi et al. [44] showed that 20 years after skidding operations, the micromorphological properties of compacted soil on the skid trails had not yet recovered: they were still significantly different from the control, thus needing more time for complete recovery in the Caspian forests of Iran. However, some authors reported lower soil damage when harvesting methods were improved and training was provided for the operators [13,45].
Ten years after the operation, soil penetration resistance on log skids and tire tracks had dramatically increased compared to the undisturbed areas. Whalley et al. [46] found that plant root growth slowed down at a penetration resistance of 2 MPa and stopped when resistance values exceeded 3 MPa.
As bulk density and penetration resistance increased, porosity decreased. Total soil porosity observed after 10 years still showed a negative relative variation compared to the undisturbed soil, of about 13% in the winching corridor and 30% in the tire track. This is consistent with previous observations [2,8,11,14,47-49]. Naghdi et al. [44] reported that soil porosity on skid trails (35.8%) was significantly lower than in untouched soils (54.9%) 20 years after compaction (skidding operation) in the Hyrcanian forests of Iran. These results were quite similar to those observed in our research. Reduction in soil porosity and air permeability reduces soil penetrability for roots [50] and limits root extension, elongation, branching, density, and penetration of primary roots, as well as root access to, and uptake of, soil moisture and nutrients [15]. Seedling root growth is also reduced when the oxygen concentration drops beneath the 6% to 10% range [51].
Generally, pH seems less susceptible to variation in forest soils, even disturbed ones [3,4,11], but in our case it increased with disturbance intensity. Naghdi et al. [17] observed that pH was significantly influenced by the number of passes of a rubber-tired skidder in the Sorkhekolah forest, North Iran.
Organic carbon content decreased as disturbance increased, as did nitrogen. The soil disturbed by the coupled Timberjack movement and trunk load showed a considerably lower organic matter content, as observed in other case studies across different forest areas, management regimes, treatments and soil types [3,4,17,52]. Rutting and soil displacement, which cause layers of lesser fertility to emerge [22,40], are still indicated 10 years after the forest operation by the difference in chemical characteristics compared to the undisturbed control, such as the reduction in organic carbon and nitrogen and the increase in pH. It is very interesting to note that 10 years after the operation the recovery processes are still in progress, although the areas considered have been colonized by forest vegetation. The lowering of chemical parameters seems ascribable to the decline in microbiological activity [3,53,54] due to displacement of dead wood and forest litter and to mixing and removal of topsoil, coupled with the reduced soil porosity [49]. The lowering of porosity may result in decreased water fluxes and gas diffusion, with adverse effects on the roots of trees [55,56] and seedlings [57], on the soil bacterial community [58] and, more generally, on the soil biota community [59].
Effect of Soil Compaction on Beech and Maple Seedlings
After 10 years, the morphological parameters of beech and maple seedlings were affected as soil compaction increased. Only stem length and seedling height of both beech and maple were similar across the different soil compaction levels [25]. The difference between height and length of beech seedlings is not dramatic, as plagiotropic stem behavior can frequently be observed in beech [60].
Both root and stem diameter of beech were adversely affected by bulk density, decreasing as it increased. On the contrary, in maple they were positively influenced, increasing as soil compaction increased.
Seedlings with a larger stem diameter were better able to survive than those with a smaller diameter [61]. Jacobs et al. [62] noted that seedlings with a greater initial height at planting could survive better. Therefore, it can be argued that after 10 years the smaller seedlings in each soil compaction type had been subjected to environmental selection. Furthermore, the beech seedlings observed had not yet been subjected to neighborhood competition and had not yet competed for water and light (similar seedling height and stem length).
The relationship analysis of maple seedling morphology indicated that main root length, lateral root length and dry biomass, and root penetration depth increased with increasing stem length in all three treatments. These results indicate that, although soil compaction decreases root growth, roots still grow well and reach a good penetration depth in heavily compacted soils as stem length increases.
The root system of beech and maple seedlings was significantly reduced by increased soil compaction. The effect of skid trails on the reduction of root length was greater for lateral root length than for main root length.
A decrease in root length with increasing soil compaction has been noted for many plant species [63], including trees [9,64]. Higher soil bulk density was associated with thinner roots, shorter lateral roots and lower penetration depth. Similar results were found by Mosena and Dillenburg [65]. Consistent with our results on beech and maple seedlings, unfavorable effects were observed in other hardwood species [11,25,26,66]. Hildebrand [57] indicated a bulk density threshold of 1.25 g/cm 3 for the development of fine roots of beech seedlings in loess loam soil; beech seedlings grown at a bulk density of 1.34 g/cm 3 exhibited a heavy suppression of fine roots; at a bulk density of 1.46 g/cm 3 the fine roots were arranged around the main root like a brush and therefore showed poor penetration of the soil. Naghdi [17] observed a 44% reduction of maple seedling root length in heavily trafficked soils. In Acer cappadocicum Bunge. seedlings grown under greenhouse conditions, Jourgholami [21] detected a decrease of 43% in stem length, 36% in stem diameter and 49.8% in main root length, and an increase of 101% in lateral root length, when comparing the un-compacted class (control, bulk density 1.08 ± 0.03 g/cm 3 ) to the highest compaction (bulk density 1.38 ± 0.03 g/cm 3 ). Von Wilpert and Schäffer [67] indicated that deficiencies in soil gas permeability reduce fine root formation. The lowest fine root density and rooting depth were found below the wheel tracks due to compaction in nearly stone-free loamy soils (Luvisols), and fine root distribution was interpreted as a very sensitive indicator of the soil aeration status [67]. The results of previous research indicate that soil compaction effects are related to soil type [14], species [68] and the forestry intervention system [52,69].
Root dry mass is an indicator of the root system's capability to absorb water and nutritive elements [61], which assures better survival for seedlings. The negative effect on the dry mass of stem and root agrees with data and correlations observed in other species [11,25,66,70]. It is interesting to note that the difference in total dry biomass in the compacted area was driven by stem rather than root biomass in beech, and by root rather than stem biomass in maple. This primarily suggests a difference in species behavior. Moreover, after 10 years a selection of seedlings had taken place in the disturbed areas based on their capability to obtain sufficient natural resources to grow. After 10 years, dry root biomass decreased less than other parameters, probably due to environmental selection in beech. The root system assures seedling survival by exploring the soil for water supply and nutrients and by supporting mycorrhizal symbiosis [71]. Below a threshold related to the transpiring area (shoot) and to the water-absorbing area (roots), the seedlings have no future.
Root penetration depth was dramatically affected by increasing bulk density in both beech and maple seedlings. Growth inhibition of the root system and limitation of the maximum penetration depth were observed in Quercus petraea Matt. seedlings [70]. Furthermore, even a modest increase in soil density led to negative effects on plant growth. Root penetration depth and lateral root length were the parameters most affected by soil compaction in our study. This evidence confirms that impacts on soil properties have a long-term detrimental effect on the seedling root system [18,57,61,66,67].
The architecture of the seedlings describes the allocation of biomass to the root and the stem. As noted above, bulk density affects the morphology and growth of the seedling. Likewise, as soil compaction increases, the seedling architecture is also affected. The main root length (MRL) and the root penetration depth (RPD) are distinct root morphological indicators. The RPL index is therefore both a root architectural indicator and a root growth performance indicator. The morphological plasticity of the root system in response to soil compaction is clearly exposed by the RPL. The parameters describing the seedling architecture confirm the observations on the morphology and growth of both beech and maple seedlings. Beech and maple seedlings showed that the ratio of lateral to main root length (RLM), root mass ratio (RMR), main root/stem length ratio (RRS) and root penetration/main root length ratio (RPL) on compacted soil were lower than in the undisturbed control. Conversely, the stem mass ratio (SMR) of maple and beech on the log skids and tire tracks was higher than in the undisturbed area.
Roots grow mainly downwards [72], and compacted soils oppose greater resistance than soils with lower bulk density. A high penetration resistance modifies the root system shape from its typical pattern [73]. Increased soil strength (penetration resistance) may change the proportional growth allocation between above- and below-ground portions of seedlings [24,64,74] and decrease the proportion of roots [9]. RRS typically increases with decreasing water availability [75,76], but as a result of soil compaction, R/S responses were highly variable. Following soil compaction, RRS increased in P. contorta Dougl. ex Loud. in dry soils, but not in moist or wet soils [77], while it decreased in Q. coccifera L. and Q. faginea Lam., and remained unchanged in Q. ilex L., Q. canariensis Willd., and Q. pyrenaica Willd. [74]. Compacted soils exhibit lower water storage capacity due to lower porosity [32]. Madsen [78] observed that soil water content is a limiting factor for the survival of beech seedlings. Specifically, the root biomass of beech is positively conditioned by the availability of water [72]. Under natural conditions, seedlings with a low root-shoot ratio may be more susceptible to water deficit stress [79]. Similar to what was previously observed for the growth parameters, the biomass of both root and stem was measured 10 years after the forest operations, so the seedlings sampled had at least the opportunity to acquire minimal survival features.
Effect of Soil Compaction on Seedling Quality
The seedling sturdiness quotient expresses the vigor and robustness of the seedling and reflects its stocky or spindly nature, the ideal value being less than six [37]. It is a good indicator of the seedling's ability to withstand physical damage such as exposure to severe wind. The parameters characterizing seedlings grown in a natural context showed a great difference compared with seedlings produced in controlled systems. In our study the sturdiness quotient depicted tall and thin beech and maple seedlings in each compaction situation, although the seedlings in the control were better shaped. The morphological, growth and architectural parameters are only broadly comparable to what has been observed in controlled contexts [61,80].
The seedling root-shoot ratio (RS) indicates a measure of balance between the transpiration area and the water-absorbing area. A root-shoot ratio between one and two is considered optimal [37]. Our results indicated that all mean RS values for both beech and maple seedlings were lower than one. The maximum RS value was obtained in the control for maple seedlings (RS = 0.96). The RS values of maple seedlings on the tire track and in the log-skidded area were higher than the RS values of beech seedlings in these areas.
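For reference, these two quality indices are commonly computed as shown below; the exact formulation in [37] is assumed here to follow this standard convention:

$$\mathrm{SQ} = \frac{H\ [\mathrm{cm}]}{D\ [\mathrm{mm}]}, \qquad \mathrm{RS} = \frac{W_{\mathrm{root}}}{W_{\mathrm{shoot}}}$$

where $H$ is the seedling height, $D$ is the root collar diameter, and $W_{\mathrm{root}}$ and $W_{\mathrm{shoot}}$ are the dry masses of the root and shoot fractions, respectively.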
Nevertheless, the SQ and RS values are not comparable with indexes obtained in the nursery. The seedlings examined in our study were older, natural regeneration is a much more complex process, and the scenario is different.
Several studies underline the effects of forest operations on soils and forest regeneration [81,82]. Compacted soil caused by forest harvesting plays an important role in the reduction of plant root growth by restricting access to water and nutrients and by reducing air diffusion [14]. In a study by Pinard et al. [20] in the Malaysian forests, it was concluded that after 17 years the density and richness of woody plants on skid trails were lower than in the adjacent forests. Krause [83] reported that compaction from harvesting equipment can reduce water infiltration and air permeability, which is detrimental to the establishment and growth of regenerating species.
On the other hand, skid trails, which are often constructed by excavation, are one of the most important elements of close-to-nature forestry, as they enable selective cutting methods in areas served by an optimal forest road network [84]. For this reason, the rehabilitation of skid trails has become more important in recent years. D'Oliveira [85] suggests artificial regeneration on the skid trails of the Brazilian forests; based on his research, native species have grown well and survived after 5 years. The rehabilitation of skid trails after skidding operations by drainage construction is essential to reduce the quantity of runoff and to minimize soil erosion [7,86], and the installation of water bars and brush barriers on skid trails is necessary to control and capture runoff and ensure soil stabilization. Cross drains should be spaced less than 25 m apart on skid trails sloping 10-15 degrees [87]. Post-harvest operations, such as installation of water bars and brush barriers on skid trails, construction of drains across skid trails seeded with grass, and replanting and fertilization of severely compacted soils, are procedures for the specific purpose of maintaining forest biodiversity. The forest management plan should include these operations in order to meet both short- and long-term management goals and to ensure forest productivity. Forest soil maintenance is a key factor in sustaining productive forests [88]. Plant coverage is a main ecological factor limiting soil erosion on skid trails.
The forest service must be competent in managing soil disturbance, in order to maintain a sustainable production of natural resources [89].
Physical soil properties and changes in other environmental properties of skid trails, such as chemical soil properties, wind speed, air temperature, and light intensity, can create differences in beech and maple seedling growth between the skid trail and the non-skid trail.
Conclusions
Post-harvest operations are necessary to limit soil erosion and to maintain plant species diversity in these forests.
All the data gathered, both physical and chemical, on the features of the soil under the different impact conditions demonstrated that vehicle passages and loads provoke damage. The physical soil properties analyzed for the skid trails showed a clear impact in comparison to the undisturbed areas. In particular, the impact was more relevant in the tire tracks, with dangerous values, while on the log skids the impact was limited. The increase in soil bulk density restricted root and stem growth, reduced root penetration and modified the architecture of beech and maple seedlings, probably limiting the access to and absorption of nutrients and water to different degrees.
Although all forest logging systems have the potential to negatively affect forest soil, a more sustainable approach to forest harvesting is needed. Monitoring soil damage and recovery is useful in order to evaluate and improve sustainable forest management while maintaining fertility.
Harvesting systems that require vehicle passage on the tracks must be prioritized so as to reduce forest soil degradation, ensuring an environment favorable to the establishment and growth of natural regeneration.
The careful planning of interventions, the rational opening of forest tracks, the choice of the best system for the environmental conditions, the adequate training of workers and the correct management of forest operations are essential elements if a sustainable and respectful forest management of natural resources is to be pursued.
Physical soil properties and changes in other environmental properties of the skid trail created differences in beech and maple seedling growth between the skid trail and the non-skid trail. This was closely related to the physiological characteristics of the two species studied. Beech seedlings reacted well to moderate uncovering, but they needed little-disturbed soil, even if the bedding was very mixed. Maple seedlings reacted better than beech seedlings to uncovering and soil disturbance. The effects of the skid trail on the morphology, growth and architecture of maple seedlings in the Hyrcanian beech forests showed that maple seedlings are a suitable species for the maintenance of the physical properties of skid trails after logging operations in the beech stands of the Caspian forests of Iran. Stem and main root diameter increased, while main root length, lateral root length and root penetration depth decreased. The ratio of lateral to main root length, root mass ratio, stem mass ratio and ratio of root penetration to main root length were significantly reduced. The relationship analysis of seedling morphology indicated that, although soil compaction reduces root growth, roots still grow well and reach a good penetration depth in heavily compacted soils as stem length increases.
TERRESTRIAL AND AERIAL GROUND-PENETRATING RADAR IN USE FOR ARCHITECTURAL RESEARCH: THE ANCIENT 16TH CENTURY WATER SUPPLY AND DRAINAGE AT THE MONASTERY OF EL ESCORIAL (MADRID, SPAIN)
Remote sensing techniques in Archaeology are increasingly essential components of the methodologies used in archaeological and architectural research. They allow uncovering unique forgotten data which are unobtainable using traditional excavation techniques, mainly because their precise location is lost. These data are still important since they can help to prevent flood effects inside the ancient building cellars and basements, as happened periodically in El Escorial. Wide ancient drainage galleries run more than one hundred feet downhill outside the building, ensuring that rainwater and springs were adequately drained. Nowadays their plans are lost, and the lack of documents related to both the ancient water supply and drainage systems has become an impediment to solving the stains of damp on the stone masonry walls and vaults, and even other occasional flooding effects. In this case, non-destructive techniques were needed to find the ancient underground passages in order to preserve the integrity of the building and its current activities. At a first stage, oblique aerial infrared images taken from a helium barrage balloon helped to find the buried masonry structures easily, quickly and cheaply. Secondly, radar pulses were particularly interesting for imaging the subsurface, as they were a valuable means of assessing the presence and amount of both soil water and buried structures. The combination of both techniques proved to be an accurate and low-cost way to find the ancient drainage systems. Finally, results were produced by means of open source software.
INTRODUCTION
The first notices of damage due to floods at the Monastery of El Escorial (Madrid, Spain) were given in an anonymous 1645 report, the Libro de la Fontanería (Drainage Book), whose original manuscript is preserved at the Monastery Library (Andrés, 1965). Flood effects inside the building have been substantially palliated since the new drainage system was built in the 1960s. Although the original 16th century water supply and sewage tunnels were mostly preserved, the lack of use meant that a great part of their traces are currently lost, both inside and outside the building. These installations are particularly interesting considering that the first one still reaches the height of the 30-feet storey of the building and formerly included a hot water distribution network. The second one took advantage of rain and spring water through the construction of eleven cisterns and a wide net of tunnels. After locating the main entrances, we have been surveying the original tunnels inside the building, measuring, drawing and describing them with the aim of documenting the original water supply and sewage systems. While the inner tunnels are still accessible, vegetation has blocked up the outer sewage tunnels, whose location is currently lost.
Objectives
Our research aimed to find the still existing original sewage tunnels and reservoirs outside the building, to document them, and to preserve them from aggressive factors, such as the invasive vegetation which is destroying the ancient stone and brickwork.
To achieve these main targets, we used non-destructive, fast, low-cost techniques: aerial visible and infrared images and, at a second stage, ground-penetrating radar. Results and documentation were produced using open source software, according to the targets of the CIPA Task Group 2 "Open source in use for the Cultural Heritage communication processes."
Case Study: The Monastery of San Lorenzo de El Escorial (Madrid, Spain)
The recently built sewage network helped to erase the memory of the original buried structures; but as they were not destroyed, it is still possible to study and document them. Written sources such as the Libro de la Fontanería provide an accurate description of the different tunnels, pipes, taps, and fountains located inside the Monastery. A recent work by Martín Gómez (1986) also deals with the water supply system. But both are vague in defining the location of the external tunnels and water towers, only mentioning that the two main galleries run somewhere under the Royal Pantheon and under the Galería de Convalecientes (Gallery of Convalescents).
Among the documents located in the main Spanish archives there are some drawings related to water supply and drainage in the royal palaces; the most useful for our research was a sketch on a letter by the former architect Juan Bautista de Toledo held in the Archivo General de Simancas (Figure 1), which shows the tunnel under the Galería de Convalecientes, on the south side of the Monastery. No other document or drawing was found about the other main tunnel under the Royal Pantheon, except a description by friar Francisco de los Santos (1657) of the works undertaken to finish it in 1645, when water flooded the crypt (Figure 2). As the Monastery has been included on the World Heritage List of UNESCO since 1984, the methods to be used had to be non-destructive and had to preserve the current uses of the building as a main touristic site, but also as a school, a monastery and offices of Patrimonio Nacional. On the other hand, the methods should also be applicable quickly and at low cost. Two modern techniques, aerial visible and infrared imaging and ground-penetrating radar (GPR), proved when combined to be useful tools to accomplish our targets, considering all the previous determining factors.
Previous research
Previous research on the use of aerial oblique visual and infrared images for archaeological purposes is abundant. Among the recent contributions can be mentioned: on low-cost aerial photographic techniques, Eppich, Almagro, Santana and Almagro (2011); on producing georeferenced cartography from single oblique photos, Bozzini, Conedera and Krebs (2011); on the identification of archaeological remains from the air, Aber, Marzolff and Ries (2010), as well as Mirijovsky, Martinek and Brus (2011). The use of small remote-controlled helicopters for architectural and archaeological surveying has long been applied by the DAVAP Research Team at the University of Valladolid (Sánchez, San José, Fernández, Martínez and Finat 2011). Similarly, some interesting experiences have been accomplished using GPR, such as those led in the Alhambra (Granada, Spain) by Rafael Gómez (2008), and by the team of Prof. Conyers at the University of Denver (Conyers, Ernenwein & Bedal 2002), who uses GPR mapping for the detection and interpretation of cultural materials. The use of GPR in urban areas has been carried out by Basile, Carrozzo, Negri and Nuzzo (2000) in Lecce (Italy) to obtain a detailed characterization of the most superficial layers, where archaeological structures were presumably buried. Finally, on integrated geophysical survey methods in the assessment of archaeological sites, see Keay et al. (2009).
Previous considerations
Before planning the works, we took into consideration some specific characteristics of the site which could affect their development. Firstly, we knew in advance the materials and constructive features of the tunnels, as we had drawn their inner sections inside the Monastery applying direct measuring, photogrammetry, and free drawing tools such as Inkscape (Figure 3). Another important question to be considered was the existing granite rock layer under the Monastery, which served as a solid foundation for the building, but whose depth and extension were unknown.
Finally, different degrees of moisture were detected on the stone façades of the Monastery. They were particularly evident on the stonework surfaces near the drainage areas (Figure 4). The external area of the Monastery chosen to apply the above-mentioned techniques was the east slope running downhill from the Muro de los Nichos to El Bosquecillo (Little Wood), covering an area of about 12 ha (Figure 5). Originally a big pond was located in this place, about 750 m from the Monastery, and its main function was to collect the water from the different sewage tunnels and channels coming from the Monastery and its orchard and gardens. The pond was replaced in the 1950s by a swimming pool, during the modernisation of the whole water supply system. As mentioned above, two main methods were applied in this research.
At a first stage we searched for the ancient tunnel using low-altitude aerial oblique visible and infrared images, in order to set the limits of the area to be explored at the second stage with the GPR.
Instrumentation for the visible light spectrum and IR data collection
During the first stage we decided to take two series of oblique aerial images, using a digital camera equipped for both the visible and the infrared light spectrum. After visiting the area we established in advance the conditions for IR, sensitivity, focus, and shutter, together with the bearing parameters and timetable according to the light hours. Among the qualities of the digital camera were its lightness (only 650 g without battery) and its small size (128 x 93 x 129 mm) (Table 1). It included Fujifilm's Hyper Utility Software HS-V version 3, which became a useful tool for side-by-side image comparisons along with metadata analysis.
To take the images we used a remote-controlled helium barrage balloon, which allowed the operator to visualise the camera recordings in real time, thus having full control of the work.
The PVC balloon was 5 m long, with a capacity of 9.6 m 3 of helium. In order to detect the buried structures, it was furnished with robotics able to pan the camera 360º, with an azimuth of 90º.
We searched firstly for perceptible terrain anomalies and their location within the research area. The aerial images allow observing both a wide area and details of its features.
With this aim, we took the series of visible light spectrum images at a height of 30 m and a 45º azimuth, selecting a 28 mm focal length and a B&W 486 filter.
The following image series were captured in the infrared light spectrum at a height of 40 m and a 47º azimuth, using a B&W 093 filter. The camera's built-in 10.7x, 28-200 mm optical zoom system minimised dust. On the other hand, through the use of an infrared cut or "hot mirror" filter on the lens, the camera could capture a visible light image very close in quality to that of a standard digital camera, adding flexibility and cost-effectiveness to the overall package. This kind of IR camera detects buried structures, showing them by means of contrasts between their traces and their impact on the terrain surface and the vegetation cover. Frequently these superficial changes cannot be perceived in the visible light spectrum, but IR brings them to light under particular conditions (Figure 5).
Figure 5. Comparison of the visible light spectrum and the IR images; the purple traces correspond to the main buried tunnel.
Instrumentation for the GPR data collection
Once the research area was delimited, we introduced the GPR technique in order to obtain an accurate plan of the main sewage tunnel. This method does not need to establish any direct contact with the ground; it is fast and easy to manage, and can be used worldwide except on saline soils (which is not the case here).
In our case, the maximum depth of the tunnel vault does not exceed 1 m, which could be considered an advantage. On the other hand, the granite rocks spread all around the study area could introduce some distortions that had to be considered in the later analysis. The GPR technique takes advantage of the fact that all buried materials in the ground have particular physical and chemical properties that affect the velocity of electromagnetic energy propagation, the most important of which are electrical conductivity and magnetic permeability. The reflectivity of radar energy that occurs at a buried interface is primarily a function of velocity changes, which is measured by differences in the relative dielectric permittivity (RDP). The greater the change in velocity, the higher the amplitude of the reflected wave. The greatest factor affecting RDP is the moisture content and its distribution. Thus, in order to generate a significant reflection in a profile, the changes in RDP between two bounding materials must occur over a short distance. As the brick vault of the tunnel has different physical and chemical properties than the surrounding organic-rich surface soils or forested lands, significant reflections occurred at their interface. Similarly, differences between the underlying granite rocks and wet soils near the sewer water can be easily detected (Table 2).
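In quantitative terms, for low-loss, non-magnetic ground these relationships are usually written as follows (standard GPR approximations, added here for clarity rather than taken from the project documentation):

$$v = \frac{c}{\sqrt{\varepsilon_r}}, \qquad R = \frac{\sqrt{\varepsilon_{r,1}} - \sqrt{\varepsilon_{r,2}}}{\sqrt{\varepsilon_{r,1}} + \sqrt{\varepsilon_{r,2}}}$$

where $c$ is the speed of light in vacuum, $\varepsilon_r$ is the RDP of the medium, and $R$ is the amplitude reflection coefficient at the interface between materials 1 and 2; the larger the RDP contrast, the stronger the reflection.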
We used a GSSI UtilityScan with 400 MHz and 900 MHz antennas, on a SmartCart running at 0.5 m/s, which allowed the collection of datasets every 2 m. Working under good conditions this instrument can detect structures buried at a depth of 6 m, which can be considered more than enough in this case study.
The GPR included a GPS in order to georeference each dataset, which made the later cartographic production easier. GPR data were collected along closely spaced transects within a grid. This is an active method that transmits electromagnetic pulses from surface antennas into the ground, and then measures the time elapsed between the moment when the pulses are sent and the moment when they are received back at the surface (two-way travel time). As the radar pulses are transmitted through various materials on their way to the buried target feature, their velocity changes; when the travel time of the energy pulses is measured and their velocity through the ground is known, depth in the ground can be accurately measured (Figure 6). The approximate size of the radiation footprint at a given depth in the ground is estimated from the antenna frequency and the RDP of the ground through which the energy passes. The cone of transmission becomes wider with depth when the relative dielectric permittivity (RDP) of the material is low. In high-RDP materials the transmission cone is narrower, and its footprint radius at any depth is much smaller. When hundreds or thousands of reflection traces are stacked together, a two-dimensional reflection profile is produced.
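The depth and footprint estimates described above can be sketched numerically as follows; the values are purely illustrative, and the footprint expression is the common approximation attributed to Conyers and Goodman, assumed here rather than quoted from the survey:

```python
# Illustrative GPR depth and footprint estimates (low-loss approximations).
import math

C = 0.3          # speed of light, m/ns
rdp = 9.0        # assumed relative dielectric permittivity of the soil
freq = 0.4       # antenna centre frequency, GHz (400 MHz)
t_twt = 20.0     # measured two-way travel time, ns

v = C / math.sqrt(rdp)                 # wave velocity in the ground, m/ns
depth = v * t_twt / 2.0                # one-way target depth, m
wavelength = v / freq                  # dominant wavelength in the ground, m
footprint = wavelength / 4.0 + depth / math.sqrt(rdp - 1.0)  # approx. radius, m
print(f"v = {v:.3f} m/ns, depth = {depth:.2f} m, footprint radius = {footprint:.2f} m")
```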
Point-source hyperbolas (diffractions) are generated as conical patterns from buried objects of limited size, their apices denoting the actual location of the tunnel (Figure 7). GPR georeferenced maps were produced in eight horizontal slices (spaced ½ foot apart) showing the presence of the buried tunnels and an old water tower. They are similar to arbitrary level maps in standard archaeological investigations, except that they show the strength of radar reflections within certain depth intervals.
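The shape of such a diffraction hyperbola follows directly from the two-way travel time geometry (a standard result, included here for clarity):

$$t(x) = \frac{2}{v}\sqrt{d^{2} + x^{2}}$$

where $d$ is the depth of the point target, $x$ is the horizontal offset of the antenna from the point directly above it, and $v$ is the wave velocity in the ground; the apex at $x = 0$ gives the shortest travel time and hence the true target position.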
Results
After the field work, back at the office, we started the data processing and interpretation. Radargrams were processed using the software RADAN 6.5, which allows the application of digital filters such as finite and infinite impulse response (FIR and IIR) filters, spatial filters, migration, and the Hilbert transform.
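As a minimal sketch of one of these processing steps, the following example computes the instantaneous amplitude (envelope) of a radar trace via the Hilbert transform; the trace is synthetic and only illustrates the operation, not the RADAN workflow itself:

```python
# Envelope of a synthetic radar trace via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 50.0, 1024)                  # two-way travel time, ns
trace = np.sin(2 * np.pi * 0.4 * t) * np.exp(-((t - 20.0) ** 2) / 10.0)
envelope = np.abs(hilbert(trace))                 # instantaneous amplitude
print(f"peak envelope amplitude: {envelope.max():.3f}")
```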
Figure 8. Old buried water tower at a depth of 2 ½ feet, south-east slope. Units were chosen in this case according to the historical 16th century distance units.
Once the data are processed by removing the unwanted noise, and the point-source hyperbola tails (high-angle reflections) are removed to improve accuracy, leaving only the reflections at the apex, the resulting reflection profiles are ready for additional image production (Soldovieri and Orlando 2009) (Figure 8).
As the datasets can be exported into different graphic file formats, it is easy to draw maps and to produce photorealistic 3D models. For example, the datasets from the horizontal slices can be imported into visualisation programs such as 3D Studio Max or SketchUp to render them into 3D shapes (Figure 9).
Discussion
In the GPR technique, point-source reflections are generated from one distinct point feature in the subsurface. In this case the buried materials generating this kind of point-source reflection could be individual granite rocks, metal objects, or ceramic pipes of old sewage systems, but also the 16th century drainage tunnels.
The ability to identify buried features is mostly a function of the wavelength of the energy reaching them at the depth at which they are buried. Features other than large planar surfaces must stand out from the surrounding clutter and be greater than about 75% of one wavelength in dimension to be resolvable.
On the one hand, a usual disturbance affecting the resolution of reflections in the ground is background noise, which is almost always recorded during GPR surveys. On the other hand, GPR antennas employ electromagnetic energy at frequencies similar to those used in television, FM radio and other radio communication bands, so there are almost always nearby noise generators of some kind.
CONCLUSIONS
The acquisition of reliable results with GPR techniques depends on knowing in advance the type of soils and geologic materials, and their moisture. According to Conyers (2004), rain is the most important factor in resolving ancient structures with radar.
By defining the depth of the prospective target features and their approximate dimensions and composition using estimates of RDP, the cone of transmission can be predicted, and the potential resolution of features of interest can be estimated from the footprint size for different frequency antennas. Some of the important factors to be considered in choosing an antenna frequency are: the electrical properties of the ground at the site, the depth of radar energy transmission necessary to study the buried features, their size and dimensions, the site access, and the presence of possible external electrical interferences within the frequency spectrum of the antenna. Radio interferences and their sources must be identified beforehand, in order to choose an antenna frequency that minimizes their influence. As a general rule, if the target features are within about one meter of the ground surface, antennas between 400 and 900 MHz are adequate to transmit energy to that depth and resolve most features and the associated stratigraphy. Finally, when applying this technology on granite slab pavements, which is the case of the tunnel under the Galería de Convalecientes, the expected high attenuation of electromagnetic energy due to the wet granite implies a low penetration depth of the signal, not exceeding 1 m even when using a 100 MHz antenna. In summary, the integration of transdisciplinary methods (such as geophysical and land survey methods) in the assessment of architectural sites has proved to be an accurate and interesting methodology, supplying complementary datasets to the traditional ones obtained through photogrammetric or laser scanner methodologies.
Figure 1. Juan Bautista de Toledo, 1564: sketch of the drainage tunnel under the Galería de Convalecientes on top of a letter to King Philip II. Archivo General de Simancas (Valladolid, Spain).
Figure 2. Herrera and Perret, 1587: longitudinal cut of the Monastery showing the Royal Pantheon (detail).
Figure 3. Section of the water supply tunnels inside the Monastery. Depth from the ground floor level varies from 0.5 m to 1 m.
Figure 4. Eastern façade of the Monastery (Muro de los Nichos, or Wall of the Niches), showing the stains of damp on the stone masonry walls.
Figure 6. Radar energy cone of transmission and footprint.
Figure 9. Map of the south-east area of the Monastery, showing the different sections of the drainage tunnel. The water tower is located in D.
Table 1. Digital Camera IS-1 main specifications.
Table 2. Relative dielectric permittivities (RDPs) of the geological materials in El Escorial.
Flight State Identification of a Self-Sensing Wing via an Improved Feature Selection Method and Machine Learning Approaches
In this work, a data-driven approach for identifying the flight state of a self-sensing wing structure with an embedded multi-functional sensing network is proposed. The flight state is characterized by the structural vibration signals recorded from a series of wind tunnel experiments under varying angles of attack and airspeeds. A large feature pool is created by extracting potential features from the signals covering the time domain, the frequency domain as well as the information domain. Special emphasis is given to feature selection in which a novel filter method is developed based on the combination of a modified distance evaluation algorithm and a variance inflation factor. Machine learning algorithms are then employed to establish the mapping relationship from the feature space to the practical state space. Results from two case studies demonstrate the high identification accuracy and the effectiveness of the model complexity reduction via the proposed method, thus providing new perspectives of self-awareness towards the next generation of intelligent air vehicles.
Introduction
The current state sensing and awareness of flight vehicles relies on traditional sensors and detection devices mounted on different locations of the vehicle, e.g., Pitot tubes installed in front of the nose for airspeed measurement, transducers located on each side of the fuselage for angle of attack detection. Inspired by the unsurpassed flight capabilities of birds, a novel "fly-by-feel" (FBF) concept has been recently proposed for the development of the next generation of intelligent air vehicles that can "feel", "think", and "react" [1,2]. Such bio-inspired systems will not only be able to sense the environment (temperature, pressure, aerodynamic forces, etc.), but also be able to think in real-time and be aware of their current flight state and structural health condition. Further, such systems will react intelligently under various situations and achieve superior performance and agility. Compared with the traditional approaches, this FBF concept has the following advantages: (1) structural complexity reduction by integrated structures with self-sensing ability, (2) structural health on-line monitoring through embedded multi-functional materials, (3) autonomous flight control and decision-making.

After realizing sensing ability through multi-functional structures development, the next step is to equip the smart wing with thinking and judging capability, i.e., the structure is expected to be aware of its surroundings and identify its current flight state. There have been studies devoted to addressing the related identification problem based on either strain or vibration signals obtained from experiments. Huang et al. studied active flutter control and closed-loop flutter identification, and a fast-recursive subspace method was applied to a high-dimensional aero-servo-elastic system. The wind tunnel test showed that the natural frequency and modal damping ratios of the flutter modes can be precisely tracked [13]. Pang and Cesnik employed non-linear least squares fitting and Kalman filtering to obtain wing shape information and rigid body attitude. Results revealed that the Kalman filter has good performance in the presence of sensor noise [14]. For elastic deformation, Sodja et al. conducted a dynamic aeroelastic wind tunnel experiment under harmonic pitching excitations; experimental data including the bending and torsion deformation were consistent with the elastic analysis model developed by the Delft University of Technology [15]. For more general flight states, Kopsaftopoulos and Chang established a stochastic global identification method using PZT signals from both the time and frequency domains based on the developed Vector-dependent Functionally Pooled (VFP) model [2,16,17]. A large range of airspeeds and angles of attack were considered in the VFP-based identification framework, and the structural dynamics of the composite wing could be captured and predicted.
Overall, the above data processing approaches mainly belong to state-space methods and improved time series analysis. Building on the previous studies, yet from another perspective, if we can extract distinguishing features from the continuous coupled structural-aerodynamic behavior, it is possible to identify the flight state directly using a limited set of features instead of a detailed characterization of the structural responses. Machine learning techniques can be employed to establish the mapping relationship from the feature space to the practical state space.
Facing the series of signals generated by the embedded sensor network, one of the main challenges is what kind of features should be extracted and whether these features are useful for classification. A set of features chosen without careful selection and evaluation may lead to poor results no matter how powerful the applied machine learning models are. Feature engineering is the research field that covers both feature extraction and feature selection. For a period of time series signals with noise, various statistical features can be calculated, such as the mean value, standard deviation, peak value and kurtosis, from both the time domain and the frequency domain [18]; a feature pool is then created with a number of features that depends on the characteristics of the signals [19-21]. Extracting more features is encouraged to avoid missing important candidates with superior classification performance. The next step is feature selection, in which a limited subset is obtained by eliminating less effective features. This reduces the model dimension and computational time [22]. Generally, feature selection methods can be divided into three categories: filter, wrapper and embedded. Filter methods rank the variables completely separately from the model used for classification; the assignment of feature importance is based on information generated by statistical algorithms. Filter methods are computationally simple and fast, requiring no interaction with the classifier and ignoring feature dependencies [23]. Embedded solutions select salient features as part of the learning process of the model, which can be a linear regression, support vector machine, decision tree, random forest, etc. These methods integrate the subset selection into the model construction but are difficult to adjust for the optimal search [24]. The third category is the wrapper, in which features are selected based on the performance of a given model by searching the space of possible subsets and assessing the performance of the model on each subset; the models can be various learning machines [25]. Although wrapper methods often achieve sound classification performance by considering feature dependencies, the frequent interactions between the feature subset search and the classifier cause high computational costs [26].
We have demonstrated the effectiveness of establishing the mapping relationship from the feature space to the flight state space through neural network modelling [27]. This paper significantly improves on the previous work by creating a much larger feature pool and by considering the collinearity among the various features. To sum up, the objective of this paper is the introduction and evaluation of a novel feature selection method for the accurate flight state identification of a self-sensing wing structure, based on experimental vibration data recorded by piezoelectric sensors under multiple flight states. The developed method belongs to the filter family and is capable of obtaining a group of the most important features for classification with low mutual dependency. The framework of the data acquisition, methodology development, evaluation and application is shown in Figure 2.
The rest of the paper is organized as follows: Section 2 presents the problem statement. Section 3 focuses on the feature extraction and feature selection in which the novel filter algorithm is introduced. Two case studies, covering general flight state identification and stall detection and alerting, are conducted in Section 4, followed by their results and discussions in Section 5. Concluding remarks are made in the last section.
[Figure 2. Framework of the study: data acquisition (self-sensing wing, PZT signal preparation), methodology development (feature extraction, feature selection), evaluation, and application to stall alerting.]
Problem Statement
The problem statement of this work is as follows: based on signals collected from the PZT sensors embedded in the self-sensing wing through a series of experiments under varying flight states, develop a feature selection method that is capable of obtaining a limited number of useful features for flight state identification with high accuracy and low model complexity. Specifically, the coupled aerodynamic-mechanical responses represent different flight states, with each state characterized by a specific angle of attack (AoA) and airspeed and kept constant during the data collection. The first problem is whether a few salient features can be extracted from a period of vibrational time series (e.g., thousands of data points) as a representation of the corresponding flight state. In this way, we can skip the investigation into the detailed aeroelastic behavior and use the limited features to identify the specific flight state directly instead of using the entire lengthy signal. This would significantly reduce the complexity of the flight state characterization. The second problem is how to guarantee the effectiveness of the selected features. If the selected strong features are highly correlated with each other, they will exhibit similar identification ability, which is still far from the optimal subset.
The above two problems constitute the motivation of this study and are addressed as follows: firstly, a large number of features is extracted to cover a wide range of descriptions of the flight state. Then, a modified distance evaluation algorithm is conducted to obtain a subset of individually powerful features, followed by a variance inflation factor algorithm to reduce high dependency among the features in the subset. Machine learning models are employed to evaluate the above method for multiple flight states identification as well as for a specific case of stall detection and alerting.
The main novel aspects of this study include: (1) A large feature pool is created covering up to 47 different features from the time, frequency and information domains. (2) A novel filter feature selection method is developed by combining a modified distance evaluation algorithm and a variance inflation factor.
Methodology Development
In this section, a novel filter feature selection method is proposed via the combination of a modified distance evaluation algorithm and a variance inflation factor. In order to obtain sufficient feature candidates, a large feature pool is first created by extracting features covering a wide range. The output of this method is a feature subset consisting of the most salient features with low correlation, which is able to represent a lengthy time-series signal of the wing structural response under a certain flight state.
Feature Extraction
Feature extraction relies heavily on expert knowledge; it is advisable to extract as many different kinds of features as possible to avoid missing useful ones. In this study, we create a large feature pool from three main sources, namely the time, frequency and information domains.
In the time domain, 25 statistical features are calculated, including 12 commonly used features such as the mean, standard deviation, variance, peak and mean absolute deviation, and 13 un-dimensional features such as the crest factor, shape factor and a series of normalized central moments. The expressions of all time domain features are listed in Table 1. In terms of their physical insights, t1-t12 may reflect the vibration amplitude and energy, while t13-t25 may represent the distribution of the signal in the time domain.
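As a concrete illustration, the following minimal Python sketch computes a few representative features of the kind listed in Table 1; the function and variable names are ours, and the random segment only stands in for a real PZT record.

```python
import numpy as np

def time_domain_features(x):
    """Compute a few representative time-domain features of a 1-D segment."""
    feats = {}
    feats["mean"] = np.mean(x)                          # amplitude-type features
    feats["std"] = np.std(x)
    feats["variance"] = np.var(x)
    feats["peak"] = np.max(np.abs(x))
    feats["mean_abs_dev"] = np.mean(np.abs(x - np.mean(x)))
    rms = np.sqrt(np.mean(x ** 2))
    feats["rms"] = rms
    # un-dimensional features reflect the shape of the signal distribution
    feats["crest_factor"] = feats["peak"] / rms         # peak over RMS
    feats["shape_factor"] = rms / np.mean(np.abs(x))    # RMS over mean |x|
    # normalized central moments, e.g., skewness (3rd) and kurtosis (4th)
    z = (x - np.mean(x)) / np.std(x)
    feats["skewness"] = np.mean(z ** 3)
    feats["kurtosis"] = np.mean(z ** 4)
    return feats

# Example: one 1000-point segment sampled at 1000 Hz (placeholder data)
segment = np.random.randn(1000)
print(time_domain_features(segment))
```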
Previous studies employed the Fast Fourier Transform (FFT) to convert the time series into a frequency spectrum [19,20]. However, the signal instances from the wind tunnel experiments are samples of a stochastic process with considerable noise. Welch's method improves on the FFT by splitting the signal into shorter segments and averaging, so that the peaks are smoothed for noise reduction [28]. Herein, a Hamming data window with 90% overlap is used for the Welch-based spectral estimation. A series of power spectrum values y(k) without log transformation is then used for the frequency domain feature extraction. Thirteen statistical features, such as the mean spectrum, spectrum centre and root mean square spectrum, are calculated; their mathematical expressions are shown in Table 2. f1 may indicate the vibration energy in the frequency domain; f2-f4, f6 and f10-f13 may describe the convergence of the spectrum power; f5 and f7-f9 may show the position change of the main frequency.
[Table 1. Features in the time domain: dimensional parameters t1-t12 and un-dimensional parameters t13-t25.]
[Table 2. Features in the frequency domain: parameters f1-f13.] Note: y(k) is the spectrum for k = 1, 2, ..., K, where K is the number of spectrum components and fr_k is the frequency value of the kth spectrum line.
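A sketch of the Welch-based estimation and a few Table 2-style spectral features is given below, using scipy; the segment length nperseg is an illustrative assumption, since the exact window length is not given here.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                       # sampling frequency of the PZT signals (Hz)
x = np.random.randn(1000)         # one signal segment (placeholder data)

# Hamming window with 90% overlap; nperseg is an assumed, illustrative value
nperseg = 256
freqs, psd = welch(x, fs=fs, window="hamming",
                   nperseg=nperseg, noverlap=int(0.9 * nperseg))

# A few frequency-domain features in the spirit of Table 2
mean_spectrum = np.mean(psd)                        # vibration energy (f1-like)
spectrum_centre = np.sum(freqs * psd) / np.sum(psd) # position of the main frequency
rms_spectrum = np.sqrt(np.mean(psd ** 2))           # RMS of the spectrum
print(mean_spectrum, spectrum_centre, rms_spectrum)
```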
In electroencephalograph (EEG) analysis for the diagnosis of neural diseases and in vibration analysis for mechanical defects, fractal dimensions from computational geometry and entropies from information theory have demonstrated effectiveness in early disease/fault diagnosis [29,30]. Inspired by that, a group of complex features is employed here, namely: Multi-Scale Entropy, Partial Mean of Multi-Scale Entropy, Petrosian Fractal Dimension, Higuchi Fractal Dimension, Fisher Information, Approximate Entropy, and Hurst Exponent.
Multi-Scale Entropy (MSE) introduces a scale factor into the sample entropy to measure the complexity of a signal at different scales [31]. It is calculated as

MSE(τ) = SampEn(m, r, y(τ)),

where y(τ) is the coarse-grained series obtained by averaging the original signal over non-overlapping windows of length τ, τ is the scale factor, m is the embedding dimension and r is the threshold. Here m = 2, r = 0.2 × the standard deviation, and τ = 1, 2, ..., 12. The first three MSE values are selected due to their relatively high distinction among the different classes. Also, an integrated non-linear index called the Partial Mean of Multi-Scale Entropy (PMMSE) is used to simultaneously reflect the mean value and variation trend of the MSE [32]; it is expressed as the partial mean of the vector [MSE(1), MSE(2), ..., MSE(12)].
The fractal dimension characterizes the space-filling capacity of a pattern that changes with the scale at which it is measured [33]. Herein, two approaches are used: the Petrosian Fractal Dimension (PFD) and the Higuchi Fractal Dimension (HFD). PFD is calculated as

PFD = log10(N) / ( log10(N) + log10( N / (N + 0.4 Nδ) ) ),

where N is the length of the signal and Nδ is the number of sign changes in the signal derivative [30].
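A direct sketch of this computation follows; the sign-change count implements the definition of Nδ above.

```python
import numpy as np

def petrosian_fd(x):
    """Petrosian fractal dimension of a 1-D signal, following the formula above."""
    n = len(x)
    diff = np.diff(x)
    # number of sign changes in the signal derivative
    n_delta = np.sum(diff[1:] * diff[:-1] < 0)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))

print(petrosian_fd(np.random.randn(1000)))
```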
In terms of the HFD, firstly k new series are constructed from the original signal [x1, x2, ..., xN] as x(m, k) = [xm, xm+k, xm+2k, ..., xm+⌊(N−m)/k⌋k] for m = 1, 2, ..., k. Secondly, the length L(m, k) of each new series is calculated as

L(m, k) = (1/k) · ( Σi |xm+ik − xm+(i−1)k| ) · (N − 1) / ( ⌊(N − m)/k⌋ · k ), for i = 1, 2, ..., ⌊(N − m)/k⌋,

and the average length L(k) = ( Σi=1..k L(i, k) ) / k. After kmax repetitions, a least-squares method is used to obtain the best slope that fits the curve of ln(L(k)) versus ln(1/k), which is defined as the Higuchi Fractal Dimension. For details, please refer to [34].
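The procedure can be sketched as follows; k_max is an illustrative choice.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension of a 1-D signal x, following the procedure above."""
    n = len(x)
    lk, ln_inv_k = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(1, k + 1):               # k sub-series, offsets m = 1..k
            idx = np.arange(m - 1, n, k)        # x_m, x_{m+k}, x_{m+2k}, ...
            num = np.sum(np.abs(np.diff(x[idx])))
            norm = (n - 1) / ((len(idx) - 1) * k)  # normalization factor
            lengths.append(num * norm / k)
        lk.append(np.log(np.mean(lengths)))
        ln_inv_k.append(np.log(1.0 / k))
    # slope of ln(L(k)) versus ln(1/k) via least squares
    slope, _ = np.polyfit(ln_inv_k, lk, 1)
    return slope

print(higuchi_fd(np.random.randn(1000)))
```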
Fisher Information (FI) measures the expected value of the observed information [35]. Its mathematical expression using the normalized singular spectrum is

FI = Σi=1..M−1 (σi+1 − σi)² / σi,

where σi is the singular value normalized through σi = σi / Σj=1..M σj, and M is the number of singular values. Approximate Entropy (ApEn) quantifies the amount of regularity and the unpredictability of fluctuations in a signal [36]. It is computed by embedding the signal in dimensions m and m + 1, counting for each embedded vector the fraction of vectors that remain within the tolerance r, and averaging the logarithms of these fractions to obtain the statistics Φm(r) and Φm+1(r); ApEn is then calculated as

ApEn(m, r) = Φm(r) − Φm+1(r).

The Hurst Exponent (HST) measures the long-term memory of a signal. It is used to quantify the relative tendency of the signal either to regress to the mean or to cluster in a direction [37]. For a time series of length N, the rescaled range R(n)/S(n) is computed over sub-series of length n, where R(n) is the range of the cumulative deviations from the mean and S(n) is the corresponding standard deviation. The slope of ln(R(n)/S(n)) versus ln(n) for n ∈ [2, 3, ..., N] is defined as the Hurst Exponent.
In summary, the abbreviations of the complex features extracted from the information domain are listed in Table 3.
Feature Selection
Feature extraction guarantees a wide coverage of descriptions of the object from various aspects, while feature selection ensures that a set of the most salient descriptions is utilized. For large-scale models, feature selection is of the utmost importance for reducing computation and improving efficiency.
The distance evaluation technique ranks the feature importance independently of the model used for classification, and thus belongs to the filter category mentioned in the Introduction. Salient features result in minimum inner-class distances within the same class while having maximum margins between different classes. The technique has been widely used in the fault diagnosis of rotating machinery [20,21,38]. Suppose a feature set has K conditions, {q(i, k, j), i = 1, 2, ..., Ik; k = 1, 2, ..., K; j = 1, 2, ..., J}, where q(i, k, j) is the jth eigenvalue of the ith sample under the kth condition, Ik is the sample number of the kth condition, and J is the feature number of each sample. In total, Ik × K × J features are contained in the feature set. Herein, a modified distance evaluation algorithm is presented as follows:

(1) Calculate the average distance of the samples of the same condition,

d(k, j) = ( Σl,i=1..Ik, l≠i |q(i, k, j) − q(l, k, j)| ) / ( Ik × (Ik − 1) ),

then obtain the average inner-class distance over the K conditions, dw(j) = ( Σk=1..K d(k, j) ) / K.

(2) Calculate the average eigenvalue of all samples under the same condition, u(k, j) = ( Σi=1..Ik q(i, k, j) ) / Ik, then obtain the average distance between the condition samples,

db(j) = ( Σe,k=1..K, e≠k |u(e, j) − u(k, j)| ) / ( K × (K − 1) ).

(3) Define the variance factors of the inner-class and between-class distances, vw(j) = max_k d(k, j) / min_k d(k, j) and vb(j) = max_{e,k} |u(e, j) − u(k, j)| / min_{e,k} |u(e, j) − u(k, j)|.

(4) Calculate the compensation factor as

λ(j) = 1 / ( vw(j)/max_j vw(j) + vb(j)/max_j vb(j) ),

then compute α(j) = λ(j) × db(j)/dw(j) and normalize it, obtaining the feature importance criterion

ᾱ(j) = α(j) / max_j α(j). (15)

A higher ᾱ(j) indicates that the corresponding feature j has greater importance. Features can be ranked in terms of the ᾱ(j) values in Equation (15) in descending order. This algorithm is referred to as the Modified Distance Evaluation algorithm (MDE). Although the top ranked features have superior discriminative capability, they may suffer from high multi-collinearity, which refers to the non-independence among features [39]. Herein, the variance inflation factor (VIF) is used to avoid high collinearity. Assuming a training sample set X with J features X1, X2, ..., XJ and class Y, the VIF of feature j is calculated as

VIF(j) = 1 / (1 − R²(j)),

where R²(j) is the R-squared value of the regression equation Xj = β0 + βX', in which X' contains all features except Xj. An improved algorithm combining MDE and VIF is presented in Algorithm 1 and abbreviated as MDV (Modified Distance evaluation and Variance inflation factor).

Algorithm 1. MDV feature selection.
(1) Set the selected feature subset Fsub = ∅, j = 1;
(2) Rank the J features in terms of the ᾱ(j) defined in Equation (15) in descending order. Let Fr represent the index list of the ranked features. Add the first feature in Fr to Fsub, j = j + 1;
(3) while j < J:
      calculate the VIF of the jth feature in Fr with respect to the features in Fsub;
      if VIF < 10: add the jth feature in Fr to Fsub;
      j = j + 1;
    end

The MDV algorithm describes the feature-subset selection for multi-class classification based on the filter method, combining the MDE and the VIF. The threshold of 10 in MDV is an empirical value; a larger threshold would admit selected features in Fr with a higher correlation to the existing features in Fsub [23].
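The following minimal Python sketch illustrates the MDV selection under two simplifications that are ours, not the paper's: the MDE score uses the plain ratio db/dw without the compensation factor, and the VIF is computed by an ordinary least-squares regression.

```python
import numpy as np

def mde_scores(X, y):
    """Simplified distance evaluation score per feature: db/dw ratio
    (the compensation factor of the full MDE is omitted for brevity)."""
    classes = np.unique(y)
    K = len(classes)
    inner, means = [], []
    for c in classes:
        Xc = X[y == c]
        n = len(Xc)
        diff = np.abs(Xc[:, None, :] - Xc[None, :, :])   # pairwise |q_i - q_l|
        inner.append(diff.sum(axis=(0, 1)) / (n * (n - 1)))
        means.append(Xc.mean(axis=0))
    d_w = np.mean(inner, axis=0)                          # average inner-class distance
    means = np.asarray(means)
    md = np.abs(means[:, None, :] - means[None, :, :])
    d_b = md.sum(axis=(0, 1)) / (K * (K - 1))             # average between-class distance
    return d_b / d_w

def vif(feature, selected):
    """Variance inflation factor of `feature` regressed on the `selected` columns."""
    A = np.column_stack([selected, np.ones(len(feature))])
    coef, *_ = np.linalg.lstsq(A, feature, rcond=None)
    resid = feature - A @ coef
    r2 = 1.0 - resid.var() / feature.var()
    return 1.0 / max(1.0 - r2, 1e-12)

def mdv_select(X, y, vif_threshold=10.0):
    """Greedy MDV: take features in MDE order, keep those with VIF < threshold."""
    order = np.argsort(mde_scores(X, y))[::-1]            # descending importance
    subset = [order[0]]
    for j in order[1:]:
        if vif(X[:, j], X[:, subset]) < vif_threshold:
            subset.append(j)
    return subset

# Example with random placeholder data: 960 samples x 47 features, 16 classes
X = np.random.randn(960, 47)
y = np.repeat(np.arange(16), 60)
print(mdv_select(X, y))
```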
Data Preparation
A series of wind tunnel experiments on the self-sensing composite wing was conducted under various angles of attack (AoAs) and freestream velocities at Stanford University. An open-loop wind tunnel with a square test section of 0.76 m by 0.76 m was used, and a base was designed to support the composite wing while allowing adjustments of the angle of attack (AoA). The composite wing dimensions are outlined in Table 4. To match the size of the wind tunnel test section, an additional 0.1 m extension of the wing span was attached to the wing fixture. The AoAs range from 0 degrees up to 18 degrees with an incremental step of 1 degree. At each degree, data were collected for all velocities ranging from 9 to 22 m/s (incremental step of 1 m/s). For experimental details, please refer to [2].
PZT signals reflect the coupled airflow-structural dynamics through the wing structural vibration, and each time series contains coupled behavior with repeated patterns of a certain flight state. This study focuses on the use of the PZT sensor signals for flight state identification. In each experiment, the structural vibration responses (60,000 data points) were recorded from the PZT located near the wing root at a 1000 Hz sampling frequency. For each flight state, the data are prepared in two steps: (1) the entire signal of 60,000 data points is divided into 60 segments (1000 data points each) to ensure enough samples for training while each segment retains sufficient data points for feature extraction; (2) first-order differencing and zero-mean normalization are applied to each sample sequence in order to eliminate the influence of zero drift. To evaluate the effectiveness of the proposed method and apply it to dangerous state pre-warning, two sets of data are collected: one for general flight state identification and one for stall detection and alerting.
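The two preparation steps can be sketched as follows; the random array merely stands in for one 60,000-point PZT record.

```python
import numpy as np

signal = np.random.randn(60000)          # placeholder for one experiment record

segments = signal.reshape(60, 1000)      # step 1: 60 segments of 1000 points
prepared = []
for seg in segments:
    d = np.diff(seg)                     # step 2a: first-order difference
    prepared.append(d - d.mean())        # step 2b: remove the mean (zero drift)
prepared = np.array(prepared)            # shape (60, 999)
```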
General Flight State Identification
The first data set includes PZT signals with a coarse resolution covering the range of 16 flight states corresponding to combinations of four AoAs (1, 5, 9, 13 degrees) and four airspeeds (10, 13, 16, 19 m/s). Four signal segments are shown in Figure 3 under a series of AoAs and a fixed airspeed of 10 m/s as an example.
It is noticed that the flight state with an AoA of 13 degrees and a velocity of 10 m/s can be obviously identified, since the amplitude of the voltage distinguishes it from the other signals (this flight state is close to the stall condition, which will be discussed later). The second largest amplitude comes with 9 degrees, which can be separated to a certain extent but already overlaps with the other two. In this study, the identification of the different flight states relies on the features selected by the method developed in Section 3. To compare the feature selection effectiveness, four other feature selection methods are employed: Univariate Feature Selection based on mutual information (UFS_m), Support Vector Machine with L1 regularization (SVM_L1), Gradient Boosted Decision Tree (GBDT) and Stability selection (STAB). These methods cover the three main feature selection categories. A brief introduction is presented as follows: (1) UFS_m is a commonly used filter method. It performs a test on each feature by evaluating the relationship between the feature and the response variable based on the mutual information [40], which is defined as

I(X; Y) = Σx Σy p(x, y) log( p(x, y) / (p(x) p(y)) ).

It measures the mutual dependence between the variables X and Y. Features with low rankings are removed. (2) SVM_L1 is one of the embedded methods, which selects salient features as part of the learning system [18]. The Support Vector Machine (SVM) is a popular machine learning method based on the structural risk minimization principle. It constructs a hyperplane that has the largest distance to the nearest training data points, the so-called support vectors. An appropriate separation can reduce the generalization error of the classifier [41]. L1 is a regularization term added to the loss function as |W|, where W stands for the parameter matrix of the learning model [42]. This penalty term makes the model sparse with fewer useful input dimensions. (3) GBDT is a tree-based model belonging to the embedded category. It combines weak decision trees in an iterative manner based on gradient descent through additive training. Trees are added at each iteration with modified parameters learned in the direction of residual loss reduction [43]. (4) Stability selection is a kind of wrapper method, in which features are selected based on models established using different subsets; the model can be of various types and structures, such as logistic regression, SVM, etc. By calculating the frequency with which a feature ends up being selected as important across the tested feature subsets, powerful features are expected to have high scores close to 100%, weaker features will have lower scores, and the least useful ones will be close to zero [44]. Herein, a randomized logistic regression is used as the selection model.
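For reference, three of these four baselines have direct counterparts in scikit-learn; the sketch below is illustrative (the randomized-logistic-regression stability selection is omitted, as it is not part of current scikit-learn releases), and the hyperparameter values are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, SelectFromModel
from sklearn.svm import LinearSVC
from sklearn.ensemble import GradientBoostingClassifier

X = np.random.randn(960, 47)             # placeholder feature matrix
y = np.repeat(np.arange(16), 60)         # 16 flight state labels

# (1) UFS_m: rank features by mutual information with the class label
mi = mutual_info_classif(X, y)
ufs_ranking = np.argsort(mi)[::-1]

# (2) SVM_L1: L1-penalized linear SVM drives weak coefficients to zero
svm = LinearSVC(C=0.1, penalty="l1", dual=False, max_iter=5000).fit(X, y)
svm_mask = SelectFromModel(svm, prefit=True).get_support()

# (3) GBDT: tree-based impurity importances from gradient boosting
gbdt = GradientBoostingClassifier(n_estimators=50).fit(X, y)
gbdt_ranking = np.argsort(gbdt.feature_importances_)[::-1]
```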
Application to Stall Detection and Identification
The second data set covers a higher resolution of flight states (AoAs: 11, 12, 13 degrees; airspeeds: 10, 13, 16, 19 m/s) for critical state alerting. In aerodynamics, the stall phenomenon is a dangerous condition wherein a sudden reduction of the lift coefficient occurs as the angle of attack increases beyond a critical point. According to a previous analysis [2], the signal energy can be used as an indicator of the lift loss of the self-sensing wing. From the wind tunnel experiments, the mean values of the signal energy for a series of AoAs (from 0 to 17 degrees) under four airspeeds (10, 13, 16, 19 m/s) are obtained and shown in Figure 4.
The signal energy variation with respect to the angle of attack is similar under the four different airspeeds. It is noticed that for the relatively low velocities (10 m/s, 13 m/s and 16 m/s) the significant increase occurs approximately after 14 degrees, while for the relatively high speed (19 m/s) the stall happens much earlier, at 13 degrees. It should be noted that data recording was stopped after 13 degrees at the high speed of 19 m/s, which is reflected in the red line with zero energy starting from 14 degrees. Therefore, we define the orange shaded area starting from 13 degrees as the stall region, which should be avoided. Moreover, it is observed that at 12 degrees the signal energy for some flight states shows a certain increase compared with the remaining small angles. This degree is defined as the alert region, the transition between the safe region (marked in light green) and the critical stall region. When the self-sensing wing enters this region, warnings should be provided to the flight control for angle reduction.
General Flight State Identification
The first data set, with a relatively low resolution of 16 flight states, is used to evaluate the performance of six feature selection methods: Univariate Feature Selection based on mutual information (UFS_m), Support Vector Machine with L1 regularization (SVM_L1), Gradient Boosted Decision Tree (GBDT), Stability selection (STAB), Modified Distance Evaluation (MDE), and our proposed filter method, Modified Distance Evaluation with Variance Inflation Factor (MDV). Feature rankings are obtained and the top 10 features for the different methods are listed in Table 5; their detailed expressions are listed in Appendix A.

Table 5. Top 10 ranked features of the six feature selection methods.
Rank  UFS_m  SVM_L1  GBDT  STAB  MDE  MDV
1     –      F41     F47   F47   F35  F35
2     F34    F43     F40   F12   F26  F30
3     F6     F39     F46   F21   F2   F5
4     F2     F25     F14   F20   F6   F28
5     F5     F46     F39   F19   F31  F42
6     F4     F19     F44   F18   F30  F45
7     F40    F33     F41   F17   F12  F41
8     F23    F13     F1    F16   F8   F46
9     F42    F44     F21   F15   F36  F14
10    F17    F10     F45   F14   F10  F23

It is observed from the table that the ranking results vary with the different methods. An intuitive evaluation is to simply visualize the feature distributions under the various flight states. For example, four features are plotted in Figure 5: F1 (mean value in the time domain), F29 (spectrum kurtosis in the frequency domain), F35 (spectrum power convergence in the frequency domain), and F47 (Hurst Exponent in the information domain). The x axis denotes the 16 flight states while the y axis is the feature value before normalization. The shaded area along each vertical line segment represents the feature distribution in a single flight state, and each subplot of Figure 5 describes the distribution of one feature over the 16 flight states. As mentioned in Section 3, F1 (the mean value) has no effect in classification; correspondingly, F1 has the highest overlap among flight states. Similarly, F47 has large overlaps, which indicates poor classification capability. Theoretically, the rankings of F1 and F47 should be low, but they are ranked high by GBDT and STAB. In comparison, F30 and F35 show smaller overlaps and thus better classification performance. This may provide some physical insight into the effectiveness of the different feature selection methods. The last column (MDV) of Table 5 is an improvement of MDE aimed at preventing high collinearity. To examine the effects of the proposed algorithm, a correlation analysis is conducted for MDV and MDE, as shown in Figure 6.
It is obvious that the top 10 features selected by MDE are highly correlated with each other. In comparison, the overall collinearity of the features in MDV is much lower except for the small region of the top three.
To visualize the feature selection performance of MDV, t-Distributed Stochastic Neighbor Embedding (t-SNE) is employed, a relatively new dimension reduction method particularly suitable for non-linear and high-dimensional datasets. It is a manifold learning technique that maps the data to probability distributions through affine transformation. For the detailed algorithm, please refer to [45]. The 3D visualization by t-SNE is shown in Figure 7. The left figure is the visualization using the entire feature pool, while the right figure uses only the top six features selected by MDV. Five machine learning models are then employed to evaluate the identification performance: Logistic Regression (LR), Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF) and Neural Network (NN). Cross-validation is used in each model, and the average accuracy of five tests is computed to reduce the imbalance influence between the training and testing samples. It should be noted that, since the objective of this case study is to compare the effects of the different feature selection methods rather than to obtain the optimized parameter setting of each machine learning model for the highest accuracy level, the default parameter settings of the Python scikit-learn package are used for LR, SVM, NB and RF and remain the same for all feature selection methods, while for the NN the parameter setting is as follows: {hidden layer size = 20, solver = 'lbfgs', activation function = 'relu', learning rate = 0.001, maximum iteration = 100}. The identification results are shown in Figure 8. It can be observed that our proposed method MDV achieves the highest identification accuracy with all five machine learning models and, in particular, there is a significant improvement for Logistic Regression. This demonstrates the superior effectiveness of MDV. The comparison between MDV and MDE shows that a group of individually powerful features with low collinearity can lead to better results.
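This evaluation loop can be sketched with scikit-learn as follows, using the NN settings quoted above (the learning rate maps to learning_rate_init, which the lbfgs solver ignores); X_sub stands for the feature matrix restricted to the features chosen by one selector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X_sub = np.random.randn(960, 6)          # placeholder: top features of one selector
y = np.repeat(np.arange(16), 60)

models = {
    "LR": LogisticRegression(),
    "SVM": SVC(),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(),
    "NN": MLPClassifier(hidden_layer_sizes=(20,), solver="lbfgs",
                        activation="relu", learning_rate_init=0.001,
                        max_iter=100),
}
for name, model in models.items():
    scores = cross_val_score(model, X_sub, y, cv=5)   # 5-fold cross-validation
    print(name, scores.mean())
```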
Stall Detection and Alerting
So far, the developed MDV algorithm has achieved the best performance in feature selection, and the final flight state identification accuracy is up to 100%. Herein, the second dataset with the higher resolution is used for the application of stall detection and alerting. Similarly, all 47 features discussed in Section 3 are extracted and the six most salient features are selected by MDV as model inputs. A neural network is employed with the same parameter settings as in the first case. The data are split into 80% of the samples for training and 20% for testing.
The classification report is shown in Table 6, including three criteria: Precision, Recall and F1-score. Precision is the ratio of correctly predicted positive observations to the total predicted positive observations, while Recall is the ratio of correctly predicted positive observations to all observations in the actual class. The F1-score is the weighted average of Precision and Recall: F1-score = 2 × (Recall × Precision)/(Recall + Precision) [46]. Safe, Alert and Stall regions are divided with their corresponding flight states. The overall identification accuracy is 98%. To facilitate detailed analysis, a normalized confusion matrix is presented in Figure 9. Each row of the matrix represents the test samples of a true class label, while each column indicates the samples of a predicted class label [47]. As can be observed from Table 6, for the stall states (ID: 9, 10, 11, 12) the Recall values all equal 100%, meaning that all the critical states can be successfully identified and there is no safety risk.
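These criteria and the row-normalized confusion matrix can be reproduced with scikit-learn; a minimal sketch, assuming y_true and y_pred hold the true and predicted state IDs.

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_true = np.random.randint(1, 13, 240)   # placeholder true labels (states 1-12)
y_pred = y_true.copy()                   # placeholder predictions

print(classification_report(y_true, y_pred))      # precision, recall, F1 per state
cm = confusion_matrix(y_true, y_pred)
cm_norm = cm / cm.sum(axis=1, keepdims=True)      # row-normalized, as in Figure 9
print(np.round(cm_norm, 2))
```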
In terms of the alert states (ID: 5, 6, 7, 8), the Recall value of State 6 is 0.92, which means that 92% of the samples of State 6 are correctly predicted. By examining the 6th row of the confusion matrix, the remaining 8% of the samples are misclassified as State 1, which is in the safe region. This situation may lead to dangerous results, since the wing is already in an alert state yet there is no warning. From another perspective, the Precision value of State 7 is 0.92, which indicates that, among all samples predicted as State 7, 8% actually belong to State 4, as shown in the 7th column of the confusion matrix. This value can be interpreted as the false-alarm ratio: the wing is flying in the safe region yet receives a false alert.
For the safe states (ID: 1, 2, 3, 4), the misclassified samples are from State 3 and State 4: 8% of the samples of State 3 are predicted as State 2, while 8% of the samples of State 4 are identified as State 7, which is the false alarm. Further, we select different numbers of features from the modified distance evaluation (MDE) method and use the same neural network structure for training and testing. The comparison of the overall identification accuracy between MDV and the various MDE subsets is shown in Figure 10. The x axis denotes the number of top ranked features selected.
It can be seen that, with the same number of inputs as MDV, the features selected by MDE lead to a poor result of 0.33. The identification accuracy does not reach the same level as MDV until the number of top ranked features selected from MDE increases to 20. This shows that our proposed method MDV is able to address the collinearity problem and uses fewer features to achieve superior performance with a considerable reduction in model complexity.
Conclusions
This paper focuses on feature engineering for structural vibration signals obtained from a self-sensing composite wing through wind tunnel experiments. In addition to the common statistical features from the time domain and frequency domain, complex features from the information domain, inspired by electroencephalograph analysis and mechanical fault diagnosis, are also extracted, some of which exhibit good classification ability. A novel filter feature selection method (MDV) is proposed by combining the modified distance evaluation (MDE) algorithm and the variance inflation factor (VIF). MDE is able to select individually powerful features but cannot address high collinearity; the VIF is then applied to each top ranked feature to remove highly correlated elements. Results from both the general flight state identification and the stall detection and alerting demonstrate that this method can reduce the model complexity with fewer features while maintaining a high identification accuracy. The limited set of important features obtained by MDV can be computed efficiently for flight state identification using light-weight machine learning models. This would save considerable manual effort in feature extraction and feature selection and has the potential to support autonomous control with real-time flight state monitoring. For multi-sensor applications, this method can be applied to each sensor, and ensemble methods can be developed to fuse the multi-source results for a more robust identification.
Evaluation of Underwater Image Enhancement Algorithms under Different Environmental Conditions
Underwater images usually suffer from poor visibility, lack of contrast and colour casting, mainly due to light absorption and scattering. In the literature, there are many algorithms aimed at enhancing the quality of underwater images through different approaches. Our purpose was to identify an algorithm that performs well under different environmental conditions. We selected some algorithms from the state of the art and employed them to enhance a dataset of images produced in various underwater sites, representing different environmental and illumination conditions. These enhanced images were then evaluated through some quantitative metrics. By analysing the results of these metrics, we tried to understand which of the selected algorithms performed better than the others. Another purpose of our research was to establish whether a quantitative metric is enough to judge the behaviour of an underwater image enhancement algorithm. We aim to demonstrate that, even if the metrics can provide an indicative estimation of image quality, they can lead to inconsistent or erroneous evaluations.
Introduction
The degradation of underwater image quality is mainly attributed to light scattering and absorption. The light is attenuated as it propagates through water; the attenuation varies according to the wavelength of light within the water column depth and depends also on the distance of the objects from the point of view. The suspended particles in the water are also responsible for light scattering and absorption. In many cases, an image taken underwater appears hazy, in a similar way as landscape photos are degraded by haze, fog or smoke, which also cause absorption and scattering. Moreover, as the water column increases, the various components of sunlight are differently absorbed by the medium, depending on their wavelengths. This leads to a dominance of blue/green colour in underwater imagery that is known as colour cast. The visibility can be increased and the colour can be recovered by using artificial light sources in an underwater imaging system. However, artificial light does not illuminate the scene uniformly and it can produce bright spots in the images due to the backscattering of light in the water medium.
The work presented in this paper is part of the i-MARECULTURE project [1-3] that aims to develop new tools and technologies for improving public awareness of underwater cultural heritage. In particular, it includes the development of a Virtual Reality environment that faithfully reproduces the appearance of underwater sites, giving the possibility to visualize the archaeological remains as they would appear in air. This goal requires a comparison of the different image enhancement algorithms to figure out which one performs better under different environmental and illumination conditions. We selected five algorithms from the state of the art and used them to enhance a dataset of images produced in various underwater sites at heterogeneous conditions of depth, turbidity and lighting. These enhanced images have been evaluated by means of some quantitative metrics. There are several different metrics known in the scientific literature for evaluating underwater enhancement algorithms, so we have chosen three of them to complete our evaluation.
State of the Art
The problem of underwater image enhancement is closely related to single image dehazing, in which images are degraded by weather conditions such as haze or fog. A variety of approaches have been proposed to solve image dehazing and, in this section, we report their most effective examples. Furthermore, we also report the algorithms that address the problem of non-uniform illumination in the images and others that focus on colour correction.
Single image dehazing methods assume that only the input image is available and rely on image priors to recover a dehazed scene. One of the most cited works on single image dehazing is the dark channel prior (DCP) [4]. It assumes that, within small image patches, there will be at least one pixel with a dark colour channel and uses this minimal value as an estimate of the present haze. This prior achieves very good results in some contexts, except in bright areas of the image where the prior does not hold. In [5] an extension of the DCP to deal with underwater image restoration is presented. Based on the consideration that the red channel is often nearly dark in underwater images, due to the preferential absorption of the different colour wavelengths in the water, this new prior, called the Underwater Dark Channel Prior (UDCP), considers just the green and blue colour channels in order to estimate the transmission. An author mentioned many times in the field is Fattal, R., with his two works [6,7]. In the first work [6], Fattal et al. formulate a refined image formation model that accounts for surface shading in addition to the transmission function. This allows resolving ambiguities in the data by searching for a solution in which the resulting shading and transmission functions are statistically uncorrelated. The second work [7] describes a new method for single-image dehazing that relies on a generic regularity in natural images, where pixels of small image patches typically present a one-dimensional distribution in RGB colour space, known as colour-lines. Starting from this consideration, Fattal et al. derive a local formation model that explains the colour-lines in the context of hazy scenes and use it for recovering the scene transmission based on the offset of the lines from the origin. Another work focused on lines of colour in the hazy image is presented in [8,9]. The authors describe a new prior for single image dehazing that is defined as a Non-Local prior, to underline that the pixels forming the lines of colour are spread across the entire image, thus capturing a global characteristic that is not limited to small image patches.
Some other works focus on the problem of non-uniform illumination that, in the case of underwater imagery, is often produced by the artificial light needed at the deepest points. The work proposed in [10] suggests a method for non-uniform illumination correction for underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameters to map the distribution of the image to a Rayleigh distribution. In [11] a simple gradient domain method is presented that acts as a high pass filter, aimed at correcting the effect of non-uniform illumination while preserving the image details. A simple prior which estimates the depth map of the scene considering the difference in attenuation among the different colour channels is proposed in [12]. The scene radiance is recovered from the hazy image through the estimated depth map by modelling the true scene radiance as a Markov Random Field.
Bianco et al. have presented in [13] the first proposal for colour correction of underwater images using the lαβ colour space. A white balancing is performed by moving the distributions of the chromatic components (α, β) around the white point, and the image contrast is improved through a histogram cut-off and stretching of the luminance (l) component. In [14,15] a method for the unsupervised colour correction of general purpose images is proposed. It employs a computational model that is inspired by some adaptation mechanisms of human vision to realize a local filtering effect, taking into account the colour spatial distribution in the image.
Finally, we report a state of the art method that is effective in image contrast enhancement, since underwater images often lack contrast. This is the Contrast Limited Adaptive Histogram Equalization (CLAHE) proposed in [16] and summarized in [17], which was originally developed for medical imaging and has proven to be successful in enhancing low-contrast images.
Selected Algorithms
In order to perform our evaluation, we selected five algorithms that perform well and employ different approaches to the underwater image enhancement problem, such as image dehazing, non-uniform illumination correction and colour correction. The decision to select these algorithms among all the others is based on a preliminary brief evaluation of their enhancement performance. Furthermore, we selected these algorithms also because we could find for each of them a trustworthy implementation provided by the authors of the papers or by a reliable author. Indeed, we needed such implementations to develop the software tool we employed to speed up the benchmark, which will also be useful for further image processing and evaluation. The source codes of all the selected algorithms have been adapted and merged into the tool. We employed the OpenCV [18] library for the tool development in order to exploit its functions for image managing and processing.
Automatic Colour Enhancement (ACE)
The ACE algorithm is a quite complex technique, because its direct computation on an N × N image costs O(N⁴) operations. For this reason, we followed the approach proposed in [19] that describes two fast approximations of ACE: first, an algorithm that uses a polynomial approximation of the slope function to decompose the main computation into convolutions, reducing the cost to O(N² log N); second, an algorithm based on interpolating intensity levels that also reduces the main computation to convolutions. In our tests, ACE was processed using the level interpolation algorithm with 8 levels. Two parameters that can be adjusted to tune the algorithm behaviour are α and the weighting function ω(x, y). The α parameter specifies the strength of the enhancement: the larger this parameter, the stronger the enhancement. In our tests, we used the standard values for these parameters, i.e., α = 5 and ω(x, y) = 1/‖x − y‖. For the implementation, we used the ANSI C source code referred to in [19], which we adapted in our enhancement tool (supplementary materials).
Contrast Limited Adaptive Histogram Equalization (CLAHE)
The CLAHE [16,17] algorithm is an improved version of AHE, or Adaptive Histogram Equalization. Both are aimed at improving the standard histogram equalization. CLAHE was designed to prevent the over-amplification of noise that can be generated by adaptive histogram equalization. CLAHE partitions the image into contextual regions and applies the histogram equalization to each of them. In doing so, it balances the distribution of used grey values, making hidden features of the image more evident. We implemented this algorithm in our enhancement tool employing the CLAHE function provided by the OpenCV library. The input images are converted into the lαβ colour space and then the CLAHE algorithm is applied only to the luminance (l) channel. OpenCV provides two parameters to control the output of this algorithm: the tile size and the contrast limit. The first parameter is the size of each tile into which the original image is partitioned, and the second one is used to limit the contrast enhancement in each tile. If noise is present, it will be amplified as well. So, in noisy images, such as underwater images, it is better to limit the contrast enhancement to a low value, in order to avoid the amplification of noise. In our tests, we set the tile size to 8 × 8 pixels and the contrast limit to 2.
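A minimal sketch of this step using OpenCV's Python bindings follows (the enhancement tool itself is written in C++, but the calls are equivalent); note that OpenCV's CIELAB conversion is used here as a convenient stand-in for the lαβ space, and the file names are illustrative.

```python
import cv2

img = cv2.imread("underwater.jpg")                     # BGR input image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)             # convert to Lab space
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                                  # equalize luminance only

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("underwater_clahe.jpg", enhanced)
```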
Colour Correction Method on lαβ Space (LAB)
This method [13] is based on the assumptions of grey world and uniform illumination of the scene. The idea behind this method is to convert the input image from RGB to lαβ space, correct the colour casts of the image by adjusting the α and β components, increase the contrast by performing a histogram cut-off and stretching, and then convert the image back to the RGB space. The author provided us with a MATLAB implementation of this algorithm but, due to the intermediate colour space transformations needed to convert the input image from RGB to lαβ and due to the lack of optimization of the MATLAB code, this implementation was very time-consuming. Therefore, we ported this code to C++ by employing OpenCV among other libraries. This enabled us to include the algorithm in our enhancement tool and to decrease the computing time by an order of magnitude.
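A rough sketch of the grey-world correction and luminance stretching is given below; again, CIELAB is used as a stand-in for lαβ, and the 1% cut-off is an illustrative assumption.

```python
import cv2
import numpy as np

img = cv2.imread("underwater.jpg")
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
l, a, b = cv2.split(lab)

# grey-world assumption: move chromatic distributions around the white point
a = np.clip(a - (a.mean() - 128.0), 0, 255)   # 128 is neutral in 8-bit Lab
b = np.clip(b - (b.mean() - 128.0), 0, 255)

# contrast: histogram cut-off (1% tails) and stretching on the luminance
lo, hi = np.percentile(l, (1, 99))
l = np.clip((l - lo) * 255.0 / (hi - lo), 0, 255)

out = cv2.merge((l, a, b)).astype(np.uint8)
out = cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
cv2.imwrite("underwater_lab.jpg", out)
```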
Non-Local Image Dehazing (NLD)
The basic assumption of this algorithm is that the colours of a haze-free image can be well approximated by a few hundred distinct colours. These few colours can be grouped into tight colour clusters in RGB space. The pixels that compose a cluster are often located at different positions across the image plane and at different distances from the camera. So, each colour cluster of the clear image becomes a line in the RGB space of the hazy image, which the authors refer to as a haze-line. By means of these haze-lines, the algorithm recovers both the distance map and the dehazed image. The algorithm is linear in the size of the image and the authors have published an official MATLAB implementation [20]. In order to include this algorithm in our enhancement tool, we ported it to C++, employing different libraries: OpenCV, Eigen [21] for the operations on sparse matrices not supported by OpenCV, and FLANN [22] (Fast Library for Approximate Nearest Neighbours) to compute the colour clusters.
Screened Poisson Equation for Image Contrast Enhancement (SP)
The output of the algorithm is an image which is the result of applying the Screened Poisson equation [11] to each colour channel separately, together with a simplest colour balance [23] with a variable percentage of saturation as a parameter (s). The Screened Poisson equation can be solved by using the discrete Fourier transform: once the solution is found in the Fourier domain, applying the discrete inverse Fourier transform yields the result image. The simplest colour balance is applied both before and after solving the Screened Poisson equation. The complexity of this algorithm is O(n log n). The ANSI C source code is provided by the authors in [11] and we adapted it in our enhancement tool. For the Fourier transform, this code relies on the FFTW library [24]. The algorithm output can be controlled with the trade-off parameter α and the level of saturation s of the simplest colour balance. In our evaluation, we used the parameters α = 0.0001 and s = 0.2.
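A compact sketch of the Fourier-domain solution follows, assuming the screened Poisson step reduces to the high-pass transfer function |ω|²/(|ω|² + α) applied per channel (consistent with the high-pass behaviour noted in the State of the Art), with the simplest colour balance approximated by percentile clipping; constants and file names are illustrative.

```python
import numpy as np
import cv2

def simplest_color_balance(ch, s=0.2):
    # clip the s% lowest and highest values, then stretch to [0, 255]
    lo, hi = np.percentile(ch, (s, 100 - s))
    return np.clip((ch - lo) * 255.0 / (hi - lo + 1e-9), 0, 255)

def screened_poisson(ch, alpha=1e-4):
    # transfer function |w|^2 / (|w|^2 + alpha) acts as a high-pass filter
    h, w = ch.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    w2 = (2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    F = np.fft.fft2(ch)
    return np.real(np.fft.ifft2(F * w2 / (w2 + alpha)))

img = cv2.imread("underwater.jpg").astype(np.float64)
out = np.zeros_like(img)
for c in range(3):                       # each colour channel separately
    ch = simplest_color_balance(img[:, :, c])
    ch = screened_poisson(ch)
    out[:, :, c] = simplest_color_balance(ch)
cv2.imwrite("underwater_sp.jpg", out.astype(np.uint8))
```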
Case Studies
We tried to produce a dataset of images that was as heterogeneous as possible, in order to better represent the variability of environmental and illumination conditions that characterizes underwater imagery. Furthermore, we chose images taken with different cameras and at different resolutions, because in real application cases the underwater image enhancement algorithms have to deal with images produced by unspecified sources. In this section, we describe the underwater sites, the dataset of images and the motivations that led us to choose them.
Underwater Sites
Four different sites have been selected, from which the images for the evaluation of the underwater image enhancement algorithms were taken. The selected sites are representative of different environmental and geomorphologic conditions (i.e., water depth, water turbidity, etc.). Two of them are pilot sites of the i-MARECULTURE project: the Underwater Archaeological Park of Baiae and the Mazotos shipwreck. The other two are the Cala Cicala and Cala Minnola shipwrecks.
Underwater Archaeological Park of Baiae
The Underwater Archaeological Park of Baiae is located off the north-western coast of the bay of Puteoli (Naples). This site has been characterized by periodic volcanic and hydrothermal activity and has been subjected to gradual changes in the level of the coast with respect to the sea level. The Park safeguards the archaeological remains of the Roman city, which are submerged at a depth ranging between 1 and 14-15 m below sea level. This underwater site is usually characterized by very poor visibility because of the water turbidity, which in turn is mainly due to the organic particles suspended in the medium. So, the underwater images produced here are strongly affected by the haze effect [25].
Mazotos Shipwreck
The second site is the Mazotos shipwreck, which lies at a depth of 44 m, ca. 14 nautical miles (NM) southwest of Larnaca, Cyprus, off the coast of Mazotos village. The wreck lies on a sandy, almost flat seabed and consists of an oblong concentration of at least 800 amphorae, partly or totally visible before any excavation took place. The investigation of the shipwreck is conducted jointly by the Maritime Research Laboratory (MARE Lab) of the University of Cyprus and the Department of Antiquities, under the direction of Dr Stella Demesticha. Some 3D models of the site have been created using photogrammetric techniques [26]. The visibility at this site is very good, but the red absorption at this depth is nearly total, so the images were taken using artificial light to recover the colour.
Cala Cicala
In 1950, near Cala Cicala, within the Marine Protected Area of Capo Rizzuto (Province of Crotone, Italy), the archaeological remains of a large Roman Empire ship were discovered at a depth of 5 m. The so-called Cala Cicala shipwreck, still set for sailing, carried a load of raw or semi-finished marble products of considerable size. In previous work, the site has been reconstructed with 3D photogrammetry and it can be enjoyed in Virtual Reality [27]. The visibility at this site is good.
Cala Minnola
The underwater archaeological site of Cala Minnola is located on the east coast of the island of Levanzo, in the archipelago of the Aegadian Islands, a few miles from the west coast of Sicily. The site preserves the wreck of a Roman cargo ship at a depth below the sea level ranging from 25 m to 30 m [28]. The Roman ship was carrying hundreds of amphorae which should have been filled with wine. During the sinking, many amphorae were scattered across the seabed. Furthermore, the area is covered by large seagrass beds of Posidonia. At this site, the visibility is good but, due to the water depth, the images taken here suffer from a serious colour cast because of the red channel absorption and, therefore, they appear bluish.
Image Dataset
For each underwater site described in the previous section, we selected three representative images, for a total of twelve images. These images constitute the underwater dataset that we employed to complete our evaluation of image enhancement algorithms.
Each row of Figure 1 represents an underwater site. The properties and modality of acquisition of the images vary depending on the underwater site. In the first row (a-c) we can see the images selected for the Underwater Archaeological Park of Baiae that, due to the low water depth, are naturally illuminated. The first two (a,b) were acquired with a Nikon Coolpix, a non-SLR (Single-Lens Reflex) camera, at a resolution of 1920 × 1080 pixels. The third image (c) was taken with a Nikon D7000 DSLR (Digital Single-Lens Reflex) camera with a 20 mm f/2.8 lens and has the same resolution of 1920 × 1080 pixels. The second row (d-f) shows three images of some semi-finished marble from the Cala Cicala shipwreck. They were acquired with natural illumination using a Sony X1000V, a 4K action camera, with a resolution of 3840 × 2160 pixels. In the third row (g-i) we can see the amphorae of a Roman cargo ship and a panoramic picture, all taken at the underwater site of Cala Minnola. These images were acquired with an iPad Air and have a resolution of 1920 × 1080 pixels. Despite the depth of this underwater site, these pictures were taken without artificial illumination and so they look bluish. Therefore, these images are a challenge for understanding how the selected underwater algorithms can deal with such a situation to recover the colour cast. In the last row we can find the pictures of the amphorae at the Mazotos shipwreck. Due to the considerable water depth, these images were acquired with artificial light, using a Canon PowerShot A620, a non-SLR camera, at a resolution of 3072 × 2304 pixels, which implies an image ratio of 4:3, different from the 16:9 ratio of the images taken at the other underwater sites. The use of artificial light to acquire these images produced a bright spot due to backward scattering.
The described dataset is composed of very heterogeneous images that address a wide range of potential underwater environmental conditions and problems, such as the turbidity of the water that makes the underwater images hazy, the water depth that causes colour casting and the use of artificial light that can lead to bright spots. It makes sense to expect that each of the selected image enhancement algorithms should perform better on the images that represent the environmental conditions against which it was designed.
Evaluation Methods
Each image included in the dataset described in the previous section was processed with each of the image enhancement algorithms introduced in Section 3, taking advantage of the enhancement processing tool that we developed, which includes all the selected algorithms, in order to speed up the processing task. The authors suggested some standard parameters for their algorithms in order to obtain good enhancement results. Some of these parameters could be tuned differently for the various underwater conditions in order to improve the result. We decided to leave all the parameters at their standard values, in order not to influence our evaluation with a tuning of the parameters that could have been more effective for one algorithm than for another.
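As an illustration only, the batch step amounts to applying every algorithm, with its default parameters, to every image of the dataset. The following Python sketch shows the idea with a placeholder CLAHE-style enhancer; the function names, file layout and OpenCV-based implementation are our assumptions, not the actual code of our tool.

```python
# A minimal sketch of the batch processing step; not the tool's real API.
import os
import cv2  # assumes OpenCV for image I/O


def enhance_clahe(img):
    # Placeholder CLAHE enhancement applied to the luminance channel only.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)


# The other four algorithms would be registered here in the same way.
ALGORITHMS = {"CLAHE": enhance_clahe}


def process_dataset(in_dir, out_dir):
    for name in os.listdir(in_dir):
        img = cv2.imread(os.path.join(in_dir, name))
        for algo_name, enhance in ALGORITHMS.items():
            out = enhance(img)  # default parameters, no per-image tuning
            cv2.imwrite(os.path.join(out_dir, f"{algo_name}_{name}"), out)
```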
We have employed some quantitative metrics, representative of a wide range of metrics employed in the field of underwater image enhancement, to evaluate all the enhanced images. In particular, these metrics are employed in the evaluation of hazy images in [29]. Similar metrics are defined in [30] and employed in [10]. So, the objective performance of the selected algorithms is evaluated in terms of the following metrics.

The first one is obtained by calculating the mean value of image brightness. Formally, it is defined as

$$M_c = \frac{1}{R\,L}\sum_{i=1}^{R}\sum_{j=1}^{L} I_c(i,j)$$

where c ∈ {r, g, b}, $I_c(i,j)$ is the intensity value of the pixel (i, j) in the colour channel c, (i, j) denotes the i-th row and j-th column, and R and L denote the total number of rows and columns, respectively. When $M_c$ is smaller, the efficiency of image dehazing is better. The mean value over the three colour channels is a simple arithmetic mean, $M = (M_r + M_g + M_b)/3$.

Another metric is the information entropy, which represents the amount of information contained in the image. It is expressed as

$$E_c = -\sum_{i=0}^{255} p(i)\,\log_2 p(i)$$

where p(i) denotes the distribution probability of the pixels at intensity level i. An image with the ideal equalization histogram possesses the maximal information entropy of 8 bits. So, the bigger the entropy, the better the enhanced image. The mean value over the three colour channels is defined as $E = (E_r + E_g + E_b)/3$.

The third metric is the average gradient of the image, which represents the local variance among the pixels of the image, so the bigger its value, the better the resolution of the image. It is defined as

$$G_c = \frac{1}{(R-1)(L-1)}\sum_{i=1}^{R-1}\sum_{j=1}^{L-1}\sqrt{\frac{\left(I_c(i+1,j)-I_c(i,j)\right)^2+\left(I_c(i,j+1)-I_c(i,j)\right)^2}{2}}$$

where $I_c(i,j)$ is the intensity value of the pixel (i, j) in the colour channel c, and R and L denote the total number of rows and columns, respectively. The mean value over the three colour channels is a simple arithmetic mean, $G = (G_r + G_g + G_b)/3$.
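The three metrics are straightforward to reproduce. The following is a minimal Python sketch of their computation (not the code of our tool), assuming an 8-bit RGB image stored as a NumPy array of shape (R, L, 3).

```python
# A minimal sketch of the three evaluation metrics defined above.
import numpy as np


def mean_brightness(img):
    # M_c: per-channel mean intensity; lower means better dehazing.
    M = img.reshape(-1, 3).mean(axis=0)            # (M_r, M_g, M_b)
    return M, M.mean()                             # per-channel and overall M


def entropy(img):
    # E_c: Shannon entropy of each channel's histogram (max 8 bits).
    E = []
    for c in range(3):
        hist, _ = np.histogram(img[..., c], bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]                               # ignore empty bins
        E.append(-(p * np.log2(p)).sum())
    E = np.array(E)
    return E, E.mean()


def average_gradient(img):
    # G_c: mean local gradient magnitude; higher suggests better resolution.
    img = img.astype(np.float64)
    G = []
    for c in range(3):
        di = np.diff(img[..., c], axis=0)[:, :-1]  # vertical differences
        dj = np.diff(img[..., c], axis=1)[:-1, :]  # horizontal differences
        G.append(np.sqrt((di ** 2 + dj ** 2) / 2).mean())
    G = np.array(G)
    return G, G.mean()
```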
Results
This section reports the results of the quantitative evaluation performed on all the images in the dataset, both for the original ones and for the ones enhanced with each of the previously described algorithms. The dataset is composed of twelve images. So, enhancing them with the five algorithms, the total number of images to be evaluated with the quantitative metrics is 72 (12 originals and 60 enhanced). For practical reasons, we will report here only a sample of our results, which consists of the original image named "Baia1" and its five enhanced versions (Figure 2).

Table 1 contains the results of the quantitative evaluation performed on the images shown in Figure 2. The first column reports the metric values for the original image and the following columns report the corresponding values for the images enhanced with the respective algorithms. Each row, instead, reports the value of each metric calculated for each colour channel and its mean value, as defined in Section 5. The values marked in bold correspond to the best value for the metric defined by the corresponding row. Focusing on the mean values of the three metrics (M, E, G), it can be deduced that the SP algorithm performed better on the mean brightness, the ACE algorithm performed better on enhancing the information entropy, and the CLAHE algorithm improved the average gradient more than the others. So, according to these values, these three algorithms gave qualitatively equal outcomes in the case of the "Baia1" sample image. Perhaps it is possible to draw a further consideration by analysing the values of the metrics for the single colour channels. In fact, looking at all the values marked in bold, the SP algorithm reached better results more times than the other two. So, the SP algorithm should have performed slightly better in this case.
Table 1. Results of the evaluation performed on the "Baia1" image with the metrics described in Section 5.

For each image in the dataset we have elaborated a table such as Table 1. Since it is neither practical nor useful to report all these tables here, we summarized them in a single table (Table 2). Table 2 has four sections, one for each underwater site. Each of these sections reports the average values of the metrics calculated for the related site, defined as
$$M_s = \frac{M_1 + M_2 + M_3}{3},\qquad E_s = \frac{E_1 + E_2 + E_3}{3},\qquad G_s = \frac{G_1 + G_2 + G_3}{3}$$
where $(M_1, E_1, G_1)$, $(M_2, E_2, G_2)$ and $(M_3, E_3, G_3)$ are the metrics calculated for the first, the second and the third sample image of the related site, respectively. Obviously, the calculation of these metrics was carried out on the three images enhanced by each algorithm. In fact, each column reports the metrics related to a given algorithm. This table enables us to draw some more global considerations about the performance of the selected algorithms on our image dataset. Focusing on the values in bold, we can deduce that the SP algorithm performed better at the sites of Baiae, Cala Cicala and Cala Minnola, having achieved the highest values in two out of three metrics ($M_s$, $G_s$). Moreover, looking at the entropy ($E_s$), i.e., the metric on which SP lost, we can recognize that the values calculated for this algorithm are not so far from the values calculated for the other algorithms. As regards the underwater site of Mazotos, the quantitative evaluation conducted with these metrics does not seem to converge on any of the algorithms. Moreover, the ACE algorithm seems to be the one that performs better in enhancing the information entropy of the images.
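For clarity, the per-site averaging behind Table 2 reduces to a simple mean over the three sample images. The sketch below illustrates it, assuming a dictionary that maps (site, algorithm) pairs to the three (M, E, G) triples; the data layout is illustrative, not our tool's actual structure.

```python
# A minimal sketch of the per-site summary used in Table 2.
import numpy as np


def site_averages(metrics, site, algorithm):
    # metrics[(site, algorithm)] is a list of three (M, E, G) tuples,
    # one per sample image of the site.
    triples = np.asarray(metrics[(site, algorithm)])  # shape (3, 3)
    m_s, e_s, g_s = triples.mean(axis=0)              # (M_s, E_s, G_s)
    return m_s, e_s, g_s
```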
For the sake of completeness, we want to report a particular case that is worth mentioning. Looking at Table 3, it is possible to conclude that the SP algorithm performed better than all the others according to all three metrics in the case of "CalaMinnola2". In Figure 3 we can see the CalaMinnola2 image enhanced with the SP algorithm. It is quite clear, looking at this image, that the SP algorithm in this case has generated some 'artefacts', likely due to the oversaturation of some image details. This issue could probably be solved or attenuated by tuning the saturation parameter of the SP algorithm, which we fixed at a standard value, as for the parameters of the other algorithms too. Anyway, the point is that the metrics were misled by these 'artefacts', assigning a high value to the enhancement made by this algorithm.
Conclusions

In this work, we have selected five state-of-the-art algorithms for the enhancement of images taken at four underwater sites with different environmental and illumination conditions. We have evaluated these algorithms by means of three quantitative metrics selected among those already adopted in the field of underwater image enhancement. Our purpose was to establish which algorithm performs better than the others and whether or not the selected metrics were good enough to compare two or more image enhancement algorithms.

According to the quantitative metrics, the SP algorithm seemed to perform better than the others at all the underwater sites, except for Mazotos. For this site, each metric assigned a higher value to a different algorithm, preventing us from deciding which algorithm performed better on the Mazotos images. Such an undefined result is the first drawback of evaluating underwater images relying only on quantitative metrics. Moreover, these quantitative metrics, implementing only a blind evaluation of a specific intrinsic characteristic of the image, are unable to identify 'problems' in the enhanced images, such as the 'artefacts' generated by the SP algorithm in the case documented in Figure 3 and Table 3.

Anyway, looking at Figure 4 and performing a qualitative analysis from the point of view of human perception, the result suggested by the quantitative metrics seems to be confirmed, as the SP algorithm performed well in most of the cases. The only case in which the SP algorithm failed was at the Cala Minnola underwater site, probably due to an oversaturation of some image details that could likely be fixed by tuning its saturation parameter.

In conclusion, even if the quantitative metrics can provide a useful indication of image quality, they do not seem reliable enough to be blindly employed for an objective evaluation of the performance of an underwater image enhancement algorithm. Hence, in the future we intend to design an alternative methodology to evaluate underwater image enhancement algorithms. Our approach will be based on the judgement of a panel of experts in the field of underwater imagery, who will express an evaluation of the quality of the enhancement conducted on an underwater image dataset through some selected algorithms. The result of the expert panel judgement will be used as a reference in the algorithm evaluation, comparing it to the results obtained through a larger set of quantitative metrics that we will select from the state of the art. We will conduct this study on a wider dataset of underwater images that will be more representative of underwater environmental conditions.
Supplementary Materials:
The image enhancement tool is available online at www.imareculture.eu/projecttools.html.
Figure 3. Artefacts in the sample image "CalaMinnola1" enhanced with the SP algorithm.
Table 2. Summary table of the average metrics calculated for each site.
Table 3. Average metrics for the sample image "CalaMinnola2" enhanced with all algorithms.
ANASTE Calabria Recommendations for the Treatment of Frail Elderly Diabetic Patients Hospitalized in Nursing Homes
The prevalence of diabetes increases with age and is higher in the elderly and in patients admitted to nursing homes. During aging, a functional reduction of the beta cell and an increase in insulin resistance cause a greater risk of developing diabetes mellitus. It is also well known that the association between aging and insulin resistance has a multifactorial origin. In the elderly, both reduced physical activity and the increase in visceral adipose tissue that at least partly follows it may play a causal role. Older adults with diabetes have higher rates of functional disability and sudden death, and of concomitant diseases such as hypertension, coronary heart disease and stroke, than those without diabetes. Older adults with diabetes are at increased risk for several common geriatric syndromes, such as polypharmacy and adverse reactions to drugs, depression, cognitive impairment, urinary incontinence, and persistent pain. In particular, the risk of macrovascular events is doubled and is related to the duration of illness, the metabolic compensation and the number of other cardiovascular risk factors already present. Most of the elderly subjects in long-term care (LTC) facilities are frail. The treatment of the elderly with diabetes is complicated by the heterogeneity of functional and clinical status. Life expectancy and clinical conditions are highly variable. Those who take care of elderly people with diabetes must take this heterogeneity into account when establishing the priorities and goals of treatment.
Introduction
The objective of the present paper is to provide recommendations for the management of hospitalized frail elderly patients admitted to nursing homes and extensive rehabilitation facilities associated with ANASTE Calabria. The paper is based on reviewed literature, Cochrane analyses, consensus statements elaborated in other documents, and the opinions of experts.
Epidemiology
The number of elderly patients with diabetes mellitus is growing strongly. According to the most recent surveillance data, the prevalence of diabetes among U.S. adults aged > 65 years varies from 22 to 33%, depending on the diagnostic criteria used [1]. The prevalence of diabetes increases with age, up to 18.9% in people aged ≥ 75 years [2]. Diabetes is common in LTC facilities, with an overall prevalence of 25% [3]. The prevalence of diabetes mellitus was 27.5% among elderly patients in residential facilities associated with ANASTE Calabria, and the average age of patients with diabetes was 79.8 years [4]. The increase in the number of diabetic patients with age is consistent with the association of the aging processes with a decline in glucose tolerance and hence with an increased risk of developing type 2 diabetes [5,6].
Peculiarities of Type 2 Diabetes in the Elderly
The onset of diabetes in the elderly is overwhelmingly related to the decline of beta-cell function and the consequent decrease in insulin secretion, both in basal conditions and in response to glucose [6-8]. It is also well known that the association between aging and insulin resistance has a multifactorial origin. In the elderly, both reduced physical activity and the increase in visceral adipose tissue that at least partly follows it may play a causal role [9]. It has recently been reported that a further cause of insulin resistance in the elderly is the increased triglyceride content of skeletal muscle tissue, resulting in a decline of mitochondrial function with a reduction of oxidative capacity and phosphorylation [10].
Comorbidities
In Italy, 18.8% of people aged ≥ 65 years have disabilities. The prevalence of disability is directly related to age, ranging from 5.5% in the 65-69 years age group to 44.5% at ages ≥ 80 years, and 70% of older people with disabilities also have three or more chronic diseases [11,12]. In the United States, diabetes is the sixth leading cause of death among the elderly. It has been estimated that patients who develop diabetes over the age of 65 have a life expectancy reduced by at least 4 years [13]. Older adults with diabetes have higher rates of functional disability and sudden death, and of concomitant diseases such as hypertension, coronary heart disease and stroke, than those without diabetes. Older adults with diabetes have an increased risk for several common geriatric syndromes, such as polypharmacy and adverse reactions to drugs, depression, cognitive impairment, urinary incontinence, falls, and persistent pain [4]. The risk of macrovascular events, in particular, is doubled and is related to the duration of illness, the metabolic compensation and the number of other cardiovascular risk factors present [14].
The majority of elderly subjects in LTC are frail. The frail elderly may be defined as persons of advanced or very advanced age, chronically suffering from multiple diseases, with unstable health, often disabled, in whom the effects of aging and diseases are often complicated by socioeconomic problems [15]. Another feature of the elderly population living in LTC is polypharmacy. In fact, it has been shown that these patients take an average of 6.87 ± 5 drugs [16].
Glycemic Control
Elderly diabetics who do not have physiological and cognitive disorders, and who have a good life expectancy, should be treated according to the same criteria used for young adults. For patients with advanced diabetes complications, concomitant diseases that limit life expectancy, or substantial cognitive or functional impairment, it is reasonable to set less intensive glycemic targets.
Glycemic goals for frail older people with functional and cognitive impairment may be relaxed based on individual criteria, but symptomatic hyperglycemia and the risk of acute complications of hyperglycemia should be avoided in all patients.
The treatment of the elderly with diabetes is complicated by the heterogeneity of clinical and functional status. Some patients developed diabetes years before and may have significant complications; others who are newly diagnosed may have had years of undiagnosed diabetes, with complications that will arise from the disease. Some are frail and have other basic chronic conditions, consisting of comorbidities related to diabetes, or even limited physical and/or cognitive functioning. Others have little comorbidity and are active. Life expectancy and clinical conditions are highly variable. Those who take care of elderly people with diabetes must take this heterogeneity into account when establishing the priorities and goals of treatment. There are few long-term studies in the elderly showing the benefits of intensive control of glycemia, blood pressure, and lipids.
The management of other risk factors provides strong confirmation of the concept that comprehensive diabetes care involves not just hyperglycemia but the overall treatment of each vascular risk factor.
The treatment of hypertension is indicated in virtually all older people, whereas therapy with statins and aspirin can benefit those who have a life expectancy at least equal to the duration of the primary or secondary prevention studies. Screening for diabetes complications should be individualized in older adults; special attention should therefore be paid to complications which may lead to functional impairment [18]. There are no large trials of lipid-lowering interventions specifically in older adults with diabetes. A meta-analysis of 18,686 people with diabetes in 14 trials of statin therapy for primary prevention showed similar 20% relative reductions in major adverse vascular outcomes in those aged under 65 years compared with those aged >65 years [19].
Hypoglycemia
These patients are likely more exposed to serious adverse effects from hypoglycemia. However, patients with poorly controlled diabetes may be subject to acute complications of diabetes, including dehydration, poor wound healing, and hyperglycemic and/or hyperosmolar coma. Glycemic targets should at least avoid these consequences. Particular care is required in prescribing and monitoring drug therapy in the elderly. Treatment goals should be realistic, practical and explicit, with particular care to avoid hypoglycemic risk without jeopardizing the achievement of acceptable blood glucose levels [20].
In older diabetic subjects, the hormonal activation in response to hypoglycemia is attenuated: it is due to a reduced glucagon response and is almost entirely supported by the secretion of adrenaline [21]. The counter-regulatory catecholamine secretion may have hemodynamic and hemorheological consequences, with particularly negative effects on the brain and heart. On the other hand, the increase in circulating catecholamines determines an increased myocardial oxygen demand, which can become critical in the presence of an impaired coronary circulation. To this damage is added the thrombotic risk induced by adrenaline [22]. In elderly diabetics, therefore, hypoglycemic episodes might be triggers of major cardiovascular and cerebrovascular events, such as myocardial infarction and stroke [20]. In subjects older than 70 years, hypoglycemia may present with an atypical set of symptoms, characterized by weakness, faintness, unsteadiness, drowsiness, light-headedness, and difficulty in concentration. In addition, the very perception of symptoms is often attenuated, while changes in motor coordination or articulation that simulate the clinical picture of an acute cerebral event are frequently observed [23,24]. During hypoglycemia, loss of consciousness or impaired coordination may lead to falls to the ground with traumatic injuries and fractures. Metformin is often contraindicated because of significant renal or heart failure. The TZDs can cause fluid retention, which may exacerbate or lead to heart failure. They are contraindicated in patients with moderate and severe CHF (NYHA III-IV) and, if used, must be used with caution in patients with, or at risk of, milder degrees of CHF. Sulfonylureas, other insulin secretagogues and insulin may cause hypoglycemia.
Lifestyle and Therapeutic Options
Interventions designed to affect an individual's physical activity level as well as food intake are critical parts of the management of type 2 diabetes. In LTC, every patient is included in programs of dietary intervention and physical activity compatible with the overall clinical condition. The diet should be controlled and contain fibre-rich foods such as vegetables, fruits, whole grains, legumes, low-fat dairy products, and fresh fish [25]. The choice of food must take into account the patients' culture, preferences and personal goals, as well as their abilities, which can increase quality of life, satisfaction with meals, and nutritional status [26]. The more frail patients, especially those with cognitive dysfunction, may have an altered sensation of thirst, contributing to the risk of hypovolemia and hyperglycemic crisis; therefore, fluid intake is encouraged, monitored and planned as part of treatment [27]. Physical activity should be promoted as much as possible, ideally aiming for at least 150 min/week of moderate activity [28]. On the other hand, a minimal tolerated physical activity is encouraged in complicated patients [29].
Drugs should be initiated at the lowest dose and then increased gradually until the goals are achieved, without developing side effects. In addition, in the elderly, screening for diabetes complications should be individualized. Particular attention should be paid to complications that may develop in a short period of time and/or that could significantly impair the functional state, such as visual and lower-limb complications.
Elderly type 1 diabetics should continue their insulin regimen. Multiple daily dosing regimens should be maintained even in old age, as they are the safest, also with respect to hypoglycemia. Even a patient suffering from type 2 diabetes can be in conditions that more or less stringently require the use of insulin therapy.
The indications are: a) the presence of consistently high glucose values despite oral hypoglycemic treatment and diet, as frequently occurs with progressive beta-cell depletion, commonly called "secondary failure"; b) the existence of contraindications to the use of oral hypoglycemic agents in the presence of inadequate glycemic control with lifestyle changes alone; c) the presence of intercurrent disease. In elderly patients, insulin therapy should not be avoided but customized, without pursuing optimal glycemic targets at an unacceptably high hypoglycemic risk [30]. In most cases, the introduction of insulin therapy is carried out with simplified schemes, i.e., a single injection of intermediate- or slow-acting insulin, preferably at bedtime, to prevent morning hyperglycemia. This scheme has its rationale only in patients who still exhibit a significant beta-cell secretory capacity with meals, allowing adequate blood glucose levels to be restored in the fasting state through the inhibition of nocturnal gluconeogenesis. To reduce the risk of hypoglycemia while maintaining strict glycemic targets, the use of the slow analogue glargine as basal insulin has been proposed, peculiarly characterized by a slow action without a peak. In a series of 426 patients with type 2 diabetes, aged 40-80 years, poorly controlled with oral hypoglycemic therapy and randomized to therapy with insulin glargine vs. NPH at bedtime, with no variation of the previous oral hypoglycemic therapy, it was observed that at the end of the 52 weeks of treatment the improvement of the compensation was similar in both groups, but patients treated with glargine had better postprandial glucose values and a smaller number of hypoglycemic episodes, especially at night [31]. The group of patients treated with insulin glargine was also characterized by a lower weight gain [32]. A recent meta-analysis of randomized controlled trials involving the use of insulin glargine vs. NPH insulin in patients with type 2 diabetes has further shown a lower risk of nocturnal hypoglycemia with the use of insulin glargine [33]. Even with insulin glargine there is a rationale for combination therapy with oral hypoglycemics, chosen according to the patient's clinical phenotyping (metformin if insulin resistance prevails, a sulfonylurea if insulin secretory deficiency prevails). There is strong evidence that the use of incretin therapy, in particular the DPP-4 inhibitors, could offer significant advantages in older persons. Clinical evidence suggests that the DPP-4 inhibitors vildagliptin and sitagliptin are particularly suitable for frail and debilitated elderly patients because of their excellent tolerability profiles [34].
Patients with type 2 diabetes are often characterized by a declining pre-prandial glycemic profile, so that fasting blood glucose is the highest value of the day and the glucose nadir is observed before dinner [35]. It is therefore dangerous, in these patients, to monitor insulin therapy using fasting plasma glucose only, even when a single evening injection of an intermediate or slow insulin formulation is administered, with or without oral hypoglycemic agents. A daily blood glucose profile should be recommended, with particular attention to measuring blood glucose before dinner. If an important beta-cell defect is documented, it is unrealistic to expect to achieve adequate control with basal insulin associated with oral hypoglycemic agents only [36]. Therefore, even in elderly type 2 diabetic patients, the most rational therapy turns out to be the one with multiple insulin injections, performed with rapid insulin before meals and intermediate insulin at bedtime, or with rapid-acting insulin analogues before meals and insulin glargine. The four-injection scheme is undoubtedly the most suitable for achieving the desired glycemic goals [37].
According to the ACCORD, ADVANCE, and VADT studies, about 80% of frail patients obtain the best possible glycemic control with insulin therapy [38]. We must prevent these patients from suffering the consequences of a delayed introduction of insulin therapy merely because they are elderly (clinical inertia) [39].
In the majority of elderly patients with type 2 diabetes who present the appropriate indications, there are no serious obstacles to the introduction of insulin therapy, which often improves the general state of health and life prospects [40].
Appendix
Operating instructions for the management of the frail elderly diabetic patient

Purpose: The purpose of this operational statement is to describe the management of the frail elderly diabetic patient from entrance into the structure to discharge, with return home if possible, and to provide uniform procedures for all the professionals involved, in order to make the intervention more effective and efficient and to optimize resources.
Field of application:
The requirements of this operating instruction apply to all functions and processes of the structures associated with ANASTE Calabria. These procedures are valid until revised, without prejudice to any provision of the medical facility according to specific health needs.
Description of activities:
On the day agreed with the patient and/or family and/or the other care setting, the new guest is received and the various stages of admission are activated. The physician of the facility, supported by the professional nurse, the care operator, the rehabilitation therapist, the psychologist and the social worker, carries out the medical examination and the compilation of the medical records. During history taking, the patient is checked for the presence of a history of diabetes. In the subsequent medical examination, physiological parameters and vital signs are recorded, including capillary blood glucose measured with a portable blood glucose meter. Blood tests are requested for control, including blood glucose and glycosylated hemoglobin (HbA1c).
Patient with no history of diabetes
New diagnosis of diabetes mellitus: evaluation of the metabolic targets (glucose, HbA1c, lipid profile, blood pressure); evaluation of possible adherence to behavioral changes; evaluation of adherence to drug therapy; treatment as a function of HbA1c (HbA1c ≤ 7.5%; HbA1c > 7.5% and < 9%; HbA1c ≥ 9%); levels of management in relation to the assessment of clinical risk.
Patient with a history of diabetes mellitus: evaluation of the metabolic targets (glucose, HbA1c, lipid profile, blood pressure); evaluation of possible adherence to behavioral changes; evaluation of adherence to drug therapy; treatment as a function of HbA1c (HbA1c ≤ 7.5%; HbA1c > 7.5% and < 9%; HbA1c ≥ 9%; see the sketch below); levels of management in relation to the assessment of clinical risk.
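As a purely illustrative aid, the HbA1c stratification used in both pathways can be written as a simple decision rule. The following minimal Python sketch assumes the thresholds stated above; the returned labels are placeholders, not prescriptive management plans.

```python
# A minimal sketch of the HbA1c-based stratification described above.
def hba1c_band(hba1c_percent: float) -> str:
    if hba1c_percent <= 7.5:
        return "HbA1c <= 7.5%"
    elif hba1c_percent < 9.0:
        return "HbA1c > 7.5% and < 9%"
    else:
        return "HbA1c >= 9%"
```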
Clinical and Neuropsychological Factors Associated with Treatment Response and Adverse Events of Atomoxetine in Children with Attention-Deficit/Hyperactivity Disorder
Objectives: The objective of this study was to investigate clinical and neuropsychological factors associated with treatment response and adverse events of atomoxetine in children with attention-deficit/hyperactivity disorder (ADHD) in Korea. Methods: Children with ADHD were recruited at the Department of Psychiatry of Asan Medical Center from April 2015 to April 2018. Diagnoses of ADHD and comorbid psychiatric disorders were confirmed with the Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version. The subjects were subsequently treated with atomoxetine for 12 weeks and illness severity was scored using the ADHD Rating Scale, Clinical Global Impression-Severity scale (CGI-S) and/or Improvement scale (CGI-I), at pre- and post-treatment. They also completed the Advanced Test of Attention (ATA), while their caregivers completed the Korean Personality Rating Scale for Children (KPRC) at pre- and post-treatment. Independent t-test, Fisher’s exact test, χ2 test, mixed between-within analysis of variance and correlation analysis were used for statistical analysis. Results: Sixty-five children with ADHD (mean age: 7.9±1.4 years, 57 boys) were enrolled, of which, 33 (50.8%) were treatment responders. Scores on the social dysfunction subscale of the KPRC (p=0.021) and commission errors on the visual ATA (p=0.036) at baseline were higher in treatment non-responders than in responders; however, the statistical significances disappeared after adjusting for multiple comparisons. Mood changes were also observed in 13 subjects (20.0%), and three of them discontinued atomoxetine due to this. Additionally, atomoxetine-emergent mood change was observed more frequently in girls (p=0.006), while the intelligence quotient (p=0.040) was higher in those subjects with mood changes than in those without. Conclusion: The results of our study suggest that clinical and neuropsychological factors could be associated with treatment response or adverse events of atomoxetine in children with ADHD. Further long-term studies with larger samples are needed.
INTRODUCTION
.0% of children with ADHD being treated with atomoxetine discontinue medication due to non-response [9,10]. Therefore, identifying the predictors of treatment response to atomoxetine is necessary in order to enhance medication adherence and improve the treatment outcome. Block et al. [6] reported that the score reduction in the items "fails to give close attention or makes careless mistakes" and "easily distracted" on the ADHD Rating Scale (ARS) in the first week of an atomoxetine trial is a positive predictor of its treatment response. Additionally, Newcorn et al. [7] identified a certain level of improvement by the fourth week of atomoxetine treatment as a predictor of the treatment response in six- to nine-week atomoxetine trials. Furthermore, Treuer et al. [11] reported older age and female sex as positive predictors of a greater remission rate in patients from non-Western countries, including Asia (China and Taiwan). However, factors associated with treatment response to atomoxetine have been studied less intensively than those for methylphenidate, especially in Asian populations.
Adverse events of atomoxetine may include increased irritability, nausea, decreased appetite, and somnolence [12], and often result in discontinuation of treatment [10,13]. However, there has been little research on the predictive factors of the adverse events of atomoxetine. Identification of these predictors can contribute to enhancing medication adherence to atomoxetine and decreasing the time required to achieve the desired therapeutic goals, thus minimizing the losses to individual patients and society caused by non-response and treatment delay.
This study aims to investigate the clinical and neuropsychological factors associated with treatment response and adverse events of atomoxetine in Korean children with ADHD.
Subjects and study design
Subjects were recruited at the Department of Psychiatry of Asan Medical Center from April 2015 to April 2018. The inclusion criteria for this study were the following: 1) children aged 5-12 years; 2) a diagnosis of ADHD under the Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition (DSM-IV) [14] and the Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version (K-SADS-PL) [15]; and 3) being started on atomoxetine. The exclusion criteria were the following: 1) presence of learning disorders, mental retardation, bipolar disorder, psychotic disorders, developmental disorders, organic brain disease, epilepsy, or neurological disorders; 2) presence of tic disorders, obsessive-compulsive disorder, major depressive disorder, or anxiety disorders that required pharmacotherapy; 3) presence of severe suicidal ideation; 4) a history of methylphenidate or atomoxetine treatment within the past six months; 5) current serious medical conditions (such as cardiovascular, hepatic, renal, and respiratory disorders, and glaucoma); and 6) current medication with alpha-2 adrenergic receptor agonists, antidepressants, antipsychotics, benzodiazepines, anticonvulsants, or dietary supplements with significant effects on the central nervous system.
Prior to initiating the atomoxetine medication, the subjects underwent the following baseline pre-treatment tests: The ARS [16], Advanced Test of Attention (ATA) [17], and Clinical Global Impression-Severity scale (CGI-S) [18], while their caregivers underwent the Korean Personality Rating Scale for Children (KPRC) [19]. After completion of the 12 weeks of atomoxetine treatment, the subjects underwent the following post-treatment tests: all the aforementioned baseline tests, as well as the Clinical Global Impression-Improvement scale (CGI-I) [18] for the assessment of treatment response. Among the subjects who withdrew from the study without completing the 12 weeks atomoxetine treatment, those who took the medication at least once were included in the analysis.
Standard-dose atomoxetine treatment was conducted in compliance with the following guidelines, whereby the optimal dose was determined based on its clinical efficacy as judged by the clinician. The initial dose of atomoxetine (Strattera®, Eli Lilly and Company, Indianapolis, IN, USA) was set at 0.5 mg/kg/day, with the maximum dose at 1.4 mg/kg/day. Dose adjustment was performed at intervals not shorter than a week. Subjects with a CGI-I score of 2 or less, or a decrease in the ARS total score of ≥50% from the baseline after 12 weeks of atomoxetine treatment, were classified as treatment responders. For those subjects who had withdrawn from the study prior to completing the 12 weeks of atomoxetine treatment, treatment response was assessed based on the CGI-I and ARS scores at the time of withdrawal.
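For illustration, the dosing bounds and the responder rule described above can be expressed as simple functions. The following minimal Python sketch assumes the stated limits (0.5 and 1.4 mg/kg/day) and responder criteria; the actual dose selection remained a clinical judgement, not a formula.

```python
# A minimal sketch of the dose bounds and responder classification above.
def dose_bounds(weight_kg: float) -> tuple:
    # Initial dose 0.5 mg/kg/day; maximum 1.4 mg/kg/day.
    return 0.5 * weight_kg, 1.4 * weight_kg


def is_responder(cgi_i: int, ars_baseline: float, ars_week12: float) -> bool:
    # Responder: CGI-I of 2 or less, or a >= 50% decrease in the ARS total
    # score from baseline (assumes a non-zero baseline score).
    ars_drop = (ars_baseline - ars_week12) / ars_baseline
    return cgi_i <= 2 or ars_drop >= 0.5
```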
Adverse events were also evaluated using a checklist partially modified from the 61-item checklist presented in a study of methylphenidate [20]; this checklist was administered by a clinician as follows: each of the 44 types of adverse events was evaluated based on severity (mild, moderate, or severe), causality (not related, doubtful, possible, probable, or very likely), and clinical outcome (resolved, improved, no change, aggravated, or serious adverse event), and the onset and resolution dates were registered. Furthermore, this checklist was administered at baseline (pre-treatment) and at the 12th week (post-treatment); during this period, the manifestations of six mood states (depressed mood, labile affect, irritability, anger/hostility, euphoria, and loss of interest) were assessed based on severity and causality, and those with any rating of severity (mild, moderate, or severe) and a causality of possible, probable, or very likely were defined as "mood-related adverse events" associated with the use of atomoxetine.
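As an illustration, the mood-related adverse-event rule described above reduces to a simple predicate. The following minimal Python sketch is not the study's instrument; it only assumes the severity and causality labels stated above.

```python
# A minimal sketch of the mood-related adverse-event rule described above.
SEVERITIES = {"mild", "moderate", "severe"}
CAUSAL = {"possible", "probable", "very likely"}


def is_mood_related_adverse_event(severity: str, causality: str) -> bool:
    # Any severity rating, combined with a causality of possible, probable,
    # or very likely, counts as a mood-related adverse event.
    return severity in SEVERITIES and causality in CAUSAL
```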
This study was approved by the Institutional Review Board of Asan Medical Center (IRB NO. 2014-0157), and written consent for the overall study procedure was obtained from all the participants and their caregivers.
Assessment tools
Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version (K-SADS-PL) [15]

The K-SADS-PL is a semi-structured interview tool designed to assess the severity of the current and lifetime morbidity of 32 DSM-IV child and adolescent psychiatric disorders. Its reliability and validity have been verified [15]. Additionally, the reliability and validity of the Korean version of the K-SADS-PL, translated by Kim et al. [21], have been studied with respect to the items related to ADHD, tic disorders, oppositional defiant disorder, depressive disorders, and anxiety disorders. In this study, the K-SADS-PL was administered by a pediatric psychiatrist and a clinical psychologist experienced in using the tool and familiar with clinical interviews.
Intelligence tests
Korean-Wechsler Preschool and Primary Scale of Intelligence (K-WPPSI) [22]

The Korean-Wechsler Preschool and Primary Scale of Intelligence (K-WPPSI) is an individually administered intelligence test for children aged 3 years 0 months to 7 years 3 months. It is the WPPSI adapted for the Korean population and has been standardized by the Korean Institute of Developmental Tests. As an intelligence test developed to measure the intelligence of preschoolers and early school-age children younger than the age range defined for the Wechsler Intelligence Scale for Children (WISC), it consists of two subscales, the Verbal Intelligence Quotient (VIQ) and the Performance Intelligence Quotient (PIQ), each comprising six subtests. The overall test result is presented as the Full Scale Intelligence Quotient (FSIQ) [22].
Korean-Wechsler Intelligence Scale for Children-Third Edition (K-WISC-III) [22]

The Korean-Wechsler Intelligence Scale for Children-Third Edition (K-WISC-III) is an individually administered intelligence test for the clinical assessment of the cognitive abilities of children aged 6 years 0 months to 16 years 11 months. It is the WISC-III adapted for the Korean population and has been standardized by the Korean Institute for Developmental Tests. The K-WISC-III assesses cognitive abilities with a variety of subtests designed to measure specific abilities. In addition to the VIQ, PIQ, and FSIQ, it provides four factor index scores based on factor analysis. Unlike its previous versions, this version consists of 13 subtests, with the subtest "symbol search" added to the standard 12 subtests for testing children's cognitive abilities [23].
Korean-Wechsler Intelligence Scale for Children-Fourth Edition (K-WISC-IV) [24]

The K-WISC Fourth Edition (K-WISC-IV), a revised version of the K-WISC-III (2001), is an intelligence test for children and adolescents aged 6 years 0 months to 16 years 11 months. The K-WISC-IV provides not only the FSIQ, which indicates overall cognitive ability, but also subtest and index scores that indicate specific cognitive profiles. This version adds five supplementary subtests (picture completion, letter-number sequencing, matrix reasoning, word reasoning, and cancellation), comprising a total of 15 subtests, the results of which are presented as the FSIQ and four index scores. The terms VIQ and PIQ are replaced with the verbal comprehension index and the perceptual reasoning index, respectively [24].
ADHD Rating Scale (ARS)
The ARS is an 18-item scale developed by DuPaul [16] that is used to rate ADHD symptoms in school-age children. It is designed to be completed by researchers, parents, and teachers and is presented as a parent-report or teacher-report inventory. The validity and reliability of this scale have been verified in many studies; So et al. [25] verified the validity and reliability of the Korean version of the parent-report and teacher-report ARS with 1044 children.
Advanced Test of Attention (ATA)
The ATA is a computerized continuous performance test for the quantitative assessment of children's attention and impulse-control abilities. It was developed and standardized for Korean children by Shin et al. [17], and consists of a visual test and an auditory test, each yielding four indices: omission errors, commission errors, mean response time, and response time variability (standard deviation of reaction time). In this study, the z-score was used, whereby a score of 1.5 or more was regarded as deviating from the normal range and 1.0-1.5 as borderline. The internal consistency coefficient of the ATA is 0.87.
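For illustration, the z-score interpretation described above can be written as a small decision rule. The following minimal Python sketch assumes the stated cut-offs (1.5, and 1.0-1.5 for the borderline range).

```python
# A minimal sketch of the ATA z-score bands described above.
def ata_band(z_score: float) -> str:
    if z_score >= 1.5:
        return "deviant from the normal range"
    elif z_score >= 1.0:
        return "borderline"
    return "within the normal range"
```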
Korean Personality Rating Scale for Children (KPRC)
The KPRC was developed by modifying and complementing the Korean Personality Inventory for Children (KPI-C) [26] in order to address the problems of the latter, and has been standardized for children aged 3 to 17 years [19]. It consists of 177 items rated on a 4-point Likert scale that are categorized into three validity scales (T-R scale, L scale, and F scale), one ego-resilience scale, and 10 clinical scales (verbal development, physical development, anxiety, depression, somatic concern, delinquency, hyperactivity, family dysfunction, social dysfunction, and psychoticism).
Clinical Global Impression-Improvement and Severity scales (CGI-I/S)
The CGI-I/S is an observer-rated scale developed by Guy [18]. This scale is used to describe illness severity, response to treatment, and the course of treatment, and is widely used in clinical studies as it can be administered quickly and easily for evaluating psychiatric disorders. Many studies have demonstrated that it has sufficiently high validity even when used by raters without an extensive knowledge of the clinical manifestations of these disorders. The CGI-I rates improvement on a 7-point Likert scale (1=very much improved; 7=very much worse), while the CGI-S rates current severity on a 7-point Likert scale (1=normal; 7=extremely ill).
Data analysis
Five of the recruited subjects withdrew from the study prior to the 12th week, and their data were analyzed using the Last Observation Carried Forward method. The collected data were analyzed as follows: the χ2 test or Fisher's exact test to compare categorical variables, and the independent-samples t-test to compare continuous variables; Bonferroni correction to adjust for multiple comparisons; the paired t-test to compare pre- and post-treatment clinical variables, and mixed between-within analysis of variance to compare pre- and post-treatment clinical variables between the treatment responder and non-responder groups; and correlation analysis to assess the correlations between individual pairs of clinical variables representing pre- and post-treatment changes. Statistical analysis was performed using IBM SPSS 22 Statistics for Windows (IBM Corp., Armonk, NY, USA). The significance level was set at p<0.05.
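As an illustrative aid only (the actual analyses were run in SPSS), the core group comparisons can be sketched in Python with pandas and SciPy; the column names below are our assumptions.

```python
# A minimal sketch of the kinds of comparisons described above.
import pandas as pd
from scipy import stats


def compare_groups(df: pd.DataFrame, var: str, n_tests: int = 1):
    # Independent-samples t-test between responders and non-responders,
    # with a Bonferroni-adjusted significance threshold.
    a = df.loc[df["responder"] == 1, var]
    b = df.loc[df["responder"] == 0, var]
    t, p = stats.ttest_ind(a, b)
    return t, p, p < (0.05 / n_tests)


def pre_post(df: pd.DataFrame, pre: str, post: str):
    # Paired t-test on pre- vs. post-treatment scores of the same subjects.
    return stats.ttest_rel(df[pre], df[post])
```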
RESULTS
Sixty-seven children with ADHD (mean age: 7.9±1.4 years; 58 boys) were recruited from April 2015 to April 2018, of which 65 children (mean age: 7.9±1.4 years; 57 boys) who were treated with atomoxetine at least once were enrolled in the study. Of these subjects, five discontinued the atomoxetine trial prior to 12th week for reasons of refusal (n=1), adverse events (n=3), and a concurrent medication of antipsychotics (n=1).
The overall mean atomoxetine dose was 27.2±9.7 mg/day and 0.89±0.23 mg/kg/day on average. The mean post-treatment CGI-S score decreased by 1.2±0.9 on average compared to the pre-treatment (baseline) score.
No significant differences were observed in age, sex, FSIQ, age of onset, or ADHD subtype between the treatment dropout group (n=5, treatment <12 weeks) and the treatment retention group (n=60, treatment=12 weeks). With the exception of anxiety disorder, which affected a significantly higher proportion of the treatment dropout group compared with the treatment retention group (1/5 vs. 1/60; χ2=5.202, p=0.023), no significant differences were observed in comorbid disorders. Additionally, the treatment dropout group received a significantly lower mean atomoxetine dose than the treatment retention group (0.22±0.24 mg/kg/day vs. 0.50±1.39 mg/kg/day; U=54.000, p=0.018).
Subjects with a CGI-I score of 2 or less, or a decrease in the ARS total score of ≥50% from baseline to 12 weeks atomoxetine treatment were classified as treatment responders; hence, 33 out of the 65 subjects (50.8%) were treatment responders. No significant differences were observed in age, sex, FSIQ, ADHD age of onset, atomoxetine dose, ADHD subtype, and comorbid disorders between the treatment responder and non-responder groups (Table 1).
Comparison between the atomoxetine treatment responder and non-responder groups
The pre- and post-treatment comparisons of the ARS scores between the treatment responder and non-responder groups revealed significant differences in the "inattentive" subscale. No significant intergroup differences between the treatment responder and non-responder groups were observed in the pre-treatment scores of either subscale of the ARS (inattentive and hyperactive-impulsive) or the CGI-S (Table 2).
With respect to the KPRC, the mean pre-treatment score of the non-responder group was significantly higher in the "social dysfunction" subscale (t=-2.367, p=0.021); however, the statistical significance disappeared after the post-hoc test. No significant intergroup differences were observed in the remaining 10 subscales (Table 2).
With respect to the ATA, the mean pre-treatment score of the non-responder group was significantly higher in "commission errors" on the visual ATA (t=-2.140, p=0.036); however, the statistical difference disappeared after the post-hoc test. No significant intergroup differences were observed in all the other subscales of the ATA (Table 2).
No significant intergroup differences were observed in age, ADHD subtype, or comorbid disorders between the atomoxetine-emergent mood change (AEMC) and non-AEMC groups. Furthermore, the AEMC group had a higher proportion of girls (p=0.006) and a significantly higher mean FSIQ (p=0.040) (Table 4). However, the statistical significance of both intergroup differences disappeared after the post-hoc test (critical p-value after Bonferroni correction, 0.00625).
No significant intergroup differences between the AEMC and non-AEMC groups were observed in the pre-treatment scores of all subscales of the ARS, KPRC, ATA, and CGI-S.
Correlation coefficients between the subscales of the KPRC and other ADHD-related scales in terms of pre- and post-treatment changes
Changes were calculated by subtracting the pre-treatment measurement from the post-treatment (12th week) measurement. The changes in the CGI-S score showed no correlations with the changes in any of the KPRC subscale scores; however, the changes in the "inattentive" subscale of the ARS showed significant positive correlations with the changes in the KPRC subscales of depression (r=0.263, p=0.039), delinquency (r=0.293, p=0.021), family dysfunction (r=0.294, p=0.020), and psychoticism (r=0.270, p=0.034). Additionally, the changes in the "hyperactive-impulsive" subscale of the ARS were positively correlated with the changes in the KPRC subscales of delinquency (r=0.332, p=0.008), hyperactivity (r=0.358, p=0.004), family dysfunction (r=0.358, p=0.004), and psychoticism (r=0.295, p=0.020) (Table 5).
DISCUSSION
In this study, atomoxetine treatment non-responders showed higher baseline test scores in the "social dysfunction" subscale of the KPRC and the "commission errors" of the visual ATA, suggesting that these two subscales could be factors associated with atomoxetine treatment response. Moreover, sex and Intelligence Quotient (IQ) were observed as factors associated with AEMC. Finally, the AEMC group had a higher proportion of girls and a higher mean FSIQ as compared to the non-AEMC group.
The treatment non-response rate of this study (49.2%) was similar to those presented in previous studies. Newcorn et al. [7] defined non-response to treatment as less than a 40% decline in ARS scores after short-term atomoxetine treatment (6 to 9 weeks) compared to the baseline value. The non-response rate of atomoxetine treatment in their study was reported to be 40%; in contrast, for long-term outcomes of atomoxetine treatment, treatment non-response rates of 13% and 10% were reported after 6 and 24 months of treatment, respectively [27]. Schwartz and Correll [8] reported a bimodal pattern of treatment response to atomoxetine and suggested that there is a need to examine the factors associated with the genotype or endophenotype of treatment non-responders. In this study, the atomoxetine treatment responders showed significantly lower social dysfunction scores on the KPRC and lower commission errors scores on the visual ATA than did the non-responders [28]. Social function and continuous performance tests have rarely been mentioned in previous studies on predictors of treatment response to atomoxetine. The aforementioned results of this study are consistent with the report of a Korean study [28], which revealed that methylphenidate treatment responders showed a significantly smaller response time variability (standard deviation of reaction time) on the baseline visual ATA. Based on this, it could be inferred that social dysfunction on the KPRC and commission errors on the visual ATA are not atomoxetine-specific predictors, but could serve as predictors of treatment outcomes for general ADHD medication exposure in Korean children.
Furthermore, significant intergroup differences in sex and FSIQ were observed between the AEMC group (n=13, 20.0%) and the non-AEMC group. AEMCs were observed more frequently in girls, presumably because they show easily observable subtle mood changes while concurrently showing impulsivity and hyperactivity less frequently, and further tend to report mood-related adverse events more frequently than boys, on account of their earlier onset of puberty. Similarly, girls were found to show mood-related adverse events more frequently than boys in a study on antidepressants [29]. Whereas many studies have reported an association between higher IQ and better treatment response [30,31] and outcome [32,33] in ADHD children, there are no reports on the association between AEMC and IQ [34][35][36]. The effects of IQ on AEMC would have to be elucidated in a future study.
Little difference was observed in the atomoxetine dose administered between the AEMC and non-AEMC groups in this study. This allows for the assumption that, in a clinical population that is likely to show an effective treatment outcome at a lower dose of atomoxetine, an unnecessarily high dose can act as a factor that boosts the occurrence of adverse events. However, previous studies have reported that the common adverse events of atomoxetine are not dose-dependent [37] and are positively correlated with CYP2D6 metabolic activity [38]; thus, individual drug-metabolizing ability or drug sensitivity may have influenced the occurrence of these adverse events in addition to atomoxetine dosing. After the 12 weeks of atomoxetine treatment, positive correlations were observed between the decrease in the score of the ARS "inattentive" subscale and the decreases in the scores of the KPRC subscales of depression, delinquency, family dysfunction, and psychoticism. Similarly, for the ARS "hyperactive-impulsive" subscale, the post-treatment decrease was positively correlated with decreases in the KPRC subscales of delinquency, hyperactivity, family dysfunction, and psychoticism. The finding that the KPRC subscale items were positively correlated with each of the ARS subscales is similar to an earlier research finding [39], as well as to that of a previous study [40], which reported a significant correlation between reduced ADHD symptoms and functional improvement after ADHD pharmacotherapy.
[Table: common adverse events, n (%) — anger/hostility 5 (7.7), labile affect 5 (7.7), increased appetite 5 (7.7), anxiety 5 (7.7), dizziness 5 (7.7), abdominal pain 4 (6.2), nervousness 4 (6.2); MedDRA: Medical Dictionary for Regulatory Activities]
This study has several limitations. First, the sample size was small, and the analysis was based on observations made during a short period (3 months). Second, as an open-label study performed with patients at only one university hospital, these results may not reflect the characteristics of the entire ADHD population. Third, this study did not control for non-pharmacological treatments that could potentially affect the treatment response in ADHD children. Fourth, there was a significant difference in the comorbidity of anxiety disorders between the treatment dropout group and the treatment retention group, with 1 out of 5 and 1 out of 60 subjects, respectively, which could be assumed to have had little impact on the actual outcome but should nevertheless be considered in the interpretation of the results of this study.
CONCLUSION
The results of this study suggest a potential association between the clinical and neuropsychological factors, and the treatment response or adverse events of atomoxetine in Korean ADHD children. This association would have to be verified through further long-term studies with larger sample sizes and more detailed analyses.
|
2019-04-26T13:36:21.776Z
|
2019-03-31T00:00:00.000
|
{
"year": 2019,
"sha1": "618e0dd873c3c0eccdaa96baed8e2f39e9b15054",
"oa_license": "CCBYNC",
"oa_url": "http://www.jkacap.org/journal/download_pdf.php?doi=10.5765/jkacap.180030",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "618e0dd873c3c0eccdaa96baed8e2f39e9b15054",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
263265453
|
pes2o/s2orc
|
v3-fos-license
|
Kidney function as a key driver of the pharmacokinetic response to high‐dose L‐carnitine in septic shock
Levocarnitine (L‐carnitine) has shown promise as a metabolic‐therapeutic for septic shock, where mortality approaches 40%. However, high‐dose (≥ 6 grams) intravenous supplementation results in a broad range of serum concentrations. We sought to describe the population pharmacokinetics (PK) of high‐dose L‐carnitine, test various estimates of kidney function, and assess the correlation of PK parameters with pre‐treatment metabolites in describing drug response for patients with septic shock.
| INTRODUCTION
Sepsis is a clinical syndrome defined by life-threatening organ dysfunction and a dysregulated host response to infection. 1 In 2017, nearly 50 million cases of sepsis were identified worldwide, and mortality in the most severe form, septic shock, approaches 40%. 2 Beyond antimicrobials, treatment for sepsis remains largely nonspecific and supportive, with a litany of failed clinical trials for more targeted interventions. 3 Sepsis pathophysiology is complex but is partly characterized by a hypermetabolic state and mitochondrial dysfunction, both of which are associated with greater mortality. 4,5 Given the lack of targeted metabolic pharmacotherapy, L-carnitine, an endogenous metabolite that serves a key bioenergetic role in the mobilization of fatty acids for mitochondrial beta-oxidation, was recently tested in patients with sepsis. In a phase I, randomized, double-blind trial, high-dose L-carnitine was found to be safe in 31 patients with septic shock and demonstrated a modest, but significant, improvement in patient mortality versus placebo. 6 A follow-up phase IIb trial did not find evidence that L-carnitine significantly improved patient mortality or organ dysfunction, 7 as measured by the Sequential Organ Failure Assessment (SOFA) score. 8 However, pharmacometabolomic analyses of the phase I trial demonstrated significant inter-patient variability in post-treatment L-carnitine concentrations that correlated with mortality. 9,10 Subsequent work showed that variations in the genetics of the organic cation transporter novel family member 2 (OCTN2), body size, and kidney function may also be important drivers of the observed variability and possibly therapeutic response. 11 Furthermore, a significant mortality benefit from supplemental L-carnitine was observed in the phase IIb trial in patients with elevated acylcarnitines, including acetylcarnitine. 12 Taken together, these findings suggest heterogeneity in the pharmacokinetics (PK) and effectiveness (pharmacodynamics, PD) of high-dose L-carnitine in septic shock.
The overall goal of our study was to construct a population PK model of high-dose, intravenous L-carnitine in an acutely ill cohort of patients with septic shock to better understand the factors that drive L-carnitine blood concentration variability. Given that L-carnitine is extensively cleared by kidney elimination, 13 we recognized an additional opportunity to leverage trial data and contribute to the ongoing conversation regarding the ideal approach to estimating kidney function in critically ill patients. These patients are prone to acute kidney injury, for which serum creatinine is not routinely a reliable measure of renal function. 14 As such, we tested different equations to estimate kidney function based on serum creatinine (S cr ), serum cystatin C (S cys ), and self-identified race in critically ill patients receiving high-dose L-carnitine. In addition, we sought to determine whether other widely available patient covariates improved the model's predictions. As an exploratory aim, we assessed the relationship between individual patient PK parameters and baseline metabolic status and genomic variability in OCTN2.
| Study design and participants
Our work was a secondary analysis of the Rapid Administration of Carnitine in Sepsis (RACE) clinical trial (NCT01665092). 7 The RACE study was a multicenter, placebo-controlled, phase IIb clinical trial that adaptively randomized patients with septic shock to saline placebo or one of three dosing arms for intravenous L-carnitine: 6 grams, 12 grams, or 18 grams. The Bayesian adaptive randomization scheme selected the highest dose as the most efficacious. 15 Study drug or an equivalent volume of saline placebo was given as an intravenous bolus (33% of the dose) immediately followed by a 12-h infusion. The trial was conducted in accordance with the Declaration of Helsinki, where all patients or their legal representatives provided informed consent and all sites were approved by their local Institutional Review Board.
Adult patients were eligible for the trial if they: (i) were enrolled within 24 h of the identification of septic shock; (ii) required high-dose vasopressors; (iii) presented with moderate organ dysfunction (SOFA ≥6); and (iv) had a blood lactate of at least 18 mg/dL (2 mmol/L). Patients who were pregnant, breastfeeding, immunocompromised, or had a history of seizures were excluded. Serum samples for drug and other metabolomics analyses were collected at baseline (T0), end-of-infusion (T12), and 24 h (T24), 48 h (T48), and 72 h (T72) after treatment initiation. Full inclusion and exclusion criteria, as well as detailed sample collection and processing, have been previously described 7,12,16 ; some additional trial details can be found in the supporting information.
| Carnitine and acylcarnitines
We used an existing metabolomics data set including time-series measurements of L-carnitine and pre-treatment (baseline) measurements of acylcarnitines. Analytes were measured in serum samples collected in the RACE trial by reverse-phase liquid chromatography-mass spectrometry (LC-MS) at the Michigan Regional Comprehensive Metabolomics Resource Core at the University of Michigan, as previously described. 9,12 Acylcarnitines are esters formed from the conjugation of L-carnitine and fatty acids of various carbon chain lengths. 17 Absolute quantification for L-carnitine and several acylcarnitines (C2, C3, C4, C5, C8, C14, and C16) was achieved through stable isotope internal standards at a known concentration (NSK-B, Cambridge Isotope Laboratories). An additional eight acylcarnitines were relatively quantified by peak area.
| Small polar molecules
We also used an existing metabolomics data set of polar compounds that was acquired from pre-treatment serum samples using proton nuclear magnetic resonance spectroscopy (¹H-NMR). This assay, which was conducted at the University of Michigan's Biochemical Core, is detailed elsewhere 12,16,18 and identified and quantified 27 low-molecular-weight metabolites. Metabolites included several amino acids, intermediates of the tricarboxylic acid (TCA) cycle, and other bioenergetic compounds.
| Quantification of serum creatinine and cystatin C
Serum creatinine was measured clinically as part of the RACE study, and baseline measures were abstracted from the trial's Research Electronic Data Capture (REDCap) database. 19 Cystatin C was measured in biobanked residual serum samples using a standard, commercially available enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's instructions (R & D Systems, Minneapolis, MN, catalog number DSCTC0).
| Equations to estimate kidney function
The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) has established equations to estimate the glomerular filtration rate (eGFR) based on S cr and/or S cys , patient age, sex, and race. Given the increasing controversy about the inclusion of patient race as a variable 20 and the drawbacks of S cr as a kidney biomarker, 21 we estimated eGFR using four iterations of the CKD-EPI equation: the 2009 CKD-EPI equation 22 (includes patient race and S cr ); the 2021 CKD-EPI equation 23 (uses S cr but drops patient race); the 2012 CKD-EPI equation 24 (uses S cys ); and the 2021 CKD-EPI equation 23 (includes both S cr and S cys without an adjustment for race). All eGFR values were indexed to standard body surface area and are in units of mL/min per 1.73 m². We also tested the estimated creatinine clearance (CrCl) using the Cockcroft-Gault equation. 25
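To make two of these estimators concrete, the sketch below implements the Cockcroft-Gault creatinine clearance and the 2009 CKD-EPI eGFR in Python; the coefficients are transcribed from the published equations cited above rather than from this paper, so verify them against the original references before use.

```python
def crcl_cockcroft_gault(scr_mg_dl: float, age: float, weight_kg: float,
                         female: bool) -> float:
    """Cockcroft-Gault creatinine clearance (mL/min)."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def egfr_ckdepi_2009(scr_mg_dl: float, age: float, female: bool,
                     black: bool) -> float:
    """2009 CKD-EPI eGFR (mL/min per 1.73 m^2), which uses serum
    creatinine and includes a race coefficient."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```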
| Transporter genotyping
L-carnitine is transported into the cell through the OCTN2 transporter, which is also responsible for its renal tubular reabsorption. 13 Given L-carnitine's critical role in metabolic homeostasis, loss-of-function variants in the gene encoding OCTN2 (SLC22A5) are rare and result in inborn errors of metabolism. [27,28] We isolated DNA from buffy coat collected in the RACE trial and genotyped patients at the rs2631367 locus using a commercially available TaqMan genotyping assay (ThermoFisher®, assay ID C__26479161_30).
| Pharmacokinetic modeling
We restricted our secondary analysis to patients who were randomized to receive the study drug and who had a baseline and at least one post-treatment serum sample available. For population PK analysis, post-treatment L-carnitine concentrations were baseline normalized in accordance with United States Food and Drug Administration guidance for modeling endogenous molecules. 29 Baseline normalization was done at the individual level: each individual's pre-treatment (baseline) measurement was subtracted from each of that individual's post-treatment L-carnitine concentrations. Post-treatment concentrations below baseline were assigned a value of zero.
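A minimal sketch of this baseline-normalization step, assuming a wide-format table with one row per patient and hypothetical column names for the sampling times:

```python
import pandas as pd

def baseline_normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Subtract each patient's pre-treatment (T0) L-carnitine concentration
    from the post-treatment concentrations; values below baseline are
    floored at zero, as described above."""
    out = df.copy()
    for col in ["T12", "T24", "T48", "T72"]:
        out[col] = (out[col] - out["T0"]).clip(lower=0)
    return out
```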
All data were cleaned in RStudio, and population PK analysis was performed in the Monolix modeling platform (Version 2021R1, Lixoft SAS, Antony, France). Given the sparse sampling scheme of the RACE trial in relation to the drug infusion time, we opted for a fixed population parameter for the volume of distribution (Vd), based on the median weight of the cohort and previous PK reports that the Vd for intravenous L-carnitine is 0.2 to 0.3 L/kg. 13 To determine the optimal structural PK model, we built a series of models with one, two, or three compartments and a linear elimination rate constant (k). We selected the model based on the Akaike information criterion (AIC) and model diagnostic plots. For the best-performing structural model, we assessed the impact of different kidney function parameters as a covariate on the elimination rate constant. We tested the performance of eGFR as estimated by the various CKD-EPI equations described above; the CrCl according to Cockcroft-Gault; and S cr and S cys as standalone biomarkers. In addition, we considered the sarcopenia index, calculated as 100*(S cr /S cys ), which is a biomarker of muscle mass rather than true kidney function. 30 Next, we considered additional clinical and demographic patient variables as covariates using the automated stepwise covariate model (SCM) building algorithm available in Monolix. Patient demographics included age, sex, weight, and self-identified black race. We also considered the dose of L-carnitine received, organ dysfunction as measured by the SOFA score (with the kidney function component removed), and the sarcopenia index.
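The covariate relationship tested here takes the power form reported in the notes to Table 3; the sketch below shows how an individual elimination rate constant would be computed under that model, where the β exponent is the fitted covariate effect:

```python
def individual_k(k_pop: float, beta: float, renal_estimate: float,
                 constant: float = 30.0) -> float:
    """Covariate model from the Table 3 notes:
    k_i = k_pop * (renal estimate / constant) ** beta.
    The constant is 30 for eGFR and creatinine clearance; for serum
    creatinine, cystatin C, and the sarcopenia index it is the median
    observed value (1.9, 2.3, and 81.3, respectively)."""
    return k_pop * (renal_estimate / constant) ** beta
```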
| Statistical analysis of individual PK parameters
Once the final covariate model was selected, we explored the relationships between the predicted individual patient PK parameters and baseline metabolites, OCTN2 genotype, and patient mortality. Specifically, we computed Spearman's coefficient between individuals' predicted values for k, the rate constant out of compartment one (k 12 ), and the rate constant out of compartment two (k 21 ) and the measured concentrations of baseline acylcarnitines and of small, polar metabolites measured by NMR. Correlations were plotted for relationships with a p-value less than 0.05. We also compared the model-predicted individual parameters stratified by OCTN2 genotype and by 28-day patient mortality using the Kruskal-Wallis test and the Wilcoxon signed-rank test, respectively. All statistical analyses were performed using RStudio (RStudio: Integrated Development for R. RStudio, PBC; http://www.rstudio.com/).
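A brief Python equivalent of these exploratory analyses is sketched below with simulated placeholder data; note that for two independent mortality groups the rank-sum (Mann-Whitney) form of the Wilcoxon-family test is shown, and all arrays are hypothetical stand-ins for the model-predicted parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical individual estimates of the elimination rate constant k,
# split by OCTN2 genotype group (CC / CG / GG)
k_cc, k_cg, k_gg = (rng.lognormal(size=23), rng.lognormal(size=50),
                    rng.lognormal(size=37))

# Compare k across the three genotype groups
h, p_kw = stats.kruskal(k_cc, k_cg, k_gg)

# Compare k between 28-day survivors and non-survivors
k_all = np.concatenate([k_cc, k_cg, k_gg])
died = rng.integers(0, 2, size=k_all.size).astype(bool)
u, p_u = stats.mannwhitneyu(k_all[~died], k_all[died])

# Spearman correlation between k and a baseline metabolite level
metabolite = rng.lognormal(size=k_all.size)
rho, p_rho = stats.spearmanr(k_all, metabolite)
```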
| Patients and pharmacokinetic data
Of the 175 patients randomized to receive L-carnitine in the RACE trial, 130 patients had a baseline and a follow-up serum sample available for population PK analysis. In these patients, we measured drug concentrations in 542 serum samples. Observations at T12 and T72 were underrepresented (Table 1), as these samples were only collected during the initial 'burn-in' phase of the trial, where the first 40 patients were randomized equally to all trial arms. 15 As such, 60% of the cohort in this secondary analysis was randomized to the 18-g treatment arm, as it was selected as the most efficacious by the Bayesian adaptive design.
Baseline S cr was available for all patients. Four patients did not have a sufficient volume of residual baseline serum to measure S cys , and values were imputed from a simple linear model using S cr and patient age, sex, and weight as predictors (Figure S1). Buffy coat for DNA isolation, and thus genetic information at the rs2631367 locus of the OCTN2 transporter, was only available for a subset of the cohort (N = 110, Table 1).
| Population pharmacokinetic modeling
The two-compartment and three-compartment structural models provided significant improvements in model fit over the one-compartment model (Table 2; ∆AIC = −204.31 and −206.74 points, respectively). Although the three-compartment model could be considered a superior model based on AIC reduction alone, this model was plagued by high residual standard errors (R.S.E.) for both population parameters and random effects (Table 2). Thus, we opted to proceed with the simpler, more stable two-compartment structural model. Model diagnostic plots, including the observed versus predicted concentrations, the distribution of residuals, and the visual predictive check (VPC), are provided for the two-compartment structural model in the supplement (Figures S2A, S3A, and S4A).

Kidney function as a covariate of the elimination rate constant reliably improved model fit regardless of the equation or biomarker used. Table 3 shows the impact on the AIC after including various kidney function estimates as a covariate on the elimination rate constant. The eGFR cr-cys , estimated according to the 2021 CKD-EPI equation using both S cr and S cys , provided the largest reduction in AIC.

Figure 1A compares individual estimates for the elimination rate to the eGFR cr-cys stratified by self-identified race. This demonstrated a strong, positive relationship between the elimination rate of L-carnitine and kidney function that is consistent across the two groups. In contrast to kidney function, the relationship between patient weight and the elimination rate was negligible (Figure S5, R² = 0.01).
| Other patient factors and individual variation in pharmacokinetics
From the final model, we determined the individual population PK parameters for the elimination rate constant (k), the rate constant out of compartment one (k 12 ), and the rate constant out of compartment two (k 21 ). Individual PK parameters were also compared to OCTN2 genotypes (rs2631367), baseline metabolite concentrations, and patient mortality at 28 days. Twenty-three patients were wild type (CC) at rs2631367, while 87 patients carried either one (CG, 50 patients) or two (GG, 37 patients) copies of the G allele, which has been associated with greater transporter expression. There was no evidence of a relationship between OCTN2 genotype and any individual PK parameter (Kruskal-Wallis rank test, p > 0.05). Patients who died before 28 days had a lower predicted value for k (Wilcoxon signed-rank test, p = 5.1e-05, Figure 1B), but similar values for k 12 and k 21 . Figure 1C shows the correlations between individual PK parameters and baseline metabolites measured by LC-MS or NMR.
Baseline acylcarnitines tended to be negatively correlated with k and positively correlated with k 21 . Lactate and creatinine were also negatively correlated with k.
| DISCUSSION
The host response to infection and pharmacotherapy in sepsis is highly heterogeneous. 31 A phase I trial of intravenous L-carnitine in patients with septic shock demonstrated a high degree of inter-individual variability in the response to the candidate metabolic-therapeutic. 6,9 This high degree of variability was also evident in post-treatment blood concentrations of L-carnitine. In this secondary population PK analysis of the subsequent phase IIb trial, 7 a two-compartment model with a fixed population parameter for the Vd and eGFR as a covariate of the elimination rate constant best fit the observed data. Importantly, we also found that patient mortality and baseline metabolic status, but not transporter genomics, were related to individual drug response. Pre-treatment S cr was the metabolite most strongly associated with L-carnitine elimination (Figure 1C); other metabolites, particularly those attributable to energy metabolism, were also inversely associated with the estimated L-carnitine elimination rate constant. In aggregate, these findings suggest that while renal function is a primary driver of the variability in L-carnitine blood concentrations following high-dose administration in sepsis patients, pre-treatment energy metabolism also contributes.
In addition to the broad dynamic range of measured L-carnitine blood concentrations, the lack of PK data for L-carnitine given at high doses in patients with septic shock served as the primary justification for our analysis. We also leveraged prior knowledge to inform our analysis. In the phase I trial of high-dose L-carnitine in septic shock, there was considerable inter-patient variability in carnitine and acylcarnitine concentrations post-treatment, with elevated levels associated with mortality. 6,9 Here, we see a similarly broad dynamic range in concentrations following treatment, with non-survivors characterized by lower individual values for the elimination rate and higher concentrations (Figure 1B). Similarly, all acylcarnitines measured were negatively correlated with individual parameters for the elimination rate, as were other energy-related metabolites (Figure 1C). Adverse drug reactions due to L-carnitine were assessed in the phase I 6 and II 7 trials of L-carnitine, but known toxicity of the compound, including an increased potential for seizures and gastrointestinal side effects, was not widely reported. This suggests the higher mortality in patients with elevated concentrations is not directly attributable to L-carnitine toxicity; however, this cannot be completely ruled out. Rather, we speculate that the patients with elevated concentrations had worse kidney function and greater metabolic dysfunction over the course of the study. Importantly, we and others have shown that elevations in acylcarnitines in patients with sepsis who have not received supplemental L-carnitine are associated with disease severity and mortality. Previous reports of L-carnitine PK utilized lower doses that are routinely used in the clinic and in patients who are not acutely ill. 13,35 Administration of radiolabeled L-carnitine demonstrated a renally eliminated drug that can be represented as a three-compartment model with a central pool (approximating extracellular fluid), a faster equilibrating compartment (likely generalizing to kidney and liver), and a slowly equilibrating compartment (i.e., skeletal muscle). 36 Moreover, endogenous L-carnitine is extensively (>98%) reabsorbed in the renal tubules, and single intravenous doses demonstrate saturation of this process and increased clearance of the compound. 37,38 Although we tested more complex population PK models with nonlinear elimination and multiple compartments, these models were characterized by a higher AIC and poor predictions compared to the two-compartment model with linear elimination. However, our work does strongly support the importance of kidney function in the elimination of high-dose exogenous L-carnitine, as each kidney function estimate we considered as a covariate of k dramatically lowered the AIC compared to the base model (Table 3). This strengthens our justification for using L-carnitine to test alternative equations that estimate kidney function in critically ill patients.

Current clinical and drug development standards rely on the measurement of S cr as a biomarker of kidney function. Although wide use and the international standardization of the analytical method to quantify S cr are strengths of this paradigm, there are increasing calls to adopt alternative kidney function biomarkers, particularly in critically ill patients. 14 Another endogenous biomarker, S cys , has demonstrated modest improvement in estimating kidney function for renally eliminated drugs. 39
In our analysis, we found that S cys outperformed S cr as an individual kidney function biomarker and covariate of the elimination rate of a renally cleared compound. We also found that eGFR equations that leverage S cys provided superior model performance and that the inclusion of race to estimate eGFR weakened model fit. Our work adds to growing calls to reconsider the approach to estimating kidney function in clinical practice and drug development.
Finally, we assessed genetic variability at rs2631367 in the OCTN2 transporter to determine its contribution to L-carnitine blood concentration variability, and because we had previously found that it was associated with peak concentrations of L-carnitine. 11 The G allele has been associated with increased mRNA expression of the transporter in eQTL analyses, potentially granting systemic tissue a greater ability to sequester exogenous L-carnitine. 11 Our results here found that the elimination rate and the rates into and out of tissue were not meaningfully related to OCTN2 genotype. Given that OCTN2 is a highly conserved transporter, owing to its critical role in host bioenergetics, it is possible the impact of altered gene transcription was insufficient to affect drug response in a heterogeneous, acutely ill clinical cohort. Moreover, we lacked detailed concomitant medication data in the RACE trial and are unable to account for drug-transporter interactions that could impact tissue sequestration of L-carnitine.
Our study has several strengths and limitations that warrant further consideration. We employed rigorous metabolomics and PK methods to build a well-performing population model of high-dose, intravenous L-carnitine in the setting of septic shock.
In building the population model, we chose to test only the impact of patient covariates commonly available in the clinical setting.
However, as mentioned above, this did not include an accounting of concomitant medication use, in particular drugs known to adversely impact mitochondrial function, such as propofol and valproic acid. 11 Nevertheless, we had a unique opportunity to assess less commonly available patient information, including OCTN2 transporter genotype and baseline metabolic status, in relation to drug response. We were also able to assess different approaches for estimating kidney function in critical illness using a therapeutic candidate drug that is extensively cleared by the kidneys.
However, we acknowledge that, since the study was not designed to model L-carnitine PK, the blood sampling scheme for the trial was rather sparse, particularly early during the drug's infusion, which precluded our ability to fit a population parameter for the Vd.
In addition, we opted to use baseline normalization when considering drug concentrations post-treatment, as L-carnitine is an endogenous molecule and the investigative product administered was not radiolabeled. Our estimates of kidney function are also indexed to a standard body surface area (i.e., eGFR estimates in mL/min/1.73 m²). Missingness in patient height data precluded us from individualizing these estimates in absolute units (mL/min).
Finally, our measurement of S cys was done using residual, biobanked serum and a commercially available ELISA kit rather than a clinical measurement from a fresh patient sample.As such, our results regarding the optimal method for estimating eGFR must be interpreted as exploratory and require rigorous further validation using additional cohorts of critically ill patients and probe drug molecules.
In conclusion, we found that high-dose intravenous L-carnitine in patients with septic shock can be reliably modeled at the population level using a two-compartment model with linear elimination.
Kidney function as a covariate of the elimination rate dramatically improved model performance, with methods that incorporate S cys , but not patient race, providing the greatest improvement. We also found that patient mortality and baseline metabolites were strongly related to individual patient PK parameters. Future assessment of high-dose L-carnitine as a therapeutic for septic shock could include a more tailored dosing approach that considers renal function and pre-treatment metabolic status. We have previously shown that pre-treatment acetylcarnitine serum concentration is predictive of therapeutic benefit from L-carnitine treatment. 34 Consideration of these patient features could aid in moving sepsis, a field that presently has few therapeutic options, toward a precision medicine approach.
T.S.J. has received support from the American Foundation for Pharmaceutical Education. The content is solely the responsibility of the authors and does not necessarily represent the official views of NIGMS or the National Institutes of Health. The concentration data for L-carnitine and the metabolomics data described in this manuscript will be publicly available on the NIH's Metabolomics Workbench site (https://www.metabolomicsworkbench.org/).
FIGURE 1 Association of individual patient parameters with patient characteristics. (A) Scatter plot and line of best fit for the predicted elimination rate constant (k) from the final model versus eGFR. Self-identified black (in blue) and non-black (in green) patients are plotted separately. eGFR was estimated using the 2021 CKD-EPI equation with serum creatinine and cystatin C. Patient parameters were log-transformed prior to plotting. (B) Boxplots of the estimated elimination rate constant (k) stratified by 28-day mortality status. (C) Heatmap of Spearman correlation coefficients between conditional mode estimated individual parameters and baseline (pre-treatment) metabolite levels as measured by LC-MS (acylcarnitines, C2-C16) and ¹H-NMR spectroscopy. Individual parameters considered were k and the rates into (k 12 ) and out of (k 21 ) tissue. eGFR: estimated glomerular filtration rate; CKD-EPI: Chronic Kidney Disease Epidemiology Collaboration; LC-MS: liquid chromatography-mass spectrometry; ¹H-NMR: proton nuclear magnetic resonance.
TABLE 1 Patient demographics and clinical characteristics of individuals included in population pharmacokinetic modeling.

TABLE 2 Comparison of structural pharmacokinetic models. Note: Both the two- and three-compartment models provided substantial improvement in model performance based on the reduction in AIC. The simpler, two-compartment model was selected based on model diagnostic plots and the high residual standard errors in the three-compartment model.

TABLE 3 Population pharmacokinetic models with different renal function estimates as a covariate of the elimination rate, k. Results shown represent the population parameter estimate and residual standard error (%). Covariate model: k = k_pop × (renal estimate/constant)^β; for eGFR and creatinine clearance, the constant was 30; for serum creatinine, cystatin C, and the sarcopenia index, the constant was set equal to the median observed value (1.9, 2.3, and 81.3, respectively).

Abbreviations (Tables 2 and 3): AIC, Akaike information criterion; b, estimated value from proportional error model; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; eGFR, estimated glomerular filtration rate using equations that leverage self-identified black race (race), serum creatinine (cr), and/or serum cystatin C (cys); k_pop, fit population parameter for the elimination rate; k12_pop, k21_pop, k13_pop, k31_pop, fit population parameters for the rates into/out of compartments; V_pop, fixed population parameter for volume of distribution; ωV, ωk, ωk12, ωk21, ωk13, ωk31, standard deviations of random effects for population parameters.
|
2023-10-01T06:17:40.717Z
|
2023-09-29T00:00:00.000
|
{
"year": 2023,
"sha1": "95d8aa1b4e7d741e19b112d8f9523545e4c20e17",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1002/phar.2882",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "071bab5de4a48927bfc4d2dfc651e1f374124fa9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
10839608
|
pes2o/s2orc
|
v3-fos-license
|
Aging in Place in Late Life: Theory, Methodology, and Intervention
This special issue focuses on aging in place in late life. Aging in place is about being able to continue living in one's own home or neighborhood and to adapt to changing needs and conditions. It is of high concern due to the increasing number of old and very old people in all societies and challenges researchers, practitioners, and policy makers in many societal and scientific areas and disciplines. We invited authors to contribute original research papers as well as conceptually driven review papers that would stimulate the continuing efforts to understand the different aspects of aging in place in late life. The papers that were submitted came from very diverse disciplines, such as sociology, psychology, occupational therapy, nursing, architecture, public planning, and social work. Given the number and diversity of papers submitted, we can conclude that aging in place is an important concern throughout the world and that different kinds of measures are taken to come up with local, national, and international solutions that enhance aging in place. It remains a very complex issue that needs and deserves to be investigated from many different perspectives and assessed by means of different methodological origin, covering qualitative and quantitative measures, as well as mixed-method approaches. Subsequently, the selection of papers presented in this issue only sheds light on some aspects of sociophysical person-environment exchange as people age, contributing to the ongoing discussion in the field of environmental gerontology.
Vasunilashorn et al. present a review study targeting the concept of aging in place as a research topic whose time has come. They found an increasing proportion of scientific papers over time, in particular those focusing on policy matters and the use of technology to support aging in place. They concluded that aging in place is far from a one-size-fits-all issue but rather something that differs across populations due to, for example, culture, demographics, and legal systems.
The perspectives of the older persons themselves on social relationships and connectedness, social exclusion and inclusion, and the impact of the neighborhood were targeted in the following studies. By way of qualitative interviews, in the study by Emlet et al., older people were asked about their perception of social connectedness, how society can help with life transitions to support aging in place, and what kinds of difficulties they perceived in the home and neighborhood. Though different in conceptual framing and method, similar topics were emphasized by Yen et al., as well as Burns et al. The studies revealed that older people staying in the same neighborhood may experience strangeness, social exclusion, and economic exclusion and insecurity due to gentrification, and may have few positive social ties in the neighborhood. They had a strong drive to stay active and to have meaningful social interactions with others, and they also wanted to contribute to society. However, they experienced considerable structural barriers, for example, limited access to transportation and other services in the neighborhood, that made it difficult to stay active and connected to society. Continuing on the same theme, a survey paper by Wu et al. investigated social isolation among older people in Singapore, finding that the strongest predictors were living alone or living with children. Also pointing towards the importance of community and social processes for aging in place, the next paper, by Galinsky et al., developed and tested a new measure of collective efficacy feasible for use among older people. Collective efficacy refers to social processes at the level of person-neighborhood interactions, social cohesion, and informal social control, all known to be important for well-being in old age.
In contrast, indoor behavior may include various forms of person-environment relationships of more recent scientific interest. For instance, older adults with hoarding behaviors are often at risk of being evicted from their homes because they constitute a risk to other tenants' safety and security in the housing. For example, the risk of fire increases, as do the sanitary risks of having a cluttered home. Thus, as Whitfield et al. pointed out in their paper, this group of people is at risk of being marginalized and of experiencing rapidly declining health and well-being. The authors explored a collaborative community planning approach for finding solutions that could enhance the possibilities for aging in place. They found that, with structured collaboration between different actors in the communities, the professionals gained access to expertise from other staff and that such knowledge benefitted community planning at large. The older people gained insight into their hoarding behavior, and they perceived that this approach fostered empowerment and minimized loneliness and isolation.
Yang and Sanford investigated the relationships between the environment, activity performance at home, and community participation, and their potential for aging in place. Comparing older people with and without mobility limitations, they found that persons with mobility limitations experienced more environmental barriers in the home and the community than those without. They also found that environmental barriers in the home and the community explained travel and community participation among those with limited mobility. They reasoned that reducing environmental barriers in the home saves energy, so that the older person can be more active in the community.
The number of persons experiencing dementia "in place" is rising dramatically with the increasing age of the population. Their problems pose challenges to themselves but also to their close relatives and to society. Another study on aging in place with dementia, by Beard et al., focused on couples where one partner had been diagnosed with dementia. In in-depth interviews, they expressed that they desired to go on as before and not to let the problems take over their lives. They strived to remain a couple and to invest as much energy as possible into a life where they worked together, developing a "joint career." Investigating the management of dementia home care resources by way of an ethnographic design, Ward-Griffin et al. found that care resource allocation relied heavily on family caregiving and that formal resources were used as a supplement, most often when the family situation was becoming serious. Family caregivers and recipients found the care system difficult to navigate and lacking flexibility for acute needs.
One of many interventional approaches to support aging in place in late life is to offer preventive home visits to older people living in the community, mostly above a certain age.
In some countries, it is mandatory for the municipalities to organize and conduct preventive home visits. The aim of the visit is to inform and identify current or potential risks to health, activity, and participation to be able to intervene before the problems occur. Different home visit protocols have been developed and applied in practice; however, the vast majority of them are not based on current evidence. In their study, Löfqvist et al. described the development and pilot testing of an evidence-based protocol for preventive home visits in Sweden. By way of reviewing scientific papers as well as conducting focus group interviews with older people, they identified key aspects important to include in the protocol. The protocol was then applied and tested for feasibility.
Finally, Jutkowitz et al. investigated post hoc the cost-effectiveness of a home-based intervention targeting vulnerable older adults. The outcome was defined as life years saved. In the intervention group, the persons lived significantly longer, at additional cost for the intervention. Even though one can assume that the intervention group may also be healthier and consume fewer health care resources, this remains to be investigated. To advance services and policies that support aging in place, economic analyses of programs are important. In this respect, the health economic approach used in the study offers a preliminary understanding of the costs of a highly effective intervention.
The variety in focus, theory, and methodology among the papers in this issue is a pleasing sign of the interest and effort being applied to aging in place issues by researchers and practitioners in diverse fields. Together and separately, the papers have the potential to influence the societal debate concerning aging issues across the world and to inform decision makers in various fields about necessary measures to take in order to support aging in place in later life. We hope that the readers of this issue will find the papers interesting and inspiring for further research and debate.
|
2017-06-04T09:33:55.090Z
|
2012-04-26T00:00:00.000
|
{
"year": 2012,
"sha1": "8150b82d3f44d69ed02f9b8f792f28fb49844d9b",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jar/2012/547562.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eaca501807c21ad885c3d3aa96c279f55ba55842",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Sociology",
"Medicine"
]
}
|
18207568
|
pes2o/s2orc
|
v3-fos-license
|
On the Luminosity Function of Early--Type Galaxies
In a recent paper Loveday et al. (1992) have presented new results on the luminosity function for a sample of galaxies with $b_J \le 17.15$. After having morphologically classified each galaxy (early--type, late--type, merged or uncertain), they have estimated the parameters of a Schechter luminosity function for early-- and late--type galaxies. However, in their sample there is a bias against identifying early--type galaxies at large distances and/or faint magnitudes: in fact, many of the early--type galaxies at faint magnitudes have probably been classified as ``uncertain". As discussed in Loveday et al., the existence of such a bias is indicated by the fact that for these galaxies $<V/V_{max}> = 0.32$. In this paper we show, both theoretically and through the use of simulated samples, that this incompleteness strongly biases the derived parameters of the luminosity function for early--type galaxies. If no correction for such incompleteness is applied to the data (as done by Loveday et al.), one obtains a flatter slope $\alpha$ and a brighter $M^*$ with respect to the real parameters.
Introduction
An accurate knowledge of the optical luminosity function of galaxies is required for many applications in cosmology. For instance, it is essential in interpreting galaxy number counts and in analyzing the spatial distribution of galaxies from redshift surveys; in addition, the shape of this function is of theoretical interest as it may provide constraints on models of galaxy formation.
An interesting question about the luminosity function of galaxies concerns its universality: indeed, Binggeli, Sandage & Tammann (1988) have shown that the luminosity function depends on the morphological type, particularly at the faint end. On the other hand, it has been demonstrated that the mix of morphological types is closely related to the local matter density (Dressler 1980). The accurate knowledge of the luminosity function for each morphological type is of great interest also for the models of number-magnitude counts. These models strongly depend on the morphological mix and therefore need the knowledge not only of the fraction of the various galaxy types but also of their K-corrections and luminosity functions. Loveday et al. (1992) have recently presented new results on the luminosity function for a sample of galaxies with b J ≤ 17.15. After having morphologically classified each galaxy (early-type, late-type, merged or uncertain), they have estimated the parameters of a Schechter luminosity function for early-and late-type galaxies, using the STY parametric maximum likelihood method (Sandage, Tammann & Yahil 1979). While for late-type galaxies their parameters are in reasonable agreement with those derived from other samples (see f.i. Efstathiou, Ellis & Peterson 1988), the parameters for early-type galaxies are not consistent with previous determinations.
As mentioned by Loveday et al., in their sample there is a bias against identifying early-type galaxies at large distances and/or faint magnitudes: in fact, many of the early-type galaxies at faint magnitudes have probably been classified as "uncertain", and therefore have not been used in computing the luminosity function. The existence of such a bias is demonstrated by the fact that for these galaxies $<V/V_{max}> = 0.32$. The same bias appears not to be present in the classification of the late-type galaxies, for which $<V/V_{max}> = 0.47$ (see Table 1 in Loveday et al. 1992). In this paper we show, both theoretically and through the use of simulated samples, that this incompleteness strongly biases the derived parameters of the luminosity function for early-type galaxies.
In Sect. 2 we demonstrate that the classification incompleteness biases the results of the STY method and in Sect. 3 we estimate the amount of this bias through simulations.
The luminosity function of galaxies
The luminosity function of galaxies is well represented by a Schechter (1976) form

$$\phi(L)\,dL = \phi^{*}\left(\frac{L}{L^{*}}\right)^{\alpha} e^{-L/L^{*}}\, d\!\left(\frac{L}{L^{*}}\right), \qquad (1)$$

where $\alpha$ and $L^{*}$ are parameters referring to the shape of the function and $\phi^{*}$ contains the information about the normalization; these parameters have to be determined from the data.
Many different methods have been used in the past years to compute the parameters of the galaxy luminosity function. Recently, however, the STY method (Sandage et al. 1979) has been the most widely used, and it has been shown that this estimator is unbiased with respect to density inhomogeneities (see f.i. Efstathiou et al. 1988). The basic idea of this method is to compute the estimator of the quantity $p_i$, the probability of seeing a galaxy of luminosity $L_i$ at redshift $z_i$:

$$p_i = \frac{\phi(L_i)}{\int_{L_{min}(z_i)}^{\infty}\phi(L)\,dL}, \qquad (2)$$

where $L_{min}(z_i)$ is the minimum luminosity observable at redshift $z_i$ in a magnitude-limited sample.
The best parameters $\alpha$ and $L^{*}$ of the luminosity function are then determined by maximizing the likelihood function $\mathcal{L}(\alpha, L^{*})$, which is the product over all the galaxies of the individual probabilities $p_i$. This corresponds to minimizing the function

$$S(\alpha, L^{*}) = -2\ln \mathcal{L}(\alpha, L^{*}) = -2\sum_{i=1}^{N}\left[\alpha\ln\!\left(\frac{L_i}{L^{*}}\right) - \frac{L_i}{L^{*}} - \ln L^{*} - \ln\Gamma\!\left(\alpha+1, \frac{L_{min}(z_i)}{L^{*}}\right)\right], \qquad (3)$$

where $\Gamma$ is the incomplete Euler gamma function and $N$ is the total number of galaxies in the sample.
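A compact numerical sketch of this minimization with SciPy is given below. It assumes $\alpha > -1$ so that the incomplete gamma function $\Gamma(\alpha+1, x)$ can be evaluated via SciPy's regularized form; the luminosity arrays and starting values are illustrative, not the values used by the authors.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma, gammaincc

def sty_neg2_loglike(params, L, L_min):
    """Eq. (3): S(alpha, L*) = -2 sum_i ln p_i for a Schechter function.
    L and L_min are arrays of galaxy luminosities and of the minimum
    luminosity observable at each galaxy's redshift. Valid for alpha > -1,
    where Gamma(alpha+1, x) = gammaincc(alpha+1, x) * gamma(alpha+1)."""
    alpha, L_star = params
    x = L / L_star
    ln_gamma_inc = np.log(gammaincc(alpha + 1.0, L_min / L_star)
                          * gamma(alpha + 1.0))
    ln_p = alpha * np.log(x) - x - np.log(L_star) - ln_gamma_inc
    return -2.0 * np.sum(ln_p)

# Illustrative fit, with L and L_min in arbitrary luminosity units:
# result = minimize(sty_neg2_loglike, x0=[-0.5, 1.0], args=(L, L_min))
# alpha_fit, L_star_fit = result.x
```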
Eq. (3) is correct only for a complete, unbiased sample in which all galaxies with $m < m_{lim}$ are members of the sample, or in which all galaxies with $m < m_{lim}$ have the same probability of being members of the sample (as, for example, in a redshift survey with 1/n sampling). In other cases, in which each galaxy of the sample has a different weight $w_i$, which may be a function of an intrinsic property of the galaxy (f.i. the distance, the absolute or apparent magnitude, the diameter, etc.), eq. (2) is not valid anymore. If we define the weight $w_i$ as the inverse of the probability that the $i$-th galaxy has of being included in the sample, eqs. (2) and (3) have to be modified in:

$$p_i = \left[\frac{\phi(L_i)}{\int_{L_{min}(z_i)}^{\infty}\phi(L)\,dL}\right]^{w_i} \qquad (4)$$

and

$$S(\alpha, L^{*}) = -2\sum_{i=1}^{N} w_i\left[\alpha\ln\!\left(\frac{L_i}{L^{*}}\right) - \frac{L_i}{L^{*}} - \ln L^{*} - \ln\Gamma\!\left(\alpha+1, \frac{L_{min}(z_i)}{L^{*}}\right)\right]. \qquad (5)$$

Loveday et al. (1992) have used eq. (2) to compute the luminosity function for the galaxies in their sample, both when they considered all galaxies and when they divided the galaxies into sub-groups as a function of the morphological type. Since, as mentioned in the Introduction, their morphological classification of early-type galaxies is biased at faint apparent magnitudes, a weight $w_i(m_i)$ should be associated with each galaxy, and the use of eq. (2) for determining the luminosity function of early-type galaxies is not correct anymore. In the following section we will quantify, through simulated samples, the differences between the results obtained through the use of eq. (3) and eq. (5).
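Under the same assumptions (and imports) as the previous sketch, the weighted form of eq. (5) differs only in the per-galaxy weights $w_i$:

```python
def sty_weighted_neg2_loglike(params, L, L_min, w):
    """Eq. (5): weighted analogue of eq. (3), with w_i the inverse of each
    galaxy's probability of being included in the sample."""
    alpha, L_star = params
    x = L / L_star
    ln_gamma_inc = np.log(gammaincc(alpha + 1.0, L_min / L_star)
                          * gamma(alpha + 1.0))
    ln_p = alpha * np.log(x) - x - np.log(L_star) - ln_gamma_inc
    return -2.0 * np.sum(w * ln_p)
```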
Results
In order to estimate in a quantitative way the error introduced by applying eq. (3) instead of eq. (5) to an incomplete sample, we have used two types of random simulations. In the first type of simulation, for each apparent magnitude m we have randomly eliminated from the sample a fraction f(m) of galaxies.
We assumed that f(m) = 0 for m < m_o, i.e. for galaxies brighter than m_o the sample is complete. We have chosen m_o = 16, assuming that for galaxies brighter than 16th magnitude the morphological classification is relatively easy, and we have used various values for a, corresponding to different values of $<V/V_{max}>$ and, therefore, to different levels of incompleteness. Then we have computed the parameters of the luminosity function for these samples using both eq. (3) and eq. (5). Table 1 lists the derived parameters for three representative cases: column (1) gives the adopted incompleteness function, column (2) the $<V/V_{max}>$ of the sample, and column (3) the number of galaxies; the parameters α and M* derived from eq. (3) and eq. (5) are listed in columns (4) and (5), and (6) and (7), respectively. From this table it is clear that, as a increases, the use of eq. (3) produces flatter slopes and brighter M* with respect to the real parameters.
In all these cases the derived parameters are not compatible with the real ones, as shown by the confidence ellipses in Fig. 1. But if we use eq. (5), which takes into account the incompleteness function, we obtain "corrected" parameters in very good agreement with the real ones. The results of the second type of simulations are listed in Table 2, whose columns have the same meaning as in Table 1, except that in this case all the parameters are the mean of the values derived for each catalogue.

Fig. 1. Confidence ellipses at the 1σ level for the parameters listed in Table 1, referred to the complete and the three incomplete samples of case 1). The parameters derived for the incomplete samples are clearly not consistent with the real ones.
Child development and nutritional status in 12–59 months of age in resource limited setting of Ethiopia
Background The early years of life are a period of maximal growth and development of the human brain. The development of a young child is influenced by the child's biological endowment and health, the child's nutritional status, and the relationships with primary caregivers, family, and support systems in the community. This study aimed to assess childhood development in relation to nutritional status. Method A community-based cross-sectional study was employed. A multi-stage systematic random sampling technique was used to select 626 children aged 12-59 months, paired with their mothers/caregivers, in Wolaita district in 2015. Child development was assessed using the third edition of the Ages and Stages Questionnaire. Height and weight were measured by trained data collectors, and the WHO Anthro version 3.2.2 software was used to convert the nutritional data into indices. Data were entered into Epi-Info version 3.3.5, then exported to and analyzed with STATA version 14. Correlation and multiple logistic regression were used. Result The proportion of children at high risk of developmental problems was 19.0%, with 95% CI (16.06%, 22.3%), distributed as communication 5.8%, gross motor 6.1%, fine motor 4.0%, personal-social 8.8%, and problem solving 4.1%. One-third (34.1%) of the study participants were stunted, while 6.9% and 11.9% of them were wasted and underweight, respectively. Weight-for-age (WAZ) and height-for-age Z-scores were positively correlated with all five domains of development, i.e., with communication, gross motor, fine motor, personal-social, and problem solving (r = 0.1-0.23; p < 0.0001, and r = 0.131-0.249; p < 0.0001, respectively). Conclusion and recommendation Overall child development was directly related to nutritional status, so available resources should be directed at reducing child undernutrition. Further assessment of childhood development is necessary.
Introduction
The early childhood period is the most important developmental phase in life. The term "child development" refers to the advancement of the child in all areas of human functioning: social and emotional, cognitive, communication, and movement [1,2]. Child development is a maturational process resulting in an ordered progression of perceptual, motor, cognitive, language, socio-emotional, and self-regulation skills. Multiple factors influence the acquisition of competencies and skills, including health, nutrition, security and safety, responsive caregiving, and early learning [3].
Childhood undernutrition contributes to childhood morbidity, mortality, impaired intellectual development, suboptimal adult work capacity, and increased risk of diseases in adulthood; hence, it is one of the major global health problems [4,5]. It can exist in the form of wasting (acute malnutrition, weight-for-height Z-score), stunting (chronic malnutrition, height-for-age Z-score), or underweight (weight-for-age Z-score) [4,6].
The 2016 Ethiopian Demographic and Health Survey (EDHS) showed that the nutritional status of children has improved over the past 15 years. In the 2016 EDHS, stunting was 38% across Ethiopia, while severe stunting was 18%. Similarly, 24% of children under age five were underweight and 7% were severely underweight. However, there was no change in the prevalence of wasting, which remained at about 10%, with 2% severely wasted [7]. The Government of Ethiopia has continued its commitment to nutrition by developing the second phase of the National Nutrition Program (NNP II, 2016-2020) [8].
Despite long experience in fighting childhood illness and mortality, health care providers in low- and middle-income countries face new challenges in promoting child development. Estimates suggest that over 200 million children in developing countries are not reaching their full developmental potential [9]. Developmental difficulties during early childhood are increasingly recognized in low- and middle-income countries as important contributors to morbidity in children and adults. The development of a child's cognitive, social-emotional, language, and movement functions is influenced by the biological endowment and health of the child, as well as by the relationships with the primary caregivers, family, and support systems in the community. The early years of life are a period of maximal growth and development of the human brain and are therefore extremely important in determining whether the person reaches his or her full potential [1]. Hence, this study was designed to determine the relationship between childhood development and nutritional status; the results obtained may be used by policy makers and program managers in different parts of the country.
Study design and setting
A community-based cross-sectional study was conducted among children residing in Wolaita zone from May to June 30, 2015. Wolaita zone is found in the SNNPR region, covering an area of 4471.3 km². It is located 380 km south of Addis Ababa and 157 km from Hawassa town. For administrative purposes, it is divided into twelve woredas and three administrative cities. The total population of the zone is estimated at about 1,721,339, with a density of 385 inhabitants per square kilometer. Wolaita Sodo town is the administrative center of the zone.
Sample size determination and sampling procedure
All children 12-59 months of age residing in Wolaita zone were the source population, whereas all children residing in the selected kebeles were considered the study population. The sample size was determined by the single population proportion formula, considering a 44.1% prevalence of stunting in SNNPR from EDHS 2011 [10], a margin of error of 5%, a confidence level of 95%, a design effect of 1.5, and a 10% non-response rate; the final sample size was 626. Multi-stage systematic sampling was used to select the study participants. First, 3 woredas and 2 town administrations were selected from the 12 districts and 3 town administrations. Boloso Sore, Sodo Zuriya, and Offa woredas, Areka town, and Sodo town were selected by the lottery method. The sample size was then allocated in proportion to the under-five population in each woreda. One urban and three rural kebeles were selected by the lottery method. Households with children 12-59 months of age were selected using systematic sampling, taking the sampling frame from health extension workers.
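The sample-size arithmetic above can be reproduced directly; the sketch below is our own illustration, and the convention of rounding up at each stage is an assumption:

```python
import math

z = 1.96        # 95% confidence level
p = 0.441       # stunting prevalence in SNNPR, EDHS 2011
e = 0.05        # margin of error
deff = 1.5      # design effect
nonresp = 0.10  # anticipated non-response

n0 = math.ceil(z**2 * p * (1 - p) / e**2)   # single population proportion formula
n = math.ceil(math.ceil(n0 * deff) * (1 + nonresp))
print(n0, n)    # -> 379, 626
```

With these inputs the calculation reproduces the final sample size of 626 reported above.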
Variables
Dependent variable: the childhood development score of children, measured with the Ages and Stages Questionnaire, third edition (ASQ-3).
The primary independent variables were the child's nutritional indices (weight-for-height, height-for-age, and weight-for-age). The other independent variables were residence, formal education, wealth status, age of the mother, immunization of the child, birth order of the child, sex of the child, age of the child, initiation of complementary feeding, dietary diversity score, meal frequency score, place of delivery, term of delivery, and others. Formal education of the mother was categorized as yes if a woman had attended any governmental formal education. Wealth status was defined as high, medium, or low (poor) based on principal component analysis. Ever breastfed, food frequency, term of delivery, dietary diversity score, and initiation of complementary feeding were defined as per the literature [11,12].
Measurement and data collection procedure
A pre-tested, interviewer-administered questionnaire was used for socio-demographic, household economic status, nutritional, maternal, child health-related, and household food access variables. This pre-tested questionnaire was developed after reviewing the literature [10,13,14]. Child development was assessed using the third edition of the Ages and Stages Questionnaire (ASQ-3) of mental development. The ASQ-3 has five subscales: communication, gross motor, fine motor, problem solving, and personal-social. Each item of the questionnaire was answered as "yes" (scored 10), "sometimes" (scored 5), or "not at all" (scored 0) [15]. Each form contains 30 items, six for each subscale, written in simple language. Some questions are specific to certain age groups, while other items are used for a wider age range and are repeated in the different age-specific questionnaires. The Ages and Stages Questionnaire has a validity of 0.83-0.88, reliability of 0.90-0.94, sensitivity of 38-91%, and specificity of 79.3-91% [16-18]. Child development was measured at the child's dwelling, as recommended for the ASQ-3. Each domain was classified into three categories (high risk for developmental delay, needs monitoring, and well developed) for each age category based on the ASQ-3. Finally, child development was categorized as developmental delay or well developed based on the recommendation.
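The scoring rule just described (six items per domain, each answered yes/sometimes/not at all and scored 10/5/0, then banded into three categories) can be sketched as follows; the cutoff values in the example are hypothetical placeholders, since the real ASQ-3 cutoffs are specific to each age interval:

```python
# ASQ-3 domain scoring sketch; the cutoffs below are hypothetical placeholders.
ITEM_SCORES = {"yes": 10, "sometimes": 5, "not at all": 0}

def score_domain(responses):
    """Sum the scores of the six items of one ASQ-3 domain."""
    assert len(responses) == 6
    return sum(ITEM_SCORES[r] for r in responses)

def classify(score, risk_cutoff, monitoring_cutoff):
    """Band a domain score into the three ASQ-3 categories."""
    if score < risk_cutoff:
        return "high risk for developmental delay"
    if score < monitoring_cutoff:
        return "needs monitoring"
    return "well developed"

answers = ["yes", "yes", "sometimes", "yes", "not at all", "yes"]
s = score_domain(answers)                                        # -> 45
print(s, classify(s, risk_cutoff=25.0, monitoring_cutoff=40.0))  # well developed
```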
Each child's medical status was examined at their dwelling by the supervisors and data collectors. A pre-test was conducted on 5% of the total sample size in one of the town administrations and the surrounding rural area, which have basic socio-economic characteristics similar to the study kebeles, and the necessary corrections were made. Data were collected from the caregivers or mothers of the children by ten BSc-holding nurses who could communicate well in the local language.
Anthropometric data were taken by the supervisors. There were four supervisors, each with a master's degree in public health and a health background. Anthropometric data were collected following the WHO standards. The children's dietary frequency score and dietary diversity were assessed using the last 24-h recall method. The dietary diversity score was assessed based on the IYCF recommendation among 7 food categories [11].
Data collectors and supervisors were trained for 3 days, and regular supervision with practical sessions for height and weight measurements was carried out. The technical error of measurement (TEM) was computed during training. For this, an expert took two weight and height measurements of ten children, and the supervisors each measured all ten children twice. The data were then entered into and computed with the ENA SMART software, which confirmed that the results generated were acceptable.
Training on the Ages and Stages Questionnaire was given by a psychiatrist who is knowledgeable about and experienced with the instrument, and by the principal investigators.
In addition, the data were checked daily for completeness and consistency. Moreover, high emphasis was placed on designing the data collection instruments for simplicity and reproducibility. Weights and heights were measured twice, and the mean values were used for the analysis. Standardization of the anthropometric measurements was conducted to determine whether the data collectors had good precision and accuracy; the precision and accuracy of most of the enumerators were acceptable.
Data management and analysis
Pre-coded data were entered into Epi Info version 3.5.3, and the WHO Anthro software was used to convert the nutritional data into Z-scores of the indices using the new WHO growth standard. Children whose height-for-age, weight-for-height, and weight-for-age Z-scores were < −2 SD from the median of the reference population were considered stunted, wasted, and underweight, respectively. The data were then exported to the STATA software version 14 for processing and analysis. Principal component analysis of household asset possession was used to construct a wealth index as a proxy measure of household socio-economic status. Household socio-economic status was then divided into terciles (rich, medium, and poor). The assumptions of principal component analysis were checked. The relationship between childhood development and nutritional status was assessed with the Pearson correlation coefficient and its P value. Multiple logistic regression was used to assess factors associated with child nutrition and mental development, and a P value of less than 0.05 was considered significant.
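As an illustration of the wealth-index construction described above, the following sketch applies principal component analysis to household asset indicators and splits the first-component scores into terciles; the asset list and data are hypothetical, not the study's actual variables:

```python
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical binary asset indicators per household (1 = owns/has access).
assets = pd.DataFrame({
    "radio":       [1, 0, 1, 1, 0, 0],
    "electricity": [1, 0, 1, 0, 0, 1],
    "latrine":     [1, 1, 1, 0, 0, 1],
    "iron_roof":   [1, 0, 0, 0, 0, 1],
})

# Wealth score = first principal component of the standardized indicators.
# (The sign of the component may need flipping so that higher = wealthier.)
X = (assets - assets.mean()) / assets.std()
score = PCA(n_components=1).fit_transform(X).ravel()

# Tercile split into poor / medium / rich, as in the analysis above.
wealth = pd.qcut(score, q=3, labels=["poor", "medium", "rich"])
print(pd.DataFrame({"score": score, "wealth": wealth}))
```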
Result
General characteristics of the population
A total of 605 (96.8%) children with their mothers/caregivers were interviewed. Of the total respondents, 413 (68.26%) were residents of rural kebeles. Nearly 91% of the mothers were married, and 69.26% (419) of the mothers had attended formal education. The mean age of the mothers was 27.25 years (SD 6.025), with a minimum of 15 years and a maximum of 50 years. On average, 5 people lived in each household (SD 1.5), and 46% of the children were living in households with a total family size of less than 4 (Table 1).
Of the total children, 307 (50.7%) were males. The mean age of the children was 33.87 months (SD 13.9 months). More than half of the children were toddlers. Thirty-seven percent of the children were their mothers' first child. Almost all children (99.3%) were born at term, and 60.5% of the mothers delivered at a government health facility. Of the total children, 95.5% were fully immunized. Twenty-three percent of the children had been sick in the 2 weeks before the survey (Table 2).
Nutritional status and dietary practices of the children
The dietary frequency score from the 24-h recall method showed that nearly three-quarters (72.4%) of the children were at or above the minimum recommendation. Fifty-nine percent of the children had a dietary diversity score of fewer than 4 food types (Table 2).
Child developmental status
The mean total ASQ-3 score was 231.23 (SD 61.54), with a possible range from zero to 300. The mean ASQ-3 score for each domain ranged from 39.05 to 53.28 (SD 13.19-19.28), each with a possible range from zero to 60. This study revealed that 19.0% of the children were at high risk of developmental problems (95% CI: 16.06%, 22.3%), distributed as communication 5.8%, gross motor 6.1%, fine motor 4.0%, personal-social 8.8%, and problem solving 4.1%. In addition, 14.7% of children in the communication domain, 9.6% in gross motor, 12.6% in fine motor, 17.9% in personal-social, and 6.9% in problem solving needed monitoring. The rest were well developed according to their age (Table 4).
Relationship between nutritional status and child development
The height-for-age (stunting) Z-score had a significant association with communication, gross motor, fine motor, personal-social, and problem solving. The weight-for-age (underweight) Z-score also had a significant association with communication, gross motor, fine motor, personal-social, and problem solving (Table 5).
Factors affecting child cognitive development
First, childhood development was categorized as risk for developmental delay versus normal development. In bivariate logistic regression, being stunted, being underweight, dietary diversity score, mother's age at birth, frequency of feeding, immunization status, starting month of complementary feeding, birth order of the child, residence, and educational status of the mother were significantly associated with the developmental status of the child. Multiple logistic regression analysis showed that stunting, the starting month of complementary feeding, not meeting the minimum dietary diversity score, and the birth order of the child were associated with developmental delay. For each one-unit increase in birth order, the odds of being developmentally delayed increased 1.2 times (95% CI: 1.01, 1.34). Stunted children were 2.2 times more likely to be developmentally delayed (95% CI: 1.3-3.5). Children who started complementary feeding below or above 6 months of age had higher odds of developmental delay compared with those who started at the age of 6 months, with AOR 3.73 (95% CI: 1.5-8.8) and 3.24 (95% CI: 1.87-5.62), respectively. Children with a minimum dietary diversity score below 4 were 2.1 times more likely to be developmentally delayed (95% CI: 1.3-3.4) (Table 6).
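Adjusted odds ratios such as those quoted above are obtained by exponentiating the coefficients of a multiple logistic regression; a minimal sketch with simulated data and hypothetical variable names (not the study's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(0)
# Simulated analysis frame: delay = 1 indicates developmental delay.
df = pd.DataFrame({
    "delay":       np.random.binomial(1, 0.19, 300),
    "stunted":     np.random.binomial(1, 0.34, 300),
    "birth_order": np.random.randint(1, 7, 300),
    "low_dds":     np.random.binomial(1, 0.59, 300),
})

model = smf.logit("delay ~ stunted + birth_order + low_dds", data=df).fit(disp=0)
aor = np.exp(model.params)       # adjusted odds ratios
ci = np.exp(model.conf_int())    # 95% CIs on the odds-ratio scale
print(pd.concat([aor, ci], axis=1))
```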
Discussion
In this survey, one-third (34.1%) of the study participants were stunted, while 6.9% and 11.9% of them were wasted and underweight, respectively. According to the WHO's classification, the prevalence of stunting in the study area was very high. It is almost similar to the EDHS 2016 national and regional prevalences, which were 38% and 38.6% respectively, but lower than the Amhara regional prevalence (46%) [7]. Compared with other small pocket studies in different parts of Ethiopia, the prevalence of stunting in the study area was found to be low. For example, the prevalence in Haromaya district [14], Bule Hora district [20], Dembia district in northwest Ethiopia [21], and Lalibela in northeast Ethiopia [22] was 45.8%, 47.6%, 46%, and 47.3% respectively, while it is almost in line with the finding from Hossana town (35.4%) [23]. This might be due to differences in the methods used, sample size variation, and variation in agro-ecological patterns and feeding practices.
In this survey, 19.0% of the children were at high risk of developmental problems (95% CI: 16.06%, 22.3%), distributed as communication 5.8%, gross motor 6.1%, fine motor 4.0%, personal-social 8.8%, and problem solving 4.1%. This finding was lower than the Lancet 2016 report for the world, in which 43 percent of children under 5 years of age (an estimated 250 million) living in low- and middle-income countries were at risk of suboptimal development [3]. This finding was in line with the finding from Iran, which was conducted among children 4-60 months of age [24].
Height-for-age and weight-for-age of the children had significant correlations with communication, gross motor, fine motor, personal-social, and problem solving. This finding is similar to the finding from Sidama, Ethiopia, where there was a significant difference in mean cognitive test scores between stunted and non-stunted children, and between underweight and normal-weight children [25]. However, in the multivariable logistic regression analysis, we found stunting to be the only nutritional determinant of child development. This is in line with studies conducted in Vietnam, Ethiopia, and Peru [26,27]. A study done in Tanzania found that the height-for-age Z-score was linearly associated with cognitive, communication, and motor development Z-scores [28].
The weight-for-age (underweight) Z-score had a significant association with communication, gross motor, fine motor, personal-social, and problem solving in the correlation analysis. However, underweight had no influence on development in the multivariable binary logistic regression.
According to this survey, the factors affecting childhood development were the birth order of the child, the time of initiation of complementary feeding, and the dietary diversity score. As birth order increased by 1, developmental delay increased by 20%. This is not similar to the study conducted in Vietnam, Ethiopia, and Peru [26], which might be because this study was done in a limited setting. The time of initiation of complementary feeding had a significant association with child development: starting complementary feeding below or above the age of 6 months had a significant effect on child development.
Children with a minimum dietary diversity score below 4 were 2.1 times more likely to be developmentally delayed. This is similar to the finding from Goba, Ethiopia [29].
The other important health parameters were significantly better compared with the national and regional estimates. Almost all children were fully vaccinated according to their age. These findings were above the national estimate [30].
A limitation of this study is its cross-sectional design, which measures exposure and outcome at the same time and therefore cannot show cause-and-effect associations; a follow-up study design would be better. We also measured mental development at a single time point, which might lead to bias; repeated measurements would be preferable.
Meta-analysis of factors influencing anterior knee pain after total knee arthroplasty
BACKGROUND Total knee arthroplasty (TKA) is a mature procedure recommended for correcting knee osteoarthritis deformity, relieving pain, and restoring normal biomechanics. Although TKA is a successful and cost-effective procedure, patient dissatisfaction is as high as 50%. Knee pain after TKA is a significant cause of patient dissatisfaction; the most common location for residual pain is the anterior region. Between 4% and 40% of patients have anterior knee pain (AKP). AIM To investigate the effect of various TKA procedures on postoperative AKP. METHODS We searched PubMed, EMBASE, and Cochrane from January 2000 to September 2022. Randomized controlled trials with one intervention in the experimental group and no corresponding intervention (or other interventions) in the control group were collected. Two researchers independently read the titles and abstracts of the studies, preliminarily screened the articles, and read the full texts in detail according to the selection criteria. Conflicts were resolved by consultation with a third researcher. Relevant data from the included studies were extracted and analyzed using Review Manager 5.4 software. RESULTS There were 25 randomized controlled trials; 13 were comparative studies with or without patellar resurfacing. The meta-analysis showed no significant difference between the experimental and control groups (P = 0.61). Six studies were comparative studies of circumpatellar denervation vs non-denervation, divided into three subgroups for meta-analysis. Two subgroup meta-analyses showed no significant difference between the experimental and control groups (P = 0.31, P = 0.50). One subgroup meta-analysis showed a significant difference between the experimental and control groups (P = 0.001). Two studies compared fixed-bearing and mobile-bearing TKA; the meta-analysis showed no significant difference between the experimental and control groups (P = 0.630). Two studies compared lateral retinacular release vs non-release; the meta-analysis showed a significant difference between the experimental and control groups (P = 0.002); two other studies compared other factors. CONCLUSION Patellar resurfacing, mobile-bearing TKA, and fixed-bearing TKA do not reduce the incidence of AKP. Lateral retinacular release can reduce AKP; however, whether circumpatellar denervation can reduce AKP is controversial.
INTRODUCTION
Knee osteoarthritis is a chronic joint disease characterized by articular cartilage degeneration and secondary hyperosteogeny [1]. The primary symptom is pain during knee joint weight-bearing and activity, which severely affects the quality of life. In the early stage, conservative treatment with medication is effective; however, in the middle and late stages (especially in the end stage), knee pain is severe, and the effective treatment is knee replacement [2,3]. Total knee arthroplasty (TKA) is a mature procedure recommended for correcting knee osteoarthritis deformity, relieving pain, and restoring normal biomechanics [4]. Patients enjoy excellent long-term survival [5-8]. Although TKA is a successful and cost-effective procedure, patient dissatisfaction is as high as 50%. Knee pain after TKA is a significant cause of patient dissatisfaction; the most common location for residual pain is the anterior region [9]. Between 4% and 40% of patients have anterior knee pain (AKP) [10-12]. In this review, we searched PubMed, EMBASE, and the Cochrane database for randomized controlled trials related to AKP after TKA to explore the effects of various TKA approaches on AKP.
Eligibility criteria and outcome definitions
Studies were selected based on the following inclusion criteria: (1) type of study: a randomized controlled trial; (2) subjects: patients undergoing TKA for the first time; (3) intervention: not limited; (4) control group: an intervention different from the experimental group, or no intervention; and (5) evaluation indicators: occurrence of AKP (incidence and pain degree). The exclusion criteria were as follows: patellar surgery, fracture history, high tibial osteotomy, no AKP, reviews or expert reports, cadaveric studies, model studies, and case reports.
Information sources and search strategy
PubMed, EMBASE, and the Cochrane Library were searched from January 2000 to September 2022. The keywords were "Total Knee Arthroplasty", "Anterior Knee Pain", and other related Medline search heading terms or expressions.
Study selection and data extraction
Two researchers independently read the titles and abstracts of the studies, preliminarily screened the articles, and read the full texts in detail according to the selection criteria. Conflicts were resolved by consultation with a third researcher. We retrieved 294 articles from the three databases. After reading the titles and abstracts, 67 articles were identified. After reading the full texts, articles without AKP were excluded and the controversies were resolved. Finally, 25 articles were included in this review. A flowchart of the studies considered for inclusion is shown in Figure 1.
Quality assessment
According to the Cochrane Risk of Bias tool, the risk of bias of each randomized controlled trial was graded as low, high, or unclear. The risk of bias assessments are shown in Figures 2 and 3.
Data synthesis and analysis
Data on study design, study population, interventions, and outcomes were extracted from the included articles' text, figures, and tables. Dichotomous outcomes were expressed as risk ratios with 95% confidence intervals (95%CIs), while continuous outcomes were expressed as mean or standardized mean differences with 95%CI. Heterogeneity was expressed as P and I². The value of I² ranges from 0% (complete consistency) to 100% (complete inconsistency). If the P value of the heterogeneity test was < 0.1 or I² > 50%, a random-effects model was used in place of the fixed-effect model.
Publication bias was tested using funnel plots. Forest plots were used to graphically present the results of individual studies and the respective pooled effect size estimates. All statistical analyses were performed using Review Manager version 5.4.
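The pooling described above can be sketched numerically; the example below performs inverse-variance fixed-effect pooling of log risk ratios and computes Cochran's Q and I², using hypothetical 2×2 counts rather than data from the included trials:

```python
import numpy as np

# Hypothetical per-study counts: (events_exp, n_exp, events_ctl, n_ctl).
studies = [(12, 60, 10, 58), (8, 45, 14, 47), (20, 90, 18, 88)]

log_rr = np.array([np.log((a / n1) / (c / n2)) for a, n1, c, n2 in studies])
var = np.array([1/a - 1/n1 + 1/c - 1/n2 for a, n1, c, n2 in studies])

w = 1.0 / var                            # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)  # fixed-effect pooled log RR
se = np.sqrt(1.0 / np.sum(w))

Q = np.sum(w * (log_rr - pooled) ** 2)   # Cochran's Q
I2 = 100.0 * max(0.0, (Q - (len(studies) - 1)) / Q) if Q > 0 else 0.0

lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"RR = {np.exp(pooled):.2f} (95%CI {lo:.2f}-{hi:.2f}), I2 = {I2:.0f}%")
# If I2 > 50% (or the Q test gives P < 0.1), switch to a random-effects model.
```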
Effect of patellar resurfacing on AKP
We included 13 studies on the effect of patellar replacement on AKP after TKA [4,13-24]. Ten reported the number of patients with AKP in each group, and the remaining three evaluated AKP using a visual analog scale (VAS) and the Hospital for Special Surgery patellar score; these three studies were not included in the meta-analysis. There were 1197 TKA patients in the ten studies, including 586 TKA patients with patellar resurfacing (121 with AKP) and 611 TKA patients without patellar resurfacing (100 with AKP). The basic information of the ten studies (Table 1) and the forest plot (Figure 4) and funnel plot (Figure 5) of the meta-analysis are as follows (I² = 0%, using the fixed-effect model; P = 0.13, suggesting that there was no significant difference between the two groups; the funnel plot was symmetrical, suggesting no publication bias).
Effect of circumpatellar denervation on AKP
Six studies [25-30] compared circumpatellar denervation with non-denervation in TKA. The patellofemoral Feller score (PFS) was used to evaluate postoperative AKP in two studies, the VAS was used in two studies, and the remaining two reported the number of cases of AKP in each group; therefore, they were divided into three subgroups for meta-analysis. The basic information of the six articles is presented in Tables 2 and 3.
PFS score subgroup
There were two studies [25,26], with 138 cases in the denervation group and 131 in the non-denervation group. The meta-analysis forest plot is shown in Figure 6A (I² = 66%, using the random-effects model; P = 0.31, suggesting no significant difference between the groups).
VAS score subgroup
There were two studies, with 85 patients in the denervation group and 84 in the non-denervation group [27,30]. The meta-analysis forest plot is shown in Figure 6B (I² = 34%, using the fixed-effect model; P = 0.001, suggesting that the difference between the groups was statistically significant).
Subgroup of the number of patients with AKP
There were two studies, with 213 patients in the denervation group and 213 in the non-denervation group [28,29]. The meta-analysis forest plot is shown in Figure 6C (I² = 90%, using the random-effects model; P = 0.50, suggesting no significant difference between the groups).
Effects of using fixed or mobile-bearing TKA on AKP
There were two studies comparing mobile-bearing and fixed-bearing designs, with 88 cases of fixed-bearing and 71 of mobile-bearing TKA [31,32]. The basic information of the studies (Table 4) and the forest plot of the meta-analysis (Figure 6D) are as follows (I² = 12%, using the fixed-effect model; P = 0.63, suggesting that there was no significant difference between the two groups).
Effect of lateral retinacular release on AKP
We included two comparative studies of lateral retinacular release versus non-release, with 135 cases in the release group and 130 in the non-release group [33,34]. The basic information of the two studies (Table 5) and the forest plot of the meta-analysis (Figure 6E) are as follows (I² = 0%, using the fixed-effect model; P = 0.002, suggesting that the difference between the two groups was statistically significant).
Effect of other factors on AKP
Yuan et al [35] reported differences in patellofemoral function, clinical outcomes, and radiographic parameters between the freehand and cutting-guide patellar resection techniques in patients undergoing TKA. The authors randomly assigned 100 patients to the freehand technique group or the cutting-guide technique group, with 50 patients in each group. Finally, 42 patients in the cutting-guide technique group and 44 patients in the freehand technique group were available for analysis. AKP occurred in 7.14% of the patients in the cutting-guide technique group and 9.09% in the freehand technique group, with no significant difference between the two groups. Fahmy et al [36] randomized 90 patients into an experimental group, with complete excision of the infrapatellar pad of fat (IPFP), and a control group, with IPFP preservation. At 6 months of follow-up, 10 knees in the IPFP preservation group and 14 knees in the excision group had AKP. The pain decreased during the follow-up period until the number of cases was almost equal at the final visit. There was no significant difference in AKP between the groups, and each group's mean VAS pain scores were comparable throughout the recorded follow-up period.
Effect of patellar resurfacing on AKP
Patellar resurfacing in TKA has long been controversial; some authors believe that patellar resurfacing can improve patient satisfaction, reduce postoperative AKP, and reduce the revision rate [37-40], while others hold the opposite view [41,42]. We analyzed 13 randomized controlled trials of patellar resurfacing versus non-resurfacing. Of these, 12 showed no significant difference in postoperative AKP between the groups. Wood et al [23] showed that postoperative AKP was lower in the patellar resurfacing group than in the non-resurfacing group. In that study, surgery was performed by one of six experienced surgeons or their trainees under their supervision, and the follow-up time varied substantially (36-79 months, mean 48 months). Different surgeons have different surgical preferences, and the postoperative results also show substantial differences. The patients were followed up for a minimum of 36 months and a maximum of 79 months, and the incidence of AKP and the severity of pain after TKA decrease with time; therefore, comparing results at 36 and 79 months is not appropriate. These reasons may explain the different results between Wood et al [23] and the other studies. Our meta-analysis showed no significant difference in the incidence of postoperative AKP between the patellar resurfacing group and the non-resurfacing group. Patellar resurfacing increases the operative time and blood loss. Furthermore, the patella in Asians is generally thin, leading to an increased risk of postoperative patellar fracture [41,42]. Therefore, we do not recommend patellar resurfacing in TKA.
Effect of circumpatellar denervation on AKP
The peripatellar soft tissue and retropatellar fat pad have been reported to be sources of AKP [43,44]. Immunohistochemical studies of the nerve distribution in this area have shown the presence of substance P nociceptive fibers in the peripatellar soft tissue [45]. Electrocautery disables these pain receptors and achieves desensitization or denervation of the anterior knee region; thus, postoperative AKP can be reduced [46,47]. In our review, six studies compared circumpatellar denervation and non-denervation in TKA. Because the indicators used to evaluate postoperative AKP were inconsistent, the meta-analysis was divided into three subgroups.
The PFS score subgroup showed no significant difference in AKP between the denervation and non-denervation groups, while the VAS score subgroup showed that denervation was superior to non-denervation. Because of the large incision of TKA, the peripatellar soft tissue and retropatellar fat pad are injured to a greater extent; therefore, achieving the surgical goal by performing only circumpatellar denervation is challenging. The heterogeneity among the six studies was considerable, the sample size was small, and the power of the meta-analysis was weak; therefore, more studies are needed.
Effects of fixed or mobile-bearing TKA on AKP
The theoretical advantage of mobile-bearing TKA is the ability to self-align and accommodate minor mismatches [32]. The design of mobile-bearing TKA could lead to a better range of motion during knee flexion activities [48]. Breugem et al [12] found that, over a one-year follow-up, the incidence of postoperative AKP with mobile-bearing TKA was lower than with fixed-bearing TKA. However, postoperative AKP tended to become the same over time [32], a result similar to those of other studies [49,50]. This review included two studies comparing fixed-bearing and mobile-bearing TKA, with follow-up times of 5.0 and 7.9 years, respectively. The meta-analysis showed no difference in the incidence of AKP between the groups. Therefore, the advantage of mobile-bearing TKA might decrease over time.
Effect of lateral retinacular release on AKP
Theoretically, proper lateral retinacular release improves patellar tracking and reduces patellofemoral contact pressure, factors that have been reported to be closely related to AKP [51,52]. In a prospective cohort study of 271 patients, Lee et al [51] found that patients who underwent patellar decompression had less AKP than those who did not. Wilson et al [52] found that patients with AKP had abnormal patellar tracking compared with patients without AKP. This review included two studies comparing lateral retinacular release and non-release in TKA. The meta-analysis showed that lateral retinacular release reduced AKP. No studies reported that lateral retinacular release produces adverse postoperative complications. Proper lateral retinacular release also increases the intraoperative field of vision, which is conducive to successful outcomes.
Effect of other factors on AKP
In patellar resection during TKA, a number of principles should be considered, including restoring patellar height, performing a symmetric resection, avoiding under-resection, and minimizing over-stuffing of the patellofemoral joint [53]. Reasonable patellar excision is more beneficial to the installation of patellar components and, at the same time, can reduce AKP, patellar fracture, and patellar injury [54,55]. This review included one study comparing the freehand and cutting-guide patellar resection techniques in TKA. In that prospective randomized controlled trial, no statistically significant difference was observed in the incidence of AKP between the two groups. Therefore, better knee function may be more related to basic principles, including excellent lower limb alignment, proper prosthetic placement, intact ligaments, and greater lower limb strength [35].
The IPFP is a piece of fat tissue located between the patellar ligament, the inferior end of the patella, and the proximal tibia. Anatomically, it is considered an intra-articular extrasynovial compartment that may support effective joint lubrication [56]. The need for sufficient surgical exposure often prompts many surgeons to remove it during surgery, although the effectiveness of its removal is debated and there is no complete agreement. In the study of Fahmy et al [36], the differences in postoperative AKP, range of motion, Oxford knee score, and clinical outcomes between excision and preservation of the infrapatellar fat pad were statistically insignificant. Therefore, surgeons should preserve the IPFP if conventional exposure can be achieved; otherwise, resection is preferred to improve exposure. The exact pathogenesis of AKP may be multifactorial. Laubach et al [57] concluded that quadriceps muscle strength, inlay thickness, and patella position might be of particular relevance in avoiding postsurgical AKP. The results of another study suggest that successful repair of the medial patellofemoral ligament after using a medial parapatellar approach in TKA could reduce the high rate of postoperative AKP [58]. Many other factors may be related to AKP after TKA [59-61]; due to the lack of randomized controlled trials exploring these factors, they were not included in the meta-analysis of this study.
Our meta-analysis had several strengths. First, it reached a different conclusion from those of two earlier meta-analyses [62,63]. In the study by Duan et al [62], the results showed that patellar resurfacing had a significant protective effect against AKP, with low heterogeneity and robust results. In our analysis, the incidence of AKP was not statistically different with or without patellar replacement in TKA. A meta-analysis conducted by Xie et al [63] concluded that patellar denervation could significantly relieve AKP during follow-up of up to 12 months, but not beyond 12 months. We found that the results of different assessment methods for AKP were different. Second, only randomized controlled trials were included in our study, so the results obtained are more accurate. Third, the studies we included were screened independently by two researchers according to the inclusion and exclusion criteria, and we used the Cochrane Risk of Bias tool to assess publication bias; these results indicated that publication bias was well controlled. This meta-analysis also had limitations. First, only a small number of trials were analyzed, since we only included randomized controlled trials. Second, there is no single definition of AKP, and distinguishing it from patellofemoral pain syndrome is difficult. Third, the studies included in the meta-analysis applied different techniques and diagnostic criteria to AKP, which could lead to performance bias. Given these limitations, more high-level research is still needed in the future.
CONCLUSION
This meta-analysis of the currently available evidence indicates that patellar resurfacing, mobile-bearing TKA, and fixed-bearing TKA do not relieve postoperative AKP after TKA. We do not recommend patellar replacement in TKA unless it is necessary. In evaluating the effect of patellar denervation in TKA, the results of different assessment methods for AKP differed; therefore, future high-level research is warranted for validation. Besides, lateral retinacular release in TKA is recommended because it is safe and results in good clinical outcomes in controlling AKP.
Research background
Knee osteoarthritis seriously affects the quality of life of the elderly. Total knee arthroplasty (TKA) is an effective treatment for end-stage osteoarthritis. Anterior knee pain (AKP) after TKA is the main cause of dissatisfaction in the elderly, so the management of AKP after total knee replacement is very important.
Figure 1 Flowchart of included studies.
Figure 2 Proportions in the methodological quality assessment.
Figure 5 Funnel plot for patellar resurfacing vs no resurfacing.
Figure 6 Forest plot. A: Forest plot for patellofemoral Feller score subgroups; B: Forest plot for the visual analog scale score subgroup; C: Forest plot for the subgroup of patients with anterior knee pain; D: Forest plot for using fixed or mobile-bearing total knee arthroplasty; E: Forest plot for lateral retinacular release vs non-release. 95%CI: 95% confidence interval.
Parallel needed reduction for pure interaction nets
Reducing interaction nets without any specific strategy benefits from constant time per step. On the other hand, a canonical reduction step for weak reduction to interface normal form is linear in the depth of terms. In this paper, we refine the weak interaction calculus to reveal the actual cost of its reduction. As a result, we obtain a notion of needed reduction that can be implemented in constant time per step, without allowing any free ports and without sacrificing parallelism.
Introduction
Previously, we successfully adapted the approach of token-passing nets [3] to optimal reduction [2] as well as to closed reduction. However, dissatisfied with the difficulties of adapting the token-passing approach and with having to leave the pure formalism of interaction nets by introducing a non-deterministic extension, we decided to consider an implementation of weak reduction to interface normal form [1].
Switching to weak reduction comes at a cost. First of all, a reduction step is no longer constant in time, but at least linear in the depth of terms. Second, weak reduction requires the notion of interface in its original interaction calculus variant, while we do not allow any free ports in our implementation of interaction nets. Moreover, we avoid any notion of a root of a net in order to preserve the option of implementing a distributed computation framework based entirely on interaction nets.
These considerations have led us to refine the interaction calculus for weak reduction. The resulting version of the interaction calculus is presented in this paper. We define a reduction that can be implemented in constant time per step, thus revealing the actual cost of weak reduction. In addition, our interaction calculus formalizes the notion of needed reduction of interaction nets without allowing free ports. Finally, the option of parallel evaluation has been preserved.
Definitions
A term is inductively defined as t ::= !α(t1, . . . , tn) | α(t1, . . . , tn) | x, where x is called a name, α ∈ Σ is an agent from a set Σ called the signature, and n = ar(α) ≥ 0 is the agent's arity. If a term t has the form !α(t1, . . . , tn), then we call t needed and denote it as !t. An interaction rule is α[v1, . . . , vm] ⋈ β[w1, . . . , wn], where m = ar(α), n = ar(β), and vi and wi are terms. The signature and a set of interaction rules together define an interaction system. In any interaction system, a configuration is defined as an unordered multiset of equations vi = wi, denoted v1 = w1, . . . , vn = wn. Any name x can have either zero or exactly two occurrences in a configuration. If a name x has exactly one occurrence in a term t, then the substitution t[x := u] is the result of replacing x in t with the term u.
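As a concrete rendering of this grammar, the sketch below transcribes terms, agents, and equations into data types; this is our own illustration, not part of the calculus:

```python
from dataclasses import dataclass, field
from typing import Union, List

@dataclass
class Name:
    ident: str                  # a name occurs either zero or exactly two times

@dataclass
class Agent:
    symbol: str                 # agent type alpha from the signature Sigma
    args: List["Term"] = field(default_factory=list)  # len(args) == ar(symbol)
    needed: bool = False        # True renders the term as !alpha(...)

Term = Union[Name, Agent]

@dataclass
class Equation:                 # a configuration is a multiset of equations
    lhs: Term
    rhs: Term

# The configuration !alpha(x, beta()) = y as a value:
config = [Equation(Agent("alpha", [Name("x"), Agent("beta")], needed=True),
                   Name("y"))]
```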
Reduction
Reduction relation on configurations is defined for three different cases.
If α[v1, . . . , vm] ⋈ β[w1, . . . , wn], then the following reduction is called interaction:
(!)α(t1, . . . , tm) = (!)β(u1, . . . , un) → t1 = v1, . . . , tm = vm, u1 = w1, . . . , un = wn,
where (!) stands for either ! or its absence. The second case of reduction is indirection, defined for a name x that occurs in v:
x = u, v = w → v[x := u] = w.
Finally, the following reduction is called delegation:
α(t1, . . . , !ti, . . . , tn) → !α(t1, . . . , !ti, . . . , tn),
meaning that a needed term makes its parent agent needed as well.
Interaction, indirection, and delegation together constitute the reduction relation of configurations.
Example
The reduction sequence corresponding to the read-back of ω ≡ λx.x x, as defined in [2], includes 5 delegations in addition to 5 interactions and 4 indirections.
Implementation
A natural implementation of full reduction for interaction nets is to have a queue of pairs of singly-linked trees to represent the set of active pairs, with each name in a configuration represented as a pair of nodes linked to each other. That immediately gives constant time per reduction step, whether interaction or indirection. Note that such a queue can be processed in any order, and even in parallel. However, in the case of weak reduction, that data structure is not enough. In addition to the links from a parent node to each of its children, one could choose to add backward links from each node to its parent node, with the outermost node's parent link pointing to the equation in which the corresponding term occurs. Now, let us discuss how to implement the refined interaction calculus we introduced in this paper, aiming to preserve the good properties of the queue noted above. First of all, instead of the queue of active pairs, we can choose to have a queue of needed entities, which can be needed terms or equations. Processing such a queue essentially amounts to replacing each needed node in the queue with its parent node and marking the parent needed as well. When a node's parent link points to an equation, the node is to be replaced with that equation in the queue. The implementation of interaction and indirection remains the same, with the following two modifications. First, only needed equations are to be added to the queue after an interaction. Second, after substitution of a needed term, that term is to be added to the queue.
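The delegation loop just described can be sketched as follows; this is our own rendering under the stated assumptions (each node carries a parent link that points either to another node or to the containing equation), not the authors' implementation:

```python
from collections import deque

class Node:
    def __init__(self, symbol, parent=None, needed=False):
        self.symbol, self.parent, self.needed = symbol, parent, needed

class Equation:
    def __init__(self, lhs=None, rhs=None):
        self.lhs, self.rhs, self.needed = lhs, rhs, False

def propagate_neededness(queue):
    """Process the queue of needed entities in constant time per step:
    a needed node marks its parent needed and is replaced by it; when the
    parent link points to an equation, that (now needed) equation is
    collected, ready for interaction or indirection."""
    ready = []
    while queue:
        item = queue.popleft()
        item.needed = True
        if isinstance(item, Equation):
            ready.append(item)           # schedule for interaction/indirection
        else:
            queue.append(item.parent)    # delegation: climb one level up
    return ready

# Toy usage: a needed leaf below alpha(...), whose outermost parent link
# points to the equation containing the term.
eq = Equation()
root = Node("alpha", parent=eq)
leaf = Node("beta", parent=root, needed=True)
print(propagate_neededness(deque([leaf]))[0] is eq)  # -> True
```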
Conclusion
Here, we introduced a version of the interaction calculus that captures the notion of parallel needed reduction for pure interaction nets. Then, we discussed one possible way to implement it in software. The refined interaction calculus benefits from constant time per step, reveals the actual cost of weak reduction, and preserves the option of parallel evaluation of interaction nets. Further, we would like to study its properties more carefully and compare its implementation with others. We expect some performance gain compared to the approach of token-passing, since the implementation of delegation can be an order of magnitude cheaper than that of interaction.
Transparent Cellulose Nanofibrils Composites with Two-layer Delignified Rotary-cutting Poplar Veneers (0°-layer and 90°-layer) for Light Acquisition of Solar Cell
Our transparent cellulose nanofibril composite (TCNC) is made directly from rotary-cut poplar veneer (RPV), whose lignin can be easily stripped by our treatment. The TCNC is prepared by stripping the lignin from the original RPV and infiltrating epoxy resin (ER) into the delignified RPV. The TCNC consists of two delignified RPV layers with grains perpendicular (0°/90°) to each other, which were solidified on a solar cell while the ER was infiltrated. The TCNC shows high transmittance (~90%), high haze (~90%), and nearly equal refractive index fluctuation in the two in-plane directions. Compared with epoxy resin (ER) alone, the TCNC enhances the open circuit voltage (VOC) of the solar cell from 1.16 to ~1.36 and the short circuit current density (JSC) from 30 to ~34, and it enhances the test force from 0.155 kN to ~0.185 kN and the displacement from 43.6 mm to ~52.5 mm.
In this work (Table 1), the lignin of the RPV was stripped by a hydrothermal treatment in sodium hypochlorite (NaClO) solution, an impregnation treatment 1 in ammonium persulfate ((NH4)2S2O8) solution, and an impregnation treatment 2 in sodium hypochlorite (NaClO) solution. Epoxy resin (ER) and its hardener were then infiltrated into the two-layer delignified RPVs. Compared with our previous work, this TCNC shows higher transmittance (~90%), high haze (~90%), and nearly equal refractive index fluctuation.
Results and Discussion
Cell wall contents of RPV before and after delignification. Fourier transform infrared spectroscopy (FTIR) was used to investigate the changes in the cell wall contents from original RPV to delignified RPV, using an FTIR-850 spectrometer (Gangdong, Tianjin, China). In the FTIR spectrum, the band at 1505 cm−1 belongs to aromatic compounds (phenolic hydroxyl groups) and is attributed to aromatic skeleton vibrations from lignin [15,27,30]. The bands at 1235 cm−1 and 1735 cm−1 are characteristic of hemicelluloses and the C=O functional group, respectively [27,31-33]. Compared with the original RPV and the delignified RPV of our previous work (ref. 27), the peaks at 1505 cm−1, 1235 cm−1, and 1735 cm−1 have disappeared in the delignified RPV of this work, proving that lignin, hemicellulose, and the C=O functional group have been stripped from the original RPV (Fig. 2). As Table 2 shows, the absolute-dry weight of the original RPV (60 mm × 60 mm × 3 mm) is about 2.124-2.381 g, and the absolute-dry weight of the delignified RPV (60 mm × 60 mm × 3 mm) is about 1.041-1.164 g. After delignification, the absolute-dry weight of the delignified RPV was about 50% of that of the original RPV.
Microstructure of TCNC.
ER is an index-matching polymer for delignified wood, and the transmittance of delignified wood can be improved by infiltrating ER [13]. Before and after ER infiltration, the delignified RPV and the TCNC were cut along their radial and longitudinal directions, and these sections were examined using a Quanta 450 scanning electron microscope (FEI, US). Figure 3(a-d) shows SEM images of the radial and longitudinal directions of the delignified RPV and the TCNC, respectively. In Fig. 3, the graphical illustration and SEM images indicate that the microstructure of the TCNC is well infiltrated and well preserved by the ER.
Optical properties of TCNC for light acquisition of solar cell. In the TCNC, the cellulose nanofibril network and the lumina are the main pathways of optical transmittance. Modification of the wood cell wall helps to tune the light scattering properties of the material, introducing strong scattering and resulting in diffused luminescence from embedded quantum dots [15,16,27]. The optical haze of the TCNC is due to its natural structural anisotropy and its light scattering properties.
Transmittance and haze were obtained using a WGT-S transmittance and haze tester (SGIC, Shanghai, China). Figure 4(a,b) shows that our TCNC has a high transmittance of ~90% and a high haze of ~90%. When the TCNC is in contact with the substrate, the colored shape on the substrate can be clearly seen; when it is held 5 mm above the substrate, the colored shape becomes very fuzzy. An S130C photodiode power sensor (Thorlabs, US) was used to record the scattered light intensity distribution in both the x and y directions on the surface of the TCNC. Figure 4(c) indicates that this TCNC has almost equal refractive index fluctuation in the x and y directions. In our previous work, the TWC had a single layer of delignified RPV, with anisotropic light diffraction and lower refractive index fluctuation in the direction of the aligned cellulose fibers [27]. The present TCNC has two delignified RPV layers with grains perpendicular (0°/90°) to each other, which makes the refractive index fluctuation in the x direction close to that in the y direction.
Owing to its high transmittance, high haze, and equal refractive index fluctuation, the TCNC is a superior transparent layer for light acquisition of solar cells, as Fig. 4(d) shows. The electrical properties of a solar cell mainly include the open circuit voltage (VOC) and the short circuit current density (JSC) [8], and the current density-voltage curves of the solar cell with ER and with TCNC were obtained using a CS310H electrochemical workstation (CorrTest, Wuhan, China). Figure 4(d) and Table 3 indicate that the TCNC improves the light acquisition of the solar cell compared with ER, enhancing the solar cell's VOC from 1.16 to ~1.36 and its JSC from 30 to ~34.
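As a small illustration of how VOC and JSC are read off a measured current density-voltage sweep, the sketch below interpolates a toy diode-like curve whose parameters were chosen to echo the ~1.36 V and ~34 mA cm⁻² reported above; it is not the instrument's analysis routine:

```python
import numpy as np

# Toy current density-voltage sweep (V in volts, J in mA/cm^2).
V = np.linspace(0.0, 1.5, 151)
J = 34.0 * (1.0 - np.exp((V - 1.36) / 0.08))  # diode-like, J(1.36 V) = 0

Jsc = np.interp(0.0, V, J)    # short circuit current density: J at V = 0
Voc = np.interp(0.0, -J, V)   # open circuit voltage: V where J crosses 0
                              # (-J is increasing, as np.interp requires)
print(f"JSC = {Jsc:.1f} mA/cm^2, VOC = {Voc:.2f} V")
```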
Mechanical characteristics of TCNC. ER is the current material of choice for the surface of solar cells, but our TCNC has better tensile strength than ER. Figure 5(a) indicates that the TCNC has almost equal tensile strength along the longitudinal directions of the 0°-layer and the 90°-layer. Compared with ER (60 mm × 60 mm × 3 mm), the test force of the TCNC (60 mm × 60 mm × 3 mm) is enhanced from 0.155 kN to ~0.185 kN, and its displacement from 43.6 mm to ~52.5 mm, as Fig. 5(b) and Table 4 show. The tensile strength was tested using a SmartTest mechanical property tester (Joyrun, China). The TCNC can therefore accommodate more flexible shapes for solar cells than ER.
Conclusions
To improve the practicability of TWC for light acquisition of solar cells, we have established a method of preparing TWC from original rotary-cut poplar veneer. Our TCNC shows high transmittance (~90%), high haze (~90%), and almost equal refractive index fluctuation, and compared with ER it enhances the VOC of the solar cell from 1.16 to ~1.36 and the JSC from 30 to ~34. Although ER is the current material for the surface of solar cells, compared with ER our TCNC also enhances the test force from 0.155 kN to ~0.185 kN and the displacement from 43.6 mm to ~52.5 mm, which allows more flexible shapes for solar cells. Our future work will pay more attention to reducing the time cost and resource consumption in the preparation of TCNC, and to improving the quality of TCNC for the light acquisition of solar cells.
Stripping lignin of original RPV. Step 1 of the delignification is a hydrothermal treatment: the original RPV sample is boiled in NaClO solution (0.405 mol L−1 in deionized water) for about 3 h at 130-160 °C. The RPV sample is then taken out of the solution, and the chemicals are removed by rinsing in hot distilled water. Step 2 is impregnation treatment 1: the RPV sample is immersed in (NH4)2S2O8 solution (1.1 mol L−1 in deionized water) for about 72 h at 15-25 °C, after which the chemicals are again removed by rinsing in hot distilled water. Step 3 is impregnation treatment 2: the RPV sample is immersed in NaClO solution (0.81 mol L−1 in deionized water) for about 24 h at 15-25 °C, until its color disappears. After the lignin is stripped, the delignified RPV is preserved in C2H6O.
Infiltrating ER into the delignified RPV and solidifying it on the solar cell. First, the delignified RPV was attached to the surface of the solar cell sample with C2H6O. Second, a liquid resin was prepared by mixing ER and its hardener at a 3:1 ratio (45 ml ER, 15 ml hardener), and this liquid resin (60 ml) was poured over the delignified RPV. The liquid resin was then drawn into the delignified RPV by vacuum in an RV-620-2 vacuum reactor (YBIF, Shanghai, China) at 25-30 °C. All of the above steps should be completed within 30 min. After the first layer of delignified RPV (the 0°-layer) had solidified on the solar cell for about 24 h at 25-30 °C, the second layer (the 90°-layer) was solidified on the 0°-layer by repeating the same steps. Table 4. Test force and displacement of ER and TCNC, respectively.
NA Proteins of Influenza A Viruses H1N1/2009, H5N1, and H9N2 Show Differential Effects on Infection Initiation, Virus Release, and Cell-Cell Fusion
Two surface glycoproteins of influenza virus, haemagglutinin (HA) and neuraminidase (NA), play opposite roles in their interaction with host sialic acid receptors. HA attaches to sialic acid on host cell surface receptors to initiate virus infection, while NA removes these sialic acids to facilitate release of progeny virions. This functional opposition requires a balance. To explore what might happen when the NA of an influenza virus is replaced by one from another isolate or subtype, we generated three recombinant influenza A viruses in the background of A/PR/8/34 (PR8) (H1N1) with NA genes obtained respectively from the 2009 pandemic H1N1 virus, a highly pathogenic avian H5N1 virus, and a low-pathogenicity avian H9N2 virus. These recombinant viruses, rPR8-H1N1NA, rPR8-H5N1NA, and rPR8-H9N2NA, showed similar growth kinetics in cells and pathogenicity in mice. However, many more rPR8-H5N1NA and PR8-wt virions were released from chicken erythrocytes than rPR8-H1N1NA and rPR8-H9N2NA virions after 1 h. In addition, in MDCK cells, rPR8-H5N1NA and rPR8-H9N2NA infected a higher percentage of cells and induced cell-cell fusion faster and more extensively than PR8-wt and rPR8-H1N1NA did in the early phase of infection. In conclusion, NA replacement in this study did not affect virus replication kinetics but had different effects on infection initiation, virus release, and fusion of infected cells. These phenomena might be partially due to the NA proteins' different specificities for α2-3/α2-6-sialylated carbohydrate chains, but the exact mechanism remains to be explored.
Introduction
Influenza A viruses are single-stranded RNA viruses of the family Orthomyxoviridae, with a genome composed of eight RNA segments. Reassortment of these eight gene segments can produce novel viruses that exhibit different pathogenicity and propagation characteristics, and even the capacity for interspecies transmission [1,2].
The spread of an influenza virus among humans is determined by interactions between human hosts and the virus. The 1918 flu pandemic was the most disastrous outbreak on record and caused more than 50 million deaths globally [3], but the virus did not reemerge afterwards and ultimately disappeared. The mortality rate of people suffering from highly pathogenic avian influenza virus infection was high [4], but person-to-person transmission of H5N1 virus has been very rare and limited so far. The 2009 swine-origin pandemic (S-H1N1) influenza virus caused mostly mild influenza-like illness, and the small percentage of severe cases occurred primarily among the young and middle-aged [5]. Notably, this virus displayed apparently higher transmissibility among humans than seasonal influenza viruses and H5N1 viruses have. Christophe et al. made an early assessment of the transmissibility of S-H1N1 and of case severity by analyzing the outbreak in Mexico, early data on international spread, and the genetic diversity of the virus; they estimated a case fatality ratio of 0.4% (range: 0.3 to 1.8%) based on confirmed and suspected deaths reported by late April 2009 [6]. These facts suggest that viruses of higher pathogenicity do not necessarily cause larger catastrophes for humans, while low-pathogenicity influenza viruses that transmit easily among humans can pose a serious public health threat. However, it is not yet possible to predict the extent of prevalence and the severity of illness that an influenza virus will cause in humans from its subtype, and the mechanism of virulence acquisition is not well understood.
Hemagglutinin (HA) and neuraminidase (NA) are the two envelope glycoproteins on the surface of influenza virions. HA attaches to sialic acid on the host cell surface to initiate virus infection [7]; NA removes sialic acid from the cell receptors to which HA binds, facilitating virus release [8]. Because HA and NA recognize the same molecule (sialic acid) and have opposite activities, sharp changes in either protein will affect virus replication [9]. However, studies on how changes in HA-NA combinations affect the biological properties of influenza viruses have been limited. In this study, using reverse genetics technology, we designed and produced three recombinant viruses in the background of the PR8 (H1N1) virus which differ only in NA. The origins of the three NAs were diverse, and we set out to characterize these recombinant viruses to evaluate the relative contribution of the NA protein.
The PR8 virus is a mouse-adapted, attenuated, laboratory H1N1 strain [10]. For the three NA origin viruses, the S-H1N1 virus is the virus that spread swiftly among humans in 208 countries in 2009 [11], and the H9N2 [12] and avian H5N1 virus [13] are respectively a low-pathogenic and a high-pathogenic strain isolated from chickens. These viruses are diverse but of field relevance and we hoped the study of different HA-NA combinations with the same HA could help to further the understanding of certain clinical, epidemiological and virological features of influenza viruses.
Generation of Recombinant Influenza Viruses
Recombinant viruses were rescued as described previously [15,16]. Briefly, 1 μg of each plasmid (pHW-PB2, -PB1, -PA, -HA, -NP, -M, -NS and pHW-H5-NA, pHW-H9-NA, or pHW-SH1-NA) was combined with 18 μl of the transfection reagent Lipofectamine 2000 (2 μl per μg DNA, Invitrogen), incubated at room temperature for 30 min, and then transferred to monolayers of 10^6 293T cells in 6-well plates. Six hours later, the mixture was removed and replaced with Opti-MEM (Gibco-BRL) containing 0.3% BSA and 0.01% FCS. Forty-eight hours after transfection, the culture medium was collected and inoculated into 10-day-old SPF embryonated chicken eggs for virus propagation. Allantoic fluids with positive HA titers were then collected and stored at −80 °C.
Sodium Dodecyl Sulfate-polyacrylamide Gel Electrophoresis (SDS-PAGE)
The rescued viruses were inoculated into 10-day-old SPF chicken embryos. Allantoic fluid was collected 72 h later and centrifuged (5000×g) to remove cell debris. The four viruses were then inoculated into MDCK cells. At 72 h after infection, the supernatants were collected, first spun at 5000×g for 20 min to remove cell debris and then ultracentrifuged at 4 °C and 70,000×g for 3 h. Concentrated viruses were resuspended in SDS-PAGE sample loading buffer, incubated at 37 °C for 30 min, and heated at 100 °C for 1 min. The proteins were separated by 12% SDS-PAGE, and gels were stained with Coomassie brilliant blue G250.
Replication Properties of Rescued Viruses and Titration of Viruses
Virus growth curves were used to analyze the replication properties of the rescued viruses. MDCK cell monolayers were inoculated with diluted virus at an MOI of 0.001, and the inoculum was removed after incubation at 37 °C for 1 h. The cells were washed and overlaid with 3 ml of MEM containing 1.0 μg/ml TPCK-trypsin. Supernatant was sampled at 12 h, 24 h, 36 h, 48 h, 60 h, and 72 h post infection.
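For readers reproducing the inoculation step, the snippet below shows the standard back-calculation of inoculum volume from MOI, cell number, and stock titer. It is a hedged sketch: it treats one TCID50 as one infectious unit, whereas conversions between TCID50 and plaque-forming units vary, and the cell count in the example is illustrative rather than taken from the study.

```python
def inoculum_volume_ml(moi: float, n_cells: float, titer_per_ml: float) -> float:
    """Volume of virus stock needed to infect n_cells at the given MOI,
    treating one TCID50 as one infectious unit (a simplification)."""
    return moi * n_cells / titer_per_ml

# e.g. a monolayer of 1e6 MDCK cells at MOI 0.001 with a 10^6.3 TCID50/ml stock:
print(f"{inoculum_volume_ml(0.001, 1e6, 10 ** 6.3) * 1000:.2f} μl of stock")  # ~0.50 μl
```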
The 50% tissue culture infectious dose (TCID50) was determined in MDCK cells incubated with 10-fold serially diluted viruses at 37 °C for 72 h, after which the cytopathic effect was scored. The 50% egg infectious dose (EID50) was determined in 10-day-old specific pathogen-free (SPF) embryonated chicken eggs incubated with 10-fold serially diluted viruses at 37 °C for 48 h. TCID50 and EID50 were calculated by the Reed-Muench method [17]. The virus titer in each experimental group (n = 5) is reported as mean ± SD.
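The Reed-Muench interpolation cited above is mechanical enough to script. The sketch below is a minimal implementation for tenfold dilution series; the dilution scheme and counts in the example are invented for illustration, not taken from the study.

```python
def reed_muench_50(log10_dilutions, infected, total):
    """log10 of the dilution giving 50% infection, by the Reed-Muench method.

    log10_dilutions: log10 of each dilution, most concentrated first
                     (e.g. [-1, -2, -3, -4, -5] for tenfold steps).
    infected, total: positive and inoculated counts at each dilution.
    """
    uninfected = [t - i for i, t in zip(infected, total)]
    n = len(infected)
    # Reed-Muench cumulative counts: infected accumulate toward higher
    # dilutions, uninfected toward lower dilutions.
    cum_inf = [sum(infected[i:]) for i in range(n)]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(n)]
    pct = [100 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50 > pct[i + 1]:
            pd = (pct[i] - 50) / (pct[i] - pct[i + 1])   # proportionate distance
            return log10_dilutions[i] + pd * (log10_dilutions[i + 1] - log10_dilutions[i])
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

# Invented counts (5 wells per tenfold dilution):
endpoint = reed_muench_50([-1, -2, -3, -4, -5], [5, 5, 3, 1, 0], [5, 5, 5, 5, 5])
print(f"50% endpoint at 10^{endpoint:.2f} dilution")   # titer = 10^3.32 TCID50 per inoculum
```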
Virus Elution Assay
The ability of NA to elute virus bound to erythrocytes was assessed as described previously [18]. Briefly, 50 μl of virus with an HA titer of 1:128 was incubated with 50 μl of 0.5% chicken erythrocytes at 4 °C for 1 h. The mixture was then incubated at 37 °C, and over the next 8 hours supernatant was taken periodically and measured for HA titer.
NA Activity
The NA activities of the recombinant viruses were determined according to the method described in the WHO manual [19]. In this assay the viral neuraminidase acts on the substrate (fetuin) and releases sialic acid, and the enzymatic reaction is stopped by adding arsenite reagent. The amount of sialic acid liberated is then determined chemically with thiobarbituric acid, which produces a pink color in proportion to the free sialic acid; the color is quantified with a spectrophotometer at a wavelength of 549 nm.
Indirect Immunofluorescence Assay (IIFA)
MDCK cell monolayers seeded on glass coverslips were inoculated with virus solution, which was removed after 1 hour of incubation, and the cells were incubated at 37 °C for a further 3 h, 6 h, or 12 h. At these time points, the cultures were fixed with 4% paraformaldehyde, permeabilized with 0.5% Triton X-100, blocked with 5% non-fat milk, and stained with polyclonal antisera against whole viruses. Fluorescein isothiocyanate (FITC)-conjugated anti-mouse IgG secondary antibodies (Millipore) were then added, followed by staining with Hoechst 33258 for 10 min. Fluorescent image analysis was performed on a Leica laser scanning confocal microscope with associated software as described previously [20]. Positive staining indicated successful virus entry into the cell [21].
Flow Cytometry Analysis
Infected MDCK cells in suspension (2×10^6) were incubated with PBS alone (mock) or anti-PR8 antibodies for 45 min on ice. After extensive washing, fluorescein isothiocyanate (FITC)-conjugated anti-mouse IgG secondary antibodies (Millipore) were added and incubated for 30 min on ice. After washing three times with PBS, the cells were fixed with 4% paraformaldehyde, and the number of infected cells was determined by flow cytometric analysis on a FACSAria III flow cytometer (BD Biosciences).
Pathogenicity and Lethality in BALB/c Mice
To test the pathogenicity of the rescued viruses, BALB/c mice aged 6-8 weeks were anesthetized and inoculated intranasally (i.n.) with 20 μl of virus suspension at a titer of 1×10^6.5 EID50. Bodyweight and survival were recorded daily for 14 days after inoculation. To evaluate viral infection in the respiratory tract, five mice from each group were randomly taken for sample collection three days after challenge. The mice were anaesthetized with chloroform, and the trachea and lungs were collected and washed three times with a total of 2 ml of PBS containing 0.1% BSA. The bronchoalveolar lavage was used for virus titration after cellular debris was removed by centrifugation [13]. The animal experiment was approved by the Animal Resource Center at the Wuhan Institute of Virology, Chinese Academy of Sciences (WIVA04201202).
Statistics
The results of the test groups were evaluated by analysis of variance (ANOVA). Differences were considered significant if the p value was less than 0.05. The survival rates of mice in the test and control groups were compared using Fisher's exact test.
Generation of Recombinant Viruses Bearing Different NA Proteins
Three recombinant influenza viruses were designed to share 7 gene segments with the PR8 (H1N1) strain and to carry the NA gene segment from the N1-subtype influenza viruses A/H5N1 and swine A/H1N1, or from the N2-subtype A/H9N2 virus, respectively. The NA proteins of the H5N1, S-H1N1, H9N2 and PR8-H1N1 viruses have 469, 449, 466 and 454 amino acids (aa), respectively, and the sequences are aligned in Fig. 1. The NA amino acid homology between the wild-type PR8 virus and H5N1, S-H1N1, or H9N2 was 71.4%, 61.5%, or 30.9%, respectively (Table 1).
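Percent identity figures such as those in Table 1 are computed from a pairwise alignment. The helper below is a minimal sketch: it assumes the two NA sequences have already been aligned (e.g., by a tool such as Clustal or MUSCLE, which is an assumption since the study does not name its alignment software), counts gap columns as mismatches, and divides by alignment length; other denominator conventions give somewhat different percentages.

```python
def percent_identity(aln_a: str, aln_b: str) -> float:
    """Percent amino acid identity over a pairwise alignment.

    aln_a, aln_b: equal-length aligned sequences with '-' for gaps.
    Gap columns count as mismatches here; identity over ungapped
    columns only would give a slightly different figure.
    """
    if len(aln_a) != len(aln_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b and a != "-" for a, b in zip(aln_a, aln_b))
    return 100 * matches / len(aln_a)

# Toy fragments only; the study's NA proteins are 449-469 aa long.
print(f"{percent_identity('MNPNQKII-TIGS', 'MNPNQKIITTIGS'):.1f}%")  # 92.3%
```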
The three viruses were successfully rescued using the eight-plasmid reverse genetics system and were named rPR8-H5N1NA, rPR8-H1N1NA, and rPR8-H9N2NA.
The NA enzyme activity of the rescued viruses was evaluated: two of the viruses had activity comparable to that of PR8-wt, while the NA activity of rPR8-H1N1NA was slightly lower (Table 1).
Characterization of the Recombinant Viruses
The effects of NA replacement on virus infectivity were assessed by growth kinetics in both MDCK cells and SPF embryonated chicken eggs. MDCK cells were infected with the viruses at an MOI of 0.001 and the supernatants were collected at the indicated times and titrated (TCID50). The replication kinetics showed that the titer of the wild-type virus was 10^4.5/ml at 12 h.p.i., rose to 10^6.3/ml at 24 h.p.i., and peaked (10^6.3/ml) at 36 h.p.i. No significant difference was observed in the growth curves between the recombinant and wild-type viruses (Fig. 2A). Similarly, no difference was observed in the SPF chicken embryo infection experiment, where all the viruses rapidly reached high yields of at least 10^8 EID50 per ml (data not shown). In both tests, the recombinant viruses showed the same growth characteristics as the wild-type virus.
To examine the stability of the recombinant viruses, the viruses were subcultured in chicken embryos for 10 passages. They grew well and their HA titers were stable during serial passage. RNA was extracted from the 10th-passage recombinant viruses for RT-PCR amplification of the HA and NA genes. No nucleotide sequence differences were detected between the 1st-passage and 10th-passage viruses, demonstrating that these recombinant viruses were successfully generated and genetically stable.
To examine whether NA replacement would affect the ratio of NA protein in virions, the virions of the recombinant viruses were separated by 12% SDS-PAGE, with equal loading of virions ensured by an HA assay. The SDS-PAGE results showed that the proportion of NA protein in the recombinant virions was comparable to that of the wild-type PR8 virus (Fig. 2B).
Virulence in Mice
To compare the virulence and pathogenicity of the recombinant viruses in vivo, BALB/c mice (n = 5 per group) were inoculated intranasally (i.n.) with 1×10^6.5 EID50 in 20 μl of recombinant virus or PR8-wt virus. Mortality, weight loss and viral titers in bronchoalveolar lavages were evaluated; the bronchoalveolar lavages were obtained at 3 and 6 days after infection.
As with PR8-wt virus, infection with the recombinant viruses caused serious clinical symptoms including piloerection, lethargy, anorexia, and bodyweight loss. The viruses caused fatal infection and all the infected mice died within 10 days of challenge except one in the rPR8-H5N1NA group (Fig. 3A).
High virus titers were detected in the bronchoalveolar lavages of mice on day 3 p.i. The residual lung virus titer of mice infected with PR8-wt virus was 10^7.3±0.5/ml, the highest among the four viruses but not significantly higher than those of mice infected with the recombinant viruses (p > 0.05) (Table 2). The rate of bodyweight loss also did not differ significantly between mice infected with the recombinant viruses and the PR8-wt virus (Fig. 3B). The results indicated that replacement of the NA gene did not change the characteristics of the virus in terms of mouse adaptation and lethality.
Virus Elution in vitro
To evaluate the virus release rate in vitro, the four viruses, at a dose of 1:128 by HA titer, were each first incubated with an equal volume of 0.5% chicken erythrocytes at 4 °C for 1 h and then incubated at 37 °C for a prolonged period, during which supernatants were taken periodically for HA assay, which indicates the amount of virus released. After 1 h of incubation at 37 °C, the PR8-wt and rPR8-H5N1NA viruses showed a 64-fold reduction in HA titer, which was maintained until the end of observation (8 h). The rPR8-H1N1NA virus showed an 8-fold reduction in HA titer after 1 h of incubation at 37 °C and a further 2-fold reduction at 4 h that persisted to the end of the experiment. The rPR8-H9N2NA virus had an 8-fold reduction in HA titer after 1 h of incubation at 37 °C, which was maintained for 3 h; its HA titer then dropped a further 16-fold and, by the end of observation, had dropped 128-fold in total (Fig. 4). As for the agglutination phenomenon, the cells incubated with PR8-wt and rPR8-H5N1NA presented agglutination clumping in the V-shaped 96-well microtiter plates from start to finish (8 h). In contrast, in cells incubated with rPR8-H9N2NA and rPR8-H1N1NA, the agglutination had disappeared by 1 h after incubation at 37 °C, when the supernatants of rPR8-H9N2NA and rPR8-H1N1NA were very clear and all the red blood cells had settled completely, and they stayed that way until the end of observation. Clearly, NA replacement dramatically altered the characteristics of virus elution from erythrocytes.
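Since HA titers move in a two-fold dilution series, the fold reductions quoted above are simply ratios of reciprocal titers. A small helper, with illustrative numbers matching the 128 to 2 drop reported for PR8-wt:

```python
def ha_fold_reduction(titers_by_hour):
    """Fold reduction in reciprocal HA titer relative to the starting value.

    titers_by_hour: {hour: reciprocal HA titer}. A 128 -> 2 drop is the
    64-fold reduction reported for PR8-wt and rPR8-H5N1NA after 1 h.
    """
    start = titers_by_hour[min(titers_by_hour)]
    return {h: start / t for h, t in sorted(titers_by_hour.items())}

print(ha_fold_reduction({0: 128, 1: 2, 8: 2}))   # {0: 1.0, 1: 64.0, 8: 64.0}
```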
The Initiation of Influenza Virus Infection
To determine whether NA replacement affects the initiation of influenza virus infection, the number of MDCK cells infected by virus at 6 h.p.i. was examined at a low MOI of 0.001 by flow cytometry and by IIFA, where a positive FITC-fluorescence signal indicated infected cells. As shown in Fig. 5A, while no fluorescence could be detected in uninfected cells, cells infected with PR8-wt and rPR8-H1N1NA were 20.7% and 46.8% fluorescence positive, respectively. In comparison, cells infected with rPR8-H9N2NA and rPR8-H5N1NA showed significantly higher (p < 0.05) positive rates (81.0% and 74.9%, respectively) (Fig. 5C). The percentage of FITC-fluorescence-positive cells in Fig. 5B was calculated with the software Volocity Demo, and the trends in the four virus-infected groups were consistent with those in Fig. 5C. This result suggested that NA replacement had obvious effects on the initiation of influenza virus infection. Furthermore, as seen in Fig. 5A, at 6 h after MDCK cells were infected at an MOI of 0.001, the vRNPs of PR8-wt and rPR8-H1N1NA were seen only in the nucleus, while the vRNPs of rPR8-H9N2NA and rPR8-H5N1NA had already been exported into the cytoplasm. A more rapid infection process was therefore observed for rPR8-H9N2NA and rPR8-H5N1NA than for PR8-wt or rPR8-H1N1NA.
Influenza Viruses Induced Cell-cell Fusion
MDCK cell monolayers were inoculated with viruses at an MOI of 0.001 or 0.1; at one hour post infection, the inoculum was removed and the cells were incubated at 37 °C for 3 h, 6 h, or 12 h before being fixed and stained. At an MOI of 0.1, cell-cell fusion was not obvious at 3 h for cells infected with any of the four viruses: cell volume enlargement was not apparent, with only a few slightly larger cells; cell nuclei were homogeneous in size, with no abnormally large ones; and there was no cytoplasmic fusion as observed by DIC (Fig. 6A). At 6 h post infection, merged cytoplasm and fusion of multiple cells were observed for all four virus groups, and the volume of the rounded, unfused cells was significantly smaller than that of the fused cells (Fig. 6B). For the PR8-wt-infected group, the FITC field presented two dissolving nuclei appearing as holes, which could also be seen in the corresponding DIC image, and the cytoplasm had already fused together. In cells infected with rPR8-H1N1NA, rPR8-H9N2NA and rPR8-H5N1NA, there was apparent cell enlargement or cytoplasmic fusion, with the nuclei clustered together (Fig. 6C).
These results indicated that NA substitution could effectively change the cell fusion process: the rPR8-H9N2NA and rPR8-H5N1NA viruses induced earlier and more obvious cell-cell fusion than the PR8-wt or rPR8-H1N1NA virus at the same MOI, while at the higher MOI cell fusion occurred earlier and more extensively.
Discussion
HA, a type-I glycoprotein, plays a major role in virus replication in host cells. It attaches to the cell surface via sialic acid and promotes virus entry by mediating membrane fusion between the virus and the endosome [22]. NA, a type-II glycoprotein, can eliminate sialoglycoproteins from virus-infected cells and enable virus release [23]. Many studies have shown that low NA enzyme activity renders virus release from infected cells inefficient and leads to large numbers of budding viruses accumulating at the cell surface [24]; since the connection between these surface viruses and the cell membrane is HA bound to the sialic acid receptors of the cell surface, a balance between HA and NA activities is crucial [25,26]. In brief, HA activity should be high enough to ensure virus attachment to cells, while the accompanying NA activity should not be so high as to prevent HA binding to cells, nor so low as to be insufficient for the release of progeny viruses [27].
In the current study, we obtained three recombinant viruses in the genetic background of PR8 (H1N1) that differed only in NA, which came from H9N2, H5N1 and swine-H1N1 viruses, respectively. Several studies have classified the NA subtypes into two groups, the first consisting of the N1, N4, N5, and N8 subtypes and the second of the N2, N3, N6, N7 and N9 subtypes [28], with intra-group homologies much greater than inter-group homologies [29]. The overall structure of NA is regarded as conserved among the different subtypes of influenza A viruses, even though sequence homology can be as low as 30% [30,31].
In the current study, the successful rescue of the three recombinant viruses illustrated that the HA and NA activities of the viruses were by and large in functional balance. A series of tests was performed to examine the effect of NA replacement. The NA enzyme activity assayed in vitro was equivalent among the PR8-wt, rPR8-H5N1NA and rPR8-H9N2NA viruses, and lower in rPR8-H1N1NA. The three recombinant viruses did not differ from the wild-type virus in growth kinetics in MDCK cells, as measured by viral titer (TCID50) in cell culture medium collected at various times after infection. The proportion of NA incorporated into the virions was also unaltered by NA replacement, as determined by SDS-PAGE. These recombinant viruses replicated efficiently in mouse lung without prior adaptation and caused death of mice in a manner comparable to that of the PR8-wt virus. However, for virus elution from chicken red blood cells, rPR8-H5N1NA and PR8-wt showed faster and significantly greater release than rPR8-H9N2NA and rPR8-H1N1NA did. Furthermore, in the first hours of infection of MDCK cells, rPR8-H9N2NA and rPR8-H5N1NA infected many more cells than the rPR8-H1N1NA and PR8-wt viruses did. These viruses induced fusion in infected MDCK cells and, consistent with the infection initiation data, the fusion occurred earlier and more extensively in rPR8-H9N2NA- and rPR8-H5N1NA-infected cells than in rPR8-H1N1NA- or PR8-wt-infected cells.
From these results, we can preliminarily conclude that NA replacement in the background of the PR8 virus has little effect on virion composition, virus replication kinetics, or infectivity and virulence in mice, but it has significant effects on virus elution from erythrocytes and on the efficiency of infection initiation and cell-cell fusion.
We do not yet have a unified explanation for the different observations made in the current study; many factors must be taken into consideration. First, the species factor should be considered. The PR8-wt virus is a laboratory H1N1 strain that had been adapted in mice. The H5N1 and H9N2 strains are field strains isolated from chickens. The S-H1N1 strain is also a field strain, isolated from humans but believed to be of swine origin. The MDCK cell line used is a canine kidney epithelial cell line, and the erythrocytes used in the elution study were from chickens. It is known that the substrate specificity of the NA protein is related to the species, as well as the year, from which an influenza virus is isolated [32]. NA, neuraminidase, is an exosialidase that cleaves the α-ketosidic linkage between a sialic (N-acetylneuraminic) acid and an adjacent sugar residue [33]. NA has substrate specificity in that it can discriminate between sialic acids and their linkage type with the next residue (2-3, 2-6 or 2-8), as well as recognize internal regions of the oligosaccharide chain. One example is that the key amino acid positions of NA's neuraminic acid binding site are changed in viruses with α2-6 specificity (human, swine, and poultry H9N2 viruses) compared with viruses whose HAs interact with α2-3-sialylated carbohydrate chains (i.e., avian and equine influenza viruses) [32]. Another example of species-specific NA substrate specificity is seen in data on the N1 and N2 NAs of several duck, swine and human influenza virus isolates [32]. In these studies, all of the studied NAs desialylated α2-3 substrates better than α2-6 ones. For viruses with N1 neuraminidase, the α2-3/α2-6 activity factor was ~60 for duck viruses, ~20 for swine viruses, and ~4 for human viruses. For H9N2 influenza viruses, this α2-3/α2-6 ratio is in the range of 30 to 15 for viruses isolated from poultry, ~6 for the swine virus and ~10 for the human isolate. The data obtained in the current study might be explained in this light: the lower erythrocyte elution rates seen with the swine/human-origin H1N1 and the avian-origin H9N2 NAs could be related to their source viruses' α2-6 specificity and thus less efficient NA enzymatic cleavage and slower virus release.
But this α2-6 specificity cannot explain why the two recombinant viruses with avian-origin NA infected MDCK cells faster and more extensively than the two with mammalian-origin or mouse-adapted N1-subtype NA. Studies from Bovin's group [32] have shown that NAs discriminate the fine structure of α2-3 substrates, that is, they discriminate between the structures of the inner parts of oligosaccharides. In addition, viral NA has been demonstrated by direct experimental evidence to play an essential role at the early stage of virus infection of human epithelium, and NA is thought to promote virus entry [21,34]. The experimental data of Flint et al. indicated that the influenza virus HA protein can facilitate cell-cell fusion [35].
Variation in other functional domains, such as the enzyme active site, the stalk length, the sialic acid binding site and potential glycosylation sites [36], might also affect NA activity. Unlike serial mutations based on a single NA sequence, the three NAs in the current study come from diverse field strains of recent years and differ at many sites, so it is premature to speculate on possible mechanisms. At this point we therefore do not intend to give a full mechanism explaining the differential results observed; instead, we present the NA sequences and the in vitro and in vivo data as material for future interpretation by the research community as more relevant data accumulate.
Comparison of Breast Cancer-Related Lymphedema (Upper Limb Swelling) Prevalence Estimated Using Objective and Subjective Criteria and Relationship with Quality of Life
This study aimed to investigate lymphedema prevalence using three different measurement/diagnostic criterion combinations and explore the relationship between lymphedema and quality of life for each, to provide evaluation of rehabilitation. Cross-sectional data from 617 women attending review appointments after completing surgery, chemotherapy, and radiotherapy included the Morbidity Screening Tool (MST; criterion: yes to lymphedema); Lymphedema and Breast Cancer Questionnaire (LBCQ; criterion: yes to heaviness and/or swelling); percentage limb volume difference (perometer: %LVD; criterion: 10%+ difference); and the Functional Assessment of Cancer Therapy breast cancer-specific quality of life tool (FACT B+4). Perometry measurements were conducted in a clinic room. Between 341 and 577 participants provided sufficient data for each analysis, with mean age varying from 60 to 62 (SD 9.95–10.03) and median months after treatment from 49 to 51. Lymphedema prevalence varied from 26.2% for perometry %LVD to 20.5% for the MST and 23.9% for the LBCQ; differences were not significant. Limits of agreement analysis between %LVD and the subjective measures showed little consistency, while moderate consistency resulted between the subjective measures. Quality of life differed significantly for women with and without lymphedema only when subjective measurements were used. Results suggest that subjective and objective tools investigate different aspects of lymphedema.
Introduction
Progressions in the treatment of breast cancer have led to increased survival rates and increased emphasis on improving long-term outcomes and quality of life through targeted rehabilitation. Lymphedema is a condition that can develop early or years after treatment, due to the necessary rigor of surgery and radiotherapy, which can interrupt the lymphatic system to varying degrees depending on the type of surgery and the dose of radiotherapy [1]. As a result, interstitial fluid can accumulate in the tissues, leading to skin fibrosis and cellulitis. There is evidence of substantial impacts of lymphedema on quality of life and functional outcomes [2]. There is also evidence to support rehabilitation interventions specific to the management of lymphedema as a long-term condition, summarized in an international consensus document; these include skin care, specialized massage, sustained compression, and exercise [3]. When lymphedema is detected early, therapeutic management is more likely to be effective [4].
The incidence of lymphedema is likely to be affected by the increasing use of less conservative surgical techniques for breast cancer, such as breast-conserving surgery and sentinel lymph node biopsy, although this may also increase the use of axillary radiotherapy [5]. Researchers have investigated different possible risk factors for the development of lymphedema; a systematic review of prevalence and risk factors found the former to range from 0 to 34%. It established a pooled odds ratio for developing lymphedema of 1.46 (95% CI 1.16-1.84; 8 studies) for patients who have received radiotherapy compared to those who have not [6]. No differences were found in risk when comparing mastectomy with breast-conserving surgery in only three studies, but lymphedema was 11.67 times more likely (95% CI 1.45-93.65; 3 studies) where full axillary lymph node dissection (ALND) was conducted, compared with sentinel lymph node biopsy (SLNB). In one cross-sectional study comparing these two surgical procedures, lymphedema prevalence associated with ALND was 43.3% and with SLNB was 22.2% (n = 102) [7]. A prospective study of 936 women evaluated five years after surgery found self-reported arm swelling in 3% of patients after SLNB and 27% with SLNB/ALND and objectively measured lymphedema of 5% and 16%, respectively [8]. Although prevalence estimates vary, it appears that SLNB poses a lower risk of lymphedema, supporting its use.
In order to assist developments in breast cancer treatment and subsequent rehabilitation, it is very important to estimate prevalence after different types of treatment and to monitor lymphedema over time. However, currently there is wide variability in prevalence estimates, from 2 to 86%, resulting from study samples with different treatment characteristics, the use of different measurements, and varied diagnostic criteria [5].
When considering measurement of lymphedema, most objective tools focus on limb volume. Volumetry (water displacement) was considered the gold standard until fairly recently but was not practical clinically, and instead, circumferential measurement of limb segments was used, alongside geometric formulae to enable estimation of volume. More recently, however, perometry (opto-electronic scanning) and multifrequency bioelectrical impedance measurements have gained in popularity as they are more reliable, fast to conduct, and hygienic [1,9]. When using this form of tool, diagnosis of lymphedema ideally relies on comparisons between pre-and postsurgical measurements, although in cross-sectional studies bilateral limb comparisons are usually made. Differences seen to indicate lymphedema vary, most commonly stated to be equal to or greater than 10% or 200 mL limb volume or 2 cm or 5 cm differences in limb circumference [10,11].
There is evidence to suggest that subjective assessment through patient self-report is more sensitive to the development of lymphedema, as well as being less expensive [1,5,11]. Diagnosis of the condition usually focuses on the presence of specific symptoms, such as "heaviness" or "swelling." Such tools also address functional and psychosocial aspects of the condition rather than focusing on physical dimensions. A combined approach is recommended by some [1].
Some studies report good correlations between measurement methods such as circumferential and volumetry measurements, but these do not agree and cannot be used interchangeably. They also focus on measurement of variables such as limb volume rather than estimation of incidence or prevalence [12,13].
One study demonstrated the impact of the classification or diagnostic criterion used with a single objective measure; sequential arm circumference measurement in 347 women was conducted twelve months after surgery [14]. Prevalence of 11% was found within the sample when lymphedema was defined as 2 cm or greater difference between limbs at any measurement point. Limb volume was calculated based on a truncated cone formula, and prevalence differed when using the criteria of 150 mL or greater difference between limbs (9%) and 5% or greater (16%).
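The truncated cone (frustum) calculation referred to in these studies follows directly from the circumference measurements. The sketch below states the formula and sums segment volumes; the 4 cm measurement interval and the example circumferences are illustrative assumptions, since protocols differ between studies.

```python
import math

def segment_volume_ml(c1_cm: float, c2_cm: float, h_cm: float) -> float:
    """Volume (mL) of a limb segment modelled as a truncated cone.

    From V = (pi*h/3)(r1^2 + r1*r2 + r2^2) with r = C/(2*pi):
    V = h(C1^2 + C1*C2 + C2^2) / (12*pi); 1 cm^3 = 1 mL.
    """
    return h_cm * (c1_cm ** 2 + c1_cm * c2_cm + c2_cm ** 2) / (12 * math.pi)

def limb_volume_ml(circumferences_cm, segment_length_cm=4.0):
    """Total limb volume from circumferences taken at fixed intervals."""
    return sum(segment_volume_ml(c1, c2, segment_length_cm)
               for c1, c2 in zip(circumferences_cm, circumferences_cm[1:]))

# Illustrative circumferences (cm), wrist to upper arm at 4 cm spacing:
print(f"{limb_volume_ml([16.0, 17.5, 19.0, 21.0, 24.0, 26.5, 28.0]):.0f} mL")
```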
A small number of studies have compared prevalence of lymphedema within samples estimated using different tools and diagnostic criteria. One study compared limb volume changes before surgery to those 12 months after diagnosis in 118 participants, using perometry, circumferential measurement, and self-report [11]. Twelve-month prevalence estimates varied from 21% to 70% prevalence, with perometry measurement (10% or greater change in limb volume) found to be the most conservative and 2 cm difference in limb circumference most sensitive. The same research group followed up 211 women 2.5 years after treatment and again found 2 cm difference in circumferences to be the most sensitive (91%) but found self-report to be the most conservative (41%) [15]. One further study of 176 women found higher estimation by self-report (27.9%) than by objective criteria, which varied between 0.6% (10% or greater difference in summed arm circumference), 11.4% (multifrequency bioimpedance: 3 or more SD above the reference score), and 11.9% (5 cm or greater difference in summed arm circumference) [10]. Therefore, the literature does not demonstrate consensus regarding whether subjective or objective measures provide higher estimates of lymphedema prevalence, or the most valid.
With no gold standard for measuring and classifying lymphedema, it is difficult to know which method produces the most valid prevalence estimate. One way of investigating this is to look at how a construct expected to vary with lymphedema incidence, such as quality of life, relates to estimates provided by different systems of measuring and classifying lymphedema. Indicators such as psychological morbidity, depression and anxiety, and quality of life scores have been found to differentiate between women who do and do not have lymphedema after breast cancer treatment [16,17]. One study found that lymphedema was the strongest of three factors independently associated with quality of life after treatment for breast cancer, along with nonwhite race and postmenopausal status [18].
To conclude, it is important to understand the relationship between subjective and objective estimations of lymphedema prevalence, to make future decisions relating to choice of measurement tool and classification of lymphedema. This secondary data analysis addresses four main questions: how do three different systems of measuring and classifying lymphedema differ in their estimation of prevalence?
Study
Design. This paper represents secondary analysis of a dataset from a cross-sectional study which aimed to screen for fatigue, impaired upper limb function, lymphedema, and pain in women following treatment for breast cancer. It was classified as service review by the South East Scotland Research Ethics Committee and full ethical review was carried out within the Higher Education Institution to ensure that procedures were in accordance with their standards (Declaration of Helsinki 1975, revised Hong Kong 1989).
Procedure.
Subjective and objective measurements of morbidity and quality of life were carried out on women who provided informed consent to participate while awaiting review appointments at the Breast Clinic. Women were approached with an information sheet and consent form if they had completed surgery (mastectomy or wide local excision; lymph node clearance or either four-node axillary sampling or sentinel lymph node biopsy), chemotherapy, and radiotherapy (breast and/or axilla), did not have recurrence, and could complete questionnaires in English. Consenting participants completed questionnaires in the waiting room and objective tests in a private clinic room. Medical records were reviewed in order to obtain treatment characteristics. The data used in this secondary analysis included three measures of lymphedema and one of quality of life. Quality of life was measured using the Functional Assessment of Cancer Therapy questionnaire with breast cancer and arm function subscales (FACT B+4). There is evidence for its reliability, validity, and practicality [19,20]. A five-point Likert scale is used, with greater quality of life corresponding to a higher score once negatively phrased item scores are reversed. Scores are then calculated by summing the subscale scores, providing physical well-being (PWB), social well-being (SWB), emotional well-being (EWB), functional well-being (FWB), and breast cancer additional concerns (BCC) subscales, plus the sum of four questions relating to upper limb swelling and function (arm-specific subscale: AS). All can be interpreted independently [21], and the Trial Outcome Index (TOI), the sum of the PWB, FWB, and BCC subscales, has been found to be a better summary index for physical and functional outcomes [19]. Where items are missing for less than 50% of a subscale, the remaining item responses are prorated by using the mean of the answers provided for that subscale [19]. Overall, an item response rate of over 80% for the FACT G (the sum of PWB, SWB, EWB, and FWB) is expected.
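The prorating rule described above is simple to express in code. A minimal sketch, assuming items have already been reverse-scored where required and using None for missing responses:

```python
def prorate_subscale(item_scores, n_items):
    """Prorated FACT subscale score.

    If fewer than 50% of a subscale's items are missing, missing items
    take the mean of the answered ones, i.e. the answered sum is scaled
    by n_items / n_answered. Returns None when too many are missing.
    Assumes negatively phrased items are already reverse-scored.
    """
    answered = [s for s in item_scores if s is not None]
    if len(answered) <= n_items / 2:          # 50% or more missing
        return None
    return sum(answered) * n_items / len(answered)

# A 7-item subscale (0-4 Likert) with one missing response:
print(prorate_subscale([3, 4, None, 2, 4, 3, 4], n_items=7))   # 23.33...
```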
Perometer (optoelectronic) measurement was used to provide objective measurement of percentage difference in upper limb volume (%LVD) between affected and unaffected limbs. The vertical perometer (400 T) was used; there is evidence for its validity and reliability in populations of women after breast cancer and with known lymphedema [22,23]. A standardised protocol was used to enhance reliability (Bulley et al., unpublished data). The mean of three measurements for each limb was used where available; the mean of two measurements (in 16 cases) or use of one measurement (in 3 cases) per limb was used if necessary. Where the %LVD was 10% or greater, lymphedema was identified [11].
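The %LVD computation and the 10% classification criterion can be stated compactly. A minimal sketch with hypothetical limb volumes (the function names and example values are illustrative, not from the study):

```python
def percent_lvd(affected_ml: float, unaffected_ml: float) -> float:
    """Percentage limb volume difference between affected and unaffected
    limbs (each ideally the mean of three perometer scans, as above)."""
    return 100 * (affected_ml - unaffected_ml) / unaffected_ml

def meets_lymphedema_criterion(affected_ml: float, unaffected_ml: float) -> bool:
    """Study criterion: %LVD of 10% or greater."""
    return percent_lvd(affected_ml, unaffected_ml) >= 10.0

print(f"{percent_lvd(2530.0, 2210.0):.1f}%")          # 14.5%
print(meets_lymphedema_criterion(2530.0, 2210.0))     # True
```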
The subjective self-report Morbidity Screening Tool (MST) was developed by the research group. This tool includes a short form focusing on lymphedema; the first question establishes whether or not a person perceives that they have lymphedema (a "yes" response), and subsequent questions explore self-reported impacts on activities and participation. The research team investigated its validity and found evidence to support its use; further detail is available elsewhere (Bulley et al., unpublished data). When focusing on the initial question relating to the presence or absence of lymphedema, significantly greater %LVD (n = 434), FACT G scores, and FACT B+4 arm-specific subscale scores (n = 613) were found in those self-reporting lymphedema on the MST versus those who did not, among those with unilateral treatment (%LVD: U = 11212.8, p < 0.001; FACT G: U = 14617.0, p < 0.001; FACT B+4 arm subscale: U = 9671.5, p < 0.001).
The second subjective measure was provided by the Lymphedema and Breast Cancer Questionnaire (LBCQ), a structured interview tool that evaluates 19 symptoms both currently and in the past [1]. Face and content validity and test-retest reliability have been supported and logistic regression found two items to be the best predictors of lymphedema (limb circumference difference of 2 cm or more): "heaviness in the past" and "swelling now" [1]. In the current study, affirming one or both of these items was used as a criterion for identifying or classifying lymphedema. If a participant answered only one of the two items and negated it, the presence of lymphedema according to the LBCQ could not be established and the participant was excluded from analysis.
Analysis.
Data were stored in an Access database; SPSS was used to perform descriptive and inferential analysis. Descriptive analysis was conducted using frequencies and percentages where variables were categorical, means and standard deviations for normally distributed continuous data, and medians and ranges for nonnormally distributed data. Normality was tested using the Kolmogorov-Smirnov test. In all inferential analysis, tests were two sided and statistical significance was set at p < 0.05.
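The Kolmogorov-Smirnov check described here gates the later choice between parametric and nonparametric group comparisons (t test versus Mann-Whitney U, described under the comparisons below). A hedged sketch with scipy: the paper does not specify which K-S variant was run, so standardizing before the test (which makes the p-values approximate) is an assumption, as are the fake scores in the example.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Two-sided comparison of a continuous outcome between two groups:
    t test when both groups pass a Kolmogorov-Smirnov normality check,
    Mann-Whitney U otherwise."""
    a, b = np.asarray(a, float), np.asarray(b, float)

    def looks_normal(x):
        z = (x - x.mean()) / x.std(ddof=1)      # standardize before K-S
        return stats.kstest(z, "norm").pvalue > alpha

    if looks_normal(a) and looks_normal(b):
        return "t test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

rng = np.random.default_rng(1)
with_le, without_le = rng.normal(60, 12, 90), rng.normal(68, 12, 268)  # fake FACT scores
print(compare_groups(with_le, without_le))
```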
All available data for each tool were used when determining prevalence estimates for each tool. When analyzing the MST, nonrespondents to questions relating to lymphedema were excluded from analysis, leaving 577 participants. Where a participant had not answered one or both of the LBCQ questions used to identify lymphedema, they were excluded from analysis, leaving 410 people with available data. Bilateral perometry data were available for 389 women.
The three lymphedema prevalence estimates were compared using Cochran's Q Test in 341 women who had complete data for all three measures [24]. The proportion of individuals who were identified as having lymphedema by each of the three tools was identified and the Kappa measure of agreement was used to investigate the consistency of lymphedema identification. This compared two tools at a time. Sensitivity and specificity of the two subjective measures were investigated using the objective classification or diagnostic criterion of 10% or greater LVD as a comparator. The subjective tools were compared, using the more established LBCQ as the criterion. Analysis of the differences in quality of life between women with and without lymphedema, according to each measurement tool, was conducted in women with complete data for both variables. This was possible for 358 women when analyzing perometry %LVD scores, 378 for the LBCQ and 459 for the MST. Comparisons were conducted using t tests (two sided) where data were continuous and normally distributed and the Mann-Whitney U test where data were nonnormally distributed. The Chi-square test was used where variables were categorical.
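Cochran's Q for three related binary classifications and Cohen's kappa for pairs are both short computations. The sketch below implements the textbook formulas; the random array stands in for the study's 341-by-3 classification matrix, which is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(x):
    """Cochran's Q test for k related binary classifications.
    x: (n_subjects, k) array of 0/1 values; here rows would be women and
    columns the three lymphedema classifications (perometry, LBCQ, MST)."""
    x = np.asarray(x)
    n, k = x.shape
    col = x.sum(axis=0)                      # positives per tool
    row = x.sum(axis=1)                      # positives per subject
    q = (k - 1) * (k * (col ** 2).sum() - col.sum() ** 2) \
        / (k * row.sum() - (row ** 2).sum())
    return q, chi2.sf(q, df=k - 1)

def cohens_kappa(a, b):
    """Cohen's kappa between two binary classifications of the same subjects."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                        # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    return (po - pe) / (1 - pe)

x = np.random.default_rng(0).integers(0, 2, size=(341, 3))  # placeholder data
q, p = cochrans_q(x)
print(f"Q = {q:.3f}, p = {p:.3f}")
print(f"kappa(tool 1, tool 2) = {cohens_kappa(x[:, 0], x[:, 1]):.3f}")
```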
Results
Participant characteristics for each dataset are provided in Table 1 and demonstrate similarities among the available data for each measurement tool. To give an indication of the time periods over which treatments were carried out, data were collected between November 2009 and May 2010, at which point 93% of participants were within 10 years after treatment (treatment between May 2000 and November 2009) and 99% were within 15 years after treatment (treatment between May 1995 and November 2009). Table 2 summarizes the prevalence of lymphedema, which varied from 20.5% to 26.2%, with objective measurement achieving the highest estimate and the MST achieving the lowest. When considering all those responding positively to either or both of the subjective tools, prevalence was similar to the objective estimate. When comparing frequencies of individuals identified as having lymphedema between the measures, no significant difference was demonstrated (Cochran's Q = 1.504, p = 0.471).
Consistency of identification of lymphedema between measures was evaluated using the Kappa measure of agreement between measurement pairs: a Kappa of 0.207 (p < 0.001) resulted between perometry and the LBCQ; 0.143 (p = 0.008) between perometry and the MST; and 0.531 (p < 0.001) between the LBCQ and MST. This suggests moderate agreement between the subjective tools, but poor agreement between each subjective tool and perometry [24].
When evaluating sensitivity as the proportion of correctly identified true positives and specificity as the proportion of correctly identified true negatives [25], with perometry %LVD used as the reference method, the LBCQ had 40.7% sensitivity, compared to 36.8% for the MST, which is very similar. Specificity was also similar for both subjective tools: 80.0% and 78.1%, respectively. However, it is important to note that fewer than 50% of the lymphedema cases that were objectively identified were also subjectively identified, and about a fifth of those without lymphedema according to perometry were found to have lymphedema subjectively. Sensitivity and specificity were 69.0% and 88.2%, respectively, when comparing the MST to the LBCQ, higher between the subjective tools than when either was compared with perometry.
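Sensitivity and specificity against the perometry reference reduce to a 2-by-2 count. A minimal sketch with made-up labels (1 = lymphedema):

```python
def sensitivity_specificity(test, reference):
    """Sensitivity and specificity of a binary test against a reference
    classification (here, perometry %LVD >= 10%); labels are 1/0."""
    pairs = list(zip(test, reference))
    tp = sum(t == 1 and r == 1 for t, r in pairs)
    tn = sum(t == 0 and r == 0 for t, r in pairs)
    fp = sum(t == 1 and r == 0 for t, r in pairs)
    fn = sum(t == 0 and r == 1 for t, r in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# Made-up labels for four women (1 = lymphedema):
sens, spec = sensitivity_specificity([1, 0, 0, 1], [1, 1, 0, 0])
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```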
When comparing quality of life subscales and the FACT B+4 TOI between those with and without lymphedema for each measurement tool (Table 3), perometry classification demonstrated no significant differences in any measure except the arm subscale of the FACT B+4, which includes a question about arm swelling. The subjective measures demonstrated similar results to one another, with significant differences in all FACT B+4 subscales and the TOI for the LBCQ, and in all except SWB and EWB for the MST. Overall, where significant differences exist in the sample, they appear to be strongest for the physical and functional subscales and the breast cancer and arm-specific subscales.
To summarize, objective measurement provided the highest prevalence estimate and the MST the lowest, although no statistically significant differences were found. Poor agreement between methods was found between subjective tools and objective ones, while moderate agreement was found between subjective tools. Sensitivity of the subjective tools, compared with objective, was not high, while specificity was better. Quality of life subscale scores did not differentiate significantly between those with and without lymphedema when using objective classification but did when using either subjective tool.
Discussion
The variability in prevalence estimated using the three systems of measuring and classifying lymphedema is consistent with previous results in this area [10,11,26], although the relatively small and nonsignificant differences between estimates are unexpected, with a range of only 5.7%. They are less variable than results found in previous studies that compare systems of measuring and classifying lymphedema within single samples. One study used perometry with two diagnostic criteria: 200 mL and 10% or greater change; prevalence estimates at 12 months were 42% and 21%, respectively [11]. Circumferential measurements were used, with a classification criterion of 2 cm increase at a single point, giving an estimate of 70% prevalence. Lastly, the Lymphedema and Breast Cancer Questionnaire [1], which diagnoses lymphedema if participants report signs and symptoms of heaviness and swelling, gave a prevalence estimate of 40%. A further study, with prevalence estimated at 2.5 years after treatment, found that the same criterion gave the highest estimate (2 cm difference in circumferences: 91%), but self-report gave the lowest estimate at 41% [15]. Estimates of 67% and 45% were found for 200 mL change in limb volume (perometry) and 10% or greater limb volume change (perometry), respectively. However, further evidence of greater prevalence estimation by self-report (27.9%) was also found when compared with 11.9% for 5 cm or greater difference in summed arm circumference, 0.6% for 10% or greater summed arm circumference, and 11.4% according to multifrequency bioimpedance, using a difference of 3 or more standard deviations above the reference score [10].
There is inconsistency in the existing literature as to whether subjective or objective measures provide higher prevalence estimates [10,11,15]. In the current study, objective estimation was found to give the highest prevalence. No other study has been located that compared two subjective measures; it is noticeable that the MST, which utilizes a single question relating to the presence or absence of lymphedema, gave a lower prevalence estimate than the LBCQ, where classification of lymphedema was made where a person affirmed one or both of two separate items. Both included a question relating to swelling, but only the LBCQ also included alterations in the sensation of heaviness when classifying lymphedema. Furthermore, the MST focuses on self-report of sensations "at the moment (e.g., in the past week), " while the LBCQ also addresses "in the past. " One could argue that the latter would include people with swelling that has resolved, which may not reflect chronicity. Some participants were positively diagnosed with lymphedema according to the MST, but not the LBCQ; this may relate to the "cues" provided by the MST question (e.g., tight-fitting rings or clothes), which are not provided in the specific items of "swelling" and "heaviness" within the LBCQ. The two subjective tools appear to have different advantages, with the LBCQ requesting sensations of both swelling and heaviness, while the MST provides cues to aid consideration of the question. If those responding yes to either tool are combined, the subjectively assessed prevalence estimate reaches 27.2%, only 1% greater than the objectively determined estimate. Both choice of items and the way the items are phrased should be carefully considered in subjective classification of lymphedema.
When considering the second research question of whether the three different measurement tools identify the same subsample of subjects as having lymphedema, analysis demonstrated that the three tools identified different subgroups. Statistical analysis demonstrated little consistency between perometry and either of the questionnaires and only moderate agreement between the questionnaires. There have been suggestions previously that self-report does not accurately reflect the presence of lymphedema [27]; however, others believe that subjective assessments can identify early indicators of lymphedema before objectively identified changes occur [5,28]. In response to this, a latent or subclinical stage has been added to the classification of lymphedema by the International Society of Lymphology [29].
When exploring sensitivity and specificity of lymphedema diagnosis, with the objective measure used as a reference, a relatively high percentage of potentially "positive" cases were not detected by either the LBCQ or MST, while these tools also identified individuals as having lymphedema who were not detected by objective testing. A previous study also reported that when compared with multifrequency electrical impedance, self-report was found to have a sensitivity of 65% but demonstrated an unacceptable number of "false negatives" and "false positives" [10]. Low sensitivity is concerning in relation to early and appropriate intervention, while low specificity could result in inappropriate resource usage and unnecessary anxiety for patients. In the current study, the former is of the greatest concern.
The differences between objective and subjective tools may reflect measurement of different aspects of a multifaceted condition: physical, cognitive, and affective [1]. Therefore it may not be meaningful to compare objective and subjective tools if they measure different dimensions. Instead, it may be more valuable to consider which method is best suited for early detection of lymphedema to enable timely intervention, and which is best for monitoring changes in the condition in response to treatment. The latter may depend on whether improvement in the condition is best reflected by reduction in swelling, improvement in function, or adaptation in coping. As there is no clear evidence that the amount of swelling correlates with the amount of distress, it may be more valuable to monitor subjective experiences of the condition over time [5,17]. This is supported by the study finding that quality of life scores differed significantly between those subjectively classified as having or not having lymphedema. This was not evident for objective classification, which is inconsistent with the existing literature, where significant differences in quality of life scores were found between those objectively identified as having or not having lymphedema [17,18]. Subjective tools may provide a better reflection of the diversity of physical, functional, social, and psychological symptoms and more strongly reflect the negative relationship between quality of life and lymphedema.
This study made use of a single objective measure and classification of lymphedema, which, as noted previously, has been identified as conservative [11]; future work may benefit from the inclusion of more than one objective system of measuring and classifying lymphedema. It is also important to note that a limitation of this cross-sectional study was that objective measurement and classification of lymphedema relied on bilateral limb comparisons, rather than changes in limb volume over time. This means that normal bilateral asymmetries were not accounted for [1].
When arriving at decisions relating to appropriate tools for identifying and monitoring lymphedema over time, it may be beneficial to combine approaches, as suggested previously [1]. Tracking limb volume may give useful information about the efficacy of specific interventions that focus on reducing and maintaining limb volume. Meanwhile, subjective symptom assessment may allow identification of early symptoms and timely referral for management.
Conclusion
In this study between one in four and one in five women developed lymphedema after breast cancer treatment. Objective measurement was found to provide higher prevalence estimates than either of the two subjective tools, although no statistically significant differences were found. Poor agreement between methods was found between subjective tools and objective ones, while moderate agreement was found between subjective tools. Sensitivity of the subjective tools, compared with objective ones, was not high, while specificity was better. Quality of life subscale scores did not differentiate significantly between those with and without lymphedema when using objective classification, but did when using either subjective tool. The results support previous suggestions in the literature that lymphedema is multifaceted and that objective tools focus on the physical, while subjective tools reflect the functional and emotional dimensions. This supports the use of both objective and subjective tools in determining early signs of lymphedema development and in monitoring different dimensions of the experience following rehabilitation interventions. Further research is needed to establish an ideal battery of measures that would enable future comparisons between prevalence estimates and studies of treatment efficacy.
Effect of Vacuum Impregnation with Sucrose and Plant Growth Hormones to Mitigate the Chilling Injury in Spinach Leaves
Vacuum impregnation (VI) has been widely used to modify the physicochemical properties, nutritional values and sensory attributes of fruits and vegetables. However, the metabolic consequences of impregnation for the plant tissue have not been profoundly explored, although shelf life is strongly dependent on this factor. In this study, spinach leaves were impregnated with salicylic acid (SA), γ-aminobutyric acid (GABA) and sucrose to improve their quality and storage ability by reducing chilling injury through improvement of the proline content. The spinach leaves were stored at 4 °C for 7 days and were analyzed at 12 h intervals. After 1 day of impregnation, the proline content in GABA, sucrose and SA impregnated leaves had increased by 240%, 153% and 103%, respectively, while in non-impregnated leaves the proline content had decreased by 23.8%. The chlorophyll content of GABA impregnated leaves exhibited the lowest reduction (49%), followed by sucrose (55%) and SA (57%); meanwhile, non-impregnated leaves lost 80% of their chlorophyll content by the end of storage. Sensory evaluation showed that GABA, sucrose and SA impregnated leaves obtained higher scores in terms of freshness, color, texture and overall appearance compared to non-impregnated leaves.
Introduction
Vacuum impregnation (VI) is a food processing technique used to introduce different substances into the porous matrices of plant tissue. It has been commonly used in the enrichment of vegetables and fruits with probiotics and micronutrients [1], texture enhancement [2], modification of sensory attributes [3] and extension of shelf life by pH reduction [4]. The metabolic consequences of impregnating different substances into plant tissues have not been widely studied, although they are an important factor that can affect product shelf life. It is important to understand that the introduction of other substances into the plant tissue might affect its metabolism, change the quality of the vegetable and subsequently affect the shelf life. This could lead to an increase in food waste.
According to Dou and Toth [5], along with roots and tubers, fruits and vegetables have the highest wastage rates (1.3 billion tons per year) of any food products due to their perishable nature. One of the most common methods used to prolong the post-harvest shelf life of fruits and vegetables is storage at low temperatures. For leafy vegetables like spinach, storage at low temperatures might reduce microbiological deterioration, but the sensitive tissue might be vulnerable to chilling injury. The effects of chilling injury can be seen as browning, surface pitting, wilting and loss of flavor [6], and thus might reduce consumer acceptability, resulting in increased post-harvest losses.
Most research on chilling injury has focused on treating and improving the physical appearance of leafy vegetables. However, to the best of our knowledge, no research has explored the effectiveness of impregnating GABA, sucrose and SA, compounds scientifically reported to mitigate chilling injury, into leafy vegetables.
Salicylic acid (SA), a type of phenolic compound, is widely distributed in plants and is considered a plant hormone because of its roles in plant growth and development, as well as in responses to environmentally stressful conditions [7][8][9]. In recent years, several studies have reported the effects of SA on chilling injury in fruits and vegetables, such as increasing antioxidant enzyme activities in banana [10], enhancing total antioxidant activity and preserving bioactive compounds in orange fruit [11], improving proline content in banana [12], delaying the activity of polyphenol oxidase (PPO) in banana [10] and reducing electrolyte leakage in banana and orange fruit [10,11].
γ-aminobutyric acid (GABA), a non-protein amino acid, is regarded as an endogenous signal molecule that plays a pivotal role in regulating the stress response and plant growth and development [13]. In plants, GABA content is typically low, but abiotic stresses such as chilling, heat, drought, UV irradiation and low levels of oxygen can cause GABA to accumulate rapidly [14,15]. Recent studies have reported that GABA could be used as a post-harvest treatment to alleviate chilling injury in zucchini [16] and white clover [17] by enhancing proline accumulation, and also to delay senescence in cherry [18] and blueberry [19] by enhancing antioxidant system activity.
Sucrose represents the major transport form of photosynthetically assimilated carbohydrates and plays an important role in plants [20]. Sugars as sources of carbon skeleton are necessary to maintain energy supply and extend the post-harvest life of perishable fruits and vegetables [20]. It has been proven that exogenous sucrose supply can delay senescence in asparagus [21], reduce yellowing and enhance antioxidant capacity of broccoli [22] and reduce nitrate content in baby spinach leaves [23].
In this study, we suggest that exogenous administration of SA, GABA and sucrose by VI reduces chilling injury and maintains a good quality of spinach leaves during cold storage by improving the proline content and slowing chlorophyll degradation, and we provide evidence that impregnation directly influences the physicochemical changes of spinach leaves.
Plant Material
Spinach leaves (Spinacia oleracea cv. Amaranthus) were purchased fresh from Pasar Borong Seri Kembangan, Selangor, Malaysia. The leaves were placed in a plastic bag before being transferred to a laboratory at UPM within 20 min. In the laboratory, the spinach leaves were stored at 4 °C. Only spinach leaves with a blade of 8.0 ± 0.5 cm length and 7.0 ± 0.5 cm width and a petiole of 1.0 ± 0.1 cm length were selected for experiments. The leaves were subjected to VI treatment within 3 h of purchase.
Impregnating Solutions
Salicylic acid (SA) solution of 2 mM (pH 3.11) and γ-aminobutyric acid (GABA) solution of 5 mM (pH 5.94) were prepared based on the most commonly used concentrations for the treatment of fruits and vegetables [10,13]. An isotonic sucrose solution of 0.3 M (pH 6.43) in equilibrium with spinach leaves was designed with respect to the cell sap. The isotonic solution concentration was determined by immersing three spinach leaves in a series of solutions with different concentrations ranging from 0.2 M to 0.6 M [24]. The variation of tissue weight was recorded every hour until equilibrium.
Vacuum Impregnation
Ten leaves were submerged in a beaker containing the solution of interest and immediately introduced to the VI process at 25 °C ± 2 °C, carried out in a desiccator connected to a vacuum controller (VACUUBRAND GMBH + CO KG, Wertheim, Germany) and a vacuum pump, as described by [24]. Based on preliminary experiments, a protocol with a minimum absolute pressure of 150 mbar was chosen to establish maximum weight gain and avoid tissue damage. During the first phase of VI, the pressure was gradually decreased from 1000 mbar to 150 mbar over 16 min and kept at 150 mbar for 2 min. During the second phase, the vacuum was released and the pressure progressively increased to atmospheric pressure over 30 min and was kept at atmospheric pressure for 15 min. The total treatment time was 63 min, and this cycle was repeated twice. After the VI process, the excess solution on the surface of the spinach leaves was removed with tissue paper and the weight gain (50% ± 1.5) of each leaf was recorded.
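As a quick consistency check on the stated timings, the phase durations can be laid out programmatically. This minimal Python sketch uses only the values given above; it is an illustration, not part of the original protocol.

```python
# The VI pressure schedule exactly as described above; a quick check that
# the phase durations sum to the stated 63 min cycle.
VI_SCHEDULE = [
    # (phase, duration_min, start_mbar, end_mbar)
    ("phase 1: ramp down",  16, 1000,  150),
    ("phase 1: vacuum hold", 2,  150,  150),
    ("phase 2: ramp up",    30,  150, 1000),
    ("phase 2: final hold", 15, 1000, 1000),
]

for name, minutes, p0, p1 in VI_SCHEDULE:
    print(f"{name}: {minutes} min, {p0} -> {p1} mbar")
print("total:", sum(step[1] for step in VI_SCHEDULE), "min")  # 63 min
```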
Sample Preparation
Non-impregnated and impregnated leaves were placed in a closed polypropylene container (10 leaves per container) with saturated humidity and left in darkness at 4 °C ± 0.3 °C for 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5 and 7 days. For the analysis of proline and chlorophyll, freeze-drying was carried out using a laboratory freeze-dryer (FreeZone, Labconco, Kansas City, MO, USA) for 3 days. After drying, the leaves were ground to a fine powder with a mortar and pestle.
Proline
Proline was extracted from non-impregnated and impregnated spinach leaf powder in 30 g/L sulfosalicylic acid at 100 °C for 10 min with shaking. The extract was then centrifuged at 10,000 rpm for 15 min and the supernatant was collected and stored at 4 °C for proline determination. Of the supernatant, 2 mL was mixed with 2 mL of glacial acetic acid and 3 mL of acid ninhydrin reagent and boiled for 30 min. After 5 min of cooling, the reaction mixture was extracted with 4 mL of toluene and the absorbance of the organic phase was recorded at 520 nm using a spectrophotometer (GENESYS 30, Thermo Scientific, Waltham, MA, USA). The proline content was expressed as mg proline per 100 g dry mass. The experiment was done in triplicate for each non-impregnated and impregnated sample [25].
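Converting the A520 reading to a proline content requires a standard curve, which is not given in the text. The sketch below is a hypothetical illustration of that conversion: the slope, intercept and extract volume are placeholder parameters and must be replaced with values from proline standards processed as described above.

```python
def proline_mg_per_100g(a520: float, dry_mass_g: float,
                        slope: float = 0.01, intercept: float = 0.0,
                        extract_volume_ml: float = 10.0) -> float:
    """Proline content (mg per 100 g dry mass) from the toluene-phase A520.

    slope/intercept are HYPOTHETICAL standard-curve parameters (absorbance
    per ug/mL of proline in the assayed extract); the extract volume is
    likewise illustrative, as it is not stated in the text.
    """
    conc_ug_per_ml = (a520 - intercept) / slope      # ug proline per mL extract
    total_mg = conc_ug_per_ml * extract_volume_ml / 1000.0
    return total_mg / dry_mass_g * 100.0
```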
Chlorophyll
Dried samples of non-impregnated and impregnated spinach leaves (0.5 g) were homogenized in a tissue homogenizer with 10 mL of acetone and centrifuged at 10,000 rpm for 15 min at 4 °C. Of the supernatant, 0.5 mL was mixed with 4.5 mL of acetone and analyzed for chlorophyll content using a spectrophotometer (GENESYS 30, Thermo Scientific, Waltham, MA, USA). The absorbance of the solution mixture was read at 663 nm for chlorophyll a and 645 nm for chlorophyll b [26].
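Reference [26] is cited for the calculation, but the equations are not reproduced in the text. One commonly used option, assumed here, is the classical Arnon (1949) relations for acetone extracts:

```python
def chlorophyll_mg_per_l(a663: float, a645: float) -> tuple[float, float, float]:
    """Chlorophyll a, b and total (mg/L of extract) from A663 and A645.

    Classical Arnon (1949) coefficients for acetone extracts; an ASSUMED
    choice, since the cited method [26] is not spelled out in the text.
    The dilution (0.5 mL supernatant + 4.5 mL acetone = 10x) must still be
    applied, and mass-basis results require the extract volume and the
    dry sample mass.
    """
    chl_a = 12.7 * a663 - 2.69 * a645
    chl_b = 22.9 * a645 - 4.68 * a663
    total = 20.2 * a645 + 8.02 * a663
    return chl_a, chl_b, total
```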
Weight Loss
The weight of non-impregnated and impregnated leaves was recorded before and after storage at 4 °C every 12 h for up to 7 days. The weight loss of the spinach leaves was expressed as a percentage (%) [27].
Total Soluble Solids
Non-impregnated and impregnated spinach leaves were blended and 1 mL of the juice was used to determine the total soluble solids using a digital refractometer (PR-201α, ATAGO, Tokyo, Japan). The total soluble solids were expressed as °Brix [28].
Titratable Acidity
Non-impregnated and impregnated spinach leaves were blended and 5 mL of the juice was used to determine the titratable acidity by titration with 0.1 M sodium hydroxide. The volume of sodium hydroxide used to reach the endpoint was recorded for the calculation of titratable acidity. The titratable acidity was expressed as the percentage (%) of oxalic acid [28].
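The calculation itself is standard acid-base stoichiometry (the exact formula from [28] is not reproduced here); for a diprotic acid such as oxalic acid (MW ≈ 90 g/mol, equivalent weight 45 g/eq), a plausible implementation is:

```python
def titratable_acidity_percent(v_naoh_ml: float,
                               naoh_molarity: float = 0.1,
                               sample_ml: float = 5.0,
                               eq_weight_mg_per_mmol: float = 45.0) -> float:
    """Titratable acidity as % oxalic acid (w/v).

    eq_weight = 90 g/mol / 2 protons = 45 mg per mmol of NaOH; standard
    stoichiometry, ASSUMED here since [28] is cited without the formula.
    """
    mg_acid = v_naoh_ml * naoh_molarity * eq_weight_mg_per_mmol  # mg oxalic acid
    return mg_acid / 1000.0 / sample_ml * 100.0  # g per 100 mL of juice

# Example: 1.0 mL of 0.1 M NaOH for a 5 mL juice aliquot -> 0.09 % oxalic acid,
# consistent with the initial value reported in the Results.
print(titratable_acidity_percent(1.0))
```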
Color
Color measurements were performed using a chroma meter (CR-410, Konica Minolta, Chiyoda City, Tokyo, Japan). An illumination area of φ50 mm/φ53 mm and the 2° Standard Observer closely matching CIE 1931 (illuminant D65) were used. The L*, a*, b* values of the spinach leaves were recorded for non-impregnated and all impregnated leaves every 12 h for 7 days.
Overall Visual Quality
The visual observation of non-impregnated and all impregnated spinach leaves was taken with a camera (Apple iPhone 6 Plus, Apple Inc., Hessen, Germany) for every 12 h for 7 days.
Sensory Evaluation
A total of 60 untrained panelists aged from 22 to 35 (25 men and 35 women) were involved in the sensory evaluation. They were required to evaluate the non-impregnated and impregnated spinach leaves on days 0, 1, 3, 5 and 7 in terms of freshness, color, texture, smell and overall appearance. For each category, a 5-point scale was used, with 5 the highest score and 1 the lowest. The total score (%) for each spinach sample was calculated as follows [29]:
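The equation from [29] is not reproduced in the extracted text. A common convention, assumed in this sketch, is the sum of all panelist scores expressed as a percentage of the maximum attainable score:

```python
def total_score_percent(scores: list[int], max_per_panelist: int = 5) -> float:
    """Total score (%) for one attribute of one sample.

    ASSUMED convention: sum of all panelist scores divided by the maximum
    attainable (n_panelists x 5), times 100; the exact equation from [29]
    is not reproduced in the text.
    """
    return sum(scores) / (len(scores) * max_per_panelist) * 100.0
```

Under this convention, the day-0 freshness score of 62.7% reported below corresponds to a mean panelist score of about 3.1 out of 5.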
Statistical Analysis
The statistical significance (p < 0.05) of the treatments was tested by means of two-way analysis of variance (ANOVA) using Minitab Statistical Software (Minitab 19, LLC, Dayton, OH, USA). The Tukey-Kramer multiple comparison test was used to evaluate true differences in treatment means.
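The analysis was run in Minitab; for readers who prefer an open-source route, the sketch below shows an equivalent two-way ANOVA with Tukey comparisons in Python's statsmodels. The file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical tidy data: one row per replicate measurement.
df = pd.read_csv("spinach_measurements.csv")  # columns: treatment, day, proline

# Two-way ANOVA: treatment (SA/GABA/sucrose/none) x storage day, p < 0.05.
model = smf.ols("proline ~ C(treatment) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey-Kramer multiple comparison of treatment means.
print(pairwise_tukeyhsd(df["proline"], df["treatment"], alpha=0.05))
```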
Long Term Metabolic Response: Effects of Different Impregnating Solutions on Proline Content of Spinach Leaves
Our findings (Figure 1a) show that, as early as 0.5 days after impregnation, GABA impregnated leaves showed a sharp increase in proline content of 310% (865.6 ± 30.9 mg/100 g), which then declined steadily, decreasing significantly from the 4th day (236.1 ± 5.8 mg/100 g) of storage. Sucrose impregnated leaves recorded an increase in proline content of 97% (409.5 ± 28.1 mg/100 g) at 0.5 days, which kept rising to 214% (652.4 ± 40.0 mg/100 g) on the 2nd day before declining for the rest of storage. An increase in proline content was also observed in SA impregnated leaves 0.5 days after impregnation, by 60% (354.2 ± 16.7 mg/100 g), with the highest increase, 103% (453.2 ± 49.0 mg/100 g), observed on the 1st day of storage, before decreasing until the end of storage. The proline content in both GABA and sucrose impregnated leaves returned to its initial value on day 4.5 of storage, whereas in SA impregnated leaves it returned to its initial value on day 2.
On the other hand, non-impregnated leaves lost up to 94% of their proline content by the 7th day of storage (12.8 ± 2.4 mg/100 g). Statistical analysis shows that the proline content of GABA and sucrose impregnated leaves was significantly different from non-impregnated leaves on all days, but no significant differences were observed between non-impregnated and SA impregnated leaves.
Long Term Metabolic Response: Effects of Different Impregnating Solutions on Chlorophyll Degradation in Spinach Leaves
Impregnation with all three solutions resulted in an increase in chlorophyll content compared to non-impregnated leaves. The chlorophyll content of all impregnated leaves was significantly different from non-impregnated leaves on all storage days. Figure 1b shows that an increase in chlorophyll content was observed in leaves impregnated with SA by 2% (2814 ± 200.6 mg/g), GABA by 18% (3332 ± 432.7 mg/g) and sucrose by 26% (3613 ± 380.8 mg/g) after 0.5 days of storage, followed by a gradual reduction throughout the 7 days of storage, with a significant decrease on the 2nd day. The chlorophyll content of SA, GABA and sucrose impregnated leaves returned to its initial value on day 1, day 1.5 and day 2, respectively.
At the end of storage, much lower chlorophyll reduction was observed in SA, GABA and sucrose impregnated leaves, at 57% (1180 ± 155.0 mg/g), 49% (1451 ± 117.0 mg/g) and 55% (1278 ± 148.6 mg/g), respectively, compared to non-impregnated leaves, which lost up to 80% (551 ± 23.3 mg/g) of chlorophyll by the 7th day of storage.

Figure 1c shows that, after 7 days of storage, non-impregnated leaves showed an 8.4% loss of weight, which was slightly higher (though not significantly different) than the 8% weight loss in SA impregnated leaves. Meanwhile, both GABA (3%) and sucrose (4.2%) impregnated leaves showed noticeably lower weight loss throughout the 7 days of storage compared to non-impregnated leaves. Both non-impregnated and all impregnated leaves showed significant weight loss after 3.5 days of storage.

Figure 2a shows that the initial total soluble solids value recorded in non-impregnated leaves was 8.0 °Brix. Throughout storage, the total soluble solids content in non-impregnated and all impregnated leaves decreased gradually, with a significant decrease observed on day 2.5. At the end of storage, sucrose impregnated leaves (5.7 °Brix) showed the highest total soluble solids content, followed by non-impregnated leaves (3.7 °Brix), GABA (3.3 °Brix) and SA (2.7 °Brix) impregnated leaves. Statistical analysis shows that the total soluble solids content of all three impregnated leaves was significantly different from non-impregnated leaves on all storage days.

Figure 2b shows that the initial oxalic acid percentage in non-impregnated leaves was 0.09%. After 0.5 days, the non-impregnated and SA impregnated leaves recorded increases in oxalic acid of 67% and 120%, respectively, while in GABA and sucrose impregnated leaves the oxalic acid percentage decreased steadily with time. Statistical analysis showed no significant difference in oxalic acid (%) between storage days; however, the percentage of oxalic acid in all three impregnated leaves was significantly different from non-impregnated leaves.
Physicochemical Changes: pH Value
Our findings in Figure 2c show that the same trends in TSS/TA ratio and pH were observed in non-impregnated and all impregnated leaves. Throughout the 7 days of storage, the pH values of non-impregnated and SA impregnated leaves decreased, but increased gradually in GABA and sucrose impregnated leaves. However, there was no significant difference in pH value throughout the 7 days of storage for any of the non-impregnated or impregnated leaves.
Physicochemical Changes: Color
Figure 3a shows the L* (lightness) values for non-impregnated and all impregnated leaves, with a significant decrease from day 3 onwards. Compared to non-impregnated leaves, there was no significant difference in L* values for sucrose impregnated leaves on any day, but significant differences were observed for GABA and SA impregnated leaves.
The −a* values (Figure 3b), which represent the green color, showed a continuous reduction in non-impregnated and all impregnated leaves throughout storage, particularly in non-impregnated and SA impregnated leaves. A significant decrease was observed from day 1 onwards for both non-impregnated and all impregnated leaves. Statistical analysis shows that the −a* values of all three impregnated leaves were significantly different from non-impregnated leaves. Figure 3c shows that the b* (yellow) values for non-impregnated and all impregnated leaves increased continuously throughout the 7 days of storage, with the most significant increase observed in non-impregnated leaves from day 3.5 onwards. Statistical comparisons show that all impregnated leaves except SA impregnated leaves differed significantly in b* values from non-impregnated leaves across all storage days. Figure 4 shows the visual observation of non-impregnated leaves and SA, GABA and sucrose impregnated leaves throughout 7 days of storage under chilling conditions. Non-impregnated leaves started to show chilling injury symptoms as early as 5 days of storage, whereas all impregnated leaves were still in good condition even on the 7th day of storage.
Sensory Evaluation
The sensory evaluation showed that the freshness, color, texture, odor and overall appearance scores of non-impregnated leaves on day 0, i.e., the freshly bought spinach leaves, were 62.7%, 71.7%, 53%, 69.3% and 62.7%, respectively.
Based on Table 1, the non-impregnated leaves and the SA, GABA and sucrose impregnated leaves were significantly different in the texture category. The non-impregnated leaves were significantly different from GABA and sucrose impregnated leaves in the freshness and overall appearance categories, whereas in the color category only GABA impregnated leaves differed significantly from non-impregnated leaves. For the odor parameter, however, none of the impregnated leaves differed significantly from non-impregnated leaves.
Throughout the 7 days of storage, there was no significant difference in freshness, texture, odor or overall appearance for non-impregnated and all impregnated leaves. A significant difference was observed only in the color category, between day 0 and day 7.
Nevertheless, on the 7th day of storage, non-impregnated leaves showed the lowest scores for freshness (42%), color (58%), texture (44%) and overall appearance (51%) compared to the impregnated leaves. Among the impregnated leaves, GABA impregnated leaves showed the highest scores in all categories, namely freshness (67%), color (71%), texture (69%), odor (68%) and overall appearance (69%), followed by sucrose and SA impregnated leaves.
Effect of VI on Proline Content of Spinach Leaves
Our results clearly show that impregnation of GABA, sucrose and SA into spinach tissue resulted in increases in proline of 240%, 153% and 103%, respectively, on day 1 of storage at chilling temperature (Figure 1a). The amino acid proline is known as one of the major organic osmolytes that accumulate in a variety of plants in response to environmental stresses such as extreme temperatures, drought, salinity, UV radiation and heavy metals [30]. Luo et al. [31] stated that proline defends plants by functioning as a cellular osmotic regulator between cytoplasm and vacuole and by detoxifying reactive oxygen species (ROS), thus protecting membrane integrity and stabilizing antioxidant enzymes. Therefore, higher proline accumulation can help reduce the damage resulting from low temperature abuse during cold storage of spinach leaves.
In plants, the precursors for proline biosynthesis are L-glutamic acid and ornithine. Two enzymes involved in the proline biosynthetic pathway (Figure 5) through L-glutamic acid are pyrroline-5-carboxylate synthetase (P5CS) and pyrroline-5-carboxylate reductase (P5CR), while ornithine δ-aminotransferase (OAT) takes part through the ornithine pathway [32]. Based on our results, the highest proline content was observed in GABA impregnated leaves, followed by sucrose and SA impregnated leaves. In contrast, the proline content in non-impregnated leaves decreased by as much as 94% over the 7 days of storage. These results are supported by previous research of Shang et al. [32] on the effect of exogenous GABA treatment on chilling injury in peach after long term cold storage, which reported that immersion of peaches in GABA for 10 min could reduce chilling injury by enhancing the accumulation of proline and of endogenous GABA. That study also noted that proline accumulation depends on its degradation, which is catalyzed by proline dehydrogenase (PDH), and revealed that GABA treatment increased the activities of P5CS and OAT but decreased PDH activity in peaches under chilling stress. The increase of P5CS and OAT activities could enhance proline biosynthesis, while the decrease of PDH activity would contribute to lower degradation of proline in the GABA-treated peaches.
The increase in proline content in sucrose impregnated leaves is supported by previous research of Cao et al. [33], who found that exogenous sucrose feeding of cucumber seedlings improved chilling tolerance and resulted in higher proline content. Ashraf and Foolad [30] explained that proline mitigates chilling injury by providing sufficient reducing agents upon relief of stress, which supports mitochondrial oxidative phosphorylation and the generation of ATP for recovery from stress and repair of stress-induced damage. In addition to proline's role in chilling tolerance, endogenous sucrose also contributes to the mitigation of chilling injury by activating antioxidant enzymes [33].
Previous studies [9][10][11] reported that the application of exogenous SA could improve proline accumulation in fruits and vegetables, thus alleviating damage from chilling temperature storage. This was also found in our results, where vacuum impregnation of SA into spinach tissue increased proline accumulation by 103% compared to untreated leaves. It is believed that exogenous SA can activate P5CS, which triggers proline accumulation [34].
Effect of VI on Chlorophyll Content of Spinach Leaves
Chlorophyll degradation causes loss of green color or yellowing in spinach leaves, subsequently decreasing the market value of spinach [35]. Our findings showed that GABA, sucrose and SA impregnated leaves limited chlorophyll degradation to 49%, 55% and 57%, respectively, compared to the non-impregnated leaves, which lost up to 80% of chlorophyll by the 7th day of storage (Figure 1b). Similar findings were reported by Rezaei-chiyaneh et al. [36], Huang et al. [37] and Xu et al. [22] on the effects of GABA, SA and sucrose, respectively, on chlorophyll degradation. Huang et al. [37] explained that chilling stress can cause membrane damage, ROS generation and toxic compound accumulation, which can lead to a reduction of chlorophyll content, disintegration of chloroplast membranes, disruption of photosystem biochemical reactions and reduction of photosynthetic activity. Meanwhile, Xu et al. [22] suggested that the delay of chlorophyll degradation by post-harvest sucrose treatment might be related to the inhibition of enzyme activities and the expression of genes associated with chlorophyll degradation.
Our chroma meter results show that L* values (Figure 3a) for non-impregnated, GABA and sucrose impregnated leaves increased throughout the 7 days of storage; the exception was SA impregnated leaves, which showed a decreasing pattern in L* values. Meanwhile, the −a* values (Figure 3b), which indicate the green color, recorded a decreasing trend for non-impregnated and all three impregnated leaves. These values are in line with the chlorophyll content, which also decreased throughout storage; however, a significant decrease was observed in non-impregnated and SA impregnated leaves throughout the storage days. For b* values (Figure 3c), which signify yellow color, non-impregnated and SA impregnated leaves showed higher values than GABA and sucrose impregnated leaves at the end of storage. These values indicate that yellowing of spinach leaves can be delayed by GABA and sucrose treatments, which limited chlorophyll degradation to 49% and 55%, respectively, compared to the 80% reduction in non-impregnated leaves at the end of storage.
Effect of VI on Physicochemical Changes of Spinach Leaves: Weight Loss, Total Soluble Solids, Titratable Acidity and pH
Physicochemical changes of the non-impregnated and impregnated spinach leaves were observed as possible indicators or symptoms of chilling injury. Weight loss in fruits and vegetables is mainly due to the loss of water caused by transpiration and respiration [38]. Based on our results, GABA impregnated leaves recorded the lowest percentage of weight loss after 7 days of storage, at 3%, followed by sucrose (4.2%) and SA (8%) impregnated leaves, while higher weight loss was observed in non-impregnated leaves (8.4%) (Figure 1c). The higher loss of water in SA impregnated leaves can be explained by the low solubility of SA in water compared to GABA and sucrose. According to Nordstrom and Rasmuson [39], the hydroxyl group in SA is hydrogen bonded intramolecularly to the carbonyl oxygen, which reduces intermolecular hydrogen bonding and explains its low solubility in water. Therefore, more non-bonded water is available in SA impregnated leaves compared to GABA and sucrose impregnated leaves, leading to a higher loss of unbound water.
According to Cavalcanti et al. [40], total soluble solids reflect the sugar content in spinach leaves. Our results show that sucrose impregnated leaves had the significantly highest total soluble solids, followed by non-impregnated leaves, GABA and SA impregnated leaves (Figure 2a). The high soluble solids content in sucrose impregnated leaves (8 °Brix ± 1.2) was due to the exogenous sucrose feeding, which results in higher sucrose content in the spinach leaves [33]. On the other hand, the average titratable acidity of GABA impregnated leaves (0.09% ± 0.01) was the lowest, followed by sucrose impregnated leaves (0.1% ± 0.02), non-impregnated leaves (0.13% ± 0.02) and SA impregnated leaves (0.18% ± 0.03) (Figure 2b). The TSS/TA results are in line with the pH values of the leaves: on the 7th day of storage, sucrose impregnated leaves (pH 5.85) recorded the highest pH (least acidic), followed by GABA (pH 5.76), non-impregnated (pH 5.51) and SA (pH 5.45) impregnated leaves (Figure 2c). The changes in pH and TA reflect the administration of the SA solution at pH 3.11, the GABA solution at pH 5.94 and the sucrose solution at pH 6.43. Throughout storage, SA impregnated and non-impregnated leaves showed an increasing trend in TA, while GABA and sucrose impregnated leaves showed a decreasing trend. Therefore, the TSS/TA ratio increased for SA impregnated and non-impregnated leaves and decreased for GABA and sucrose impregnated leaves, and the pattern of changes in pH followed the changes in TSS/TA ratio. This result agrees with the findings of Taghipour et al. [41], where pomegranate showed the same trends in pH and TSS/TA ratio as a result of intermittent warming.
Sensory Evaluation
Sensory evaluation of spinach leaves with and without impregnation of GABA, SA and sucrose was conducted in order to evaluate the chilling injury of the spinach and to acquire consumer preferences. The scores given by the 60 panelists were categorized into freshness, color, texture, odor and overall appearance. For freshness, there was no significant decrease in the scores of GABA impregnated leaves stored from day 0 to day 7 at 4 °C, indicating that spinach leaves impregnated with GABA can maintain their freshness even over 7 days of storage. On the other hand, non-impregnated leaves were given lower scores than sucrose and SA impregnated leaves, signifying that sucrose and SA impregnation could improve the freshness of spinach leaves during storage at chilling temperature. For color and texture, spinach leaves impregnated with all three compounds scored higher than non-impregnated leaves. These results are in line with both the chlorophyll (Figure 1b) and proline (Figure 1a) contents, as the green color of spinach leaves reflects the chlorophyll content, and lower chlorophyll degradation accompanied the increase in proline content; thus, the green color was maintained. Meanwhile, for odor, there were no significant differences in the scores of non-impregnated and impregnated spinach leaves throughout storage, as the impregnated substances did not exhibit any trace of odor upon treatment of the spinach leaves; impregnation with these three compounds therefore did not impart any unpleasant chemical smell. The overall appearance scores revealed that, at the end of storage, GABA impregnated leaves (68.7%) scored highest, followed by sucrose impregnated leaves (61%), SA impregnated leaves (56.7%) and non-impregnated leaves (51%). From the organoleptic evaluation, it can be concluded that all compounds used are able to reduce chilling injury in spinach leaves, with GABA being the most effective, followed by sucrose and SA.
Conclusions
This study explored the metabolic responses of spinach tissue following the application of VI with SA, GABA and sucrose. The main results are as follows:

1. All of these substances are able to increase the proline content in spinach leaves, with increases of 240%, 153% and 103% in GABA, sucrose and SA impregnated leaves, respectively. At the same time, these substances were shown to mitigate chilling injury in spinach leaves based on the sensory evaluation, in which all impregnated leaves scored better than non-impregnated leaves.

2. Impregnation of these substances also improved the chlorophyll content, subsequently lowering chlorophyll degradation in all impregnated leaves compared to non-impregnated leaves; minimizing chlorophyll degradation delayed the yellowing of the spinach leaves.

3. Spinach leaves impregnated with GABA and sucrose recorded lower percentages of weight loss, indicating that GABA and sucrose are able to maintain textural integrity by preventing loss of water through transpiration and respiration throughout the 7 days of storage.

4. The changes in pH value for all three impregnated leaves were not significantly different from those of the non-impregnated leaves.

5. The organoleptic evaluation revealed that all compounds used are able to reduce chilling injury in spinach leaves, with GABA being the most effective, followed by sucrose and SA.
Depletion of occupational performance effectiveness in electric power engineering industry: psychophysiological factors and risk evaluation
ABSTRACT
Modern society increases the workload and the impact of environmental factors on the efficiency of occupational performance and on the health and safety of workers. These tendencies require the improvement of occupational psychophysiological selection, which is developing in several directions. One of them, the traditional direction, involves specifying the list of professionally important qualities and the ways to evaluate their level. The other direction is related to applying methods for analyzing the complex of psychophysiological occupational qualities through the use of various mathematical and statistical techniques and approaches.
Within the framework of the first direction, a complex of psychophysiological qualities significant for the occupational activity of workers has been revealed. Ranking these psychophysiological qualities by importance makes occupational selection more purposeful. 1,2 Investigations have established that the traditional methods used in professional selection can be supplemented by genetic approaches, which can increase their effectiveness and reasonably predict unwanted deviations in the behavior of workers in extreme conditions. 3 A new technology for determining professional success has also been offered. 4 This technology involves a video survey of employees in order to evaluate the performance of their work. The openness of the survey demonstrated a moderate connection between the utility of the received measurements and the attitude of employees towards the proposed selection technology.
Insufficient development of a worker's own psychophysiological qualities causes inefficiency in the cooperation between man and machine and can lead to loss of work safety and depletion of occupational health. 5 By means of expert assessments, it has been established that successful professional activity of motor vehicle drivers requires training of sensorimotor reactions and other professionally important qualities that reflect the properties of higher nervous activity. Moreover, workers with commitment and sympathy towards their work are more likely to maintain a healthy and safe life. 6 The second direction in the development of occupational psychophysiological selection involves using modern mathematical techniques to collect a set of informative indicators and to aggregate the information in order to obtain clear answers about the degree of workers' fitness for professional activities. The analysis of typical structures of occupational selection strategies includes approaches to their formal description based on the multidimensional modification of the generalized structural method. 7 Modeling the main strategies of occupational selection allows evaluation of various aspects of the integrity of professional selection and identification of the optimal strategy for each specific goal. The proposed models describe various typical structures of occupational selection strategies. They make it possible to deduce the necessary relationships, to evaluate certain aspects of the integrity of a selection strategy, and to compare different professional selection strategies by a set of indicators in order to identify the optimal one for a specific goal under given restrictions.
Factor models (Lenzenweger, 2015) and materials of external expert assessment (Shwetz and Kalnysh, 2007) are also very useful tools for studying the most valuable psychological and behavioral characteristics of a worker. 8,9 The split into key components in the personnel selection procedure is also achieved by using a modular approach (Lievens and Sackett, 2017). 10 Using factor analysis, scientists identify a number of characteristics that influence the ability to carry out professional activities. According to Persch et al., 11 these are (1) cognitive abilities; (2) communication skills; (3) computer skills; (4) more specific abilities related to problem performance; (5) interpersonal communication skills; (6) related abilities; (7) physical abilities; (8) the ability to provide security; (9) independence; and (10) abilities associated with adapting to the structure of a particular activity.

In the electric power engineering industry, one of the key factors that causes significant strain on the body of operational personnel is the information-loading factor. 12 Research emphasizes additional factors of pressure: the remoteness of the operator from the controlled object; varying load on the operator's senses (mainly on the visual analyzer); an excessively high pace of work or monotony; and the fact that in the thermal power plant (TPP) management system the operator, as a rule, works under conditions of time shortage. Faults at TPPs attributable to personnel mainly relate to operational switching, during which short circuits occurred or switching was performed under voltage; violation of the sequence of operations according to the switching form, or execution of switching without a form; and incorrect actions with relay protection and automation overlays. The main causes of operator errors are violations of the rules for switching in electrical installations; lack of the necessary accuracy in maintaining operational schemes, earthing and operational management; and lack of appropriate control of switching by administrative and technical personnel services as well as by operational personnel. Most mistakes were made by operational staff at the beginning or at the end of their term and were due to insufficient seniority in the post (up to 1-2 years). In general, the study of accidents and injuries in the energy power industry showed that the "human factor" was the cause in 18.8% of cases. Some research emphasizes differences between male and female performance; in particular, a quantitative investigation of railway station controllers asserts that the frequency of mistakes by female operators is 2.16 times greater than by male operators. 13

The analysis of recent publications shows the importance of work devoted to psychophysiological professional selection focused on identifying a list of informative indicators and aggregating them into one integral indicator that reveals whether a person belongs to one or another category of suitability. However, for timely and flexible management of human resources workload, more is needed: not a jump-like estimate of a person's professional abilities, but an estimate that changes smoothly and allows planning of work assignments that are more tolerable and safe for the worker's psychophysiological state.
The goal of our investigation is to develop a technique for estimating the risk of depletion in the efficiency of occupational performance for operative service workers in the electric power engineering industry.
METHODS
The characterization of occupational risks of reduced professional ability involves establishing the natural psychophysiological causes of their formation during a long period of professional activity. The risk assessment procedure is a multi-stage process that evaluates the probability of losing an appropriate level of efficiency because of the adverse impact of occupational activity. From a practical point of view, risk assessment is necessary to create adjustment decisions to manage these risks, and risk management serves to develop practical measures aimed at eliminating or reducing them. On the other hand, the existence of quantitative information about the level of risk makes it possible to manage the adequate placement of personnel, taking into account the complexity of their work and the risk of critical depletion of their performance under extreme conditions. In our investigation, we examined the materials of a psychophysiological survey using multivariate statistics, dispersion analysis and binary-choice regression models. We obtained quantitative results using the EViews 8.0 software.
The investigation was based on a survey of workers encompassing 100 exogenous psychophysiological indicators. The sample comprised observations of operative service workers in the electric power engineering industry in Ukraine. The survey technique comprised several methodologies, including pendulum, adaptability, memory, square, triangle, square-circle, triangle-circle, attention switching, clock, closed space and other tools, as well as personal information.
Choice of the most important physiological indicators: factor analysis
Many of the initially observed indicators were closely correlated and carried similar information. To construct a relevant analysis and adequately evaluate the risk of loss in occupational effectiveness, it was necessary to select a smaller number of factors that nevertheless fully reflect the important properties of the data set. For this purpose we used factor analysis at the first stage of our research.
According to the concept of factor analysis, we describe the multidimensional vector X_i, which includes the observations of the n indicators for each worker i, by the matrix equation

X_i = μ + L·G_i + u_i,    (1)

where μ is an (n × 1) vector that describes the means of the variables; L is an (n × m) matrix of coefficients (factor loadings); G_i is an (m × 1) vector of unobserved variables, termed common factors; and u_i is an (n × 1) vector of errors, or unique factors, i.e., special subjective factors that randomly affect the measurement results.
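The computations in the paper were carried out in EViews 8.0; purely as an illustration of the decomposition in (1), the following Python sketch fits a factor model with scikit-learn. The file name and the choice of m = 7 components (matching the seven indicators used later) are assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical data matrix: one row per worker, one column per indicator.
X = np.loadtxt("survey_indicators.csv", delimiter=",")  # shape (workers, 100)

X_std = StandardScaler().fit_transform(X)    # centering plays the role of mu
fa = FactorAnalysis(n_components=7, random_state=0).fit(X_std)

loadings = fa.components_.T                  # the (n x m) loading matrix L
scores = fa.transform(X_std)                 # common-factor scores G_i
print(loadings.shape, scores.shape)
```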
The analysis of the loading matrix coefficients for the first, most important principal components in the structure of the surveyed indicators (Table 1) allowed us to select the following factors, among them:
• f1 = X4 - Total error (pendulum methodology);
• f2 = X7 - Number of positive values (pendulum methodology);
• f3 = X9 - Number of hits into zero (pendulum methodology).
As a first approach, the dependence of the binary depletion outcome Y_i on the selected factors f_j can be written as the linear probability model

Y_i = α_0 + α_1·f_1,i + α_2·f_2,i + ... + α_m·f_m,i + v_i.    (2)

The latter random term v_i takes into account measurement error or certain subjective stochastic factors that affect the survey results by chance.
Model (2) is a linear multivariable regression model whose parameters are estimated using the least squares (LS) method. The estimates from this regression determine the expected probability that Y_i = 1 for every observation i. The coefficient estimates of the linear probability model are interpreted as the change in the probability of a risk of professional qualities depletion when the corresponding explanatory variable (psychophysiological factor) varies by one unit while the remaining explanatory variables are held fixed.
However, the linear probability model of form (2) does not allow us to adequately evaluate the probability of a depletion risk in professional activity efficiency in our case, for two reasons. First, the predicted values are not constrained to the interval [0, 1], and therefore we would not be able to adequately interpret the evaluated risk. Second, statistical and mathematical analysis of model (2) indicates that the distribution of the random variable v_i is discrete, so the error is not close to normally distributed. In addition, the random term in the model is heteroscedastic, which prevents obtaining efficient estimates of the influence parameters α_j. The disadvantages of the linear probability model (2) can be overcome by a binary-choice (logit) specification.
RESULTS
Therefore, taking into account the previously conducted correlation and factor analysis that made it possible to distinguish the main impact factors, we estimate the multivariate logit model

P(Y_i = 1) = e^(Z_i) / (1 + e^(Z_i)),    (4)

Z_i = β_0 + β_1·X4_i + β_2·X7_i + β_3·X18_i + β_4·X31_i + β_5·X40_i + β_6·X47_i + β_7·X89_i + β_8·AGE_GROUP_i.    (5)

Specification (5), in addition to the factors of the psychophysiological state of the worker, includes the variable AGE_GROUP, which describes the age of the worker. The investigation analyzes 4 age categories of workers: group 1 collects workers under the age of 29; group 2 includes workers aged 30 to 39; group 3 includes workers aged 40 to 49; group 4 collects workers aged 50 and older.
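Again, the original estimation was done in EViews 8.0; an equivalent sketch with statsmodels, using the variable names from (5) (file and column names hypothetical), is:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file: the seven selected indicators, the age group and the
# binary outcome Y (1 = depletion of professional qualities observed).
df = pd.read_csv("workers.csv")

features = ["X4", "X7", "X18", "X31", "X40", "X47", "X89", "AGE_GROUP"]
exog = sm.add_constant(df[features])
logit = sm.Logit(df["Y"], exog).fit()
print(logit.summary())

df["risk"] = logit.predict(exog)   # estimated probability R for each worker
```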
We analyzed the ability of the model to correctly evaluate the probability of professional skills drain and, accordingly, to predict the risk of a critical depletion in professional activity efficiency and health. A comparison of actual and predicted values for different age groups, performed on the basis of Andrews goodness-of-fit tests (Table 2), confirmed the adequacy of the evaluation technique for the risk of critical reduction in professional competence and the correctness of the modeling for all age groups.
The developed model (4)-(5) made it possible to group and quantify the confidence intervals for risks. The Hosmer-Lemeshow goodness-of-fit test (Table 3) likewise confirmed the adequacy of the model fit. For the developed multivariate logit model, unlike linear regression, the marginal effects of factors are not constant and depend on the values of all factors. Therefore, we evaluated the effect of change in the significant factors at given average stable levels of all other factors (Figures 1-2), also taking into account the different age categories. The simulation results showed that for workers with high values of average reaction time, X18, defined by the square-circle technique, ceteris paribus, we predict a high risk of professional efficiency depletion. On the contrary, low values of X18 yield a low probability of the negative expectation (Figure 1). At the same time, the results for the different age groups are rather close, which indicates that the age factor is not relevant when investigating the impact of this psychophysiological indicator.
DISCUSSION
The analysis also revealed that an increase in the number of positive values defined by the pendulum methodology (X7) and in the variability (X40) leads to a reduction in the risk of a critical depletion of successful professional activity. In addition, the curve of the marginal effects of factor X7 is steeper than that of factor X40 (Figure 2B). This result indicates that the impact of the factor describing the number of positive values (pendulum methodology) is stronger than the effect of factor X40, which characterizes the variability. The simulation showed (Figure 2A) that for a worker with average values of all factors but a low number of positive values by the pendulum methodology, regardless of age group, we expect a significant degree of risk concerning the loss of professional qualities; however, for values of X7 exceeding 500 such a risk is practically absent. At the same time, both factors X7 and X40 give almost the same results for the different age groups of workers.
Therefore, we find that the age of a worker is not a determinative factor significantly influencing occupational performance efficiency and does not increase the risk of its depletion in the electric power engineering industry. This is a slightly stronger conclusion compared with previous results obtained in the psychophysiological assessment of operational reliability and working capacity support of military operators (Kalnysh, Shvets, 2011) and in the investigation of labor productivity of military managers suffering from hypertonia caused by professional activity (Saliev, Shvets, 2013). 14,15
Risk evaluation rule
The developed model implies that the probability of loss of occupational qualities for a worker is calculated according to the formulas

R = e^Z / (1 + e^Z),    (6)

Z = β_0 + β_1·X4 + β_2·X7 + β_3·X18 + β_4·X31 + β_5·X40 + β_6·X47 + β_7·X89 + β_8·AGE_GROUP,    (7)

where the β_j are the coefficients estimated in model (4)-(5). Our research thus develops an approach for evaluating the risk of critical depletion of occupational effectiveness and working health. Specifically, to determine the risk level for an individual worker, three steps should be performed (a code sketch is given after the list):

1) On the basis of a psychophysiological examination of the worker, determine the values of seven psychophysiological indicators: based on the pendulum methodology, the total error (X4), the number of positive values (X7) and the variability (X40); based on the adaptability methodology, the level of adaptability (X31) and the time of task performance (X89); by means of the square-circle technique, the average reaction time (X18); and, based on the square technique, the variability (X47).
2) Substitute the values of the psychophysiological factors and the age of the worker into (7) and determine the value of the variable Z.
3) Based on formula (6), calculate the expected probability R, which characterizes the level of risk and the degree of occupational efficiency depletion.
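A direct implementation of this three-step rule is sketched below. The coefficient values are placeholders, since the fitted estimates of (7) are not reproduced in the text; they must be replaced with the values from the estimated logit model, and the sample worker values are illustrative only.

```python
import math

# PLACEHOLDER coefficients: the fitted estimates of formula (7) are not
# reproduced here and must be taken from the estimated logit model (4)-(5).
BETA = {"const": 0.0, "X4": 0.0, "X7": 0.0, "X18": 0.0, "X31": 0.0,
        "X40": 0.0, "X47": 0.0, "X89": 0.0, "AGE_GROUP": 0.0}

def depletion_risk(indicators: dict) -> float:
    """Steps 2-3: compute Z from formula (7), then R from formula (6)."""
    z = BETA["const"] + sum(BETA[k] * v for k, v in indicators.items())
    return math.exp(z) / (1.0 + math.exp(z))  # logistic transform, formula (6)

# Step 1 supplies the measured indicator values (illustrative numbers only):
worker = {"X4": 12.0, "X7": 480.0, "X18": 0.6, "X31": 3.2,
          "X40": 0.4, "X47": 0.5, "X89": 95.0, "AGE_GROUP": 2}
print(f"Estimated risk of efficiency depletion: {depletion_risk(worker):.2f}")
```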
Having obtained the value of R, it is also possible to evaluate the level of readiness of workers for work, in particular under difficult and extreme circumstances, and to decide on their involvement in the performance of occupational duties. Based on the developed methodology, we estimated the risk value for each person in the observed sample of 466 operative service workers in the electric power engineering industry in Ukraine (Figure 3).
We also calculated the percentage distribution of workers across 5 risk groups (Figure 4). The results showed that for almost 44% of existing employees, the risk of health and occupational effectiveness depletion is less than 0.2, which indicates their reliability and high readiness in case of danger. The investigation also shows that 84% of workers have a risk of effectiveness loss of less than 0.5, so they can expect success in their occupational activities. However, over 4% of workers may experience a critical reduction of their occupational qualities, with a risk factor above 70%. For 5 workers, the risk of health and effectiveness loss is greater than 80%, revealing their occupational unfitness and unreadiness to work in the electric power engineering industry, especially under difficult production circumstances.
CONCLUSION
The investigation revealed that the significant factors influencing the risk of occupational efficiency depletion are the variability, the total error and the number of positive values according to the pendulum methodology; the average reaction time according to the square-circle technique; the variability according to the square technique; and the adaptability and time of task performance according to the adaptability methodology.
The investigation revealed that for workers with average values of all factors but high values of the average reaction time by the square-circle technique, regardless of age group, we expect a high risk of reduction in professional qualities. However, an increase in the number of positive values (pendulum methodology) and in the time of task performance (adaptability methodology) leads to a reduction in the risk of a critical depletion of successful professional activity, with the effect of the first of these factors being stronger than that of the second.
Based on the developed approach, we estimated that for 84% of workers in the electric power engineering industry in Ukraine the risk of effectiveness loss is less than 0.5, so they can expect success in their occupational activities. However, over 4% of workers may experience a critical reduction in their occupational qualities, with a risk factor above 70%, which is very dangerous, especially in the difficult circumstances that energy production can face.
Clinicians’ perspective of picture archiving and communication systems at Charlotte Maxeke Johannesburg Academic Hospital
Background Picture archiving and communication systems (PACS) are now an established means of capturing, storing, distribution and viewing of all radiology images. The study was conducted in a quaternary hospital, Charlotte Maxeke Johannesburg Academic Hospital (CMJAH), part of the University of the Witwatersrand teaching circuit, in South Africa. Objectives To measure the clinicians’ perceived benefits and challenges of PACS. To document perceived views on how the current PACS can be improved. Method This was a cross-sectional observational study over a period of 5 months from September 2021 to January 2022 carried out at CMJAH. Questionnaires were distributed to referring clinicians with PACS experience. Descriptive statistics was conducted. Categorical variables were presented as frequency and percentages. The continuous variables were presented as means ± standard deviation. Results A survey with a response rate of 54% found the benefits most reported by clinicians were improved patient care, less time needed to review an exam, improved image comparison and consultation efficiency. With respect to perceived challenges, the unavailability of images at the bedside, problems with access and the lack of advanced image manipulating software were noted. The most frequent recommendations on improvements focused on the aforementioned challenges. Conclusion Hospital-wide PACS was viewed beneficial by most clinicians. Nonetheless, there are a few aspects that deserve attention to improve the functionality and access of the system. Contribution The findings will assist in future hospital or provincial-wide PACS deployment projects.
Introduction
Picture archiving and communication systems (PACS) are now an established, recognised and appropriate means of digital image acquisition, archiving, distribution and viewing. 1 The technology is unique in that it delivers the radiology diagnostic images and reports to the clinicians at the point of care. 2 Picture archiving and communication systems present an opportunity to eliminate film-based imaging. 2,3 In the past, the means for capturing, storing and viewing medical images was the hard copy film.
The last few years have seen a tremendous increase in the adoption of PACS in most radiology departments in South Africa. 4 The implementation of PACS began primarily in the private sector, with the public sector implementation of PACS significantly lagging behind because of lack of funding. 4 Currently, most public sector hospitals in South Africa have a mini PACS limited to the radiology department. A hospital-wide PACS was first installed in 2016 at the Charlotte Maxeke Johannesburg Academic Hospital (CMJAH). The onsite-PACS network was set up by Phillips architects, who configured the system connecting all the radiology imaging hardware, radiology and clinician workstations, using a software called iSite. The aim of this study was to evaluate the benefits and challenges clinicians perceived from the PACS at CMJAH, 6 years post-implementation of the system. In addition, their views on how the system could be improved were documented.
Study design
A cross-sectional, observational, descriptive study design based on a questionnaire survey was followed. A pre-tested questionnaire used by Jorweker et al. 1 was adopted and modified to suit our local environment. A four-point Likert scale and a categorical approach were used to elicit responses for the majority of statements. Responses to statements ranged from 1 (strongly disagree) to 4 (strongly agree), with 5 recorded as neutral.
Some opportunities for open-ended questions were included.
Study setting
The study population included referring doctors from different specialities who routinely referred patients to the radiology department for imaging. Interns, medical officers, registrars and consultants were among them. Further included were the registrars who had rotated through CMJAH from other university circuit hospitals. Radiologists and radiology registrars were excluded. The hospital is a quaternary government hospital situated in Johannesburg, in the Gauteng Province of South Africa.
Data collection
Data collection was performed over a 5-month period from 01 September 2021 to 31 January 2022. A sample size of 375 was calculated with the open source epidemiologic statistics calculator for public health 6 using a power of 80% at 0.05 alpha with a 95% confidence interval. Convenience sampling of the referring clinicians with PACS experience at the academic hospital resulted in the distribution of 682 questionnaires. The clinicians that responded were 372. The calculated response rate was 54.5%.
Two complementary methods of administering the questionnaire were employed: an online survey was used and hard-copy questionnaires were distributed at the time of academic meetings. For the online survey, a link was created and distributed to participants via e-mail. In both cases, participant information and consent sheets were distributed along with the survey questionnaires.
Data analysis
For the online survey, data were collected on Microsoft Forms. Data from the data collection sheets were entered into a Microsoft Excel spreadsheet. These data were imported into STATA ® Version 15 (Stata Corp) for further analysis. Descriptive statistics were conducted.
Data were grouped into categorical and continuous groups. Categorical variables were presented as frequency and percentages. The continuous variables such as the years of using PACS were presented as means ± standard deviation (s.d.) or medians and interquartile range (if not normally distributed). As part of quality assurance, data cleaning processes included checking for duplicates, missing values, recoding and categorising variables. Correlations between categorical data were assessed using the Pearson's chi-square or Fisher exact test. Pearson's correlation was done and Cronbach's alpha was used to check the reliability and validity of the questions.
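For illustration, a short sketch of the Cronbach's alpha computation on a block of Likert items is shown below; the analysis itself was run in STATA, and the response values here are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) array of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from five clinicians to three benefit statements (1-4 scale).
responses = np.array([[4, 4, 3],
                      [3, 4, 4],
                      [4, 3, 4],
                      [2, 2, 3],
                      [4, 4, 4]])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```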
The open-ended questions were analysed using a method of content analysis that determines the number of times certain qualities appear in a written text. In the context of this study, two coding units were used: words and themes.
Statistical analysis
Graphs of the results were generated. All statistical analyses were two-sided and p-values < 0.05 were statistically significant. For the purpose of using the 2 × 2 chi-square, the four-point Likert scale was collapsed into two categories: disagree included strongly disagree and moderately disagree and agree included strongly agree and moderately agree.
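A minimal sketch of this collapsing step and the resulting 2 × 2 test is shown below, using SciPy purely for illustration (the study used STATA); the cell counts are hypothetical.

```python
from scipy.stats import chi2_contingency, fisher_exact

def collapse_likert(score: int) -> str:
    """Collapse the four-point Likert scale: 1-2 -> 'disagree', 3-4 -> 'agree'."""
    return "disagree" if score <= 2 else "agree"

assert collapse_likert(3) == "agree" and collapse_likert(2) == "disagree"

# Hypothetical 2 x 2 table: rows = interns vs. other clinicians,
# columns = agree vs. disagree with 'insufficient training'.
table = [[27, 27],    # interns: agree, disagree
         [55, 263]]   # others:  agree, disagree

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Fisher's exact test, as used when expected cell counts are small.
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher exact p = {p_exact:.3f}")
```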
Demographics
The highest number of respondents were registrars, 194 out of 372 (52%). Distribution of participants by speciality and position held are presented in Figure 1 and Figure 2.
Overall, the mean (s.d.) duration of PACS use by position held was 2.5 (1.03) years. The consultant mean was 3.38 (0.93) years and interns had the lowest mean of 1.42 (0.75) years. When categorised to the nearest year, the majority had used PACS for 3 years, comprising 39% of the respondents, and only 3% had used PACS for 5 years. The distribution of participants by years of PACS experience is presented in Figure 3.
The results showed that the majority of respondents accessed PACS for both reports and examinations. Twenty-eight of the 54 (51.9%) interns accessed reports only, while 26 out of 54 (48.2%) accessed both reports and examinations. The majority of consultants 52 out of 53 (98%), medical officers 67 out of 69 (97%) and registrars 183 out of 194 (94%) accessed both reports and examinations. This was statistically significant with p-value = 0.001. The vast majority of the respondents accessed PACS from a hospital PC workstation 370 out of 372 (99.5%).
Survey results
The reliability of the questionnaire was measured using Cronbach's alpha. The data is presented in Table 1 and Table 2.
Perceived benefits
Most clinicians strongly agreed that PACS has reduced the length of patients' stay at the hospital (83%), improved the ability for decision making regarding patient care (98%), enhanced patient care and service delivery (88%), reduced time taken to review an exam (98%), increased access to more exams than with film (99%), improved teaching of medical students and registrars (98%), improved consultation with other clinicians and radiologists (99%), and reduced the number of repeat examinations (97%). There was moderate agreement that PACS had improved clinicians' efficiency (65%) ( Table 3).
Of the nine benefit measures asked of clinicians with respect to their position held, there were no significant differences in the level of agreement with respect to: time taken to review an exam (p = 0.272), exams being accessed more frequently with PACS than with film (p = 0.114), impact of PACS on improved consultation with other clinicians and/or radiologists (p = 0.311), improved ability to make decisions regarding patient care (p = 0.343), reduced number of repeat exams (p = 0.075), enhanced patient care and service delivery (p = 0.075), improved teaching of medical students and registrars (p = 0.132), and reduced length of patients' stay at the hospital (p = 0.604). There was a significant difference among respondents in the percentage agreement with respect to the impact of PACS on improved efficiency (p = 0.02) (Table 4).
Perceived challenges
The majority of clinicians strongly agreed that there was inadequate access to PACS workstations (80%) and an inability to view images at the bedside using portable devices (96%). There was minimal agreement that there was inadequate workstation performance speed (25%), higher than acceptable downtime (26%), and a lack of system support availability (37%). There was little agreement that PACS had resulted in inadequate image quality (2%), that they had received inadequate PACS training (19%), that they had difficulty finding images in PACS when needed (1%), or that they had difficulty logging onto the PACS (8%) (Table 3).
Of the nine indicators for measuring perceived challenges, only three indicators were perceived to be significantly different among clinicians with respect to position held. Half of the interns (50%) agreed they received insufficient training, while only 19% of registrars, 7% of consultants, and 4% of medical officers felt this was the case (p = 0.001). The lack of availability of system support was identified by 50% of the interns, 44% of registrars, 22% of consultants and 21% of medical officers (p = 0.001). Downtime being higher than acceptable was identified by 26% of consultants, 30% of medical officers, 29% of registrars and 9% of interns (p = 0.001) ( Table 5).
Open ended questions
A total of 66 out of 371 (17.8%) clinicians responded to the open-ended question on additional comments on benefits and challenges. The total number of views expressed were 93. Positive comments were 46, negative comments were 41, and non-relevant were 6. Of the total number of views expressed (N = 93), 48% were focused on benefits, whereas 46% mentioned challenges. Table 6 presents a summary of the comments expressed by respondents. Taking into consideration that some respondents passed more than one comment in their response, the researcher determined if the views expressed were either negative or positive, and documented them as either a benefit or a challenge.
Access to PACS, whether in the clinic environment or in the wards, was noted as a major challenge by 17.2% of respondents. This was followed by the lack of advanced image viewing software (9.7%), power outage related downtime (8.6%), and the lack of bedside access to PACS (5.4%).
An open-ended question yielded respondents' recommendations for improvements on the current system. A total number of 230 out of 372 (61%) responses were received from respondents (Table 7). After subjective categorisation, the total number of views identified as recommended improvements was 466. The most frequent recommendations were: increase PACS access in wards, clinics and bedside with 157 out of 466 (37%), enable PACS access via portable devices with 143 out of 466 (31%), install advanced image viewing software 36 out of 466 (8%), introduce an online booking system integrated with PACS (7%), integrate systems with other hospital PACS or set up a provincial PACS system (5%), increase the number of PACS training workshops and technical support (5%), and offsite access to PACS (4%).
Patient care and service delivery
The strong agreement response on improved patient care and service delivery compares with the high level of agreement observed in other studies. Lenhart 7 conducted a study on 'PACS: Acceptance by orthopaedic surgeons' wherein she recorded 64% agreement that PACS improved patient care. There are no specific studies in the literature that specifically focus on the impact of PACS on improving patient care. It is difficult to come up with an objective measure for patient care. Watkins 8 concluded that there was no clearly discernible influence of PACS on clinical decision making; however, prompt access to images could have some beneficial impact. This is particularly the case in ICU and the emergency department where immediate access to images is thought to be more critical in influencing further patient management.
Reduced hospital length of stay
It can be hypothesised that prompt access to radiology reports and exams via PACS may result in prompt decision making and initiation of treatment, thereby reducing the patient's length of stay. In a study conducted in Saudi Arabia evaluating PACS at three ministry hospitals by Alalawi et al., 9 79% of the participants agreed with this statement. However, in another study evaluating the benefits of PACS, Bryan et al. 10 concluded there was no convincing evidence that PACS reduced the length of inpatient stay. This was further supported by a study conducted by MacDonald et al. 3 who concluded that the length of stay was not significantly impacted by PACS. They pointed out many external influencing factors to PACS such as clinician practice, hospital type and policy, and patient comorbidities.
Cannot view images at bedside
Although our local clinicians communicated a reduced hospital stay with PACS, examples of some external factors include the following: Charlotte Maxeke Johannesburg Academic Hospital is overburdened by many emergency cases resulting in a lack of availability of high care or intensive care unit (ICU) beds which further delays scheduling of some major elective cases that require post-operative admission to these units. Some of the equipment required for surgical procedures is outsourced from private companies, for example, the equipment for neuro-monitoring and neuronavigation; however, if these companies are completely booked, there may be a delay in the scheduling of neurosurgical procedures, increasing the hospital length of stay.
Consultation with other clinicians and impact on efficiency
The authors expected PACS to reduce the interaction between clinicians and radiologists due to the availability of images and reports at multiple sites within the hospital. This study's results strongly supported this argument: 98% of clinicians agreed that PACS had facilitated consultation among clinicians, and clinicians with radiologists. A limitation of the study is that consultations among clinicians themselves, and consultations between clinicians and radiologists were not separated. This question should have been split into two to specify the type of consultation. MacDonald et al. 3 documented reduced in-site consultations with radiologists, and increased offsite consultations between radiology and clinicians in a provincial PACS-based system study. There was moderate agreement of 66.4% that PACS had increased offsite consultations. Redfern et al. 11 supported the notion that the availability of PACS stations at clinical areas would lead to decreased consultations with radiology. Most clinicians suggested they saved time by no longer consulting with the radiology department to view images and/or reports. The radiology exams were readily available at multiple clinician workstations immediately after the images were acquired. Treatment planning could commence prior to the patient's return from the radiology department. This benefit was particularly observed in the emergency medicine and trauma units.
Impact on academic teaching
Regarding the impact of PACS on the teaching of medical students and registrars, there was moderate agreement of 67% that there was an improvement. These results correlate with the study by Jorwerkar et al. 1 where 51% of the respondents were in agreement. Picture archiving and communication systems are valuable for teaching due to the ease with which images can be compared, the convenience with which exams can be archived for use in teaching, and the ease with which image quality may be manipulated.
Perceived challenges
The challenges most often cited were the inability to view the images at the bedside, the lack of portable device access, and few available viewing stations. While this limitation could be a gap in the implementation plan, it must be analysed within the context of what is practical in the hospital setting of interest. It would be costly to set up workstations at every bedside and in a public sector hospital in a low-to middle-income country, logistically near impossible. Theft of equipment was highlighted as a challenge by the PACS administrators. One practical solution would be for clinicians to access PACS from their portable devices (tablets, laptops and mobile phones). This would reduce the capital cost of deploying more workstations.
Image quality and performance (speed)
Although image quality assessment is subjective and dependent on the viewing platform, the majority of the respondents were satisfied with the image quality. Only 2% of respondents stated that PACS produces inadequate image quality. Although entry level clinician workstation monitors are not held to the strict quality control standards of dedicated diagnostic display units used for formal radiology reporting, recent technological advances yield these monitors sufficient for general hospital-wide image review. Mobile device technology has certainly matured significantly for use by radiologists in the on-call, hospital offsite setting and by doctors at the bedside or in the operating theatre. 12 Slow image retrieval can be attributed to network speed which is more of an information technology (IT) support issue, although no formal assessment of this factor was done. Furthermore, the recent electricity supply challenges faced by the country affected the network connectivity and speed.
System support and training
Insufficient PACS training was reported by 24% of responding clinicians, and 36% agreed that they experienced a lack of system support from PACS administrators. Although 20% -40% of the respondents did not constitute a majority, this nonetheless suggests there are training and support issues to be addressed. Picture archiving and communication systems administrators conduct two training workshops every year, however, the turnout of clinicians during these PACS training workshops is usually low. This could in part explain why some clinicians felt that they did not receive adequate training. A limitation of the study is the fact that the roles of IT support and PACS administrators were not distinguished when it came to system support.
Improvements
The most frequent recommendations 320 out of 466 (68%) were related to PACS access. Very few clinicians were aware that PACS can be accessed via portable devices (personal tablets, laptops and cell phones) from within the hospital. The hospital PACS is wired through a local area network (LAN) and is web-based. The hospital already has the infrastructure to facilitate wireless connectivity to this network. Devices to be connected to this network will need to be configured by the IT department. Some doctors opined that they were able to access laboratory results from their portable devices through internet connection; hence, the same technology could be availed for PACS access. One respondent commented that the network is very slow on portable devices.
Offsite access will be beneficial to clinicians who are on call as they will be able to view images in the comfort of their homes or call rooms. Furthermore, this will benefit those who would want to access images for teaching purposes on virtual platforms. This will require an upgrade to a private cloud-based PACS which is more cost-effective, reliable and secure. 13 Off-site access to PACS is a challenge for onsite PACS systems due to security and privacy requirements; therefore, onsite PACS solutions have trouble transmitting secure data outside of their immediate area. 14 Contrarily, cloud PACS is designed for offsite access and providers follow stringent security and privacy guidelines. 14 Only 7% of comments referred to an online booking system for radiology exams. This recommendation came mainly from interns, medical officers and registrars. Most of these respondents suggested this will increase their efficiency as they will spend less time going to the radiology department. From a radiology department perspective, physical bookings have the advantage of planning a patient's imaging in consultation with the requesting clinician, which will aid coming up with the best modality suitable for the clinical question and ultimately reduce unnecessary bookings. Telephonic discussions and online booking systems are compelling options to consider in the post-coronavirus disease 2019 (COVID-19) milieu, where everything has moved to digital platforms and remote access.
In this study, issues raised regarding downtime were specifically related to power cuts rather than routine scheduled maintenance. At the time of writing, South Africa was experiencing severe power outages; this crisis is predicted to continue into the near future. Connecting the entire PACS infrastructure to an uninterrupted power supply (UPS) system was suggested as an option to mitigate downtime. The radiology department prints hard copy films for clinicians during PACS downtime. Uninterrupted power supply (UPS) connectivity will help minimise film printing costs.
Limitations
• The study was limited to a post-PACS implementation evaluation. To fully assess the impact of PACS on clinical practice, a study that involved the pre-and post-implementation would have been ideal. However, this was not feasible as there were few respondents from the pre-implementation era.
• Despite a reasonable sample size, the response rate was low at 54%. At the time of writing, the hospital was only partially open due to ongoing renovations after a fire incident which forced the entire hospital to close in 2021. Some departments were still not fully functional with their staff deployed to satellite hospitals. This could have contributed to the low response rate. • Only clinicians referring patients to the radiology department were included. The study excluded radiologists. Hence, there is a need for further research to validate research findings by comparing outcomes of PACS users working in different environments. • During data analysis, detailed information could have been lost by collapsing the four-point Likert scale to two categories, which were 'disagree' and 'agree'.
Conclusion
The findings of this study provide overwhelming evidence that referring clinicians support the implementation of a hospital-wide PACS. The benefits of PACS, in particular reduction of repeat imaging, ease of comparison with previous imaging, image and report availability at multiple sites at any time and eliminating the scenario of lost films were seen as compelling rationale for the implementation of a hospital-wide PACS system.
The main challenges raised regarded PACS access both at inpatient and outpatient environments, downtime and the lack of advanced image manipulating tools at clinician workstations. These issues were cited as major areas that need improvements for clinicians to fully realise the benefits of PACS. The case for switching to a cloud-based PACS system is strong given the acknowledged desire from clinicians for offsite access and the difficulties faced by CMJAH with regard to equipment theft, few PACS access stations and frequent power outages.
This study will serve as a benchmark for future hospital and provincial-wide PACS deployment projects in public hospitals.
|
2023-05-13T15:12:07.204Z
|
2023-05-10T00:00:00.000
|
{
"year": 2023,
"sha1": "da54b0c9641ec95449f8b4078e3c59313df330da",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e5ea314ccc0d7db3f6a95a2fb37fc40b1193afd6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
268718751
|
pes2o/s2orc
|
v3-fos-license
|
Improved Finite Element Thermomechanical Analysis of Laminated Composite and Sandwich Plates Using the New Enhanced First-Order Shear Deformation Theory
This paper proposes a simple yet accurate finite element (FE) formulation for the thermomechanical analysis of laminated composites and sandwich plates. To this end, an enhanced first-order shear deformation theory including the transverse normal effect based on the mixed variational theorem (EFSDTM_TN) was employed in the FE implementation. The primary objective of the FE formulation was to systematically interconnect the displacement and transverse stress fields using the mixed variational theorem (MVT). In the MVT, the transverse stress field is derived from the efficient higher-order plate theory including the transverse normal effect (EHOPT_TN), to enhance the solution accuracy, whereas the displacement field is defined by the first-order shear deformation theory including the transverse normal effect (FSDT_TN), to amplify the numerical efficiency. Furthermore, the transverse displacement field is modified by incorporating the components of the external temperature loading, enabling the consideration of the transverse normal strain effect without introducing additional unknown variables. Based on the predefined relationships, the proposed FE formulation can extract the C0-based computational benefits of FSDT_TN, while improving the solution accuracy for thermomechanical analysis. The numerical performance of the proposed FE formulation was demonstrated by comparing the obtained solutions with those available in the literature, including 3-D exact solutions.
Introduction
In recent years, the utilization of high-strength and lightweight structures has continued to improve energy efficiency in line with a wide range of environmental issues. In this regard, fiber-reinforced composite materials capable of providing an optimized stiffness-to-weight ratio through a synergistic combination of two or more materials, such as reinforcing fibers and resins, are attracting considerable attention as prospective next-generation materials in various engineering fields. Continuous fiber-reinforced composites are widely employed in various high-value industries, including automotive, civil, and aerospace, owing to their ability to achieve excellent structural properties and multifunctional characteristics. Despite the aforementioned advantages, the distribution of transverse stress in laminated composite structures can give rise to inherent mechanical defects, notably layer slip and delamination. Therefore, accurate prediction of transverse stress is a crucial concern in the structural design process of laminated composite structures [1,2].
Over the past half-century, a range of analysis models based on the equivalent single-layer theory have been developed to precisely elucidate the transverse behaviors of laminated composite plates. Starting from the well-known classical laminated plate theory (CLPT) and progressing through the first-order shear deformation theory (FSDT), a series of higher-order polynomial theories, including the higher-order shear deformation theory (HSDT), have been developed sequentially [3][4][5][6][7][8]. However, most of these theories exhibit limitations in predicting interlaminar stresses because of their inability to enforce transverse shear stress conditions at both the surface and layer interfaces. To address this issue, a series of refined zigzag theories (EHOPT: efficient higher order plate theory, RHSDT: refined higher-order shear deformation theory, RZT: refined zigzag theory) have been proposed [14][15][16][17][18]. These theories yield reliable results in predicting the global and local behaviors of laminated composites and sandwich structures by introducing a zigzag displacement field that varies discontinuously at the interlaminar interfaces. However, they require the use of a nonconventional C1-class shape function (a slope continuity condition along the boundary of the element) in the finite element (FE) formulation, which is incompatible with commercial FE software such as ANSYS 2023 R1 (Ansys, PA, USA) and ABAQUS 2022 (Dassault Systemes, Paris, France). As an attractive scheme to circumvent C1-class problems in FE analysis, enhanced analysis models (EFSDT: enhanced first-order shear deformation theory, EFSDTM: enhanced first-order shear deformation theory based on mixed variational theorem) were developed for the multiphysics analysis of laminated composites and sandwich plates [19][20][21]. Enhanced theories can simultaneously improve the solution accuracy and computational efficiency by systematically deriving reasonable energy relationships between the conventional FSDT and EHOPT. Consequently, these theories allow for a C0-based finite element formulation based on an FSDT-like governing equation, providing clear advantages in terms of compatibility with commercial finite element (FE) software (ANSYS 2023 R1 and ABAQUS 2022).
With technological advancements, laminated composites and sandwich structures can be exposed to various external environments, and there is a need to predict their multiphysical behaviors during the design process. In high-temperature environments, thermal deformation and stress induce significant defects. Consequently, thermomechanical analysis should be considered to ensure reliable design solutions for laminated composites and sandwich structures exposed to such conditions. Transverse normal deformation is a very important consideration in thermal analysis. Therefore, well-known analysis models (CLPT, FSDT, HSDT, EHOPT, etc.) that ignore the transverse normal strain effect are not suitable for predicting the thermal behavior of laminated composites and sandwich structures. In this regard, many refined theories have been proposed to precisely describe the thermomechanical responses of laminated composites and sandwich structures. As a higher-order polynomial model, the Lo-Christensen-Wu (LCW) theory attempts to consider the transverse normal strain effect effectively by assuming a smooth parabolic form of the transverse displacement field [29]. Furthermore, various refined higher-order and zigzag theories have been proposed for the thermomechanical analysis of laminated composites and sandwich structures [29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47][48]. As one of the most attractive zigzag theories, the efficient higher-order zigzag theory (EHOZT) proposed by Oh and Cho can provide reliable solutions for fully coupled electro-thermo-mechanical problems by enforcing transverse shear stress conditions at both the surface and layer interfaces [40][41][42][43]. Kapuria and Achary developed a computationally efficient zigzag theory to predict the thermal behavior of laminated composite structures [44]. Although this theory considers the transverse normal strain effect without introducing additional variables into the displacement fields, its applicability in analyzing sandwich plates is limited. This is because the use of different thermal expansion coefficients in adjacent layers can potentially violate the transverse displacement continuity conditions. Among the various enhanced theories for the thermomechanical analysis of laminated composites and sandwich structures [49][50][51], Han et al. proposed an enhanced first-order shear deformation theory including the transverse normal strain effect based on the mixed variational theorem (EFSDTM_TN) to exploit the computational benefits of the conventional FSDT [50]. The main contribution of EFSDTM_TN is that it considers the transverse normal strain effect without introducing any additional unknown variables by extending the transverse normal displacement field under the prescribed thermal conditions. Furthermore, the transverse displacement continuity conditions are automatically satisfied in the sandwich plates by introducing layer-wise constants. Consequently, EFSDTM_TN can provide reliable solutions for analyzing the thermomechanical behaviors of laminated composite and sandwich structures while ensuring the computational benefits of the C0-based 5-DOF element in the FE implementation.
To further extend the applicability of the EFSDTM_TN [50], an FE formulation based on the EFSDTM_TN was proposed and numerically tested. An 8-node serendipity element was utilized in the FE formulation to enhance the computational efficiency in deriving the stress distributions. The primary objective of the proposed FE analysis model is to ensure both the solution accuracy and computational efficiency by systematically blending FSDT_TN and EHOPT_TN based on the mixed variational theorem. Furthermore, the thermal responses of laminated composites and sandwich structures can be described more precisely by improving the transverse displacement field. To demonstrate the numerical performance of the proposed FE analysis model, representative thermal-mechanical problems for 2-D laminated composite and sandwich structures were considered as numerical examples. The accuracy and efficiency of the proposed FE analysis model were compared with other numerical results available in the literature, including 3-D exact solutions [57,58] together with the analytical solution of the EFSDTM_TN [50].
EFSDTM_TN for the Thermomechanical Analysis of Laminated Composite and Sandwich Plates
Mixed Variational Theorem
Laminated composites and sandwich plates were considered as numerical models of thermomechanical problems. The geometric shapes and reference coordinates of the laminated plates are shown in Figure 1. In the EFSDTM_TN, displacement and transverse stress fields are assumed independently, with the aim of enhancing both solution accuracy and computational efficiency. These independent fields can then be systematically interconnected based on the mixed variational theorem (MVT). The first variation of the 2-D Hellinger-Reissner functional is defined by Equation (1), in which Ω represents the reference plane of the laminated plates, the mechanical loading (p_i) is applied to the boundary area (S_σ), and (•) and (•)* denote components defined by the displacement and transverse stress fields, respectively. The mixed part of the MVT plays a critical role in defining reasonable relationships between the two independent fields [20,50,51].
Improvement of Transverse Displacement Field
In contrast to the mechanical behavior, the transverse normal strain effect is dominant in the thermal deformation of the laminated composite and sandwich plates. Therefore, this effect should be considered to provide a reliable solution for predicting thermal behavior. Intuitively, assuming a smoothly varying parabolic form for the transverse displacement field can help in this regard. Although this approach is able to predict the thermal behavior of laminated composites and sandwich plates precisely, it involves additional unknown variables as well as complicated 3-D governing equations. Therefore, to provide simple yet accurate solutions for thermal problems, a modified transverse displacement field is introduced in Equation (3) [50].
The underlined expressions in Equation (3) are the terms newly considered for a reliable thermal analysis of the laminated composites and sandwich plates. Apart from these terms, Equation (3) represents a typical transverse displacement field that satisfies the assumption of a plane-stress state (u_3 ≈ u_3^(0)). Based on Equation (3), the prescribed thermal conditions (T_0 and T_1) are utilized to define a smoothly varying parabolic field that accounts for the transverse normal strain effect without introducing additional unknown variables. Here, N represents the total number of layers and H(x_3 − x_3(k)) is the Heaviside step function. T_0 and T_1 are the uniform and linear temperature loadings, respectively. In addition, φ^(k) is a layer-wise constant that automatically satisfies the plane-stress condition. The value of φ^(k) can vary depending on the material composing each layer because it is a function of the material properties and thermal expansion coefficients. Therefore, to fulfill the continuity conditions of u_3 for general layup cases such as sandwich plates, a layer-wise constant (S_3^(k)) was additionally introduced. This modified form of the transverse displacement field enables simple yet accurate thermomechanical analysis of laminated composites and sandwich plates.
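A schematic sketch of how such a layer-wise transverse field can be evaluated through the thickness is given below. The parabolic thermal term and the layer-wise constants used here are generic placeholders, not the actual expressions of Equation (3).

```python
import numpy as np

def heaviside(x):
    """Heaviside step function H(x)."""
    return np.where(x >= 0.0, 1.0, 0.0)

def transverse_displacement(x3, u3_0, phi, S3, interfaces, T0, T1, h):
    """Schematic layer-wise transverse displacement through the thickness.

    u3_0       : constant (plane-stress) part of the transverse displacement
    phi        : layer-wise constants phi^(k), one per layer (placeholder values)
    S3         : layer-wise continuity constants S3^(k), one per layer
    interfaces : x3-coordinates of the layer interfaces x3_(k)
    T0, T1     : uniform and linear temperature loadings
    """
    # Placeholder smooth thermal term driven by the prescribed temperatures.
    u3 = u3_0 + (T0 * x3 + 0.5 * T1 * x3**2 / h)
    # Layer-wise contribution switched on layer by layer via Heaviside functions.
    for phi_k, S3_k, x3_k in zip(phi, S3, interfaces):
        u3 += (phi_k * (x3 - x3_k) + S3_k) * heaviside(x3 - x3_k)
    return u3

x3 = np.linspace(-0.5, 0.5, 101)   # normalized thickness coordinate
w = transverse_displacement(x3, u3_0=0.0, phi=[0.1, -0.2, 0.1],
                            S3=[0.0, 0.05, -0.05],
                            interfaces=[-0.5, -1/6, 1/6],
                            T0=1.0, T1=0.5, h=1.0)
```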
Transverse Stress Field
In this subsection, a reliable transverse stress field based on EHOPT_TN is independently assumed in the MVT to ensure solution accuracy. EHOPT_TN can rigorously satisfy the shear-free conditions at the surface, as well as the shear continuity conditions at the layer interfaces, by introducing a third-order zigzag field in the in-plane displacement field. Furthermore, a modified form of the transverse displacement field was employed to provide reliable solutions to thermomechanical problems. The initial in-plane displacement field of EHOPT_TN is expressed in Equation (4) [50], in which a linear zigzag term enforces the shear continuity conditions at the layer interfaces. By applying the shear stress conditions to the initial displacement field given in Equation (4), S_3 can be defined in terms of the primary unknown variables (u*_i^(k)) and the prescribed thermal conditions (T_0, T_1), as given in Equation (5); detailed definitions of a_αβ and b_33 are provided in [50]. Furthermore, to satisfy the plate equilibrium state rigorously when applying the MVT to the general configuration of laminated structures, in-plane correction factors were introduced in EHOPT_TN. Consequently, Equations (4) and (5) yield the refined displacement field for EHOPT_TN given in Equation (6) [50], where δ_αβ is the Kronecker delta function and the in-plane correction factors C^N_α and C^M_α can be determined by matching the resulting forces and moments in the process of establishing a relationship between the displacement and transverse stress fields based on Saint-Venant's principle [50]. Based on the introduction of these in-plane correction factors, it is possible to provide highly reliable solutions for predicting the thermomechanical behavior of laminated composites and sandwich plates.
From Equation (6), the transverse stress tensors used in the MVT can then be defined as in Equation (10).
Displacement Field
A simple displacement field based on FSDT_TN was also considered in the MVT to retain computational efficiency [50]. The displacement field based on FSDT_TN is given in Equation (11), where the components of FSDT_TN are indicated by overbars to clearly distinguish between the displacement and transverse stress fields in the MVT. From Equation (11), the strain and in-plane stress tensors used in the MVT can be derived as in Equation (12), where α_γω and ΔT are the thermal expansion coefficients and the temperature distribution, respectively.
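As a generic illustration of the thermoelastic constitutive step implied by Equation (12), the in-plane stresses follow from the mechanical part of the strain, i.e., the total strain minus the thermal expansion strain α_γω ΔT; a minimal plane-stress sketch is shown below with placeholder stiffness and expansion values.

```python
import numpy as np

def plane_stress_thermal(Q, strain, alpha, delta_T):
    """In-plane stress from the mechanical strain: sigma = Q @ (strain - alpha * dT).

    Q       : 3x3 reduced stiffness matrix of the ply (placeholder values below)
    strain  : total in-plane strain [eps_11, eps_22, gamma_12]
    alpha   : thermal expansion vector [alpha_11, alpha_22, 0]
    delta_T : temperature change at the evaluation point
    """
    return Q @ (np.asarray(strain) - np.asarray(alpha) * delta_T)

# Placeholder reduced stiffness (GPa) and thermal expansion coefficients (1/K).
Q = np.array([[181.8, 2.9, 0.0],
              [2.9, 10.3, 0.0],
              [0.0, 0.0, 7.2]])
sigma = plane_stress_thermal(Q, strain=[1e-4, -2e-5, 0.0],
                             alpha=[0.02e-6, 22.5e-6, 0.0], delta_T=100.0)
```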
Relationships between Displacement and Transverse Stress Fields
A reasonable relationship between EHOPT_TN and FSDT_TN can be systematically defined using the mixed part of the MVT as a constraint equation [50]. In the constraint equation, γ*_3α and σ*_3α are defined in Equation (10), while γ_3α is defined in Equation (12). The transverse shear resultant (Q*_α) derived from EHOPT_TN enters this constraint through Equation (14). Equations (13)-(15) then yield the relationships between u*_α^(3) and γ^(0)_3α given in Equation (16). Consequently, the transverse shear resultant (Q*_α) can be expressed in terms of the FSDT_TN variables by substituting Equation (16) into Equation (14), which leads to Equation (20), where A*_α3β3, B*_α3β3 and D*_α3β3 are the effective shear stiffness moduli, which depend on the in-plane correction factors. Thus, these in-plane correction factors and effective shear stiffness moduli should be updated by applying iterative calculations to improve the solution accuracy. Equation (20) indicates that the effective shear correction factor (SCF) can be calibrated automatically using EFSDTM_TN [50].
Based on this relationship between EHOPT_TN and FSDT_TN, the 2-D Hellinger-Reissner functional can be simplified. Therefore, considering the transverse loading (t_3), the governing equations of EFSDTM_TN can be derived by taking the variations with respect to the primary displacement variables, together with the associated boundary conditions. It should be noted that the governing equations of EFSDTM_TN are similar to those of conventional FSDT. This implies that the EFSDTM_TN can be extended using a simple FE implementation.
Once the values of all of the unknown variables are determined based on the governing equations, the solution accuracy can be further improved by restoring the displacement field of EHOPT_TN. By applying the least-squares approximation, the relationships between u*_α^(0) and the corresponding FSDT_TN in-plane variables are obtained in Equation (24) [50]. Substituting Equations (16) and (24) into Equation (6), the displacement field of EHOPT_TN can be systematically expressed using only the primary variables of FSDT_TN.
Finite Element Formulation Based on EFSDTM_TN
In this section, a finite element formulation based on EFSDTM_TN is presented to further extend its applicability. Considering the stress restoration based on the post-processing procedure, a well-known 8-node serendipity element was employed in the FE implementation. Based on the FE discretization, the displacement field can be defined by the nodal variables of the 8-node serendipity element as in Equation (27) [21], where N_i represents the shape function for the i-th node of the 8-node serendipity element.
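For reference, the standard shape functions of the 8-node serendipity element are sketched below; the node ordering follows one common convention and may differ from that used in Equation (27).

```python
import numpy as np

# Natural coordinates of the 8-node serendipity element:
# nodes 0-3 are corners, nodes 4-7 are mid-side nodes.
NODES = np.array([(-1, -1), (1, -1), (1, 1), (-1, 1),
                  (0, -1), (1, 0), (0, 1), (-1, 0)], dtype=float)

def serendipity_shape_functions(xi: float, eta: float) -> np.ndarray:
    """Shape functions N_i(xi, eta) of the 8-node serendipity element."""
    N = np.zeros(8)
    for i, (xi_i, eta_i) in enumerate(NODES):
        if i < 4:            # corner nodes
            N[i] = 0.25 * (1 + xi * xi_i) * (1 + eta * eta_i) * (xi * xi_i + eta * eta_i - 1)
        elif xi_i == 0.0:    # mid-side nodes on edges eta = +/-1
            N[i] = 0.5 * (1 - xi**2) * (1 + eta * eta_i)
        else:                # mid-side nodes on edges xi = +/-1
            N[i] = 0.5 * (1 - eta**2) * (1 + xi * xi_i)
    return N

# Partition of unity check: the shape functions sum to 1 at any (xi, eta).
assert abs(serendipity_shape_functions(0.3, -0.6).sum() - 1.0) < 1e-12
```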
Element Stiffness Matrix
The element stiffness matrix can be defined through the principle of minimum potential energy, and all unknown nodal displacements in each 8-node serendipity element can be expressed in vector form as in Equation (28) [21]. As indicated in Equation (28), each element has 40 degrees of freedom (DOF), i.e., five DOF at each of the eight nodes.
From Equations (27)-(29) and the assumption of small strain-displacement relations, the strain components for each element are defined, with the subscripts (m, b and s) denoting the strain components derived from the membrane, bending, and transverse shear parts, respectively. The corresponding strain matrices ([B]_m, [B]_b, and [B]_s) are obtained from the shape-function derivatives. The element stiffness matrix is then defined by using these strain matrices [B]_(m,b,s); in Equation (34), A, B, D, and G*^(0) denote the associated membrane, coupling, bending, and transverse shear stiffness matrices. It should be noted that the shear stiffness matrix, G*^(0), is defined based on the effective shear stiffness modulus (A*_α3β3) instead of G = Q_α3β3.
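A generic sketch of how such an element stiffness matrix can be assembled by Gauss quadrature is given below. The strain matrices and stiffness blocks are passed in as placeholders for the actual EFSDTM_TN expressions, and the 40 × 40 size corresponds to the 8-node, 5-DOF element described above.

```python
import numpy as np

# 3 x 3 Gauss-Legendre points and weights on [-1, 1].
GP = np.array([-np.sqrt(0.6), 0.0, np.sqrt(0.6)])
GW = np.array([5/9, 8/9, 5/9])

def element_stiffness(B_m, B_b, B_s, A, B_cpl, D, G0, jacobian_det):
    """Assemble K_e by summing, over the Gauss points,
    (B_m^T A B_m + B_m^T B B_b + B_b^T B^T B_m + B_b^T D B_b + B_s^T G0 B_s) * detJ * w.

    B_m, B_b, B_s : callables returning the strain matrices at (xi, eta)
    A, B_cpl, D   : membrane, coupling, and bending stiffness blocks (placeholders)
    G0            : effective transverse shear stiffness block
    jacobian_det  : callable returning det(J) at (xi, eta)
    """
    K = np.zeros((40, 40))
    for xi, w1 in zip(GP, GW):
        for eta, w2 in zip(GP, GW):
            Bm, Bb, Bs = B_m(xi, eta), B_b(xi, eta), B_s(xi, eta)
            dA = jacobian_det(xi, eta) * w1 * w2
            K += (Bm.T @ A @ Bm + Bm.T @ B_cpl @ Bb + Bb.T @ B_cpl.T @ Bm
                  + Bb.T @ D @ Bb + Bs.T @ G0 @ Bs) * dA
    return K

# Tiny smoke test with constant placeholder matrices (not a real laminate).
Bz = lambda r, c: (lambda xi, eta: np.zeros((r, c)))
K = element_stiffness(Bz(3, 40), Bz(3, 40), Bz(2, 40),
                      np.eye(3), np.zeros((3, 3)), np.eye(3), np.eye(2),
                      lambda xi, eta: 1.0)
```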
External Force Vector
In this FE implementation, thermal and mechanical loadings were considered as external force vectors. The corresponding external force vector for the mechanical loading can be obtained from the applied loading and the element shape functions. Additionally, based on Equations (10) and (12), the external force vector for the thermal loading can be defined in terms of T^(0,1) and α, which represent vectors consisting of the external temperatures and thermal expansion coefficients, respectively. It should also be remarked that G*^(1) and G*^(2) are derived from the effective shear stiffness modulus given in Equation (20).
Based on the above FE implementation, both the solution accuracy and the computational efficiency can be improved in describing the thermomechanical behaviors of the laminated composite and sandwich plates.
Numerical Results and Discussion
In this section, the numerical performance of the proposed FE analysis model is investigated by considering the characteristic thermomechanical problems of the laminated composites and sandwich plates. For all numerical models, 2-D rectangular laminated plates with simply supported boundary conditions were used as the test beds. The length-to-thickness ratio of the laminated plates was assumed to be S = L_1/h = L_2/h = 4 for the mechanical problems and S = L_1/h = L_2/h = 5 for the thermal problems.
The material properties of each ply of the composite plates, for both the mechanical and the thermal problems, are defined with respect to the fiber direction, where the subscripts (•)_L and (•)_T represent the directions parallel and perpendicular to the fiber configuration. In addition, the material properties of the facial sheets and the core of the sandwich plates for the thermal and mechanical problems were taken as in [50,51]. For the thermomechanical problems, the corresponding thermal and mechanical loadings were considered when deriving the external force vectors. The representative layup configurations of the laminated plates are listed in Table 1. The FE solutions of EFSDTM_TN were then compared with those obtained by conventional C0-class FE analysis models (FSDT, HSDT, and LCW) [3][4][5]8,29] as well as 3-D exact solutions [57,58]. The Pagano solutions for thermomechanical problems were considered as benchmark solutions [57,58], and the SCF was assumed to be 5/6 in the conventional FSDT. For a reasonable comparison, the numerical results reported herein were normalized, separately for the mechanical and the thermal problems.
Validation of the Proposed FE Analysis Model
To examine the numerical errors that may occur during FE analysis, FE solutions based on EFSDTM_TN were validated against those obtained using the analytical approach. To this end, the convergence rate of the FE solutions was numerically verified by comparing the central deflections of the laminated composite and sandwich plates across different mesh densities, as listed in Table 2. The solutions for uniform temperature loading are not compared in Table 2 because of the absence of deflections.
From Table 2, it is observed that the FE solutions gradually converge to the analytical solutions with further refinement of the mesh. In addition, acceptable deflections were obtained when the FE model was discretized into an 8 × 8 mesh density or higher. Although an 8 × 8 mesh density is sufficient to describe the nodal displacement, potential numerical errors could arise when deriving in-plane and transverse shear stresses, as these involve higher-order derivatives. The FE solutions for the in-plane and transverse shear stresses of [0°/90°/0°] laminated composite plates are illustrated in Figure 2. The accuracies of these FE solutions, for various mesh densities, were compared with those of the analytical solutions. The distributions of the transverse shear stress given in Figure 2 were derived from the 3-D equilibrium equation, which relates the transverse shear stress to the through-thickness integral of the in-plane stress gradients. Figure 2 shows that an FE model with a 16 × 16 mesh density or higher can yield precise numerical solutions for predicting the local distributions of in-plane and transverse shear stresses. In particular, it can be observed that the FE solutions with a 32 × 32 mesh density can closely approximate the analytical solutions, even for transverse shear stresses that require third-order derivatives. This means that the FE model with a 32 × 32 mesh density can be reasonably applied in the thermomechanical analysis of laminated composite and sandwich structures with arbitrary geometry, loading, and boundary conditions. Based on the numerical validation given in Table 2 and Figure 2, the FE solutions for all of the thermomechanical problems were obtained based on a 32 × 32 mesh density to ensure computational accuracy. Considering laminated composites and sandwich structures discretized with a 32 × 32 mesh density, the kinematic unknown variables and corresponding total DOFs of the FE models are compared in Table 3. Table 3 shows that the total DOFs of the proposed FE model are the same as for FSDT, representing reductions of 45.5% to 55.6% as compared to the total DOFs of the HSDT and LCW, respectively. Therefore, the proposed FE model can clearly improve computational efficiency in the process of thermomechanical analysis.
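A short sketch of this post-processing step, recovering the transverse shear stress by integrating the in-plane stress divergence through the thickness (σ_3α(x_3) = −∫ σ_αβ,β dz, starting from the traction-free bottom surface), is shown below; the sampled distribution is a placeholder.

```python
import numpy as np

def recover_transverse_shear(dsigma_dx, x3):
    """Recover sigma_3alpha(x3) as minus the cumulative through-thickness integral
    of the in-plane stress divergence, using trapezoidal integration.

    dsigma_dx : in-plane stress divergence sampled at the x3 stations
    x3        : through-thickness coordinates, from the bottom to the top surface
    """
    sigma_3a = np.zeros_like(dsigma_dx)
    for k in range(1, len(x3)):
        dz = x3[k] - x3[k - 1]
        sigma_3a[k] = sigma_3a[k - 1] - 0.5 * (dsigma_dx[k] + dsigma_dx[k - 1]) * dz
    return sigma_3a  # zero at the bottom surface by construction

# Placeholder through-thickness sampling of the in-plane stress divergence.
x3 = np.linspace(-0.5, 0.5, 41)
divergence = np.cos(np.pi * x3)          # hypothetical distribution
tau = recover_transverse_shear(divergence, x3)
```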
FE Solutions for the Mechanical Problem
In this subsection, the mechanical behaviors of the laminated composite and sandwich plates are evaluated to verify the numerical performance of the proposed FE model based on EFSDTM_TN. For the mechanical problems, a transverse external loading (P_3 ≠ 0) was applied to the top surfaces of the laminated plates.
Figure 3 shows the mechanical solutions of the in-plane displacements and stresses for cross-ply laminated composite plates. From Figure 3a, it can be seen that the EFSDTM_TN precisely describes the unsymmetrical zigzag distribution of the in-plane displacement in [0°/90°/0°/90°] laminated composite plates. As shown in Figure 3b, EFSDTM_TN can provide a reliable local solution for the in-plane stress by capturing its noncontinuous distribution. However, the other theories are only useful for predicting the global behavior of the in-plane displacement. For the [0°/Core/0°] sandwich plate, the distributions of the in-plane displacements and transverse shear stresses are shown in Figure 4. In terms of the in-plane displacement (Figure 4a), it should be noted that EFSDTM_TN can provide a sufficiently reliable solution, even for a sandwich plate. In addition, as shown in Figure 4b, the severe kink distribution of the transverse shear stress is completely captured by EFSDTM_TN.
FE Solutions for the Thermal Problem
To further investigate the numerical capabilities related to the thermal analysis of laminated composites and sandwich plates, several thermal problems were also analyzed. In the case of thermal problems, uniform and linearly distributed temperatures were considered as external loads.
The in-plane and transverse shear stresses for a single-layer composite plate under uniform temperature loading (T_0 ≠ 0) are shown in Figure 5. As stated in Section 2.2, the transverse normal strain effect plays an important role in analyzing the thermal behaviors of laminated composites and sandwich plates. Furthermore, this effect becomes significant under a uniform temperature loading. Considering this aspect, EFSDTM_TN and LCW, which reasonably consider the transverse normal strain effect, can accurately describe the thermal stresses of a single-layer composite plate under uniform temperature loading. Another interesting observation from Figure 5 is that the FSDT and HSDT, which cannot consider the transverse normal strain effect, provide meaningless solutions for these thermal stresses.

Figure 6 compares the thermal distributions of the in-plane displacements and stresses of the cross-ply laminated composite plates under uniform temperature loading (T_0 ≠ 0). As shown in Figure 6, EFSDTM_TN and LCW can precisely capture not only the parabolic distributions of the in-plane displacements, but also the noncontinuous distributions of the in-plane stress, whereas the other theories fail to describe the local distributions of the corresponding thermal behaviors.

The thermal distributions of the in-plane and transverse shear stresses of the sandwich plates under uniform temperature loading (T_0 ≠ 0) are shown in Figure 7. As shown in Figure 7a, the thermal solutions obtained by EFSDTM_TN and LCW are in good agreement with the exact solutions by precisely describing the severe noncontinuous distribution of the in-plane stress. In addition, Figure 7b indicates that EFSDTM_TN and LCW provide the best compromised thermal solutions for the local distribution of the transverse shear stress. Consequently, Figures 5-7 demonstrate that the transverse normal strain effect should be considered to accurately describe the thermal behavior of composite materials and sandwich structures subjected to uniform temperature loading.

For linear temperature loading (T_1 ≠ 0), the thermal distributions of the in-plane and transverse shear stresses of a single-layer composite plate are shown in Figure 8. Similar to Figure 5, it can be observed that EFSDTM_TN and LCW provide reliable solutions in describing the in-plane and transverse shear stresses of a single-layer composite plate under linear temperature loading. Furthermore, as shown in Figure 8, the accuracy of all of the theories considered in this study improved, relative to those obtained under uniform temperature loading. This tendency is attributed to the fact that linear temperature loading can cause bending behavior of the plates.

Under linear temperature loading, Figures 9 and 10 show the corresponding thermal results for the laminated composite and [0°/Core/0°] sandwich plates, respectively. From Figure 9, it can be concluded that the solutions obtained by EFSDTM_TN and LCW closely approximate the exact solutions, while the local solutions obtained from the other theories are relatively inaccurate. Considering the thermal behavior of the sandwich plate, as shown in Figure 10, it is noteworthy that the EFSDTM_TN provides the best compromised solution for the in-plane thermal stress by accurately capturing the noncontinuous local distribution. It can thus be concluded on the basis of Figures 3-10 that EFSDTM_TN and LCW can provide reliable thermomechanical solutions for laminated composites and sandwich plates because these theories reasonably consider the transverse normal strain effect. Although LCW provides the most accurate solution for some thermal problems, EFSDTM_TN has a prominent computational advantage due to its C0-based 5-DOF FE implementation, which can be highly compatible with commercial FE software. Therefore, it can be concluded that the FE implementation based on EFSDTM_TN is a useful approach in the thermomechanical analysis of laminated composites and sandwich plates.
local distribution.It can thus be concluded on the basis of Figures 3-10 that EFSDTM_TN and LCW can provide reliable thermomechanical solutions for laminated composites and sandwich plates because these theories reasonably consider the transverse normal strain effect.Although LCW provides the most accurate solution for some thermal problems, EFSDTM_TN has a prominent computational advantage due to its C0-based 5-DOF FE implementation, which can be highly compatible with commercial FE software.Therefore, it can be concluded that the FE implementation based on EFSDTM_TN is a useful approach in the thermomechanical analysis of laminated composites and sandwich plates.Core sandwich plates, respectively.From Figure 9, it can be concluded that the solutions obtained by EFSDTM_TN and LCW closely approximate the exact solutions, while the local solutions obtained from other theories are relatively inaccurate.Considering the thermal behavior of the sandwich plate, as shown in Figure 10, it is noteworthy that the EFSDTM_TN provides the best compromised solution for the in-plane thermal stress by accurately capturing the noncontinuous local distribution.It can thus be concluded on the basis of Figures 3-10 that EFSDTM_TN and LCW can provide reliable thermomechanical solutions for laminated composites and sandwich plates because these theories reasonably consider the transverse normal strain effect.Although LCW provides the most accurate solution for some thermal problems, EFSDTM_TN has a prominent computational advantage due to its C0-based 5-DOF FE implementation, which can be highly compatible with commercial FE software.Therefore, it can be concluded that the FE implementation based on EFSDTM_TN is a useful approach in the thermomechanical analysis of laminated composites and sandwich plates.
Conclusions
In this study, an FE formulation based on EFSDTM_TN was developed and numerically validated for the reliable thermomechanical analysis of laminated composites and sandwich plates. The main features of the proposed FE model are summarized as follows:
• MVT was employed in the proposed FE model to independently assume the displacement (FSDT_TN) and transverse stress (EHOPT_TN) fields. The displacement and transverse stress fields were systematically interconnected in the MVT by establishing reasonable energy relationships. Based on the predefined relationships, the proposed FE model can not only embrace the explicit computational advantages of FSDT_TN, such as the C0-based 5-DOF FE implementation, but also ensure the solution accuracy of EHOPT_TN.
• The transverse displacement field was enhanced by incorporating the components of external temperature loading to account for the contribution of the transverse normal strain effect efficiently. Consequently, the proposed FE model can provide reliable thermal solutions without introducing additional unknown variables.
In the proposed FE model, an 8-node serendipity element was employed to effectively derive higher-order derivatives while evaluating stress distributions. To demonstrate the numerical performance of the proposed FE model, several cases of thermomechanical problems for laminated composites and sandwich structures were analyzed. The solutions obtained herein were then compared with those of conventional theories (FSDT, HSDT, and LCW), as well as 3-D exact solutions. From the numerical results, it can be concluded that the proposed FE model based on EFSDTM_TN provides reliable thermomechanical solutions for laminated composites and sandwich plates. Consequently, it is expected that the proposed FE model can be applied to the thermomechanical analysis of laminated composites and sandwich structures with arbitrary geometries, loadings, and boundary conditions.

Nomenclature:
[B]^(m,b,s) membrane, bending, and transverse shear parts of the strain matrix.
[K]_e stiffness matrix for each element.
[K]^(m, mb, b, s) membrane, membrane-bending coupling, bending, and transverse shear parts of the element stiffness matrix.
[F]_e^M external force vector for each element derived from the mechanical loading.
[F]_e^T external force vector for each element derived from the thermal loading.
T mechanical loading applied in each element.
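As an illustration of the 8-node serendipity interpolation named in the conclusions above, the following is a minimal Python sketch (not taken from the paper); the node ordering and the partition-of-unity check are assumptions made purely for demonstration.

```python
import numpy as np

# Node numbering assumed here: four corner nodes first, then four mid-side nodes.
CORNERS = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
MIDSIDES = [(0.0, -1.0), (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]

def serendipity8_shape(xi, eta):
    """Shape functions N_i of the 8-node serendipity element at (xi, eta)."""
    N = np.empty(8)
    for i, (xi_i, eta_i) in enumerate(CORNERS):
        # Corner nodes: N = 1/4 (1 + xi*xi_i)(1 + eta*eta_i)(xi*xi_i + eta*eta_i - 1)
        N[i] = 0.25 * (1 + xi * xi_i) * (1 + eta * eta_i) * (xi * xi_i + eta * eta_i - 1)
    for i, (xi_i, eta_i) in enumerate(MIDSIDES, start=4):
        if xi_i == 0.0:
            # Mid-side node on an edge eta = +-1
            N[i] = 0.5 * (1 - xi ** 2) * (1 + eta * eta_i)
        else:
            # Mid-side node on an edge xi = +-1
            N[i] = 0.5 * (1 + xi * xi_i) * (1 - eta ** 2)
    return N

# Quick sanity check: the shape functions form a partition of unity.
assert abs(serendipity8_shape(0.3, -0.7).sum() - 1.0) < 1e-12
```

In an actual implementation, the derivatives of these shape functions with respect to the natural coordinates, mapped through the element Jacobian, would populate the strain matrices [B]^(m,b,s) listed in the nomenclature above.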
2. Thermo-Mechanical Problem

2.1. Mixed Variational Theorem

Laminated composites and sandwich plates were considered as numerical models of thermomechanical problems. The geometric shapes and reference coordinates of the laminated plates are shown in Figure 1. Unless otherwise specified in the tensor notation, the Greek indices use values from set {1, 2}, whereas the Latin indices are assigned values from set {1, 2, 3}. L_α and h represent the in-plane length and thickness of the laminated plates, respectively. x_3 denotes the transverse position, which takes values within the range [−h/2, h/2].
Figure 1. Geometric shape and reference coordinates of laminated plates.
Figure 2. Comparison between analytical and FE solutions for stresses of [0°/90°/0°] laminated composite plates: (a) transverse shear stresses under mechanical loading; (b) in-plane stresses under uniform temperature loading; (c) transverse shear stresses under linear temperature loading.
Figure 3 shows the mechanical solutions of the in-plane displacements and stresses for cross-ply laminated composite plates. From Figure 3a, it can be seen that the EFSDTM_TN precisely describes the unsymmetrical zigzag distribution of the in-plane displacement in [0°/90°/0°/90°] laminated composite plates. As shown in Figure 3b, EFSDTM_TN can provide a reliable local solution for the in-plane stress by capturing its noncontinuous distribution. However, other theories are only useful for predicting the global behavior of in-plane displacement.
Figure 5. Thermal solutions for a single-layer composite plate under uniform temperature loading: (a) in-plane stresses; (b) transverse shear stresses.
Figure 8. Thermal solutions for a single-layer composite plate under linear temperature loading: (a) in-plane stresses; (b) transverse shear stresses.
…_αβ function of material properties to satisfy shear continuity conditions in the transverse stress field.
… function of material properties to satisfy continuity conditions of transverse normal displacement.
C^N_α, C^M_α in-plane correction factors derived by matching the force and moment resultants.
c^{N,M(u*,T_0,T_1)}_{αβ} coefficient of the in-plane correction factors (C^N_α, C^M_α).
N_i shape functions for the element in FE implementation.
d_e unknown displacements for each element.
F^{(m,b,s)}_{T_0} membrane, bending, and transverse shear parts of the external force vector derived from the uniform temperature loading.
F^{(m,b,s)}_{T_1} membrane, bending, and transverse shear parts of the external force vector derived from the linear temperature loading.
Table 1. List of layup configurations for composite and sandwich plates.
Table 2. Convergence rate of central deflections for EFSDTM_TN.
Table 3. Kinematic unknown variables and total DOFs of the FE models (32 × 32 mesh density).
Does structured obstetric management play a role in the delivery mode and neonatal outcome of twin pregnancies?
Purpose While the optimal delivery method of twin pregnancies is debated, the rate of cesarean deliveries is increasing. This retrospective study evaluates delivery methods and neonatal outcome of twin pregnancies during two time periods and aims to identify predictive factors for the delivery outcome. Methods 553 twin pregnancies were identified in the institutional database of the University Women’s Hospital Freiburg, Germany. 230 and 323 deliveries occurred in period I (2009–2014) and period II (2015–2021), respectively. Cesarean births due to non-vertex position of the first fetus were excluded. In period II, the management of twin pregnancies was reviewed; adjusted and systematic training with standardized procedures was implemented. Results Period II showed significantly lower rates of planned cesarean deliveries (44.0% vs. 63.5%, p < 0.0001) and higher rates of vaginal deliveries (68% vs. 52.4%, p = 0.02). Independent risk factors for primary cesarean delivery were period I, maternal age > 40 years, nulliparity, a history with a previous cesarean, gestational age < 37 completed weeks, monochorionicity and increasing birth weight difference (per 100 g or > 20%). Predictive factors for successful vaginal delivery were previous vaginal delivery gestational age between 34 and 36 weeks and vertex/vertex presentation of the fetuses. The neonatal outcomes of period I and II were not significantly different, but planned cesareans in general were associated with increased admission rates to the neonatal intensive care units. Inter-twin interval had no significant impact on neonatal outcome. Conclusion Structured regular training of obstetrical procedures may significantly reduce high cesarean rates and increase the benefit–risk ratio of vaginal deliveries. Supplementary Information The online version contains supplementary material available at 10.1007/s00404-023-07040-6.
Introduction
Worldwide, twin pregnancies account for 2-4% of all births [1]. Due to higher maternal age and a growing utilization of reproductive medicine, the number has risen over the last four decades [2,3]. In 2021, Germany recorded > 13,000 multiple pregnancies, representing 1.7% of all births [4].
Risk-stratified analyses have shown variations of the mode of delivery within Europe in both singleton and twin pregnancies, whereby in twins the cesarean rates varied between 31.1% in Iceland and 98.8% in Malta. The Netherlands and France had significantly lower rates (43.9% and 54.8%) as compared to Germany and Italy with 74.8% and 85.6%, respectively [5]. According to a French prospective population-based study, vertex-first twins born between 32 and 37 gestational weeks by planned cesareans had higher composite neonatal mortality and morbidity rates (5.3% versus 3.0%) as compared to vaginal deliveries [6]. These data suggest that national attitudes, guidelines, obstetric training skills and potentially financial incentives have a higher impact on the mode of delivery in twin gestations than any medical indication. Therefore, it was our hypothesis that the introduction of a strategy that involved senior obstetricians with a subspecialty in maternal-fetal medicine providing systematic training would increase the confidence that vaginal delivery of vertex-first twins can be easily performed and decrease the originally high elective cesarean rates.
In this retrospective study, we will assess the delivery methods and neonatal outcome of mono-and dichorionic twin pregnancies in a single institution.In this context, we will separately investigate two time periods with different clinical direction, beliefs, and expertise to explore whether a structured and systematic obstetric management may influence the rate of cesarean deliveries and neonatal outcome.Additionally, we intend to identify predictive factors for primary cesarean delivery and successful vaginal delivery and evaluate the neonatal outcome of each delivery mode as well as the obstetric management period.
Study population and period
We queried our institutional database on all multiple pregnancies starting at 32.0 weeks of gestation which were delivered between October 2009 and February 2021 at the University Women's Hospital Freiburg. The following cases were excluded: triplets and quadruplets, monochorionic-monoamniotic twins, intrauterine fetal death, feticide, lethal congenital anomalies, omphalocele, and gastroschisis. Two time periods were categorized: from October 2009 to December 2014 (period I) and from January 2015 to February 2021 (period II). Starting from period II, the management of twin pregnancies was reviewed and adjusted by two senior obstetricians with perinatal sub-specialization who implemented systematic training methods and standardized procedures. This change of policy was initiated by both after taking up leading roles in the department. They personally attended all twin deliveries at daytime and during their respective on-call duty. For the rest of the weekends and nighttime, they were available on standby to provide guidance for other senior physicians.
Standardized delivery management of twin pregnancies
Compared to period I, vaginal twin deliveries were actively encouraged in period II. Vaginal deliveries were offered in uncomplicated twin pregnancies without contraindications for labor and when the first twin was presenting in vertex position, irrespective of the position of the second twin. Although not an absolute criterion for vaginal delivery, the estimated weight difference between both twins should not be significant; in period I, by contrast, the weight discordance was preferred not to exceed 20%, with the first twin being the heavier one. A vaginal delivery could be planned after one previous cesarean birth. If the first twin was in non-vertex presentation or if the patient had two or more previous cesarean births, a primary cesarean delivery was performed.
For a planned vaginal twin delivery, a team consisting of two obstetricians, one of which being a senior physician, a midwife and a midwife in training had to attend the birth.A neonatologist was available at all times.For potential (emergency) cesarean deliveries, operating staff including anesthesiologists and surgical nurses were on standby.To allow fast transfer, the operating room was situated in proximity right next to the delivery room.
For patients with planned vaginal delivery, the placement of an epidural anesthesia was recommended during the first stage of labor when no contraindications were present.This may facilitate the second phase of labor especially when the delivery of the second twin involves potential manipulation.After the delivery of the first twin, the uterus was manually stabilized and an abdominal ultrasound was immediately performed to verify the position and fetal heart rate of the second twin.If the fetus was in oblique or transverse position, an immediate artificial rupture of membranes and excessive iatrogenic procedures were refrained from which is in line with the suggestions by Arabin et al. [7].Instead, the labor position was adapted to promote the engagement of the presenting part to either vertex or breech and its descent awaited.If the second fetus was in vertex or breech position and the labor proceeded physiologically, the descent of the head was also awaited without external force.It was aimed to achieve the delivery of the second twin within thirty minutes after the first twin.As long as the fetal heart monitoring was physiologic though, the wait could extend to up to one hour if necessary.Amniotomy was performed when the presenting part was in good contact with the pelvis and there was no risk of umbilical cord prolapse.Oxytocin was utilized restrictively and tocolysis was applied in case of pathological fetal heart rate changes.In case of pathological cardiotocography (CTG) or arrest of labor over an extended period, a vacuum or breech extraction, depending on the presentation and gestational age, may be applied.
Statistical analysis
Statistical analysis was performed using SAS. We performed t-tests to compare normally distributed mean values, as well as Mann-Whitney U tests for non-normally distributed values. The relationship between categorical variables was assessed using Fisher's exact test and Pearson's chi-square test, respectively. Multivariate logistic regression analysis was used to identify independent variables predicting binary outcomes (such as primary cesarean delivery or successful vaginal delivery). Backward elimination with a 20% significance level was used to adjust for potential confounders.
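The analysis itself was carried out in SAS; purely as an illustrative sketch of the backward-elimination logistic regression described above, the following Python fragment (using statsmodels) shows the idea. The data frame, outcome name, and predictor names are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(df: pd.DataFrame, outcome: str, predictors: list, alpha: float = 0.20):
    """Logistic regression with backward elimination: repeatedly drop the least
    significant predictor until every remaining p-value is below alpha
    (20% here, matching the significance level stated in the text)."""
    kept = list(predictors)
    while kept:
        X = sm.add_constant(df[kept])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit                 # all retained predictors meet the criterion
        kept.remove(worst)             # eliminate the weakest predictor and refit
    return None                        # no predictor survived the elimination

# Hypothetical usage (column names are placeholders, not the study's variables):
# df = pd.read_csv("twin_deliveries.csv")
# model = backward_eliminate(df, "planned_cesarean",
#                            ["maternal_age_gt40", "nulliparity", "previous_cesarean",
#                             "ga_lt37_weeks", "monochorionic", "bw_diff_per_100g"])
# if model is not None:
#     print(model.summary())
```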
Ethics statement
In accordance with the guidelines of the working group for the survey and utilization of secondary data (AGENS), no ethical approval is required for this study since it is a retrospective cohort study evaluating management and outcome of the department [8]. Still, the approval by our institutional ethics committee of the University Hospital Freiburg was received.
Descriptive analysis
A total of 913 cases of multiple pregnancies were identified, of which 553 were eligible for the analysis. 230 and 323 deliveries occurred in periods I and II, respectively (Fig. 1). Baseline characteristics of the study population (Table 1) showed no significant differences between the two time periods except for the delivery mode. Compared to period I, period II showed significantly lower rates of planned cesareans (44.0% vs. 63.5%, p < 0.001) and higher rates of vaginal deliveries (68.0% vs. 52.4%, p = 0.02).
Predictive factors
After adjusting for confounders, the obstetric management period was shown to be an independent predictor for planned cesarean delivery and successful vaginal delivery. Women with twin pregnancies in period I were over twice as likely to have a planned cesarean delivery (OR: 2.86 (95% CI 1.91-4.30), p < 0.0001) and half as likely to have a successful vaginal delivery (OR: 0.5 (95% CI 0.28-0.89), p = 0.02) compared to women in period II (Table 2).
Factors significantly associated with planned or primary cesarean delivery were maternal age above 40 years, nulliparity, a history with a previous cesarean, gestational age < 37 completed gestational weeks, monochorionicity and increasing birth weight difference per 100 g or > 20% (Table 3).Especially women who had a previous cesarean birth were 13 times more likely to undergo a planned cesarean delivery during the twin pregnancy.For birth weight difference between 1st and 2nd twin, the risk of a primary cesarean delivery increases by 16% with every 100 g and nearly threefold with ≥ 20% discrepancy.No significant associations were found with mode of conception, presentation of the fetuses and maternal BMI at birth.
Factors significantly associated with successful vaginal delivery were previous vaginal delivery, gestational age between 34 and 36 weeks and vertex/vertex presentation of the fetuses (Table 4).Especially women with previous vaginal delivery were 7.9 times more likely to deliver twins vaginally.No significant impact on the rate of secondary cesarean delivery was shown with chorionicity, maternal age, parity, mode of conception, maternal BMI at birth and fetal weight difference.
Neonatal outcome
For neonatal outcome, we analyzed umbilical artery pH, APGAR score at 5 min and the transfer rate to the neonatal intensive care unit (NICU) in general and for pregnancies over 36 + 0th gestational weeks.
After primary cesarean delivery, the 2nd twin showed higher umbilical artery pH and APGAR score at 5 min compared to the other delivery modes; however, the transfer rate to the NICU was also higher for both twins (36.1%/40.3% for planned cesarean delivery vs. 15.3%/18.6% for vaginal delivery, see Table 5).
For monochorionic twins, both primary and secondary cesarean deliveries showed higher NICU transfer rates for both twins compared to vaginal delivery.For dichorionic pregnancies, however, secondary cesarean deliveries showed the lowest transfer rate for the 1st twin.For the 2nd twin, vaginal delivery still resulted in the lowest rate of NICU transfer whereas primary cesarean delivery showed higher pH and APGAR scores.For pregnancies > 36 + 0th gestational weeks, there was no difference in the NICU transfer rate across all delivery modes (Table 6).
In a subgroup analysis of successfully performed vaginal deliveries with the second twin being in non-vertex position, we evaluated the neonatal outcome based on the time interval between the delivery of the first and second twin (inter-twin delivery interval). Out of 167 successful vaginal births, 41 were delivered with the second twin being in non-vertex presentation. Of these, 25 occurred within an interval of < 30 min (61%) and 16 within an interval of > 30 min (39%). There were no significant differences in the neonatal outcome between the two groups (Table 7). Lastly, when comparing period I and period II, no significant differences in the neonatal outcome were observed (supplementary table).
Principal findings
In our study, we demonstrated that the rate of cesarean deliveries of twin pregnancies decreased by 19 percentage points, from 80.9% in period I to 61.9% in period II. By contrast, the success rate of planned vaginal deliveries significantly increased by 15.6 percentage points, from 52.4% in period I to 68.0% in period II. After adjusting for other variables, we identified the obstetric management period as an independent predictor of planned cesarean delivery and successful vaginal delivery.
Meaning of the findings
Considering that the average cesarean rate in Germany was shown to be around 75% [6], the cesarean rate in our department was initially above and later improved to below the German average without having an impact on short-term neonatal outcome.A significant decrease was achieved in planned cesareans but also in emergency cesareans, with the latter being an important obstetric outcome due to its association with increased maternal and neonatal morbidity and mortality [9,10].
Other studies which evaluated interventions to reduce the rate of cesarean births included educational strategies for specialists and pregnant women and their families as well as managerial strategies such as pain-free labor or decision making for cesarean deliveries only by experienced physicians [11].Another retrospective study reported a 33% relative reduction of cesarean deliveries after implementing a quality-improvement intervention comprised of modifications of the organization, staff training and unit policy [12].
While elective cesarean births in singletons are mostly requested due to psychological reasons or fertility issues [13], the data on elective cesarean deliveries on twin pregnancy are limited.Interestingly, in our study, the mode of conception, whether natural or via ART, did not have an influence on the mother's choice (p = 0.18).Instead, the strongest predictors were previous cesarean delivery followed by gestational age from 32.0 to 33.6 weeks.Similar to our results, a retrospective study identified that women with a previous history of a cesarean birth and of older age (30 vs. 20 years) were more likely to undergo another cesarean delivery for their multiple pregnancy; however, the sample size of the study was very small (n = 47) [14].
For successful vaginal delivery, the strongest predictors were previous vaginal birth and vertex/vertex presentation of both twins, which have also been reported in the literature [15,16].In our study, mode of conception (natural vs. ART) and maternal age had no significant impact on the success of vaginal delivery.However, one study reported a higher vaginal birth rate with spontaneous conception [17], while another study identified that higher maternal age as well as maternal hypertensive disorder and diabetes decreased the likelihood of vaginal birth [16].
In our study, primary cesarean deliveries led to better neonatal pH and APGAR scores, but also higher NICU transfer rates in general.Our results are in line with several other studies showing a high incidence of respiratory morbidity and NICU admission of infants delivered by elective cesarean delivery [18].
Clinical implications
According to the NICE (National Institute for Health and Care Excellence) guideline "Twin and triplet pregnancy" and the German guideline "Monitoring and care of twin pregnancies", both the planned vaginal and cesarean deliveries are safe choices when certain conditions apply [19,20].
Compared to vaginal labor, elective cesarean deliveries are associated with risks and complications such as postpartal hemorrhage [21], placental disorders [22], severe acute maternal morbidity [23], deep vein thrombosis, postpartal infection [9], longer in-patient stay [24] and impaired adaptation of the newborn [25].Yet, increasing rates of cesarean delivery of twins had been reported.From 1990 to 2012, an overall increase of 23.5% was reported in Germany [26].Similar trends were recorded in the United States [27].
Our results indicate that cesarean deliveries can be lowered when strategies of experienced attendance and supervision, with regular teaching sessions and re-assurance of patients, are systematically introduced, as in our center. Also, we identified a history of a previous cesarean birth as a major risk factor for another cesarean delivery, which implies that preventing the first cesarean birth could be a key step to reduce the high rate of cesarean deliveries in twin pregnancies. Since fetal weight difference was identified as a risk factor for a planned cesarean, to further reduce cesarean rates, consideration should be given to the extent to which the estimated fetal weight difference between the two twins may influence the choice of delivery mode. According to the current guidelines, vaginal deliveries can be offered provided there is not a significant size difference between both twins. According to various sources, an estimated weight difference of 15-25% is considered as discordant [28,29]. Our institution used to prefer a discordance of < 20% for vaginal deliveries in period I. However, based on the retrospective data available, twin discordance does not necessarily represent a contraindication for the trial of vaginal labor, even if the larger twin is the non-presenting twin. From the published data, weak evidence may support the consideration of cesarean delivery in extremes of discordance. From a practical standpoint, this may apply when the second twin is approximately > 40% larger than the presenting co-twin [30]. Concerning the ideal gestational age for vaginal delivery, our study identified that the success rate was at its highest during 34.0-36.6 weeks. A possible explanation could be the fetus' increased resistance to labor stress during the late preterm period compared to lower gestational weeks, while simultaneously being smaller in size and weight compared to higher gestational weeks, thus fitting through the birth canal more easily.
For neonatal outcome, elective cesarean sections showed the highest NICU transmission rate.Although birth asphyxia is less likely to occur when the fetus is not exposed to labor, the newborn may instead face higher difficulty of respiratory adaptation and the clearance of fluids in the lung [31].This effect is most evident in early-term infants with surfactant deficiency.In our study, the NICU admission rate of infants born after 36.0 weeks was comparable between all three delivery modes.Interestingly though, for dichorionic pregnancies the NICU transfer rate for the first twin was remarkably lower after secondary cesarean delivery compared to the other delivery modes.A possible explanation could be that the first twin faced sufficient labor stress during the trial of labor, thus having less risk of respiratory adaptation difficulty compared to infants born by primary cesarean delivery.On the other hand, when the labor is terminated prematurely by secondary cesarean delivery, the risk of labor complications requiring postpartum neonatal care such as birth asphyxia or infection is also limited.For the second twin, the NICU admission rate after secondary cesarean delivery was considerably higher.This may be due to the fact that compared to the first twin, the second-born infant is faced with a significantly higher risk of respiratory distress syndrome which requires exogenous surfactant application [32,33].In a subgroup analysis, we additionally evaluated the inter-twin delivery interval since it is considered a risk factor for the short-term neonatal outcome of the second twin [34].
In our institution, the inter-twin delivery interval showed no significant impact.This is also reflected in our clinical practice as we limit iatrogenic measures after the delivery of the first twin and wait for the natural descent of the second twin regardless of its presentation, provided there is no fetomaternal harm and the fetal heart rate is physiologic.Similar to our study, other recent studies demonstrated that the short-term outcome of the second twin was not affected when the inter-twin delivery interval exceeded 30 min, raising the question of defining the optimal time frame for vaginal deliveries in twin pregnancies [35,36].
Research implications
Since this study is a retrospective analysis, further research should be dedicated to a prospective model in which a structured obstetric management for twin pregnancies is studied as an intervention.Also, maternal morbidity should be additionally assessed.Currently, there are intensive efforts within the German Workgroup Multiple Gestation to increase the skills and evaluate the results in the management of twin pregnancies (Hamza et al. unpublished).
Strengths and limitations
Our study had several strengths including its large sample size and collection of data spanning over 10 years.Additionally, multivariate models were used to control for potential confounding.The weakness of the study lies in its retrospective model limiting the determination of a cause-effect relationship.Additionally, the study was limited to deliveries > 32 gestational weeks, a cutoff given by the NICE and German guidelines when vaginal birth can be offered [19,20].It should be noted though that there is evidence that vaginal deliveries can also be performed in vertex-first twins between 26 and 32 weeks with no negative impact on the outcome or significant differences in morbidity and mortality as compared to a primary cesarean [37,38].Lastly, for neonatal outcome, only immediate effects of the delivery were evaluated.Morbidity until discharge and long-term morbidity were not assessed.
Conclusion
In our study, we have shown that obstetric management may influence the delivery mode of twin pregnancies.In our case, planned cesarean deliveries were reduced and the rate of successful vaginal labor was increased both significantly without impairment of the neonatal outcome.This concludes that vaginal deliveries in twin pregnancies are safe when no contraindications for labor apply.High rates of planned cesareans in general may be caused by multiple factors such as subjective indications suggested to the team and patients, lack of time and patience as opposed to a fast and scheduled delivery, financial incentives or the fear of litigation as seen in the practice of defensive medicine.Thus, this study marks the importance of structured and regular updates, training and review of concepts and procedures to maintain and improve the quality in an obstetrical department on a medical, educational and economical level.
Fig. 1
Fig. 1 Study population of twin pregnancies categorized into two periods of obstetric management (period I = 2009-2014, period II = 2015-2021) and their respective distribution of cesarean and vaginal deliveries
Table 2
Distribution of planned cesarean delivery and actual vaginal delivery according to obstetric management period (multivariate logistic regression)
Table 3
Distribution of planned delivery modes during both periods (n = 553, multivariate logistic regression): Predictive factors of a planned cesarean delivery
Table 4
Distribution of planned vaginal delivery with or without vaginal delivery of both twins during both periods (n = 265, multivariate logistic regression).Predictive factors for a successful vaginal delivery
Table 5
Neonatal outcomes during both periods depending on delivery mode (n = 1106)
Table 6
Neonatal outcomes during both periods depending on chorionicity and delivery modes (n = 1106)
Table 7
Subgroup analysis of successful vaginal deliveries with second twin being in noncephalic position: Neonatal outcomes during both periods depending on inter-twin delivery interval (n = 82)

Funding: Open Access funding enabled and organized by Projekt DEAL. Internal financial resources of the department of obstetrics and gynecology, University of Freiburg were used for the statistical analyses. Otherwise, no funds, grants, or other support were received during the preparation of this manuscript.
Temporal correlations between the earthquake magnitudes before major mainshocks in Japan
A characteristic change of seismicity has been recently uncovered when the precursory Seismic Electric Signals activities initiate before an earthquake occurrence. In particular, the fluctuations of the order parameter of seismicity exhibit a simultaneous distinct minimum upon analyzing the seismic catalogue in a new time domain termed natural time and employing a sliding natural time window comprising a number of events that would occur in a few months. Here, we focus on the minima preceding all earthquakes of magnitude 8 (and 9) class that occurred in Japanese area from 1 January 1984 to 11 March 2011 (the day of the M9 Tohoku earthquake). By applying Detrended Fluctuation Analysis to the earthquake magnitude time series, we find that each of these minima is preceded as well as followed by characteristic changes of temporal correlations between earthquake magnitudes. In particular, we identify the following three main features. The minima are observed during periods when long range correlations have been developed, but they are preceded by a stage in which an evident anti-correlated behavior appears. After the minima, the long range correlations break down to an almost random behavior turning to anti-correlation. The minima that precede M ≥ 7.8 earthquakes are distinguished from other minima which are either non-precursory or followed by smaller earthquakes.
I. INTRODUCTION
Seismic Electric Signals (SES) are low frequency (≤ 1Hz) transient changes of the electric field of the Earth that have been found [1,2] to precede earthquakes (EQs). Several such transient changes within a short time are termed SES activity. A model for the SES generation has been proposed [3] (see also Varotsos et al. [4]) based on the widely accepted concept that the stress gradually increases in the future focal region of an EQ. It was postulated that when this stress reaches a critical value, a cooperative orientation of the electric dipoles (which anyhow exist in the focal area due to lattice defects in the ionic constituents of the rocks) occurs, which leads to the emission of a transient electric signal. All solids including metals, insulators and semiconductors contain intrinsic and extrinsic defects [5][6][7][8][9][10][11][12][13]. The model is consistent with the finding that the time series of the observed SES activities (along with their associated magnetic field variations) exhibit infinitely ranged temporal correlations [14][15][16][17], thus being in accord with the conjecture of critical dynamics. Other possible mechanisms for SES generation such as the recently developed finite fault rupture model with the electrokinetic effect [18] and the piezoelectric effect [19] taking into account the fault dislocation theory [20] have been proposed, see also Ch. 1 of Varotsos et al. [21]. The observations of SES activities in Greece [4,21,22] have shown that their lead time is of the order of a few months. This agrees with later observations in Japan [23][24][25][26].
EQs may be considered as (non-equilibrium) critical phenomena since the observed EQ scaling laws [27] point to the existence of phenomena closely associated with the proximity of the system to a critical point [28]. An order parameter for seismicity has been introduced [29] in the frame of the analysis in a new time domain termed natural time χ (see below). This analysis has been found to reveal novel dynamical features hidden in the time series of complex systems [21].
A unique change of the order parameter of seismicity approximately at the time when SES activities initiate has been recently uncovered [30]. In particular, upon analyzing the Japanese seismic catalogue in natural time, and employing a sliding natural time window comprising the number of events that would occur in a few months, the following was observed: The fluctuations of the order parameter of seismicity exhibit a clearly detectable minimum approximately at the time of the initiation of the pronounced SES activity observed by Uyeda et al. [24,25] almost two months before the onset of the volcanic-seismic swarm activity in 2000 in the Izu Island region, Japan. (This swarm was then characterized by the Japan Meteorological Agency (JMA) as being the largest EQ swarm ever recorded [31].) This reflects that presumably the same physical cause led to both effects, i.e., the emission of the SES activity and the change of the correlation properties between the EQs. In addition, these two phenomena were found [30] to be also linked in space.
For the vast majority of major EQs in Japan, however, the aforementioned almost simultaneous appearance of the minima of the fluctuations of the order parameter of seismicity with the initiation of SES activities, cannot be directly verified due to the lack of geolectrical data. In view of this lack of data, an investigation was made [32] that was solely focused on the question whether minima of the fluctuations of seismicity are observed before all EQs of magnitude 7.6 or larger that occurred from 1 January 1984 to 11 March 2011 (the day of the M9 Tohoku EQ) in Japanese area. Actually such minima were identified a few months before these EQs. It is the main scope of this paper to investigate the temporal correlations between the EQ magnitudes by paying attention to the time periods during which the minima of the order parameter fluctuations of seismicity have been observed before EQs of magnitude 8 (and 9) class (cf. Sarlis et al. [32] also studied the M7.6 Far-Off Sanriku EQ which however is not studied in detail here -but only shortly commented in the last paragraph of the Appendix-since it belongs to a smaller magnitude class, i.e., the 7-7.5 class, being a single asperity event [33]). Their epicenters are shown in Fig. 1 (see also Table I). Along these lines, we employ here the Detrended Fluctuation Analysis (DFA) [34] which has been established as a standard method to investigate long range correlations in non-stationary time series in diverse fields (e.g., Peng et al. [34,35,36], Ashkenazy et al. [37], Ivanov et al. [38], Ivanov [39], Talkner and Weber [40], Goldberger et al. [41], Telesca and Lovallo [42], Telesca and Lasaponara [43], Telesca et al. [44]) including the study of geomagnetic data associated with the M9.0 Tohoku EQ [45]. For example, a recent study [46] showed that DFA as well as the Centered Detrended Moving Average technique remain "The Methods of Choice" in determining the Hurst index of time series. As we shall see, the results of DFA obtained here in conjunction with the aforementioned minima emerged from natural time analysis lead to conclusions that are of key importance for EQ prediction research. In particular, we find that each of these precursory minima of the fluctuations of the order parameter of seismicity is preceded as well as followed by characteristic changes of temporal correlations between EQ magnitudes, thus complementing the results of Sarlis et al. [32].
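For readers unfamiliar with DFA, the following minimal first-order DFA sketch in Python illustrates the method applied throughout this study (illustrative only; the choice of box sizes and fitting details is an assumption and not necessarily that of the cited studies).

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """First-order DFA of a series x. Returns the scaling exponent alpha:
    alpha ~ 0.5 random, alpha > 0.5 long-range correlated, alpha < 0.5 anti-correlated."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                       # integrated profile
    if scales is None:                                # illustrative choice of box sizes
        scales = np.unique(np.logspace(np.log10(4), np.log10(len(x) // 4), 12).astype(int))
    F = []
    for n in scales:
        n_boxes = len(y) // n
        resid_sq = []
        for b in range(n_boxes):
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending per box
            resid_sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(resid_sq)))          # rms fluctuation at scale n
    # alpha is the slope of log F(n) versus log n
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

# Example: exponent of the magnitudes of the 300 events preceding a target earthquake
# alpha = dfa_exponent(magnitudes[i - 300:i])
```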
II. THE PROCEDURE FOLLOWED IN THE ANALYSIS
In a time series comprising N consecutive events, the natural time of the k-th event of energy Q_k is defined by χ_k = k/N [47-49]. We then study the evolution of the pair (χ_k, p_k), where p_k = Q_k / Σ_{n=1}^{N} Q_n is the normalized energy. This analysis, termed natural time analysis, extracts from a given complex time series the maximum information possible [50]. The approach of a dynamical system to a critical point can be identified [21,47,51] by means of the variance κ_1 of natural time χ weighted for p_k, namely

κ_1 = Σ_{k=1}^{N} p_k χ_k² − (Σ_{k=1}^{N} p_k χ_k)².

It has been argued [29] (see also pp. 249-253 of Varotsos et al. [21]) that the quantity κ_1 of seismicity can serve as an order parameter. To compute the fluctuations of κ_1 we apply the following procedure [21,32]: First, take an excerpt comprising W (≥ 100) successive EQs from the seismic catalogue. We call it excerpt W. Second, since at least 6 EQs are needed for calculating reliable κ_1 [29], we form a window of length 6 (consisting of the 1st to the 6th EQ in the excerpt W) and compute κ_1 for this window. We perform the same calculation by successively sliding this window through the whole excerpt W. Then, we iterate the same process for windows with length 7, 8 and so on up to W. (Alternatively, one may use [21,30,52] windows with length 6, 7, 8 and so on up to l, where l is markedly smaller than W, e.g., l ≈ 40.) We then calculate the average value μ(κ_1) and the standard deviation σ(κ_1) of the ensemble of κ_1 values thus obtained. The quantity β_W ≡ σ(κ_1)/μ(κ_1) is defined [53] as the variability of κ_1 for this excerpt of length W and is assigned to the (W + 1)-th EQ in the catalogue, the target EQ. (Hence, for the β_W value of a target EQ only its past EQs are used in the calculation.) The time evolution of the β value can then be pursued by sliding the excerpt W through the EQ catalogue, and the corresponding minimum value (for at least W values before and W values after) is labelled β_W,min. In addition, for the purpose of the present study, for the target EQ we apply the standard procedure [34,54] of DFA to the magnitude time series of the preceding 300 EQs, which is on the average the number of events that occurred in the past few months (see also below). Hence, for the target EQ we deduce a DFA exponent, hereafter labelled α (cf. α = 0.5 means random behavior, α greater than 0.5 long range correlation, and α less than 0.5 anti-correlation). By the same token the time evolution of the α value can be pursued by sliding the natural time window of length 300 through the EQ catalogue. The minimum values α_min of the α exponent observed (roughly three months) before (bef) and after (aft) the identification of β_W,min are designated by α_min,bef and α_min,aft, respectively (cf. when a major EQ takes place, α_min,aft is the minimum α value after β_W,min up to this EQ occurrence). In particular, the α_min,bef and α_min,aft values (given in detail in Tables II to V) were determined by investigating the minimum of the α exponent up to three and a half months (105 days) before and after β_250,min, respectively.
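A minimal numerical sketch of the κ_1 and β_W computation just described is given below (illustrative and unoptimized; the sub-window bookkeeping follows the text above, and the energy-magnitude relation in the usage comment is an assumption, not a value taken from this paper).

```python
import numpy as np

def kappa1(energies):
    """Variance kappa_1 of natural time chi_k = k/N weighted by p_k = Q_k / sum(Q)."""
    Q = np.asarray(energies, dtype=float)
    N = len(Q)
    chi = np.arange(1, N + 1) / N
    p = Q / Q.sum()
    return np.sum(p * chi ** 2) - np.sum(p * chi) ** 2

def variability_beta(excerpt_energies):
    """beta_W = sigma(kappa_1) / mu(kappa_1) over all sub-windows of length 6..W
    sliding through an excerpt of W successive earthquakes."""
    W = len(excerpt_energies)
    k1_values = []
    for length in range(6, W + 1):
        for start in range(W - length + 1):
            k1_values.append(kappa1(excerpt_energies[start:start + length]))
    k1_values = np.asarray(k1_values)
    return k1_values.std() / k1_values.mean()

# Example: beta_W assigned to the (W+1)-th event, using only the W preceding events;
# the energies Q_k could, for instance, be taken proportional to 10**(1.5 * Mw)
# (an assumption made here for illustration).
# beta = variability_beta(Q[i - W:i])
```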
III. THE DATA ANALYZED
The JMA seismic catalogue was used. We considered all the EQs in the period from 1984 until the Tohoku EQ occurrence on 11 March 2011 within a larger area (black rectangle in Fig. 1) than the one (yellow rectangle in Fig. 1) studied by Varotsos et al. [30] for two reasons: First, when plotting in Fig. 1 the links along with the corresponding nodes recently identified by a network approach developed by Tenenbaum et al. [55], we see that the nodes in the uppermost right part are now surrounded by the black rectangle but not by the yellow one (cf. a node represents a spatial location while a link between two nodes represents similar seismic activity patterns in the two different locations [55]). Second, the epicenter of the major EQ of magnitude 8.2 that occurred on 4 October 1994 lies inside the former rectangle, but not in the latter (Table I).
The energy of EQs was obtained from the magnitude M JMA reported by JMA after converting [56] to the moment magnitude M w [57]. Setting a threshold M JMA = 3.5 to assure data completeness, we are left with 47,204 EQs and 41,277 EQs in the concerned period of about 326 months in the larger (black rectangle) and smaller (yellow rectangle) area, respectively. Thus, we have on the average ∼ 145 and ∼ 125 EQs per month for the larger and smaller area, respectively. In what follows, for the sake of brevity in the calculation of β W values for both areas, we shall use the values W = 200 and W = 300 (as in Sarlis et al. [32]), which would cover a period of around a few months before each target EQ. In addition for the sake of comparison between the two areas, we will also investigate the case of W = 250 since this value in the larger area roughly corresponds to the case W = 200 in the smaller area. Figure 2 provides an overview of the values computed in this study. In particular, the following quantities are plotted versus the conventional time during the 27 year period from 1 January 1984 until the Tohoku EQ occurrence on 11 March 2011: In Fig. 2A the DFA exponent α is depicted with red line for the larger area and with green line for the smaller. In Fig. 2B, we show the quantities β 200 and β 300 (in red and blue, respectively) for the smaller area. Finally, in Fig. 2C, we show β 200 , β 250 and β 300 (in red, green and blue, respectively) for the larger area.
IV. RESULTS
A first inspection of the α values in Fig. 2A shows that in view of their strong fluctuations it is very difficult to identify their correlations with EQs. A closer inspection, however, reveals the following striking point: The deeper minima of the α values (when considering the α values in both areas) are observed in the periods marked with grey shade which are very close to the occurrence of the stronger EQs in Japan during the last decade. These two EQs are (Table I) the M8 Off-Tokachi EQ of 26 September 2003 and the M9 Tohoku EQ of 11 March 2011. This instigated a more detailed investigation of the α values close to these two major EQs for which unfortunately precursory geoelectrical data are lacking (for the case of the M9 Tohoku EQ only geomagnetic data are available, see below). Thus, before presenting these two investigations and in order to better understand the results obtained, we first describe below a similar investigation for the case of the volcanic-seismic swarm activity in 2000 in the Izu Island region, Japan, in which as mentioned both datasets, i.e., SES activities and seismicity, are available. Figures 3A to 3C refer to this case and extend up to 1 July 2000, which is the date of occurrence of an M6.5 EQ close to Niijima Island (yellow square in Fig. 1). This EQ was preceded by an SES activity initiated on 26 April 2000 at a measuring station located at this island [24,25].
An inspection of Figs. 3A to 3C reveals the following three main features referring to the periods before, during, and after the observation of the precursory β minimum: Stage A marked in cyan: Putting the details aside, we observe in Fig. 3A that around 12 February 2000 the DFA exponent in both areas went down to a value markedly smaller than 0.5, i.e., α ≈ 0.41, in the smaller area and α ≈ 0.43 in the larger area. These α min,bef values indicate anticorrelated behavior in the magnitude time series.
Stage B marked in yellow: From the last days of March until the first days of June 2000, the exponent α becomes markedly larger than 0.5, i.e., around α ≈ 0.57, pointing to the development of long range temporal correlations. In Figs. 3B and 3C, we then observe that after the last days of March the variability β exhibits a gradual decrease and a minimum β_W,min appears on a date around the date of the initiation of the SES activity. In particular, in Fig. 3C, the relevant curve (green) for β_250 in the larger area minimizes on 25 April 2000, which is approximately the date of the initiation of the SES activity reported by Uyeda et al. [24,25], lying also very close to the date (21 April) at which in the smaller area the β_200 curve (red) in Fig. 3B minimizes. Thus, in short, the minimum β_W,min appears when α > 0.5 and hence when long range correlations (corr) have been developed in the EQ magnitude time series. The corresponding α values during the observation of the minima β_250,min will be hereafter designated α_corr. Hence, in this case α_corr ≈ 0.57.
Stage C marked in brown: Approximately on 10 June 2000, Fig. 3A shows that the DFA exponent decreases to a value around 0.5. This means that the previously established long range temporal correlations between EQ magnitudes break down to an almost random behavior. The value α ≈ 0.5 remains almost constant until the third week of June; shortly afterwards, the aforementioned M6.5 EQ occurred on 1 July 2000.

A similar sequence of features is observed before the M8 Off-Tokachi EQ of 26 September 2003 (Fig. 4): precursory β minima are identified (Tables II and IV), as can be seen in Figs. 4C and 4B, and the corresponding α values are α_corr ≈ 0.6.
(C) A breakdown of long range correlations starts around 1 September 2003; see the beginning of the brown region in Fig. 4A, where the α values decrease to α ≈ 0.5, i.e., close to random behavior, and subsequently go down to around α ≈ 0.45 (while finally α reaches the value α min,aft = 0.384 and 0.434 for the larger and the smaller area, respectively, see Tables II and IV), indicating anti-correlation. The M8 EQ occurred three weeks later, i.e., on 26 September 2003, and after its occurrence the α value decreases to an unusually low value (0.33 and 0.35 in the larger and the smaller area, respectively), which corresponds to one of the two deeper α minima mentioned above in the periods marked with grey shade in Fig. 2A.

We finally turn to the M9 Tohoku EQ of 11 March 2011 (Fig. 5). (A) An unusually low α min,bef is observed on 22 December 2010 (Fig. 5A; see also the comment below). (B) Figures 5B and 5C show that an evident decrease of β subsequently starts, leading to a deep β minimum around 5 January 2011. This is the deepest β W,min observed [32] since the beginning of our investigation on 1 January 1984, as can be seen in the rightmost side of Figs. 2B and 2C. Remarkably, the anomalous magnetic field variations [58] (which accompany anomalous electric field variations, i.e., SES activities [59]) initiated almost on the same date, i.e., 4 January 2011.
(C) In the brown region lasting from about 13 January to 10 February 2011, the behavior turns to an anti-correlated one, which is very close to random, as evidenced by Fig. 5A, in which the α values are α ≲ 0.5. The M9 EQ occurred almost four weeks after this period, i.e., on 11 March 2011.
The following important comment, referring to the two deeper minima of the α values in Fig. 2A, is now in order. Here, the unusually low α min,bef on 22 December 2010 (Fig. 5A) was shortly followed by the deepest β minimum on 5 January 2011 (Figs. 5B and 5C); this unusually low α value is of precursory nature. By contrast, the unusually low α minimum on 26 September 2003 discussed in the previous case, which was not shortly followed by a deep β minimum (see Fig. 2A), is not precursory, having been influenced by the preceding M8 Off Tokachi EQ. In other words, upon the observation of an unusually low α value we cannot decide whether it is of precursory nature; rather, we have to combine this observation with the results of natural time analysis and investigate whether this α min value is shortly followed by a deep β W,min value. Hence, it is of key importance to examine in each case whether the sequence of the aforementioned three main features A, B, C has appeared or not.
V. DISCUSSION AND CONCLUSIONS
DFA has been employed long ago for the study of seismic time series in various regions, e.g., see Telesca et al. [60] for the Italian territory. DFA studies of the long-term seismicity in Northern and Southern California were initially focused on the regimes of stationary seismic activity and found that long range correlations exist [61] between EQ magnitudes with α = 0.6. Similar DFA studies of long-term seismicity were later [53,62] extended to the seismic data of Japan, and the results strengthened the evidence for long range temporal correlations. In particular, it was found [53,62] that the DFA exponent is around 0.6 for short scales but α = 0.8-0.9 for longer scales (the crossover being around 200 EQs). In addition, nonextensive statistical mechanics [63,64], pioneered by Tsallis [65], has been employed [62] in order to investigate whether it can reproduce the observed fluctuations of the seismic data. In this framework, on the basis of which it has been shown [66] that kappa distributions arise, a generalization of the Gutenberg-Richter (G-R) law for seismicity has been offered (for details and relevant references see Section 6.5 of Varotsos et al. [21] as well as Telesca [67]), and the investigation led to the following conclusions [21,62]: The results of the natural time analysis of synthetic seismic data obtained from either the conventional G-R law or its nonextensive generalization deviate markedly from those of the real seismic data. On the other hand, if temporal correlations between EQ magnitudes, with different α values (i.e., α ≈ 0.6 and α ≈ 0.8-0.9 for short and long scales, respectively), are inserted into the synthetic seismic data, the results of natural time analysis agree well with those obtained from the real seismic data. In other words, the parameter q of nonextensive statistical mechanics cannot capture the whole effect of long range temporal correlations between the magnitudes of successive EQs. On the other hand, nonextensive statistical mechanics, when combined with natural time analysis (which focuses on the sequential order of the events that appear in nature), does enable a satisfactory description of the fluctuations of the real data of long-term seismicity.
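To make the preceding discussion concrete, a minimal Python sketch of first-order DFA applied to an EQ magnitude series is given below. It follows the standard recipe (integrate the demeaned series, detrend linearly in non-overlapping boxes, and fit the log-log slope of the fluctuation function); the choice of box sizes and the use of a single overall fit, instead of separate short- and long-scale fits with a crossover near 200 events, are simplifying assumptions.

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """First-order detrended fluctuation analysis of the series x; returns
    the scaling exponent alpha from a least-squares fit of log F(n) vs log n."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                  # profile (integrated series)
    if scales is None:
        scales = np.unique(np.logspace(np.log10(4), np.log10(len(x) // 4),
                                       20).astype(int))
    fluct = []
    for n in scales:
        nseg = len(y) // n
        rms = []
        for i in range(nseg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend per box
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha
```

Applied to windows of 300 consecutive magnitudes (cf. the Appendix), such a routine yields exponents of the kind plotted in Fig. 2A, with α > 0.5 indicating long range correlations and α < 0.5 anticorrelated behavior.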
In the present paper, we study the dynamic evolution of seismicity and pay attention to the regimes before major EQs by combining the results of DFA of the EQ magnitude time series with natural time analysis, since the latter has revealed that a minimum β W,min in the fluctuations of the order parameter of seismicity is observed before major EQs in California [68] and Japan [30,32] (cf. nonextensive statistical mechanics cannot serve the purpose of the present study, i.e., following the dynamic evolution of seismicity). This combination has been applied in the previous Section to three characteristic cases in Japan, i.e., the volcanic-seismic swarm activity in the Izu Island region in 2000, the M8 Off Tokachi EQ in 2003, and the M9 Tohoku EQ in 2011. The following three main features have been found in all three cases:

Stage A (before the β W,min ): clear anti-correlated behavior, α < 0.5.

Stage B: establishment of long range correlations, α > 0.5, during which a minimum β W,min appears (approximately at the date of the initiation of the SES activity, as found in the Izu case as well as in the M9 Tohoku case).
Stage C (after the β W,min ): breakdown of long range correlations, with the emergence of an almost random behavior turning to anti-correlation, α ≲ 0.5. A few weeks after this breakdown the major EQ occurs. This is strikingly reminiscent of findings in other complex time series: in the case of electrocardiograms, for example, the long-range temporal correlations that characterize healthy heart rate variability break down for individuals at high risk of sudden cardiac death, and this breakdown is often accompanied by the emergence of uncorrelated randomness [21,41,69].
The same features have been found to hold before all the other M JMA ≥ 7.8 EQs in Japan during the period from 1 January 1984 to the Tohoku EQ occurrence, i.e., the Southwest-Off Hokkaido M7.8 EQ in 1993 (Fig. 6), the East-Off Hokkaido M8.2 EQ in 1994 (Fig. 7), and the Near Chichi-jima M7.8 EQ in 2010 (Fig. 8). As for the observed pattern in α, i.e., anti-correlated, then correlated, and then almost random behavior, it might be related to tectonics and geodynamics, but a precise physical justification of its origin is not yet clear.
The β minima that are precursory to M JMA ≥ 7.8 EQs can be distinguished from other β minima that are either non-precursory or may be followed by EQs of smaller magnitude through the following procedure (for details see Appendix and Tables II to V). We make separate studies for the two rectangular areas shown in Fig. 1, i.e., by analyzing the time series of EQs occurring in each area, first in the larger area and secondly in the smaller. In the study of each area, we do the following: We first identify the β minima that appear a few months before all M JMA ≥ 7.8 EQs and determine their β 300,min /β 200,min values. These values lie in a certain narrow range close to unity. Among the remaining minima, we choose those which are equally deep or deeper than the shallowest one of the β 200,min values that preceded the M JMA ≥ 7.8 EQs and in addition they have β 300,min /β 200,min values lying in the range determined above (see Appendix). In order for any of these minima to be precursory to M JMA ≥ 7.8 EQs, beyond the fact that they should exhibit the three main features, A, B, C, mentioned above, they should also have the following property: They should appear practically on the same dates (differing by no more than 10 days or so) in the investigations of both areas. The application of the aforementioned procedure reveals (see Appendix) that only the β minima appearing a few months before the five M JMA ≥ 7.8 EQs exhibit all the aforementioned properties. Remarkably, this procedure (see Appendix) could have been applied before the occurrence of the M9 Tohoku EQ, after the identification of the deepest β minimum observed on January 2011, leading to the conclusion that an M JMA ≥ 7.8 EQ was going to occur in a few months.
Let us summarize: by employing DFA of the EQ magnitude time series, we show that the minimum β W,min of the fluctuations of the order parameter of seismicity, observed a few months before an M JMA ≥ 7.8 EQ, appears when long range correlations prevail (α > 0.5). In addition, this β W,min is preceded by a stage in which DFA reveals clear anti-correlated behavior (α < 0.5), and is followed by another stage in which the long range correlations break down to an almost random behavior turning to anti-correlation (α ≲ 0.5). On the basis of these main features, we suggest a procedure which distinguishes the β minima that precede EQs of magnitude exceeding a certain threshold (here, M JMA ≥ 7.8) from other β minima which are either non-precursory or may be followed by EQs of smaller magnitude.
Appendix A: Distinction of the β minima that precede EQs of magnitude M JMA ≥ 7.8 from other minima which are either non-precursory or followed by EQs of smaller magnitude

Recall that, in order to classify a β W,min value, it should be a minimum for at least W values before and W values after. Further, to assure that β 200,min , β 250,min and β 300,min are precursory to the same mainshock and hence belong to the same critical process, almost all (in practice above 90% of) the events which led to β 200,min should participate in the calculation of β 250,min and β 300,min .
To distinguish the β W,min that are precursory to EQs of magnitude M JMA ≥ 7.8 from other minima which are either non-precursory or may be followed by EQs of smaller magnitude, we make separate studies for the larger and the smaller area, and the results obtained must be checked for their self-consistency. For example, a major EQ whose epicenter lies in both areas should be preceded by β W,min identified in the separate studies of these two areas on approximately (in view of their difference in seismic rates) the same date. In particular, we work as follows. Let us assume that we start the investigation from the larger area, where five EQs of magnitude 7.8 or larger occurred from 1 January 1984 until the M9 Tohoku EQ in 2011 (Table I). We first identify the β minima that appear a few months before all these EQs and determine their β 300,min /β 200,min values (see Table II). These values are found to lie in a narrow range close to unity [32], i.e., in the range 0.92-1.06. (This range slightly differs from the previously reported [32] range 0.95-1.08 since in the present work the numerical accuracy of the calculated κ 1 values for W > 100 has been improved.) This is understood in the context that these values correspond to similar critical processes, thus exhibiting the same dependence of β W on W . During the whole period studied, however, beyond the above mentioned β minima before all the M JMA ≥ 7.8 EQs, more minima exist. Among these minima we choose those which are equally deep or deeper than the shallowest one of the β 200,min values previously identified (e.g., 0.294 in Table II) and which, in addition, have β 300,min /β 200,min values lying in the narrow range determined above. Thus, we now find a list of "additional" β minima (see Table III) that must be checked to determine whether they are non-precursory or may be followed by EQs of smaller magnitude.
We now repeat the whole procedure, as described above, for the determination of β minima in the smaller area. Thus, we obtain a new set of β minima (with shallowest β 200,min = 0.293 and β 300,min /β 200,min range 0.97-1.09, see Table IV) that appear a few months before all the M JMA ≥ 7.8 EQs (cf. there exist 4 such EQs in the smaller area, see Fig. 1), as well as a new list of "additional" β minima (see Table V) to be checked as to whether they are non-precursory or may be followed by EQs of smaller magnitude. Comparing these new β minima with the previous ones, we investigate whether they: (1) appear practically on the same date (differing by no more than 10 days or so) in both areas, and (2) exhibit the three main features (i.e., the sequence (A) anti-correlated behavior / (B) correlated / (C) almost random behavior) that emerged from the results of DFA of the EQ magnitude time series discussed in the main text. In Tables III and V, corresponding to the "additional" minima, we mark in bold the values which do not satisfy at least one of the following three inequalities which quantify these main features: α min,bef ≤ 0.47, α corr > 0.50, α min,aft ≤ 0.50. (Note that considerable errors are introduced in the estimation of the α exponent when using a relatively small number of points as the one used here, i.e., 300. This is why we adopt α = 0.47 as the maximum value in order to assure anti-correlated behavior in the period before the appearance of the β minimum.) A summary of the main results obtained after carrying out this investigation is as follows. First, the β minima identified a few months before all the M JMA ≥ 7.8 EQs with epicenters inside both areas obey the aforementioned requirements (1) and (2), see Tables II and IV. Remarkably, in the remaining case, i.e., the East-Off Hokkaido M8.2 EQ labelled EQ2 in Table I, with an epicenter outside the smaller area but inside the larger, we find a β minimum not only in the study of the larger area (on 30 June 1994, see Table II), but also in the corresponding study of the smaller area (see the third β minimum in Table V, observed on 5 July 1994). Second, all the other "additional" β minima resulting from the studies in both areas (see Tables III and V) violate at least one of the requirements (1) and (2). In other words, only the β minima appearing a few months before the five M JMA ≥ 7.8 EQs exhibit all the aforementioned properties. Remarkably, this procedure could have been applied before the occurrence of the M9 Tohoku EQ, after the identification of the deepest β minimum in January 2011 having β 300,min /β 200,min almost unity, thus lying inside the narrow ranges identified from previous M JMA ≥ 7.8 EQs (see Tables II and IV), leading to the conclusion, as mentioned in the main text, that an M JMA ≥ 7.8 EQ was going to occur in a few months.
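The selection rules just described can be summarized in a short sketch. The numerical thresholds are those quoted above (the shallowest β 200,min and β 300,min /β 200,min ranges from Tables II and IV, the roughly 10-day coincidence window between the two areas, and the three α inequalities), while the data structure for a candidate minimum and the function names are illustrative assumptions only.

```python
from datetime import date

# Thresholds quoted in Appendix A (shallowest beta_200,min and the
# beta_300,min / beta_200,min ranges identified from Tables II and IV).
CRITERIA = {
    "larger":  {"beta200_max": 0.294, "ratio_range": (0.92, 1.06)},
    "smaller": {"beta200_max": 0.293, "ratio_range": (0.97, 1.09)},
}

def passes_area_criteria(m, area):
    """m: candidate minimum for one area, e.g.
    {"date": date(2011, 1, 5), "beta200_min": 0.29, "beta300_min": 0.30,
     "alpha_min_bef": 0.42, "alpha_corr": 0.58, "alpha_min_aft": 0.46}."""
    c = CRITERIA[area]
    ratio = m["beta300_min"] / m["beta200_min"]
    return (m["beta200_min"] <= c["beta200_max"]
            and c["ratio_range"][0] <= ratio <= c["ratio_range"][1]
            and m["alpha_min_bef"] <= 0.47        # stage A: anti-correlated
            and m["alpha_corr"] > 0.50            # stage B: correlated
            and m["alpha_min_aft"] <= 0.50)       # stage C: ~random / anti-corr.

def is_precursory(cand_larger, cand_smaller, max_days=10):
    """Both-area check: criteria satisfied in each area and dates coinciding
    to within about 10 days."""
    return (abs((cand_larger["date"] - cand_smaller["date"]).days) <= max_days
            and passes_area_criteria(cand_larger, "larger")
            and passes_area_criteria(cand_smaller, "smaller"))
```

Only the minima that pass such a combined check, in addition to exhibiting the A-B-C sequence described in the main text, are classified here as precursory to M JMA ≥ 7.8 EQs.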
We clarify that the above procedure does not preclude, of course, that one of the "additional" minima may be of truly precursory nature, but corresponding to an EQ of magnitude smaller than the threshold adopted. As a first example, we mention the case marked FA7 in Table III, referring [...].

[Fig. 1 caption (fragment): the smaller area [30] is also shown with a yellow rectangle; the small yellow square indicates the location of the Niijima Island, where the precursory SES activity of the volcanic-seismic swarm activity in 2000 in the Izu Island region has been recorded [24,25]; furthermore, the network links as reported by Tenenbaum et al. [55] (see their Fig. 6(a)) are also shown.]

[Table II caption: the values of β W,min , α min,bef , α corr and α min,aft in the larger area, investigated with sliding natural time windows of length 6 to W, that preceded the M JMA ≥ 7.8 EQs listed in Table I. Hereafter, the value of α corr is given when β 250,min appears, and α min,bef is the minimum of the DFA exponent up to three and a half months (105 days) before β 250,min .]
[Table II column headers (fragment): Label, β 200,min , β 250,min , β 300,min . Table footnotes: this value of α is observed 15.5 hours before the mainshock; this value of α is observed 24.5 hours before the mainshock.]
An International Standardized Magnetic Resonance Imaging Protocol for Diagnosis and Follow-up of Patients with Multiple Sclerosis Advocacy, Dissemination, and Implementation Strategies
Standardized magnetic resonance imaging (MRI) protocols are important for the diagnosis and monitoring of patients with multiple sclerosis (MS). The Consortium of Multiple Sclerosis Centers (CMSC) convened an international panel of MRI experts to review and update the current guidelines. The objective was to update the standardized MRI protocol and clinical guidelines for diagnosis and follow-up of MS and develop strategies for advocacy, dissemination, and implementation. Conference attendees included neurologists, radiologists, technologists, and imaging scientists with expertise in MS. Representatives from the CMSC, Magnetic Resonance Imaging in MS (MAGNIMS), North American Imaging in Multiple Sclerosis Cooperative, US Department of Veteran Affairs, National Multiple Sclerosis Society, Multiple Sclerosis Association of America, MRI manufacturers, and commercial image analysis companies were present. Before the meeting, CMSC members were surveyed about standardized MRI protocols, gadolinium use, need for diffusion-weighted imaging, and the central vein sign. The panel worked to make the CMSC and MAGNIMS MRI protocols similar so that the updated guidelines could ultimately be accepted by international consensus. Advocacy efforts will promote the importance of standardized MS MRI protocols.
The purpose of this meeting was to update the CMSC guidelines for a standardized MRI protocol for the diagnosis and monitoring of MS, with a major focus on discussion and development of plans to promote its use. Before the meeting, the CMSC general membership was surveyed about the current use of MRI with a consistent standardized protocol, as well as the use of gadolinium and diffusion-weighted imaging sequences and the utility of cortical lesions, brain atrophy, and the central vein sign, which helped in updating the technical and protocol-specific details for a standardized MRI examination. In the process of updating the protocol, in recognition of the value and importance of being able to have an international consensus protocol, there was a consistent effort to make the CMSC and MAGNIMS MRI protocols and guidelines similar. Working collaboratively with MAGNIMS and NAIMS, the updated CMSC recommendations were incorporated into the International 2020 Guidelines.
Recognizing the critical importance of promoting more widespread use of the standardized MS MRI examination, the CMSC Working Group developed several action plans to advocate, disseminate, and implement the updated recommendations worldwide. Advocacy involves promoting the protocol so that it becomes universally useful, usable, accepted, and adopted. Dissemination includes distributing the information internationally. Implementation involves putting all of these recommendations into effect.

There are currently two widely recognized sets of recommended protocols: 1) revised recommendations for a standardized MRI protocol and clinical guidelines for diagnosis and follow-up of MS from the CMSC in North America, and 2) consensus guidelines on the use of MRI in MS from the Magnetic Resonance Imaging in MS (MAGNIMS) Study Group in Europe. [1][2][3][4][5] Although these protocols are well recognized and frequently cited, 6-10 disappointingly, they are not used widely. 11,12 During the 2019 MRI Consensus Guidelines Conference, the goal was to collaborate with multiple stakeholders in MS patient care and neuroimaging to begin developing globally aligned recommendations and to promote more widespread use of a standardized MRI protocol for MS. The first objective is within reach now as the CMSC Working Group, using the updated recommendations from this meeting, has partnered with the MAGNIMS Study Group and the North American Imaging in Multiple Sclerosis Cooperative (NAIMS) to produce the International 2020 MAGNIMS-CMSC-NAIMS Consensus Guidelines on the Use of MRI in Multiple Sclerosis (International 2020 Guidelines [manuscript submitted for publication]). The present paper reports on the second objective of promoting more widespread use of a standardized MS MRI protocol, with discussions and proposals for advocacy, dissemination, and implementation.
Consensus Conference
The October 2019 consensus conference attendees consisted of neurologists, radiologists, MRI technologists, and imaging scientists with expertise in MS from the United States, Canada, and Europe, including representatives from the CMSC, MAGNIMS, NAIMS, the National Multiple Sclerosis Society (NMSS), the Multiple Sclerosis Association of America (MSAA), and leading MRI manufacturers (GE Healthcare, Philips Healthcare, and Siemens Medical Solutions) and commercial image analysis companies (Cortech Labs and icometrix).
Educating payers and insurance companies will also be key. If they understand that nonstandardized images that cannot be easily compared with previous studies are a waste of time and money, they will soon be requesting that all MRI facilities use the international guidelines. Payers want to keep costs low while providing high-quality care for their clients. 15 Standardized images that provide optimal data and reduce the need for repeated imaging may be one way for the government and insurers to control health care spending. Of perhaps greater concern is that a suboptimal image could lead to the wrong management decision, which will be even more costly, especially if treatment decisions lead to prescribing more costly medications with potentially more adverse effects. Insurers and payers also need to advocate for their patients with MS by having them referred only to facilities that have adopted the standardized MRI protocol.
Dissemination
The CMSC has outlined a broad strategy for dissemination of the International 2020 Guidelines. To communicate with neurologists, radiologists, and others on the MS health care team, there have been multiple submitted abstracts to national and international neurologic, MRI, and MS-related meetings. Information about the updated guidelines is also available now through multiple resources, including posters presented at international meetings, published news articles, and video programming (Appendix S1, which is published in the online version of this article at ijmsc.org). Much more will be done with the soon-to-be-published consensus International 2020 Guidelines.
The CMSC will also work in conjunction with organizations such as the NMSS to disseminate the Guidelines by way of educational programming, webinars, distance learning, social media, and postings on their websites. The CMSC plans to have an FAQ (Frequently Asked Questions) on its website concerning the new International 2020 Guidelines, when these become available, to assist neurologists, radiologists, and MRI technologists as they incorporate the protocol into their everyday practice. Education on an international level will be critical to promoting use of the guidelines.
Examination cards that succinctly describe the full International 2020 Guidelines will be available to anyone visiting the CMSC, NMSS, and Multiple Sclerosis Association of America websites, and vendor-specific versions will be available for uploading onto MRI machines. Hard-copy laminated versions of the examination cards will also be available on request and widely distributed to MS clinics, MRI centers, and other health care facilities.
MRI Survey
Ninety-five of the CMSC members responded to the question, "Do most of your patients get an MRI done with a standardized protocol?" Only 34% were definite that the CMSC protocol was used, 14% had to specifically request the CMSC or a standardized MRI protocol, 48% responded that either a local protocol was used or that they were uncertain whether the CMSC protocol was used, 3% indicated that studies "looked different each time," and 1% did not know.
Advocacy
Magnetic resonance imaging is invaluable in the diagnosis and ongoing monitoring of MS. Identifying new lesions and/or enhancement on MRI can lead to an earlier diagnosis of MS and help determine whether there is a need to initiate or change treatment. A well-performed standardized MRI examination is key. Using standardized T2-weighted/fluid-attenuated inversion recovery (FLAIR) sequences can accurately detect new MS lesions compared with previous standardized MRI studies, often without the need for gadolinium (reduces extra cost and can minimize concerns about gadolinium deposition with frequent administration). 1,5,13 In nonstandardized MRI, inconsistent slice thickness (often with slice gaps), incomplete brain coverage, and not using a reproducible acquisition plane (subcallosal plane is recommended) all contribute 14 to images that are different from one examination to the next, making them difficult to compare for accurate and confident identification of new lesion activity.
Raising awareness about the critical importance of standardized MS MRI protocols by advocating for their use with radiologists and neurologists will be required. Receiving endorsements from national and international neurologic and radiologic associations as well as patient advocacy groups, including the NMSS, Multiple Sclerosis Society of Canada, and MSAA, will be helpful to achieve this goal.
Educating patients with MS about the value of standardized MRI protocols is also important. As part of the MS health care team, patients are already active participants in their own care, and they often maintain digital copies of their own MRI records and images. It should, therefore, not be surprising if an informed and empowered patient specifically requests MRI following the International 2020 Guidelines. Having patients advocate for the use of the standardized MRI examination will make a difference with providers and payers. Patient education will help in the effort to encourage international acceptance and use of the guidelines.

For the standardized MRI examination to be usable, the radiologist and MRI center staff will need to recognize and understand that the examination is reasonable and practical, which most commonly means that the examination should be completed in a reasonable amount of time. There is also a mistaken perception of its complexity. The standardized MRI brain study, which includes core sequences (<3-mm slices, contiguous), three-dimensional (3D) (or two-dimensional [2D] if 3D not available) axial and sagittal FLAIR, and 3D (or 2D) T2-weighted and 2D diffusion-weighted images, can be easily acquired in less than 20 minutes, and for sites wanting the additional options for brain volume (3D high-resolution T1-weighted gradient echo) and central vein (susceptibility-weighted) assessment, the entire study can be accomplished in 25 to 30 minutes. 3 The use of gadolinium-based contrast agents is essential in the diagnostic work-up and can also be helpful in monitoring some patients with MS, particularly when there is highly active disease, unexplained or unexpected clinical worsening, or concern regarding an alternative diagnosis to MS. It is not necessary for most routine follow-up studies to identify new lesion activity when imaging is well performed in a standardized manner. Acquiring the 3D FLAIR sequences during the 5-minute delay required after injection is an additional useful time-saving strategy.
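The core brain protocol described above is compact enough to be written down as a small machine-readable configuration that a site could use to audit its local protocol against the guideline. The sketch below is only an illustrative Python encoding of the parameters quoted in this paragraph; the field names, structure, and checking function are assumptions, not part of the published guidelines.

```python
# Illustrative summary of the core standardized MS brain MRI protocol
# described above; times and sequence details are as quoted in the text.
MS_BRAIN_PROTOCOL = {
    "slice_thickness_mm": 3,            # <= 3 mm, contiguous (no gaps)
    "acquisition_plane": "subcallosal",
    "core_sequences": [
        "3D (or 2D) sagittal FLAIR",
        "3D (or 2D) axial FLAIR",
        "3D (or 2D) T2-weighted",
        "2D diffusion-weighted",
    ],
    "optional_sequences": {
        "brain_volume": "3D high-resolution T1-weighted gradient echo",
        "central_vein": "susceptibility-weighted imaging",
    },
    "scan_time_minutes": {"core": "<20", "with_options": "25-30"},
    "gadolinium": "essential for diagnosis; optional for routine follow-up",
}

def missing_core_sequences(local_protocol):
    """Return the core sequences absent from a locally defined protocol dict."""
    local = local_protocol.get("core_sequences", [])
    return [s for s in MS_BRAIN_PROTOCOL["core_sequences"] if s not in local]
```

Encodings of this kind could complement the vendor-specific protocol cards mentioned above, but any concrete implementation would need to follow the published guideline text itself.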
Our understanding of MRI best practices continues to evolve and advance. A major improvement in MRI technology in the past few years is the ability to acquire high-resolution 3D images. In a time that is just slightly longer than that required to acquire a 2D sequence in only one plane, a 3D isometric acquisition can be reformatted in any imaging plane, replacing images in two different planes (axial and sagittal) and thereby reducing overall imaging time. The high-resolution images (typically 1×1×1 mm) are particularly helpful for lesion identification with the FLAIR sequence.
To make the standardized MRI protocol even easier to adopt, MRI equipment manufacturers are working to have the International 2020 Guidelines protocol sequences available on the MRI machine itself without requiring any changes to existing equipment. Having the sequences already preloaded into the imaging software and/or easily updated (online or with downloadable MRI protocol cards) will make the protocol efficient, easy to use, and more likely to be selected as the protocol of choice for diagnosis and monitoring of patients with MS.
The US Department of Veteran Affairs was instrumental in early dissemination of the MRI protocol after the 2006 publication of the CMSC consensus guidelines, 16 and they are already working on dissemination and implementation strategies for the International 2020 Guidelines with their MRI facilities throughout the United States (Appendix S2). There are plans for the CMSC to have similar discussions with national MRI services and managed care providers to inform them of the details of the standardized MRI examination and to strongly encourage them and their members to adopt and use the standardized MRI protocol for their patients and clients with MS.
Implementation
Probably one of the biggest barriers for MRI centers that have been performing studies that do not meet the updated recommendations is the inertia of "we've been doing this a long time, we're familiar with what we're doing, and we don't want a change as change will be hard." The radiologist and staff will need to be convinced that the standardized MRI examination is useful and usable so that the changes can be made and the protocol will be used.
The radiologist and staff at the MRI center will need to understand that the standardized MRI examination will be useful as it will be helpful and beneficial to the patient with MS and referring physician by identifying new lesions and lesion activity, which aids diagnosis and informs management decisions. It will also be useful (helpful and beneficial) to the radiologist because it will be much easier to compare images that are consistent and reproducible, including when patients transfer care (eg, after a move). Having the neurologist (and other referring physicians) specifically request a standardized MRI examination according to International 2020 Guidelines will be important. Even better will be specific requests from patients with MS, and the payers, that they will only be imaged at MRI centers that do so.
It has been the vision of the CMSC Working Group from the start that the Guidelines would be "4U": universal, useful, usable, and used. With the collaborative efforts of the CMSC, MAGNIMS, and NAIMS, the soon-to-be published International 2020 MAGNIMS-CMSC-NAIMS Consensus Guidelines on the Use of MRI in Multiple Sclerosis will make the guidelines (almost) universal. Having the status of "international" guidelines will lead to wider acceptance and, more importantly, wider adoption and use. A multipronged approach educating radiologists, neurologists, and other health care providers, as well as payers, insurers, and patients, will then be needed to promote the wider use of the standardized MRI protocol. Using the same facility whenever possible would be best for longitudinal surveillance imaging that relies on sensitive quantitative tools to detect subtle changes such as brain volume measures.
Advocacy and dissemination strategies will help raise awareness that the standardized MRI examinations are useful, being helpful and beneficial to radiologists and neurologists in providing care to patients with MS, and usable, being practical and reasonable to acquire. Radiologists who have overcome the inertia of not wanting change and have used the protocol are typically enthusiastic and extremely satisfied, and many find that they use the protocol even for other neurologic indications in addition to MS because of the protocol's sensitivity and versatility. Having MRI equipment manufacturers provide the recommended sequences on the machine, easily accessible as a one-step process, will help improve the standard of care for MRI in MS.
Ideally, it would be best to use the same standardized MRI protocol, the same facility, and the same MRI equipment for yearly examinations. This is especially important when using follow-up MRIs to monitor subtle, technically challenging changes such as brain atrophy. Using the same facility and equipment may not be possible for all patients (eg, the patient may move or insurance providers may change, requiring a change in MRI facility). However, using the same standard protocol with full brain coverage, consistent image acquisition along the subcallosal plane, and slices that are contiguous and of similar thickness will allow for easy comparison and accurate assessment for new lesions and other changes on subsequent studies, even when there are differences between MRI machine type and/or location. There are ongoing challenges in MRI of the spinal cord, and the guidelines recognize that individual centers should focus on acquisitions that are best suited and most familiar for their local MRI, emphasizing the value of getting at least two (of the four) recommended complementary acquisitions (FLAIR, T2-weighted, proton density-weighted, or T1-weighted [phase-sensitive inversion recovery or 3D inversion-prepared gradient echo]) to identify MS lesions.
Discussion
Magnetic resonance imaging plays an important role in the diagnosis and follow-up of patients with MS. The key is having standardized MRI examinations that enable easy comparison with previous studies and accurate lesion activity identification. [1][2][3][4][5][16][17][18] Recommendations and guidelines for the use of a standardized MRI examination by the CMSC were first proposed in 2001 under the visionary leadership of the late Professor Donald Paty and have since been updated and revised five times, 1,2,16 reflecting our understanding of MS and the evolution of MRI technology. Although widely referenced and known, surprisingly the guidelines are still not widely used. Although centers may use a locally defined MRI protocol for MS consistently, if they do not fully conform with the 2018 CMSC protocol, this would not allow studies to be easily compared for patients who move to a new area or have MRI performed in a different center. The survey of CMSC members performed in preparation for the consensus conference indicated that only 34% of respondents were definite that the MRIs performed were according to the 2018 CMSC guidelines. According to a poster presented at the 2020 Virtual Annual Meeting of the CMSC, of 1233 examinations from a real-world MRI data set, only 8% met the criteria for the T1-weighted sequence of the 2018 CMSC guidelines and only 7% satisfied the criteria for the T2-weighted sequence. 11
PRACTICE POINTS
• Quality MS care includes magnetic resonance imaging (MRI) performed using a standardized protocol with images that can be acquired in 20 minutes or less.
• Standardized MRI reduces the need for and expense of repeated studies by avoiding suboptimal images.
• Advocacy efforts and strategies for dissemination and implementation will be key for the wider clinical use of standardized MRI examinations for patients with MS.

Neurologists and other MS health care providers are also key in the effort toward advocating for the universal use of a standardized MRI protocol for patients with MS. Educating them about the International 2020 Guidelines is essential so that referring physicians will specifically request a standardized MRI examination for their patients.
Educating patients about the International 2020 Guidelines is another key. Today, more and more patients are engaged in their own health care. When patients understand the importance of standardization, they will expect and insist that MRI facilities use the International 2020 Guidelines, knowing that they will benefit from it. Patients will advocate for access to "the right images," which should have the support of the payers and insurers, because this will be much more cost-effective and minimize the need for repeated examinations due to inadequate images. Viewing standardized MRIs through the lens of the COVID-19 pandemic, the focus of care at the present time is to minimize time in doctors' offices and health care facilities. Standardized MRIs can improve diagnostic accuracy, reduce the need for additional imaging, and reduce unnecessary community infection exposure for patients.
In conclusion, the CMSC Working Group has collaborated with MAGNIMS and NAIMS to publish the International 2020 Guidelines on the use of MRI for diagnosis and monitoring of MS patients. We very much look forward to sharing the newly updated International 2020 Guidelines for MRI in MS through various advocacy, dissemination, and implementation strategies. Ultimately, our vision, and goal, is for the updated protocol to be universally useful, usable, and used as the high-quality standard of care for MS patients, and we hope that a future follow-up survey will demonstrate improved acceptance, adoption, and use of this standardized MRI examination.
Systemic Molecular Mediators of Inflammation Differentiate Between Crohn’s Disease and Ulcerative Colitis, Implicating Threshold Levels of IL-10 and Relative Ratios of Pro-inflammatory Cytokines in Therapy
Abstract Background and Aims Faecal diversion is associated with improvements in Crohn’s disease but not ulcerative colitis, indicating that differing mechanisms mediate the diseases. This study aimed to investigate levels of systemic mediators of inflammation, including fibrocytes and cytokines, [1] in patients with Crohn’s disease and ulcerative colitis preoperatively compared with healthy controls and [2] in patients with Crohn’s disease and ulcerative colitis prior to and following faecal diversion. Methods Blood samples were obtained from healthy individuals and patients with Crohn’s disease or ulcerative colitis. Levels of circulating fibrocytes were quantified using flow cytometric analysis and their potential relationship to risk factors of inflammatory bowel disease were determined. Levels of circulating cytokines involved in inflammation and fibrocyte recruitment and differentiation were investigated. Results Circulating fibrocytes were elevated in Crohn’s disease and ulcerative colitis patients when compared with healthy controls. Smoking, or a history of smoking, was associated with increases in circulating fibrocytes in Crohn’s disease, but not ulcerative colitis. Cytokines involved in fibrocyte recruitment were increased in Crohn’s disease patients, whereas patients with ulcerative colitis displayed increased levels of pro-inflammatory cytokines. Faecal diversion in Crohn’s disease patients resulted in decreased circulating fibrocytes, pro-inflammatory cytokines, and TGF-β1, and increased IL-10, whereas the inverse was observed in ulcerative colitis patients. Conclusions The clinical effect of faecal diversion in Crohn’s disease and ulcerative colitis may be explained by differing circulating fibrocyte and cytokine responses. Such differences aid in understanding the disease mechanisms and suggest a new therapeutic strategy for inflammatory bowel disease.
Introduction
Inflammatory bowel diseases [IBD] are chronic, relapsing, inflammatory conditions mediated by concerted immunological, environmental, and genetic processes. 1,2 They include both Crohn's disease [CD] and ulcerative colitis [UC], and are believed to result from an overly aggressive immune response in individuals genetically susceptible to an environmental factor, such as gut commensals. 1,2 Faecal diversion is an approach to management of IBD patients whereby the stream of luminal contents is reduced or eradicated through the formation of an ostomy. Faecal diversion has not shown efficacy in the management of UC, but it has been used to great effect in CD. [3][4][5][6][7][8][9][10][11] In that setting, faecal diversion is associated with induction of clinical remission, mucosal healing, and maintenance of mucosal architecture. 4,7,12 The differential effect of faecal diversion in CD and UC implies that patients' microbiomes, alone or in combination with other faeces-borne factors, may be influential. We have demonstrated previously that patients with CD have a distinctly different mesenteric lymph node [MLN] microbiome compared with that observed in patients with UC. 13 Similarly, differences between the CD and UC gut microbiome have been reported. 14,15 In summary, the MLN microbiota of CD patients is more dysregulated than that of UC patients when compared with the reported healthy gut microbiome, and reflects increases in Proteobacteria. In contrast, the microbiome of MLNs from UC resections exhibits similarity to the reported normal gut microbiome, albeit comprising elevated Firmicutes. 13,16 MLNs are involved in the initiation and progression of immunological processes, which occur in response to bacterial translocation. [17][18][19][20] The differences observed in MLN microbiomes of patients with CD or UC could aid in understanding the mechanisms mediating each disease. Specifically, Proteobacteria, encompassing numerous pathogenic bacteria, may trigger more aggressive immune responses than bacteria present in the MLNs of UC patients. In contrast, MLNs from UC patients display an abundance of Faecalibacterium. 13 Faecalibacterium is associated with anti-inflammatory effects and can induce IL-10 production by dendritic cells. [21][22][23][24] An influential anti-inflammatory cytokine, IL-10, can reduce production of pro-inflammatory cytokines while increasing levels of anti-inflammatory cytokines. [25][26][27] Circulating fibrocytes have also been implicated in the pathogenesis of IBD [in addition to inflammatory and fibrotic diseases], through their differentiation to fibroblasts, myofibroblasts, and adipocytes at sites of inflammation. [28][29][30][31][32][33][34][35][36][37][38][39][40] These cells mediate mucosal and mesenteric fibrosis and inflammation in IBD, through increased proliferation and cytokine and extracellular matrix production. 28,[41][42][43][44][45][46] They are the cellular basis of many manifestations of IBD, particularly CD, including stricturing, fat wrapping, and mesenteric thickening. 43,44,47 To our knowledge, only two studies have investigated circulating fibrocytes in CD, with no published literature relevant to the UC setting. In CD, levels of circulating fibrocytes are increased when compared with healthy controls. 46,48 Recruited to sites of mucosal inflammation early in the inflammatory phase of CD, 46 circulating fibrocytes are present in diseased, but not normal mesentery. 
Levels of circulating fibrocytes increase as disease severity (mucosal, mesenteric, and the Crohn's Disease Activity Index [CDAI]) increases, 48 therefore suggesting a pathobiological role for fibrocytes in IBD.
Given this potential role in disease pathogenesis, we wished to investigate levels of circulating fibrocytes in patients with CD and, for the first time, UC. This study further attempted to explore the relationship between circulating fibrocytes and known risk factors of IBD. The majority of studies investigating faecal diversion in IBD have focused on clinical parameters and potential mucosal healing. However, here, faecal diversion facilitated an appraisal of systemic disease mechanisms and manifestations of IBD following removal [or considerable reduction] of the microbe-rich faecal stream, with emphasis on circulating fibrocytes and plasma-borne cytokines.

Patients with CD or UC, and healthy controls [n = 20], were recruited from UHL and the University of Limerick, respectively. A sub-cohort of the recruited patients with CD were admitted to hospital for an emergency resection to manage a disease-related surgical indication, such as an obstruction, abscess, or perforation. These patients were not suitable for a resection due to the extent of their disease observed at the time of their exploratory laparotomy, were defunctioned with the creation of an ostomy to divert the faecal stream away from the remaining intestine, and were also included.
Crohn's disease
Following admittance to hospital for an emergency resection, an exploratory laparotomy was conducted to assess the extent of disease in a sub-cohort of recruited patients with CD. It was determined that a resection which includes the mesentery, 48 or a classic conservative resection, could not be safely or successfully completed. Therefore, these patients were defunctioned through the creation of a loop ileostomy to divert the faecal stream away from the remaining intestine. An average of 7.2 months later, a second exploratory laparotomy was performed to once again evaluate the extent of disease. Here, it was decided whether to resect the diseased intestine and mesentery or to leave the defunctioning loop ileostomy in situ to allow further healing. Previous work by our group has demonstrated the efficacy of including the mesentery during resections for CD in reducing rates of surgical recurrence. 48 Blood samples were taken from patients with CD before their initial exploratory laparotomy with loop ileostomy creation, and again before their second exploratory laparotomy with or without resection [7.2 months later].
Ulcerative colitis
Patients with UC underwent a total colectomy followed by a completion proctectomy. An end ileostomy was created to divert the faecal stream away from the remaining rectum following the patient's total colectomy, as per the standard of care at the Department of Surgery, UHL. Following an average of 13.7 months where the faecal stream was diverted, the end ileostomy was reversed and a completion proctectomy was performed with the formation of an ileal pouch-anal anastomosis. In this study, the term faecal diversion in patients with UC refers to the time period with the end ileostomy in situ [where the faecal stream was diverted away from the remaining rectum]. Blood samples were taken from patients with UC before their total colectomy with end ileostomy creation, and again before their completion proctectomy [13.7 months later].
Demographics and information collected
Retrieved data included each patient's medical therapy, cigarette smoking status, age, and disease location and behaviour [Montreal classification system] at the time of their operation[s], in addition to their age at the time of diagnosis and a family history of IBD [if any]. 49 Preoperative white blood cell, lymphocyte, monocyte, eosinophil, basophil, neutrophil, and platelet counts were obtained for patients, in addition to their preoperative C-reactive protein [CRP] levels. Patient data were generated by a combination of direct contact, chart reviews, operation and endoscopy notes, and pathology reports.

[...] PO 4 ]) and re-suspended in freezing medium [50% foetal bovine serum, 40% RPMI medium, and 10% dimethyl sulphoxide] prior to transfer to cryogenic vials in 1-mL aliquots. Samples were cooled in a controlled-rate cryogenic freezing container to -80 °C until processing for flow cytometry.
Flow cytometric quantification of circulating fibrocytes
Circulating fibrocytes were quantified as described previously. 48

Plasma cytokines were measured using antibody array slides. In brief, G Series glass slides were blocked for 30 min with RayBio® blocking buffer and then incubated with whole plasma overnight at 4 °C. The slides were then washed and incubated with a custom biotinylated antibody cocktail overnight at 4 °C. Subsequently, the slides were washed once again before incubation with IRDye 800CW Streptavidin [LI-COR Biosciences, UK] and visualisation on an Odyssey® SA [LI-COR]. Background signal was subtracted from the values and data were normalised to the average of the positive controls within each replicate. Data were analysed as the fold change of the cytokine levels in preoperative CD or UC plasma when compared with the levels in healthy controls, or the fold change of cytokine levels in CD or UC patients following faecal diversion compared with the levels before faecal diversion. Levels of preoperative cytokines were assessed in a representative number of patients from our CD and UC cohorts and healthy controls. Sample selection was completed based on fibrocyte levels, ensuring that plasma from individuals with a range of fibrocyte levels was used.
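As a simple illustration of the normalisation just described, the following Python sketch subtracts background, scales each replicate to the mean of its positive-control spots, and expresses patient values as fold change over the healthy-control mean; the array layout, variable names, and function names are assumptions for illustration only, not taken from the study's analysis.

```python
import numpy as np

def normalise_slide(raw, background, positive_controls):
    """Background-subtract raw spot intensities from one array replicate and
    scale to the mean of that replicate's positive-control spots."""
    corrected = np.asarray(raw, dtype=float) - background
    pos = np.asarray(positive_controls, dtype=float) - background
    return corrected / np.mean(pos)

def fold_change(patient_values, control_values):
    """Fold change of normalised cytokine signals (rows = samples,
    columns = cytokines) relative to the healthy-control group mean."""
    return np.mean(patient_values, axis=0) / np.mean(control_values, axis=0)
```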
Statistical analysis
Data are presented as mean ± standard error of the mean [SE] unless otherwise stated. All statistical analyses were completed using SPSS v24 [SPSS Inc., Chicago, USA]. A one-way analysis of variance [ANOVA] with Bonferroni post-hoc tests was used to compare the levels of circulating fibrocytes in patients with colorectal diseases and healthy controls. Two-tailed independent samples t tests were used to compare unrelated parametric variables, and the Mann-Whitney U test was used to compare unrelated non-parametric variables. A two-tailed paired t test was used to compare related parametric variables, and the Wilcoxon test was used to compare related non-parametric variables. Chi-square tests and Z tests for proportions were used to compare nominal data. A 5% level of significance was used for all statistical tests.
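For readers who wish to reproduce this style of analysis outside SPSS, a minimal Python sketch using SciPy is given below. The group labels and data are placeholders, and the Bonferroni correction is applied manually; this is only an illustration of the tests named above, not the study's own code.

```python
import numpy as np
from scipy import stats

def compare_groups(cd, uc, hc):
    """One-way ANOVA across three groups, then Bonferroni-adjusted pairwise
    two-tailed independent t tests (three comparisons)."""
    f_stat, p_anova = stats.f_oneway(cd, uc, hc)
    pairs = {"CD vs HC": (cd, hc), "UC vs HC": (uc, hc), "CD vs UC": (cd, uc)}
    posthoc = {}
    for name, (a, b) in pairs.items():
        t, p = stats.ttest_ind(a, b)
        posthoc[name] = min(p * len(pairs), 1.0)   # Bonferroni adjustment
    return p_anova, posthoc

# Other tests named in the text:
# stats.mannwhitneyu(a, b)        - unrelated, non-parametric
# stats.ttest_rel(before, after)  - related, parametric (paired t test)
# stats.wilcoxon(before, after)   - related, non-parametric
# stats.chi2_contingency(table)   - nominal data (chi-square)
```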
Patient and operation information
A full description of total patient demographics and operation information is available in Table 1. Seven of the recruited patients with CD underwent faecal diversion. Following faecal diversion for an average of 7.2 months, five patients were suitable for a resection that included the mesentery and the formation of an anastomosis, and two patients had diseased intestine and mesentery which remained unsuitable for a resection, and the defunctioning loop ileostomy was left in situ. Demographics of all patients who underwent faecal diversion are provided in Supplementary Table 2, available as Supplementary data at ECCO-JCC online.

Circulating fibrocytes were elevated in patients with CD and UC when compared with healthy controls [Figure 1A]. The level of circulating fibrocytes in CD patients increased as their age at the time of surgery increased [11-20: Figure 1E]. Gender, family history of IBD, duration of disease, and perianal involvement had no effect on circulating fibrocyte levels in CD or UC [data not presented].
Common classes of medications used to manage inflammatory bowel disease have no effect on circulating fibrocytes in IBD
The majority of patients with CD and UC included in the study were undergoing medical therapy to manage disease at the time of surgery [Table 1]. Information on the types of medications used and prescribed dosage ranges is provided in [Supplementary Table 3]. Although biologic agents are anti-inflammatory, for the purposes of this study we have reported their effects separately from non-biologic anti-inflammatory medications [e.g., mesalazine and sulphasalazine] [Table 1].
Medications commonly used to treat IBD had no effect on levels of circulating fibrocytes in CD patients [aminosalicylate: Table 3]. However, patients with UC undergoing immunosuppressive therapy were also administered biologic agents in combination therapeutic strategies [two patients on thiopurines and anti-TNF agents, one patient on thiopurine and an anti-α 4 β 7 agent].
3.4. Cytokines associated with fibrocyte recruitment and differentiation are increased in Crohn's disease but not ulcerative colitis

[... Figure 3A].
Discussion
This is the first study to investigate and quantify levels of circulating fibrocytes in patients with UC [and is one of few studies to quantify their levels in CD] while, uniquely, determining the relationship between levels of circulating fibrocytes and known risk factors of IBD. Elevated circulating fibrocytes and distinct circulating cytokine profiles were associated with CD and UC when compared with healthy control samples. Patients with CD had increased levels of cytokines involved in fibrocyte recruitment and differentiation, whereas patients with UC displayed increased levels of pro-inflammatory cytokines. It has been reported previously that circulating fibrocytes are elevated in patients with CD, and may have a role in its pathogenesis. 28,46,48 We found that circulating fibrocytes are also increased in UC patients, suggesting a potential pathobiological role in that setting also. Fibrocytes, through differentiation to fibroblasts, myofibroblasts, and adipocytes, are capable of contributing to inflammation and fibrosis. 28,[41][42][43][44][45][46] However, although CD patients exhibited increases in plasma-borne cytokines associated with fibrocyte recruitment to sites of inflammation [e.g., CCL5, CCL24, and CXCL12] when compared with healthy controls, this was not replicated in UC patients. Therefore, although circulating fibrocytes are elevated in both CD and UC, it is reasonable to suggest that their recruitment to sites of inflammation in UC, such as the intestine and mesentery, may be compromised. Our data show that levels of circulating fibrocytes are highest in patients with stricturing CD, in agreement with previous reports of fibrocyte recruitment to sites of mucosal inflammation early in the inflammatory phase of CD, where they differentiate to fibroblasts and myofibroblasts and mediate stricturing. 43,46 In our patient cohort, smoking or a history of smoking was associated with increased circulating fibrocytes in CD, but not UC, when compared with those who had never smoked. The differential effect of smoking in CD and UC remains to be fully elucidated. 55,56 Our data indicate that increased circulating fibrocytes may be implicated.

[Figure residue: fold change in cytokine levels in CD patients following faecal diversion compared with levels before faecal diversion, for IL-4, IL-6, IL-8, IL-10, IL-12 p40, IL-12 p70, IL-13, IL-16, IL-17A, IL-23, IP-10, I-TAC, MCP-1, PDGF-BB, SDF-1α, SDF-1β, TGF-β1, TGF-β2, TGF-β3, TGF-α, and TNF-β; legend: total patients with CD.]

This study also represents the first time that changes in systemic disease mechanisms have been investigated in patients with CD or UC who underwent faecal diversion. Faecal diversion is associated with clinical remission and mucosal healing in CD, but not UC, [4][5][6][7]12 although a mechanistic basis for this remains to be determined. Systemic responses following diversion of the faecal stream may provide an explanation for its clinical differential. Despite relatively low numbers of patients, we observed distinctly different systemic responses in patients with CD when compared with those with UC, upon removal [or reduction] of the stream of luminal contents. In CD patients, faecal diversion resulted in a decrease of circulating fibrocytes, a reduction of pro-inflammatory cytokines and TGF-β1, and an increase in anti-inflammatory IL-10. Conversely, levels of circulating fibrocytes did not change in UC patients following faecal diversion.
Notably, increases were observed in pro-inflammatory cytokines, cytokines associated with fibrocyte recruitment and TGF-β1, in addition to a decrease in IL-10 in patients with UC.
In health, bacteria are maintained at low levels in the MLNs by the host immune system, 57 whereas the gut microbiota is implicated in restriction of translocation of pathogenic bacteria to the MLNs. 58 These defences may be compromised in IBD, particularly CD. 13 The CD MLN microbiota reflects increases in Proteobacteria, whereas MLNs from UC patients have a microbiota similar to the healthy gut microbiome, albeit with increased Firmicutes. 13,16,59,60 We have demonstrated previously that the microbial profile of MLNs taken from the same patient was similar, irrespective of sampling location, 13 indicating that the MLN microbiome is influenced by disease rather than bacteria residing in the corresponding intestinal location. This suggests that MLN immune responses are disease-specific.
It is reasonable to suggest that the bacterial profile of MLNs from patients with CD or UC may influence their systemic responses. For instance, MLN production of TNFα in response to bacterial translocation has been demonstrated in a number of disease models. [61][62][63] IBD has been associated with increased levels of TNFα, 64 where it increases intestinal permeability 65,66 and, in doing so, allows increased bacterial translocation. However, diversion of the faecal stream may allow clearance of bacterial DNA from the MLNs. 13 Mechanistically, it may be that eradication of living cells or bacterial DNA from MLNs facilitates altered immune responses. If so, it can be postulated that removal of these factors from CD MLNs results in beneficial systemic responses for patients, with the inverse effect observed in UC. In UC MLNs, Firmicutes, a phylum that contains numerous bacteria found in the healthy gut, dominate. 13 Faecalibacterium are abundant in MLNs from UC patients. 13 These bacteria exert anti-inflammatory effects and are capable of inducing dendritic cell production of IL-10. 23,24 It is possible that the removal of viable Faecalibacterium and other healthy gut bacteria, and the associated eradication of bacterial DNA, from the intestine and MLNs could reduce levels of IL-10 and prove instrumental in eliciting a pro-inflammatory response in UC.
Furthermore, MCP-1 increased following faecal diversion in patients with CD and, to a lesser extent, UC. In addition to its role as a chemoattractant for macrophages to sites of inflammation, MCP-1 promotes an M2 macrophage phenotype. M2 macrophages are involved in tissue repair, and MCP-1-derived M2 macrophages produce increased levels of IL-10. 67 The increase of MCP-1 facilitated by faecal diversion may allow this shift to the M2 macrophage phenotype, leading to tissue repair and a reduction in pro-inflammatory processes. Similarly, butyrate, a short-chain fatty acid [SCFA], exerts anti-inflammatory effects in the intestine and is produced directly by members of Clostridia and F. prausnitzii [both Firmicutes] or can be converted from other SCFAs by bifidobacteria. These bacteria, through their role in butyrate production, and supplementation of butyrate itself, have been suggested as therapeutic options for IBD. [68][69][70][71][72] Although desirable, it was not possible to assess breaches in intestinal integrity by immunohistochemistry or fluorescent cell staining.
IL-10 reduces production of pro-inflammatory cytokines while increasing levels of anti-inflammatory cytokines, 25-27 underpinning its potential in medical therapy for IBD. However, previous clinical trials assessing the effect of IL-10 supplementation in IBD have yielded disappointing results, 73,74 and it has been postulated that administered doses of IL-10 were too low to elicit a response. 75 Our results suggest that IBD patients may require a threshold level of four times the level of IL-10 present in healthy controls to induce therapeutic effects. The increase of IL-10 observed in patients with CD correlated with a decrease in pro-inflammatory cytokines. Arguably, future studies investigating IL-10 therapy for IBD could usefully consider these threshold levels, and the balance of pro-inflammatory cytokines relative to levels of IL-10.
In conclusion, the distinct cytokine profiles associated with CD and UC indicate differing mechanisms for the diseases. The negative effect of smoking observed in CD, but not UC, may be partly explained by increased circulating fibrocytes. Systemic responses to faecal diversion also differ in CD and UC. This may provide us with an opportunity to understand the mechanisms mediating the disease, including the role of the MLN microbiota. Notably, we have identified an association between clinical improvement and increased IL-10, further supporting the potential of IL-10 in therapy for IBD. On the basis of our results, a threshold level of four times the amount of IL-10 present in healthy individuals may be necessary to elicit a beneficial effect in IBD. This putative therapeutic strategy could be employed pragmatically by measuring levels of circulating IL-10 before and at regular intervals during supplementation, in addition to monitoring the ratio of IL-10 to levels of pro-inflammatory cytokines.
Funding
This work was supported by the Graduate Entry Medical School [University of Limerick] Strategic Research Fund.
|
2019-06-27T16:21:59.971Z
|
2019-06-26T00:00:00.000
|
{
"year": 2019,
"sha1": "225b744a2d33e450da996bd4cc66c2cf4f96f009",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ecco-jcc/article-pdf/14/1/118/31613488/jjz117.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf47cae39923db56a94335d608ea9792844e1d39",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
17032092
|
pes2o/s2orc
|
v3-fos-license
|
Measurements of the Carrier Dynamics and Terahertz Response of Oriented Germanium Nanowires using Optical-Pump Terahertz-Probe Spectroscopy
We have measured the terahertz response of oriented germanium nanowires using ultrafast optical-pump terahertz-probe spectroscopy. We present results on the time, frequency, and polarization dependence of the terahertz response. Our results indicate intraband energy relaxation times of photoexcited carriers in the 1.5-2.0 ps range, carrier density dependent interband electron-hole recombination times in the 75-125 ps range, and carrier momentum scattering rates in the 60-90 fs range. Additionally, the terahertz response of the nanowires is strongly polarization dependent despite the subwavelength dimensions of the nanowires. The differential terahertz transmission is found to be large when the field is polarized parallel to the nanowires and very small when the field is polarized perpendicular to the nanowires. This polarization dependence of the terahertz response can be explained in terms of the induced depolarization fields and the resulting magnitudes of the surface plasmon frequencies.
I. INTRODUCTION
In recent years, semiconductor nanowires have gathered much interest. Nanowires have been applied to an array of applications that highlight their versatility as building blocks of integrated electronics (transistors) and photonics (waveguides, lasers, photodetectors, solar cells) 1,2,3,4,5,6 . Germanium nanowires are of particular interest due to the attractive material properties of germanium, including large electron and hole mobilities and large optical absorption in the visible/near-IR. These properties could make germanium nanowires a leading choice for next-generation electrical and photonic devices, such as transistors, CMOS-compatible photodetectors, and solar cells. Understanding the fast electrical and optical response as well as the ultrafast dynamics of carriers in nanowires is important for most of the applications mentioned here. In this paper, we present results on the measurement of the terahertz (THz) response as well as ultrafast carrier dynamics in photoexcited germanium nanowires using optical-pump THz-probe spectroscopy.
Ultrafast carrier dynamics in group III-V, II-VI, and group IV semiconductor nanowires have been studied with optical-pump optical-probe spectroscopy measurements 7,8 which are sensitive primarily to the carrier occupation of specific regions in the energy bands. Optical-pump THz-probe spectroscopy, in which the probe photon energy is ∼5 meV, is sensitive to not only the total carrier density but also to the distribution of these carriers in energy within the bands. The latter is true since the energy distribution of carriers affects the THz optical conductivity 9 . Optical-pump THz-probe spectroscopy can therefore be used to simultaneously study both intraband relaxation and interband recombination dynamics of photoexcited electrons and holes on ultrafast time scales. Our results show intraband carrier relaxation rates (attributed to intravalley and inter-valley phonon scattering) in the 1.5-2 ps range and carrier density-dependent recombination rates (attributed to nanowire surface defects) in the 75-125 ps range at room temperature in 80 nm diameter wires.
The fast electrical response of nanowires at THz frequencies can also be studied with optical-pump THz-probe spectroscopy 10 . With this technique, we measure carrier momentum scattering times in the 60-90 fs range. Additionally, we find the THz response of oriented nanowires to be strongly dependent on the polarization of the THz field. The differential THz transmission through photoexcited nanowires is most affected when the THz field is polarized parallel to the nanowires, while no appreciable response is detected when the THz field is polarized perpendicular to the nanowires. The shape anisotropy of the nanowires at subwavelength scales leads to a strong polarization dependent macroscopic THz response. Our results indicate the possibility of realizing optically or electrically controlled active THz devices based on semiconductor nanowires.
II. GERMANIUM NANOWIRE FABRICATION
Germanium nanowires used in this work were ∼80 nm in diameter and ∼10 µm in length (see Fig. 1). They were grown via CVD in a hot-walled quartz tube furnace using germane as the source gas and gold nanoparticles for the catalyst 11 . Alignment of nanowires was achieved on quartz crystal substrates using a contact printing method previously reported by Fan et al. 12 . Nanowires used in this experiment were unintentionally doped, and the expected initial carrier density is less than 10¹⁷ cm⁻³. Electron-hole pairs were optically generated in the nanowires using 90 fs pulses from a Ti:Sapphire laser with a center wavelength of 780 nm focused to a spot with standard deviation ∼150 µm. Pump pulse energies in the 1-12 nJ range were used. The photoexcited nanowires were then probed with a synchronized few-cycle THz pulse generated and detected with a THz time-domain spectrometer (see Fig. 2). The spectrometer, with a power SNR of 4×10⁶ and a measurable frequency range of 0.5-2.8 THz, was based on a semi-insulating GaAs photoconductive emitter 13 and a ZnTe electro-optic detector 14 . By varying the delay of the THz pulse with respect to the optical pump pulse, we measured the time-dependent differential change in the THz transmission. The optical pump and THz probe beams were mechanically chopped at 1400 Hz and 2000 Hz, respectively, and a lock-in amplifier was used to measure the signal at the sum of these frequencies. Measurements in this work were performed at 300 K, and the measurement error, due primarily to long-term drift of optomechanical components and the Ti:Sapphire laser, is estimated to be ∼5%.
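To make the double-chopping scheme concrete, the following is a minimal numpy sketch (not the authors' acquisition code) of why the pump-induced change in the probe appears at the sum of the two chopper frequencies. The 1400 Hz and 2000 Hz values are taken from the text; the sampling rate, modulation depth, and noise level are illustrative.

```python
import numpy as np

fs = 200_000.0                      # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1.0 / fs)     # 2 s of simulated detector data
f1, f2 = 1400.0, 2000.0             # chopper frequencies from the text (Hz)

# 0/1 square-wave choppers on the pump and THz probe beams
pump_on = 0.5 * (1 + np.sign(np.sin(2 * np.pi * f1 * t)))
probe_on = 0.5 * (1 + np.sign(np.sin(2 * np.pi * f2 * t)))

probe_field = 1.0                   # unpumped THz field (arbitrary units)
dt_over_t = -0.01                   # pump-induced differential transmission
# Only the pump-induced term is modulated at BOTH f1 and f2, so it carries
# a component at the sum frequency f1 + f2; neither chopper alone does.
detected = probe_on * probe_field * (1 + dt_over_t * pump_on)
detected += 0.05 * np.random.randn(t.size)   # detector noise

# Lock-in demodulation at f1 + f2 (averaging acts as the low-pass filter)
ref = np.exp(-2j * np.pi * (f1 + f2) * t)
demod = np.abs(np.mean(detected * ref))

print(f"signal at f1 + f2: {demod:.2e} (scales with |dt/t| = {abs(dt_over_t)})")
```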
Fig. 3 shows the measured differential amplitudes of THz pulses transmitted through photoexcited germanium nanowires for pump pulse energies of 10.2, 8.2, 6.1, 4.1, and 2.0 nJ for a fixed pump-probe delay. The THz field is polarized parallel to the nanowires. Since double-chopping is employed, the measured differential signal is affected only by the THz response of the photoexcited carriers within the nanowires. Fig. 3 displays no measurable carrier density dependence in the frequency dispersion of the THz response since the measured pulse shape remains unchanged for different pump pulse energies (only the pulse amplitude changes). As discussed in detail below, carrier density independent dispersion is a result of very small plasma frequencies. These results show that the dynamics of photoexcited carriers can be studied by measuring the differential amplitude of the peak of the transmitted THz pulse as a function of the pump-probe delay 15 . Fig. 4 shows the measured differential amplitude of the peak of the THz probe pulse as a function of the pump-probe delay for different optical pump energies. The THz transmission decreases in the first ∼5 ps following the optical excitation and then recovers on a 75-125 ps time scale. These two time scales in the measured transient can be explained by the intraband and interband carrier dynamics, respectively. The optical pulse creates electron-hole pairs near the Γ-point in the germanium reciprocal lattice (see Fig. 5). Electrons quickly scatter from the Γ-point to the X-point within 100 fs due to strong intervalley phonon scattering, after which they relax into the lowest-energy L-valley on a picosecond time scale. In bulk germanium, electron-hole recombination rates are highly dependent on the doping density. Recombination times as long as hundreds of microseconds for undoped germanium 18 and as short as hundreds of picoseconds for doped germanium 17 have been reported. Our measurements show that electrons and holes in 80 nm germanium nanowires recombine with carrier density-dependent recombination times between 75-125 ps. This shorter time scale, compared to that in bulk germanium, indicates that surface defect states may be responsible for faster recombination, in agreement with recent electrical and optical pump-probe measurements 7,19 . The THz frequency response of a finite length nanowire can be described by the Drude model modified to consider the depolarization fields 20 due to the induced charges on the surface of the nanowire 10,21,22 . The inclusion of the depolarization field leads to a surface-plasmon-like resonance in the frequency-dependent current response of the nanowire. The current I(ω) in a nanowire of cross-sectional area A can be written as

I(ω) = σ₀ A [E_ext(ω) + E_d(ω)] / (1 − iωτ),    (1)

where σ₀ is the DC conductivity of the nanowire material, E_ext(ω) is the applied field and E_d(ω) is the depolarization field. The above expression can also be expressed as

I(ω) = σ_eff(ω) A E_ext(ω),    (2)

σ_eff(ω) = σ₀ (−iω/τ) / (Ω_p² − ω² − iω/τ).    (3)

Here, τ is the momentum scattering time and Ω_p is the frequency of the surface plasmon resonance due to the depolarization field. Ω_p is related to the bulk plasma frequency ω_p by a constant factor g that depends on the polarization of the applied field with respect to the nanowire axis (see Fig. 6). For a field polarized perpendicular to the nanowire, g equals ε_s/(ε_s + ε₀), where ε_s and ε₀ are the permittivities of the nanowire material and free space, respectively. In the case of the field polarization parallel to the nanowire, the value of g is small and is on the order of (d/L) ε_s/ε₀, where d and L are the diameter and length of the nanowire, respectively. Since d ≪ L, when the field is polarized parallel to the nanowires, Ω_p is much smaller than ω_p.
We estimate Ω_p to be less than 300 GHz for even the largest photoexcited carrier densities in our experiments. The interaction between nanowires is expected to reduce the depolarization field inside the nanowires and, therefore, further reduce the value of Ω_p. At frequencies much larger than Ω_p, σ_eff(ω) reduces to the Drude result, and in the DC limit, σ_eff(ω) goes to zero as it should for a finite-length uncontacted nanowire. The differential THz transmission (normalized to the transmission in the absence of photoexcitation) can be written as

Δt(ω)/t = − η₀ f d σ_eff(ω) F(ω) / (1 + n),    (4)

where η₀ is the impedance of free space, f ≈ 0.08 is the fill factor of the nanowires, d = 80 nm is the diameter of a nanowire, and n = 1.96 is the THz refractive index of the quartz substrate 23 . F(ω) is the overlap factor that accounts for the frequency dependence of the measured THz response due to the mismatch between the optical and THz focus spots. Assuming Gaussian transverse intensity profiles for the optical and THz beams, the overlap factor is found to be

F(ω) = ω² / (ω² + ω₀²),    (5)

where ω₀ ≈ 2πc/a is approximately the frequency corresponding to the standard deviation, a = 150 µm, of the optical beam transverse intensity profile. In the case of the THz field polarized along the nanowires, since Ω_p ≪ ω for frequencies ω in the 0.5-3.0 THz range, σ_eff(ω) has the frequency dependence of the Drude model. There is therefore no carrier density dependence in the frequency dispersion of the differential THz transmission, in agreement with the measured results shown earlier in Fig. 3. Fig. 7 shows the measured frequency spectra (solid lines) of |Δt(ω)/t| for different pump pulse energies. Also shown are the theoretical fits (dashed lines) obtained from Equation 4. As seen in Fig. 7, the theory agrees well with both the frequency dependence and the carrier density dependence of the data. From our fits, we find the momentum scattering time to be τ = 70 ± 15 fs, which corresponds to an effective electron plus hole mobility of 4400 cm²/(V·s). This is slightly smaller than the bulk germanium electron plus hole mobility of 5700 cm²/(V·s) at 300 K found in the literature 24 .
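As a numerical illustration, the sketch below evaluates the plasmon-modified Drude conductivity and the resulting differential transmission across the spectrometer band, using the values quoted in the text (τ = 70 fs, f ≈ 0.08, d = 80 nm, n = 1.96, a = 150 µm, Ω_p bounded by 2π × 0.3 THz). The effective mass and carrier density are illustrative, and the equation forms follow the reconstruction given above rather than the original typeset equations.

```python
import numpy as np

e, m0, c = 1.602e-19, 9.109e-31, 2.998e8

N = 4.5e18 * 1e6               # photoexcited density (m^-3), from the text
m_eff = 0.12 * m0              # illustrative effective (conductivity) mass
tau = 70e-15                   # momentum scattering time from the fits
Omega_p = 2 * np.pi * 0.3e12   # upper bound on surface plasmon freq (text)

sigma0 = N * e**2 * tau / m_eff
omega = 2 * np.pi * np.linspace(0.5e12, 2.8e12, 6)   # spectrometer band

# Plasmon-modified Drude conductivity (reconstructed Eqs. 2-3)
sigma_eff = sigma0 * (-1j * omega / tau) / (Omega_p**2 - omega**2 - 1j * omega / tau)

# Differential transmission in the thin-film limit (reconstructed Eq. 4)
eta0, f_fill, d, n_sub, a = 376.7, 0.08, 80e-9, 1.96, 150e-6
w0 = 2 * np.pi * c / a                     # overlap roll-off frequency
F = omega**2 / (omega**2 + w0**2)          # assumed Gaussian-overlap form
dt_t = -eta0 * f_fill * d * sigma_eff * F / (1 + n_sub)

for w, s, r in zip(omega, sigma_eff, dt_t):
    print(f"{w/2/np.pi/1e12:4.2f} THz  Re sigma = {s.real:8.1f} S/m  |dt/t| = {abs(r):.3f}")
```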
Equation 4 shows that the differential THz transmission is approximately proportional to the carrier density through σ_eff(ω). If the carrier density changes on a time scale much slower than the momentum scattering time, then the differential amplitude of any one point on the transmitted THz pulse measured as a function of the pump-probe delay can be used to study ultrafast carrier dynamics. The time resolution in our experiments is limited by the width of the optical pump pulse to ∼150 fs. In order to describe the complete differential THz transmission transient shown in Fig. 4, we model the time dependence of the photoexcited electron density in the germanium nanowires with rate equations. We assume that the photoexcited electron density N′ in the higher energy valleys in the conduction band relaxes into the lowest energy L-valley with characteristic time τ_r. In the L-valley, the electrons interact with the THz radiation until they recombine 16 . Recombination in bulk germanium with low doping is generally attributed to the Shockley-Read-Hall (SRH) mechanism of defect assisted recombination. Auger recombination becomes dominant for doping densities above 10¹⁸ cm⁻³ 18 . Surface defect recombination in nanowires is also expected to have a carrier density dependence similar to that of the bulk SRH mechanism. We assume that the recombination rate is described by a second-order polynomial in the L-valley electron density N,

dN′/dt = −N′/τ_r,    dN/dt = N′/τ_r − A N − B N².    (6)

The initial photoexcited density N′(t = 0) is estimated to be 4.5×10¹⁸ cm⁻³ for a 12 nJ pump pulse and is assumed to scale linearly with the pump pulse energy. The DC conductivity σ₀ equals N e(μ_e + μ_h), where μ_e and μ_h are the electron and hole mobilities. The agreement between the rate equation model and the data is shown in Fig. 4.
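A minimal scipy sketch of this two-level rate-equation model is given below. The parameter values are those reported in the following paragraph (τ_r = 1.7 ps, A = 8.8 × 10⁹ s⁻¹, B = 2 × 10⁻⁹ cm³/s), and the equations are the reconstructed forms above rather than the original typeset ones.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau_r = 1.7e-12   # intraband relaxation time (s), fitted value in the text
A = 8.8e9         # linear (SRH-like) recombination coefficient (1/s)
B = 2e-9          # quadratic recombination coefficient (cm^3/s)
N0 = 4.5e18       # initial density (cm^-3) for a 12 nJ pump pulse

def rates(t, y):
    Np, N = y     # Np: higher-valley density, N: L-valley density
    dNp = -Np / tau_r
    dN = Np / tau_r - A * N - B * N**2
    return [dNp, dN]

t_eval = np.linspace(0, 300e-12, 601)
sol = solve_ivp(rates, (0, 300e-12), [N0, 0.0], t_eval=t_eval, method="LSODA")

N = sol.y[1]
# Effective 1/e recovery time of the L-valley population after its peak
i_peak = N.argmax()
i_1e = i_peak + np.argmin(np.abs(N[i_peak:] - N[i_peak] / np.e))
print(f"peak at {t_eval[i_peak]*1e12:.1f} ps, 1/e recovery ~{t_eval[i_1e]*1e12:.0f} ps")
```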
The extracted values of the various parameters for best fit are as follows: τ_r = 1.7 ps, A = 8.8 × 10⁹ s⁻¹, and B = 2 × 10⁻⁹ cm³/s. The model agrees with the data for all pulse energies. The necessity of the B parameter indicates carrier density-dependent recombination rates in germanium nanowires. This is consistent with density-dependent SRH surface and Auger recombination 25 . Photoexcited carriers in oriented nanowires are expected to exhibit a polarization dependent THz response due to the geometries depicted in Fig. 6. In order to study the polarization dependence of the THz transmission, the incident THz electric field was polarized at 45° with respect to the nanowires (see Fig. 8). As a result, the field had components both parallel and perpendicular to the nanowires. After transmission through the nanowire sample, the field polarization was selected for measurements of Δt/t by rotating the polarizer through an angle θ with respect to the nanowires. Fig. 9 shows the measured values of |Δt/t| for different angles θ. The most striking feature of the data is the absence of any measurable THz response when the field is polarized perpendicular to the nanowires. In this case, the plasma frequency Ω_p equals ω_p ε_s/(ε_s + ε₀) and is in the tens of THz range for the photoexcited carrier densities in our experiments. Equations 2 and 3 show that when Ω_p ≫ ω, and the product ωτ is not too small, the induced current is significantly reduced compared to the case when Ω_p ≪ ω. Therefore, for perpendicular THz field components the depolarization field is strong enough to suppress the induced current, and so the resulting THz response is much weaker compared to that for parallel components. In this way, the shape anisotropy of the nanowires on subwavelength scales determines the polarization dependence of the THz response. Assuming that the THz response of oriented nanowires is negligibly small when the field is polarized perpendicular to the nanowires, the measured THz transmission is expected to be proportional to the cosine of the angle between the field polarization and the nanowire axis. In our experiments, since the field polarization is selected post-transmission, the measured values of |Δt/t| are expected to be proportional to cos(θ)/cos(π/4 − θ). Fig. 9(b) shows that the (peak) values of |Δt/t| exhibit exactly this angular dependence.
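The quoted cos(θ)/cos(π/4 − θ) dependence follows from two simple projections; the few-line check below makes that explicit under the assumption, stated in the text, that the perpendicular response is negligible.

```python
import numpy as np

theta = np.deg2rad(np.array([0, 15, 30, 45, 60]))
# Incident THz field at 45 deg to the wires; only the parallel component
# drives a current, so the differential field lies along the wire axis.
d_signal = np.cos(theta)              # projection of the wire-axis field
t_signal = np.cos(np.pi / 4 - theta)  # projection of the full transmitted field
ratio = d_signal / t_signal           # expected |dt/t|(theta), up to a constant

for th, r in zip(np.rad2deg(theta), ratio):
    print(f"theta = {th:4.0f} deg  ->  |dt/t| proportional to {r:.3f}")
```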
IV. CONCLUSION
In conclusion, we have measured the time, frequency, and polarization dependence of the THz response of germanium nanowires using optical-pump terahertz-probe spectroscopy. Carrier intraband relaxation times, interband electron-hole recombination times, and carrier momentum scattering times were also measured using the same technique. Our results demonstrate the usefulness of ultrafast THz spectroscopy for characterizing nanostructured materials.
|
2009-05-08T19:10:25.000Z
|
2009-05-08T00:00:00.000
|
{
"year": 2009,
"sha1": "9b2c3fbf1e89b3f9e019f949b64969f8e879d0f7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0905.1315",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "af336701830ee18adecd9dc9aba02857bed9da8f",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science",
"Physics"
]
}
|
16060973
|
pes2o/s2orc
|
v3-fos-license
|
Terrestrial mountain islands and Pleistocene climate fluctuations as motors for speciation: A case study on the genus Pseudovelia (Hemiptera: Veliidae)
This study investigated the influences of geographic isolation and climate fluctuation on the genetic diversity, speciation, and biogeography of the genus Pseudovelia (Hemiptera: Veliidae) in subtropical China and the tropical Indo-China Peninsula. Species nucleotide and haplotype diversities decreased with reduction in species distribution limits. The gene tree was congruent with the taxonomy, recovering species as monophyletic, except for four species: P. contorta, P. extensa, P. tibialis tibialis, and P. vittiformis. The conflicts between the gene and species trees could be due to long-term isolation and incomplete lineage sorting. Diversification analysis showed that the diversification rate changed (from 0.08 sp/My to 0.5 sp/My) at 2.1 Ma, in the early Pleistocene period. Ancestral area reconstruction suggested that the subtropical species possibly evolved from the tropical region (i.e., the Indo-China Peninsula). Results implied that narrow endemics harbored relatively low genetic diversity because of small effective population sizes and genetic drift. Radiation of subtropical Pseudovelia species was rapidly promoted by Pleistocene climate fluctuations and geographic isolation. The rapid uplift of the Hengduan Mountains with the overall uplift of the Qinghai–Tibet Plateau induced the initial differentiation of Pseudovelia species. These results highlight the importance of geographical isolation and climate change in promoting speciation in mountain habitat islands.
Compared with that of widely distributed species, speciation of mountain species with restricted distributions and small effective population sizes has been less investigated from a phylogeographic point of view.
Pleistocene glaciations or Ice Ages are recent geo-historical events with major global impact on biodiversity; these events represent the largest expansion of cold climates since the Permian period 250 million years earlier 10 . Glacial influences on the environment vary depending on geographical region. Although most lowland areas of subtropical China and adjoining tropical regions have never been covered by ice sheets, mountainous regions with relatively high altitudes probably experienced strong, cooler, and drier glacial climates as well as major biotic shifts during the Pleistocene 11 . Accumulating evidence suggests that Pleistocene climate oscillations seriously affected the geographic distribution of mountain species and patterns of intraspecific genetic variation 3,12 . However, a consensus has not been established regarding the importance of Pleistocene glaciations for inducing mountain speciation in Asia 13 . This could be due to the intrinsic properties of existing model systems. For example, mountain plants (e.g., trees), with relatively long generation times, cannot achieve species-level divergence during the short duration of climate cycles 14 . For terrestrial mountain animals with relatively short generation times, strong dispersal ability allows them to respond solely by shifting their ranges toward ecologically suitable areas 15 . Another reason is that the complex climate changes in Asia might have had more varied influences on speciation than the well-known history of glaciations in North America and western Europe 11 .
Mountain stream invertebrates exhibit short generation times and restricted dispersal abilities and are thus suitable models for determining the effects of geographic isolation and Pleistocene climate fluctuations on speciation. This study focuses on the genus Pseudovelia (Hemiptera: Veliidae), a relatively species-rich genus of mountain stream invertebrates that contains 22 recognized species in subtropical China and the tropical Indo-China Peninsula [16][17][18][19][20] . Pseudovelia species live in quiet and secluded habitats behind boulders of mountain streams at relatively high altitudes 21 . Of the 22 recognized species and two undescribed species, only P. tibialis tibialis is widely distributed; most species have localized distributions (i.e., P. buccula, P. contorta, P. feuerborni, P. extensa, P. intonsa, P. pusilla, P. sexualis, P. sichuanensis, P. spiculata, P. sp2, P. ullrichi, P. vittiformis) or are restricted to single mountain massifs (i.e., narrow endemics including P. anthracina, P. fulva, P. globosa, P. hsiaoi, P. longiseta, P. longitarsa, P. lundbladi, P. piliformis, P. recava, P. sp1, P. taiwanensis) 16,19,20 . Using this genus as a study model allows diagnostic characteristics among closely related species, as well as species boundaries, to be clearly defined; hence, this model eliminates ambiguous taxonomic problems and enables the precise a priori designation of species required for analysis of conflicts between the gene and species trees (i.e., incomplete lineage sorting and hybridization) 10 .
In this study, we used a combination of phylogeographic, interspecific phylogenetic, and ecological niche modeling (ENM) methods, together with multilocus genetic markers, to elucidate the effects of geographic isolation and Pleistocene climatic oscillations on the genetic diversity, speciation, and biogeography of Pseudovelia in subtropical China and the tropical Indo-China Peninsula. In particular, we attempted to test the following hypotheses: (1) narrow endemics harbor relatively low genetic diversities, which make them potentially sensitive to extinction events; (2) the radiation of subtropical Pseudovelia species was a rapid and recent diversification event, mostly promoted by Pleistocene climate fluctuations and geographic isolation; (3) the phylogeny of recently radiated species may reveal conflicts between the gene and species trees on account of evolutionary processes (i.e., incomplete lineage sorting); and (4) subtropical Pseudovelia species possibly evolved from the tropical region (i.e., the Indo-China Peninsula) as influenced by the uplift of the Hengduan Mountains.
Results
Species genetic diversities in different scales of distribution limits. Protein-coding regions (1378 bp) were obtained from 285 individuals of Pseudovelia species, including sections of the COI (699 bp) and COII (679 bp) genes. A total of 128 unique haplotypes were derived from concatenated COI + COII sequences among all individuals. The 332 polymorphic sites included 18 singleton variables and 314 parsimony informative sites. The nucleotide and haplotype diversities of Pseudovelia species ranged from 0.00000 to 0.01892 as well as from 0.000 to 1.000, respectively (Table 1). Both species nucleotide and haplotype diversities decreased with reduction in species distribution limits (Figs 1 and S1), with the lowest value (Hd = 0, π = 0) found in narrow endemics (Table 1).
Phylogenetic reconstruction based on mitochondrial and nuclear data. For the mitochondrial data, both Bayesian inference (BI) and maximum likelihood (ML) results revealed compatible tree topologies (Fig. 2), which are largely congruent with previous taxonomic studies based on monophyly 19,20 , except for four species, namely, P. contorta, P. extensa, P. tibialis tibialis, and P. vittiformis; of these species, P. extensa was strongly polyphyletic in the gene trees (Fig. 2). The phylogenetic relationships within the "Node S Species Composition" (NSSC) species also showed moderately different topologies between the two analytical methods (BI and ML) (Fig. 2A,B). For the nuclear data, the consistently resolved topology (BI/ML) indicated moderate conflict with the mitochondrial gene tree (Fig. S2). All species were strongly monophyletic in the nuclear gene trees (Fig. S2). Of these, two pairs of species (i.e., P. contorta/P. extensa and P. anthracina/P. sp1) exhibited identical nuclear sequences and shared the same haplotype, respectively (Fig. S2).
Species tree analysis and divergence time estimation. The *BEAST multilocus species tree is largely congruent with the gene tree (Fig. 3A,B). The main difference between the two trees is that, in the gene tree, P. intonsa is the sister to P. pusilla; conversely, in the *BEAST tree, P. intonsa is the sister to the ancestor of P. anthracina, P. sp1, and P. tibialis tibialis. Almost all nodes of the NSSC species were poorly supported on account of the polyphyly of the species P. contorta, P. extensa, and P. vittiformis (Fig. 3B). This issue could neither be solved by repeating runs with higher sample frequencies nor by applying simple substitution models 22 , because of evolutionary processes (e.g., incomplete lineage sorting). Divergence time estimation placed the split between the tropical and subtropical species lineages in the late Miocene (about 10.9 Ma) (Fig. 3B).

Diversification analyses. The Monte Carlo constant rates (MCCR) test (critical value = −1.94, P = 1) and the gamma statistic (value = 2.88, P = 0.9) indicated an increase in the diversification rate over time. Of the six models tested, the variable-rate yule2rate model with one shift in diversification was selected as the best-fit model for Pseudovelia, suggesting that the initial diversification rate of 0.08 species per million years (sp/My) shifted to 0.5 sp/My at 2.1 Ma (Table S1). The Lineage Through Time (LTT) plots also indicated that Pseudovelia did not exhibit a constant speciation rate: an acceleration in speciation of the NSSC species lineages occurred in the Pleistocene (Fig. 3C).
Exploring potential reasons influencing species monophyly. Our examination revealed no distinct morphological variation among geographic populations, especially in the important diagnostic character (i.e., the structure of abdominal segment VIII) (Fig. S3); this finding is consistent with previous taxonomic studies and thus confirms the species status. However, P. tibialis tibialis and P. vittiformis showed slight intraspecific morphological variation (i.e., in color and body size) between the mainland specimens and their relatives from nearby islands (i.e., Taiwan and Hainan) (Fig. S3). The network results also showed that both mtDNA and nrDNA haplotypes in P. tibialis tibialis and P. vittiformis exhibited a distinct genetic barrier between mainland and island populations, separated by the Taiwan Strait and Beibu Gulf, respectively (Fig. 4C,D). By contrast, the scattered mtDNA haplotypes and the invariant, identical nrDNA sequences in P. contorta and P. extensa suggested that incomplete lineage sorting or hybridization, not geographical isolation, potentially underlies the non-monophyly of these two species (Fig. 4A,B). The results of the Joly, McLenachan and Lockhart (JML) run are given in Table S2. All species pairs exhibited genetic distances that are not significantly lower than expected (p > 0.1). Thus, we cannot reject the hypothesis of incomplete lineage sorting in any case.
Paleoclimate niche modeling reconstruction. A high Area Under Curve (AUC) value was obtained for the current potential distribution (AUC = 0.890), indicating good predictive model performance. The current niche model predicted the ancestral distribution of the NSSC species, showing a potentially continuous range located in the Wuyi Mountains and Taiwan island, with scattered suitable areas along the coast of southern China (Fig. S4). When the current niche was projected onto historical climate conditions for the LIG period, the suitable climate space remained largely in situ, but a relatively continuous range expanded moderately in the coastal areas of southern China (Fig. S4). During the ice age (LGM period), the species' potential range contracted greatly into two segregated refugia (i.e., the Guangxi and Taiwan regions) under the Community Climate System Model (CCSM) (Fig. S4).
Ancestral area reconstruction. The two runs of the Bayesian Binary Method (BBM) analysis for the major nodes of the tree produced identical results (Fig. 5), suggesting that Pseudovelia species in subtropical China are younger and probably originated somewhere in the tropical Indo-China Peninsula. The Principal Components Analysis (PCA) of pooled environmental variables reduced the variables to a small number of significant components, defining the realized niche space occupied by subtropical and tropical species (Table 2, Fig. 6). The first two components of the PCA were significant and explained 81.09% of the overall variance. The first component (PC-1) was closely associated with precipitation, while the second (PC-2) was associated with temperature (Table 2). The climate space occupied by subtropical species departed from that occupied by tropical species with respect to components 1 and 2 (Fig. 6). These relatively disjunct positions in climate space suggested that niche space might have diverged between the subtropical and tropical species (Fig. 6).
Discussion
Low level of genetic diversities in narrow endemics. Species genetic diversity is an important index for conservation biology 23 . Genetic variation may determine whether populations of a species can adapt to abiotic and biotic changes and survive. Two parameters important to species' genetic diversity are effective population size and genetic drift. Effective population size (Ne) is usually linked to species distribution patterns and limits, and is a crucial metric because it integrates the genetic effects of life history variation on microevolutionary processes 24 . As Ne decreases, genetic drift erodes genetic variation, elevates the probability of fixation of deleterious alleles, and reduces the effectiveness of selection, which reduces overall fitness and limits adaptive responses. These genetic changes can drive a threatened species closer to extirpation 24 . A low level of genetic diversity is commonly expected for narrow endemic species 25 . Our analysis revealed that the widely distributed species, P. tibialis tibialis, exhibited the highest genetic diversities (Hd = 0.972; π = 0.01892) (Table 1). As the species distribution limits decreased from localized scale to single mountain massifs, both species nucleotide and haplotype diversities decreased (Fig. S1). In subtropical China, extremely low diversity (Hd = 0, π = 0) was detected in three of the narrow endemics, i.e., P. globosa, P. hsiaoi, and P. recava, which inhabit only the Nanling, Jiugong, and Maolan mountains, respectively (Table 1). This genetic pattern might result from the combined effects of genetic drift and/or isolation 25 . These two factors, together with the relatively small effective population size, potentially eroded genetic variation over time. In fact, geographical isolation could inhibit gene flow and accelerate the effects of random genetic drift, which undoubtedly makes narrow endemics more sensitive to extinction events (e.g., human disturbance and natural disasters). Conservation measures should therefore focus on establishing nature reserves and creating adequate habitat to ensure the viability and long-term sustainability of narrow endemic populations.
Pleistocene climate oscillations and terrestrial mountain islands inducing a rapid radiation. The low divergences among haplotypes of the NSSC species suggest that these species originated in a recent and rapid radiation (Fig. 2). Our estimated divergence times in the species tree also showed that the NSSC species underwent a recent radiation, which occurred during the Pleistocene (i.e., 0.2-1.8 Ma) (Fig. 3B). The test for heterogeneity of the temporal diversification rate identified the yule2rate model as the best fit to the LTT plot (Fig. 3C). The rate change (0.08 sp/My shifted to 0.5 sp/My) was at 2.1 Ma, during the early Pleistocene. Clearly, a high speciation rate has characterized the diversification of NSSC species lineages through time. In general, the average speciation rate in insects has been proposed to be 0.16 species per My 26 . Such a fast diversification rate makes the NSSC species an unexpected case for an insect group in a continental fauna, because rates of this magnitude have mostly been found in examples of island species radiations, such as Hawaiian crickets 27 and Japanese island ground beetles 28 . This comparison indicates that relatively isolated mountains, which resemble natural 'terrestrial islands', can attain speciation rates comparable to those of island faunal radiations. Many speciation studies have suggested three major promoters of rapid radiation: the appearance of a key innovation that allows the exploitation of previously unexploited resources or habitats 29 , the availability of new resources 30 , and the availability of new habitats, i.e., a rare colonization event or drastic environmental changes 10,14 . In the case of the NSSC species, we found no evident key innovation distinguishing this group from other Pseudovelia species. We have no data concerning internal morphology and physiology. Additionally, our observational data showed that the NSSC species have ecological requirements similar to those of other Pseudovelia species, which does not indicate the presence of any key innovations. There was also no indication of any new resource that could be specifically exploited by the NSSC species. Based on the ENM reconstruction of the ancestral NSSC distribution (Fig. S4), we explored the possibility that drastic environmental changes during the Pleistocene climate oscillations mediated the radiation of the NSSC species. According to our results of species divergence time estimation and LTT plots, during the Ice Ages, particularly since the late Pleistocene (i.e., 0.9 Ma), a very rapid radiation probably occurred in the NSSC species group from 0.7 to 0.2 Ma (Fig. 3B). During this time, subtropical China experienced three glacial periods (i.e., the Kunlun, Zhonglianggan and Guxiang glacial periods) and two interglacial periods 31 . The alternation of glacial and interglacial periods induced large climate oscillations in temperature and rainfall, especially in mountain regions 31 . Our ENM model simulated the climate oscillations in the late Pleistocene (LIG and LGM) to predict the variation of ancestral suitable habitat of the NSSC species. The results showed that the ancestor contracted mostly into southern refugia under cold conditions and expanded under warm climate conditions (Fig. S4). We therefore propose that during the Pleistocene climate oscillations, the ancestor of the NSSC species might have been forced into ongoing cycles of retreat into, and re-expansion from, refugia.
Under the recurrent, extremely unsuitable climate conditions, the isolation of small populations by mountains over many generations might have promoted speciation and the fixation of morphological traits.
Underlying factors of discordance between gene and species tree.
The conflict between gene and species trees mainly accounts for the non-monophyly among closely related species. Our gene tree revealed that four species, P. contorta, P. extensa, P. tibialis tibialis, and P. vittiformis, were not monophyletic in the mitochondrial gene tree (Fig. 2). Taxonomic and morphological studies showed that the status of these species, based on the important diagnostics of genital morphology, was well supported (Fig. S3). The phylogeographic analysis indicated that the non-monophyly of these four species was potentially caused by two kinds of evolutionary processes, i.e., geographic isolation and incomplete lineage sorting. For the species P. tibialis tibialis and P. vittiformis, the mitochondrial and nuclear data both revealed distinctly isolated genetic groups (i.e., island and mainland populations) divided by clear geographical barriers (i.e., the Taiwan Strait and Beibu Gulf), which strongly supports a long-term isolation event between the island and mainland populations (Fig. 4C,D). The populations inhabiting the Taiwan and Hainan islands are situated 230 and 240 km, respectively, from their mainland relatives. Islands often shape local environments, producing unique microclimates and newly derived habitats. As new habitats formed in the Pleistocene, the islands were colonized by P. tibialis tibialis and P. vittiformis from the mainland. The founders adapted to the new environment, and the long geographical isolation inhibiting gene flow from their mainland ancestral populations resulted in the high degree of genetic differentiation observed today. Under these circumstances, the island populations probably could be easily distinguished from their mainland relatives by some intraspecific morphological variation. This is indeed found in these two species: the island specimens were larger (P. vittiformis) and darker (P. tibialis tibialis) than their mainland relatives (Fig. S3). Therefore, we propose that long-term isolation among populations contributed to the high genetic divergence that results in non-monophyly in the gene tree. Furthermore, we can imagine that such geographical isolation could lead to the formation of new taxa if the isolation persists long enough.
It was in fact difficult to differentiate between hybridization and incomplete lineage sorting. Hybridization, the transmission of alleles from one species into the gene pool of a second species 32 , usually happens in hybrid zones and is increasingly believed to have played a role in the diversification of animals 33 . Hybridization has been proposed as an explanation for observed mtDNA patterns in closely related species of various animal organisms 34 , and has also been shown to contribute to speciation 35 . However, in the case of P. contorta and P. extensa, the visible difference in their genital morphology, and the absence of specimens identifiable as hybrids or of hybrid zones, did not support hybridization 36 . Based on the JML analysis, incomplete lineage sorting was the more likely explanation for the observed non-monophyly of mtDNA patterns in P. contorta and P. extensa. Incomplete lineage sorting, or the retention of ancestral polymorphism, is relatively common among recently and rapidly radiating species, as these species have not yet had time to become fixed for alternative haplotypes or alleles 37 ; descendant lineages are therefore expected to share polymorphic alleles with the ancestral population for some time. Once lineage sorting is complete in closely related species, the phylogenetic relationships of incipient species typically progress from initial polyphyly through paraphyly to monophyly. Thus, relatively young species may tend to appear polyphyletic or paraphyletic owing to incomplete lineage sorting 38,39 . This evolutionary process particularly affects mitochondrial genes in closely related species where hardly any diversification in nuclear genes is found 40 , which fits well with the identical nuclear sequences (ITS1 + 5.8S) shared between P. contorta and P. extensa. In addition, the morphological studies showed no intraspecific morphological variation among the different geographic populations of these two species. Therefore, incomplete lineage sorting is a reasonable explanation for the non-monophyly of P. contorta and P. extensa.
Mountain uplift inducing initial differentiation of Pseudovelia species.
Ancestral state reconstruction strongly supported a tropical region (i.e., the Indo-China Peninsula) as the origin of the subtropical species, suggesting that the presence of the subtropical Pseudovelia species in area A is the result of subsequent vicariance/dispersal from the ancestral distribution (Fig. 5). The tropical and subtropical species lineages were clearly differentiated in the mitochondrial gene tree (Fig. 2). The divergence time estimation revealed that the tropical species ancestor gave rise to the subtropical species lineage through vicariance/dispersal events during the late Miocene (about 10.9 Ma) (Fig. 3), consistent with the onset (i.e., 7-10 Ma) of the strong uplift of the Hengduan Mountains in the southern region of the Qinghai-Tibet Plateau 41 . The rapid rising of the Hengduan Mountains with the overall uplift of the Qinghai-Tibet Plateau induced drastic climatic change in the Hengduan Mountains region, which led to aridification and the disappearance of the summer monsoon from the Indian Ocean into mainland Asia 41 .
Currently, the Hengduan Mountains are characterized by parallel mountain ridges reaching elevations of over 5000 m a.s.l. and elevational differences from valleys to mountain tops that often exceed 2000 m 42 . The BI and ML mitochondrial phylogenetic trees both revealed substructure of Pseudovelia species between subtropical China and the tropical Indo-China Peninsula, showing that the Hengduan Mountains have acted as an important geographical barrier preventing gene flow on the two sides of the mountainous region, leading to long-term isolation and in situ diversification. Furthermore, this extreme topographic complexity could lead to ecological stratification and environmental heterogeneity 43 . Thus, the climatic differences between lineage-specific niches might have placed ecological constraints on these lineages because each would be adapted to the local conditions. The genetic differentiation resulting from geographical isolation could be further reinforced and accumulated through ecological divergence over time. Our PCA analysis also indicated that temperature and precipitation, particularly annual mean temperature (BIO1) and annual precipitation (BIO12), were the most important measured variables controlling the geographic distribution of the tropical and subtropical groups (Table 2). Therefore, specific climatic ecological factors on the two sides of the Hengduan Mountains might also have contributed to the restriction of gene flow between tropical and subtropical species, and promoted initial differentiation in these two regions.
Conclusions
Our study investigated the influences of terrestrial mountain islands and Pleistocene climate fluctuations on the genetic diversity, speciation, and biogeography of the genus Pseudovelia (Hemiptera: Veliidae) in subtropical China and the tropical Indo-China Peninsula. Our results provided evidence that the subtropical Pseudovelia species likely evolved from the tropical region (i.e., the Indo-China Peninsula) and then experienced an extremely rapid and recent radiation, which was probably promoted by Pleistocene climate fluctuations and geographic isolation. We also proposed a scenario wherein the Pleistocene climate oscillations led to the repeated restriction and expansion of the ranges of the ancestral species of the NSSC species, which might have promoted the fixation of ecological adaptations and morphological traits in small and isolated mountain refugial populations. In this scenario, narrow endemics usually harbor relatively low genetic diversities on account of the combined effects of small effective population sizes and genetic drift. This example highlights the importance of geographical isolation and climate change in promoting speciation in mountain habitat islands.
Methods
Sampling and laboratory procedures. We obtained 285 individual samples from 16 of the 22 recognized species and two undescribed species in subtropical China and the tropical Indo-China Peninsula, of which 15 species were sampled across their entire distributions (Table S3, Fig. 1). All samples were preserved in 95% ethanol and stored in a freezer at −20 °C in the College of Life Sciences at Nankai University (Tianjin, China). Genomic DNA was extracted from the entire body, excluding the abdomen and genitalia, using a General AllGen Kit (Qiagen, Germany). All individuals were sequenced for the mitochondrial markers COI and COII and the nuclear marker ITS1 + 5.8S. Polymerase chain reactions (PCR) were performed using specific primers following Ye et al. 12 . The PCR procedure for COI, COII, and the nuclear marker (ITS1 + 5.8S) included an initial denaturation at 94 °C for 2 min, followed by 31-33 cycles of 30 s at 92 °C, 30 s at 48-52 °C, and 1 min at 72 °C, ending with a final extension at 72 °C for 8 min. All fragments were sequenced in both directions using the HiSeq 2000 sequencing system.
Species genetic diversities in different scales of distribution limits.
Genetic diversities were estimated from the mitochondrial data of each species using the number of haplotypes (Nhap), haplotype diversity (Hd), and nucleotide diversity (π), all calculated in DNASP 4.0 44 . We then used these genetic diversities to compare the narrow endemics with the localized and widely distributed species.
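For readers without DNASP at hand, the two statistics have simple closed forms; the sketch below implements the standard Nei (1987) estimators of Hd and π in plain Python, with an invented toy alignment standing in for the real COI + COII haplotype data.

```python
from collections import Counter
from itertools import combinations

def haplotype_diversity(seqs):
    """Nei's haplotype (gene) diversity: Hd = n/(n-1) * (1 - sum p_i^2)."""
    n = len(seqs)
    counts = Counter(seqs)
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_p2)

def nucleotide_diversity(seqs):
    """pi: average pairwise proportion of differing sites."""
    n, L = len(seqs), len(seqs[0])
    diffs = sum(sum(a != b for a, b in zip(s1, s2))
                for s1, s2 in combinations(seqs, 2))
    return diffs / (n * (n - 1) / 2) / L

# Toy alignment (invented), standing in for aligned mtDNA sequences
seqs = ["ACGTACGT", "ACGTACGA", "ACGTACGA", "ACCTACGT", "ACGTACGT"]
print(f"Hd = {haplotype_diversity(seqs):.3f}")
print(f"pi = {nucleotide_diversity(seqs):.4f}")
```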
Phylogenetic reconstruction based on mitochondrial and nuclear data. The gene tree of haplotypes was reconstructed using Bayesian inference (BI) and maximum likelihood (ML) methods, as implemented in MrBayes 3.2 45 and raxmlGUI 1.2 46,47 , respectively. For the mitochondrial data, the final alignment was 1378 bp in length, and 129 haplotypes were acquired. Models of nucleotide substitution were tested using Modeltest 3.7 48 , and a corrected Akaike information criterion (AICc) was employed to determine the best-fit model. The BI analysis was run under the HKY + I + G model, with two simultaneous runs for 5000000 generations and the first 25% discarded as burn-in. For the nuclear data, the final alignment was 1344 bp in length, and 23 haplotypes were acquired. BI analysis was employed to reconstruct the phylogenetic trees under the GTR + G model. For the ML analysis, we used the GTR + I + G model for both mitochondrial and nuclear data and conducted 1000 bootstrap replicates with a thorough ML search.
Multilocus species tree analyses and divergence time estimation.
A Bayesian coalescent-based multilocus species tree approach (*BEAST), implemented in BEAST 1.8.2 49,50 , was used to infer phylogenetic relationships among species. *BEAST requires a priori designation of species, which we performed based on the taxonomic and morphological studies 19,20 . We incorporated 275 COI + COII sequences of mtDNA and 232 ITS1 + 5.8S sequences of nrDNA to estimate a multilocus species tree. This approach has been shown to outperform traditional concatenation approaches in that incomplete lineage sorting is taken into account, especially in cases where branch lengths are short 51,52 . We excluded P. globosa from the analyses because this species lacked nuclear locus data. We conducted two runs of 10000000 generations (sample freq = 1000 and 25% burn-in) and checked for convergence and normal distribution in Tracer v1.6 53 . Divergence times among the Pseudovelia species were also derived from the *BEAST analysis. Divergence time was estimated with an uncorrelated lognormal clock model and a Yule speciation tree prior, with mutation rates of 0.4-0.8%/Ma for the COI gene 54 and 0.4-1.0%/Ma for the ITS1 gene 55 , with the chains running for 10 million generations and 25% of the initial samples discarded as burn-in.
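As a rough sanity check on such calibrations (and not a substitute for the *BEAST machinery), a pairwise divergence d converts to a split time t = d/(2μ) under a strict clock with per-lineage rate μ; the sketch below applies the COI rate range quoted above to an illustrative divergence value. Whether a published rate is per lineage or pairwise changes the answer by a factor of two; per-lineage is assumed here.

```python
# Strict-clock point estimate: t = d / (2 * mu), with mu per lineage.
d = 0.10                       # 10% pairwise COI divergence (illustrative)
for mu_percent in (0.4, 0.8):  # %/My, the COI range quoted in the text
    mu = mu_percent / 100.0    # substitutions per site per My
    print(f"rate {mu_percent}%/My -> t = {d / (2 * mu):.1f} My")
```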
Diversification analyses. We used the "chronopl" function of APE (http://ape-package.ird.fr/) in R to create an ultrametric species tree, and constructed semi-logarithmic LTT plots to visualize the temporal variation in diversification rates 56 . Birth-death likelihood (BDL) models were used to test the significance of heterogeneity or consistency of the temporal diversification rate 57 . Akaike information criterion (AIC) scores were computed for the constant-rate and variable-rate models, including the pure birth, birth-death, yule2rate, and yule3rate models, the exponential and logistic variants of the density-dependent speciation rate (DDL) model, and the variable extinction rate with constant speciation rate (EXVAR) model. Model selection was based on the difference in AIC scores between the best-fitting rate-constant and rate-variable models. The calculations were performed using laser 2.3 57 . In addition, the MCCR test was performed by comparing the empirical gamma statistic with the distribution of gamma statistics of 1,000 simulated incomplete phylogenies to test whether diversification has decelerated through time.
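The gamma statistic used in the MCCR test has a simple closed form (Pybus & Harvey 2000) given the internode intervals of an ultrametric tree; the sketch below computes it from invented intervals, with positive values indicating nodes concentrated toward the present (a recent acceleration), as found for Pseudovelia.

```python
import math

def gamma_statistic(g):
    """Pybus & Harvey (2000) gamma from a dict g mapping k -> the internode
    interval during which k lineages exist, for k = 2..n."""
    ks = sorted(g)                    # 2, 3, ..., n
    n = ks[-1]
    T = sum(k * g[k] for k in ks)     # total weighted tree depth
    acc, partial = 0.0, 0.0
    for i in range(2, n):             # partial sums T_i for i = 2..n-1
        acc += i * g[i]
        partial += acc
    mean_partial = partial / (n - 2)
    return (mean_partial - T / 2) / (T * math.sqrt(1.0 / (12 * (n - 2))))

# Invented internode intervals (My) for 6 lineages; under a constant-rate
# pure-birth process gamma ~ N(0, 1).
g = {2: 3.0, 3: 2.0, 4: 1.5, 5: 0.6, 6: 0.3}
print(f"gamma = {gamma_statistic(g):.2f}")
```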
Exploring potential reasons influencing species monophyly. Based on the phylogenetic results of the mitochondrial data, the four species P. contorta, P. extensa, P. tibialis tibialis, and P. vittiformis exhibited non-monophyly in the gene tree, which also conflicted with the species tree (see Results). Specifically, taxonomic error, geographical isolation, incomplete lineage sorting, and hybridization might be potential reasons 10,58 . Therefore, we first re-examined the morphological characters of all samples of these four species to confirm species status. We then used Network 4.6.1.3 (Fluxus Technology, Suffolk, UK) to create intraspecific median-joining networks to visualize the evolutionary relationships between haplotypes using both mtDNA and nrDNA 59 , which might reveal traces of isolation effects on the genetic background. To further distinguish incomplete lineage sorting from hybridization, we used a posterior predictive checking method 60 implemented in JML 1.3.0 61 to test whether incomplete lineage sorting or hybridization explained discordant relationships between the gene and species trees. We specifically examined the conflicting phylogeny within node S of the gene tree, namely the "Node S Species Composition" (NSSC) (Fig. 2). First, we ran another *BEAST analysis of a subset of the multilocus dataset containing only the NSSC species, using the GTR + I + G and GTR + I models for mtDNA and nrDNA, respectively, with a Markov chain Monte Carlo of 11000000 generations (samplefreq = 1,000 and 10% burn-in). Second, we used Modeltest 3.7 48 to estimate the parameters of the substitution model for the mtDNA dataset. Lastly, we conducted a run of the JML software using the same mtDNA dataset, the locus rate of mtDNA as yielded by *BEAST, and the parameters yielded by Modeltest.
Paleoclimate niche modeling reconstruction. To test whether the ancestral NSSC distribution was influenced by Pleistocene climate fluctuations, we incorporated the species coordinates of the NSSC as the ancestral distribution. A total of 19 occurrence records were obtained for niche modeling. We chose bioclimatic variables representing annual trends and extreme or limiting conditions. Variables that were highly correlated were excluded from our selection, leaving seven variables summarizing temperature and precipitation derived from the WorldClim database 62 : annual mean temperature (BIO1), mean diurnal temperature range (BIO2), maximum temperature of the warmest month (BIO5), minimum temperature of the coldest month (BIO6), annual mean precipitation (BIO12), precipitation of the wettest month (BIO13), and precipitation of the driest month (BIO14). All variables were at a resolution of 2.5 arc-minutes. We used the maximum entropy method implemented in Maxent 3.3.3k 63 to estimate niches in environmental dimensions. The analysis was run using default program conditions [cumulative output, convergence threshold (10⁻⁵)] and a maximum number of iterations of 500. The Area Under Curve (AUC) of the Receiver Operating Characteristic (ROC) plot was used for model evaluation. For hindcasting the effect of Pleistocene climatic fluctuations, the current native niche models were calibrated using the above environmental variables and then transferred onto the reconstructed climatic conditions during the LGM (CCSM model) and LIG periods.
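Maxent itself is a standalone program, but the AUC evaluation step is easy to illustrate; the sketch below scores invented presence records (19, matching the count above) against invented background points with scikit-learn's roc_auc_score.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Invented habitat-suitability scores: 19 presence records versus 100
# background points, with presences drawn from a higher-scoring pool.
presence = rng.beta(5, 2, size=19)      # skewed toward 1
background = rng.beta(2, 5, size=100)   # skewed toward 0

y_true = np.r_[np.ones(19), np.zeros(100)]
y_score = np.r_[presence, background]
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")  # high values = good model
```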
Ancestral area reconstruction. Bayesian binary Markov chain Monte Carlo (MCMC) (BBM) analysis implemented in RASP 3.0 64 was employed to reconstruct the possible ancestral distribution areas (subtropical or tropical) of Pseudovelia. We used the multilocus species tree inferred by *BEAST as input to the program. The study area was divided into three regions, i.e., subtropical China, subtropical Taiwan island, and the tropical Indo-China Peninsula, which are separated by the Hengduan Mountains and the Taiwan Strait. Ten MCMC chains were run in two independent analyses for 50000 generations under the F81 + G model. The state was sampled every 100 generations. We then compared the climate space occupied by subtropical and tropical species using direct climate comparisons and principal component analysis (PCA), as these methods allow quick assessments of the relative positions of species in climate space 65 . We extracted the climate values for each occurrence of these two groups using ArcGIS (ESRI, Redlands, CA, USA). The seven variables occupied by subtropical and tropical species were compared visually in boxplots and statistically tested using independent-sample tests in SPSS.
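The climate-space PCA step can be sketched as follows with scikit-learn, using invented values for the seven bioclimatic variables at tropical and subtropical occurrence points; only the workflow (standardize, fit, compare group scores on PC-1) mirrors the analysis described above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
cols = ["BIO1", "BIO2", "BIO5", "BIO6", "BIO12", "BIO13", "BIO14"]

# Invented climate values at occurrence points (tropical warmer/wetter)
tropical = rng.normal([250, 80, 330, 180, 1800, 350, 20], 15, size=(30, 7))
subtrop = rng.normal([160, 90, 300, 20, 1400, 250, 40], 15, size=(30, 7))

X = np.vstack([tropical, subtrop])
Xs = StandardScaler().fit_transform(X)   # PCA on standardized variables

pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)
print("explained variance:", np.round(pca.explained_variance_ratio_, 2))
print(f"tropical PC-1 mean:    {scores[:30, 0].mean():.2f}")
print(f"subtropical PC-1 mean: {scores[30:, 0].mean():.2f}")
```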
Search for Galaxies at z>4 from a Deep Multicolor Survey
We present deep BVrI multicolor photometry in the field of the quasar BR1202-07 (z_em = 4.694) aimed at selecting field galaxies at z > 4. We compare the observed colors of the galaxies in the field with those predicted by spectral synthesis models including UV absorption by the intergalactic medium, and we define a robust multicolor selection of galaxies at z > 4. We provide spectroscopic confirmation of the high redshift QSO-companion galaxy (z = 4.702) selected by our method. The first estimate of the surface density of galaxies in the redshift interval 4 < z < 4.5 is obtained for the same field, corresponding to a comoving volume density of ~10⁻³ Mpc⁻³. This provides a lower limit to the average star formation rate of the order of 10⁻² M⊙ yr⁻¹ Mpc⁻³ at z ~ 4.25.
Introduction
Deep images from the Hubble Space Telescope and ground based telescopes (Keck, NTT, CFHT) are providing new exciting information about abundance and morphology of galaxies in a wide redshift interval up to z ∼ 4.5.
In particular, the search for high redshift galaxies is relevant not only for extracting information about the physical processes which control the formation of individual objects, but also for probing the cosmological evolution of the formation of galactic structures in the Universe. Indeed, cosmological scenarios attempt to follow the evolution of galaxy formation in cosmic time, describing in detail the history of star formation. In the standard CDM cosmology, for example, most of the stars are formed at intermediate redshifts (z ∼ 1; e.g. Cole et al. 1994).
A useful parameter which allows a more direct comparison between theoretical predictions and data interpretations is the star formation rate per unit comoving volume. A reference value at the present epoch of ∼ 5 × 10⁻³ M⊙ yr⁻¹ Mpc⁻³ (for a Salpeter IMF) has recently been given by Gallego et al. (1995) on the basis of an Hα galaxy survey.
Recent estimates of the galaxy luminosity function of faint galaxies up to z = 1 are providing the first evidence of a strong evolution, by a factor of ten, of the cosmological star formation rate in the redshift interval z = 0 − 1 (Lilly et al. 1996; Cowie et al. 1996). This of course implies that more than half of the stars formed at intermediate redshifts, in good agreement with theoretical expectations.
Nevertheless, the same models predict a fraction < 2% of the present mass density in stars at z > 4 (Cole et al. 1994). It is therefore at these very high redshifts that cosmological scenarios for galaxy formation are more vulnerable to observational constraints.
Efficient selection criteria are needed to find galactic structures at these very high redshifts. Well known examples of high z sources are luminous Active Galactic Nuclei like quasars. Their absorption spectra provide unique information on the abundance and ionization state of the intergalactic medium at very high z. The presence of the IGM has a twofold cosmological relevance. First, Lyman absorption by the IGM along any line-of-sight produces a strong depression of the UV spectrum of high redshift sources. Moreover, its high ionization level requires a large background of UV ionizing photons up to z ∼ 4 − 5 (Giallongo et al. 1994). This large UV background is only marginally consistent with that produced by the observed quasars (Haardt & Madau 1996), leaving room for a possible UV contribution by a large number of star-forming galaxies at z > 4.
The Multicolor Selection
In selecting galaxies which are actively forming stars at very high redshifts, two different approaches can be followed. It is possible to exploit the intrinsic spectral properties expected from star formation activity or, in the case of galaxies at z > 4, it is better to exploit the complex but universal opacity to UV photons of the intergalactic medium.
The selection criteria based on the intrinsic spectral properties exploit the main UV features of star-forming galaxies, such as the possible presence of strong emission lines and/or the Lyman absorption break of the flat UV continuum, which is due to the stellar evolutionary properties plus Lyman continuum absorption by the interstellar medium within the same galaxy.
Surveys based on the detection of intense Lyman alpha emission by means of narrow band imaging in the optical/IR band in the redshift interval 1.8 < z < 6 have provided no systematic detections of high z galaxies, with only a few exceptions (e.g. Macchetto et al. 1993).
A very efficient method based on the detection of the Lyman break present in a flat rest-frame UV continuum has been proposed by Steidel & Hamilton (1992). An appropriate choice of a set of broad band colors can allow the detection of the Lyman break in a given, relatively narrow redshift interval. Steidel in particular used a set of U, G, R filters to select Lyman break galaxies in the redshift interval 2.8 < z < 3.4. Since the spectrum of an actively star-forming galaxy is flat longward of the Lyman break, an average color of G-R ∼ 0.5 is expected in the selected redshift interval. At the same time, strong reddening is expected in the U-G color, which samples the drop of the emission shortward of the Lyman break (U-G > 1.5) for z ∼ 3 galaxies.
Given the faintness of the galaxies (R ∼ 24 − 25), low resolution spectra with good signal-to-noise ratios have only recently been produced from observations with the Keck LRIS instrument. Steidel et al. (1996) confirmed with low resolution spectra the identification of 15 galaxy candidates in the expected redshift interval, showing the high success rate (> 70%) of this multicolor selection. Extrapolating the success rate obtained for the subsample of their candidates, Steidel et al. (1996) provide a first estimate of the surface density of galaxies at z ∼ 3 of the order of 0.4 arcmin⁻², corresponding to a comoving volume density of 3.6 × 10⁻⁴ h₅₀³ Mpc⁻³. The average rest frame UV luminosity of these galaxies would imply a cosmological star formation rate SFR ∼ 3 × 10⁻³ M⊙ yr⁻¹ Mpc⁻³.
However, the selection of galaxies at redshift z > 4 becomes considerably more efficient if we take into account the complex absorption produced by the intergalactic medium in the UV spectrum of high z sources. The reddening produced by the IGM in the colors of high z quasars was investigated by Giallongo & Trevese (1990), and a considerable number of very high z quasars have been discovered by means of this multicolor technique (e.g. Warren et al. 1991; Irwin, McMahon & Hazard 1991). Recently, Madau (1995) has refined and applied this multicolor method to the selection of high z star forming galaxies. We have plotted in Fig.1 the average IGM absorption affecting the spectral properties of a constant star-forming galaxy emitting at z = 3.25 or z = 4.25. We have adopted the Madau (1995) absorption model and the galaxy spectrum by Bruzual & Charlot (1993) with a Salpeter IMF. First, note that, at a given redshift, the absorption by the IGM is characterized by the average Lyman alpha forest absorption present just shortward of the galaxy Lyman alpha wavelength and by the absorption of the overall Lyman series down to the Lyman continuum, where the IGM is fully opaque to the UV radiation. While at z ∼ 3 the Lyα forest absorption produces a fractional decrement of only ∼ 30%, at z ∼ 4.5 some 60-70% of the galaxy emission is lost, causing a strong and easily detectable reddening in the broad band colors which sample the relevant wavelength interval.
An efficient sampling of this complex absorption requires at least 4 broad band filters. We have chosen the Johnson BVI and Gunn r filters to extend the multicolor selection up to z ∼ 4.5 (Fontana et al. 1996). These filters are plotted in Fig.1, superimposed on the galaxy spectra.
The r-I color can select the intrinsic flat spectrum of any star-forming galaxy up to z ∼ 4.5, while the V-r and B-r colors provide evidence of the strong reddening expected because of the Lyα and Lyman continuum IGM absorption, respectively.
To examine how robust the color selection of z > 2 galaxies is, we have computed the expected colors as a function of redshift in our photometric system (Fontana et al. 1996), adopting the Bruzual & Charlot (1993) spectral synthesis model. Models of this kind have a number of parameters whose uncertainties can be large in some cases. However, the resulting color changes of a few tenths of a magnitude do not alter the robustness of our color selection, as shown in the following.
To explore how the colors of different galaxy spectral populations are distributed in redshift, we have considered the e-folding star-formation timescale τ as the main parameter of interest. Different τ values reproduce different spectral types. For example, a star-formation timescale of τ ∼ 1 Gyr is more appropriate for an early type galaxy, while τ > 3 Gyr represents the spectral properties of different late type galaxies. At each "observed" redshift, different ages (i.e. different formation redshifts ranging from 1 to 7) have been considered for galaxies with a given τ. A Salpeter IMF and solar metallicity have been adopted.
Our relevant colors B-r, V-r, r-I are reproduced as a function of z in Fig.2 only for the case τ = 1 Gyr.
The first point to note is that the r-I color samples the intrinsic spectrum of galaxies in a wide redshift interval from z = 0 to z ∼ 4.5. At z > 4.5, IGM absorption in the r band produces appreciable reddening in the r-I colors. In selecting galaxies in the redshift range 2.5 < z < 4.2, the fundamental property of galaxies of all spectral types is the flatness of their rest-frame UV spectra, revealed in their r-I colors (see Fig.2). Indeed, in the relevant z interval r-I is always <0.2 due to the intense star formation activity. At z < 2.5 the r-I colors sample progressively longer rest-frame wavelengths where the galaxy spectra are in general steeper, always resulting in r-I>0.2. Thus, the r-I color selection appears very useful for discriminating high z galaxies in the field. Of course, the presence of non-negligible photometric errors suggests the use of bluer colors to select high z galaxies with high confidence.
From Fig.2 it can be seen that the IGM absorption produces strong reddening first in the B-r color, with B-r∼1 at z ∼ 3, then in the V-r color, with V-r∼1 at z ∼ 4. Thus, the simultaneous presence of the three colors at the average expected values can select galaxies at z ∼ 3 and at z ∼ 4 or even higher. Any possible contamination by an old population with steep blue spectra (a pronounced 4000Å break) producing red B-r and V-r colors at z = 0.5 − 1 can be avoided simply by requiring a "flat" r-I color.
Fig. 2. Colors as a function of redshift for galaxies with star-formation timescale τ = 1 Gyr. Different formation redshifts have been adopted in the interval z = 1 − 7.
A QSO Companion Galaxy at z = 4.702
We have applied this multicolor technique to the field around one of the brightest high z QSOs, BR1202-07 at z = 4.694 (Storrie-Lombardi et al. 1996), where at least one very high z galaxy lies close to the line of sight to the QSO, as shown by the detection of a damped absorption system at z ∼ 4.4 (Giallongo et al. 1994).
Deep BVrI images were obtained during 1994 at the NTT with the SUSI direct imaging CCD camera in very good seeing conditions (FWHM ∼ 0.5-0.6 arcsec for the stellar objects in the r and I images). A diffuse object clearly stands out 2.2 arcsec NW of the QSO with an r magnitude of r = 24.3. The companion galaxy has the unusual colors expected for star-forming galaxies at z > 4, i.e. r-I = 0.2, V-r = 1.9, B-r > 3 (Fontana et al. 1996). On the basis of our multicolor selection criterion we estimated a probable redshift range 4.4 < z < 4.7, depending on the intensity of the galaxy Lyman alpha emission. On the basis of the 1500Å continuum flux measured in the I band we derived a star formation rate ∼ 16 M⊙ yr⁻¹ (for a Salpeter IMF). This galaxy has also been detected in the K band by Djorgovski, who estimated a magnitude K ∼ 23. The r-K ∼ 1 color so derived implies a very young age < 10⁸ yr, independently of the assumed metallicity and star-formation timescale.
This galaxy has also been observed in narrow band imaging centered at the Lyman α redshift of the QSO by Hu et al. (1996) and in imaging spectroscopy by Petitjean et al. (1996). Both studies detected Lyman α emission in the galaxy spectrum at z ∼ 4.7. The strong Lyα emission increases the r flux (∆r ∼ −0.8 mag), keeping a flat r-I color up to z ≃ 4.7 despite the strong attenuation in the r band due to the presence of the Lyα forest.
We have recently obtained a low resolution (15Å) spectrum of this galaxy at the NTT with EMMI (D'Odorico et al. 1996, in preparation), which extends well into the red up to 9000Å. The spectrum is shown in Fig.3, where the strong Lyα emission is detected at z = 4.702, corresponding to a proper distance from the QSO of ∼ 600 kpc, or equivalently to a velocity difference ∆v ∼ 400 km s⁻¹. The line flux f ≃ 2 × 10⁻¹⁶ erg s⁻¹ cm⁻² corresponds to a luminosity L_Lyα ≃ 3.8 × 10⁴³ erg s⁻¹. Although this line luminosity could be formally converted into a star-formation rate, some contamination by reprocessing of the QSO UV continuum cannot be excluded even at distances ∼ 100 kpc. More important is the absence of any CIV emission within the flux measured in the I band. This implies that the redshifted I flux can be converted into the star-formation rate of ∼ 16 M⊙ yr⁻¹ previously mentioned. These observations provide the first evidence of strong star formation activity at z ≳ 4.5.
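As a rough consistency check of the quoted Lyα luminosity, the standard L = 4π d_L² f conversion can be evaluated. The cosmology below (Einstein-de Sitter with H0 = 50 km s⁻¹ Mpc⁻¹) is an assumption, since the paper does not state its parameters here; it happens to reproduce the quoted value.

```python
# Back-of-the-envelope check: line flux -> Lyman-alpha luminosity at z = 4.702.
import math

H0 = 50.0              # km/s/Mpc (assumed)
c = 299792.458         # km/s
z = 4.702
flux = 2.0e-16         # erg s^-1 cm^-2, quoted line flux
MPC_CM = 3.0857e24     # cm per Mpc

# Luminosity distance for Omega_m = 1, Lambda = 0:
#   d_L = (2c/H0) * [(1+z) - sqrt(1+z)]
d_L_mpc = (2.0 * c / H0) * ((1.0 + z) - math.sqrt(1.0 + z))
d_L_cm = d_L_mpc * MPC_CM

L_lya = 4.0 * math.pi * d_L_cm**2 * flux
print(f"d_L ~ {d_L_mpc:.0f} Mpc, L_Lya ~ {L_lya:.2e} erg/s")
# gives ~3.8e43 erg/s, consistent with the value quoted in the text
```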
A Sample of High Redshift Galaxies
In the 2.2×2.2 arcmin² field centered on the QSO position we have detected and counted galaxies in the r band down to r ≃ 26 mag by means of the SExtractor software package (Bertin 1994). Reliable colors have been obtained for galaxies with r ≤ 25 mag. We have selected galaxies in two different redshift ranges. First, galaxies satisfying the criteria r-I<0.2 and B-r>1 are expected to lie in the redshift interval 3 ≲ z ≲ 4. We found 11 galaxies at r ≤ 25 mag in this z interval, corresponding to a surface density of 2.3 arcmin⁻². The derived average comoving volume density at z ≃ 3.5 is φ ∼ 10⁻³ Mpc⁻³. The redshift interval 4 ≲ z ≲ 4.5 has been selected by imposing r-I<0.4, V-r>1 and B-r>2. We found 5 galaxies in the field, corresponding to a surface density of 1 arcmin⁻² and to a comoving volume density φ ∼ 10⁻³ Mpc⁻³ at z ≃ 4.25 (see Fig.4). Of course these estimates have to be considered as lower limits, since galaxies at fainter r magnitudes will contribute somewhat to the volume density. Moreover, the selected galaxies have colors consistent with dust free spectral models. Although an intrinsic reddening E(B-V) ≲ 0.1 does not appreciably alter the r-I color selection, some high z dusty galaxies could be lost by our multicolor selection. The average I ∼ 24.5 mag of the galaxies at 3 ≲ z ≲ 4.5 corresponds to an average star-formation rate ∼ 8 M⊙ yr⁻¹. The corresponding cosmological SFR per unit comoving volume is ∼ 10⁻² M⊙ yr⁻¹ Mpc⁻³, in agreement with the value found by Steidel et al. (1996) at z ≃ 3.25. This limit is about 2 times higher than the present-day value derived by Gallego et al. (1995) assuming a Salpeter IMF, and 5 times lower than that at z ∼ 1 (Lilly et al. 1996). Thus the cosmological SFR increases by a factor of 10 from z = 0 to z = 1 and then seems to decline by a factor of 5 or less up to z = 4.5. Assuming a fiducial local stellar mass density ∼ 3 × 10⁸ M⊙ Mpc⁻³ (Cowie et al. 1995) and an age for the z > 4 galaxies of a few 10⁸ yr, a lower limit to the luminous matter density at z ∼ 4.25 could be of the order of 1% of the local value. Of course our estimates are derived in a small field of 4.8 arcmin² centered on a high z QSO. Larger areas are needed to reduce density fluctuations.
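A minimal sketch of the two color selections quoted above, applied to a toy catalog. Only the color cuts and the 2.2×2.2 arcmin² field area come from the text; the example magnitudes and helper names are illustrative placeholders, not the authors' pipeline.

```python
# Toy application of the z~3-4 and z~4-4.5 color cuts described in the text.
FIELD_AREA_ARCMIN2 = 2.2 * 2.2   # field size quoted above

def in_z3_to_4(B, V, r, I):
    """Candidate at roughly 3 < z < 4: flat r-I plus a B-band break."""
    return (r - I) < 0.2 and (B - r) > 1.0

def in_z4_to_45(B, V, r, I):
    """Candidate at roughly 4 < z < 4.5: r-I < 0.4, V-r > 1 and B-r > 2."""
    return (r - I) < 0.4 and (V - r) > 1.0 and (B - r) > 2.0

# toy catalog entries: (B, V, r, I) magnitudes
catalog = [
    (27.8, 26.0, 24.5, 24.4),   # would pass the z ~ 4-4.5 cuts
    (25.6, 24.7, 24.4, 24.3),   # would pass the z ~ 3-4 cuts
    (24.0, 23.8, 23.6, 23.1),   # too red in r-I: likely a lower-z galaxy
]

n_hi = sum(in_z4_to_45(*mags) for mags in catalog)
print(f"z~4-4.5 candidates: {n_hi}, "
      f"surface density ~ {n_hi / FIELD_AREA_ARCMIN2:.2f} arcmin^-2")
```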
Prospects for the VLT
The large collecting area of the VLT can be exploited to confirm and study high z galaxy candidates selected by multicolor photometry. The first and most obvious follow-up is the spectroscopic observation of galaxies down to r ∼ 25 mag by means of the multi-object spectroscopy (MOS) capability of FORS. For objects fainter than r ∼ 25.5, a different approach has to be pursued. Intermediate band filters (200-300Å) can be used to extend the redshift identification to r ∼ 26.5 − 27 mag in a reasonable observing time (Fontana et al. 1996, this volume).
An evidence‐based review of primary spontaneous pneumothorax in the adolescent population
Abstract Primary spontaneous pneumothorax (PSP) is a relatively common problem in emergency medicine. The incidence of PSP peaks in adolescence and is most common in tall, thin males. Recent advances in the care of patients with PSP have called into question traditional approaches to management. This clinical review highlights the changing management strategies for PSP and concludes with a proposed evidence‐based pathway to guide the care of adolescents with PSP.
The incidence of PSP varies based on patient age, whether secondary pneumothoraces are included, and geographic location. In a 2012 study, the estimated annual incidence of all pneumothoraces among children in the United States was 34 cases per 100,000. 12 A retrospective, longitudinal cohort study from Taiwan estimated the annual incidence of PSP at 52 cases per 100,000 persons (children and adults).
By the age of 23 years, the annual incidence steadily decreased to 20 cases per 100,000 persons and continued to decline throughout adulthood. 13 A similar pattern was reported from researchers in Denmark, who examined the incidence of first PSP using a national registry.
The peak annual incidence was 16 cases per 100,000 persons between 16 and 20 years of age. 14 All 3 of these studies were of hospitalized patients, so the true incidence may be higher if patients managed in the outpatient setting were included.
PATHOPHYSIOLOGY
The mechanisms leading to a communication between the alveolar and pleural spaces are likely multifactorial, involving a complex interplay between age, sex, body habitus, and environmental and genetic factors. Of patients with PSP, >80% demonstrate apical subpleural blebs or parenchymal bullae on chest computed tomography (CT). 15 Investigators have identified specific ultrastructural abnormalities in the elastin fibers found in the apical regions of the lungs of individuals undergoing blebectomy/bullectomy, supporting the concept that localized connective tissue abnormalities lead to blebs and bullae development. 16 Spontaneous rupture of blebs or bullae is commonly believed to be the primary mechanism leading to pneumothorax. However, several observations challenge this idea. First, it is unclear how often these lesions are pre-existing at the specific site of air leakage, with sites of lung rupture difficult to demonstrate during surgery or from resected lung tissue. 17,18 Second, up to a quarter of patients with PSP do not demonstrate blebs or bullae on chest CT or during thoracoscopy. 19,20 Third, blebs and bullae are detected in asymptomatic individuals on CT scan and thoracoscopy at rates ranging from 4% to 33%. [21][22][23] There are alternative theories for the pathophysiology of PSP. "Pleural porosity" is the concept that mesothelial cells on the visceral pleura are thought to be replaced by a more porous inflammatory layer that allows air leakage into the pleural space. 24 In addition, as adolescents with PSP frequently have tall, asthenic body types, 25 investigators have speculated that rapid longitudinal growth during adolescence generates greater distending pressure in the lung apex. 6,9,11 Whether greater porosity or greater distending pressures subsequently lead to the formation of blebs or bullae, or contribute to the development of a pneumothorax in individuals who already have localized ultrastructural defects, is unclear.
Environmental factors, such as smoke exposure, may increase the risk for PSP. 25 Compared with non-smokers, the relative risk of a pneumothorax is 4 to 7 times higher in light smokers (1-12 cigarettes/day) and up to 100 times higher in heavy smokers (>22 cigarettes/day). 26 There also are reported associations between cannabis smoking and/or vaping and PSP, but they are confounded by concomitant tobacco smoking. [27][28][29] Several reports identified increased spontaneous pneumothoraces after days with large changes in atmospheric pressure, but there are conflicting findings in recent investigations. [30][31][32] Approximately 10% of patients with PSP have a positive family history of pneumothorax, 33,34 although no specific genetic mutations have been associated with sporadic PSP. 35
Presentation
The primary symptom associated with the development of PSP is chest pain (Table 1). [5][6][7][8][9][10][11]25 Chest pain typically has an acute onset and is localized to the side of the pneumothorax. Bilateral pneumothoraces are unusual and reported in only 1.3% of cases. 6 Less common symptoms include dyspnea (43%) and cough (13%). PSP usually develops while the patient is at rest, both in children (76%) and adults (87%). 8,42 The evidence for associations between specific physical examination findings and the size of the pneumothorax is limited. Small pneumothoraces often are clinically silent, whereas larger pneumothoraces are thought to produce more classically described signs and symptoms, including ipsilateral hyper-resonant percussion, decreased or absent breath sounds, and decreased vocal fremitus. 1,43 Expanding pneumothoraces, generally considered to encompass >50% of the lung volume, may progress to tension physiology and shock. 1 The vast majority of pneumothoraces, however, do not develop tension physiology; tension pneumothorax is defined by the patient having signs or symptoms of tension, not by radiographic evidence alone. 44
Radiographic diagnosis
The diagnosis of PSP is almost always confirmed on a standing posterior-anterior chest X-ray (CXR). 17 Classic findings are radiographic displacement of the pleural line and the absence of lung markings between the visceral pleural line and the chest wall. Expiratory CXRs have no additional diagnostic benefit-studies comparing paired inspiratory and expiratory CXR demonstrate that pneumothoraces can reliably be demonstrated with inspiratory radiographs alone. 45,46 Although CT of the chest is more sensitive for small pneumothoraces, CXR is the preferred initial approach. Any pneumothoraces large enough to be symptomatic should be detected by CXR. Moreover, because of the cost and concerns of relative radiation dosage, chest CT is unnecessary for the majority of uncomplicated PSP cases already identified on CXR. 47,48 Several studies have shown that point-of-care ultrasonography (POCUS) has both high sensitivity and specificity for pneumothorax. 49 The routine use of POCUS in pediatric PSP, however, has not been clearly established. 50,51 The extent of a pneumothorax is usually expressed in 1 of 2 ways-as either an estimated percentage of lung collapsed or on an ordinal scale: small, moderate, or large. Several investigators and international societies have developed approaches to estimate the extent of the pneumothorax based on specific measurements obtained from the CXR.
However, the clinical utility of these approaches in children has proven problematic. First, there are no equations for estimating the size of a pneumothorax in the pediatric population, where the thoracic volume varies with age. Second, there is high variability between raters, approaches, and centers. 43,[52][53][54] Of the 7 recent international pediatric PSP case series, 5 reported unique methods to differentiate small versus large pneumothoraces. [5][6][7][8][9][10][11] Therefore, physicians should be aware that size estimations may not be reliable and, more important, may not correlate well with the clinical findings.
Oxygen
The inhalation of higher than ambient concentrations of oxygen creates a diffusion gradient of nitrogen from the pleural space into the alveoli, which experimentally increases the absorption of gas from the pleural cavity. 55 In animal models, oxygen therapy has been demonstrated to increase the rate of resolution of pneumothoraces. 56 Small clinical studies of older adult patients with secondary pneumothoraces have demonstrated mixed results with oxygen treatment, ranging from no effect to up to a 5-fold increase in the rate of absorption. 57,58 For PSP, the efficacy of oxygen therapy was examined in a retrospective study of 175 pediatric and adult patients. Patients were treated with either room air or 2-4 L/minute nasal cannula oxygen. Patients receiving oxygen had a radiographic resolution rate twice that of those receiving room air (4.3%/day vs 2.1%/day). 59 Although the administration of oxygen may hasten absorption of air from the pleural space, there remains uncertainty regarding the optimal fraction of inspired oxygen, especially the amount that speeds recovery without prolonging hospitalization.
Observation versus intervention
There is general agreement among guidelines that observation is the accepted treatment in minimally symptomatic, clinically stable patients with PSP. 60 With the increasing ease of radiographically assisted thoracentesis and tube placement for direct air evacuation, however, the rates of intervention in adults with PSP have steadily increased during the past several decades. 61,62 Management in children has followed suit, with almost 80% of pediatric patients undergoing some form of intervention-aspiration, chest tube drainage, or video-assisted thoracoscopic surgery (VATS; Table 2). [5][6][7][8][9][10][11] The notion that immediate intervention is needed in PSP has been challenged by several studies. Retrospective cohort studies in adults suggest that observation is safe and effective in patients with PSP who do not have a substantial risk of developing tension physiology. 61 Although we have evidence that observation is non-inferior and associated with fewer complications, the necessary duration of observation is unclear. Reported periods of observation range from 3 to 6 hours in the emergency department (ED) 65 or inpatient setting. 48 The estimated rate of reabsorption of air in the pleural space is 1.25%-2.2% of the volume of the hemithorax each 24 hours. 66,67 Therefore, it is important for the clinician to recognize that there will be delayed radiographic improvement for patients with PSP who do not undergo drainage. For example, a 20% unilateral pneumothorax in a hemithorax volume of 2 L would produce a pneumothorax volume of 400 mL. Assuming a daily absorption rate of 2% of the volume of the hemithorax (2 L × 0.02 = 40 mL), complete resolution of the pneumothorax would take approximately 10 days. The critical criteria for observation, therefore, are whether the patient remains clinically stable and whether there is radiographic evidence of the pneumothorax expanding.
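The worked estimate above can be expressed as a short calculation; the numbers are exactly those quoted in the text (a 20% pneumothorax, a 2 L hemithorax, and an assumed absorption rate of 2% of the hemithorax volume per day).

```python
# Estimated time to radiographic resolution without drainage, per the example above.
hemithorax_ml = 2000.0
pneumothorax_fraction = 0.20
absorption_rate_per_day = 0.02          # fraction of hemithorax volume absorbed per day

pneumothorax_ml = hemithorax_ml * pneumothorax_fraction          # 400 mL
daily_absorption_ml = hemithorax_ml * absorption_rate_per_day    # 40 mL/day
days_to_resolution = pneumothorax_ml / daily_absorption_ml

print(f"~{days_to_resolution:.0f} days to complete resolution")   # ~10 days
```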
Needle aspiration versus tube thoracostomy
For patients in whom intervention is indicated, the 2 primary approaches to removal of pleural air are needle aspiration or tube thoracostomy. In adults, recent RCTs and a Cochrane review comparing both methods found no differences in recurrence rates, whereas needle aspiration resulted in shorter duration of hospitalization and fewer adverse events. [68][69][70][71] In contrast to adults, tube thoracostomy is favored over needle aspiration in pediatric patients. Retrospective and prospective studies examining needle aspiration in pediatric patients with PSP report success rates of approximately 50%, which is lower than cited rates in adults. 5,8,72,73 Among 618 patients included among 7 pediatric case series, 12% underwent aspiration compared with 44% tube thoracostomy ( Table 2). The lower rates of needle aspiration use may reflect
Small-bore versus large-bore chest tubes
Despite the advantages of smaller tubes, large-bore tubes continue to be inserted in pediatric patients with PSP. 48 Studies in adults and adolescents report that small-bore chest tubes (<14 French) 76 are associated with less pain, shorter hospital length of stay, and fewer complications than large-bore catheters, with the equivalent ability to evacuate air from the pleural space. 10,77-83 In a prospective cohort study of adults with large pneumothoraces in the ambulatory setting, small-bore chest tubes connected to a 1-way valve demonstrated a successful resolution rate of 78% by day 4, with added cost savings and a 1-year recurrence rate of 26%. 84
Suction
The role of suction with tube thoracostomy in pediatric PSP is uncertain. This practice is used in cases of ongoing air leak to promote healing by the theoretical apposition of the visceral and parietal pleura. Several guidelines suggest that there is no role for the immediate use of suction in PSP, citing a lack of supportive data and concerns for both reexpansion pulmonary edema 19,47,60,85,86 and potentially increased risk of recurrence. 61,65 Two adult studies demonstrated the rate of lung reexpansion in PSP is similar with or without suction. 87,88
RECURRENCE
The risk of recurrence after a spontaneous pneumothorax is high in pediatric patients, ranging from 21% to 43%. The risk for repeated recurrences (ie, more than 1 recurrence) among patients with PSP treated non-surgically is also believed to be high, ranging from 40% to 83%. 95,[98][99][100][101] However, the referenced literature on the risk of repeated recurrences is based on adult series that included elderly patients, with smoking rates ranging from 61% to 72%.
Prevention of recurrence: VATS and pleurodesis
Since the 1990s, VATS with a wedge resection of a bleb or bullous lesion has been the preferred intervention to prevent recurrence of PSP in the pediatric population. 17,47,[110][111][112] The efficacy and role of VATS for preventing pneumothorax recurrence, however, are somewhat unclear. Among 10 recent retrospective studies, including >2,000 pediatric and adult patients, recurrence after VATS averaged 13% (range, 7%-21%), suggesting a significant reduction compared with non-surgically treated patients (Table 4). 8,11,103,104,110,[113][114][115][116][117] Among studies reporting the laterality of post-VATS recurrences, 54% were contralateral to the site of surgery (Table 4). 8,11,113,114 Younger age at the time of VATS was a consistent risk factor for post-VATS recurrence among studies performing multivariate analyses. 104,110,113,116,117 The high rate of contralateral recurrences in younger patients suggests that the underlying pathogenesis leading to a pneumothorax may progress independent of surgical interventions in developing lungs.
To reduce ipsilateral recurrence rates, pleurodesis is often recommended for patients undergoing VATS. 111,112 Pleurodesis is performed to create an adhesion between the visceral and parietal pleural membranes, theoretically preventing recurrence by removing the pleural space in the area of a previous bleb or bullae. There are several potential approaches to pleurodesis: partial pleurectomy, chemical pleurodesis, mechanical pleurodesis using pleural abrasion, and wide staple line coverage with absorbable material. 118 The benefit and optimal approach to pleurodesis in PSP remains unclear. Retrospective studies show lower recurrence rates post-VATS in patients undergoing mechanical and chemical pleurodesis. 112,113,116 However, an RCT of adolescents and adults with PSP found no difference in 18-month recurrence rates between thoracoscopic wedge resection with or without pleural abrasion. 119 RCTs comparing mechanical pleural abrasion with either apical pleurectomy, chemical pleurodesis with minocycline, or staple line coverage with cellulose mesh and fibrin glue have not demonstrated any approach to be more effective in preventing recurrences. 120-122
PATIENT TRANSPORT AND RESTRICTIONS ON FUTURE ACTIVITIES
Patients with a spontaneous pneumothorax requiring transport for definitive management should not have routine tube thoracostomy performed before transport. 123 Previous literature suggesting tube thoracostomy be performed in the pre-hospital/community setting was based on a single case series of trauma patients and expert opinion. 124 Patients being transported via air medical transport should
TABLE 4. Recurrence of primary spontaneous pneumothorax after video-assisted thoracoscopic surgery in 10 studies that included adolescent patients.
Consensus recommendations vary for non-urgent air travel by patients with an active or recent pneumothorax. Guidelines recommend waiting from 7 to up to 21 days from the date of radiographic resolution. [125][126][127] The Aerospace Medical Association notes that the presence of lung cysts or bullae is not a contraindication to flying. 128 A history of PSP has consistently been a contraindication to compressed-air diving because a recurrence under water could theoretically increase the risk of rapid expansion and tension physiology.
A recent review examining the risk of PSP recurrence with diving supported this conclusion and noted that the available evidence does not support a specific waiting period after pneumothorax resolution or intervention. 96
AN EVIDENCE-BASED APPROACH TO MANAGEMENT
Although treatment patterns in pediatric centers favor interventional approaches, we recommend patients presenting with a PSP without hemodynamic or respiratory compromise be observed for 6 hours off oxygen in the ED, followed by a repeat CXR. If the patient remains clinically stable with no pneumothorax enlargement, the patient may be safely discharged with strict return precautions and follow-up by a primary care physician. In the absence of hypoxemia, we do not recommend oxygen administration to hasten pneumothorax resolution.
For patients with PSP and evidence of tension physiology, including sustained tachycardia, tachypnea, 129 or hypotension, 130 we recommend air evacuation with either needle aspiration or tube thoracostomy. If an initial attempt at needle aspiration is not successful, we suggest that a second attempt be performed, as studies in adults suggest a high success rate for lung expansion with a second attempt. 69,70 We recommend that small-bore chest tubes (<14 French) be inserted via the Seldinger technique 76 due to equivalent efficacy compared with larger tubes and fewer adverse effects. 10,[77][78][79][80] We recommend against the immediate use of suction.
If patients require needle aspiration or tube thoracostomy, we recommend local anesthesia via an intercostal nerve block followed by either intravenous anxiolysis/analgesia or conscious sedation. Anxiolysis with nitrous oxide is discouraged as it has been noted to enter the pleural space, potentially worsening the extent of the pneumothorax. 131 We recommend surgical treatment, specifically VATS, for patients with an air leak beyond 4 days after initial intervention. As there are no reliable predictors of recurrence in adolescents, including the presence of blebs or bullae on chest CT, we do not recommend surgery for otherwise uncomplicated first occurrences. Clinicians should inform patients that VATS will likely reduce the rate of ipsilateral recurrence, but the decision to proceed with surgery should be balanced by the uncertainty about the degree of risk reduction and the risks of operative complications. 132,133 There is increasing recognition that a pneumothorax may be the initial presentation of an underlying pulmonary or connective tissue disorder. For patients with a first PSP, we recommend hospital follow-up with a pulmonologist. The focus of pulmonology evaluation should be a thorough review of past medical and family histories, evaluating risk factors for connective tissue disease and respiratory disorders that predispose to PSP. Patients may require pulmonary function testing (PFT) investigating for occult reactive airways disease or other respiratory disorders (Table 5). An outpatient chest CT should be performed in patients with restrictive or moderate obstructive patterns on PFTs and for all patients with a history of familial PSP or a family history of blebs, bullae, or cysts.
Clinicians should be especially concerned for an underlying disorder in a preadolescent child with pneumothorax, including asthma, foreign body aspiration, congenital malformations, and connective tissue disorders. In addition to patients with positive PFT, chest CT is recommended for all preadolescent children, in patients with more than 1 recurrence, and in biological females, as the mutational burden needed to cause pneumothorax in women appears to be higher. 38 Genetic sequencing for FLCN or referral to a geneticist is recommended for patients with familial PSP, if physical exam findings suggest a pneumothorax-associated syndrome, or if chest CT findings are suggestive of an underlying cystic lung disease.
TABLE 5. Recommended outpatient pulmonology evaluation for pediatric patients with an initial episode of primary spontaneous pneumothorax
Spirometry and plethysmography for: all patients.
Outpatient CT of the chest for: family history of pneumothorax; family history of pulmonary blebs, bullae, or cysts; all preadolescents (younger than 14 years of age); females; recurrence.
Genetic testing/referral to geneticist for: family history of pneumothorax; CT findings of cystic lung disease; physical exam suggesting a genetic syndrome predisposing to pneumothorax, including skin lesions (fibrofolliculomas, trichodiscomas, skin tags, ash leaf spots, translucent skin), skeletal findings (pectus excavatum/carinatum, scoliosis, hand/wrist sign), or facial findings (thin lips and nose, micrognathia, marfanoid facial features).
CT, computed tomography.
AREAS FOR FUTURE RESEARCH
We anticipate several areas of research during the ensuing decade will further improve our understanding of the pathogenesis and management of PSP. Continued efforts to identify novel genetic mutations associated with both sporadic and familial pneumothorax are essential, both to understanding the mechanism of pneumothorax formation and to guide management and prognostication. Future studies are needed examining the costs and benefits of routine chest CT to detect diffuse cystic lung disease in patients with a first-time PSP. This practice has recently been advocated for all adults as a cost-effective means of earlier detection of underlying cystic lung disease. 134 Studies with larger numbers of adolescents with PSP are needed to confirm the safety and efficacy of the observation-only approach.
If pediatric centers adopt more conservative, observation-based approaches, prospective cohort studies combined with genetic analyses for pneumothorax-associated mutations may provide a more personalized approach for selecting patients most likely to benefit from surgery.
For patients needing evacuation of their pneumothorax, prospective studies are needed to assess the efficacy and safety of needle aspiration compared with tube thoracostomy. Prospective studies in adolescents are also needed to determine both the additional utility of pleurodesis to prevent postsurgical recurrence and the frequency of operative complications. Should pleurodesis prove to be superior, investigations into the optimal pleurodesis techniques are also necessary.
CONCLUSIONS
PSP is a relatively common condition in adolescents in the ED, and recent research suggests that several changes to traditional management are indicated, with a focus on more conservative approaches such as observation or needle aspiration. More research is needed to determine the ideal surgical approach in pediatric patients.
ACKNOWLEDGMENTS
This work was supported by grants from the National Natural Science
CONFLICTS OF INTEREST
The authors have no conflicts of interest to disclose.
High-throughput screening identifies broad-spectrum Coronavirus entry inhibitors
Summary The COVID-19 pandemic highlighted the need for antivirals against emerging coronaviruses (CoV). Inhibiting spike (S) glycoprotein-mediated viral entry is a promising strategy. To identify small molecule inhibitors that block entry downstream of receptor binding, we established a high-throughput screening (HTS) platform based on pseudoviruses. We employed a three-step process to screen nearly 200,000 small molecules. First, we identified hits that inhibit pseudoviruses bearing the SARS-CoV-2 S glycoprotein. Counter-screening against pseudoviruses with the vesicular stomatitis virus glycoprotein (VSV-G) yielded sixty-five SARS-CoV-2 S-specific inhibitors. These were further tested against pseudoviruses bearing the MERS-CoV S glycoprotein, which uses a different receptor. Of these, five compounds, which included the known broad-spectrum inhibitor Nafamostat, were subjected to further validation and tested against pseudoviruses bearing the S glycoproteins of the Alpha, Delta, and Omicron variants as well as bona fide SARS-CoV-2. This rigorous approach revealed an unreported inhibitor and its derivative as potential broad-spectrum antivirals.
Figure S3. Dose-response activity and cytotoxicity of compounds before HPLC, related to Figure 4. (Left) Dose-response plots of the hits against VSVΔG-SW, VSVΔG-SM or VSVΔG-G and their cytotoxicity profiles. (Right) Dose-response plots of hits against pseudoviruses with glycoproteins of SARS-CoV-2 variants (VSVΔG-Sα, VSVΔG-S, VSVΔG-Sο). Error bars represent the SEM. ns marks readings where there is no statistically significant difference between VSVΔG-SM and VSVΔG-G at a given concentration. For all other readings, P < 0.05 (two-tailed unpaired t-tests).
Table S1. List of hits from the primary screen, related to Figure 3A.
Table S2. List of hits from the secondary screen, related to Figure 3A.
Table S3. List of hits from the tertiary screen, related to Figure 3A.
Figure S1. Optimization of pseudovirus titer, related to Figure 1. (A) Infectious units/ml of VSVΔG-SW pseudoviruses present in the harvested supernatant at different times, indicating optimal titers at 30 hours post-harvest. (B) Infectious units/ml of samples subjected to centrifugation, indicating a two-fold increase in viral titer with centrifugation. Experiments were performed in the presence of α-G neutralizing antibody to exclude any residual infection from VSVΔG-G that was left over from the production. The statistical significance of conditions was also determined. P: **** ≤ 0.0001 (two-tailed unpaired t-tests). N_experiments = 3, n_repeats = 9.
Figure S2. Overview of a 384-well plate from the screen, related to Figure 2. Column 1, plated with pseudovirus and cells, serves as the neutral control and indicates 100% infection. Column 2, plated with only cells, acts as the positive control, signifying 0% infection or 100% inhibition. Columns 3-22 are spotted with compounds. All wells containing pseudovirus in Columns 1 and 3-22 are pre-incubated with the α-G neutralizing antibody to remove any residual VSVΔG-G infection. To ascertain that the α-G antibody was active, VSVΔG-G pseudoviruses were plated in the presence and absence of α-G in Columns 23 and 24, respectively. All columns contain 0.01% DMSO.
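A hypothetical sketch of how per-plate readouts could be normalized using the control columns described in Figure S2 (column 1 defining 100% infection, column 2 defining 0%). The function name, plate dimensions, and toy values are assumptions for illustration, not the authors' analysis pipeline.

```python
# Per-plate normalization of a 384-well (16 x 24) readout to percent inhibition.
import numpy as np

def percent_inhibition(plate: np.ndarray) -> np.ndarray:
    """plate: 16 x 24 array of raw infection readouts.
    Returns % inhibition for the compound wells in columns 3-22."""
    neutral = plate[:, 0].mean()    # column 1: pseudovirus + cells, 100% infection
    positive = plate[:, 1].mean()   # column 2: cells only, 0% infection
    compounds = plate[:, 2:22]      # columns 3-22: screened compounds
    return 100.0 * (neutral - compounds) / (neutral - positive)

rng = np.random.default_rng(2)
raw = rng.uniform(1e4, 1e5, size=(16, 24))    # compound wells: varying infection
raw[:, 0] = rng.normal(1e5, 5e3, size=16)     # neutral control: full infection
raw[:, 1] = rng.normal(1e4, 1e3, size=16)     # positive control: no infection
print(percent_inhibition(raw).shape)          # (16, 20)
```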
Figure S4. Validation of PCM-0163855 against bona fide SARS-CoV-2, related to Figure 4. (A) Cytotoxicity profile of PCM-0163855 and dose-response plots comparing the inhibitory activity of PCM-0163855 and the known inhibitors Nafamostat and Remdesivir against SARS-CoV-2 Delta variant replication in Vero E6 cells with (left) and without (right) TMPRSS2 overexpression.
Figure S5. General reaction scheme for the synthesis of PCM-0163855 and PCM-0282478, related to Figure 5.
Figure S7. PCM-0296174 reduces infectious SARS-CoV-2 virus by plaque assay, related to Figure 5. (A) Plaque assay performed in Vero E6 cells in the presence of PCM-0296174 (active enantiomer) or PCM-0296173. (B) The active enantiomer reduces viral load by 2 logs. (C) RT-qPCR performed in parallel on the same samples reproduces the results shown in Fig 5B.
Figure S8. PCM-0296174 does not inhibit SARS-CoV-2 Spike binding to ACE2, related to Figure 5. The effect of PCM-0296174 (active enantiomer), PCM-0296173 (inactive enantiomer), Nafamostat (negative control), and soluble Spike (positive control) on SARS-CoV-2 Spike binding to the ACE2 receptor was assessed. None of the compounds inhibited the Spike-ACE2 interaction at the tested concentrations, while the positive control exhibited a dose-dependent inhibitory effect on the binding with an IC50 of 0.03 µM.
Causes and Risk of Death Using Verbal Autopsy in The Ibadan Study of Ageing
Background: The documentation of death is inadequate in many developing countries due to poor coverage of vital registration. In order to fill this gap, Verbal Autopsy (VA) has often been employed as a method of determining the cause of death. Method: A survey of the causes of death in a cohort of elderly persons (aged ≥65 years) over a 39-month period was conducted using Verbal Autopsy (VA). VA was conducted using a questionnaire designed by the World Health Organization (WHO) and the International Network of Demographic Evaluation of Populations and Their Health (INDEPTH), adapted for local understanding. The questionnaire was administered to household members with adequate knowledge of the circumstances of death in the cohort. Two physicians, with knowledge of local terms of illness and the living conditions of subjects, reviewed each VA form independently to assign one or more causes of death, and subsequently met to reach consensus for cases where there were differences of opinion. If consensus could not be reached, the cause of death was regarded as indeterminate. Assignment of causes of death was based on the 9th edition of the International Classification of Diseases (ICD-9), which is the official classification of morbidity and mortality in the country. Result: There were 268 deaths out of the 2149 elderly persons in the study cohort, giving a mortality rate of 33.3 per 1000 person years, with gender specific rates of 35.29 per 1000 person years for males and 31.48 per 1000 person years for females. Infective causes (malaria fever, diarrhoeal disease and febrile illness of unknown cause) accounted for 13.1 deaths per 1000 person years, followed by hypertension/cardiovascular accident and asthma/respiratory pathology, which accounted for 6.8 and 4.6 per 1000 person years, respectively. Multivariate logistic analysis revealed that belonging to a low to average socioeconomic class (OR=1.4, 95%CI=1.3-2.8, p=0.009) significantly increased the likelihood of dying at follow-up, while engaging in moderate-intensity physical activities (OR=0.7, 95%CI=0.5-1.0, p=0.049) reduced it. Conclusion: Infections constituted the predominant causes of death among these elderly people, and belonging to low to moderate economic status increased the risk of dying.

Introduction

Countries that cannot record the number of people who die or why they die cannot realize the full potential of their health systems [1,2]. Admittedly, the development of a functional civil registration system with medical certification of cause of death takes time and resources. However, there are cheaper and readily available tools and techniques that can be used to obtain a fairly accurate representation of mortality trends [1-4]. The interest in causes of death for public health purposes goes back to the 17th century in London, when "death searchers" were recording deaths in the population by weekly household visits, with the main target being to estimate mortality from the plague. Since then, VA has been conducted in research settings by in-depth interviews with the family of the deceased persons [5]. Verbal Autopsy (VA) is a method of ascertaining the probable causes of a death based on an interview with primary caregivers about the signs, symptoms and circumstances preceding that death [2].
It provides an understanding of the pattern of and trends in cause-specific mortality, mortality differentials between population groups, and the effects of interventions in the community. In recent times, two methodological approaches for analysing data obtained from VA have emerged: the first approach is through a judgment made by expert physicians, while the second is by use of an automated computer program. The quality of the data is determined by the details of the information obtained with the questionnaire and the method of analysing its content to reach the probable cause of death [5]. The data so produced are important in efforts at reducing mortality rates and monitoring progress towards achieving health targets, for example, as in the Millennium Development Goals [2-6]. In developed countries, data on disease-specific mortality by age are readily available from national vital registration. In contrast, in developing countries, where the proportion of deaths is significantly higher [7], estimation of cause of death is more difficult because the levels of coverage of vital registration and the reliability of causes of death stated on death certificates are generally low, especially so in rural areas [4-7]. Consequently, information about mortality patterns obtained through VA can be vital for health care planning in developing countries [8,9]. This report from the Ibadan Study of Ageing provides information on cause-specific mortality as well as the relationship between some selected health and lifestyle variables and mortality among the elderly, using verbal autopsy reports.

Methodology

Sampling

The Ibadan Study of Ageing is a longitudinal cohort study of the mental and physical health status as well as the functioning of elderly persons (aged ≥65 years) residing in the Yoruba-speaking areas of Nigeria, which consist of eight contiguous states in the south western and north central regions (Lagos, Ogun, Osun, Oyo, Ondo, Ekiti, Kogi and Kwara). At the time of the study, the population of these states was approximately 25 million people, or about 22% of the national population. The baseline survey was conducted between November 2003 and August 2004. The methodology has been described in full elsewhere [10,11] and only a brief summary is provided here. Respondents were selected using multistage stratified area probability sampling of households. In households with more than one eligible person (aged ≥65 years and fluent in Yoruba, the language of the study), the Kish table selection method was used to select one respondent.

Baseline Assessments

At the baseline in 2003/04, respondents were assessed, among other things, for the presence of chronic physical conditions. We assessed, by self-report, whether respondents had arthritis, diabetes, heart disease and asthma in the previous 12 months using a symptom-based checklist, a method of proven reliability and validity [1,3]. Current and lifetime depression was assessed with the World Health Organization Composite International Diagnostic Interview (CIDI) version three (CIDI.3) and diagnosed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition. As previously described, all respondents were assessed for functional limitations in six activities of daily living and seven instrumental activities of daily living [11,12]. A number of social and lifestyle features were also assessed at baseline.
Participation in household activities and in community activities was assessed using the World Health Organization Disability Assessment Schedule, Version 2 (WHO-DAS II). The relevant items asked "During the last 30 days, how much did you join in family activities, such as: eating together, talking with family members, visiting family members, working together?" and "During the last 30 days, how much did you join in community activities, such as: festivities, religious activities, talking with community members, working together?" Answers are rated as 1) Not at all, 2) A little bit, 3) Quite a bit, and 4) A lot. For this report, responses to each of the items are dichotomized as "Not at all" versus all others. Respondents were asked about use of alcohol and tobacco and responses were dichotomized as ever use versus never use. This VA survey included those elderly subjects who died between 2003/2004 and 2007. Information about the death of a respondent was obtained during the follow-up study. Once the death of a respondent was confirmed by a member of the household or a neighbour, the VA questionnaire was administered to an informant within the household identified as the person who was most knowledgeable about the circumstances of death. This report is based on the survey of all the 268 deaths during the period.

The Verbal Autopsy Interview and Questionnaire

We used the VA questionnaire designed by the WHO and INDEPTH (International Network of field sites with continuous Demographic Evaluation of Populations and Their Health in developing countries) [8,9]. The instrument was adapted and translated to reflect local understanding of symptoms and signs of the assessed health conditions. The questionnaire includes open narrative and closed questions and was administered by trained interviewers. The interviewers had all completed secondary (high school) education and had previous experience in survey methodology. The training of the interviewers emphasized issues such as preferred respondents, the period of the interviews, approaching grieving respondents, and compiling narrative material (ensuring that the duration, frequency, severity and sequence of the symptoms were mentioned). Pretesting was done to facilitate understanding by the study population. Each interviewer was assigned to communities on the basis of his/her previous place of work on the project and experience. Field work was coordinated by supervisors who oversaw the data collection process, checked questionnaires for completeness and consistency, and conducted random quality checks by re-interviewing about 10% of the respondents. Two physicians reviewed each VA form independently to assign one or more causes of death, and subsequently met to reach consensus for cases where there were differences of opinion. If there was no consensus after discussion, the causes of death were recorded as indeterminate. The physicians were knowledgeable about the study area, the population and the common local terms used to express signs, symptoms, causes and conditions of death, but they were not restricted in any way, in order to preserve their independence. The causes of death were assigned by the physicians using clinical judgement based on ICD-9 [13]. ICD-9 was used because it is the official classification of disease in Nigeria.
Ethical approval for all study procedures within the Ibadan Study of Ageing was obtained from the Joint University of Ibadan/University College Hospital Ethics Committee. Data Analysis The average duration between baseline and follow-up assessments was 39.3 months (95% confidence interval: 39.1 39.5). We present the unweighted estimates of the occurrence of death. Mortality rates over the entire follow-up period were calculated by dividing the number of cases of death in each group of interest by the number of person-years of observation in the sample. The person-years at risk for an individual who died were calculated as the time between baseline and the reported time of death, if known, or the midpoint between baseline and time of follow-up, when exact time of death was unknown. We calculated mortality rates per thousand person years at risk for each cause of death and for males and females. Economic status was assessed by taking an inventory of household and personal items, a standard and validated method of estimating economic wealth of elderly persons in low income settings [14]. Respondents’ economic status is categorized by relating each respondent’s total possessions to the median number of possessions of the entire sample. Thus, economic status is rated low if its ratio to the median is 0.5 or less, low-average if the ratio is 0.5 1.0, high-average if it is 1.0 2.0, and high if it is over 2.0. Education was assessed using the number of years spent in formal education and it is classified as 0, 1-6years, 7-12years and 13 or more years in school. Residence was classified as rural (less than 12,000 households), semi-urban (12,000 20,000 households) and urban (greater than 20,000 households). Bivariate analysis was used to explore baseline predictors of death. Respondents’ economic status, presence of any chronic medical condition, level of physical activity, availability of and engagement with social network as assessed by contact with friend’s contact and community participation were explored for their association with death at follow-up. Unadjusted logistic regression was conducted for each of these variables. This was followed with multiple logistic regressions in which all significant variables after univariate analysis were entered into the model [15,16]. All analyses were conducted using the STATA (version 10) statistical package. Result The study sample consisted of 1148(53.4%) females and 1001(46.6%) males with a mean age of 77.3 years (SD = 0.3) at baseline. Majority (55.1%) of respondents had no formal education and resided in rural or semi-urban households (74.2%). The total number of deaths recorded over the follow-up period was 268, giving a mortality rate of 33.3 per 1000 person years at risk. These were made up of 136 females and 132 males, with gender specific rates of 35.29 per 1000 person years for males and 31.48 per 1000 person years for females. (Table 1) presents the results of a comparison of the socio-demographic variables of those who died and with those who were alive. The table shows that deaths were more likely to have come from lower economic groups (p=0.009). (Table 2) shows the cause specific mortality rates per 1000 person years at follow-up. Infective causes constitute the predominant aetiologic factors; malaria fever, diarrhoeal disease and febrile illness of unknown cause accounted for 13.2 deaths per 1000 person years. 
This is followed by hypertension/cardiovascular accident and asthma/respiratory pathology which accounted for 6.9 and 4.6 per 1000 person years, respectively. Citation: Lasisi AO, Esan O, Abiona TA, Gureje O (2018) Causes and Risk of Death Using Verbal Autopsy in The Ibadan Study of Ageing. J Trop Med Health: JTMH118. DOI: 10.29011/JTMH-118. 000118 4 Volume 2018; Issue 02 Variables Those who were dead (268) Those who were alive (1881) p-value n (%) n (%)
Introduction
Countries that cannot record the number of people who die or why they die cannot realize the full potential of their health systems [1,2]. Admittedly, the development of functional civil registration system with medical certification of cause of death takes time and resources. However, there are cheaper and readily available tools and techniques that can be used to obtain a fairly accurate representation of mortality trends [1][2][3][4].
The interest in causes of death for public health purposes goes back to the 17th century in London, when "death searchers" were recording deaths in the population by weekly household visits, with the main target being to estimate mortality from the plague. Since then, VA has been conducted in research settings by in-depth interviews with the family of the deceased persons [5]. Verbal Autopsy (VA) is a method of ascertaining probable causes of death based on an interview with primary caregivers about the signs, symptoms and circumstances preceding that death [2]. It provides an understanding of the pattern of and trends in cause-specific mortality, mortality differentials between population groups and the effects of interventions in the community. In recent times, two methodological approaches for analysing data obtained from VA have emerged: the first approach is through a judgment made by expert physicians while the second is by use of an automated computer program. The quality of the data is determined by the details of the information obtained with the questionnaire and the method of analysing its content to reach the probable cause of death [5].
The data so produced are important in efforts at reducing mortality rates and monitoring progress towards achieving health targets, for example, as in the Millennium Development Goals [2-6]. In developed countries, data on disease-specific mortality by age are readily available from national vital registration. In contrast, in developing countries, where the proportion of death is significantly higher [7], estimation of cause of death is more difficult because the levels of coverage of vital registration and the reliability of causes of death stated on death certificates are generally low, especially so in rural areas [4-7]. Consequently, information about mortality pattern obtained through VA can be vital for health care planning in developing countries [8,9]. This report from the Ibadan Study of Ageing provides information on cause-specific mortality as well as the relationship between some selected health and lifestyle variables and mortality among the elderly using verbal autopsy reports.
Methodology

Sampling
The Ibadan Study of Ageing is a longitudinal cohort study of the mental and physical health status as well as the functioning of elderly persons (aged ≥65 years) residing in the Yoruba-speaking areas of Nigeria, which consist of eight contiguous states in the south western and north central regions (Lagos, Ogun, Osun, Oyo, Ondo, Ekiti, Kogi and Kwara). At the time of the study, the population of these states was approximately 25 million people, or about 22% of the national population. The baseline survey was conducted between November 2003 and August 2004. The methodology has been described in full elsewhere [10,11] and only a brief summary is provided here. Respondents were selected using a multistage stratified area probability sampling of households. In households with more than one eligible person (aged ≥65 years and fluent in Yoruba, the language of the study), the Kish table selection method was used to select one respondent.
Baseline Assessments
At the baseline in 2003/04, respondents were assessed, among other things, for the presence of chronic physical conditions. We assessed, by self-report, whether respondents had arthritis, diabetes, heart disease and asthma in the previous 12 months using a symptom-based checklist, a method of proven reliability and validity [1,3]. Current and lifetime depression was assessed with the World Health Organization Composite International Diagnostic Interview (CIDI) version three (CIDI.3) and diagnosed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition. As previously described, all respondents were assessed for functional limitations in six activities of daily living and seven instrumental activities of daily living [11,12].
A number of social and lifestyle features were also assessed at baseline. Participation in household activities and in community activities was assessed using the World Health Organization Disability Assessment Schedule, Version 2 (WHO-DAS II). The relevant items asked "During the last 30 days, how much did you join in family activities, such as: eating together, talking with family members, visiting family members, working together?" and "During the last 30 days, how much did you join in community activities, such as: festivities, religious activities, talking with community members, working together?" Answers are rated as 1) Not at all, 2) A little bit, 3) Quite a bit, and 4) A lot. For this report, responses to each of the items are dichotomized as "Not at all" versus all others.
Respondents were asked about use of alcohol and tobacco and responses were dichotomized as ever use versus never use. This VA survey included those elderly subjects who died between 2003/2004 and 2007. Information about the death of a respondent was obtained during the follow-up study. Once the death of a respondent was confirmed by a member of the household or a neighbour, the VA questionnaire was administered to an informant within the household identified as the person who was most knowledgeable about the circumstances of death. This report is based on the survey of all the 268 deaths during the period.
The Verbal Autopsy Interview and Questionnaire
We used the VA questionnaire designed by the WHO and INDEPTH (International Network of field sites with continuous Demographic Evaluation of Populations and Their Health in developing countries) [8,9]. The instrument was adapted and translated to reflect local understanding of symptoms and signs of the assessed health conditions. The questionnaire includes open narrative and closed questions and was administered by trained interviewers. The interviewers had all completed secondary (high school) education and had previous experience in survey methodology. The training of the interviewers emphasized issues such as preferred respondents, period of interviews, approaching grieving respondents and compiling narrative material (ensuring that duration, frequency, severity and sequence of the symptoms were mentioned). Pretesting was done to facilitate understanding by the study population. Each interviewer was assigned to communities on the basis of his/her previous place of work on the project and experience. Field work was coordinated by supervisors who oversaw the data collection process, checked questionnaires for completeness and consistency, and conducted random quality checks by re-interviewing about 10% of the respondents. Two physicians reviewed each VA form independently to assign one or more causes of death, and subsequently met to reach consensus for cases where there were differences of opinion. If there was no consensus after discussion, the causes of death were recorded as indeterminate. The physicians were knowledgeable about study area, population and the common local terms used to express signs, symptoms, causes and conditions of death but they were not restricted in any way in order to preserve their independence. The causes of death were assigned by the physicians using clinical judgement based on the ICD 9 [13]. ICD 9 was used because it is the official classification of disease in Nigeria. Ethical approval for all study procedures within the Ibadan Study of Ageing was obtained from the Joint University of Ibadan/University College Hospital Ethics Committee.
Data Analysis
The average duration between baseline and follow-up assessments was 39.3 months (95% confidence interval: 39.1-39.5). We present the unweighted estimates of the occurrence of death. Mortality rates over the entire follow-up period were calculated by dividing the number of cases of death in each group of interest by the number of person-years of observation in the sample. The person-years at risk for an individual who died were calculated as the time between baseline and the reported time of death, if known, or the midpoint between baseline and time of follow-up, when exact time of death was unknown. We calculated mortality rates per thousand person years at risk for each cause of death and for males and females.
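To make the person-years logic concrete, a minimal sketch in Python (illustrative only; the field names are hypothetical and the original analyses were run in STATA):

    from datetime import date

    def years_at_risk(baseline, followup, died, death_date=None):
        # Deceased with a known date: time from baseline to death.
        # Deceased with an unknown date: midpoint between baseline and follow-up.
        # Survivors: the full baseline-to-follow-up interval.
        if died and death_date is not None:
            end = death_date
        elif died:
            end = baseline + (followup - baseline) / 2
        else:
            end = followup
        return (end - baseline).days / 365.25

    def mortality_rate_per_1000(records):
        # records: list of dicts with keys baseline, followup, died, death_date
        deaths = sum(1 for r in records if r["died"])
        person_years = sum(
            years_at_risk(r["baseline"], r["followup"], r["died"], r.get("death_date"))
            for r in records)
        return 1000 * deaths / person_years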
Economic status was assessed by taking an inventory of household and personal items, a standard and validated method of estimating economic wealth of elderly persons in low income settings [14]. Respondents' economic status is categorized by relating each respondent's total possessions to the median number of possessions of the entire sample. Thus, economic status is rated low if its ratio to the median is 0.5 or less, low-average if the ratio is 0.5-1.0, high-average if it is 1.0-2.0, and high if it is over 2.0. Education was assessed using the number of years spent in formal education and it is classified as 0, 1-6 years, 7-12 years and 13 or more years in school. Residence was classified as rural (less than 12,000 households), semi-urban (12,000-20,000 households) and urban (greater than 20,000 households).
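The asset-ratio rule maps directly onto a small helper (a sketch; how the exact boundary values 0.5, 1.0 and 2.0 are assigned to bands is our assumption, since the text does not specify):

    def economic_status(possessions, sample_median):
        # Ratio of a respondent's total possessions to the sample median.
        ratio = possessions / sample_median
        if ratio <= 0.5:
            return "low"
        elif ratio <= 1.0:
            return "low-average"
        elif ratio <= 2.0:
            return "high-average"
        else:
            return "high"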
Bivariate analysis was used to explore baseline predictors of death. Respondents' economic status, presence of any chronic medical condition, level of physical activity, and availability of and engagement with a social network, as assessed by contact with friends and community participation, were explored for their association with death at follow-up. Unadjusted logistic regression was conducted for each of these variables. This was followed by multiple logistic regression in which all variables significant on univariate analysis were entered into the model [15,16]. All analyses were conducted using the STATA (version 10) statistical package.
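A hedged sketch of this two-stage screening strategy, using Python's statsmodels in place of STATA (the variable names are hypothetical, and df is assumed to be a pandas DataFrame with one row per respondent and a binary died outcome):

    import statsmodels.formula.api as smf

    candidates = ["econ_low", "chronic_cond", "phys_activity",
                  "friend_contact", "community_part"]  # hypothetical 0/1 codings

    # Stage 1: unadjusted logistic regression for each baseline variable.
    keep = []
    for var in candidates:
        fit = smf.logit(f"died ~ {var}", data=df).fit(disp=False)
        if fit.pvalues[var] < 0.05:
            keep.append(var)

    # Stage 2: multiple logistic regression with all univariately significant variables.
    adjusted = smf.logit("died ~ " + " + ".join(keep), data=df).fit(disp=False)
    print(adjusted.summary())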
Results
The study sample consisted of 1148 (53.4%) females and 1001 (46.6%) males with a mean age of 77.3 years (SD = 0.3) at baseline. The majority of respondents had no formal education (55.1%) and resided in rural or semi-urban households (74.2%). The total number of deaths recorded over the follow-up period was 268, giving a mortality rate of 33.3 per 1000 person years at risk. These were made up of 136 females and 132 males, with gender-specific rates of 35.29 per 1000 person years for males and 31.48 per 1000 person years for females. Table 1 presents the results of a comparison of the socio-demographic variables of those who died with those who were alive. The table shows that deaths were more likely to have come from the lower economic groups (p=0.009). Table 2 shows the cause-specific mortality rates per 1000 person years at follow-up. Infective causes constitute the predominant aetiologic factors; malaria fever, diarrhoeal disease and febrile illness of unknown cause accounted for 13.2 deaths per 1000 person years. This is followed by hypertension/cardiovascular accident and asthma/respiratory pathology, which accounted for 6.9 and 4.6 per 1000 person years, respectively.
Table 1: Comparison of socio-demographic variables between those who died (n = 268) and those who were alive (n = 1881), with p-values.
Univariate analysis was conducted to identify baseline factors associated with the risk of dying at follow-up (Table 3). The results show that belonging to the low to average socioeconomic class, the presence of any reported chronic medical condition, lifetime major depression, impairment in ADL and IADL, presence of any disability, poor community participation, and lack of contact with friends were significantly associated with the risk of dying. On the other hand, persons who reported engaging in a moderate or high level of physical activity at baseline were less likely to have died at follow-up. Multivariate analysis, in which all variables significant on univariate analysis were included, was next conducted to explore the independent association of each baseline feature. The results confirm that belonging to the low to average socioeconomic group (OR=1.4, 95%CI=1.3-2.8, p=0.009) significantly increased the likelihood of dying at follow-up, while engaging in a moderate to high level of physical activity (OR=0.7, 95%CI=0.5-1.0, p=0.049) reduced it.
Table 3: Univariate logistic regression analyses exploring baseline predictors of death at follow-up (columns: variable, odds ratio, 95% confidence interval, p value).
Discussion
In this study we have estimated the mortality rate in this elderly cohort to be 33.3 per 1000 person years at risk (35.29 for males and 31.48 for females) and have shown that infective health conditions (malaria fever, diarrhoeal disease and febrile illness of unknown cause) constituted the predominant causes of death. Even though several factors, such as the presence of a chronic medical condition, impairment of IADL and ADL, as well as poor social engagement at baseline, were significant predictors of the likelihood of death over the period, none of these remained significant on multivariate analysis. Instead, the results of the multivariate analysis suggest that persons in the lower economic groups were significantly more likely to have died at follow-up, while engaging in at least a moderate level of physical exercise significantly reduced the likelihood of death. The mortality rate reported in this study is high when compared with the rates reported by Simbai et al. [17] in Beirut and Kanungo et al. [18] in India. Simbai et al. [19] estimated total mortality rates of 33.7 and 25.2/1000 person years among men and women, respectively, over 50 years of age, while Kanungo et al. [18], whose work included all age groups, recorded an overall mortality rate of 6.2 per 1000 person-years. The higher figures we reported are undoubtedly partly due to the older age group that we studied. However, the differences could also reflect differences in health care between the study sites.
The causes of death recorded in this study were predominantly infective: malaria fever, diarrhoeal disease and respiratory infections. This is unlike the work of Simbai et al. [17], which found cardiovascular diseases to be the main causes of death. This may be further evidence of the difference in overall public health services between Lebanon and Nigeria. In a similar study among adults in a rural area in Kenya [19], death was attributed to infective causes in about 74% of cases, with HIV (32%) and tuberculosis (16%) being the most frequent, followed by malaria, respiratory infections, anaemia and diarrhoeal disease (approximately 6% each). The authors of that report concluded that the majority of adult and adolescent deaths were attributed to potentially preventable infectious diseases. This was also similar to the report of Kyobutungi et al. [20] in a study conducted among adults in a slum in Kenya. Apart from a lack of social amenities, many communities in developing countries are also characterized by high unemployment, overcrowding, insecurity, greater involvement in risky sexual practices, social fragmentation, and high levels of mobility [21-23]. This is in contrast to some other developing countries with presumably better healthcare models, where cardiovascular diseases, cancer, respiratory ailments and digestive disorders account for the leading causes of death [17,18]. Kanungo et al. [18] found that mortality from cardiovascular problems, mainly ischaemic heart disease and cerebrovascular accidents, increased in people 40 years of age and older over a period of 2 years. Cancer, respiratory diseases and digestive disorders were the second, third and fourth leading causes of death, respectively. Similarly, in Lebanon [17], the leading causes of death were non-communicable, mainly circulatory diseases (60%) and cancer (15%). We found that lower economic class significantly increased the odds of death. In a setting with a poorly resourced health system, it is unlikely that elderly persons who are poor can survive for a long period of time.
To our knowledge, this is the first report of a verbal autopsy study in Nigeria. Verbal autopsy as a method of obtaining mortality information is widely used in countries where vital registration and death certification systems are weak and most people die at home without medical certification of the cause of death [24-26]. This is the situation in Nigeria and many developing countries. A verbal autopsy is a method used to determine the cause of death from data collected about the symptoms and signs of illness and the events preceding death [4,5]. This method is based on the hypothesis that the symptoms and signs surrounding most causes of death can be recognized, recollected and reported by a person present during the period prior to death [6,7,20,21]. However, there are limitations to the use of verbal autopsies. An important one is the imprecision of some signs and symptoms with regard to the underlying disease process. Another is that any self-report is prone not just to recall bias but also to the possibility of individual differences in what is chosen to be reported. The validity of the diagnoses reported here would have been further supported if we could have followed back and determined details of the health care that decedents had access to prior to death. This was not possible because most of the elderly people in the study had not sought care in formal public health care prior to their death. These limitations could also account for the high prevalence of unspecified causes seen in this survey. According to Snow et al. [27], the common causes of death were detected by VA with specificities greater than 80%, and the sensitivity of the VA technique was greater than 75% for measles, neonatal tetanus, malnutrition, and trauma-related deaths.
However, malaria, anaemia, acute respiratory-tract infection, gastroenteritis, and meningitis were detected with sensitivities of less than 50%. Hence, they concluded that VA used in malaria-specific intervention trials should be interpreted with caution and only in the light of known sensitivities and specificities. In this study, the diagnoses were agreed upon between two senior clinicians. The problem of recall bias might constitute a significant limitation to making accurate diagnoses of the cause of death. In addition, some individuals moved away from the study site over the period; for these respondents, it was difficult to ascertain whether they had died or were alive in their new location.
In conclusion, this work has shown that infections and their complications constitute the major causes of mortality in this setting and that belonging to the low to average socioeconomic group significantly increased the likelihood of dying. These findings have implications for mortality prevention policy in our society.
Application of RASCH Models to Validate Emotional Intelligence Inventory (EII) in High School Students
This study aims to validate the Emotional Intelligence Inventory (EII) using the Rasch Model. The research sample comprised 394 senior high school students (SMK and MAN) in the city of Semarang. The sample was selected using cluster random sampling techniques. The results showed that, of the 54 original items, 34 items fit the Rasch measurement model with an acceptable fit index (0.66-1.49). Further analysis showed an item reliability of 0.99 and a person reliability of 0.77. This research proved that the EII can be used as a measurement tool for emotional intelligence in senior high school students.
I. INTRODUCTION
The Industrial Era 4.0 demands higher quality human resources, namely people who are not only superior in terms of rationality but also have an emotional advantage. Research conducted by psychologists shows that rational intelligence contributes approximately 20% to a person's success, while 80% is determined by other forces, including emotional intelligence [1]. The theory of multiple intelligences suggests that there are seven intelligences possessed by humans. Among these seven, the intelligence most closely related to emotional intelligence is interpersonal intelligence.
Salovey and Mayer proposed a definition of emotional intelligence as emotional information processing that includes accurate appraisal of emotions in oneself and others, appropriate expression of emotion, and adaptive regulation of emotion in such a way as to enhance living [2]. Emotional intelligence is the ability to control one's own emotions, to have endurance when facing problems, to motivate oneself, to regulate moods, to empathize, and to foster relationships with others [3].
Emotional intelligence works in synergy with cognitive skills. Research results show a significant positive correlation between the level of emotional intelligence and the learning achievement of second grade high school students [4]. Thus, the higher the level of emotional intelligence of high school students, the higher their learning achievement. Other research by Chamundeswari [5] found a significant correlation between emotional intelligence and students' academic achievement. Further research shows that whether students learn or not is also influenced by emotions, so teachers can support the learning process by helping students understand their emotions [6]. This shows that emotional intelligence significantly influences student learning outcomes [7].
In addition to its influence in the academic field, emotional intelligence also influences individual decision making and leadership attitudes [8,9]. Research conducted by Aprilia [10] shows a negative correlation between emotional intelligence and brawl behavior in adolescent boys who have been involved in brawls at SMK "B" Jakarta. Arbadiati, as cited in Yulianto [11], said that individuals with emotional intelligence have the ability to feel emotions and to manage and utilize emotions appropriately, which eases living life as a social creature.
For secondary school adolescents, emotional intelligence has an important role. The emotional aspect is very much needed because it helps students truly understand themselves and the surrounding environment. Through this emotional aspect, students learn to deal with conflicts and challenges effectively, in the sense that students need to build emotional intelligence in order to control their feelings and channel them in positive and productive ways. Based on a literature search, only two emotional intelligence instruments were found in Indonesia, developed by Syamsu [12] and Wulandari [13]. Both instruments were developed using the classical test theory approach. Wibisono, as cited in Ardiyanti [14], argued that 95% of measurements in psychological studies are still developed based on the classical test theory (CTT) approach. In classical test theory, attitude measurements are treated as interval data so that the data can be processed directly in formulas. Meanwhile, Mitchel, as cited in Ardiyanti [14], argued that data obtained through measurement techniques that ask for opinions or attitudes are nominal or ordinal, so the analytical tools that can be used are limited. The classical test theory approach was later improved upon by the emergence of item response theory (IRT); one model within IRT is the Rasch model. Therefore, this study aims to validate the Emotional Intelligence Inventory using Rasch modeling with the help of the WINSTEPS 4.4.5 computer program [15,16]. Bond and Fox [15] stated that the use of the Rasch model in instrument validation produces more holistic information about the instrument and better meets the definition of measurement. This statement is supported by Sumintono and Widhiarso [17], who argue for the advantages of Rasch modeling over other methods, especially classical test theory, namely its capacity to predict missing data. These advantages put the Rasch model above other methods in accuracy. Additionally, Rasch modeling produces standard error of measurement values for the instruments used, which improves the accuracy of the calculations. Calibration in Rasch modeling is carried out simultaneously in three respects: the measurement scale, the respondents (persons), and the items.
II. METHODS
The Emotional Intelligence Inventory (EII) is a 54-item inventory developed by Mulawarman [18] based on Salovey and Mayer's five dimensions of emotional intelligence: knowledge of one's own emotions, ability to control emotions, ability to motivate oneself, knowledge of others' emotions, and fostering relationships with others. Respondents in this study were 394 students of senior high schools or their equivalent in Semarang. Sampling was done using cluster random sampling techniques [19].
The Emotional Intelligence Inventory (EII) employs Likert scaling with a continuum of "always", "often", "sometimes", "rarely" and "never". For favourable statements, "always" is scored as 5 and "never" as 1; conversely, for unfavourable statements, "always" is scored as 1 and "never" as 5. This study employs the summated ratings method to weigh the values [20]. Researchers used the WINSTEPS Rasch model computer program, version 4.4.5 [15,16], to analyse the data.
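The scoring scheme can be sketched in a few lines of Python (the set of reverse-keyed items shown here is hypothetical; the actual keys belong to the EII manual):

    SCORE = {"always": 5, "often": 4, "sometimes": 3, "rarely": 2, "never": 1}
    UNFAVOURABLE = {"item07", "item12"}  # hypothetical reverse-keyed items

    def score_item(item, response):
        raw = SCORE[response]
        # Unfavourable statements are reverse-coded: 5 becomes 1, 4 becomes 2, ...
        return 6 - raw if item in UNFAVOURABLE else raw

    def summated_rating(answers):
        # answers: dict mapping item id to the chosen response label
        return sum(score_item(item, resp) for item, resp in answers.items())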
A. Checking Item and Person Data for Objective Measurement
This research conducted two preliminary checks to determine whether the item and person data conform to the ideal measurement model. Item and person fit were judged by the Outfit Mean Square (MNSQ), with an acceptable range of 0.5 logit < MNSQ < 1.5 logit, and the Outfit standardized Z (ZSTD), with an acceptable range of -2.0 < ZSTD < +2.0 [10,9,13]. Based on the item analysis, 20 of the 54 items were outliers or misfits, meaning that these 20 items did not measure what they were intended to measure or were difficult for respondents to understand. Information on fit items can be accessed via the link https://osf.io/rz7xw/. At the person level, the researchers found that of the 394 people who filled in the EII, 86 respondents answered inconsistently, were not serious about filling in the EII, or did not understand the items well. Detailed information regarding response misfit can be accessed via the link https://osf.io/q5c92/.
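These fit criteria amount to a simple filter over the exported fit statistics. A minimal sketch, assuming the item or person statistics have been exported from WINSTEPS to a table with outfit MNSQ and ZSTD columns (column names are our assumption):

    import pandas as pd

    def flag_misfits(stats, mnsq_lo=0.5, mnsq_hi=1.5, zstd_max=2.0):
        # stats: DataFrame with 'outfit_mnsq' and 'outfit_zstd' columns,
        # one row per item (or per person).
        ok = ((stats["outfit_mnsq"] > mnsq_lo)
              & (stats["outfit_mnsq"] < mnsq_hi)
              & (stats["outfit_zstd"].abs() < zstd_max))
        return stats[~ok]  # rows falling outside the acceptable fit window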
B. Reliability Test: Item and Person
Reliability explains the extent to which repeated measurements produce the same information, that is, without significant differences in the information obtained [17]. The reliability test in this research was carried out through three processes, namely examining the reliability of items, the reliability of persons, and the reliability of the interaction between persons and items when filling in the EII. Table 1 shows the summary statistics from the Rasch model analysis [21]. Based on Table 1, the consistency of the answers given by respondents, or person reliability (.77), is "adequate", while the reliability of the items in the emotional intelligence inventory (.99) is "excellent". The interaction between persons and items as a whole (.81) is "good". Thus, the EII can be used for measurement, provided rapport is established with respondents so that they are serious in filling it in.
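As background, Rasch reliability is conventionally computed as the ratio of "true" to observed variance of the logit measures; a sketch of that standard formula (our reconstruction, not code from the study):

    import numpy as np

    def rasch_reliability(measures, std_errors):
        # True variance = observed variance minus the mean squared standard error.
        obs_var = np.var(measures, ddof=1)
        err_var = np.mean(np.square(std_errors))
        reliability = (obs_var - err_var) / obs_var
        separation = np.sqrt(reliability / (1 - reliability))
        return reliability, separation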
C. Threshold: Partial Credit Model
In this study, the researchers analyzed the validity of the rating scale to verify whether the answer choices confused respondents. Table 2 shows that the average observed measure starts at -0.69 logit for a score of 1, is 0.00 logit for a score of 2, and increases to +1.31 logit for a score of 4 and +1.94 logit for a score of 5. The logit value for each option increases monotonically, which shows that respondents could distinguish between the answer choices provided in the EII. The average observed values are consistent with the Andrich threshold values, which move monotonically from NONE toward a negative logit and then on toward a positive logit for each answer choice, indicating that the answer choices are valid (NONE -> -1.87 logit -> -0.81 logit -> 0.34 logit -> +2.34 logit). Thus, it can be concluded that the fit between the items and the answer choices is ideal for measurement.
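The monotonicity requirement on the Andrich thresholds is easy to verify programmatically; a sketch using the values reported above:

    thresholds = [-1.87, -0.81, 0.34, 2.34]  # Andrich thresholds from Table 2

    def thresholds_ordered(taus):
        # Each threshold should exceed the previous one; disordered thresholds
        # would suggest that the category labels confuse respondents.
        return all(later > earlier for earlier, later in zip(taus, taus[1:]))

    assert thresholds_ordered(thresholds)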
D. Estimation of Validity Through Principal Component Analysis

The unidimensionality test used Principal Component Analysis (PCA), which aims to verify that the instrument measures what it is supposed to measure, in this case emotional intelligence. Based on the table, the raw variance explained by the measures was 34% (> 20%). Thus, it can be concluded that the emotional intelligence inventory meets the unidimensionality requirement.
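As a rough approximation of the "raw variance explained by measures" that WINSTEPS reports (an assumption on our part, not the program's exact computation), one can compare the variance of the observed responses with the variance of the Rasch residuals:

    import numpy as np

    def raw_variance_explained(observed, expected):
        # observed, expected: person-by-item score matrices; 'expected' holds
        # the model-implied scores from the Rasch calibration.
        residual = observed - expected
        return 1 - np.var(residual) / np.var(observed)  # should exceed 0.20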
E. Item Measure
One of the advantages of using the Rasch model is that it provides researchers with a linear scale with equal intervals. Item measures provide information about which items are the most difficult for respondents to endorse and which are the easiest.
Based on Table 4, item 1, with a logit value of +2.11, is the item that respondents found most difficult to agree with in the EII instrument, while item 2, with a value of -1.73 logit, is the item most easily agreed with by respondents.
F. Person Measure
Person measures provide information about each person's logit. Detailed information regarding person measures can be accessed via the link https://osf.io/kqt3a/. Based on the person measure analysis of the 308 remaining respondents, person 48 showed the highest emotional intelligence (+2.90 logit), while person 144 showed the lowest emotional intelligence, answering mostly in the direction of "never".
IV. CONCLUSION
Based on the data from this study, the researchers conclude that the Emotional Intelligence Inventory has very good item reliability and adequate person reliability. Validity through PCA (34% > 20%) is fulfilled. A limitation of this research is that respondents were drawn only from Semarang. To obtain better results, the EII needs continuing development, in terms of the wording of the statement items, the number of statement items, and the content of the statements, as well as the subjects used in developing the Emotional Intelligence Inventory.
Effectiveness of Remotely Delivered Parenting Programs on Caregiver-child Interaction and Child Development: a Systematic Review
Remotely delivered parenting interventions are suitable for promoting child well-being and development in a context of social isolation, such as the one our society faced due to COVID-19. The objective of this systematic review was to assess the effectiveness of remotely delivered parenting interventions for typically developing children on caregiver-child interaction and child development. We carried out a systematic search for studies, from the inception of the databases to September 2021, in six electronic databases: MEDLINE, CINAHL, Embase, Scopus, Web of Science Core Collection and Regional Portal Information and Knowledge for Health (BVS), and in the gray literature. Eligible study designs were experimental and quasi-experimental studies. We included parenting interventions as long as they were remotely delivered and focused on typically developing children. Two outcomes were considered: caregiver-child interaction and child development. Three randomized controlled trials (RCTs) and one quasi-experimental study met the inclusion criteria. Results from two RCTs revealed positive, small-to-medium effects on child development. One study showed that the new intervention was not inferior to the traditional support. Children who participated in the quasi-experimental study showed significant gains in language ability. One study reported positive caregiver-child interaction results. There is insufficient evidence to draw definitive conclusions regarding the effectiveness of remotely delivered parenting interventions on child development due to the heterogeneity of participant profiles, modes of delivery, and assessment tools. The results suggest the need for future methodologically rigorous studies assessing the effectiveness of remotely delivered parenting interventions for typically developing children on caregiver-child interaction and child development.
• Remotely delivered parenting interventions for typically developing children are suitable in a context of social isolation.
• There are few randomized controlled trials that assess the effectiveness of remotely delivered parenting interventions for typically developing children.
It is well known that the first years of life are crucial for child development and health, and the impact of experiences lived during the early years can remain over a lifetime (Bick & Nelson, 2017; Fox et al., 2010; Shonkoff & Phillips, 2000). Nonetheless, a significant number of children in low- and middle-income countries are at risk of not achieving cognitive, language, and socio-emotional skills in their early years of life (Lu et al., 2016; McCoy et al., 2016). Poverty-related risk factors can negatively interfere with the quality of caregiver-child interaction and the stimulation of children at home, which is associated with poor early child development outcomes for typically developing children (Black et al., 2017; Grantham-McGregor et al., 2007; Shonkoff & Phillips, 2000).
Evidence shows that families, especially those in vulnerable conditions, need support and motivation to promote the well-being and development of their children (Britto et al., 2017;Rayce et al., 2017;WHO, 1997). In this context, the 2017 Series on Advancing Early Childhood Development in The Lancet called for early child development programs to be integrated into government health services for large-scale implementation, with the aim of reaching the greatest number of children at risk of not achieving their developmental potential (Britto et al., 2017).
Parenting interventions were implemented on the premise that improving caregiver-child interaction may positively influence child development (Attanasio et al., 2014; Teepe et al., 2017; Weisleder et al., 2018; Yousafzai et al., 2015). Previous systematic reviews and meta-analyses have confirmed positive effects of parenting programs on both caregiver-child interaction and child development (Britto et al., 2015; Jeong et al., 2021; Rayce et al., 2017). However, these interventions were delivered mainly through face-to-face modalities.
We have just experienced a situation of social isolation due to the pandemic caused by the coronavirus disease (COVID-19), which made it impossible to implement face-to-face interventions and has been documented as a risk factor for the development of typically developing children (Center on the Developing Child, 2020; Cluver et al., 2020; UNICEF, 2020). Thus, we conducted this systematic review to address the gaps in the knowledge about remotely delivered parenting interventions to improve both the caregiver-child interaction and the early development outcomes of typically developing children. Interventions of interest were those aimed at typically developing children, i.e., those who did not have behavioral or other diagnosed problems that already place them at risk of child development delays.
A preliminary search of different databases did not identify systematic reviews that have specifically examined the effectiveness of remotely delivered parenting interventions for typically developing children.
Therefore, the objective of this review was to assess the effectiveness of remotely delivered parenting interventions for typically developing children on caregiver-child interaction and child development. The review question was: what is the effectiveness of remotely delivered parenting programs for typically developing children on caregiver-child interaction and child development when compared to usual care or no intervention?
Method
We performed a systematic review according to the guidelines of The Joanna Briggs Institute (Tufanaru et al., 2017) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (Page et al., 2021).
Search Strategy
The search strategy aimed to find both published peer-reviewed articles and gray literature from the inception of the databases to September 2021. An initial limited search was performed on MEDLINE and CINAHL to analyze the text words contained in the titles and abstracts and the index terms used to describe the articles. This information informed the development of a search strategy, which was adapted and tailored to each information source. Other databases searched included Embase, Scopus, Web of Science Core Collection, and Regional Portal Information and Knowledge for Health (BVS). The search for unpublished studies included gray literature sources: Open Grey, Google Scholar, Science Direct, DART-Europe E-theses Portal, Biblioteca Digital Brasileira de Teses e Dissertações (BDTD), Theses Canada Portal, and Library and Archives Canada. Key terms searched included childhood terms (infant/child/children), parenting intervention (intervention/training/program/parenting/positive parenting practices), caregiver-child interaction (parent-child relations/parent-child interaction) and child development (cognitive development/cognition/executive function/language development/communication/motor development/gross motor/fine motor/socioemotional development/socio-emotional and emotional) (Supplementary material, Table 1). The English language was used in the search strategy, as well as singular and plural expressions. Truncation and proximity operators were employed to increase the accuracy of the search.
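As an illustration of how truncation and proximity operators can be combined, a hypothetical fragment in Ovid-style syntax (the actual strategies appear in Supplementary Table 1):

    (infant* or child*) and (parent* adj3 (intervention or training or program*))
    and ("parent-child relations" or "parent-child interaction"
    or "child development" or "language development" or "motor development")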
Participants
The review included studies on interventions aimed at caregiver-child dyads. Caregiver was defined as any adult person, biologically related to the child or not, who lives with the child daily and has the responsibility of caring for, stimulating, loving and educating the child, and with whom the child forms the strongest emotional bonds in the first years of life (WHO, 2004). Children had to be under 36 months of age at the time of the intervention, as it is well known that the first three years of life are a critical period for child development and long-term health (Center on the Developing Child, 2010). In addition, the participants had to be typically developing children.
Intervention
This review included studies that evaluated parenting interventions, defined as interventions or services aimed at improving parenting interactions, behaviors, knowledge, beliefs, attitudes, and practices in order to improve child health and development (Britto et al., 2017). The interventions had to be remotely delivered through any remote option such as phone, email, mail, online parenting groups, radio, or TV; they could not be face-to-face programs, i.e., the professional could not be present at the moment of the intervention. They also had to be specifically focused on typically developing children (i.e., children without developmental delays, diagnosed conditions, behavioral problems or other problems). There was no restriction regarding the content, length, and/or frequency of the intervention.
Comparators
This review considered studies that compared the intervention with usual care, no intervention, or another intervention that was not remotely delivered.
Outcomes
This review considered two outcomes: caregiver-child interaction and child development. Caregiver-child interactions are determined by external conditions (environmental and social factors), by internal parental motivations (sensitivity, responsiveness, language and cognitive stimulation, positive regard/warmth, behavior guidance), and by infant capacity (very low birth weight, neurological difficulties, immature neurophysiological systems) (WHO, 2004). In general, caregiver-child interaction provides for children's physical needs and protects children from harm; with their basic needs met, children can turn their attention towards learning about the important features of their world (WHO, 2004). Considering the complexity of this concept and the variety of understandings and tools used to assess caregiver-child interaction, this review included studies that explicitly mentioned the expression "interaction" in their methods, including mother or parent, and all the different forms of assessing the interaction.
Child development is a maturational and interactive process of change that begins at conception and through which the child acquires increasingly complex motor, cognitive, language and psychosocial functions. Furthermore, it depends on the interaction between genetics and the environment (Black et al., 2017; Engle et al., 2007). This definition incorporates the three main domains into which child development has been divided: physical, cognitive and psychosocial. The physical domain corresponds to physical and brain growth, taking into consideration sensory abilities and motor skills. Cognitive development refers to mental skills such as thinking, memory, reasoning, learning, and language. Finally, the psychosocial domain is related to emotions, personality, and social relationships (Papalia et al., 2006). The three domains are interrelated, making development multidimensional and multidirectional. In the literature, child development assessment includes a variety of tools that evaluate all domains or one in particular. Therefore, this review included all available tools.
Types of Studies
For the search strategy, we considered both experimental and quasi-experimental study designs, such as randomized controlled trials, non-randomized controlled trials, before and after studies, and interrupted time-series studies. Studies published in English, Portuguese and Spanish were included.
Exclusion Criteria
This review excluded:
• Studies including children with specific characteristics such as: preterm birth, commonly defined as birth before 37 weeks of gestation; autism; being victims of any type of violence; presence of illness at the time of the study; behavioral problems, such as aggressiveness; and presence of special needs;
• Studies including parents with physical or mental illness;
• Studies limited to adolescent parents or that did not distinguish this group in the statistical analyses;
• Studies reporting only protocols of randomized clinical trials.
We decided to exclude studies including children with special needs or diagnosed conditions because these children require more targeted supports, and outcomes among these children may not be generalizable to the general population.
Study Selection
All identified citations were uploaded into Mendeley (Mendeley Ltd, Elsevier, The Netherlands) and duplicates were removed. Studies were selected by screening titles and abstracts according to inclusion criteria, and assessed by two independent reviewers (KSC and LSD). Then, the full text of potentially relevant studies was retrieved and assessed according to the inclusion criteria. The reasons for the exclusion of full-text studies were presented in a PRISMA flow diagram. No disagreements arose between the reviewers; therefore, a third reviewer was not required.
Assessment of the Methodological Quality
The selected studies were critically appraised by two independent reviewers (KSC and LSD), using standardized critical appraisal instruments from the Joanna Briggs Institute (JBI), specific to the study design. These critical appraisal instruments have been developed and approved by JBI Scientific Committee following extensive peer review (Tufanaru et al., 2017). Any disagreements were resolved through consensus without consulting a third reviewer. All studies that were critically appraised were included for data extraction and synthesis (Supplementary material. Table 2).
Data Extraction and Data Synthesis
The following data were extracted by the authors: study characteristics (including authors, country where the study was conducted, year of publication, study design, and participants), the mode in which the parenting intervention was delivered, and effectiveness on caregiver-child interaction and child development. Due to the heterogeneity of measures of caregiver-child interaction and child development, a meta-analysis could not be performed. The results are therefore presented as a narrative.
Description of the Studies
The results of the search are presented in the PRISMA flow diagram (Fig. 1). A total of 8825 articles were retrieved from the selected databases. Additional records identified through other sources included 1099 studies. After removing the duplicates, 7291 remained. Among these, 7243 were excluded after screening the titles and abstracts. Thus, 48 studies were evaluated for full-text eligibility; however, one study could not be retrieved, so it could not be determined whether it assessed a remotely delivered parenting intervention. Of the 47 articles that advanced to the full review phase, 43 did not meet the inclusion criteria (Supplementary material, Table 3). Therefore, four articles (Abimpaye et al., 2019; Feil et al., 2020; Gilkerson et al., 2017; Sawyer et al., 2017), representing four unique interventions for typically developing children, were assessed for methodological quality and subsequently included in this review. Table 1 describes the characteristics of the studies included in this review. Three included studies were randomized controlled trials and one was a quasi-experimental study. All of them assessed different modes of remotely delivered parenting interventions for typically developing children. Two studies compared the remotely delivered intervention with a control group that received usual care (Sawyer et al., 2017) or no intervention (Gilkerson et al., 2017). One study compared the remotely delivered intervention with a control group (Feil et al., 2020). One study compared the remotely delivered program with another intervention, as well as with usual care (Abimpaye et al., 2019). The studies were carried out in three different countries: Rwanda (Abimpaye et al., 2019), Australia (Sawyer et al., 2017), and the United States of America (Feil et al., 2020; Gilkerson et al., 2017). All studies were published in English. It is worth mentioning that one of the randomized controlled trials assessed the non-inferiority outcomes of the parenting program (Sawyer et al., 2017): the authors reported that the results of the parenting intervention (clinic-based postnatal health check plus Internet-based support) were not inferior compared to the home-based support program.
Participant characteristics
A total of 2475 caregiver-child dyads from the four studies were considered in this review. Sample sizes ranged from 72 to 1450 dyads. Two studies were delivered exclusively to mothers (Feil et al., 2020; Sawyer et al., 2017), and two were addressed to the principal caregiver (mother, father, or other) (Abimpaye et al., 2019; Gilkerson et al., 2017). Three studies recruited families without specific characteristics (Abimpaye et al., 2019; Gilkerson et al., 2017; Sawyer et al., 2017) and one recruited mother-infant dyads from households at or below 130% of the federal poverty guideline (Feil et al., 2020). All four studies enrolled children postnatally, from newborns to 36 months old (Table 1).
Interventions
Tables 2 and 3 present the intervention details. All four interventions focused on promoting parenting skills and improving developmentally appropriate learning opportunities for children through talking and playing. Additionally, Sawyer et al. (2017) included some topics related to child health. Although all four interventions were remotely delivered, they used different modes of delivery: radio (Abimpaye et al., 2019) and the internet (Feil et al., 2020; Gilkerson et al., 2017; Sawyer et al., 2017). Furthermore, the intervention that used radio and one of those that used the internet were complemented with face-to-face support, such as home visits (Abimpaye et al., 2019) and a clinic-based health check (Sawyer et al., 2017). The other two interventions were delivered entirely remotely (Feil et al., 2020; Gilkerson et al., 2017) (Table 2). One intervention was delivered by trained local volunteers (Abimpaye et al., 2019), two were delivered by a language development expert and trained staff members (Feil et al., 2020; Gilkerson et al., 2017), and one was moderated by nurses (Sawyer et al., 2017). The duration of all four interventions was short term (≤6 months). The intensity of the interventions varied from access when needed (Sawyer et al., 2017) to a monthly delivery (Abimpaye et al., 2019).
Outcomes
Three studies reported child development outcomes (Abimpaye et al., 2019; Gilkerson et al., 2017; Sawyer et al., 2017), and one study reported caregiver-child interaction results (Feil et al., 2020) (Table 1). All four studies presented post-test and follow-up results, and one study reported long-term outcomes (Sawyer et al., 2017) (Table 3). In general, the interventions showed satisfactory results for child development, and all studies confirmed their hypotheses: intervention groups showed better results than control groups, with small to medium effect sizes. However, the study by Gilkerson et al. (2017) did not report its language development results separately for the intervention and control groups. Instead, the authors combined the 12-month results of both groups after receiving the intervention, which makes it difficult to confirm that the improvement in language development is specifically due to the intervention.
The trial comparing an exclusively remotely delivered parenting intervention with a remotely delivered intervention complemented by face-to-face support found a smaller effect size for the exclusively remote intervention (Abimpaye et al., 2019) (Table 3). However, the variation in participant profiles, modes of delivery, assessment tools and measurement times, as well as in data treatment, makes a comparative analysis of the results of the three studies difficult.
Discussion
We aimed to assess the effectiveness of remotely delivered parenting interventions for typically developing children on caregiver-child interaction and child development. We identified and assessed three randomized controlled trials and one quasi-experimental study of remotely delivered parenting interventions for typically developing children that were carried out to enhance child development. Due to the aforementioned limitations of the study by Gilkerson et al. (2017), we will focus the discussion of the results on the other three interventions. Two studies showed small effect sizes of the remotely delivered interventions on socioemotional development (Abimpaye et al., 2019; Sawyer et al., 2017). The only study that reported caregiver-child interaction as an outcome showed a positive effect of the intervention on mothers' language-supportive parenting behavior, which is a component of caregiver-child interaction (Feil et al., 2020). It is not feasible to provide strong evidence on the effectiveness of remotely delivered parenting interventions for typically developing children on caregiver-child interaction and child development due to the low number of studies and the heterogeneity of the interventions. The researchers of the four studies meeting the inclusion criteria used two different resources to deliver the interventions remotely: radio and the Internet. This diversity of modes of delivery may result from the context and characteristics of the people receiving the intervention. In the case of the First Steps Project, in the context of Rwanda, radio is a highly accessible and cost-effective technology that makes it possible to reach a larger number of families. Additionally, radio has the advantage of delivering the content of the intervention in a format that is effective in low-literacy contexts. These characteristics make radio a useful resource for delivering parenting interventions in other low- and middle-income countries. Notwithstanding, radio has the limitation of hindering two-way communication between families and providers, which may require a complementary face-to-face component. In Australia and the United States, as in other high-income countries, the use of the Internet to provide health care and support has increased significantly. Compared to radio, the Internet allows two-way, real-time communication between participants and providers; however, socioeconomically disadvantaged families with no access to the Internet, or families who lack digital skills, could not benefit from the intervention.
Remotely delivered parenting interventions for typically developing children seem to be an effective mode of delivery compared to usual care or no intervention. They showed positive results, which suggests that remotely supported interventions can guide families through the challenges of caring for their children. As the saying goes, doing will always be better than not doing: regarding the child development of typically developing children, not doing has a high cost in opportunities and equity for children, families and society (Richter et al., 2017). Furthermore, considering conditions in which health or social systems cannot guarantee face-to-face programs, it is important to have early childhood interventions that can be remotely delivered. Nonetheless, when compared to interventions that included face-to-face support, the effect size tended to be smaller (Abimpaye et al., 2019; Sawyer et al., 2017). We believe that face-to-face contact may produce better results because it promotes higher motivation and engagement of the families (Burrell et al., 2018). Moreover, interventions that used two modes of delivery may have better results, since an extensive review has consistently shown that those that used more than one modality obtained better results than those that used only one modality (Britto et al., 2015).
The emergence of new and different modes of delivering parenting interventions is not intended to replace traditional face-to-face programs, which already have strong evidence and clinical, social, and political support (Sawyer et al., 2017). In this regard, remotely delivered parenting interventions represent good options to complement face-to-face interventions and to ensure that all children and their families benefit from initiatives that seek to develop parenting skills and improve early childhood outcomes; an example is the complex situation created by the COVID-19 crisis, which enforced measures such as staying at home and social distancing. Despite the existence of numerous systematic reviews and meta-analyses evidencing the effectiveness of parenting interventions for typically developing children on caregiver-child interactions and child development (Britto et al., 2015; Eshel et al., 2006; Jeong et al., 2018; Shah et al., 2016), none of these evaluated exclusively the effects of remotely delivered parenting interventions. The findings of this systematic review are similar to those of previous reviews and meta-analyses that showed modest benefits of parenting interventions. A systematic review and meta-analysis of 21 interventions involving parents in psychosocial stimulation to promote child development revealed that stimulation had medium effect sizes of 0.42 and 0.47 on cognitive and language development, respectively (Aboud and Yousafzai, 2015). A meta-analysis assessing interventions for families with children at risk of developmental harm found a high effect size for parent-child interaction, whereas the effects on cognitive development were either small or not significant (Rayce et al., 2017). A meta-analysis including 15 studies evaluating stimulation interventions reported a medium effect size in improving mother-child interactions (Jeong et al., 2018). Therefore, it appears that, regardless of whether the intervention is delivered face-to-face or remotely, achieving a large effect of parenting interventions for typically developing children remains a challenge for researchers and stakeholders.
Currently, it is expected that there will be an increase in the number of remotely delivered parenting interventions aimed at typically developing children. Early childhood development services must respond to the demands that parents have about caring for their children. For all professionals and disciplines working with children, it is necessary to reinvent childcare services with the aim to guarantee the children's safety and their developmental potential. Isolation should not be a reason to leave families without support. Therefore, remotely delivered parenting interventions allow the continuity of childcare services that aim to promote child development and can reduce the negative impact of the current situation due to COVID-19 or other infectious diseases. The above-mentioned non-traditional initiatives, which were effective in improving child development, are excellent ideas to develop parenting interventions to support, educate and motivate families to provide a nurturing and safe environment, especially in the early years of life.
Strengths and Limitations
We identified some strengths in this systematic review. (Per-study results are reported in Table 3.) In relation to the methodological appraisal, the four studies were well-designed randomized controlled trials. They demonstrated a low risk of bias, examined large sample sizes, and used appropriate statistical analyses including adjusted variables. The most affected items were the blinding of participants and providers, which is justifiable for educational intervention trials. This systematic review also has some limitations. Despite the comprehensive search across all databases, some eligible studies may have been missed. The review only included studies in English, Portuguese and Spanish, so additional studies written in other languages may have been excluded. The small number of included studies and the heterogeneous nature of the interventions, outcomes, and settings represent a limitation. Furthermore, our findings may not be generalizable to children with special needs such as developmental delays, diagnosed conditions or behavioral problems.
Implications for Practice
A systematic review of randomized controlled trials is considered to yield the highest level of evidence in evaluating the effectiveness of interventions. However, due to the small number of studies and the insufficient evidence on the outcomes, it is difficult to draw definitive conclusions about the effectiveness of remotely delivered parenting interventions for typically developing children. The tentative findings indicate that remotely delivered parenting interventions may improve the development of typically developing children and seem to have a positive and beneficial impact on children's well-being. Hence, remotely delivered parenting interventions may be an appropriate tool for early childhood services and health professionals to maintain childcare and family support even under isolation conditions due to COVID-19 or other infectious diseases.
Implications for Research
The small number of included studies suggests that there is an urgent need for large, high-quality trials to evaluate remotely delivered parenting interventions for typically developing children. In addition, the fact that only one of the trials included long-term follow-up results reveals the need for future investigations, considering that long-term follow-up may demonstrate the lasting effects of these kinds of programs on child, caregiver, and caregiver-child outcomes. Finally, it is necessary to study the effectiveness of interventions that support families during pandemics such as COVID-19 or other infectious disease outbreaks.
Conclusions
This study is the first systematic review assessing the effectiveness of remotely delivered parenting interventions on caregiver-child interactions and child development. Currently, there is insufficient evidence to draw definitive conclusions, and it was not possible to conduct a meta-analysis due to the heterogeneity in the characteristics of the studies, in terms of both the intervention designs and their outcomes. However, these results suggest the need for future methodologically rigorous studies assessing the effectiveness of remotely delivered parenting interventions for typically developing children. Further studies could substantially contribute to the field of early childhood development.
Funding The authors have no relevant financial or non-financial interests to disclose.
Compliance with Ethical Standards
Conflict of Interest The authors declare no competing interests.
Informed Consent Informed consent was obtained from all individual participants (and the legal guardians of participants <18) included in the study.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
2022-05-23T05:06:26.767Z
|
2022-05-21T00:00:00.000
|
{
"year": 2022,
"sha1": "eb7e6cb0a63354d5d8322c428ef6a13947f304a5",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10826-022-02328-8.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb7e6cb0a63354d5d8322c428ef6a13947f304a5",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
257637192
|
pes2o/s2orc
|
v3-fos-license
|
Heat transfer correlations for buoyant liquid metal MHD flows in blanket poloidal channels
In recent years, several simulation codes for reproducing liquid metal magnetohydrodynamic (MHD) phenomena have been validated and benchmarked. Accurate simulation codes are crucial to enhance our understanding of how flow behavior affects heat transport in liquid metal-based breeding blankets. Using heat transfer correlations that model the influence of flow characteristics on the transport of heat is especially interesting for system designers, because it saves them the effort and time of completely simulating every design proposal. Our group has studied the buoyant MHD flow in poloidal channels of the EU Dual Coolant Lead Lithium (DCLL) blanket geometry. Two different codes were used for this study: a 2D fully-developed code and a Q2D fully-developed code. In this work, we parametrically explored the influence of different flow conditions on the heat transport phenomena. This article presents the results of the calculations performed using the two codes and provides heat transfer correlations for poloidal EU DCLL channels.
Introduction
Dual-coolant lead-lithium (DCLL) is a breeding blanket design that is being considered within the EU DEMO project as an advanced blanket [1]. The latest DCLL design, provided by CIEMAT [2,3], consists of a single-module segment (SMS), avoiding U-turns in the liquid metal flow. The walls containing the PbLi flow will be made of a ceramic insulating material, avoiding electric coupling between the fluid and the walls.
In a recent work [4], our group explored the 2D fully-developed analysis of heat transfer in buoyant liquid metal MHD flows in DCLL outboard channels. The analysis used an electric potential MHD model to study the heat transfer coefficient applicable to a specific wall, as shown in Figure 1.
$U$ represents the velocity direction (perpendicular to the studied plane), $g$ is the gravity vector, opposed to $U$, $B$ is the magnetic field direction, $Q_G$ represents the volumetric exponential heat generation, and $Q_W$ is the heat extracted through the outer wall by the helium cooling channels of the first wall. The rest of the walls are adiabatic. Note that the "wall of study" is the plane that separates the fluid and solid regions.
The work presented here is a continuation of that started in 2020 [4]. Our group had access to High Performance Computing resources and was able to implement a much deeper parametric study of the influence of several parameters on the Nusselt number, which characterizes the heat transfer phenomena.
More specifically, the variables studied in this work are: the aspect ratio of the channel (AR), the wall conductance ratio ($c_w$), the Grashof number (Gr), the Grashof ratio (GrR), the Hartmann number (Ha), the volumetric heat generation shape coefficient (m), and the Reynolds number (Re). The influence of all of them on the Nusselt number is analyzed.
Recently, our group has worked on the development and validation of a quasi-two-dimensional (Q2D) MHD model [5]. In this study, we compare the results obtained by the electric potential 2D MHD model with the 1D fully-developed Q2D model, as described below.
Figure 1: Wall of study [4]
Methodology
The definition of the flow conditions is done using the dimensionless numbers of each field: the Hartmann number for the magnetic field intensity, $Ha = Bb\sqrt{\sigma_m/\mu}$; the Reynolds number for the velocity field, $Re = \rho U D_h/\mu$; the wall conductance ratio for the wall conductivity, $c_w = \sigma_w t_w/(\sigma_m b)$; the Grashof number for the heat generated in the fluid domain, $Gr = g\beta\Delta T a^3/\nu^2$; and the Grashof ratio for the extracted heat through the outer wall, $GrR = Q_W/Q_G$. Parameters $\sigma_w$ and $t_w$ are the wall electric conductivity and wall thickness, $a$ is the half width of the channel in the heat flux (radial) direction, $b$ is the half width of the channel in the magnetic field direction, and $D_h$ is the hydraulic diameter. The aspect ratio, also studied, is defined as $AR = a/b$.
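For illustration, the sketch below evaluates these dimensionless groups from dimensional inputs; every property and geometry value in it is an assumed placeholder, not a value used in this study.

```python
import math

# Illustrative PbLi-like properties (assumed values, not those of the paper)
rho     = 9500.0    # density [kg/m^3]
mu      = 1.9e-3    # dynamic viscosity [Pa s]
sigma_m = 7.6e5     # electrical conductivity of the metal [S/m]
beta    = 1.2e-4    # thermal expansion coefficient [1/K]
nu      = mu / rho  # kinematic viscosity [m^2/s]
g       = 9.81      # gravity [m/s^2]

# Illustrative channel geometry and operating point (assumptions)
a, b    = 0.10, 0.10   # half widths [m]: heat-flux and magnetic-field directions
t_w     = 0.005        # wall thickness [m]
sigma_w = 1.0e2        # wall electrical conductivity [S/m]
B, U    = 4.0, 0.05    # magnetic field [T] and mean velocity [m/s]
dT      = 50.0         # reference temperature difference [K]
D_h     = 4.0 * a * b / (a + b)  # hydraulic diameter of a 2a x 2b duct

Ha  = B * b * math.sqrt(sigma_m / mu)   # Hartmann number
Re  = rho * U * D_h / mu                # Reynolds number
c_w = sigma_w * t_w / (sigma_m * b)     # wall conductance ratio
Gr  = g * beta * dT * a**3 / nu**2      # Grashof number
AR  = a / b                             # aspect ratio

print(f"Ha={Ha:.3e}  Re={Re:.3e}  c_w={c_w:.3e}  Gr={Gr:.3e}  AR={AR:.2f}")
```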
The temperature difference $\Delta T$ is provoked by $Q_G$, a non-uniform volumetric heating (only in the liquid metal) caused by neutron flux interaction, as described in [6]. The wall region has no volumetric heating. By convention, the Gr number definition determines $\Delta T$, which is used to dimension the mean volumetric heat generation as $q = k\,\Delta T/a^2$, with $k$ the liquid metal thermal conductivity. The total heat generation is $Q_G = q \cdot V$. The heat deposition is modelled with an exponential profile shape, so $Q_G = \int_V q_0\, e^{-mz}\, dV$. Finding $q_0$ allows the source term to be found as $S(z) = S_0\, e^{-mz}$, where $S_0 = q_0/(\rho\, C_p)$. Here $z = 0$ is the centre of the channel width in the z direction, as shown in Figure 1, and $m$ is the volumetric heat generation shape coefficient mentioned at the end of Section 1.
The first code used for this analysis is based on the electric potential as the main electromagnetic variable: due to the negligible induced magnetic field, the low magnetic Reynolds number formulation is suitable. The buoyant effect is modelled in the momentum equation using the Boussinesq approximation. A detailed derivation of the applicable set of equations (described in [4]) and the PISO-like solution procedure using a finite volume approach can be found in the code development description by Mas de les Valls [7]. The code is capable of solving multiple regions, guaranteeing the conservation of electric current density at the walls, as shown in a recent validation and verification exercise [8]. In this work, this code will be referred to as "Epot", for electric potential. All the details associated with the physical properties of the materials, the mesh and the discretization strategy can be found in [4].
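A minimal sketch of this normalization, assuming the exponential profile extends over z in [-a, a] (the actual integration limits depend on the geometry of Figure 1); the numerical values are illustrative placeholders.

```python
import math

def q0_from_mean(q_mean: float, m: float, a: float) -> float:
    """Return q0 such that q0*exp(-m*z) has mean q_mean over z in [-a, a]
    (assumed extent): the mean of exp(-m*z) there is sinh(m*a)/(m*a)."""
    if m == 0.0:
        return q_mean                         # uniform-heating limit
    return q_mean * m * a / math.sinh(m * a)

# Illustrative numbers (assumptions): k [W/m/K], dT [K], a [m], m [1/m]
k, dT, a, m = 22.0, 50.0, 0.10, 10.0
q_mean = k * dT / a**2                        # mean volumetric heating [W/m^3]
q0 = q0_from_mean(q_mean, m, a)
S0 = q0 / (9500.0 * 190.0)                    # S0 = q0/(rho*Cp), assumed rho, Cp
print(q_mean, q0, S0)
```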
The second code used for this analysis is based on the Q2D MHD model proposed by Sommeria and Moreau in 1982 [9]. The model simplifies the flow-electromagnetic interaction by adding a single term to the momentum equation. A recent work published by our group [5] describes the implementation and validation of the model. The Q2D model relies on the hypothesis that, for a strong enough magnetic field, the influence of the Hartmann layers is sufficiently small that the flow behaves as 2D within the plane perpendicular to the magnetic field. The heat transfer configuration of a fully-developed flow in the poloidal channels studied here is especially suitable for the Q2D model. The ceramic walls are electrically insulated, which is a requirement for the Q2D approach, and the flow is fully developed, which means that only one dimension needs to be calculated, drastically reducing the computational time. Figure 2 shows the planes where the flow model can be simplified and their intersection line, for a liquid metal MHD flow channel with a transverse magnetic field and transverse heat flux. Using the 1D model implies that the results of the Q2D code ignore the temperature distribution in that plane, for example the heat extracted through the Hartmann layer or around the channel corners. This can lead, of course, to results different from those of the 2D model calculated with the electric potential approach. In this work we assess the adequacy of using the Q2D model in this configuration.
All simulations have been carried out using a second order linear discretization scheme, and the steady state criterion was set to $\max((\psi - \psi_0)/\psi_0) < 10^{-6}$, with $\psi$ either velocity or temperature and $\psi_0$ the previous time step value.
The Nusselt number is defined from the heat flux through the "wall of study" and the bulk-to-wall temperature difference. $A_{bw}$ is the area of the interface between bulk and wall in the "wall of study", shown in Figure 1. $T_b$ is the mean temperature in the bulk 2D plane, $T_{bw}$ is the mean temperature of the "wall of study", and $Q_{bw}$ is the heat flux through the same surface (refer to Figure 1): $Q_{bw} = \sum_f k\, \nabla\theta_f \cdot \delta A_f$, where $A_f$ is the area of each discretized face and $\nabla\theta_f$ is the temperature gradient on that face. The Nusselt number is dimensionless.
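A discrete evaluation of this quantity can be sketched as follows; the use of the hydraulic diameter as characteristic length and all numbers in the toy example are assumptions of the sketch, not values from the paper.

```python
import numpy as np

def nusselt(grad_theta_f, area_f, k, T_bulk, T_wall, D_h):
    """Nusselt number on the 'wall of study' from face-wise data.

    Q_bw is assembled as sum_f k * grad(theta)_f * dA_f; the heat transfer
    coefficient follows as h = Q_bw / (A_bw * (T_bulk - T_wall)), and the
    characteristic length D_h is an assumption of this sketch.
    """
    Q_bw = np.sum(k * np.asarray(grad_theta_f) * np.asarray(area_f))
    A_bw = np.sum(area_f)
    h = Q_bw / (A_bw * (T_bulk - T_wall))
    return h * D_h / k

# Toy example: heat leaves the fluid (Q_bw > 0) while the wall is hotter
# than the bulk (T_b - T_bw < 0), so Nu comes out negative as in the paper
grad = np.full(10, 120.0)       # face-normal temperature gradients [K/m]
areas = np.full(10, 1.0e-3)     # face areas [m^2]
print(nusselt(grad, areas, k=22.0, T_bulk=600.0, T_wall=615.0, D_h=0.2))
```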
Base case and selected range of parameters
The parametric study is conducted using a base case, from which each variable is modified independently. The selected base case, together with the minimum and maximum values of all variables, is presented in Table 1. The selection of the parameters takes into account the findings presented by Vetcha et al. [10] in 2013, showing how the combination of the Re, Ha and Gr numbers influences the stability of a volumetrically heated rectangular vertical duct with ascending flow. It proposed that above a certain critical Hartmann number $Ha_{cr}$ (in combination with given Re and Gr numbers), the flow is linearly stable. Although the range of applicability ($10^6 < Gr < 10^9$) is far from the proposed operational conditions ($Gr \sim 10^{12}$), a simple extrapolation suggests that in such conditions the flow is stable. The applicability range is wide enough to implement the 2D fully-developed model over an interesting range of parameters and retrieve heat transfer correlations. The best fit, as explained in the original reference, expresses the critical Hartmann number as a function of Re and Gr (Equation (2)), with

$$P_1 = -5.98 \cdot 10^{-8}\, Re^2 + 2.284 \cdot 10^{-3}\, Re + 2.308 \qquad (3)$$

The selected parameters (Re, Ha and Gr) for our study are shown together with the stability condition set by Vetcha et al. in Figure 3. One can see in blue the stability limit surface by Vetcha et al. and in orange the extrapolated surface out of range. The simulated values of Re, Gr and Ha are shown in red, green and black, respectively. The magenta star represents the DCLL design conditions at DEMO. This 3D plot shows that the fully-developed model has been applied within the stable region.
The maximum $Ly = Ha^2/Gr^{0.5}$ (used to assess the balance of electromagnetic to buoyant forces [11]) for this range of cases is $10^5$, while the minimum is 225, both well above 1, which suggests flow stability. The maximum $Re^2/Gr$ (criterion for the transition from forced to natural convection [12]) is 400, while the minimum is 0.01. The nominal conditions for the DCLL are $Ly = 10^2$ and $Re^2/Gr = 10^{-4}$. Although the minimum $Re^2/Gr$ in our range of cases is significantly higher than the operating conditions and lies inside the stable region provided by [10], it is still small enough that their results must be used with caution.
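For illustration, the two regime indicators can be evaluated as below; the Ha, Re and Gr inputs are assumptions chosen only to reproduce the quoted DCLL orders of magnitude.

```python
# Assumed operating point reproducing the quoted DCLL orders of magnitude
Ha, Re, Gr = 1.0e4, 1.0e4, 1.0e12

Ly    = Ha**2 / Gr**0.5   # electromagnetic vs. buoyant forces [11]
mixed = Re**2 / Gr        # forced vs. natural convection criterion [12]

print(f"Ly      = {Ly:.1e}  (values well above 1 suggest stability)")
print(f"Re^2/Gr = {mixed:.1e}  (values << 1 indicate buoyancy-dominated flow)")
```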
Results
Due to the exponential heat deposition profile and the heat extraction at the hottest wall, the temperature distribution along the centreline of the channel shows a profile similar to that shown in Figure 4. The reader should note that, even though the heat flux goes from the liquid region to the front wall, the mean temperature of that wall is higher than the mean temperature of the liquid region. As indicated at the end of Section 2, the Nu number definition used in this work is the common definition for heat transferred in pipes and channels. Usually, in situations where heat is extracted from the liquid region through the walls with no internal heat generation, the Nu number is positive because $\Delta T_{bw} = T_b - T_{bw} > 0$. In the case modelled here and shown in Figure 4, the heat generation provokes a temperature profile such that $\Delta T_{bw} = T_b - T_{bw} < 0$, yielding negative Nu numbers in all results.
Nusselt vs. aspect ratio
The relationship between the Nusselt number and the aspect ratio can be observed in Figure 5. The best fit for the obtained data is:

$$Nu_{opt}(AR) = a/AR + b/AR^2 + c/AR^3 + d/AR^4$$

The coefficients of the fitted function are shown in Table 2.
Nusselt vs. wall conductance ratio
The relationship between the Nusselt number and the wall conductance ratio ($c_w$) can be observed in Figure 6. The best fit for the obtained data is:

$$Nu_{opt}(c_w) = Nu_{min} + k \cdot e^{-(\log_{10}(c_w)-\mu)^2/\sigma^2} \qquad (7)$$

The coefficients of the fitted function are shown in Table 3. Note that the Q2D code is not able to simulate different electric conductivities of the wall, since the model considers the walls electrically insulated.
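A sketch of how a fit of the form of Equation (7) can be obtained with a standard least-squares routine; the synthetic data, coefficients and initial guesses are placeholders for the actual simulation results.

```python
import numpy as np
from scipy.optimize import curve_fit

def nu_cw(cw, nu_min, k, mu, sigma):
    """Equation (7): Nu(c_w) = Nu_min + k * exp(-(log10(c_w) - mu)^2 / sigma^2)."""
    return nu_min + k * np.exp(-((np.log10(cw) - mu) ** 2) / sigma ** 2)

# Synthetic data standing in for the simulation results (illustrative only),
# built with a peak near c_w ~ 0.2 as reported in the Discussion
cw_data = np.logspace(-3, 1, 15)
nu_data = nu_cw(cw_data, -1.2, 0.5, np.log10(0.2), 0.8)
nu_data += 0.01 * np.random.default_rng(0).standard_normal(cw_data.size)

popt, _ = curve_fit(nu_cw, cw_data, nu_data, p0=[-1.0, 0.4, -1.0, 1.0])
print("Nu_min, k, mu, sigma =", popt)
print("fitted peak at c_w =", 10 ** popt[2])
```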
Nusselt vs. Grashof ratio
The relationship between the Nusselt number and the Grashof ratio can be observed in Figure 7. The coefficients of the best fit for the obtained data are shown in Table 4. Note that the definition given for GrR implies that, when it is zero, the Nusselt number, which is directly proportional to $Q_W$, must also be zero. This fact is clearly seen in the results provided by the Q2D code. For this reason, the value of $a$ has been set to zero in Table 4.
Nusselt vs. exponential coefficient (m)
The relationship between the Nusselt number and the volumetric heat generation exponential shape coefficient (m) can be observed in Figure 8. The coefficients of the best fit for the obtained data are shown in Table 5.
Nusselt vs. Grashof number
The relationship between the Nusselt number and the Grashof number can be observed in Figure 9. The best fit for the obtained data is:

$$Nu_{opt}(Gr) = a + k \cdot Gr^b \qquad (10)$$

The coefficients of the fitted function are shown in Table 6.
Nusselt vs. Reynolds number
The relationship between the Nusselt number and the Reynolds number can be observed in Figure 10. The coefficients of the best fit for the obtained data are shown in Table 7.
Nusselt vs. Hartmann number
The relationship between the Nusselt number and the Hartmann number can be observed in Figure 11. The coefficients of the best fit for the obtained data are shown in Table 8.
Discussion
The dependence of the heat transfer coefficient (through the Nu number) on the seven selected dimensionless parameters has been correlated. Almost 140 simulations of the 2D fully-developed model with the electric potential code were performed. The selected correlation models fit the obtained data well. The parametric study was performed by varying one variable from the base case at a time.
The comparison between codes yields a very interesting conclusion. The influence of each variable on the heat transfer is well captured by both codes. The difference in all cases but AR is an absolute offset associated with the fact that the mean wall temperature is different in each code. Therefore, the data correlated using one code is approximately parallel to the data correlated with the other code.
Regarding the AR, the influence of the heat transferred through the Hartmann walls and the channel corners changes with the AR, which prevents the correlated data from being parallel.
An interesting Nusselt peak has been identified using the electric potential code around $c_w \sim 0.2$. It corresponds to the maximum velocity of the jets formed in the side boundary layer. The velocity peak strongly reduces $\nabla\theta_f$ and also slightly reduces $\Delta T_{bw}$. The combination of effects results in a higher Nu number.
The most surprising result of this work is that the best fit for the dimensionless variables Ha, Re, Gr, and GrR is an equation of similar form for each. Rearranging the relationship between them, and keeping aside the influence of $c_w$, m and AR, expressions of the same form are proposed for the two models (Equations (13) and (14)). The fitting coefficients are shown in Table 9. The coefficients provide the best fit using the results obtained from the parametric analysis in which only one parameter varies. They have been slightly rounded to couple the ratios between Ha, Re and Gr into one term.
The resulting RMSE using the expression and coefficients proposed for the Epot results is 0.00233; the RMSE for the proposed Q2D correlation and coefficients is 0.0012. Running the Q2D code for a fully-developed case confirmed that it effectively predicts the influence of the dimensionless parameters on the flow profile. Following these encouraging results, and given the fast and lightweight calculation properties of this numerical approach, a final set of cases was run to confirm the accuracy of the proposed correlation and coefficients. In this final set of cases, the dimensionless parameters could take any value between the minimum and maximum of the studied range; consequently, this approach varies several parameters from one simulation to another.
The dependence of the Nusselt number on the four dimensionless parameters is shown in Figures 12 and 13. Figure 13 shows the logarithm of the absolute value of the Nusselt number to provide more detail on the obtained Nu when GrR tends to zero. The obtained results are represented with blue dots, while the green bands represent the range between the maximum and minimum possible Nu obtained with the proposed correlation (15) using all possible combinations of the dimensionless variables in the studied range. The fitting coefficients for this function have been updated and are shown in Table 10.
$$Nu_{opt}^{Q2D}(Ha, Re, Gr, GrR) = s + k \cdot Gr^a \cdot Re^b \cdot Ha^c \cdot GrR^d \qquad (15)$$

The effect of assigning an independently variable exponent to each of the parameters Re, Ha and Gr, as in (15), rather than assigning the same exponent a to all three, as in (14), was assessed using the results of this study. The data obtained allowed different fitting functions to be compared. The selected fitting functions for comparison were:

• The one based on a single-variable parametric analysis (14).
• The same function (14) using all the data from this last multivariable set of cases to adjust the fitting coefficients.
• A new function using all of the data from this last multi-variable set of cases, separating the exponents of Re, Ha and Gr (15).
A comparison of the RMSE of each of the fitting functions for this set of cases is shown in Table 10. The first two rows of the table show that when the number of sample cases is increased, the resulting coefficients are more accurate, providing a lower RMSE. As expected, the function that treats the exponents independently (15) shows a lower RMSE than the function that gathers them into one term (14), but the difference is small. The exponent a = 0.91 and the ratio Gr/(Ha · Re) are the most important findings of this study.
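A sketch of how a multi-variable fit of the form of Equation (15) and its RMSE can be computed; the synthetic sample, the amplitude k and the noise level are assumptions, with only the exponent 0.91 and the grouping Gr/(Ha·Re) taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def nu_corr(X, s, k, a, b, c, d):
    """Equation (15): Nu = s + k * Gr^a * Re^b * Ha^c * GrR^d."""
    Gr, Re, Ha, GrR = X
    return s + k * Gr**a * Re**b * Ha**c * GrR**d

def rmse(y, y_fit):
    return float(np.sqrt(np.mean((y - y_fit) ** 2)))

# Synthetic multi-variable sample standing in for the final set of Q2D runs
rng = np.random.default_rng(1)
n = 60
Gr  = 10 ** rng.uniform(6.0, 8.0, n)
Re  = 10 ** rng.uniform(3.0, 4.0, n)
Ha  = 10 ** rng.uniform(2.5, 3.5, n)
GrR = rng.uniform(0.1, 1.0, n)
# Ground truth built with the single-exponent structure of (14), a = 0.91;
# the amplitude -0.02 is an assumed placeholder
nu = -0.02 * (Gr / (Ha * Re)) ** 0.91 * GrR
nu += 0.001 * rng.standard_normal(n)

popt, _ = curve_fit(nu_corr, (Gr, Re, Ha, GrR), nu,
                    p0=[0.0, -0.02, 0.9, -0.9, -0.9, 1.0], maxfev=20000)
print("s, k, a, b, c, d =", popt)
print("RMSE =", rmse(nu, nu_corr((Gr, Re, Ha, GrR), *popt)))
```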
|
2023-03-22T01:16:27.129Z
|
2023-03-20T00:00:00.000
|
{
"year": 2023,
"sha1": "fa871531d550d0a309f8be86570e8dfa39c3724a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fa871531d550d0a309f8be86570e8dfa39c3724a",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
}
|
246194539
|
pes2o/s2orc
|
v3-fos-license
|
Geosynthetic Interface Friction at Low Normal Stress: Two Approaches with Increasing Shear Loading
Featured Application: An insight into the possible influence of the nonuniform normal stress distribution in the inclined plane test, obtained by comparing its results with those of an unconventional direct shear device working with an increasing shear load rather than a controlled displacement.
Abstract: The evaluation of geosynthetic interface friction is a key parameter for the stability of coupled geosynthetics, as in landfill capping liners. At the present time, few types of tests are suitable for measuring the interface friction at low normal stress: one of these is the inclined plane, usually carried out under a vertical stress of 5 kPa. This type of test is not without critical aspects, mainly due to the nonuniform normal stress state induced by the inclination of the plane; on the other hand, the most widespread direct shear test generally cannot be performed at such low values of normal stress. After a short discussion of the pros and cons of these two types of test, the paper presents a comparison of the interface friction angles obtained, for three interfaces, by means of an inclined plane and an unconventional direct shear apparatus, under the same low normal stress. The peculiarity of the latter device is that it ensures a gradual increase of the mobilized strength, in a way similar to what occurs during the inclined plane test. The good correspondence of the results of the two types of tests confirms the validity of both test approaches.
Introduction
Given the undoubted advantages that the use of geosynthetics brings to geotechnical works, a continuous improvement of the knowledge of the short- and long-term behavior of these polymeric materials is desirable, in order to use these products knowledgeably. In this context, an adequate understanding of the friction that these materials can mobilize when placed in contact with each other is necessary, since it is common practice to combine layers of geosynthetics having different functions, thus forming multilayer packages. Consequently, each contact surface between two geosynthetics, as well as those between geosynthetic and soil, represents a potential critical element for stability, given the modest friction values that these surfaces can generally mobilize. This aspect assumes considerable relevance in the design phase and can be a source of serious errors if not properly addressed, especially because a correct characterization of the interface friction is not so simple, having to take into account the peculiarities of the behavior of polymeric materials.
With reference to the evaluation of the static interface friction, several types of laboratory tests have been developed over the years [1,2], such as the direct shear test, the inclined plane test [3-6], the annular shear test [7,8] and the cylindrical shear test [9].
However, few types of tests are suitable for evaluating the interface friction at very low normal stress: one of these is the inclined plane test, usually carried out under a vertical stress of 5 kPa.
The Direct Shear and the Inclined Plane Tests: Pros and Cons
The direct shear is probably the best known and most widespread test methodology for studying interface friction. It is standardized by ASTM D 5321-8 [10] and by EN ISO 12957-1 [11], and it is carried out on specimens having a minimum size of 300 mm × 300 mm, arranged horizontally and fixed to two supports, one of which remains still during the test while the other can slide with respect to the first. Once a prefixed normal stress is applied, half the device is pushed at a constant speed of 1 ± 0.2 mm/min while the reaction force necessary to hold the other half of the device still is measured, until a maximum displacement of 50 mm is reached. In order to identify the failure envelope, the tests are repeated for at least three different values of normal pressure (for the European standard, 50 kPa, 100 kPa and 150 kPa), and subsequently the envelope friction parameters (c, ϕ) are evaluated by linear interpolation of the results.
On the other hand, the inclined plane test is standardized only by the European EN ISO 12957-2 [12]; strictly speaking, this standard refers only to interfaces between soil and geosynthetics, even if its indications can easily be extended to the case of contact between geosynthetic and geosynthetic. The equipment used for this type of test consists of a tiltable plane, above which a steel block is placed. This element is able to slide along the plane, and its motion can be guided by lateral guides or by vertical carriages acting outside the contact area of the geosynthetics. The first geosynthetic of the examined interface is fixed on the inclined plane, while the second is bound to the bottom of the upper block; in analogy with the direct shear test, the specimens should have a minimum size of 300 mm × 300 mm. The test starts with the plane in the horizontal position and, gradually, the inclination of the plane is increased at a constant speed of 3 ± 0.5°/min. During the test, the relative displacement of the upper block with respect to the plane is monitored, and the inclination angle of the plane at which the sliding of the block occurs is sought. According to EN ISO 12957-2, the interface friction angle ϕ is equal to the plane inclination angle β reached when the upper block has shown a reference displacement of 50 mm: ϕ = β_50 (1). Referring to this assumption, it is important to note that the displacement value of 50 mm, taken in analogy with the direct shear test, is actually an arbitrary reference; to recall that this friction angle is evaluated according to the standard EN ISO 12957-2, in the following it will be denoted as the "standard friction angle" (ϕ_stand). Moreover, it should be pointed out that Equation (1) is correct only under the hypothesis that the block is in static equilibrium, while its condition is more properly kinematic, given that it is in motion with speed and acceleration that are not always negligible [13]. For these reasons, some attempts to improve the inclined plane procedure are still in progress [14], as well as research into alternative procedures [15,16].
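For illustration, this interpretation can be automated as in the sketch below; the test record is invented, and linear interpolation at the 50 mm threshold is an assumption of the sketch.

```python
import numpy as np

def standard_friction_angle(beta_deg, displacement_mm, ref_disp=50.0):
    """EN ISO 12957-2 interpretation: the standard friction angle is the
    plane inclination at which the block displacement reaches 50 mm
    (linear interpolation between recorded points is assumed here)."""
    return float(np.interp(ref_disp, displacement_mm, beta_deg))

# Invented record: slow creep followed by sudden sliding near 28 degrees
beta = np.array([20.0, 24.0, 26.0, 27.0, 27.5, 28.0, 28.1])   # [deg]
disp = np.array([0.0, 0.5, 2.0, 6.0, 15.0, 40.0, 80.0])       # [mm]
print(standard_friction_angle(beta, disp))   # phi_stand in degrees
```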
Regarding the differences between the two methodologies, it should first be noted that the direct shear test is carried out at three different values of normal stress, in order to delineate the failure envelope, while the inclined plane test is conducted at a single value of normal stress and consequently is interpreted in terms of the secant friction angle rather than the tangent one. This is not the only difference: for example, the range of normal stress usually investigated is higher for the direct shear test. Common laboratory shear devices allow tests under normal stresses varying between extreme values of about 25 kPa and 500 kPa. Conversely, it is difficult to manage very low normal stresses, because of the way in which the normal load is applied, that is, through a hydraulic press. On the other hand, the inclined plane test is carried out only under very low normal stresses, generally equal to 5 kPa, which can eventually be increased to slightly higher values, up to about 10-15 kPa.
This difference in the investigated range of normal stress makes it difficult to compare the results obtainable with the two types of tests, the direct shear and the inclined plane, and consequently to evaluate the equivalence of the two approaches. However, the results reported in some studies [17,18] seem to indicate that the extrapolation of direct shear test results to the field of low normal stresses may overestimate the interface shear strength with respect to the strength parameters provided by inclined plane tests.
In this regard, it should be pointed out that the failure envelope of interfaces between geosynthetics can be curvilinear and, therefore, the choice of the type of laboratory test should be related to the actual stress condition to which the interface is subjected on site. For example, using the results of direct shear tests, performed as said at medium-high stresses, for the design of the capping of a landfill, a condition characterized by low vertical stress, may be very unconservative. Similarly, the opposite case of extending inclined plane test results to the design of lateral liners, where the geosynthetics are subjected to high normal stresses, could be unsafe.
In addition to the different stress range, another possible reason for differences between the results provided by the two types of test may be the different way of applying the shear stress. Indeed, in the case of direct shear, the interface experiences an imposed displacement at a constant speed of 1 mm/min, while the response is evaluated in terms of shear force. On the contrary, in the inclined plane the interface is subjected to a gradually increasing shear stress, given by the component of the weight parallel to the plane, which increases as the plane inclination increases, and the interface response is detected in terms of block displacement. In summary, in the first case a displacement is imposed and the shear force is detected, while in the second case a shear force is imposed and the displacement is detected. Although few studies on this subject are currently available [19], it can be stated that the approach followed in the inclined plane test is much more similar to the real kinematics involved in the sliding of an interface placed on a slope. Moreover, the findings of recent research seem to suggest that the direct shear approach could overestimate the friction when the interface can manifest very slow sliding phenomena, at speeds lower than that usually imposed in the direct shear test [14].
Continuing the discussion, another aspect to consider is the maximum displacement allowed by the two devices, which is generally noticeably different. In the case of the direct shear, the standard requires a displacement of 50 mm and, in general, the maximum stroke of the devices is slightly higher than this value. This level of displacement may be insufficient to detect the residual shear resistance [8], and the only way to study the shear strength at large displacements with these devices is to perform many displacement cycles with motion reversal. However, this approach does not exactly correspond to the physics of the real phenomenon, since it involves a reordering of the geosynthetic fibers, or a different action on the microscopic asperities, at each inversion of the motion direction. This problem does not arise in inclined plane tests, in which the maximum interface displacement can be several tens of centimeters, depending only on the length of the plane over which the block slides. In this case, the maximum displacement can be further increased by realizing several slips in succession, taking care to bring the block back to its initial position while lifting it so as not to rub the material.
Another aspect to take into account is the hydration state of the interface achievable in the two types of test. For the direct shear, the test condition can range from dry to completely immersed, while for the inclined plane only dry and damp conditions can be realized, but not the immersed condition, unless the device is placed in a tank, a solution that is in fact hardly practicable, or unless a seepage flow is introduced at the interface [20].
To complete the discussion of the advantages and disadvantages of the two types of tests, it is necessary to remember that a common issue of the inclined plane test is related to the nonuniform distribution of the normal stress on the contact surface when the plane is inclined [21]. Indeed, while the normal stress can be considered uniform when the plane is horizontal, due to the height of the center of mass with respect to the base of the block, the distribution varies with the inclination of the plane, becoming trapezoidal, with higher stress values on the downstream edge and lower ones on the upstream edge. To limit this effect, EN ISO 12957-2 recommends the use of some precautions, such as the adoption of inclined side walls or the use of wedges, to obtain a backward position of the center of mass. In this way the normal stress distribution becomes nonuniform when the plane is horizontal but reaches a uniform condition at a certain prefixed inclination of the plane.
As seen, the different devices each present some pros and some cons; in this work a different experimental apparatus is presented, designed to study the interface friction between geosynthetics at low normal stresses while overcoming some limitations of the inclined plane device. This apparatus combines the typical advantages of the inclined plane, like the ability to perform tests under a condition of increasing shear force, with the possibility of performing tests without eccentricity of the normal load and also in the immersed condition. Lastly, the results obtained for three different interfaces will be presented and compared with those provided by the usual inclined plane tests.
Devices
In this work, two different devices were adopted. The first device was an inclined plane apparatus (Figure 1). The first geosynthetic specimen was fixed to the base of the block, while the second was fixed on the plane. In order to ensure straight sliding, the block was constrained by two lateral guides and was equipped with four lateral wheels, to avoid introducing significant additional friction forces. The inclination of the plane was measured by means of an accelerometer, while the block displacement was detected with a Linear Variable Displacement Transducer (LVDT). For the block, a rectangular shape was adopted, instead of the standard measures of 0.30 m × 0.30 m, to minimize the effects of the normal load eccentricity while maintaining the same contact area of 0.09 m². In the device setup adopted in this research, the height of the center of gravity of the block, for a mean normal stress of 5 kPa, was only 34 mm and the center of mass was set back about 12 mm. This configuration allowed a uniform distribution of the normal stresses to be obtained at an inclination angle of 21°, considered an acceptable compromise between the maximum and minimum values of interface friction observed during the tests.
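A rigid-block statics sketch of the resulting stress nonuniformity is given below; the base length L = 0.60 m along the slope is an assumption, chosen only because it is consistent with the 0.09 m² contact area and with the stress variations quoted for this setup.

```python
import math

def stress_edges(sigma_mean, h, d, L, beta_deg):
    """Edge values of a linear (trapezoidal) normal stress distribution
    under a rigid block on a plane inclined at beta. The normal-force
    eccentricity is e = h*tan(beta) - d, with h the center-of-mass height,
    d its backward offset and L the base length along the slope; a linear
    distribution then gives sigma_edge = sigma_mean * (1 +/- 6*e/L)."""
    e = h * math.tan(math.radians(beta_deg)) - d
    return sigma_mean * (1 + 6 * e / L), sigma_mean * (1 - 6 * e / L)

# Configuration 1: h = 34 mm, d = 12 mm; L = 0.60 m is an assumed value
print(stress_edges(5.0, 0.034, 0.012, 0.60, 21.0))  # ~uniform at 21 deg
print(stress_edges(5.0, 0.034, 0.012, 0.60, 28.0))  # ~ +/-6% at 28 deg
```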
The second device was a nonstandard apparatus, similar to a direct shear except for the way the shear stress was applied. It was composed of a steel block, with dimensions of 0.30 m × 0.30 m, resting on a horizontal plane. Also in this case, the first geosynthetic specimen was fixed to the base of the block, while the second was fixed on the plane. The block was hooked, with the interposition of a load cell, to a steel cable at the end of which, after a series of pulleys, a counterweight was attached (Figure 2). The test was carried out by gradually varying the shear force over time by means of a simple mechanism based on a hopper, through which sand dropped smoothly into a bucket (the counterweight), and ended when the block displacement reached the value of 50 mm.
During the test, the horizontal force applied to the block was detected by the load cell while the block displacement was measured by an LVDT. Starting from the hypothesis of a static condition, the measurement of the applied horizontal force (H), related to the weight of the block (W), makes possible the evaluation of the mobilized friction angle, time by time, by means of the following simple equation: ϕ_mob = arctan(H/W) (2). Once the limit static equilibrium is reached, Equation (2) would no longer be valid but, in analogy with what is normally done for the inclined plane test, it was extended to the whole test, even if the speed and acceleration of the block were not actually negligible.
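A minimal sketch of this interpretation applied to a load-cell record; the force values are invented, and the block weight corresponds to the nominal 5 kPa acting on the 0.09 m² contact area.

```python
import numpy as np

def mobilized_friction_deg(H, W):
    """Equation (2): phi_mob = arctan(H/W), assuming static equilibrium of
    the block (weight W) on the horizontal plane under horizontal force H."""
    return np.degrees(np.arctan(np.asarray(H) / W))

# Invented load-cell record: shear force ramped up by the sand hopper
H = np.linspace(0.0, 260.0, 6)   # horizontal force [N]
W = 450.0                        # 5 kPa * 0.09 m^2 = 450 N
print(mobilized_friction_deg(H, W))
```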
To highlight the difference from the usual direct shear, in the following this type of test will be indicated as the SSI test, where the acronym indicates that the experiment was carried out under a shear stress increase.
Studied Interfaces
Three interfaces were studied by coupling various geosynthetics. The first interface was the contact between a structured HDPE geomembrane (GMB) and a drainage geocomposite (GCD1). The geomembrane had a thickness of 1.5 mm, a mass per unit area of 1.42 kg/m² and a tensile strength of 22.5 kN/m. The drainage geocomposite was formed by a draining body enclosed between two nonwoven geotextiles made of polypropylene; it had a thickness of 5.5 mm under a pressure of 2 kPa, a mass per unit area of 0.90 kg/m² and a tensile strength (equal for MD and CMD) of 20 kN/m.
The second interface was the contact between two specimens of the same woven geotextile (GTXw), made of polypropylene, having a mass per unit area of 400 g/m² and a tensile strength (MD) of 90 kN/m.
The contact between another drainage geocomposite (GCD2) and a geogrid (GGR), without the presence of soil, was the third interface examined.
Referring to the testing conditions, all tests were carried out at a laboratory temperature of 22 ± 3 °C and under a normal stress of 5 kPa. All the interfaces were studied in the dry condition, apart from the first and third interfaces that were also studied, for comparative purposes, in the wet condition. In this case, the specimens were fully immersed in water before the test but, while in the direct shear device they remained immersed in water throughout the test, in the inclined plane device they were only partially saturated, because the water naturally drained away due to the inclination of the plane.
Finally, to achieve a consistent amount of data, allowing comparison of the different devices and conditions, the tests were repeated several times, as summarized in Table 1.
Test Results
Referring to the first interface, GMB-GCD1, Figure 3 shows a typical result of an inclined plane test on dry virgin specimens, in terms of displacement of the block as a function of the plane inclination. Remembering that in this type of test the interpretation is performed according to Equation (1), always assuming a static condition for the block, the angle of plane inclination is also equal to the mobilized friction angle. Conversely, Figure 4 shows an example of a direct shear SSI test, on the same interface and on dry virgin specimens; in detail, Figure 4a shows the increasing horizontal force and the corresponding block displacement versus time, while Figure 4b shows the mobilized friction angle and the block speed as a function of the block displacement. In analogy with what is usually done in the case of the inclined plane test, also in this case the interpretation of the results was made assuming a static equilibrium of the block (Equation (2)), even if the block speed and acceleration were not actually negligible. It is interesting to observe that, virtually flipping the axes of Figure 3, the evolution of the mobilized friction with respect to the block displacement is similar to the curve of Figure 4b.
Following an approach already presented in other research [22,23], after the first evaluation, carried out on virgin specimens, the tests were repeated several times, repositioning the block after each sliding in the initial position and proceeding in the same way for both the inclined plane and the direct shear SSI tests. Numerous values of the friction angle were thus obtained, related to the amount of cumulated displacement experienced by the interface, which was pushed up to approximately 1.6 m, thus allowing the wear effect to be investigated [24]. Three pairs of specimens were studied, for each device, in the dry condition; moreover, the tests were repeated in the wet condition on three different pairs of specimens for the inclined plane, and three others for the direct shear, for a total of 12 pairs of specimens and about 72 tests. A comparison of the results, for the two conditions and for the two types of tests, is shown in Figure 5a for the dry condition and in Figure 5b for the wet condition, in terms of standard friction angle versus cumulated displacement. The data show a good agreement between the two types of tests, and a similar trend of the interface friction with the cumulated displacement is highlighted by both methodologies, corresponding to a moderate reduction of the available friction with the increase of the wear.
Although theoretically the nonuniformity of the normal stress could change the measured mobilized friction, giving rise to phenomena of progressive failure, there are not many experimental data on this topic in the literature. Consequently, the tests in the dry condition were repeated, on another set of three pairs of specimens, with a different mass distribution of the sliding block, to analyze the possible influence of the normal load eccentricity. A uniform distribution of mass inside the block was adopted, instead of the previously described configuration with the center of mass positioned backward.
At a reference inclination of 28 • , equal to the mean value of the friction for this interface, the uniform mass distribution ("configuration 2") involved a variation of the normal stress of ±25% respect to the mean value, while the configuration with the center of mass positioned backward ("configuration 1") entailed a variation of only ±6%. The comparison of the results is proposed in Figure 6 and, as can be observed, the different distribution of the masses did not induce significant differences in the results, considering the normal data scattering. types of tests and a similar trend of the interface friction with the cumulated displacement is highlighted by both the methodologies, corresponding to a moderate reduction of the available friction with the increase of the wear. Albeit theoretically the nonuniformity of the normal stress could change the measured mobilized friction, giving rise to phenomena of progressive failure, in the literature there are not many experimental data on this topic. Consequently, the tests in dry condition were repeated, on another set of three pairs of specimens, with a different mass distribution of the sliding block, to analyze the possible influence of the normal load eccentricity. A uniform distribution of mass, inside the block, was adopted, instead of the configuration with the center of mass positioned backward and previously described.
An example of the results obtained for the second examined interface, GTXw-GTXw, is shown in Figure 7; in detail, Figure 7a shows the results of an inclined plane test on dry virgin specimens, in terms of displacement of the block versus the plane inclination, while Figure 7b shows the results of a direct shear SSI test, again on dry virgin specimens, in terms of mobilized friction angle and block speed versus the interface displacement. Even in this case, the figures show a similar behavior of the interface in the two types of tests, in terms of evolution of the mobilized friction with respect to the block displacement. This interface was characterized by a "sudden sliding" behavior [14], with an abrupt increase of the block speed on reaching the limit static equilibrium. For this interface the tests were repeated many times on various pairs of specimens, for a total of 22 inclined plane tests and 16 direct shear SSI tests. A graphical summary of the results provided by the two devices, as a function of the cumulated displacement or, in other terms, of the wearing level, is reported in Figure 8. As can be seen, the two methodologies show a very good correspondence of their results, in terms of both average value and data scattering. Regarding the wear effect, unlike the previous case, this interface exhibited a slight increase of the friction as the cumulated displacement increased.
The third considered interface, GCD2-GGR, is an example of a contact characterized by a "gradual sliding" behavior [14], as can be deduced by observing the results of an inclined plane test (Figure 9a) or those of a direct shear SSI test (Figure 9b), in both cases carried out on dry virgin specimens. The main feature of the "gradual sliding" behavior is that a relatively slow motion occurs on reaching the limit static equilibrium, and that very slow displacements can occur also at mobilized friction angles significantly lower than the standard friction angle.
A graphical summary of the result set is provided by Figure 10a, for the dry condition, and by Figure 10b, for the wet condition. A total of 76 tests were conducted for this interface (35 inclined plane tests and 41 direct shear SSI tests). In this case the two methodologies show a good correspondence of results, with a limited difference for the data related to the wet condition. Regarding this last aspect, it should be noted that the wet condition is not exactly the same in the inclined plane and in the direct shear device. Indeed, in the inclined plane test the water naturally drains away from the specimens, due to the increasing inclination of the tilting plane, while in the direct shear device the specimens remain immersed in water throughout the test. This operational difference may explain the discrepancy of the results obtained in the wet condition. As a final note, Figure 10a,b show in both cases a noticeable reduction of friction as the cumulated displacement increases, in dry as well as in wet conditions.
Conclusions
In this paper, a comparison between the results of inclined plane tests and those obtained by means of a different experimental apparatus has been presented. The latter device is capable of performing horizontal shear tests, at vertical stresses as low as 5 kPa, by ensuring a gradual increase of the mobilized strength, in a way similar to what occurs during the inclined plane test.
The experimental data showed a good correspondence between the results obtained with the two devices. This allows some conclusions to be drawn. The first is that the nonuniformity of the normal stress, an intrinsic factor of the inclined plane test, does not involve significant changes in the results, if compared to the centered load condition of the direct shear test. This conclusion is valid at least for the adopted block, with a rectangular area of contact and a low center of gravity, and for the moderate angles of inclination achieved during the tests. From this point of view, these tests exclude one of the main objections that could be made against the inclined plane approach.
Another observation is that the same evolution of friction with respect to wear is outlined by the two apparatuses, even though the maximum displacement achievable in a single direct shear test is smaller than that of the inclined plane test, so that a greater number of repetitions of the direct shear test is required to reach the same displacement level.
Lastly, the direct shear SSI test constitutes a valid alternative to the inclined plane test for the measurement of the interface shear strength between geosynthetics at low normal stress values. In addition to the absence of eccentricity of the normal load, it has the advantage of allowing specimens to be tested in the immersed condition, which is not possible with the inclined plane apparatus. The main point of interest is that this type of device allows a gradual increase of the shear stress, leaving the interface free to respond in terms of displacement, unlike the conventional approach of the direct shear test, in which the interface displacement is imposed over time. The few studies available on this subject seem to indicate that the method of application of the load may have a certain influence on the response of the interface, or rather on the measured friction angle. For these reasons, further studies comparing tests carried out with imposed displacement and tests carried out with imposed shear load are desirable.
Author Contributions: Conceptualization, P.P., P.C. and N.M.; methodology, P.P.; data curation, P.P.; writing-original draft preparation, review and editing, P.P.; supervision, P.C. and N.M. All authors have read and agreed to the published version of the manuscript.
GITR-GITRL System, A Novel Player in Shock and Inflammation
Glucocorticoid-induced TNFR-Related (GITR) protein is a member of the tumor necrosis factor receptor superfamily that modulates acquired and natural immune response. It is expressed in several cells and tissues, including T cells, natural killer cells, and, at lower levels, in cells of innate immunity. GITR is activated by its ligand, GITRL, mainly expressed on antigen presenting and endothelial cells. Recent evidence suggests that the GITR/GITRL system participates in the development of inflammatory responses, including shock, either through the early response of neutrophils and macrophages, or in association with autoimmune/allergic pathogenesis. The pro-inflammatory role of the GITR/GITRL system is due to: 1) modulation of the extravasation process, 2) activation of innate immunity cells, 3) activation of effector T cells, also favored by partial inhibition of suppressor T cells and modulation of dendritic cell function. This review summarizes the in vivo role of the GITR/GITRL system in inflammation and shock, explaining the mechanisms responsible for these effects and considering the interplay among the different cells of the immune system and the transduction pathways activated by GITR and GITRL triggering. The aspects of GITR/GITRL function that remain unclear, which are crucial for planning the treatment of inflammatory diseases and shock by modulation of this system, are also stressed.
INTRODUCTION
The concept of inflammation leads to a widening search for the types of cellular and molecular interactions responsible for linking the initial stimulus to the final abnormal function. It has not been possible yet to integrate all this information into a single model for the development of inflammation, but a useful framework is based on the behavior of the immune system. Receptors and soluble mediators produced by local tissue cells and infiltrating inflammatory cells, regulate the progression of inflammation. The nature of local events demands that the soluble mediators act in a spatial and temporally regulated manner.
The first events in response to an inflammatory stimulus mainly involve endothelial cells and innate immunity cells. Endothelial cells upregulate adhesion molecules promoting extravasation of leukocytes. After extravasation and migration, neutrophils (PMNs), monocytes and other leukocytes are activated and release soluble mediators (such as chemokines, cytokines and matrix metalloproteinases-MMPs) which orchestrate the cascade of cellular processes in the microenvironment including further modification in endothelial cells (such as tight junction disorganization and further upregulation of adhesion molecules), apoptosis and tissue remodeling causing, in some cases, fibrosis.
In several cases, the inflammatory response is activated by the reaction to foreign or self antigens, caused by a specific immune response. The principal scheme for integrating this information is based on the classification of the adaptive immune system, and especially the responses of T helper (Th) cells. In this scheme, CD4 + T cell-dependent responses are classified into T helper type 1 (Th1) or type 2 (Th2). An exaggeration of Th2 over Th1 responses to inflammatory stimuli leads to inflammatory disease. The innate immune system, in particular antigen-presenting cells (APC)(dendritic cells, macrophages and also epithelial and B cells) participate to the development of adaptive response. Recent concepts regarding the role of co-accessory receptors and receptor-ligand cross talk definitely contributes to the fine-tuning and orientation of the immune response at a given moment. On the other hand, there is an entire spectrum of cytokines and mediators (prostaglandins, kinins, nitric oxide (NO), chemokines, soluble adhesion molecules, and acute-phase reactants etc.), which contribute to the complexity of interactions. All these effects may render the inflammatory process acute or chronic depending on the persistence of the various signals.
Originally cloned in 1997, glucocorticoid-induced TNFR related (GITR) protein, also called TNFRSF18, is a receptor belonging to the TNFR superfamily selectively activated by its ligand, GITRL [1][2][3][4][5][6][7][8]. In the past few years, there has been much exploration of the GITR-GITRL system as regards the development and function of the immune system and inflammatory response. Nowadays, GITR is generally accepted as a costimulatory molecule on T lymphocytes [9][10][11]. However, its function is not confined to T cells. In fact, tissue distribution of GITR and GITRL and functional data suggest implication in several functions such as extravasation, activation of innate immunity, skin defense and bone remodeling. Full comprehension of their function is complicated by the peculiar properties of GITR and GITRL including their coexpression in several cells, the possibility of intracellular signaling deriving also from GITRL, the splicing of GITR and their modulation kinetics. This review is an update of the proven and potential role of the GITR-GITRL system, emphasizing its contribution to the inflammatory process and shock development, and the potential therapeutic use of fusion proteins and antibodies modulating the GITR/GITRL system. As regards GITR expression in hematological cells other than T or NK cells, Shimizu et al. described low mGITR expression in B220 + and F4/80 + cells [19], and Shin et al. found low levels of GITR expression in macrophages and a macrophage-derived cell line [29,30]. Weak expression of GITR on APC cells is further confirmed by other studies [11,12,19]. GITR is expressed on nonactivated bone marrow derived mast cells [25]. Expression levels of mouse and human GITR in hematological cells are summarized in Table 1.
Some non-lymphoid tissues, such as lung, kidney and small intestine express mGITR mRNA ( Table 2). GITR expression was detected also on osteoclast precursor cells, keratinocytes and retinal pigment epithelial (RPE) cells [31][32][33]. A similar (though not perfectly matched) pattern of expression was described in humans. hGITR mRNA was expressed at a good level in lung and, at a low level, in brain, kidney and liver [3,4]. It was also found in a colorectal adenocarcinoma cell line [3]. In summary, GITR is mainly expressed in hematological cells, but there is some evidence that it is also expressed in non-lymphoid tissues. Other TNFRSF members sharing structural properties with mGITR, such as 4-1BB, although expressed mainly in lymphoid organs are also found in some non-lymphoid cells such as lung [34].
GITR Expression is Upregulated in Activated Cells
After T cells are activated, both murine and human T cells strongly upregulate GITR expression at mRNA and protein level [1,3,4,15,35]. After T cell receptor (TCR) triggering, GITR expression is induced at 6 h and peaks within 24 h [15]. mRNA levels remain upregulated for at least 3 days from activation [1]. Interestingly, in an in vivo murine model, GITR, 4-1BB and OX40 were upregulated in tumor-specific T cells that promote regression of SP2/0 myeloma tumor [36]. hGITR was also upregulated in CD4 + T helper cell subpopulation of patients with non-infectious uveitis, a Th1 cell mediated autoimmune disease, and correlated positively with active uveitis [35].
Following NK and NKT activation, GITR is strongly upregulated [12,22]. GITR is also present in inflamed blood vessel endothelial cells [23] and lipo-polysaccharide (LPS) activated immature dendritic cells (DC) injected subcutaneously upregulates GITR [37]. In summary, several cells participating to the inflammatory process upregulated GITR expression after activation suggesting that GITR is involved in the modulation of inflammation.
GITR is Expressed at High Levels in T Regulatory (Treg) Cells and Other Suppressor T cells
Over the last ten years, the concept of specialized suppressor T cells, capable of controlling immune responses and preventing autoimmune diseases (T regulatory cells, Treg cells) has been well established [38]. However, markers capable of distinguishing genuine Treg cells from recently activated responder T cells are few and somewhat uncertain. In a search for novel Treg markers, 2 different studies found that freshly isolated murine CD4 + CD25 + Treg cells have higher mRNA and protein levels of GITR than conventional CD4 + CD25 -T cells (responder cells) [13,19]. At the same time, another study reached the same conclusion after comparing CD4 + T cell clones with suppressor function and Th1 and Th2 clones with responder function [17]. Human Treg cells (CD4 + CD25 + ) also expressed GITR high levels [35,39], and GITR was overexpressed in a human thymic CD8 + subpopulation with suppressor function (CD8 + CD25 + ) [21]. Treg cell activation increases GITR expression [13,19]. In human CD4 + CD25 + suppressor clones, suppressive activity correlated in full with the intensity of GITR staining and intracellular cytotoxic T cell associated antigen 4 (iCTLA-4), a marker of fully active Treg cells [40]. Some in vivo studies have provided further evidence that GITR is overexpressed in T cells with suppressor function. In murine T cells from tolerated skin grafts, expression of Treg markers (including GITR) was higher than in T cells from rejected skin grafts [17,41]. In human decidua, expression of GITR and OX40 is higher in cells positive for iCTLA-4 (CD4 + CD25 + iCTLA-4 + ) than in negative (CD4 + CD25 + iCTLA-4 -) and responder T cells (CD4 + CD25 -) [42]. Finally, CD4 + CD25 bright T cells in the human intestinal lamina propria and in the joints of patients with the remitting form of juvenile idiopathic arthritis present high levels of GITR on their surface [43,44].
Thus, there is overwhelming evidence that GITR is one of the few markers of cells with suppressor activity, and a practical demonstration is provided by studies in which GITR has been used to sort regulatory cells. For example, Shimizu et al. demonstrated that T cells depleted of GITR high T cells cause autoimmune gastritis in nude mice, suggesting that GITR high cells act principally as suppressor cells [19]. In addition, studying an in vivo murine model, Uraushihara et al. hypothesized that GITR is a more representative marker of Treg cells than CD25 [18], and demonstrated that CD4 + T cells with high levels of GITR on their surface (GITR high ) exert suppressor activity independent of CD25 expression. They suggested that the CD25 -GITR high cells are suppressor T cells with a memory function, while the CD25 + GITR high cells are Treg cells with an effector function [18].
A turning point in the definition of GITR as a Treg marker is represented by studies correlating GITR to forkhead box protein p3 (Foxp3), a transcription factor determinant for acquisition and maintenance of the Treg phenotype [45]. In fact, Foxp3 seems to be a negative regulator of IL-2 and IFNγ as a transcriptional repressor by histone deacetylation [46]. Foxp3 also binds the promoter regions of GITR, CD25 and CTLA-4 but acts as a histone acetylator here and therefore a coactivator of the mentioned genes [46]. Among these genes, GITR seems to be more sensitive to Foxp3 regulation. In fact, in Foxp3 transgenic mice, CD4 + CD25cells show suppressive activity and express high levels of GITR [47]. Furthermore, Foxp3 transfection of naive CD4 + cells causes GITR upregulation [48]. Downregulation of Foxp3 expression in human type 1 regulatory T cells (Tr1) causes the loss of suppressor activity together with the loss of GITR and iCTLA-4 expression but not CD25 expression [49]. In line with these results, virtually all Foxp3 + CD4 + cells are GITR + even if not all GITR + CD4 + cells are Foxp3 + [50]. In addition, when cocultured with activated endothelial cells, CD4 + effector T cells can generate suppressor T cells which are CD25 + iCTLA-4 + GITR high [51], suggesting that GITR high cells deriving from effector T cells may have acquired the suppressive phenotype, at least in some conditions. A similar conclusion was reached by a study using thrombospondin, a natural anti-inflammatory extracellular matrix protein, showing that this protein generates peripheral Treg cells in humans, expressing GITR, CTLA-4, OX40, independent of TGFβ, from resident CD25naive or memory cells [52].
Taken together these studies indicate that: 1) GITR is a marker of cells with suppressor function; 2) GITR seems to be a more reliable marker than CD25 because is present in regulatory cells that are CD25 -; 3) GITR is operationally more useful than iCTLA-4 and Foxp3 since there is no need to permeabilize and kill cells for staining.
While the use of GITR as a Treg cell marker seems reasonable and even advisable when studying cells from healthy animals (in which the immune system is not reacting against antigens), in human diseases (particularly chronic) it may be misused. In fact, GITR is upregulated in effector T cells during activation, reaching expression levels comparable to Treg cells. These observations (common to other Treg markers) might hamper the use of GITR as a Treg cell marker, particularly in chronic diseases. For example, in CD4 + cells from Foxp3 -/mice lacking suppressor cells, GITR expression is much higher than that observed in CD4 + cells from wild type mice [45]. In this case, GITR seems to be a better marker for activated T cells than for Treg cells. Therefore when sorting cells with suppressive activity, we propose at least a two-marker system including Foxp3 and GITR [18,53].
The above-cited studies suggest that GITR is expressed in several kinds of cells with suppressor activity. However, Every et al. demonstrated that CD4 + T cells preventing experimental autoimmune diabetes are not defined by Foxp3 and GITR markers [54], suggesting that not all regulatory cells are GITR + or that GITR is expressed only in suppressor T cells characterized by Foxp3 expression.
Regulation of GITR Expression by Glucocorticoids: A Controversial Matter
When originally cloned, GITR was found upregulated in a hybridoma T cell line treated with glucocorticoids [1]. GITR is, however, only slightly upregulated in T cells in primary cultures treated with glucocorticoids [55,56], and in Treg cells after dexamethasone treatment [55]. GITR expression in T cells is not decreased in glucocorticoid receptor knock out mice [56] and, in humans, is not upregulated in T cells after glucocorticoid treatment [13,19]. Therefore, the relationship between GITR and glucocorticoids remains controversial and seems to have slight functional meaning, if any.
Tissue Distribution of GITRL
When considering studies evaluating GITRL expression at protein level, it results that GITRL is expressed in professional and non-professional APCs, including unstimulated myeloid DC subsets, plasmacytoid DC precursors (pDC), B cells and monocytes [2,3,5,7,12,28]. Following a preliminary observation suggesting that GITRL is expressed in human endothelial cells [4], a recent array study demonstrated that GITRL is one of the 20 genes more differently expressed in endothelial cells compared to a panel of cells from other tissues. In particular, GITRL is expressed at good levels in microvascular-derived primary cultures, levels higher than in unstimulated APCs [57]. GITRL is also expressed in mouse endothelial cells as observed by Cuzzocrea et al. (personal communication). Not surprisingly, an analysis of EST (expressed sequence tag) expression suggests that GITRL is mainly found in the connective tissue (Mm268623 NCBI, Unigene, EST profile viewers).
Cells different from APC and endothelial cells express low levels of GITRL. According to an expression panel obtained with microarray technology, GITRL is expressed at low levels in T cells, PMNs and NK cells [12], but it must still be ascertained if the low mRNA level observed determines sufficient protein expression to have a functional meaning. GITRL is also expressed in osteoclast precursors, skin, keratinocytes and retinal pigment epithelium, which is an immunologically restricted area, where GITRL seems to modulate immune privilege vs. inflammation [31][32][33]58]. EST expression suggests that GITRL is expressed in some parts of the central nervous system (T1DBASE TNFSF18 Tissue Expression).
Interestingly, GITRL expression is strongly increased during inflammation, mainly in APCs and endothelial cells. In response to proinflammatory stimuli, GITRL is rapidly upregulated (peak within 2-24 hours) and declines in 1-2 days to the initial or even lower levels [7,11,16,57]. pDCs, stimulated with viruses, overexpress GITRL [12] and human monocytes stimulated with staphylococcal enterotoxin B (SEB) become GITRL + [59]. This is confirmed by in vivo experiment showing that 24-48 h subsequent to ocular herpes simplex virus 1 (HSV-1) infection, GITRL expression is increased in APCs of draining lymph nodes [60]. Not all pro-inflammatory stimuli promote GITRL ligand upregulation. For example, in endothelial cells, GITRL is upregulated by IFNα and IFNβ, and not by proinflammatory cytokines and LPS. In addition, T cells upregulate GITRL after activation [16] or DEX treatment ( Figure 1). In conclusion, the widespread distribution of GITRL, its expression in endothelial cells and the upregulation upon specific stimuli suggest that GITRL is involved in the development of inflammatory process.
THE ROLE OF THE GITR-GITRL SYSTEM IN IN VIVO MODELS OF ACUTE INFLAMMATION WITH RAPID ONSET, INCLUDING SHOCK
GITR gene deficient (GITR-/-) mice were a useful tool for studying the role of the GITR/GITRL system in inflammation. The first study demonstrating a link between the GITR/GITRL system and acute inflammation was conducted on a mesenteric infarction model, performed by clamping the celiac and superior mesenteric arteries for 45 minutes, known as the splanchnic artery occlusion (SAO) model [61]. In this model, the survival rate of GITR-/- mice was dramatically higher than that of GITR+/+ mice (70% vs. 5%). Decreased mortality of GITR-/- mice correlated with a much lower infiltration of inflammatory cells in the mucosa (with particular reference to PMN), reduction of apoptosis at villus tips and reduction of lipid peroxidation, a marker of oxidant molecules and free radical production. Moreover, in GITR-/- mice there was a lower production of cytokines such as tumor necrosis factor alpha (TNFα), as early as 1 hour following the SAO procedure. At the same time, the adhesion molecules P-selectin, E-selectin and intercellular adhesion molecule-1 (ICAM-1) were upregulated in endothelial cells of GITR+/+ mice, but upregulation was much less efficient in GITR-/- mice, suggesting that the GITR/GITRL system favors PMN infiltration and leukocyte rolling, modulating adhesion molecules during the inflammatory process. Of note, ICAM-1 is expressed at basal levels in both GITR+/+ and GITR-/- mice, suggesting that the GITR/GITRL system does not interfere with basal expression of adhesion molecules.
Involvement of GITR in acute inflammation was confirmed by the lower inflammatory response of GITR -/mice to carrageenan administration in the pleurisy model (carrageenan-induced lung inflammation) [23]. In this model, mice develop an inflammatory response promoting pleural exudation and lung inflammation, 2-8 hours following carrageenan injection in the pleural cavity. In GITR -/mice, pleural exudate, containing less pro-inflammatory cytokines and a lower number of proinflammatory cells, was reduced of about 50%. The decreased number of cells in the pleural cavity concerned all subsets of pro-inflammatory cells and correlated with decreased lung injury (including apoptotic cells) and inflammatory cell infiltration (with particular reference to PMN) in lungs of GITR -/mice. Moreover, lower expression of inducible nitric oxide (NO) synthase (iNOS) and cyclooxigenase-2 (COX-2) was found in the lungs of GITR -/mice, together with lower levels of NOderivative products, nitrotyrosine, and prostaglandin E 2 (PGE 2 ). Finally, adhesion molecules were less upregulated in GITR -/mice compared to GITR +/+ mice, similar to what was observed in the SAO model. Interestingly, co-administration of carrageenan and a fusion protein, formed by the extracellular domain of mGITR fused to human IgG1 Fc fragment (GITR-Fc), in GITR +/+ mice decreased pleural infiltration of macrophages and lung infiltration of PMN to levels comparable to those observed in GITR -/mice injected with carrageenan alone, suggesting that the differences observed between GITR +/+ and GITR -/mice were mainly due to the lack of GITR triggering by its ligand.
However, other in vivo models suggest that the triggering of GITRL (supposed to be capable of reverse signaling) positively modulated the inflammatory response. Intraperitoneal injection of recombinant monomeric GITR produced in E. coli (sGITR), caused inflammation of the peritoneal membrane and spleen as suggested by increased myeloperoxidase activity in the peritoneal membrane, PMN and monocyte infiltration, with later development of tissue damage, and enlargement of the spleen red pulp [28]. Infiltrating PMNs produce oxygen derivatives, serine proteases and zinc MMPs that promote tissue injury. Another study demonstrated that intraperitoneal injection of sGITR upregulates MMP-9 production [62]. Even if the above data may seem in contrast with attenuation of pleurisy by GITR-Fc fusion protein, note that the reagents used were different. Moreover, it is possible that abolishing GITR triggering is useful during inflammation, while GITRL triggering at levels higher than those obtained with physiological triggering may have a pro-inflammatory significance in the healthy animal. Further studies using GITR -/mice and GITRL -/mice (the latter, however, are still not available) will help to discriminate the effect of GITR and GITRL triggering.
Inflammatory Diseases in Which Innate Immunity Plays a Significant Pathogenetic Role
GITR -/mice were studied during the development of lung injury caused by bleomycin instillation, a pro-inflammatory stimulus leading to pulmonary fibrosis [63]. While bleomycin instillation caused death and weight loss in GITR +/+ , neither death nor weight loss was observed in GITR -/mice, suggesting that GITR -/mice were less sensitive to bleomycin treatment. In fact, in these mice the degree of lung infiltration and edema formation was reduced (about one third), 7 days after bleomycin intratracheal instillation. Histological evidence of lung injury was also less. In lungs of GITR -/mice myeloperoxidase activity and expression was about five fold less than in GITR +/+ mice. As a consequence of fewer inflammatory cells in lungs, cytokine production and nuclear factor kappa B (NF-κB) activation were reduced. Very similar results were obtained in GITR +/+ mice co-treated with bleomycin and a very low dose of GITR-Fc, administered by a mini-osmotic pump releasing the fusion protein over the whole 7 day-observation period.
Colon inflammation by 2,4,6-trinitrobenzene sulfonic acid (TNBS) delivered intrarectally is a murine model of inflammatory bowel diseases (IBD), a relatively common inflammatory disease of the gastrointestinal tract supposedly deriving from dysregulation of CD4 + T helper cells and the innate immune system. GITR -/mice were less sensitive to TNBS-induced colitis compared to GITR +/+ mice, as suggested by macroscopic (survival, weight, clinical score), and microscopic (histological score) parameters and cytokine production [24]. Administrating GITR-Fc partially protected TNBS-treated GITR +/+ mice from colitis similar to what observed in TNBS-treated GITR -/mice. The role of innate immunity in the development of TNBS-induced colitis was demonstrated using immunodeficient SCID mice that develop colitis in response to intrarectal instillation of TNBS, roughly comparable to that seen in wild-type mice. Administrating GITR-Fc to SCID mice partially prevented inflammation induced by TNBS, suggesting that GITR triggering has a role in the development of TNBS-induced colitis also in immunodeficient mice.
The epidermis is a tissue where the GITR-GITRL system seems to play an anti-inflammatory and protective role. In fact, GITR -/mice exposed to UVB, demonstrated two times more apoptotic cells compared to GITR +/+ mice [31]. This was confirmed in in vitro studies on keratinocytes from GITR +/+ and GITR -/mice. Moreover, GITR expression is downregulated in response to UV treatment, but when overexpressed, it protects cells from UVB-induced death. In conclusion, GITR protects keratinocytes from cells death, a feature of inflammatory response. The anti-inflammatory role of GITR in the skin is also emphasized by a study on human skin cells, demonstrating that suppressor T cells expressing high levels of GITR proliferate in the skin and may limit skin inflammation [64].
Inflammatory Diseases With an Allergic and Autoimmune Pathogenesis
In vitro studies clearly demonstrate that GITR potentiates T-mediated immune response. This is confirmed by studies on in vivo models where diseases are due to the adaptative immune response. Since in some of these models the inflammatory response is relevant, they are briefly summarized below.
Rheumatoid arthritis (RA) is an autoimmune disease with a substantial inflammatory reaction during both the acute and chronic phases. In the model of collagen-induced arthritis (CIA), GITR -/mice had a lower incidence of CIA and less joint injury compared to the control mice [65]. Clinical evidence correlated with a lower level of PMN infiltration and pro-inflammatory products, including chemokines, cytokines, iNOS and COX-2. Reduced susceptibility to CIA was due to GITR modulation of effector and Treg cell function. At the same time, another study demonstrated that anti-GITR Abs exacerbate CIA, as ascertained by clinical scores and cytokine production [66], so confirming the role of the GITR/GITRL system in this murine model. Involvement of GITR in RA disease is further suggested by a study in which paired samples of synovial cells and PBMC from rheumatoid arthritis patients were analyzed for GITR, OX40, Foxp3 and CTLA-4 (extra and intracellular). These proteins resulted in an increase in the synovium compared to the PBMC, suggesting that Treg phenotype cells tend to accumulate in the synovial fluid of RA patients, and that GITR is involved. Of note, the number of CD25 + cells was comparable [67].
Administration of anti-GITR Ab to OVA-sensibilized/challenged mice exacerbated allergic airway inflammation in this asthma model [66]. Bronchoalveolar eosinophilia, peribronchial and perivascular inflammation was increased compared to control mice. Serum anti-OVA IgE and total IgE was enhanced, while IgG1 and IgG2a was unaltered. These results suggest that in this case both Th1 and Th2 type responses are upregulated.
In vivo administration of anti-GITR Ab aggravates autoimmune thyroiditis (EAT) induced by thyroglobulin in a Hashimoto model, inhibiting tolerance induction and abrogating established tolerance, resulting in increased mononuclear infiltration of the thyroids and autoantibody production. GITR engagement induces autoreactive T cell development and escape from Treg suppression [68], as discussed below.
Altogether, the data presented suggest that GITR signaling increases the expression of those mediators involved in the inflammatory process.
Inflammatory Diseases Deriving From Response To Viruses
The relationship between viruses and TNFRSF members is well known and it was hypothesized that the low level of interspecies conservation of their extracellular domain is due to the crucial role of TNFRSF members in the struggle against viruses [69]. In a very recent study comparing TNFRSF members, GITR/GITRL pair was the only strictly species-specific one [8], suggesting that GITR may be one of the TNFRSF members more directly involved in the response against viruses. In fact, some in vivo studies have recently demonstrated that GITR triggering potentiates immune response against viruses [60,70,71].
Modulation of the GITR/GITRL system may be helpful also in controlling virus-induced inflammatory reaction. A model of inflammation-derived lesion following virus infection is corneal blindness caused by herpes simplex virus (HSV) infection. Effector CD4 + cells, modulated by Treg cells, orchestrate the immunopathological lesions. In planning the study of GITR activation effect on this model, the authors anticipated that treatment with agonistic anti-GITR Ab would cause more severe keratitis either because of negative modulation of Treg suppressive activity or due to the costimulatory effect of GITR that could enhance T cell effector function [60]. However, while anti-GITR treatment did enhance HSV-specific T cell immunity (as shown by increased IL-2 and IFNγ production in lymph nodes and spleen), it also reduced virus-induced angiogenesis and stromal keratitis. This effect was explained by 2 anti-GITR-induced effects: 1) decreased infiltration of CD4 + cells in corneas (about half compared to Ig-treated mice), evaluated 10 and 15 days after infection, 2) a five-fold lower production of MMP-9, a matrix-degrading enzyme involved in virus-induced angiogenesis, evaluated 2 and 13 days after infection. Thus, the GITR/GITRL system participates in modulating the inflammatory response caused by virus infection, but contrary to expectations, it plays an anti-inflammatory role.
HOW THE GITR-GITRL SYSTEM HAS A ROLE IN THE INFLAMMATORY PROCESS AND SHOCK: FROM THE EVIDENCE TO THE CELLULAR MECHANISMS

GITR and GITRL: Multifaceted Players in Several Systems
The role of the GITR/GITRL system in modulating the inflammatory response is evidenced by the above-referred in vivo data and seems to be crucial both in the early phase and in sustaining the inflammatory process. This is due to the determinant role of the GITR/GITRL system in 4 different aspects of inflammation: 1) extravasation process, 2) production of inflammatory mediators, 3) production of cytokine, 4) activation of effector T cells. Though the effects of the GITR/GITRL system are impressive, it is not always clear how these effects are potentiated by pharmacological treatment.
The main confusing factor is the possibility that GITRL not only represents the molecule able to trigger GITR, but can also activate signals (called reverse signaling) in the cells where it is expressed, following GITR binding. Reverse signaling of TNFSF members was hypothesized when the high interspecies conservation of their short cytoplasmic domains was observed [72]. High interspecies conservation is observed in GITRL as well. Among the different studies suggesting the existence of reverse signaling by GITRL, two convincingly support this, even if, in our opinion, a definite demonstration will be accomplished by working with GITR-/- cells. The first study demonstrated that GITRL signaling causes cell cycle arrest and apoptosis in murine macrophages [73], the second that GITRL signaling stimulates osteoclast differentiation [32]. These studies also demonstrate that GITR fusion proteins, but not anti-GITRL Ab, can trigger GITRL.
The potential GITRL reverse signaling is a confusing factor because several cells, including macrophages, PMNs, DCs and activated T cells, express both GITR and GITRL; therefore, every time a fusion protein is used it can elicit opposite effects on GITR and GITRL. For example, when an agonistic GITR-Fc is used, 2 effects are possible: 1) inhibition of GITR activation by endogenous GITRL, 2) activation of GITRL. Moreover, in GITR-/- mice cells lack both GITR and GITRL signaling, since GITRL, although present in GITR-/- mice, is not activated by GITR. The potential effects of GITR and GITRL triggering in macrophages are summarized in Figure 2. The GITR gene encodes several alternatively spliced products, 2 of which (GITRD and GITRD2) are soluble, as presented in detail in a following paragraph. The levels of GITRD/D2 expressed in responder T cells are substantial and are downregulated during T cell activation. They may function as a decoy target, impeding GITR activation by GITRL. Thus, the existence of GITR splicing variants and of GITRL reverse signaling, together with the expression kinetics of GITRL (rarely expressed at high levels for a long time), makes it difficult to predict and understand the different, sometimes contrasting, results obtained in different experimental settings.
In the following paragraphs we review in vitro data explaining how GITR/GITRL modulation affects different aspects of the inflammatory process and the role of antibodies and fusion proteins, which are potentially useful tools in the control of inflammation.
GITR-GITRL System in Leukocyte Extravasation and Edema
In the above described in vivo models there is overwhelming evidence that the GITR/GITRL system is involved in leukocyte extravasation, one of the crucial events of the inflammatory process and shock. However, there is no experimental evidence to fully describe how it happens and it is possible that both GITR and GITRL play a role on endothelial cells. In fact, GITRL is expressed at a high level in endothelial cells and its expression can be modulated by pro-inflammatory stimuli, and GITR is expressed during the inflammatory process.
Adhesion molecules ICAM-1, P-selectin and E-selectin are upregulated in endothelial cells following inflammation, but in the absence of GITR (GITR -/mice) upregulation is much less evident [23,61]. An obvious explanation is that GITR (expressed on endothelial cells) is triggered by GITRL (expressed on PMNs and monocytes) and participates in upregulation of adhesion molecules. That GITR-activated signals are able to modulate expression of P-selectin and E-selectin is suggested by a study performed on CD3 + cells cultured together with an irradiated retinal pigment epithelial (RPE) cell line (ARPE) [58]. In fact, CD3 + cells, activated in the presence of a GITRL-transfected ARPE cell line, produced much more P-Selectin and E-Selectin compared to those cultured together with a non-transfected ARPE cell line. The evidence that GITR-Fc fusion protein inhibits extravasation in the described inflammation models suggests a role of GITR in extravasation. Another hypothesis in line with the in vivo effect of GITR-Fc fusion protein is that GITRL may function as an adhesion molecule, favoring extravasation of cells that express GITR (such as lymphocytes, PMNs and monocytes). In alternative, since the expression of adhesion molecules is modulated by proinflammatory stimuli, such as TNFα [74] and other cytokines, the lack of adhesion molecule upregulation in GITR -/may be due simply to lower levels of pro-inflammatory stimuli and further studies are needed in this field.
Another feature regulated by endothelial cells is edema, a crucial event in shock and inflammation and due to several mechanisms, including tight junction changes. In some in vivo models, GITR -/mice edema was decreased compared to GITR +/+ mice [63,65]. Staining of ZO-1, a marker of tight junction integrity, showed much higher degree of immunostaining disruption in lungs of carrageenan-treated GITR +/+ mice compared to carrageenan-treated GITR -/mice, suggesting a direct or indirect role of the GITR/GITRL system in tight junction integrity [23].
GITR-GITRL System and Inflammatory Mediators
The early phase of the inflammatory process is characterized by the production of histamine, leukotrienes, platelet-activating factor and COX products, followed by PMN infiltration and production of PMN-derived free radicals and oxidants [75]. Major players in this process are the constitutive isoform of COX (COX-1) and the inducible isoform COX-2. This last isoform is under the regulation of NF-κB and MAP kinase signaling [75,76], and GITR is able to activate both systems. Indeed, there is less COX-2 in the joints of GITR -/mice in collagen-induced arthritis compared to wild type controls [65]. Moreover, lungs from GITR -/mice exhibit lower levels of COX-2 expression following carrageenan-induced lung inflammation, and, as expected, PGE 2 levels in pleural exudate are reduced [23]. In inflammatory cells from lung tissue, GITR -/mice expressed lower levels of COX-2 suggesting that macrophages of GITR -/mice are less activated. This effect may be due to lack of GITR or GITRL triggering. Several studies support the latter hypothesis. In fact, GITRL stimulation by sGITR (the extracellular domain of GITR produced in E.coli as a monomer) or GITR-Fc (the extracellular domain of GITR fused with Fc fragment, produced in eukaryotic cells as a dimer) induces COX-2 upregulation and PGE 2 production in bone-marrow stromal cells, peritoneal macrophages and RAW 264,7 cell line [29,77]. The same group reports that sGITR inhibits macrophage growth, and since anti-GITR Ab neutralizes this effect, but alone does not affect macrophage growth, they conclude that macrophage cycle-arrest is due to GITRL signaling [73].
Another player in inflammation is NO produced by the inducible isoform nitric oxide synthase (iNOS). NO is important as a toxic defense molecule against infectious organisms. It also regulates the function, growth and death of many immune and inflammatory cell types including macrophages, T lymphocytes, antigen-presenting cells, mast cells, PMNs and NK cells and its target cell specificity depends on its concentration, its chemical reactivity, the vicinity of target cells and the way target cells are programmed to respond. Among the pro-inflammatory effects, NO regulates MMP expression and activity. There are some links between GITR or GITRL triggering, iNOS, and NO production. In CIA and pleurisy models, less iNOS was found in the joints and in the lungs of GITR -/mice [23,65]. In a series of experiments, Shin et al. demonstrated that GITRL triggering by sGITR induces iNOS synthesis in murine macrophages [78,79]. Using iNOS inhibitor SMT and NO donor SMP, they demonstrated that there is no correlation between macrophage growth and sGITR induced NO production, so even if there is evidence of NO antiproliferative action, the effects of GITRL triggering do not include inhibition of proliferation [73]. Together, GITR and GITRL promote NO release.
Matrix metalloproteinases (MMPs) appear to regulate cellular behavior through several mechanisms including cell-matrix interactions, extracellular matrix remodeling, angiogenesis, cell growth/apoptosis and the release of bioactive signaling molecules. MMPs are synthesized in response to diverse stimuli including cytokines, growth factors, hormones, and oxidative stress and are involved in the development of several diseases, including inflammatory and vascular diseases. Modulation of GITR/GITRL system causes modulation of some MMPs but data are contrasting. Lee et al. demonstrated that GITRL triggering by sGITR upregulates MMP9 and MMP2 in murine peritoneal macrophages [62]. Accordingly, CD11b + cells, isolated from virus-infected corneas, increased MMP-9 secretion following anti-GITRL treatment [60]. However, the authors hypothesize that this is due to blocking of GITR/GITRL interaction more than to GITRL triggering. In fact, anti-GITR treatment negatively modulated MMP-9 expression both in vitro (CD11 + cells) and in vivo (corneal extract of mice with herpes simplex virus infection). Opposite results were obtained by Kim et al., showing that GITR stimulation by anti-GITR Ab induces MMP-9 in mouse and human macrophages from different tissues and in vitro monocyte/macrophage cell lines [80]. A possible explanation for the contrasting results is that GITR triggering elicits opposite effects in function of the microenviroment, activation status and type of stimulus. Thus, further studies are needed. Human GITRL triggering (shGITR) induces MMP-13 secretion in fibroblast-like synovial cells and may promote tissue destruction in rheumatoid arthritis [81].
GITR and/or GITRL Modulation by Fusion Proteins and Antibodies
Several studies demonstrate that anti-GITR antibody (such DTA-1) and recombinant GITRL have agonistic activity on GITR and favor the production of cytokines in various inflammatory cells, both in vivo and in vitro. In particular, anti-GITR and anti-CD3 Ab treatment induced higher IL-2 and IFNγ levels in T cells, compared to anti-CD3 Ab alone [6]. GITR co-triggering of T cells induces IL-2, IL-4, IFNγ and very strong IL-10 secretion, and this latter seems to counter-regulate enhanced proliferative response [15]. Anti-GITR antibody induced dose-dependent TNFα secretion in mono-macrophage cell lines and increased IL-8, MCP-1 secretion [80]. Cord blood mononuclear T cells (CBMC) show a positive correlation between GITR expression and IL-10 secretion subsequent to allergen exposure [82]. In NKT cells, DTA-1 increased TCR-dependent production of IL-4, IL-10 and IL-13 [22]. rGITRL also induced dose-dependent TNFα secretion in Raw 264,7 cells [80]. Administration of anti-GITR Ab during inflammatory reaction induces both Th1 and Th2 type cytokines in vivo. Anti-GITR Ab treatment of mice with CIA exacerbated joint inflammation and increased TNF-α, IL-5 and IFNγ production, while anti-GITR Ab treatment of mice with OVA-induced airway inflammation increased IL-2, IL-4, IL-5 and IFNγ [66]. Injection of anti-GITR Ab immediately after HSV-1 viral infection, increased IFNγ secretion by Treg cells [71].
Triggering of GITR is elicited by GITRL expressed on other cells even when they are fixed or irradiated [6,58]. For example, RPE cells which were transfected with GITRL and deadly irradiated, increased T cell-production of a series of pro-inflammatory cytokines as IL-2, IL-6, TNFα, IFNγ, Selectin P and E, and decreases previously high TGFβ levels [58].
In vitro GITR triggering induces mainly pro-inflammatory cytokines and promotes inflammation. This is emphasized also by the in vivo data, showing that GITR -/mice have considerably less inflammatory response than GITR +/+ controls. However, in some cases, the outcome of GITR triggering can be increased expression of anti-inflammatory cytokines, like IL-4 or IL-10, which presumably tends to limit overextended inflammatory reaction. This apparently contradictory data may suggest that various spectra of induced cytokines have different origins, and different kinetics, contributing to a fine-tuning of the satellite inflammation on proliferation. This advises maximum caution in using antibodies or fusion proteins targeting GITR/GITRL system in therapy. Reverse signalling, and different dynamics of affinity and regulation are highly variable and depend crucially on the kinetics moment, targeted cell type and location.
GITR/GITRL System and T Cell Activation
Acquired immunity plays a role in some inflammatory reactions. In particular, inflammatory reaction during autoimmune diseases is under the control of T lymphocytes and response to bacteria or viruses is in part due to B and T cells. Despite the several functions of the GITR/GITRL system in innate immunity cells, probably the main role of GITR is played in modulating the effector T cell response. We summarize here how GITR activation modulates activation of effector T cells and function of suppressor T cells (Figure 3). This aspect has recently been discussed in other review papers [9][10][11].
GITR in T Cell Regulation: Co-activating Function on Effector T Lymphocytes
There is overwhelming evidence that modulation of T cell response consequent to GITR triggering derives, first, from co-activation of effector T cells. In fact, GITR co-triggering increases activation and proliferation of TCR-triggered T cells [6,7,15,16,83]. This effect is evident when GITR is triggered by anti-GITR Abs or stimulated by soluble GITRL or GITRL-transfected cells [6,7,[14][15][16]. It is also evident in physiological conditions, as demonstrated by a decreased activation following addition of blocking anti-GITRL Abs to a co-culture of APC (physiologically expressing GITRL) and anti-CD3triggered T cells [16]. Increased activation is also due to rescue from anti-CD3-induced apoptosis [6]. GITR triggering effects are more evident when TCR is suboptimally activated [7,14](as usually happens for co-accessory molecules) and are evident with lower activation stimuli in CD4 + than in CD8 + cells [16]. In certain experimental conditions, full triggering of TCR and GITR decreases cell proliferation of CD4 + cells [7,15].
Though comparison with other co-stimulatory molecules is hampered by some technical variables, it is believed that the co-stimulatory power of GITR is lower than that of CD28 [6,15,19,58] and seems qualitatively different. Studies on total lymph node populations of GITR -/and CD28 -/mice demonstrated that in the presence of weak CD3 triggering (soluble anti-CD3 in the absence of feeder) and IL-2, the lack of CD28 only in part impaired T cell activation, while the lack of GITR completely abolished T cell activation [16]. This was due, at least in part, to the inability of GITR -/cells to express the high affinity IL-2R when cocultured with CD4 + CD25 + cells [16]. Studying the effect of retinal pigment epithelial (RPE) cells on T cell proliferation, was demonstrated that GITR triggering abrogated RPE-mediated immunosuppression, while a much smaller effect was seen with CD28 triggering [58], confirming the different effects of CD28 and GITR.
Some studies suggest a different role for GITR in CD4+ and CD8+ cells. During the activation process of CD4+CD25− cells, GITR upregulation is mainly dependent on CD28 co-triggering, as demonstrated by the greatly increased expression of GITR after CD28 co-triggering and by the substantial inhibition of GITR upregulation upon activation when physiological CD28 engagement was blocked by anti-CD80/86 Abs [16,84]. Of note, GITR expression is upregulated by CD28 activation also in the absence of TCR triggering [84], suggesting that a specific signal, not correlated with activation/proliferation, departs from CD28. As a consequence, when CD28 triggering is impeded, the costimulatory effect of GITR triggering is decreased [16]. Kohm et al. demonstrated that this effect is dependent on CD28-driven IL-2 production [84], while Stephens et al. demonstrated that it is independent of this cytokine [16]. However, the latter used CD4+CD25− cells while the former used total CD4+ cells. In conclusion, it seems that, in CD4+ cells, GITR expression and signalling follow CD28 signalling, and GITR should probably be regarded as one of the pathways activated by CD28 engagement.
In CD8+ T cells, the GITR/CD28 relationship is somewhat different. In fact, our unpublished studies suggest that, in the absence of GITR, CD8+ cells cannot be co-activated by CD28 stimulation when suboptimal doses of anti-CD3 Ab are used, while in the absence of CD28, GITR can exert its co-accessory functions (Ronchetti et al., manuscript in preparation). If these findings are confirmed in other experimental models, GITR may be a molecule necessary for CD28 costimulatory effects in CD8+ cells. Even if GITR expression is increased by CD28 triggering, it seems partially independent of CD28 activation [16] (Ronchetti et al., manuscript in preparation). Finally, while full triggering of both GITR and TCR can elicit activation-induced cell death of CD4+ cells, increased TCR stimulation further increases the costimulatory activity of GITR and CD8+ cell activation [16]. These findings may explain why, in some in vivo studies, GITR activation potentiates the response of CD8+ cells more than that of CD4+ effector cells [85,86].
GITR in T Cell Regulation: Modulation of The Interplay Treg/Effector Cells
Following GITR triggering, an increased response of the immune system to antigenic stimulation is observed both in vitro and in vivo. This effect is due not only to costimulation of effector T cells (as discussed above) but also to negative modulation of suppressor T cells (including Treg cells), which are subsets of T cells able to control expansion of effector T cells upon TCR triggering. In 2002, two independent groups working on Treg cells using a different approach demonstrated that GITR activation interferes with the effector/Treg cells interplay [13,19]. Both groups tested Abs directed towards several TNFRSF members with co-accessory function, and demonstrated that anti-GITR Abs were the only ones capable of reverting the suppressor effect of Treg cells [13,19]. Other studies demonstrated the same effect when GITR triggering was exerted by GITRL expressed on APCs [2,6,7,27]. GITR triggering by anti-GITR Ab is also effective in abolishing suppressor activity of other cells such as CD4 + CD25 -T cells present in aged mice [87] or old human donors [88], or retinal pigment epithelial cells [58].
The lower suppressor activity of Treg cells observed in the above-mentioned studies can be explained in 2 ways: GITR engagement either inhibits the suppressor activity of Treg cells or makes effector T cells resistant to Treg cell suppression. Since both explanations are well supported by experimental data, it is likely that both contribute to the final effect. The latter hypothesis (effectors resistant to suppression) has recently been proposed by Shevach and Stephens in an "opinion" paper [11] in which they reconsider their own original data [13] in view of the demonstration that GITR triggering is costimulatory for effector T cells [6,7,15,16]. They sustain that "more definitive studies now indicate that signals through GITR costimulate responder T cells and so allow their escape from suppression" [11]. Since the effects of GITR triggering in effector T cells are not impressive and, at best, quantitatively comparable with those obtained with other costimulatory molecules, such as CD28 [6,15], this hypothesis would suggest that GITR signaling specifically interferes with the signals delivered by Treg cells to effector cells. The data presented by Stephens et al. are in line with this hypothesis [16]. In fact, they demonstrated that total lymph node cells (including CD4+CD25+ cells) of GITR−/− mice were unable to proliferate when stimulated by soluble anti-CD3 and IL-2, whereas CD28−/− cells were able to, suggesting that GITR stimulation does more than simply lower the activation threshold. A similar conclusion was reached by Mahesh et al. investigating the meaning of GITRL expression in human ocular tissue [58]. Expression of GITRL on retinal pigment epithelial (RPE) cells abrogated RPE-mediated immunosuppression of CD3+ cells, and the effect was independent of Treg cells. It was not a matter of potency of costimulation, but of the kind of costimulation, as demonstrated by the very low efficiency of CD28 triggering in abrogating the RPE-mediated immunosuppression vs. a much higher level of costimulation by CD28 in the absence of RPE. In conclusion, these data suggest that in effector T cells GITR triggering activates a pathway (still undisclosed) distinct from that activated by CD28, specifically antagonizing the immunosuppression.
Some in vitro and in vivo data suggest that GITR stimulation directly affects Treg function. In the first study demonstrating that anti-GITR Ab breaks immunological self-tolerance, Shimizu et al. found that GITR also possesses weak costimulatory activity [19]. Therefore, they used rat responder T cells (with which anti-mGITR Abs do not react) and mouse Treg cells, and demonstrated that the increase in cell proliferation is also due to abrogation of Treg cell activity [19]. Moreover, when Treg cells from GITR+/+ mice were cultured together with CD4+ effector T cells from GITR−/− mice, anti-GITR Abs (in this experiment effective only on Treg cells) were able to increase the proliferation rate of effector T cells by inhibiting Treg suppressor activity [6]. However, this effect was not observed using mice with another background [16]. In another in vitro experiment, the suppressor function of CD4+CD25+ T cells on B cells was lost when anti-GITR Abs were added [89]. Some in vivo models also confirm direct effects of GITR triggering on Treg cells. Depletion of Treg cells from donor T cells exacerbates GVHD induced by allogenic bone marrow transplantation. T cell-depleted bone marrow cells together with freshly purified effector T and Treg cells were transferred into irradiated mice that received an intraperitoneal injection of anti-GITR antibody [90]. The anti-GITR-injected mice died from GVHD while the isotype-injected mice did not. To further demonstrate that this effect was due to a direct action on Treg cells, Treg cells were pre-treated in vitro with an anti-GITR antibody, washed and transferred together with the other donor cells into irradiated mice. In this case also, mice developed a lethal GVHD [90]. Moreover, transfer of cells depleted of GITR+ cells caused severe multi-organ inflammatory disease in Balb/c nude mice, ending in fatal autoimmune myocarditis with anti-myosin antibody secretion; similarly, transfer of GITR-depleted cells from prediabetic NOD mice to NOD-SCID mice accelerated the development of diabetes and induced skeletal muscle myositis and other autoimmune/inflammatory diseases [50]. To test how GITR modulates T cell response during CIA development, spleen cells from GITR−/− or GITR+/+ arthritic mice were transferred intraperitoneally into SCID mice together with collagen [65]. The resulting arthritis was 3.5-fold more severe in GITR+/+-transferred mice compared to GITR−/−-transferred mice. In this model, GITR-derived signals were important in both effector and Treg cells. In fact, when Treg-depleted spleen cells were transferred, CIA was again stronger in GITR+/+-transferred mice as compared to GITR−/−-transferred mice, but to a lesser degree (only 2-fold). Moreover, when Treg-depleted splenocytes from GITR+/+ mice were transferred together with GITR+/+ or GITR−/− Treg cells, CIA was again stronger in GITR+/+ Treg-transferred mice compared to GITR−/− Treg-transferred mice, suggesting that physiologic GITR triggering negatively modulates Treg cell activity [65]. A similar result was obtained in the TNBS-colitis model, where, however, the difference between mice transferred with GITR+/+ and GITR−/− Treg cells was not significant due to the already high efficacy of GITR+/+ Treg cells [24]. Taken together, the above-reported data suggest that, at least in some experimental conditions, GITR triggering also modulates Treg function.
In an attempt to find Treg cell inhibitory signals delivered by GITR, a global gene analysis of anti-CD3 activated Treg cells treated or untreated with anti-GITR Ab was performed [91]. More than 350 genes were transcriptionally modulated 12 hours after GITR triggering, but the full list of genes is not yet available. Granzyme B, a molecule participating in the suppressive/cytotoxic activity of Treg cells, is strongly upregulated in anti-CD3 triggered Treg cells and GITR engagement counters granzyme B upregulation [91], further supporting the hypothesis that GITR negatively modulates Treg cell activity.
GITR costimulation reverses the anergic phenotype of Treg cells after antigen presentation and this effect was correlated to their loss of suppressor function as previously summarized [9], but this may be an oversimplified view. For example, OX40 can modulate Treg function, at least in some experimental conditions, without delivering a costimulatory signal [90]. The pro-proliferative effect of GITR on Treg cells was further confirmed by recent in vitro and in vivo studies [92]. They also demonstrated that once GITR stimulation has occurred, Treg cells regain their suppressive activity, as previously demonstrated in another experimental setting [19]. The physiological role of GITR for Treg expansion is suggested by a decreased amount of Treg cells in GITR -/mice [6,16]. The stimulation of Treg cell proliferation by GITR together with the temporary inhibitory effect on Treg function: 1) may limit collateral damage of inflammatory response induced by the exaggerated response to foreign or self antigens, 2) may explain why in vivo GITR stimulation does not cause overt autoimmunity [92,93].
In conclusion, GITR triggering may have 4 distinct effects on Treg/effector cell interplay: 1) inhibition for a short time (hours?) of Treg cell suppressor activity by impeding the upregulation of molecules necessary for Treg suppressor activity such as granzyme B, 2) decreased sensitivity of effector T cells to Treg suppression, 3) induction of a partial deletion of Treg cells, 4) promotion of proliferation of functionally active Treg cells, expanding the Treg cell compartment.
GITR in T Cell Regulation: Modulation of The Interplay DC/Treg/effector Cells
Several studies suggest that professional APCs (i.e. DCs) express GITRL modulating its expression during antigen processing and presentation. Evidence for modulation of DC function by the GITR/GITRL system has been obtained studying C. albicans infection of GITR -/mice [95]. When DCs were cultured in the presence of heat-inactivated C. albicans and GITR +/+ or GITR -/-Treg cells, the level of DC-derived IL-12 was lower in DC cocultured with GITR +/+ Treg cells. A possible explanation is that GITR (on Treg cells) triggers GITRL (on DCs), modulating DC function. In turn, modulation of DC activity may modulate effector and suppressor T cell activity. Thus, GITRL may modulate immune response not only by triggering GITR on effector and Treg cells, but also by modulating dendritic cell activity through reverse signaling. In the C. albicans model the effect of the signaling was pro-inflammatory by favoring Th2 polarization but further studies are needed to fully disclose the effect on DCs.
GITR Structure and Promoter Region
GITR, like other TNFRSF members, is a type I transmembrane protein formed by a cytoplasmic, a transmembrane and an extracellular domain. Murine and human GITR genes comprise 5 exons [96].
The first 3 exons encode the extracellular domain; exon 4 encodes a small part of the extracellular domain, the transmembrane domain and part of the cytoplasmic domain while exon 5 encodes the cytoplasmic domain. mGITR is located on chromosome 4 and hGITR on chromosome 1 [91,96]. TNFRp75, OX40, CD30, 4-1BB, HVEM and DR3, all belonging to TNFRSF, are similarly located on the murine chromosome 4 and the human chromosome 1, suggesting a common origin and possibly a similar function. However, homology among TNFRSF members is not very high and GITR is not an exception.
Consensus elements for transcription factors involved in the inflammatory response were identified in the 5' flanking region of the GITR gene [96] (Bianchini, unpublished). Several consensus elements involved in the inflammatory process are present, including NF-κB, STAT5, SRF, LEF1, NF-AT1, NF-IL6, IRF4 and TFE3. These elements are crucial for the activation of both T cells and innate immunity cells, further suggesting a role for GITR in the inflammatory process.
The above-mentioned studies demonstrate GITR expression in skin and bone. GITR expression in these tissues is also suggested by the presence of TFE3 and LEF1 (skin), and LEF1, STAT5, OCT1P and GKLF.1 (bone). Some elements (e.g. MYT1 and PTX1) promoting gene expression in neurons have been found, confirming the potential expression of GITR in the central nervous system, as suggested by the presence of GITR in the brain. Finally, two highly significant binding sites for MyoD and one for myogenin have also been found, suggesting that GITR is involved in muscle development [96]. Despite the weak expression of GITR in neurons and muscle (Table 2), the specific expression of GITR under different conditions and its role in these tissues deserve further investigation.
The Extracellular Domain of GITR
TNFRSF members are characterized by cysteine-rich domains in their extracellular portion. Cysteines form disulfide bridges, which contribute to structurally defined binding sites for the specific ligand [97]. A canonical cysteine-rich domain (CRD) contains 6 cysteines (numbered 1 to 6) which form 3 disulfide bonds (C1 with C2, C3 with C5 and C4 with C6) [97,98].
Although GITR belongs to TNFRSF, a canonically defined CRD according to the BLAST utility (NCBI) is not present in GITR. To better define the CRDs, we compared the primary structure of the extracellular domains of TNFRSF members and identified four different motifs based on cysteine position, conserved amino acid residues and the spacing between them (Table 3). These motifs, when present, are located in different positions of the extracellular domain and, consequently, we named them CRD1 (next to the signal peptide), CRD2/4, CRD3 and CRD4 (next to the transmembrane domain). CRD1 and CRD2/4 motifs have 6 cysteines and are variants included in the canonical CRD. On the contrary, CRD3 and CRD4 motifs have not been described so far as motifs characterizing the extracellular domain of TNFRSF members, despite being observed in more than 35% and 50%, respectively, of the members belonging to this family (not shown).
Figure 4. Comparison between the extracellular domains of mGITR and hGITR. The amino acid position is reported between brackets. The Asparagine in white on gray background represents potential glycosylation sites. In the identity line, amino acid residues with similar function (I, V, M, L; H, R, K; E, D, N, Q; S, T; A, G; Y, F) present in both sequences are indicated by +. In the conserved residues line, the amino acid residues matching the respective CRD (see Table 3) are reported. Cysteine position in the CRD (or the position of the amino acid residue substituting the cysteine residue) is also reported.
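To make the similarity scoring used in Figure 4 concrete, the short Python sketch below computes percent identity and percent similarity over an aligned pair of sequences, counting a "+" whenever two residues fall into one of the functional groups listed in the legend. The two input sequences are placeholders for illustration only, not the real mGITR/hGITR extracellular domains.

```python
# Minimal sketch: percent identity and similarity between two aligned sequences,
# scoring "+" for residues in the same functional group, as in the legend above.
# The two sequences below are short placeholders, not the real mGITR/hGITR domains.

GROUPS = ["IVML", "HRK", "EDNQ", "ST", "AG", "YF"]  # functionally similar residues

def same_group(a: str, b: str) -> bool:
    """True if both residues fall in one of the similarity groups."""
    return any(a in g and b in g for g in GROUPS)

def identity_similarity(seq1: str, seq2: str):
    """Return (percent identity, percent similarity) for aligned sequences that may contain gaps."""
    assert len(seq1) == len(seq2), "sequences must be aligned to equal length"
    ident = simil = compared = 0
    for a, b in zip(seq1, seq2):
        if a == "-" or b == "-":          # skip alignment gaps
            continue
        compared += 1
        if a == b:
            ident += 1
            simil += 1
        elif same_group(a, b):
            simil += 1
    return 100 * ident / compared, 100 * simil / compared

if __name__ == "__main__":
    m_seq = "CRDYPGEE-CCS"   # placeholder fragment
    h_seq = "CKEYPGEESCCS"   # placeholder fragment
    pid, psim = identity_similarity(m_seq, h_seq)
    print(f"identity {pid:.0f}%, similarity {psim:.0f}%")
```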
The CRD3 motif is defined in Table 3 [97]. When cysteines are absent, only certain amino acid residues can replace them. The CRD4 motif (Table 3) usually contains 4 cysteines, with cysteine 3 being replaced by tryptophan or histidine residues and cysteine 5 by alanine or glycine residues.
GITR contains a badly conserved CRD1, a fairly well conserved CRD3 and a perfectly matched CRD4 (Figure 4). In contrast, it lacks CRD2/4. Atypical CRD1 in GITR lacks the cysteine residue C4 and the tyrosine residue located after C1. Moreover, the amino acid residues between C3 and C5 are too few in mGITR and too many in human hGITR, to form a disulfide bond. Therefore, the CRD1 motif in GITR might contain just 1 disulfide bridge and may not represent a structurally defined CRD. Low conservation of the sequences representing this motif (51% similarity between mGITR and hGITR, Figure 4) in mGITR compared to hGITR suggests that CRD1 has little functional meaning in GITR. CRD3 is quite well represented in both mGITR and hGITR. The only missing amino acid residue is either the asparagine residue or the glutamic acid residue located near C8. The motif is characterized by 6 cysteine residues while C5 (H is instead present) and C7 (G is instead present) are lacking. CRD4 is perfectly represented in both mGITR and hGITR. The crucial meaning of CRD3 and CRD4 in GITR is further supported by the high similarity of mGITR compared to hGITR in these domains (61% and 75% similarity, respectively) and by the conservation of CRD3 and CRD4 in other species (Bos taurus, Canis familaris, Macaca mulatta, and Pan troglodytes) with a 60-65% similarity (CRD3) and 65-75% similarity (CRD4). GITR does not show a high homology towards other TNFRSF members in the extracellular domain, though. This explains why GITRL is extremely selective for GITR [8].
Reports are not available on the role of the CRDs of GITR in GITRL binding. Studies on other TNFRSF members show that it is difficult to predict the role of the different CRDs. For example, in Fas (TNFRSF6) the domains corresponding to CRD2/4 and to CRD3 play major roles in ligand binding. In TNFRI the domains corresponding to CRD1, CRD2/4 and CRD3 play a role, and in NGFR the domains corresponding to CRD3 and CRD4 are crucial [99,100]. The role of the different CRDs is interesting not only from a theoretical point of view but also for understanding the role of GITRD and GITRD2, soluble products of the GITR gene and potential competitors for GITRL binding.
Both murine and human GITR have potential glycosylation sites: mGITR has 4 sites, whereas hGITR has only one that is conserved with respect to mGITR (see Figure 4). Western blot experiments indirectly confirmed that mGITR is glycosylated, since molecular weight of mature mGITR, calculated on the basis of amino acid composition, is 23.3 kDa, while experimental molecular weight ranges from 35 to 40 kDa, depending on the cell population tested [101].
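As an illustration of how a molecular weight "calculated on the basis of amino acid composition" is obtained, the sketch below sums average residue masses over a polypeptide sequence. The mass table uses standard average residue masses and the demo sequence is a placeholder, not the actual mature mGITR sequence; the gap between such a calculated value and the observed 35-40 kDa is the kind of discrepancy that points to glycosylation.

```python
# Minimal sketch of the calculation behind a "molecular weight from amino acid
# composition" estimate. Average residue masses in Da; the demo sequence is a
# placeholder, not the real mature mGITR sequence.

RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02  # one water molecule per peptide chain

def protein_mw_kda(sequence: str) -> float:
    """Mass of the unmodified polypeptide in kDa; glycosylation would add to this."""
    mass = sum(RESIDUE_MASS[aa] for aa in sequence) + WATER
    return mass / 1000.0

if __name__ == "__main__":
    demo = "MAQHGAMGAFRALCGLALLCALSLGQR" * 8   # ~216-residue placeholder
    print(f"calculated MW ≈ {protein_mw_kda(demo):.1f} kDa")
```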
The Cytoplasmic Domain of GITR
The cytoplasmic domain of mGITR and hGITR is respectively 52 and 53 amino acid residues long, and shows a good homology with the cytoplasmic domains of OX40, 4-1BB and CD27 (similarity between 45 and 50%)( Figure 5) [96]. The homologies span the complete cytoplasmic domain but are centered in 2 segments: domain 1, the sequence next to the -COOH terminus of transmembrane region, and domain 2, close to the -COOH terminus of the proteins ( Figure 5). Interestingly, domains 1 and 2 are coded by different exons [96].
Domain 1 is present in mGITR, mOX40, hOX40, m4-1BB, h4-1BB, mCD27 and hCD27; it is characterized by 3 basic residues and is described by a motif shared among these receptors (Figure 5). Figure 5 also shows that hGITR lacks 2 (hGITR) or 1 (hGITR variant) of these basic residues, which are deleted compared to the rest of the family members, but whether this lack has functional implications (activation of partially different pathways by mGITR and hGITR triggering) remains to be determined. Domain 2 can be regarded as a "life domain", as opposed to the death domain of other TNFRSF members, whose triggering activates apoptosis [101].
Motif 1 and motif 2 are shared by several (but not all) TNFRSF members whose genes are present in the same chromosomes where GITR is found. This may be a consequence of chromosome duplication. However, as CD40 and CD27 are not located in the same chromosomes as GITR, a functional convergence should be hypothesized rather than an evolutionary consequence. In conclusion, the good homology in the cytoplasmic domain between GITR, CD27, OX40 and 4-1BB and a quite good homology with CD40, together with the similarity in function, lead us to define a new subfamily of TNFRSF [1,96].
Alternatively Spliced Products of GITR
Several members of TNFRSF are characterized by splicing variants [102,103] and both murine and human GITR genes originate alternatively spliced products. Besides mGITR, the mGITR gene produces 2 more alternatively spliced products (mGITRB and mGITRC) ( Figure 6) [104], which share an identical extracellular domain and a transmembrane domain with mGITR. The -NH 2 terminus of the cytoplasmic domain (containing the above mentioned domain 1) is identical also in mGITR, mGITRB and mGITRC, but the -COOH terminal cytoplasmic domain is completely different. In fact, in GITRB, 11 base pairs of intron 4 are present, thus changing the open reading frame of exon 5. In GITRC, the short intron 4 (67 bp) is unspliced and the open reading frame of exon 5 is different from both mGITR and mGITRB. In T lymphocytes both splicing variants are expressed at low levels compared to mGITR, with GITRB being the less expressed. However, performing a library screening of a CD4 + T cell hybridoma, we found that GITRB was the predominant GITR gene splice variant, indicating that it may be expressed at high levels in a subpopulation of CD4 + cells [104]. The cytoplasmic domain of mGITRB contains, among others, a potential binding domain for p56 lck [104]. We also cloned 2 soluble spliced products of the GITR gene: mGITRD [104] and mGITRD2 (unpublished, GenBank accession number AF241229)( Figure 6). mGITRD and mGITRD2 mRNAs skip the exon 4, and thus these splicing variants lack the transmembrane domains and, as demonstrated experimentally, are soluble proteins (Nocentini et al., unpublished). As they contain the entire CRD1 and CRD3 motifs in mGITR together with a small part of the CRD4 motif (including Cysteine 1 and Cysteine 2) they may bind GITRL. Moreover, in peripheral T cell populations expression of mGITRD/D2 and mGITR are similar but mGITRD/D2 expression decreases following T cell activation [104]. All together, these observations suggest that these soluble proteins may function as decoy targets for GITR and interfere with GITR-GITRL interaction.
At present, only 3 splice variants of hGITR were described. The full length GITR (ortholog to mGITR) is called variant 1 and was originally cloned by Gurney et al. [4]. Variant 3, originally cloned by Kwon (we call hGITRv, Figure 5), lacks the 21 bp present at the 3' end of exon 4 [3], and consequently, 7 amino acids in the cytoplasmic domain, located inside motif 1. It has, however, an identical open reading frame of exon 5. Variant 2 (also called hGITRD) lacks exon 4 and thus the transmembrane domain (GenBank number AF241229 and NM_148901, Nocentini et al., unpublished).
In peripheral blood lymphocytes, hGITRD is expressed at very low levels. No other data are available on the expression of human variants.
None of the functional data available for GITR deal with the GITR splicing variants, although knowledge of their tissue distribution and function might be useful for understanding apparently contradictory results.
TRAF Pathways Leading to NF-κB Activation
There is clear evidence that GITR binds TRAF2, TRAF1 and TRAF3 but does not bind TRAF6 [3,4,83]. Recently, Hauer et al. demonstrated that GITR binds TRAF5, although the binding is relatively weak compared to CD40 [105]. TRAFs bind TNFRSF members as trimers through the amino-terminal RING and zinc-finger motifs [106]. Following TRAF binding, several pathways are activated, including MAP-kinase signaling and, finally, activation of NF-κB and AP-1 family transcription factors, which are deeply involved in the inflammation process. Kwon et al. demonstrated that overexpression of hGITR alone induces NF-κB activation, which is further increased when TRAF2 is coexpressed. Coexpression of TRAF1 or TRAF3 downregulates activation of NF-κB below the level obtained with GITR overexpression alone. Overexpression of dominant-negative TRAF2, lacking the RING and zinc-finger motifs, abolished the hGITR-induced effects. A similar result was obtained by overexpression of a dominant-negative NF-κB-inducing kinase (NIK), a transduction factor downstream of TRAF2 in the NF-κB signaling pathway. In conclusion, GITR activates the NF-κB pathway through TRAF2/NIK, and this activation is downregulated by TRAF3 and TRAF1 [3].
Surprisingly, Esparza et al. demonstrated that mGITR coexpressed with TRAF2 significantly reduces, instead of increasing, NF-κB activation, unlike what is observed with hGITR. Of note, the hGITR used by Kwon was hGITRv, which lacks 7 amino acid residues present in domain 1; these residues do not represent a TRAF-binding motif but may modulate TRAF function. Other possible reasons for the different effects of GITR-induced TRAF2 activation are a different GITR-activated transduction pathway in humans and mice, or different experimental settings. mGITR activation also seems to induce a relocalization of TRAF2 from the cytoplasm to the plasma membrane. TRAF4 antagonizes all TRAF2 inhibitory effects on GITR-induced NF-κB activation, relocating TRAF2 inside the cell. However, since TRAF4 does not bind GITR directly, it has been proposed to use an adaptor protein to interact with GITR [108]. Since TRAF4 has two nuclear localization sequences, GITR-mediated NF-κB activation may be due to TRAF4 shuttling between cytoplasm and nucleus [10].
There are two pathways of NF-κB activation: 1) a canonical pathway via IKKβ, resulting in the degradation of IκBα and nuclear translocation of p50/RelA heteromers, which requires the protein IKKγ (NEMO); 2) a non-canonical pathway signaled by NIK (NF-κB-inducing kinase), a kinase activating IKKα that operates independently of IKKγ; IKKα activation causes NF-κB2 (p100) degradation and nuclear translocation of p52/RelB. For example, CD40 has two signaling options for NF-κB activation: via TRAF6 (canonical) and via TRAF2/5-NIK (non-canonical). TRAF3 blocks this latter activation [105]. To elucidate the other possible transduction pathways activated by GITR stimulation and related to the non-canonical NF-κB pathway, the effect of GITR activation on TRAF5 was analyzed [109]. GITR triggering of TRAF5-deficient T cells elicits an inhibitory effect on NF-κB and on the MAPKs p38 and ERK, while JNK (c-Jun N-terminal kinase) is less affected. However, while overexpression of TRAF2 and TRAF4 was not sufficient to activate the NF-κB pathway, expression of TRAF5 was. TRAF5 deficiency provokes a downregulation of the anti-GITR-Ab-induced enhancement of antigen-dependent T cell proliferation, consistent with the reduced NF-κB, p38 and ERK activation; this is not observed following CD28 triggering [109]. These studies suggest that TRAF5 is the main transduction protein able to activate NF-κB following GITR triggering. However, lack of TRAF5 does not completely abolish GITR-dependent NF-κB activation, so a TRAF5-independent mechanism, like the one discussed above, must also be involved [10]. The role of NIK, a component of non-canonical NF-κB signaling, will be treated below.
Activation of NF-κB has been studied in effector and suppressor T cells. Studies on CD4+ and CD8+ peripheral T lymphocytes from GITR+/+ and GITR−/− mice demonstrated activation of the NF-κB pathway following TCR and GITR co-triggering. In particular, when CD4+CD25− T cells from GITR+/+ mice were triggered with anti-CD3 and anti-GITR, p42 MAPK phosphorylation was higher than in T cells from GITR−/− mice, indicating that GITR is involved in activation of the MAPK pathway [6]. Consequently, NF-κB is more activated in co-triggered cells from GITR+/+ mice than in those from GITR−/− mice. A similar result was obtained by Kanamaru et al. working with anti-GITR antibodies. In fact, nuclear fractions of p50, p65 and c-Rel are increased (3-fold) after (suboptimal) anti-CD3 + anti-GITR mAb treatment compared to anti-CD3 treatment alone [15].
Siva Pathway and Cell Death
GITR has no death domain in its structure, unlike Fas or TNFR1. Other structural homologues such as CD40, 4-1BB, OX40 and CD27 also lack death domains. The last of these, CD27, can nevertheless induce apoptosis by binding a protein carrying a death-domain homology region in its central portion, called Siva [101,110]. Several studies show that Siva is upregulated in various pathological conditions such as acute ischemic injury, Coxsackie virus infection, or anticancer treatment, as with the TIP30 metastasis suppressor, which inhibits metastasis of small cell lung carcinoma by predisposing cells to apoptosis [111]. Siva-induced apoptosis is caspase-dependent, with caspase 8 activated upstream of caspases 9 and 3 [111]. Moreover, Siva directly activates a pro-apoptotic Bcl-2 family member (Bid) and inhibits anti-apoptotic Bcl-2 family members (Bcl-2 and Bcl-xL).
Spinicelli et al. demonstrated by co-immunoprecipitation that GITR binds Siva, and that overexpression of GITR and Siva leads to apoptosis [101]. It is generally accepted that overexpression of TNFR members promotes low levels of activation, so it was assumed that co-transfection mimics GITR triggering. In addition, an increased level of apoptosis was observed after anti-mGITR Ab treatment [101]. In mGITR, SFQFPEEE (position 205-212) is the sequence responsible for Siva binding, with the PEEE sequence playing an important role [101]. In hGITR, the QFPEEE sequence (position 219-224) is conserved, providing further evidence that it has a functional role. Since CD27 and OX40 also bind Siva, sequence alignment supports the conclusion that P-[IE]-[QE]-E is the main binding motif for Siva [101]. The domain formed by the above-mentioned amino acids is also capable of TRAF binding. Thus, it is possible that, depending on the functional status of the cells, triggering of these receptors may lead to different effects. Indeed, in both mGITR and hGITR, potential Casein Kinase II (CKII) phosphorylation sites are present at the serines in position 199 (S199, mGITR), position 211 (S211, hGITR) and position 229 (S229, hGITR) (Figure 5). S199 in mGITR is conserved, corresponding to S211 in hGITR. Serine 199 (mGITR) is also necessary for Siva binding [101], and preliminary experiments suggest that S199 phosphorylation modulates TRAF2 and Siva binding in opposite ways (Nocentini et al., unpublished). Lu et al. studied CD4+ T cells from NIK-deficient mice; NIK is a transduction factor mainly belonging to non-canonical signaling and able to activate NF-κB [112]. Effector T cells of NIK−/− mice were normal for both activity and proliferation after TCR/GITR co-triggering. On the contrary, Treg cells were normally suppressive, but after TCR/GITR co-triggering they proliferated much more than in wild-type mice, suggesting the involvement of NIK in GITR costimulation of Treg cells. Since Siva binds to and inhibits NIK [113], Lu proposed that GITR-dependent activation of Siva is possible only when NIK is present [112]. Thus, NIK and Siva may function as a negative control of GITR co-triggered proliferation.
As previously discussed, NIK is related to TRAFs. In particular, TRAF3 is a negative regulator of NIK and promotes NIK degradation, while TRAF5 is an inducer of NIK activation and counteracts TRAF3-dependent NIK degradation [114]. Therefore, NIK levels depend on TRAF3/TRAF5 levels and activation, which may differ between Treg and effector T cells, explaining the abnormal response of Treg cells to GITR co-triggering. Moreover, Siva is required for TCR-induced apoptosis. In fact, Siva deficiency leads to resistance to anti-CD3-induced, but not Fas-induced, apoptosis, and in Siva-deficient cells the canonical and non-canonical NF-κB pathways are significantly increased, with high levels of nuclear p65 and RelB, respectively [115]. In conclusion, the levels of Siva, NIK, TRAF3 and TRAF5 seem crucial for the final effect of GITR triggering, including NF-κB activation.
PRMT1 Pathway
The cytoplasmic domain of GITR (next to the transmembrane domain) is also quite similar to a portion of BTG2 (Figure 7), a cytoplasmic protein belonging to the BTG/TOB family. Several members of this family play a role in negative control of the cell cycle and in differentiation [116]. BTG1 and BTG2 interact with protein arginine N-methyltransferase 1 (PRMT1) and positively modulate its activity [117]. PRMT1, the enzyme that catalyses most type-I methylation reactions, is involved in protein trafficking, signal transduction and transcriptional regulation, such as the transcriptional activation promoted by p53 [118]. Deletion studies have demonstrated that the DGSICVLYEE peptide is necessary for BTG1/2-PRMT1 binding (see the PRMT1 binding domain in Figure 7). However, this peptide is not sufficient, suggesting that other portions of BTG1/2 are also important, with the exception of the portion from residue 125 to the -COOH end of the proteins, which is unnecessary [119]. The high similarity between GITR and BTG2, shown in Figure 7, is concentrated in the PRMT1 binding domain and in a portion next to the -NH 2 terminus of the PRMT1 binding domain. Therefore, the above-mentioned study and the GITR/BTG2 alignment suggest that GITR could also bind PRMT1. Indeed, in vitro studies indicate that the cytoplasmic region of mGITR binds PRMT1, while an mGITR -COOH deletion mutant (deleted starting from Alanine 200) does not (Nocentini et al., unpublished data). PRMT1 deficiency is lethal, as the enzyme has a crucial role in modulating gene transcription and regulating protein interactions [120]. PRMT1 is located in both cytoplasm and nucleus, with a preference for the cytoplasm, and its localization is modulated by the concentration of its substrates and products [120]. Moreover, it is highly expressed in activated T helper cells [121]. One target of PRMT1 is NF-κB, whose activity is stimulated 10-fold by PRMT1 in transfection experiments [122]. For example, shear stress on the endothelial cell lines HUVEC and ECL-305 induces NF-κB activation and PRMT1 upregulation [123]. Another target of PRMT1 is the NF-AT interacting protein (NIP45), a protein bound by TRAF2 and TRAF5 and negatively modulated by them [124,125]. NIP45 modulates NF-AT activity and upregulates IFNγ and IL-4 gene transcription [121].
Although the functional significance of the GITR/PRMT1 interaction is presently unknown, GITR-PRMT1 binding suggests that the role of GITR in modulating the inflammatory/immune reaction may also be due to PRMT1 activation.
GITRL Structure and Signaling
GITR ligand is a member of the TNF superfamily (TNFSF) [3,4]. Murine GITR ligand (mGITRL) was discovered only in 2003 [2,5-7], while its human ortholog was discovered simultaneously with its receptor in 1999 [3,4]. The encoded protein is a type II transmembrane protein of 173 amino acids with an apparent molecular weight of 20 kDa. The protein has 51% homology with its human ortholog and comprises an N-terminal cytoplasmic domain of 21 amino acids, a transmembrane domain, and a -COOH terminal extracellular domain of 129 amino acids [2]. The mGITRL gene is 9.3 kb long and comprises 3 exons: exon 1 of 135 bp, exon 2 of 34 bp and exon 3 of 353 bp [7].
The Extracellular Domain of GITRL
Analysis of GITRL with the PFAM motif databank shows homology with the TNF family from amino acid 61 to amino acid 166 (PFAM PF00229, InterPro IPR006052), which structurally places it in the TNF superfamily. Based on its structure, and as for other TNF members, trimer formation is highly probable and is likely required for the ability to stimulate the GITR receptor. In the GITRL extracellular domain there is a hypothetical glycosylation site on Asn 74. Indeed, the experimental molecular weight (western blot) is 25-28 kDa, which may be explained by different states of glycosylation [5]. Mahesh et al. found a soluble form of GITRL, potentially due to shedding [58]. Indeed, dedicated software predicts various cleavage sites (not shown), but so far no experimental evidence of enzymatic cleavage has been published.
The Cytoplasmic Domain of GITRL
The cytoplasmic domain of GITRL shows a high homology with OX40L [95] (Figure 8). In both mouse and human, the GITRL gene is next to OX40L gene, suggesting that GITRL and OX40L derive from the duplication of an ancestral gene.
It has been hypothesized that GITRL is able to activate an intracellular signal following GITR binding [29,73,78,79]. This possibility is further suggested by the high homology of the cytoplasmic domain among mouse, human and other mammals (including Bos taurus, Canis familaris, Macaca mulatta and Pan troglodytes), which makes it possible to define a conserved motif: (E-x-M-P-L-x(2)-S-x(2)-Q-x-A-x-R-x(2)-K-x-W-L). This motif is peculiar to the GITRL gene. There is no evidence of domains for kinase or other enzymatic activities, and there are no experimental data regarding binding of transduction proteins. The only evidence for a potential transductional role of GITRL is the presence of a potential phosphorylation site on Ser10.
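As a concrete illustration of how such a conserved pattern can be used, the sketch below converts the PROSITE-style motif quoted above into a regular expression and scans a sequence for it. The translation rules and the test sequence are illustrative assumptions only; the sequence is a placeholder that merely embeds one match and is not the real GITRL cytoplasmic tail.

```python
# Minimal sketch: scanning a cytoplasmic sequence for the conserved GITRL motif
# quoted above, written as a PROSITE-like pattern. The test sequence is a
# placeholder that merely embeds one match; it is not the real GITRL sequence.
import re

PROSITE_MOTIF = "E-x-M-P-L-x(2)-S-x(2)-Q-x-A-x-R-x(2)-K-x-W-L"

def prosite_to_regex(pattern: str) -> str:
    """Translate the simple PROSITE-style pattern into a Python regex."""
    parts = []
    for token in pattern.split("-"):
        if token == "x":
            parts.append(".")
        elif token.startswith("x("):            # e.g. x(2) -> .{2}
            parts.append(".{" + token[2:-1] + "}")
        else:
            parts.append(token)
    return "".join(parts)

if __name__ == "__main__":
    regex = prosite_to_regex(PROSITE_MOTIF)      # E.MPL.{2}S.{2}Q.A.R.{2}K.WL
    demo_cytoplasmic_tail = "MSHEEMPLRTSVVQRAIRQQKAWLG"   # placeholder sequence
    hit = re.search(regex, demo_cytoplasmic_tail)
    print(regex, "->", hit.span() if hit else "no match")
```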
Splicing Variants of GITRL
mGITRL mRNA has a potential RNA destabilization signal consisting of AU-rich sequences near the 3' end, in the UTR. An isoform mRNA lacking this destabilization signal was identified in IL-10-treated bone marrow-derived DC, which express high levels of mGITRL mRNA. This alternative splicing derives from the substitution of a part of exon 3 (UTR region) with exon 4. Thus, mGITRL mRNA levels might also be controlled by post-transcriptional regulation [7]. No alternative splicing products giving rise to different GITRL proteins have been cloned so far.
CONCLUDING REMARKS
GITR functions as an activating molecule in several cells of innate and acquired immunity. It is triggered by GITRL, which can also deliver cytoplasmic signals, most of which have a pro-inflammatory meaning. Moreover, the GITR/GITRL system seems to play a crucial role in the extravasation process. Thus, it is not surprising that the GITR/GITRL system has a pro-inflammatory role in several in vivo models, including those of shock, acute inflammation, and allergic and autoimmune pathogeneses. This suggests that the use of fusion proteins or Abs may be helpful in controlling these diseases, but their role and mechanism of action are not well defined. For example, the GITR-Fc fusion protein binds GITRL with 2 potential effects: 1) inhibition of GITR activation by endogenous GITRL, 2) activation of GITRL. In some models the first effect seemed prevalent, as when pleurisy was diminished by GITR-Fc injection [23]. In other models, the second effect was prevalent, as when mice injected with GITR-Fc developed an inflammatory reaction [28]. Moreover, in some experimental models, such as in vitro stimulation of GITR−/− T lymphocytes or C. albicans infection, higher activation in the absence of GITR has been described [26,95]. Although the mechanism of this effect has been clarified only in part, the possibility that GITR/GITRL interaction regulates the activity of other cells in the system should be considered. Among these, a prominent role may be played by the regulation of APC function. In addition, in other models an anti-apoptotic and protective function of GITR on peripheral tissues was described [31,60], suggesting that GITR triggering does not always play a pro-inflammatory role. Thus, the final response consequent to GITR/GITRL triggering may well depend on the experimental model used, and most of the experimental designs warrant further studies, especially before planning the use of fusion proteins and Abs to modulate shock and the inflammatory process through the GITR/GITRL system.
Halal Industry in Indonesia; Challenges and Opportunities
The global halal industry has shown significant development, and Indonesia has become one of the countries with great potential. This study aims to identify opportunities and challenges in developing the halal industry in Indonesia. The research uses a qualitative approach; the data are secondary data drawn from library sources and are analyzed using the SWOT approach. The results show that the development of the halal industry in Indonesia covers several sectors, namely food and beverages, tourism, fashion, media and recreation, pharmaceuticals and cosmetics, and renewable energy. Based on the SWOT analysis, there are strengths, weaknesses, opportunities, and challenges in developing the halal industry in Indonesia. Thus, to improve the halal industry in Indonesia in the future, it is necessary to optimize the synergy of various elements, ranging from the community, industry players, government, financial institutions, associations, academics, and educational institutions to other related parties. ©2020 Journal of Digital Marketing and Halal Industry
Introduction
The global halal industry has experienced significant development (Hamid et al., 2019). This development is not limited to countries with a Muslim majority but also extends to Muslim-minority countries (Adha et al., 2017; Haryani et al., 2017). Halal labelling is a global concern, especially for product quality assurance and living standards (Anggara, 2017; Pratisti & Maryati, 2019). Muslims choose the guarantee of halal products and services as a form of adherence to religion, while for non-Muslims the reasons are guarantees of cleanliness, safety, and quality maintained from beginning to end (Nurrachmi, 2018). Sectors developing rapidly in the halal industry include food, finance, travel, fashion, cosmetics and medicine, media and entertainment, healthcare, and education (Ab Talib & Hamid, 2014).
The tendency to choose an Islamic lifestyle continues to grow (Tieman & Darun, 2015). This trend is not limited to Muslim consumers choosing food products that comply with Islamic requirements, such as meat, milk, and other processed products; it now extends to clothing, cosmetics, real estate, restaurants, hotels, Islamic banking, and integrated Islamic schools (Johan, 2018). The rapid growth of the Muslim population worldwide and in Indonesia, its young adherents, and the increasing purchasing power of young Muslim consumers have become a new wave affecting the business world (Tri Ratnasari et al., 2019). The State of the Global Islamic Economy report ranks Indonesia first among halal food consumers, at $154.9 billion (State of The Global Islamic Economy, 2018). However, the Indonesian government has not been able to maximize this market potential, because Indonesia is still ranked only 10th in the category of halal food producers.
Indonesia is the country with the largest Muslim population in the world (Norafni Farlina et al., 2013). Based on data from The Pew Forum on Religion & Public Life, 209.1 million Indonesians are Muslim, or 87.2 percent of the total population of Indonesia; this represents 13.1% of all Muslims in the world. Globally, the Muslim population is projected to increase from 1.6 billion people in 2010 to 2.2 billion people in 2030. Indonesia, the country with the largest Muslim population in the world, spent US$ 218.8 billion on the Islamic economic sector in 2017. Therefore, Indonesia has the potential to become the country with the largest halal product market in the world. However, Indonesia has not been able to maximize this market potential, as it has not been ranked in the top 10 in the category of halal food producers in the State of The Global Islamic Economy report (2018). Research on the halal industry has been widely carried out. A study by Ismoyowati (2015) states that one of the factors influencing people's consumption behaviour is the halal aspect; thus, the halal element becomes an essential part of the industry's future development. Waharini & Purwantini (2018) conclude that the halal food industry in Indonesia has enormous potential and even ranks first as a halal food consumer in the world. Mei et al. (2017) state that halal food supply chains, ranging from raw materials to ready-to-eat products, that is, from the agricultural process to food preparation, are needed to ensure halal integrity. In addition, to become a leading player in the halal industry, Indonesia must have a security system for halal food trade routes, commonly known as Security Strategies and Non-Tariff Barriers (NTBs) (Ratanamaneichat & Rakkarn, 2013). This study aims to identify the development of the halal industry in Indonesia; to identify the strengths, weaknesses, opportunities, and challenges of the halal industry in Indonesia; and to identify strategies for developing the halal industry in Indonesia.
Literature Review
The concept of halal is basically not only related to food and drink; the idea of halal-haram applies to various aspects of life (Haleem, 2017). Based on these verses, the definition of halal and haram is apparent: everything good for body, mind, and soul is lawful (halal), whereas everything that brings danger to body, mind, or soul is haram (Omar et al., 2012). For Muslims, halal-haram relates to its impact on life today and on life after death (Anismar et al., 2018). In other words, for Muslims, halal-haram is related to maintaining health and maintaining obedience. Meanwhile, for non-Muslims, halal-haram is only related to how aspects of life in this world are fulfilled, without regard to life after death (Park & Jamaludin, 2018).
Halal and haram in various matters relate to whether or not something is prohibited (Al-Kwifi et al., 2019). Thus, the halal industry is the activity of processing goods or objects in accordance with Islamic regulations (Baharuddin & Kassim, 2015). According to Law No. 33 of 2014 concerning Halal Product Guarantee, the State must provide protection and guarantees regarding the halal status of a product. Article 1 of the Law defines halal products as "goods and/or services related to food, beverages, medicines, cosmetics, chemical products, biological products, genetic engineering products, as well as used goods that are used, applied or utilized by the community." Meanwhile, according to the State of the Global Islamic Economy (2018), there are six sectors in the halal industry, including food and beverages, clothing, halal tourism, entertainment and media, and pharmaceuticals and cosmetics. In the future, a more precise definition is needed, in which the halal industry is not limited to halal products but also encompasses lifestyle. Based on the report of the Ministry of National Development Planning (2019), the halal industry sectors in Indonesia include food and beverages, tourism, fashion, media and recreation, pharmaceuticals and cosmetics, renewable energy, and the support of Islamic financial institutions.
Research Method
This study uses a qualitative approach, describing the development of the halal industry in Indonesia in terms of its strengths, weaknesses, opportunities, and challenges, as well as its development strategy. The data source is secondary data, collected through a library search of books, research reports, websites, and other relevant library sources. The data were analyzed descriptively using the SWOT analysis method.
Development of the Halal Industry in Indonesia
Indonesia is the country with the largest Muslim population in the world. This is one of Indonesia's strengths in becoming a leading player in the global halal industry. Potential sectors to be developed in Indonesia include food and beverages, tourism, fashion, media and recreation, pharmaceuticals and cosmetics, and Islamic finance.
The first sector is food and beverages. This sector is one of Indonesia's strengths, owing to the dominance of the Muslim population, especially in predominantly Muslim regions. The wide variety of regional foods and beverages is also a distinct opportunity for Indonesia. Based on data from the Central Statistics Agency (BPS), the food and beverage industry is one of the sectors with a significant contribution to the processing industry, as can be seen from its contribution to the Gross Domestic Product. In addition, the number of halal-certified products in Indonesia has also increased.
Source: LPPOM-MUI (processed)
The second sector is tourism. Indonesia offers a large number of tourist destinations with various choices. This is because Indonesia is a country of 17,508 islands and therefore has a variety of potential tourist attractions, both on land and at sea; its appeal also lies in the richness of local culture. Besides general tourist destinations, religious-based tourism is also quite developed, for example the Walisongo pilgrimage, mosque tours, and various other halal tourism areas.
To support this sector, it is necessary to have good transportation facilities (air, sea and land), hotels and accommodations, restaurants and cafes, as well as travel and tours.
The third sector is fashion. The development of the fashion industry in Indonesia began in 2010 and continues to grow. This is indicated by high market demand, which has given rise to designers, exhibitions, and events themed around Islamic fashion. Globally, Indonesia is even ranked 2nd in the top-10 indicator for the Muslim fashion sector.
The fourth sector is media and recreation, currently one of the creative economy subsectors with potential. The growth of film, animation, and video has increased significantly, partly as a result of the Covid-19 pandemic. However, halal-based media and recreation is not yet optimal, as indicated by the lack of public interest in religious-based films.
The fifth sector is pharmaceuticals and cosmetics, a sector with good potential, since pharmaceuticals and cosmetics have become fundamental needs in the current era. Based on the State of the Global Islamic Economy Report 2018, Indonesia is the 4th largest consumer of pharmaceutical products. In the cosmetics sector, Indonesia has the second largest cosmetics consumption after India. Consumption of both continues to increase from year to year.
Source: Global Islamic Economy Report (2018), processed
Figure 2. Global Muslim Consumption of Pharmaceuticals and Cosmetics in US$
The last sector is Islamic finance. This sector is one of the determining factors for the smooth flow of funding and capital, especially for halal industry players, because developing the halal industry requires financing that is easy and inexpensive as well as effective and efficient operations. The Islamic financial institutions that can be developed and are expected to contribute to the progress of the halal industry in Indonesia include Sharia banking, Islamic capital markets, Sharia non-bank financial institutions, philanthropic institutions, and other financial institutions.
SWOT Analysis of the Halal Industry in Indonesia
The right strategy is needed to develop the industry so that Indonesia's opportunity to become a leader in the global halal industry can be realized. To achieve this, identification needs to start from strengths, weaknesses, opportunities, and challenges so that the strategies implemented can run optimally. The SWOT analysis of the halal industry in Indonesia can be seen in Table 2.
Table 2. SWOT Analysis of the Halal Industry in Indonesia

Strengths:
- Government support for the halal industry
- A long-standing certification body
- Significant halal product campaigns
- Significant halal trends across various sectors, not only among Muslims but also non-Muslims, especially in the food and beverage sector
- Various institutions and higher education institutions with the potential to become centres of innovation
- Sharia economic and financial developments

Weaknesses:
- Low awareness among industry players and the public of the importance of halal aspects
- Lack of cooperation within the same industry sector
- A policy framework and product guarantee protection that are not yet established
- Lack of halal-certified companies

Opportunities:
- The largest Muslim population in the world
- Increased demand for halal products and services
- The ASEAN Free Trade Area (AFTA)
- Global halal trade potential
- The use of IT in online trading
- Investment opportunities in halal-certified industries
- Various research studies that encourage the use of halal products

Threats:
- Various countries, both Muslim and non-Muslim, are developing the halal industry
- Product quality that is not yet competitive
- The prevalence of non-halal products and the circulation of non-halal materials
- Low awareness of the use of non-halal materials, especially among small-scale producers
- The absence of uniform halal standardization
- SARA (ethnic, religious, racial, and inter-group) issues are still quite strong
Development Strategy for Halal Industry in Indonesia
Based on the SWOT analysis of the halal industry in Indonesia, the strategy to optimize the halal industry is to strengthen the various sectors and improve the synergy of all elements. The strategies that can be carried out include the following:
a. Improving socialization and education about the importance of halal certification for the Indonesian public and for industry players, both small and large.
b. Strengthening legal certainty.
c. Optimizing government plans and disseminating information on halal industrial areas to improve the quality of Indonesian halal products.
d. Increasing the quality of Indonesian halal industry products so that they can compete in domestic and international markets.
Conclusion
Based on the results and discussion, it can be concluded that the development of the halal industry in Indonesia includes several sectors, namely the food and beverage sector, tourism, fashion, media and recreation, pharmacy and cosmetics, and Islamic finance. Based on SWOT analysis, it was found that there are strengths, weaknesses, opportunities, and challenges in the development of the halal industry in Indonesia. Development strategies focus on increasing the role of stakeholders and the community in developing the industry and optimizing policies by maximizing product quality.
Recommendation
Based on these results, future improvement efforts are needed, including:
a. The government should be more fully supportive, especially through policies that encourage this sector to grow and develop, so that Indonesia can later become a leader in the halal industry.
b. Actors in the halal industry sector need to build synergy among industry players, so that the progress of this sector is driven by a spirit of togetherness and mutual support and they can later compete at the global level.
c. The public should pay more attention to the halal aspects of the various goods they consume, and also provide constructive criticism and suggestions for stakeholders in this industry.
d. Subsequent research can explore halal industry stakeholders in greater detail.
Presence of bla CTXM-1, bla CTXM-9, and bla TEM-1 Genes in Extended-spectrum β-lactamase-producing Escherichia coli Isolates from Hospital Wastewater
RESULTS: Conventional bacterial identification methods and the VITEK® 2 Compact system results showed that both influent and effluent samples were positive for ESBL-EC at 33.3% and 16.7%, respectively. Multiplex PCR results revealed that the ESBL-EC isolates carried the bla CTXM-1, bla CTXM-9, and bla TEM-1 genes. Multi-drug resistance was observed among all ESBL-EC isolates, with resistance being highest against ampicillin, cefuroxime, ceftazidime, ceftriaxone, cefepime, piperacillin, and aztreonam. CONCLUSION: As the study revealed the presence of ESBL-producing bacteria, efforts must be made to ensure prudent antimicrobial use, with possible emphasis on antibiotic rotation accompanied by intensified infection prevention and control in hospital settings.
Introduction
Antimicrobial resistance (AMR) is a major concern to human health, contributing to approximately 700,000 deaths annually worldwide from infections due to resistant bacteria. This tally is expected to increase to over 10 million by 2050.(1) The extensive and indiscriminate use of antibiotics in recent years has substantially increased the number of bacterial pathogens resistant to drugs. With fewer antibiotic options available, the emerging difficulty in treating infections is a cause for concern.(2) Beta (β)-lactams are among the most relevant classes of antibiotics to treat a number of clinically important infections.(3) Pathogenic microorganisms carrying extended-spectrum β-lactamase (ESBL), which have rising prevalence globally, threaten antibiotic therapy.(4) ESBL enzymes cause resistance by deactivating most β-lactams, particularly penicillin, the first to third generation cephalosporins, and aztreonam, through their production of cefotaxime-Munich (CTXM), temoniera (TEM), and sulfhydryl variable (SHV) β-lactamases, which are encoded by the bla CTXM, bla TEM, and bla SHV genes, respectively.(5,6) Among ESBL-producers are ESBL-producing Escherichia coli (ESBL-EC), which are selectively proliferated in the human gut, excreted through feces, and deposited through wastewater lines. The discharge of human waste from sites such as health facilities may thus facilitate the emergence of AMR.(7) The bla CTXM, bla TEM, and bla SHV genes are the most commonly isolated genes among ESBL-EC isolates in Asia. The leading cause of rapid spread is the latent ability of each gene for transmission of the resistance genotype, which results in a widespread range of contamination.(8) Therefore, wastewater must undergo a treatment process, in which liquid and solid wastes are reduced into stable, non-polluting, and non-infectious matter.(9) When community sanitation and sewage disposal and treatment are not present, efficient, and effective, the chances of community outbreaks due to hospital organisms are extremely high.(10) Thus, this study aimed to determine the antimicrobial susceptibility profile and the presence of the bla CTXM-1, bla CTXM-9, bla TEM-1, and bla SHV-1 genes in ESBL-EC from wastewater of selected hospitals in Manila and Quezon City, the Philippines.
Sample Collection and Preparation
This study employed a cross-sectional study design and was conducted over 6 months. Study sites included tertiary hospitals in Manila and Quezon City with a wastewater treatment facility. Upon approval from the hospital administrations, water samples (50 mL) were collected 5 mm below the surface using the "grab sampling" technique in sterile amber glass containers from two sampling points: the influent and effluent tanks. Triplicate samples (n=150 mL) from each tank were pooled aseptically, properly labeled, and transported to the laboratory at approximately 5±2°C. Samples were processed within 2 h after collection, maintaining standard procedure.(10) The study protocol has been exempted from review by the Trinity University of Asia-Institutional Ethics Review Committee (No. TUA-IERC-015-R01).
Screening for Lactose-fermenting Colonies
Wastewater samples were processed using serial dilutions and cultured on MacConkey agar supplemented with 2 µg/mL of cefotaxime (CTX) to capture presumptive ESBL-producing bacteria.(11) After overnight incubation at 35°C, lactose-fermenting colonies were selected from each plate sample and subcultured thrice in trypticase soy agar (TSA) for purification. Purified colonies were inoculated in MacConkey and eosin-methylene blue agars.(12,13)
Identification of E. coli and ESBL Production With Conventional Method
Colonies of presumptive E. coli were Gram stained and inoculated aerobically into triple sugar iron (TSI) agar, sulfide indole motility (SIM) medium, Simmons citrate agar (SCA), lysine iron agar (LIA) slant, and methyl red Voges Proskauer (MRVP) media at 35°C for 24 h.(14,15) The disk diffusion method on Mueller-Hinton agar (Difco, Franklin Lakes, NJ, USA) was performed in accordance with the Clinical and Laboratory Standards Institute (CLSI) 2020 guidelines.(16) Following an overnight subculture on stock TSA plates, six E. coli colonies were inoculated in 5 mL of Mueller-Hinton broth (Oxoid, Hampshire, UK) and incubated for 6 h at 37°C. Each colony was compared to 1.5×10^8 CFU/mL, or the 0.5 McFarland standard. The inoculum was spread on a 25 mL Mueller-Hinton agar plate (Oxoid) using an L-shaped spreader. The plates were incubated at room temperature for approximately 15 min, and then antibiotic discs were placed onto the plate surface (≤5 antibiotic discs per plate). The plates were then further incubated for 18 to 24 h at 37°C.(17) The ESBL phenotype was determined using antibiotic susceptibility discs (Oxoid), wherein zone diameters of CTX (30 μg) ≤27 mm and ceftazidime (CAZ) (30 μg) ≤22 mm were considered positive.(18) Results were recorded through measurement of the inhibition zone diameter and interpreted according to standard measurement tables.(16)
Identification of E. coli and ESBL Production With VITEK® 2 Rapid Method
Each colony was selected and confirmed up to species-level identification using the ID-GNB card of the automated VITEK® 2 Compact system (bioMérieux, Durham, NC, USA).(19,20) The cards were manually put inside the VITEK® 2 reader-inoculator module after being vacuum-inoculated with a 0.5 McFarland suspension of the organism from a sheep blood agar plate that was incubated for 18 to 20 h. Fluorescence was assessed every 15 min for 3 h.(21) Susceptibility tests were performed using the VITEK® 2 system with the 64-well AST-N261 cards in accordance with the manufacturer's instructions and CLSI guidelines.(16) To determine the minimum inhibitory concentration (MIC) of each antibiotic, growth curves were generated using the signal produced every 15 min for 18 h and then compared to the controls. An algorithm created specifically for each antibiotic was used to perform the calculation. Specifically, CTX and CAZ were used in the ESBL test, either individually (at 0.5 µg/mL) or in combination with clavulanic acid (4 µg/mL). Once the growth control well had achieved a predetermined threshold (4-18 h of incubation), analysis of all wells was carried out automatically through the VITEK® 2 system. The presence of ESBL was shown by a specified decrease in the growth of the CTX or CAZ wells containing clavulanic acid as compared to the level of growth in the well with the cephalosporin alone.(5) Listed in Table 1 are the different antibiotics used in this study with their Access, Watch, Reserve (AWaRe) classification. This classification is a tool which emphasizes the importance of the appropriate use of various antibiotics.(22) Six antibiotics were used in disk diffusion, while 20 antibiotics were used in the VITEK® 2 Compact system in accordance with the criteria of the Clinical and Laboratory Standards Institute (CLSI) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST) (Table 1).(16,23) For quality assurance of the test, E. coli ATCC 25922 and E. coli NCTC 13353 were used as negative and positive controls, respectively.(17,24)
DNA Extraction, Concentration, and Purification
Genomic DNA extraction using the Presto™ Mini gDNA Bacteria Kit (Macherey-Nagel, Düren, Germany) was performed on all E. coli isolates after inoculation in tryptic soy broth (TSB) media and incubation at 37°C for 18 h.(25) A NanoDrop 2000 spectrophotometer (Thermo Scientific, Waltham, MA, USA) was used to measure the yield and absorbance and to calculate the concentration of nucleic acids (260 nm) and purified proteins (280 nm).(26) The concentration of each selected ESBL-EC isolate was standardized and recorded.
Detection of bla CTXM, bla TEM, and bla SHV Genes
The phenotypically confirmed ESBL-EC were subjected to multiplex polymerase chain reaction (PCR) in order to detect the presence of the bla TEM, bla SHV, and bla CTXM genes.(25) Primer sequences were used for the detection of the ESBL genes (Table 2).(27) The reaction mixture included reverse and forward primers (1 μM each), 5 μM Firepol Master Mix (Solis Biodyne, Tartu, Estonia), DNAse-free water, and the DNA template. Amplification was carried out as follows: initial denaturation at 94°C for 10 min; 30 cycles of 94°C for 40 s, 60°C for 40 s, and 72°C for 1 min; and a final elongation step at 72°C for 7 min.(27) A 100 bp DNA ladder was used for DNA fragment sizing in agarose gel electrophoresis. A no-template control was utilized as a negative control, while E. coli NCTC 13353 was used as a positive control.(28) The DNA amplicons were observed after running at 100 V for 1 h on a 2% agarose gel containing ethidium bromide.(27)
Phenotypic Detection of ESBL-EC
Out of the 12 hospitals, six isolates were identified as E. coli by conventional methods (Table 3) and the VITEK® 2 Compact system. Four of these were from influent water, while the remaining two were from effluent water. Conventional bacterial identification methods and the VITEK® 2 Compact system results showed that both influent and effluent samples were positive for ESBL-EC at 33.3% and 16.7%, respectively.
All E. coli isolates were determined to be ESBL-producing by the disk diffusion method and the VITEK® 2 Compact system. By the disk diffusion method, all isolates were resistant to more than half of the six antibiotics tested, with all being resistant against CTX and CAZ (Table 4). By the VITEK® 2 Compact system, most isolates presented resistance to more than half of the twenty antibiotics. Resistance was most common to AMP, PIP, CXM, CAZ, CRO, FEP, and ATM, while no resistance to AMK and CST was observed (Table 5). Note that VITEK® 2 only analyzed specific antibiotics on a certain colony. One major limitation of the VITEK® 2 system in evaluating the susceptibilities of Gram-negative bacteria is its inability to provide the MICs of some agents.
ESBL-encoding Genes (bla CTXM, bla TEM, and bla SHV) in E. coli from Hospital Wastewater
Out of the four E. coli isolates from the influent tank which showed phenotypic resistance, two isolates carried the bla CTXM gene (Figure 1A), while three isolates carried the bla TEM gene (Figure 1B). Particularly, there were two isolates (H05 In and H07 In) that carried the bla TEM gene only, while one isolate (H06 In) carried the bla CTXM-1 gene only. There was one isolate (H04 In) that carried both the bla TEM and bla CTXM-1 genes. The bla SHV-type gene was not detected (Figure 1B). One isolate was negative for the three β-lactamase gene primers.
On the other hand, both ESBL-EC isolates from the effluent tank carried the bla CTXM gene, with one isolate (H06 eff) carrying the bla CTXM-1 gene and the other (H01 eff) bla CTXM-9 gene (Figure 1C).
Discussion

The findings of the study revealed that all ESBL-EC presented multidrug resistance, which is common for these organisms, as plasmids that carry genes coding for ESBLs also commonly contain other genes that encode mechanisms of resistance to other antimicrobial drugs such as quinolones, aminoglycosides, and chloramphenicol.(30,31) Therefore, infections caused by ESBL-producing Enterobacteriaceae are difficult to treat, which may entail a significant increase in the burden of nosocomial infections.(32) Moreover, resistance was observed against each "Access" group antibiotic with the exception of AMK. The Access group includes essential first- or second-choice antibiotics that have activity against a wide range of pathogens while presenting a lower potential for resistance than other antibiotics.(22) On the other hand, at least one isolate conferred resistance against each of the included "Watch" group antibiotics. The Watch group includes antibiotics with a higher potential for resistance and contains most of the highest-priority agents among the Critically Important Antimicrobials for Human Medicine.(22) For "Reserve" group antibiotics, high resistance was observed against ATM, while all isolates were susceptible to CST. When all other treatments have failed or are inappropriate, the Reserve group of antibiotics should be used as a "last resort" option for very specific people and situations.(22) Drug resistance usually entails prolonged hospital stays and increased treatment costs, with an increased risk of treatment failure from inappropriate antibiotic therapy.(31) It is also concerning that the detected organisms were resistant to CIP and GEN, which are considered alternative treatment regimens to carbapenems.(31) The production of β-lactamase enzymes among Enterobacteriaceae is driven by selective pressure due to many interacting factors, including clinical and environmental factors, human activities, and the indiscriminate use of antimicrobials. This highlights the requirement for more stringent hospital policies to maximize the proper use of antibiotics and reduce antibiotic resistance,(33,34) with emphasis on the appropriate use of third to fourth generation cephalosporins and possibly antibiotic cycling. Its components must include: 1) multidisciplinary coordination between hospital administrators, clinicians, infectious disease specialists, infection control teams, microbiologists, and hospital pharmacists; 2) regulation of prescription by consultant specialists; 3) monitoring and auditing of drug use; 4) surveillance and reporting of resistance patterns of the hospital flora; and 5) proper management of hospitals, especially during the transit of patients.(8,34,35) Surveillance for nosocomial infections by ESBL-producing organisms may be focused on oncology, burns, intensive care units, and neonatal wards, where these are commonly associated.(31) Many studies in clinical and/or environmental settings showed that among the three ESBL types, the dominating type is the bla CTXM gene, while others found bla SHV to be the most prevalent.(28,32,36,37) Still, other studies have revealed that bla SHV was the least common among Enterobacteriaceae.(38) In this study, the environmental wastewater samples did not
reveal the bla SHV gene. The differences in the predominant type of ESBLs are understandable, as they may vary per location.(8) Generally, social, health, and environmental factors have been identified to be connected to variations in the abundance of AMR genes.(39) In contrast with the results in the untreated wastewater, findings in the treated wastewater agree with more studies, in which the bla CTXM gene was the predominant type.(28,32,36) Considering its widespread presence in China and India, bla CTXM genes have been speculated to be the most frequent type worldwide.(40) Wastewater is treated to make it suitable for reuse and release into the environment without harming the ecosystem. In terms of water treatment, all hospitals in the study employed only chlorination, with most disposing their effluent to drainage and one disposing to a river. Only two hospitals recycled their effluents. Although chlorination can kill bacteria, antimicrobial resistance genes may survive and spread to different bacteria via horizontal gene transfer from a biofilm in the effluent tank. The ESBL resistance gene of E. coli has been considered genetically diverse and highly mobile. This may explain why ESBL-EC was detected in effluent despite absence in the influent. As ESBL-EC organisms were still observed in the effluents of two hospitals, there is a need to strengthen the implementation of efficient water treatment facilities across all hospitals.(40) Further, there is a need to study the content of ESBL-EC in wastewater recycling to determine safety before disposal into sewers or rivers.
Additionally, the findings of the study highlight the need for environmental surveillance.(34) Several studies have documented the widespread occurrence of AMR genes in hospital wastewater in spite of treatment, which contributes to the spread of these emerging pollutants in the environment. Resistance may further develop in the environment when these organisms are mixed with other waste and chemicals and may propagate freely to find its way back to humans.(28,32,36) According to the Philippine Clean Water Act of 2005, hospitals should provide water utilities for septage management services. Local Government Units (LGUs) are required to offer such services in the absence of a water utility, either independently or under a service agreement. In some cases, private organizations ought to offer these services in place of or concurrently with LGU or water utility activity. Sewage management is a practical first step for most utilities and LGUs because sewerage systems are scarce and expensive to build and run. The National Building Code of the Philippines (RA 6541) and the Revised National Plumbing Code of the Philippines also have regulations addressing proper septic tank design, operation, and maintenance, in addition to the legislation mentioned.
Septage management includes comprehensive programs for managing septic tanks and wastewater treatment. A comprehensive septage management program includes septic tank design and construction; septic tank inspection, which includes monthly testing of wastewater samples for the presence of pathogenic microorganisms such as antibiotic-resistant bacteria and for other operating conditions (pH, free available chlorine, hydraulic retention time, solid retention time, biomass concentration); procedures for septic tank desludging and septage transportation; record keeping and reporting; and septage disposal. Thus, there is a need for strengthened collaboration with the environmental sector, whose role in combating AMR has long been recognized through the One Health approach.(34) Increased monitoring and surveillance for these microorganisms, complemented by timely reporting and feedback, must also be given importance to detect possible outbreaks in clinical settings. Environmental surveillance must also be institutionalized to monitor the emergence of other resistant organisms of public health concern.
Conclusion
The study showed the presence of ESBL-producing bacteria. Both influent and effluent wastewater samples may be a source of ESBLs and the antibiotic resistance genes bla CTXM-1, bla CTXM-9, and bla TEM-1, and present a potential environmental health risk. As the study revealed a high positivity rate of ESBL-producing E. coli, efforts must be made to ensure the prudent use of antimicrobials, with possible emphasis on antibiotic rotation accompanied by intensified infection prevention and control in hospital settings.
Using a Deep Learning Method and Data from Two-Dimensional (2D) Marker-Less Video-Based Images for Walking Speed Classification
Human body measurement data related to walking can characterize functional movement and thereby become an important tool for health assessment. Single-camera-captured two-dimensional (2D) image sequences of marker-less walking individuals might be a simple approach for estimating human body measurement data which could be used in walking speed-related health assessment. Conventional body measurement data of 2D images are dependent on body-worn garments (used as segmental markers) and are susceptible to changes in the distance between the participant and camera in indoor and outdoor settings. In this study, we propose five ratio-based body measurement data that can be extracted from 2D images and can be used to classify three walking speeds (i.e., slow, normal, and fast) using a deep learning-based bidirectional long short-term memory classification model. The results showed that average classification accuracies of 88.08% and 79.18% could be achieved in indoor and outdoor environments, respectively. Additionally, the proposed ratio-based body measurement data are independent of body-worn garments and not susceptible to changes in the distance between the walking individual and camera. As a simple but efficient technique, the proposed walking speed classification has great potential to be employed in clinics and aged care homes.
Introduction
Walking ability is an important consideration during routine therapy treatment and rehabilitation following surgery and is crucial for human mobility, which enables predictions of quality of life, mortality, and morbidity [1,2]. Walking speed is a simple, rapid, and easily obtained assessment tool [3], but significantly affects all gait parameters, such as cadence, stride length, stance, and swing durations [4,5]. For a long time, walking speed has been used as an independent screening indicator of demographic characteristics (e.g., age and sex), functional activities (e.g., kinematic and kinetic patterns and spatiotemporal parameters), and various physical outcomes (e.g., activity-related fear of falling) in normal controlled individuals (e.g., healthy) and patients (e.g., Parkinson's disease and osteoarthritis) [6][7][8][9][10]. Additionally, the functional movement performance of individuals with neuromuscular conditions, such as post-stroke and cerebral palsy, can be assessed based on their walking speed, which might have an impact on gait [9,10]. The gait speed of an individual with a physical impairment might be affected by changes in walking conditions, which do not appear to affect the gait speed of healthy individuals. For example, at similar walking speeds, patients with diseases such as Alzheimer's disease exhibit a slower walking gait speed than healthy controls, and this difference might be a good indicator for classifying patients and healthy controls [11]. Furthermore, a slow walking speed in elderly individuals (>60 years) predicts increased morbidity and mortality [12]. Walking speed provides a significant contribution to health assessment, including changes in spatiotemporal, kinematic, and kinetic parameters during the gait cycle [13]. Therefore, the efficient classification of walking speed could play a vital role in the scrutinization of normal and abnormal gait measurements, particularly in gait-based assessments during a rehabilitation process, and might thus help improve clinical care and our understanding of gait balance.
Spatiotemporal gait data (e.g., walking speed, swing phase time, and double stance time) are the second most often used of the three main parameter families, the other two being the kinematic and kinetic walking gait parameters [14]. Spatiotemporal gait data are multidimensional time-domain sequences representing the evolution of body posture during a gait cycle [15]. Additionally, human gait is a form of cyclic motion regardless of the walking speed, and as a consequence, the time-domain sequences estimated from this motion contain periodic and/or quasi-periodic patterns [16]. Collected sequential spatiotemporal gait data are used in gait assessments where the periodic and/or quasi-periodic patterns are classified as normal (typical) or anomalous (atypical) gait in different neuromuscular conditions [10,17]. Typically, sequential spatiotemporal gait data from a walking individual are collected by optoelectronic motion capture systems using reflective marker-based (attached to the individual's body) and/or marker-less approaches [18]. These approaches for gait recognition mostly rely on two-dimensional (2D) and three-dimensional (3D) gait analysis methods. Both marker-less and marker-based approaches can be applied independently or in combination and can be widely used for gait measurement using 2D and 3D video systems, but marker-less technologies have more potential than marker-based approaches due to their advantages regarding cost, time, and the reduced need for highly skilled operators. In addition, although 3D marker-based and/or marker-less techniques are well known for the analysis of walking gait [19,20], 3D approaches have many drawbacks, such as the need for multiple cameras with high image resolution, which usually results in a longer computational time, specific repeated calibration procedures, a complex process for time synchronization between cameras, and the need for a large space to record gait data [21]. Therefore, a 2D technique with a less complicated camera setup (such as a single camera) is an alternative approach for the efficient assessment of walking gait. Notably, any sequential spatiotemporal gait data can also be estimated from a single-camera-based marker-less 2D video system employing lateral-view video of a walking individual because continuous 2D image sequences from the video can show the continuous body postures of human gait [21,22]. This 2D approach is currently gaining popularity as an alternative to the marker-based optoelectronic system due to its simplicity, rapidity, and ability to potentially provide more significant assessments of human movement in research and clinical practice [23][24][25][26].
Several research studies have investigated walking gait (particularly speed-related parameters) using a 2D setup. For example, Castelli et al. estimated three types of walking speed (i.e., slow, comfortable, and fast) using body measurement data from walking individuals, such as the unilateral joint kinematics of the individual's hip, knee, ankle, and pelvic tilt [21]. However, their extracted body measurement data highly depended on the garments worn by the walking individuals (i.e., socks and undergarments), which were used as segmental markers for tracking foot and pelvis parameters in the image [21]. A study conducted by Verlekar et al. estimated walking speed using the lower-body width of the walking individuals [22], but a walking individual's body measurement data, such as height, mid-body width, lower-body width, and body area, in an image show inconsistent variations depending on the distance between the individual and the camera in different environments (e.g., indoor and outdoor settings) [27]. Thus, the results show that body measurement data that depend on the distance between the walking individual and camera might produce varying walking speed patterns for the same individual due to the camera configuration [22]. One possible solution for this limitation could be to scale or resize the image sequences of the video to normalize the walking individual's body measurements in each image, but this process might cause visual distortion and degrade the image quality due to squeezing or stretching [28]. Another possible solution could be to use distance-independent body measurement data, which would produce stable walking speed patterns regardless of the distance between the walking individual and the camera [27]. A study by Zeng and Wang proposed ratio-based data (such as body height-width ratio data), which are stable regardless of the distance between the walking individual and the camera [27]. In addition to body height-width ratio data, the study [27] also utilized inconsistent body measurements, such as the mid-body width, lower-body width, and body area data, to establish the walking speed pattern used for walking speed classification. The above-described studies indicate a further need for establishing ratio-based body measurement data that (a) can be extracted from 2D image sequences without the use of any marker, (b) are consistent regardless of the distance between the participant and camera in both indoor and outdoor environments, and (c) exhibit consistent periodic (or quasi-periodic) walking patterns suitable for walking speed classification. However, to our knowledge, this walking gait-related classification task has not been directly investigated using any computational intelligence methods.
Artificial intelligence (AI) techniques, such as machine learning and deep artificial neural network methods, have been successfully applied and have provided new predictive models for complex gait analysis [29,30]. Therefore, a good classification method is needed for the classification of any gait-related task (e.g., walking speed patterns) with reliable and good accuracy [15]. Among the published studies on walking speed estimated from lateral-view 2D images of marker-less walking individuals [21,22,27], only that conducted by Zeng et al. directly investigated an individual's walking speed classification; these researchers employed the radial basis function (RBF) neural network to solve the classification task [27]. More recently, Khokhlova et al. reported a strongly predictive performance model with a large capacity to learn, the ability to capture long-term temporal dependencies, and the capacity to use variable-length observations that was developed based on the recurrent neural network (RNN)-based deep learning (DL) method long short-term memory (LSTM) for sequential data classification [15]. Additionally, some other image-related classification tasks, such as handwriting recognition [31], speech recognition [32], and text classification [33], have been performed using LSTM and its successor methods (e.g., bidirectional LSTM (biLSTM) and convolutional neural network LSTM (CNN-LSTM)). In support of this, LSTM approaches are also currently gaining popularity for clinical gait classification tasks, such as pathological [15] and impairment gait classification [34], due to their promising applicability in labeling sequential gait data. Furthermore, previous research studies have shown that biLSTM exhibits better classification accuracy than LSTM [35]. In general, both the biLSTM and LSTM DL methods need large datasets for training and validation purposes to obtain good accuracy and to avoid data overfitting and poor generalization [36,37]. However, there is a lack of available sources (i.e., databases) providing large clinical gait datasets, particularly of lateral-view 2D images of marker-less individuals walking over different ranges of controlled walking speed trials [38,39]. More specifically, there is a limited number of datasets consisting of a small number of subjects with lateral-view image sequences, few variations among controlled walking speed trials, and data collected in limited environments (e.g., either indoor or outdoor settings) that exhibit restricted licensing for public use [21,40]. To overcome this complexity, our study used large gait-related datasets from two publicly available state-of-the-art databases, namely, the Osaka University-Institute of Scientific and Industrial Research (OU-ISIR) dataset A [41] and the Institute of Automation at the Chinese Academy of Sciences (CASIA) dataset C [42]. These publicly available image datasets from these two databases were recorded in large populations using lateral-view videos of walking individuals obtained using a single 2D camera (marker-less) and exhibit substantially varied controlled walking speed trials. The gait data from OU-ISIR dataset A and CASIA dataset C were obtained in indoor (treadmill) and outdoor (overground) settings, respectively. A number of previous studies have used these two datasets for vision-based gait recognition and obtained reliable performance [43][44][45]. One prominent study by Verlekar et al. [46] suggested that images from both datasets could be a possible solution for studies on walking speed pattern recognition that need a large population dataset of lateral-view 2D images of marker-less walking individuals. However, to our knowledge, walking speed patterns have not been previously classified using these datasets and state-of-the-art computational intelligence techniques, such as the biLSTM DL algorithm, to obtain the most reliable and highest accuracy.
The aim of this study was to investigate potential ratio-based body measurement data that (a) can be extracted from lateral-view 2D image sequences without any marker, (b) are consistent with respect to the distance between the participant and camera in both indoor and outdoor settings, and (c) exhibit consistent quasi-periodic walking patterns that are suitable for walking speed classification. Additionally, this study aimed to investigate whether the walking speed patterns obtained from ratio-based body measurement data could be utilized to classify walking patterns in terms of speed using the DL model and thereby obtain reliable accuracy. To achieve these aims, this study proposed five ratio-based body measures: (i) the ratio of the full-body height to the full-body width, (ii) the ratio of the full-body height to the mid-body width, (iii) the ratio of the full-body height to the lower-body width, (iv) the ratio of the apparent to the full-body area, and (v) the ratio of the area between two legs to the full-body area. This study hypothesized that these proposed five ratio-based body measurements exhibit the above-detailed qualities. Additionally, these five ratio-based body measurement data could be used to classify an individual's walking speed pattern based on three speeds-slow, normal, and fast-by adopting the biLSTM model with a mean classification accuracy greater than 80% in indoor settings (using a treadmill, i.e., OU-ISIR dataset A) and greater than 75% in outdoor settings (overground, i.e., CASIA dataset C).
Participants and Datasets
In this study, 2D marker-less motion image sequences in the lateral view from 187 participants were considered to classify the walking speed patterns at three speeds: slow, normal, and fast. These image sequences were obtained from OU-ISIR dataset A [41] (obtained using an indoor treadmill) and CASIA dataset C [42] (obtained in outdoor overground settings) and separated to obtain our own datasets based on the walking speed patterns, namely, Dataset 1 (indoor trials) and Dataset 2 (outdoor trials), respectively, for training and testing purposes [41,42]. Three walking speeds were categorized: slow (2 to 3 km/h), normal (4 to 5 km/h), and fast (6 to 7 km/h) [42,47,48]. With both datasets, a walking speed pattern was established using five quasi-periodic signals calculated from the minimum number of image sequences (i.e., frames) available for the three above-described speeds. First, OU-ISIR dataset A consists of image sequences with a walking speed between 2 and 7 km/h for 34 participants, and these data were separated into slow, normal, and fast. Twelve image sequences were available for each participant; in total, there were 408 image sequences of varying length, with a minimum sequence length of 240 frames. As a result, Dataset 1 contains 136 walking speed patterns calculated consistently from 240 frames for each of the three speeds. In contrast, CASIA dataset C contains two, four, and two image sequences for slow, normal, and fast walking, respectively, and these were captured from 153 participants. Overall, the dataset contains 1224 image sequences of varying length, and the shortest sequence length is 35 frames. As a result, Dataset 2 contains 306, 612, and 306 walking speed patterns calculated from 35 frames of each of the slow, normal, and fast walking speeds, respectively.
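As a minimal illustration of this labeling step, the MATLAB fragment below assigns a trial's nominal speed to one of the three classes; the variable name speedKmh and the boundaries between the quoted speed bands are assumptions for illustration, not part of the original pipeline.

```matlab
% Sketch: mapping a trial's walking speed (km/h) to one of the three
% classes; the band boundaries between the quoted ranges are assumed.
edges  = [2 3.5 5.5 7.5];          % slow: 2-3, normal: 4-5, fast: 6-7 km/h
labels = ["slow" "normal" "fast"];
cls    = labels(discretize(speedKmh, edges));   % e.g., speedKmh = 4 -> "normal"
```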
Data Extraction and Gait Speed Pattern Creation
The five ratio-based body measurements estimated from image sequences were the following: (i) the ratio of the full-body height to the full-body width (HW1), (ii) the ratio of the full-body height to the mid-body width (HW2), (iii) the ratio of the full-body height to the lower-body width (HW3), (iv) the ratio of the apparent to the full-body area (A1), and (v) the ratio of the area between two legs to the full-body area (A2). Notably, we directly used the original lateral-view silhouette image sequences provided in OU-ISIR dataset A and CASIA dataset C. Figure 1 shows a graphical representation of the extraction of the five ratio-based body measurements obtained from an image sequence. To estimate the three height-to-width (i.e., HW1, HW2, and HW3) ratio-based body measurements, a rectangular boundary box was created around the whole body in each image using the regionprops function in MATLAB 2020a (MATLAB™, Natick, MA, USA). The height and width of the boundary box, which represent the full-body height and full-body width of the participant, respectively, were calculated from the properties of the function. We divided the full boundary box region into three equal parts and then placed a new rectangular boundary box around the object in the middle part to calculate the mid-body width and another rectangular boundary box around the object in the lower part to calculate the lower-body width. We then calculated the three height-to-width ratio-based body measurements using Equations (1)-(3):

HW1 = Full-body height / Full-body width (1)

HW2 = Full-body height / Mid-body width (2)

HW3 = Full-body height / Lower-body width (3)
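The following fragment is a minimal sketch of Equations (1)-(3) for a single frame. The paper reports using the regionprops function; here the same bounding-box quantities are computed directly from pixel indices, and the variable silhouette (a logical image assumed to contain one connected silhouette) is introduced only for illustration.

```matlab
% Minimal sketch of Equations (1)-(3) for one binary silhouette frame.
rows = find(any(silhouette, 2));            % rows containing body pixels
cols = find(any(silhouette, 1));            % columns containing body pixels
fullHeight = rows(end) - rows(1) + 1;
fullWidth  = cols(end) - cols(1) + 1;

% Split the body's row span into three equal parts; the widths of the
% middle and lower parts give the mid- and lower-body widths.
third     = floor(numel(rows) / 3);
midCols   = find(any(silhouette(rows(third+1:2*third), :), 1));
lowerCols = find(any(silhouette(rows(2*third+1:end),  :), 1));
midWidth   = midCols(end)   - midCols(1)   + 1;
lowerWidth = lowerCols(end) - lowerCols(1) + 1;

HW1 = fullHeight / fullWidth;               % Equation (1)
HW2 = fullHeight / midWidth;                % Equation (2)
HW3 = fullHeight / lowerWidth;              % Equation (3)
```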
To estimate the two area ratio-based body measurements, we calculated the participant's apparent-body area in the image by counting the number of white pixels in the image. We also calculated the participant's full-body area in the image by multiplying the full-body height by the full-body width. We divided the full boundary box region into two equal parts (upper and lower): the upper part extends from the head to the hip, and the lower part extends from the hip to the legs. We removed any noise from the lower part of the image by deleting the smallest unconnected objects to avoid even the smallest trace of a swinging hand. After connecting the toe points by inserting a line in the noise-free lower part of the image, the region between the two legs was filled using the imfill function in MATLAB 2020a (MATLAB™, Natick, MA, USA) and extracted by subtracting the noise-free lower part of the image from the image in which the region between the legs was filled. The area between the two legs was calculated by counting the number of white pixels in the extracted region between the two legs. We then calculated the two area-based body measurements using Equations (4) and (5).
A1 = Apparent-body area / Full-body area (4)

A2 = Area between two legs / Full-body area (5)

The variation in each of the five ratio-based body measurements over time produces quasi-periodic signals. All the quasi-periodic signals were normalized to 0 and 1 to eliminate the difference in the data obtained at the three speeds [27]. Figure 2 shows the five quasi-periodic signals calculated from image sequences in OU-ISIR dataset A (Figure 2a) and CASIA dataset C (Figure 2b) for a representative individual walking at three different speeds. After all quasi-periodic signals were obtained, the walking speed patterns were established to create Dataset 1 (indoor trials) and Dataset 2 (outdoor trials). To analyze the oscillatory behavior of the quasi-periodic signals produced by the five ratio-based body measurements (i.e., HW1, HW2, HW3, A1, and A2), we calculated the amplitude and frequency of the signals from a minimum sequence length of 240 and 35 frames of each signal in Dataset 1 and Dataset 2, respectively. The occurrence of local maxima in the quasi-periodic signals was calculated using the findpeaks function in MATLAB 2020a (MATLAB™, Natick, MA, USA) to estimate the frequency. Additionally, to compare the overall variation in the body measurements (such as the full-body height, full-body width, mid-body width, lower-body width, apparent-body area, full-body area, and area between two legs) over consecutive frames at the three speeds, we calculated the standard deviation (SD) from the mean over all image sequences.
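A companion sketch for Equations (4) and (5) is given below, reusing rows, fullHeight, and fullWidth from the previous fragment. The crude bottom-row closure stands in for the toe-connecting line described above, so this should be read as an assumption-laden illustration rather than the authors' exact implementation.

```matlab
% Minimal sketch of Equations (4) and (5) for the same frame
% (Image Processing Toolbox required for bwareafilt and imfill).
apparentArea = nnz(silhouette);                 % white-pixel count
fullArea     = fullHeight * fullWidth;          % from the bounding box
A1 = apparentArea / fullArea;                   % Equation (4)

half       = floor(numel(rows) / 2);
lowerHalf  = silhouette(rows(half+1:end), :);   % hip-to-feet part
lowerHalf  = bwareafilt(lowerHalf, 1);          % drop small specks (hand trace)
feet       = find(any(lowerHalf, 1));
lowerHalf(end, feet(1):feet(end)) = true;       % crude toe-connecting line
filled     = imfill(lowerHalf, 'holes');        % fill between the legs
A2 = nnz(filled & ~lowerHalf) / fullArea;       % Equation (5)
```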
Figure 2. Quasi-periodic signals produced by the five ratio-based body measurements estimated from image sequences from one individual walking at three different speeds included in (a) OU-ISIR dataset A and (b) CASIA dataset C. Here, HW1-ratio of the full-body height to the full-body width; HW2-ratio of the full-body height to the mid-body width; HW3-ratio of the full-body height to the lower-body width; A1-ratio of the apparent to the full-body area; A2-ratio of the area between the legs to the full-body area.
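For one quasi-periodic signal sig (an assumed vector holding one ratio measurement over frames), the per-signal normalization and peak counting described above might look as follows; the relative-amplitude definition here is one plausible reading of the text, not a confirmed detail.

```matlab
% Sketch: per-signal amplitude and frequency analysis
% (findpeaks requires the Signal Processing Toolbox).
amp  = 100 * (max(sig) - min(sig)) / max(sig);  % relative amplitude, %
nrm  = (sig - min(sig)) / (max(sig) - min(sig));% normalized to [0, 1]
pks  = findpeaks(nrm);                          % local maxima
freq = numel(pks);                              % maxima per sequence
```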
Model Training and Cross-Validation

A biLSTM-based DL architecture was created based on the following five layers: an input layer of size five, a biLSTM layer with 100 hidden units, a fully connected layer with three outputs specifying the three classes, a softmax layer with an output between 0 and 1, and a classification layer with a cross-entropy function for multi-class classification with three mutually exclusive classes [49][50][51]. The other properties of these layers were selected according to the default values in MATLAB 2020a (MATLAB™, Natick, MA, USA). The specified options for the training process are reported in Table 1. Previous research has shown that this simple setup is sufficient for obtaining non-overfitting and high-accuracy solutions to similar classification problems [52,53].
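A sketch of this five-layer network in MATLAB Deep Learning Toolbox syntax is given below. The 'OutputMode','last' choice and the solver and epoch settings are assumptions standing in for Table 1, which is not reproduced here; XTrain is a cell array of 5-by-T sequences and YTrain a categorical label vector.

```matlab
% Sketch of the five-layer biLSTM classifier described above.
layers = [ ...
    sequenceInputLayer(5)                    % five ratio-based signals
    bilstmLayer(100, 'OutputMode', 'last')   % 100 hidden units
    fullyConnectedLayer(3)                   % slow / normal / fast
    softmaxLayer                             % outputs between 0 and 1
    classificationLayer];                    % cross-entropy loss

options = trainingOptions('adam', ...        % assumed solver
    'MaxEpochs', 100, ...                    % assumed value
    'Shuffle', 'every-epoch', ...
    'Verbose', false);
net = trainNetwork(XTrain, YTrain, layers, options);
```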
To ensure that the classification approach was robust and that the data were not overfitted, the performance of the developed DL-based model was evaluated using two cross-validation methods: Method 1, which consisted of k-fold cross-validation with training, validation, and testing subsamples, and Method 2, which consisted of repeated random sub-sampling cross-validation with training, validation, and testing subsamples [54]. In this study, both Dataset 1 and Dataset 2 can be considered multiclass datasets, as they consist of three types of walking speed patterns. For Dataset 1, we applied 17-fold cross-validation with a total of 272 combinations of training, validation, and testing subsamples (Method 1) and repeated random sub-sampling cross-validation with 272 randomly selected training, validation, and testing subsamples (Method 2).
For each fold or subsample in Methods 1 and 2, the training, testing, and validation data consisted of 88.24% (360 walking speed patterns), 5.88% (24 walking speed patterns), and 5.88% (24 walking speed patterns) of the walking speed patterns, respectively. For Dataset 2, we applied 18-fold cross-validation with a total of 306 combinations of training, validation, and testing subsamples (Method 1) and repeated random sub-sampling cross-validation with 306 randomly selected training, validation, and testing subsamples (Method 2). For each fold or subsample used in Methods 1 and 2, the training, testing, and validation data consisted of 88.9% (1088 walking speed patterns), 5.55% (68 walking speed patterns), and 5.55% (68 walking speed patterns) of the walking speed patterns, respectively. MATLAB 2020a (MATLAB™, Natick, MA, USA) software on an Intel(R) Core(TM) i5-2400 CPU, 3.10 GHz computer was used for model training, validation, and testing. A complete workflow of the study is shown in Figure 3.
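As an illustration of the Method 1 splits, a stratified 17-fold partition of Dataset 1 could be generated as follows; Y (the categorical labels of the 408 walking speed patterns) is an assumed variable name, and the inner validation split is only indicated.

```matlab
% Sketch of the 17-fold split for Dataset 1 (Method 1).
cv = cvpartition(Y, 'KFold', 17);      % stratified 17-fold partition
for k = 1:cv.NumTestSets
    idxTest  = test(cv, k);            % 24 held-out patterns per fold
    idxTrain = training(cv, k);        % remaining patterns
    % Splitting 24 patterns off idxTrain again would yield the
    % validation subsample described in the text.
end
```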
Statistical Analysis
To determine the differences in performance between the two cross-validation methods, SPSS statistical software (Version 25; IBM Corp., Armonk, NY, USA) was used to obtain basic descriptive statistics, such as the means (± standard deviations (SDs)), and to perform one-way repeated-measures analysis of variance (ANOVA) on all the classification accuracy results. The normality of the data was assessed using the Shapiro-Wilk test (p > 0.05) prior to ANOVA, and Bonferroni adjustment was used for the post hoc analysis.
Results
The mean (± SD) amplitudes (in percentages, %) and frequencies (number of maximum peaks per sequence) of the quasi-periodic signals produced by the five ratio-based body measurements at the three walking speeds are presented in Tables 2 and 3, respectively. The results showed that a mean (± SD) amplitude between 51.66 (±7.33) and 80.50 (±0.99) was obtained using the three height-to-width ratio-based body measurements (HW1, HW2, and HW3) calculated from both datasets (Table 2). However, the area ratio-based body measurements (i.e., A1 and A2) yielded a mean (± SD) amplitude in the range of 9.53 (±2.16) to 58.71 (±0.74). The mean (± SD) frequency of the quasi-periodic signals from the five ratio-based body measurements showed trends similar to those found for the amplitude for both datasets (Table 3). In addition, the maximum and minimum frequencies obtained for the height-to-width ratio-based body measurements were 8.18 (±0.65) and 2.64 (±0.45), respectively, and those found for the area ratio-based body measurements were 8.10 (±0.65) and 2.47 (±0.58), respectively.
The overall variation in the body measurements (such as the full-body height, full-body width, mid-body width, lower-body width, apparent-body area, full-body area, and area between the legs) over consecutive frames at the three speeds was calculated using the standard deviation (SD) from the mean over all image sequences and is presented (in terms of percentages, %) in Table 4. Minor variation was found in the participants' body height with both datasets (Table 4): the minimum variation was ±0.50, and the maximum variation was ±2.52. In contrast, substantial variation was found in the widths (minimum variation of ±9.65 and maximum variation of ±20.91) and areas (minimum variation of ±5.23 and maximum variation of ±30.45) of the body over time with both datasets.

Table 4. Variation in the body measurements over consecutive frames at three walking speeds. This variation was calculated using the standard deviation (SD) from the mean over all image sequences and is presented in terms of percentages (%).
The mean (± SD) classification accuracy of the experimental model was found to equal 88.05 (±8.85)% and 88.08 (±8.77)% using Methods 1 and 2, respectively (Table 5), with Dataset 1 (indoor trials), whereas mean (± SD) classification accuracies of 77.52 (±7.89)% and 79.18 (±9.51)% were achieved using Methods 1 and 2, respectively, with Dataset 2 (outdoor trials). Further descriptive statistics of the classification accuracies obtained with the training, validation, and testing data generated using the two cross-validation methods with the two datasets are provided in Table 5. The ANOVA results showed no significant differences (p > 0.05) in the overall classification accuracies obtained with Dataset 1 between the two methods. Additionally, no significant differences (p > 0.05) in the overall classification accuracies were found between the two methods with Dataset 2. The average time for model training was 17.43 and 17.85 min for Methods 1 and 2, respectively, using Dataset 1, while the time was 9.71 and 10.20 min for the two respective methods when using Dataset 2.

Table 5. Descriptive statistics of the classification accuracies obtained with the training, validation, and testing data and the two cross-validation methods with the two datasets.
Discussion
The main goal of the study was to investigate ratio-based body measurement data that can be extracted from marker-less 2D image sequences and are independent of the distance between the camera and the walking participant. Additionally, this study assessed whether these ratio-based body measurement data could be reliably and accurately utilized to classify an individual's walking patterns in terms of speed in both indoor (treadmill trial) and outdoor (overground trial) environments using the biLSTM DL model.
This study constitutes the first comprehensive analysis of walking gait speed patterns using five ratio-based body measurements from 2D video images: three body measurements were calculated based on the ratio of the body height to width (HW1, HW2, and HW3), and the other two body measurements were based on ratios of body areas (i.e., A1 and A2). All five ratio-based body measurements showed a quasi-periodic nature over time in image sequences captured in both indoor (treadmill trial) and outdoor (overground trial) environments. The results proved that the overall amplitude of the quasi-periodic signals obtained with the ratio-based body measurements decreased with an increase in the walking speed, and this finding was obtained with both Dataset 1 and Dataset 2 (Table 2). A reason for this result is that regardless of the walking speed, only a minor variation was found in the participants' body height, whereas significant variation was found in the widths and areas of the body over time (Table 4) [27,55]. More specifically, the widths and areas of the body decreased to minimum values when the legs were together and both hands were straight along the body during the early stance and mid-swing phases of the gait cycle. Subsequently, these widths and areas reached maximum values when the legs and hands were furthest apart in opposite directions during the late-stance and late-swing phases of the gait cycle. The swinging of the hands and legs in opposite directions increases the widths and areas of the body as the walking speed is increased. As a result, the variation in these widths and areas increased as the walking speed increased (Table 4). Therefore, the average amplitude of the quasi-periodic signals obtained with the three height-width ratio-based body measurements (HW1, HW2, and HW3) decreased as the walking speed increased. However, a slightly different variation in the amplitude was obtained with the area ratio-based body measurements (A1 and A2). The above explanations are supported by the results from previous studies, which also showed that the amplitudes of the cadence, step length, stride length, and stance duration are decreased at slower speeds and increased at faster speeds [56,57]. Again, in contrast to the amplitude, the average frequency of the quasi-periodic signals obtained with all five parameters increases proportionally with the speed when Dataset 1 was used, because the swinging of both the upper and lower limbs is greater at faster walking speeds (Table 3). This explanation is supported by previous studies, which suggested that the hand swing frequency, step frequency, and stride frequency increase with increases in the walking speed and that the hand swing gradually changes from synchronous with the step frequency to locking into the stride frequency [58,59]. Note that the frequency of the ratio-based body measurements estimated using the image sequences in Dataset 2 did not follow the same trend as those obtained with Dataset 1, and this difference could be due to the smaller number of image sequences obtained in an outdoor environment and thus a smaller number of data points [21,40]. Both the amplitude and frequency of all ratio-based body measurements exhibited variation over the image sequences, and therefore, the ratio-based body measurements could be used to classify the walking patterns at different speeds. Our proposed five ratio-based measurements are more appropriate for indoor environments when compared to outdoor environments.
However, the potential of the proposed measurements warrants further investigation for use in outdoor environments.
The experimental DL-based model achieved mean classification accuracies of 88.05% and 88.08% using cross-validation Methods 1 and 2 on Dataset 1, respectively (mean accuracy, Table 5). Although the overall classification accuracies obtained using cross-validation Methods 1 and 2 with Dataset 1 ranged from 41.67% to 100% and from 37.50% to 100%, respectively, almost 50% of the trained models achieved classification accuracies higher than 89%, as demonstrated by applying both cross-validation methods with Dataset 1 (min-max accuracy and 50th percentile accuracy, Table 5). Only a few models compared with the total number of trained and tested models achieved low classification accuracies (number of outliers, Table 5). The model tested using Dataset 2 achieved mean classification accuracies of 77.52% and 79.18% using Methods 1 and 2, respectively (mean accuracy, Table 5). Although the classification accuracies obtained using both methods ranged from 25% to 100% with Dataset 2, almost 50% of the trained models achieved a classification accuracy greater than 75% with both methods (min-max accuracy and 50th percentile accuracy, Table 5). Some models achieved low classification accuracies, but this number is small compared with the total number of trained and tested models (number of outliers, Table 5). The above findings are rational because Dataset 1 was created using images acquired in a controlled indoor treadmill trial environment, whereas Dataset 2 was established using images from an outdoor field trial with a more challenging environment [60]. Additionally, the current study achieved an excellent classification result, but the results are slightly different compared with those obtained in a previous study [27] on walking speed classification due to the cross-validation methods used in the two studies. More specifically, the previous study [27] trained the model with a multiclass setting, i.e., all three types of walking speed patterns, and tested the models using a single-class setting, i.e., any one of the three walking speed patterns, whereas the current study used a multiclass setting as well as multiple runs for the training, validation, and testing of the model, which is beneficial for achieving accurate classification accuracy and building a successful model [61,62].
The ratio-based body measurements used for walking speed classification in this study were successfully estimated from lateral-view 2D image sequences of marker-less walking individuals captured with a digital camera. The concept of estimating body measurements from lateral-view 2D image sequences of marker-less walking individuals captured with a digital camera is supported by previous studies [21,46]. However, the ratio-based body measurements used in the current study are more robust than those used in previous studies because they are independent of the use of a body-worn garment as a segmental marker and of variations in the distance between the walking individual and the camera. To examine whether the ratio-based body measurements are independent of variations in the distance between the walking individual and camera, two datasets, namely, OU-ISIR dataset A and CASIA dataset C, which include data from both indoor and outdoor environments and different participant-camera distance settings, were used in this study. Additionally, the extraction of the proposed ratio-based body measurements preserves the natural movement of the participants during data collection in an outdoor environment [23]. The ability of classifying the walking speed in an indoor environment with high classification accuracy and in an outdoor environment with moderate classification accuracy will enable clinicians to use this method for regular diagnosis in clinical settings and for gait monitoring in aged care homes [63].
Although the proposed method has great potential for use in regular diagnosis in clinical settings and gait monitoring, the method has only been tested with healthy participants. A population with gait impairment could not be assessed in this study due to the scarcity of substantially large datasets available in the current research community [38,39]. This issue will be taken into consideration in the future by creating a large low-resolution image-based dataset focusing on a range of walking speeds. Additionally, this study only classified walking speeds using height-to-width ratio-based and area-based body measurements. In the future, this study will be extended to estimate other spatiotemporal parameters, such as the stride length, step length, joint angles, joint angle velocity, and acceleration, such that we can obtain greater insights on the participants' health and classify normal and abnormal gait patterns. Although in this study we have used silhouette-based analysis [22,46], we will extend the work to advanced feature extraction techniques, such as pose estimation techniques [64][65][66], in the future so that the classification can be done with real-time video. Furthermore, this study was conducted using the minimum sequence length for walking speed patterns. As a consequence, the sequence length was short in the outdoor dataset. In the future, this study will be extended to apply a maximum sequence length by bridging time lags to increase the sequence length, so that a more appropriate analysis can be done in outdoor settings. Finally, this study uses only the biLSTM method to conduct classification tasks. Other state-of-the-art classification algorithms will be applied in the future to obtain solutions for optimum classification accuracy.
Conclusions
In summary, our proposed ratio-based body measurements were successfully extracted from marker-less 2D image sequences without the need for any body-worn garments and did not show any variations due to changes in the distance between the walking individual and the camera. Additionally, our deep learning classification model showed excellent mean classification accuracies (88.08% and 79.18%) using a large dataset of lateral-view 2D images of marker-less walking individuals undergoing controlled walking trials at different speed ranges in both indoor (treadmill trial) and outdoor (overground trial) environments, respectively. The excellent results obtained in this study support the use of simple ratio-based body measurement data that evolve with changes in the walking speeds, produce periodic or quasi-periodic patterns, and, more importantly, can be estimated from marker-less digital camera images in the sagittal plane to classify walking speeds using the currently available deep learning method. As a simple but efficient technique, the proposed walking speed classification method has great potential to be used in clinical settings and aged care homes.
RecSys Fairness Metrics: Many to Use But Which One To Choose?
In recent years, recommendation and ranking systems have become increasingly popular on digital platforms. However, previous work has highlighted how personalized systems might lead to unintentional harms for users. Practitioners require metrics to measure and mitigate these types of harms in production systems. To meet this need, many fairness definitions have been introduced and explored by the RecSys community. Unfortunately, this has led to a proliferation of possible fairness metrics from which practitioners can choose. The increase in volume and complexity of metrics creates a need for practitioners to deeply understand the nuances of fairness definitions and implementations. Additionally, practitioners need to understand the ethical guidelines that accompany these metrics for responsible implementation. Recent work has shown that there is a proliferation of ethics guidelines and has pointed to the need for more implementation guidance rather than principles alone. The wide variety of available metrics, coupled with the lack of accepted standards or shared knowledge in practice leads to a challenging environment for practitioners to navigate. In this position paper, we focus on this widening gap between the research community and practitioners concerning the availability of metrics versus the ability to put them into practice. We address this gap with our current work, which focuses on developing methods to help ML practitioners in their decision-making processes when picking fairness metrics for recommendation and ranking systems. In our iterative design interviews, we have already found that practitioners need both practical and reflective guidance when refining fairness constraints. This is especially salient given the growing challenge for practitioners to leverage the correct metrics while balancing complex fairness contexts.
INTRODUCTION & BACKGROUND
In recent years, recommendation and ranking systems have become increasingly popular on digital platforms. These often personalized systems leverage algorithms to recommend content, items, or information that matches users' perceived preferences. However, previous work has highlighted how personalized systems might lead to unintentional harms for users, such as degenerate feedback loops [14,22], sexist stereotyping [12], and racial bias [1]. Practitioners require metrics to measure and mitigate these types of harms in production systems. To meet this need, many fairness definitions have been introduced and explored by the RecSys community [6,9,16,21]. Unfortunately, this has led to a proliferation of possible fairness metrics from which practitioners can choose. The increase in volume and complexity of metrics creates a need for practitioners to deeply understand the nuances of fairness definitions and implementations. Additionally, practitioners need to understand the ethical guidelines that accompany these metrics for responsible implementation. Jobin et al. [15] described the proliferation of ethics guidelines and found more than 80 documents containing ethical principles or guidelines for AI, pointing to the need for more implementation guidance rather than principles alone. The wide variety of available metrics, coupled with the lack of accepted standards or shared knowledge in practice [7,23], leads to a challenging environment for practitioners to navigate. In this position paper, we focus on this widening gap between the research community and practitioners concerning the availability of metrics versus the ability to put them into practice. We address this gap with our current work, which focuses on developing methods to help ML practitioners in their decision-making processes when picking fairness metrics for recommendation and ranking systems. In our iterative design interviews, we have already found that practitioners need both practical and reflective guidance when refining fairness constraints. This is especially salient given the growing challenge for practitioners to leverage the correct metrics while balancing complex fairness contexts.
THE COMPLEXITY OF FAIRNESS METRICS
In machine learning, fairness definitions may have multiple associated fairness metrics. Additionally, each of these metrics may have unique associated parameters and thresholds that must be determined before a fairness measurement can occur. Measuring fairness in recommendation systems adds even more complexity to this space. For example, recommendation systems are often multistakeholder systems, meaning they must cater to the needs of multiple groups of stakeholders [4]. The two most common stakeholder groups are providers (those who provide or create content to be recommended) and consumers (those who interact with or consume the recommendations) [4]. Fairness metrics can be used between or within each stakeholder group, and sometimes conflict with one another. Moreover, recommendation systems may consist of multiple components, meaning fairness needs to be measured within each component, from content generation and retrieval to pool re-ranking [17,26,27,31]. The combination of these variables for measuring fairness in recommendation systems compounds the complexity of choosing fairness metrics for practitioners.
Other decisions involved in choosing a metric include prioritizing between measuring group and individual fairness, determining quantifiable proxy variables for fairness, and defining qualitative fairness constraint(s). In machine learning fairness literature, researchers have broadly categorized fairness into two categories: group fairness versus individual fairness [3,8]. Group fairness measures if sensitive and/or non-sensitive groups acquire similar recommendation outcomes, while individual fairness requires that similar individuals are treated similarly. In recommendation and ranking, different metrics can measure group versus individual fairness within each stakeholder category [9]. Understanding how to differentiate between these fairness constraints and leveraging the correct metric for their context is one of the many complexities practitioners face.
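As one concrete (and deliberately simple) illustration of a group-fairness measurement for ranking, the sketch below compares position-discounted exposure across provider groups in a single ranked list. It is not one of the specific metrics surveyed in the cited reviews; the function name, the logarithmic discount and the max-min disparity summary are assumptions made for illustration.

```python
import numpy as np

def group_exposure_disparity(ranked_item_groups, num_groups):
    """Average position-discounted exposure per provider group in one ranked list.

    ranked_item_groups: provider-group id (0..num_groups-1) for each item, top first.
    Returns per-group average exposure and the gap between the most- and
    least-exposed groups (0 would mean equal average exposure).
    """
    exposure = np.zeros(num_groups)
    counts = np.zeros(num_groups)
    for position, group in enumerate(ranked_item_groups, start=1):
        exposure[group] += 1.0 / np.log2(position + 1)   # log discount, as in DCG-style metrics
        counts[group] += 1
    avg_exposure = exposure / np.maximum(counts, 1)      # avoid division by zero for absent groups
    return avg_exposure, float(avg_exposure.max() - avg_exposure.min())

# Example: group 0 occupies the top of a six-item ranking.
per_group, gap = group_exposure_disparity([0, 0, 1, 0, 1, 1], num_groups=2)
```

Even this toy example forces several of the choices discussed in this section: which stakeholder group to compare, which comparison distribution or discount to use, and what disparity threshold should count as unfair.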
In one paper, Verma et al. [30] classified RecSys fairness metrics as accuracy based, error based, and causal based.
More recently, Ekstrand et al. [9] published an in-depth review of fairness in recommendation systems. Unlike Verma et al. [30], their review categorized pairwise fairness metrics with accuracy metrics, alleviating the potential confusion between distinguishing when a metric measures error versus accuracy. This difference in categorization reflects how fairness literature may change over time, making it difficult for practitioners to stay up to date and navigate this complex research space. In addition to understanding stakeholder and metric categories, practitioners must also understand how to implement their chosen metric correctly. Within each metric, various parameters and fairness thresholds can determine which fairness constraint the metric is attempting to measure. Leveraging different types of comparison distributions can cause practitioners to evaluate different fairness constraints [10]. Though it requires time and expertise to analyze all possible metric options, a team may feel rushed to 'just choose one' for initial analysis to move towards an impactful audit. However, it can be difficult for practitioners to know if they are headed in the right direction.
CHOOSING AN APPROPRIATE FAIRNESS METRIC
In 2021, Moss et al. [20] discussed Algorithmic Impact Assessments to help engineers more easily report potential impacts of an algorithmic system. In practice, these assessments aim to help engineers describe potential impacts that their system might have on users in a worst-case scenario. These impact statements are perfect candidates for teams to map from a system's potential impact to a possible metric to quantify said impact. However, mapping from qualitative statements of values to quantitative proxies for measurement is no easy task. Stray et al. [29] describe some of the difficulty in choosing an appropriate quantitative proxy for a qualitative construct in recommendation, a task previously defined as construct validity [28]. In machine learning, construct validity can be challenging in all stages of deciding on a metric. These challenges include (1) determining if a plausible metric exists; (2) checking if the metric appropriately captures the qualitative constraints; (3) if the metric is comparable to other existing metrics; and (4) if the metric captures something different than previously used metrics [13].
Even without the challenge of achieving construct validity, practitioners encounter other obstacles when appropriately scoping fairness concerns for measurement. In one study, researchers discovered that ML practitioners have anxieties about their "blind spots" when addressing fairness issues, which could lead them to choose a metric that does not take certain vulnerable sub-populations into consideration [11]. In another study, researchers discovered that some ML practitioners had similar anxiety around failing to identify the "correct" fairness criteria for their users. These participants mentioned that additional resources about best practices for aligning fairness criteria with users' lived experiences would help them incorporate fairness into their existing ML workflows [19].
An alternate study reported that "participants told us that their organizations' business imperatives dictated the resources available for their fairness work and that resources were made available only when business imperatives aligned with the need for disaggregated evaluations" [18]. This difficulty arose partly because of the need for context-specific fairness metrics, which must account for the potentially competing interests of different user groups and the broader values of the organization hosting the AI system. However, the existence of broader organization values concerning fairness is not a given for practitioners [5]. Without organization-defined values and support, implementing contextual fairness in a large corporation could result in different teams leveraging competing fairness constraints or metrics, which could have potentially harmful downstream effects [5].
With so many metrics to choose from and metric implementation decisions to make, it can be daunting for a practitioner to attempt to measure and mitigate bias in their system. Additionally, most ML practitioners are not trained in disciplines like ethics or philosophy [25], which creates another barrier to entry for deciding on an appropriate fairness metric. Without institutional knowledge and academic knowledge of fairness metrics, choosing an appropriate metric might seem impossible. To combat some of these challenges, Saleiro et al. [24] created a decision tree for selecting an ML fairness metric. However, the decision tree assumes that the practitioner has prior knowledge of policy and ethics jargon, with some branches in the tree asking questions like, "are your interventions punitive or assistive?" Additionally, this decision tree was designed for the context of binary classification, not ranking or recommendations. In the context of recommendation systems, fairness metrics and considerations are vastly different from a binary classification setting, especially since outcomes are neither binary nor measurably favorable, given that there is rarely a "ground truth" to compare the final recommendation lists against beyond assuming user engagement as a positive prediction [2]. Thus, we see a gap in the RecSys discipline that needs to be addressed by helping ML practitioners decide on appropriate fairness metrics that complement their complex, contextual fairness considerations.
CONCLUSION & FUTURE WORK
Though academic literature has recently introduced dozens of fairness metrics, there are not enough resources that guide practitioners in choosing a metric that complements their specific contexts, organizational values, or prior knowledge -especially in the discipline of recommendation and ranking. If fairness metrics only exist in academic papers, they will not be able to serve the purpose they were created for -to help identify and measure unfair treatment or impact in machine learning systems. To promote and cultivate the use of fairness metrics in industry, we must start making tools and providing guidance to lower the barrier to entry for utilizing this knowledge. Our current research addresses this need. Through semi-structured interviews with real practitioners and an iterative design study, we have begun creating a decision-making framework to help practitioners choose a fairness metric that aligns with their specific fairness context. We have already begun uncovering specific challenges that practitioners face when refining fairness constraints or selecting a metric that matches their needs, and we are using this feedback to iterate on the design of a tool for alleviating these challenges. We recommend that future research should similarly include a strong focus on working with real practitioners and live recommendation systems in order to understand the real-world needs and obstacles that practitioners face when incorporating fairness into their pre-existing workflows and systems. Ideally, by collaborating with industry practitioners, we can enable fairness metrics to have real-world impact beyond the theoretical impact demonstrated via toy examples in academic papers. We hope this work can inspire the creation of tools, libraries, and guidelines to help practitioners evaluate fairness in a way that accurately captures the real experiences of their users and the practical constraints of online ranking and recommendation systems.
Creative management: a framework for designing multifunctional play biotopes - lessons from a Scandinavian landscape laboratory
Most children grow up in urbanised settings with a low possibility to experience biodiversity and nature. However, experiencing nature and other species increases children's wellbeing, health, learning abilities and their understanding of nature values. Play biotopes are one solution for supporting a co-existence between children and different species in nature-based play settings. Play biotopes are based on ecological theories, where structures in the morphology of landscapes at different scales and the content of flora and fauna can support children's interplay with a part of the landscape. However, traditional landscape management is not adapted to support the dynamic nature of play biotopes, especially when considering multiple scales. This makes it interesting to explore more dynamic management concepts arching over multiple scales. Accordingly, we here explore creative management as a scale-based framework for design by management to further develop the concept of play biotopes. Using examples from a landscape laboratory in southern Sweden, we propose that a creative management framework combining the scales of landscape, biotope, place, and object together with play connectivity can support the creation and management of multifunctional play biotopes.
Introduction
Most children today live in an urbanised setting in which the possibility to experience biodiversity is low (UNICEF 2012), yet experiencing nature and other species increases children's health, wellbeing, learning abilities and the understanding of nature values (Chawla 2015, 2020). Present urban green areas are often solely managed for one purpose, such as aesthetically pleasing short-mown lawns and trees in parks, or natural remnants left for free development. Habitats are closely interlinked to biotopes but represent the specific abiotic and biotic resources in an area needed for a specific species. A specific niche represents a place or setting suitable for a specific species' need or use. An oak woodland is thus a biotope that works as a habitat for a specific species given its resources, where this usage can be seen as its niche. A niche as such is a two-sided concept including both the use of a habitat and the effect such use has on the system itself. Applying the ecology concept to a children's play environment makes an oak woodland where children could play a play biotope, where the configuration and structure of the woodland enable the play, i.e., a play-habitat, while a play-niche represents specific play activities within this habitat. A play biotope as such can hold different kinds of affordances, i.e., opportunities for children to take meaningful action and facilitate their various activities, making stones jumpable, shrubs possible to hide in and trees climbable (Heft 1988).
Play biotopes can thus support managers and planners in understanding how specific biotopes in the city can be made more multifunctional, as they give a framework for understanding both the synergies and dis-synergies of biodiversity and children's play. However, the management of more multifunctional, biodiverse urban green spaces faces many challenges, and traditional park management, forestry and nature conservation are often not fully adapted to this, focusing mainly on maximising single functions and seldom considering more than one or two main scales while doing so (e.g., Wiström et al. 2023). On an operational level, multiple scales are seldom included in management; instead, the management often focuses mainly on the biotope (e.g., nature conservation), stand level (e.g., forestry) or individual trees, as within arboriculture (Fallding 2000; Lämås et al. 2023; Östberg et al. 2018). Therefore, we here argue for the need to search for and explore practical management frameworks that work over multiple scales. Additionally, in most cases when designing and planning for green spaces in the city, the inherent dynamics of vegetation related to community succession and disturbance regimes, such as children's wear and tear, are neglected (Gustavsson 2004; van Dooren and Nielsen 2019). Traditional design approaches, based on a main illustrated masterplan that is implemented and upkept through standard park maintenance, thus become less suitable when working with more dynamic and nature-based designs (Gustavsson 2004) such as play biotopes. One suggested way forward in relation to these challenges is to elaborate on management as design at different scales through the framework of "creative management" (Tregay 1983; Ruff 1987; Koningen 2004; Wiström et al. 2023).
Creative management
The fact that nature and vegetation are highly dynamic, depending on various natural processes, is formative to children's play outdoors (Mårtensson 2004). Therefore, when designing with nature, as when creating play biotopes, design cannot be separated from management. The original design can only set the start for the forthcoming development of plants (Tregay 1983; Koningen 2004; Gustavsson et al. 2005; Wiström et al. 2023); hence, management becomes design placed in a time continuum. A community of trees and shrubs, irrespective of whether planted or naturally regenerated, can, depending on the thinning and pruning, develop into stands with radically different structures and species compositions (Tregay 1983; Rydberg and Falck 1998). Given this wide variety of possibilities, a large degree of creativity and design-based thinking becomes essential to management (Tregay 1983; Koningen 2004).
Common to such creative management is a high level of place specificity, where the management is adapted to the local context, focusing on specific places (Wiström et al. 2023). Places are here seen as distinct areas, compared to their surrounding space, in the range of about 10 to 2000 m². Sites of this size can be experienced as a place and thus attributed experiential values and meaning. The actions taken within these specific places are, however, always taken in relation to the larger landscape, as well as in relation to a smaller scale of individual objects. It is a way of activating a landscape with relatively small resources, as it is possible to combine with more standardised management or conservation approaches for the overall landscape matrix (Lerner 2014; Duinker et al. 2017; Wiström et al. 2023).
As the core of creative management is relatively small and specific areas in the landscape, this makes it suitable for processes of co-creation and supportive of people forming emotional bonds with the place, so-called place attachment (Manzo and Devine-Wright 2021). For example, co-creation between landowners, users, municipalities and nature conservationists has been used within the creative management framework for teaching landscape students how nature and culture reserves can be strengthened concerning their readability, authenticity, biodiversity and experience through place-specific management operations (Gustavsson et al. 2019). In the landscape laboratory of Sletten Holstebro (Denmark), co-creation with the inhabitants has been successfully developed using specific co-management zones for edges between housing and surrounding planted woodlands (Fors et al. 2019). As such, creative management and co-creation are applicable both when working with existing nature and when creating new nature-like environments. However, the involvement of children's perspectives in such creative and co-creative management has not been explored in depth. In the following, we synthesise, discuss and exemplify some of our practical experience and knowledge of using the creative management framework for co-creating multifunctional and biodiverse play biotopes in the Alnarp landscape laboratory (Sweden) in case studies involving children 3-7 years old (Hladikova and Sestak 2017; Gabriel 2021; Herngren and Ågren 2021; Mårtensson et al. 2021). These studies included observations of children's play in order to document their use and preferences in the landscape laboratory and to identify specific structures, objects and characteristics which could provide affordances from potential play biotopes. Further, in a selection of settings, management interventions were made in collaboration with experts and children, followed by additional observations, in order to learn about their effects and the dynamic interface between children and nature.
Alnarp Landscape Laboratory
One cannot move the landscape to a laboratory; thus, one must move the laboratory thinking to a landscape (Nielsen 2011). Guided by this idea, and the will to test new ideas on a scale of 1 to 1 regarding how to create rich multifunctional landscapes, the Swedish University of Agricultural Sciences (SLU) has since the 1980s created Europe's first landscape laboratory at SLU's Alnarp campus (Gustavsson 2002). It has been followed by several other landscape laboratories and projects inspired by its thinking, especially within Scandinavia but also in other parts of Europe (Szanto et al. 2016). The Alnarp campus is located between villages that belong to the suburban landscape context of the cities Malmö and Lund. The campus covers about 100 ha, with roughly one third each allocated to traditional field trials, a late-1800s park with old woodland remnants, and buildings including offices, housing and one kindergarten. The surrounding landscape is strongly anthropogenic and dominated by agriculture and urbanisation, making the Alnarp campus and its landscape laboratory one of few woodlands in the surroundings and, thus, an important recreational asset.
The landscape laboratory is located in the temperate vegetation zone with a mean annual precipitation of 535 mm and a mean annual temperature of 7.7 °C. It was established on former fertile agricultural land, with a limestone bedrock and a deep loamy glacial till overlaid by fine sand. To aid the understanding of its ecological context, a much-simplified summary of the local successional stages and vegetation dynamics is given below, based on Ellenberg (1988) and Sjöman et al. (2015). If the agricultural soils of Alnarp were left for free development, they would, according to traditional climax concepts, first be rapidly colonised by annual and biannual agricultural weeds, followed by perennial herbaceous species and grasses. The duration of this grassy stage, which could last for decades, depends on multiple factors; among others, the species pool and browsing pressure in the landscape affect the time needed for pioneer shrub and tree species to take over the dominance. Over time, secondary tree species would become more dominant and would then be kept dominant mainly by gap-driven disturbances. Depending on the hydrological conditions, the climax vegetation type would be beech (Fagus) forest on mesic sites, mixed oak (Quercus) forests if partly more dry or moist, and, if wet, ash (Fraxinus) and alder (Alnus) dominated forests. In the case of the landscape laboratory, the weedy and grassy stages have been by-passed by dense planting and weeding, also meaning that the woodlands, although young, can be dominated by secondary tree species. Still, natural regeneration is occurring at different parts and rates of the laboratory depending on species composition and management, including unplanted parts resembling early successional stages typical for the region.
The layout of the landscape laboratory includes all of the above-mentioned main forest types and uses several complexity/diversity gradients for the main landscape elements of woodland stands, edges, water and open areas. This means that stands of only one species, a few species and many species can be found in the area, as well as water streams and ponds ranging from the simplest of forms (a straight ditch) to the highly complex (a meandering stream valley). This diversity in form and species is paired with a management trying to display several different options instead of a single optimal one, drawing on a range of approaches (e.g., forestry thinnings, haymaking, grass cutting, brushwood clearing and free development) at the landscape and stand level.

In the following sections, we will present and discuss some overall experiences and examples from applying the above-described creative management framework to children's play in the landscape laboratory.

Stand level management: creating a diverse hut forest

In urban situations, multifunctional forests that combine production, recreation and biodiversity are often of high interest. A model woodland trying to achieve this is the Trolleholm model planted in the landscape laboratory (Fig. 2a). Developed in 1994 (Gustavsson and Ingelög 1994), it departs from studies of reference landscapes and forest stands at a regional estate (Trolleholm) together with interviews with its forest manager. Oak Quercus robur is supposed to act as the main crop tree together with a few other light-demanding tree species. The more shade-tolerant hornbeam Carpinus betulus, linden Tilia cordata and bird cherry Prunus padus form a varied understory, shading the oak trunks and their epicormic branches, thus giving the oaks a better timber quality (Henriksen 1988). However, when species are planted at the same time, this desired stratification does not develop by itself, especially when many species have similar initial growth rates, as in many mixed oak stands (Richnau et al. 2012). Therefore, to support the oaks, large growing specimens of the understory species are coppiced. These shade-tolerant species find their role in the understory, mainly as multi-stemmed and lower trees, when they re-sprout from root suckers or from the stump. Coppicing over time and repeated thinnings give rise to a complex multi-layered stand with many multi-stemmed trees of different sizes below the oaks, together with some large and deep-crowned shade-tolerant trees (Fig. 2c). Such a structure is not only good for many birds (Fuller and Green 1998; Heyman and Gunnarsson 2011) but also attracts a certain kind of games and provides many affordances for building huts and dens. Furthermore, the big timber trunks create fascination among children with their size and bark, but the play itself is mainly supported by the understory beneath the trunks and the spaces and structures that it provides. At the same time, the thinning of the tree canopy is what enables enough light to maintain a vital understory (Richnau et al. 2012). Additionally, by leaving dead wood from the thinning in different suitable sizes, the affordances for huts and dens have been reinforced while simultaneously giving more substrate to saproxylic species (Hedblom and Söderström 2008; Jonsson et al. 2016).
Creative management in the landscape laboratory
The initiation of creative management in the landscape laboratory started in 2002 (Hladikova and Sestak 2017) and has been ongoing ever since. This type of management focuses on place, objects and paths, but always in relation to the overall landscape and its different stands or biotopes. Central to this approach is place specificity: normally an area covering about 50 to 1000 m² and its relation to the landscape at multiple scales is in focus. Although the main aspects of creativity, as seen by the visitors, are place-specific interventions such as artfully pruned glades and trees, these management interventions are set within a larger framework. By deciding on how and when different stands should be thinned, an overall syntax is given to the landscape that often enhances the original design of different complexity ladders, e.g., a simple structure with straight paths in contrast to species-rich stands and the specific actions for that area. Part of the creativity is that not one optimum or standard management approach is applied; instead, some stands are thinned to promote pillared halls while others are formed as multi-layered stands. Additionally, some areas are left for free development, whereas in others dead wood is taken out or everything cut is left in piles or on the ground. This adds variation to the landscape at both the landscape and stand scale. Further, the overall variation of the landscape is enhanced by how the path system is laid out, making it possible to pass by beech forest, hybrid aspen, glade, water, dense edges, open edges, etc., in just a few hundred meters. Instead of constructing the paths from the start, as in most conventional landscape designs, they have instead been thinned out over time. This has given the possibility to include odd-looking trees, spontaneous shrubs, small extra bends, etc., along the path. Moreover, a hierarchy exists in the path system with smaller and larger walks, which enables multiple options for movement. In relation to the overall landscape configuration given by the coarser management and path system, site-specific detailed management actions focusing on special places (e.g., glades) and objects (e.g., special trees) are added, accounting for only about 10% of the total area (Hladikova and Sestak 2017; Wiström et al. 2023), leaving approximately 90% of the landscape laboratory for more rational and conventional vegetation management.

Place-based management: creating a formal space for games

In the southern part of the laboratory, the beech species Fagus sylvatica dominates, resulting in a dark forest type. However, where there is darkness, the contrast of light becomes stronger. In 2002 and 2003, this notion was used to create a narrow walk with a formally cut beech hedge by pruning two of the planting rows on each side of the straight path, the so-called renaissance walk (Fig. 2a). The initial idea was that this action would give values to the young forest while waiting for the trees to mature into a classic pillared hall of beech (Hladikova and Sestak 2017). To enhance this concept further, a miniature pillared hall was also created at the start of the walk by raising the stems of the small, young beech trees to about 2 m. This provided some distinct room, especially at younger children's eye level. However, as the surrounding trees grew bigger, the renaissance walk (Fig. 2d) was increasingly shaded out. Instead of letting this continue unabated, the rows closest to the cut hedges were thinned out in 2012, transforming the walk into a large beech hedge, which one now walks alongside, while the surrounding trees have been trimmed into extremely high hedges (Fig. 2d). Traditional forestry thinning in the stand to the west of this formal room has given rise to a forest that one can see and walk through, whereas the east side has been left un-thinned, creating a dense and almost impermeable structure. This part stands out as the most formal and controlled part of the landscape laboratory, with its geometric shapes and only one tree species. The linear features of the place, and the contrast between open and more closed parts, invite mainly running games. Given that other places were overall seen to promote more diverse affordances for play than the renaissance walk, it should be noted that a place-based approach suitable to support aesthetics and a sense of place for adults cannot directly be transformed to places for children's play, although they might overlap. While the renaissance walk mainly enforced running-related activities, it also showcased that different management can give complementary affordances for play. One simple application of this insight could be the use of variable density thinnings (Carey 2003) to increase the structural variation for biodiversity while simultaneously activating some areas for running-based games, while more dense parts could support other play activities.

Object directed management: creating trees with character

Within silviculture and woodland conservation, there is an increased realisation that the individual management of valuable crop trees for high quality timber or habitat is a cost-effective way of management (Löf et al. 2016; Pommerening et al. 2021). Central to both cases is the selection of specific trees for biodiversity or high quality timber (Fig. 2e) and selective thinning to support these so-called frame trees (Pommerening et al. 2021). It has also been suggested that such frame trees can be selected to support aesthetic values (Pommerening et al. 2021). In the landscape laboratory, we have explored this thinking further to also include trees with possible affordances for children's play, i.e., play affordances. Here, at least two main types of frame trees for play have been observed as important: trees for hut building and trees for climbing. Both of these categories differ from high quality timber trees in that they give priority to trees with low branches and multiple stems. In contrast, many conservation values, with an increased number of microhabitats and more sun exposure for bark (e.g., Gran and Götmark 2019; Asbeck et al. 2021), could probably be combined within frame trees for play and biodiversity. It would often also be possible to combine the selection of different frame trees for different functions within the same stand to promote a more multifunctional stand (Löf et al. 2016). However, it is also important to recognise that the normal smallest object for forestry is the tree, whereas for play and biodiversity even smaller objects are central; indeed, loose natural material, as pointed out by, e.g., Fjørtoft and Sageie (2000), seems especially vital for play biotopes.

Ways forward - landscape scale and play connectivity

The places studied in the landscape laboratory range from simple monocultures to the most diverse woodland plantings as well as free-growing spontaneous vegetation, all providing different types of play affordances. This diversity shows that there are possibilities to develop play biotopes using management both in situations with more natural vegetation and when restoring nature through planting. Thus, in cases where there is existing indigenous vegetation in an urban context, efforts could be directed toward keeping and developing it, ideally integrating play, biodiversity and sometimes also forestry. When such existing vegetation is missing, it becomes important to establish it anew, ideally ahead of urban development in order to allow time for it to develop and grow before its integration into the urban fabric with schools and residential areas. This means that in forest-poor regions the establishment of multiple-use afforestation and restoration projects becomes vital (Nielsen and Jensen 2007), while in more forest-rich landscapes (Nielsen et al. 2017), initiatives such as those presented by Rydberg and Falck (1998), which use natural forest regeneration, should be a first-hand option. However, both existing and new vegetation benefit from place-specific creative management that creates multifunctional landscapes for children's play.

Common to such place-specific creative management is that it ranges from details of gardening to coarse forestry thinnings, and its uses and functions are related to the overall landscape configuration and our movement through it. Since different play biotopes give different play affordances and support different species, there is a need to expand the play biotope framework to also address how different play biotopes are interlinked to each other on a landscape scale (Fig. 3). In the same way that biodiversity is scale dependent (alpha, beta and gamma diversity) and needs a diversity of different habitats interlinked with each other (e.g., Whittaker 1972; Stein et al. 2014), a diversity of play affordances and their landscape configuration is what should be guiding the management, not the idea of one ideal play biotope or play setting. As such, extending the play biotope framework to address the combination and configuration of different play biotopes on a landscape scale, as within creative management, is central to further research and practice. Within the creative management framework, the path system is central to combining and working across scales as it sets the main lens for experiencing the different parts of the landscape. While adults generally use the path system and thus are guided by it in their use and experience of the landscape, children move more freely. Over-focusing on the path aspects thus might be unbeneficial for children's exploration and play. As such, the focus on paths should be further elaborated to include a more overall connectivity approach, where a better understanding of movement between different play biotopes and play-niches needs to be addressed in future research.

In addition to these larger scales and their interrelation, small-scale management actions, even on a micro scale, are central to creative management and added play affordances, but this is easy to miss if one only focuses on management on the stand or biotope scale, as is common in traditional forestry, conservation and park management. Moreover, small-scale interventions focusing on details are also very suitable for co-creation with children, where they can take an active part in landscape creation and management.

Children are active users of the landscape and modify it through their uses (play) to a much larger extent than adults, who typically visit for recreation. Branches are collected, trees are bent and broken (Gunnarsson and Gustavsson 1989), and as such, children's play is to some extent a bit like low-intensity grazing, thus stressing the need to embrace the two-sided aspect of the niche concept. Children not only use the resource for play, but through the play also affect and interact with it. They also have another scale of space; thus, when thinking of place-based interventions for play, there is a need to adapt to this. One management solution is to support micro-places such as dens of shrubs, but also to provide half-finished places that children themselves can modify actively through their play (e.g., Jansson 2015). This is a central aspect of adapting the creative management framework more towards play since, traditionally, it has focused on visual aspects of aesthetics, landscape readability and authenticity, which might not always directly support play affordances. Thus, a better understanding of the places and play biotopes that support different kinds of play is essential. Here, a more detailed description and analysis of specific play settings in relation to their details, where species as well as landscape configuration and connectivity are central, is a research area in great need of further exploration.

Conclusion

Our studies in the Alnarp landscape laboratory have shown a wide range of different play interactions with the natural setting, at different spatial scales. We propose that creative management, together with play connectivity, could be used as a scale-based framework for combining the landscape, biotope, place and object scales to support the creation and management of multifunctional play biotopes. However, additional studies need to confirm its implementation in other contexts, especially outside Sweden and Europe. Additionally, there is a need for more detailed studies on the place-specific interaction between biodiversity, play, design and management.

Fig. 1 Schematic representation of the play biotope framework

Fig. 2 (a) Map of the part of the landscape laboratory in Alnarp used in the study, with the stands and places mentioned in the text marked in italics in the plan. (b) Geographical context of Alnarp, where black = country, brown = city, white = the Baltic Sea. (c) Trolleholm model. (d) Renaissance walk. (e) Example of frame tree marking

Fig. 3 Schematic overview of the creative management framework and how it can be related to an updated play biotope framework to aid managers. By adding the landscape scale and play connectivity to the play biotope concept, a more scale-based framework is created that links to the scales and framework of creative management
Optimal Taxation and the Tradeoff Between Efficiency and Redistribution
This paper studies the aggregate and distributional implications of introducing consumption taxes into an otherwise deterministic version of the standard neoclassical growth model with income taxes only and heterogeneity across agents. In particular, the economic agents differ among each other with respect to whether they are allowed to save (in physical capital) or not. Policy is optimally chosen by a benevolent Ramsey government. The main theoretical finding comes to confirm the widespread belief that the introduction of consumption taxes into a model with income taxes only, creates substantial efficiency gains for the economy as whole, but at the cost of higher income inequality. In other words, consumption taxes reduce the progressivity of the tax system, and maybe, from a normative point of view, this result justifies the design of a set of subsidies policies which will aim to outweigh the regressive effects of the otherwise more efficient consumption taxes.
Introduction
The literature on optimal taxation typically focuses on income taxes and rules out consumption taxes. For example, Chamley (1986), Judd (1985) and Lucas (1990) assumed that the consumption of goods is untaxed in each period and that there are only taxes on income from savings and labour. However, consumption taxes are a very popular tax policy instrument in the hands of policymakers, and this can be confirmed by their widespread use in most industrialized economies. For instance, according to Table 1 below, average effective consumption tax rates are about 22.1% in a sample of 25 countries, where the data are taken from Eurostat for a ten-year period (2002-2010). The estimates regarding average effective consumption tax rates vary considerably across countries. For example, in the aforementioned sample of countries, the average effective consumption tax rates range between 15.1% and 32.7%. Furthermore, revenues from consumption taxes represent a significant proportion of total tax revenues. For instance, the average share of revenues from consumption taxes in total tax revenues for the same sample is about 33.2%. For a number of countries, such as Cyprus, Latvia, Lithuania, Hungary and Portugal, this percentage is even higher and exceeds 38%.
This popularity of consumption taxes, as a policy instrument in the hands of policymakers, can be explained by the widespread belief, that they are a less distortive policy instrument relative to income taxes and, thus, increase aggregate efficiency (see e.g. Coleman (2000), Correia (2010) and many others). However, consumption taxes are also believed to increase income inequality and, thus, hurt the medium and low income social classes. Motivated by the above, this paper aims to study the role of consumption taxes in a two-period deterministic version of the neoclassical growth model. Following most of the relevant literature, we follow the Ramsey approach to the optimal tax policy problem according to which the government is able to commit to future policies 1 . By allowing the government to choose optimally the tax mix (between income and consumption taxes), we aim to study the tradeoff between efficiency and redistribution.
To capture the distributional implications, we need to distinguish among the various economic agents so as to generate a potential conflict of interests. According to Turnovsky (2000), the most common distinction in the literature that creates a potential conflict of interests is the functional distribution between income going to capital and that going to labour. Thus, we work with a two-period deterministic version of Judd's (1985) neoclassical growth model, in which households differ in capital holdings. In particular, we assume that there are two groups of households, called capitalists and workers, where capital is in the hands of capitalists, while workers, who form the majority in our economy, are not allowed to save2. Moreover, we assume that capitalists are more skilled than workers and, thus, the aggregate labor input is a linear function of high-skilled and low-skilled labor, which are supplied by capitalists and workers respectively (as in Hornstein et al. (2005)). This differentiation between high-skilled and low-skilled labor is driven by differences in labor factor productivities. The government is allowed to finance the provision of utility-enhancing public goods by choosing not only the level of government spending but also the mix between income and consumption taxes. All types of taxes are proportional to their own tax base 3 . Our main result is that the introduction of consumption taxes into a model with income taxes only generates a tradeoff between efficiency and redistribution. In particular, the economy with both income and consumption taxes is more efficient than the economy with income taxes only, in the sense that output is higher in the former case. Moreover, both groups of households are better off, both in terms of income and welfare, in the economy with income and consumption taxes. On the other hand, income inequality increases once consumption taxes are introduced in the economy, which simply implies that capitalists benefit more than workers as we move to a more efficient economy. Hence, it seems that we are able to confirm the widespread belief mentioned above that a switch to a mix of income and consumption taxes creates welfare gains for both the economy as a whole and the various social classes individually, but at the cost of higher net income inequality. In other words, the introduction of consumption taxes reduces the progressivity of the tax system. From a normative point of view, this may also justify the design of a set of subsidies policies which will aim to outweigh the regressive effects of the otherwise more efficient consumption taxes. The rest of the paper is organized as follows. Section 2 discusses briefly the relevant literature and explains how the paper differentiates from it. Section 3 describes the economic environment and defines, first the Decentralized, and then, the Ramsey, General Equilibrium (DGE and RGE respectively). Section 4 discusses the parameter values used in numerical solutions, and presents and discusses the numerical results. Section 5 presents the case of a nonutilitarian government. Finally, Section 6 concludes. Various algebraic details are included in an appendix.
Related Literature and How Our Paper is Differentiated
Our paper belongs to the huge -and still growing -literature on the relationship between fiscal policy, and in particular taxation, and macroeconomic outcomes. A key question in this literature concerns how changes in the tax mix affect, among others, growth, welfare and inequality.
In particular, the literature on optimal Ramsey taxation studies extensively the role of consumption taxes on the grounds of efficiency. It is a common belief that a shift from income taxation to consumption taxes raises aggregate efficiency since it induces capital accumulation and reduces the so-called under-investment problem due to high capital income taxation. Notice that an increase in the income tax rate decreases future consumption relative to current consumption. Thus, the choice between income and consumption taxes can be thought of as the choice of the optimal taxes on current and future consumption. In this vein, Coleman (2000) studies optimal policy in a representative agent model where the government is allowed to choose optimally capital income and labour income taxes, as well as taxes on consumption expenditure, and finds that there are large welfare gains when the government uses a mix of income and consumption taxes. Zhang et al. (2007) study optimal Ramsey taxation in a neoclassical growth, representative agent model in order to examine the superiority, in terms of welfare, of consumption taxes relative to income taxes. In particular, they choose optimally a mix of fiscal instruments that consists of capital income, labour income and consumption taxes as well as subsidies on net investment. Their main findings are that: (a) the government should tax leisure and private consumption at the same rate and should subsidize net investment at the same rate as capital income taxation, (b) the tax rate on capital should be higher than that on labour so as to increase labour and reduce leisure, and (c) all taxes and subsidies should be constant over time, except for the capital income tax rate, which can differ in the initial period. In the same context, Motta and Rossi (2013) study optimal fiscal and monetary policy in a new Keynesian representative agent model with public debt and examine how the Ramsey policy changes when the government chooses optimally labour income taxes and consumption taxes. They show that the introduction of consumption taxes generates substantial welfare gains which are not limited to the steady state but are also evident in the dynamic stochastic equilibrium. Moreover, the optimal size of the provided public services is remarkably higher in the presence of consumption taxes.
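The intertemporal intuition mentioned above (that a uniform consumption tax leaves the current-versus-future consumption margin undistorted, while a capital income tax does not) can be illustrated with a minimal two-period, log-utility saving problem. The sketch below is not taken from the model in this paper; the functional forms, parameter values and the zero second-period endowment are assumptions made purely for illustration.

```python
def two_period_choice(y1, r, beta, tau_c, tau_k):
    """Closed-form plan of a log-utility saver facing a uniform consumption tax tau_c
    and a capital-income tax tau_k (second-period endowment set to zero for simplicity).

    Budget: (1 + tau_c)*c1 + s = y1   and   (1 + tau_c)*c2 = s*(1 + r*(1 - tau_k)).
    """
    r_net = 1.0 + r * (1.0 - tau_k)              # after-tax gross return on saving
    c1 = y1 / ((1.0 + beta) * (1.0 + tau_c))     # first-period consumption
    c2 = beta * r_net * c1                       # second-period consumption (Euler equation)
    return c1, c2, c2 / c1                       # c2/c1 is the intertemporal margin

# Same household, two stylised policies:
print(two_period_choice(y1=1.0, r=0.5, beta=0.96, tau_c=0.2, tau_k=0.0)[2])  # ~1.44: margin undistorted
print(two_period_choice(y1=1.0, r=0.5, beta=0.96, tau_c=0.0, tau_k=0.4)[2])  # ~1.25: margin distorted
```

With log utility the consumption growth rate equals beta times the after-tax gross return, so it is invariant to tau_c but falls with tau_k; this is the efficiency argument behind shifting the tax mix toward consumption taxes.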
In addition to the above, the literature on endogenous growth models has investigated extensively the effects of both income and consumption taxes on economic growth. For instance, Barro (1990) incorporates a public sector into a simple representative agent model of endogenous growth, in which, the engine of growth is productive public expenditures. The aim is to show, among others, how the externalities associated with public expenditures and income taxes affect negatively savings and economic growth. Also, King andRebelo (1990), Rebelo (1991), Pecorino (1993) and Stokey and Rebelo (1995), all within a representative consumer setup, show that income taxes are in general growth reducing. On the other hand, the impact of consumption taxes on growth is ambiguous and depends on model specifications. Milesi-Ferretti and Roubini (1998), using a representative agent model, examine the macroeconomic effects of taxes on capital income, labour income and consumption spending on economic growth and aggregate welfare. In their model, the engine of growth is the accumulation of human and physical capital. In particular, they show that the effects of the different taxes on economic growth depend crucially on the specification of leisure and the structure as well as the tax treatment of the human capital accumulation sector. Taxes on capital and labour income have a negative effect on growth. Also, consumption taxes hurt long-term growth, when government spending as a fraction of output is constant and, thus, the revenues raised due to the presence of the consumption tax are rebated in a lump sum way to consumers. As a result, the effect on labour supply is negative and economic growth decreases. Following Jones, Manuelli and Rossi (1993), where the optimal tax mix consists in setting all taxes equal to zero in the long run and accumulating government budget surpluses in order to finance future government spending through the returns of its assets, Milesi-Ferretti and Roubini (1998) focus on the growth maximizing tax mix in the long run rather than on welfare maximizing tax policy.
For reasons of simplification, they assume that labour is inelastically supplied and, thus, consumption tax has no impact on economic growth and limit their analysis to the study of the optimal structure of taxation between human and physical capital taxation. On the contrary, Rebelo (1991) claims that changes in consumption taxes do not affect growth if government spending is allowed to change parallel to consumption taxes. This happens because the extra tax revenues due to an increase in the consumption tax rate are not rebated to consumers and the income and substitution effects of the consumption tax leave unaffected labour supply and, thus, growth. Therefore, the main lesson derived from most of the above studies that use representative agent models in order to investigate the implications of the use of income and consumption taxes for the macroeconomy, is that consumption taxes are a less distortive policy instrument relative to income taxes, and, thus, are good on the grounds of efficiency. This conclusion seems to be in accordance with what most people tend to believe and may explain the popularity of consumption taxes relative to income taxes as a tax policy instrument.
However, and leaving aside efficiency issues, it is also believed that the use of consumption taxes increases inequality by hurting the poor people. Hence, it is also crucial to study the implications of the use of consumption taxes for the distribution of income. This extremely interesting and important question, regarding the distributional implications of the consumption tax rates, has paradoxically attracted very limited attention by the researchers. Few notable exceptions include, for instance, Penalosa and Turnovsky (2008), Correia (2010), and Krusell et al. (1996), all of whom however, focus on exogenous tax policy systems and reforms.
In particular, Penalosa and Turnovsky (2008) study the effects of exogenous changes in capital, labour and consumption taxes on the wealth distribution in a model in which agents differ in initial capital endowments and the labour supply is endogenously determined. Their main result is that exogenous tax changes that reduce the labour supply not only decrease output but also decrease after-tax income inequality. Moreover, the higher the consumption tax rate the smaller the decrease in output and inequality. On the other hand, Correia (2010) uses a model in which she allows for heterogeneity across agents and shows that the exogenous substitution of income taxes with a flat consumption tax rate increases aggregate efficiency and reduces income inequality. However, this occurs due to the presence of nondiscriminatory lump-sum transfers that increase the progressivity of the tax system. This in turn more than outweighs the regressive consequences of the use of consumption taxes. Finally, Krusell et al. (1996) study the efficiency and distributional effects of an exogenous switch from an income tax to a consumption tax using a neoclassical growth model in which political-equilibrium theory is applied. Heterogeneity across agents is based on their wealth holdings and/or labour productivity. They examine whether an economy with consumption taxes only is superior to the same economy with income taxes only, when government's outlays are used for redistribution through transfers. They find that income taxes are generally a better policy instrument than consumption taxes in the sense that the use of more distortionary taxes results in higher welfare, since they reduce the level of government activity. On the other hand, tax systems with both income and consumption taxes are better relative to one-tax systems since the government has at its disposal more policy instruments. Moreover, they find that when the aim of the government is to finance the provision of public goods and not to provide any transfers to agents, then less distortionary taxes are superior. Concerning the distributional implications of the aforementioned tax changes, Krusell et al. find that only very rarely does the median agent improve her welfare after a change in the tax mix (for example, when the economy switches from a one-tax system to a two-tax system, the median agent gains whereas the rest, those having the same labor efficiency as the median agent but different non-human wealth, lose).
Therefore, our paper differs from the existing relevant literature, discussed briefly above, in that, at the same time, first, we introduce consumption taxes in an otherwise standard neoclassical growth model with heterogeneous agents and income taxes only, and second, we focus on optimal Ramsey policies. In other words, by combining all these elements, our paper somehow generalizes the relevant literature, which so far has given emphasis to specific aspects of this problem while ignoring other important dimensions. This more generalized setup not only allows us to investigate the properties of optimal taxation once consumption taxes are introduced, but also to explore the aggregate and distributional implications of this richer menu of tax policies. Namely, we show that in such a setup, the introduction of optimally chosen consumption taxes in an economy with income taxes only implies substantial aggregate output and welfare gains, thus making all agents better off relative to the case in which public spending is financed by income taxes only. However, in the absence of ex-post redistributive schemes, net income inequality increases in the sense that capitalists benefit more relative to workers from the introduction of consumption taxes. Moreover, and since our results are numerical, we should mention that they do not depend on whether the government is utilitarian or not, and are relatively robust to various parameter changes.
In summary, this is a paper that focuses on both efficiency and distributional issues in the sense that it studies the implications of the use of less distorting policy instruments, i.e. consumption taxes, for both the economy as a whole and the distribution of income among capital owners and workers. Obviously, from this perspective, it should be considered as a positive exercise which does not provide normative suggestions on how policy should deal with the unequal distribution of income.
Description of the Model
The setup is a two-period deterministic version of the standard neoclassical growth model comprised of households, firms and a government. This model is extended to allow for heterogeneity among agents. In particular, the private sector consists of two groups of households that are assumed to differ in capital holdings and labor productivity. Following Judd (1985) and Lansing (1999), capital is in the hands of a small group of agents, called capitalists, while workers, who, by assumption, form the majority in our economy, are not allowed to make savings 4 . Also, as in Hornstein et al. (2005), the aggregate labor input is a linear function of high-skilled and low-skilled labor, for capitalists and workers respectively, with different factor productivities. Households derive utility from private consumption, leisure and the provision of public goods. For simplicity, we use a logarithmic utility function in which preferences are separable in all three components. In the first period, capitalists consume, work and save, while workers only consume and work. In the second period both groups of households consume and work. In the production sector of the economy, private firms, which are owned to capitalists, maximize their profits by using capital and labour inputs to produce a single homogeneous good. They produce this good using constant returns to scale production function, which is strictly concave, differentiable and strictly increasing in both inputs. There are competitive factor markets. Each capitalist owns a firm and, thus, profits, if any, are distributed to capitalists.
Also, there is private good production in both periods.
The government needs revenues to provide public goods in both periods 5 . To finance these utility -enhancing public goods, it imposes linear taxes on income and consumption spending.
For simplicity, we abstract from public debt, so that the government budget is balanced in each period. Policy is chosen optimally. We will examine optimal policy with commitment, the so-called Ramsey policy, in which policy is chosen once-and-for-all at the beginning of the time horizon. Thus, the government will maximize a weighted average of capitalists' and workers' welfare by choosing income taxes, consumption taxes, as well as the associated amount of the public good.
Total population size, N, is exogenous and constant. Workers are indexed by the subscript w = 1, 2, …, N_w and capitalists by the subscript k = 1, 2, …, N_k. In particular, among the N agents, N_k < N_w are identical capitalists, while the majority, N_w = N − N_k, are identical workers. There are also f = 1, 2, …, N_f private firms, where the number of firms, for simplicity, equals the number of capitalists, N_f = N_k. Notice also that there is no social mobility between the two groups.
Households as capitalists
Each capitalist k chooses consumption and labour effort in both periods, as well as savings in the first period, in order to maximize her two-period lifetime welfare subject to her two consecutive budget constraints. The parameters μ₁, μ₂, μ₃ > 0 are preference weights on private consumption, leisure and the public good, the gross returns to capital and labour enter the budget constraints in both periods, 0<β<1 is the discount rate, 0≤δ≤1 is the capital depreciation rate, and the tax rates on income and consumption spending in both periods lie between 0 and 1. Notice that capitalists are not allowed to leave bequests and, thus, the capital stock carried beyond the second period is set to zero. Since the assumptions we make regarding the operation of firms (see below) imply zero profits in equilibrium, we omit them from the capitalist's budget constraints.
The first order conditions include the two consecutive budget constraints and the optimality conditions with respect to consumption, labour effort and savings. Note that the beginning-of-first-period capital stock is predetermined. The first two static equations, 1.3 and 1.4 respectively, give the labour-supply decisions of the capitalist in each period, whereas the last one, equation 1.5, is the standard Euler equation.
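For concreteness, one consistent way to write the capitalist's problem is sketched below. The symbols used here are assumptions rather than the paper's original typesetting: c_{k,t} is consumption, l_{k,t} labour, k_{k,t} capital, r_t and w_t the gross returns to capital and (effective) labour, e_k the capitalist's labour productivity, τ^y_t and τ^c_t the income and consumption tax rates, and G_t the public good entering utility.

```latex
\[
\max_{c_{k,1},c_{k,2},l_{k,1},l_{k,2},k_{k,2}}\;
  \sum_{t=1}^{2}\beta^{t-1}\Big[\mu_{1}\log c_{k,t}+\mu_{2}\log(1-l_{k,t})+\mu_{3}\log G_{t}\Big]
\]
\[
\text{s.t.}\quad
(1+\tau^{c}_{t})\,c_{k,t}+k_{k,t+1}-(1-\delta)k_{k,t}
   =(1-\tau^{y}_{t})\big(r_{t}k_{k,t}+e_{k}w_{t}l_{k,t}\big),\qquad t=1,2,\;\; k_{k,3}\equiv 0,
\]
\[
\frac{\mu_{2}}{1-l_{k,t}}=\frac{\mu_{1}}{c_{k,t}}\,
  \frac{(1-\tau^{y}_{t})\,e_{k}w_{t}}{1+\tau^{c}_{t}},\qquad
\frac{\mu_{1}}{(1+\tau^{c}_{1})\,c_{k,1}}
  =\beta\,\frac{\mu_{1}}{(1+\tau^{c}_{2})\,c_{k,2}}
   \Big[(1-\tau^{y}_{2})\,r_{2}+1-\delta\Big].
\]
```

The last two lines correspond to the static labour-supply conditions (equations 1.3-1.4) and the Euler equation (1.5) referred to in the text.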
Households as Workers
Each worker w chooses consumption and labour effort in both periods in order to maximize her two-period lifetime welfare, which has the same logarithmic form as the capitalist's (with weights μ₁, μ₂ and μ₃ on private consumption, leisure and the public good, discounted by β), subject to her two consecutive budget constraints. The first order conditions include the two consecutive budget constraints and the optimality conditions with respect to consumption and labour effort in each period. Note that the workers are not allowed to save. The two static equations, 1.8 and 1.9 respectively, give the labour-supply decisions of the worker.
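Under the same assumed notation (c_{w,t} and l_{w,t} for the worker's consumption and labour, e_w for her labour productivity), the worker's problem can be sketched as:

```latex
\[
\max_{\{c_{w,t},\,l_{w,t}\}_{t=1,2}}\;
 \sum_{t=1}^{2}\beta^{t-1}\Big[\mu_{1}\log c_{w,t}+\mu_{2}\log(1-l_{w,t})+\mu_{3}\log G_{t}\Big]
\quad\text{s.t.}\quad
 (1+\tau^{c}_{t})\,c_{w,t}=(1-\tau^{y}_{t})\,e_{w}w_{t}l_{w,t},\qquad t=1,2,
\]
\[
\frac{\mu_{2}}{1-l_{w,t}}
  =\frac{\mu_{1}}{c_{w,t}}\,\frac{(1-\tau^{y}_{t})\,e_{w}w_{t}}{1+\tau^{c}_{t}},\qquad t=1,2,
\]
```

where the second display corresponds to the static labour-supply conditions (equations 1.8-1.9 in the text).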
Firms
There is production in both periods. There are f = 1, 2, …, N_f firms owned by capitalists. Thus, each capitalist owns a firm and hence N_f = N_k. Each firm maximizes its profits in each period, where output in periods 1 and 2 is produced according to standard Cobb-Douglas production functions in the capital inputs supplied by capitalists and the aggregate effective labour inputs, while A>0 and 0<α<1 are the usual technology parameters.
We assume that capitalists are more skilled than workers and, therefore, the two types of agents face different factor productivities. Thus, as in Hornstein et al. (2005), we generalize the production function by disaggregating the contributions to production of the two labour inputs. We assume that the aggregate effective labour input is a weighted linear function of labour supplied by high-skilled agents (i.e. capitalists) and labour supplied by low-skilled agents (i.e. workers), where the weights reflect the different productivities/skills of capitalists and workers and the weight on capitalists' labour exceeds that on workers' labour. Thus, the production functions presented in equations 1.12 and 1.13 can be written in terms of the capital inputs, the labour inputs supplied by capitalists and the labour inputs supplied by workers.
The first order conditions of the above profit maximization problems, with respect to capital and the two types of labour in each period, are the standard marginal-productivity conditions for each factor.
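A sketch of the firm's problem consistent with this description follows; the per-firm notation (capital k_{f,t}, effective labour L_{f,t}) and the even allocation of workers across the N_k identical firms are assumptions made here rather than taken from the paper.

```latex
\[
\max_{k_{f,t},\,l_{k,t},\,l_{w,t}}\;
  A\,k_{f,t}^{\alpha}L_{f,t}^{1-\alpha}
  -r_{t}k_{f,t}-w_{t}L_{f,t},
\qquad
L_{f,t}=e_{k}l_{k,t}+e_{w}\tfrac{N_{w}}{N_{k}}\,l_{w,t},
\]
\[
r_{t}= \alpha A\,k_{f,t}^{\alpha-1}L_{f,t}^{1-\alpha},\qquad
w_{t}= (1-\alpha)A\,k_{f,t}^{\alpha}L_{f,t}^{-\alpha}.
\]
```

Under these conditions each unit of effective labour earns w_t, so capitalists receive e_k·w_t and workers e_w·w_t per hour worked, and profits are zero under constant returns to scale, as stated above.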
Government
The government operates in each period. It needs revenues to provide utility-enhancing public goods and, therefore, we assume that it finances the provision of these public goods by a mix of linear income and consumption taxes, which are both proportional to their own tax base. The two consecutive government budget constraints, written in aggregate terms, state that in each period t the total provision of the public good equals the revenues raised from income and consumption taxes in that period.
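In the notation assumed in the sketches above, the two aggregate budget constraints can be written as follows; whether households value total or per-capita public goods is left as in the text, and the symbols remain assumptions.

```latex
\[
\bar{G}_{t}
 = \tau^{y}_{t}\Big[N_{k}\big(r_{t}k_{k,t}+e_{k}w_{t}l_{k,t}\big)+N_{w}\,e_{w}w_{t}l_{w,t}\Big]
 + \tau^{c}_{t}\Big[N_{k}c_{k,t}+N_{w}c_{w,t}\Big],
\qquad t=1,2,
\]
```

where \bar{G}_t denotes the total provision of the public good in period t.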
Market clearing Conditions
Each capitalist owns a firm. Hence, it holds that N_f = N_k = N − N_w. It is convenient to define the population shares of the two groups as the ratios of the number of capitalists and the number of workers to the total population, N_k/N and N_w/N respectively.
Decentralized Competitive Equilibrium (for given policy)
Now we can define the Decentralized Competitive Equilibrium (DCE) for any feasible policy.
In the above equations, instead of equations 2.1-2.2 (the capitalist's budget constraints), we can use the two resource constraints of the economy. Hence, we end up with a system of 11 equations (2.1-2.11 or 2.3-2.13) in 9 endogenous variables (the consumption, labour and savings allocations of the two groups) and 2 of the policy instruments, which adjust to satisfy the two consecutive government budget constraints. This holds for any feasible tax policy. In the case of Ramsey policy, the above equations will serve as the constraints faced by the Ramsey government when the latter chooses the policy instruments at the beginning of the time horizon. Irrespective of how policy is chosen, we need to make sure that the DCE system delivers a meaningful numerical solution. We check this in subsection 5.1.
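As one illustration of how the DCE block can be handled numerically, the sketch below stacks nine private-sector conditions for a given tax policy and solves them with a standard root finder, treating the public-good provision in each period as the residually adjusting instrument. The functional forms, variable names and benchmark parameter values follow the reconstruction used in the sketches above and the calibration reported in the parameterization section; they are assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import fsolve

# Benchmark calibration (values from the parameterization section);
# the functional forms below are an assumed reconstruction of the model.
alpha, A, beta, delta = 0.30, 1.0, 0.90, 0.12
mu1, mu2, mu3 = 0.30, 0.50, 0.20
e_k, e_w = 5.0, 1.0          # labour productivities: capitalists / workers
n_k, n_w = 0.30, 0.70        # population shares
k1 = 0.05                    # first-period capital per capitalist (given)

def prices(k, lk, lw):
    """Factor prices from the per-firm Cobb-Douglas technology."""
    L = e_k * lk + e_w * (n_w / n_k) * lw      # effective labour per firm
    r = alpha * A * k ** (alpha - 1) * L ** (1 - alpha)
    w = (1 - alpha) * A * k ** alpha * L ** (-alpha)
    return r, w

def dce_residuals(x, ty, tc):
    """Nine private-sector conditions, given income taxes ty=(ty1,ty2)
    and consumption taxes tc=(tc1,tc2)."""
    ck1, ck2, cw1, cw2, lk1, lk2, lw1, lw2, k2 = x
    r1, w1 = prices(k1, lk1, lw1)
    r2, w2 = prices(k2, lk2, lw2)
    res = np.empty(9)
    # capitalist budget constraints (no bequests: k3 = 0)
    res[0] = (1 + tc[0]) * ck1 + k2 - (1 - delta) * k1 \
             - (1 - ty[0]) * (r1 * k1 + e_k * w1 * lk1)
    res[1] = (1 + tc[1]) * ck2 - (1 - delta) * k2 \
             - (1 - ty[1]) * (r2 * k2 + e_k * w2 * lk2)
    # worker budget constraints
    res[2] = (1 + tc[0]) * cw1 - (1 - ty[0]) * e_w * w1 * lw1
    res[3] = (1 + tc[1]) * cw2 - (1 - ty[1]) * e_w * w2 * lw2
    # static labour-supply conditions
    res[4] = mu2 / (1 - lk1) - mu1 / ck1 * (1 - ty[0]) * e_k * w1 / (1 + tc[0])
    res[5] = mu2 / (1 - lk2) - mu1 / ck2 * (1 - ty[1]) * e_k * w2 / (1 + tc[1])
    res[6] = mu2 / (1 - lw1) - mu1 / cw1 * (1 - ty[0]) * e_w * w1 / (1 + tc[0])
    res[7] = mu2 / (1 - lw2) - mu1 / cw2 * (1 - ty[1]) * e_w * w2 / (1 + tc[1])
    # capitalist's Euler equation
    res[8] = mu1 / ((1 + tc[0]) * ck1) \
             - beta * mu1 / ((1 + tc[1]) * ck2) * ((1 - ty[1]) * r2 + 1 - delta)
    return res

def solve_dce(ty=(0.15, 0.30), tc=(0.20, 0.20)):
    # a sensible initial guess matters for convergence of the root finder
    x0 = np.array([0.1, 0.1, 0.05, 0.05, 0.3, 0.3, 0.3, 0.3, 0.05])
    x = fsolve(dce_residuals, x0, args=(ty, tc))
    ck1, ck2, cw1, cw2, lk1, lk2, lw1, lw2, k2 = x
    # public goods (per capita) are residually determined by the budgets
    G = []
    for t, (k, lk, lw, ck, cw) in enumerate(
            [(k1, lk1, lw1, ck1, cw1), (k2, lk2, lw2, ck2, cw2)]):
        r, w = prices(k, lk, lw)
        G.append(ty[t] * (n_k * (r * k + e_k * w * lk) + n_w * e_w * w * lw)
                 + tc[t] * (n_k * ck + n_w * cw))
    return x, G

if __name__ == "__main__":
    allocation, public_goods = solve_dce()
    print("allocation:", np.round(allocation, 3))
    print("public goods per capita:", np.round(public_goods, 3))
```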
Ramsey General Equilibrium
We will consider optimal policy with commitment. In this case, the so-called Ramsey General Equilibrium, policy is chosen once-and-for-all at the beginning of the time horizon before private agents make their choices. Notice that the government is benevolent and, thus, maximizes a weighted average of the utilities of the two groups of households, taking into account the DCE equations. The problem is solved by backward induction. This means that we first solve the private agents' problem and then we solve for optimal policy.
We now define the Ramsey equilibrium, i.e. when the policy-maker is able to commit to future policies. Notice that, in order to make the Ramsey policy problem non-trivial, we impose a restriction on the first-period income tax rate, for example by taking it as given at a small number (for instance, 0.15 in our numerical solution; see also below). This approach rules out taxing heavily the initial capital stock, which would be equivalent to a non-distorting lump-sum tax, since the initial capital stock is in fixed supply.
We assume commitment technologies, i.e. the government can commit itself to the policies that will be in place arbitrarily into the second period. The sequence of time is as follows. Policy is chosen once-and-for-all in the beginning of period 1 before any private decisions are made.
We solve the problem by backward induction. This means that the agents first solve their optimization problems for given policy and then the government chooses the policy instruments to maximize a weighted average of the utilities of the two agents, subject to the DCE equations derived earlier, where the given political preference weights, which lie between 0 and 1 and sum to one, measure respectively the influence of the two social classes, workers and capitalists, in the policy-setting process.
That is, we follow the dual approach 6 to the Ramsey policy problem, where the government re-chooses the allocations and the policy variables subject to the DCE. The first-order conditions of the above maximization problem are presented in detail in the appendix. Since it is impossible to get an analytical solution of the Ramsey General Equilibrium, we resort to numerical simulations, which are presented in the next section.
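Given a DCE solver such as the one sketched earlier, the Ramsey problem with commitment can be approximated numerically by an outer optimization over the policy instruments. The sketch below is again an assumed implementation: it reuses the parameters and solve_dce from the previous snippet (imported here from a hypothetical module name), fixes the first-period income tax at 0.15, and maximizes the population-weighted welfare of the two groups over the second-period income tax and a flat consumption tax. In practice, bounds or penalty terms would be needed to keep the optimizer away from policies that make the public good or consumption non-positive.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical module name for the previous sketch
from dce_sketch import solve_dce, mu1, mu2, mu3, beta

lam = 0.70      # weight on workers (utilitarian case: equal to their share)

def lifetime_welfare(x, G):
    ck1, ck2, cw1, cw2, lk1, lk2, lw1, lw2, _ = x
    def u(c, l, g):
        return mu1 * np.log(c) + mu2 * np.log(1 - l) + mu3 * np.log(g)
    Uk = u(ck1, lk1, G[0]) + beta * u(ck2, lk2, G[1])
    Uw = u(cw1, lw1, G[0]) + beta * u(cw2, lw2, G[1])
    return Uk, Uw

def negative_welfare(policy):
    ty2, tauc = policy                       # flat consumption tax in both periods
    x, G = solve_dce(ty=(0.15, ty2), tc=(tauc, tauc))
    Uk, Uw = lifetime_welfare(x, G)
    return -((1 - lam) * Uk + lam * Uw)

best = minimize(negative_welfare, x0=[0.3, 0.2], method="Nelder-Mead")
print("approximate optimal (second-period income tax, flat consumption tax):",
      np.round(best.x, 4))
```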
Parameterization
Since the above described general equilibrium cannot be solved analytically, we present numerical solutions using common parameter values. Specifically, in the private production function in equations (1.12) and (1.13), the Cobb-Douglas exponent on employment, 1 − α, is equal to the labour share of income, 0.7, which is close to the value used by Angelopoulos et al. (2011); the rest, α, is the exponent on capital. The scale parameter in the production function, A, is set at 1. Also, we set the labour productivity of capitalists at 5 and that of workers at 1, since capitalists are assumed to be more skilled than workers and, therefore, face higher productivity for their labour supply, resulting, in turn, in higher wages. This assumption not only allows us to get a solution for the labour supply of capitalists within an accepted range but is also in line with a strand of literature that has examined the occupational choice of economic agents, usually focusing on the distinction between entrepreneurs and workers and its implications for skill acquisition (see, e.g., Quadrini 2000; Matsuyama 2006; Kambourov and Manovskii 2009) 7 . The time preference rate, β, is set at 0.9. For the weights given by the households to private consumption, leisure and public consumption respectively, we assume that μ₁ = 0.3, μ₂ = 0.5 and μ₃ = 0.2. The latter value is close to the one used by similar studies, whereas the weights assigned to private consumption and leisure imply hours of work within usual ranges. Finally, the capital depreciation rate, δ, is set at 0.12.
Regarding the capitalists' and workers' population shares in total population, we set them to 0.3 and 0.7 respectively. The weight given by the government to workers' welfare, and the complementary weight given to capitalists, enter the policy objective. Thus, when we assume that the government is utilitarian, the policy is chosen by a government that attaches weights to the utility of workers and capitalists equal to their population shares (see e.g. Angelopoulos et al., 2011); therefore, we set these weights at 0.7 and 0.3 respectively. 8 Furthermore, the first-period capital stock is exogenously given and set at 0.05; we report that our results are robust to changes in the value of the initial capital stock.
Finally, as already said in the previous section, in order to make the Ramsey policy problem non-trivial, we impose a restriction on the first-period income tax rate by taking it as given at a small number. Otherwise, the government would choose to tax heavily the initial capital stock, which would be equivalent to a non-distorting lump-sum tax, since the initial capital stock is in fixed supply. Therefore, in our specific numerical simulations we have set the first-period income tax rate at 0.15; regarding this parameter value see also the discussion in footnote 13.
7 Notice that our results are robust to changes in these parameter values.
8 According to Grossman and Helpman (1996), capitalists seem to have more power in the government decision process. Also, as Angelopoulos et al. (2011) claim, policies are chosen by governments that may have specific ideological preferences over groups or are preferred by the majority of the voters. Thus, in reality, governments are not utilitarian. Therefore, it is useful to examine the effects of the introduction of consumption taxes on redistributive incentives in an economic environment where the government cares more or less for one specific group. We check this case in section 6.
Revenue-Neutral Tax Reforms When Policy is Exogenous
Before we study optimal policy, it is useful to study some exogenous policy reforms. In particular, we examine a revenue-neutral change in the second-period income tax rate and the impact of this reform on efficiency and redistribution incentives. Initially, we assume that the first-period income tax rate is 0.15, the second-period income tax rate is 0.3 and both consumption tax rates are 0.2, whereas the public goods in the two periods are residually determined by the two consecutive government budget constraints.9 Next, we change the second-period income tax rate and, at the same time, we keep the total tax revenues constant. As a result, the consumption taxes are determined residually by the Tax Revenue equation in each period (see the sketch below).
The quantitative and qualitative effects of this revenue-neutral reform are presented in Tables 3.1 and 3.2 (see also Figure 1 in the appendix). In particular, a decrease in the second-period income tax rate results in an increase in the second-period consumption tax rate. Hence, income taxes are substituted by higher consumption taxes, since the government has to generate the required revenues to finance the provision of public goods, which, as said, is held constant. Moreover, savings are lower. This happens because the savings decision is made in the first period only, where the income tax rate is given. Thus, the introduction of a positive consumption tax in the first period reduces the disposable income of the capitalists and, thus, private consumption and savings. On the other hand, there is a positive effect on savings from the decrease in the second-period income tax rate. However, as the former effect dominates the latter, the net effect is a lower capital stock in the second period 10 . Also, labour supply increases when the second-period income tax falls and the consumption tax rises, resulting in higher output and welfare in the economy. Thus, the economy is more efficient as income taxes are substituted by consumption taxes. Also, at an individual level, workers can benefit from a more efficient economy, as can be seen in Table 3.2. On the other hand, the welfare of the capitalists does not behave monotonically. In particular, it initially increases and then decreases as the second-period income tax rate falls. This happens because high consumption taxes hurt capitalists more. Net income inequality 11 decreases in the first period (where the first-period income tax rate is given), while in the second period net income inequality increases, implying that capitalists' net income increases by more than workers' net income. Thus, although the substitution of income taxes with consumption taxes is Pareto improving, the associated efficiency gains come at the cost of higher income inequality. This means that there is a tradeoff between efficiency and equity.
9 The values that have been assigned to the tax rates are close to OECD averages.
10 Although we have not managed to establish theoretically why this is the case, we should report that we have experimented with changes in a wide range of parameter values to which this specific result seems to be robust. An exception is the value of the initial income tax rate. In particular, as the initial income tax rate increases, ceteris paribus, it is possible that the positive effect, coming from the lower second-period income tax rate, dominates the negative effect, coming from the positive first-period consumption tax rate, and thereby savings increase. This is a rather expected result, since the higher the initial income tax rate, the lower the need for the government to raise tax revenues through the endogenously determined tax instruments, and thereby the less it relies on the use of consumption tax rates, which affect negatively (in the first period, through the reduction in disposable net income) the level of savings. We would like to thank a referee for highlighting this point.
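For reference, the per-period Tax Revenue equation that determines the consumption tax residually in this exercise can be written, in the notation assumed in the sketches above, as:

```latex
\[
TR_{t}
 = \tau^{y}_{t}\Big[N_{k}\big(r_{t}k_{k,t}+e_{k}w_{t}l_{k,t}\big)+N_{w}\,e_{w}w_{t}l_{w,t}\Big]
 + \tau^{c}_{t}\Big[N_{k}c_{k,t}+N_{w}c_{w,t}\Big]
 = \overline{TR}_{t},\qquad t=1,2,
\]
```

so that, when the second-period income tax rate is lowered, the consumption tax rate adjusts to keep TR_t, and hence the financed provision of public goods, unchanged.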
The above analysis is for given policy. Next, we move to optimal policy with commitment, the so-called Ramsey equilibrium, in which second-best policy is optimally chosen by a benevolent Ramsey government.
Results from the Representative Agent Model
It is useful for what follows to present the numerical results from the respective representative agent model, using the same benchmark parameter values as above. 12 Thus, we work as follows: Initially, we solve for the commitment equilibrium when the government chooses optimally only the second-period income tax rate. Hence, the government chooses the second-period income tax rate and the public goods in the two periods to maximize the utility of the representative agent subject to the decentralized competitive equilibrium, when we exogenously set both consumption tax rates to zero. This serves as our benchmark regime. Next, we assume that the government can choose optimally both income and consumption taxes and we solve for two different cases. In the first regime, we introduce a flat consumption tax that is common to both periods, and the government chooses optimally the second-period income tax rate, the public goods in the two periods and the flat consumption tax rate. In the second regime, we assume that the government chooses optimally, among others, two different consumption taxes, one in each period. A numerical solution for these regimes is presented in Tables 4.1 and 4.2 below.
The main results are the following: There are welfare gains when the government is able to choose optimally both income and consumption taxes. For instance, welfare and second-period output are higher with the introduction of consumption taxes. Moreover, the second-period net income of the representative household increases. This happens because the government finds it optimal to raise revenues by setting a positive consumption tax rate. This, ceteris paribus, causes an increase in total tax revenues, creating fiscal space which allows for a decrease in the more distorting income tax rate in the second period. Also, consumption is lower in both periods due to the high consumption taxes, while the lower second-period income tax rate triggers an increase in the second-period labour supply, which, in turn, increases second-period output. Savings are lower with the introduction of the consumption taxes, although the second-period income tax rate decreases. This happens because the savings decision is made in the first period, where the income tax rate is given and equal to 0.15. The introduction of a consumption tax in the first period (or a flat consumption tax that affects both periods) reduces the household's first-period disposable income, which in turn reduces savings and first-period consumption. Thus, there are two opposite effects on savings, where the negative effect from the introduction of the consumption tax rate in the first period dominates the positive effect from the reduction of the second-period income tax rate. 13 To sum up, the economy with income and consumption taxes is more efficient than the economy without consumption taxes. In other words, a mix of income and consumption taxes increases welfare and output. Next, we move to the heterogeneous agents' case so as to investigate the distributional implications of the introduction of consumption taxes.
Results when Heterogeneity is Allowed
In this section, our aim is to highlight the aggregate and distributional implications of introducing consumption taxes into a model with income taxes only and heterogeneous agents, when the government chooses optimally the mix of income and consumption taxes. Thus, we choose to work as follows. First, we solve for the Ramsey/commitment equilibrium when the government chooses optimally the second-period income tax rate only. Thus, the government chooses , ₁, ₂ to maximize a weighted average of the utilities of the two agents, capitalists and workers, subject to the decentralized equilibrium equations, when we exogenously set = = 0. This serves as our benchmark regime. Next, we assume that the government can choose optimally both income and consumption taxes and we solve for two different cases. In the first regime, we introduce a flat consumption tax = = that is common for both periods and the government chooses optimally , ₁, ₂, . In the second regime, we assume that the government chooses optimally, among others, two different consumption taxes, one in each period, ≠ . A numerical solution for these regimes is presented in Tables 5.1 and 5.2 below.
The main results from the comparison of these regimes are as follows: First, the economy with the consumption taxes is welfare superior to the economy without (benchmark regime).
For instance, second-period total output ₂ and aggregate welfare are now higher and this is reasonable since the government has one more policy instrument at its disposal which is less distorting relative to income taxes. Second, at an individual level, both capitalists and workers are better off and benefit from a more efficient economy. For instance, second-period net incomes, and , and individual welfares, and , are higher when the government is allowed to choose optimally both income and consumption taxes. Notice that savings , are lower with the introduction of the consumption taxes. This happens because the saving decision is made by the capitalists in the first period, in which the income tax rate is given. Thus, high positive consumption taxes in the first-period hurt substantially the first-period net income of the capitalists, since is given, and, in turn, reduce savings and private consumption. This negative effect on savings dominates the positive effect from the decrease in the second-period income tax rate. Notice here, however, that if we allow for a three period economy where in the second period both the beginning-of-period and the end-of-period capital stock are endogenously determined, the effect of the introduction of consumption taxes on second-period savings , is positive. Hence, the capital stock in the third period is higher, since the capitalists can benefit from the lower income tax rate in the second period. We present the results for this special case in the appendix.
Third, for the regime with the flat consumption tax, the government finds it optimal to set the second-period income tax rate at a lower value (0.1645) relative to the case with income taxes only, and thereby chooses to partially substitute the more distorting income tax with the less distorting consumption tax. In other words, the Ramsey government, by realizing that it has a less distorting policy instrument at its disposal, chooses to generate the required revenues to finance the provision of public goods by taxing consumption (at a rate of 0.3964), so as to mitigate the distortionary effects imposed on the economy by high income taxation. Also, for the regime with the two different consumption taxes, the government chooses a positive consumption tax in the first period (0.3723) 14 and an extremely high second-period consumption tax (2.6258) so as to finance the increased provision of public goods and a very high income subsidy in the second period (an income tax rate of −1.3471). This is reminiscent of the quite large income subsidy and consumption tax, well in excess of 100%, in Coleman (2000) and many others. The related literature on optimal taxation derives that the optimal tax mix implies the same constant tax rate on consumption and leisure in each period and a zero tax on capital income. Hence, the tax mix that achieves the first-best allocation is one that taxes consumption, provides the same amount of subsidy to labour and imposes a zero capital income tax rate (see Lansing (1999), Coleman (2000) and Correia (2010)). The quantitative difference in our results, where the amount of the labour subsidy is lower than the amount of the consumption tax, is driven by the fact that we use a single income tax, rather than separate taxes on capital income and labour income. Otherwise, if the Ramsey government could use capital income taxes, labour income taxes and consumption taxes, it could attain the first-best allocation.
14 The first-period income tax rate is given and set equal to 0.15 and, therefore, there is no need for the government to offset any distortionary effects. Hence, the government chooses a first-period consumption tax that is lower than 100%, since there is no need to finance an income subsidy, and the additional revenues from the consumption tax are used to finance a larger amount of the public good in the first period.
Fourth, net income inequality increases when we move from the benchmark regime with income taxes to the regimes where the government chooses optimally both income and consumption taxes. Hence, the reduction in the optimal second-period income tax benefits capitalists more, since they work more, while workers' labour supply is unaffected from changes in the optimal income tax rate. Thus, there is a tradeoff between efficiency and redistribution. Although the introduction of consumption taxes by a Ramsey government is Pareto improving and benefits both capitalists and workers, income inequality increases. The mechanism that drives this result is twofold.
First, a consumption tax alters the savings/consumption decision of the capitalists (see equation 2.5) affecting their income from wealth while there is no income from wealth and such a decision for the workers. Notice that this channel is also present in the representative agent case. Moreover, the degree of substitution between consumption and income taxes in the optimal policy setting with heterogeneous agents determines the nature of the distributional implications of the introduction of consumption taxes in an economy with income taxes only. In our simulations, and for the specific parameter values we have experimented with, this channel leads to higher net income inequality.
Second, the change in capital caused by consumption taxes has differential effects on income from labour (due to productivity differences), which results in labour income inequality. 15 However, we should report that our results do not depend on the assumption of different labour productivities among capitalists and workers, although in the absence of this differentiation the regressive distributional implications of introducing consumption taxes get weaker. The introduction of consumption taxes in an economy with income taxes only can still be regressive, even in the absence of different labour productivities, as long as workers are not allowed to save, or, in case they are allowed, they face a higher transaction cost than capitalists for participating in the capital market. Therefore, at least in our numerical simulations, and for the specific parameter values we have used, both of the above described channels seem to lead to higher net income inequality. 16
15 As explained in section 4, we choose to differentiate labour productivities for two reasons: First, because this assumption is consistent with the literature on inequality, and especially with that strand of literature that has examined the occupational choice of economic agents, usually focusing on the distinction between entrepreneurs and workers and its implications for skill acquisition (see, e.g., Quadrini 2000; Matsuyama 2006), and second, because it allows us to get a solution for the labour supply of capitalists within an accepted range.
Revenue-Neutral Tax Reforms when Policy is Chosen Optimally
In this section, we study again the aggregate and distributional implications of introducing consumption taxes into a model with income taxes only, when the government chooses optimally both income and consumption taxes, but we focus mainly on the case in which the overall public spending remains constant and equal to its value when the government chooses optimally only the income tax rate. Thus, we choose to work as follows. First, we solve for the Ramsey/commitment equilibrium when the government chooses optimally the second-period income tax rate only. Thus, the government chooses , ₁, ₂ to maximize a weighted average of the utilities of the two agents, capitalists and workers, subject to the decentralized equilibrium equations, when we exogenously set = = 0. This serves as our benchmark regime. Next, we assume that the government can choose optimally both income and consumption taxes and we distinguish between two different cases. In the first case, we set ₁, ₂ as in the benchmark regime and allow for the government to choose optimally , , .
In the second case, we assume that the government chooses optimally all the policy instruments and, particularly, , , , ₁, ₂. Tables 6.1 and 6.2 below present the numerical results for these cases.
A comparison of the above cases reveals the following: The economy with the consumption taxes is welfare superior, even if we keep the public goods in the two periods constant. For instance, aggregate welfare and second-period output are higher. At an individual level, workers are better off, since both their welfare and their second-period net income are higher. On the contrary, capitalists are worse off when we allow for the government to set public spending as in the benchmark regime. Notice also that the government chooses to subsidize income and to generate the necessary revenues to finance its activity by taxing only consumption. This happens because consumption taxes are less distorting tax instruments than income taxes. Moreover, in terms of inequality, the second-period net income of capitalists relative to workers increases when the government can choose optimally both income and consumption taxes, even if we keep public spending constant. 16 Hence, the introduction of optimally chosen consumption taxes by a Ramsey government in an economy with income taxes only increases aggregate efficiency but also increases net income inequality, even in the case where we maintain the level of public spending constant.
16 We would like to thank a referee for highlighting this twofold mechanism, which allowed us to make clearer the intuition behind the distributional implications of the introduction of consumption taxes into an economy with income taxes only.
A non-utilitarian Ramsey government
So far we have assumed that the government is utilitarian in the sense that the weights that the government attaches to the welfare of capitalists and workers follow their population shares (see e.g. Angelopoulos et al., 2011). But what will happen if we assume that the government is not utilitarian anymore? This is what we do in this section. In particular, we study the aggregate and distributional implications of introducing consumption taxes when the government is not utilitarian. We present numerical results (see Tables 7.1 and 7.2 respectively) for four values of the welfare weight, namely 0.4, 0.5, 0.6 and 0.8. Furthermore, we provide a graph (see Figure 1 below) in which the relationship between net income inequality, with and without the presence of consumption taxes, and the welfare weight (over its whole range between 0 and 1) is depicted. As can be seen in Tables 7.1, 7.2 and Figure 1, our main results remain as the ones already analyzed in the previous sections of the paper. Thus, also when the government is not utilitarian, consumption taxes seem to create substantial welfare gains for the economy as a whole, whereas they also seem to increase inequality by hurting the working class. In other words, the efficiency and redistribution effects do not change qualitatively when the government chooses to attach a proportional, or a less/more than proportional, weight to the welfare of a specific group (i.e. a weight equal to, below or above 0.5). Moreover, as can be seen in Figure 1, the increase in net income inequality, once we move from an economy with income taxes only to an economy with both income and consumption taxes, gets larger the higher the weight that the government attaches to capitalists' welfare.
Concluding Remarks
In this paper, we study the aggregate and distributional implications of introducing consumption taxes into a model with income taxes only, extended to allow for heterogeneity across agents. This heterogeneity is based on the distribution of wealth. In particular, capitalists are allowed to save while workers are not. The government is allowed to choose optimally a mix of a single income tax and consumption taxes and the associated amount of the provided public good. Notice that we solve for optimal policy with commitment (the so-called Ramsey equilibrium), in which policy instruments are chosen once-and-for-all at the beginning of the time horizon.
The main theoretical findings can be summarized as follows: Assuming that a benevolent Ramsey government chooses optimally the tax policy mix, consumption taxes turn out to be efficiency enhancing, since they are a less distorting policy instrument. In particular, the government chooses to decrease the second-period income tax rate and generate the required revenues to finance its activities by setting positive consumption taxes. The increased efficiency benefits both groups of households, i.e. capitalists and workers. However, these welfare gains are accompanied by higher inequality, when the latter is measured by the ratio of net incomes. For instance, the net income of capitalists increases by more relative to the net income of workers. Thus, we confirm the widespread belief that the introduction of consumption taxes into a model with income taxes only creates substantial efficiency gains for the economy as a whole, but at the cost of higher net income inequality. Thus, there is a tradeoff between efficiency and redistribution, since the introduction of consumption taxes reduces the progressivity of the tax system. Therefore, from a normative point of view, this may also justify the design of a set of subsidy policies which will aim to outweigh the regressive effects of the otherwise more efficient consumption taxes. This study can be extended in several ways. For example, one can study the aggregate and distributional implications of introducing consumption taxes in the presence of tax evasion or progressive (non-linear) income taxation. Second, one can solve for time-consistent policies and compare them with the commitment/Ramsey equilibrium. We leave these extensions for future work.
A.2 A Three Period-Model
Notice that for the three-period model we assume the same parameter values as those presented in Table 2. We choose to work as follows. First, we solve for the Ramsey/commitment equilibrium when the government chooses optimally the second-period and third-period income tax rates. Thus, the government chooses the two income tax rates and the public goods in the three periods to maximize a weighted average of the utilities of the two agents, capitalists and workers, subject to the decentralized equilibrium equations, when we exogenously set all consumption tax rates to zero. This serves as our benchmark regime. Next, we assume that the government can choose optimally both income and consumption taxes and we solve for two different cases. In the first regime, we introduce a flat consumption tax that is common to all periods and the government chooses optimally the two income tax rates, the public goods in the three periods and the flat consumption tax rate. In the second regime, we assume that the government chooses optimally, among others, three different consumption taxes, one in each period. A numerical solution for these regimes is presented in Tables 8.1 and 8.2 below.
Functional heterogeneity of human tissue-resident memory T cells based on dye efflux capacities
Tissue-resident memory T cells (TRMs) accelerate pathogen clearance through rapid and enhanced functional responses in situ. TRMs are prevalent in diverse anatomic sites throughout the human lifespan, yet their phenotypic and functional diversity has not been fully described. Here, we identify subpopulations of human TRMs based on the ability to efflux fluorescent dyes [efflux(+) TRMs] located within mucosal and lymphoid sites with distinct transcriptional profiles, turnover, and functional capacities. Compared with efflux(–) TRMs, efflux(+) TRMs showed transcriptional and phenotypic features of quiescence including reduced turnover, decreased expression of exhaustion markers, and increased proliferative capacity and signaling in response to homeostatic cytokines. Moreover, upon activation, efflux(+) TRMs secreted lower levels of inflammatory cytokines such as IFN-γ and IL-2 and underwent reduced degranulation. Interestingly, analysis of TRM subsets following activation revealed that both efflux(+) and efflux(–) TRMs undergo extensive transcriptional changes following TCR ligation but retain core TRM transcriptional properties including retention markers, suggesting that TRMs carry out effector function in situ. Overall, our results suggest a model for tissue-resident immunity wherein heterogeneous subsets have differential capacities for longevity and effector function.
Introduction
Memory CD8 + T cells provide long-lived protection and exist as heterogeneous subsets differing in tissue homing and self-renewal properties (1). Tissue-resident memory T cells (TRMs) are a noncirculating subset maintained in peripheral tissues that mediate optimal in situ protection against invading pathogens (2)(3)(4). In mouse models, TRMs are distinguished from circulating memory subsets by expression of the early T cell activation marker CD69, along with the integrin CD103 for CD8 + TRMs (for a review, see ref. 4). Protective TRMs in mice can be generated by diverse viral and bacterial pathogens and following site-directed vaccination (2,3). However, TRMs can also direct pathogenic immune responses to allergens in the lung (5,6) and have been implicated in diseases such as psoriasis and vitiligo in skin (7). Given their potential to participate in protective and pathogenic immune responses in tissues, TRMs are an important target for immunomodulation. However, recent studies have demonstrated that TRMs are highly heterogeneous, encompassing multiple unique subsets (8)(9)(10)(11)(12). Understanding the role of these different subsets in immune responses is therefore necessary before therapeutic modulation of TRMs can be achieved.
We previously demonstrated through transcriptome profiling that human TRM-phenotype cells share a core signature with key homology with mouse TRMs (13), including expression of specific homing/ adhesion molecules (CD49a and CXCR6), negative regulators (PD-1 and CD101), and elevated production of IL-2 and IL-10 compared with circulating effector-memory T (TEM) cells (12). However, our study also revealed phenotypic heterogeneity within human TRMs, as has been found in other studies (9,10). TRM subsets identified by markers such as CD49a and CD103 have been shown to be transcriptionally and developmentally distinct and can occupy nonoverlapping subanatomic niches (9)(10)(11); this suggests that the TRM compartment actually comprises multiple distinct subsets with tissue-retention properties.
Like memory T cells, hematopoietic stem cells (HSCs) exhibit long-term persistence and are largely maintained in tissues -specifically in bone marrow (BM) niches. HSCs are endowed with self-renewal capacities and are resistant to chemotherapeutic agents, associated with an enhanced expression of ATP-binding cassette (ABC) family multidrug transporters, which efflux proteins and small molecules, and maintain cellular homeostasis (14)(15)(16). Recently, subsets of memory T cells with the ability to efflux fluorescent dyes have been identified in human tissues including BM (17) and intestines (18). Another recent study identified effluxing TRM populations in human tissue sites and showed in a mouse model of LCMV infection that effluxing TRMs are associated with quiescence (19). Nonetheless, the distribution of effluxing T cells across human tissues, within TRMs, and their functional role in tissue immunity are unclear.
Here, we identify a population of memory CD8 + T cells with the ability to efflux fluorescent dyes [efflux (+)] in multiple lymphoid and nonlymphoid sites. Efflux(+) cells predominate within the TRM compartment, are enriched for TRM core signature markers, and maintain a TRM profile when stimulated, yet represent a functionally and transcriptionally distinct subpopulation. Notably, efflux(+) TRMs exhibit reduced turnover, transcriptional signatures associated with longevity and quiescence, and an increased capacity to proliferate compared with efflux(-) TRMs, which have a higher effector capacity. Together, these results demonstrate that dye efflux identifies a population of TRMs present in multiple tissues with a unique role in the tissue immune response.
Results
CD8 + TRMs efflux fluorescent dyes. We investigated the efflux capacity of memory CD8 + T cells in different tissues based on their ability to be labeled with fluorescent mitochondrial dyes, as done previously with HSCs (20). As the predominant phenotype of memory (CD45RO + ) CD8 + T cells across human tissues is TEM (CD45RA -CCR7 -) (21,22), our studies are focused on this subset. T cells isolated from healthy human tissues obtained from organ donors (see Methods) were labeled with MitoTracker Green, a fluorescent dye that labels total mitochondrial mass (23), revealing cells with high and low levels of mitochondrial dye (Mito hi and Mito lo ) ( Figure 1A). Similar fractions of Mito hi and Mito lo cells were observed using either MitoTracker Green or CMXRos, a dye dependent on mitochondrial membrane potential ( Figure 1B), suggesting that changes in mitochondrial state are not responsible for the observed staining patterns. To determine whether the Mito lo subset was due to dye efflux, we stained cells in the presence of increasing concentrations of cyclosporine A (CSA), a competitive inhibitor of efflux pumps (24). A single Mito hi population was observed when cells were labeled in the presence of CS A (Figure 1C). Similar results were obtained when using verapamil ( Figure 1C), another competitive inhibitor of efflux pumps that has been shown to have no effect on mitochondrial mass (19). Moreover, the 2 inhibitors used and the mitochondrial dye staining did not have any effect on cell viability (Supplemental Figure 1A; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.123568DS1). Thus, the Mito lo subset constitutes a population with dye efflux capacity and will be referred to hereafter as "efflux(+)," with the corresponding Mito hi subset referred to as "efflux(-)." Compared with efflux(-) cells, efflux(+) cells expressed higher levels of MDR1 (ABCB1) (Supplemental Figure 1B), a cell surface transporter that mediates efflux of fluorescent dyes and xenobiotics in HSCs (25), suggesting that ABC transporters may contribute to dye efflux.
Compiling data from several donors, we found that the frequency of efflux(+) compared with efflux(-) populations differed between tissue sites, with the lowest frequency of efflux(+) memory T cells observed in the blood (≤40%) and the highest frequency observed in spleen and lung (>60%) ( Figure 1D). Building on prior work in which we found that cytomegalovirus-specific (CMV-specific) T cells were maintained long-term in several tissue sites (26), we investigated whether efflux(+) T cells were present in CMV-specific populations. Indeed, efflux(+) cells were detected within CMV-specific memory T cells in multiple tissue sites in frequencies similar to the total efflux(+) frequency for that tissue ( Figure 1E). Together, these results indicate that the proportion of efflux(+) cells is a feature of the tissue site and that efflux(+) cells can be generated following infection.
Tissue-resident phenotype of efflux(+) CD8 + T cells. Given the abundance of efflux(+) cells in tissues compared with blood, we investigated whether efflux(+) T cells in tissues exhibited features of human TRMs. We first assessed whether efflux(+) cells were differentially distributed within CD69 + and CD69 - fractions, as CD69 is a phenotypic marker that distinguishes human TRMs from circulating memory T cells (9,13,18). In spleen, BM, and lung, efflux(+) cells were more highly represented among the CD69 + compared with the CD69 - fraction of memory CD8 + T cells across multiple donors ( Figure 2, A and B). Importantly, CD69 + memory CD8 + T cells were highly enriched for efflux(+) cells, comprising an average of 70% of CD69 + T cells across multiple tissues and donors ( Figure 2B). We also assessed whether the efflux(+) CD69 + T cells were mucosa-associated invariant T (MAIT) cells, which have previously been shown to efflux dyes (27,28). We detected only a small fraction (<10%) of T cells within tissues expressing the canonical phenotype of MAIT cells (CD161 + /Vα7.2 + ) among CD69 + T cells (Supplemental Figure 2), indicating that the majority of efflux(+) T cells across healthy human tissues are polyclonal memory CD8 + T cells.
We previously determined that human TRMs are enriched within the CD69 + fraction of tissue memory T cells and exhibit a core phenotypic and functional profile (12). We therefore investigated the expression of core TRM-associated markers by efflux(+) and efflux(-) subsets of CD69 + memory CD8 + T cells (TRMs). Strikingly, expression of CD103, a canonical CD8 + TRM marker (29), was enriched within the efflux(+) compared with efflux(-) fraction in both spleen and lung, with some interdonor variability for the lung ( Figure 2C). Similarly, expression of the integrin CD49a and CD101, both part of the human TRM core signature (12), was increased within the efflux(+) compared with efflux(-) TRM subset in multiple donors ( Figure 2D). Overall, these results suggest that efflux(+) cells predominate within the TRM compartment and express canonical TRM markers at high levels compared with their efflux(-) counterparts.
Efflux(+) TRMs but not circulating TEMs have a unique surface receptor phenotype. We next investigated the phenotype of efflux(+) and efflux(-) cells. Given the relative enrichment of efflux(+) cells within TRMs, we sought to determine if efflux status is associated with unique properties specifically for TRMs or more generally across memory subsets. We first measured CD127 (IL-7 receptor) expression, as IL-7 is critical for memory T cell homeostasis, and high CD127 expression is associated with long-lived memory CD8 + cells, while low CD127 expression can indicate chronic activation and exhaustion (30,31). Efflux(+) TRMs had elevated CD127 expression compared with efflux(-) TRMs ( Figure 3A), while no differences were observed between efflux(+) and efflux(-) TEMs. We then assessed the expression of the coreceptors CD27 and CD28, which are downregulated following TCR stimulation. Within the TRM compartment, efflux(+) TRMs had a significantly reduced frequency of CD27 + and CD28 + cells compared with efflux(-) TRMs ( Figure 3, B and C). For circulatory TEMs, CD27 and CD28 expression was similar for both efflux(+) and efflux(-) subsets ( Figure 3, B and C).
Expression of PD-1, which inhibits T cell responses and is associated with exhaustion and chronic activation (32), was reduced in efflux(+) compared with efflux(-) TRMs (CD69 + ) ( Figure 3D). Within TEMs, some differences were seen within spleen and BM; however, the differences were of reduced magnitude as compared with the TRM fraction. Additionally, efflux(+) TRMs had lower expression of CD57, a marker of replicative senescence and cytotoxicity (33), compared with efflux(-) TRMs, while no significant differences were seen in circulatory TEMs ( Figure 3E). Further supporting a core tissue-resident phenotype, efflux(+) cells also expressed higher levels of CD39 ( Figure 3F), a marker of liver-resident TRMs (34) that has also been proposed to mark functionally distinct CD8 + T cells with immunomodulatory function (35). Overall, these results suggest that efflux status is associated with unique phenotypic properties within the TRM compartment but not for circulatory memory T cells. Efflux(+) TRMs appear to have a distinct activation history and potentially enhanced capacity for cytokine responses and longevity.
TRM subsets differing in efflux capacities exhibit a distinct transcriptional profile. Given that efflux status was associated with a unique phenotype primarily within TRMs, we performed whole transcriptome profiling of efflux(+) and efflux(-) CD69 + memory CD8 + T cells from 3 organ donors by RNA-sequencing (RNA-Seq) to further characterize these subsets (Supplemental Table 1). Principal component analysis (PCA) revealed that the efflux(+) subset is transcriptionally distinct from the efflux(-) subset across all donors, based on the second principal component accounting for 21% of the variation in gene expression between these subsets ( Figure 4A). We used DAVID online functional annotation analysis on the top 500 differentially expressed genes to interpret their biological significance (36,37). This analysis reveals enriched pathways that contain genes differentially expressed within our data set. These results indicate fold enrichment and significance but do not give information about the directionality of the pathways. The pathway with the greatest enrichment within our gene set was phospholipid translocation, and significant enrichment for pathways controlling T cell costimulation, signaling, cytokine responses, adhesion, and migration was also observed ( Figure 4B).
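As an illustrative sketch only (not the authors' actual pipeline), a sample-level PCA of the kind shown in Figure 4A can be computed from a normalized count matrix in a few lines of Python; the file name, the log-CPM normalization, and the sample labels here are assumptions.

```python
# Minimal PCA sketch for RNA-Seq samples; "counts_matrix.csv" is hypothetical
# (rows = genes, columns = samples such as "donor1_effluxPos").
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

counts = pd.read_csv("counts_matrix.csv", index_col=0)

# Simple log-CPM transform as a stand-in for variance stabilization.
cpm = counts / counts.sum(axis=0) * 1e6
log_cpm = np.log2(cpm + 1)

# PCA operates on samples, so transpose to samples x genes.
pca = PCA(n_components=2)
coords = pca.fit_transform(log_cpm.T)

for sample, (pc1, pc2) in zip(counts.columns, coords):
    print(f"{sample}: PC1={pc1:.1f}, PC2={pc2:.1f}")
print("Variance explained:", pca.explained_variance_ratio_)
```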
We identified 133 genes differentially expressed between efflux(+) and efflux(-) TRMs by DESeq analysis applying criteria for significance (false discovery rate [FDR] ≤ 0.05 and absolute value of log2 fold change > 1; see Methods). Notably, the number of genes distinguishing efflux(+) and efflux(-) TRMs was much lower than the 300-400 differentially expressed genes we previously reported when comparing TRM-enriched (CD69 + ) to circulating CD69 - memory CD8 + T cells in human blood and tissues (12). Expression levels of these 133 genes were consistent across all 3 donors (Figure 4C) and comprised several major pathways (Figure 4D). Genes associated with nutrient, ion, and xenobiotic transporters, including ABCB1, which encodes the protein MDR1, were expressed at higher levels in efflux(+) TRMs, as were those involved in cell adhesion such as ITGAE (encoding CD103), ITGA1 (encoding CD49a), NCAM1, MCAM1, CDH4, and ANK1 (Figure 4D); efflux(+) TRMs additionally upregulated CCR9, CCR1, and CCR6, and downregulated CCR4 and CCR8, receptors involved in chemotaxis (Figure 4D). Expression of key transcription factors regulating differentiation and function differed between effluxing subsets (Figure 4D), with efflux(+) TRMs exhibiting increased expression of TLE1, a transcriptional regulator associated with Notch/RBPJ signaling (38), which regulates TRM differentiation (39). Moreover, efflux(+) cells had elevated RORC and RORA expression, two transcription factors that drive Tc17-type responses in CD8 + T cells (40), and expressed high levels of IL17A and genes encoding IL-23 and IL-17 receptors (Figure 4D), all associated with type 17 responses. Notably, levels of the nuclear receptor NR4A1, which is upregulated in blood efflux(+) cells (19), did not differ significantly between efflux(+) and efflux(-) TRM subsets. Analysis of genes associated with cell cycle and apoptosis control showed elevated levels of CD101, SPRY1, and SPRY2, genes with documented roles in suppressing T cell proliferation and TCR-mediated calcium signaling (41,42), coupled with reduced levels of cyclin B2 (CCNB2). Overall, these data indicate that efflux(+) TRMs have a unique transcriptional program composed of genes controlling adhesion and migration, as well as T cell activation, function, and proliferation.
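A minimal sketch of the significance filter described above (FDR ≤ 0.05 and absolute log2 fold change > 1), assuming a DESeq-style results table; the file and column names are placeholders, not the study's actual outputs.

```python
# Filter a DESeq-style results table using the stated significance criteria.
import pandas as pd

res = pd.read_csv("deseq_results.csv")  # assumed columns: gene, log2FoldChange, padj

sig = res[(res["padj"] <= 0.05) & (res["log2FoldChange"].abs() > 1)]
up = sig[sig["log2FoldChange"] > 0]    # higher in efflux(+) TRMs
down = sig[sig["log2FoldChange"] < 0]  # higher in efflux(-) TRMs

print(f"{len(sig)} significant genes: {len(up)} up, {len(down)} down")
```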
Distinct functional profile and proliferative capacity of efflux(+) TRMs. Differential expression of key genes involved in cell cycle, quiescence, and T cell function suggested distinct proliferative and functional capacities of efflux(+) and efflux(-) TRMs. Before examining the function of these TRM subsets, we chose to investigate the stability of these subsets after stimulation. After 48 hours of stimulation, the majority of efflux(+) TRMs retained their ability to efflux dyes, and similarly the majority of efflux(-) TRMs remained efflux(-) ( Figure 5A). However, a small fraction of each subset did convert to the opposite subset ( Figure 5A), suggesting some plasticity following stimulation. We then measured the ability of these TRM subsets to produce cytokines following 48 hours of stimulation. Efflux(-) TRMs produced higher levels of multiple cytokines compared with efflux(+) TRMs, including proinflammatory mediators TNF-α (1.7-fold difference) and IFN-γ (1.4-fold difference), IL-2 (2.2-fold difference), and the Th2-associated cytokine IL-4 (8.7-fold difference) ( Figure 5B). By contrast, efflux(+) TRMs produced increased levels of IL-17 compared with efflux(-) TRMs ( Figure 5B), consistent with transcriptome profiling results. Interestingly, levels of IL-10 production were comparable between the 2 subsets. Along with increased effector cytokine production, efflux(-) TRMs exhibited markedly higher degranulation measured by CD107a (LAMP1) expression compared with efflux(+) TRMs ( Figure 5C).
We recently showed that CD8 + TRMs can vary in proliferative capacity (43). When fractionated based on efflux capacity, efflux(+) memory cells exhibited increased proliferation compared with efflux(-) cells following stimulation (Figure 6A). TCR stimulation induces metabolic reprogramming and expression of key transcription factors that regulate proliferation and effector cell differentiation, such as IRF4 (44-47). While proliferating efflux(+) and efflux(-) TRMs expressed increased levels of IRF4 compared with nondividing cells, IRF4 levels in proliferating efflux(-) cells were higher than those in efflux(+) cells across multiple divisions and in multiple donors (Figure 6B). As IRF4 is associated with effector differentiation, these results are consistent with the findings above showing that efflux(-) TRMs produce higher levels of TNF and IFN-γ compared with efflux(+) TRMs (Figure 5B).
The reduced effector function and increased proliferative capacity of efflux(+) compared with efflux(-) TRMs suggested that these subsets may likewise differ in their homeostatic maintenance. Consistent with this hypothesis, efflux(+) TRMs expressed lower levels of Ki67 (Figure 6C) compared with efflux(-) TRMs, suggesting that efflux(+) TRMs persist in a more quiescent state. Efflux(+) TRMs also exhibited increased IL-7 signaling following IL-7 stimulation ex vivo, as shown by enhanced STAT5 phosphorylation (Figure 6D), a key effector of IL-7 signals (31), consistent with our finding of increased CD127 expression by efflux(+) compared with efflux(-) TRMs (Figure 3). These results indicate that efflux(+) TRMs have an increased capacity to respond to homeostatic cytokines important for memory cell longevity. Overall, these results demonstrate that while efflux(+) TRMs have increased capacity to proliferate and produce IL-17, efflux(-) TRMs exhibit high effector and cytotoxic potential and reduced proliferation.
The transcriptional response of TRM subsets to TCR stimulation. To investigate the bases for the different functional response of efflux(+) and efflux(-) TRM subsets to TCR/CD3 stimulation, we examined their activation-induced transcriptional profile. We performed RNA-Seq on efflux(+) and efflux(-) TRM subsets as in Figure 4 following 12-hour TCR stimulation. Applying the criteria for significance as above, we found 865 and 487 genes to be differentially expressed between stimulated and unstimulated efflux(+) and efflux(-) TRMs, respectively ( Figure 7A). Interestingly, more genes were downregulated than upregulated following stimulation for both efflux(+) and efflux(-) cells ( Figure 7A). Further, the number of genes differentially expressed by either subset was substantially higher than the number of genes differentially expressed between memory and naive T cell subsets (48) or between TRMs and TEMs in humans (12). These data suggest that TRMs are poised for a robust and rapid transcriptional response to stimulation.
The magnitude of the transcriptional response to stimulation raised the question of whether efflux(+) or efflux(-) cells lose TRM-like properties after stimulation, particularly those associated with tissue retention. To address this, we compared the genes that were differentially expressed following stimulation (from our current data set) to the genes that are differentially expressed between TRMs and TEMs (using our previously published data set in ref. 12). Interestingly, very few genes overlapped between these 2 data sets (Figure 7B), implying that transcriptional changes following stimulation in either efflux(+) or efflux(-) cells are unrelated to the genes that define the TRM subset. TRMs are characterized by low expression of the homing receptors S1PR1 and CCR7 and reduced expression of the associated transcription factor KLF2, which together help TRMs avoid egress cues (4,49). Following stimulation, both efflux(+) and efflux(-) TRMs further downregulated these genes (Figure 7C), implying that the migratory program of TRMs is reinforced to allow these cells to carry out functions in situ. Taken together, these results suggest that while TRMs undergo large transcriptional changes upon activation, these cells remain TRMs and do not lose defining features of the subset.
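The gene-set comparison underlying Figure 7B reduces to plain set operations; a hedged sketch with hypothetical input files (one gene symbol per line) follows.

```python
# Overlap between stimulation-responsive genes and the TRM core signature.
# Both input files are hypothetical one-gene-per-line lists.
stim_genes = {line.strip() for line in open("stim_vs_unstim_DE_genes.txt")}
trm_core = {line.strip() for line in open("trm_core_genes.txt")}

overlap = stim_genes & trm_core
print(f"stimulation DE genes: {len(stim_genes)}")
print(f"TRM core genes:       {len(trm_core)}")
print(f"overlap:              {len(overlap)}")
```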
Comparison of the transcriptional response to stimulation between efflux(+) and efflux(-) TRMs revealed a similar set of genes that was differentially regulated ( Figure 7D). Following stimulation, there was significant downregulation of genes involved in pathways for cytokine signaling and TCR-coupled signaling, with the sirtuin pathway, which controls metabolism and cell cycle progression/apoptosis (50), emerging as the top result for both subsets ( Figure 7E). Other pathways included cell cycle regulation and metabolism ( Figure 7E). Genes related to T cell function and cytokine signaling were also differentially expressed, including proinflammatory cytokine genes (IL17F, LTA [lymphotoxin α], and IL13), the cytotoxic gene GZMB (encoding granzyme B), and a number of chemokines ( Figure 7F).
Discussion
It has become increasingly clear that long-term immunity is regulated locally at the tissue level through the establishment and persistence of TRMs. Using healthy primary human tissues, we provide evidence for functional heterogeneity within the CD8 + TRM compartment based on differential capacity to efflux fluorescent dyes. Critically, efflux(+) and efflux(-) TRM subsets were transcriptionally and functionally distinct, both at steady state and following TCR stimulation. Efflux(+) cells differentially express transcription factors related to type 1 and type 17 inflammatory responses, indicating that key regulators of lymphocyte cell fate decisions may also program distinct subsets of TRMs. These TRM subsets also exhibited a differential capacity for IFN-γ/TNF-α and IL-17 production as well as a differential propensity toward degranulation. This stratification of function suggests that different TRM subsets may preferentially mediate specific functions.

Figure 4 (legend, panels B-D): (B) Functional annotation analysis by DAVID software. Select gene ontology (GO) terms with significant adjusted P values (Adj. p) are displayed, along with fold enrichment. (C) Heatmap of normalized expression levels of all genes with significant differential expression between the 2 groups, defined as FDR ≤ 0.05 and absolute value of log2 fold change ≥ 1. (D) Select significantly differentially expressed genes that are upregulated ["up in efflux(+)"] or downregulated ["down in efflux(+)"], grouped by category. Shown are the log2 fold changes (log2FC) of select genes between efflux(+) and efflux(-) cells for each donor, designated by a unique shape. Genes marked with a "*" did not meet the FDR criterion (0.05) but had log2FC ≥ 1 and significant P values and were included for potential biological relevance.
Previous studies have found that effluxing memory T cells were enriched among certain TRM populations in specific sites (19). Here, we show through an extended functional and transcriptional analysis of TRMs in multiple human tissues that TRMs comprise both effluxing and non-effluxing populations, with each playing distinct roles. Efflux(+) cells retain a higher proliferative potential following TCR stimulation and may constitute a resting pool of cells that repopulates the more effector-like efflux(-) subset to promote type 1 inflammatory responses. Differences in the expression of genes controlling cell cycle progression and proliferation, as well as different levels of inhibitory receptor expression, may together explain the observed differences in proliferative capacities. Additionally, efflux(+) TRMs exhibited increased responses to IL-7, a cytokine associated with memory maintenance. These results demonstrate that a portion of TRMs are not fully terminally differentiated and can undergo substantial proliferation during re-activation. This proliferation of human TRMs identified here is consistent with 2 recent reports showing that mouse TRMs could proliferate in situ to antigenic peptide and pathogen challenge (53,54).
Our analysis of transcriptional responses to TCR stimulation has implications for TRM biology as a whole. We found that TRMs preserve their core profile of downmodulated tissue egress molecules (e.g., S1PR1, CCR7, and KLF2) after TCR stimulation, while undergoing a rapid and extensive transcriptional response. Notably, TRM stimulation preserved a tissue-retentive profile and was accompanied by extensive downmodulation of genes involved in cytokine and TCR signaling, with a focused upregulation of specific cytokines and chemokines. We propose that this narrowing of transcriptional response in activated TRMs enables them to promote effective and specific responses in situ, without triggering overt tissue damage and inflammation.

Figure 5 (legend, panels B and C): (B) Cytokine production following TCR stimulation. Efflux(+) and efflux(-) TRMs were sorted and stimulated with anti-CD3/CD28/CD2 beads for 72 hours, and cytokines in the supernatant were quantified using a cytometric bead array (see Methods). Graphs show levels of the indicated cytokines in supernatants compiled from 8 donors. (C) Degranulation of efflux(+) and efflux(-) TRM subsets. Sorted cells were pulse labeled with CD107a antibody followed by PMA/ionomycin stimulation. Representative plots and quantification of CD107a + cells from splenic efflux(+) and efflux(-) TRM subsets from 4 donors. For all panels, *P ≤ 0.05, **P ≤ 0.01, ****P ≤ 0.0001 by paired t test. ns, not significant.
The stratification of function between TRM subsets suggests a cooperative model for TRM maintenance and functional responses. Specifically, efflux(-) TRMs may contribute more to IFN-γ/TNF-α responses and cytotoxic functions, while efflux(+) TRMs may mediate type 17 inflammation to a greater extent while serving as a proliferative reservoir to replenish the TRM compartment. However, transcriptional data also indicate substantial overlapping functions, suggesting a model in which certain functions are universal properties of TRMs, while others are primarily mediated by a specific TRM subset. The division of labor between TRM subsets may also be driven by anatomic differences, as suggested by distinct adhesion molecule and migratory receptor expression.

Figure 6 (legend, continued): Left: STAT5 phosphorylation following IL-7 stimulation ex vivo. Right: Quantification of the pSTAT5 + cell percentage within efflux(+) and efflux(-) TRMs. For all panels, *P ≤ 0.05, **P ≤ 0.01, ****P ≤ 0.0001 by paired t test. n.s., not significant.
While we identified differences in function between these TRM subsets, we have no evidence that efflux capacity, per se, has direct effects on their function. Rather, it may reflect the physiological state of the cells and their ability to survive in diverse environments. Efflux pumps expel toxic xenobiotics and have been implicated in the persistence of human lymphocytes during chemotherapy (28). TRMs persist in peripheral tissue sites for years or even decades, where they are exposed to a range of foreign agents, particularly in sites such as skin and lung. Given our data that efflux(+) TRMs show evidence of increased quiescence and longevity, heightened expression of efflux pumps may contribute to TRM survival and homeostasis in peripheral tissues. Efflux capacity may also mediate differential susceptibility to chemotherapy and drug treatments (55).
Given that efflux(+) TRMs exhibit increased IL-17 production as well as Th17-associated signaling, specifically targeting efflux(+) cells in IL-17-mediated inflammatory diseases such as psoriasis might be an optimal therapy that spares protective TRMs while eliminating pathogenic TRMs. Previously, there has been interest in the possibility of targeted immune therapies that specifically modulate either TRMs or circulatory T cells while leaving other aspects of the immune system undisturbed (7). Our data suggest that this specificity can be extended to target specific TRM subsets. Overall, the identification of these distinct subsets could be leveraged toward next-generation therapies for infection, cancer, and autoimmunity.
Methods
Acquisition of tissue from human organ donors. Human tissues were obtained from deceased organ donors at the time of organ acquisition for clinical transplantation through an approved research protocol and MTA with LiveOnNY, the organ procurement organization for the New York metropolitan area. All donors were free of chronic disease and cancer, and were negative for hepatitis B, hepatitis C, and HIV. A list of the donors from which tissues were obtained and used in this study is presented in Supplemental Table 1.
Cell isolation from human lymphoid and nonlymphoid tissues. Tissue samples were maintained in cold saline and brought to the laboratory within 2-4 hours of organ procurement. Spleen, lung, and BM samples were processed using enzymatic and mechanical digestion, resulting in high yields of live leukocytes, as described previously (21,22). Mononuclear cells were isolated from peripheral blood by centrifugation through lymphocyte separation medium (Corning). Non-enzymatic isolation was also used for spleen tissue using the Bullet Blender Tissue Homogenizer (Next Advance). Tissue samples were chopped into small pieces (≤5 mm) using scissors, and 4-5 g of tissue was placed in a 50-ml conical tube with complete RPMI added to a total volume of 10 ml, followed by the addition of seven or eight 4.8-mm stainless steel beads (product SSB48). Tissues were homogenized in the bullet blender for 2 minutes at speed setting 3-4. Following homogenization, the mixture was filtered through a 70-μm cell strainer (Corning). ACK buffer was used for RBC lysis, followed by an additional filtration through a 70-μm cell strainer.
Efflux dye labeling. T cells were labeled with MitoTracker Green FM (50 nM) or CMXRos (25 nM) (Thermo Fisher Scientific) for 15 minutes in complete media (10% FBS in RPMI) according to the manufacturer's instructions. Efflux was blocked by performing the fluorescent labeling in the presence of 25-50 μM CSA (24) or 25-50 μM verapamil.
Functional assays. For quantification of cytokine production by different subsets, sorted cells were plated in 96-well round-bottom plates at 10 5 cells/well in complete RPMI medium and stimulated using anti-CD3/CD28/CD2 beads.

Figure 7. The transcriptional response of efflux(+) and efflux(-) TRMs to stimulation. RNA-Seq analysis was performed on splenic efflux(+) and efflux(-) TRM subsets isolated as in Figure 3 following stimulation with anti-CD3/CD28 for 12 hours. (A) Differential expression was assessed using DESeq2, and plots display the number of significantly upregulated and downregulated genes following stimulation in both efflux(+) and efflux(-) subsets. (B) Both efflux(+) and efflux(-) TRMs preserve TRM-like characteristics after stimulation. Venn diagrams show overlap between genes that are differentially expressed when comparing stimulated versus unstimulated TRMs (current data set) and genes that are differentially expressed when comparing human spleen CD8 + TRMs and TEMs ("TRM core genes"; from ref. 12). (C) TRMs downregulate egress receptors after stimulation. Plot shows log2 fold changes (log2FC) of CCR7, S1PR1, and KLF2 when comparing stimulated versus unstimulated efflux(+) and efflux(-) TRM samples. (D) Scatterplot displays all genes found to have significant differential expression in either efflux(+) or efflux(-) stimulated versus unstimulated samples as in A. The value on the x axis represents the log2FC of the gene between stimulated and unstimulated efflux(+) samples, and the y axis represents the log2FC of the same gene between stimulated and unstimulated efflux(-) samples. Samples are color coded by whether the differential expression was significant in only efflux(+) TRMs, only efflux(-) TRMs, or both groups. (E) Ingenuity Pathway Analysis (IPA). Select pathways with significant P values (≤0.01) when comparing stimulated versus unstimulated samples in both efflux(+) and efflux(-) TRMs are displayed. The direction of enrichment in stimulated samples is proportional to the color intensity of each bar. (F) Plot shows log2FC of select genes related to T cell function when comparing stimulated versus unstimulated TRM samples for both efflux(+) and efflux(-) subsets. (G) Expression of genes (log2FC) from C that have opposite-direction changes for efflux(+) and efflux(-) samples after stimulation across all 3 donors. Plot shows log2FC of the selected genes, comparing expression by stimulated versus unstimulated samples for efflux(+) (orange triangles) and efflux(-) (blue circles) TRMs.
Factors affecting distribution patterns of organic carbon in sediments at regional and national scales in China
Wetlands are an important carbon reservoir in terrestrial ecosystems. Light fraction organic carbon (LFOC), heavy fraction organic carbon (HFOC), and dissolved organic carbon (DOC) were fractionated in sediment samples from four wetlands (ZR: Zhaoniu River; ZRCW: Zhaoniu River Constructed Wetland; XR: Xinxue River; XRCW: Xinxue River Constructed Wetland). Organic carbon (OC) data from rivers and coasts of China were retrieved and statistically analyzed. At the regional scale, HFOC stably dominates the deposition of OC (95.4%), whereas DOC and LFOC in ZR are significantly higher than in ZRCW. The concentration of DOC is significantly higher in XRCW (30.37 mg/l) than in XR (13.59 mg/l). DOC and HFOC differ notably between the two sampling campaigns, and the deposition of carbon fractions is limited by low nitrogen input. At the national scale, OC attains a maximum of 2.29% at a precipitation of 800 mm. OC shows no significant difference among the three climate zones but is significantly higher in river sediments than in coastal sediments. Coastal OC increases from the Bohai Sea (0.52%) to the South Sea (0.70%) with decreasing latitude. This study summarizes the factors affecting organic carbon storage at regional and national scales, and has constructive implications for carbon assessment, modelling, and management.
Increases in the atmospheric concentrations of carbon dioxide (CO 2 ) and methane (CH 4 ) since the mid-20th century 1 have caused global warming 2 . Wetland ecosystems can deposit a large amount of photosynthesized carbon (C) into sediments 3 , and in-depth research on organic carbon (OC) sequestration and distribution must be undertaken to understand the processes and factors affecting them. OC can be divided into three C fractions on the basis of the stability and solubility of C in soils or sediments 4 . These fractions are: heavy fraction organic carbon (HFOC), light fraction organic carbon (LFOC), and dissolved organic carbon (DOC). Among the three fractions, HFOC (density ≥ 1.7 g cm −3 ) is relatively stable to climate change and other external environmental conditions 5 , whereas LFOC (density < 1.7 g cm −3 ) is sensitive to changes in the environment and in microbial activities 6 . Furthermore, DOC has been studied widely with regard to biochemical activities, such as nitrification and denitrification 7 , and C mineralization 8 . Thus, studying carbon fractions can indicate carbon storage and the factors affecting it, as these fractions participate in many biochemical activities and are readily affected by environmental variables.
Wetlands, with their abundant plants and microbes, have a higher capacity for C deposition than cultivated soils or other land types 9 . Rivers, as natural wetlands, can deposit plant debris and degrade pollutants. To improve the efficiency of and accelerate these processes, wetlands have been constructed near river wetlands 10 . It is therefore important to study whether constructed wetlands have a greater capacity for C deposition than river wetlands.
At the regional scale, Cao et al. (2015) indicated that constructed wetlands have higher carbon storage than river wetlands 4 . Previous reports showed that surface soil has higher OC concentrations than subsurface soil 11,12 . Guo et al. (2015) showed that the microbial phylum Acidobacteria can inhibit the decomposition and mineralization of organic carbon 13 , and Xu (2015) showed that carbon mineralization in wetlands is greater in summer than in winter 14 . At the large scale, Mitsch et al. (2014) reported that tropical wetlands have significantly higher OC than boreal wetlands 15 , and variations in precipitation, climate, and landscape can also influence C distribution and storage 16,17 . Therefore, regional and large-scale factors such as wetland type, soil depth, season, climate, and precipitation may affect OC deposition to varying degrees, and it is pertinent to identify the important factors in the assessment of C storage at regional and national scales. However, a systematic analysis of the factors affecting wetland C storage has not been undertaken in China.
Therefore, a research project was implemented at the regional and national scales to study the factors affecting the deposition of OC in wetlands. The principal objectives of this study were to: 1) assess the differences in the distribution of the three C fractions between the two wetland types, between the two sampling campaigns, and among the sampling stations; 2) evaluate the distribution of OC at the national scale; and 3) determine the relevant factors (precipitation, nitrogen content, microbes, etc.) affecting the distribution and storage of OC in wetland ecosystems. These objectives were pursued by testing the hypothesis that sampling season and wetland type can significantly affect the storage of C fractions at the regional scale, and that OC is also affected by precipitation and climatic zone at the large scale.
Results
Distribution patterns of carbon fractions at regional scale. Neither TOC nor HFOC exhibited any significant differences between the two study zones (Fig. 1), between the two wetland types (constructed wetlands and river wetlands), or among the four wetlands in Shandong Province of China (Fig. 1; Table 1). However, HFOC differed significantly among sampling stations in ZR and ZRCW and attained the highest value (3.072%) in the downstream of ZR. Further, the distribution of HFOC in XR and XRCW did not exhibit any significant differences (Fig. 1). River wetlands had significantly higher LFOC than the constructed wetlands (P = 0.012; Table 1 and Fig. 2), which may be mainly attributed to the significantly low LFOC in ZRCW (0.020% ± 0.01) and high LFOC in ZR (0.189% ± 0.17). Cluster analysis of LFOC showed that XR and XRCW clustered together, whereas ZR and ZRCW had significantly different LFOC, attributable to the high LFOC in Mid-ZR and Down-ZR (Fig. 2).

Table 1. Carbon contents and differences among the four wetlands (mean ± SD). NA: no significant difference, *: P < 0.05, **: P < 0.01. Data with a and b show significant difference at α = 0.05 (Duncan test).

The same clusters for DOC between XR and ZRCW and between ZR and XRCW showed that the DOC distribution differed among the wetlands but not between the wetland types (Fig. 3). Further, ZR and XRCW contributed most to DOC deposition. Both Mid-ZR and Down-ZR had the same trend in LFOC and DOC, which were significantly higher than in Up-ZR and ZRCW (P = 0.000; Figs 2 and 3).
Distribution differences of carbon fractions between two sampling campaigns. In general, carbon fractions in summer (June 2014) had higher concentrations than in autumn (October 2015). The difference in HFOC was the smallest among the carbon fractions, whereas that in DOC was the largest (Fig. 4). LFOC in summer was higher, or significantly higher, than in autumn except at Up-XRCW. One-way ANOVA showed that XR and XRCW had significantly higher HFOC (P = 0.039) and DOC (P = 0.000) in summer than in autumn when all sampling stations were analyzed together, although the differences between certain stations were not significant (for example, the HFOC of Up- and Down-XRCW was similar in summer and autumn). Total organic carbon (TOC; 2.56% in summer and 2.06% in autumn) was significantly higher in summer than in autumn (P = 0.039).
Physical and microbial factors affecting carbon deposition. Moisture content and bulk density were significantly correlated with HFOC, heavy fraction organic nitrogen (HFON; the concentration of organic nitrogen in the separated heavy fraction, as described in the "Materials and Methods" section), and DOC, whereas they had no marked effect on LFOC and light fraction organic nitrogen (LFON; Pearson correlation analysis in the Supplementary file). A prominent linear relationship existed between LFOC and LFON (R 2 = 0.907, P = 0.000), with a mean LFOC to LFON ratio of 24.77, which was lower than the TOC to TON ratio (49.81) (Fig. 5). Principal component analysis showed that moisture was closely associated with LFOC, LFON, and DOC (Supplementary file). C and N fractions were strongly associated with each other, except for HFOC and HFON (Supplementary file). In general, carbon input also had a complex relationship with microbes. In the present study, Acidobacteria-6 was positively associated with LFOC (P < 0.01) in ZR and ZRCW, and Bacteroidetes was negatively correlated with HFON (P < 0.05; Supplementary file). Thiobacillus, Burkholderiales, and Rhodocyclales were all positively associated with the carbon fractions and LFON in XR and XRCW. Specific Pearson correlations between microbial communities and the carbon and nitrogen fractions are shown in the Supplementary file.
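The LFOC-LFON fit reported above is an ordinary least-squares regression; a sketch with illustrative (not measured) values is shown below.

```python
# Linear regression of LFOC on LFON, as in Fig. 5; the data points here are
# illustrative placeholders, not the study measurements.
from scipy import stats

lfon = [0.002, 0.004, 0.006, 0.008, 0.010]  # hypothetical LFON (%)
lfoc = [0.05, 0.10, 0.15, 0.19, 0.25]       # hypothetical LFOC (%)

fit = stats.linregress(lfon, lfoc)
print(f"slope = {fit.slope:.1f}, R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}")
```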
Distribution patterns of carbon deposition at national scale. River sediments were studied for four precipitation regions (600 mm, 800 mm, 1500 mm, and 1600 mm) and three climatic zones (cold temperate zone, north subtropical zone, and edge subtropical zone). The OC concentration differed significantly among the precipitation regions (P = 0.000; Fig. 6 and Table 2), attaining its maximum value (2.29%) at an annual precipitation of 800 mm, which was significantly higher than the OC concentration (0.60%) at a precipitation of 600 mm. However, river OC, with a mean value of 1.58% across China, did not differ among the three climate zones (P = 0.272). A Google Scholar search, covering most studies on OC across China, showed that most studies of sediment OC are confined to eastern China. OC in river sediments (1.58%) was significantly higher than that in marine sediments (0.59%; P = 0.000) and followed an increasing trend with decreasing latitude (Table 2).
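The group comparison described above is a one-way ANOVA; a minimal sketch, with placeholder OC values for the four precipitation regions, is given below.

```python
# One-way ANOVA of OC (%) across precipitation regions; values are placeholders.
from scipy.stats import f_oneway

oc_600 = [0.5, 0.6, 0.7, 0.55]
oc_800 = [2.1, 2.3, 2.4, 2.3]
oc_1500 = [1.2, 1.4, 1.1, 1.3]
oc_1600 = [1.0, 1.1, 0.9, 1.2]

f_stat, p_value = f_oneway(oc_600, oc_800, oc_1500, oc_1600)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```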
Discussion
The carbon to nitrogen (C: N) ratio is an important factor that can be used to distinguish carbon sourced from aquatic plants from that sourced from terrestrial plants 52 . Meyers (1994) reported that carbon deposited in soils or sediments is primarily derived from terrestrial plants if the C: N ratio is >20 53 . Therefore, the high C: N ratios observed in the present study (TOC/TON = 49.81; LFOC/LFON = 24.77) suggest that the OC input is primarily from terrestrial plants. Goldman et al. (1987) reported that 53:6 (or 8.83:1) is the optimum C: N ratio for microbial growth, and any deviation from this ratio would strongly limit microbial growth 54 . Tan et al. 6 and Xiang et al. 55 also reported microbial biomass carbon to nitrogen ratios of about 9.5:1 and 23:3 (= 7.7:1), respectively. Thus, the high LFOC: LFON ratio observed in the present study may indicate that the C is primarily derived from terrestrial plants rather than from microbial residues. In addition, the significantly higher TOC: TON ratios in ZRCW (70.6:1) and Up-XR (141.8:1) than at the other stations indicate that low N concentrations limit microbial growth and activities; furthermore, this could limit C accumulation while increasing C mineralization. These results are similar to those of Moore et al. 56 , who also observed that an unbalanced C: N ratio can lead to N fixation and C mineralization until a dynamic balance is achieved. Therefore, under conditions of N insufficiency, available N sequestration is an important factor affecting C deposition. Carbon transfer from aboveground plants into sediments immediately impacts microbial activities 57 . Elshahed et al. (2007) reported that Acidobacteria-6 impacts both carbon deposition and ammonia oxidation in sediments or soils 58 , and such a relationship was also observed between LFOC, HFON, DOC, and Acidobacteria-6 in ZR and ZRCW. However, the negative correlation between HFON and Bacteroidales may suggest that the activities of Bacteroidales limit the deposition of HFON in the present study 13 .
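The C: N threshold cited above (Meyers 1994) can be written as a one-line decision rule; the helper below is purely illustrative, and the example values are placeholders.

```python
# Classify the likely OC source from a C:N ratio (threshold ~20 per Meyers 1994).
def classify_oc_source(c_percent: float, n_percent: float) -> str:
    ratio = c_percent / n_percent
    return "terrestrial plants" if ratio > 20 else "aquatic plants / microbial"

print(classify_oc_source(2.56, 0.05))  # ratio ~51 -> terrestrial plants
```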
The stable HFOC 5 , which accounts for 95.4% of total OC, leads to the non-significant differences in OC distribution among the four studied wetlands, whereas the interaction between study zone and wetland type causes the prominent differences in LFOC and DOC distribution (Table 1). Therefore, LFOC and DOC are relatively sensitive to environmental changes, in accord with the reports of Tan et al. 6 . The lower DOC and LFOC in Down-ZR and ZRCW than in Up-ZR and Mid-ZR may be associated with the low concentration of dissolved oxygen (DO, ranging from 0.0 to 0.7 mg l −1 ) and, consequently, with the microbial communities. Fasching et al. 59 reported that DOC can influence microbial activities, whereas Jiao et al. (2010) showed that dissolved organic matter (DOM) can be mineralized by microbes 60 . Therefore, DOC can be regulated and limited by microbes to some extent. LFOC is primarily related to land use and plant coverage 61,62 , microbial activities 63 , and C mineralization 64 . Although XRCW has significantly higher plant coverage, plant species richness, and microbial diversity than XR, it has a similar level of LFOC. Thus, the data presented herein show that plant coverage and microbial activities may not be the determining factors for LFOC. However, LFON, which had a very low content in the sediments, is significantly correlated with LFOC (R 2 = 0.907; Fig. 5), so LFOC deposition from terrestrial plants into wetland sediments is mainly limited by LFON in this study. This finding can also explain the result of Lal (2005) that increased plant litter may not necessarily raise carbon storage 65 .
The four seasons of a year, with their different climates, air temperatures, water level fluctuations, and precipitation, have strong impacts on C deposition and emission 66 . The data reported herein show higher carbon fractions (HFOC, LFOC, and DOC) in June than in October, further suggesting that summer has higher C storage than autumn, whereas Sabrekov et al. (2014) showed that the emission of greenhouse gases (GHGs) occurs mostly during summer 67 . In addition, Xu et al. (2015) observed that the CH 4 content of XRCW in summer is 15.5 times higher than in autumn 14 . Therefore, summer is an important season for assessing whether a wetland is a carbon source or sink. GHG emission is also strongly affected by plant species and the relative surface covered 66,68 . Xu et al. (2014) also showed that GHG emission in a mud flat with no plant cover exhibited no significant difference among seasons 66 . Thus, the notably higher plant coverage and species richness in XRCW than in XR may give XRCW a higher potential for GHG emission in summer, making it more likely to be a carbon source.
The concentration of OC in worldwide natural wetlands (22.92 mol kg −1 ) is significantly higher than that in the river wetlands of China (1.58% ± 0.011) 9 , as is the OC density (8.01 kg C/m 2 in China versus 10.60 kg C/m 2 worldwide) 69 . Lal (2004) reported that soil physical structure can significantly affect carbon sequestration 70 . Therefore, the prominent relationships among HFOC, DOC, and bulk density herein may indicate that soil structure significantly affects the deposition of HFOC and DOC, but not LFOC, in the present study. Above the ground, carbon sequestration may also be influenced by plants and by carbon dioxide (CO 2 ) in the atmosphere. Mitsch et al. (2014) reported that plant richness in wetlands can notably increase carbon sequestration relative to the accompanying increase in methane (CH 4 ) emission 15 ; on the other hand, increased microbial decomposition rates and CH 4 emission in natural wetlands limit the carbon sequestration process 71,72 . Thus, the significantly lower carbon sequestration in the natural wetlands of China compared with the world mean suggests that Chinese natural wetlands still have great potential for carbon sequestration, and effective measures should be carried out to mitigate the increasing CO 2 concentration in China 73 . Google Scholar showed that most field studies on the organic carbon of ecosystems in China have focused on eastern China, where population, industry, precipitation, cultivated land, and river wetlands are concentrated 17 . Semi-arid grassland soils (such as those in northern China) can also accumulate stable organic carbon without much land use 73,74 . In China, effective management and proper protection of semi-arid grassland may therefore achieve higher carbon sequestration than on the eastern land, which has endured substantial disturbance. Consequently, such management and protection of terrestrial ecosystems are essential to carbon storage at the national level. The three climatic zones and four precipitation regions divided by Shi et al. (2013) and Wang et al. (2014) 16,74 were used to analyze OC storage in this study. The significantly different OC distribution among precipitation regions, together with the non-significant OC distribution among the four sampled wetlands, suggests that precipitation is an important factor affecting carbon storage at the large scale. The higher carbon storage at a precipitation of approximately 800 mm than at approximately 600, 1500, or 1600 mm suggests that moderate precipitation favors carbon storage. Areas with precipitation of 1500 and 1600 mm had relatively high temperatures, and Bauer et al. 75 showed that, in the tropics, dry sites are more inclined to be carbon sinks than humid sites. A significant logistic relationship showed that OC increases with increasing precipitation and moisture only up to a point 76 , so excessive precipitation may work against carbon storage. The carbon deposition trend among the climatic zones of China is also similar to the report of Bauer et al. 75 , who showed higher carbon sequestration in temperate wetlands than in tropical and boreal wetlands when balancing the CO 2 sink against CH 4 emission. Fine soil fractions are prone to carbon accumulation 73 , and high temperature and water content increase the rate of microbial decomposition of plant residues 72 . Therefore, the OC differences among climatic zones and precipitation regions may also be induced by the proportions of particle sizes in the sediments and by microbial richness.
The notably lower OC in marine sediments than in inland river wetlands may suggest that carbon accumulation in the ocean is less than in river wetlands. The OC distribution trend in coastal sediments, increasing with decreasing latitude, differs from the carbon distribution in river wetlands. One source of OC in coastal sediments is the inflow from terrestrial rivers 77 . The particle fluxes from terrestrial rivers into the coast are related to the terrain, runoff amount, and other environmental conditions 78 , and Ni et al. (2008) reported that the maximum fluxes of suspended particles coincide with the largest precipitation 79 . However, the specific contributing factors should be studied in further research.
In conclusion, the hypothesis we established is supported, and the results presented lead to the following conclusions: (1) sampling season can significantly affect the storage of carbon fractions at the temporal scale; (2) an imbalanced C: N ratio can hinder carbon sequestration in wetlands at the regional scale; (3) moderate precipitation is beneficial to carbon deposition at the large scale, and carbon storage in river wetlands is prominently higher than in coastal China; and (4) the effects of wetland type and climatic zone on OC storage were not prominent in the present study. Therefore, comprehensive work should be done to further confirm the influence of wetland type and climate on OC deposition, and global studies on carbon storage are also needed as a next step.
Materials and Methods
Field sampling and data collection. Sediments were sampled from four wetlands (Zhaoniu River (ZR) and Zhaoniu River Constructed Wetland (ZRCW), Xinxue River (XR) and Xinxue River Constructed Wetland (XRCW)) in Shandong Province in northern China, and the factors affecting wetland C storage at the regional scale were analyzed. ZR and XR are tributaries of the Tuhai River and Nansi Lake, respectively (Fig. 1). Nansi Lake is one of the largest lakes in the South-to-North Water Transfer Project, and the Tuhai River is an important river of the Haihe River Basin. ZRCW and XRCW were constructed in 2012 and 2007, respectively 51 (Fig. 1), on the Tuhai River and Nansi Lake to control pollution from the domestic sewage and industrial wastewater of nearby cities.
Surface sediments were sampled in June 2015 from the upstream, midstream, and downstream of ZR and XR, and from the upstream and downstream of ZRCW and XRCW. A total of 40 sediment samples (34°32′-34°48′N; 117°08′-117°15′E) were collected from the four wetlands (Fig. 1) to analyze the distribution of the three C fractions. OC deposition was assessed again in October 2015 in XR and XRCW to compare seasonal differences among the C fractions 4 .
China is a fast-developing country 80 , and papers on OC published before 2006 mostly concentrated on terrestrial systems that had suffered serious environmental damage 81,82 . In addition, some of the methods used to determine OC ten years ago were not as accurate as the methods used in recent years 83 (the potassium dichromate external-heating method versus the elemental analyzer method). Thus, data published before 2006 were excluded from this research. Previous studies also reported that the concentration of OC in surface soils and sediments is higher than in subsurface or deep soils 11 , and OC in deep soils is relatively stable and not easily affected by environmental factors 84 . Therefore, sediments deeper than 30 cm were also excluded from the present study. To ensure that the retrieved data were reliable, highly cited publications were consulted first. The specific data retrieval process is listed below.
"The OC in river sediments of China" was searched with Google Scholar (http://scholar.glgoo.org/), and the first 100 publications (listed by correlations high to low;) and the data were screened for the following requirements: 1) the articles published after 2006; 2) the data of sediments sampled before 2000 were eliminated; 3) sediment samples of deeper than 30 cm were excluded from the collected data; 4) the research stations not relevant to river wetlands or coastal wetlands were eliminated; 5) the data reused in two or more publications were retrieved only once; 6) the OC contents which could specifically be transformed into percentile system were retrieved; and 7) the OC determined by elemental analyzer was used to avoid experimental error. Finally, 595 data from river sediment samples and 364 data from coastal sediment samples published in 38 articles were retrieved for this study. In total, sediment data of over 40 rivers and tributaries and four coastal seas (Bohai Sea, Yellow Sea, East Scientific RepoRts | 7: 5497 | DOI:10.1038/s41598-017-06035-z Sea, and South Sea) were used in the present study. The sampling stations and C distributions are described in Fig. 2. In addition, data on precipitations, climate zones and land-sea differences were also obtained to assess the distribution trend of OC across China 16 .
Laboratory analyses.
Prior to further analysis, moisture content and bulk density were calculated by comparing the volume of the sediment samples at room temperature and at 105 °C 85 . Sediment samples were air dried, ground, and sieved through a 2-mm sieve at room temperature (~20 °C) for the extraction of DOC 86,87 . The concentration of DOC was measured with a total-C analyzer (TOC-L CPN, Shimadzu, Japan) using a non-purgeable OC analysis procedure. The pH was measured in a 1:2.5 sediment: water suspension. A 1.70 g mL −1 sodium iodide solution was used to separate heavy fraction organic matter (HFOM) and light fraction organic matter (LFOM) from the sediment samples 84 . LFOM and HFOM were weighed on an electronic balance (0.0000 g), and their C and N contents (LFOC, LFON, HFOC, and HFON) were determined with an elemental analyzer (Vario EL III, Elementar Analysensysteme, Germany). The total carbon to nitrogen ratio (TC/TN), light fraction carbon to nitrogen ratio (LFOC/LFON), and heavy fraction carbon to nitrogen ratio (HFOC/HFON) were calculated for further analysis.
DNA extraction and Illumina MiSeq sequencing of the amplified DNA were conducted at Shanghai Paisennuo Biological Technology Co., Ltd. (Shanghai, China). Microbial communities and populations were cited and analyzed to explain the distribution of the C fractions 88 .
Statistical analyses. Statistical analyses of the data were performed using SPSS 21.0. Mean value analysis and one-way analysis of variance (ANOVA) were computed to compare the differences in OC between the inland rivers and sea areas of China. In addition, mean value analysis, one-way and two-way ANOVA, cluster analysis, and correlation analysis were performed on the carbon fraction data. For the cluster analyses of LFOC and DOC, the mean values of LFOC or DOC in the four wetlands (ZR, ZRCW, XR, and XRCW) served as the four variables. Moreover, linear regression analysis in SPSS and principal component analysis (PCA) in Canoco 4.5 were performed between the C fractions and other characteristics of the sediments (pH, moisture, bulk density, and nitrogen fractions). Correlation analysis was also carried out between the C and N fractions and the main microbial taxa. Origin 9 and ArcGIS 10 were used for plotting and mapping.
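For the cluster analysis described above, a hierarchical clustering sketch in Python (rather than SPSS) is shown; the mean LFOC and DOC values below are placeholders, not the study means.

```python
# Hierarchical clustering of the four wetlands by mean LFOC (%) and DOC (mg/l);
# the numbers are illustrative placeholders.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage

wetlands = ["ZR", "ZRCW", "XR", "XRCW"]
data = np.array([
    [0.189, 25.0],
    [0.020, 12.0],
    [0.080, 13.6],
    [0.085, 30.4],
])

# Standardize so that LFOC and DOC contribute on comparable scales.
data_std = (data - data.mean(axis=0)) / data.std(axis=0)

z = linkage(data_std, method="average")
leaf_order = dendrogram(z, labels=wetlands, no_plot=True)["ivl"]
print("cluster leaf order:", leaf_order)
```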
Spatial Distributions of Atmospheric Ammonia in a Rural Area in South Korea and the Associated Impact on a Nearby Urban Area
Ammonia (NH3) plays an important role in air quality and atmospheric chemistry, yet studies on the characteristics and impacts of NH3 are limited. Herein, we revealed the spatial distribution of atmospheric NH3, as measured by passive samplers, at three different sites (R1, R2, and R3) in the rural area (livestock environment) of Jeongeup, South Korea, from September 2019 to August 2020. At site R1, the boundary of a large-scale pig farm, dramatically high daily mean concentrations of NH3 were observed (118.7 ppb), whereas sites R2 and R3, located ~1 km from R1, exhibited lower concentrations of 18.2 and 30.4 ppb, respectively. In the rural environment, the monthly NH3 variations showed a peak in June (34.2 ppb), which was significantly higher than in the urban and remote areas. To examine the impact of NH3 from the rural area on a nearby urban area in June 2020, simultaneous measurements were performed using a real-time instrument in Jeonju. When high NH3 events occurred in the urban area in June, the results for the NH3 concentrations and observed meteorological conditions in the rural and urban areas showed that the rural area influenced the NH3 levels in the adjacent urban area.
Introduction
Ammonia (NH 3 ) is a basic gas that reacts with acidic species in the atmosphere, such as sulfuric acid (H 2 SO 4 ) and nitric acid (HNO 3 ), to produce secondary inorganic aerosols (SIAs). These SIAs contribute to the degradation of visibility and air quality, and have adverse effects on human health [1-3].
Global NH 3 emissions have increased from 1.9 to 16.7 Tg between the 1960s and 2010s [4]; southern Asia, including China and India, accounts for more than 50% of the total global NH 3 emissions since the 1980s. More than 90% of the total NH 3 emissions derive from the livestock industry, agricultural activities, and domestic animal fertilizer use [5,6]. Additionally, some studies have shown that industrial and traffic emissions may be a source of NH 3 in urban environments [7,8]. Recently, effective controls have significantly reduced the emissions of some gaseous pollutants, including NO x and SO 2 [9,10], but NH 3 emissions have continued to increase [11,12].
Field measurements have shown that variations in NH 3 concentrations depend on location and season. Owing to elevated emissions from livestock husbandry and agricultural activities, higher concentrations of ambient NH 3 are typically recorded in rural areas [13,14]. For example, the atmospheric NH 3 concentrations measured from April 2009 to August 2011 near the boundary of a pig farm in China ranged between 46.6 and 674.7 ppb [13]. Additionally, Kubota (2020) reported that NH 3 concentrations in high-emission areas (i.e., adjacent to livestock sources) were greater in winter (~71 ppb) than in summer (~56 ppb), which is not a typical seasonal variation pattern [14]. In contrast, lower concentrations of ambient NH 3 have been reported in urban areas [15,16]. For instance, the NH 3 concentration measured in Seoul, South Korea, was ~11.6 ppb from 2010 to 2011; the mean seasonal value of NH 3 was ~14.0 and ~9.0 ppb in summer and winter, respectively [15]. Meng (2011) and Zhou (2019) suggested that NH 3 emitted from livestock activities could influence ambient NH 3 concentrations in nearby urban areas [17,18]. Although atmospheric NH 3 has a significant impact on air quality in rural areas and adjacent urban areas, data on the spatial distributions and characteristics of atmospheric NH 3 are, at present, limited (there are especially limited data available for atmospheric NH 3
measurements in South Korea).
In this study, to explore the spatial distribution of atmospheric NH 3 in a rural area, we conducted one year of measurements (from September 2019 to August 2020) at three different monitoring sites in Jeongeup, a rural site in South Korea. Jeongeup is a typical agricultural area characterized by large- and small-scale livestock farms and facilities. Moreover, to investigate whether the NH 3 emitted from this agricultural source influenced atmospheric NH 3 in the urban area, simultaneous measurements of atmospheric NH 3 were carried out in both the rural area (Jeongeup) and a nearby urban area (Jeonju) during the summer, in June 2020.
Monitoring Sites
Atmospheric NH 3 concentrations were measured in the rural area of Ongdong-myeon, Jeongeup (35.655° N, 126.987° E) from September 2019 to August 2020. Jeongeup is an agricultural area of approximately 692.7 km 2 , characterized by the largest NH 3 emissions in Jeollabuk-do [19] (Figure 1). Significant livestock populations (pigs, cows, chickens, etc.) are located within the region of Jeongeup (i.e., ~393,000 pigs, ~89,000 cows, and ~823,000 chickens in 2021 [20]). Based on the Clean Air Policy Support System (CAPSS), the main NH 3 source among the livestock in the rural area is pigs [19]. Three different sites in Ongdong-myeon, Jeongeup, were selected: one site at the boundary (R1) of a large-scale mechanically ventilated pig breeding farm (~12,000 pigs in 2020) and two sites located 1 km from the farm (R2 and R3). Sites R2 and R3 are located to the north and south of the pig farm, respectively (Figure 1). Additionally, NH 3 concentrations were measured on the second floor of the Natural Science Building at Jeonbuk National University, Jeonju, which is the capital of Jeollabuk-do, South Korea (35.847° N, 127.129° E). Jeonju is an urban area located approximately 40 km from the monitoring sites in the rural area of Jeongeup. This site is surrounded by business offices, residential buildings, and roads. Major NH 3 -emitting livestock areas (Jeongeup, Iksan, and Gimje) lie to the west of this region. Figure 1 shows the locations of the monitoring sites in both the rural and urban areas.
Atmospheric NH 3 Measurements
Passive samplers (RAD 168, Radiello, Italy) were deployed at a height of ~3 m above the ground to obtain the NH 3 concentrations at the rural sites R1-R3 from September 2019 to August 2020, with the aim of analyzing their spatial distribution. A passive sampler consists of an outer porous cylindrical diffusive body, which controls the diffusion rate, and an inner cylindrical polyethylene tube coated with phosphorous acid, which adsorbs NH 3 . This is a widely used instrument for collecting atmospheric NH 3 [14,18,[21][22][23]. In this study, atmospheric NH 3 was collected over a one-day period, from 09:00 a.m. to 08:00 a.m. local time the following day, throughout the sampling period; in June, however, NH 3 was collected over two-day periods because of the rainy season. Sampling could not be conducted in October and November 2019, owing to restricted access to the area during the African swine fever (ASF) outbreak, or in July 2020, during the monsoon season. After NH 3 collection, samples were stored at −18 °C before extraction. The samples were extracted in 6 mL of deionized water (18.2 MΩ·cm, Merck Milli-Q ® , Millipore, Burlington, MA, USA) and sonicated for 45 min. Subsequently, the extracts were analyzed for NH 4 + concentration using ion chromatography (Aquion, Thermo Scientific, USA). The NH 4 + concentrations obtained were then converted to NH 3 using a previously reported equation [24,25]. Based on the field blanks, the detection limit of NH 3 was calculated to be ~0.85 µg/m 3 (~1.2 ppb). A total of 95 samples were analyzed from R1 (32 samples), R2 (32 samples), and R3 (31 samples), as listed in Table 1. After collection, all samples were analyzed within two weeks. To obtain the temporal variations in atmospheric NH 3 concentrations in Jeonju (the urban area), NH 3 was measured on the second floor of Jeonbuk National University using a cavity ring-down spectroscopy (CRDS) analyzer (Picarro Inc., model G2103, Santa Clara, CA, USA) at 1 s intervals from 1-30 June 2020. The detailed methods and calibration of the instrument are described in Park (2020) [26]. Briefly, the detection limit of the NH 3 analyzer was less than 0.09 ppb, and the average precision was 0.3 ppb over 300 s, with a response time of less than 1 s [27]. In principle, the NH 3 analyzer does not require additional external calibration; however, in this study, mixtures of a standard NH 3 gas (9.2 ppm, with an accuracy of ±2%; Airkorea, Korea) and N 2 (Airkorea, Korea, 99.999%) were used to confirm the calibration performance of the analyzer. Calibration was conducted using five different NH 3 concentrations (150, 100, 50, 30, and 0 ppb); the resulting R 2 was 0.9997. Hourly averaged data were used for the analysis. Hours with more than 5 mm of precipitation were excluded to reduce the effect of precipitation on the data.
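The previously reported conversion equation [24,25] is not reproduced in the text; the following is a minimal sketch of a standard passive-sampler data reduction, assuming a Radiello-style constant uptake rate (the 235 mL/min value below is a placeholder, not a value taken from this study) and ideal-gas behavior at 25 °C for the µg/m 3 -to-ppb step. Note that the unit conversion reproduces the detection-limit equivalence quoted above (~0.85 µg/m 3 corresponds to ~1.2 ppb).

```python
# Sketch of NH4+ -> NH3 data reduction for a diffusive passive sampler.
# The uptake (sampling) rate Q below is a hypothetical placeholder.

M_NH3 = 17.031            # molar mass of NH3, g/mol
M_NH4 = 18.039            # molar mass of NH4+, g/mol
MOLAR_VOLUME_25C = 24.45  # L/mol for an ideal gas at 25 degC, 1 atm

def nh4_mass_to_nh3_airconc(nh4_ug, uptake_rate_ml_min, exposure_min):
    """Convert NH4+ mass in the extract (ug) to an atmospheric NH3
    concentration (ug/m3), given the sampler uptake rate and exposure time."""
    nh3_ug = nh4_ug * (M_NH3 / M_NH4)                    # NH4+ -> NH3 mass basis
    air_volume_m3 = uptake_rate_ml_min * exposure_min * 1e-6  # mL -> m3
    return nh3_ug / air_volume_m3

def ugm3_to_ppb(conc_ugm3):
    """ug/m3 -> ppb (volume mixing ratio), assuming 25 degC and 1 atm."""
    return conc_ugm3 * MOLAR_VOLUME_25C / M_NH3

# Example: 10 ug of NH4+ collected over 24 h at a nominal 235 mL/min uptake rate
c = nh4_mass_to_nh3_airconc(10.0, 235.0, 24 * 60)
print(round(c, 1), "ug/m3 =", round(ugm3_to_ppb(c), 1), "ppb")
```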
Hourly averaged meteorological parameters, including air temperature, relative humidity, wind speed, wind direction, and precipitation, were collected at Jeongeup (rural; station id: 47245, ~16.5 km from R1) and Jeonju (urban; station id: 47146, ~1.5 km from U1) using the automated synoptic observing system (ASOS) from the Weather Data Service of the Korea Meteorological Administration (available online: https://data.kma.go.kr, accessed on 13 May 2021).
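As a rough illustration of the hourly averaging and rain screening described above, here is a minimal pandas sketch. The file and column names (crds_nh3_jun2020.csv, asos_jeonju.csv, nh3_ppb, precip_mm) are hypothetical placeholders, not the formats of the actual Picarro or ASOS exports.

```python
import pandas as pd

# Hypothetical CSV exports: 1-s CRDS NH3 readings and hourly ASOS records.
nh3 = pd.read_csv("crds_nh3_jun2020.csv", parse_dates=["time"], index_col="time")
met = pd.read_csv("asos_jeonju.csv", parse_dates=["time"], index_col="time")

hourly_nh3 = nh3["nh3_ppb"].resample("1h").mean()     # 1-s data -> hourly means
hourly = hourly_nh3.to_frame().join(met["precip_mm"])  # align on the hour

# Exclude hours with more than 5 mm of precipitation, as described above.
screened = hourly.loc[hourly["precip_mm"].fillna(0) <= 5, "nh3_ppb"]
print(screened.describe())
```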
Modeling of NH 3 Origin
To identify the potential source regions contributing pollutant concentrations at the receptor site, we performed a concentration weighted trajectory (CWT) analysis [28,29]. The study domain, covering the geographical area from 90° E to 150° E and from 20° N to 60° N, comprises 2400 grid cells at a spatial resolution of 1° × 1°. The CWT analysis was combined with 72-h air mass backward trajectories computed with the National Oceanic and Atmospheric Administration Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT4) model at four start times (00:00, 06:00, 12:00, and 24:00 UTC) at 100 m above ground level (AGL). The meteorological input for the backward trajectory calculations was Global Data Assimilation System (GDAS) data with a resolution of 1° × 1°, downloaded from the web server of the NOAA Air Resources Laboratory. Additionally, to assess regional-scale transport and local pollution source emissions, a conditional probability function (CPF) analysis was performed using NH 3 concentrations, wind direction, and wind speed data obtained from ASOS. A threshold criterion of the 90th percentile was selected to indicate the directionality of the sources.
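The exact CWT and CPF implementations are not given in the text; the sketch below follows the standard formulations, in which each grid cell receives the residence-time-weighted mean of the receptor concentrations, C_ij = (Σ_l c_l τ_ijl) / (Σ_l τ_ijl), and CPF_θ = m_θ / n_θ is the fraction of hours with wind from sector θ whose concentration exceeds the threshold (here the 90th percentile). The grid bounds match those stated above; the input formats are assumptions.

```python
import numpy as np

# Grid: 90-150E, 20-60N at 1 deg resolution (2400 cells, as described above).
LON0, LAT0, RES, NLON, NLAT = 90.0, 20.0, 1.0, 60, 40

def cwt(trajectories, concentrations):
    """trajectories: list of (lats, lons) arrays of hourly endpoints, one per
    72-h backward run; concentrations: receptor NH3 for each run. Residence
    time per cell is approximated by the number of hourly endpoints in it."""
    num = np.zeros((NLAT, NLON))
    den = np.zeros((NLAT, NLON))
    for (lats, lons), c in zip(trajectories, concentrations):
        i = ((np.asarray(lats) - LAT0) // RES).astype(int)
        j = ((np.asarray(lons) - LON0) // RES).astype(int)
        ok = (i >= 0) & (i < NLAT) & (j >= 0) & (j < NLON)
        np.add.at(num, (i[ok], j[ok]), c)    # concentration-weighted residence time
        np.add.at(den, (i[ok], j[ok]), 1.0)  # residence time (endpoint counts)
    with np.errstate(invalid="ignore"):
        return num / den                      # weighted mean per cell (NaN if unvisited)

def cpf(conc, wdir, threshold, sector=22.5):
    """Fraction of hours from each wind sector whose concentration exceeds
    the threshold, following the m_theta / n_theta definition."""
    conc, wdir = np.asarray(conc), np.asarray(wdir)
    edges = np.arange(0.0, 360.0 + sector, sector)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_sec = (wdir >= lo) & (wdir < hi)
        n = in_sec.sum()
        out.append((conc[in_sec] > threshold).sum() / n if n else np.nan)
    return np.array(out)
```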
Spatial Distributions of Atmospheric NH 3 in the Rural Area
During the observation period from September 2019 to August 2020, the monthly averaged temperature, relative humidity, and wind speed were 13.1 ± 9.8 °C, 71.4 ± 17.8%, and 1.5 ± 1.8 m/s, respectively; the prevailing wind direction in the rural area was northerly (Figure S1). A total of 95 passive samplers were deployed at the three sites within the rural area during the entire study period. Table 1 presents a description of each monitoring site and the average seasonal NH 3 concentrations. Figure 2 shows the variation in the daily average NH 3 concentration at R1, R2, and R3 in the rural area. At site R1 (the boundary of a large-scale pig farm), dramatically high NH 3 concentrations were recorded, with a daily average of 118.7 ± 51.7 ppb, varying from a minimum of 40.3 ppb to a maximum of 272.2 ppb (Figure 2a). Seasonally, the average NH 3 concentrations at R1 were 100.5, 128.6, 60.3, and 141.6 ppb in spring, summer, autumn, and winter, respectively (Table 1). Average NH 3 concentrations over 100 ppb were recorded in all seasons except autumn, owing to the frequent heavy rain and typhoons (Figure S1b) in September 2019 (Figure S2a). As listed in Table 1, the average seasonal NH 3 concentration at site R1 in winter was approximately two-fold higher than that in autumn. Ammonia emission is temperature-dependent [30]; high concentrations have generally been reported during summer in various environments [15,21,[30][31][32][33]. However, at the NH 3 point source observed at the boundary of the large-scale pig farm (R1), the NH 3 concentrations were insensitive to the ambient temperature, instead reaching their highest level in winter (Figure 2 and Table 1). This is possibly due to seasonal differences in ventilation at the mechanically ventilated pig farm. The ventilation system usually operates at a significantly higher rate in summer and a lower rate in winter to maintain the internal temperature [34][35][36][37][38][39]. Increased ventilation rates readily disperse NH 3 into the atmosphere under high ambient temperatures, producing relatively low concentrations of atmospheric NH 3 in summer. In contrast, reduced ventilation rates release concentrated NH 3 under low ambient temperatures, resulting in relatively high concentrations of atmospheric NH 3 in winter [37][38][39]. Additionally, the high concentrations of ambient NH 3 at R1 may also have been driven by the livestock industry environment and activities of such a large-scale pig farm. In Asia, large-scale farms are usually equipped with open manure storage facilities; in these facilities, farmers actively store manure and produce fertilizer during winter for use on farmland in spring, with the start of agricultural activity [40]. The NH 3 emitted from intensive manure production in open storage facilities in winter increases the atmospheric NH 3 concentration. Previous studies have reported similar results: ambient NH 3 concentrations were higher in winter months than in other months with active fertilization in agricultural areas [14,[41][42][43]. Kubota (2020) conducted atmospheric NH 3 measurements with a passive sampler near livestock sources from October 2018 to January 2020, finding that the average NH 3 concentration in winter was higher than that in summer, which is consistent with our results [14].
Additionally, García-Gómez (2016) and Loftus (2016) showed that the highest seasonal NH 3 concentrations occurred in winter, which is related to the presence of livestock in the vicinity [41,42].
At R2 (~1 km north of R1), the atmospheric NH 3 concentration reached 56.1 ppb, with a daily average of 18.2 ± 11.5 ppb during the observation period (Figure 2b and Table 1). Compared with the levels observed at R1 and R3, significantly lower atmospheric NH 3 concentrations were recorded at R2 (Figure 2). This is because the monitoring site lies upwind of the pig farm at R1, with no surrounding farms. The seasonal NH 3 concentration at R2 peaked in summer at 28.5 ± 13.6 ppb, and in June at 30.1 ± 15.0 ppb (Figure S2b), approximately two-fold higher than in the other seasons (15.8 ppb in spring, 11.2 ppb in autumn, and 13.6 ppb in winter), as listed in Table 1. This seasonal pattern at R2 contrasts with that at the point source R1, where, as described above, the concentration peaked in winter.
At R3 (~1 km south of R1), the daily average NH 3 concentration was 30.4 ± 12.1 ppb ranging from 15.6 to 56.3 ppb (Figure 2c). Seasonally, the average NH 3 concentrations were 27.5, 38.2, 38.7, and 25.0 ppb in spring, summer, autumn, and winter, respectively, yielding negligible seasonal variations compared with R2 (Table 1). There are several small-scale mechanically ventilated pig farms near small households at R3. Continuous sources from mechanically ventilated small farms likely caused such stable variation. In addition, a higher average daily NH 3 concentration was observed at R3 (30.4 ppb) than at R2 (18.2 ppb) during the entire period, as listed in Table 1.
Sites R2 and R3 are located only ~1 km from the NH 3 point source at R1, yet significantly lower NH 3 levels were observed there (Figure 2). Previous studies have also observed this pattern of decreasing NH 3 concentrations with increasing distance from NH 3 emission sources [44][45][46]. For example, López-Aizpún (2018) reported that NH 3 concentrations decreased from 74.7 ppb at 30 m to 2.1 ppb at 1000 m from a livestock source [46].
Comparisons of Atmospheric NH 3 Concentrations in Different Environments
In this study, we defined the representative atmospheric NH 3 concentration for the rural environment as the average NH 3 concentration obtained from sites R2 and R3 in the rural area of Jeongeup. Site R1 was excluded because of its proximity to the NH 3 emission source [47][48][49]. Figure 3 shows the seasonal variations in the ambient NH 3 values obtained from sites R2 and R3 from September 2019 to August 2020. In the rural environment, the daily mean atmospheric NH 3 concentration was 24.2 ± 13.3 ppb, with a seasonal variation of 33.3 ± 14.6 ppb in summer, 22.2 ± 13.6 ppb in autumn, 21.7 ± 11.8 ppb in spring, and 19.3 ± 8.8 ppb in winter. Significantly higher NH 3 concentrations were observed in summer than in winter in the rural area. The highest monthly concentration was recorded in June 2020, at 34.2 ± 15.7 ppb (Figure S3).
Many studies have reported a strong positive correlation between ambient NH 3 and temperature; the atmospheric NH 3 concentration increases with increasing ambient temperature [21,26,30,50]. In summer, high temperatures favor the volatilization of NH 3 emitted from various sources, such as agricultural activities, and favor the thermodynamically stable gaseous NH 3 over particulate NH 4 + in the atmosphere [51][52][53]. Our finding of high NH 3 levels in summer is consistent with these previous results (Figure 3). Table 2 summarizes the atmospheric NH 3 concentrations observed in various environments in rural (livestock village), urban, and remote areas. Although the measurement periods differ, significantly greater NH 3 concentrations were observed in rural areas than in other environments. For example, in Beijing, China, the average recorded NH 3 concentration was 37.0 ppb near pig facilities [13]. Additionally, in Navarre, Spain, where two high-intensity point sources of NH 3 are located (pig and cattle farms), the average NH 3 concentration was 33.8 ppb, with a maximum value of 74.7 ppb [46]. In Colorado, USA, the NH 3 concentration averaged 61.9 ppb, influenced by emissions from adjacent large concentrated animal feeding operations [54].
The range of NH 3 concentrations reported in urban areas is relatively variable, as listed in Table 2. In China, mean annual NH 3 concentrations are 7.8 ppb in Shanghai [55] and 15.2 ppb in Nanjing [56]. In Korea, the mean NH 3 concentrations in Seoul [15], Mokpo [57], and Jeonju [26] are 11.6, 8.6, and 10.5 ppb, respectively. Compared with other urban areas in Asia, ambient NH 3 concentrations are relatively lower in New York, USA [18], and Douai, France [58], with values of 3.2 and 4.2 ppb, respectively. In some urban areas, diurnal variations in the NH 3 concentration depend on traffic emissions, which may be an important NH 3 source in urban areas [59][60][61]. Compared with rural and urban areas, significantly lower NH 3 levels, i.e., <~5 ppb, have been observed in remote areas, including coastal areas, mountains, and forests (Table 2) [43,47,56,62,63].
Impact on NH 3 Levels in Nearby Urban Area
As discussed in Sections 3.1 and 3.2, the average seasonal atmospheric NH 3 concentrations in the rural area peaked in summer (Figure 3), particularly in June (Figure S3). To investigate whether the high NH 3 concentrations observed in the rural area influenced the NH 3 levels in nearby urban areas, temporal variations in atmospheric NH 3 were measured using CRDS in a nearby urban area at site U1, Jeonju. The location of the urban monitoring site (U1), ~40 km from the rural area, is shown in Figure 1. During 1-30 June 2020, the hourly mean ambient NH 3 concentration at site U1 was 18.6 ± 7.8 ppb, with a minimum at approximately the detection limit of ~1 ppb and a maximum of 59.3 ppb (Figure S4). Figure 4 shows the diurnal variation in the ambient NH 3 measured at U1 in June 2020. A high NH 3 level was maintained in the afternoon, with mean hourly concentrations of >20 ppb from 13:00 to 21:00 and a peak value of 23.1 ppb at 18:00. The NH 3 concentration then remained low from night to sunrise. Park (2020) also observed a similar diurnal pattern, with a single NH 3 peak in the late afternoon, in June 2019 at an adjacent site, Samcheon-dong in Jeonju [26]. This ambient NH 3 peak was measured only in an urban area characterized by NH 3 transported from an adjacent rural area. Previous studies have hypothesized that a single NH 3 peak in the late afternoon is caused by NH 3 emitted from agricultural activities combined with the evolution of the planetary boundary layer [66][67][68]. In the morning, farmers in rural areas begin to fertilize their land. As the temperature rises in the afternoon, NH 3 volatilizes into the atmosphere, and vertical exchange occurs as the mixing layer deepens. These processes can lead to elevated levels of atmospheric NH 3 in rural areas; the elevated NH 3 can then be transported to nearby urban areas, depending on the mixing height and wind direction [26,[66][67][68][69]. In this study, based on simultaneous measurements at the rural and urban sites in June, high NH 3 levels were recorded at both the rural sites (daily average of 49.6 ± 5.3 ppb at sites R2 and R3) and U1 (hourly average of 23.2 ± 7.6 ppb) from 2 to 8 June (Figure 5). On 2 June, a high daily NH 3 concentration of 52.3 ppb was recorded in the rural area, with wind speeds of up to ~5.2 m/s in the afternoon, as shown in Figure 5a. The prevailing wind direction in the rural area then changed from northeasterly to southwesterly, toward the urban site, from 2 to 3 June, and was maintained until 7 June. Throughout the same period, westerly winds were also dominant in the urban area, with low wind speeds (average of 1.3 ± 0.9 m/s) and high temperatures (average of 23.0 ± 3.5 °C) at U1 (Figure 5b). These stable meteorological conditions in the urban area favored the accumulation of pollutants, including NH 3 . This resulted in increasing NH 3 concentrations at U1 until 7 June, especially in the late afternoon (Figure 5b), as hypothesized in previous studies [26,[66][67][68]. Therefore, based on our simultaneous measurements, the elevated NH 3 level in the late afternoon in the urban area was most likely transported from the adjacent rural area. Moreover, the CPF analysis revealed a high probability of NH 3 concentrations > 29 ppb (the 90th percentile) in June (Figure 6).
Additionally, the CWT results showed that the high concentration of atmospheric NH 3 originated domestically, rather than via long-range transport ( Figure S5). These results indicate that the adjacent rural area influenced the high NH 3 concentrations observed in the urban area in June.
Conclusions
To investigate the spatial distribution of atmospheric NH 3 in a rural area, atmospheric NH 3 concentrations were analyzed from 95 samples collected using passive samplers at three sites (R1, R2, and R3) in Jeongeup, South Korea, from September 2019 to August 2020. Over the entire period, the average daily NH 3 concentrations were 118.7 ± 51.7 ppb at site R1 (the boundary of a large-scale pig farm), 18.2 ± 11.5 ppb at site R2 (~1 km north of R1), and 30.4 ± 12.1 ppb at site R3 (~1 km south of R1). Significantly high levels of atmospheric NH 3 were recorded at the NH 3 emission source of R1 during winter (average of ~141.6 ppb) due to the low ventilation rates and active production of livestock manure. In contrast, the atmospheric NH 3 concentrations decreased markedly at R2 and R3, even at a distance of only ~1 km from the NH 3 emission source (R1). In this study, we used the average NH 3 concentration at sites R2 and R3 to determine the representative atmospheric NH 3 concentration in the rural area, as R1 lies at the boundary of the NH 3 emission source. The average atmospheric NH 3 concentration of the rural area (average of sites R2 and R3) was 24.2 ± 13.3 ppb, with a seasonal variation of 33.3 ± 14.6 ppb in summer, 22.2 ± 13.6 ppb in autumn, 21.7 ± 11.8 ppb in spring, and 19.3 ± 8.8 ppb in winter. The NH 3 concentrations were highest in June 2020, during summer.
To explore the impact of the high NH 3 concentrations monitored in the rural area during June on the atmospheric NH 3 level in a nearby urban area, atmospheric NH 3 concentrations were simultaneously measured in an urban area of Jeonju using CRDS in June 2020. The hourly mean NH 3 concentration in June was 18.6 ± 7.8 ppb in the urban area, where a high level was maintained in the late afternoon. When high NH 3 episodes occurred at the urban site in June, elevated NH 3 concentrations were also observed in the adjacent rural area. During these episodes, from 2 to 8 June, westerly winds were dominant in the urban area, with low wind speeds and high temperatures, leading to stable meteorological conditions. The CPF analysis also showed a high probability of elevated NH 3 concentrations in June. In conclusion, the increased ambient NH 3 concentrations observed in the urban area in June were influenced by the high NH 3 concentrations in the rural area located to the west. These results provide a more comprehensive understanding of the spatial and temporal distribution of atmospheric NH 3 and its impact, as well as a scientific basis for developing effective control strategies for atmospheric NH 3 levels.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/atmos12111411/s1. Figure S1: Monthly meteorological conditions of (a) wind direction (WD) and wind speed (WS), and (b) cumulative precipitation (Pre.), temperature (T), and relative humidity (RH) at Jeongeup in the rural area from September 2019 to August 2020. Figure S2: Monthly variations in ambient NH 3 concentrations with standard deviations at the monitoring sites of (a) R1, (b) R2, and (c) R3 in a rural area during September 2019-August 2020. Figure S3: Monthly variations in ambient NH 3 concentrations averaged at the rural sites R2 and R3 during September 2019-August 2020. Figure S4: Time series of hourly mean NH 3 concentrations measured at the urban site (Jeonju) during 1-30 June 2020. Figure S5: Results of the concentration weighted trajectory (CWT) analysis. Data Availability Statement: Publicly available meteorological archived datasets analyzed in this study can be found at https://data.kma.go.kr (accessed on 13 May 2021). The publicly available Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model can be found at https://www.ready.noaa.gov/HYSPLIT.php (accessed on 13 May 2021) and run either online or offline. The data can be found at ftp://arlftp.arlhq.noaa.gov/pub/archives/gdas1/ (accessed on 13 May 2021).
Pivoting: leveraging opportunities in a turbulent health care environment
Purpose: The purpose of this lecture is to challenge librarians in clinical settings to leverage the opportunities presented by the current health care environment and to develop collaborative relationships with health care practitioners to provide relevant services.
Discussion: Health care organizations are under financial and regulatory pressures, and many hospital librarians have been downsized or have had their positions eliminated. The lecture briefly reviews hospital librarians' roles in the past but focuses primarily on our current challenges. This environment requires librarians to be opportunity focused and pivot to a new vision that directs their actions. Many librarians are already doing this, and colleagues are encouraging us to embrace these opportunities. Evidence from publications, websites, discussion lists, personal communications, and the author's experience is explored.
Conclusion: Developing interdisciplinary and collaborative relationships in our institutions and providing relevant services will mark our progress as vital, contributing members of our health care organizations.
INTRODUCTION
Like all Janet Doe lecturers, I thank those who selected me and confess to a sense of anxiety because of the responsibility to honor Janet Doe and offer something worthwhile to my colleagues. I briefly considered trying to start a new tradition called the ''Janet Doe TED Talk,'' since TED talks can be no longer than eighteen minutes. But I realized I really did have quite a bit I wanted to say about medical librarianship in the current environment, and I am grateful for this opportunity.
The theme of my lecture is ''Pivoting: Leveraging Opportunities in a Turbulent Health Care Environment,'' and I will be exploring this idea throughout the lecture. The picture on the title slide is an artist's rendering of the new Saint Joseph Hospital that will open in December of 2014. It has been built with the energy and vision of Saint Joseph Hospital staff who had to stay focused on taking care of patients and the demands of the health care system, while at the same time working to bring the vision of the new hospital to life.
Many in this audience may not know very much about Janet Doe or why she continues to be honored through this endowed lectureship. In looking at previous lectures, I was particularly moved by the words of Virginia H. Holtz, AHIP, FMLA, in her 1986 lecture, because they were not so much about Janet Doe's many accomplishments as they were about her attitude regarding the Medical Library Association (MLA) and her colleagues. Holtz said: Janet Doe was among the first of those who have set for me a ''gold standard'' for what MLA and MLA members should be, through the example of her enthusiasm for the fellowship and ideas of colleagues, young and old, a joy in the task to be done, and pleasure in accomplishment, no matter whose. Among her enduring gifts to this association, renewed at each annual meeting, is this open, generous, and joyful sharing of each other's company, insights, and accomplishments. [1] Janet Doe lecturers are asked to provide their perspectives on the history or philosophy of medical librarianship. Most lectures are a mixture of both, since the work of our predecessors has contributed to our values of purpose, service, and excellence. Many lectures are also quite personal. Nina W. Matheson, AHIP, FMLA, noted that ''All [lecturers] have written about what they hold nearest and dearest to their professional hearts, seeking to inform, to provide insight, to inspire, and even to entertain'' [2]. In his 2004 lecture, Rick B. Forsman, FMLA, said that ''To a significant degree [the lecture] is a self-disclosure, an intimate exposure of how one thinks, what one believes is important, and what are the innermost musings that may have been shared with a small circle of colleagues and friends, but that rarely are presented so publicly'' [3].
As I reflected on the content of my lecture, I thought about the words of Mother Xavier Ross, the founder of the Sisters of Charity of Leavenworth and Saint Joseph Hospital. She said, ''It is wisdom to pause, to look back and see by what straight or twisting ways we have arrived at the place we find ourselves'' [4]. In his 1976 lecture, David Bishop cited the maxim ''never talk of yourself,'' but he allowed an ''occasional personal note'' [5], so I hope he would humor my wish to share a little about my medical librarianship journey and how I arrived at where I am today.
MY BACKGROUND
Unlike many previous lecturers, I never knew Janet Doe. When Gertrude Annan gave the first lecture in 1967, ''The Medical Library Association in Retrospect, 1937-1967'' [6], I was an English major finishing up my sophomore year in college. When I was not worrying about the war in Vietnam or anticipating the latest Beatles album (which turned out to be Sgt. Pepper's Lonely Hearts Club Band), I was planning to enjoy the life of academic scholarship, teaching, and perhaps membership in Garrison Keillor's yet to be formed ''Professional Organization of English Majors.'' After graduation from college, I was accepted into the master's program at Loyola University in Chicago as a teaching assistant. Although I eventually received my master's degree in English, I was at first stymied by the prerequisite of a passing grade on a Princeton foreign language exam; foreign languages were never my strong suit. I was chagrined in reading Janet Doe's oral history when she said, ''I don't believe you could get anywhere in medical librarianship without some knowledge of French and German, and a little smattering of Spanish [and] Italian'' [7]. Fortunately, I did not know that.
Realizing I might need to find a different career and encouraged by my parents and friends, I enrolled in the School of Library and Information Science at Rosary College, now Dominican University, in River Forest, Illinois. In one similarity with Janet Doe, at that time, I never knew there was such a thing as a special library. I thought I might partly achieve my academic aspirations as a college librarian.
It was there that I pivoted, inspired by Katherine (Kay) Haas, a medical librarian who had been recruited by Sister Lauretta McCusker, dean of the library school, to return to Rosary to teach medical librarianship. Haas impressed upon us that a good librarian could succeed in any environment; even liberal arts majors could become excellent medical librarians. Haas encouraged me and my classmate Joan M. Stoddart, AHIP, to become certified. In those days, one could achieve Level I certification by completing a course approved by MLA. Haas also recommended me to John A. Timour, FMLA, for my first position in 1972 as Connecticut Regional Medical Program librarian, located at the Yale Medical Library. I owe Haas-and Timour-a great deal.
As an extension librarian at the University of Connecticut Health Center, I visited hospitals around the state giving MEDLINE demonstrations with my trusty Texas Instruments Model 725. This portable beauty with the built-in acoustic coupler printed thirty characters per second, weighed about thirty pounds, and had its own carrying case. Later, I worked in the public library in Grand Forks, North Dakota, in the mid-1970s. Then, after moving to Denver, Colorado, in 1978, I applied and was hired for the medical librarian position at Saint Joseph Hospital.
It was here that my life changed again. I found not just a job, but a vocation. Many hospital librarians have that sense of a calling, perhaps because of our daily contact with clinicians and often with patients and families. Every day, you are reminded of why you are doing this work. In reviewing previous lectures, I found one other, that by Jacqueline D. Doyle, AHIP, FMLA, for many years a hospital librarian, who referred to this work as a vocation. She said, ''The passion that many librarians bring to their jobs makes librarianship a vocation as much as a profession'' [8].
In an article on Florence Nightingale, Victoria Sweet, author of God's Hotel, wrote: What would she (Nightingale) have thought of the Affordable Care Act? She would have liked its emphasis on public health, on data and on adequate care for everyone. There's just one thing she would have missed-her belief that caring for the sick is not a business but a calling. She didn't mean ''calling'' in a religious sense. She meant having a kind of feeling for one's work-an inner sense of what is right, which she termed ''enthusiasm,'' from the Greek entheos, having a god within. [9] Medical librarians often have this inner sense of what is right; they demonstrate it in the enthusiasm they bring to their work, often in spite of the barriers and frustrations they encounter. I would like to share with you how that happened for me.
HOSPITAL LIBRARIANSHIP AS A VOCATION
One morning in the early 1980s, a young woman opened the door to a small medical library on the eleventh floor of the hospital, directly under a helicopter landing pad. I was surprised to see someone who was not a staff member. Visitors were rare because of the remote location, and patients never came. But this visitor was a patient, recently diagnosed with Menière's disease and terrified by unexpected dizzy spells. While she was relieved to have a diagnosis, she was worried about what the future held. Her doctor suggested that she visit the hospital library to find something to read about the disorder; he thought it might help her cope.
I told her I would find some information, but it might be quite technical. The patient was anxious for any information, and I found a few items in textbooks and journals. Shortly before she left the hospital, the patient stopped by to thank me and to give me a gift, a bookmark with a verse by Grace Haines that ends with these lines:
So here's to all the little things,
The ''done and then forgotten'' things,
Those ''Oh, it's simply nothing'' things,
That make life worth the fight.
I still have the bookmark, and I have never forgotten this incident. I realized then that the knowledge I had about the medical literature might help not only the professionals, but also patients. If this patient needed information, probably others did too. This was the beginning of what I would come to understand as my philosophy of medical librarianship and clarified for me more than any other experience why I was doing this work. Whether I was providing information to doctors, to nurses, or to patients, the purpose for my work was the patient, and I felt that I had found a vocation or that it had found me.
CONSUMER HEALTH INFORMATION
Fortunately, the timing was right. Denver colleagues Marla Graber, Sandi Parker, and Rosalind F. Dudden, AHIP, FMLA, shared my interest. MLA's Consumer and Patient Health Information Section, which had achieved provisional status in 1984, put me in contact with some of the pioneers in the field. Although the consumer movement in the health care sector was still fairly young, there were helpful publications, including Alan Rees's 1982 book, Developing Consumer Health Information Services [10], that provided insights and practical advice from health care professionals and librarians, including Joanne Gard Marshall, AHIP, FMLA. In Hospital Library Management [11], edited by Jana Bradley, FMLA, Ruth Holst, AHIP, FMLA, and Judith Messerle, AHIP, FMLA, and published in 1983 by MLA, Rebecca Martin and Ellen Gartenfeld contributed chapters about services for patients and community health information. Rees's 1991 book, Managing Consumer Health Information Services [12], included descriptions of programs in hospitals and other settings.
I also found a partner in my own institution, our patient education coordinator, Nancy Griffith. She had attended a conference where she heard the talk by Kathleen A. Moeller, AHIP, FMLA, about the Overlook Hospital consumer health library and asked me if we could do something similar. This experience showed me the power of leveraging opportunities and the energy of collaboration, as Griffith and I worked together to plan and open a consumer health library in 1985. It also taught me the link between being opportunity focused and pivoting to a new vision that directs action. This happened for me thirty years ago, and it is even more important today as hospital librarians must leverage opportunities in this turbulent health care environment.
LEVERAGING OF OPPORTUNITIES
One of the items on Peter Drucker's list of eight practices for effective leaders is ''focus on opportunities,'' rather than on problems. Drucker states, ''problem solving, however necessary, does not produce results. It prevents damage. Exploiting opportunities produces results'' [13]. It is not easy to do this. We all tend to focus on problems because we are under so much pressure to remain relevant in this environment. This can make us fear-based in our actions-or nonaction. We are constantly hearing that we need to reinvent, transform, evolve, move out of the library, embed ourselves, redefine, take on new roles, and so forth. Many hospital librarians, including me, are using the concept of knowledge services to reframe current services and expand into new ones.
Today, I am adding another concept, by suggesting that we need to ''pivot.'' The idea is from the pivoting meditation in 365 Tao by Deng Ming-Dao, who begins:
Some days, you and I go mad.
Our bellies get stuffed full.
Hearts break, minds snap.
We can't go on the old way so we change.
Our lives pivot, forming a mysterious geometry.
He goes on to say that ''Life revolves. You cannot go back one minute or one day. In light of this, there is no use marking time in any one position. Life will continue without you, will pass you by, leaving you hopelessly out of step with events. That's why you must engage life and maintain your pace'' [14].
Many hospital librarians have been pivoting to meet the needs of their organizations, which are under tremendous pressures, both financial and regulatory. Many of our colleagues have not survived these pressures. Those who have survived do not possess some secret sauce whose recipe we only need to obtain. And no outside agency, including the Joint Commission, the Centers for Medicare and Medicaid Services (referred to as CMS), or the Accreditation Council for Graduate Medical Education (ACGME), is going to mandate that every hospital should have a librarian. It is highly unlikely, and we should not waste time and energy hoping that day will return.
When I asked Hospital Libraries Section (HLS) colleagues for their ideas about leveraging opportunities, I received many responses. Sheila Hayes, AHIP, from Hartford Hospital recommended that I read Changing Roles and Contexts for Health Library and Information Professionals, edited by Alison Brettle and Christine Urquhart [15]. In her enthusiastic review, Hayes noted that ''Since 2009 there has been a plethora of literature on the roles of librarians, how to change them and how to engage in more definitive activity in [our] respective institutions.'' She added, ''All this information has been enough to cause an emotional breakdown on some level in all librarians; at last a book has arrived to put sanity back in our heads and in our practices'' [16].
Diane G. Schwartz, AHIP, reminded me of the list her Vital Pathways team compiled to show the diversity of services that hospital librarians provide [17]. Claire Joseph, AHIP, sent me the excellent article that she and Helen-Ann Brown Epstein, AHIP, published in the March 2014 issue of the Journal of Hospital Librarianship, called ''Proving Your Worth/ Adding to Your Value'' [18]. The article highlighted a number of ideas that hospital librarians can implement in their own institutions.
HOSPITAL LIBRARIANS: EVOLVING ROLES
Previous Janet Doe lecturers have looked at the evolving roles of hospital librarians, although Doyle's 2002 lecture was the second of only two that were presented by hospital librarians. The first hospital librarian to give the lecture was Holst. Her excellent 1990 lecture, ''Hospital Libraries in Perspective,'' provided a history of American hospitals and the various roles that the hospital library has played within its parent institution during the twentieth century [19]. Perhaps some of these roles resembled a 1940s vocational guidance film [20].
Although there have been only a few hospital librarians who have had the privilege of presenting the lecture, hospital librarians have been mentioned in many presentations. In his 1977 lecture, ''Foundations of Medical Librarianship,'' Erich Meyerhoff, AHIP, FMLA, noted that ''the emergence of hospital librarians as a creative and productive group of practitioners with professional strivings and close relationships with their clientele represents a pool of talent which has already begun to make its mark'' [21]. Betsy L. Humphreys, AHIP, FMLA, in 2001, noted the sometimes contentious relationship between the National Library of Medicine (NLM) and hospital librarians [22]. I have to confess that I was one of the hospital librarians who protested in 1989 when NLM announced Grateful Med to hospital administrators without mentioning hospital library services. I guess you can take the woman out of the 1960s, but you can't take the 1960s out of the woman.
Ana D. Cleveland, AHIP, FMLA, in her 2010 lecture recalled telling Estelle Brodman about her enthusiasm for educating her students to be clinical librarians. She said, ''Dr. Brodman…proceeded to tell me that hospital librarians have been providing information to doctors, residents, patients, and others in the hospital wards for a long time. She was…determined that I would get the point that this was a new name for a service provided by hospital librarians for years'' [23]. Meyerhoff also spoke of clinical librarianship, saying, ''It is a mode of service which promises to establish once again a close and systematic relationship between physicians and librarian'' [21]. It is interesting to consider that Janet Doe was happy that she retired before the advent of automation. In her 1977 oral history interview, she said, ''The automation has changed, to some extent, the relationship between the physician and the librarian, because it has made available to the physician directly much more information that had to be gathered for him by the librarian'' [7]. Thus, Meyerhoff hoped that clinical librarianship would restore this relationship.
When I was thinking about how I wanted to talk about hospital librarianship, colleagues suggested that possibly I could point out the contributions of hospital librarians through the years. Clinical librarianship, the benchmarking network, and library standards came to mind. Instead of exploring these important activities, I decided to touch on some of the radical changes in the health care environment impacting hospitals, and therefore both hospital librarians and academic librarians who work with affiliated hospitals. My goal was to offer an optimistic message and, at the same time, deliver one that would recognize the realities that we are facing in the current health care environment.
According to research from the American Hospital Association, ''Hospitals have faced repeated cuts to Medicare and Medicaid payment since 2010 due to both legislative and regulatory changes'' [24]. These and other changes are putting financial pressures on hospitals and hospital librarians, and some will not survive them. In addition to discussing recent trends, I also wanted to highlight several individuals who have inspired us by words and actions, as they have pivoted to address these challenges. Although pivoting has become a buzzword in politics and in business, especially in Silicon Valley, the concept can also apply to us. For example, in his work related to startup companies, Eric Ries wrote: I want to introduce the concept of the pivot, the idea that successful startups change directions but stay grounded in what they've learned. They keep one foot in the past and place one foot in a new possible future. Over time, this pivoting may lead them far afield from their original vision, but if you look carefully, you'll be able to detect common threads that link each iteration. [26] In an email to me, Marshall shared an idea from tai chi. She said that ''In tai chi, we pivot on our heel when we want to turn in a different direction. It is a key action for facilitating movement in a safe, stable way'' [27].
In the pivoting meditation, Deng says, ''Each time you make a decision, move forward. If your last step gained you a certain amount of territory, then make sure that your next step will capitalize on it…But how do we develop timing for the process? It has to be intuitive'' [14]. When Deng talks about intuition, he is referring to what we might call tacit knowledge. Amrit Tiwana, author of The Knowledge Management Toolkit, states that ''Tacit knowledge includes judgment, experience, insight, rules of thumb, intuition.'' He says, ''Experts and professionals generally practice primarily with tacit knowledge'' [28]. It is our tacit knowledge that can help us pivot in this environment, while staying well grounded in our fundamental values.
In her introduction to the October 2013 Journal of the Medical Library Association (JMLA) issue on new roles for health sciences librarians, Lucretia W. McClure, AHIP, FMLA, reminded us that ''A constant in librarianship is the ability to move and adapt with the changes in medicine, science, and the environment'' [29]. One could replace ''move and adapt'' with the word ''pivot.'' In her editorial in the January 2014 JMLA, Jane Blumenthal, AHIP, had compelling advice:
What do you do when you find out seemingly overnight that the roles you have been playing in your institution are no longer needed or valued? Adapt. Find new roles. Move away from activities that are not valued and embrace value-added activities that demonstrate return on investment. Move quickly and change direction on a moment's notice. [30] In her 2010 lecture, Cleveland noted that: It is essential that educational programs do not abandon the basic tenets of library and information sciences-what we often call the core principles. On the other hand, we cannot lose sight of the fact that our programs require interdisciplinary and collaborative curricula that integrate the total domain of the health care enterprise, library and information sciences, and other information-centered fields. [23] Hospital librarians must live the interdisciplinary and collaborative essentials that Cleveland emphasized as they pivot to new endeavors in this current environment.
Commenting on a recent MEDLIB-L discussion on hospital library closures, Elaine Russo Martin said, ''We will need to challenge everything we have held dear in the past and perhaps no longer do these things. But do new things, in new ways, under new conditions…most importantly I think we will need the Will and the Perseverance to do so.'' She added, ''I don't think we have been ready for the radical changes I would see necessary for us to move to ensure the future of medical librarianship. Are we now?'' [31]. In other words, ''pivot.''
HEALTH CARE ENVIRONMENT CHALLENGES
Cleveland's 2010 lecture included what she called a model of the health care environment [23]. The major elements are clinical practice, information, technology, consumers, and research. As shown in Figure 1, she then expanded each element to include the paradigm shifts and trends that health information professionals need to know about to provide relevant services. I will focus on just a few of the elements in the model: technology, the link between legislative mandates/regulatory requirements and patient safety, and consumer health information/health literacy.
Of course, our colleagues have always demonstrated awareness of changes in the health care environment. For example, when the Joint Commission standards changed, Connie Schardt, AHIP, FMLA, and the Hospital Library Standards Committee revised the ''Standards for Hospital Libraries'' to be complementary [32]. When ''Total Quality Management'' was being adopted in hospitals in the early 1990s, hospital librarian leaders including Chris Jones and others were educating HLS members about it through the National Network and other venues [33].
In the January 2002 JMLA, a symposium on ''patient-centered librarianship'' focused on the clinical environment, including the then new informationist concept. While some of these articles were then and still are inspiring, twelve years later, the scene has changed dramatically. The symposium article ''Hospital Librarianship in the United States: At the Crossroads'' by Diane G. Wolf and others quoted Edwin A. Holtum, who said, ''Regardless of the vast leaps made in digitizing information…there is no magic black box containing the world of medical knowledge [from] which busy clinicians will be able to…receive precisely targeted feedback during the clinical encounter'' [34]. Wolf added, ''Focused, high-quality patient-care information will be most cost effective and reliable when obtained by using the skills of specialists, and hospital librarians are the specialists in this arena'' [35].
While this was accurate then, before long it may no longer be true, as my short fantasy video of a clinical librarian robot suggests. My purpose in showing this video is not to alarm you or to talk about the implications of robotics and big data in health care. However, the video is intended to be another reminder about the technological changes in the health care environment that will continue to have an impact on us and other professionals. While some of our current activities may continue well into the future, we cannot be complacent, thinking and telling each other that only we can provide these services. For example, Chris Patrick and Karena Man commented, in Information Week, on the changes brought about by cloud computing that are impacting chief information officers (CIOs) in all sectors. They wrote, ''The fundamental choice facing every company, CIO, and aspiring CIO is the same: Embrace the possibilities of a world without walls, or cling to what feels familiar and secure and risk becoming irrelevant more quickly than you ever imagined possible'' [36]. We could change this quote from ''CIO'' to ''hospital librarian,'' many of whom are already providing services without walls.
For the past few years, our hospitals have been dealing with the technological challenges of electronic health records (EHRs), including the requirements of ''meaningful use.'' Meaningful use is a set of criteria for the use of certified EHR systems to improve patient care that provides incentive payments for Medicare providers. The concept of meaningful use rests on the ''5 pillars'' of policy priorities for health outcomes:
1. Improving quality, safety, efficiency, and reducing health disparities
2. Engage patients and families in their health
3. Improve care coordination
4. Improve population and public health
5. Ensure adequate privacy and security protection for personal health information [37]
Participation in the program is now voluntary, but if entities that are called ''Eligible Hospitals'' or ''Eligible Professionals'' fail to join by 2015, there will be negative adjustments to their Medicare/Medicaid payments, starting at a 1% reduction and escalating to a 3% reduction by 2017 and beyond. The Advisory Board Company created a poster to help organizations compare the latest objectives and measures of meaningful use stages 1 and 2, as outlined by CMS for 2014. The poster demonstrates the complexity inherent in the meaningful use program [38].
Many librarians are leveraging opportunities as they are working with their hospitals' chief medical information officers to provide point-of-care resources that will enhance the usefulness of the clinicians' EHRs and the patients' personal health records. Librarians are also supporting the development of evidence-based order sets to improve clinical care [39].
Other pressures that our hospitals are facing are the changes in reimbursements created by health care reform and the regulatory requirements that address patient satisfaction, clinical quality, and high reliability in patient safety. Of course, patient safety in hospitals is not a new concern. Florence Nightingale wrote about it in 1863 in the preface to Notes on Hospitals. She said, ''It may seem a strange principle to enunciate as the very first requirement in a hospital that it should do the sick no harm. It is quite necessary, nevertheless, to lay down such a principle'' [40]. In our modern era, Donald Berwick was one of the earliest and most influential proponents of bringing quality improvement techniques to health care. Former head of CMS and a 2014 candidate for governor of Massachusetts, Berwick founded the Institute for Healthcare Improvement (IHI) in the late 1980s to focus on specific quality and safety issues. Today, there is a multitude of entities that hospitals may voluntarily work with to address quality and patient-safety requirements that are often tied to Medicare reimbursement. Cybrarian Lorri Zipperer has often written about the role of librarians in patient safety. Her latest publication, Patient Safety: Perspectives on Evidence, Information and Knowledge Transfer, includes contributions from many MLA members. It highlights the essential role of librarians in improving patient safety throughout the continuum of care [45].
Unfortunately, there has been a discouraging lack of progress in preventing harm to patients since the 1999 Institute of Medicine (IOM) report To Err Is Human. A study by John T. James in the 2013 Journal of Patient Safety is titled ''A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.'' James based the estimate on the findings of four recent studies that identified preventable harm suffered by patients. He concludes that ''the epidemic of patient harms in hospitals must be taken seriously if it is to be curtailed'' [46]. Patient safety pioneer Lucian Leape, who was on the IOM committee that wrote To Err Is Human, was quoted as saying that they knew at the time that their estimate of medical errors was low and that he has confidence in the four studies and the estimate by James [47].
Diagnostic error is also receiving a new focus in patient safety [49].
Hospital librarians likely will be receiving more requests dealing with diagnostic error and research to improve diagnostic accuracy. Barbara B. Jones, Library Advocacy/Missouri coordinator at the National Network of Libraries of Medicine, MidContinental Region, is coordinating a collaborative program, called ''Expert HealthSearch,'' that was initiated and is being sponsored by the Society to Improve Diagnosis in Medicine. A pilot project for the program was launched in April of 2014: ''As designed,…a team of five librarians from across the country who are passionate about helping people and are working on their own time at nominal rates does the actual searches for patients'' [50].
GOVERNMENT INCENTIVES
What has changed since Berwick and others first urged improvement in quality and patient safety is that government incentives are now tied to these requirements. Hospitals receive bonuses or penalties from CMS based on how they score on process measures, patient experience, and mortality rates. These incentives may force improvements that the shock from the original IOM report has not been able to achieve.
One of the new Medicare payment programs is the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS), which is part of the Hospital Value-Based Purchasing Program created in the Affordable Care Act [51]. Value-based purchasing rewards high-quality providers and penalizes weak performance. HCAHPS is just one of the federal reporting programs designed to provide incentives for hospital performance across a range of quality metrics. The HCAHPS survey was developed for CMS by the Agency for Healthcare Research and Quality (AHRQ) to provide a standardized survey instrument and data collection methodology for measuring patients' perspectives on hospital care. Hospitals authorize companies such as Press Ganey and Gallup to administer the survey. Using the HCAHPS questions, the companies ask recently discharged patients about their hospital stays, and the companies report the survey results to CMS.
The survey asks patients to rate the frequency of events during their care. The choices are never, sometimes, usually, or always; some questions only require a yes or no answer. Hospitals that perform well in comparison to peers will receive a quality bonus; those that perform poorly will incur a penalty. Questions about communication include the following. During your hospital stay:
- how often did nurses explain things in a way you could understand;
- how often did doctors explain things in a way you could understand;
- before giving you any new medicine, how often did hospital staff describe possible side effects in a way you could understand;
- did you get information in writing about what symptoms or health problems to look out for after you left the hospital? [52]
For the incentive payments and penalties, Medicare counts only the percent of patients who answer ''always'' or ''yes.'' The results for satisfaction are posted on the Hospital Compare website, along with the quality metrics reported by hospitals. Consumers can select multiple hospitals and compare performance measure information [53].
HOSPITAL LIBRARIANS AND HEALTH LITERACY
When I looked at the HCAHPS survey, the questions reminded me of Rees's 1992 Janet Doe lecture, ''Communication in the Physician-Patient Relationship.'' Rees challenged librarians to take a ''leadership role in opening newer channels of communication between physicians and patients'' [54]. Many did, by encouraging patients to discuss the information that they received from the librarians with their physicians.
Today, direct access to tools like MedlinePlus can produce better-informed patients and consumers. We have come a long way since my 1980s story of the patient with Menière's disease. But while consumer health information services are still vital and serve a need, hospital librarians are pivoting to a new level of involvement in improving communication with patients. Despite all of the information available to patients and consumers, numerous government and private agencies, the health professionals in our hospitals, and librarians have recognized that low health literacy is an increasing problem.
Funded by NLM and with the leadership of co-principal investigators Jean P. Shipman, AHIP, FMLA, and Carla J. Funk, CAE, and Project Coordinator Sabrina Kurtz-Rossi, MLA developed a health information literacy program in 2008 that has been used in a variety of settings, including my own. The program is available on MLANET, and additional resources continue to be added to the Health Information Literacy page [55]. Many hospital librarians are forming collaborative endeavors with practitioners that can address the specific HCAHPS questions that are associated with the communication domains mentioned above. In 2013, the librarians and nurse champions in my hospital organized an interdisciplinary community of practice to help embed health literacy best practices in our hospital and clinics. Our hospital uses ''Lean Six Sigma'' methodologies, and ''going to Gemba'' is recommended as a first step in continuous process improvement. Gemba is a Japanese term meaning ''the actual place.'' It involves going to the place where the work is being done, the front line, to observe and learn [56].
Members of the committee conducted twenty-five observations of activities including preoperative teaching, discharge instructions, medication teaching, and the admissions process. The results were compiled and categorized on a fishbone cause-and-effect diagram, and through multi-voting, the team selected its first specific aim, which was to improve the teaching environment by reducing the distractions, interruptions, and noise that they had observed in nearly every setting.
A checklist was developed and is currently being piloted on several hospital units. It suggests ways for the teacher to improve the environment by turning off phones and other devices, assessing who needs to be in the room, and sitting down and being present to the learners. Another suggestion is for the teacher to close the door and post a ''Teaching Time Out'' stop sign to reduce interruptions. Results to date have been positive from both teachers and learners. In the future, other practices will be addressed, including the use of teach-back and improvement in tools such as the after-visit summary that patients receive before they leave the hospital.
As hospital librarians help embed health literacy best practices in their hospitals, practitioners may achieve the Health Literate Care Model that Howard Koh and others proposed in a February 2013 Health Affairs article. They wrote that everybody is at risk for not understanding and that organizations should institute what they call ''health literacy universal precautions'' [57]. Components in the model include community partners, health literate systems, strategies for health literate organizations, and productive interactions.
Several hospital librarians have shared stories about their participation in health literacy activities. As recommended by the Health Literate Care Model, these librarians are part of prepared, proactive, health literate, health care teams in order to foster productive interactions with informed, health literate, activated patients and families, all leading to improved outcomes. Brenda R. Pfannenstiel, AHIP, from Children's Mercy Hospitals and Clinics in Kansas City, Missouri, was a charter member of their health literacy committee, developing a wiki and a web page for the committee, and ensuring continuing librarian involvement when she rotated off the committee [58]. Andrea Harrow, AHIP, from Good Samaritan Hospital Medical Library in Los Angeles, publicizes her services as a librarian who can help to find patient information in other languages, and she schedules continuing medical education (CME) topics promoting health literacy and translation services [59]. Helen Houpt, AHIP, from Pinnacle Health System in Harrisburg, Pennsylvania, has collaborated with nurses to present a variety of health literacy classes to both internal and external audiences [60].
Other activities include a New England Region webinar, ''Creative Health Literacy Projects,'' in which Margo H. Coletti, AHIP, from Beth Israel Deaconess Medical Center in Boston, collaborated with other experts to develop a workshop on how to compose clearly written informed consent forms. In the same forum, Nancy Goodwin, AHIP, director of library and knowledge services at Middlesex Hospital in Middletown, Connecticut, described how she led the effort of a multidisciplinary committee to rewrite the hospital's admission booklet to make it health literate [61]. She leveraged an opportunity by taking on a job no one else wanted but that her hospital needed.
Can librarian involvement in health literacy activities improve a hospital's HCAHPS scores? It can be a contributing factor. More importantly, through these activities, librarians are discovering new ways to use their skills to help patients, their hospitals, and their communities.
PIVOTING TO THE NEW WORLD OF HOSPITAL LIBRARIANSHIP
So how do we find the time to leverage the opportunities presented to us by the current environment and pivot to this new world of hospital librarianship? Elaine Martin said, ''We will need to challenge everything we have held dear in the past and perhaps no longer do these things'' [31].
Do hospital librarians still bind print journals? I have recently been in the process of removing our library's bound journals because we will not be taking them with us when we move to the new hospital in December. Instead, we will be providing collaborative spaces, as many hospital libraries now do. The volumes are beautiful and bound in appropriate colors, such as red for the journal Blood. There was a time when having bound journals made me feel as though I had a real library. I am embarrassed to notice how pristine many of these volumes still are, a sign that they have not been opened much since they were shelved fresh from the bindery. Michelle Kraft, AHIP, suggested that perhaps hospital librarians should no longer check in print journals or maybe even stop getting print journals, and simplify our cataloging [62].
And we certainly cannot leverage opportunities by complaining about the failure of our MLA headquarters staff to advocate for us. As Kraft noted in the recent MEDLIB-L thread about library closings, ''We need to stop asking the 16 overworked people to start advocating for us, they are doing as good of a job as they can.'' She added, ''The 3,543 members need to work with the rest of the medical librarians on this listserv who aren't MLA members to come up with ideas'' [63]. Kraft might have been channeling Janet Doe who said, ''when we say 'the Association' we mean the individuals who have composed its membership and have done its work'' [64]. In fact, our elected leaders, headquarters staff, and MLA members have made many efforts over the years to provide hospital librarians with tools and resources, including National Medical Librarians Month, the Vital Pathways project, the Myths and Truths materials, and the Advocacy Toolbox.
The Joint Commission's ''Speak Up'' brochure, ''Understanding Your Doctors and Other Caregivers,'' is a recent successful advocacy example. Based on recommendations from MLA headquarters, the Joint Commission incorporated information about libraries, MedlinePlus, and MLA into the brochure [65]. These kinds of efforts will continue. But we also realize that these national organizations have their own agendas that may not align with ours.
T. Scott Plutchak, AHIP, FMLA, in an editorial in the July 2004 JMLA, suggested that the reinvention of librarianship ''requires rethinking everything we do, and we can only do that when we put our services and priorities in the context of the larger organization that we serve.'' He said, ''We have talents, resources, and skills that are essential for the success of our institutions. All of our efforts should be focused on doing whatever we have to do to make the most of those opportunities.'' Opportunities have walked into my library from time to time, as in my first story, but that was thirty years ago. It is less likely to happen now in this current environment unless we have already been visible contributors to the organization's important goals. If necessary, hospital librarians should invite themselves to safety committees and nursing councils, and if that seems difficult, they should enlist a champion from that committee to invite them. As we participate, we gain insights into the challenges that these practitioners face through the tacit knowledge that is being shared. This is the best way to really know and not simply assume what our constituents need. And it is how we can focus our efforts, leverage opportunities, and pivot in this environment.
When I posted a question to the HLS email discussion list asking how hospital librarians can leverage opportunities in the current environment, I received an email from Louise McLaughlin, information specialist at Women's Hospital Health Science Library in Baton Rouge. I am quoting a portion of it with her permission: I credit Elaine Martin's summation with providing me with clarity and direction. In short, I had better get a move on before it's too late. I also realized that I need human connection. I have reawakened the vision, I have joined with some HLS colleagues, and I am carrying the word to librarians in Louisiana that we must do all that we can to face reality, reinvent ourselves, get uncomfortable, move in new ways. Maybe by doing this, we can avert the dreaded pink slip. And if it comes anyway, we will know that we did all we could to avoid the ''coulda, woulda, shoulda, moment.'' [67] At this annual meeting, we will learn about the new Values2 initiative for hospital libraries, created by the MLA Board and implemented by HLS [68]. It builds on the Vital Pathways Project that was initiated by M. J. Tooey, AHIP, FMLA [69], and the values research study by Marshall and others that was published in the JMLA in 2013 [70]. There will be contributed papers and posters and what promises to be a stimulating program, called ''Professional Identity Reshaped.'' Our colleagues will challenge and inspire us with their research and new programs that we can adapt to our situations. These activities exemplify Janet Doe's enduring gift to MLA that Holtz eloquently described.
CLOSING THOUGHTS
At the conclusion of the pivoting meditation, Deng says: On certain days, we come to our limits, and our tolerance for a situation ends. When that happens, change without interference of concepts, guilt, timidity or hesitancy. Those are the points when our entire lives pivot and turn toward new phases, and it is right that we take advantage of them. We mark our progress not by the distance covered but by the lines and angles that are formed. [14] As we pivot to leverage opportunities in this turbulent health care environment, it will be the interdisciplinary and collaborative lines and angles that we form that will mark our progress as vital, contributing members of our health care organizations. Whether as an individual or as an association, when we pivot, we will change the trajectory of our profession.
Many previous Doe lecturers have quoted poetry to sum up their messages. As I thought about my conclusion, I knew I wanted to end with an inspiring verse, worthy of my colleagues and, also perhaps, a way to indulge my English major avatar. Tennyson's lines from ''Ulysses'' came to mind.

Come, my friends,
'Tis not too late to seek a newer world.
Push off, and sitting well in order smite
The sounding furrows; for my purpose holds
To sail beyond the sunset, and the baths
Of all the western stars, until I die.
It may be that the gulfs will wash us down:
It may be we shall touch the Happy Isles,
And see the great Achilles, whom we knew.
Tho' much is taken, much abides; and tho'
We are not now that strength which in old days
Moved earth and heaven, that which we are, we are;
One equal temper of heroic hearts,
Made weak by time and fate, but strong in will
To strive, to seek, to find, and not to yield. [70]
|
2018-04-03T03:09:06.231Z
|
2015-01-01T00:00:00.000
|
{
"year": 2015,
"sha1": "d058f33fa2f389cbd5fd78198225f91eb8a1aaf6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3163/1536-5050.103.1.002",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a82983dc715301530abd19c036206abc99c99566",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
21959971
|
pes2o/s2orc
|
v3-fos-license
|
Household Food Insecurity, Mother's Feeding Practices, and the Early Childhood's Iron Status
Background: Health consequences of food insecurity among infants and toddlers have not been fully examined. The purpose of this study was to assess the relationship between household food insecurity, mothers' infant feeding practices, and the iron status of 6–24-month-old children. Methods: In this cross-sectional study, 423 mother-child pairs were randomly selected by a multistage sampling method. Children's blood samples were analyzed for hemoglobin and serum ferritin concentrations. Household food security was evaluated using a validated Household Food Insecurity Access Scale. Mothers' feeding practices were evaluated using Infant and Young Child Feeding practice variables, including the duration of breastfeeding and the timing of the introduction of complementary feeding. Results: Only 47.7% of the studied households were food secure. Mild and moderate-severe household food insecurity were observed in 39.5% and 12.8%, respectively. Anemia, iron deficiency (ID), and iron deficiency anemia were seen in 29.1%, 12.2%, and 4.8% of children, respectively. There was no significant association between household food insecurity, mothers' feeding practices, and child ID with or without anemia. Conclusions: We found no association between household food insecurity and the occurrence of anemia in 6–24-month-old children. However, these findings do not rule out the possibility of other micronutrient deficiencies among children of food-insecure households.
Feeding practices play an important role in growth and development in early childhood. [4] Duration of breastfeeding, the time of introduction of complementary feeding, and compliance with infant feeding recommendations are important factors to ensure that appropriate foods are obtained in the early years of life. [5] However, all these factors may be adversely affected by household food insecurity. [3,6] Studies have shown that strategies used by households to combat food insecurity can affect infant feeding practices. [7] In food-insecure households, mothers show less positive behaviors when feeding their children. [5,7] It has also been shown that in the context of food insecurity, when adequacy and accessibility of food are impaired, mothers' decision making for infant feeding is also disrupted. [8] In Iran, a considerable imbalance between the energy and nutrient contents of the foods consumed in households is observed. While high intake of foods with low nutrient density is reported at all income levels and over-consumption of energy-dense foods is evident among more than a third of households, food insecurity is common among 20% of the population. [9-12] Food insecurity during the first 3 years of life may have substantial negative effects on subsequent physiological, behavioral, and cognitive development. [13-16] Iron deficiency (ID) and ID anemia (IDA) are considered major public health problems and the most common nutritional deficiencies around the world. [17,18] Infants and young children have a high risk for developing ID because they have a high demand for iron due to rapid growth. [19] IDA can be associated with functional impairments affecting mental and psychomotor development, and it has a significant effect on the health and development of children. [20,21] In Iran, 30-50% of women and children suffer from IDA. [22] It is estimated that 43.9% of children are anemic and 29.1% have IDA in southwest Iran; [23] the prevalence in Southern Iran is estimated at 19.7% in under 5-year-old children. [24] The research focusing on iron and health-related outcomes has been narrow in its coverage of food-insecure children. [7,15,25] A study by Skalicky et al. found that household food insecurity is related to ID and IDA in children aged 6-36 months. [26] Some other studies have reported no significant relationship between child food insecurity and ID. [27] Alaimo et al. [7] reported that low-income children were more likely to have ID than high-income children. A recent study permitted clearer evaluation of the determinants and outcomes of child food insecurity. [28] On the other hand, there is evidence that household food insecurity affects parenting behaviors with adverse outcomes for children. [29] Mothers in food-insecure households have more problems in infant feeding. They are more likely to have unhealthy eating patterns themselves, while children in these families consume more low-cost, less nutritious, and high-energy foods. Therefore, children from food-insecure households are at an increased risk of being overweight and micronutrient deficient. [9] Infant feeding practices shape the early feeding environment because infants depend on parents' food choices. [30,31]
Very few studies have examined the relationship between maternal feeding practices and infant and young child iron status in the context of food security research. Therefore, it is necessary to examine the practices of breastfeeding and complementary feeding and their effects on a child's iron status. This study was carried out to evaluate the association among household food insecurity, maternal-infant feeding practices, and body iron status in children aged 6-24 months.
This study can expand the existing body of knowledge about the link among inappropriate food access in households, mothers' feeding behaviors, and child iron status. Considering the critical role of women in households, the findings can provide a basis for developing intervention programs to modify mothers' infant feeding behaviors, even in the presence of household food insecurity, and would enable improved targeting of resources, including micronutrient supplementation and fortification programs.
Subjects
In this cross-sectional study, 423 mothers and their children aged 6-24 months in Varamin (a city at South East of Tehran with about 220,000 inhabitants) were recruited through multistage sampling method.Based on population density of the health centers, households in each district were selected.
Signed informed consent to participate in the study was obtained from all participants. The study protocol was approved by the Ethics Committee of the National Nutrition and Food Technology Research Institute (NNFTRI), Tehran, Iran.
Procedure
According to a predetermined schedule, face-to-face interviews were conducted with the mothers in the urban health centers. Data collection was conducted in early 2014. The research team consisted of trained nutritionists with communication skills. Prior to the survey, a pilot study was carried out to review the questionnaires and practice field work. Data were collected using a questionnaire with three sections: sociodemographics; the Household Food Insecurity Access Scale (HFIAS); and mother's feeding practices.
a. Demographic and socio-economic data were collected, including gender and age of the head of the household, family size, number of children, the household head's and the interviewee's educational level and occupation, and the household residential status.
b. Household food security was evaluated by the HFIAS, a 9-item questionnaire that had already been validated for Iranians. [32,33] The questionnaire asks whether a specific condition associated with the experience of food insecurity ever occurred during the previous 30 days, and it includes perceptions about food insecurity. Based on HFIAS scores, households were grouped into four categories of food access insecurity: secure (0-1), mildly food insecure (2-7), moderately food insecure (8-14), and severely food insecure (15-27). Because of the small number of households in the severe category, severely food-insecure households were combined with the moderate group and labeled "moderate-severe food insecurity."
c. Mother's feeding practices were evaluated using WHO Infant and Young Child Feeding practice indicators, [34] including: (a) duration of breastfeeding, (b) time of introduction of complementary feeding, and (c) meal frequency during the day. Mother's feeding practices were categorized into three levels: appropriate (>16 scores), partly appropriate (12-16 scores), and poor maternal feeding practices (<12 scores); the sketch following this list restates these cut-offs.
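The following sketch, in Python, simply restates the HFIAS and feeding-practice cut-offs listed above; the function names are illustrative and this code is not from the study's analysis.

def categorize_hfias(score):
    # HFIAS score ranges from 0 to 27; cut-offs as used in this study
    if score <= 1:
        return "food secure"
    if score <= 7:
        return "mildly food insecure"
    if score <= 14:
        return "moderately food insecure"
    return "severely food insecure"  # merged with the moderate group in the analysis

def categorize_feeding_practice(score):
    # Maternal feeding practice score based on WHO IYCF indicators
    if score > 16:
        return "appropriate"
    if score >= 12:
        return "partly appropriate"
    return "poor"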
Blood sampling and analysis
A venous nonfasting blood sample (3 ml) was drawn from each child by a trained laboratory technician. Blood samples were divided into tubes either with or without the anticoagulant ethylenediaminetetraacetic acid. The samples were transported within 4 h to the laboratory of the City Health Network for measurement of hemoglobin (Hb) concentration using a cell counter. Serum-clot activator tubes were centrifuged at 800 ×g for 10 min at room temperature. Sera were aliquoted into 500 μl prelabeled micro-tubes and kept frozen at −20°C. Frozen serum samples were transported in a cold box to the laboratory of Nutrition Research at NNFTRI for further analyses. Serum concentration of ferritin (Ferr) was measured by enzyme-linked immunosorbent assay (ELISA) using commercial kits (Ferritin AccuBind® ELISA Microwells, Monobind Inc., USA). Iron status was classified in three categories: (1) anemia, defined as Hb <11 g/dl; (2) ID, defined as ferritin <12 ng/ml; [35] and (3) IDA, defined as Hb <11 g/dl in combination with ferritin <12 ng/ml. [36]
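As a worked restatement of these cut-offs (not code from the study), a child's iron status could be classified as follows; the values in the example are hypothetical.

def classify_iron_status(hb_g_dl, ferritin_ng_ml):
    # Cut-offs as defined above: anemia Hb < 11 g/dl, ID ferritin < 12 ng/ml, IDA = both
    anemia = hb_g_dl < 11.0
    iron_deficiency = ferritin_ng_ml < 12.0
    return {
        "anemia": anemia,
        "iron_deficiency": iron_deficiency,
        "iron_deficiency_anemia": anemia and iron_deficiency,
    }

# Example with hypothetical values: classify_iron_status(10.4, 9.0) flags all three conditions.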
Anthropometry
Children's and mothers' weight and height were measured using standard WHO methods, to the nearest 0.1 kg with a Seca electronic scale (Seca 876, Hamburg, Germany) and to the nearest 0.1 cm with a stadiometer (Seca 213), respectively, while the subjects wore light clothing and were barefoot. [37,38] For the children, recumbent length was measured using an infantometer on an adjustable child length measuring board with a precision of 0.1 cm. Body mass index (BMI) was calculated by dividing body weight (kg) by height squared (m²). Mothers with BMI <18.5, 18.5-24.9, 25-29.9, and ≥30 were categorized as underweight, normal weight, overweight, and obese, respectively. [39]
Statistical analysis
Data were analyzed using SPSS (version 22.0; SPSS Inc., Chicago, IL, USA). Descriptive data analyses included examining frequencies, means, and standard deviations for study variables. The significance level was defined as P < 0.05. Analyses were performed using the independent samples t-test, Fisher's exact test, Chi-square test, analysis of variance, and Pearson correlations. Multiple linear regression was used to assess the relationship between food insecurity, mother's feeding practices, and child iron status.
RESULTS
The study sample included 423 children with mean age 15.1 ± 5.7 months, weight 10.5 ± 1.7 kg, and height 78.3 ± 6.5 cm, and their mothers with mean age 28.1 ± 5.2 years, weight 66.3 ± 13.4 kg, height 160.3 ± 5.7 cm, and BMI 25.7 ± 4.8 kg/m². Characteristics of the study participants according to household food security status are presented in Table 1. Food security, mild food insecurity, and moderate-severe food insecurity were observed in 47.7%, 39.5%, and 12.8% of the households, respectively. Of the households, 70.7% were residents in urban areas. A family size of 4-5 was observed in 51.8% of the households. Maternal BMI in the moderate-severe food-insecure households was higher, although not significantly, compared to that in mildly food-insecure and food-secure households. Of the studied mothers, 96.7% were housewives. Age at delivery was 18-30 years in 89.6%. About 71.9% of mothers were lactating, and 87.2% did not take any supplements. Mother's education level was higher in food-secure households (30.0%). Of the children, 53.9% were male. About 70% and 82.5% of children did not have acute respiratory infections and diarrhea in the past month, respectively. The frequency of using iron supplements in children was 79.6%.
The prevalence of breastfeeding was 78.6%, including 38.1% in food-secure, 30.5% in mildly food-insecure, and 10.0% in moderate-severe food-insecure households. Appropriately, for 64.2% of children, complementary feeding was started after 6 months of age. Of the studied children, 51% were given more than three meals per day.
Appropriate, partly appropriate, and poor maternal feeding practices were observed in 3.1%, 58.6%, and 38.3% of the mothers, respectively. Poor feeding practices were seen in 18.0%, 15.8%, and 4.5% of food-secure, mildly food-insecure, and moderate-severe food-insecure mothers, respectively. Statistically significant differences were observed in mean feeding practice scores between food-secure (8.4 ± 1.9), mildly food-insecure (8.2 ± 2.1), and moderate-severe food-insecure mothers (6.9 ± 2.7), P = 0.014. Anemia, ID, and IDA were seen in 29.1%, 12.2%, and 4.8% of children, respectively. In urban areas, as compared to rural areas, the prevalence of anemia (22.6 vs. 6.6%, P = 0.016), ID (6.3 vs. 3.7%), and IDA (2.9 vs. 0.5%) were all higher. The occurrence of anemia was higher in girls than in boys (15.5 vs. 13.6%), while ID (7.4 vs. 4.8%) and IDA (2.9 vs. 1.9%) were higher in boys than in girls. There were no significant differences in anemia, ID, or IDA between boys and girls. As shown in Table 2, anemia (14.2%) in food-secure children, and ID (5.8%) and IDA (2.1%) in mildly food-insecure children, were more prevalent than in other groups. However, the differences were not significant.
As shown in Table 3, there was no significant correlation between food security, mother's feeding practices, and serum ferritin and Hb. In addition, there was no association between household food insecurity and child ID with or without anemia.
Based on Table 4, multiple linear regression analysis showed that a unit increase in mother's feeding practice score (such as continued breastfeeding beyond 12 months) led to a 1.303 ng/ml decrease in serum ferritin. However, the association was not significant. Household food insecurity was not associated with child serum ferritin. (Table note: variables are defined based on WHO guidelines on infant and young child feeding practices. [34] BMI=Body mass index, SD=Standard deviation, FI=Food insecurity, Hb=Hemoglobin, IYCF=Infant and Young Child Feeding, NS=Not significant, WHO=World Health Organization.)
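A minimal sketch of the kind of regression reported in Table 4, written in Python with statsmodels; the column names, covariates, and data values are hypothetical stand-ins (the exact model terms are not listed here), so this only illustrates the general setup, not the study's actual analysis.

import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic illustration (values invented); the real analysis table would have
# one row per child with the study's measured variables.
df = pd.DataFrame({
    "serum_ferritin":         [24.1, 18.3, 30.2, 15.7, 22.8, 27.5],
    "feeding_practice_score": [14, 9, 17, 8, 12, 16],
    "hfias_score":            [1, 9, 0, 15, 4, 2],
    "child_age_months":       [12, 20, 8, 18, 15, 10],
    "child_sex":              ["M", "F", "F", "M", "F", "M"],
})

# Regress serum ferritin on the maternal feeding practice score and the HFIAS score,
# adjusting for illustrative covariates.
model = smf.ols(
    "serum_ferritin ~ feeding_practice_score + hfias_score + child_age_months + C(child_sex)",
    data=df,
).fit()
print(model.params)
# In the study, the coefficient on the feeding practice score was about -1.3 ng/ml per unit
# and not statistically significant; household food insecurity was likewise not associated
# with serum ferritin.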
DISCUSSION
The present study aimed to determine the relation between the iron status of children under the age of 2, maternal feeding practices, and household food insecurity. We found that: (1) there was no significant relationship between child blood Hb level and serum ferritin concentration; (2) there was no significant relationship between household food insecurity, child iron status, and maternal feeding practices; (3) there was no significant relation between maternal feeding practices and child iron status.
The relationship between child hemoglobin level and serum ferritin
In this study, no significant relation was observed between child Hb concentration and serum ferritin level. In the studied children, the prevalence of anemia as defined by Hb concentration (29.1%) was below the WHO threshold (40%) at which anemia is considered a serious public health problem. [35] The association between Hb concentration and child iron status may occur via multiple pathways. [40,41] In general, child iron intake through continued breastfeeding is low. On the other hand, increased caloric intake by eating high amounts of cereals containing iron absorption inhibitors can lead to reduced iron bioavailability. [42] Finally, after a child reaches the age of 1, the child's and mother's dietary qualities become alike, since they share the same economic and social environments. [42,43] Studies by White in the United States and Thurlow in Thailand revealed that ID is not a major factor for anemia during childhood. [44,45] It has been shown that the risk of anemia caused by ID depends on complex interactions between dietary iron content (type of diet), iron bioavailability (breastfeeding duration and appropriate complementary feeding practices), increased iron requirements (growth rate), and inappropriate loss of iron (infections and parasitic diseases). [46]

The relationship between household food insecurity and infantile iron status

Our results were in accordance with the studies of Skalicky et al. and Nisar et al., which showed no relationship between food insecurity and child Hb concentration. [26,47] Conversely, Miller et al. showed that the risk of anemia caused by ID in children aged 3-5 years growing up in food-insecure households is about 11 times higher than in those of food-secure households. [14] Skalicky et al.'s results showed that children in households with moderate and severe food insecurity were more than twice as likely to have IDA as children in food-secure households. Moreover, they stated that other household characteristics can be risk factors for child iron levels. [26] The study conducted by Park et al. showed that children in extremely food-insecure households are not only twice as likely to have ID as those growing up in mildly food-insecure households, but also suffer from IDA to the same extent. [48] Various studies have demonstrated that food insecurity has no significant effect on anemia, and thus anemia may occur due to ID, inefficient iron absorption, or physiologically increased requirements. [49] In low- and middle-income countries, factors such as damage to crops and agricultural production resulting from climatic changes, [50] continuing economic crises in the world associated with weakening socio-economic development, and worsening food insecurity conditions can threaten public health and aggravate childhood anemia. [51]

The relationship between food insecurity and mother's feeding practices

In the current study, no significant relationship was found between food insecurity and mother's feeding behaviors. The rates of proper mother's feeding practices, including continued breastfeeding and complementary feeding after 6 months with an appropriate number of meals per day, can be set against findings from another population. [52] In that population, breastfeeding persistency in the second year of child life may indicate that children from poor families are at risk of inadequate complementary feeding, or may represent mothers' attempts to provide sufficient food in response to food insecurity. Unlike this study, Bronte-Tinkew et al.
showed that breastfeeding durations of food-insecure mothers are shorter than those of food-secure mothers. Their findings demonstrated that food insecurity before the age of two can be a hurdle for parent-child interactions, influencing major aspects of growth such as general health and overweight. [53] Furthermore, Webb-Girard et al. indicated that in families experiencing food insecurity, exclusive breastfeeding faces additional barriers. [54] The WHO recommends that mothers exclusively breastfeed their babies for the first 6 months of life and continue complementary breastfeeding from the age of 6 months to 2 years so as to satisfy their babies' additional needs for energy, iron, zinc, and other minerals. [55]

The relationship between household food insecurity, mother's feeding practices, and child iron status

Studies have shown that breast milk contains a considerable amount of bioavailable iron, whose concentration decreases gradually with time. [56] According to previous research, continued breastfeeding [57] and maternal anemia [58] are associated with an increased risk of ID and IDA in infants and toddlers. Furthermore, in infants up to 6 months of age, iron requirements for growth and red blood cell development increase; therefore, exclusive breastfeeding is insufficient and supplemental sources are needed. [59] Although Meinzen-Derr et al. showed that infants under the age of 1 who are exclusively breastfed by anemic mothers have depleted iron stores, [57] serum ferritin levels in children with continued breastfeeding did not decrease up to the age of 2 based on the results of the current study. Since nearly 80% of the studied children received supplemental iron, it seems that the problem (decreased ferritin) is not exacerbated by continued intake of breast milk. In addition, unlike Pasricha et al.'s study, no significant relationship was observed between child serum ferritin levels and household food insecurity. [60] The difference could be due to the fact that 68% of mothers in Pasricha et al.'s study were anemic, and the children received little if any supplemental iron.
In general, the correlation between breastfeeding persistency in the second year of life and decreased child serum iron concentration has not been well described, especially within the developing nations.
Strengths and limitations
This study had some limitations. First, the cross-sectional design prevented the determination of cause-effect relationships. Second, measurement of other variables affecting child iron status was not possible, since the amounts of blood needed were higher than the maximum acceptable to the community. Third, assessment of maternal feeding practices was based on reports from mothers, and mothers' feeding behaviors were not directly observed. One of the strengths of this study was a high level of community participation. Given the socio-economic similarities of the study population with those of other parts of the country, the results of this research can be generalized to other ethnic or geographical groups across the region and the country.
CONCLUSIONS
Although this study showed that the relationship between household food insecurity, mother's feeding status, and child serum iron level is not significant, continued breastfeeding may still place children at risk of ID, and thus it is necessary to monitor the implementation of iron supplementation programs, food fortification, and nutritional education within the population. Moreover, not only should maternal iron status be improved during pregnancy and lactation through foods and supplements containing enough iron, but also complementary
Suggestions
Longitudinal studies are necessary to identify the exact pathways between household food insecurity and parental (especially maternal) feeding behaviors, and to assess their consequences for child nutritional health. Other quantitative and qualitative studies are needed to better clarify the impacts of continued breastfeeding on children's nutritional status in food-insecure households of various ethnic groups with different cultures.
Table 1: Mothers' and infants'/toddlers' profile characteristics based on household food security status.
|
2018-04-03T04:42:38.074Z
|
2015-09-03T00:00:00.000
|
{
"year": 2015,
"sha1": "3a9ff96b45c0e30b0d946670796476eab702f8f8",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/2008-7802.164414",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "64e4ee2fb67238b9a326a509c98a5fa7d288a386",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
660064
|
pes2o/s2orc
|
v3-fos-license
|
The flow of blood to lymph nodes and its relation to lymphocyte traffic and the immune response.
The blood flow to individual lymph nodes of sheep and rabbits has been determined with 85Sr-labeled microspheres. A popliteal node of the sheep received 0.014% of the cardiac output and a comparable node in the rabbit 0.011%. A sheep lymph node weighing 1 g received an average of 24 ml/h of blood. It was calculated that there was a highly selective removal of lymphocytes by the node and that an equivalent to one in every four lymphocytes that entered a normal lymph node migrated out of the blood, through the substance of the node, and into the efferent lymph. During the immune response to either allogeneic lymphocytes or tuberculin, the blood flow to sheep lymph nodes, even without considering the increase in node weight, increased an average of fourfold. During the primary immune response in the rabbit to keyhole limpet hemocyanin, the blood flow increased threefold. The increase in blood flow preceded the antigen-induced increase in lymphocyte traffic recorded in the efferent lymph. The early phase of increased blood flow was considered to be due to hyperemia, whereas the latter phase had a significant angiogenesis component. It was calculated that an equivalent to 60% of the entire mobilizable pool of lymphocytes could pass through an average lymph node in the blood during an immune response lasting 5 days.
Injection of Microspheres. Carbonized microspheres labeled with 85Sr (10 mCi/g) of 15 ± 5 μm size (3M Company, London, Ontario) were injected in volumes of 2-3 ml. The suspending medium was 10% dextran. With a two-way stopcock attached to the arterial catheter, the syringe was washed and the residual microspheres were flushed into the animal with physiological saline. From 5 to 10 × 10⁶ microspheres (determined with a hemocytometer) were injected, which corresponded to approximately 5-30 × 10⁶ cpm, depending upon isotope decay. The number of microspheres injected in both species was determined by counting two aliquots of the injected suspension. Sheep were unanesthetized and standing unrestrained in metabolism cages during all experiments. They were sacrificed with a lethal injection of anesthetic either a few minutes after introduction of the microspheres or up to several days later.
Approximately 2 × 10⁶ microspheres were injected into rabbits. The microspheres were injected while the rabbit was under the same anesthetic used for catheter placement.
Counting Methods. Organs and tissues were removed, weighed, and either ashed or cut into appropriate sizes for counting in plastic tubes. In most animals kidneys, spleen, thymus, eyes, and popliteal, prefemoral, and prescapular lymph nodes were removed. In some animals internal lymph nodes such as the lumbar, iliac, renal, hepatic, mesenteric, and para-aortic nodes were also removed. Samples did not exceed a depth of 3 cm in the counting tubes. A Nuclear-Chicago single-channel gamma spectrometer (Nuclear-Chicago Corp., Des Plaines, Ill.) was used. Lymphocyte counts were done with a hemocytometer and the differential counts were done on Leishman-stained smears.
Lymph Node Casting. Microfil (Canton Bio-Medical Products, Boulder, Colo.) was infused into the abdominal aorta immediately after the rabbits were killed. The tissue was dehydrated and cleared as described previously (9).
The Distribution of Microspheres to Lymphoid and Other Organs.
When the number or total radioactivity of the microspheres injected is known, the distribution to various organs can be considered in relation to this number. With certain assumptions this value can be considered as a reflection of the cardiac output of the animal (8). Table I shows the distribution and standard error for some organs of the sheep. The distribution per gram of organ is also compared. On a weight basis, the blood flow to a lymph node was found to be equivalent to the blood flow to an equal weight of spleen. The kidney flow was considerably higher. In the eye the localization of microspheres was predominantly in the choroid region. The lowest flow per gram was to the thymus and the variation in this flow between different animals was also large (SE of 29%), presumably reflecting the variation in maturity of this organ. Thymus weight varied from 8.2 to 22.2 g.
The mean blood flow to the lymph nodes (per gram) was 0.012% of the cardiac output. The average weight of the popliteal and prefemoral nodes was 1.2 g. Prescapular nodes averaged 3.2 g. With 3.3 liter/min as an average cardiac output of sheep of this age and size (11), the blood flow to a lymph node weighing 1 g was 3.3 liter/min × 0.012% = 0.396 ml/min or 24.06 ml/h. A differential count on blood from jugular veins of these sheep gave a mean blood lymphocyte count of 4.66 ± 0.43 × 10⁶ cells/ml.

Fig. 1. The distribution of microspheres to the kidneys. The total radioactivity in the left kidney is plotted against the right kidney for each animal. The theoretical line of identity is drawn.
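Spelling out the arithmetic in the paragraph above as a short Python sketch; every number is taken from the text, and the small rounding difference against the 24.06 ml/h figure carried through the paper is noted in the comments.

cardiac_output_ml_per_min = 3.3 * 1000          # 3.3 liter/min expressed in ml/min
node_fraction_per_gram = 0.012 / 100            # 0.012% of cardiac output per gram of node
node_flow_ml_per_min = cardiac_output_ml_per_min * node_fraction_per_gram   # 0.396 ml/min
node_flow_ml_per_h = node_flow_ml_per_min * 60  # about 23.8 ml/h; the text carries 24.06 ml/h forward

lymphocytes_per_ml = 4.66e6                     # mean blood lymphocyte count
lymphocytes_entering_per_h = lymphocytes_per_ml * 24.06   # about 1.12e8 cells/h, as stated below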
Therefore, 4.66 × 10⁶ × 24.06 or 1.12 × 10⁸ lymphocytes/h entered such a lymph node in the blood. In the Discussion this is considered in relation to the number of lymphocytes leaving such a node, and the efficiency of lymphocyte removal by a node is calculated. Assessment of the Symmetry of Microsphere Distribution. If microspheres are adequately mixed with the blood leaving the left ventricle, then they should segregate evenly to organs having a bilaterally symmetrical distribution. Fig. 1 shows the distribution to the two kidneys in the 10 sheep studied. The radioactivity of the left kidney is plotted against the same animal's right kidney. Fig. 2 shows the distribution to the left and right eye in the same sheep. In spite of the catheter in one carotid artery, circulation in the head was such that the left and right eye received almost identical numbers of microspheres in every animal examined.
The popliteal, prefemoral, and prescapular lymph nodes, as well as nodes of the lumbar chain, are bilaterally paired, and Fig. 3 shows the variation between the number (cpm) of microspheres localized in the nodes of the right side compared with the left side. It is clear that the variation is greater than was found in the kidneys, even though the differences between contralateral nodes was always less than twofold. When the weights of these nodes were plotted in a similar manner (left versus right), the random variation was a similar order of magnitude (less than twofold) (Fig. 4). It was concluded that the variation in weight and blood flow was most probably due to the natural variation in the immunological status of different lymph nodes. In rabbits, there was also the same degree of symmetry in the distribution of microspheres.
Distribution of Microspheres within Lymph Nodes. The lymph nodes of the sheep are sufficiently large that slices 2-4 mm thick can be made with a sharp blade. It is then feasible to cut fragments of cortex and medulla separately. When this was done and the fragments were weighed and the radioactivity was counted, there was an average of 3.10 ± 0.37 (four nodes) times more microspheres per gram in the cortex than in the medulla.

Effect of the Immune Response on the Localization of Microspheres in Sheep Lymph Nodes. Individual sheep lymph nodes were challenged with either 100 μg of purified protein derivative (PPD)¹ (Connaught Laboratories, Toronto) or with injections of allogeneic lymphocytes. Animals receiving PPD had received BCG 1-2 mo before. Injections were made at subcutaneous sites in the drainage area of the appropriate node. Allogeneic lymphocytes were obtained from the efferent lymph of a sheep and the cells were injected suspended in 0.9% NaCl in doses up to a total of 5 × 10⁸ cells, in multiple sites. At various times (3-6 days) after challenge, the animals were injected with microspheres and sacrificed, and individual nodes were weighed and counted. Table II shows a comparison between five such nodes and the values obtained from normal nodes. The stimulated nodes all had a larger number of trapped microspheres even when compared on an equal weight basis. The average of the stimulated nodes was 4.5 times the normal nodes. A comparison of the difference between these two mean values with an independent t test gave a P value < 0.001.
A second comparison was made with the same lymph nodes. The stimulated node was compared with its contralateral normal node, since the bilateral distribution of microspheres was shown to vary less than twofold. Table III shows this comparison for both the weight and the radioactivity per gram.
Abbreviations used in this paper: BCG, bacille Calmette-Guérin; KLH, keyhole limpet hemocyanin; PPD, purified protein derivative. Stimulated nodes weighed an average of 1.42 times the control, and the microsphere localization per gram was 4.18 times the control. The Administration of Microspheres during Lymph Drainage of Single Lymph Nodes. By monitoring the lymphocyte output in the efferent lymph, it was possible to determine if the localization of microspheres within the node affected the functional integrity of the node in terms of the movement of lymphocytes through it. In four experiments no evidence of an effect of the microspheres on either the flow rate of the lymph or on the cell output of the node was evident. No effect of the microspheres on gross physiological parameters of the animals has been observed. Higher doses of microspheres have not yet been used to determine the tolerance of a lymph node to such treatment.
In another experiment, we combined lymph drainage and microsphere injection after stimulation of the node with antigen. In the example shown in Fig. 5, a lymphatic catheter was positioned in the efferent vessel of the left popliteal node and of the right prefemoral node. An aortic catheter was also positioned. Allogeneic lymphocytes were injected into the drainage area of the left popliteal node only. Evans blue was mixed with the injected cells and the dye appeared in the efferent lymph within minutes and cleared over the next day. At 103 h after the injection of antigen, microspheres were injected via the carotid catheter. There was no evidence of an immune response in the prefemoral lymph and the microsphere injection did not detectably alter the lymphocyte traffic through this node. However, a typical response to allogeneic lymphocytes was recorded in the popliteal lymph with an increase in the lymphocyte output as well as the subsequent appearance of increased numbers of transformed blast cells. When the animal was sacrificed at the end of this experiment, the weight and the radioactivity of the nodes was determined. As shown in Fig. 5, the challenged node received approximately 10 times the number of microspheres and weighed four times more than the prefemoral node. The increased blood flow occurred at a time when the lymphocyte output was enhanced.
In a subsequent experiment, the microspheres were injected at the end of the response after the cell output had returned to a pre-injection level. In this experiment the microsphere counts in the control node and in the stimulated node were similar (4,462 and 4,587 cpm/g, respectively) although since the stimulated node weighed 2.7 g and the contralateral control node 1.2 g, the total blood flow to the stimulated node was still enhanced.
Blood Flow to Rabbit Lymph Nodes during a Primary Immune Response. In order to measure the kinetics of blood flow throughout the course of an immune response, 40 rabbits were studied. The distribution of the cardiac output to some organs in a group of six of these rabbits can be seen in Table IV. The lymph node values were comparable with those found for sheep (Table I). An average, single popliteal node of the rabbit received 0.011 ± 0.004% of the cardiac output and weighed 188 mg.² The blood flow to the spleen of the rabbit was lower than that found in other species studied in our laboratory by this method (rabbit 0.32 ± 0.11; mouse 1.0 ± 0.13; sheep 1.8 ± 0.3). The symmetry of distribution to the right and left side of the animal, however, was similar to that shown for the sheep. The primary immune response to keyhole limpet hemocyanin (KLH) (Calbiochem, San Diego, Calif.) was examined by injecting 2 mg into the hind footpad of a rabbit. The opposing footpad received the same volume (0.1 ml) of 0.9% NaCl. Microspheres were injected via the carotid catheter at various times after the antigen. Groups of three or more rabbits were sacrificed at each time and the popliteal nodes weighed and the radioactivity was counted. The blood flow was significantly increased (P < 0.05) within 1.5 h after injection. The flow continued to increase until 14 h and then decreased until near 24 h. It was, however, still significantly increased compared with resting nodes. During the subsequent 3 days of the immune response it increased again, and averaged three times the normal flow. Fig. 6 shows the changes in blood flow and the changes in wet weight of these nodes. As was found in the sheep, the blood flow per 100 mg of node weight increased during the immune response. We have tentatively labeled the early part of the response the "hyperemia" phase and the later part an "angiogenesis" phase. Fig. 7 illustrates the increased number of vessels of a rabbit popliteal node stimulated 5 days previously with a primary injection of KLH, compared with a saline-injected contralateral node.
Discussion
The validity of the use of microspheres to measure regional blood flow has been established in several studies (7-9, 11, 13, 14). Ideally, microspheres should be well mixed at the site of injection so that they distribute with the blood, they should be trapped in the microcirculation during their first passage, and they should not disturb the circulation. In order to minimize recirculation, the microspheres should be of a sufficiently large diameter. It has generally been found that less than 1.5% of 50- or 80-μm-sized microspheres reach the venous circulation (7, 8, 14). With 15-μm microspheres, the recirculation appears to be less than 10%, and Hales (11) has found that only 1.6% of such microspheres bypassed the systemic circulation in conscious sheep. At the same time, sufficient microspheres must be trapped in any organ or region to minimize the counting and sampling errors.
In the present studies adequate mixing of the microspheres in the ascending aorta took place, since the distribution of microspheres to bilateral organs was symmetrical. However, the standard error of the mean percentage of cardiac output in the eyes was 35%, and this was considerably greater than was found for the kidneys. Adult sheep kidneys have been reported to receive an average of 16.54% of the cardiac output, but in lambs 12.68%. The value 11.9% in the present study is closer to the lamb values. Considering the average weight of the kidneys in the present study, we used animals comparable in size to lambs. Some other reported values for kidney distribution are 12.3% in the rhesus monkey, 11.1% in the dog, 10% in the newborn lamb, and 16.2% in the rabbit (11). The distribution to the rabbit kidneys in the present study was 13.5%.
The blood flow to single lymph nodes of sheep was calculated to average 24.06 ml/h per g. All of the figures required for this calculation were determined in these animals except for the cardiac output of the sheep. An appropriate value was chosen from the literature. Accurate measurements of cardiac output can be made by withdrawing arterial reference samples during the injection of the microspheres. In this way the cardiac output at the time when the microspheres localized is determined (13).
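For context, the reference-sample technique mentioned here rests on the standard microsphere relation that an organ's flow is proportional to its trapped radioactivity: Q_organ = C_organ × Q_ref / C_ref, where Q_ref is the withdrawal rate of the arterial reference sample and C_ref its counts. The Python sketch below uses purely illustrative numbers, not data from this paper.

def organ_flow_ml_per_min(organ_cpm, ref_cpm, ref_withdrawal_ml_per_min):
    # Standard reference-sample relation: Q_organ = C_organ * Q_ref / C_ref
    return organ_cpm * ref_withdrawal_ml_per_min / ref_cpm

# Illustrative numbers only: a node containing 4,500 cpm, with a reference sample of
# 120,000 cpm withdrawn at 10 ml/min, gives 0.375 ml/min, of the same order as the
# ~0.4 ml/min per gram reported for sheep nodes.
print(organ_flow_ml_per_min(4500, 120000, 10))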
Experiments of Hall and Morris have shown that the average cell output in the efferent lymph from a 1-g lymph node of the sheep was 3 × 10⁷/h. The examples shown in the present study show values close to this figure. Furthermore, Hall and Morris measured the number of cells formed within the node and also the input of lymphocytes from the afferent lymph vessels (15). They concluded that 95% or more of the efferent cells were derived from the blood. From the blood flow and blood lymphocyte count, the input of blood-borne lymphocytes is 1.12 × 10⁸/h, while the efferent lymph output of cells derived from the blood is 3 × 10⁷/h less 5%, or 2.85 × 10⁷/h. Therefore, such a lymph node removes 2.85 × 10⁷ of the 1.12 × 10⁸ lymphocytes entering each hour, an equivalent of one in every four lymphocytes which enter in the blood. The remaining three out of four lymphocytes presumably leave the node in the venous blood. This highly selective process occurs under normal conditions in or near the cortical regions of the node, presumably at the postcapillary venules. Hall (16) estimated 10-15% of lymphocytes left the blood within the node. In these experiments blood flow was estimated by simply cutting the node vessels and measuring the volume of blood collected in a measured time period.
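Restating that extraction-efficiency estimate in code form (all quantities are the ones already given in the text):

blood_input_per_h = 4.66e6 * 24.06           # lymphocytes entering the node per hour, about 1.12e8
efferent_output_per_h = 3.0e7                # Hall and Morris: efferent output of a 1-g node
blood_derived_output_per_h = efferent_output_per_h * 0.95   # at least 95% of efferent cells come from blood
fraction_extracted = blood_derived_output_per_h / blood_input_per_h
# fraction_extracted is about 0.25: roughly one of every four lymphocytes delivered in the blood
# crosses into the node and leaves via the efferent lymph; the rest exit in venous blood.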
The administration of antigen clearly enhances the traffic of lymphocytes through the regional lymph node, as shown in Fig. 5 and described elsewhere (4, 5, 17). From the present studies it is clear also that antigen increases the blood flow to the regional node. We conclude that the increase in lymphocyte traffic is a direct consequence of this increase in blood flow for the following reasons. Both the traffic and the blood flow increase by a similar order of magnitude (about fourfold). An alternative mechanism to account for the increase in lymphocytes delivered from the blood would be chemotaxis. Since convincing evidence for antigen-induced lymphocyte chemotaxis does not exist, we favor the enhanced blood flow as the simpler explanation and the more likely cause. From the rabbit data presented, the increase in blood flow occurred in two distinct phases. It is common for two separate peaks of lymphocyte output to appear in the efferent lymph of stimulated sheep lymph nodes (Fig. 5). The characteristics of the "recruitment" peak have been described (17). Furthermore, the experiments of Cahill et al. (18) demonstrated that the number of intravenously injected 51Cr-labeled, isologous lymphocytes increased in the lymph node within 3 h after antigen. This occurred before their appearance in the efferent lymph since there is a certain transit time from blood to lymph. Although simultaneous measurements of traffic and blood flow are necessary to draw firm conclusions, the two peaks of cell output appear later than the two peaks of increased blood flow. This time difference would correspond to the transit time through the node. Finally, in separate experiments, there was a correlation between the degree of cell traffic through cellular hypersensitivity lesions in the skin and the increased blood flow to such lesions.³ We conclude that a similar cause-and-effect relationship applies in both the node and the skin reactions.
An increase in regional blood flow can be due to local vasodilation, to new vessel growth, or to both. Since the early increase in blood flow to the stimulated rabbit nodes was evident within 1.5 h, and reached a maximum near 14 h, it seems unlikely that this was a consequence of endothelial cell division. Hyperemia, caused by the release of a local mediator, is a more plausible explanation. This mediator has not been identified. However, in other quantitative studies on hyperemia induced by known mediators, the E-type prostaglandins were found to be much more potent than either histamine or bradykinin in this respect (19). The possible involvement of these compounds is being investigated.
It is interesting to note that Gershon et al. (20) have implicated a role for vasoactive amines in the infiltration of mononuclear cells in delayed hypersensitivity lesions in mice and have speculated on a role for such compounds in the "trapping" of lymphocytes in lymphoid organs. Although we would disagree with "trapping" as an appropriate description of these antigen-induced changes in lymphocyte migration (see 18), the possibility of an early involvement of vasoactive amines in the enhancement of blood flow is plausible.
The second phase of increased blood flow is due, in part at least, to angiogenesis, since the vascular bed was obviously increased 5 days after KLH. Endothelial cell division in the immune response has been described (21,22). Angiogenesis does not preclude a component of hyperemia as well. It is difficult to quantitate the proportion of the antigen-induced increase in lymph node weight due to cell infiltration, cell division, or plasma accumulation, but quantitative kinetics on the entry and exit rates of the various components makes such a realization more feasible than was formerly possible.
The proportion of the mobilizable lymphocyte pool passing through a popliteal node in the blood can be determined from these data. A node weighing 1.2 g increased an average of 1.42 times and the blood flow per gram increased 4.18 times after antigen. During an immune response lasting 5 days, such as the one described in the sheep, 1.2 × 1.42 × 4.18 × 24.06 ml/h × 144 h = 24.7 liters of blood passed through the node. The mobilizable lymphocyte pool has been estimated at 10 times the blood lymphocyte pool (23). This represents, therefore, approximately six times the blood volume and an equivalent to 60% of the entire mobilizable lymphocyte pool.
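The same estimate laid out step by step; every figure below is taken from the text (144 h is the duration the authors use for the response, and the factor of six blood volumes implies a blood volume of roughly 4 liters).

resting_weight_g = 1.2
weight_increase = 1.42
flow_increase_per_g = 4.18
resting_flow_ml_per_h_per_g = 24.06
duration_h = 144

blood_through_node_ml = (resting_weight_g * weight_increase * flow_increase_per_g
                         * resting_flow_ml_per_h_per_g * duration_h)   # about 24,700 ml = 24.7 liters

blood_volumes_passed = 6            # 24.7 liters is stated to be about six blood volumes
pool_fraction_in_blood = 0.10       # blood holds roughly one-tenth of the mobilizable pool (ref. 23)
equivalent_pool_fraction = blood_volumes_passed * pool_fraction_in_blood   # 0.6, i.e., 60%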
Summary
The blood flow to individual lymph nodes of sheep and rabbits has been determined with 85Sr-labeled microspheres. A popliteal node of the sheep received 0.014% of the cardiac output and a comparable node in the rabbit 0.011%. A sheep lymph node weighing 1 g received an average of 24 ml/h of blood. It was calculated that there was a highly selective removal of lymphocytes by the node and that an equivalent to one in every four lymphocytes that entered a normal lymph node migrated out of the blood, through the substance of the node, and into the efferent lymph. During the immune response to either allogeneic lymphocytes or tuberculin, the blood flow to sheep lymph nodes, even without considering the increase in node weight, increased an average of fourfold. During the primary immune response in the rabbit to keyhole limpet hemocyanin, the blood flow increased threefold.
The increase in blood flow preceded the antigen-induced increase in lymphocyte traffic recorded in the efferent lymph. The early phase of increased blood flow was considered to be due to hyperemia, whereas the latter phase had a significant angiogenesis component.
It was calculated that an equivalent to 60% of the entire mobilizable pool of lymphocytes could pass through an average lymph node in the blood during an immune response lasting 5 days.
Comparative Genomic Landscape of Urothelial Carcinoma of the Bladder Among Patients of East and South Asian Genomic Ancestry
Abstract Background Despite the low rate of urothelial carcinoma of the bladder (UCB) in patients of South Asian (SAS) and East Asian (EAS) descent, they make up a significant portion of the cases worldwide. Nevertheless, these patients are largely under-represented in clinical trials. We queried whether UCB arising in patients with SAS and EAS ancestry would have unique genomic features compared to the global cohort. Methods Formalin-fixed, paraffin-embedded tissue was obtained for 8728 patients with advanced UCB. DNA was extracted and comprehensive genomic profiling was performed. Ancestry was classified using a proprietary calculation algorithm. Genomic alterations (GAs) were determined using a 324-gene hybrid-capture-based method which also calculates tumor mutational burden (TMB) and determines microsatellite instability (MSI) status. Results Of the cohort, 7447 (85.3%) were EUR, 541 (6.2%) were AFR, 461 (5.3%) were AMR, 74 (0.85%) were SAS, and 205 (2.3%) were EAS. When compared with EUR, TERT GAs were less frequent in SAS (58.1% vs. 73.6%; P = .06). When compared with non-SAS, SAS had less frequent GAs in FGFR3 (9.5% vs. 18.5%, P = .25). TERT promoter mutations were significantly less frequent in EAS compared to non-EAS (54.1% vs. 72.9%; P < .001). When compared with non-EAS, PIK3CA alterations were significantly less common in EAS (12.7% vs. 22.1%, P = .005). The mean TMB was significantly lower in EAS vs. non-EAS (8.53 vs. 10.02; P = .05). Conclusions The results from this comprehensive genomic analysis of UCB provide important insight into possible differences in the genomic landscape at a population level. These hypothesis-generating findings require external validation and should support the inclusion of more diverse patient populations in clinical trials.
Introduction
Urothelial bladder carcinoma (UCB) is a multifactorial disease driven by environmental exposures (especially cigarette smoking), inherited genetic variants, and an accumulation of somatic genetic events. It is the 10th most common cancer worldwide, with more than 550 000 new cases annually, causing significant morbidity, mortality, and burden on healthcare systems. 1 Rates vary by region of the world, explained in part by the fact that different ancestries demonstrate varying germline mutations and tend to encounter different environmental exposures in a dynamic interplay. In UCB, the highest incidence is seen in the developed countries of Southern and Western Europe and North America, with an incidence of 10-15 cases per 100 000 persons. 2 Conversely, the lowest incidence is seen in Middle Africa, Central America, and Western Africa. 1
Patients of Asian ancestry have a similarly low incidence of UCB. Specifically, those patients of East Asian ancestry (EAS) from the countries of China, Japan, and Korea have an incidence of 4.5 cases per 100 000 persons. Those of South Asian ancestry (SAS) from India, Pakistan, Afghanistan, Nepal, Bangladesh, Sri Lanka, and Bhutan see an even lower incidence of roughly 2.5 cases per 100 000 persons. 2 While the incidence is low, these Asian cohorts make up a significant portion of the world's population and account for over 160 000 cases per year. Unfortunately, much of the large-scale scientific research in the UCB space has disproportionately excluded this population from clinical trials. Understanding the genomic differences in cancers of diverse populations will become even more important as we continue to see advances in precision and personalized oncology. The use of targeted agents in advanced UC has only recently been adopted clinically. 3 Unfortunately, much of these data did not include patients of Asian ancestry. Few Asian patients were included in the large clinical trials that define the standard treatment options for patients. 4-6 This inequality will become increasingly relevant as targeted agents based on genomic alterations become more important in the treatment of UC. There is a critical need to improve our understanding of the molecular pathogenesis of the disease in the East and South Asian populations to provide insights that could impact future treatment options. To address that, we present a comprehensive genomic profiling (CGP) study of UCB in patients of East and South Asian genomic ancestry and focus on the identification of frequent and potentially targetable genomic alterations.
Methods
A total of 8728 consecutive, centrally reviewed UCB specimens underwent comprehensive genomic profiling (CGP) as per routine clinical care in a Clinical Laboratory Improvement Amendments-certified (CLIA), College of American Pathologists-accredited (CAP), New York State-regulated reference laboratory (Foundation Medicine, Inc., Cambridge, MA, USA). Briefly, ≥50 ng of DNA per specimen was isolated and sequenced on Illumina HiSeq instruments to high, uniform coverage (median >500×), as previously described. 7 The DNA extracted from formalin-fixed, paraffin-embedded (FFPE) tumor specimens was analyzed using an adaptor ligation, hybridization capture-based platform (FoundationOne/FoundationOne CDx), which interrogates the entire coding region of at least 324 cancer-related genes and additional select introns from at least 19 genes commonly rearranged in cancer.
Genomic data were analyzed for base substitutions and short insertions/deletions (short variants), amplifications and homozygous deletions (copy number changes), and rearrangements (including gene fusions). Tumor mutational burden (TMB) was determined on at least 0.8 Mbp of targeted sequence and is reported as mutations per megabase (Mb), and microsatellite instability (MSI) status was determined on at least 1500 loci. As self-reported race was not available, predominant patient ancestry was determined for each specimen using a custom SNP-based classifier, as previously described. 8 Mutational signatures were determined using the decomposition method of Zehir et al, using the 96-feature single-base substitution COSMIC reference signatures generated by Alexandrov et al. 9,10 Tumor cell PD-L1 expression was determined by immunohistochemistry (IHC, Dako 22C3), with a tumor proportion score (TPS) of 0% scored as negative, 1%-49% as low expression, and TPS ≥50% as high expression. Patient age, biological sex, and site of specimen collection were extracted from accompanying pathology reports. Approval for this study, including a waiver of informed consent and a Health Insurance Portability and Accountability Act (HIPAA) waiver of authorization, was obtained from WCG Institutional Review Board (Protocol No. 20152817).
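As an illustration of the TMB convention just described, a minimal sketch is given below; the function name and example counts are hypothetical, and only the ≥0.8 Mbp floor, the per-Mb reporting, and the TMB-high cutoff (>20 mut/Mbp, per the figure captions) follow the text:

```python
# Panel-based TMB: eligible mutations divided by targeted sequence size in Mbp.
def tumor_mutational_burden(n_eligible_mutations: int, panel_size_mbp: float) -> float:
    """Return TMB in mutations per megabase of targeted sequence."""
    if panel_size_mbp < 0.8:
        raise ValueError("coverage below the 0.8 Mbp minimum stated in Methods")
    return n_eligible_mutations / panel_size_mbp

tmb = tumor_mutational_burden(n_eligible_mutations=12, panel_size_mbp=1.1)  # hypothetical
print(f"TMB = {tmb:.1f} mut/Mb; TMB-high (>20 mut/Mbp): {tmb > 20}")
```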
Categorical variables reported as frequencies were compared between groups via Fisher's exact testing, while patient age, mean number of pathogenic genomic alterations (GAs) per specimen, and mean TMB were compared using 2-sample, 2-tailed t testing. False discovery rate (FDR) correction was performed by the Benjamini-Hochberg procedure to correct P values for multiple hypotheses, and the resulting significance values are presented. Statistical significance was defined as P < .05.
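A minimal sketch of this per-gene testing workflow follows (Fisher's exact test per gene, then Benjamini-Hochberg FDR correction). The 2x2 counts are approximate reconstructions from the frequencies reported in Results (EAS n = 205, EUR n = 7447) and are for illustration only, not the study's actual data:

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# Each table is [[EAS altered, EAS not altered], [EUR altered, EUR not altered]]
tables = {
    "TERT promoter": [[111, 94], [5481, 1966]],
    "PIK3CA":        [[26, 179], [1645, 5802]],
}
pvals = {gene: fisher_exact(tab)[1] for gene, tab in tables.items()}
reject, p_adj, _, _ = multipletests(list(pvals.values()), alpha=0.05, method="fdr_bh")
for (gene, p), padj, sig in zip(pvals.items(), p_adj, reject):
    print(f"{gene}: p = {p:.2e}, FDR-adjusted p = {padj:.2e}, significant = {sig}")
```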
Results
Of the cohort of 8728 UCB, 7447 (85.3%) were of European genomic ancestry (EUR), 541 (6.2%) were of African genomic ancestry (AFR), 461 (5.3%) were of Admixed-American genomic ancestry (AMR), 205 (2.3%) were EAS, and 74 (0.85%) were SAS.Clinical features, mean number of pathogenic GA per specimen, and a selected set of GAs and signatures likely associated with systemic therapy response are reported in Table 1.
The biological sex distribution was similar among the cohorts, with 71.2% male in the EAS cohort, 75.7% male in the SAS cohort, and 74.9% male in the remaining global cohort.Median age at time of tissue acquisition was also similar, ranging between 70 and 72 for all ancestry cohorts.There were no significant differences in MSI status, mutational signatures, or PD-L1 status between any cohorts.Across all 3 cohorts, MSI-high was rare, occurring in 0.9% of the global cohort, 1.5% of the EAS cohort, and 0% of the SAS cohort.There were no significant differences identified between the EAS and SAS cohorts when directly compared to each other.
When evaluating the EAS cohort, pathogenic GAs were most frequently detected in TP53, occurring in 59.0% of patients. TERT promoter mutations were found to be significantly less frequent in EAS as compared to the remaining global cohort (54.2% vs. 72.9%; P < .005). This association held true when stratifying the global cohort and comparing only to those of EUR ancestry (54.2% vs. 73.6%; P < .005) (Fig. 1A). Interestingly, PIK3CA alterations were significantly less common in the EAS cohort as well (12.7% vs. 22.1%; P < .05), which again was seen when comparing only to the EUR group (12.7% vs. 22.1%; P < .05). Rates of putative driver mutations were comparable between EAS and the overall group, as well as to the EUR group. Notably, and in relation to targetable mutations, the EAS cohort had a similar rate of alterations in FGFR3 (16.6% vs. 18.4%; NS). When expanding the analysis to potentially less clinically meaningful genes in the disease, EMSY, NSD3, FGFR1, and PAX5 were discovered to be significantly biased toward the EAS group (P < .05), while no other genes were significantly enriched in the non-EAS cohort (Fig. 1C). TMB was not significantly different when examining the prevalence of TMB ≥10 and TMB ≥20 cohorts.

The SAS cohort was found to have statistically similar rates of GAs when compared to the remaining global cohort and to the EUR cohort. However, there was a noticeable decrease in the incidence of TERT promoter alterations in the SAS cohort when compared to EUR (58.1% vs. 73.6%; P = .06) (Fig. 2A). Whereas TERT was the most common alteration in the global cohort, occurring in approximately 73% of patients, TP53 was the most common alteration in the SAS cohort, found in 67.6% of patients. The SAS cohort had fewer alterations in FGFR3 (9.5% vs. 18.5%, P = .30). Interrogating the full set of genes yielded a single additional significant finding: enrichment of MST1R mutations in the SAS group (P < .05) (Fig. 2C). When evaluating the TMB, both the overall and SAS cohorts were similar at 6.25 mut/Mb.
Discussion
This study provides the largest analysis of ancestrally associated genomic changes in UCB. Prior to our study, the largest ancestral analysis to date involved use of the comprehensive multi-omics sequencing of the TCGA database. 11 In that study, the authors evaluated the effects of ancestry on mutation rates, DNA methylation, and mRNA and miRNA expression among 10 678 patients across 33 cancer types. Of the 10 678 patients in that study, there were only 669 patients of East Asian ancestry and 27 patients of South Asian ancestry. Again, that study spanned 33 cancers. In this UCB-specific analysis, we were able to include 205 and 74 patients of EAS and SAS ancestry, respectively.
The TERT promoter is the most common mutation, occurring in approximately 60%-80% of patients with bladder cancer. 12 TERT mutation is correlated with increased telomerase activity and shorter but stable telomere length, with growing evidence that this empowers genetically unstable cells to evade senescence by keeping telomeres from critically shortening. 12,13 TERT mutations can be identified throughout the spectrum of urothelial carcinoma, with conflicting reports on association of mutation with grade, stage, and prognosis. 12 Despite the frequency of this mutation, there are no approved agents that specifically target the gene itself or telomerase activity. Unfortunately, compounds that have been evaluated in clinical trials have proven either ineffective in adult malignancies or too toxic when tested in pediatric populations. 14,15 Nevertheless, research has demonstrated efficacy when targeting TERT expression through a different, epigenetic mechanism: inhibition of the BET bromodomain. 16 BET bromodomain proteins occupy super-enhancer loci of oncogenes to increase transcription. By inhibiting these proteins, targeted agents can suppress oncogene transcription and exert anticancer effects. BET inhibitors have shown preclinical efficacy in bladder cancer models and require further testing in clinical trials. 17 In addition to their therapeutic potential, TERT promoter mutations have also been associated with higher TMB and thus might serve as a potential biomarker associated with response to ICI. 17 One group explored the use of tumor genomic profiling in predicting the response of advanced-stage UC to ICI. 18 In 119 patients with locally advanced or metastatic UC, researchers demonstrated that TERT promoter mutations were an independent factor associated with ICI response, progression-free survival, and overall survival.
FGFR3 gene mutations are common in UC. The FGFR inhibitor erdafitinib received accelerated FDA approval in April 2019 as salvage therapy in patients with an FGFR2 or FGFR3 activating mutation or fusion progressing on platinum-based chemotherapy. In one case, a locally advanced bladder tumor from a 64-year-old SAS man was found to possess an FGFR3-TACC fusion, making erdafitinib a potential therapeutic option (Fig. 3). In this study, we found that, like TERT, FGFR3 mutations were less common in SAS patients than in the remainder of the cohort. FGFR3 overexpression occurs in up to 40% of MIBC. Furthermore, initial data suggested that FGFR3 overexpression may be a useful biomarker to identify patients eligible for FGFR inhibitor therapy, but this notion has not been supported by additional data. Interestingly, when evaluating the TCGA bladder cancer database, Asian patients demonstrated significantly higher FGFR3 expression as compared to Caucasians. 19 PIK3CA alterations, found to be less common in the EAS cohort, have been directly implicated in several solid tumors. 20-24 While clinical trials evaluating PI3K/mTOR pathway inhibitors have largely been disappointing in UC, there have been positive results in metastatic breast cancer with activating mutations treated with the FDA-approved agent alpelisib. 25 Furthermore, preclinical data in bladder cancer models demonstrated that PIK3CA inhibition in combination with ICI has enhanced antitumor effects through increased immune stimulation. 26 If PIK3CA inhibitors transition to more clinical trials in UC, it will be important to include East Asian patients to assess safety and efficacy.
Despite the FDA approval of erdafitinib and ICI in UC, their use in patients of South Asian ancestry may not be widespread. This is largely due to real-world access barriers, for example, availability and cost. In a study from India, investigators evaluated single-center use of ICI in solid tumors. 27 They found that of 9610 patients who had indications for ICI, only 155 (1.6%) went on to receive it, listing financial constraint as the most common limiting factor. Equitable access to tumor next-generation sequencing, clinical trials, and safe, effective therapies is a very important priority to eliminate disparities.
Understanding genomic differences at a population level is only a first step. We must incorporate ancestral data in clinical trial design, specifically by performing these trials in the regions of the world from which the data are derived. Indeed, while learning the mutational frequencies of SAS patients is useful, doing so by evaluating a SAS patient living in the US does not translate into real-world success in treating a patient in India. Developing regions of South and East Asia carry different disease burdens that may affect the response rates and toxicity profiles of therapies. Furthermore, as noted above, use of newly approved agents in these developing regions is not widespread, largely because of real-world cost barriers. 28 Therefore, even if data were to suggest greater clinical efficacy in this population, there are barriers to treatment access in these resource-limited countries.
In this study, the greatest limitation is the lack of clinical, therapy-response, and outcomes data, which prevents any analysis associating those variables with the genomic data. Moreover, we could not ascertain race and ethnicity based on patient reports. The retrospective and descriptive design of this study, as well as the presence of possible selection bias and confounding factors, may impact interpretation. Furthermore, CGP was performed on a single representative block as part of the clinical workup; therefore, investigation of potential genomic heterogeneity that might correlate with morphologic heterogeneity was outside the scope of this study. In that context, we also did not have adequate plasma samples for ctDNA analysis.
Conclusion
Despite the limitations noted, our series represents one of the largest efforts to characterize molecular alterations in advanced UCB based on patient ancestry. The in-depth genomic analysis of South Asian and East Asian patients provides hypothesis-generating insights into potential differences at a population level. Our study should further motivate and support the inclusion of more diverse patient populations in clinical trials in UC and across cancer types.
† Comparisons between EAS vs. non-EAS, SAS vs. non-EAS, EAS vs. EUR, and SAS vs. EUR are shown (footnote to Table 1).
Figure 1. (A) Paired longtail plot of genomic alterations found in 205 patients of East Asian (EAS) and 7447 patients of European (EUR) genomic ancestry with UCB. Mutations in the TERT promoter (73.60% vs. 54.15%; P = 6.34E−08) and PIK3CA (22.09% vs. 12.68%; P = 6.34E−08) are enriched in the EUR group. The 25 most frequent genes, ordered by combined frequency in both cohorts, are shown. (B) Tile plot of pathogenic genomic alterations identified in 205 patients with UCB of East Asian genomic ancestry. MSI-high was detected in 1.5% of patients, and 7.3% were TMB-high (>20 mut/Mbp). (C) Volcano plot showing enrichment (P < .05) for pathogenic gene alterations between the EAS and non-EAS groups. EMSY, NSD3, FGFR1, and PAX5 are significantly enriched in the EAS group, while the TERT promoter and PIK3CA are enriched in the non-EAS group.
Figure 2. (A) Paired longtail plot of genomic alterations found in 75 patients of South Asian and 7447 patients of European genomic ancestry with UCB. No mutations were enriched in either group. The 25 most frequent genes, ordered by combined frequency in both cohorts, are shown. (B) Tile plot of pathogenic genomic alterations identified in 75 UCB patients of South Asian genomic ancestry. MSI-high was detected in 0% of patients, and 12.2% were TMB-high (>20 mut/Mbp). (C) Volcano plot showing enrichment (P < .05) for pathogenic gene alterations between the SAS and non-SAS groups. MST1R is significantly enriched in the SAS group, while no genes are enriched in the non-SAS group.
Table 1. Clinical features and select pathogenic genomic alteration frequencies in 8728 UCB specimens.
Energetic particle instabilities in fusion plasmas
Remarkable progress has been made in diagnosing energetic particle instabilities on present-day machines and in establishing a theoretical framework for describing them. This overview describes the much improved diagnostics of Alfvén instabilities and the modelling tools developed world-wide, and discusses progress in interpreting the observed phenomena. A multi-machine comparison is presented, giving information on the performance of both diagnostics and modelling tools for different plasma conditions and outlining expectations for ITER based on our present knowledge.
Introduction
As energetic alpha particles will play a central role in burning deuterium-tritium (DT) plasmas, it is crucial to understand and possibly control their behaviour in various operational regimes. Of particular importance is the understanding of instabilities driven by alpha particles [1]. The complete set of implications of operating burning plasmas with alpha-particle driven instabilities can only be investigated in a burning plasma experiment itself.
However, experiments on present-day machines with energetic particles produced by neutral beam injection (NBI), ion-cyclotron resonance heating (ICRH), and electron-cyclotron resonance heating (ECRH) already reveal many relevant features of the possible alpha-particle instabilities. The energetic particle-driven instabilities are often observed experimentally, and they range from low-frequency fishbones in the range of 10-50 kHz up to compressional Alfvén eigenmodes (CAEs) in the frequency range comparable to or higher than the ion cyclotron frequency. The instability of weakly-damped Alfvén eigenmodes (AEs) is of highest priority for the next-step burning plasma on ITER for a number of reasons.
First, AEs are driven by radial gradient of energetic particle pressure and lead to enhanced alpha-particle radial transport, in contrast to CAEs excited by velocity space gradients.
Second, AEs resonate with alpha-particles in the MeV energy range, in contrast to, e.g., fishbones, which are expected to resonate with alpha-particles of 300-400 keV in ITER. Third, due to their weak damping, AEs could be excited by an alpha-particle population with a lower energy content per volume as compared to linear Energetic Particle Modes (EPMs). Although the amplitudes of Alfvén perturbations are usually not too high in present-day experiments, the existing experimental data on energetic particle radial redistribution and losses are representative enough to gain important information on the processes involved.
A significant dedicated effort was made in the past decade in developing techniques for diagnosing energetic particle-driven Alfvén instabilities with interferometry, ECE, phase contrast imaging, and beam emission spectroscopy. Together with the much improved diagnostics of the energetic ions themselves, such development sets a new stage for the understanding of such instabilities, since nearly all the essential information can now be obtained from experimental measurements and not from assumptions or modelling with often uncertain error bars. The aim of this overview is to present a point-by-point comparison between the much improved diagnostics of AEs and the modelling tools developed world-wide, and to outline progress in interpreting the observed phenomena.
Experimentally, Alfvén instabilities exhibit two main nonlinear scenarios: with a mode Frequency Locked (FL) to the plasma equilibrium, or with a mode Frequency Sweeping (FS) ("frequency chirping" modes [2]). It is important to understand these two scenarios for predicting what temporal evolution and transport due to Alfvén instabilities will be relevant to ITER. Figure 1 presents a typical example of FL Alfvén eigenmodes (AEs) on the JET tokamak with ICRH-accelerated ions [3], while Figure 2 presents an FS Alfvén instability on the JT-60U tokamak with negative NBI heating [4]. In the case of JET, the Alfvén perturbations form a discrete spectrum of Toroidal Alfvén Eigenmodes (TAEs) with different toroidal mode numbers n and frequencies, which are determined by the bulk plasma equilibrium throughout the whole nonlinear evolution. These TAEs with different n's appear one-by-one as ICRH power increases, and the observed slow change in TAE frequencies is caused by an increase in plasma density in accordance with the Alfvén scaling f_TAE ∝ V_A ∝ n_e^(-1/2). The amplitude of each TAE saturates and remains nearly constant. In contrast to the FL scenario, Figure 2 shows an FS Alfvén instability on JT-60U with the frequency of the perturbations starting from the TAE frequency, but then changing on a time scale much shorter than the time scale of the changes in plasma equilibrium. The amplitude of this FS instability exhibits bursts, and the mode frequency sweeps during every burst.
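For orientation, the density dependence implied by this Alfvén scaling can be evaluated with a short sketch; the relation f ~ V_A/(4πqR) for the TAE gap frequency is standard, but the parameter values below are illustrative, not taken from the JET or JT-60U discharges shown:

```python
import numpy as np

mu0 = 4e-7 * np.pi    # vacuum permeability [H/m]
m_D = 3.344e-27       # deuteron mass [kg]

def f_TAE(B, n_i, q=1.5, R=3.0):
    """TAE gap frequency [Hz] for field B [T] and ion density n_i [m^-3]."""
    V_A = B / np.sqrt(mu0 * n_i * m_D)   # Alfven velocity [m/s]
    return V_A / (4 * np.pi * q * R)

# Frequency drops as density rises, at fixed B, q, R:
for n_i in (2e19, 4e19, 8e19):
    print(f"n_i = {n_i:.0e} m^-3 -> f_TAE = {f_TAE(B=3.0, n_i=n_i) / 1e3:.0f} kHz")
```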
The FL and FS scenarios of energetic particle-driven instabilities differ in the temporal evolution of the redistribution and losses of energetic particles, and they require conceptually different approaches in modelling. Namely, the frequencies of unstable modes in the FL scenario correspond to linear AEs determined by the bulk plasma equilibrium throughout the linear exponential growth and the nonlinear evolution of the mode. In this case, the energetic particles determine the growth rate but affect very little the eigenmode structure and frequency, so that the modes are "perturbative". In the FS scenario, the contribution of the energetic particles to the mode frequency is as essential as the bulk plasma contribution, and when the unstable mode redistributes the energetic particles, it changes the frequency too. The characteristic time scale of the energetic particle redistribution is the inverse growth rate, so the energetic particle profile and the mode frequency determined by this profile change much faster than the plasma equilibrium. The energetic particles cannot be considered as a small perturbation in the FS scenario, so the modes are "non-perturbative" nonlinear energetic particle modes.
In past studies of FL scenarios, linear spectral MHD codes could be used for computing AEs supported by the plasma equilibrium. For FL scenarios observed experimentally, MHD spectroscopy via AEs, i.e. obtaining information on the plasma equilibrium from the observed spectrum of AEs, became possible [5][6][7]. In particular, Alfvén cascade (AC) eigenmodes [6,8] (also called reversed shear Alfvén eigenmodes, RSAEs [9]) were employed successfully in MHD spectroscopy. In contrast to the TAE modes in Fig. 1, AC frequencies track the temporal evolution of the minimum of the safety factor, q_min [8].
For the FS scenario, the concept of near-threshold "hard" nonlinear regime of energetic particle-driven instability has demonstrated the possibility of forming non-perturbative nonlinear modes even when the instability is somewhat below the linear threshold. This recent development began to provide a credible opportunity of understanding FS modes to a degree required for theory-to-experiment comparison and predictions for burning plasmas.
Advances in diagnosing Alfvén instabilities
Recent advances in diagnosing Alfvén instabilities are associated with a significant expansion of tools and techniques for the detection and identification of the unstable modes. In the past, Alfvén instabilities were detected via the perturbed magnetic field measured by magnetic sensors, e.g. Mirnov coils outside the plasma. Such measurements did not always detect AEs in the plasma core, and they will be more difficult in ITER and DEMO due to the necessity of protecting the magnetic sensors. It is also desirable for future DT machines with restricted access to the plasma to have detection systems for Alfvén instabilities naturally combined with some other diagnostic tools. Measurements of the perturbed electron density and temperature associated with AEs are possible alternatives to magnetic sensors at the edge. The perturbed electron density caused by AEs in toroidal geometry is given by

ñ = −ξ·∇n₀ − n₀ ∇·ξ,   (1)

where ñ and n₀ are the perturbed and equilibrium densities, ξ is the plasma displacement, and L_n is the radial scale length of the density. The first term in Eq. (1), of order ξ_r/L_n, describes the usual convection of plasma involved in the E×B drift. The second term in (1), of order ξ_r/R, is caused by toroidicity and gives a non-zero perturbed density even when the profile of n₀ is flat [10,11]. A launched microwave O-mode beam on JET with frequency above the cut-off frequency of the O-mode was found to deliver detection of AEs far superior to that made with Mirnov coils [11]. This "O-mode interferometry" shows unstable AEs not seen with Mirnov coils. Later, the standard far infra-red (FIR) JET interferometer was digitised to a high sampling rate, which enabled detecting AEs even in plasmas of high density. A similar interferometry technique was employed for diagnosing AEs in DIII-D discharges [12]. It was observed for the first time that a "sea of modes" exists in such plasmas, with toroidal mode numbers up to n ≈ 40.
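To see why the toroidal term matters, a toy evaluation of the two contributions to Eq. (1) can be sketched as follows (all values are illustrative, not taken from a specific device):

```python
# Orders of magnitude of the two terms in the perturbed-density expression.
xi_r = 1e-4   # radial plasma displacement [m]
L_n  = 0.5    # density scale length [m]
R    = 3.0    # major radius [m]

convective = xi_r / L_n   # vanishes where the density profile is flat (L_n -> inf)
toroidal   = xi_r / R     # survives even for a flat density profile

print(f"convective term ~ {convective:.1e}, toroidal term ~ {toroidal:.1e}")
```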
The interferometry technique has increased significantly the sensitivity of AE detection and it assures that all unstable modes are detected even deeply in the plasma core. Since the interferometry technique of detecting AEs requires only interferometers used for plasma density measurements, this method is a good candidate for ITER and DEMO.
The main limitation of using interferometry or Mirnov coils for detecting AEs is that the AEs cannot be localised from the measurements. Recent successful development of ECE [13] and ECE imaging [14,15], beam emission spectroscopy (BES) [16], and phase contrast imaging (PCI) [17] has addressed the problem of measuring the mode structure. Together with the existing SXR technique and X-mode reflectometry used for observing alpha-driven AEs in DT plasmas [10], the new diagnostics provide opportunities for identifying the spatial structure of the modes to the degree required for an accurate experiment-to-theory comparison.
On ALCATOR C-MOD, the PCI diagnostic was found to be an outstanding tool for detecting core-localised AEs [17]. This diagnostic is a type of internal beam interferometer, which can generate a 1D image decomposed into 32 elements of approximately 4.5 mm chord separation in the direction of the major radius, thus providing information on AE localisation.
On DIII-D and ASDEX-Upgrade, ECE became a successful tool for measuring AEs.
Figures 3 and 4 display an example of the ECE radial profiles for beam-driven ACs (RSAEs) and TAEs in a DIII-D discharge [13]. Both the localisation and the radial widths of these FL modes are found to agree well with the linear MHD code NOVA, which also includes the relationships between the perturbed magnetic fields, density, and electron temperature. The relative perturbed electron temperature associated with the modes, δT_e/T_e, is estimated to be ~10⁻⁴ for TAEs. This information is necessary for computing the energetic particle redistribution due to the AEs described in the next Section. In addition to the Alfvén diagnostics, there has been extensive development in diagnostics of confined and lost energetic particles on many machines world-wide [18].
Description of these diagnostics goes beyond the scope of this paper, but some examples of their use will be presented.
Redistribution and losses of energetic ions caused by Alfvén instabilities
It was noted in the previous Section that typical amplitudes of the AEs excited are quite low, e.g. in the range of δB/B ~ 10⁻⁴-10⁻³ on the DIII-D tokamak. For such amplitudes, particles can be noticeably affected only if their motion is in resonance with the wave. Hence, the relatively narrow regions surrounding the wave-particle resonances are of major importance for describing the particle interaction with AEs. Significant effort has been made in order to validate experimentally the main assumptions and results of both linear and nonlinear theory describing the resonant interaction between Alfvén waves and energetic particles, and the effect of Alfvén instabilities on redistribution and losses of the energetic particles. In a tokamak, the theory focuses on the dynamics of particles resonant with a wave, i.e. satisfying the resonance condition (in the guiding centre approximation)

ω = n ω_φ + p ω_θ,   with integer p.   (2)

Here, the toroidal, ω_φ, and poloidal, ω_θ, orbit frequencies of the particles in the unperturbed fields are functions of three invariants: energy E, magnetic moment μ, and toroidal angular momentum

P_φ = e ψ + M V_∥ R B_φ / B,   (3)

where ψ is the poloidal flux, V_∥ is the velocity of the particle parallel to the magnetic field, e and M are the charge and mass of the particle, B_φ is the toroidal component of the magnetic field, and R is the major radius. Since the wave frequency is much less than the ion cyclotron frequency, μ is conserved, as is the combination (for a single mode)

E − (ω/n) P_φ = const.   (4)

The free energy source of the Alfvén instability is associated with the radial gradient of the energetic particle pressure and causes a wave growth rate of the form

γ_L / ω ∝ β_h F(V_h / V_A, Δ_h / Δ_AE),   (5)

where β_h, V_h, and Δ_h are the beta value, thermal velocity, and drift orbit width of the energetic (hot) particles, Δ_AE is the radial width of the mode, and the function F depends on the energy distribution of the energetic particles.
For assessing transport caused by AEs, one notes that each mode affects resonant particles only in a relatively narrow region of the phase space indicated by condition (2), and that an AE can cause a significant radial redistribution of these particles with only a minor change in their energy (see Eq. (4)). In the nonlinear phase of the instability, the resonant particles can become trapped in the field of the wave within a finite width of the resonance, Δω ≈ ω_b, where ω_b ∝ (δB/B)^(1/2) is the nonlinear trapping frequency [24]. The nonlinear width of the resonance varies along the resonant surface depending on the unperturbed particle orbits, the mode structure, and the mode amplitude. If the widths of different resonances are smaller than the distance between them, a single-mode nonlinear theory applies. If the resonances overlap, stochastic diffusion of the particles over many resonances can cause a global transport [24,25]. Two representative cases of AE-induced redistribution of the energetic particles, with resonances non-overlapped and overlapped, were recently modelled in detail for well-diagnosed experiments on JET and DIII-D. In both cases FL scenarios are relevant, so the structure and frequencies of the AEs could be obtained from linear theory.
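The overlap estimate just described can be sketched numerically; the proportionality ω_b ~ ω (δB/B)^(1/2), the resonance spacing, and the "width ≈ 4 ω_b" convention are schematic assumptions for illustration, not values from [24,25]:

```python
import numpy as np

def trapping_frequency(omega, dB_over_B):
    """Schematic nonlinear trapping frequency, omega_b ~ omega * sqrt(dB/B)."""
    return omega * np.sqrt(dB_over_B)

omega = 2 * np.pi * 100e3        # TAE angular frequency (~100 kHz)
d_omega_res = 2 * np.pi * 5e3    # assumed spacing of neighbouring resonances [rad/s]

for dB_over_B in (1e-5, 1e-4, 1e-3):
    wb = trapping_frequency(omega, dB_over_B)
    K = 4 * wb / d_omega_res     # full resonance width ~4*omega_b over spacing
    regime = "overlapped (stochastic transport)" if K > 1 else "isolated resonances"
    print(f"dB/B = {dB_over_B:.0e}: omega_b/2pi = {wb / (2 * np.pi) / 1e3:.2f} kHz, "
          f"K = {K:.2f} -> {regime}")
```

Note how, with these assumptions, amplitudes in the quoted 10⁻⁴-10⁻³ range sit right around the overlap threshold, consistent with the DIII-D case discussed below.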
On JET, D beam ions were accelerated from 110 keV up to the MeV energy range by 3rd harmonic ICRH in D plasmas [26]. Figure 5 shows the observed mode activity in such a discharge. The measurement of the confined energetic D ions is based on γ-rays from the nuclear 12C(D,p)13C reaction between C impurities and fast D [29]. During the tornado modes, the 2D γ-camera on JET (Fig. 7), measuring the γ-emission with a time resolution of ~50 ms, showed a strong redistribution of the γ-emission in the plasma core, as Figure 8 displays.
A suite of equilibrium (EFIT and HELENA) and spectral (MISHKA) codes was used to model the observed AEs. The particle-following code HAGIS [30] was then employed to simulate the interaction between the energetic ions and TAEs. The unperturbed distribution function of fast D ions was assumed to be of separable form, F = f_E(E) f_Λ(Λ) f_P(P_φ), where the distribution function in pitch Λ for trapped energetic ions accelerated with on-axis ICRH was considered to be Gaussian, centred on Λ = 1 with a width of 1.5·10⁻¹.
The distribution function in energy, f_E, was derived from the measured energy spectrum of DD neutrons [27]. The spatial profile of the trapped D ions before the TAE activity, f_P, was obtained from the best fit matching the observed 2D profile of the gamma-emission. The initial value simulation with HAGIS shows an exponential growth of the modes followed by nonlinear saturation and redistribution of the trapped energetic ions. Figure 9 demonstrates that these HAGIS results are in satisfactory agreement with the experimentally measured gamma-ray profiles for the trapped energetic ions. In view of the possible interplay between TAEs and sawteeth [31], the fast particle contribution to the stabilising effect of the n = 1 kink mode was computed before and after the redistribution. A significant decrease in the stabilising effect was found in [26], supporting the idea that the monster sawtooth crash is facilitated by TAEs expelling the energetic ions from the region inside the q = 1 radius. A similar interplay between TAE modes redistributing energetic particles in the plasma core, and other types of MHD instabilities affected by the energetic particles, could be relevant for ITER. Since the interaction described above does not require energetic particle transport across the whole radius of the plasma, even transport of the energetic particles deep in the plasma core could affect MHD stability in the same core region.
Another example of energetic particle redistribution by AEs comes from DIII-D experiments showing significant modification of D beam profile in the presence of multiple
TAEs and RSAEs (Alfvén cascades) [32]. This observation and the TRANSP predictions are shown in Figure 10, and the TAE and RSAE data are shown in Figs 3 and 4. Based on the ECE measurements of AE amplitude and mode structure, accurate modelling was performed in [33] with the ORBIT code [25] to interpret the flat fast ion profile. In this case of multiple modes densely packed in frequency, a wide area of wave-particle resonance overlap was found. A stochastic threshold for the beam transport was estimated, and the experimental amplitudes were found to be only slightly above this threshold [33]. We thus observe that multiple low-amplitude AEs can indeed be responsible for substantial central flattening of the beam distribution, as Figure 11 shows.
Recently, a quasi-linear 1.5D model has been developed and applied to these DIII-D data [34]. The model gives the relaxed fast ion profile determined by the competition between the AE drive and damping. Figure 12 presents a comparison between the experimental data, the TRANSP modelling, and the 1.5D quasi-linear model [34]. Note that the different q-profile in ITER will not necessarily give a global transport similar to that observed on DIII-D.
In another theory-to-experiment comparison aimed at explaining TFTR results [35], the role of nonlinear sidebands, including zonal flows, was shown to be significant in reducing the mode saturation level. The beam ion losses caused by the AEs were proportional to the squared mode amplitude, δB², although the simulation cannot yet match the experimental data precisely.
In ASDEX-Upgrade experiments, detailed measurements of the radial structures of AEs driven by beams and ICRH were obtained using ECE imaging [14], SXR, and reflectometry. The fast-ion redistribution and loss are routinely monitored with scintillator-based fast-ion loss detectors and fast-ion D-alpha spectroscopy. It was found that a radial chain of overlapping AEs enables the transport of fast ions from the core all the way to the loss detector [36,37].
MHD spectroscopy of plasma
The FL instabilities of AEs represent an attractive form of MHD spectroscopy [5][6][7]. The frequency of an AC is determined mainly by the evolving minimum of the safety factor, q_min(t):

ω_AC ≈ |n − m/q_min| V_A / R,   (6)

where m is the poloidal mode number of the AC and V_A is the Alfvén velocity. In addition to the scenario development, important information was obtained on the time sequence of events causing the ITB. Figure 13 shows a typical JET discharge, in which a grand AC, with all mode numbers n seen at once, signifies that q_min is an integer at t ≈ 4.8 s [38]. Figure 14 shows that in the same JET discharge the ITB triggering event, observed as an increase of T_e in the region close to R_qmin, happens earlier, at t ≈ 4.6 s. This sequence of events is characteristic of the majority of JET discharges, showing that the formation of the ITB just before q_min = integer is more likely to be associated with the depletion of rational magnetic surfaces [40] rather than with the presence of an integer q_min value itself. Similar observations have been made on DIII-D [41]. Another important example of MHD spectroscopy is the study of sawtooth crashes on C-MOD [43,44] and JET [45], in which ACs (RSAEs) are observed between the sawtooth crashes. Figure 15 shows the detected RSAEs on C-MOD between two sawtooth crashes, which convincingly indicate shear reversal inside the q = 1 radius. Figure 16 shows the corresponding reconstruction of the q(r) profile from the modes observed.
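As a sketch of how Eq. (6) is inverted in practice, the following assumes a measured AC frequency and known mode numbers (n, m); the branch choice and the values of V_A and R are illustrative assumptions:

```python
import numpy as np

V_A = 7.0e6   # assumed Alfven velocity from density and field measurements [m/s]
R   = 3.0     # assumed major radius [m]

def q_min_from_ac(f_ac, n, m):
    """Invert f_AC ~ |n - m/q_min| * V_A / (2*pi*R) on the branch m/q_min > n."""
    return m / (n + 2 * np.pi * f_ac * R / V_A)

# e.g. an n = 4, m = 5 cascade observed at 30 kHz implies q_min just below 5/4:
print(f"q_min = {q_min_from_ac(30e3, n=4, m=5):.3f}")
```

In this picture the upward frequency sweep of each AC directly reflects the slow decrease of q_min, which is what makes the grand cascade such a clean marker of integer q_min.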
The use of MHD spectroscopy has become routine for JET, DIII-D, NSTX, MAST, and ASDEX-Upgrade, and the extension to 3D plasmas is being implemented on LHD [42].
The near-threshold nonlinear theory of frequency sweeping modes
The FS scenarios of energetic particle-driven Alfvén instabilities were commonly observed on DIII-D, JT-60U, ASDEX-Upgrade, MAST, NSTX, START, and LHD machines with NBI heating (see, e.g. [46] and References therein). In contrast to FL scenarios, neither frequency nor structure of FS modes is determined by the bulk plasma equilibrium during the nonlinear mode evolution. Description of FS modes is essentially nonlinear, and the linear MHD spectral codes have very limited applicability.
The recent progress in describing FS instabilities is associated with the kinetic theory [47] of energetic particle-driven waves with different collisional effects [48], drag and diffusion, replenishing the unstable distribution function, and satisfying the near-threshold condition 0 < γ_L − γ_d << γ_L, where γ_L is the linear growth rate provided by the energetic particles and γ_d is the damping rate due to the background plasma. In this limit, a lowest-order cubic nonlinear equation for the mode amplitude describes "soft" nonlinear FL scenarios (steady-state, pitchfork splitting, and chaotic) when diffusion dominates at the wave-particle resonance in phase space, and a "hard" (explosive) nonlinear scenario when the drag dominates or when the characteristic diffusion time is much longer than the inverse net growth rate.
The explosive mode evolution goes beyond the cubic nonlinearity and the fully nonlinear model shows a spontaneous formation of long-living structures, holes and clumps, in the energetic particle distribution [49]. These structures are nonlinear energetic particle modes, which travel through the phase space and sweep in frequency [50] exhibiting many of the characteristics of FS modes seen in experiments [46].
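The long-time behaviour of these structures can be estimated from the frequency-sweeping scaling commonly quoted for the near-threshold (Berk-Breizman) model, δω ≈ 0.44 γ_L (γ_d t)^(1/2); the sketch below uses illustrative rates, not parameters fitted to a particular experiment:

```python
import numpy as np

gamma_L = 5e4   # illustrative linear drive rate [1/s]
gamma_d = 4e4   # illustrative background damping rate [1/s]

t = np.linspace(0.0, 2e-3, 5)                      # a 2 ms burst
d_omega = 0.44 * gamma_L * np.sqrt(gamma_d * t)    # sweep of each hole/clump [rad/s]
for ti, dw in zip(t, d_omega):
    print(f"t = {ti * 1e3:.1f} ms: delta_f = {dw / (2 * np.pi) / 1e3:.1f} kHz")
```

With these rates the sweep reaches a few tens of kHz within a burst, the order of magnitude seen in the chirping spectrograms discussed next.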
Among the variety of frequency sweeping spectra obtained in modelling [51,52], the long-range frequency sweeping phenomenon attracts most attention due to its relevance to the experimental observations. Figure 17 shows results of a MAST experiment with super-Alfvénic NBI driving Alfvén instability when the resonance is in a phase-space region dominated by electron drag on the beam ions. It is seen that, similarly to Figure 2, some of the modes sweep in frequency over a very long range, δω/ω ≈ 0.5. Although modelling with the HAGIS code [30] reproduces the characteristic spectrum observed in experiments, as Figure 18 shows, the range of the frequency sweeping is not as large as that observed on MAST.
The dominant transport mechanism for nonlinear FS modes is convection of particles trapped in the wave field. This mechanism is also characteristic for strongly unstable energetic particle modes that are already non-perturbative in the linear regime [55].
Experimentally, validation of the hole-clump formation and transport was made with an NPA diagnostic on LHD [56]. Figure 19 shows how the flux of energetic beam ions sweeps in energy together with FS modes. A new theory of continuous hole-clump triggering [57] shows that a single resonance can produce transport higher than the quasi-linear estimate, due to the convection of the resonant ions trapped in the field of a travelling wave.
A joint ITPA experiment validating the near-threshold model is in progress, with a MAST and LHD comparison indicating that the parameter space for bursting AEs shrinks for core-localised global AEs (GAEs) on LHD, in which GAEs exist because of a q(r)-profile different from that in a tokamak [42]. In parallel, study of experimental data continues. On NSTX, bursting FS TAEs were observed in the form of "avalanches" consisting of several coupled modes with a strong downward frequency sweep and amplitudes higher than those of un-coupled TAEs [58]. The experimentally observed ~10% drops in the neutron rate during the avalanches were explained by a decrease in the beam energy and by losses resulting from interaction with TAEs.
Possible control of Alfvén instabilities in burning plasmas using ECRH
The problem of controlling Alfvén instabilities and the fast ion transport caused by AEs is one of the important avenues for future exploitation in both experiment and theory. The most encouraging results in this area were obtained on DIII-D, where ECRH was found to suppress RSAEs excited by the beam ions [59]. A direct comparison of the ECCD effect versus ECRH [59] has shown that it is the heating, not the current drive, which provides the mode suppression, possibly via the electron pressure gradient or via increased damping due to a larger population of trapped electrons. A new joint ITPA experiment was set up in order to assess this effect on DIII-D, ASDEX-Upgrade, TCV, LHD, TJ-II, HL-2A, and KSTAR. From the standpoint of targeting and affecting a particular type of wave with a known location, ECRH is an ideal tool, since it can provide highly localised targeted power deposition on ITER. Figure 20 shows the interferometry data on RSAE activity in DIII-D discharges with ECRH. The amplitudes and the number of unstable AEs decrease when ECRH is applied to the localisation region of the RSAEs at q_min.
In ITER, with possible high-n TAEs occupying a wide radial region, ECRH, due to its high localisation, may suppress TAEs in a narrow region rather than in the whole plasma. However, if the width of the TAE-free zone is larger than the orbit width of the energetic ions, this zone could become a transport barrier for the TAE-induced transport of the energetic ions. With the expertise gained in ECRH triggering of ITBs for the thermal plasma, employing ECRH for creating TAE-free transport barriers for energetic particles in ITER could be feasible. Further study of ECRH suppression of AEs is required.
Conclusions
In summary, a systematic and significant recent effort in diagnosing the energetic ion

Fig. 7 Lines-of-sight of the 2D gamma-camera on JET.
Fig. 8 Time evolution of the gamma-ray signals for channels 14-18 in JET discharge #74951. The signals in the central channels (15, 16) decrease, while the signals in the outer channels (14, 18) increase, showing the redistribution of gamma-rays from energetic deuterons during the Alfvén instability.

Fig. 9 Gamma intensity in the 19 channels of the gamma-camera (JET pulse #74951). Here we show the measured pre-TAE (blue) and during-TAE (green) profiles. The simulated gamma intensity is shown in red (initial data) and black (after redistribution).
Transitions in hookah (Waterpipe) smoking by U.S. sexual minority adults between 2013 and 2015: the population assessment of tobacco and health study wave 1 and wave 2
Background Tobacco smoking using a hookah (i.e., waterpipe) is a global epidemic. While evidence suggests that sexual minorities (SM) have higher odds of hookah use compared to heterosexuals, little is known about their hookah use patterns and transitions. We sought to examine transitions between hookah smoking and use of other tobacco and electronic (e-) products among SM adults aged 18 years of age and older versus their heterosexual counterparts. Methods We analyzed nationally representative data of ever and current hookah smokers from Wave 1 (2013–2014; ever use n = 1014 SM and n = 9462 heterosexuals; current use n = 144 SM and n = 910 heterosexuals) and Wave 2 (2014–2015; ever use n = 901 SM and n = 8049 heterosexuals; current use n = 117 SM and n = 602 heterosexuals) of the Population Assessment of Tobacco and Health Study. Comparisons between groups and gender subgroups within SM identity groups were determined with Rao-Scott chi-square tests and multivariable survey-weighted multinomial logistic regression models were estimated for transition patterns and initiation of electronic product use in Wave 2. Results Ever and current hookah smoking among SM adults (ever use Wave 1: 29% and Wave 2: 31%; current use Wave 1: 4% and Wave 2: 3%) was higher than heterosexuals (ever use Wave 1: 16% and Wave 2: 16%; current use Wave 1: 1% and Wave 2: 1%; both p < 0.0001). Among SM adults who reported hookah use at Wave 1, 46% quit hookah use at Wave 2; 39% continued hookah use and did not transition to other products while 36% of heterosexual adults quit hookah use at Wave 2 and 36% continued hookah use and did not transition to other products. Compared with heterosexuals, SM adults reported higher use of hookah plus e-products (Wave 2 usage increased by 65 and 83%, respectively). Conclusions Compared to heterosexuals, in addition to higher rates of hookah smoking, higher percentages of SM adults transitioned to hookah plus e-product use between 2013 and 2015. Results have implications for stronger efforts to increase awareness of the harmful effects of hookah as well as vaping, specifically tailored among SM communities. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-10389-5.
Background
Tobacco smoking using a hookah (i.e., waterpipe) is a global epidemic [1]. Contributing to hookah's popularity is the unsubstantiated belief that smoke is detoxified as it passes through water, rendering hookah a safer tobacco alternative [2][3][4]. Tobacco and alternative tobacco products are disproportionately being used by sexual minority (SM) adults (i.e., lesbian, gay and bisexual individuals) [5][6][7][8]. According to Wave 1 of the Population Assessment of Tobacco and Health (PATH) Study (2013-2014), 39.8% of lesbian/gay adults and 45.7% of bisexual adults reported current tobacco use, compared to 27.3% of heterosexual individuals [9]. Lesbian and bisexual women (18 years of age and older) had higher odds of experimental and regular use of hookah compared to heterosexual women [10]. Similarly, gay-identified men > 25 years of age had higher odds of experimental hookah use.
While prevalence rates provide useful information about hookah use among SM adults, to date virtually nothing is known about how SM hookah smokers have quit or transitioned over time to other tobacco products, including electronic nicotine delivery systems such as e-cigarettes. Understanding changes in tobacco use behavior over time is imperative for providing insight into the net population health impact of tobacco use as well as how to support quit efforts. This is especially important given the known tobacco-use disparities among SM individuals. Indeed, common smoking risk factors, including stress and depression (experienced at higher rates among SM adults compared to heterosexual adults), have been shown to play a vital role in the etiologies of tobacco-related disparities and may make quitting more difficult [11]. Additionally, the tobacco industry has aggressively targeted sexual and racial/ethnic minorities through specifically designed marketing campaigns, community outreach, and promotions [12]. Research shows these populations face a higher risk of being exposed to online tobacco marketing and are more likely to interact with tobacco-related messages on social media compared to their heterosexual counterparts [13][14][15][16]. In particular, SM women have reported more exposure to tobacco industry marketing than heterosexual women [17].
Among the general population, recent longitudinal nationally representative data from the PATH study show that while the overall prevalence of tobacco product use decreased (from 28 to 26%) from Wave 1 (2013-2014) to Wave 2 (2014-2015), over half of U.S. adult tobacco users transitioned in product use or the combination of products used [18]. Among Wave 1 tobacco users, 72% of young adults (18-24 years of age) transitioned to use of other products, including non-combustible and electronic nicotine devices, and 20.7% discontinued use completely; 45.9% of older adults (> 25 years of age) transitioned to other products, with 12.5% discontinuing use completely.
Transitions in hookah use to other tobacco products or quitting all together among SM adults remains unknown. Accordingly, using Wave 1 (2013-2014) and Wave 2 (2014-2015) survey data from the PATH Study, the objective of this study was to characterize transitions between hookah smoking and use of other tobacco products, including cigarettes, cigars, cigarillos, smokeless tobacco, pipe tobacco, snus pouches, dissolvable tobacco and electronic (e-) nicotine products, among SM adults aged 18 years of age and older versus their heterosexual counterparts.
Study design
We used data for adults 18 years and older from Wave 1 (September 12, 2013, to December 14, 2014) and Wave 2 (October 23, 2014, to October 30, 2015) of the PATH Study, a nationally representative, longitudinal cohort study of non-institutionalized adult and youth residents of the U.S. ages 12 and older. The PATH Study was designed to collect data on use patterns, risk perceptions, attitudes, and health outcomes associated with tobacco and alternative tobacco products [19]. The PATH study design oversampled adult tobacco users, young adults (aged 18-24), and African-American adults, relative to population proportions. Weighting procedures adjusted for oversampling and allowed for representation of the noninstitutionalized, civilian US population. A detailed overview of the PATH study design and methods is reported elsewhere [19,20]. The PATH study was approved by Westat's Institutional Review Board, and the United States Office of Management and Budget approved the data collection. Secondary data analysis of the PATH Study Files was approved by the University of California, Los Angeles Institutional Review Board.
Socio-demographic characteristics
Data on sex (male vs. female) and sexual orientation (lesbian, gay, bisexual or something else vs. heterosexual) were collected during each wave. Sexual orientation was self-identified by asking respondents to answer the following question: "Do you think of yourself as: (a) "Lesbian or gay", (b) "Straight, that is, not lesbian or gay", (c) "Bisexual", or (d) "Something else"". Participants who reported "something else" were probed to provide additional clarifying information (i.e., identifying with other labels such as queer, transgender, being in the process of figuring out their sexual orientation, not having a sexuality, not using such labels, or something else). For the purposes of this paper, sexual minorities were defined as lesbian or gay, bisexual, or something else, while heterosexuals were defined as straight. Additional demographic data included age, race/ethnicity, education level, marital status, health insurance status, and annual household income. Age in years was classified as 18-24, 25-34, 35-44, 45-54, and ≥ 55. Race/ethnicity was classified as white non-Hispanic, black non-Hispanic, other non-Hispanic, and Hispanic. Education level was categorized as college or no college. Marital status was categorized as married and non-married; non-married included widowed, divorced, separated, or never married. Annual household income was categorized as < $25,000, $25,000-49,999, $50,000-99,999, and ≥ $100,000.
Hookah and tobacco use patterns and transitions
Ever hookah use was defined as lifetime use. Current hookah use was defined as currently smoking hookah every day or some days (in the past 30 days). Study participants were not mutually exclusive to hookah use; that is, some participants who used hookah may also have used other tobacco products, including cigarettes, traditional cigars, filtered cigars, cigarillos, electronic devices, smokeless tobacco (i.e., loose snus, moist snuff, dip, spit, or chewing tobacco), pipe tobacco, snus pouches, or dissolvable tobacco. Categories of single- and multiple-product use for the purposes of this paper are described in more detail in the next section.
Among the subset of respondents who reported current hookah-only use (no other tobacco products) at Wave 1, four types of transitions to Wave 2 tobacco products were examined: (a) no transition in hookah use (i.e., hookah use at Wave 2 as at Wave 1); (b) continued hookah and transitioned to other tobacco product(s) (i.e., hookah plus other tobacco product(s) use at Wave 2); (c) quit hookah and transitioned to other tobacco product(s) (i.e., no hookah use but other tobacco product(s) use at Wave 2); and (d) quit all tobacco use (i.e., no use of hookah or any tobacco product at Wave 2).
Co-use of tobacco, alternative tobacco products and nicotine delivery systems

To assess for co-use of other tobacco and e-nicotine products, single, dual and poly hookah use were examined using five broad product categories: (a) hookah; (b) cigarettes; (c) e-products (i.e., e-cigarettes, e-hookah (Wave 2 only), e-pipe (Wave 2 only)); (d) smokeless tobacco (i.e., snus, moist snuff, dip, spit, chewing tobacco or dissolvable tobacco); and (e) other combustibles (i.e., traditional cigars, filtered cigars, cigarillos, pipe tobacco). Those who used hookah only were classified as single hookah users. Those who concurrently used hookah plus one other product category were classified as dual hookah users, and those who concurrently used hookah plus two or more other product categories were classified as poly hookah users. To highlight transitions involving e-products, the following three categories were also examined: (a) hookah; (b) hookah plus e-products; and (c) hookah plus other tobacco products, including cigarettes, smokeless tobacco and other combustibles.
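To make the categorization concrete, the following is a minimal Python sketch of the single/dual/poly classification logic described above. The field names are hypothetical and do not correspond to actual PATH variable names.

```python
# Hypothetical product-category flags; these are not the PATH variable names.
OTHER_CATEGORIES = ["cigarettes", "e_products", "smokeless", "other_combustibles"]

def classify_hookah_use(respondent):
    """Classify a current hookah user as a 'single', 'dual', or 'poly' user.

    `respondent` maps category names to booleans indicating current use.
    Respondents who do not currently use hookah fall outside the scheme.
    """
    if not respondent.get("hookah", False):
        return None
    n_other = sum(bool(respondent.get(cat, False)) for cat in OTHER_CATEGORIES)
    if n_other == 0:
        return "single"  # hookah only
    if n_other == 1:
        return "dual"    # hookah plus one other product category
    return "poly"        # hookah plus two or more other product categories

# Example: a hookah user who also uses cigarettes and e-products -> 'poly'
print(classify_hookah_use({"hookah": True, "cigarettes": True, "e_products": True}))
```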
Statistical analyses
Weighted percentages and means, along with their corresponding 95% confidence intervals (CI), were calculated using SAS 9.4. Analyses were estimated using the balanced repeated replication (BRR) method with Fay's variant, utilizing the replicate weights. Comparisons between groups (SM vs. heterosexuals) or between gender subgroups within SM identity groups on demographic variables were determined with Rao-Scott chi-square tests. Supplemental multivariable survey-weighted logistic regression analyses further explored sociodemographic characteristics associated with ever and current hookah use at Wave 1 and Wave 2. Age, gender, sexual orientation, race/ethnicity, education, income, and insurance, as well as two-way interactions of sexual orientation with the other predictors, were included in the models.
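As an illustration of the BRR variance estimation described above, the sketch below computes a survey-weighted mean and a Fay-adjusted BRR confidence interval. The Fay coefficient of 0.3 and the normal critical value are illustrative assumptions, not the study's exact settings.

```python
import numpy as np

def weighted_mean(x, w):
    """Survey-weighted mean of x under weights w."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    return np.sum(w * x) / np.sum(w)

def fay_brr_ci(x, full_weights, replicate_weights, fay=0.3, z=1.96):
    """Point estimate and approximate 95% CI via Fay's variant of BRR.

    replicate_weights has shape (R, n): one row of weights per replicate.
    Variance = sum_r (theta_r - theta)^2 / (R * (1 - fay)^2).
    """
    theta = weighted_mean(x, full_weights)
    reps = np.array([weighted_mean(x, w_r) for w_r in replicate_weights])
    var = np.sum((reps - theta) ** 2) / (len(reps) * (1.0 - fay) ** 2)
    se = np.sqrt(var)
    return theta, (theta - z * se, theta + z * se)
```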
Additional multivariable survey-weighted multinomial logistic regression models were estimated for selected transition patterns. For these analyses, because of sparse data coverage, age categories were collapsed to 18-24 and 25 years of age or older. Models were developed in a stepwise manner, adding one main effect at a time (the same predictors as listed for the logistic regression) and retaining those with p < 0.40, and similarly for relevant interactions of sexual orientation with the included effects, until estimation failed. The data would not support estimation of models with all possible transition categories for use status and multi-product use; thus, more general categories were defined. The first analysis considered transitions from hookah-only use in Wave 1 to use status in Wave 2, specifically the following patterns: continued use of hookah only, use of other tobacco products in addition to hookah, use of other tobacco products but no hookah use, and no use of any tobacco product. The raw sample size was 322; only main effects of age, gender, and sexual orientation could be included in this model for estimation to be attained. The second transition analysis considered specifically the initiation of electronic product use in Wave 2 for Wave 1 current hookah users. Transition patterns included 1) consistency of product use from Wave 1 to Wave 2 (i.e., continuous hookah-only, continuous hookah plus e-products [with or without other tobacco products], or continuous hookah plus other tobacco products [no e-products]); 2) initiation of e-product use at Wave 2 in addition to continued hookah use; 3) continued hookah use along with any other change in use of other tobacco products; and 4) cessation of hookah use. The raw sample size was 743; only the age-by-sexual-orientation interaction could be included in the model along with the main effects for age, gender, sexual orientation, race/ethnicity, and health insurance status.
In multivariable models, relatively few of the included sociodemographic characteristics (or their interactions with sexual minority status) were consistently and significantly associated with ever or current hookah use (Supplemental Table 1). A consistently significant effect across both waves for ever and current use was the gender-by-sexual-minority interaction, with sexual minority hookah users being more likely to be female than male relative to heterosexual hookah users (supporting the simpler comparisons described above). Education was another consistently significant effect, whereby those with some college were more likely to report ever or current use than those with no college. In 3 of the 4 models in which the age main effect can be interpreted (Waves 1 and 2 ever use and Wave 1 current use), the older age groups were less likely than the 18-24 year old group to report ever or current use of hookah, with decreasing likelihood as age increased.
Hookah smoking transitions from Wave 1 to Wave 2

Figure 1 depicts transition patterns, broken down by gender and age, among SM adult Wave 1 current hookah-only smokers versus their heterosexual counterparts. Among SM current hookah smokers at Wave 1 who did not use other tobacco products (referred to here as "current hookah-only smokers"), 38.8% continued to […]

Results of the multinomial model of the four transition categories showed a gender main effect (Supplemental Table 2), with Wave 1 female hookah-only users less likely to discontinue all hookah and tobacco use by Wave 2 or to take up use of other tobacco products; that is, females were more likely to remain consistent in their hookah-only use. Unfortunately, limited data coverage did not allow the inclusion of interaction effects in the multivariable model; thus, the multivariable analysis could not confirm all the comparisons reported in the previous paragraphs.
Co-use of tobacco, alternative tobacco product and electronic nicotine products

As shown in Fig. 2 and Fig. 3, cigarettes were the most common tobacco product used in combination with hookah among Wave 1 and Wave 2 SM and heterosexual adult current hookah users. Among Wave 1 SM hookah users, 36.87% reported single hookah use, 37.00% reported dual hookah use and 25.04% poly hookah use. While single and dual hookah use decreased among SM adults in Wave 2 (29.73 and 33.16%, respectively), poly use increased to 35.08%. Among Wave 1 heterosexual adult hookah users, 45.86% reported single hookah use, 32.21% reported dual hookah use and 20.76% poly hookah use. In Wave 2, single hookah use decreased to 40.19%, but both dual and poly use increased among heterosexuals (36.09 and 23.00%, respectively). Dual hookah plus e-products use increased similarly among SM and heterosexual adults from Wave 1 to Wave 2 (by 97 and 99%, respectively). A higher percentage of SM adults reported poly hookah use at Wave 2 compared with heterosexual adults (an increase of 40% vs. 11%, respectively). While hookah plus e-product use (with or without other tobacco product(s)) increased significantly at Wave 2 among both heterosexual and SM adults (by 65 and 83%, respectively), both hookah-only use and hookah plus other tobacco use decreased similarly among SM and heterosexual adults (Fig. 4).
An additional perspective on Wave 2 initiation of multi-product use by Wave 1 current hookah users is provided by the multinomial logistic results for selected transition patterns (Supplemental Table 3); the model included sexual minority status as a predictor, as well as gender, age, race/ethnicity, and health insurance status. Four transition patterns were considered: consistent hookah and other product use across the two waves, initiation of e-products with continued hookah use, other change in multi-product use, and cessation of hookah use. Few differences in transitions were distinguishable in terms of sociodemographic characteristics. In this multivariable model, non-Hispanic Blacks were more likely (OR = 1.22) than Hispanics to initiate e-products over consistent product use. A significant age-by-sexual-minority-status interaction was seen specifically for cessation of hookah use as compared to consistent product use: older sexual minorities were more likely to cease hookah use than to maintain a consistent multi-product use pattern (OR = 5.93, calculated from coefficients shown in Supplemental Table 3). This pattern of cessation of hookah use is consistent with the simpler comparative results described in the previous section for the subsample of hookah-only users.
Discussion
Using nationally representative data, we sought to characterize transitions between hookah smoking and use of other tobacco products among SM adults versus their heterosexual counterparts. This study provides two novel insights into these transitions. First, our results demonstrate higher rates of ever and current hookah use among SM adults compared to their heterosexual counterparts. Second, while 46% of SM adults reported quitting hookah smoking by Wave 2, hookah plus e-product use (with or without other tobacco product(s)) among current hookah users markedly increased at Wave 2 among SM adults (Wave 1: 19%; Wave 2: 34%), compared to the increase among heterosexual individuals (Wave 1: 16%; Wave 2: 26%). It is noteworthy that among SM adult current hookah smokers, dual hookah plus e-product use (without other tobacco product(s)) increased by 97% at Wave 2 (Wave 1: 3%; Wave 2: 6%), with comparable trends among heterosexual individuals (Wave 1: 5%; Wave 2: 9%).
While the investigation into the cause of the recent epidemic of vaping-induced deaths and illness is still ongoing [21,22], our findings highlight vital trends regarding the rapid uptake of vaping (using various e-products such as e-cigarettes, e-hookahs and e-pipes) among SM hookah smokers. Over a two-year period, our nationally representative findings show that hookah plus e-product use (with or without other tobacco product(s)) increased by 83% among SM adults, compared to 65% among heterosexual individuals. In light of these findings, and because there is limited evidence for interventions that address common misperceptions of potential hookah harms [23,24], our study emphasizes the need for strong efforts to increase awareness of the harmful effects of hookah as well as vaping, targeted towards sexual minority populations. Our findings also illustrate the importance of feasible and effective health education programming and communication efforts specifically tailored to SM communities; for example, programming that could prevent the onset or continued use of hookah and vaping products, and cessation programs designed to reach SM populations. Indeed, evidence suggests that few anti-tobacco campaigns have been designed specifically to reach sexual minority populations [25].
Use of alternative tobacco products such as hookah has risen sharply in the past decade [9,26]. Few studies have examined hookah use among SM populations. A prior analysis of PATH data from Wave 1 found that SM individuals had higher odds of hookah use compared to heterosexual individuals [10]. Similarly, nationally representative data from Legacy's Young Adult Cohort Study show that ever hookah use was significantly higher among SM respondents compared with those who identified as heterosexual [27]. Our analyses confirm these findings by showing that, over a two-year period, SM adults continued to have significantly higher rates of hookah use compared with heterosexual adults. Furthermore, our analysis extends these findings by demonstrating that a larger percentage of SM adult hookah smokers, specifically males, compared to heterosexuals, reported quitting hookah smoking at Wave 2. While it is unknown whether these individuals may return to hookah use, future analysis of additional waves of the PATH Study may provide further insight into longer-term patterns of hookah use within SM populations.
There is growing concern that hookah smoking may function as a gateway to other tobacco products and harmful substances. A recent prospective analysis of the PATH Study (2013-2015) indicates that hookah use is independently associated with subsequent smoking in the year ahead [28]. This finding is consistent with other studies demonstrating that hookah use is associated with more than double the odds of subsequent initiation of cigarette smoking [29]. Our analyses demonstrate that a large majority of SM current hookah smokers (63% in Wave 1 and 70% in Wave 2) reported using hookah plus other tobacco products, with cigarettes being the most common tobacco product used in combination with hookah. While multiple factors may explain our findings, flavored tobacco products have previously been shown to serve as starter products for regular tobacco use [30]. Indeed, sexual minority status has been shown to be associated with use of flavored tobacco products [31,32], and evidence shows that the tobacco industry has selectively targeted the marketing of products to sexual minority individuals [12,13,15,17]. In addition to tobacco and menthol flavors, hookah tobacco comes in fruit, candy, and alcohol flavors; while the 2009 Family Smoking Prevention and Tobacco Control Act banned characterizing flavors other than menthol in cigarettes, this ban does not extend to hookah [33]. Our findings build upon previous work highlighting the need for robust regulation to reduce the appeal of flavored tobacco, specifically among SM communities.
There are several limitations to this study. Respondents' smoking status was not biochemically verified. Although this study focused exclusively on SM adults, it is important to note gender differences within the SM and heterosexual samples when addressing transitions. Combining sexual minority subgroups (i.e., lesbian women, bisexual men) may mask unique differences with regard to hookah use prevalence and transitions, and may obscure subgroup-specific health needs. Because the PATH questionnaire's 'something else' category encompasses a highly heterogeneous group that may not necessarily fit the definition of "sexual minority" (e.g., genderqueer people), future research should include specific questions to identify gender-diverse individuals and better understand hookah tobacco trends among gender minorities as well as sexual minorities [34]. PATH Study data were self-reported, and responses may therefore underrepresent the SM community because of the stigma surrounding sexual orientation. Further exploration is needed with longitudinal models that can accommodate the complex survey weights as well as capture behavior change and time-dependent covariates.
Conclusions
This study is one of the first to characterize quitting as well as transitions between hookah smoking and use of other tobacco products among SM adult hookah smokers using a nationally representative sample in the United States. In addition to higher rates of hookah use among SM adults, higher percentages of SM adults transitioned to hookah plus e-product use between 2013 and 2015 compared to their non-minority peers. Considering our findings in light of the study limitations and the context of the limited literature, future work should aim to further examine mechanisms that drive the higher rates of hookah use among SM individuals and how these drivers may differ by unique SM subgroups (i.e., socialization/affiliation versus stress processes may differ by subgroup). Finally, information regarding the harmful effects of hookah use should be tailored to reach diverse sexual minority communities.
Concatenated Reed-Solomon and Polarization-Adjusted Convolutional (PAC) Codes
Two concatenated coding schemes incorporating algebraic Reed-Solomon (RS) codes and polarization-adjusted convolutional (PAC) codes are proposed. Simulation results show that at a bit error rate of $10^{-5}$, a concatenated scheme using RS and PAC codes has more than $0.25$ dB coding gain over the NASA standard concatenation scheme, which uses RS and convolutional codes.
I. INTRODUCTION
REED-SOLOMON (RS) codes are a class of algebraic block-based error-correcting codes with a wide range of applications in digital communications and storage [1]. In this paper, by benefiting from the RS decoder's burst-error-correction capability, we introduce two concatenation schemes to improve the error-correction performance of polarization-adjusted convolutional (PAC) codes [2]. Fig. 1 demonstrates the general concatenation scheme of RS and PAC (RS-PAC) codes. Using a multilevel PAC code as an inner code makes the RS code see a superchannel (consisting of a multilevel PAC encoder, the physical channel, and a multilevel PAC decoder). A PAC code of block length $N$ may introduce a burst error of size up to $N$, and the parameters of the outer RS code should be defined such that it can correct these burst errors. Also, using an interleaver helps to scatter the error bursts of length up to $N$ of the superchannel between different RS codes and further improves the error-correction capability of the concatenated RS-PAC codes.
Devised by Forney [3], concatenated codes mainly use an RS code as the outer code to handle burst errors. Using a convolutional code (CC) as an inner code under Viterbi decoding introduces short bursts of errors of size up to the encoder memory $m$. Constructing the RS code over the field GF($2^s$) with symbol length $s$ such that $s > m$ helps the concatenated code handle the burst bit errors that could not be corrected by the inner CC.
Employing a block code like a Reed-Muller (RM) code with soft-decision decoding [4], or a convolutional code under a sequential decoder [5], as an inner code makes the superchannel introduce some scattered random errors. For this reason, the inner code generally has a short block length, and the resulting concatenated code may have worse error-correction performance. Falconer [5] suggested using a CC of length $N = I \times n$ under sequential decoding as the inner code. As this inner code introduces scattered errors similar to the block codes, Falconer suggested constructing the RS code over GF($2^N$) or, alternatively, using $I$ parallel RS codes over GF($2^n$). This requires a large number of parallel RS codes and may result in a worse tradeoff between complexity and error-correction performance compared to using a CC under Viterbi decoding.
PAC codes under sequential decoding have variable computational complexity. However, the average computational complexity is low [6], which makes PAC codes suitable as an inner code in a multilevel concatenated scheme. Performance results show that our concatenated RS-PAC codes have better error-correction performance than an RS-CC scheme, with much lower average computational complexity. Simulation results also show that an RS-PAC code has error-correction performance comparable to an RS-RM code, with much lower computational complexity.
Throughout this paper, PAC, RM, and convolutional codes are over the binary Galois field GF(2), and RS codes are over GF($2^8$). We use boldface notation for matrices and vectors. For a vector $\mathbf{x} = (x_1, x_2, \ldots, x_N)$, $\mathbf{x}^i$ denotes the subvector $(x_1, x_2, \ldots, x_i)$. For any subset of indices $\mathcal{A} \subset \{1, 2, \ldots, N\}$, $\mathbf{x}_{\mathcal{A}}$ represents the subvector $(x_i : i \in \mathcal{A})$, and $\mathcal{A}^c$ denotes the complement of the set $\mathcal{A}$. For a matrix $\mathbf{G}$, $\mathbf{G}_{\mathcal{A},\mathcal{B}}$ denotes the submatrix of $\mathbf{G}$ whose rows are selected by the set $\mathcal{A}$ and whose columns are selected by the set $\mathcal{B}$.
The remainder of this paper is organized as follows. Section II reviews the blocks and parameters of PAC codes. In Section III, encoding and decoding of RS codes are briefly reviewed. Section IV proposes two concatenated RS-PAC codes and provides simulation results and comparisons. Finally, Section V concludes the paper.

II. PAC CODES

Fig. 2 shows a block diagram of an $(N, K, \mathcal{A}, \mathbf{c})$ PAC coding scheme, where $N$ is the code length, $K$ is the data word length, $\mathcal{A}$ is the data indexing set (a.k.a. rate profile), and $\mathbf{c}$ is the connection polynomial of the convolutional code. In this paper, we use $\mathbf{c} = 3211$ (in octal form) [7]. $\mathbf{h} = (h_1, \ldots, h_K)$ is the source word, generated uniformly at random over all possible source words of length $K$ in the binary field GF(2). The data insertion block maps these $K$ bits into a data carrier vector $\mathbf{v}$ in accordance with the data set $\mathcal{A}$, thus inducing a code rate of $R = K/N$. In this paper, for PAC(128, 64) codes, an RM rate profile [2] is used. For PAC(64, 32) and PAC(64, 40) codes, we obtain the rate profiles according to the method introduced in [8] at 5 and 6 dB signal-to-noise ratio (SNR) values, respectively. In a non-systematic PAC encoder, after $\mathbf{v}$ is obtained by setting $\mathbf{v}_{\mathcal{A}} = \mathbf{h}$ and $\mathbf{v}_{\mathcal{A}^c} = \mathbf{0}$, it is sent to the convolutional encoder and encoded as $\mathbf{u} = \mathbf{v}\mathbf{T}$, where $\mathbf{T}$ is an upper-triangular Toeplitz matrix whose first row is obtained from the connection polynomial $\mathbf{c}$. Then $\mathbf{u}$ is transformed to $\mathbf{x}$ with the standard polar transformation $\mathbf{F}^{\otimes n}$, where $\mathbf{F}^{\otimes n}$ is the $n$th Kronecker power of $\mathbf{F} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$ with $n = \log_2 N$. In this paper, we use a systematic PAC encoder [9], in which the data word $\mathbf{h}$ is encoded to $\mathbf{x}$ via the generator matrix $\mathbf{G} = \mathbf{T}\mathbf{F}^{\otimes n}$. After the PAC encoding, $\mathbf{x}$ is sent through the channel. The polar demapper receives the channel output $\mathbf{y}$ and the previously decoded bits and calculates the log-likelihood ratio (LLR) value of the current bit $z_i$. Finally, the sequential decoder outputs an estimate of the carrier word $\hat{\mathbf{v}}$, from which the $K$-bit data can be extracted according to $\mathcal{A}$. Adopting the notation used in [6], the partial path metric for the first $i$ branches is defined in terms of the channel output $\mathbf{y}^N$, the partial convolution output $\mathbf{u}^i$, and the $j$th bit-channel cutoff rate $E_0(1, W_N^{(j)})$ [6].
III. REED-SOLOMON CODES

RS codes are nonbinary linear block error-correcting codes and are a subset of nonbinary BCH codes [1], [10]. An $(n, k, d_{\min})$ RS code over GF($2^s$) is guaranteed to correct up to $t = \lfloor (d_{\min} - 1)/2 \rfloor$ symbol errors, where $n$ is the RS code block length, $k$ is the data length, and $d_{\min}$ is the minimum Hamming distance of the code. A symbol corrected by an RS code may contain $1, 2, \cdots, s$ bit errors. For this reason, RS codes are well suited to correcting burst errors (contiguous sequences of bits in error). As an example, a (255, 223, 33) RS code over GF($2^8$) can correct up to 16 symbol errors (up to $16 \times 8$ bits). RS codes are maximum distance separable (MDS) codes, meaning that the $d_{\min}$ of an RS code is the largest possible minimum distance for a given $n$ and $k$ ($d_{\min} = n - k + 1$). When the number of symbol errors in the received data is at most $t$, an RS code's algebraic decoder will always correctly decode the received vector. However, if the number of symbol errors exceeds $t$, the decoder may declare a decoding failure, depending on the distribution and number of errors; otherwise, the decoder wrongly decodes to another valid codeword.
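The relationships between $n$, $k$, $d_{\min}$, and $t$ quoted above are simple enough to state directly in code; the following is a small illustrative helper, not part of the authors' implementation.

```python
def rs_t(n, k):
    """Symbol-error-correcting capability t of an (n, k) MDS RS code,
    using d_min = n - k + 1 and t = floor((d_min - 1) / 2)."""
    d_min = n - k + 1
    return (d_min - 1) // 2

print(rs_t(255, 223))  # -> 16, matching the (255, 223, 33) example above
```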
In this paper, RS codes are defined over GF($2^8$), and each code symbol is one byte. In our proposed concatenated RS-PAC code, we use a systematic encoder of the RS code, which takes $k$ data symbols ($8k$ bits) as input and appends $n - k$ parity symbols to form an $n$-symbol ($8n$ bits) codeword. We use the primitive polynomial $\pi(x) = 1 + x^2 + x^3 + x^4 + x^8$ to represent GF($2^8$). Let $\alpha$ be a primitive element in GF($2^8$) (the order of $\alpha$ is 255). For an $(n, k, d_{\min})$ RS code capable of correcting $t$ symbol errors, the code generator polynomial is
$$g(x) = (x - \alpha)(x - \alpha^2) \cdots (x - \alpha^{2t}).$$
The systematic encoding of the RS code is
$$h(x) = m(x)x^{n-k} + P(x),$$
where the codeword polynomial $h(x)$ corresponds to the codeword $\mathbf{h} = (h_0, h_1, \cdots, h_{n-1})$, $m(x)$ is the message polynomial, and the parity polynomial $P(x)$ is the remainder of the polynomial $m(x)x^{n-k}$ after division by $g(x)$. Since $g(\alpha^j) = 0$ for $j = 1, \ldots, 2t$, every codeword polynomial $h(x)$ is divisible by $g(x)$.
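For concreteness, here is a self-contained Python sketch of GF($2^8$) arithmetic under the primitive polynomial $\pi(x)$ quoted above, together with generator-polynomial construction and systematic encoding by polynomial division. It is a pedagogical sketch, not the implementation used in the paper.

```python
PRIM_POLY = 0x11D  # pi(x) = 1 + x^2 + x^3 + x^4 + x^8

def gf_mul(a, b):
    """Multiply in GF(2^8): carry-less multiply, reducing modulo pi(x)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:       # degree reached 8: subtract (XOR) pi(x)
            a ^= PRIM_POLY
    return result

# Antilog table for the primitive element alpha = x (i.e., 0x02)
EXP = [0] * 256
acc = 1
for i in range(255):
    EXP[i] = acc
    acc = gf_mul(acc, 0x02)
EXP[255] = EXP[0]

def poly_mul(p, q):
    """Multiply polynomials over GF(2^8); coefficients lowest-degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= gf_mul(pi, qj)
    return out

def rs_generator_poly(t):
    """g(x) = (x - alpha)(x - alpha^2)...(x - alpha^(2t)); '-' = '+' here."""
    g = [1]
    for j in range(1, 2 * t + 1):
        g = poly_mul(g, [EXP[j], 1])
    return g

def rs_systematic_encode(msg, n, k):
    """Codeword coefficients (lowest-degree first): parity P(x) followed by
    the message, i.e., h(x) = m(x) x^(n-k) + P(x), P = m(x)x^(n-k) mod g(x)."""
    g = rs_generator_poly((n - k) // 2)
    rem = [0] * (n - k) + list(msg)        # m(x) * x^(n-k)
    for i in range(len(msg) - 1, -1, -1):  # long division by monic g(x)
        coef = rem[i + n - k]
        if coef:
            for j, gj in enumerate(g):
                rem[i + j] ^= gf_mul(coef, gj)
    return rem[: n - k] + list(msg)

codeword = rs_systematic_encode([1] * 223, 255, 223)  # (255, 223) example
assert len(codeword) == 255
```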
After receiving a channel output $r(x)$, an RS decoder attempts to correct up to $t$ symbol errors by identifying both the error locations and the error values. The first step of decoding is to calculate the $2t$ syndromes of the received polynomial $r(x)$ as $S_j = r(\alpha^j)$ for $j = 1, 2, \cdots, 2t$. Assume that the received channel output has $\delta$ symbol errors. The second step of decoding is to obtain the error locator polynomial $\Lambda(x)$ of degree $\delta$, whose $\delta$ roots are the reciprocals of the error locations. Obtaining $\Lambda(x)$ can be done using the Euclidean or the Berlekamp-Massey algorithm [11], [12]. The Euclidean algorithm is easier to implement; however, the Berlekamp-Massey algorithm is more efficient for both hardware and software implementations. The Berlekamp-Massey algorithm's computational complexity is of order $O(\delta^2)$. After obtaining $\Lambda(x)$, the decoder's job is to find its roots. One inefficient way to determine the roots of $\Lambda(x)$ is to examine every element of the finite field and check whether it is a root of the error locator polynomial. Chien search is an efficient way to find the roots of $\Lambda(x)$ [13]. If the Chien search results in fewer than $\delta$ distinct roots (the degree of $\Lambda(x)$), the decoding algorithm can declare a decoder failure (roots may be repeated or lie in an extension field of GF($2^8$)).
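The syndrome computation and root search described above can likewise be sketched directly; the brute-force evaluation below stands in for the more efficient iterative Chien search, and gf_mul/EXP repeat the helpers from the previous sketch so the block runs on its own.

```python
PRIM_POLY = 0x11D

def gf_mul(a, b):
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= PRIM_POLY
    return result

EXP = [0] * 256
acc = 1
for i in range(255):
    EXP[i] = acc
    acc = gf_mul(acc, 0x02)

def poly_eval(p, x):
    """Evaluate p (coefficients lowest-degree first) at x via Horner's rule."""
    acc = 0
    for coef in reversed(p):
        acc = gf_mul(acc, x) ^ coef
    return acc

def syndromes(received, t):
    """S_j = r(alpha^j) for j = 1..2t; all zeros means no detected error."""
    return [poly_eval(received, EXP[j]) for j in range(1, 2 * t + 1)]

def error_positions(lam):
    """Positions i with Lambda(alpha^-i) = 0, i.e., positions whose locators
    alpha^i have reciprocals among the roots of the error locator polynomial."""
    return [i for i in range(255)
            if poly_eval(lam, EXP[(255 - i) % 255]) == 0]
```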
As the RS code is nonbinary, in addition to finding the error locations $X_i^{-1}$, the decoder should determine the error values as well. According to Forney's algorithm [14], the error polynomial $e(x)$ is computed from the error-evaluator polynomial $\Omega(x) = S(x)\Lambda(x) \pmod{x^{2t}}$ and the formal derivative $\Lambda'(x)$ of $\Lambda(x)$. For the $i$th error location $X_i^{-1}$, $e_i = e(X_i^{-1})$ is the $i$th error value. Finally, the recovered codeword polynomial is $\hat{h}(x) = r(x) + e(x)$.
IV. RS-PAC CONCATENATED CODES
A single-level concatenated coding scheme usually employs a nonbinary code such as an RS code as the outer code and a binary code such as a CC as the inner code. For a CC of memory size $m$ as the inner code, a single incorrect decoding decision might give rise to a burst decoding error of length $m$. Benefiting from an RS code over GF($2^m$), a burst of $m$ bit errors introduced by the inner code is interpreted as one symbol error by the outer RS code. An RS code capable of correcting $t$ symbol errors can correct up to $t$ of these burst errors. This concatenation results in a powerful code with excellent error-correction performance [15]. In a PAC code, because of the polarization effect, the CC sees a channel with a memory of $N$, and thus a wrongly decoded bit may result in a burst error of size up to $N$ bits.
Overall, each of the 63 parallel PAC encoders receives a vector $\mathbf{h}_i$ of length 4 symbols (32 bits), for $i$ from 1 to 63. The output of each PAC encoder is sent through one of 63 copies of the channel, and the channel outputs are decoded with the corresponding PAC decoders to obtain an estimate $\hat{\mathbf{h}}_i$ of $\mathbf{h}_i$. The output of the 63 parallel PAC decoders is denoted by the vector $\hat{\mathbf{H}} = (\hat{\mathbf{h}}_1, \hat{\mathbf{h}}_2, \cdots, \hat{\mathbf{h}}_{63})$, where each $\hat{\mathbf{h}}_i$ has 4 symbols (32 bits), and $\hat{\mathbf{H}}$ has a length of 252 symbols. Finally, the RS decoder receives the vector $\hat{\mathbf{H}}$ and outputs the estimated data $\hat{\mathbf{D}}$, which has a length of 220 symbols.
Note that the (252, 220, 33) RS code can correctly decode up to 16 symbol errors ($4 \times 32$ bits). In the case that the RS decoder declares a decoding failure, the RS-PAC concatenated code outputs the first 220 symbols of the RS decoder input $(\hat{\mathbf{h}}_1, \hat{\mathbf{h}}_2, \cdots, \hat{\mathbf{h}}_{55})$. Alternatively, this RS-PAC coding scheme can be constructed using a (240, 208, 33) RS code as the outer code and PAC(128, 64) or PAC(256, 128) codes as the inner codes. The former uses 30 parallel PAC(128, 64) codes, while the latter uses 15 parallel PAC(256, 128) codes. Fig. 4 plots the bit-error-rate (BER) performance of the proposed RS-PAC concatenated code compared to the BER performance of a PAC(64, 32) code. For SNR values above 2.5 dB, the error-correction performance of the RS-PAC concatenated scheme is significantly better than that of the PAC code, with a coding gain of 1.3 dB at BER $= 10^{-5}$. For SNR values below 2.5 dB, the PAC code performs slightly better than the RS-PAC code.
B. RS-PAC Concatenated Codes with Interleaver
In this part, we study the effect of adopting an interleaver and deinterleaver in the RS-PAC concatenated code, as illustrated in Fig. 5. In concatenated codes, an interleaver and a deinterleaver are commonly used between the inner and outer codes. Since the deinterleaver shuffles the output of the superchannel, possible long error bursts are distributed between multiple outer RS codes. In this manner, the superchannel turns into an effectively random channel in which multiple outer codes can handle long error bursts.
To explain the interleaving and deinterleaving operations, we use the matrices $\mathbf{H}$ and $\hat{\mathbf{H}}$, respectively. To interleave, the input sequences are written into the rows of the matrix $\mathbf{H}$, and the inner code reads the data from $\mathbf{H}$ in column order. To deinterleave, the output of each PAC decoder is written into the columns of the matrix $\hat{\mathbf{H}}$, and the outer code reads the data from $\hat{\mathbf{H}}$ in row order.
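A minimal sketch of this row/column interleaving with the dimensions used below (8 RS codewords of 255 symbols each) might look as follows; it only illustrates the matrix bookkeeping, not the codes themselves.

```python
import numpy as np

def interleave(rs_outputs):
    """Write RS codewords into the rows of H; read columns for the inner code."""
    H = np.asarray(rs_outputs)            # shape (num_rs_codes, n)
    return [H[:, i] for i in range(H.shape[1])]

def deinterleave(pac_outputs):
    """Write each PAC decoder output into a column of H-hat; read out rows."""
    H_hat = np.column_stack(pac_outputs)
    return [H_hat[i, :] for i in range(H_hat.shape[0])]

# 8 codewords of 255 symbols -> 255 column vectors of 8 symbols (64 bits) each
cols = interleave(np.zeros((8, 255), dtype=np.uint8))
rows = deinterleave(cols)
assert len(cols) == 255 and len(rows) == 8
```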
For encoding, we use eight parallel (255, 223, 33) RS codes with systematic encoders as outer codes. We use an $8 \times 255$ matrix $\mathbf{H}$ to store the parallel RS codes' outputs. Each row of the matrix $\mathbf{H}$ stores the output vector of the corresponding RS encoder. The last 32 columns of $\mathbf{H}$ are for parity symbols. Consequently, $\mathbf{H}$ can be expressed as $\mathbf{H} = (\mathbf{h}_1, \mathbf{h}_2, \cdots, \mathbf{h}_{255})$, where the vector $\mathbf{h}_i$ is the $i$th column of $\mathbf{H}$, with a length of 8 symbols (64 bits). We use 255 parallel PAC(128, 64) encoders to encode the column vectors $\mathbf{h}_i$ for $i$ from 1 to 255. The output of each PAC encoder is sent through one of 255 copies of the channel, and the channel outputs are decoded with the corresponding PAC decoders to obtain the estimate $\hat{\mathbf{h}}_i$ of $\mathbf{h}_i$. These $\hat{\mathbf{h}}_i$ vectors are stored in $\hat{\mathbf{H}} = (\hat{\mathbf{h}}_1, \hat{\mathbf{h}}_2, \cdots, \hat{\mathbf{h}}_{255})$.
Finally, each row of the matrix $\hat{\mathbf{H}}$ is decoded using one of the 8 parallel RS decoders to obtain the data matrix estimate $\hat{\mathbf{D}} = (\hat{\mathbf{d}}_1, \hat{\mathbf{d}}_2, \cdots, \hat{\mathbf{d}}_{223})$.
Each row of the matrix $\hat{\mathbf{D}}$ is an estimate of the corresponding RS encoder input. If one of the RS decoders declares a decoding failure, we use the output of the deinterleaver as the estimate of the data; otherwise, we use the outputs of the RS decoders. Alternatively, we can use four or five copies of the (255, 223, 33) RS code as outer codes and 255 copies of PAC(64, 32) or PAC(64, 40) codes as inner codes, respectively. Fig. 6 demonstrates the BER performance of the proposed RS-PAC coding scheme when using PAC(128, 64) and PAC(64, 32) as the inner code. We compare the performance of RS-PAC(64, 32) and RS-PAC(128, 64) with the NASA standard RS-CC code [15, p. 761]. This standard uses a (255, 223, 33) RS code as the outer code and a rate-1/2, 64-state CC generated by the two polynomials $g_1(x) = 1 + x + x^3 + x^4 + x^6$ and $g_2(x) = 1 + x^3 + x^4 + x^5 + x^6$ as the inner code. This scheme has been employed (with an ideal interleaver) by NASA in some deep-space missions. Compared to RS-CC, RS-PAC(128, 64) has approximately 0.25 dB coding gain, whereas RS-PAC(64, 32) has a 0.25 dB coding loss at BER $= 10^{-5}$.
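Since the generator polynomials of the NASA standard inner code are given above, a shift-register encoder for it is easy to write down; the sketch below is illustrative and makes no claim about NASA's actual implementation details (e.g., bit ordering or trellis termination).

```python
G1_TAPS = (0, 1, 3, 4, 6)  # g1(x) = 1 + x + x^3 + x^4 + x^6
G2_TAPS = (0, 3, 4, 5, 6)  # g2(x) = 1 + x^3 + x^4 + x^5 + x^6

def cc_encode(bits):
    """Rate-1/2 convolutional encoder with memory m = 6 (64 states).

    window[d] holds the input bit d steps in the past, so the tap at x^d
    reads window[d]; each input bit produces two output bits."""
    state = [0] * 6
    out = []
    for b in bits:
        window = [b] + state
        out.append(sum(window[d] for d in G1_TAPS) % 2)
        out.append(sum(window[d] for d in G2_TAPS) % 2)
        state = window[:6]
    return out

print(cc_encode([1, 0, 1, 1]))  # 8 output bits for 4 input bits
```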
As the results show, in terms of error-correction performance, concatenating RS codes with PAC(128, 64) codes is more favorable than concatenating RS codes with CCs or using RS-PAC(64, 32) codes. Notice that the number of parallel RS codes employed in the RS-PAC(64, 32) code is 4, which is half the number used in RS-PAC(128, 64) and RS-CC. In terms of the number of outer codes, the RS-PAC(64, 32) concatenation scheme therefore has lower complexity. Fig. 6 also compares the BER performance of concatenating RS codes with the PAC(64, 40) code against the RS-RM(64, 40) scheme reported in [4]. For both, 5 parallel copies of the (255, 223, 33) RS code are used as outer codes, and 255 copies of PAC(64, 40) and RM(64, 40) codes are used as inner codes. Compared to RS-RM(64, 40), RS-PAC(64, 40) has approximately 0.1 dB coding loss at BER $= 10^{-6}$.
Besides error-correction performance, the decoding complexity of a coding scheme is an important comparison factor, especially for practical implementations. Since the outer codes of the concatenation schemes of Fig. 6 are all RS codes of the same decoding complexity, it makes sense to compare the decoding complexity of the inner codes instead of the overall decoding complexity. To measure the complexity of the inner decoders, we use the notion of the average number of visits (ANV) used in [8], which denotes the average number of times each bit is visited by the sequential decoder during a decoding session [16, p. 444]. The decoding complexity of the inner decoders of the concatenation schemes of Fig. 6 is plotted in Fig. 7; the ANVs of the PAC(64, 32) decoders are much less than the fixed ANV of Viterbi decoding. Although RS-PAC(64, 32) has the worst BER performance compared to its counterpart concatenated schemes, its inner code has the lowest ANV. Also, while the BER performance of RS-PAC(64, 40) is slightly worse than that of RS-RM(64, 40), its inner decoder complexity is significantly lower than that of RS-RM(64, 40).
V. CONCLUSION
We proposed two concatenated coding schemes that use PAC codes and RS codes as the inner and outer codes, respectively. We evaluated the BER and complexity performance of the proposed schemes and provided comparisons with similar concatenation schemes from the literature. Simulation results showed that concatenating PAC codes with RS codes significantly improves their error-correction performance. The results also showed that RS-PAC codes have significantly lower decoding complexity compared to RS-RM and RS-CC codes of the same code rate, while having superior error-correction performance when a proper PAC code is chosen.
Modeling gene expression cascades during cell state transitions
Summary

During cellular processes such as differentiation or response to external stimuli, cells exhibit dynamic changes in their gene expression profiles. Single-cell RNA sequencing (scRNA-seq) can be used to investigate these dynamic changes. To this end, cells are typically ordered along a pseudotemporal trajectory which recapitulates the progression of cells as they transition from one cell state to another. We infer transcriptional dynamics by modeling the gene expression profiles in pseudotemporally ordered cells using a Bayesian inference approach. This enables ordering genes along transcriptional cascades, estimating differences in the timing of gene expression dynamics, and deducing regulatory gene interactions. Here, we apply this approach to scRNA-seq datasets derived from mouse embryonic forebrain and pancreas samples. This analysis demonstrates the utility of the method to derive the ordering of gene dynamics and regulatory relationships critical for proper cellular differentiation and maturation across a variety of developmental contexts.
Highlights
Fitting pseudotime-ordered expression profiles to interpretable functional forms
Derivation of transcriptional cascades to define a pseudotime trajectory
Inference of directionality of regulatory interactions
INTRODUCTION
Changes in gene expression underlie the intrinsic molecular processes governing differentiation, enabling cells to change their morphology and function. These changes can occur in part due to extrinsic cues from signaling molecules,1 or temperature and oxygen levels in the organism's environment,2,3 as well as intrinsic mechanisms such as the asymmetric distribution of cellular components during cell division.4 These processes result in modifying the expression levels of genes that are critical for cell fate specification, most importantly transcription factors, which can initiate or block the expression of downstream target genes, including other transcription factors. The sequential activation and repression of transcription factors and their target genes can give rise to a cascade of gene expression, whereby an initiating event can regulate a hierarchy of downstream genes essential for the cell to acquire subsequent cell states. For example, the Pax6 / Eomes / Tbr1 transcription factor cascade directs the progression of radial glia to intermediate progenitor to postmitotic projection neuron in the developing cortex,5,6 and the transcription factor cascade initiated by Neurog3 controls the differentiation of endocrine progenitor cells to mature pancreatic cells.7,8 It is therefore critical to accurately deduce gene expression cascades in order to determine which genes are responsible for specific cell fate changes during differentiation and maturation. Single-cell RNA sequencing (scRNA-seq) enables sampling the gene expression profile of thousands of cells in an individual sample. However, it is necessary to destroy the cell in order to measure its transcriptome, thereby making it impossible to observe how the cell and its gene expression profile would have changed in the future. Nonetheless, it is possible to order cells along a trajectory which accurately recapitulates the progression of cells as they transition from one cell state to another. This ordering of cells along a trajectory is known as pseudotime, which is essentially a mapping of single-cell transcriptomes to a developmental timeline.11-14 Based on the ordering of cells along a pseudotemporal trajectory, it is possible to measure the dynamics of gene expression as cells undergo cell state transitions. Current algorithms typically model gene expression dynamics along pseudotemporal trajectories by fitting their expression profiles using generalized linear models,12,15,16 with the ultimate goal of determining whether gene expression significantly varies as a function of pseudotime. Other methods attempt to deduce pseudotime-dependent gene interactions by calculating a similarity measure between the expression levels of the "present" of one gene and the "past" of another gene, using correlation17 or mutual information.18 However, these methods do not calculate an explicit ordering of expression dynamics along a pseudotime trajectory, and require user-defined cutoffs for determining meaningful interactions.
Here, we present a method to better understand the cascade of gene expression dynamics underlying cell state transitions. We are interested in answering questions such as: if two genes are up-regulated during a cell state transition, is one gene up-regulated before the other, or are they up-regulated simultaneously? Furthermore, is it possible to estimate a certainty in the timing of their expression dynamics? In this paper, we address these questions by explicitly modeling gene expression over a pseudotime trajectory using a set of functions that reflect biological state switches and that model the dynamic behaviors of gene expression within cells as they differentiate. We formulate the problem using a Bayesian inference framework and use an ensemble sampler Markov chain Monte Carlo (MCMC) approach19 to sample from the posterior distributions over the parameter spaces of the various functions, and determine which model best fits the data. This provides an explicit ordering of genes along a pseudotemporal trajectory based on inflection point estimates, enabling the description of expression dynamics in terms of transcriptional cascades, the estimation of differences in switch times of gene expression, and the annotation of potentially causal gene interactions in gene regulatory networks.
We will introduce our modeling framework in general terms in the first section of the results. A more detailed description is provided in the STAR Methods section. We then apply our method in multiple developmental settings, in which we dissect the transcription factor cascades underlying cortical neurogenesis and pancreatic beta cell development across multiple scRNA-seq datasets. We also show how our method can be used to infer potential upstream regulators of a given gene of interest. Finally, we utilize our method to deduce the gene expression cascade of the Notch signaling pathway in the developing cortex in order to highlight the applicability of our method to gene sets beyond transcription factors. These examples demonstrate the ability of our method to accurately model the dynamics of gene expression during cell state transitions, and highlight the biological insights our method enables.
Modeling gene expression dynamics along pseudotime trajectories
The goal of the method presented here is to decide whether a state switch (up- to down-regulation or down- to up-regulation) occurs along a pseudotemporal trajectory, and at what pseudotime these switches occur, in order to determine the timing and ordering of activation and repression during cell state transitions. To do this, we first define a set of functions which can model a wide variety of expression dynamics, and for which state changes are well defined and interpretable, namely at the inflection points of each function. The functions are then fit to the normalized expression levels for each gene across cells ordered by their relative pseudotemporal ordering. The functions used for fitting are defined in Equation 1. Here, f_unif is a uniform function with b > 0, which models the absence of dynamics in gene expression along a pseudotime trajectory. f_gauss is a Gaussian function with parameter constraints a > 0, b > 0, s > 0, and 1 ≤ t_0 ≤ N, where N is the number of cells in the pseudotime trajectory. f_sig is a sigmoidal function with parameter constraints L > 0, b > 0, and 1 ≤ t_0 ≤ N. Finally, f_dsig is a double sigmoidal function with the formulation described in the study by Baione et al.20 and parameter constraints b_min > 0, b_mid > 0, b_max > 0, k_1 > 0, k_2 > 0, and 1 ≤ t_1 < t_2 ≤ N. The motivation for using these functions is based on observations from biological scenarios during development.21 For instance, during differentiation, genes can display a shift from one steady state to another, which can be modeled using a sigmoidal function. They can also exhibit impulse patterns of up-regulation followed by a return to basal levels, which can be modeled using a Gaussian function. Finally, double sigmoidal functions can model impulse patterns with asymmetric increase and decrease rates and different initial and terminal basal levels, as well as stepwise up and stepwise down expression patterns (Figure S1). We formulate the problem of fitting gene expression profiles in cells ordered along a pseudotime trajectory as a Bayesian inference problem, and estimate parameters for each function using an ensemble sampler MCMC approach19 (see STAR Methods). Based on the best-fitting function to the gene expression profiles, genes are ordered according to the relative occurrence of inflection point estimates to provide temporal estimates of gene expression cascades, and regulatory interactions between genes are deduced, enabling a detailed characterization of the molecular processes underlying cellular transitions.
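To make the model concrete, the sketch below writes down one plausible parameterization of the four candidate functions and fits the sigmoid to a synthetic pseudotime-ordered profile with the emcee ensemble sampler. The slope parameters and the Gaussian likelihood are illustrative assumptions; the exact functional forms of Equation 1 and the likelihood of the STAR Methods are not reproduced here.

```python
import numpy as np
import emcee  # affine-invariant ensemble sampler

def f_unif(t, b):
    """No dynamics: constant expression level b > 0."""
    return np.full_like(np.asarray(t, float), b)

def f_gauss(t, a, b, s, t0):
    """Transient impulse: baseline b plus a Gaussian bump of height a at t0."""
    t = np.asarray(t, float)
    return b + a * np.exp(-((t - t0) ** 2) / (2.0 * s ** 2))

def f_sig(t, L, b, k, t0):
    """State switch from b to b + L around t0 (slope k is an assumption)."""
    t = np.asarray(t, float)
    return b + L / (1.0 + np.exp(-k * (t - t0)))

def f_dsig(t, b_min, b_mid, b_max, k1, k2, t1, t2):
    """Double sigmoid (illustrative stand-in for the Baione et al. form):
    rise toward b_mid around t1, then move toward b_max around t2."""
    t = np.asarray(t, float)
    rise = 1.0 / (1.0 + np.exp(-k1 * (t - t1)))
    fall = 1.0 / (1.0 + np.exp(-k2 * (t - t2)))
    return b_min + (b_mid - b_min) * rise + (b_max - b_mid) * fall

def log_prob(theta, t, y):
    """Log posterior for f_sig: flat priors over the parameter constraints
    plus a unit-variance Gaussian likelihood (an assumption)."""
    L, b, k, t0 = theta
    if L <= 0 or b <= 0 or k <= 0 or not (1 <= t0 <= t.max()):
        return -np.inf
    return -0.5 * np.sum((y - f_sig(t, L, b, k, t0)) ** 2)

rng = np.random.default_rng(0)
t = np.arange(1, 201)                                    # N = 200 cells
y = f_sig(t, 3.0, 0.5, 0.1, 120) + 0.3 * rng.normal(size=t.size)

nwalkers, ndim = 32, 4
p0 = np.abs(rng.normal([3.0, 0.5, 0.1, 100.0], 0.01, size=(nwalkers, ndim)))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(t, y))
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)
print("posterior median of the switch time t0:", np.median(samples[:, 3]))
```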
Transcriptional cascades during cortical neuron differentiation
We first applied our method to differentiating forebrain dorsal neural stem cells during mouse development at embryonic stage e13.5. The input to the method consists of a set of cells ordered by pseudotime, t = 1, …, N, and the expression levels (counts) of genes within those cells. Cells from the Atlas of the Developing Mouse Brain22 were initially subset to non-dividing forebrain dorsal cells consisting of neural stem cells, intermediate progenitors (IPs), and neurons at embryonic stage e13.5. A pseudotime ordering was estimated using diffusion pseudotime9 (Figure S2). All dividing cells were excluded from the pseudotime estimation due to their expression of a transcriptional program that is independent of the underlying cell type, potentially confounding pseudotime estimates.
In differentiating cells along the mouse e13.5 forebrain dorsal neural stem cell (NSC) / IP / neuron trajectory, 60 out of 510 (11.8%) transcription factors (derived from the study by Lambert et al.23) that were expressed in at least 1% of cells had a non-uniform fit (Figure 1; Table S1). Initially, Gli3, a gene that is required for maintaining cortical progenitors in an active cell cycle,24 was down-regulated in a state-switch manner with a sigmoidal fit, along with Sox9 and Hes1, which are both required for neural stem cell maintenance.25,26 Subsequently, other genes important for neural stem cell maintenance, including Sox1, Sox2, Hes5, and Pax6, were down-regulated. Genes exhibiting a state-switch or stepwise up-regulation included Neurod2, Sox11, and Neurod6, which play a critical role in inducing cell-cycle arrest and neurogenic differentiation in the developing cortex,27-29 followed by Tbr1 and Bcl11b, markers of deep-layer cortical neurons generated during early cortical neurogenesis. Subsequently, Satb2 and Bhlhe22, markers of upper-layer cortical neurons generated during later stages of neurogenesis,30 were up-regulated. Interestingly, four transcription factors were found to be transiently down-regulated using a double sigmoidal fit, including Mycn, Jun, Ybx1, and Jund. Genes exhibiting a transient up-regulation (Gaussian or double sigmoidal fit) included Hes6 and Eomes, markers of cortical IPs,31 as well as Neurog2 and Sox4, which are required for IP cell specification and maintenance via activation of Eomes.32 These results demonstrate that the functions which best fit the expression profiles of dynamically expressed genes (genes exhibiting a non-uniform fit) largely reflect the known biological role these genes play during differentiation. Furthermore, the relative ordering of inflection point estimates for dynamically expressed transcription factors along the mouse e13.5 forebrain dorsal NSC / IP / neuron trajectory accurately recapitulates known temporal orderings that are essential for the differentiation of cortical neurons. Finally, in order to justify the functional forms we used, we performed a PCA of the gene expression profiles. Genes with a non-uniform fit fill the extremes of the principal component space (Figure S3), indicating that the functional forms we used to model the pseudotime-ordered gene expression profiles are able to capture most of the variability in the data.
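Given per-gene fits, the ordering step amounts to sorting inflection-point events; the following sketch illustrates it using a hypothetical container for the fit results, with the switch direction read off numerically from the fitted curve.

```python
def order_cascade(fits, curves):
    """Order genes along a transcriptional cascade by inflection times.

    `fits` maps gene -> (model_name, params, inflection_times) and `curves`
    maps model_name -> callable f(t, **params); both are hypothetical
    containers for the MCMC fit results, not the paper's data structures.
    """
    events = []
    for gene, (model, params, inflections) in fits.items():
        f = curves[model]
        for t0 in inflections:
            # '+' if expression rises through t0, '-' if it falls
            sign = "+" if f(t0 + 1, **params) > f(t0 - 1, **params) else "-"
            events.append((t0, gene, sign))
    return sorted(events)
```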
Constructing regulatory interactions during cortical neurogenesis
We then compared a set of transcription factors forming an essential regulatory network underlying cortical neuronal differentiation, including Pax6, Neurog2, Eomes, and Tbr1,33 as well as the neural lineage bHLH factor Neurod4 (Figure 2A). Neurog2 and Eomes exhibited a transient up-regulation, with both genes having a double sigmoidal fit. Pax6 and Tbr1 were fit using a sigmoidal function, with Pax6 exhibiting a state-switch from high to low expression, and Tbr1 from low to high expression. Neurod4 was fit using a Gaussian function, and was specifically expressed transiently in mid-stage Eomes+ cells. These genes were then ordered according to the pseudotemporal occurrence of inflection point estimates (Figure 2B), whereby Neurog2 was found to be up-regulated before Eomes, followed by the up-regulation of Neurod4 and down-regulation of Pax6. Subsequently, Tbr1 was up-regulated, followed by down-regulation of Neurod4, Neurog2, and finally Eomes. Neurod4 exhibited a brief, transient impulse expression pattern within mid-stage Eomes+ cells, reflecting previously studied expression patterns of Neurod4, which is only expressed in a subset of Eomes+ cells in the mouse e14.5 cortex.34 By comparing inflection point estimates of these genes (see STAR Methods), we were able to reconstruct previously validated regulatory interactions (Figure 2C). The initial up-regulation of Neurog2 just before Eomes up-regulation suggests that Neurog2 initiates expression of Eomes in intermediate progenitors. This relationship has been shown in mouse e13 embryos via electroporation of Neurog2 cDNA into the ganglionic eminence, where both Neurog2 and Eomes are not normally expressed, resulting in ectopic expression of Eomes.35 Neurog2 has also been shown to directly activate Neurod4 in cortical IP cells using a luciferase reporter assay,36 which we also recapitulate based on the sequential up-regulation of Neurog2 and Neurod4. Furthermore, it has been shown that both Neurog2 and Eomes induce Tbr1 expression,36 which we also infer based on the up-regulation of Tbr1 following both Neurog2 and Eomes. Interestingly, directly after Eomes and Neurog2 were up-regulated, Pax6 was down-regulated, suggesting a negative feedback loop, whereby Pax6 activates both Eomes and Neurog2, which then both in turn repress Pax6, a relationship which has been previously described in the developing mouse cortex.37
Inferring shared upstream regulators of Eomes
We next explored potential upstream regulators of Eomes in mouse e13.5 forebrain dorsal cells across two samples, in order to deduce high-confidence regulators of Eomes and determine how robust our method is across biological replicates. We applied our method to forebrain dorsal cells in a mouse e13.5 biological replicate (Figure S4; Table S2). Transcription factors with a positive inflection point occurring simultaneously with or before the first inflection point of Eomes, as well as those with a negative inflection point occurring after the first inflection point of Eomes, were labeled as positive upstream regulators. We furthermore included all co-activators and co-repressors (derived from the study by Siddappa et al.38) that exhibited a transient up-regulation, with the first inflection point occurring simultaneously with or before the first inflection point of Eomes. In total, 25 positive upstream regulators were found in the first sample, and 27 were found in the second sample, with an overlap of 21 genes across the two (Figure 3A; Figure S5). Furthermore, the relative ordering of inflection points of these genes along the cortical differentiation trajectory strongly agrees across both datasets, with one exception being Tfap2c, which was fit to a sigmoidal function in the first sample and a Gaussian function in the second sample.
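The labeling rule above translates directly into a filter over inflection-point events; the sketch below assumes a hypothetical event format of (pseudotime, sign) pairs per gene, with '+' marking up-switches and '-' marking down-switches.

```python
def positive_upstream_regulators(target_first_inflection, candidates):
    """Label candidate regulators of a target gene (e.g., Eomes).

    A gene qualifies if it has an up-switch at or before the target's first
    inflection point, or a down-switch after it, per the rule in the text.
    `candidates` maps gene -> list of (time, sign) events (hypothetical).
    """
    regulators = []
    for gene, events in candidates.items():
        up_before = any(s == "+" and t <= target_first_inflection
                        for t, s in events)
        down_after = any(s == "-" and t > target_first_inflection
                         for t, s in events)
        if up_before or down_after:
            regulators.append(gene)
    return regulators

# Toy example: Neurog2 switches up before Eomes' first inflection -> included
print(positive_upstream_regulators(50.0, {"Neurog2": [(40.0, "+"), (120.0, "-")]}))
```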
Within the set of inferred transcription factors regulating Eomes expression were Neurog2 and Pax6, which are known to directly activate Eomes in the developing mouse neocortex, as described in the previous section. The co-regulators Dll1, a key ligand for activating Notch signaling, and Chd7, a chromatin remodeler, have also been implicated in the formation of IP cells,39,40 although their roles as co-activators of Eomes have not, to our knowledge, been established. These results validate the utility of our method in discovering upstream regulators of a given gene of interest. The remaining potential activators of Eomes warrant further experimental validation.
Furthermore, the genes that repress Eomes in maturing IP cells, thereby enabling the differentiation of these cell types into neurons, are largely unknown.33 The transcription factor Mycn, a gene critical for normal brain development,41 has been shown to down-regulate Eomes in neuroblastoma cell lines42; however, its role in regulating Eomes expression in maturing IP cells is not well understood. In differentiating cells along the forebrain dorsal NSC / IP / neuron trajectory in both mouse e13.5 samples, Mycn was expressed in a transient down-regulation pattern and best fit using a double sigmoidal function (Figures 3B and 3C). In both samples, Mycn up-regulation occurred simultaneously with Eomes down-regulation, signifying that Mycn may play a role in the differentiation of cortical neurons by down-regulating Eomes in maturing IPs.
Dissecting Notch signaling during cortical neurogenesis
To demonstrate the applicability of our method to genes beyond transcription factors, we investigated the dynamics of Notch signaling along the forebrain dorsal NSC / IP / neuron trajectory in e13.5 mouse embryos. Shared dynamically expressed genes involving ligand-receptor pairs of Notch receptors, from the study by Shao et al.,43 were estimated in both embryonic samples (Figure 4). In both samples, Mfap2, which can interact with the extracellular domain of Notch1,44 but whose role in the regulation and differentiation of cortical NSCs is poorly understood, was up-regulated within forebrain dorsal NSCs and down-regulated in neuronal cells. This indicates that Mfap2 may play a general role in Notch signaling within differentiating cortical NSCs, whose actions are not specific to a given cell type. Dll1 was up-regulated in early IPs, followed by the up-regulation of Dll3 in later-stage IPs, confirming the selective basal expression of Dll3 observed in in vivo studies.33 Furthermore, Mfng, a glycosyltransferase which increases the ability of Notch1 to bind to Dll1,45 was up-regulated shortly after Dll1 up-regulation in both samples within IPs, indicating that this gene becomes activated sequentially after the activation of Dll1. Dll1 was then down-regulated within IPs, suggesting that this gene is not essential for further IP differentiation into neurons. Finally, Notch1 was down-regulated in maturing IPs, followed by down-regulation of Mfap2, Mfng, and Dll3 in neurons. These results highlight the ability of our method to dissect the complex dynamics of signaling pathways within differentiating cell types.
Transcriptional cascades in mouse pancreatic beta cell development
To demonstrate the utility of our method in other developmental contexts, we applied it to a scRNA-seq dataset of pancreatic cells derived from mouse e14.5 embryos,46 subsetting to cells belonging to the beta cell lineage. When measuring the expression dynamics of a set of genes known to play an essential role in the specification and maturation of pancreatic beta cells,8 we find a well-defined transcriptional cascade which largely agrees with previously characterized gene expression cascades (Figure 5A). Interestingly, we find one exception to this cascade, Neurod1, which is up-regulated at a later stage of beta cell maturation than previously reported (Figures 5B and 5C). We are also able to measure the sequential up-regulation of Pax6 and Pdx1, followed by Mnx1, and ending with the insulin gene expression regulator Isl1, thereby providing a more explicit ordering of the expression cascade in maturing beta cells than previously established. Furthermore, with this approach, we can model the expression dynamics of all transcription factors (Figure S6; Table S3), enabling a detailed overview of the full gene expression cascade underlying pancreatic beta cell differentiation.
DISCUSSION
In this paper, we explored an approach to model the gene expression dynamics in cells ordered by a pseudotime trajectory using a fully Bayesian framework.This framework enabled us to fit the gene expression profiles of cells undergoing cell state transitions to a set of functions that are able to model complex transcriptional dynamics.From these fits, we were able to order genes along a gene expression cascade which describes the molecular dynamics underlying cell state transitions, and deduce regulatory interactions.
We first applied the method to differentiating forebrain dorsal neural stem cells into neurons in mouse e13.5 embryos. By ordering transcription factors by the relative occurrence of inflection point estimates, we were able to reconstruct the transcriptional cascades underlying neuronal differentiation within the developing cortex, and model the dynamics of gene expression for all genes along the trajectory. However, genes can undergo further dynamic changes including post-transcriptional and post-translational modifications, and localization changes within the cell, all of which can have a large impact on function and regulation. While transcriptomics data are unable to identify these changes, the dynamics we uncover from gene expression data can still shed light on their regulatory roles.
By comparing the relative timing of expression dynamics of the transcription factors Pax6, Neurog2, Eomes, Neurod4, and Tbr1, which form a regulatory network underlying cortical neuron differentiation, we were able to infer known causal interactions. However, reconstructing a gene regulatory network using all genes with a non-uniform fit would lead to many false positives, in part due to the simultaneous activation of multiple pathways involving different genes. Thus, we believe one of the main utilities of our approach is to infer the directionality of regulatory interactions, especially in cases where an interaction has been measured but the directionality is unknown.
We then identified potential upstream positive regulators of Eomes, an essential gene for the formation of IPs. Subsetting to genes which have similar dynamics across biological replicates revealed a set of high-confidence potential upstream regulators. Not only did we recover validated activators of Eomes, such as Pax6 and Neurog2, but we also detected a number of other transcription factors whose roles in Eomes activation have not been fully characterized. The enrichment of known DNA-binding motifs of these transcription factors in the promoter and enhancer regions of Eomes may provide further evidence for the regulatory role of these genes in Eomes expression. We also identified a potential negative regulator of Eomes, the transcription factor Mycn, whose role in cortical IP maturation has not been fully explored. Wet lab experiments, such as knockin or knockout experiments, or chromatin immunoprecipitation sequencing experiments, would need to be performed in order to validate the roles of these transcription factors in the regulation of Eomes expression.
We further demonstrated the applicability of our method to genes beyond transcription factors by comparing the expression dynamics of genes involved in the Notch signaling pathway. This analysis revealed a sequential up-regulation of the Notch receptor ligand Dll1 in early IPs, followed by Mfng, and finally Dll3 in maturing IPs. This activation cascade supported the selective expression of Dll1 and Dll3 in apical and basal IPs, respectively, further demonstrating the utility of comparing genes according to inflection point estimates to dissect signaling pathways.
We also applied our method to differentiating pancreatic beta cells in mouse e14.5 embryos. Based on this analysis, we were able to reconstruct a gene expression cascade that defines beta cell maturation. In this analysis, we highlighted a gene that deviated from the established literature, Neurod1, whose up-regulation along the cascade occurred later during beta cell development than previously established. Follow-up experiments are needed to validate these findings.
In order to place our method in a broader context, we compared our results with Monocle 3 12 and tradeSeq, 16 which perform statistical tests to determine if a gene is differentially expressed along a pseudotime trajectory, in cells from the e13.5 forebrain dorsal NSC / IP / neuron trajectory. While the overwhelming majority of genes with a non-uniform fit from our method were also found to be significantly differentially expressed by these two methods, both methods detected at least six times more genes to be significant compared to our method (Figure S7). Thus, we conclude that our method is more stringent in detecting genes exhibiting dynamic changes along a trajectory. Furthermore, while the relative ordering of gene expression dynamics along a trajectory is not readily available using these two methods, we are able to explicitly infer this using our method based on inflection point estimates. Similar to our method, the authors of the original diffusion pseudotime publication used derivative estimates of smoothed gene expression profiles to order gene dynamics along a pseudotime trajectory. 9 However, the authors only used derivative estimates to measure switch-like transitions, and not transient up or down transitions, and only provide point estimates of these transitions. We are able to model a higher variety of transitions and, based on the MCMC samplings, quantify the uncertainty in the timing of these transitions using the posterior distribution of the parameter fits.
To measure the dependence of our method on the pseudotime method used to order cells, we ran our method on the pseudotime-ordered cells from the e13.5 forebrain dorsal NSC / IP / neuron trajectory using both Slingshot 10 and Monocle 3, 12 and compared them with the diffusion pseudotime estimates (Figure S8). Overall, the fits were largely consistent independent of the pseudotime method used to order the cells, indicating that our method is robust to fluctuations in pseudotime estimates and the underlying pseudotime method.
While we focused specifically on cells along the forebrain dorsal NSC / IP / neuron trajectory, and on pancreatic beta cell development, the method presented in this paper can be applied to any scRNA-seq dataset where cells can be ordered along a pseudotime trajectory. Our method is able to reconstruct transcriptional cascades in order to deduce critical genes for cell state transitions. It is also able to predict regulatory interactions, as well as gene interactions involved in different signaling pathways. Therefore, we believe this approach can provide useful insights into the molecular underpinnings involved in a variety of developmental biology contexts.

Atlas of the Developing Mouse Brain samples

Clusters exhibiting high expression levels of G2M cell cycle genes were subsequently filtered, as well as clusters with a subpallium (ventral cortical) identity, a hippocampal identity, and Cajal-Retzius neurons. The above procedure was re-run until the only remaining populations in the sample consisted of forebrain dorsal NSCs, IP cells, or neurons, based on the expression of known marker genes for the respective populations. Diffusion pseudotime estimates 9 for each cell were then calculated after running a diffusion map embedding and assigning a starting cell. The raw count data across all cells ordered by diffusion pseudotime were then stored, and the MCMC procedure was run on the resulting count matrix.
Pancreas development samples
The raw count data for the pancreas endocrinogenesis dataset 46 were downloaded from http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE132188. The raw count data were loaded into scanpy 47 for downstream analyses. Cells were initially subset to samples corresponding to e14.5 embryos. All cells with a positive G2M score in the metadata were filtered. Following this, the count data were processed in a similar fashion to the Atlas of the Developing Mouse Brain dataset using scanpy. Diffusion pseudotime estimates were calculated, and the raw count data across all cells ordered by diffusion pseudotime were then stored; the MCMC procedure was run on the resulting count matrix.
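The preprocessing above can be summarized in a short sketch. This is a minimal illustration using the scanpy API named in the text; the file path, metadata column names, root-cell index, and cell cycle gene lists are assumptions, not values from the original analysis.

```python
import scanpy as sc

s_genes, g2m_genes = [...], [...]  # standard cell cycle gene lists (assumed input)

adata = sc.read_10x_mtx("GSE132188/")             # hypothetical local path to raw counts
adata = adata[adata.obs["day"] == "14.5"].copy()  # subset to e14.5 embryos (column assumed)

# score the cell cycle and drop cells with a positive G2M score
sc.tl.score_genes_cell_cycle(adata, s_genes=s_genes, g2m_genes=g2m_genes)
adata = adata[adata.obs["G2M_score"] <= 0].copy()

adata.layers["counts"] = adata.X.copy()           # keep raw counts for the MCMC step
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata)
sc.pp.neighbors(adata)

# diffusion map embedding and diffusion pseudotime from a chosen root cell
sc.tl.diffmap(adata)
adata.uns["iroot"] = 0                            # index of the starting cell (assumed)
sc.tl.dpt(adata)

order = adata.obs["dpt_pseudotime"].argsort().values
counts = adata.layers["counts"][order]            # pseudotime-ordered raw counts
```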
Establishing a likelihood model
The negative binomial distribution has been shown to accurately describe the count data generated in scRNA-seq experiments without the need to account for zero-inflation resulting from ''dropout'' events. 51 The probability mass function for the negative binomial distribution can be parameterized using the mean, $\mu \in \mathbb{R}_+$, and dispersion parameter, $\varphi \in \mathbb{R}_+$, with $y \in \mathbb{N}$, as follows:

$$p(y \mid \mu, \varphi) = \frac{\Gamma(y + \varphi^{-1})}{\Gamma(\varphi^{-1})\, y!} \left(\frac{1}{1 + \varphi\mu}\right)^{\varphi^{-1}} \left(\frac{\varphi\mu}{1 + \varphi\mu}\right)^{y}. \quad \text{(Equation 2)}$$

The mean and variance of a random variable $Y \sim \mathrm{NB}(\mu, \varphi)$ which follows a negative binomial distribution are then $E[Y] = \mu$ and $\mathrm{Var}[Y] = \mu + \mu^2\varphi$. For a gene $g$ with measured counts $\vec{Y}_g = \{y_{gt}\}_{t=1,\dots,N}$ along a pseudotime trajectory with fixed pseudotime-step interval, and with $\vec{\mu}_g = \{\mu_{gt}\}_{t=1,\dots,N}$ and $\vec{\varphi}_g = \{\varphi_{gt}\}_{t=1,\dots,N}$ the mean and dispersion at the corresponding pseudotimes, the full likelihood of observing $\vec{Y}_g$ is

$$L(\vec{Y}_g \mid \vec{\mu}_g, \vec{\varphi}_g) = \prod_{t=1}^{N} p(y_{gt} \mid \mu_{gt}, \varphi_{gt}), \quad \text{(Equation 3)}$$

where $p(y_{gt} \mid \mu_{gt}, \varphi_{gt})$ is the negative binomial probability mass function. The full log-likelihood is then

$$\ell(\vec{Y}_g \mid \vec{\mu}_g, \vec{\varphi}_g) = \sum_{t=1}^{N} \ln p(y_{gt} \mid \mu_{gt}, \varphi_{gt}). \quad \text{(Equation 4)}$$

It was shown that when fitting scRNA-seq UMI count data to a negative binomial model, the data are consistent with a global dispersion parameter independent of the expression level of a given gene, and that fitting a dispersion parameter to each gene individually leads to overfitting. 52 Therefore, a global estimate of $\varphi$ can be used for every gene independent of pseudotime, and $\vec{\varphi}_g = \{\varphi_{gt}\}_{t=1,\dots,N}$ is replaced with a constant $\varphi$ in Equation 4. A dataset-specific $\varphi$ is estimated using genes which exhibit lower levels of overdispersion, since the expression levels in these genes reflect the technical rather than the biological variability. To do this, the log10 mean counts for each gene are binned into five equally spaced bins, and a linear fit between the log10 mean and log10 variance of counts in each bin is estimated. Genes within the top 20th percentile of the difference between the estimated variance and the expected variance under the linear fit in each bin are then filtered. The remaining genes are used to fit the non-linear relationship between the mean ($\mu$) and variance ($\sigma^2 = \mu + \mu^2\varphi$) using unconstrained non-linear least squares (Figure S9).
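As a concrete illustration, the dispersion estimate described above might look like the following minimal sketch; the function name and the small numerical stabilizer are our own assumptions, while the binning and 20th-percentile filter follow the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def estimate_global_dispersion(counts):
    """counts: cells x genes array of raw UMI counts."""
    mu = counts.mean(axis=0)
    var = counts.var(axis=0)
    keep = mu > 0
    mu, var = mu[keep], var[keep]

    log_mu, log_var = np.log10(mu), np.log10(var + 1e-12)
    edges = np.linspace(log_mu.min(), log_mu.max(), 6)[1:-1]  # five equal bins
    bins = np.digitize(log_mu, edges)

    selected = np.zeros(mu.size, dtype=bool)
    for b in np.unique(bins):
        idx = bins == b
        slope, intercept = np.polyfit(log_mu[idx], log_var[idx], 1)
        resid = log_var[idx] - (slope * log_mu[idx] + intercept)
        cutoff = np.percentile(resid, 80)   # drop the top 20% most overdispersed
        selected[np.where(idx)[0][resid <= cutoff]] = True

    nb_var = lambda m, phi: m + phi * m**2  # var = mu + phi * mu^2
    phi, _ = curve_fit(nb_var, mu[selected], var[selected], p0=[0.1])
    return float(phi[0])
```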
Here, $\varphi$ estimates the dispersion based on genes which do not exhibit high variability in the dataset, and therefore captures the technical variability in the dataset. This technical variability is in large part driven by the varying number of UMI counts captured in each cell, as well as other factors including library quality and amplification bias. Thus, the full log-likelihood of observing counts $\vec{Y}_g = \{y_{gt}\}_{t=1,\dots,N}$ for gene $g$ along a pseudotime trajectory, given the mean at the corresponding pseudotime points $\vec{\mu}_g = \{\mu_{gt}\}_{t=1,\dots,N}$, becomes

$$\ell(\vec{Y}_g \mid \vec{\mu}_g, \varphi) = \sum_{t=1}^{N} \ln p(y_{gt} \mid \mu_{gt}, \varphi), \quad \text{(Equation 5)}$$

where $\varphi$ is a global parameter estimated using the procedure described above. For scRNA-seq methods which sequence only from one end of the transcript, and not for full-length protocols, normalization does not need to account for the total transcript length. In this case, for a given cell $i$, let $M_i$ be the number of UMIs in cell $i$, and $y_{gi}$ be the number of UMIs for gene $g$ in cell $i$. In this paper, we use the median number of UMIs across all cells in the dataset as a size factor $M$, that is, $M = \mathrm{med}\{M_i\}_{i=1,\dots,N}$.
Then, the log-normalized expression level for gene $g$ in cell $i$ is defined by the following mapping:

$$y_{gi} \mapsto \tilde{y}_{gi} = \ln\!\left(\frac{y_{gi}}{M_i}\, M + 1\right). \quad \text{(Equation 6)}$$

The functions $(f_{\mathrm{unif}}, f_{\mathrm{gauss}}, f_{\mathrm{sig}}, f_{\mathrm{dsig}})$ described in Equation 1 are then fit to the pseudotemporally ordered expression profile for gene $g$, $\{\tilde{y}_{gt}\}_{t=1,\dots,N}$, in the log-normalized expression space, with the objective function to maximize defined by the likelihood in Equation 5. The means $\vec{\mu}_g = \{\mu_{gt}\}_{t=1,\dots,N}$ are then calculated by mapping the function values evaluated at $t = 1, \dots, N$ back to count space using the inverse of Equation 6. The full log-likelihood estimate is then evaluated by plugging the $\vec{\mu}_g$ values and the global estimate for $\varphi$ into Equation 5. This procedure can be summarized as follows: we want to solve for the $f_a(t; \theta)$ which maximizes the likelihood

$$\hat{\theta} = \arg\max_{\theta}\; \ell\big(\vec{Y}_g \mid f_a(t; \theta), \varphi\big), \quad \text{(Equation 7)}$$

where $f_a \in (f_{\mathrm{unif}}, f_{\mathrm{gauss}}, f_{\mathrm{sig}}, f_{\mathrm{dsig}})$.
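For concreteness, here is a sketch of the four candidate functions and the normalization mapping. Equation 1 appears earlier in the paper and is not reproduced in this section, so the parameterizations below are assumptions consistent with the parameter names used in the text (b_min, b_mid, b_max, k, k1, k2, t0, t1, t2, sigma) and with the stated parameter counts (1, 4, 4, and 7).

```python
import numpy as np

def f_unif(t, b):                                    # flat profile (1 parameter)
    return np.full_like(t, b, dtype=float)

def f_sig(t, b_min, b_max, k, t0):                   # single state switch (4 parameters)
    return b_min + (b_max - b_min) / (1.0 + np.exp(-k * (t - t0)))

def f_gauss(t, a, b, t0, sigma):                     # transient pulse (4 parameters)
    return b + a * np.exp(-((t - t0) ** 2) / (2.0 * sigma**2))

def f_dsig(t, b_min, b_mid, b_max, k1, k2, t1, t2):  # two sequential switches (7 parameters)
    s1 = 1.0 / (1.0 + np.exp(-k1 * (t - t1)))
    s2 = 1.0 / (1.0 + np.exp(-k2 * (t - t2)))
    return b_min + (b_mid - b_min) * s1 + (b_max - b_mid) * s2

def lognorm(y, M_i, M):                              # Equation 6
    return np.log(y / M_i * M + 1.0)

def inv_lognorm(y_tilde, M_i, M):                    # inverse of Equation 6, back to counts
    return (np.exp(y_tilde) - 1.0) * M_i / M
```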
Model inference using MCMC
Under the framework presented above, solving for $f_a(t; \theta)$ can be formulated as a Bayesian inference problem, which we solve using an ensemble sampler MCMC approach. 19 This provides an estimate of the posterior distribution over the parameter space for each of the parameters in the different functions $(f_{\mathrm{unif}}, f_{\mathrm{gauss}}, f_{\mathrm{sig}}, f_{\mathrm{dsig}})$ described in Equation 1. For each of the models, the priors used for the different parameters are summarized in Table S4. Note that, in Table S4, the folded normal distribution is parameterized by $\mu > 0$ and $\sigma > 0$ with probability density function

$$f(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \left( e^{-\frac{(x-\mu)^2}{2\sigma^2}} + e^{-\frac{(x+\mu)^2}{2\sigma^2}} \right), \quad x \geq 0. \quad \text{(Equation 8)}$$

The uniform priors in Table S4 are uninformative; however, they provide bounds on the parameters to keep them in interpretable and meaningful ranges. The slope parameter $k$ in the sigmoidal function, and $k_1$ and $k_2$ in the double sigmoidal function, have a folded normal prior with 0 mean and 0.1 variance, which is used to ensure that the slope has a low magnitude. This prior is used because differences in the function once the slope becomes relatively large are minimal. Finally, the folded normal prior on $\sigma$ in the Gaussian, with 0 mean and $N/10$ variance, is used to ensure that the curve does not become very flat.
In this paper, we use the ensemble sampler MCMC proposed by Goodman & Weare in 2010, 19 with the implementation by Foreman-Mackey et al. 53 An initial guess is needed as a starting point from which a walker begins in the ensemble sampler. For the Gaussian and sigmoidal functions, initial guesses are derived from a non-linear least squares fit for each function on the log-normalized pseudotime expression levels using scipy's 'curve_fit' function, with added Gaussian noise. For the double sigmoidal function, initial guesses are randomly chosen to cover the variety of different forms the function can take. For the uniform function, initial guesses are randomly chosen from a uniform distribution over the interval between 0.01 and the maximum expression level for the gene of interest. The number of walkers used is four times the number of parameters for each function: 28 for the double sigmoidal fit, 16 for the Gaussian fit, 16 for the sigmoidal fit, and 4 for the uniform fit. This enables a wide sampling across the search space of parameters.
The MCMC is then run for a total of 10,000 iterations. There is generally no consensus on how many iterations to run an MCMC algorithm. 53 Thousands of iterations are typically desirable to allow the process to reach a steady state. After reaching the steady state, the MCMC will sample from the posterior distribution over the parameter space, enabling an estimate of the posterior distribution for each parameter. Iterations before reaching the steady state are discarded, as these are not sampled from the target distribution. This is called the ''burn-in'' phase. For this implementation, a burn-in of 5,000 iterations was used (Figure S10). Some MCMC walkers can get stuck near a local maximum. These walkers typically have a low acceptance rate, that is, a low proportion of moves for which the MCMC sampler generated parameter values that differed from the previous sample. One common practice is to prune these walkers from the final MCMC output. For example, walkers which get stuck in irrelevant local optima can be pruned by clustering the likelihoods of the walkers and removing the clusters with lower likelihoods. 54 For this implementation, the half of the MCMC walkers with the lowest acceptance rates is pruned in order to remove potentially stuck walkers (Figure S11).
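A minimal sketch of this fitting loop for a single gene under a sigmoidal model is given below, using the emcee package (the Foreman-Mackey et al. implementation cited above). The helper functions, the global dispersion phi, and the data variables (y_counts, M_i, M, init_guess) are assumed to come from the earlier sketches; this is an illustration, not the authors' code.

```python
import numpy as np
import emcee
from scipy.stats import nbinom

def nb_loglik(y, mu, phi):
    # negative binomial with mean mu and variance mu + phi * mu^2
    r = 1.0 / phi
    p = r / (r + mu)
    return nbinom.logpmf(y, r, p).sum()

def log_prob(theta, t, y, M_i, M, phi):
    b_min, b_max, k, t0 = theta
    if not (0 < b_min < b_max and 0 < k and 0 <= t0 <= t.max()):
        return -np.inf                          # uniform prior bounds
    log_prior = -0.5 * k**2 / 0.1               # folded normal prior on the slope
    mu = inv_lognorm(f_sig(t, *theta), M_i, M)  # map the fit back to count space
    mu = np.clip(mu, 1e-8, None)
    return log_prior + nb_loglik(y, mu, phi)

t = np.arange(y_counts.size, dtype=float)       # y_counts: one gene's ordered UMIs
ndim, nwalkers = 4, 16                          # 4 x number of parameters
p0 = init_guess + 1e-3 * np.random.randn(nwalkers, ndim)  # curve_fit guess + noise
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(t, y_counts, M_i, M, phi))
sampler.run_mcmc(p0, 10_000, progress=True)
samples = sampler.get_chain(discard=5_000, flat=True)     # drop the burn-in
```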
Model selection
We use a probabilistic model selection technique, the Bayesian information criterion (BIC), 55 to score the different models, and select the model with the best score. The BIC is defined as follows:

$$\mathrm{BIC} = k \ln(n) - 2 \ln(\hat{L}), \quad \text{(Equation 9)}$$

where $n$ = number of data points, $k$ = number of parameters in the model, and $\hat{L}$ = maximized value of the likelihood function. In the original formulation of the BIC, the value $\hat{L}$ was derived from maximum likelihood estimation. When using an MCMC for model inference, the output consists of a sampling or distribution over the parameter space. It is advantageous to use a likelihood estimate which more closely reflects the optimal parameter regime estimated from the MCMC, instead of the parameter regime which maximizes the likelihood. To this end, $\hat{L}$ in the BIC is estimated from the posterior samplings of the MCMC rather than from a maximum likelihood fit. To assess convergence, the autocorrelation time $\hat{\tau}_f$ of each parameter is estimated from the autocorrelation function $\rho_f(t)$ of the MCMC chain. Here, $T \in [0, 1000]$ enables an accurate estimate of $\hat{\tau}_f$ under the assumption that $\rho_f(t)$ approaches 0 by $t = T$ for each parameter. The autocorrelation function (Figure S13) and autocorrelation time (Figure S14) are estimated for each parameter separately.
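A short sketch of the model scoring and diagnostics might look as follows; the fits dictionary and n_cells are assumed inputs, and the log-likelihood entries are assumed to be the MCMC-derived estimates described above.

```python
import numpy as np

def bic(log_lik, k, n):
    # Equation 9: k = number of parameters, n = number of data points
    return k * np.log(n) - 2.0 * log_lik

# fits: {model_name: (mcmc_log_lik_estimate, n_params)} -- assumed precomputed
fits = {"unif": (ll_unif, 1), "gauss": (ll_gauss, 4),
        "sig": (ll_sig, 4), "dsig": (ll_dsig, 7)}
scores = {name: bic(ll, k, n_cells) for name, (ll, k) in fits.items()}
best_model = min(scores, key=scores.get)        # the lowest BIC is selected

# integrated autocorrelation time for each parameter of the chosen sampler
tau = sampler.get_autocorr_time(quiet=True)     # one value per parameter
```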
For a general comparison, the autocorrelation times were estimated for all genes using the model with the best fit in the mouse e13.5 forebrain sample (Figure S15). The autocorrelation times increase with the complexity of the model (i.e., the number of parameters specified in each model). This is in part expected, since a model with more parameters will generally have a lower acceptance rate due to the higher number of dimensions in which the MCMC has to make proposal moves, leading to higher autocorrelations for each parameter. Nonetheless, the autocorrelation times are fairly robust for each model.
Thinning is an approach in which only every $k$-th iteration of the MCMC walkers is used; with $k = \hat{\tau}_f$, this would represent an i.i.d. sampling of the posterior distribution. However, various publications indicate that thinning is often unnecessary and results in reduced precision. 57,58 Therefore, no thinning of the MCMC walkers was used in this analysis.
Another way to visualize the posterior distribution over the parameter space derived from an MCMC is a corner plot (Figure S16). The corner plot highlights both the two-dimensional projections over the parameter space across iterations of the MCMC, and the marginal posterior distribution for each individual parameter (highlighted in the upper plots). Some parameters are more correlated with each other than others, indicating underlying covariates within the model parameters. However, the marginal posterior distributions do not appear to be multimodal.
These heuristics provide some insight into the ability of the ensemble MCMC sampler to provide an accurate sampling of the posterior distribution over the parameter space.
Estimating inflection points
Inflection points occur where the curvature of a function changes sign. At an inflection point, the first-order derivative, or rate of change, of a function reaches a local maximum or local minimum, and the second derivative of the function passes through 0, changing sign from positive (concave upward) to negative (concave downward) or vice versa. The inflection points of the Gaussian, sigmoidal, and double sigmoidal fits can be used to compare the relative timing of when genes exhibit a state transition along a pseudotime trajectory. To estimate the inflection points of the different functions, we first solve for the $t$ at which the second derivative of the function is zero. For the Gaussian function $f_{\mathrm{gauss}}(t)$ defined in Equation 1, the second derivative is

$$f''_{\mathrm{gauss}}(t) = \frac{a}{\sigma^4}\, e^{-\frac{(t - t_0)^2}{2\sigma^2}}\, \big(t - (t_0 - \sigma)\big)\big(t - (t_0 + \sigma)\big),$$

so two inflection points occur at $t \in \{t_0 - \sigma,\, t_0 + \sigma\}$. For the sigmoidal function $f_{\mathrm{sig}}(t)$, one inflection point occurs at $t = t_0$. The estimates for the inflection points are then measured from the parameters $(t_0 - \sigma,\, t_0 + \sigma)$ in the case of the Gaussian, and from $t_0$ in the case of the sigmoidal function, at each MCMC iteration. Finally, for the double sigmoidal function $f_{\mathrm{dsig}}(t)$, the number of inflection points can vary. However, if all parameters are fixed besides $k_1$, then $f''_{\mathrm{dsig}}(t) \to 0$ as $k_1$ increases; similarly, if all parameters are fixed besides $t_1$, then $f''_{\mathrm{dsig}}(t) \to 0$ as $t_1$ decreases. That is, for $k_1 \gg 0$, i.e. when the transition from $b_{\min}$ to $b_{\mathrm{mid}}$ occurs rapidly, an inflection point will occur very close to $t_1$. Similarly, for $k_2 \gg 0$, i.e. when the transition from $b_{\mathrm{mid}}$ to $b_{\max}$ occurs rapidly, an inflection point will occur very close to $t_2$. Also, the further apart $t_1$ and $t_2$ are from each other, the closer the inflection points are to $t_1$ and $t_2$. To ensure the inflection points occur very close to $t_1$ and $t_2$, at each iteration of the MCMC a move is only accepted in cases where $\mathrm{sign}\big(f''_{\mathrm{dsig}}(t_1 - \delta t)\big) \cdot \mathrm{sign}\big(f''_{\mathrm{dsig}}(t_1 + \delta t)\big) < 0$ and $\mathrm{sign}\big(f''_{\mathrm{dsig}}(t_2 - \delta t)\big) \cdot \mathrm{sign}\big(f''_{\mathrm{dsig}}(t_2 + \delta t)\big) < 0$ for $\delta t = 1$. The estimates for the inflection points are then calculated from the parameters $t_1$ and $t_2$ at each MCMC iteration.
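A sketch of these inflection-point estimates, applied to each retained MCMC sample, might look as follows. The parameter order and the f_dsig helper match the earlier sketch, and the numerical sign-change check mirrors the acceptance rule described above.

```python
import numpy as np

def gauss_inflections(t0, sigma):
    return t0 - sigma, t0 + sigma        # two inflection points of the Gaussian

def sig_inflection(t0):
    return t0                            # single inflection point of the sigmoid

def dsig_inflections_ok(theta, dt=1.0):
    # accept a double sigmoidal sample only if the second derivative changes
    # sign within a dt-window around both t1 and t2 (numerical check)
    b_min, b_mid, b_max, k1, k2, t1, t2 = theta
    d2 = lambda grid: np.gradient(np.gradient(f_dsig(grid, *theta), grid), grid)
    for ti in (t1, t2):
        grid = np.linspace(ti - dt, ti + dt, 5)
        vals = d2(grid)
        if np.sign(vals[0]) * np.sign(vals[-1]) >= 0:
            return False
    return True

# inflection-point posterior for a gene whose best fit is double sigmoidal;
# samples_dsig is the flat MCMC chain of shape (n_samples, 7) (assumed input)
accepted = np.array([s for s in samples_dsig if dsig_inflections_ok(s)])
t1_posterior, t2_posterior = accepted[:, 5], accepted[:, 6]
```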
Comparing inflection points
Regulatory interactions were inferred based on the relative timing of inflection point estimates (Figure 2). If there was an overlap of at least 1% in the inflection point estimates between two genes across MCMC iterations, then these were assumed to have a simultaneous switch state. A regulatory interaction between the two was mutually positive if the inflection points had the same sign, and mutually negative if the inflection points differed in sign. The overlap between two inflection points is estimated by binning the inflection point estimates across all MCMC iterations into 100 equally spaced bins, starting at the minimum inflection point estimate across both genes and ending at the maximum inflection point estimate across both genes. Let $\{x_i\}_{i \in [1,100]}$ represent this binning domain. If $p_A(x_i)$ is the percent of counts in the histogram in bin $x_i$ for gene A, and $p_B(x_i)$ is the percent of counts in the histogram in bin $x_i$ for gene B, then the overlap between the two, $P(A = B)$, is

$$P(A = B) = \sum_{i=1}^{100} \min\big(p_A(x_i),\, p_B(x_i)\big).$$
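This overlap computation reduces to a few lines; the sketch below assumes two arrays of inflection-point estimates pooled across MCMC iterations.

```python
import numpy as np

def inflection_overlap(infl_a, infl_b, n_bins=100):
    lo = min(infl_a.min(), infl_b.min())
    hi = max(infl_a.max(), infl_b.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    p_a, _ = np.histogram(infl_a, bins=edges)
    p_b, _ = np.histogram(infl_b, bins=edges)
    p_a = p_a / p_a.sum()
    p_b = p_b / p_b.sum()
    return np.minimum(p_a, p_b).sum()   # P(A = B): shared probability mass

# genes are called simultaneous if the overlap is at least 1%
simultaneous = inflection_overlap(t1_posterior_geneA, t1_posterior_geneB) >= 0.01
```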
Figure 1. Transcriptional cascades in mouse e13.5 forebrain dorsal cells. (A) Gene expression profiles of transcription factors with non-uniform fits are displayed as a heatmap. Genes are grouped according to a state switch from high to low expression (sigmoidal fit) or stepwise down-regulation (double sigmoidal fit), a state switch from low to high expression (sigmoidal fit) or stepwise up-regulation (double sigmoidal fit), a transient up (Gaussian or double sigmoidal fit) expression pattern, and a transient down (double sigmoidal fit) expression pattern. (B) The inflection point estimates are shown for the same genes as in (A). Inflection point estimates from double sigmoidal fits are shown in light blue and light red, and those from Gaussian and sigmoidal fits in blue and red.
Figure 2. Reconstructing regulatory interactions during mouse e13.5 cortical development. (A) Normalized expression levels of essential genes (Pax6, Neurog2, Eomes, and Tbr1) forming a regulatory network underlying cortical neuron differentiation, as well as the neural lineage bHLH factor Neurod4, across pseudotime-ordered cells are shown. The curves display a random sampling of the parameters from 100 iterations of the MCMC traces for the best-fitting model for each gene. (B) Inflection point estimates for the genes highlighted in (A). (C) A reconstructed gene regulatory network based on the comparison of inflection points. Positive regulatory interactions which have previously been validated are highlighted as green solid lines, and those which have not been validated as green dashed lines. Similarly, negative regulatory interactions which have previously been validated are highlighted as red solid lines, and those which have not been validated as red dashed lines.
Figure 3. Inferring upstream regulators of Eomes across mouse e13.5 embryos. (A) The left and right plots show a transcriptional cascade of the shared potential positive regulators of Eomes in forebrain dorsal cells of mouse e13.5 embryos across biological replicates. Transcriptional co-activators and co-repressors (derived from the study by Siddappa et al. 38) are shown in orange, and transcription factors (derived from the study by Lambert et al. 23) are shown in black. (B) The left panel displays a random sampling of the parameters from 100 iterations of the MCMC traces for the genes Eomes and Mycn using the double sigmoidal model, the best-fitting model for both genes. The full range of first and second inflection point estimates for both genes is highlighted as a shaded region, with blue indicating a negative inflection point and red a positive inflection point. The middle and right panels highlight the distributions of first and second inflection point estimates across MCMC iterations, respectively. p values were estimated as the percentage of overlapping inflection point estimates across both genes, after binning the inflection point estimates across all MCMC iterations into 100 equally spaced bins, starting at the minimum inflection point estimate and ending at the maximum inflection point estimate across both genes. (C) The same plot as in (B) for cortical cells of the biological replicate.
Figure 4. Notch signaling cascade in mouse e13.5 embryos. The left and right plots show a transcriptional cascade of the shared ligand-receptor pairs involved in Notch signaling in cells along the forebrain dorsal NSC / IP / neuron trajectories in mouse e13.5 embryos across biological replicates. Annotated cell types are highlighted below.
Figure 5. Gene expression cascades in developing mouse e14.5 pancreatic beta cells. (A) Schematic diagram of the previously characterized gene expression cascade in developing pancreatic beta cells, based on the study by Wilson et al. 8 (B) The heatmap in the upper panel highlights the expression profiles of transcription factors ordered by the occurrence of their first inflection points. Inflection point estimates are highlighted in the plot below using the same ordering, with double sigmoidal fits shown in light blue and light red, and Gaussian and sigmoidal fits in blue and red. The annotated cell type for each cell in the trajectory is highlighted in the middle. (C) Modified gene expression cascade based on the inflection point estimates from (B).
Strict unimodality of q-polynomials of rooted trees
We classify rooted trees which have strictly unimodal q-polynomials (plucking polynomials). We also give criteria for the trapezoidal shape of a plucking polynomial. We generalize results of Pak and Panova on the strict unimodality of q-binomial coefficients. We discuss which polynomials can be realized as plucking polynomials, and whether or not different rooted trees can have the same plucking polynomial.
Introduction
We study in this paper properties of the coefficients of the q-polynomial invariant of rooted trees. This invariant is defined, initially, for plane rooted trees using the recursive plucking relation, as follows. In our work, we use the convention that trees grow upward (as in Figure 1.1).
Definition 1.1. Consider a plane rooted tree $T$ (compare Figure 1.1). We associate with $T$ a polynomial $Q(T, q)$ (or succinctly $Q(T)$) in the variable $q$ as follows.
(i) If $T$ is the one-vertex tree, then $Q(T, q) = 1$.
(ii) If $T$ has at least one edge, then
$$Q(T, q) = \sum_{v \in L(T)} q^{r(T, v)}\, Q(T - v, q),$$
where the sum is taken over the set $L(T)$ of leaves of $T$, and $r(T, v)$ is the number of edges of $T$ to the right of the path connecting $v$ with the root.
We proved in [Prz-1, Prz-3] that $Q(T)$ is a rooted tree invariant (so it does not depend on the embedding in the plane). This follows from the following result, which we shall use often in the paper.
Theorem 1.2. (1) Let $T_1 \vee T_2$ be a wedge product of trees $T_1$ and $T_2$. Then:

$$Q(T_1 \vee T_2) = \binom{|E(T_1)| + |E(T_2)|}{|E(T_1)|,\, |E(T_2)|}_q Q(T_1)\, Q(T_2).$$

(2) Let a plane tree be a wedge product of $k$ trees, that is, $T = T_k \vee \cdots \vee T_2 \vee T_1$. Then

$$Q(T) = \binom{E_k + E_{k-1} + \cdots + E_1}{E_k,\, E_{k-1},\, \ldots,\, E_1}_q \prod_{i=1}^{k} Q(T_i),$$

where $E_i = |E(T_i)|$ is the number of edges in $T_i$, and the q-multinomial coefficients are defined by

$$\binom{E_k + \cdots + E_1}{E_k, \ldots, E_1}_q = \frac{[E_k + \cdots + E_1]_q!}{[E_k]_q! \cdots [E_1]_q!}.$$

Here $[k]_q = 1 + q + \cdots + q^{k-1}$ and $[k]_q! = [k]_q [k-1]_q \cdots [2]_q [1]_q$ are called a q-integer and a q-factorial, respectively. Notice that for every ordering of the numbers $E_1, E_2, \ldots, E_k$ we can decompose the q-multinomial coefficient $\binom{E_k + E_{k-1} + \cdots + E_1}{E_k,\, E_{k-1},\, \ldots,\, E_1}_q$ into a product of Gaussian polynomials.

(3) (State product formula)

$$Q(T) = \prod_{v \in V(T)} W(v),$$

where $W(v)$ is the weight of a vertex (we can call it the Boltzmann weight) defined as follows: if $T_v$ is the subtree of $T$ with root $v$ (the part of $T$ above $v$; in other words, $T_v$ grows from $v$) and $T_v$ decomposes into a wedge of trees $T_v = T_{v,1} \vee \cdots \vee T_{v,k_v}$, then

$$W(v) = \binom{|E(T_v)|}{|E(T_{v,1})|,\, \ldots,\, |E(T_{v,k_v})|}_q.$$

Notice that by (2), $Q(T)$ is a product of q-binomial coefficients.
(4) $Q(T)$ is of the form $c_0 + c_1 q + \cdots + c_N q^N$, where:
(i) $c_0 = 1 = c_N$, and $c_i > 0$ for every $i \leq N$;
(ii) $c_i = c_{N-i}$ for each $i$, that is, $Q(T)$ is a symmetric polynomial (i.e. a palindromic polynomial);
(iii) the sequence $c_0, c_1, \ldots, c_N$ is unimodal (see the next definition);
(iv) for a nontrivial tree $T$, that is, a tree with at least one edge, we have $c_1 = \sum_{v} (k_v - 1)$, where the sum runs over the vertices of $T$ with at least one edge growing up from them, and $k_v$ is the number of edges growing up from $v$, that is, the degree of $v$ in the tree $T_v$ growing from $v$. The number $c_1$ of $Q(T)$ is called the branching number of $T$.
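To make the recursion and the wedge formula concrete, here is a small runnable sketch, assuming the form of the plucking recursion stated above. The tuple encoding of plane rooted trees is our own convention for illustration: a tree is the nested tuple of its child subtrees read left to right, with a leaf encoded as ().

```python
import sympy as sp

q = sp.symbols("q")

def edges(T):
    return sum(1 + edges(c) for c in T)

def pluckings(T):
    """Yield (T - v, r(T, v)) for every leaf v of the plane rooted tree T."""
    for i, child in enumerate(T):
        right = sum(1 + edges(T[j]) for j in range(i + 1, len(T)))
        if edges(child) == 0:                  # this child is a leaf: pluck it
            yield T[:i] + T[i + 1:], right
        else:                                  # pluck a leaf deeper inside child
            for sub, r in pluckings(child):
                yield T[:i] + (sub,) + T[i + 1:], right + r

def Q(T):
    if edges(T) == 0:
        return sp.Integer(1)
    return sp.expand(sum(q**r * Q(Tp) for Tp, r in pluckings(T)))

# wedge of a 1-edge tree and a 2-edge path: Q = [3]_q = 1 + q + q^2,
# matching the q-binomial binom(3; 1, 2)_q from Theorem 1.2(1)
print(Q(((), ((),))))
```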
Notice that for the trees of Figure 1.1 we get $Q(T) = \binom{5}{2,3}_q [2]_q = [4]_q [5]_q$, and that $c_1 = 2$.

Definition 1.3. Consider a sequence of nonnegative integers $c_0, c_1, \ldots, c_N$.
(1) If there is a $k$ such that $c_0 \leq c_1 \leq \cdots \leq c_k \geq c_{k+1} \geq \cdots \geq c_N$, then we call the sequence unimodal.
(2) If $c_i = c_{N-i}$ for every $i$, then we call the sequence symmetric (or palindromic), centered at $N/2$. In this paper, we deal exclusively with unimodal symmetric sequences with $c_0 = c_N = 1$.
then we say we have an almost trapezoidal sequence with a top of length $N - 2j$.
(7) We say that a polynomial in one variable $q$ with nonnegative coefficients is symmetric unimodal (respectively, almost unimodal, trapezoidal, or almost trapezoidal) if its nonzero coefficients form a symmetric unimodal (respectively, almost unimodal, trapezoidal, or almost trapezoidal) sequence.
(8) We denote by $PSU_N$ the set of positive, symmetric, unimodal polynomials of degree $N$.
The classical result of Sylvester (1878, [Syl]) established the unimodality of Gaussian polynomials (q-binomial coefficients) $\binom{m+n}{m,n}_q$. For us the following observation of MacMahon is of importance: $\binom{m+n}{m,n}_q$ is the generating function $\sum_{\lambda} q^{|\lambda|}$, where $\lambda$ runs over partitions whose Young diagram fits in an $m \times n$ box. For more basic information on Gaussian polynomials (q-binomial coefficients) we refer to [K-C]. In particular, Gaussian polynomials are symmetric, centered at $\frac{mn}{2}$, and of degree $mn$. Our starting point is the result by Pak and Panova [Pak-Pan] describing the almost strict unimodality of Gaussian polynomials and listing the exceptions to almost strict unimodality.
Gaussian polynomials with m ≤ 4 are carefully analyzed in Sections 2 and 3 (see also [Lin, West]).
We complete this section with a fairly general result concerning the structure of a product of two unimodal polynomials. Additionally, we define a preorder relation on unimodal polynomials and introduce the notion of their shapes.

Proposition 1.5. (1) For the product of two q-integers with $m \leq n$ we have

$$[m+1]_q [n+1]_q = \sum_{i=0}^{m+n} \min(i+1,\; m+1,\; m+n+1-i)\, q^i.$$

We say that the product has a trapezoidal shape with base of length $m + n$ and top of length $n - m$ ($n - m + 1$ terms); see Figure 1.3.

(2) Let $Q(q) \in PSU_N$, so that it can be written as $Q(q) = \sum_i b_i q^i [N + 1 - 2i]_q$ with $b_i \geq 0$, and similarly let $Q'(q) = \sum_j b'_j q^j [N' + 1 - 2j]_q \in PSU_{N'}$. Then the product $Q(q)Q'(q)$ is a polynomial in $PSU_{N+N'}$, and writing $Q(q)Q'(q) = \sum_k d_k q^k [N + N' + 1 - 2k]_q$, we have $d_k \neq 0$ if and only if there are $i$ and $j$ with $b_i \neq 0$, $b'_j \neq 0$, and $i + j \leq k$.

Proof. (1) follows directly from the definition of multiplication.
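The shapes in Proposition 1.5 can be checked numerically. The sketch below is an illustration with sympy; the helper names are ours, and "top length" is measured as the number of repetitions of the maximal coefficient minus one, matching the "(k + 1 terms)" convention used later in the paper.

```python
import sympy as sp

q = sp.symbols("q")

def q_int(n):
    return sum(q**i for i in range(n))            # [n]_q = 1 + q + ... + q^(n-1)

def q_binom(m, n):
    num = sp.prod([q_int(i) for i in range(1, m + n + 1)])
    den = sp.prod([q_int(i) for i in range(1, m + 1)]) * \
          sp.prod([q_int(i) for i in range(1, n + 1)])
    return sp.cancel(num / den)                   # binom(m+n; m, n)_q

def top_length(p):
    """Flat-top length of a symmetric unimodal polynomial:
    repetitions of the maximal coefficient minus one."""
    c = sp.Poly(sp.expand(p), q).all_coeffs()
    return c.count(max(c)) - 1

# example: [m+1]_q [n+1]_q has top of length n - m (Proposition 1.5(1))
print(top_length(q_int(3) * q_int(6)))            # m = 2, n = 5 -> prints 3
```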
We stress that this simple observation is very important and is used several times in this paper.
We now define the notion of polynomials having the same shape (the relation $\simeq$). We consider the following preordering relation on nonnegative symmetric unimodal polynomials in the variable $q$. We allow here $b_0$ or $b'_0$ to be equal to zero.
Definition 1.6. Let $P(q) = \sum_i b_i q^i [N + 1 - 2i]_q$ and $P'(q) = \sum_i b'_i q^i [N + 1 - 2i]_q$. We say that $P(q) \preceq P'(q)$ (the shape of $P'(q)$ dominates the shape of $P(q)$) if whenever $b'_i = 0$ then $b_i = 0$. If $P(q) \preceq P'(q)$ and $P'(q) \preceq P(q)$, we say that the polynomials are shape equivalent, or shortly that they have the same shape, and write $P(q) \simeq P'(q)$. If $b_i \neq 0$, then we say that $P(q)$ has a nontrivial row of length $N - 2i$ at height $i$. Proposition 1.5, despite its simplicity, immediately leads to useful properties:
(1) Assume that $P(q) \in PSU_N$ and $[n+1]_q$ is a q-integer with $n \geq N$. Then the product $P(q)[n+1]_q$ has a trapezoidal shape with a bottom of length $N + n$ and top of length $n - N$. Furthermore, if $n = N - 1$, then $P(q)[n+1]_q$ is strictly unimodal. In other words, in these cases $P(q)[n+1]_q$ has the same shape as $[N+1]_q [n+1]_q$.
Another very simple corollary (or a variation of it) will be used often.

Corollary 1.8. Let $P(q) = \sum_i b_i q^i [N + 1 - 2i]_q \in PSU_N$, with $b_i \neq 0$ for $i \leq s$ and otherwise $b_i = 0$. Then for $k \leq N + 1$, $P(q)[k+1]_q$ is strictly unimodal, except for $k + 1 = N - 1$ or $k + 1 = N + 1 - 2s$, where $P(q)[k+1]_q$ has a trapezoidal shape with a top of length 2.
We also obtain the following concrete corollary, which will be one of the basic building blocks for our general results on the strict unimodality of plucking polynomials.
Corollary 1.9. Let $P(q)$ be one of the exceptional polynomials in [Pak-Pan] and $d = \deg(P(q))$. Then the product of $P(q)$ with a q-integer is strictly unimodal, with the following exceptions:
(1) $\binom{12}{6,6}_q [3]_q$, which has a trapezoidal shape with a top of length 2.
The next two sections are purely algebraic. Our starting point is Theorem 1.2, decomposing $Q(T)$ into a product of Gaussian q-binomial coefficients. Because of Theorem 1.4 of Pak and Panova, we have to pay special attention to factors of the type $\binom{m+n}{m,n}_q$ with $m \leq 4$, especially because by Proposition 1.5 we can already conclude that products of factors with $m \geq 5$ are strictly unimodal.
2. Analysis of products of q-integers and Gaussian polynomials $\binom{2+n}{2,n}_q$

We start with an analysis of products of q-integers, which always have a trapezoidal shape. Our results are based on Proposition 1.5.
Proposition 2.1. The product of q-integers $[a_1 + 1]_q [a_2 + 1]_q \cdots [a_k + 1]_q$, with $a_1 \leq a_2 \leq \cdots \leq a_k$, always has a trapezoidal shape.

Proof. It holds for $k = 2$, as $[a_1 + 1]_q [a_2 + 1]_q$ is a trapezoid with base of length $a_1 + a_2$ and top of length $a_2 - a_1$ by Proposition 1.5(1). Thus for strict unimodality we need $a_2 = a_1$ or $a_2 = a_1 + 1$. To complete the proof inductively, we only need to prove the case of three integers, which can be restated as computing the product of a polynomial of trapezoidal shape with a q-integer; the inductive step reduces to this case. Therefore, we only need part (2) of the following lemma.

Lemma 2.2.
(1) If $P(q) \in PSU_N$ and $n \geq N$, then $P(q)[n+1]_q$ has the same shape as $[N+1]_q [n+1]_q$.
(2) Let the polynomial $P_{a,b}(q)$ have a trapezoidal shape with base of length $a + b$ and top of length $b - a$. Then the product $P_{a,b}(q)[c+1]_q$ has a trapezoidal shape with base of length $a + b + c$ and the same shape as the triple product $[a+1]_q [b+1]_q [c+1]_q$.
(3) Let $P_{a,b}(q)$ and $P_{c,d}(q)$ have trapezoidal shapes with bases of length $b + a$ and $d + c$, respectively, and tops of length $b - a$ and $d - c$, respectively. Then the product $P_{a,b}(q) P_{c,d}(q)$ has a trapezoidal shape with base of length $a + b + c + d$ and top of length $2\max(a, b, c, d) - (a + b + c + d)$, if this number is not negative. Otherwise the product is strictly unimodal.

Proof. (1) follows directly from the definition (it is a special case of Corollary 1.7).
(3) follows by applying (2) twice and observing that $P_{c,d}(q)$ has the same shape as $[c+1]_q [d+1]_q$.

We show here that the product of $\binom{2+n}{2,n}_q$ and a q-integer $[k+1]_q = 1 + q + \cdots + q^k$ ($k \geq 1$) is a polynomial with a trapezoidal shape. More precisely:

Proposition 2.3. $\binom{2+n}{2,n}_q [k+1]_q$ is strictly unimodal if $k + 2 \leq 2n$ and $2n + 2 - k$ is not divisible by 4. If $k + 2 \leq 2n$ and $2n + 2 - k$ is divisible by 4, then the product has a trapezoidal shape with a top of length 2. If $k \geq 2n$, then the product has a trapezoidal shape with a top of length $k - 2n$.

Proof. We compare the summands of the product directly. We see that if $2 \leq k + 1 \leq 2n + 1$, then the product has a trapezoidal shape with a top of length 0, 1, or 2, where the top has length 2 if and only if $2n + 1 - (k + 1) = 2n - k$ is congruent to 2 modulo 4. If $k \geq 2n$, then $\binom{2+n}{2,n}_q [k+1]_q$ has the same shape as $[2n+1]_q [k+1]_q$ (Lemma 2.2(1)), so it has the shape of a trapezoid with a top of length $k - 2n$, as needed.
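Proposition 2.3 can be spot-checked numerically. The following sketch (with our own helper names) verifies the stated shapes on a small range, under the assumption that a central equal pair in an odd-degree product still counts as strictly unimodal.

```python
import sympy as sp

q = sp.symbols("q")
q_int = lambda m: sum(q**i for i in range(m))

def q_binom_2(n):                         # binom(2+n; 2, n)_q = [n+1][n+2]/[2]
    return sp.cancel(q_int(n + 1) * q_int(n + 2) / q_int(2))

def top(p):                               # flat-top length: max-coefficient count - 1
    c = sp.Poly(sp.expand(p), q).all_coeffs()
    return c.count(max(c)) - 1

for n in range(2, 7):
    for k in range(1, 2 * n + 5):
        t = top(q_binom_2(n) * q_int(k + 1))
        # top 2 iff 2n - k = 2 (mod 4); top k - 2n for k >= 2n; otherwise
        # strictly unimodal (top 0, or 1 for an odd-degree central pair)
        expected = max(k - 2 * n, 2 if (2 * n - k) % 4 == 2 else (2 * n + k) % 2)
        assert t == expected, (n, k, t, expected)
print("Proposition 2.3 verified on the sampled range")
```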
We now show that if we multiply two Gaussian polynomials of type $\binom{2+n}{2,n}_q$, then we get either a strictly unimodal polynomial or a polynomial of trapezoidal shape with a top of length 2.
Proposition 2.4. The product $\binom{2+n}{2,n}_q \binom{2+m}{2,m}_q$ is strictly unimodal if $m$ and $n$ have the same parity, and is of trapezoidal shape with a top of length 2 (3 terms) if they have different parity.
Proof. We use the fact that the Gaussian polynomials $\binom{2+n}{2,n}_q$ have the shape of a step pyramid. Separating the even and odd cases and applying Proposition 1.5(2) completes the proof. (In the case of different parity, the polynomial has a trapezoidal shape with a top of length 2.)

Corollary 2.5. The product of three terms $\binom{2+n}{2,n}_q \binom{2+m}{2,m}_q \binom{2+k}{2,k}_q$ is always strictly unimodal.
Proof. Two terms of the product have the same parity, so their product is strictly unimodal. We then deduce from Proposition 1.5(2) that the product of a strictly unimodal (PSU) polynomial and any q-binomial coefficient $\binom{2+n}{2,n}_q$ is strictly unimodal.
We can also observe that by Proposition 1.5(2), the product of $\binom{2+n}{2,n}_q$ ($n \geq 2$) and any Gaussian polynomial $\binom{m+n}{m,n}_q$ with $5 \leq m \leq n$ is strictly unimodal. In the next section we analyze the cases of $m = 3$ or 4.
3. Analysis of $\binom{3+n}{3,n}_q$ and $\binom{4+n}{4,n}_q$

We consider the set $L(m, n)$, which consists of integer sequences of length $m$, denoted $a = (a_1, \ldots, a_m)$, such that $0 \leq a_1 \leq \cdots \leq a_m \leq n$, with the ordering $(a_1, \ldots, a_m) \leq (b_1, \ldots, b_m)$ if $a_i \leq b_i$ for every $i$. A chain $a^1 < \cdots < a^k$ is called symmetric if $r(a^1) + r(a^k) = mn$, where $r(a) = a_1 + \cdots + a_m$. For an elementary approach to the relation between $L(m, n)$ and q-binomial coefficients $\binom{m+n}{m,n}_q$ we refer to [Sta-2]. Lindström [Lin] and West [West] introduced symmetric chain decompositions of $L(3, n)$ and $L(4, n)$, respectively. We will classify the shapes of the q-polynomials $\binom{3+n}{3,n}_q$ and $\binom{4+n}{4,n}_q$ by using the symmetric chain decompositions of $L(3, n)$ and $L(4, n)$, respectively.

Lemma 3.1. For $n \geq 0$, the q-polynomial $\binom{3+n}{3,n}_q = \sum_{i=0}^{3n} c_i q^i$ has one of the following forms:
(1) if $n = 2k + 1$, the polynomial has an almost trapezoidal shape with a top of length 3 (4 terms);
(2) if $n = 4k$, the top of the polynomial has a shape of type (2, 1, 2), the same shape as $\binom{12}{6,6}_q$ (Figure 1.2);
(3) if $n = 4k + 2$, then $c_{12k+5} = c_{12k+6}$, and the top of the polynomial has a shape of type (2, 3, 2); compare Figure 3.1.
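The shape types in Lemma 3.1 can be inspected directly by printing the central coefficients of $\binom{3+n}{3,n}_q$ for small $n$; the helper names below are ours, as in the earlier sketches.

```python
import sympy as sp

q = sp.symbols("q")
q_int = lambda m: sum(q**i for i in range(m))

def q_binom_3(n):   # binom(3+n; 3, n)_q = [n+1][n+2][n+3]/([2][3])
    return sp.cancel(q_int(n + 1) * q_int(n + 2) * q_int(n + 3) /
                     (q_int(2) * q_int(3)))

for n in range(3, 11):
    c = sp.Poly(sp.expand(q_binom_3(n)), q).all_coeffs()[::-1]
    mid = len(c) // 2
    window = c[max(mid - 3, 0): mid + 4]   # central coefficients reveal the top shape
    print(f"n = {n} (n mod 4 = {n % 4}): {window}")
```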
In each of the exceptional cases listed in Corollary 3.2, the product polynomial has a trapezoidal shape with a top of length 2 (3 terms), except in the final case, where the product polynomial has a trapezoidal shape with a top of length $k$ ($k + 1$ terms).
West [West] proved that $L(4, n)$ can be decomposed into two types of symmetric chains, $C^n_{ij}$ and $D^n_{ij}$, of lengths $4(n - 3i - j) + 1$ and $4(n - 3i - j) - 5$, respectively. Since the chains $C^n_{ij}$ and $D^n_{ij}$ are symmetric, we only need to consider their lengths to determine the coefficients $c_i$ of the q-polynomial $\binom{4+n}{4,n}_q$. Here the coefficient $c_i$ represents the number of integer sequences $a$ in $L(4, n)$ satisfying $r(a) = i$.

The set of all lengths of the chains $C^n_{ij}$ and $D^n_{ij}$ in $L(4, n)$ is then the same as the set of all positive odd integers which are less than or equal to $4n + 1$ but equal to neither 3 nor $4n - 1$, provided $n \in \mathbb{N} \setminus \{1, 4\}$. Therefore Lemma 3.4, combined with Proposition 1.5, gives the following useful corollary, which we use in Section 4.
Corollary 3.5 lists the exceptional products involving $\binom{4+n}{4,n}_q$. In the first two cases the product polynomial has a trapezoidal shape with a top of length 2 (3 terms); the same holds in case (3), $\binom{4+n}{4,n}_q [3]_q$ for $n \geq 4$, and in the following case; in the final case the product polynomial has a trapezoidal shape with a top of length $k$ ($k + 1$ terms).
Proof. Lemma 3.4 follows directly from Proposition 1.5, but we will dwell a little longer on the case of $\binom{8}{4,4}_q$. Expressing it in terms of its rows, we can draw the shape of the polynomial as in Figure 3.3. The product $\binom{8}{4,4}_q [3]_q$ is then also of trapezoidal shape with a top of length 2. The other cases are treated similarly.
4. The main algebraic result
We are ready to combine the results from the previous sections to decide which products of Gaussian polynomials are strictly unimodal, and to show that all nontrivial products (with more than one factor different from 1) are of trapezoidal shape (if they are not strictly unimodal, we give the length of the top of the trapezoid). Our algebraic results are summarized in the theorem below.
Theorem 4.1. Consider a nontrivial product (at least two factors different from 1) of q-binomial coefficients,

$$P(q) = \prod_{i=1}^{k} \binom{m_i + n_i}{m_i,\, n_i}_q.$$

Then:
(1) The product $P(q)$ is always of a trapezoidal shape.
(2) If the product has no q-integer factors, then it is always strictly unimodal, except for $\binom{8}{4,4}_q \binom{5}{2,3}_q$, $\binom{4k+1}{3,4k-2}_q \binom{4}{2,2}_q$, or a product of two factors of type $\binom{2+n}{2,n}_q$ with $n$ of different parities (Proposition 2.4). In these cases the top of the resulting trapezoid has length 2.
(3) If the product has exactly one q-integer factor $[n+1]_q$, then the product $P(q)$ is strictly unimodal, with the exceptions of the cases $2n \geq \deg(P(q)) + 2$, which have a trapezoidal shape with a top of length $2n - \deg(P(q))$, and products of the form $[n+1]_q \binom{m_2+n_2}{m_2,n_2}_q$, $2 \leq m_2$, listed in Proposition 2.3, Corollary 3.2, and Corollary 3.5.
(4) The case of all factors being q-integers is described in Proposition 2.1 (see also Corollary 5.5).
(5) All other cases when $P(q)$ is not strictly unimodal can be characterized as follows. Let $[n+1]_q$ be the largest q-integer factor of $P(q)$ and write $P(q) = [n+1]_q \tilde{P}(q)$. Assume also that $n \geq \deg(\tilde{P}(q)) + 2$. Then $P(q)$ has a trapezoidal shape with a top of length $n - \deg(\tilde{P}(q))$.
Proof. We have already established all of the main ingredients needed to prove Theorem 4.1.
5. Tree realization and future plans
Our original problem was to characterize those rooted trees whose plucking polynomials are not strictly unimodal. We are not interested in Gaussian polynomials $\binom{b+a}{b,a}_q$ (analyzed carefully in [Pak-Pan] and the follow-up papers [Dha, Zan]), which are the plucking polynomials of trees with one splitting (Figure 5.1). Such a tree, after reduction, is denoted by $T_{b,a}$. From our main algebraic result, Theorem 4.1, we obtain:

Corollary 5.1. Every tree which is different from a $T_{b,a}$ with a string (Figure 5.1) has a plucking polynomial $Q(T)$ of trapezoidal shape.

Consider a tree $T$ whose reduction is not equal to $T_{b,a}$. If we would like to decide whether $Q(T)$ is strictly unimodal, and if it is not, what the length of the top of the corresponding trapezoidal shape is, we can decompose $Q(T)$ into a product of Gaussian polynomials (not necessarily unique), as in Theorem 1.2, and then use Theorem 4.1.
We can, however, ask further questions: (1) which products of Gaussian polynomials can be realized by trees as $Q(T)$? (2) to what extent is the realization unique?
We answer the first question in Theorem 5.6 and discuss the second in Subsection 5.2.

5.1. Realizations. We start with a rather pleasing criterion for a product of Gaussian polynomials to be realized as $Q(T)$ for some $T$. It will show, in particular, that the polynomial $\binom{8}{4,4}_q \binom{5}{2,3}_q$ of Theorem 4.1 cannot be realized as the plucking polynomial of any tree. We consider polynomials $P(q)$ which can be represented by a fraction whose numerator and denominator are products of q-integers. Every plucking polynomial can be written in this form. We denote the numerator and denominator of $P(q)$ by $N(P(q))$ and $D(P(q))$, respectively.
It is well known that the reduced form is unique.
Theorem 5.3. Assume that the plucking polynomial $Q(T)$ of a rooted tree $T$ is in reduced form. Then (1) no q-integer is repeated in $N(Q(T))$, and (2) $[|E(T)|]_q$ is the greatest q-integer in $N(Q(T))$.

Proof. (1) Let $T$ be a rooted tree. We consider every vertex of degree $\geq 3$, including the root if the degree of the root is $\geq 2$, denoted $v_0, v_1, \ldots, v_m$, where $v_0$ is the root of the greatest subtree of $T$. By the State product formula of Theorem 1.2(3), $Q(T)$ is the product of the weights $W(v_i)$, where $T_v$ is the subtree of $T$ with root $v$ and $k_{v_i} + 1$ is the degree of $v_i$ (in particular, if $v_0$ is the root of $T$, then $k_{v_0}$ is the degree of $v_0$). In the reduced form of $Q(T)$, every $[E(T_{v_i})]_q!$ appearing in the numerator for $1 \leq i \leq m$ is canceled out (because each subtree $T_{v_i}$ belongs to the next greater subtree $T_{v_j}$, so the same q-factorial also appears in a denominator). Hence no q-integer is repeated in $N(Q(T))$.

(2) Since $T = T_1 \vee \cdots \vee T_k$ with $k \geq 2$, the tree $T$ is reduced. By an argument similar to the proof of (1), among the q-integer factors the greatest q-integer in the numerator of the reduced form of $Q(T)$ is $[|E(T)|]_q$.

Corollary 5.5. A product of q-integers $[a_1]_q [a_2]_q \cdots [a_k]_q$, where $a_1 \leq a_2 \leq \cdots \leq a_k$, can be realized as the q-polynomial $Q(T)$ of some rooted tree if and only if all the inequalities are strict, that is, $a_1 < a_2 < \cdots < a_k$.
Proof. (if) We proceed by induction on $k$, the number of factors in the product. $[a_1]_q$ can be realized by a tree. In the inductive step we attach at the bottom the tree of size $a_k - a_{k-1}$, as illustrated in Figure 5.2.

As a generalization of Corollary 5.5, we now give a complete solution to the first characterization problem posed at the beginning of this section. Recall that the plucking polynomial of any rooted tree can be written as a product of q-binomial coefficients.
Theorem 5.6. Consider a product of q-binomial coefficients $P(q) = P_1(q) P_2(q) \cdots P_k(q)$, where $P_i(q) = \binom{m_i + n_i}{m_i,\, n_i}_q$.
Then the product can be realized as $Q(T)$ for some rooted tree $T$ if and only if the numerator of $P(q)$ does not repeat any q-integer.
Proof. According to Theorem 5.3, it suffices to show that if the numerator of $P(q)$ does not repeat any q-integer, then there exists a rooted tree $T$ such that $Q(T) = P(q)$. Without loss of generality, we assume that $m_1 + n_1 \geq m_2 + n_2 \geq \cdots \geq m_k + n_k$. Denote $A = \{m_2 + n_2, \ldots, m_k + n_k\}$ and $B = \{m_1, n_1, \ldots, m_k, n_k\}$.

Namely, $A$ is a set which contains $k - 1$ integers and $B$ is a set which consists of $2k$ integers. If the numerator of $P(q)$ does not repeat any q-integer, we claim that there exists an injection $f$ from $A$ to $B$ such that $f(m_i + n_i) \geq m_i + n_i$ for all $2 \leq i \leq k$. In fact, we can define $f$ beginning with $m_2 + n_2$: if $m_1 \geq m_2 + n_2$, then we define $f(m_2 + n_2) = m_1$. Next let us consider $m_3 + n_3$: if any one of $\{n_1, m_2, n_2\}$, say $n_2$, is greater than or equal to $m_3 + n_3$, then we define $f(m_3 + n_3) = n_2$.

The key point is that this process will not stop until all elements of $A$ have been assigned an image under $f$. If not, we may assume that after defining $f(m_2 + n_2), \ldots, f(m_{i-1} + n_{i-1})$ we cannot find an image of $m_i + n_i$ in $B \setminus \{f(m_2 + n_2), \ldots, f(m_{i-1} + n_{i-1})\}$. Note that $m_1 + n_1 \geq m_i + n_i$, hence $[m_1 + n_1]_q!$ contains the q-integer $[m_i + n_i]_q$. Now, since no integer in $B \setminus \{f(m_2 + n_2), \ldots, f(m_{i-1} + n_{i-1})\}$ is greater than or equal to $m_i + n_i$, it follows that the q-integer $[m_i + n_i]_q$ appears at least twice in the numerator of $P(q)$. This contradicts the assumption. Now we explain how to construct a binary rooted tree $T$ with $Q(T) = P(q)$. First we regard $T$ as the wedge product of $T_1$ and $T_2$, where $|E(T_1)| = m_1$ and $|E(T_2)| = n_1$. If $f(m_2 + n_2) = m_1$, then $T_1$ can be described as the wedge product of $T_3$ and $T_4$ together with a string consisting of $m_1 - (m_2 + n_2)$ edges, where $|E(T_3)| = m_2$ and $|E(T_4)| = n_2$. With the help of $f$, we can construct $T$ step by step. In particular, if some integer, say $n_1$, is not the image of any element of $A$ under $f$, then $T_2$ is a straight line with $n_1$ edges. In this way we finally obtain a binary tree $T$ such that $Q(T) = P(q)$. See Figure 5.3 for a simple example.
5.2. Uniqueness.
We have an infinite number of different rooted trees with the same reduced form (a string can be of any length) and therefore with the same plucking polynomial. However, for reduced rooted trees we have:

Proposition 5.7. For reduced rooted trees, the function $T \mapsto Q(T)$ is finite-to-one.
Proof. By Theorem 5.3(2), the polynomial $Q(T)$ determines the number of edges of a reduced rooted tree $T$. Because there are only finitely many trees with a given number of edges, the proposition follows.
There are many examples of different reduced rooted trees with the same plucking polynomial. The simplest one has five edges, as in Figure 1.1 (the trees differ by a change of root). Furthermore, for any $n$ we can construct $n$ different trees with the same plucking polynomial. As we have seen in Figure 5.4, in general the realization of $[a_1]_q [a_2]_q \cdots [a_k]_q$ is not unique. However, it is easy to conclude that for one q-integer $[a_1]_q$ the realization is unique. For $[a_1]_q [a_2]_q$, the realization is unique if and only if (i) $a_2 \geq a_1 + 2$, (ii) $a_1 = 2$, $a_2 = 3$, or (iii) $a_1 = 3$, $a_2 = 4$. When $a_2 = a_1 + 1 \geq 5$, there are only two distinct realizations of $[a_1]_q [a_2]_q$, and these two distinct rooted trees are isomorphic as (unrooted) trees. More generally, we have the following result.
Proposition 5.9. If $a_{i+1} - a_i \geq 2$ for all $1 \leq i \leq k - 1$, then the realization of $[a_1]_q [a_2]_q \cdots [a_k]_q$ is unique.

Proof. First we claim that if we write $T$ in the form of a wedge product $T_1 \vee \cdots \vee T_n$, then $n = 2$ and $|E(T_1)| = 1$. Otherwise, by Theorem 5.3(2) the numerator of the reduced form of $Q(T)$ contains both $[a_k]_q$ and $[a_k - 1]_q$, which contradicts the assumption that $a_k - a_{k-1} \geq 2$. Now let us consider $T_2$, beginning with the root. Let $v$ be the next vertex of degree $\geq 3$ (which exists if $k \geq 2$). For a similar reason, we have $T_v = T_{v_1} \vee T_{v_2}$ with $|E(T_{v_1})| = 1$. Continuing this discussion, we obtain the unique realization of $[a_1]_q [a_2]_q \cdots [a_k]_q$.
We remark that the converse of Proposition 5.9 does not hold in general. For example, the realization of $[2]_q [3]_q$ or of $[2]_q [4]_q [5]_q$ is unique.
We end this paper with a problem.
Problem 5.10. Find a family of moves on rooted trees that preserve the plucking polynomial, such that if two trees have the same plucking polynomial, then they are related by a finite sequence of these moves.
An example of an elementary move is illustrated in Figure 5.5, where we modify a tree by exchanging $T_1$ and $T_2$ with $|E(T_1)| = |E(T_2)|$. Perhaps this move already solves the problem above.
Structure-guided antibody cocktail for prevention and treatment of COVID-19
Development of effective therapeutics for mitigating the COVID-19 pandemic is a pressing global need. Neutralizing antibodies are known to be effective antivirals, as they can be rapidly deployed to prevent disease progression and can accelerate patient recovery without the need for fully developed host immunity. Here, we report the generation and characterization of a series of chimeric antibodies against the receptor-binding domain (RBD) of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike protein. Some of these antibodies exhibit exceptionally potent neutralization activities in vitro and in vivo, and the most potent of our antibodies target three distinct non-overlapping epitopes within the RBD. Cryo-electron microscopy analyses of two highly potent antibodies in complex with the SARS-CoV-2 spike protein suggested they may be particularly useful when combined in a cocktail therapy. The efficacy of this antibody cocktail was confirmed in SARS-CoV-2-infected mouse and hamster models as prophylactic and post-infection treatments. With the emergence of more contagious variants of SARS-CoV-2, cocktail antibody therapies hold great promise to control disease and prevent drug resistance.
SARS-CoV-2 infection is initiated by the engagement of the spike (S) protein receptor-binding domain (RBD) with the host receptor molecule, angiotensin-converting enzyme 2 (ACE2). This binding triggers subsequent conformational changes within the S protein that enable viral entry. Most neutralizing antibodies (nAbs) therefore target the RBD to compete with ACE2 and prevent viral entry. Prior to the COVID-19 outbreak, several nAbs were developed for SARS-CoV and Middle East Respiratory Syndrome (MERS) CoV [21]. As SARS-CoV-2 and SARS-CoV share 74% sequence similarity in their RBDs, nAbs against SARS-CoV are potentially available for neutralizing SARS-CoV-2 [15,17]. Furthermore, single-chain nanobodies from llamas that recognize the SARS-CoV S RBD can also bind to the RBD of the SARS-CoV-2 S protein with high affinities [22]. During the COVID-19 pandemic, major efforts have also been devoted to identifying nAbs from COVID-19 convalescent sera [12-18,23]. In parallel, mouse immunization and phage display were also utilized to identify potential therapeutic Abs against SARS-CoV-2 [24]. In order to optimize treatment efficacy, it is desirable to develop cocktails of nAbs that can simultaneously bind different sites of the RBD and synergistically neutralize SARS-CoV-2 [14,25,26].
Here, we describe the generation of a panel of monoclonal antibodies (mAbs) using hybridoma screening. These mAbs potently neutralized SARS-CoV-2 in vitro by targeting the RBD of the SARS-CoV-2 S protein with high affinity. The 12 most effective neutralizing chimeric Abs (chAbs) exhibited potent neutralizing capability, reducing plaque counts by 50% in the plaque reduction neutralization test (PRNT) at low antibody concentrations (PRNT50). Mutagenesis experiments identified residues within the receptor binding motif (RBM) required for the neutralizing activities of these mAbs. Cryo-electron microscopy (cryo-EM) revealed atomic details of the structural epitopes of two representative chAbs, which could potentially be used in a cocktail therapy. The prophylactic and therapeutic potentials of these antibodies and their combination were confirmed in SARS-CoV-2 mouse and hamster infection models, wherein injection of the therapeutic mAb cocktail markedly reduced virus titers, underscoring their potential for use in the prevention and treatment of COVID-19. Moreover, a cocktail of therapeutic chAbs targeting separate epitopes on the RBM of the SARS-CoV-2 spike protein may increase therapeutic efficacy and decrease the potential for virus escape mutants, providing additional benefit against the emergence of new variants that harbor multiple mutations within the S protein.
Generation and characterization of anti-SARS-CoV-2 RBD chAbs
BALB/cJ mice were immunized with purified SARS-CoV-2 RBD-Fc protein (S1A Fig) to induce robust serum immune responses (S1B Fig). A total of 38 mAbs were generated, and their binding to the SARS-CoV-2 RBD was determined by ELISA ( Fig 1A). We first examined the in vitro competition abilities of all RBD-specific hybridoma clones against SARS-CoV-2 using human ACE2-overexpressing 293T cells and flow cytometry (S2A Fig). 17 of the 38 mAbs showed more than 80% inhibition of ACE2 binding to SARS-CoV-2 S RBD. To improve the clinical applicability of these mAbs, the 17 SARS-CoV-2 S RBD-specific mAbs were engineered into human IgG1 chimeric antibodies (chAbs). The V H and V L domains of the neutralizing mAbs from hybridoma cell lines were identified and grafted onto a human IgG1 and kappa backbone to generate 12 chAb clones. The binding of all chAbs to SARS-CoV-2 RBD or S recombinant protein were evaluated by ELISA, and the EpEX-His protein [27] was used as a control protein (Fig 1B-1G). RBD-chAb-28, -45, and -51 exhibited the highest binding signals for recombinant RBD (Fig 1B). RBD-chAb-51 was the most potent, in terms of antigen binding ( Fig 1C). HEK-293 cells that overexpress SARS-CoV-2 S RBD on the cell surface were used to evaluate chAb binding. In this context, the binding activities of RBD-chAb-1, -15, -34, -45, and -51 were approximately two-fold weaker compared to the signals to purified protein ( Fig 1E-1G). Expression of full-length S protein on the cell surface further confirmed the binding of these chAbs ( Fig 1F). As a negative control, HEK-293 cells expressing SARS-CoV-2 S2 domain on the cell surface were not recognized by the chAbs (Fig 1G). To assess the possible neutralization abilities for all 12 chAbs, we performed in vitro neutralization studies by PRNT. All tested RBD-specific chAbs, but except RBD-chAb- 26 These six chAbs were highly specific to SARS-CoV-2 S protein. None were found to crossreact with the S proteins of the other six human CoVs, namely SARS-CoV, MERS-CoV, hCo-V-OC43, hCoV-HKU1, hCoV-NL63, and hCoV-229E, with the sole exception of RBD-chAb-15, which exhibited partial cross-reactivity to the S1 domain of SARS-CoV S protein (S3A Fig). Additional assessments of antibody specificity (lack of cross-reactivity with whole organs) were performed by staining the FDA human normal organ tissue array. The appropriate amounts of antibodies required for immunocytochemistry staining were assessed using RBDexpressing 293T cells as a reference. All six chAbs showed clear binding to RBD-expressing cells at 1 μg/ml (S3B Fig and Table A in S1 Text). We then used a higher concentration of 5 μg/ ml to examine the cross-reactivity of each chAb with a multi-normal tissue array. No tissue cross-reactivity was observed for six major target organs (lungs, liver, spleen, heart, kidney, and larynx) for any of the tested chAbs (S3C Fig and Table A in S1 Text). Further analyses with 27 other human organs, including the cerebrum, cerebellum, adrenal gland, ovary, pancreas, parathyroid gland, hypophysis, testis, thyroid gland, breast, tonsil, thymus, bone marrow, cardiac muscle, esophagus, stomach, small intestine, colon, salivary gland, prostate, A. ELISA-reactivity of anti-RBD mAbs. Each anti-RBD mAb was serially diluted from 0.1 μg/ml to 0.8 ng/ml, then incubated in a RBD-His recombinant protein (0.5 μg/ml)-coated plate in an ELISA. OD 450 , Optical density at 450 nm. B-D. Binding of anti-RBD chAbs was determined by ELISA. 
SARS-CoV-2 RBD-His or S-His was immobilized on 96-well plates prior to blocking with 1% BSA in PBS and incubated with diluted anti-RBD chAbs at concentrations ranging from 1000 ng/ml to 0.25 ng/ml. Signal was detected (OD450) after labeling with donkey anti-human IgG-HRP secondary antibody. EpEX-His served as a negative control. E-G. Binding of anti-RBD chAbs was assessed by cellular ELISA. HEK-293T cells were transfected with SARS-CoV-2 RBD-flag-His, S2-flag-His or S-flag-His plasmids. A series of dilutions of anti-RBD chAbs was added to the 96-well plates. The ODs were detected with goat anti-human IgG F(ab')2-HRP secondary antibody. Data information: Except for A, each assay was performed in triplicate and all data points are shown, along with the mean ± SD. https://doi.org/10.1371/journal.ppat.1009704.g001
Competition-binding assays showed that overlapping epitopes exist for RBD-chAb-1, -15, and -28; a similar finding was observed for RBD-chAb-45 and -51. Notably, the epitope of RBD-chAb-25 appears to partially overlap with those of RBD-chAb-1, -15, and -28 (Fig 3B-3F). We therefore classified the six chAbs into three distinct groups, each of which recognizes a unique epitope on the RBD (Fig 3B and 3C). Structural analysis of ACE2 in complex with the SARS-CoV-2 S RBD indicated that K417, Y453, Q474, F486, Q498, T500, and N501 within the RBD make direct contacts with ACE2, forming part of the RBM [28]. These residues fall into three clusters: Q498, T500 and N501 at the proximal end of the RBM; K417 and Y453 in the middle of the RBM; and Q474 and F486 at the distal end of the RBM [28].

Fig 3. A. Schematic illustration of the experimental design of the epitope competition-binding assay. First, RBD-His was captured by RBD-neutralizing chAbs on a 96-well plate. Second, a 10-fold excess of capture antibody was added to saturate the RBD. Third, biotinylated RBD-chAb was added to compete with the capture RBD-chAb. Finally, HRP-conjugated streptavidin was added to bind the biotinylated reporter RBD-chAb, and competitive binding was detected by optical density at 450 nm (OD450). B. Results of triplicate epitope competition-binding assays for RBD-chAb-1, -15, -25, -28, -45, and -51 are shown. EpEX-His served as a negative control (Ctrl: without biotin-RBD-chAb). All data points are shown, along with the mean ± SD. C. Heatmap of the epitope competition-binding assay results. The detected OD450 values are colored according to the scale bar shown on the right. D-E. Epitope mapping of RBD-neutralizing antibodies by mutagenesis. Binding of RBD-chAbs to RBD alanine variants, normalized to that of wild type (WT), based on ELISA. Human 293T cells were transiently transfected with wild-type or mutant RBD plasmids carrying combinatorial alanine mutations. The binding of RBD-chAbs to the RBD mutants was examined by cellular ELISA. F. Structural mapping of key residues on the RBD responsible for recognition by RBD-chAbs. The crystal structure of the SARS-CoV-2 S RBD in complex with ACE2 (PDB entry: 6M0J) is shown as a white/grey surface. Regions of the RBD within 4 Å of any atom of ACE2 (defined as the RBM) are outlined in black. The positions of the key residues Y453, F486 and N501 are indicated. Data information: All experiments were performed in triplicate, with standard deviations shown as error bars. https://doi.org/10.1371/journal.ppat.1009704.g003
To dissect the contributions of these residues to the neutralizing effects of our RBD-chAbs, we carried out alanine scanning of these residues followed by ELISA to assess the impact on RBD-chAb binding (Fig 3D). The results showed that singleton mutations at Y453 or N501 significantly decreased the binding signal for RBD-chAb-25, as did the Y453 mutation for RBD-chAb-28 (Fig 3D). Moreover, RBD-chAb-45 and -51 responded similarly to the different singleton mutations, with the F486 mutation being the most disruptive, suggesting that RBD-chAb-45 and -51 bind the same epitope (Fig 3D). We subsequently generated combinations of singleton mutations and evaluated their effects on RBD-chAb binding. The K417A/Y453A and Q498A/T500A/N501A mutations substantially reduced binding of RBD-chAb-25, and the K417A/Y453A mutations had a similar effect on RBD-chAb-28. The Q474A/F486A mutations were the most disruptive for RBD-chAb-45 and RBD-chAb-51 (Fig 3E). These results suggest that the epitope residues recognized by RBD-chAb-25 are Y453 and N501, the key epitope residue of RBD-chAb-28 is Y453, and both RBD-chAb-45 and -51 recognize F486 of the RBD (Fig 3F).
Cryo-EM analysis of RBD-chAbs in complex with SARS-CoV-2 S protein
To reveal the structural basis of how the distinct classes of RBD-chAbs recognize the SARS-CoV-2 S protein, we determined the cryo-EM structures of RBD-chAb-25 and -45 in complex with the ectodomain of the SARS-CoV-2 S protein (Figs 4, S6 and S7). In both cases, the chAbs bound to the SARS-CoV-2 S protein in a 3:3 stoichiometry, indicated by three distinct EM densities protruding from the three RBDs, all of which were in the open conformation (Fig 4). The overall nominal resolutions of the S-chAb complexes were 3.6 and 3.5 Å (S-chAb-25 and S-chAb-45 complexes, respectively; Table B in S1 Text). Focused refinement of the cryo-EM maps with masks covering the Fab and RBD yielded a better definition of the binding interface, enabling de novo model building of the Fabs and the S protein to define the atomic details of the epitopes of individual RBD-chAbs (Materials and Methods).
Detailed structural analysis showed that RBD-chAb-25 binds the RBD via an extensive intermolecular hydrogen bond network around Y453 and N501 (Figs 4C and S8A). Specifically, Y453, Q493 and R403 of the RBD were hydrogen bonded to S31L, S28L and S32L of RBD-chAb-25 (subscript L denotes the light chain), respectively. Additionally, G502 (adjacent to N501) of the RBD and N101H of RBD-chAb-25 (subscript H denotes the heavy chain) formed a backbone-to-backbone hydrogen bond. A cluster of bipartite hydrogen bonds was also formed at the interface of the light chain (Y97L) and heavy chain (E50H and N59H) of RBD-chAb-25 with Q498 and T500 of the RBD. The overall binding interfaces between the RBD and the light and heavy chains of RBD-chAb-25 were 511 Å2 and 350 Å2, respectively. In the case of RBD-chAb-45, the phenyl ring of the key residue F486 of the RBD was encaged by the side chains of Y94L, N52H, D57H, and T59H of RBD-chAb-45, adjacent to a bipartite hydrogen bond between Y489 of the RBD and N52H and N55H of RBD-chAb-45, in close proximity to F486 of the RBD (Figs 4F and S8B). Additionally, T478 of the RBD was hydrogen-bonded to Y91L and N92L of RBD-chAb-45. The overall binding interfaces between the RBD and the light and heavy chains of RBD-chAb-45 were 190 Å2 and 362 Å2, respectively.
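For reference, interface areas like those quoted above follow standard buried-surface-area (BSA) bookkeeping: the solvent-accessible surface area (SASA) lost upon complex formation, divided between the two partners. Below is a minimal sketch of that arithmetic; the SASA inputs are hypothetical placeholders (not values computed from the deposited coordinates), chosen so that the per-side BSA matches the ~861 Å2 total reported here for RBD-chAb-25 (511 + 350 Å2). In practice, the SASA values come from a dedicated tool such as FreeSASA run on the isolated chains and the complex.

```python
# A minimal sketch of buried-surface-area bookkeeping. All SASA inputs are
# hypothetical placeholders, not values computed from PDB 7F62.
sasa_rbd = 11000.0      # A^2, isolated RBD (placeholder)
sasa_fab = 21000.0      # A^2, isolated Fab (placeholder)
sasa_complex = 30278.0  # A^2, RBD-Fab complex (placeholder)

# Each partner buries half of the total SASA lost on binding.
bsa_per_side = (sasa_rbd + sasa_fab - sasa_complex) / 2
print(f"Buried interface per side ~ {bsa_per_side:.0f} A^2")  # ~861 A^2
```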
Despite some overlap in the structural epitopes of RBD-chAb-25 and -45, superposition of the two resolved Fab structures onto the same RBD showed few steric clashes between the Fabs, suggesting that the two RBD-chAbs could bind simultaneously to the same RBD (Fig 5A). To verify their simultaneous binding, we mixed RBD-chAb-25 and SARS-CoV-2 S protein and isolated the binary complex by size-exclusion chromatography (SEC), followed by the addition of RBD-chAb-45 for another round of SEC analysis (Fig 5B). A clear shift in the elution volume of the main peak was observed, indicating the formation of a ternary complex, wherein the added RBD-chAb-45 was bound to the complex of RBD-chAb-25 and SARS-CoV-2 S protein, despite the limited space around the three RBDs to accommodate more than three chAbs. The same shift in elution peak was observed when RBD-chAb-25 was added to the complex of RBD-chAb-45-bound SARS-CoV-2 S protein, i.e., in the reverse mixing order (Figs 4A, 4D and 5B). These changes in SEC elution profiles provided clear evidence of simultaneous binding of RBD-chAb-25 and -45 to the SARS-CoV-2 S protein to form a higher-order complex.
Evidence of simultaneous binding of two different RBD-chAbs to the same RBD was even more clearly observed when the isolated RBD was used to form a ternary complex with RBD-chAb-25 and -45. The addition of RBD-chAb-25 or RBD-chAb-45 resulted in a clear shift of the elution volume of the main elution peak, indicating the formation of a stable binary complex between the RBD and the individual nAbs. As with the trimeric SARS-CoV-2 S protein, subsequent addition of a second nAb to a pre-formed RBD-nAb complex shifted the elution volume further toward a higher molecular weight, indicating the formation of a ternary complex of the two different nAbs and the RBD, regardless of the mixing sequence of the nAbs (S9 Fig). Collectively, these SEC analyses of different S protein constructs provided strong evidence of the ability of RBD-chAb-25 and -45 to simultaneously bind the RBD.
To verify the formation of a ternary complex between RBD-chAb-25 and -45 and the SARS-CoV-2 S protein, we determined the cryo-EM map of the ternary complex purified by SEC as shown in Fig 5B (S-chAb-25 complex + chAb-45). The resolution of the EM map was limited, in part due to the conformational heterogeneity of the nAbs in complex with the RBD. However, the resolution was sufficient for us to dock the Fabs of RBD-chAb-25 and -45 onto the RBD, which required considerable rearrangement of the relative orientation of the Fab of chAb-45 with respect to the binary S-chAb-45 complex for one of the three RBDs (Fig 5C and 5D). While the cryo-EM map of the Fab of RBD-chAb-45 could be seen at a lower threshold for the other two RBDs, the EM density corresponding to the Fab of RBD-chAb-25 was less visible at the other two RBDs. The lack of EM density from RBD-chAb-25 could be attributed to either conformational heterogeneity or substoichiometric binding. While the exact binding stoichiometry of the ternary complex remains to be established, our SEC and cryo-EM analyses provided good evidence of the simultaneous binding of RBD-chAb-25 and -45 to the SARS-CoV-2 S RBD, which could serve as the basis for the development of antibody cocktail therapies.
Prophylactic effect of RBD-chAb in SARS-CoV-2-infected mice or hamsters
To assess in vivo prophylactic potency against SARS-CoV-2 infection, we first selected RBD-chAb-45 for evaluation based on its high neutralization capacity. An adeno-associated virus (AAV)-mediated human ACE2-expressing (AAV-hACE2) mouse model was administered a single dose of 25 mg/kg antibody one day before SARS-CoV-2 infection (Fig 6A-6D). The virus titer was significantly lower than in controls, and plaque-forming units were undetectable in the treatment group at 5 days post-infection (Fig 6B and 6C). This result was confirmed by immunohistochemical staining of tissues from treated animals at 5 days post-infection (Fig 6D), confirming the potent in vivo neutralization activity of RBD-chAb-45 against SARS-CoV-2.
Next, we used a hamster model to mimic virus transmission in mild human SARS-CoV-2 infection [29]. We administered a single intraperitoneal injection of low-dose RBD-chAb-25, -28, -45 or -51 at 1.5 mg/kg one day prior to SARS-CoV-2 infection (Fig 6E-6H). The virus titer was determined from the lung tissue of each hamster on the third day after infection. Although body weights did not change (Fig 6F) and only RBD-chAb-45 caused a statistically significant decrease in the level of virus RNA measured by RT-qPCR (Fig 6G), the TCID50 values were decreased in all chAb-treated groups, and the effect was especially significant in the RBD-chAb-45-treated group at the third day post-infection compared to the control group (Fig 6H). We further assessed the efficacy of a cocktail of the two best RBD-chAbs (RBD-chAb-25 and -45) in hamsters (Fig 6I-6K). A single intraperitoneal injection of 1.5 or 4.5 mg/kg of both RBD-chAbs, or 4.5 mg/kg of a single RBD-chAb, one day prior to SARS-CoV-2 infection conferred dramatic protection, according to the infectious SARS-CoV-2 titers at the third day post-infection. Body weights were unchanged in the group injected with 1.5 mg/kg antibody and may even have slightly increased in the group receiving 4.5 mg/kg antibody (Fig 6I). Importantly, the virus RNA and TCID50 values decreased drastically in groups that received combinations of RBD-chAb-25 and -45 (1.5 or 4.5 mg/kg of each antibody; 3 or 9 mg/kg of total antibody) at 3 days post-infection (Fig 6J and 6K). As lower doses of neutralizing antibodies may induce antibody-dependent enhancement of infection in SARS-CoV-2-infected hamsters [30], we tested 1.5 or 4.5 mg/kg of each of RBD-mAb-25, -45, or their combination, administered three or five days prior to intranasal challenge with SARS-CoV-2 in hamsters (Fig 7A). No body weight loss was observed at the third day after virus challenge (Fig 7B), and a significant reduction of viral load was seen (Fig 7C and 7D). Notably, at the low dose of these two antibodies (1.5 mg/kg) or their combination (3.0 mg/kg), we did not find that any of the neutralizing antibodies enhanced disease. In groups receiving a combination of RBD-chAb-25 and -45, the neutralizing activity (TCID50 values) exhibited a synergistic effect compared to single treatments with RBD-chAb-25 or -45 (Figs 6K and 7D).
Therapeutic effect of RBD-chAb cocktail in SARS-CoV-2-infected mice or hamsters
We next tested the effect of treating animals with the antibody cocktail after SARS-CoV-2 infection. We treated the AAV-hACE2 mouse model with combinations of 1.5, 4.5, or 10 mg/kg of each of RBD-chAb-25 and -45 at one day post-intranasal SARS-CoV-2 inoculation (Fig 8A). Although viral genomic RNA could be detected, the infectious SARS-CoV-2 titers were close to the limit of detection (LOD, 1 × 10^2 TCID50/ml) for all mice in the RBD-chAb cocktail-treated groups at 5 days post-infection (Fig 8B and 8C). The viral antigen content was also assessed in lung tissue of mice treated with the RBD-chAb cocktail (10 mg/kg of each antibody) using immunohistological assays, and no or very few viral antigens were detected (Fig 8D). Next, we tested the therapeutic effects in the hamster model (Fig 8E-8H). Here, viral genomic RNA could still be detected at the end of the experiment, and the body weights of the hamsters showed a slight loss, similar to the control group (Fig 8F and 8G). Nevertheless, the combination of RBD-chAb-25 and -45 exhibited the same pronounced therapeutic effect when administered 1 day post-intranasal SARS-CoV-2 inoculation in hamsters as in AAV-hACE2 mice (Fig 8H). Collectively, our data demonstrate an additive neutralizing effect of the cocktail of RBD-chAb-25 and -45, which acted as both a prophylactic and a therapeutic agent for SARS-CoV-2 infection in mice and hamsters.
Fig 6. Prophylactic efficacy of neutralizing chAbs against SARS-CoV-2 infection.
A. Illustration of the study design for prophylactic efficacy of RBD-chAb-45 against SARS-CoV-2 in AAV-hACE2 mice. One day prior to intranasal challenge with SARS-CoV-2, each group of mice was given a single intraperitoneal dose of 25 mg/kg of RBD-chAb-45 (n = 4) or NHIgG (normal human IgG) as isotype control (n = 4). On day 5 after virus inoculation, lung samples were collected for analysis. B-C. The viral load in the lungs of mice treated with RBD-chAb-45 was determined by qRT-PCR, and the median tissue culture infectious dose per ml (TCID50/ml) was calculated. D. Viral antigen was detected with an anti-SARS-CoV-2 N protein mAb (red) in paraffin-embedded lung tissue. Nuclear DNA was stained with DAPI (blue). E. Illustration of the study design for prophylactic efficacy of RBD-chAbs against SARS-CoV-2 in hamsters. One day prior to intranasal challenge with SARS-CoV-2, each group of hamsters was given a single intraperitoneal injection of RBD-chAbs (n = 3 or 5) or NHIgG as isotype control (n = 3 or 4). On day 3 after virus inoculation, lung samples were collected for analysis. F. Body weight percentages of hamsters treated with a single RBD-chAb, relative to body weight on the day of virus inoculation. G-H. The viral load in the lungs of hamsters treated with a single RBD-chAb was determined by qRT-PCR, and TCID50/ml was calculated. I. Body weight percentages of hamsters treated with the RBD-chAb cocktail, relative to body weight on the day of virus inoculation. J-K. The viral load in the lungs of hamsters treated with the RBD-chAb cocktail was determined by qRT-PCR, and TCID50/ml was calculated. L-M. Pathologic changes in the lung were assessed by immunohistochemistry. Data information: All data points are shown, along with the median. * p < 0.05, *** p < 0.001, as determined by Student's t-test. ctrl, isotype control. i.p., intraperitoneal. LOD, limit of detection, 1 × 10^2 TCID50/ml. Scale bars, 100 μm. The lung pathology score definition is according to
Discussion
Effective nAbs are highly sought after in the fight against the COVID-19 pandemic because of their ability to slow the spread of the virus and to provide timely treatment for the critically ill. Here we reported the development of a panel of potent chAbs that target distinct structural epitopes within the RBD of the SARS-CoV-2 S protein. These chAbs effectively neutralized SARS-CoV-2 in cell culture, with PRNT50 values down to 6 ng/ml (Fig 2). We defined three distinct classes of structural epitopes for these RBD-chAbs using a site-directed mutagenesis approach (Fig 3); these classes were further elucidated in atomic detail by cryo-EM structural analyses to guide the design of a cocktail therapy against SARS-CoV-2 (Fig 4). The ability of RBD-chAb-25 and -45 to simultaneously bind the RBD of the SARS-CoV-2 S protein was confirmed by SEC (Fig 5). The prophylactic and therapeutic potentials of the cocktail therapy were verified using SARS-CoV-2-infected mouse and hamster models (Figs 6-8).

Fig 7. A. Illustration of the study design for prophylactic efficacy of monoclonal mouse antibodies against SARS-CoV-2 in hamsters. Three or five days prior to intranasal challenge with SARS-CoV-2, each group of hamsters was given a single intraperitoneal dose of 1.5 mg/kg of RBD-mAb-25 (n = 3) or RBD-mAb-45 (n = 3), 3 mg/kg of RBD-mAb-25 combined with RBD-mAb-45 (n = 4), or 3 mg/kg of NMIgG as isotype control (n = 4). On day 3 after virus inoculation, body weights were recorded and lung samples were collected for analysis. B. Body weight percentages relative to body weight on the day of virus inoculation. C. The viral load in the lung was determined by qRT-PCR. D. The infectious viral load in the lung was determined as the median tissue culture infectious dose per ml (TCID50/ml). Data information: All data points are shown, along with the median. *** p < 0.001, as determined by Student's t-test. ctrl, isotype control. i.p., intraperitoneal. LOD, limit of detection, 1 × 10^2 TCID50/ml.

Fig 8. A. Illustration of the study design for therapeutic efficacy of the RBD-chAb cocktail against SARS-CoV-2 in AAV-hACE2 mice. One day after intranasal challenge with SARS-CoV-2, each group of mice was given a single intraperitoneal dose of 1.5, 4.5, or 10 mg/kg of RBD-chAb-25 + -45 (n = 4), or 9 or 20 mg/kg of NHIgG (normal human IgG) as isotype control (n = 7). B-C. On day 5 after virus inoculation, the viral load in the lungs of mice treated with the cocktail was determined by qRT-PCR and TCID50/ml. D. Viral antigen was detected with an anti-SARS-CoV-2 N protein mAb (red) in paraffin-embedded lung tissue. Nuclear DNA was stained with DAPI (blue). E. Illustration of the study design for therapeutic efficacy of the RBD-chAb cocktail against SARS-CoV-2 in hamsters. One day after intranasal challenge with SARS-CoV-2, each group of hamsters was given a single intraperitoneal dose of 1.5, 4.5, or 10 mg/kg of each RBD-chAb (n = 6), or 20 mg/kg of NHIgG as isotype control (n = 6). F. Body weight percentages of hamsters treated with the cocktail, relative to body weight on the day of virus inoculation. G-H. On day 5 after virus inoculation, the viral load in the lungs of hamsters treated with the cocktail was determined by qRT-PCR and TCID50/ml. Data information: All data points are shown, along with the median. ** p < 0.01, *** p < 0.001, as determined by Student's t-test. ctrl, isotype control. i.p., intraperitoneal. LOD, limit of detection, 1 × 10^2 TCID50/ml. Scale bars, 100 μm. https://doi.org/10.1371/journal.ppat.1009704.g008
Using cryo-EM, we revealed the atomic details of RBD-chAb-25 and -45 binding to the RBD of the SARS-CoV-2 S protein (Fig 4). The structural epitopes are generally hydrophilic, and the intermolecular interactions mostly involve hydrogen bonding. Additionally, F486 of the RBD is sandwiched by a T-shaped edge-to-face π-π interaction with Y94L and a CH-π interaction with T59H of RBD-chAb-45 (Fig 4F). This unique binding motif may help stabilize complex formation despite the relatively small binding interface compared to that of RBD-chAb-25 (Fig 4B and 4E). Of all reported structures of the SARS-CoV-2 S protein in complex with antibodies and nanobodies, only six show all three RBDs in an upward, open conformation [12,14,15,35-45]. By lifting the RBD upward, more binding surface is made available to other nAbs (Fig 4A and 4D). Indeed, molecular modeling of RBD-chAb-25 and -45 in complex with the RBD suggested that these two Abs could bind simultaneously to the same RBD by occupying two distinct structural epitopes (Fig 5A), a finding subsequently confirmed by SEC analyses (Figs 5B and S9). As the collective contributions of RBD-chAb-25 and -45 to the binding interface essentially cover the entire RBM of ACE2 (Fig 3F), the combined use of these two Abs is expected to exhibit strong synergy in neutralizing SARS-CoV-2; this synergistic neutralization was confirmed by our in vivo animal model studies (Figs 6K and 7D).
According to the reported epitopes of COVID-19 nAbs, there are three types of SARS-CoV-2 S protein binding modes: (1) direct binding to the RBM; (2) binding to the RBD outside the RBM [14,15,31,46,47]; and (3) binding to the S protein outside the RBD while still exhibiting neutralizing activity [14,46]. Combining multiple nAbs with non-competing epitopes has been demonstrated to synergistically neutralize virus infection [18]. Here, we showed that a cocktail of RBD-chAb-25 and -45 also exhibits synergistic neutralizing ability, and this combination is likely to retain therapeutic potential against SARS-CoV-2 mutants.
Despite the rapid development of multiple nAbs against SARS-CoV-2, mutations in the S protein can potentially lead to drug resistance. Since the release of the first complete SARS-CoV-2 sequence [48], 28,811 point mutations have been identified in the SARS-CoV-2 genome [49]. Non-silent mutations at specific residues may severely disrupt nAb epitopes or enhance viral infection. Among these mutants, the D614G point mutation is the most prevalent [50,51]. This point mutation in the S1 domain (not in the RBD) is frequently recognized by nAbs, and it leads to a more stable S protein and higher virus infectivity [51-53]. In addition to the D614G mutant, more transmissible SARS-CoV-2 variants have been reported [54]. While some mutations outside the RBD site were found to escape antibody binding [55], all six of our potent antibodies retained high binding signals when tested against S protein variants harboring some of the most common mutations in the GISAID sequencing database for COVID-19 as of
December 2020 (S10A Fig). Residue N501 within the RBM is highly mutable in various infectious SARS-CoV-2 strains, including the recently emerged United Kingdom variant B.1.1.7 and the South African variant B.1.351, which are more infectious than the original strain [56]. We continue to track the binding of these six antibodies to other common SARS-CoV-2 variants, including N501Y, using cellular ELISA (S10B Fig). Nearly all of the antibodies retain the ability to recognize the most common mutation variants, with only RBD-chAb-25 showing poor binding to the N501Y mutant (S10B Fig). Three major mutations of concern within the RBD of SARS-CoV-2 (i.e., N501Y, K417N/K417T and E484K) are present in the highly transmissible B.1.1.7 (United Kingdom), B.1.351 (South Africa) and P.1 (Brazil) variants, and were recently reported to disrupt binding by several prominent nAbs, including REGN10933, 2-15, LY-CoV555, and CT-P59 [57]. We tested the neutralization activities of our antibodies against these mutant RBD recombinant proteins using ELISA, and against pseudotyped viruses for B.1.1.7 and B.1.351 (S10C-S10F Fig). Fortunately, RBD-chAb-45 and -51, which share almost the same epitope, retained high binding ability for all three major variants of concern. Although RBD-chAb-25 lost its binding ability for the N501Y mutant form of the RBD and for B.1.1.7 or B.1.351 pseudoviruses, it still retained the ability to recognize the K417N and E484K mutant RBD proteins (S10C Fig). Furthermore, the antibody cocktail of RBD-chAb-25 and -45 was tested against D614G, B.1.1.7, and B.1.351 pseudoviruses. The cocktail showed IC50 values close to those of RBD-chAb-45 alone, meaning that the effect was not diminished by the loss of neutralization ability of RBD-chAb-25 (S10F Fig). Although mutations of N501 could perturb the binding of RBD-chAb-25 to the RBD of SARS-CoV-2 (Fig 4C), none of the reported mutations within the RBD overlaps with the epitope of RBD-chAb-45 (Fig 4F). In addition, the E484K mutation is not located within the binding surfaces of RBD-chAb-25 and RBD-chAb-45 according to our cryo-EM analysis (S11 Fig). Although the K417N mutation is located within the binding epitope of RBD-chAb-25, it lies at the edge of the binding surface (S11 Fig), and RBD-chAb-25 still retains the ability to bind the RBD (S10C Fig). These findings suggest that the cocktail of RBD-chAb-25 and -45 might be effective at overcoming drug resistance due to escape mutations. This is similar to REGN10987, which retains neutralizing activity against SARS-CoV-2 variants B.1.1.7 and B.1.351 [57]. Interestingly, we have determined the cryo-EM structure of RBD-chAb-15 in complex with the SARS-CoV-2 S protein and found that the combination of RBD-chAb-15 and -45 shows a synergistic effect toward B.1.1.7 in the pseudovirus neutralization assay [58]. Therefore, our six chimeric antibodies can be used strategically to create cocktail therapies against multiple SARS-CoV-2 mutant strains.
Synergistic effects of antibody cocktail therapies have been reported, and they paved the way for the anti-SARS-CoV-2 S protein antibodies REGN10987 and REGN10933 to enter clinical trials, even without animal experiments [25]. Additionally, Liu and co-workers demonstrated additive inhibitory effects for cocktail antibodies [18]. In the crystal structure, one of those antibodies, B38, has an epitope that largely overlaps with the ACE2 binding interface, similar to the epitope of RBD-chAb-25 in our study. Crowe and co-workers also reported the use of cocktail antibodies to increase protection from SARS-CoV-2 infection in an animal model, and a low-resolution EM structure demonstrated the simultaneous binding of two nAbs to the RBD of SARS-CoV-2 [34]. While our current study provides very similar findings to those above, including the efficacies derived from in vitro and in vivo neutralization assays, we provide additional information that was hitherto unavailable. First, we determined the atomic structures of two potent nAbs, RBD-chAb-25 and -45, in complex with the SARS-CoV-2 S protein, which revealed an unusual 3:3 binding stoichiometry (Fig 4). In other words, both RBD-chAbs occupy all three RBDs to preclude ACE2 binding to the S protein, although RBD-chAb-25 is similar to REGN10933 (one of the antibodies in
the REGN-COV2 cocktail) with regard to its loss of neutralizing ability against SARS-CoV-2 variants B.1.1.7, B.1.351 and P.1 [57,59]. Second, the structure-guided design of the cocktail therapy showed promising therapeutic effects in mouse and hamster models (Fig 8). Based on these structural insights, we predict that recognition of the non-overlapping epitopes of RBD-chAb-25 and -45 would provide improved protection against different SARS-CoV-2 variants, including the emerging United Kingdom and South African variants. In particular, the epitope of RBD-chAb-45 is less utilized by other reported nAbs, making it an ideal candidate for use in antibody cocktail therapies.
Ethics statement
All animal experiments were performed according to established guidelines for the ethical use and care of animals provided by the Institutional Animal Care and Use Committee (IACUC) at Academia Sinica, Taiwan. All experiments involving animals were approved by the IACUC (protocol 20-05-147). Mice and hamsters were housed individually in cages on a 12-hr light/dark cycle at 20-24˚C and given free access to food and water. In order to minimize suffering, animals were euthanized upon loss of over 20% body weight or when the animal exhibited hunching, lack of movement, ruffled fur, and poor grooming. The mice were killed by CO 2 asphyxiation.
Construction and purification of RBD and SARS-CoV-2 S recombinant protein
The DNA fragments encoding the RBD (amino acid residues Arg319-Phe541 of the SARS-CoV-2 S protein) were amplified by PCR with PfuTurbo DNA polymerase (Stratagene). The PCR products were then cloned into the pcDNA3.4-Flag-His vector with an IgGκ signal sequence to generate pcDNA3.4-S1-Flag-His and pcDNA3.4-RBD-Flag-His. The RBD-Flag-His protein was produced using the Expi293F Expression System (Thermo Fisher Scientific) and purified on Ni Sepharose (GE Healthcare Bio-Sciences), followed by anti-Flag M2 agarose beads (Sigma). The DNA fragments were also cloned into a pcDNA3.4-Fc vector with an IgGκ signal sequence to generate pcDNA3.4-RBD-Fc. The RBD-Fc protein was produced using the Expi293F Expression System (Thermo Fisher Scientific) and purified with Protein G Sepharose 4 Fast Flow (GE Healthcare) according to the manufacturer's instructions.
The codon-optimized nucleotide sequence of the full-length SARS-CoV-2 S protein was kindly provided by Dr. Che Alex Ma (Genomics Research Center, Academia Sinica). The DNA sequence corresponding to residues 1-1208 of the S protein was subcloned into the mammalian expression vector pcDNA3.4-TOPO (Invitrogen). Additional mutations were introduced for stabilization [60], namely a polybasic furin cleavage site mutation (682RRAR685 → 682GSAG685) and a tandem proline substitution (986KV987 → 986PP987); the resulting construct is hereafter designated SARS-CoV-2 S fm2P. The construct harbors a C-terminal foldon trimerization domain of phage T4 fibritin, followed by a c-myc epitope and a hexahistidine tag for affinity purification.
Screening and binding of antibodies against SARS-CoV-2 by ELISA
The ELISA plates were coated with 0.5 μg/ml RBD-His, S-His, or EpEX-His protein in 0.1 M NaHCO3 (pH 8.6) buffer at 4˚C overnight, followed by blocking with PBS containing 1% bovine serum albumin (BSA) at RT for 2 h. After blocking, the wells were washed twice with PBS; the plates were then stored at -20˚C.
The protein contents of the hybridoma culture supernatants or purified antibodies were quantified by BCA assay and serially diluted with 1% BSA in PBS. Then, 50 μl of supernatant or antibody was added to each well, and the plate was incubated for 1 h at room temperature. The plates were washed three times with PBS containing 0.1% Tween-20 (PBST0.1) and then incubated for 1 h with Peroxidase AffiniPure Goat Anti-Mouse IgG (H+L) (Jackson ImmunoResearch) or Peroxidase AffiniPure Goat Anti-Human IgG (H+L) (Jackson ImmunoResearch) (1:5000 dilution), as appropriate. After three washes with PBST0.1, signal was developed using 3,3',5,5'-tetramethylbenzidine (TMB) (TMBW-1000-01, Surmodics). The reaction was stopped with 3 N HCl, and absorbance was measured at 450 nm with an ELISA reader (VersaMax Tunable Microplate Reader; Molecular Devices).
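As a minimal illustration of the dilution arithmetic used in these titrations (the two-fold factor is an assumption, but it is consistent with the 0.1 μg/ml to 0.8 ng/ml range given in the Fig 1 legend), the series can be generated as follows:

```python
# A minimal sketch of a two-fold serial dilution series for the ELISA titration.
start_ug_per_ml = 0.1
n_points = 8  # the starting concentration plus seven two-fold dilutions

series_ng_per_ml = [start_ug_per_ml * 1000 / 2**i for i in range(n_points)]
for conc in series_ng_per_ml:
    print(f"{conc:7.2f} ng/ml")
# final point: 100 / 128 = 0.78 ng/ml, matching the stated lower bound
```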
Histological analysis
Viral antigen detection in the SARS-CoV-2 animal models was accomplished by immunofluorescence staining. Lungs were fixed with 4% paraformaldehyde, paraffin-embedded and cut into 3-μm sections. Slides were deparaffinized and rehydrated, then incubated with PBS/0.02% Triton X-100 and blocked with 5% BSA at room temperature for 1 h. The anti-SARS-CoV-2 N protein antibody was added to the sections, followed by washing and incubation with Alexa Fluor 568 goat anti-human IgG (Invitrogen) at 1:200 dilution. After washing in PBS, slides were stained with DAPI (Invitrogen) at 1:100 dilution. Images were acquired using ZEN 2011 Black Edition software (Carl Zeiss MicroImaging GmbH) and an LSM 700 confocal microscope (Carl Zeiss AG).
Construction and expression of chimeric antibodies (chAbs)
The VH and VK gene segments of the mAbs were amplified by PCR with KAPA HiFi DNA polymerase (Roche) and introduced via appropriate restriction enzyme sites. The VH genes were cloned in-frame into a modified expression vector with a signal peptide and the human IgG1 constant region. The VL genes were separately cloned into a modified expression vector with a signal peptide and the human kappa chain constant region. The VH- and VL-encoding plasmids were co-transfected into Expi-293 cells, which were cultured for 5 days to produce antibodies. The culture supernatant from the transfected cells was filtered through a 0.45-μm membrane and then subjected to protein G column chromatography (GE Healthcare) to purify the human IgG. After dialysis of the eluates against PBS, the antibody concentration was assessed using the Bradford assay (Thermo Fisher Scientific).
Pseudovirus neutralization assay

Serially diluted chAbs were mixed with wild-type or mutant SARS-CoV-2 pseudovirus at 1000 TU/well in 96-well plates. The mixture was incubated for 1 h at 37˚C and then added to pre-seeded 293T cells at 100 μl/well for 24 h at 37˚C. The supernatants were removed after 24 h and replaced with 100 μl/well DMEM for an additional 72-h incubation. Next, 100 μl of supernatant was removed, and 100 μl of ONE-Glo luciferase reagent (Promega) was added to each well for a 3-min incubation. Luciferase activities were measured with a microplate spectrophotometer (Molecular Devices). The inhibition rate was calculated by comparing the measured values to those of the negative and positive control wells. IC50 and IC80 were determined by four-parameter logistic regression using GraphPad Prism (GraphPad Software Inc.).
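Below is a minimal sketch of the four-parameter logistic (4PL) fit behind the IC50/IC80 values; the dose-response data are hypothetical placeholders, and GraphPad Prism performs the equivalent regression internally.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """4PL curve: % inhibition rising with antibody concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

conc = np.array([1000.0, 250.0, 62.5, 15.6, 3.9, 1.0])      # ng/ml (hypothetical)
inhibition = np.array([98.0, 95.0, 80.0, 45.0, 15.0, 5.0])  # % (hypothetical)

(bottom, top, ic50, hill), _ = curve_fit(four_pl, conc, inhibition, p0=[0, 100, 20, 1])
# IC80 follows analytically from the fitted parameters
ic80 = ic50 * ((top - bottom) / (80.0 - bottom) - 1.0) ** (-1.0 / hill)
print(f"IC50 = {ic50:.1f} ng/ml, IC80 = {ic80:.1f} ng/ml")
```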
Plaque reduction neutralization test (PRNT)
Serially diluted chAbs were incubated with 100 PFU of SARS-CoV-2 (strain: TCDC#4) for 1 h at 37˚C. The virus-mAb mixtures were added to pre-seeded Vero E6 cells for 1 h of adsorption at 37˚C; each experiment was performed in triplicate. The viral mixtures were then removed, and the cells were overlaid with DMEM containing 2% FBS and 1% methylcellulose. After a 4-day incubation, the cells were fixed with 10% formaldehyde overnight and stained with 0.5% crystal violet for 20 min. The plates were washed with tap water, and plaque numbers were counted. Plaque reduction was calculated as: inhibition percentage = 100 × [1 - (plaque number with mAb / plaque number without mAb)]. The 50% plaque reduction (PRNT50) titer was calculated with Prism software. The SARS-CoV-2 used in this study, clinical isolate TCDC#4 (hCoV-19/Taiwan/4/2020), was obtained from the Taiwan Centers for Disease Control (CDC). The PRNT assay was performed in the BSL-3 facility of the Institute of Biomedical Sciences, Academia Sinica.
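The inhibition formula stated above, combined with log-linear interpolation between the bracketing concentrations (the interpolation is an assumption; Prism's sigmoidal fit is an alternative), gives a PRNT50 estimate as sketched here with hypothetical plaque counts:

```python
import numpy as np

control_plaques = 100.0                            # virus-only wells (hypothetical)
conc_ng_ml = np.array([1000.0, 100.0, 10.0, 1.0])  # chAb dilutions (hypothetical)
plaques = np.array([2.0, 10.0, 60.0, 90.0])        # plaque counts with chAb (hypothetical)

# inhibition % = 100 * [1 - (plaques with mAb / plaques without mAb)]
inhibition = 100.0 * (1.0 - plaques / control_plaques)

# interpolate log10(concentration) at 50% inhibition (np.interp needs ascending x)
log_c = np.log10(conc_ng_ml)
prnt50 = 10 ** np.interp(50.0, inhibition[::-1], log_c[::-1])
print(f"PRNT50 ~ {prnt50:.1f} ng/ml")
```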
Equilibrium dissociation constant (KD) of SARS-CoV-2 RBD binding to chAbs
Binding kinetic measurements were performed using a Biacore 8K (GE Healthcare). All assays were performed at 25˚C with a running buffer of PBS pH 7.4 supplemented with 0.005% (v/v) Surfactant P20 (GE Healthcare). Anti-RBD chimeric antibodies were immobilized onto a protein A sensor chip surface to a level of ~180 response units (RU). SARS-CoV-2 RBD-His protein was injected in a two-fold dilution series from 40 nM to 0.625 nM at a flow rate of 50 μl/min, using a multi-cycle kinetics program with an association time of 150 sec and a dissociation time of 300 sec. Running buffer was also injected using the same program for background subtraction. KD values (equilibrium dissociation constants) were calculated from all binding curves based on a global fit to a 1:1 binding model with the Biacore 8K data analysis software.
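The 1:1 binding model referred to above reduces to simple exponentials in the association and dissociation phases. The sketch below simulates one cycle with illustrative rate constants (placeholders, not the fitted values for any chAb) to show how KD = kd/ka emerges:

```python
import numpy as np

ka, kd = 5e5, 5e-4   # association (1/(M*s)) and dissociation (1/s) rates; hypothetical
KD = kd / ka         # equilibrium dissociation constant (M); here 1 nM
C = 10e-9            # analyte concentration (10 nM, within the assayed series)

t_on = np.linspace(0, 150, 151)   # 150-s association phase, as in the assay
Req = C / (C + KD)                # equilibrium response as a fraction of Rmax
R_on = Req * (1.0 - np.exp(-(ka * C + kd) * t_on))

t_off = np.linspace(0, 300, 301)  # 300-s dissociation phase: pure exponential decay
R_off = R_on[-1] * np.exp(-kd * t_off)

print(f"KD = {KD * 1e9:.1f} nM; end-of-association response = {R_on[-1]:.3f} x Rmax")
```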
Site-directed mutagenesis of ACE2-binding residues within the RBD
The K417, Y453, Q474, F486, Q498, T500, and N501 residues within the RBD of the S protein are responsible for its interaction with ACE2 [28], and each ACE2-binding residue was individually replaced with alanine by site-directed mutagenesis. Mutagenesis was performed using KAPA HiFi Polymerase (Kapa Biosystems) and DpnI digestion, according to the manufacturer's instructions. RBD mutants were constructed with a single mutation at each ACE2-binding residue, or with multiple mutations where the residues were neighbors. All mutant constructs were confirmed by sequencing.
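At the sequence level, each substitution amounts to swapping one codon for an alanine codon. The helper below sketches that operation; the GCT codon choice and the numbering offset are assumptions for illustration, not details given in the paper.

```python
def mutate_to_alanine(cds, residue_number, first_residue=1, ala_codon="GCT"):
    """Replace the codon encoding `residue_number` with an alanine codon.

    `cds` is an in-frame coding sequence whose first codon encodes residue
    `first_residue` (e.g., 319 for a construct spanning RBD residues 319-541).
    """
    i = (residue_number - first_residue) * 3
    if i < 0 or i + 3 > len(cds):
        raise ValueError("residue_number outside the coding sequence")
    return cds[:i] + ala_codon + cds[i + 3:]

# Hypothetical usage for an RBD construct starting at residue 319:
# mutant_cds = mutate_to_alanine(rbd_cds, 486, first_residue=319)  # F486A
```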
Epitope mapping by ELISA
RBD-chAbs were biotin-labeled using EZ-Link Sulfo-NHS-LC-Biotin (Thermo; following the manufacturer's recommendations) and purified using an Amicon Ultra-0.5 Centrifugal Filter
Unit (Millipore). Each RBD-chAb (50 ng/well) was pre-coated onto ELISA plates. RBD-His or EpEX-His protein (5 ng/well) in BSA was added to the capture antibody-precoated ELISA plates, followed by the addition of RBD-chAb (7.8 ng/well) in BSA. The plates were then incubated with biotinylated antibodies (0.78 ng/well) in BSA at 25˚C for 1 h, after which 50 μl of 2000-fold-diluted peroxidase-streptavidin (Jackson) was added to each well and incubated for 1 h at 25˚C. BSA without biotinylated antibodies served as a control. The plates were washed with PBST between each step. After a final wash, the plates were developed with TMB, and absorbance was read at 450 nm after the reaction was stopped.
In vivo prophylactic and therapeutic assays for SARS-CoV-2 infection
To assess the in vivo potency of neutralizing chAbs against the SARS-CoV-2 RBD, mouse and hamster models of SARS-CoV-2 infection were utilized. AAV-hACE2 mice were prepared by intratracheal injection of AAV6 expressing hACE2 and intraperitoneal injection of AAV9 expressing hACE2 (manuscript in submission). The AAV-hACE2-transduced mice or hamsters were first given an intraperitoneal injection of antibody or normal mouse IgG. Twenty-four hours later, intranasal inoculations of 10^5 median tissue culture infectious doses (TCID50) of SARS-CoV-2 (strain: TCDC#4) were administered to mice, or 10^5 plaque-forming units (PFU) were administered to hamsters. Five days (mice) or 3 days (hamsters) after the virus challenge, lung tissues were harvested to quantify the viral load. Lung tissues were weighed and homogenized using the SpeedMill PLUS (Analytik Jena AG) for two rounds of 2 min each in 0.6 ml of DMEM with 1% penicillin/streptomycin or in RLT buffer (RNeasy Mini Kit, Qiagen). Homogenates were centrifuged at 3,000 rpm for 5 min at 4˚C. The supernatant was collected and stored at -80˚C for the TCID50 assay or RNA extraction. After tissue homogenization, serial 10-fold dilutions of each sample were inoculated onto a Vero E6 cell monolayer in quadruplicate and cultured in DMEM with 1% FBS and penicillin/streptomycin. The plates were observed for cytopathic effects for 4 days. TCID50 was interpreted as the amount of virus that caused cytopathic effects in 50% of inoculated wells. Virus titers are expressed as TCID50/ml of tissue.
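For the TCID50 readout described above, the endpoint can be computed with the Reed-Muench method; this estimator is an assumption (the text defines the endpoint but does not name the calculation), and the CPE scores below are hypothetical quadruplicate wells per 10-fold dilution.

```python
import numpy as np

log10_dilutions = np.array([-1, -2, -3, -4, -5])  # 10-fold dilution series
cpe_positive = np.array([4, 4, 3, 1, 0])          # wells with CPE (hypothetical)
wells_per_dilution = 4

# Reed-Muench cumulative counts: positives summed from the most dilute end,
# negatives summed from the most concentrated end.
cum_pos = np.cumsum(cpe_positive[::-1])[::-1]
cum_neg = np.cumsum(wells_per_dilution - cpe_positive)
pct_infected = 100.0 * cum_pos / (cum_pos + cum_neg)

above = np.where(pct_infected >= 50)[0][-1]  # last dilution with >=50% infected
prop_dist = (pct_infected[above] - 50) / (pct_infected[above] - pct_infected[above + 1])
log10_endpoint = log10_dilutions[above] - prop_dist  # dilution factor is 10-fold
print(f"Titer ~ 10^{-log10_endpoint:.2f} TCID50 per inoculated volume")
```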
The in vivo assays to assess the therapeutic activities of the chAb cocktails were conducted by intraperitoneally injecting mixtures of RBD-chAb-25 and -45. AAV-hACE2 mice or hamsters were intranasally infected with 1 × 10^5 TCID50 of virus. Antibodies were then intraperitoneally injected into the mice or hamsters at day 2 after SARS-CoV-2 inoculation. The mice or hamsters were sacrificed to collect tissue and blood samples at day 5 or day 3 post-infection, respectively.
In vivo prophylactic assays for low doses of neutralizing mAbs against SARS-CoV-2 infection in hamsters
To assess the in vivo potency of low doses of neutralizing mAbs against the SARS-CoV-2 RBD, hamster models of SARS-CoV-2 infection were utilized. The hamsters were first given an intraperitoneal injection of antibody or normal mouse IgG. Intranasal inoculations of 10^5 PFU of SARS-CoV-2 (strain: TCDC#4) were administered 3 or 5 days later. Three days after the virus challenge, lung tissues were harvested to quantify the viral load. Lung tissues were weighed and homogenized using the SpeedMill PLUS (Analytik Jena AG) for two rounds of 2 min each in 0.6 ml of DMEM with 1% penicillin/streptomycin or in RLT buffer (RNeasy Mini Kit, Qiagen). Homogenates were centrifuged at 3,000 rpm for 5 min at 4˚C. The supernatant was collected and stored at -80˚C for the TCID50 assay or RNA extraction. After tissue homogenization, serial 10-fold dilutions of each sample were inoculated onto a Vero E6 cell monolayer in quadruplicate and cultured in DMEM with 1% FBS and penicillin/streptomycin. The plates were observed for cytopathic effects for 4 days. TCID50 was interpreted as
the amount of virus that caused cytopathic effects in 50% of inoculated wells. Virus titers are expressed as TCID50/ml of tissue.
Real-time RT-PCR for SARS-CoV-2 RNA quantification
To quantitate SARS-CoV-2 RNA, primers targeting the envelope (E) gene of the SARS-CoV-2 genome were used in a TaqMan real-time RT-PCR assay as previously described [61]. Forward primer E-Sarbeco-F1 (5'-ACAGGTACGTTAATAGTTAATAGCGT-3'), reverse primer E-Sarbeco-R2 (5'-ATATTGCAGCAGTACGCACACA-3'), and probe E-Sarbeco-P1 (5'-FAM-ACACTAGCCATCCTTACTGCGCTTCG-BBQ-3') were used. A total of 30 μL of RNA solution was collected using the RNeasy Mini Kit (QIAGEN, Germany) according to the manufacturer's instructions. 5 μL of RNA sample was added to a 25-μL reaction using the SuperScript III One-Step RT-PCR System with Platinum Taq Polymerase (Thermo Fisher Scientific, USA). The final reaction mix contained 400 nM each of the forward and reverse primers, 200 nM probe, 1.6 mM deoxyribonucleoside triphosphates (dNTPs), 4 mM magnesium sulfate, 50 nM ROX reference dye and 1 μL of the enzyme mixture from the kit. Cycling was performed with a one-step RT-PCR protocol: 55˚C for 10 min for cDNA synthesis, followed by 3 min at 94˚C and 45 amplification cycles of 94˚C for 15 sec and 58˚C for 30 sec. Data were collected and analyzed with an Applied Biosystems 7500 Real-Time PCR System (Thermo Fisher Scientific, USA). A synthetic 113-bp oligonucleotide fragment was used as the qPCR standard to estimate copy numbers of the viral genome. The oligonucleotides were synthesized by Genomics BioSci and Tech Co. Ltd. (Taipei, Taiwan).
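As a worked illustration of the standard-curve conversion (the slope and intercept below are hypothetical placeholders; in practice they are fitted from a dilution series of the synthetic 113-bp standard), Ct values map to genome copies as follows:

```python
# A minimal sketch of Ct-to-copy-number conversion via a linear standard curve:
# Ct = slope * log10(copies) + intercept.
slope, intercept = -3.32, 38.0  # hypothetical; a slope of -3.32 implies ~100% efficiency

def copies_from_ct(ct):
    """Invert the standard curve to estimate genome copies per reaction."""
    return 10 ** ((ct - intercept) / slope)

for ct in (20.0, 25.0, 30.0):
    print(f"Ct {ct:4.1f} -> {copies_from_ct(ct):.2e} copies per reaction")
```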
Cryo-EM sample preparation and data collection
To prepare S-mAb complexes, purified recombinant SARS-CoV-2 S fm2P was mixed individually with RBD-chAb-45, -25, or -15 at a molar ratio of 1:1.4 at room temperature for 1 h. The mixture was loaded onto a size-exclusion column (Superose 6 Increase 10/300 GL, GE Healthcare, U.S.A.) to separate the S-mAb complex from free mAbs. Fractions corresponding to the S-mAb complex were confirmed by SDS-PAGE and concentrated to 1 mg/ml for cryo-grid preparation. For the ternary complex of the S protein with RBD-chAb-25 and -45, the corresponding SEC fractions were collected as described in the following section and concentrated to 1 mg/ml for cryo-grid preparation. Three microliters of each sample were applied onto 300-mesh Quantifoil R1.2/1.3 holey carbon grids. The grids were glow-discharged at 20 mA for 30 sec. After a 30-sec incubation, the grids were blotted for 2.5 sec at 4˚C and 100% humidity, and vitrified using a Vitrobot Mark IV (ThermoFisher Scientific, U.S.A.).
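For orientation, a rough sketch of the mass arithmetic behind the 1:1.4 (S trimer : chAb) mixing ratio is shown below. The molecular weights are approximate assumptions (IgG1 chAb of roughly 150 kDa; tagged, stabilized S trimer of roughly 520 kDa), not values stated in the paper.

```python
MW_S_TRIMER_KDA = 520.0  # assumed MW of the SARS-CoV-2 S fm2P trimer with tags
MW_CHAB_KDA = 150.0      # assumed MW of an IgG1 chAb

s_mass_mg = 1.0                             # e.g., 1 ml of S protein at 1 mg/ml
s_nmol = s_mass_mg / MW_S_TRIMER_KDA * 1e3  # nmol = mg / kDa * 1000
chab_nmol = 1.4 * s_nmol                    # 1.4-fold molar excess of chAb
chab_mass_mg = chab_nmol * MW_CHAB_KDA / 1e3

print(f"{chab_mass_mg:.2f} mg chAb per {s_mass_mg:.1f} mg S trimer "
      f"({chab_nmol:.2f} nmol vs {s_nmol:.2f} nmol)")
```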
Cryo-EM data acquisition was performed on a 300 keV Titan Krios transmission electron microscope (ThermoFisher Scientific, U.S.A.) equipped with a Gatan K3 direct detector (Gatan, U.S.A.) in super-resolution mode using the EPU software (ThermoFisher Scientific). Movies were collected with a defocus range of -1.2 to -1.7 μm at a magnification of 81,000×, corresponding to a super-resolution pixel size of 0.55 Å. A total dose of 48-50 e-/Å2 was distributed over 50 frames with an exposure time of 1.8 sec. The datasets were energy-filtered with a slit width of 15-30 eV, and the dose rates were adjusted to 8-10 e-/pix/sec.
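Two quick consequences of these acquisition parameters, worked through below: the 2×-binned physical pixel size sets the Nyquist limit (comfortably beyond the 3.5-3.6 Å map resolutions reported above), and the total dose spread over 50 frames gives the per-frame dose used for dose weighting.

```python
superres_pixel_A = 0.55
binned_pixel_A = 2 * superres_pixel_A   # 1.10 A/pix after 2x Fourier binning
nyquist_A = 2 * binned_pixel_A          # 2.20 A: best resolution representable

total_dose = 50.0                       # e-/A^2, upper end of the stated 48-50
n_frames = 50
dose_per_frame = total_dose / n_frames  # 1.0 e-/A^2 per frame

print(f"pixel {binned_pixel_A:.2f} A, Nyquist {nyquist_A:.2f} A, "
      f"{dose_per_frame:.2f} e-/A^2 per frame")
```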
Cryo-EM data processing
All 2×-binned super-resolution raw movies of each S-chAb complex were subjected to Relion-3.0 with dose weighting and 5×5 patch-based alignment using the GPU-based software MotionCor2 [62]. After motion correction, the corrected micrographs were transferred to cryoSPARC v2.14 [63]. Contrast transfer function (CTF) estimation was performed with patch-based CTF estimation. Exposures with "CTF_fit_to_Res" values between 2.5 and 4 Å were selected and used for particle picking. A small subset of micrographs was used for the template-free blob picker, followed by iterative rounds of 2D classification to filter out junk particles. The best 2D classes were then used as templates for particle picking on the remaining micrographs. The picked particles were likewise cleaned and re-extracted with a box size of 384 pixels.
For each S-mAb complex, the particle images were initially classified by ab initio reconstruction with C1 symmetry (class = 3). The particles and three ab initio models were used in heterogeneous refinement to generate three distinct classes (class = 3). For both RBD-chAb-25 and -45, the majority of classes corresponded to an all-open state of the three RBDs. Particles within the best class were further processed by non-uniform 3D refinement with C1 symmetry. The overall resolution of each EM map was estimated by the gold-standard Fourier shell correlation (FSC) = 0.143 criterion (Table B in S1 Text). To improve the resolution at the mAb binding interfaces of the S-chAb-25 and S-chAb-45 complexes, a focused refinement procedure was employed. For S-chAb-25, a further local refinement with a focus mask covering the NTD, RBD and chAb-25 was performed in cryoSPARC. For S-chAb-45, the particles from non-uniform refinement were symmetry-expanded with C3 symmetry and then converted to Relion-3.0 using the pyem script (developed by Daniel Asarnow, https://github.com/asarnow/pyem). A further focused classification with a focus mask covering the RBD and chAb-45 was carried out in Relion. The particles of the best 3D class were selected and transferred back to cryoSPARC for another round of local refinement with the same focus mask. Focus masks were generated with a combination of UCSF Chimera [64], cryoSPARC and Relion. The refined cryo-EM maps of the RBD in complex with S-chAb-25 and S-chAb-45 were deposited in the EMDB under accession codes EMD-31470 and EMD-31471, respectively. The atomic coordinates of the RBD in complex with S-chAb-25 and S-chAb-45 were deposited in the Protein Data Bank (PDB) under accession codes 7F62 and 7F63, respectively. Local resolution was calculated using ResMap [65]. For the ternary complex with chAb-25 and -45, the curated particle images were analyzed by 3D variability analysis within cryoSPARC, as described elsewhere [66], to identify the subclass of structures with the most abundant chAb-25 EM density on the RBD in addition to the well-defined chAb-45 density on each of the three RBDs (S8 Fig).
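For clarity, the gold-standard resolution is simply the reciprocal of the spatial frequency at which the half-map FSC curve crosses 0.143. The sketch below reads that crossing off a synthetic FSC curve (not data from these maps):

```python
import numpy as np

freq = np.linspace(0.01, 0.45, 200)               # spatial frequency, 1/Angstrom
fsc = 1.0 / (1.0 + np.exp((freq - 0.28) * 40.0))  # synthetic, monotonically falling curve

# np.interp needs ascending x, so reverse the (falling) FSC curve
crossing_freq = np.interp(0.143, fsc[::-1], freq[::-1])
print(f"Resolution at FSC = 0.143: {1.0 / crossing_freq:.2f} A")
```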
Model building and refinement
The atomic models of the SARS-CoV-2 S protein in complex with RBD-chAb-25 and -45 were built using Phenix [67] and Coot [68]. An initial coordinate set was generated using PDB entry 6XLU as a template in SWISS-MODEL [69]. The atomic models of the Fabs of RBD-chAb-25 and -45 were generated with SWISS-MODEL using default settings. The atomic coordinates of the S protein and the Fabs of RBD-chAb-25 and -45 were manually fit into the cryo-EM map using UCSF Chimera, UCSF ChimeraX [70] and Coot. After iterative manual refinement steps, the coordinates were refined by the real-space refinement module within Phenix. N-linked glycans were built using the "Glyco" extension module within Coot from the asparagine side chains at which additional EM densities were observed; these asparagine residues comply with the N-glycosylation sequon (N-X-S/T). The final model was assessed with MolProbity [71] in Coot. Statistics of the model refinement are reported in Table B in S1 Text. For the ternary complex with chAb-25 and -45, the refined atomic models of S-chAb-25 and S-chAb-45 were individually fit to the cryo-EM map of the ternary complex. Manual fitting of the substructure of each Fab in complex with the RBD was carried out within UCSF ChimeraX, followed by application of the automated volume fitting function within UCSF ChimeraX. Additional manual adjustments of the Fab of RBD-chAb-45 were carried out by visual inspection to optimize the rigid-body docking. Structural visualization and representations were accomplished with a combination of UCSF Chimera, UCSF ChimeraX, and PyMOL (Schrödinger, Inc., U.S.A.).
Size-exclusion chromatography analysis of S+RBD-chAb complex formation
RBD-chAb binding to the SARS-CoV-2 S protein was analyzed using a gel filtration column (Superose 6 Increase 10/300 GL, GE Healthcare, U.S.A.) in 50 mM Tris-HCl (pH 8.0), 150 mM NaCl, 0.02% NaN3 at room temperature. RBD-chAb-25 or -45 was mixed with the SARS-CoV-2 S protein (1 mg/ml) at a 1.4:1 molar ratio and incubated at room temperature for 1 h prior to injection into an FPLC system (AKTA UPC10, GE Healthcare, U.S.A.) for size-exclusion chromatography (SEC). Fractions corresponding to the binary complex of the S protein and RBD-chAb-25 or -45 were collected, pooled and concentrated using a 50-ml centrifugal concentrator with a 50-kDa molecular weight cutoff (Millipore, U.S.A.) before addition of the complementary RBD-chAb-45 or -25, followed by 1 h of incubation at room temperature to allow formation of a ternary complex. The mixture was analyzed by the same SEC procedure to confirm stable complex formation. The ternary complex formed by incubation of the S protein with RBD-chAb-25 followed by the addition of RBD-chAb-45 was collected as elution fractions (10-12 ml total elution volume) and concentrated by the same procedure for cryo-EM grid preparation.

S2 Fig. A. The inhibitory activities of antibodies derived from the supernatants of hybridoma cultures were assessed using ACE2-overexpressing 293T cells by flow cytometry. Antibodies were incubated with RBD-His-FITC (2 μg/ml) for 1 h. After incubation, the mixtures were added to ACE2-overexpressing 293T cells for 30 min. The binding profile was analyzed on an Attune NxT flow cytometer (Thermo Fisher Scientific). Red asterisks indicate RBD-specific hybridoma clones exhibiting more than 80% inhibition of binding between the SARS-CoV-2 RBD and human ACE2 protein. B. PRNT for the neutralization by all SARS-CoV-2 RBD-reactive chAbs. The inhibitory activities of all 12 chimeric antibodies were examined with authentic SARS-CoV-2 in Vero E6 cells. ChAbs were serially diluted in PBS and used to block infection of Vero E6 cells with SARS-CoV-2. Virus without chAb served as the control. Plaques formed at each dilution were counted 4 days after virus infection. Red asterisks indicate the six most efficacious neutralizing RBD-chAbs. (TIFF)

S3 Fig. Cross-reactivity of chAbs. A. Characterization of chAbs against S1 proteins from different coronaviruses. Binding of RBD-chAb-1, -15, -25, -28, -45, and -51 to different coronavirus S1 recombinant proteins was detected by ELISA. OD450, optical density at 450 nm.
NHIgG, normal human IgG, as negative control. His Ab, as positive control. Each assay in A was performed in triplicate and the data are presented as mean ± SD (n = 3). B. Immunocytochemistry with anti-SARS-CoV-2 RBD mAbs in RBD-expressing human 293T cells, which served as a positive control. Cells were fixed with 4% paraformaldehyde, then blocked with 3% BSA for 1 h. RBD-mAb-1, -15, -25, -28, -45, or -51 was incubated at 1 μg/ml for 1 h at room temperature. C. Immunohistochemical staining of six major target organs and tissues that are easily damaged by SARS-CoV-2. Human tissue sections were stained with RBD-mAb-1, -15, -25, -28, -45, and -51 at concentrations of 5 μg/ml. Scale bar = 100 μm. (TIFF)

S10 Fig. B. The binding ability of RBD-chAbs to mutant S proteins was examined by cellular ELISA. Human 293T cells were separately transfected with the SARS-CoV-2 wild-type (WT) or mutant S constructs as indicated. OD450, optical density at 450 nm. Each assay was performed in triplicate; data are presented as mean ± SD. C. Binding activity of anti-RBD chAbs was determined by ELISA.
Mutants of SARS-CoV-2 RBD-His proteins were immobilized on 96-well plates prior to blocking with 1% BSA in PBS and incubated with anti-RBD chAbs at 100 ng/ml. Signal was detected (OD450) after labeling with donkey anti-human IgG-HRP secondary antibody. NHIgG, normal human IgG, as negative control. His Ab, as positive control. Each assay was performed in triplicate and the data are presented as mean ± SD (n = 3). D-E. Neutralization assays of the B.1.1.7 (D) and B.1.351 (E) variants of SARS-CoV-2 pseudoviruses with chimeric anti-RBD antibodies. Each assay was performed in triplicate; data points represent the mean. F. Neutralization test for RBD-chAb-25, -45, or both, using the D614G, B.1.1.7 and B.1.351 variants of SARS-CoV-2 pseudoviruses. Each assay was performed in triplicate; data points represent the mean. (TIFF)

S11 Fig. Structural mapping of the RBD-chAb-25 and -45 binding interfaces as well as two mutations of concern, K417N and E484K. The binding interfaces of RBD-chAb-25 and RBD-chAb-45 are colored orange and magenta, respectively. Two key mutations present in the B.1.351 and P.1 lineages are colored cyan (K417N) and blue (E484K). (TIFF)
Disrupting the Status-Quo of Organisational Board Composition to Improve Sustainability Outcomes: Reviewing the Evidence
Sustainability, conceptualised as the integration of economic, social and environmental values, is the 21st-century imperative that demands that governments, business and civil society actors improve their existing performance, yet improvement has been highly fragmented and unacceptably slow. One explanation for this is the lack of diversity on the boards of organisations, which perpetuates a narrow business, economic and legal mindset rather than the broader integrated values approach that sustainability requires. This paper presents a systematic review of the literature investigating how board diversity affects the sustainability performance of organisations. Our review uncovers evidence of relationships between various attributes of board member diversity and sustainability performance, though the reviewed studies' over-reliance on quantitative methodologies means explanations for the observed associations are largely absent. Limited measures of sustainability performance and narrow definitions of diversity, focused predominantly on gender, were also found. Important implications of the study include the need for policy responses that ensure boards are diversely composed. We identify a need for more qualitative investigation into the influence of a broader range of types of board diversity on sustainability performance, along with studies that focus on public sector boards and research that takes an intersectional understanding of diversity.
Introduction
In the context of the bushfire emergency in Australia which dramatically illustrates the climate crisis scientists have been warning the world about for 30 years, it is pertinent to recall the comments of a former Australian Prime Minister, Bob Hawke, who in 1989 issued Australia's first major statement on sustainable development, entitled 'Our Country, Our Future'. He stated that 'the crux of the issue in implementing sustainable development is establishing mechanisms that ensure an integration of economic and environmental considerations both now and in the future' [1]. As is now evident, since then, Australia has moved backwards on many crucial sustainability metrics, with greenhouse gas emissions rising [2], biodiversity counts falling [3], plastic pollution spreading [4], wealth inequality widening [5] and now, of course, vast tracts of land burning. The gap between sustainability aspirations and reality is now so stark that it prompts the following question: why have the numerous agreements committing Australia to sustainability these past 30 years not generated far more significant on-the-ground impacts?
A frequent answer to this question is definitional, with a large and diverse normative and empirical literature concluding that sustainability's failure to generate impacts is due to its elasticity and 'slipperiness' [6]. However, from a 'contested concepts' perspective, such conceptual features ...

This review addresses the following research questions:

1. How does board diversity affect the sustainability performance of organisations?
2. What do we learn from these studies about:
   a. The characteristics of diversity that have been investigated?
   b. The way sustainability performance is measured?
3. What methodologies have been employed to undertake these studies and what are their strengths and weaknesses taken as a group?
Materials and Methods
For this research, a systematic review with a descriptive and narrative approach is applied to primary studies [17,18]. This method was selected because a descriptive and narrative approach allows a more comprehensive synthesis of different designs, without privileging quantitative or qualitative investigations; this is important in our case, as sustainability performance is measured and reported differently across sectors [19]. In addition, this approach allows us to capture the current state of knowledge (first research question), assess the effects of board diversity on sustainability performance (second research question), and critically review the methodological strengths and shortcomings of studies (third research question). This provides a robust base on which critical insights and the implications of the review can be drawn.
Search Procedure
To search for relevant studies, an electronic and manual search was conducted. The most widely used electronic databases in corporate governance were screened: Scopus, JSTOR, Informit and Web of Science. The descriptors used were: sustainability AND board diversity OR board composition. The combination of those keywords was used to search both titles and abstracts. The reference lists of the previous review article (i.e., [11]) were also searched manually for the same keywords. A summary of the systematic review process, with studies initially selected, main reasons for exclusion, and the final pool of studies included in the analysis, is depicted in the flow diagram (Figure 1).
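As a concrete illustration of this screening step, the following minimal Python sketch applies the descriptor logic to titles and abstracts. It is an assumption-laden illustration, not the authors' actual tooling: the record structure is invented, and the boolean query is interpreted as sustainability AND (board diversity OR board composition).

def matches_query(text):
    """Descriptor logic: sustainability AND (board diversity OR board composition)."""
    t = text.lower()
    return "sustainability" in t and ("board diversity" in t or "board composition" in t)

records = [
    {"title": "Board diversity and sustainability reporting quality", "abstract": "..."},
    {"title": "CEO duality and firm value", "abstract": "..."},
]
hits = [r for r in records if matches_query(r["title"] + " " + r["abstract"])]
print(len(hits), "of", len(records), "records pass the keyword screen")

In practice, each database applies its own query syntax; a local screen like this would only be useful for re-checking exported records.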
Inclusion and Exclusion Criteria
Abstracts and titles were screened to retrieve journal articles relevant to this review. Book chapters and dissertations were excluded because of variable and hard-to-assess differences in the rigour of their peer review processes.
Therefore, to be included in the review, articles had to report on how sustainability was influenced by board diversity, be published in the past decade (2009-2019) (to ensure a more contemporary review of conceptions of diversity and sustainability performance), have the board of organizations as their focal point, and employ a broad interpretation of sustainability performance as integrating economic, social and environmental imperatives. Studies that treated 'sustainability' merely as a synonym for 'continuing' or 'enduring' were excluded, for example, most often in papers relating to financial sustainability. We also only included studies published in international, peer-reviewed, English-language journals that reported some type of data such as panel data, interviews, surveys and observations, regardless of geographical area. Review studies were excluded.
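The criteria above amount to an explicit filter over candidate articles. The sketch below is a hypothetical Python encoding of them; the field names are invented for illustration and do not correspond to any screening tool the authors describe.

def include(article):
    """Apply the stated inclusion/exclusion criteria to one candidate article."""
    return (
        article["type"] == "journal article"   # book chapters and dissertations excluded
        and article["peer_reviewed"]
        and article["language"] == "English"
        and 2009 <= article["year"] <= 2019    # published in the past decade
        and article["board_is_focal_point"]
        and article["broad_sustainability"]    # excludes 'sustainability' as mere endurance
        and article["reports_data"]            # panel data, interviews, surveys, observations
        and not article["is_review"]           # review studies excluded
    )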
Coding Procedure and Data Analysis
The first author conducted the initial database search of Scopus, JSTOR, Informit and Web of Science and reviewed the titles and abstracts of the studies for potential inclusion. From the list generated, both authors independently reviewed the abstracts of the papers to determine whether they should be included in or excluded from the analysis. Strong inter-coder consensus on the articles determined relevant to include was found. A decision on disputed studies was taken based on a complete review of the full manuscript.
The first author coded the studies selected using an iterative coding procedure. For the first research question, the following codes were developed:
The second question was evaluated by using the main results of each study (e.g., results reported, participant quotes) to determine the effect of board diversity on sustainability performance. Finally, the methodological strengths and weaknesses of each study were assessed, looking at:

• Relationships between authors and participants/companies;
• Presence of a control or comparison group;
• Data gathering and data analysis procedures (e.g., reliability, effect size).
The methodological strength and weakness indicators were informed by previous work on sustainability performance measures [9,19] and on interpretations of diversity [11,21].
Search Results
The electronic search yielded 285 results. From this electronic search, we retrieved a total of 37 peer-reviewed journal articles after applying inclusion and exclusion criteria. Reference sections of the 37 full-text articles were manually searched for additional articles for inclusion, where a further eight articles were found. A total of 45 articles were subjected to data extraction and thematic analysis, shown in Table 1.
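The arithmetic of this selection flow can be checked directly; the counts below are exactly those reported above.

# Selection flow as reported above.
electronic_hits = 285
included_after_screening = 37
added_from_reference_lists = 8
final_pool = included_after_screening + added_from_reference_lists
assert final_pool == 45
print("Excluded at screening:", electronic_hits - included_after_screening)  # 248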
The Effects of Board Diversity on the Sustainability Performance of Organisations
Most studies analysed in this review find positive correlations between sustainability performance and board diversity. Predominantly, studies consider how the representation of females on a board influences sustainability and, in these studies, debate remains regarding the extent to which the inclusion of females on a board can influence outcomes. Often, researchers advocate for a critical mass of representation of women on a board. However, Birindelli et al. [22] counter this argument and call for a greater emphasis on gender-balanced boards, citing Schwartz-Ziv's finding that balanced boards demonstrate greater communication and effective problem-solving capacities.
The continued low numbers of females on boards were often highlighted (e.g., Issa and Fang [23]), which Kılıç and Kuzey [24] note may be the reason why they find no relationship between women on boards and carbon emission disclosures by companies in Turkey. Their findings support Seto-Pamies [25], who found a lack of representation of women directors (7%) on boards in their sample population of Global 100 companies, as well as Fernandez-Feijoo et al. [26], who suggest that cultural context may explain why this phenomenon exists.
The dis/enabling aspects of the geographical, cultural and policy context on representations of diversity on boards are highlighted in several studies. Mahmood and Orazalin [27] found that cultural context influenced women on boards in their study in Pakistan, where traditional ideas of gender roles were prevalent (similar to [23]). Fakoyo and Nakeng [28] highlight policy as an important enabling factor for the integration of diversity onto boards in their South African study, while Fernandez-Feijoo et al. [26] note the importance of policy when they show a significant negative relationship between the proportion of women on company boards and a country's relative gender equality. Yet the positive impact of women on boards continues to be highlighted no matter the cultural context (Shoham et al. [29]). Other scholars have investigated the difference that diversity on boards can have according to industry. Li et al. [30] found that the environmental policies of industries with greater Pollution Creation Likelihood are more positively influenced by women on boards. Similarly, Post et al. [31] focused on the U.S. oil and gas industry to "reveal that as the relative representation of women on the board increases and as the number of independent directors grows, firms are more likely to form renewable energy alliances".
While most studies focused on the presence or absence of females on boards, some considered how gender diversity on boards may influence a company's sustainability performance. Al-Shaer and Zaman [32] find evidence that sustainability reporting quality is higher when boards contain independent female directors. Others considered how the characteristics of diversity are not homogeneous. In their US-based study, Cuadrado-Ballesteros et al. [33] examine how board characteristics may work together to influence the sustainability performance of companies using complexity theory. They advocate that "a female director has more characteristics than her gender" (p. 539). Similarly, Furlotti et al. [34] consider possible self-schemas of individuals as an influencing behaviour, while Galbreath's [35] study on attention-directing structures finds that "women may have greater impact on proximal team effects (e.g. group dynamics such as debate and interaction norms) than on distal effects (e.g. firm performance)" in environmental scanning (p. 753). Darus et al. [36] consider why women influence decision-making, although the quantitative methodologies employed limited their capacity to offer rich interpretations of these relationships.
While gender and independence were the most commonly investigated categories of diversity, a few studies investigate ethnicity, educational background and age (e.g., [32,37-42]). Ferrero-Ferrero et al. [43] examine generational diversity (defined by the year that someone is born) on boards of directors and CSR, and while no evidence of a direct effect of generational diversity on CSR performance (measured using Asset4 data) was found, they do find evidence for their hypothesis that generational diversity positively affects CSR performance by means of CSR management quality. While not focused on generational difference, Chams and Garcia-Blandon [44] also consider the impact of directors' age on sustainability performance, finding a curvilinear relationship between age and sustainability performance, whereby sustainable practices first increase with age but then decrease as the age of directors rises.
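A curvilinear (inverted-U) relationship of this kind is conventionally tested by adding a squared term to the regression. The sketch below illustrates the idea on synthetic data; it is not Chams and Garcia-Blandon's actual specification, and the variable names and effect sizes are invented.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
age = rng.uniform(35, 75, 300)                          # synthetic mean board age
perf = -0.02 * (age - 55) ** 2 + rng.normal(0, 2, 300)  # rises, then falls with age

X = sm.add_constant(np.column_stack([age, age ** 2]))
fit = sm.OLS(perf, X).fit()
# An inverted U shows up as a positive coefficient on age and a negative,
# significant coefficient on the squared term.
print(fit.params, fit.pvalues)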
Investigations into other elements of diversity are underdeveloped in this literature, which may be in part due to the general homogeneity of boards. Some researchers who consider other factors include Chams and Garcia-Blandon [44], who find no relationship between the educational qualification, independence or duality of boards of directors and sustainability performance. In comparison, Oossthuizen and Lahner [45], in a study of South African companies, find positive (albeit not statistically significant) relationships between board members' ethnicity, non-traditional background, gender and independence and sustainability performance, similar to Ortiz de Mandojana and Alberto Aragon-Correa's [46] findings. In Bergman et al. [47], the influence of cognitive diversity on sustainable decision-making is investigated. They find that the cognitive frames of board members tend to privilege economic over environmental and social issues and conclude that currently, sustainability management issues play a minor role in top decision-makers' cognitive frames and strategic landscapes. However, the researchers do not disclose the composition of boards studied nor the identity markers of their directors, and therefore the way cognitive frames may be shaped by individual identities goes unexplored.
While not strictly a characteristic of a board member, several studies investigate the size of the board and a correlation to sustainability is generally found. For instance, Chams and Garcia-Blandon [44] find a statistically significant relationship, Kaymak and Bektas [48] find a positive relationship and Arayssi et al. [49] find that the larger the board, the greater the sustainable performance of companies. Overall, the results of the studies when considered collectively demonstrate that a board of directors with a diverse make-up of people can have a significant impact on the sustainability performance of a company.
Characterising Diversity
Overwhelmingly, diversity in our reviewed studies is conceived through a gendered lens, where the proportions or percentages of women (gender) on boards and their influence on corporate governance is considered. However, independence, board size, duality and interlocks are also commonly investigated. A minority of studies explore generational difference, age, educational background and cognitive diversity as markers of diversity on boards of directors and how these influence corporate governance.
Gender
In all studies, gender diversity is defined in the binary of male/female. Standard means of calculating gender diversity include the percentage or proportion of females on a board, or Blau's diversity index [23,24,32,37,40,42,43,49-51]. Shoham et al. [29] offer one of few studies to consider the implications of using binary language, in particular masculine language, in the recruitment of women to company boards. They explore how grammatical gender markings work to reinforce binaries and how this might influence the appointment of women onto a board and the relationship to environmental sustainability.
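Blau's index, for reference, is B = 1 - sum_i p_i^2, where p_i is the proportion of board members in category i. For the binary male/female coding used in these studies it ranges from 0 (a single-gender board) to 0.5 (a perfectly balanced one). A minimal Python sketch:

def blau_index(counts):
    """Blau's diversity index: 1 minus the sum of squared category proportions."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

print(blau_index([8, 2]))  # 2 women on a 10-member board -> 0.32
print(blau_index([5, 5]))  # balanced board -> the binary maximum of 0.5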
In some studies [22,23,37,50,52], critical mass is considered in interpretations of diversity and what constitutes 'representation' on a board [22,32,33]. Overall, there are no reports of any boards with more women directors than men.
While numbers of women remain comparatively low across countries, researchers have investigated how women influence governance on boards of directors in organisations all over the world [23,25,28,35,50,51,53]. In all of these studies, the cultural differences that overlay any performance of gender and how women may be able to participate in boards of directors remain largely absent from analyses, with the exception of Alazanni et al. [50].
Most research investigating the influence of women on boards does so in ways that suggest that all women possess the same qualities. Few of the studies included in the review capture the heterogeneity encompassed within categories of diversity, in this instance what it means to be a woman. However, Mahmood et al. [27] discuss the limitations of research that uncritically accepts social-role theory and the characteristics commonly associated with women. Using interview data, they demonstrate that not all women directors perform in the same way. Furlotti et al. [37] go some way in acknowledging and investigating this, considering in their study the self-schemas of women who are on boards and why they might influence boards in different ways to men. However, they do this through content analysis of CSR-related reports of Italian companies, which arguably is not an adequate methodological approach to investigate a concept related to an individual's experience. Similarly, Glass, Cook and Ingersoll [54] take a partially intersectional approach by considering the proportion of women on the board, the number of interlinks women board members hold, and the interactive and cumulative effects of women CEOs and gender-diverse boards.
Overall, the depth of analysis of the studies in how characteristics of diversity are represented, measured and correlated remains largely quantitative. The generalized findings, while demonstrating macro trends, fail to highlight the qualities or attributes of women, or the discrete ways in which gender dynamics play out in boardrooms to influence sustainability performance. While connections are made to the relevant literature to suggest why a change may have occurred, the data included in the studies (of which [27] is an exception) give little indication of why women on boards have an effect or not.
Independence, Board Size, Duality: Markers of Diversity
While studies overwhelmingly used diversity synonymously with gender, a number consider the independence of directors, director interlocks, the duality of CEOs and the size of the board [24,31,36,55]. Chams and Garcia-Blandon [44] investigate a range of attributes in their study of diversity on boards and the influence on sustainability performance, including the highest qualification and discipline of board members along with other more commonly reported variables including gender, board size and independence. Fuente et al. [39] refer to race and ethnicity as markers of diversity in their study, although their actual research focuses on the influence of the representation of women and independent directors on boards.
Oossthuizen and Lahner [45] pose the question: 'who should the board members be in terms of composition and characteristics?' They seek answers by considering gender, ethnicity, independence and director background. This information comes from annual reports and Who's Who SA and Business Week. The authors find that boards of companies listed on the Socially Responsible Index (SRI) are more diverse than a control group of companies. Ethnic minority directors make up 37% of boards for the SRI sample, while they comprise 29% of non-SRI listed companies. Michelon and Parbonetti [56] consider the independence of directors, CEO duality and the representation of influential community members on the board in their study of sustainability disclosures, obtained through annual reports. The inclusion of influential community members is a unique area of investigation, not replicated in any other study in this review.
Other unique approaches to characterizing diversity are the Ferrero-Ferrero et al. [43] and Bergman et al. [47] studies. Ferrero-Ferrero et al. [43] use BoardEx data to determine the 'generational diversity' of boards, while Bergman et al. [47] consider the cognitive diversity that exists in boards of directors in Finland, explaining this as "the capacity of human cognition relative to the requirements of information environments in which the individuals perform" (p. 163). The scholars found varied effects on sustainability performance.
Overall, the characterization of what constitutes diversity remains largely situated within the category of gender, supporting Ghauri and Mansi's [21] claim that "diversity seems to have been overshadowed by the narrow definition of gender representation and specifically female representation in organisational management and above levels." These researchers call for conceptions of diversity to shift from narrow interpretations towards understandings of deep diversity, which they claim includes ethnicity, age, disability and sexual orientation.
In addition, the ways in which categories of diversity were understood in the studies reviewed in this paper were often problematic. Categories of diversity were conceived largely as homogeneous groups, which were assumed to guarantee certain qualities in those inside them. Many of the papers [51,57] drew on understandings from social role theory that allocates discrete differences in social behaviour and personality traits to, for example, males and females. While there may be broad generalizable differences, a greater appreciation of directors as individuals with a complex identity, of which gender is but one part, is required to deepen our understanding of how certain qualities of individuals interact together in decision-making forums.
Measures of Sustainability Performance
As discussed previously, an inclusion criterion for our studies was that sustainability be understood in terms of economic, social and environmental performance. Within the papers reviewed, researchers generally approached sustainability from one of two perspectives: either as a business imperative to be responded to, or as encompassing larger issues facing humanity that business has a key role in resolving (e.g., [45]). Darus et al. [36] emphasise that organisations have a responsibility to 'take care of the communities where they operate, their employees, their customers, the natural environment in executing their economic activities, and ensure the safety of the products that they offer.' Similarly, Galbreath [52] references the Brundtland definition of sustainable development and considers the performance of each pillar of sustainability (economic, environmental and social) independently. Galbreath [52] acknowledges the difficulty of finding reliable proxies for the environmental and social aspects of sustainability and goes on to explain how and why content analysis of annual reports is appropriate: a common method in the field.
Considerable variability in what constitutes sustainability performance is an important finding of this review. A majority of studies use company disclosure as a proxy for determining sustainability performance. Researchers like Ong and Djajadikerta [58] distinguish between 'hard' and 'soft' disclosure using an instrument developed by Ong et al. [59], recognising that disclosure exists along a continuum which they measure using a scale. Others take a binary approach and score evidence of sustainability performance using a disclosure/non-disclosure criterion. Al-Shaer and Zaman [32] use quality of sustainability reporting as a proxy measure. Arguably, the presence or absence of reporting is not indicative of reporting quality, as much reporting is 'impression management' [60]; nor can auditing establish 'quality', given that most audits simply verify a very selective range of information. Fuente et al. [39] make a similar claim in focusing on the disclosure of information related to sustainability, suggesting that this is, in itself, indicative of sustainability performance.
While the perspectives that informed researchers' conceptions of sustainability vary, overall the ways of measuring sustainability performance are largely similar. In the reviewed works, three approaches are dominant in measuring sustainability performance: (i) single index proxies based on raw data (e.g., energy use); (ii) author-generated composite proxies; and (iii) third-party composite proxies. The majority of studies with author-generated or third-party composite proxies relied on annual reports or sustainability reports of companies for information.
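To make the second approach concrete, a binary-scored, author-generated composite proxy reduces to a checklist score over disclosed items. The sketch below is a deliberately simplified, hypothetical instrument; the item list is invented, and the instruments used in the reviewed studies (e.g., GRI-based content analysis) are far richer.

# Hypothetical disclosure checklist; real instruments are far richer.
ITEMS = ["ghg emissions", "energy use", "water use", "waste", "diversity policy", "community"]

def disclosure_score(report_text):
    """Share of checklist items mentioned in a report, scored 0/1 per item."""
    text = report_text.lower()
    return sum(item in text for item in ITEMS) / len(ITEMS)

print(disclosure_score("We disclose GHG emissions, energy use and our diversity policy."))  # 0.5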
Single Index Proxies
Five studies in our review use raw performance data to assess sustainability performance. Fakoyo and Nakeng [28] consider how energy usage is influenced by the composition of company boards; Kılıç and Kuzey [24] consider carbon emission disclosures; Ortiz de Mandojana and Alberto Aragon-Correa [46] relate sustainability performance to global warming potential, which they quantify in a sample of electricity generating/transmitting companies; and Haque [61], similar to Liao, Luo and Tang [62], uses greenhouse gas (GHG) emissions to measure outcome-oriented carbon performance.
Author-Generated Composite Proxies
A number of studies use annual/sustainability reports or third-party datasets of sustainability-related data and then apply various indexes, equations or frames to determine comparable sustainability performance. Ong and Djajadikerta [58] measure sustainability according to company disclosures and consider total disclosures as well as economic, environmental and social disclosures, basing their assessment of performance on Ong et al.'s [59] framework of sustainability disclosure. Li et al. [30] use information on firms' environmental policies available through the KLD database, combined with six items measuring environmental policies related to recycling, pollution prevention and using clean energy. Issa and Fang [23] employ the GRI Guidelines to undertake a content analysis of companies' annual reports, information on websites and sustainability reports. The researchers in this study use a pre-determined format to analyse data for sustainability performance but the analysis of disclosure claims is undertaken independently. Many of these studies discussed the limitations of using publicly available information or information from annual/sustainability reports and noted the risks inherent in self-disclosure [27,53,54,63].
Kaymak and Bektas [48] consider the relationship between corporate governance and CSR and use Transparency International data as a means of measuring CSR. While this does not measure social or environmental indicators directly, the researchers argue that "transparency and disclosure can be considered as a measure of CSR, as the latter is a fluid concept embracing activities that satisfy different interest groups" (p. 557). Arayssi et al. [49] and Fernandez-Feijoo et al. [26] use disclosures as a means of assessing sustainability performance and understand this within a CSR frame. Fernandez-Feijoo et al. [26] use the KPMG International Survey of Corporate Social Responsibility Reporting data compiled on the Global Fortune 250 and the 100 largest companies by revenue across 22 countries, which is used as the basis of their analysis of the effect of women on boards on sustainability reporting.
Third-Party Composite Proxies
A number of large-scale, well-developed, third-party databases exist that purport to measure sustainability performance. Those commonly employed in the studies reviewed here include KLD, Thomson Reuters, Dow Jones, Bloomberg and the GRI in various formats [31,44,50,54,56]. Other researchers use Bloomberg's ESG scores in their analyses, including Tamimi and Sebastianelli [64], who determine the sustainability performance of the S&P 500 companies they investigate. The results show that Governance disclosure scores are significantly higher than Social disclosure scores, and Social disclosure scores are significantly higher than Environmental disclosure scores. Nadeem et al. [40] use Bloomberg's ESG score to measure sustainability disclosure and include the environmental categories: water consumption, energy use, waste management and greenhouse gas emissions. However, the availability of relevant corporate environmental and social information was frequently noted as a major challenge for researchers [22,29,37,57].
Overall, the variety of approaches to determining sustainability performance means that any meta-analysis is likely impossible. Rather, the variety of approaches employed highlights the theory-dependent nature of inquiry. While approaches using single indexes as a proxy ensure that company comparisons are based on like-for-like data, the narrowly defined fields employed, such as energy usage or carbon emissions, do not capture the entire range of a company's sustainability performance, and thus any pernicious intra-environmental or environment-social trade-offs. The variety of author-defined proxies highlights the point made in 2002 by Burritt and, notwithstanding the efforts to standardise corporate sustainability reporting in the GRI, reiterated by Ong and Djajadikerta [58], that "despite the various methods used in prior research studies, the lack of a standardised reporting framework has hindered comparison of sustainability information". Analyses involving third-party composite data experience the same challenges as presented in the author-defined proxies. The standardisation of data sources, the sustainability performance measures employed, and the statistical packages used to ultimately determine sustainability performance are rarely comparable across studies. Of most concern is the reliance on company disclosure to measure sustainability performance. While there were three common approaches identified in the studies reviewed, underpinning each of these approaches was trust in companies to disclose relevant information, either to a third party or through annual/sustainability reporting. The now large literature on corporate reporting as 'impression management' (e.g., [60,65]) highlights how problematic such assumptions are.
Methodological Strengths and Weaknesses
Many of the studies reviewed utilise publicly available organisational information to run quantitative assessments of sustainability performance. While this is a practical and logistically feasible methodological approach, several reliability questions should be considered. How is each of the companies in the datasets collecting data, and are these methods comparable? For example, Fakoyo and Nakeng [28] use publicly available information on 28 companies to run multiple regression analyses to determine relationships between women on boards and energy use. While energy use is a seemingly straightforward indicator, how are the companies measuring energy use? How can the researchers be sure that each approach is comparable enough to permit between-company comparisons?
Many studies in this review use proxies (i.e., disclosure) to determine sustainability performance, some of which indicate that boards of directors are aware of potential issues related to the use of proxies (such as the presence/absence of sustainability reporting, transparency and company self-assessments) as measures of sustainability. For example, in Mahmood and Orazalin's [27] study, participants highlight that "commitment toward sustainability and greater disclosure of sustainability are two different things" (p. 207). This is a consideration across all studies, where the measures of sustainability rarely seem to reflect company-supplied data of sustainability indicators across environmental, social and economic domains. For example, many studies relied on observations of data from listings. The accuracy and reliability of the data that most studies are drawing on come with no guarantees. The Thomson Reuters ASSET4 website explicitly states, for example, that: "Information within the profile may have been supplied by a variety of different sources and while SRI-CONNECT makes an effort to ensure that any information that we ourselves submit to profiles is accurate and sourced from an appropriate place, neither we nor the organisation that the profile is about can give any warranty as to the accuracy or completeness of information submitted by others."
However, scholars such as Biswas et al. [57] argue that the data is of appropriate quality and is credible. Glass et al. [54] make the point that independent information supplied by KLD (for example) is more credible and reduces the social desirability bias that can be found when annual and sustainability reports are used as a source of data.
There was a tension in the studies reviewed in terms of the credibility and trustworthiness of companies' self-generated reports. For example, Alazanni et al. [50] note as a limitation of their study that data is based on the annual reports of the companies in their sample group and no third-party quality assurance to the data was available. Darus et al. [36], too, acknowledge this limitation: "The selection of this medium [annual and sustainability reports] of CSR reporting was predicated on the notion that the reports possess a degree of credibility and that the contents are not subject to the risk of other interpretations and distortions" (p. 273). Finally, Fuente et al. [39] highlight that "[they] believe that not considering GRI guidelines or focusing only on quantification of the number of GRI indicators included in the CSR report is an important limitation associated with the fact that companies only incorporate indicators that highlight their best CSR performance" (p. 743).
The methodological approaches of the reviewed studies are overwhelmingly quantitative and field-based. A strength of the quantitative approach is the ability to control for other variables such as company size, return on equity and, in some cases, bank leverage (such as [22]). However, within the quantitative studies, the use of control groups is limited and the use of simulation methods to allow for greater control and comparison of variables is absent. Michelon and Parbonetti [56] are an exception, including a control group in their GRI-informed content analysis of the influence of board diversity. Oossthuizen and Lahner [45] also utilise sampling to incorporate a control group, drawing on companies from the FTSE/JSE All Share Index that are listed on the Socially Responsible Investment (SRI) Index and a second population that are not listed on the SRI.
Researchers took various approaches to collecting data, drawing on different databases, geographical regions and industries; however, the analytic processes are largely similar. As noted by Cuadrado-Ballesteros et al. [33], "the most common methodology used by researchers is the multiple regression analysis, a symmetric test that reports the net effects of some variables on a dependent variable, considering a set of other independent variables" (p. 529). While the quantitative studies are able to provide macro-level trends, limitations include the inability to provide a rich interpretation of why these trends are apparent. For example, Arayssi et al. [49] find that the participation of women on boards of directors positively influences ESG disclosure and speculate that "women directors seem to promote social agenda in the boardrooms ..." (p. 392). However, there is no data from the research that can support this claim, other than to infer the relationship.
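The typical specification behind these regressions, sustainability performance on a diversity variable plus firm-level controls, can be sketched as follows. The data are synthetic and the variable names illustrative; this reproduces no reviewed study's model or coefficients.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
pct_women = rng.uniform(0.0, 0.5, n)    # proportion of women on the board
firm_size = rng.normal(10.0, 2.0, n)    # e.g., log total assets (control)
roe = rng.normal(0.08, 0.05, n)         # return on equity (control)
esg_score = 2.0 * pct_women + 0.3 * firm_size + rng.normal(0.0, 1.0, n)

X = sm.add_constant(np.column_stack([pct_women, firm_size, roe]))
model = sm.OLS(esg_score, X).fit()
print(model.summary())  # net effect of pct_women, holding the controls constant

As the review notes, a significant coefficient here shows an association, not why it arises; that gap is what qualitative and mixed methods work would fill.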
Only three studies in our review draw on qualitative methods or theory. Mahmood et al. [27] demonstrate the value of qualitative methods in the nuance they reveal in how diversity on boards can vary in different situations. Shoham et al. [29] apply a mixed methods approach to collecting data on the influence of board composition on sustainability performance, using interviews with board directors to inform their interpretations of the quantitative results. Cuadrado-Ballesteros et al. [33] employ a qualitative comparative analysis, which is predominantly a quantitative assessment of the relationship between the variables under investigation. However, their approach is innovative in analysing the results through the lens of complexity theory. This approach allows the researchers to problematize the correlations obtained and consider the relationality of board characteristics and how they may work together to produce different outcomes.
Overall, the methodological approaches taken in the studies reviewed allow for generalisable understandings of how some characteristics of diversity influence sustainability performance. While only a few studies interrogate these findings at the micro-level, there are approaches being used to enable a more nuanced understanding of how diverse compositions of board members influence sustainability performance. Taken together, and as noted by other researchers, there is a need for further research that provides more descriptive and comparative accounts of the full range of diversity on boards for sustainability (noted also in [45]) supplemented by longitudinal studies [52].
Discussion
There are several important practical and theoretical findings from this review. We find evidence to support disrupting traditional compositions of boards that tend to represent only white, male, middle class, urban identities as a means of achieving sustainable outcomes in organisations. Yet the variability in the studies reviewed around definitions and methodology makes it difficult to claim what kinds of diversity matter and the level to which outcomes can be improved. Therefore, while there is evidence that changes to board composition can contribute to sustainable outcomes, the way diversity and, to some extent, sustainability performance are theoretically constituted currently undermines policy and strategic efforts to enforce such a radical means of ensuring sustainable outcomes.
On this note, we found a huge range of indicators in use by researchers to measure sustainability performance. While this variability reflects the multiple interpretations of sustainability in use and the complexity of capturing it within competing frameworks, the variability in indicators raises questions regarding construct validity. That is, how accurately are these studies measuring what it is that they claim to be testing? The majority of studies in this review relied on corporate disclosure as a proxy for sustainability performance. While research suggests that disclosure is an appropriate measure of sustainability [66], disclosure relies heavily on the self-reporting of the organisations under investigation, while the methodologies used by individual organisations to gather data for reported indicators are often not comparable across organisations or industry sectors. These observations raise important methodological questions regarding how to operationally define sustainability in the first instance. For example, can 'sustainability' be unambiguously defined to obtain a proxy measurement that allows for comparability? Should it be left up to individual researchers to undertake this task or is there a need for a global commission to do it? If so, who should participate in such a commission and with what influence?
Our findings also reveal the troublesome ways that 'diversity' has been and continues to be defined, especially in the management literature where corporate board diversity studies are predominantly published. This research frequently conflates the broad idea of diversity with the narrower inclusion of women on boards. We support those who argue for interpretations of gender that are more inclusive and include contemporary understandings of gender identity that recognise non-binary identifications [67]. We suggest that more research is needed that investigates other forms of diversity (e.g., ethnicity) and their influence on sustainability performance [68,69]. As Glass [54] notes: "... future research could consider the role of other types of diversity on organizational policy and practice related to the environment. Though much less scholarship has considered the effect of racial/ethnic diversity as compared with gender diversity on organizational practice, scholarship on that topic suggests that racial/ethnic minority leaders bring diverse professional experiences and perspectives to leadership positions" (p. 508). To this we would add the importance of investigating diversity in political perspectives along the left-right spectrum and, building on the extensive work by Schwartz [70] and his colleagues, on personal values. There is also a need for 'intersectional' research that examines how the complex characteristics of individual board members in terms of gender, ethnicity, age, discipline, cognitive capacity and political and personal values cumulatively influence board decisions, including decisions concerning sustainability responsibilities, reporting and practices. With regard to the former, it would be hard to overestimate the disruptive impact, for example, of introducing legislation that mandated that the appointment of directors to all boards, public and private, required organisations to balance the number of those with egocentric values (linked to 'achievement') equally with those holding altruistic (linked to 'benevolence') and biospheric values (linked to 'universalism'). With regard to the latter, mandating that boards provide evidence of high intersectionality would also have profoundly disruptive impacts, given how homogeneous most boards currently are.
While the studies show evidence of relationships between various attributes of board members in influencing sustainability, there are few studies that robustly work towards explaining why these associations exist. Most studies use various forms of regression analyses and while this methodology is useful for macro-patterning, it does not provide interpretations of why or how these patterns have occurred. Qualitative and mixed methods approaches are largely absent from the body of research in this review and future research using these approaches could help to address this current gap. There are also opportunities to employ simulations of board decision-making which, by controlling for many of the intervening variables, could assist in assessing the specific contribution of designated independent variables on the dependent sustainability variable.
In addition, this study did not find any research solely focused on investigating the role of diversity on boards of governmental agencies and public sector organisations, with the exception of Sangle [71], whose focus remained on CSR. The literature is currently dominated by investigations into the board diversity of corporate, business and private sector organisations. This is a large gap in the international literature base in urgent need of addressing given the important role that public sector boards play in environmental, social and economic decision making.
This review has demonstrated the many ways in which board composition may significantly influence sustainable outcomes. We argue that disrupting the status quo of board composition is one evidence-based way to improve organisations' sustainable decision-making. While we have identified several important areas of research that need addressing, we argue that this review indicates that diverse board composition is a means of embedding sustainable decision-making into organisations, worthy of further investigation. While many unknowns remain regarding the quality or quantity of impacts that may be realised through changes to board composition, what is known is that maintaining the 'status quo' is no longer viable. Innovative solutions must be found that entrench decision-making for sustainability in organisations, which work to create a new status quo and contribute to the system-wide disruption so urgently needed across the public and private sectors.
Limitations
As with all studies, ours is not without limitations. First, a general lack of consensus among organisations and scholars as to what constitutes sustainability, its disclosure and its performance, as well as the complex relationships between each of these dimensions, generates an extremely heterogeneous set of studies that significantly complicates our understanding of the influence of board diversity. In addition, the lack of qualitative research into organisational sustainability performance and the role of management, in particular boards of directors, has meant that our findings in this paper can only be tentative. Finally, it is acknowledged that the scope of this paper is limited to the search terms used, and its relevance to the time at which it was conducted.