Tests of Predictions of the Algebraic Cluster Model: the Triangular D3h Symmetry of 12C
A new theoretical approach to clustering, the Algebraic Cluster Model (ACM), has been developed. For an oblate equilateral-triangular symmetric spinning top with D3h symmetry it predicts a rotation-vibration structure whose ground-state rotational band contains the sequence of states 0+, 2+, 3-, 4±, 5-, with nearly degenerate 4+ and 4- (parity-doublet) states. Our newly measured second 2+ state in 12C allows the first study of this rotation-vibration structure in 12C: together with the newly measured 5- and 4- states, it fits the predicted ground-state rotational band very well, including the almost degenerate 4+ and 4- (parity-doublet) states. Such a D3h symmetry is characteristic of triatomic molecules, but this is the first time it has been observed in a nucleus, in the ground-state rotational band of 12C. We discuss ACM predictions of other rotation-vibration bands in 12C, such as the (0+) Hoyle band and the (1-) bending mode, with predicted ("missing") 3- and 4- states that may shed new light on clustering in 12C and light nuclei. In particular, the observation (or non-observation) of the predicted ("missing") states in the Hoyle band will allow us to determine the geometrical arrangement of the three alpha particles composing the Hoyle state at 7.6542 MeV in 12C. We discuss proposed research programs at the Darmstadt S-DALINAC and at the newly constructed ELI-NP facility near Bucharest to test the predictions of the ACM in isotopes of carbon.
Introduction
A recent experiment performed at the HIγS gamma-ray facility [1] using the optical readout TPC (O-TPC) [2], shown in Fig. 1, provided unambiguous identification of the 2+ member of the Hoyle rotational band [3]. This observation allows for the first time the study of the predicted rotation-vibration structure of 12C.
The structure of light nuclei, and specifically clustering, has received new interest with major developments in ab-initio calculations of light nuclei, in particular of 12C. Ab-initio shell model [4] and symmetry-inspired shell model [5] calculations, as well as ab-initio Effective Field Theory (EFT) calculations on the lattice [6,7], the Fermionic Molecular Dynamics (FMD) model [8], and the Anti-symmetrized Molecular Dynamics (AMD) model [9], are employed to provide a microscopic foundation for the clustering phenomena that arise naturally in cluster models [10,11]. For example, one issue of current concern [12] is the geometrical structure of the three alpha particles in the Hoyle state at 7.6542 MeV in 12C (linear chain, obtuse triangle, or equilateral triangle) and the Hoyle rotational band built on top of the Hoyle state [13,14,15,16,17].
Cluster states are best described as molecular states, characterized by the separation (Jacobi) vector(s) connecting the constituent objects. For a diatomic-like object a single separation vector is required, leading to the predicted U(4) symmetry [18] that was observed in 18O via the characteristic enhanced E1 decays [19]. Triatomic symmetric spinning tops are characterized by two perpendicular Jacobi vectors (λ and ρ), leading to the predicted U(7) symmetry with the geometrical D3h symmetry [20,21]. The observation of the 2+ Hoyle rotational excitation in 12C [3,14], together with the recently discovered 4- [22,23] and 4+ [24] states in 12C, is in agreement with the predicted spectrum of an oblate spinning top with D3h symmetry [20,21]. It was predicted [20,21] that the three alpha-particle system of 12C leads to a ground-state rotational band with the most unusual sequence of states: 0+, 2+, 3-, 4±, 5-. The new high-spin 5- state [25], as well as the previously published 4- state [22,23], follows the J(J+1) trajectory predicted by this U(7) model [25], including the nearly degenerate 4- and 4+ states, as shown in Fig. 2.
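The J(J+1) trajectory invoked here is the standard rigid-rotor energy pattern; as a minimal sketch (E0 and the moment of inertia I are generic band parameters, not values quoted in this paper):

E(J) = E0 + (ħ²/2I) J(J+1)

Under this reading, the ACM prediction discussed below, that the Hoyle band's moment of inertia is larger by a factor of 2, implies rotational level spacings roughly half those of the ground-state band.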
The Algebraic Cluster Model
The spectrum of 12C predicted by the Algebraic Cluster Model (ACM) [20,21] is shown in Fig. 3, where it is also compared to the measured spectrum of 12C [25]. In addition to the ground-state rotational band, this U(7) model [20,21] predicts the Hoyle state at 7.6542 MeV in 12C to be the first vibrational breathing mode of the three-alpha-particle equilateral configuration, leading to the same rotational structure albeit with a larger moment of inertia (by a factor of 2). Recent measurements revealed the 2+ [14] and 4+ [24] members of the predicted Hoyle rotational band, and we are currently searching [26,27] for the 4- state, predicted by the ACM to be nearly degenerate with the 4+ state, and for the (broad) 3- state suggested to lie between 11 and 14 MeV [22,28]. The observation (or lack thereof) of these "missing" states will allow us to determine whether the Hoyle state is composed of three alpha particles in an equilateral-triangle arrangement [25,8,9] or an obtuse-triangle arrangement [7], or whether it is better described as a vibrational excitation of a "diffuse gas" of three alpha particles [11].
The U(7) model also predicts the 1- state at 10.84 MeV to be the vibrational bending mode, with a rotational band including the 1- and degenerate 2± states. We are searching [26,27] for the third 2+ state of 12C, which the U(7) model predicts to lie near the observed 2- state at 11.8 MeV.
Future Tests of the ACM
We propose a new research program in nuclear structure studies to be performed at the ELI-NP facility near Bucharest, Romania, with a newly proposed electronic-readout TPC (eTPC) and a Silicon Strip Detector (SSD) [27]. We propose to measure the multi-alpha decay of 12C and 16O with a TPC detector. Such measurements of the multi-alpha decay of 12C were also performed in the past with Silicon Strip Detectors (SSD) [22,23,24]; hence similar measurements with SSDs are also considered for ELI-NP. We emphasize that the measurements we discuss in 12C and 16O are typical examples of nuclear-structure studies that can be performed with our proposed detectors at ELI-NP, and such measurements in other light nuclei will provide a new perspective on the clustering phenomena in light nuclei.
Conclusions
In conclusion, the ACM appears to open a new chapter in the cluster physics of light nuclei, and it presents an opportunity for further experimental investigation of light nuclei. We refer the reader to the Technical Design Report (TDR) [27] for a discussion of the scientific goals of the Charged Particle Working Group (CPWG) of the ELI-NP facility in the study of light nuclei.
"year": 2016,
"sha1": "b33440478f95b7b1142f7b169b5904fac6ce638c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/730/1/012011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "39925c4e51a928fab54d17fa7cf049a1c99cda46",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Cashew nuts (Anacardium occidentale L.) decrease visceral fat, yet augment glucose in dyslipidemic rats
The objective of this study was to evaluate the biological effects of roasted Cashew nut consumption on biochemical and murinometric parameters in dyslipidemic rats receiving lipid supplementation. Young male rats were randomly assigned to three experimental groups (n = 10). The Control group (CONT) was treated with water, the Dyslipidemic group (DL) received a high-fat emulsion throughout the experiment, and the Dyslipidemic Cashew Nuts group (DLCN) received the same high-fat emulsion throughout the experiment but was also treated with Cashew nuts. Body parameters and biochemical, hepatic, and fecal fatty acid profiles were all evaluated. The levels of total cholesterol and triglycerides were higher in the DL and DLCN groups than in the control group. DLCN and CONT presented no difference in HDL levels. DLCN presented higher glycemia levels than the other groups. There was a reduction of body fat in DLCN as compared to the other groups, but with higher accumulation of liver fat. DLCN presented a 20.8% reduction in saturated hepatic fatty acids relative to CONT, and its ω9 fatty acids were elevated by 177% relative to CONT and by 21% relative to DL. As for fecal fatty acids, there was a lower concentration of polyunsaturates in DLCN than in the other groups. The data showed that consumption of Cashew nuts by dyslipidemic animals on a hyperlipidic diet induced greater accumulation of liver fat and worsened glycemic levels, despite reducing visceral fat and increasing fecal fat excretion.
Studies have evidenced the benefits of seed oils for human health [1], in particular the cholesterol-lowering effect [2], the cardioprotective effects of almonds [3], and the reduction of inflammatory markers promoted by consumption of nuts in general [4]. Maternal consumption of Cashew nuts in rats has been investigated for alterations in offspring development [5], from acceleration of nervous system maturation to prevention of memory deficits. However, there are still few studies investigating the biological effects of Cashew nut seed oil in non-healthy populations.
Cashews are the fruit of the cashew tree (Anacardium occidentale), which, when dried or roasted, yields Cashew nuts (also known as Cashews). The cashew is a tropical fruit native to northeastern Brazil and produced on a large scale in India and Vietnam [6,7], with global production from 2015 to 2016 reaching 738,861 tons [8]. Statistical data show a 70% increase in Cashew nut exports originating in these countries. The USA is the principal importer, but the population with the highest per capita consumption is Cambodia [8].
Cashew nuts are consumed in their natural or roasted form, or converted into food byproducts [7]. Having a soft and slightly sweet flavor, they stand out for a high lipid content (47.8 g/100 g) as a source of unsaturated fatty acids (UFAs): oleic (ω-9) and linoleic (ω-6) acids [9,5]. Other functional properties of the seed oil are due to its phenolic contents (flavonoids, anthocyanins, and tannins) and fiber [10]. The most valuable micronutrients found in cashews are folate and tocopherols [11], which delay metabolic disorders, protecting against atherosclerosis and other chronic non-communicable diseases (CNCDs) [12].
Cardiovascular disease is the most prevalent of the CNCDs and is responsible for almost one-third of deaths worldwide. The main risk factors for CNCDs are obesity, smoking, hypertension, and dyslipidemias [13]. Adequate food habits are essential to control dyslipidemia, and both fiber and unsaturated fatty acids help in that control; Cashew nuts, as a source of fiber and UFAs, could therefore be a food that helps control dyslipidemia [14]. We hypothesized that consumption of Cashew nuts improves dyslipidemia in rats on a hyperlipidic diet. The objective of this research was to evaluate the biological effects of Cashew nut consumption on biochemical and murinometric parameters in dyslipidemic rats that remained on a hyperlipidic, rather than a normolipidic, diet.
Methods and materials
This study was approved by the UFCG Ethics Committee for Animal Use (Protocol No. 94-2017). The experimental protocol followed the ethical recommendations of the National Institutes of Health (Bethesda, USA) regarding animal care (Fig 1). The research was duly registered in the National System of Management of Genetic Heritage and Traditional Knowledge (SISGEN), under Code A1BE84C. The Cashew nut flour was prepared so as to avoid temperature increases during processing and the consequent loss of nutritional properties. A household blender was used to obtain a homogeneous flour, which was then stored under refrigeration in a hermetically sealed container, protected from light, throughout the experiment.
Nutritional characterization of the cashew nuts
Cashew nut flour was subjected to analysis to characterize its centesimal composition, fiber content, total phenolics, and total flavonoids (Table 1), as well as analysis of phenolic compounds by high-performance liquid chromatography (HPLC) (Table 2). For the centesimal composition, moisture, ash, lipids, and proteins were analyzed [15]. Total carbohydrate content was estimated by difference using the AOAC formula [15]: 100 − [weight in grams of (protein + lipids + ashes + water) in 100 g of food]. Dietary fiber was quantified following the methodology described by Prosky et al. [16]. All analyses were performed in triplicate and the results are presented as mean and standard deviation.
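As a worked illustration of the carbohydrate-by-difference formula above, here is a minimal Python sketch; the input values are hypothetical, not the paper's measurements (only the 47.8 g/100 g lipid figure comes from the text):

```python
def carbohydrate_by_difference(protein_g, lipids_g, ash_g, moisture_g):
    """AOAC estimate of total carbohydrate in g per 100 g of food:
    100 - (protein + lipids + ash + water), all expressed per 100 g."""
    return 100.0 - (protein_g + lipids_g + ash_g + moisture_g)

# Hypothetical composition for 100 g of cashew nut flour:
print(carbohydrate_by_difference(protein_g=18.0, lipids_g=47.8,
                                 ash_g=2.5, moisture_g=3.0))  # -> 28.7
```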
For evaluation of total phenolics and total flavonoids, the Cashew nut constituents were extracted with an 80% methanol solution (v/v). One gram of Cashew nut was measured into a test tube and 10 mL of solvent was added. The test tube was left at room temperature for 24 hours; after filtration, the volume was completed to 10 mL with extraction solvent and stored in a freezer (-18˚C) until analysis. Total phenolic compounds were quantified according to the methodology described by Liu et al. [17], with modifications: 250 μL of extract was mixed with 1,250 μL of a 1:10 diluted Folin-Ciocalteau reagent. The solutions were mixed thoroughly and incubated at room temperature (27˚C) for 6 min. After incubation, 1,000 μL of 7.5% sodium carbonate (Na2CO3) solution was added, followed by incubation in a water bath at 50˚C for 5 min. The absorbances of the reaction mixtures were measured at 765 nm using a spectrophotometer (BEL Photonics, Piracicaba, São Paulo, Brazil). The absorbance of the extract was compared with a gallic acid standard curve to estimate the concentration of total phenolic compounds in the sample. The results were expressed as mg of gallic acid equivalents (GAE) per 100 g of Cashew nut on a dry-weight basis. The total flavonoid content was measured using the colorimetric assay developed by Zhishen et al. [18]. A known volume (0.5 mL) of the extract was added to a test tube together with 150 μL of 5% NaNO2. After 5 min, 150 μL of 10% AlCl3 was added, and, at 6 min, 1 mL of 1 M NaOH was added, followed by 1.2 mL of distilled water. Sample absorbance was read at 510 nm using a spectrophotometer (BEL Photonics, Piracicaba, São Paulo, Brazil) and compared with a catechin standard curve to estimate the flavonoid concentration in the sample. The flavonoid content was expressed as mg of catechin equivalents (CE) per 100 g of Cashew nut on a dry-weight basis.
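The standard-curve step above amounts to a linear calibration of absorbance against concentration, then scaling by the extraction volume and sample mass. A minimal Python sketch, with entirely hypothetical calibration readings (only the 1 g / 10 mL extraction ratio is from the text):

```python
import numpy as np

# Hypothetical gallic acid calibration points: concentration vs. A765.
std_conc = np.array([0.00, 0.05, 0.10, 0.20, 0.40])  # mg GAE/mL (assumed)
std_abs = np.array([0.00, 0.11, 0.22, 0.45, 0.90])   # assumed absorbances

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear fit A = m*c + b

def mg_gae_per_100g(sample_abs, extract_ml=10.0, sample_g=1.0):
    """Convert a sample absorbance to mg GAE per 100 g dry weight,
    assuming 1 g of nut extracted into 10 mL of solvent as described."""
    conc = (sample_abs - intercept) / slope          # mg/mL in the extract
    return conc * extract_ml / sample_g * 100.0

print(round(mg_gae_per_100g(0.35), 1))               # -> roughly 156
```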
The extraction, identification, and quantification of phenolic acids from the Cashew nuts were performed according to Meireles [19]. The milled sample was defatted with chloroform and methanol; 70% ethanol was added at a sample:solvent ratio of 1:10; and the mixture was stirred for 4 h at 200 rpm under temperature control (26˚C) and centrifuged for 15 minutes at 5,000 rpm. It was then vacuum-filtered through a Büchner funnel. The extract was dried in a circulating-air oven, eluted in water at a concentration of 5 mg/mL, and injected into the HPLC following the methodology mentioned [19].
Animals and diets
Seven-week-old Wistar rats were randomly separated into three experimental groups (n = 10). Throughout the experiment all animals had ad libitum access to water and feed. The animals of the DL and DLCN groups underwent induction of dyslipidemia through administration of a lipid emulsion during the initial 14 days of the experiment, at 1 mL/100 g of body weight, according to the methodology described by Xu et al. [20]. The emulsion contained pork lard, cholesterol, glycerol, propylthiouracil, and distilled water. After the initial 14 days, the formulation of the lipid emulsion was altered by removing the propylthiouracil, and the administered quantity was halved to 0.5 mL/100 g; administration to the DL and DLCN groups continued until the end of the experiment. Together with the lipid emulsion, the DLCN group animals received 1 g (4 g/kg of body weight) of Cashew nut flour for 28 days.
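The weight-based dosing in this protocol is a simple proportional calculation; a minimal sketch (the function names are ours, the dose rates are from the text):

```python
def emulsion_dose_ml(body_weight_g, ml_per_100g):
    """Lipid emulsion volume: 1 mL/100 g on days 1-14, 0.5 mL/100 g after."""
    return body_weight_g / 100.0 * ml_per_100g

def cashew_flour_dose_g(body_weight_g, g_per_kg=4.0):
    """Cashew nut flour at 4 g/kg of body weight (DLCN group)."""
    return body_weight_g / 1000.0 * g_per_kg

# A 250 g rat: induction-phase emulsion volume and daily cashew flour dose.
print(emulsion_dose_ml(250, 1.0), cashew_flour_dose_g(250))  # -> 2.5 1.0
```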
Physical parameters
Weight and feed intake were recorded weekly. Calorie intake was calculated from the feed, the high-lipid emulsion, and the Cashew nut intake. At the end of the experiment, with the animal anesthetized, the naso-anal length and the abdominal and thoracic circumferences were measured with a measuring tape. The body mass index (BMI) of the animals was calculated from the body weight (p) and naso-anal length (c) using the formula BMI = p/c², with weight (p) in grams and length (c) in centimeters [21].
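A minimal sketch of the BMI formula above (the example weight and length are hypothetical):

```python
def rat_bmi(weight_g, nasoanal_length_cm):
    """BMI = p / c**2, weight p in grams, length c in centimeters [21]."""
    return weight_g / nasoanal_length_cm ** 2

print(round(rat_bmi(250.0, 22.0), 3))  # -> 0.517 g/cm**2
```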
Biochemical parameters
At the end of the experiment, blood samples were obtained by cardiac puncture with the animals anesthetized using ketamine hydrochloride + xylazine hydrochloride (1 mL/kg of body weight). Plasma was collected by centrifugation of the blood at 3,500 rpm for 15 min and used to quantify total cholesterol (TC), HDL, triglycerides (TG), and blood glucose using enzymatic kits (LABTEST), with subsequent spectrophotometric reading (Spectrophotometer SP 1102).
Oral glucose tolerance test
The oral glucose tolerance test (OGTT) was performed at the end of the experiment, on the 41st experimental day. After 6 hours of fasting, the animals received a 10% sucrose solution at 2 mL/100 g of body weight. Blood was collected from the caudal vein, and glycemia was measured with an AccuCheck Active glucometer (Roche Diagnostics GmbH, Germany) at 0, 15, 30, and 45 minutes after administration of the solution.
The glucose area under the curve (AUC) was calculated using the trapezoidal rule over the sampling times:

AUC = Σi [PG(ti) + PG(ti+1)]/2 × (ti+1 − ti),

where PG is plasma glucose.
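Concretely, the trapezoidal AUC over the 0-45 min sampling grid can be computed as below; the glucose readings are hypothetical:

```python
import numpy as np

t = np.array([0, 15, 30, 45])               # sampling times (min)
pg = np.array([90.0, 140.0, 125.0, 110.0])  # hypothetical plasma glucose (mg/dl)

auc = np.trapz(pg, t)  # sum of (PG_i + PG_{i+1}) / 2 * (t_{i+1} - t_i)
print(auc)             # -> 5475.0 (mg/dl)*min
```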
Visceral, retroperitoneal and hepatic fats
Shortly after euthanasia, the visceral (mesenteric and epididymal) and retroperitoneal fat pads were removed and weighed [22]. The liver was removed, weighed, and subjected to fat quantification by the methodology described by Folch et al. [23], starting from 2 g of sample and extracting the lipids with 40 mL of a chloroform:methanol (2:1) solution.
Fecal fat
The feces of the animals were collected in two periods of the study. The first collection took place at the end of dyslipidemia induction, on the 14th experimental day, and the second at the end of the third week of treatment, on the 35th experimental day. The Folch et al. [23] methodology was used for fecal fat quantification.
Hepatic and fecal fatty acids
Parts of the liver and feces samples from the animals were used for fatty acid quantification. Methylation of the fatty acids present in the lipid extract was carried out for both, following the methodology described by Hartman and Lago [24]. An aliquot of the lipid extract was taken, calculated for each sample according to the fat content found in the lipid measurement (quantified according to the method of Folch et al. [23]). After adding 1 mL of internal standard (C19:0) and a saponification (KOH) solution, the mixture was heated under reflux for 4 min. An esterification solution was then added immediately afterwards, and the mixture was returned to reflux for 3 additional minutes. The sample was allowed to cool and then washed with ether, hexane, and distilled water, finally yielding an extract (containing the methyl esters and solvents) that was kept in a properly identified amber glass vial until the solvents had completely dried. After drying, the residue was resuspended in 1 mL of hexane and transferred to a vial for chromatographic analysis. The aliquots of the saponification and esterification solutions were determined according to the methodology described by Hartman and Lago [24]. A gas chromatograph (VARIAN 430-GC, California, USA) coupled to a fused-silica capillary column (CP WAX 52 CB, VARIAN, California, USA; 60 m × 0.25 mm, 0.25 μm film thickness) was used with helium as the carrier gas (1 mL/min flow rate). The oven temperature was programmed from 100˚C to 240˚C at 2.5˚C per minute (56 min), followed by a 30 min hold, totaling 86 minutes. The injector temperature was maintained at 250˚C and the detector at 260˚C. Aliquots of 1.0 μL of esterified extract were injected via a Split/Splitless injector. The chromatograms were recorded using Galaxie Chromatography Data System software. Fatty acids were identified by comparing the retention times of the methyl esters of the samples against Supelco Mix C4-24/C19 standards. The fatty acid results were quantified by normalizing the areas of the methyl esters and are expressed as percentages of total area.
Statistical analysis
The Cashew nut flour composition results are described as mean ± SD. All other results are expressed as mean ± SEM. Statistical analysis of the data was based on one-way ANOVA followed by Tukey's test. Differences were considered significant when p < 0.05. The statistical analyses were performed using GraphPad Prism 7.
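A minimal sketch of the one-way ANOVA plus Tukey post hoc workflow described above, using SciPy and statsmodels in place of GraphPad Prism; the simulated values are illustrative only:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
cont = rng.normal(63, 6, 10)   # simulated triglycerides, CONT (mg/dl)
dl = rng.normal(127, 12, 10)   # simulated triglycerides, DL
dlcn = rng.normal(82, 5, 10)   # simulated triglycerides, DLCN

f_stat, p = f_oneway(cont, dl, dlcn)  # one-way ANOVA across the 3 groups
print(f"ANOVA: F = {f_stat:.1f}, p = {p:.3g}")

values = np.concatenate([cont, dl, dlcn])
groups = ["CONT"] * 10 + ["DL"] * 10 + ["DLCN"] * 10
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey's post hoc test
```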
Body weight
Regarding the weekly weights of the animals in the second and third weeks of the experiment, the groups undergoing dyslipidemia induction and treatment (DL: 207.5 ± 3.44 g and 220.5 ± 4.31 g; DLCN: 211.75 ± 2.02 g and 223.6 ± 2.87 g) presented lower weights than the control group (CONT: 228.2 ± 4.61 g and 248.6 ± 4.73 g) (p < 0.05). In the fourth week, the lower weight persisted only for DLCN (242.6 ± 5.89 g) relative to CONT (266.0 ± 4.88 g), and in the fifth week DLCN (248.2 ± 4.75 g) again presented lower weights than the other two experimental groups, CONT (277.3 ± 5.12 g) and DL (275.14 ± 6.05 g) (p < 0.05) (Fig 2).
The caloric intake data in Table 3 show that CONT consumed more calories, owing to higher feed intake, than the other groups in the first experimental weeks. In the last two experimental weeks, DLCN consumed fewer calories than the other groups.
BMI, abdominal and thoracic circumferences
In Table 4, the BMI, abdominal and thoracic circumference values of the experimental groups are described. No significant differences were observed between the groups (P > 0.05).
Biochemical analysis
The biochemical analyses proved the efficacy of the dyslipidemia induction, since both the DL (69.59 ± 4.39 mg/dl) and DLCN (122.52 ± 12.95 mg/dl) groups had higher levels of total cholesterol than CONT (43.72 ± 2.47 mg/dl) (p < 0.05); DLCN also presented higher values than DL (p < 0.05). The DL group presented higher serum triglyceride values (127.4 ± 12.56 mg/dl) than CONT, while DLCN (81.56 ± 5.26 mg/dl) presented triglycerides elevated relative to CONT (62.76 ± 6.24 mg/dl) and reduced relative to DL (p < 0.05).
HDL levels were similar between the group treated with Cashew nuts and the control group, but were higher in DLCN (48.17 ± 3.53 mg/dl) than in DL (29.55 ± 2.89 mg/dl) (p < 0.05). In the DL group, the HDL content was lower than in the CONT group (45.28 ± 5.46 mg/dl) (p < 0.05).
Hepatic and fecal fatty acids
Regarding the saturated fatty acids (SFA) present in the liver (Table 5), the DLCN group presented an increase in myristic acid (14:0) content as compared to DL and a reduction as compared to CONT. In total accumulated SFA in hepatic tissue, the DLCN group presented a reduction of 20.8% as compared to the CONT group and of 1.5% as compared to the DL group. As for the monounsaturated fatty acids (MUFA) present in the liver, DLCN presented a higher value of myristoleic acid (14:1ω5) than DL. DLCN also presented higher values of trans-oleic, cis-oleic, erucic, and nervonic acids (18:1ω9t, 18:1ω9c, 22:1ω9, and 24:1ω9) than the other groups. The sum of the MUFA averages showed an increase of 168% for the DLCN group in relation to the CONT group and of 21% in relation to the DL group. The total content of ω-9 fatty acids was higher in the DLCN group than in the other groups, with an elevation of 177% in relation to CONT and of 21% in relation to DL.
The polyunsaturated linoleic acid (18:2ω6c) presented values elevated by about 17% in DLCN compared to CONT, with the highest quantification in the DL group. Arachidonic acid (20:4ω6c) presented similar values in the DLCN and DL groups, with a reduction in the CONT group. Eicosapentaenoic acid (20:5ω3) presented increasing values for DLCN, CONT, and DL, in that order. Docosahexaenoic acid (22:6ω3) values were lowest in the DLCN group and highest in the CONT group. Total polyunsaturated fatty acids (PUFA) in the liver presented slight percentage differences between the groups: DLCN presented a 14.5% reduction as compared to CONT, DL a 4.9% reduction as compared to CONT, and DLCN a 10% reduction as compared to DL.
Our data in Table 6 show that fecal excretion of fatty acids varied between groups in both feces collections. The data from the first collection, performed on the last day of dyslipidemia induction, show an increase in saturated fats for the groups that consumed the high-fat emulsion. In the second collection, however, there was a marked decrease in saturated fat excretion by the DLCN group as compared to the other groups, with a difference of 10.6% relative to the DL group. MUFA quantification presented similar levels between the two collections and between the groups. Regarding PUFA, there was a reduction in the second collection relative to the first for the CONT and DL groups, but DLCN presented lower fecal levels of polyunsaturated fats than the other groups in both collections.
Discussion
Consumption of seed oils (pleasant flavors and high caloric values) is related to body weight gain [25], but the fiber content of this food also acts on satiety, causing hypophagia and a consequent reduction in ingested energy [26]. Increased energy density in a high-lipid diet, such as Cashew nut consumption together with an emulsion rich in SFA and cholesterol as used in the present study, also increases satiety [27], with consequent reductions in the appetites of the rats [28]. These findings explain the results of the present study regarding the delay in weight gain and the lower calorie and feed intakes of the groups that underwent induction of dyslipidemia (Table 3, Figs 2 and 3). In the DLCN animals, however, the effect was even more evident, since the consumption of Cashew nuts caused lower calorie and feed intakes than in the other experimental groups, most evidently in the last experimental weeks (Table 3 and Fig 3).
When assessing appetite sensations in humans ingesting almonds and walnuts, levels of hunger were found to be suppressed and satiety increased [29,30]. Like almonds and other nuts, Cashew nuts present high lipid and protein contents, as well as fibers which are associated with reduced appetite [29].
Supplementation of fibers in rats fed a fat-rich diet has also decreased the animals' weights, though with increased consumption [31,28]. Another factor associated with altered satiety is the profile and amount of fatty acids in the lipid source consumed; these eventually flow through the bloodstream and affect satiety in the brain. After triacylglycerol hydrolysis of foods, fatty acids are transported to epithelial cells in the form of micelles to be absorbed. The greater the degree of fat unsaturation, the faster micelle formation becomes; UFAs are thus more readily available for absorption, causing the release of hormones and increased satiety [32].
Kozimor et al. [33] evaluated satiety in women receiving diets rich in SFA, MUFA, and PUFA and observed that PUFAs promoted greater satiety than SFAs; in the same study, MUFAs presented a lower satiety response than the other two types of fat. Poudyal and collaborators [34] concluded that supplementation with differing oils (macadamia, safflower, and linseed oils, respectively rich in oleic, linoleic, and α-linolenic acids) reduced feed intakes as compared to groups fed a normal diet. A study comparing diets rich in saturated and unsaturated fats found higher weights in rats fed the saturated-fat-rich diet, even with the same calorie intake in both groups [35]. The Cashew nuts used in this research presented excellent total phenolic and total flavonoid contents, as well as a high catechin content (Tables 1 and 2), indicating high nutritional quality [36]. According to Trox et al. [37], Cashew nuts present high antioxidant potential because they are a source of phenolic compounds with biological and medicinal properties. As in other oleaginous foods, the total phenolics and flavonoids found in Cashew nuts are highlighted for their functional properties [38,39], being responsible for inhibiting or reducing oxidation and thereby protecting UFAs [40]. However, the antioxidant content of Cashew nuts can be diminished by heat treatment, the percentage of shell present, storage, and irradiation processes [40]. Studies show the potential value of Anacardium occidentale as a source of antioxidants not only in the fruit but also in its leaves and pseudofruits [41,42].
The induction of dyslipidemia performed in the present study proved effective, since the treatment increased TC in both treated groups (DL and DLCN) (Fig 4). However, the consumption of Cashew nuts did not reverse the effects caused by the administration of the high-fat emulsion. The Cashew nuts used in the present study presented a total dietary fiber content of 3.65 g per 100 g of product (0.33 g of soluble and 3.32 g of insoluble fiber). Soluble fibers are widely used in the treatment of dyslipidemias [43,44]. In contrast, insoluble fiber consumption does not present positive effects on cholesterol reduction or cardiovascular risk [14], as verified in the present study, since no reductions in TC or TG were observed.
Findings on the hypocholesterolemic action of seed oils still diverge across studies. Lovejoy et al. [45] evaluated non-diabetic adult males and females who consumed 100 g of almonds/day for 4 weeks and observed a 21% reduction in TC. Similarly, Lee et al. [46] reported improvement of TC levels in women with metabolic syndrome who consumed a nut mixture for 6 weeks. However, Casas-Agustench et al. [26], assessing the effect of a seed-oil mixture (walnuts, almonds, and hazelnuts) in conjunction with a standard diet in adult men and women with metabolic syndrome, observed no alterations in TC, LDL, HDL, or TG after 12 weeks. When evaluating the administration of Cashew leaf, stem, and nut extracts, Jaiswal et al. [47] reported no significant differences in TC and HDL levels in diabetic rats.
In the present study, HDL levels in the dyslipidemic group decreased in comparison to the control group. However, in the animals treated with Cashew nuts, which had also undergone prior induction of dyslipidemia, this damage was reversed: DLCN presented values similar to the control group, which did not undergo induction of dyslipidemia (Fig 4), highlighting the beneficial effects of Cashew nuts on HDL levels.
Evaluating the glycemic metabolism of the animals through fasting glycemia, the group treated with Cashew nuts presented higher values than the other groups (Fig 4). In the oral glucose tolerance test, the DL and DLCN groups presented elevated serum glucose at the beginning and end of the test (times 0, 30, and 45 minutes) (Fig 5A); this is confirmed in Fig 5B by the glucose area under the OGTT curve. The curve shows, however, that DL had higher serum glucose levels. Studies evaluating seed oil consumption in humans have observed a reduction in fasting glycemia in diabetic individuals [48,49,45]. However, Ma et al. [50], when treating diabetic men and women with nuts, verified an increase in fasting glycemia.
Research performed with humans presents results similar to the present study for the OGTT, confirming an increase in serum glycemia in men and women diagnosed with metabolic syndrome and treated with Cashew nuts for 8 weeks [51]. When testing a high-fat diet, Almeida-Suhett et al. [52] also found an increase in basal glycemia in rats, resulting in glucose tolerance curves with greater areas in the animals fed the high-fat diets.
Regarding serum TG levels, DLCN presented a decrease of 36.3% relative to the dyslipidemic group and a 23% elevation relative to the control group. Because they are a source of UFAs, Cashew nuts promote a fall in TG, potentially by reducing the exposure of the liver to non-esterified fatty acids and thereby preventing one of the main TG synthesis pathways [53]. In this study, however, Cashew nuts were not able to completely abolish such alterations relative to the control group.
Alterations in carbohydrate metabolism interfere with the lipid profile: when glucose is in excess, insulin converts it into fatty acids that are stored as TG, whose concentration becomes elevated while HDL is reduced. Elevated TG levels imply pancreatic β-cell apoptosis due to lipotoxicity, causing insulin resistance [54]. Thus, the elevation of serum TG levels correlates with the high fasting glucose levels and the OGTT results in this study. The induction of dyslipidemia and the consumption of Cashew nuts may have triggered the increased glycemic levels, which interfere with lipid metabolism.
A lipid-rich diet, such as the one used in the present study, causes the organism to accumulate excess fat, with adipocyte expansion, high blood concentrations of serum lipids and lipoproteins, and a high lipid supply to the liver. Lipid overaccumulation results in ectopic deposition of lipids in non-adipose tissues and increases hepatic gluconeogenesis, along with LDL and HDL reductions [55,56].
In our study, the animals of the DL and DLCN groups were kept on a fat-rich diet, and the DLCN group received Cashew nuts (in addition) which also have high lipid content. The data revealed that supplementation with Cashew nuts induces a lower deposition of fatty acids in the adipose tissues, besides promoting a decrease in serum TG. The data correlate when considering that TG are stored in the adipose tissue and that deposition is directly related to its synthesis in the liver and its concentration in the bloodstream [57].
However, this reduction in fat deposition was not observed in the liver, since there was a higher accumulation of fat in DLCN than in DL. Evaluating the profile of fatty acids accumulated in this tissue, a difference was observed: the group that received Cashew nuts presented more MUFA than the DL group.
Considering that the DLCN group received fatty acids (originating in Cashew nuts) in addition to the dietary fatty acids that the DL group also received, Cashew nut supplementation nonetheless inhibited fat accumulation in adipose tissue as compared to DL. However, the glycemic curve also showed an increase in plasma glycemia. According to Kahn and Valdez [55], a fat-rich diet can lead to free fatty acid deposition in the pancreas, reduce insulin secretion, and induce insulin resistance or hyperglycemia. This potentially occurred in the present study, given the increases in the glycemic curve and in fasting glycemia.
When the physical condition of the animals was evaluated, it was found that the Cashew nuts offered to the animals caused, besides a reduction of body weight, a reduction of visceral (mesenteric and epididymal) and retroperitoneal fats (Fig 6). Vaidya et al. [58] evaluated the effect of omega-3 fatty acid supplementation in rats on a fat-rich diet and observed reductions in body fat in animals treated for 8 weeks, as well as reductions in body weight, liver weight, TC, and hepatic cholesterol. Bhaskaran et al. [35], treating rats with saturated and unsaturated fats, found a reduction in adipocyte size in the group fed the unsaturated-fat-rich diet. These studies corroborate our findings, in which Cashew nuts, a source of unsaturated fats, produced the same results.
Fat-rich diets also induce adipose tissue accumulation, which increases free plasma fatty acids [59]. The release of free fatty acids from visceral adipose tissue contributes to fat oxidation and stimulates the esterification of fatty acids into TG in the liver, leading to reduced glucose and lipid metabolism in the peripheral tissues [60]. Our research showed reductions of visceral and retroperitoneal fats in the group treated with Cashew nuts; in contrast, there was an increase in serum TG (relative to the control group) and higher liver fat accumulation, which might be explained by increased oxidation of fatty acids and reduced lipogenesis, which draws on peripheral tissue fats [35].
The hepatic fat content followed the organ weight: as fat deposition increases, organ weight increases. Our findings evidenced greater hepatic fat deposition in the group treated with Cashew nuts than in the other groups (CONT and DL), but liver weight increased in both dyslipidemic groups (DLCN and DL) relative to CONT (Fig 7). Cholesterol and TG synthesis is highest in the liver, and the main sites of fat deposition are the viscera and under the skin.
In this study, the animals that consumed Cashew nuts presented lower abdominal fat deposition. Both dyslipidemic groups presented a higher percentage of liver fat, with the highest amount in DLCN. Correlating hepatic fat deposition with fatty acid quantification revealed that, comparing DLCN with DL, the amounts of SFA were similar, while the MUFA content was 21% higher in DLCN. These results are likely due to Cashew nuts being a source of oleic acid; their consumption induced greater hepatic deposition of MUFA. As in our research, Picklo et al. [61] also found an increase in oleic acid in the livers of rats on a high-fat, oleic acid-rich diet, as well as elevated liver fat and glycemia in these animals.
High, inappropriate ectopic fat deposition in the liver generates hepatic steatosis, or non-alcoholic fatty liver disease (NAFLD) [57], a disease with negative impacts on the health of obese individuals, usually accompanied by dyslipidemia and other complications [58,62].
Lee, Homma, and Fujii [63] showed that the early phase of NAFLD has protective functions against oxidative lesions caused by reactive oxygen species (ROS) and toxic agents in rats fed a diet rich in unsaturated fat, because this temporary accumulation of liver fat is an adaptive response of hepatocytes under stress. Research comparing liver fat accumulation under SFA consumption (a palm oil-based diet) and UFA consumption (a sunflower oil-based diet) in adult men and women concluded that a UFA-rich diet prevents liver, visceral, and total fat deposition as compared to the SFA-rich diet [53]; this was not observed in the present study.
The Cashew nuts used in this research are a source of oleic and linoleic acids; however, since they were offered to previously dyslipidemic animals that continued consuming a hyperlipidic emulsion, Cashew nut consumption was not able to reverse liver fat accumulation or to improve the fatty acid profile of the tissue, with the exception of the MUFA.
In the first collection, the amount of fat excreted in the feces was higher in the DL and DLCN groups than in the CONT group, owing to the induction of dyslipidemia through administration of the high-fat emulsion. In the second collection, after initiation of treatment with Cashew nuts, fat excretion was higher in DLCN than in CONT and DL (Fig 8). Dietary fibers assist in the excretion of fat through the feces [64]. Cashew nuts, in addition to containing dietary fibers, are a food source of UFAs, especially oleic and linoleic acids. High fat intake (as in the animals of the DL group) induced higher fecal fat excretion, and the consumption of Cashew nuts (a source of fibers) also increased excretion; soluble fibers form chelates with the fat excreted by the gallbladder or consumed in the diet [65].
Thus, in the present study, the data confirm that the fiber and lipid contents of Cashew nuts were responsible for the increase in fecal fat excretion. Using seed oils (walnuts, almonds, and hazelnuts) in research with adults, Casas-Agustench et al. [26] found higher fecal fat excretion compared to the control group. Research evaluating dietary fiber found that a diet supplemented with flaxseed increased fat excretion by up to 50% [64]. Even with increased fat excretion, DLCN showed high total cholesterol; but this group consumed largely unsaturated cashew lipids, which may be responsible for the increased HDL content compared to DL.
Correlating the data of this study, we found that Cashew nuts in a dyslipidemic diet (DLCN), relative to the group without Cashew nuts (DL), led to recovery of serum HDL levels, weight reduction, lower visceral and retroperitoneal fat deposition, higher liver fat deposition (of better quality), and increased fecal fat excretion. However, we also found increased serum levels of total cholesterol, together with glucose.
Conclusion
Previously dyslipidemic animals, maintained on saturated fats associated with Cashew nuts, presented reductions in visceral and retroperitoneal fat deposition and a reversal of the diminished HDL levels usually found in dyslipidemias. However, treatment with Cashew nuts compromised glycemic metabolism and augmented fat deposition in hepatic tissue.
We conclude that consumption of Cashew nuts by dyslipidemic animals on an unbalanced diet produces improvements in dyslipidemia, yet also increases glycemic alterations and raises the risk of non-alcoholic fatty liver disease.
"year": 2019,
"sha1": "35b47e84d14203d99300b9055364ef1ba7d5b0b7",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0225736&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a9a82a348d2825bb4d572d2590105c3a4e732145",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
The comparison of effectiveness of acupressure on Spleen 6 and Hugo points on the severity of postpartum pain: A randomized clinical trial
Abstract. Background and aims: Postpartum pain poses a significant challenge for new mothers. Various nonpharmacological methods are employed to manage postpartum pain. This study aimed to compare the effectiveness of acupressure on the Spleen 6 and Hugo points on the severity of postpartum pain. Methods: In this parallel randomized trial, 68 eligible primiparous women who had vaginal deliveries and experienced postpartum pain at Farabi Hospital in Malekan (a city in East Azarbaijan Province in Iran) were selected according to inclusion/exclusion criteria and then allocated to the Hugo (n = 34) and Spleen 6 (n = 34) acupressure groups using a randomized block design (six blocks). Data collection took place from November 2022 to April 2023. The participants were blinded; however, the analysts and investigators were not. Acupressure interventions were applied bilaterally for 20 min, consisting of 10 s of pressure followed by 2 s of rest. Pain intensity was assessed using a visual pain scale before, immediately after, and 1 h after the intervention. In total, 68 participants completed the study. Data were analyzed using Statistical Package for the Social Sciences version 25 with chi-square, Mann-Whitney, and Friedman tests. Results: Both groups exhibited a statistically significant reduction in postpartum pain intensity across all periods (p < 0.001). Although there was a significant difference in pain intensity between the groups before the intervention (p = 0.039), this distinction was not observed immediately or 1 h after the intervention (p ≥ 0.05). Both the Hugo and Spleen 6 acupressure interventions reduced postpartum pain intensity. No significant adverse events or side effects were observed. Conclusion: Acupressure on the Spleen 6 and Hugo points helped decrease the severity of postpartum pain in primiparous women who had vaginal deliveries. Healthcare providers are encouraged to consider acupressure for postpartum pain management.
Postpartum pain following vaginal delivery is a prevalent issue impacting millions of young women [1,2]. During the third stage of labor, when the placenta and membranes are expelled, the uterus undergoes contractions to constrict large uterine arteries, preventing postpartum hemorrhage [3]. These contractions lead to the release of chemical mediators, including bradykinin, leukotrienes, prostaglandins, serotonin, and lactic acid, contributing to the experience of pain [4]. Notably, the release of prostaglandins, responsible for inducing uterine contractions, emerges as the primary culprit behind postpartum pain [5]. The intensity of postpartum pain spans from menstrual-like cramps to severe discomfort, occasionally surpassing the pain experienced during childbirth itself [5]. Typically, this discomfort persists for 3-4 days, although it can occasionally extend up to a week postdelivery [3]. It was reported that 47% of women experienced postpartum uterine pain within 6-48 h after delivery [6]. Pourmaleky et al. stated that 77% of women experience postpartum pain [7]. Pain stands out as one of the most prevalent and distressing sensory and psychological ordeals [8]. The repercussions of pain extend beyond the physical realm, impacting a mother's ability to breastfeed, engage in daily activities, and even communicate effectively with healthcare providers and her newborn [9]. Furthermore, pain-induced stress triggers an increase in adrenaline secretion, leading to decreased oxytocin production, which can disrupt the flow of breast milk [10]. Therefore, pain management in the postpartum period is fundamental [11]. Recent research indicates a range of methods for alleviating postpartum pain, encompassing massage therapy, reflexology, heat therapy, relaxation techniques, skin stimulation, herbal remedies, and pharmaceutical interventions [12,13]. Among these, oral pain relievers like mefenamic acid, ibuprofen, and acetaminophen stand out as the most commonly employed means to mitigate postpartum pain [13]. Despite their notable efficacy in pain reduction, it is crucial to acknowledge potential side effects associated with these medications, including but not limited to nausea, vomiting, diarrhea, abdominal pain, gastrointestinal bleeding, dizziness, and drowsiness [4]. Acupressure, a derivative of acupuncture, involves stimulating acupuncture points through finger pressure and massage to regulate and expedite bodily functions [14]. Acupressure operates by activating specific acupuncture points to modulate pain gate control: by stimulating large nerve fibers that transmit impulses to the spinal cord, acupressure effectively closes the gates of pain transmission, thereby diminishing the perception of pain [15]. Additionally, according to traditional Chinese medicine, the body's vital energy, or Qi, flows through meridians, governing bodily functions. Disruptions in this energy flow lead to discomfort and pain. Targeting specific points in the body allows access to these meridians, restoring balance and alleviating pain [16]. Certain pressure points are also believed to stimulate oxytocin release and expedite labor while concurrently fostering energy equilibrium and pain reduction [17]. A key acupressure point is Spleen 6 (SP6), positioned four fingers above the inner ankle behind the tibia's posterior edge [18]. Wu et al.
have conducted a study on the effects of acupuncture (SP6) on postcesarean section pain and reported that the acupuncture group's pain scores were lower than the control group's, with significant differences in visual analog scale (VAS) scores between the acupuncture and control groups within the first 2 h after cesarean section [19]. The Hugo point, or Large Intestine 4 (LI4), located between the thumb and index finger, is another pivotal pressure point and one of the most commonly used [13]. Afravi et al. reported that Hugo point pressure is a simple, cost-effective, harmless, and easily applicable analgesic method for after-pain reduction, especially in the first 2 h after delivery [13]. Negahban Bonabi et al.'s study demonstrated a significant reduction in postcesarean section pain through acupressure at the Hugo point; the observed difference in pain intensity between the intervention and control groups was particularly notable 1 h after the intervention [20]. Acupressure stands as a widely utilized method for alleviating postpartum pain [21]. Despite extensive research on pain relief during childbirth, postpartum pain has been relatively underexplored [22]. A systematic review showed that there is currently no standard for acupressure point location, frequency, or duration of use of this method to reduce pain, and the durability of the relief effect of this intervention is not apparent; in this regard, the researchers have recommended more studies to investigate the preferred point of acupressure for postpartum pain and to determine the duration of the intervention's effect [23]. The results of another systematic review showed that the Hugo and Spleen pressure points are the most used [24]. In one study, the effects of the Hugo and Spleen 6 acupressure points on the severity of labor pain in primiparous women were compared, and no significant difference was observed between the two points in reducing labor pain [25]. In another study, which compared the effect of acupressure on the Hugo and Spleen 6 points on postcesarean section pain, Hugo point acupressure performed better than Spleen 6 in reducing postcesarean pain [20]. Aligned with the World Health Organization's stance on mother-friendly hospitals, alleviating postpartum pain is a fundamental principle. Nonpharmacological methods, devoid of adverse effects on both mother and fetus, are preferred by patients. In Iran, Spleen 6 and Hugo are common pressure points employed for this purpose, yet there is no consensus on the preferred point of pressure. In addition, comparative studies of their efficacy in postpartum pain relief after natural childbirth are lacking.
Given the significance of postpartum pain and its direct impact on maternal satisfaction, recognizing the widespread acceptance of nonpharmacological pain reduction methods, and considering the importance of identifying the best acupressure point, this study was conducted to compare the effects of acupressure on the Spleen 6 and Hugo points on postpartum pain in primiparous women who gave birth in Malekan city in 2022.
Study design
This single-center, single-blind, parallel randomized (1:1) trial was conducted in Malekan, Iran, from November 2022 to April 2023.
Participants
In this study, 68 eligible primiparous women who experienced postpartum pain and were referred to Farabi Hospital in Malekan (a city in East Azarbaijan Province in Iran) were selected according to inclusion/exclusion criteria and then randomized into two groups using a randomized block design (six blocks) (Figure 1).
Sample size calculation
The sample size for each group was calculated based on the results of a previous similar study [20], using the standard formula for comparing two means:

n = (z(1−α/2) + z(1−β))² (δ1² + δ2²) / (µ1 − µ2)²

In this equation, α = 0.05, β = 0.2, δ1 = 2.39, δ2 = 1.93, µ1 = 5.91, and µ2 = 7.39, giving n = 34. Finally, 68 participants were included in the study, with 34 participants in each group.
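A minimal Python sketch reproducing this calculation with the reported inputs (we assume δ are the group standard deviations and µ the group means, as is standard for this formula):

```python
from math import ceil
from scipy.stats import norm

alpha, beta = 0.05, 0.20
d1, d2 = 2.39, 1.93      # reported standard deviations
m1, m2 = 5.91, 7.39      # reported means

z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)  # 1.96 + 0.84
n = z**2 * (d1**2 + d2**2) / (m1 - m2)**2
print(ceil(n))           # -> 34 participants per group
```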
Randomization
Patients from the postpartum ward at Farabi Hospital in Malekan, located in the East Azarbaijan Province of Iran, were selected based on specific inclusion and exclusion criteria before being randomly allocated to the Hugo or SP-6 group at a 1:1 ratio, using a randomized block design with six blocks.
Inclusion and exclusion criteria
The inclusion criteria comprised being a primiparous woman (first-time mother), having undergone an episiotomy, and possessing no lesions at the intended site of acupressure application.The exclusion criteria were defined as follows: a lack of willingness to participate; prior experience with acupressure; any speech, hearing, or visual impairments; a history of substance abuse; diagnosed mental health disorders; and the occurrence of complications such as significant hemorrhage, embolism, or the necessity for analgesics within the initial 2 h postpartum.
Data collection
The demographic questionnaire was administered through face-to-face interviews, while pain intensity was assessed using the VAS at three time points: before the intervention, immediately afterward, and 1 h after the intervention [20]. The VAS is recognized for its validity and scientific reliability [22]. The content validity method was employed to ensure the validity of the demographic questionnaire; the expert panel included female obstetrics specialists, 1 midwife, and 7 professors with expertise in midwifery and reproductive health. Following comprehensive discussions and revisions by this panel of experts, the refined questionnaire was adopted for the study. This validation process, incorporating diverse academic insights, enhances the questionnaire's content validity, ensuring its congruence with the research goals and adherence to scientific standards.
Intervention
Interventions were delivered to primiparous mothers immediately following natural childbirth, both directly after delivery and upon their transfer to the postpartum ward. These procedures were conducted before the initiation of breastfeeding, within a 2-h window postdelivery. Acupressure was applied bilaterally at the Spleen 6 point (Figure 2) and the Hugo point (Figure 3). For each point, pressure was applied for 10 s, followed by a 2-s rest, over a continuous duration of 20 min. Under the guidance of the third author, the first author administered the interventions, following a protocol adapted from the Negahban Bonabi study [20]. The pressure intensity was adjusted to produce a sensation of warmth and mild discomfort. The Spleen 6 point is located 5 cm above the inner ankle along the Spleen meridian, whereas the Hugo point is found on the back of the hand, between the first and second metacarpal bones, in alignment with the radial bone. In addition to the acupressure intervention, all participants received standard postpartum care, which included monitoring of vital signs, uterine massage, and bleeding control measures. Furthermore, in the postpartum ward, women who requested it were administered a 100 mg diclofenac suppository at 4 and 10 h postdelivery.
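For clarity, the press/rest cadence described above works out as follows (a trivial sketch; the protocol parameters are from the text):

```python
PRESS_S, REST_S, SESSION_MIN = 10, 2, 20  # 10 s pressure, 2 s rest, 20 min total

cycles = (SESSION_MIN * 60) // (PRESS_S + REST_S)
print(cycles)  # -> 100 press/rest cycles per point over one session
```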
RESULTS
In total, 68 primiparous women who met the inclusion criteria were randomly divided into two groups (n = 34 each): the Spleen 6 and Hugo groups.The Kolmogorov-Smirnov test assessed the normality of quantitative variables within both groups.A comparative analysis of demographic variables between the Hugo and Spleen 6 groups showed no statistically significant differences in demographic factors.
The Mann-Whitney test, applied to evaluate the average age difference given the nonnormal distribution in the Spleen 6 group, indicated no significant age difference between groups (p = 0.212) (Table 1).
The Kolmogorov-Smirnov test showed that the pain intensity variables were not normally distributed before, immediately after, or 1 h after the intervention in either group. The nonparametric Mann-Whitney test was therefore used to compare pain intensity scores at the three time points between the groups, revealing a statistically significant difference in pain intensity before the intervention; however, no significant differences were observed immediately after or 1 h after the intervention (Table 2).
Due to the nonnormal distribution of pain intensity variables, the Friedman test was employed to examine the trends in pain intensity across the three time intervals. This analysis showed a statistically significant reduction in pain intensity over time in both the Hugo and Spleen 6 acupressure groups (p < 0.001) (Table 2). No significant adverse events or side effects were reported.
| DISCUSSION
The findings of this study demonstrate a statistically significant reduction in postpartum pain intensity across three time intervals (before, immediately after, and 1 h after the intervention) in the Spleen 6 acupressure group. Consistent with our results, Wu et al. conducted a study on the effects of acupuncture on post-cesarean section pain and reported that the acupuncture group's pain scores were lower than the control group's, with significant differences in VAS scores between the acupuncture and control groups within the first 2 h after cesarean section. 19 Lee et al. 26 conducted a study examining the impact of acupressure at the SP-6 acupuncture point. In their research involving 75 women, 36 received acupressure at the SP-6 point, while 39 received only tactile touch at the same point. Acupressure was administered for 30 min, and pain intensity was assessed 30 and 60 min after the intervention. Similar to the findings in the present study, Lee et al. reported that women who underwent acupressure exhibited reduced pain levels. 26 Okumus and colleagues likewise reported lower pain levels in the intervention group compared with the control group following acupressure at the Spleen 6 point. 27 Negahban Bonabi et al. did not observe a statistically significant difference in mean after-pain scores following cesarean section between the Spleen 6 acupressure and control groups. 20 Similarly, Soltani et al. found no significant differences in uterine tonicity and pain 1 and 2 h after delivery among groups receiving acupressure on main points, sham acupressure, and control. 28 Additionally, Adib-Hajbaghery et al. reported no significant reduction in pain, nausea, and vomiting after appendectomy with acupressure at the Spleen 6 point. 29

The precise mechanism of pain reduction via acupressure and acupuncture at the SP-6 point remains unclear. It is theorized that acupuncture modulates the nervous system, influencing input signals to the central nervous system. This activation may engage pain-regulating systems, including internal opioid pathways. Studies have shown elevated endorphin levels in cerebrospinal fluid and the brain after acupuncture, implying their potential role in pain alleviation. 30

TABLE 1: Demographic characteristics of participants in the Hugo and Spleen 6 groups. (a) Chi-square tests; (b) Mann-Whitney U.

The results of this study also suggest that acupressure at the Hugo point significantly reduced the average intensity of postpartum pain across three critical time intervals: before, immediately after, and 1 h after the intervention. These findings align with the study by Negahban Bonabi et al., which reported a statistically significant difference in after-pain scores following cesarean section in the Hugo point acupressure group compared to the control group, showcasing the efficacy of Hugo point acupressure in alleviating postoperative pain. 20 Hamidzadeh et al.'s study also provided consistent results, illustrating that acupressure at the Hugo point effectively reduced labor pain compared to the control group. 31 In a study by Kumar and Viswanath, 32 significant differences in pain scores were observed between groups at 30 and 60 min postintervention. Women in the acupressure group reported a positive experience, suggesting that Hugo point acupressure is a cost-effective nursing intervention for enhancing comfort during childbirth. 32 Smith et al.'s study further supports these findings, demonstrating that acupressure effectively reduces labor pain. 33 The review by Ganji et al. emphasized the reliability of the Spleen 6 and Hugo points for labor pain reduction, as they were consistently employed in studies with acceptable validity. 24 However, it is worth noting that some studies present contradictory results. For instance, the study by Ramezani et al.
in 2016 showed that acupressure at Hugo's point had no discernible effect on reducing postoperative pain after cesarean surgery. 34 In a study by Yeh et al. comparing the effects of auricular point acupressure with painkillers on post-spine surgery pain, ear acupressure did not prove effective in pain reduction. 35 These outcomes may be attributed to variations in acupressure or acupuncture techniques, including the application of electrical acupressure.
Additionally, the selection of control groups and the specific context of the surgical procedure can play a role in influencing the observed effects. Moreover, postoperative pain is a multifaceted phenomenon influenced by various factors, such as age, personality traits, education, social status, and the patient's awareness and understanding of the surgical process, medical care, time of day, and physical condition. 19 The results of this study indicate no significant difference in the average intensity of postpartum pain between the two acupressure groups (Spleen 6 and Hugo) immediately after and 1 h after the intervention.
In some studies, age has been reported as a factor affecting pain intensity. 36,37 However, in our study, a thorough examination of the studied groups regarding age and education before the test indicated homogeneity, allowing us to attribute changes in average pain intensity to the interventions. The exact mechanism underlying acupressure's effect on pain remains unknown. It is suggested that acupressure, by stimulating energy channels, establishes a balance between forces and energy flow. It may also hinder the transmission of pain signals and elevate endorphin levels. 26 Furthermore, reducing anxiety levels may contribute to pain reduction, as anxiety is associated with increased catecholamines, leading to decreased endorphins, heightened pain, and prolonged labor. 38 Another possible mechanism is the "gate control theory of pain," which posits that pressure stimulates large nerve fibers, ultimately keeping pain-transmission gates closed and reducing pain perception. 39 According to Chinese medicine, energy channels known as meridians exist within the body, and blockages in these channels lead to imbalances in energy, potentially resulting in pain during childbirth. Therefore, stimulating these points may restore energy balance and alleviate pain. 40
| LIMITATIONS OF THE STUDY
This study has several limitations that may affect the generalizability of its findings. Pain is subjective, with psychological factors significantly influencing its perception and manifestation. Factors such as underlying mood disorders, emotional support, fatigue, and past trauma can affect pain outcomes, 41-44 introducing variability that the study may not fully account for. The reliance on the visual pain scale and self-reported data for pain assessment, without objective criteria, introduces potential bias due to individual variations in interpretation. The study's exclusive enrollment of women with standard deliveries limits the findings' applicability to a broader population experiencing various delivery methods.
Additionally, focusing on postpartum pain within the hospital setting without long-term follow-up restricts understanding of pain's persistence or evolution postdischarge. The absence of a control group without acupressure treatment limits the study's ability to make direct comparisons and attribute observed changes to the acupressure intervention. Furthermore, the impact of acupressure on the need for painkillers postintervention was not investigated, representing another research limitation.
| CONCLUSION AND RECOMMENDATIONS
The study indicates a statistically significant reduction in postpartum pain intensity across three time intervals (before, immediately after, and 1 h after the intervention) for both the Hugo and Spleen 6 acupressure groups. These findings suggest that acupressure at these points can effectively reduce postpartum pain intensity in women following natural childbirth. Healthcare providers, including midwives and gynecologists, are encouraged to consider acupressure on the Hugo and Spleen 6 points as part of their care protocol for managing postpartum pain.
TABLE 2: Comparison of pain intensity at three time points in the study groups. (a) Mann-Whitney U test; (b) Friedman test.
"year": 2024,
"sha1": "3a30356a98769d85811e9eaa02f1e0ac33801d1f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3a30356a98769d85811e9eaa02f1e0ac33801d1f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Development of a Cost-Effective Pediatric Intubation Task Trainer for Rural Medical Education
Pediatric intubation and airway management (PIAM) is a life-saving, emergent procedure that is performed by a variety of healthcare practitioners. Securing the pediatric airway in a time-sensitive fashion is a specialized skill that declines with lack of practice, leading to a precarious gap in clinical competency and healthcare delivery. However, current training models for PIAM, such as live animals, human cadavers, and simulators, are not adequately accessible or reliable due to their combination of high cost, unrealistic simulation, lack of standardization, and ethical concerns. Task trainers pose an ethically and fiscally sustainable training model for experiential learning through repetitive practice, which has been shown to dramatically improve trainee proficiency and confidence in performing high-acuity low-occurrence procedures such as pediatric intubation. This work aims to report the development process and initial validation evidence of a prototype cost-effective pediatric intubation task trainer that can be used for post-graduate education, especially in resource-challenged settings.
Introduction
Medical education is often described as drinking from a firehose, and rightly so: trainees acquire an onslaught of knowledge and skills in a very short period of time, with the expectation that they will be able to proficiently exercise these skills upon demand. However, research shows that much of this expertise fades with lack of practice, especially in regard to high-acuity low-occurrence (HALO) skills such as pediatric intubation and airway management (PIAM) [1]. Pediatric patients present with greater difficulty in airway management due to anatomical differences and varying pathological processes compared with their adult counterparts, for whom practitioners are trained [2]. This challenge is further exacerbated in practitioners who are not regularly exposed to manipulation of the pediatric airway (much less in the crisis situations in which intubation is often required), leading to marked deficits in psychomotor skill retention [1,3]. Inability to secure the pediatric airway in a time-sensitive fashion causes suboptimal success rates in this life-saving, emergent procedure [4].
Adequate training models for PIAM are in dire need, particularly in global and rural contexts, due to the high prevalence of scenarios that require airway management without specialized personnel such as anesthesiologists, emergency physicians, and respiratory therapists [5-7]. There are three general classifications of models used for teaching PIAM: live animals, human cadavers, and simulators. Attempts to use live animals or human cadavers have shown promise for experiential learning of pediatric intubation but pose ethical concerns, can be difficult to acquire, and are not available for long-term practice [8]. These limitations are further amplified in both global and rural contexts, where availability and expenditure are limited. The shortfalls of both live and cadaveric models comprise the strengths of simulators. Emerging evidence suggests that experiential learning with simulation models improves intubation performance in medical trainees, while being ethically sound and offering limitless opportunity for practice. Still, current simulation models offer only moderate realism and can be very expensive, lending to financial and geographical barriers to accessibility. The demand for a cost-effective pediatric intubation training model with reasonable realism prevails, and the solution may lie in developing three-dimensional (3D)-printed models, which hold promise for improving trainee confidence when paired with repetitive, problem-based experiential learning [9,10].
Task trainers exist within the realm of simulation models, but they attempt to train a particular task through repetition rather than simulate the reality of a clinical experience, as with a simulation model. As such, task trainers adopt a minimalist "partial-body" design in contrast to simulation models and are therefore significantly more cost-effective [11,12]. Simulation models have come to represent a luxury in medical education, but task trainers are equipped only with necessary features designed to train basic skills and improve trainee confidence, particularly with respect to HALO skills, which benefit exponentially from any practice due to the infrequency of exposure to real clinical scenarios [13]. Although task trainers do not facilitate an equivalent degree of realism as simulation models, such as real-time physiological feedback, they do offer potential to be used imaginatively with case-based learning scenarios, thus further widening their scope of training utility. Developing a 3D-printed task trainer has a far-reaching impact on both resource-rich and resource-challenged areas, which can benefit immensely from simulation-based health professional education [14]. Here, our objective is to develop a prototype PIAM task trainer to assess demand and critical features that are necessary for producing a high-quality model. Ultimately, the goal of the simulation community would be to develop a design for the task trainer that can be printed anywhere in the world for a miniscule fraction of the cost of a simulation model.
Technical Report
This technical report is organized into five sections: design considerations and education context, design protocol, development, evaluation methods, and evaluation results.
Design considerations and education context
The need for a PIAM task trainer stems primarily from the anatomical differences of neonatal and pediatric patients in comparison with adults, on whom much of medical education is centered. In fact, the Mallampati classification system, which is based on visibility of the glottis, fails to accurately predict the ease of intubation in pediatric patients due to these anatomical differences [15]. The larger head and prominent occiput of pediatric patients predispose them to airway obstruction, especially when asleep, and pose difficulty in correctly aligning the airway during PIAM. Pediatric patients also have a proportionally smaller mandible but a larger tongue, a shorter and narrower hypopharynx, a higher larynx and cricoid ring, and prominent adenoids and tonsils, resulting in insufficient upper airway space. Decreased tone of the upper airway muscles upon administration of anesthetic or sedative drugs introduces further challenges to securing the pediatric airway, and patients may be more susceptible to trauma of the vocal cords due to their obtuse alignment. Prior studies attest that the pediatric airway is funnel-shaped, with the narrowest portion most superior at the elliptical cricoid ring, whereas the adult airway is more cylindrical and narrowest at the level of the glottis. Physiological differences in oxygen consumption, residual lung capacity, and respiratory rate in pediatric patients reduce the time available to perform the procedure successfully. These differences are most pronounced at birth and gradually disappear as the child grows older [2].
The purpose of simulation-based medical education in pediatrics is to facilitate a "learning event with goals and objectives" that is not "a replacement for clinical experiences" but rather "a safe place to learn … without the fear of harming patients" [11]. In traditional clinically oriented medical education, the concept of "psychological safety" of learners is underemphasized yet crucial to establishing proficiency and confidence in essential skills [11]. Simulators act as a mimetic patient, often equipped with advanced technology to reproduce realistic physiological responses. As such, they often offer tremendous realism, but at high cost and with high maintenance demands. This makes simulators an investment to own, upkeep, and replace, especially when used to educate masses of students who require repetitive practice. Task trainers, on the other hand, are not designed to simulate a whole-body patient. Instead, their purpose is to offer a mode for training psychomotor skills for isolated tasks such as pediatric intubation and surgical airway management. They do not feature computerized controls or lifelike aesthetics but are engineered to provide the absolute essentials of a sufficiently realistic experience, such as accurately replicated anatomy and texture [11]. Thus, they are significantly cheaper to produce and maintain: the cost of owning and operating a 3D printer, plus the materials to produce a task trainer, would average less than half the cost of a simulation model [1,16].
In the case of a PIAM task trainer, realism would be directly associated with cost; therefore, a variable cost-effective task trainer could feasibly be developed to serve consumers across different economic circumstances. Essentially, we planned to develop a base-level trainer with upgradable options to increase realism or the flexibility of scenarios. This would increase the accessibility of the trainer to resource-challenged areas, in which trainees may opt to purchase the base-level trainer for initial practice and then upgrade over time if the fiscal opportunity arises. It would also allow for simple and inexpensive replacement of parts if needed. For example, the base-level trainer would comprise the essential structure of the model, composed of realistically textured materials to simulate the oral cavity, neck and jaw flexibility for head and neck tilt, and cricoid cartilage, leading into lung and stomach balloons to confirm correct tube placement. It would function to simulate the manual task of pediatric intubation with no physiological feedback or complications. The consumer may then wish to add replenishable reservoirs for artificial bodily fluids to mimic saliva or vomit, an inflatable tongue to simulate tongue edema or pharyngeal swelling, and an inflatable trachea to simulate variable airway resistance or lung compliance, among other potential add-ons. These optional upgrades, which are inspired by components of high-realism models presently on the Canadian market, would simulate pediatric intubation with positive feedback, thus introducing the trainee to a more difficult, multifaceted airway management scenario that more closely mimics reality [17]. With further technological developments, it may be possible to hybridize rudimentary task trainers with virtual reality simulators that overlay a paradoxically nontangible realism onto the procedural skill training experience, again at a fraction of the cost of whole-body simulators [11].
Design protocol
The three-phase process ("conceptualization, concept refinement, and implementation") used to develop our prototype was guided by expert opinion from a pediatric anesthesiologist [18]. We developed an initial prototype by extracting anatomical features from a computed tomography (CT) scan of a five-year-old patient with normal anatomy, custom components modeled in 3D computer-aided design software, and pre-existing models from open-source websites for review and practice. Opinions obtained from our first task trainer were used to inform the development of our second and final iteration, which was then further reviewed by a cohort of pediatric anesthesiologists to assess the trainer's face and content validity. Since we were working in a resource-challenged setting, texture and color accuracy of our materials were not possible for some components of the model. Anatomical accuracy of materials with respect to texture, color, flexibility, and movement was prioritized, and any areas for improvement were incorporated into the second iteration of our trainer. Our design process is visually represented in Figure 1.

Base board

Using Fusion 360 (AutoDesk, San Rafael, CA, USA), the base of the task trainer was designed to represent the top portion of the body that houses the airway. A guided slot was incorporated to allow for the head-tilt feature of the skull, and a rounded extrusion simulated the shoulder girdle, with an angled opening to guide the airway during the simulation.
We used the Ultimaker S5 3D printer (Ultimaker, Utrecht, the Netherlands) for printing and molds for casting. A dual-extrusion print with black polylactic acid (PLA) 2.85-mm filament was used for the base, with InnoSolve 2.85-mm polyvinyl alcohol (PVA) support filament (InnoSolve, Castle Rock, CO). A layer height of 0.2 mm, infill of 10%, support material infill of 15%, and print speed of 70 mm/s at a temperature of 205°C were used. The print instructions file was saved to a secure USB drive to upload to the 3D printer.
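Since nearly the same slicer settings recur, with small variations, across the parts described below, it can help to capture them once as a profile. A sketch using descriptive keys of our own choosing (these are not CuraEngine parameter names):

```python
# The recurring print settings, captured as a plain profile for
# reproducibility. Keys are descriptive labels chosen here, not
# CuraEngine setting names.
FDM_PROFILE = {
    "layer_height_mm": 0.2,
    "infill_percent": 10,          # 20% was used for the jaw and mouth mold
    "support_infill_percent": 15,
    "print_speed_mm_per_s": 70,
    "nozzle_temperature_c": 205,
    "model_filament": "PLA 2.85 mm",
    "support_filament": "PVA 2.85 mm",
}

def describe(profile: dict) -> str:
    """Render a profile as a one-line summary for print logs."""
    return ", ".join(f"{k}={v}" for k, v in profile.items())

print(describe(FDM_PROFILE))
```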
Upon review from our clinical advisor, the primary recommendation was to remove the thoracic cavity casing as it was interfering with head-tilt and jaw movement. We also moved the thoracic cavity closer to the guided head slot and angled the airway bed slightly upward to facilitate the airway in its proper axis during intubation since the positioning in the first iteration limited the physiological range of motion. Furthermore, since the pediatric occiput offers resistance upon tilting during PIAM, we also made the occiput attachment loops slightly thicker to prevent the loops from breaking when tilting the head, ensuring longevity of the task trainer. The base board is depicted in Figure 2.
Occiput
The CT scan captured anatomy from the eye level of the patient to the mid-chest level; therefore, complete extraction of the skull was not possible. As the superior occiput does not serve a vital function in this simulation, a pre-existing model of an adult skull from an open-source royalty-free website (thingiverse.com) was downloaded and rendered using a combination of MeshMixer (AutoDesk) and Fusion 360. The occiput was scaled down appropriately using measurements taken from the CT scan in OsiriX MD (Pixmeo SARL, Bernex, Switzerland). For the purposes of the trainer, an attachment for the bottom jaw was added in Fusion 360, consisting of two pins that extended at the attachment site and a ridge that would prevent the jaw from opening wider than anatomically possible. To simulate the head tilt that positions the child's occiput correctly for the procedure, a curved guide that could glide on the base of the task trainer was attached to the occipital protuberance. Using Fusion 360, the model was then converted to an STL (STereoLithography) file for 3D printing purposes.
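For readers who prefer to script the rescaling step, an isotropic rescale can be reproduced with a mesh library. A sketch assuming the trimesh package, with placeholder measurements and file names (the study performed the scaling interactively in MeshMixer and Fusion 360, not with this code):

```python
import trimesh

# Placeholder measurements; in the study, pediatric dimensions were taken
# from the CT scan in OsiriX MD rather than hard-coded.
ADULT_REFERENCE_MM = 140.0   # e.g., a reference width on the adult model
CHILD_REFERENCE_MM = 118.0   # hypothetical five-year-old measurement

skull = trimesh.load("adult_skull.stl")                      # open-source adult model
skull.apply_scale(CHILD_REFERENCE_MM / ADULT_REFERENCE_MM)   # isotropic rescale
skull.export("occiput_scaled.stl")
```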
The Ultimaker 3 3D printer was used to develop the occiput with a dual extrusion print. The body of the skull was printed using PLA filament material in ivory color to more closely resemble bone, with PVA acting as support material. A total of 338 g of PLA filament and 177 g of PVA filament were used, with a total print time of 2 days 6 hours and 26 minutes. The recommended settings were adjusted to a layer height of 0.2 mm, infill of 10%, support material infill of 15%, and print speed of 70 mm/s at a temperature of 205°C.
Upon review from our clinical advisor, the skull was deemed appropriate and did not require any adjustments. The occiput is depicted in Figure 3.
FIGURE 3: Occiput
Jaw

The bottom jaw model was extracted from the provided CT scan with the use of OsiriX MD. OsiriX MD sliced the CT scan into 0.8-mm layers that could be easily viewed to identify the different anatomical features of interest. With 3D volume rendering and two-dimensional (2D) surface rendering, the jaw structure was extracted from the scan and exported as an STL file. The jaw was inserted into Fusion 360 as a mesh and altered to provide attachment sites to the upper jaw and mouth cavity. This file was then imported into MeshMixer to refine and smooth the model using a variety of sculpting tools. The final design was exported as an STL for 3D printing.
As with the occiput, the Ultimaker 3 3D printer was used to print and cast the jaw in a dual-extrusion print. The jaw was constructed similarly to the occiput, with ivory-colored PLA filament and PVA filament as support. A total of 25 g of PLA filament and 10 g of PVA filament were used, with a total print time of 3 hours 52 minutes. The following recommended settings were used: a layer height of 0.2 mm, infill of 20%, support material infill of 15%, and print speed of 70 mm/s at a temperature of 205°C.
Upon review from our clinical advisor, we added angular restrictions on the attachment pegs to the jaw to prevent the mouth from opening further than anatomically possible. This restriction helps to facilitate what may be deemed one of the most important features of PIAM: the smaller oral cavity, which makes maneuvering and visualization more difficult in pediatric patients than in their adult counterparts. The jaw along with the oral cavity is depicted in Figure 4.
Oral Cavity
The oral cavity is cast in silicone, for which a 3D-printed mold was required. The oral cavity was designed in Fusion 360 with reference to the size and shape of the upper and lower jaw. The bottom was thickened to simulate the tongue, and the top formed the roof of the child's mouth. An opening for insertion of the airway was formed at the back of the mouth. To cast the feature, a three-part mold was modeled in Fusion 360. The model of the mouth previously created was imported into the design and combined into the mold in such a way that the element was cut away from the mold, leaving the negative of the oral cavity imprinted in the mold, ready to be filled with silicone. The oral cavity along with the lower jaw is depicted in Figure 4.
The Ultimaker 3 3D printer was used for the mouth cavity as well, with a dual-extrusion print. The mold of the mouth was printed in PLA, with PVA as support. A total of 101 g of PLA filament and 8 g of PVA filament were used, with a total print time of 8 hours 34 minutes. The settings used were as follows: layer height of 0.2 mm, infill density of 20%, support material infill of 15%, and print speed of 70 mm/s at a temperature of 205°C. Using the mold, the oral cavity was cast in silicone. The silicone mix used was Ecoflex 00-30 (Smooth-On, East Texas, PA, USA) platinum-cure silicone rubber, combining parts A and B in a 1:1 ratio (i.e., 50 mL of each) for a total of 100 mL. Parts A and B were combined by mixing, and a small amount of red pigment was added for an element of realism in the mouth. The mixture was then poured into the assembled three-part mold and left to cure for a total of 4 hours.
Upon review from our clinical advisor, we increased the thickness of the bottom of the mouth cavity to simulate the tongue. In the first iteration, we had aspired to create a hollow inflatable tongue to simulate tongue edema, which may occur in PIAM scenarios. However, the inflatable tongue posed various limitations and reduced overall realism; therefore, it was ultimately removed from the second iteration of our prototype. This modification allows the tongue and mouth cavity to be printed as a single piece, eliminating the need for attachment. The first iteration also included removable teeth to simulate dental injury during intubation, which may be more common in pediatric scenarios due to the smaller oral cavity. However, these removable teeth would fall out too easily, and the underlying pegs were highly vulnerable to damage. To simplify the task trainer and reduce the need for extra printing and replacement of parts, we decided to replace the removable teeth with stationary teeth. It was reasoned that trainees would still be made aware of the possibility of dental injury through tactile contact with stationary teeth.
Airway
The airway and esophagus were extracted solely from the CT scan. Using OsiriX MD, a new region of interest was created to localize the airway-esophagus unit, and then 3D surface rendering was used to remove undesired portions of the scan. The 2D surface rendering tool was then used to further isolate the airway and export the model as an STL file. This file was imported into MeshMixer to eliminate any remaining undesired components, and the model was further refined using MeshMixer's sculpting tools.
The Stratasys Connex3 3D printer (Stratasys, Eden Prairie, MN, USA) was used for the airway to achieve greater precision. The model was opened using Object3D, where the desired placement and validation were performed. The airway was printed in the flexible Tango Black+ resin (Stratasys) using 146 mL of model material and 197 mL of support material. Ideally, this print would have been done in a more anatomically correct pink-red color rather than the resultant dark grey. The total print time was 7 hours 26 minutes.
Upon review from our clinical advisor, we scaled the airway-esophagus unit up by a factor of 1.3 for two reasons: first, the original airway was smaller than average for a five-year-old patient; second, lubrication is typically used in intubation, and soft muscle tissue is more elastic, which allows the endotracheal tube to be inserted with less resistance. Since our task trainer is not compatible with lubrication media due to its silicone-based construction, we decided to scale the airway up by a small factor to compensate. We also printed this unit in a more flexible material in the second iteration, since the first iteration resulted in an unrealistically firm airway that cracked while practicing intubation. We considered casting the airway in firm silicone, as with the tongue, in which case a red or pink airway would have been possible to create. However, the CT scan anatomy was too intricate to replicate in a mold; therefore, a silicone model was not feasible for this iteration. Both the airway and the esophagus terminate in color-coded balloons: inflation of the pink balloon after intubation indicates correct placement in the trachea and subsequent ventilation of the lungs, whereas inflation of the blue balloon indicates misplacement in the esophagus and subsequent insufflation of the stomach. The airway is depicted in Figure 5.
FIGURE 5: Airway
The second and final iteration of the task trainer is depicted in Figure 6.
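To make the cost-effectiveness argument concrete, the consumable quantities reported above can be rolled into a back-of-the-envelope estimate. All unit prices below are assumptions for illustration only; actual prices vary widely by supplier and region, and the base's filament mass was not reported:

```python
# Rough material-cost estimate for one trainer, using the consumable
# quantities reported in this report. Unit prices are assumed values.
PLA_USD_PER_G = 0.03         # assumed commodity PLA filament price
PVA_USD_PER_G = 0.09         # assumed soluble PVA support price
RESIN_USD_PER_ML = 0.30      # assumed flexible photopolymer resin price
SILICONE_USD_PER_ML = 0.04   # assumed Ecoflex 00-30 price

fdm_parts_g = {              # (PLA g, PVA g); base filament mass not reported
    "occiput": (338, 177),
    "jaw": (25, 10),
    "mouth_mold": (101, 8),
}
fdm = sum(p * PLA_USD_PER_G + v * PVA_USD_PER_G for p, v in fdm_parts_g.values())
airway = (146 + 197) * RESIN_USD_PER_ML   # model + support resin, in mL
silicone = 100 * SILICONE_USD_PER_ML      # oral cavity casting, in mL

print(f"FDM parts ~${fdm:.0f}, airway ~${airway:.0f}, silicone ~${silicone:.0f}")
```

Under these assumed prices, the flexible resin for the airway dominates the bill of materials, which is consistent with the earlier consideration of casting the airway in silicone instead.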
Evaluation methods
The task trainer was evaluated by five pediatric anesthesiologists with 4 to 20 years of experience who regularly perform pediatric intubation in a variety of settings, including the operating room, emergency room, and intensive care unit. Their opinions were gathered through a semi-structured interview with guided questions to provide evidence for face and content validity, in other words, the realism and efficacy of the task trainer as a tool for medical education. Participants were given a laryngoscope, endotracheal tube, stylet, and bag valve mask during their interaction with the task trainer. After their interaction with the model, the following metrics were evaluated: physical attributes, realism of experience, and efficacy of the task trainer as a tool for medical education, as well as general comments and suggestions for improvement. The answers were audio-recorded and transcribed offline by a single researcher.
Evaluation results
The interview findings are paraphrased below with specific quotes from participants.
All participants unanimously emphasized the potential of our PIAM task trainer as an innovative tool to fill the dire gap in PIAM training. The "customizability" of the task trainer and its potential for application in resource-challenged settings were acclaimed as its strongest features. One participant commented: "the airway material is more pliable than other task trainers, which is a positive feature. Although a new pediatric intubation trainer is definitely necessary, improvements in this model are required". Many other participants shared similar feedback: despite the task trainer being "a positive step", there were several areas for improvement, and the model was ultimately deemed in need of significant refinement prior to consideration for practical use in medical education. This feedback was consistent across the participants' comments.
Tissue color for the airway structures was universally cited as the main disadvantage of the trainer: the black color made illumination with the laryngoscope difficult due to light absorption, such that vital landmarks in intubation, including the glottic structures and vocal cords, could not be clearly identified. Nearly all participants commented on color, with one participant saying: "The color of the airway must be changed to flesh color. The black material absorbs the light". Access to the vallecula and the interface between the oral cavity and pharynx were criticized as unrealistic due to "rigidity of the airway material". All articulations in the model were considered to be in need of reinforcement, particularly the skull-base board and oral cavity-pharynx interfaces, as these would "become easily disconnected, causing the model to collapse".
The universal comments on color and articulations point to their paramount importance in facilitating a realistic simulation experience. Although the model enjoyed an optimistic outlook alongside its criticisms, one evaluator identified a danger in such task trainers that is critical to note in the development of future simulation models. Their concern was that very few trainers are truly representative of anatomy: "by reproducing unrealistic scenarios, and having junior learners succeed, we may be giving them a false sense of security for the real-life event". Such comments went beyond the scope of the interview, which was to address the realism and educational value of the task trainer, but are nonetheless important in ensuring a holistic and contextual understanding of simulation and medical education [19].
Discussion
Use of 3D printing for medical education is a relatively new phenomenon rapidly gaining attention, and pediatric intubation occupies a small niche in the world of simulation model development. We sought to reduce barriers in PIAM training by gauging interest in a cost-effective, ethically sound 3D-printed task trainer. The primary objective of this rudimentary task trainer is accessibility for trainees around the world, even in resource-challenged areas. As this task trainer is the first of its kind in PIAM, our design and development process anticipated a rudimentary prototype model that would assess demand and interest from experienced professionals.
Perhaps the most imperative criticism we received, not only for our task trainer but for all simulation models, is the necessity for unparalleled realism. Realism cannot be compromised if the task trainer is to be an effective learning tool, even when building a task trainer that attempts to offer more value, dollar-for-dollar, than simulators. Numerous studies have demonstrated the efficacy of task trainers in improving clinical proficiency by providing a tool for experiential learning; still, pediatric simulators and task trainers have yet to break into the mainstream market [11]. This may be due to the complexity of the anatomy and the onus on the practitioner to perform the procedure with minimal, if any, error during real PIAM scenarios. Indeed, the procedure is considered a HALO skill with a minimal margin of error. It is important to develop an adequate task trainer for PIAM; after all, "children are not merely small adults" and the management of pediatric scenarios requires a specialized skill set that can only be acquired with repetitive, procedural skill training [2].
Some avenues for improvement, besides working with anatomically correct materials and colors, include collecting additional CT scans to compile a more anatomically accurate representation of the pediatric airway for 3D modelling. MRI of the pediatric airway, although uncommon, would be beneficial for elucidating the density and texture of the vital soft tissue structures that are abundant in the airway. With enhancements in material color, texture, and overall structural robustness, a truly efficacious task trainer is within the realm of possibility for 3D-printed simulation modelling. It is in this capacity that task trainers demonstrate their full potential as an equalizing force in the healthcare arena: by opening access to even the most resource-challenged communities, they can help practitioners all over the world become proficient in versatile, life-saving procedures such as PIAM.
Conclusions
We built a relatively inexpensive and standardized PIAM task trainer, primarily for use in resource-challenged settings. Our prototype demonstrates initial agreement between experts that the task trainer is necessary given the lack of simulation models for PIAM; however, improvements in realism are required before implementing the task trainer for medical education purposes. To be truly effective, this task trainer and similar simulators need to be implemented in conjunction with effective scenarios, with the execution of each scenario supported by educational theory; finally, the task trainer must fit the needs of its trainees. Future work will focus on refining the model and developing case scenarios for PIAM that can simulate the entirety of the clinical PIAM experience.
Additional Information

Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: This work was supported by the Atlantic Canada Opportunities Agency Business Development Project (infrastructure) and the Canada Research Chair in Health Care Simulation (human resources) awarded to Adam Dubrowski. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
"year": 2020,
"sha1": "04344ebdf27ee12fc0716bf45d2b50e781aa31df",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/26418-development-of-a-cost-effective-pediatric-intubation-task-trainer-for-rural-medical-education.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e568b2d3d0a0b31cf22fc40aff943750997dd20b",
"s2fieldsofstudy": [
"Medicine",
"Education",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Feasibility of Using the Video-Head Impulse Test to Detect the Involved Canal in Benign Paroxysmal Positional Vertigo Presenting With Positional Downbeat Nystagmus
Positional downbeat nystagmus (pDBN) represents a relatively frequent finding. Its possible peripheral origin has been widely ascertained. Nevertheless, distinguishing features of peripheral positional nystagmus, including latency, paroxysm and torsional components, may be missing, resulting in challenging differential diagnosis with central pDBN. Moreover, in case of benign paroxysmal positional vertigo (BPPV), detection of the affected canal may be challenging as involvement of the non-ampullary arm of posterior semicircular canal (PSC) results in the same oculomotor responses generated by contralateral anterior canal (ASC)-canalolithiasis. Recent acquisitions suggest that patients with persistent pDBN due to vertical canal-BPPV may exhibit impaired vestibulo-ocular reflex (VOR) for the involved canal on video-head impulse test (vHIT). Since canal hypofunction normalizes following proper canalith repositioning procedures (CRP), an incomplete canalith jam acting as a “low-pass filter” for the affected ampullary receptor has been hypothesized. This study aims to determine the sensitivity of vHIT in detecting canal involvement in patients presenting with pDBN due to vertical canal-BPPV. We retrospectively reviewed the clinical records of 59 consecutive subjects presenting with peripheral pDBN. All patients were tested with video-Frenzel examination and vHIT at presentation and after resolution of symptoms or transformation in typical BPPV-variant. BPPV involving non-ampullary tract of PSC was diagnosed in 78%, ASC-BPPV in 11.9% whereas in 6 cases the involved canal remained unidentified. Presenting VOR-gain values for the affected canal were greatly impaired in cases with persistent pDBN compared to subjects with paroxysmal/transitory nystagmus (p < 0.001). Each patient received CRP for BPPV involving the hypoactive canal or, in case of normal VOR-gain, the assumed affected canal. Each subject exhibiting VOR-gain reduction for the involved canal developed normalization of vHIT data after proper repositioning (p < 0.001), proving a close relationship with otoliths altering high-frequency cupular responses. According to our results, overall vHIT sensitivity in detecting the affected SC was 72.9%, increasing up to 88.6% when considering only cases with persistent pDBN where an incomplete canal plug is more likely to occur. vHIT should be routinely used in patients with pDBN as it may enable to localize otoconia within the labyrinth, providing further insights to the pathophysiology of peripheral pDBN.
INTRODUCTION
Positional downbeat nystagmus (pDBN) represents one of the most common findings related to central nervous system (CNS) disorders involving the brainstem and cerebellum. As the main function of the central vestibular system is to estimate angular velocity, gravity orientation, and inertia by processing peripheral vestibular afferents within the velocity-storage circuit, any lesion disrupting this network can generate pDBN (1, 2). Though central pDBN may also present with a paroxysmal course, purely vertical direction, long duration, lack of latency and fatigability, and no suppression with visual fixation represent the most prominent features of pDBN of central origin (3-7). Nevertheless, it has been widely demonstrated how pDBN may not rarely occur also in peripheral pathologies (1-3, 8). It can be elicited when the patient is brought into the straight head hanging (SHH) position and/or by Dix Hallpike (DH) maneuvers, and it has been mainly related to benign paroxysmal positional vertigo (BPPV) involving the anterior semicircular canal (ASC) (3, 9-15). Although detached otoconia moving inside unusual sites of the labyrinth represent the assumed underlying mechanism, peripheral pDBN patterns show features classically regarded as central, such as lack of torsional components and a long time constant (9, 11, 15). More recently, it has been hypothesized that even otoliths settling in the distal portion of the non-ampullary tract of the posterior semicircular canal (PSC) may result in pDBN (16-27). This type of PSC-BPPV has been named the "apogeotropic variant" (18, 20), as nystagmus evoked in provoking positions beats away from the ground, in the opposite direction to the positional paroxysmal upbeat nystagmus (beating toward the ground in DH positioning, therefore geotropic) due to classical BPPV involving the PSC ampullary arm. The Demi-Semont (DS) maneuver, the 45°-forced prolonged position (FPP) and the quick liberatory rotation represent physical treatments proposed for this PSC-BPPV variant, with the aim of moving displaced particles back to the vestibule (19, 20). Nevertheless, it is not rarely hard to identify the affected semicircular canal (SC) due to the possibly missing torsional components of pDBN (9, 11, 15, 19, 20). Additionally, ASC-BPPV is generally hardly distinguishable from the contralateral apogeotropic variant of PSC-BPPV, as in both cases the resulting pDBN is generated by the contraction of the same ocular muscles (18, 28). Thanks to the introduction of the video-head impulse test (vHIT) in clinical practice, high-frequency VOR measurements for the semicircular canals can be easily assessed (29, 30). This new clinical device has been widely used to measure SC function in both peripheral and central vestibular disorders (31-36). Recently, it has been suggested that the vestibulo-ocular reflex (VOR) for the affected SC may be impaired in BPPV resulting in pDBN, providing possible key data for differential diagnosis (37).

Abbreviations: ASC, anterior semicircular canal; BPPV, benign paroxysmal positional vertigo; CRP, canalith repositioning procedure; CNS, central nervous system; DH, Dix Hallpike; DS, Demi-Semont; FPP, forced prolonged position; HSC, horizontal semicircular canal; pDBN, positional downbeat nystagmus; PSC, posterior semicircular canal; SC, semicircular canal; SHH, straight head hanging; VEMPs, vestibular-evoked myogenic potentials; vHIT, video-head impulse test; VOG, video-oculography; VOR, vestibulo-ocular reflex.
To further investigate this claim, we submitted a homogeneous cohort of patients with pDBN due to vertical SC-BPPV to statistical analysis and assessed the diagnostic sensitivity of vHIT in detecting the SC involved by BPPV among cases presenting with pDBN. Reviewing our results, we also aimed to offer possible explanations for VOR-gain abnormalities for the affected SC in such cases, providing better insights into the pathophysiology of peripheral pDBN.
Patients
This study was approved by our Institutional Review Boards (approval number for the promoter center: 236/2020/OSS/AUSLRE) and was conducted according to the tenets of the Declaration of Helsinki. We performed a retrospective review of the clinical-instrumental data of a cohort of 93 patients presenting with pDBN who were evaluated at our centers between June 2019 and May 2020. Overall, subjects were admitted either to the outpatient units or to the emergency units. In order to select only patients with peripheral pDBN due to vertical SC-BPPV, subjects exhibiting central oculomotor signs (gaze-evoked nystagmus, rebound nystagmus, pDBN not reduced or enhanced by visual fixation) or abnormal findings on gadolinium-enhanced magnetic resonance imaging (MRI) were excluded from the analysis. Likewise, patients with a past history of vestibular pathologies potentially resulting in pDBN or in possible VOR-gain abnormalities for the vertical SCs [i.e., Meniere's disease (38), vestibular migraine (39), inferior vestibular neuritis (40), sudden sensorineural hearing loss with vertigo (33, 41), canal dehiscences (42)] were excluded. Therefore, only patients with pDBN receding or converting into a typical BPPV positional nystagmus following proper canalith repositioning procedures (CRP) were considered. Among the authors, AC, PM, SM, SQ, ER, and EA (all neurotologists) were directly involved in the analysis of pDBN features and in data collection. Patients without complete clinical data, including at least pre- and post-treatment measurements of all six SC VOR-gains on vHIT, were not included in the study. Finally, a residual homogeneous population of 54 patients was recruited for statistical analysis. All patients underwent the same detailed work-up, including history taking and bedside examination with the aid of video-Frenzel goggles or video-oculography (VOG). Each patient underwent a comprehensive assessment of all SC VOR-gains on vHIT before and after physical treatment, and only a few of them were submitted to VEMPs at different stages of BPPV. Gadolinium-enhanced MRI and/or temporal bone high-resolution CT (HRCT) scans were performed if needed. Besides personal details, patients were asked whether recent head trauma had occurred. They were also investigated for a history of BPPV with paroxysmal positional nystagmus documented by video-Frenzel goggles within 30 days prior to examination. Additionally, patients were divided into subgroups according both to the time elapsed between symptom onset and clinical assessment (<7 and >7 days) and to the days needed for pDBN either to recede or to convert into typical positional nystagmus due to typical ipsilateral canalolithiasis (<7 and >7 days).
Detection of the Vertical Canal Affected by BPPV
Any of the following strategies were used for the identification of the SC involved by BPPV:

• Detection of the SC with impaired VOR-gain values on vHIT, with either covert or overt saccades.
• In case of pDBN with torsional components, recent history of BPPV with paroxysmal positional nystagmus documented with video-Frenzel goggles addressed the diagnosis toward a specific SC (i.e., a recent left PSC-BPPV addressed the diagnosis toward ipsilateral ASC-BPPV in a patient presenting with pDBN with leftbeating torsional nystagmus, whereas rightbeating components would reasonably indicate ipsilateral apogeotropic PSC-BPPV).
In cases lacking the above-mentioned findings, detection of the affected canal could only be provided after physical treatment, based on the following findings (a schematic sketch of this decision flow is given after the list):

• Resolution of pDBN after proper CRP designed to release a specific SC from debris, as therapeutic maneuvers, though effective in moving debris, would not be expected to resolve BPPV affecting other SCs.
• Conversion of pDBN into classical paroxysmal positional nystagmus involving either the ipsilateral PSC (upbeating/torsional nystagmus on ipsilateral DH positioning) or the horizontal SC (HSC) (either geotropic or apogeotropic horizontal direction-changing nystagmus at the supine head roll test) after any CRP performed, consistent with otoconial switch from the affected canal either to other ipsilateral SCs or to another tract of the same affected SC (i.e., conversion of pDBN with leftbeating torsional components into paroxysmal upbeating nystagmus with rightbeating torsional components elicited on right positioning, consistent with debris shifting into the right PSC ampullary arm, addressed the original diagnosis toward ipsilateral BPPV involving the PSC non-ampullary arm rather than contralateral ASC-BPPV).
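The localization logic above can be distilled into a schematic decision flow. The sketch below is illustrative only: argument names, string labels, and ordering are ours, not an interface from the study, and the real decision rests on clinical judgment:

```python
def localize_vertical_canal(hypoactive_canal=None, torsion_beat=None,
                            recent_psc_bppv_side=None, post_crp_outcome=None):
    """Schematic distillation of the localization criteria described above.

    hypoactive_canal: canal with impaired vHIT gain, e.g. "right ASC"
    torsion_beat: side of the torsional component, "left" or "right"
    recent_psc_bppv_side: side of documented recent PSC-BPPV, if any
    post_crp_outcome: canal revealed by resolution or canal switch after CRP
    """
    if hypoactive_canal:                       # impaired vHIT gain with saccades
        return hypoactive_canal
    if torsion_beat and recent_psc_bppv_side:
        if torsion_beat == recent_psc_bppv_side:
            # e.g. recent left PSC-BPPV + leftbeating torsion -> left ASC
            return f"{recent_psc_bppv_side} ASC"
        # e.g. rightbeating torsion after left PSC-BPPV
        #      -> left apogeotropic PSC
        return f"{recent_psc_bppv_side} apogeotropic PSC"
    if post_crp_outcome:                       # resolution or canal switch
        return post_crp_outcome
    return "undetermined (treat and re-observe)"
```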
Physical Treatment
All patients underwent specific physical therapy aimed at moving debris back to the utricle from the assumed affected vertical SC. In cases with BPPV involving the non-ampullary tract of the PSC, the DS maneuver was mainly used, followed by the 45°-FPP in case of persistence of symptoms following DS (20). Whereas the DS maneuver mainly exploits inertial force to free the affected SC from otoconia, as it basically represents the second part of the well-known Semont's liberatory maneuver (43), the 45°-FPP technique uses gravity to move particles toward the utricle, as the affected PSC lies in the uppermost part of the labyrinth in this position (20). Standard Epley's CRP (44) or Semont's maneuver were rarely used as the first therapeutic choice, mainly depending on the examiner's preferences or the patient's compliance.
In cases with ASC involvement, patients were mostly treated with Yacovino's technique (45), followed by prolonged forced position procedure (PFPP) (46) in subjects not exhibiting immediate recovery.
In cases where the affected SC could not be ascertained due to the lack of the aforementioned findings, several CRP were pursued, according to the examiner's experience, to obtain a canal switch, to move otoconia toward another tract of the involved SC, or to directly free the affected canal.
Each subject was checked within 3-4 days. In case of persistence of pDBN, additional CRP were pursued, with a further check within 3-4 days, and so on until a complete recovery or a canal switch was achieved. The outcome of physical therapy was considered successful either if patients were free from symptoms and signs or if they exhibited a conversion into a typical form of BPPV. In case debris moved to the ampullary tract of the PSC, Epley's or Semont's maneuvers were performed according to the examiner's preference or the patient's compliance, whereas proper CRP for the geotropic and apogeotropic variants of HSC-BPPV were used (47,48) in case debris moved either to the non-ampullary or to the ampullary arm of the HSC, respectively. All patients were finally checked within a further 3-4 days to ensure complete recovery.
Eye Movements Recording
Eye movements were analyzed with video-Frenzel goggles or video-oculography (VOG). Horizontal, vertical and torsional nystagmus were qualitatively assessed. Horizontal (right/leftbeating) and vertical (up/downbeating) directions of nystagmus and torsional components (right/leftbeating, i.e., with the upper pole of the eye rotating toward the right/left ear, respectively) were described from the patient's point of view. Bedside examination included assessment of spontaneous nystagmus and of positional nystagmus evoked by both DH and SHH positionings. After evaluation of any spontaneous DBN (purely vertical, with or without torsional components), positional nystagmus was checked for latency (with/without), direction (purely vertical or with torsional components), inhibition with visual fixation (yes/no), duration and temporal trend (either transitory/paroxysmal, accompanied by a crescendo-decrescendo pattern, if <2 min, or persistent, with a nearly stationary course, if >2 min) and reversal when returning upright following positionings (with/without).
vHIT
Vestibulo-ocular reflex (VOR) gains for all three SCs were tested on both sides in response to high-frequency head stimuli on vHIT, an ICS video-oculographic system (GN Otometrics, Denmark). At least 15 impulses were delivered to stimulate each SC and averaged to obtain the corresponding mean VOR-gains. Vertical SCs were considered hypoactive if VOR-gains were <0.7 with at least either covert or overt saccades (29,30). All patients underwent vHIT testing at presentation and following CRP, whether these succeeded in releasing the SC or resulted in a conversion into a typical BPPV variant (i.e., as soon as pDBN either receded or converted into positional paroxysmal nystagmus).
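As a concrete illustration of the vHIT criterion just described (mean gain over at least 15 impulses, hypoactivity below 0.7 in the presence of refixation saccades), a minimal Python sketch follows; the function name and gain values are assumptions for illustration only.

```python
# Minimal sketch of the study's vHIT hypoactivity criterion; illustrative only.
from statistics import mean

GAIN_CUTOFF = 0.7   # threshold used for vertical canals in this study
MIN_IMPULSES = 15   # minimum head impulses averaged per canal

def canal_is_hypoactive(impulse_gains, has_saccades):
    """impulse_gains: per-impulse VOR gains recorded for one canal."""
    if len(impulse_gains) < MIN_IMPULSES:
        raise ValueError("need at least 15 impulses to average")
    return mean(impulse_gains) < GAIN_CUTOFF and has_saccades

# Example: a canal tested with 15 impulses averaging ~0.55 with covert saccades
gains = [0.52, 0.58, 0.55, 0.60, 0.50, 0.57, 0.54, 0.53,
         0.56, 0.59, 0.51, 0.55, 0.58, 0.54, 0.56]
print(canal_is_hypoactive(gains, has_saccades=True))  # True
```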
VEMPs Testing
Cervical and ocular vestibular-evoked myogenic potentials (cVEMPs and oVEMPs, respectively) for air-conducted sounds were recorded using 2-channel evoked potential acquisition systems (either Neuro-Audio, Neurosoft, Russia or Viking, Nicolet EDX, CareFusion, Germany, depending on the center) with surface electrodes placed according to standardized criteria (49). Potentials were recorded delivering tone bursts (frequency: 500 Hz, duration: 8 ms, stimulation rate: 5 Hz) via headphones either before or following CRP. The recording system used an EMG-based biofeedback monitoring method to minimize variations in muscle contraction and VEMP amplitudes. A re-test was performed for each stimulus to assess reproducibility. The first biphasic responses on the ipsilateral sternocleidomastoid muscle (p13-n23) for cVEMPs (ipsilateral response) and under the patient's contralateral eye (n10-p15) for oVEMPs (crossed response) were analyzed by calculating the peak-to-peak amplitude. The inter-aural amplitude difference between the ears affected (Aa) and unaffected (Au) by BPPV was calculated as the asymmetry ratio (AR): [(Au - Aa)/(Au + Aa)] × 100. Otolith sensors on the pathologic side were considered damaged if potentials resulted in an AR >35%, according to our normative data and to literature references (49).
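The asymmetry-ratio formula above maps directly to a one-line computation. The following Python sketch, with hypothetical amplitude values, illustrates it together with the 35% cutoff.

```python
# Sketch of the VEMP asymmetry-ratio computation defined above:
# AR = (Au - Aa) / (Au + Aa) * 100, with AR > 35% read as otolith damage
# on the pathologic side. Amplitude values are illustrative.

def asymmetry_ratio(a_unaffected, a_affected):
    """Peak-to-peak amplitudes of the unaffected (Au) and affected (Aa) ears."""
    return (a_unaffected - a_affected) / (a_unaffected + a_affected) * 100.0

ar = asymmetry_ratio(a_unaffected=120.0, a_affected=50.0)
print(f"AR = {ar:.1f}% -> damaged: {ar > 35}")  # AR = 41.2% -> damaged: True
```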
Statistical Analysis
Quantitative variables were checked for normal distribution using both the Kolmogorov-Smirnov and the Shapiro-Wilk tests. Continuous variables were described by mean ± 1 standard deviation for normally distributed variables, or by median, interquartile range and range for non-normally distributed variables. The diagnostic sensitivity of vHIT in detecting the involved SC in BPPV with pDBN was calculated as the ratio of cases with a hypoactive SC to overall patients. Likewise, the diagnostic sensitivity of vHIT for persistent pDBN was calculated as the ratio of cases with persistent pDBN exhibiting a deficient SC to overall cases with persistent pDBN, whereas vHIT sensitivity for transitory/paroxysmal pDBN was derived by dividing the number of cases with transitory/paroxysmal pDBN presenting with a hypoactive SC by the overall number of cases exhibiting transitory/paroxysmal pDBN. Fisher's exact test was used for categorical comparisons. Spearman's rank correlation was used to correlate patients' age with SC VOR-gains. The Wilcoxon signed-rank test was used to compare pre- and post-treatment vHIT data for all six SCs. The Mann-Whitney U-test was employed for pairwise comparisons between subgroups. Results were considered statistically significant if p < 0.05. Statistical analyses were performed using IBM SPSS ver. 20.0 (IBM Corp., Armonk, NY, USA).
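For readers who wish to reproduce this pipeline outside SPSS, the same tests are available in SciPy. The sketch below uses placeholder arrays, not study data, and simply mirrors the tests listed above.

```python
# Hedged sketch of the statistical pipeline described above, using SciPy.
import numpy as np
from scipy import stats

pre  = np.array([0.55, 0.62, 0.48, 0.70, 0.58, 0.65, 0.52, 0.60])  # pre-CRP gains
post = np.array([0.85, 0.90, 0.78, 0.88, 0.92, 0.86, 0.80, 0.89])  # post-CRP gains

# Normality checks (KS test against a fitted normal, and Shapiro-Wilk)
print(stats.kstest(pre, 'norm', args=(pre.mean(), pre.std(ddof=1))))
print(stats.shapiro(pre))

# Paired pre- vs. post-treatment comparison (Wilcoxon signed-rank test)
print(stats.wilcoxon(pre, post))

# Unpaired subgroup comparison (Mann-Whitney U-test)
group_a = np.array([0.45, 0.50, 0.48, 0.52])
group_b = np.array([0.66, 0.70, 0.68, 0.72])
print(stats.mannwhitneyu(group_a, group_b))

# Diagnostic sensitivity as a simple ratio, e.g., hypoactive SC in 43 of 59 cases
print(f"sensitivity = {43 / 59:.1%}")  # ~72.9%
```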
RESULTS
Fifty-four patients (20 males, 34 females; mean age 55.9 ± 13.8 years) with pDBN due to vertical SC-BPPV were included in the study. Recurrence of pDBN due to BPPV involving the same SC was recorded in one case, which was considered twice in the analysis. Similarly, four patients (1 male and 3 females) were considered twice, as they exhibited either simultaneous or subsequent BPPV with pDBN involving other vertical SCs. Therefore, clinical and instrumental data concerning 59 cases of pDBN due to vertical SC-BPPV were finally analyzed. Detailed information about all 59 cases included in the study can be found in Supplementary Tables 1, 2.
Apogeotropic PSC-BPPV was diagnosed in 78% of cases (46/59; 26 on the right and 20 on the left side) and ASC-BPPV in 11.9% of cases (7/59, all left-sided), whereas in 6 cases (10.1%) neither the involved SC nor the pathologic side could be ascertained (Table 1). Recent BPPV was reported in 46 cases (77.9%) and previous head trauma was recorded in 10 cases (16.9%) (Table 1). Most subjects (35/59, 59.3%) presented to our attention more than a week after symptom onset, without statistically significant differences between apogeotropic PSC-BPPV and ASC-BPPV cases or between subjects with an identified SC and an unknown affected site (Figure 1A). In patients with BPPV involving the PSC non-ampullary arm, spontaneous purely vertical DBN was identified in 2 subjects and spontaneous torsional/vertical DBN in 4 cases, consistent with a canalith jam. Spontaneous nystagmus did not exhibit direction changes with either forward or backward head bending along the pitch plane, was inhibited by visual fixation in its vertical component, and increased in recumbent positionings in all 6 subjects. It likely resulted from previously performed CRP in only half of cases, whereas the remaining 3 patients presented with a spontaneous canalith jam converting into an ipsilateral SC-BPPV after DS maneuvers with the aid of mastoid vibrations. While 83.1% of overall cases, and all patients with either ASC-BPPV or BPPV involving an undefined SC, presented with pDBN detectable in both SHH and bilateral DH positions, both maneuvers resulted in positional nystagmus in only 78.3% of cases with apogeotropic PSC-BPPV, without statistically significant differences among subgroups (Figure 2D). No significant difference in terms of outcome with physical therapy could be found among the underlying diagnoses, resolution of pDBN being predominant over conversion into typical paroxysmal nystagmus (due to either canal switch or progression toward the ampullary tract of the PSC) in all BPPV subtypes (Figure 1C). Although two types of CRP (usually an impulsive maneuver followed by prolonged positioning) were enough either to release the involved canal from dislodged particles or to convert pDBN into paroxysmal nystagmus in most cases (35/59, 59.3%), all subjects with an unidentified affected SC required more than 2 types of CRP to recover (p = 0.003; Figure 1D). Conversely, no substantial difference in the time needed to recover or to convert into typical BPPV could be found among the different subgroups, a time period greater than a week prevailing across all BPPV forms (Figure 1E). A preliminary correlation analysis between patients' age and VOR-gain values for each SC (both pre- and post-treatment) in subjects with a defined affected SC (n = 53) was performed prior to investigating VOR-gain behavior among subgroups, to exclude a consistent age-related bias involving canal activity (50). Only a negative correlation between patients' age and presenting VOR-gains for the other vertical SC ipsilateral to the affected canal (rho = −0.279, p = 0.043) and for the contralateral SC other than the canal coupled with the affected SC (rho = −0.302, p = 0.028) was found (Figure 3).
In 43/59 cases (72.9%), an isolated vertical SC hypofunction could be identified, without statistically significant difference between apogeotropic PSC-BPPV (37/46, 80.4%) and ASC-BPPV (6/7, 85.7%) (Figure 4A and Table 1). In all these patients, the torsional components of pDBN, when detected, were in agreement with the excitatory (in ASC-BPPV) or inhibitory (in apogeotropic PSC-BPPV) discharge of the hypoactive SC, as expected from the endolymphatic flows elicited by otoconial shift in DH and SHH positionings. Moreover, proper CRP for treating the hypoactive SC succeeded either in resolution of symptoms and signs or in conversion into a typical ipsilateral BPPV with paroxysmal nystagmus involving the ampullary tract of the PSC or the HSC in all cases. All these patients exhibited normalization of VOR-gain abnormalities after either pDBN resolution or conversion into paroxysmal positional nystagmus, confirming a close linkage between transient high-frequency VOR impairment and BPPV-related pDBN (Figure 5). In none of our cases could deficient VOR-gain values be detected for an SC unrelated to BPPV, except for one case presenting with an impaired PSC VOR-gain ipsilateral to the affected ASC (which had a normal VOR-gain), normalizing after successful physical therapy for ASC-BPPV. All patients presenting with spontaneous DBN exhibited a hypoactive VOR-gain for the affected canal (Figure 4B).
When the overall cohort was divided according to pDBN duration, the subgroup of patients presenting with persistent positional nystagmus (44/59) exhibited a significantly higher rate of VOR-gain impairment (88.6%) than patients in whom positionings elicited a transient/paroxysmal pDBN (26.7%) (p < 0.001; Figure 6A). Nevertheless, the rates of cases showing pDBN reversal in the upright position, the time from symptom onset and the time to recovery or conversion of pDBN, the outcomes and the number of CRP required did not differ between the subgroups exhibiting these two different pDBN patterns (Figures 6B-F).
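As an illustration, this categorical comparison can be reproduced with Fisher's exact test on the 2 × 2 table implied by the reported rates (39/44 persistent vs. roughly 4/15 transitory/paroxysmal cases with VOR-gain impairment); the counts for the transitory group are inferred from the percentages and should be treated as an assumption.

```python
# Sketch of the categorical comparison above, using counts derived from the
# rates given in the text (88.6% of 44 = 39; 26.7% of 15 = 4). Illustrative.
from scipy.stats import fisher_exact

#        impaired  not impaired
table = [[39, 5],   # persistent pDBN (n = 44)
         [4, 11]]   # transitory/paroxysmal pDBN (n = 15)

odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, p = {p:.2g}")  # p well below 0.001
```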
When exploring the variations between presenting and post-treatment VOR-gain values for each SC in the overall population with pDBN due to BPPV involving a detectable SC (53 cases), a significant functional improvement could be found for the affected SC (p < 0.001), for its coupled contralateral canal (p < 0.001) and for the other contralateral vertical SC (p = 0.002; Figure 5A). Similar results were achieved for cases with apogeotropic PSC-BPPV (Figure 5B) and ASC-BPPV (Figure 5C). The increase in VOR-gain for the affected SC following CRP was more pronounced in cases presenting with persistent pDBN than with paroxysmal pDBN, irrespective of the type of canal involved (Figure 7A), whereas it was statistically significant only for affected SCs presenting with hypoactive canal function (Figure 7B). No significant differences in either presenting or post-treatment VOR-gain values for the affected SC could be found considering the type of SC involved (ASC vs. non-ampullary tract of the PSC), the pathologic side (right vs. left) or gender (male vs. female). Similar results were achieved when dividing the overall population according to previous history of BPPV and head trauma and comparing high-frequency function for the affected SC between subgroups (Figure 8). Conversely, once the overall population of 53 cases with an identified involved canal was separated according to pDBN features, presenting VOR-gain values for the affected SC were more severely impaired in cases with persistent positional nystagmus (p < 0.001) and with spontaneous DBN (p = 0.002) than in subjects exhibiting transient/paroxysmal pDBN and lacking spontaneous nystagmus, respectively (Figures 9A-E). By contrast, no significant disparities in post-treatment VOR-gain values could be found between subgroups with different pDBN characteristics (Figures 9F-J). Functional SC impairment at presentation was also slightly greater in patients successfully treated with one or two CRP than in cases requiring more than 2 types of maneuvers either to recover or to convert pDBN into a typical BPPV variant (p = 0.025; Figure 10C). On the contrary, the presenting VOR-gain for the involved canal did not significantly differ considering onset time (<7 vs. >7 days), outcome (resolution vs. conversion) or days needed to treat pDBN (<7 vs. >7 days) (Figures 10A-D), and the same was found when comparing VOR-gain values following therapeutic maneuvers among subgroups (Figures 10E-H).
Both cVEMPs and oVEMPs to air-conducted sounds were performed in only 26/59 patients (44%) to test saccular and utricular function, respectively. Whereas no significant difference in utricular function could be found among subgroups (Figure 11), cases with left-sided BPPV and cases exhibiting pDBN conversion into paroxysmal nystagmus showed a greater cVEMP AR than cases with BPPV involving the right ear (p = 0.029) and cases with pDBN resolution after CRP (p = 0.035), respectively (Figure 12).
DISCUSSION
BPPV is considered the most frequent disorder among peripheral vestibular pathologies, with a high prevalence in the adult population. Otolith detachment from the utricular macula is the currently accepted underlying pathophysiological mechanism (1). Perturbations of SC dynamics due to dislodged particles gravitating within the membranous labyrinth mostly result in rotatory vertigo spells triggered by head position changes. Although short-lasting positional vertigo represents the distinguishing symptom, BPPV-related signs and symptoms may differ among individuals, mainly depending on the portion of the labyrinth involved and on how the dislodged otoconia are disposed, sometimes resulting in challenging clinical scenarios. In fact, although the PSC represents the most frequently involved site due to its anatomically inferior location in both supine and upright positions, HSC-BPPV accounts for a considerable rate of patients, ranging from 10 to 20% of overall cases (1,51). Conversely, due to its anti-gravity position, the ASC ampullary receptor has been found to be rarely activated by endolymphatic perturbations due to detached otoconia, mainly accounting for <5-10% of cases (1,11,12,15,51,52). Moreover, although microscopic investigations have demonstrated that otoliths may either float within the membranous ducts (canalolithiasis) or adhere to the cupula (cupulolithiasis) (53,54), it has also been hypothesized that a consistent amount of otoconial fragments may sometimes aggregate into clots remaining entrapped within the membranous ducts (canalith jam) (55,56). Since several CRP have been described to treat each possible BPPV variant, precise localization of the otoconia within the labyrinth is of pivotal importance for treatment outcome. As vestibular tests have a limited role in BPPV diagnosis, not even being recommended in current clinical practice guidelines (57,58), otoconia siting has predominantly relied on combining the above-mentioned notions with both the spatial orientation of the SC assumed to be involved and the principles of gravitational fluid mechanics underlying the nystagmus recorded during examination and throughout treatment. The distinguishing feature of the typical variant of PSC-canalolithiasis (involving its ampullary arm) is represented by paroxysmal upbeat nystagmus with torsional components beating toward the undermost ear, referring to the upper corneal poles, evoked by the ipsilateral DH maneuver.
[FIGURE 5 | Horizontal dashed lines represent the border between normal and pathologic VOR-gain values for vertical canals (0.7); values within the gray areas represent abnormal measurements. Statistically significant differences at the Wilcoxon signed-rank test are marked with * (p < 0.05) and ** (p < 0.01). Values at a distance from the median greater than 1.5 and 3 times the IQR are plotted individually as dots (weak outliers) and asterisks (strong outliers), respectively. ASC, anterior semicircular canal; BPPV, benign paroxysmal positional vertigo; HSC, horizontal semicircular canal; post, post-treatment; pre, at presentation; PSC, posterior semicircular canal; SC, semicircular canal; VOR, vestibulo-ocular reflex.]
Geotropic upbeat nystagmus is disconjugate, with a weaker downward and stronger intorsional slow component in the eye on the pathologic side and a stronger downward and weaker extorsional slow component in the opposite eye, resulting from the transitory activation of the ipsilateral PSC ampulla, since debris within the ampullary arm move away from the cupula during diagnostic maneuvers (59). The resulting ampullofugal endolymphatic flows represent an excitatory stimulus for the PSC according to Ewald's laws, explaining the plane of alignment and the direction of the resulting positional nystagmus. It usually exhibits both a typical crescendo-decrescendo course and limited duration, as it recedes once debris have reached the undermost position. Moreover, it shows direction reversal with analogous time characteristics once the patient returns to the upright position, due to reflux of otoconia toward the PSC ampulla. In accordance with Ewald's laws, the latter nystagmus shows lower amplitude than the former, as it results from ampullopetal flows inhibiting PSC afferents (1,2). Canalolithiasis involving the HSC leads to oculomotor patterns exhibiting time features similar to those of the PSC, though it is mainly elicited by head movements along the yaw plane in the supine position (1,2). Whereas HSC-cupulolithiasis has been widely investigated, resulting in persistent direction-changing positional nystagmus aligning with the horizontal plane due to a continuous deflection of the overloaded cupula in lateral positionings, cupulolithiasis involving the PSC has rarely been described (60-62). It has been related to persistent positional nystagmus elicited in recumbent positionings, with either downbeat or upbeat direction depending on anatomy and head-bending angle, similarly to migrainous subjects with a supposedly modified density ratio between the PSC cupula and the surrounding endolymph (63).
Nevertheless, most authors have advocated BPPV involving the ASC ampullary arm as the underlying mechanism for pDBN (Figures 13A-D). In this condition, debris are thought to move away from the ASC ampulla, resulting in ampullofugal cupular deflection with an excitatory discharge of the superior ampullary nerve [(1,9,28,52); Figure 13C]. The morphological characteristics of pDBN resulting from such a physiological event should include fast-phase torsional components directed toward the affected ear, as the nystagmus is generated by the contraction of the ipsilateral superior rectus and contralateral inferior oblique muscles. Nevertheless, the interpretation of pDBN still represents a challenging topic, as patients with ASC-BPPV usually present with atypical positional nystagmus mimicking central pDBN (1-3,9). In fact, it is rarely evoked only by ipsilateral positioning and usually exhibits a longer time constant compared to typical BPPV-like nystagmus, lacking both the crescendo-decrescendo course and torsional components (3,9-15). Moreover, it has recently been hypothesized that the same pDBN could also be generated by particles gravitating through the distal portion of the non-ampullary tract of the PSC, close to the common crus [(16-27); Figures 14A-D]. In this condition, provoking maneuvers should move debris toward the PSC ampulla, leading to an inhibitory discharge of PSC afferents, which in turn results in pDBN with torsional components beating toward the contralateral ear (Figure 14C). As in ASC-BPPV (9), even in this case the non-ampullary arm of each PSC aligns with gravity enough to move debris in the ampullopetal direction in both bilateral DH and SHH positionings (20). Therefore, the same positional nystagmus resulting from activation of ASC afferents could also be generated by inhibiting the contralateral posterior ampullary nerve, which drives contractions of the same ocular muscles (18,28); a comparison of the nystagmus evoked in the two conditions is illustrated in Figures 13B, 14B. Additionally, pDBN often lacks torsional components in both ASC-BPPV and apogeotropic PSC-BPPV. Authors have advocated several explanations for this finding (8,9,11,19,20,64). Basically, they stated that since in the human skull the ASC is, on average, closer to the sagittal plane (only 41°) than the PSC (56°) (65,66), a much smaller torsional component is expected from ASC stimulation. Additionally, calculations of angular eye velocity vectors derived from the known canal geometry show the existence of an upward bias in the vertical slow-phase eye velocity (67). Thus, more downbeat than torsional nystagmus is expected from ASC-BPPV (9). The same geometrical and neurophysiological considerations have been applied to pDBN resulting from apogeotropic PSC-BPPV, as the PSC likely rotates its axis proceeding from the ampullary to the non-ampullary arm, so that the latter becomes closer to the sagittal plane (20). Moreover, as the torsional gain of the human VOR is less than unity (about 0.75 and 0.28-0.5 in response to high- and low-frequency roll head rotations, respectively), the torsional components of VOR responses should be smaller than the horizontal and vertical components (64). Nevertheless, even though peripheral pDBN is unanimously accepted to align with a more vertical than torsional plane and not to reverse while upright in the majority of cases, these aspects have yet to be fully clarified.
According to our original series of 93 patients, most cases of pDBN have a peripheral origin, confirming previous studies (8,9,15). While our data confirm that ASC-canalolithiasis represents a rare entity, it does actually occur. Nevertheless, unlike in other reports (19), its prevalence is much smaller than that of apogeotropic PSC-BPPV, accounting for <12% of pDBN due to BPPV compared to 78% for apogeotropic PSC-BPPV (Table 1). Unlike in previously reported series (19), the rates of pDBN with latency, with purely vertical direction and with a persistent course did not significantly differ among subgroups, resulting in a challenging differential diagnosis when relying only on the interpretation of pDBN characteristics (Figure 2). It is noteworthy that in our series otoconia entrapped within the non-ampullary branch of the PSC were more likely to result in persistent positional nystagmus than those within the ASC (Figure 2C). This may be because, in the general population, the tract of the posterior canal approaching the common crus could more frequently be narrower than the ASC ampullary arm. According to our results, pDBN could always be evoked by both bilateral DH and SHH tests in case of ASC involvement, whereas with PSC involvement positional nystagmus was detectable in only one DH positioning in 21.7% of cases (Figure 1B). Although specific rehabilitative treatments for these BPPV variants have been designed, with good reported results (12,14,19,20,24,28,45,46,68,69), uncertainty regarding the involved SC represents a dilemma when deciding the best therapeutic approach. Some authors have proposed using the efficacy of appropriate physical therapy, or the conversion into a classical ipsilateral canalolithiasis, to identify the involved canal (18-20,24). Others have advocated the use of the so-called "pendular maneuver", aiming to shift otoconia toward the PSC ampullary arm to detect the affected canal and proceed later with proper repositioning (70).
The importance of precise detection of the affected SC is reflected in the higher number of CRP needed to restore patients with an undefined affected SC compared to cases with an identified pathologic canal in our series (Figure 1D), with obvious prognostic sequelae and related patient discomfort. Conversely, irrespective of the canal affected, the number of CRP needed, the outcome and the time required for resolution or conversion did not differ between ASC- and apogeotropic PSC-BPPV (Figure 1). The high efficacy of two-step maneuvers could be explained by assuming that partially entrapped otoliths might fragment during the first maneuver and then return to the utricle following prolonged positionings.
Although objective measures of canal function would be of extreme help in such cases, the diagnostic usefulness of vestibular tests in BPPV remains controversial. Previous investigations assessing the feasibility of VEMPs (71-73) and of other tests measuring ampullary activity in different frequency domains (74-77) to detect the ear or the canal involved in patients with typical PSC-canalolithiasis have not achieved univocal consensus. Recently, vHIT has been used to assess high-frequency SC function in a subsample of patients presenting with persistent pDBN due to vertical SC-BPPV. Whereas VOR-gain proved to be reduced for the involved canal at presentation, it normalized following proper CRP aimed at releasing the affected canal from otoconia or at transforming pDBN into typical paroxysmal upbeat nystagmus (37). The authors hypothesized that, unlike in typical canalolithiasis involving the PSC ampullary tract, where particles are free to float along the membranous duct with minimal effect on cupular dynamics during high-frequency testing (75,77), in cases of ASC-BPPV and apogeotropic PSC-BPPV presenting with persistent pDBN debris could alter endolymphatic dynamics and cupular response mechanisms, resulting in a high-frequency VOR deficit for the involved canal. This condition is thought to occur whenever otoconia settle in physiologically narrow portions of the canal lumen [such as the distal portion of the non-ampullary branch of the PSC, close to the common crus (20)] or in particular sites of altered canal anatomy due to possible structural changes in SC orientation (9,13,46), to acquired stenosis of the membranous ducts [as demonstrated for the ASC ampullary arm (10,78)], or even to irregularities of the membranous walls. Given that hydrodynamic models of the fluid-filled SC have demonstrated how a pressure amplification occurs as otoconia enter a narrow section of the canal (79,80), in particular situations this could likely result in an incomplete canalith jam (18,20) leading to impaired ampullary responses in the high-frequency range (37). Namely, this condition would behave as a "low-pass filter", allowing the cupula to be activated by low-frequency stimuli (otoconial shifts producing pDBN) while impeding the ampullary receptor from responding to high-frequency inputs (head impulses leading to impaired vHIT data). Partial embedment of debris within the narrower portion of the membranous duct may also account for the usually persistent course of pDBN, with a lower frequency compared to typical PSC-canalolithiasis. In fact, hypothesizing that otoconia could remain incompletely entrapped in these canal tracts, a small amount of endolymphatic reflux is expected and the fluid column may continue to press against the ampullary receptor, resulting in a slower return of the ampullary crest to the resting position than in typical BPPV (20). Finally, considering that some patients with refractory BPPV submitted to surgical plugging have recently been found to have, on microscopic examination, fragments of otolithic membrane and otoconia encased in their gelatinous matrix rather than simply free-floating otoconia (54), it is not hard to imagine how such large materials could be trapped in various locations inside the membranous SC, dampening endolymphatic flows.
These hypothetical conditions diverge significantly from the presenting scenario in typical PSC-canalolithiasis, where the dislodged debris prove to be freely moving within the canal by the transitory paroxysmal nystagmus with crescendo-decrescendo course evoked both in recumbent positionings and on returning upright. In fact, according to investigations on SC models, when debris enter the membranous canal from the ampulla, a transcupular pressure is generated, resulting in cupular displacement and nystagmus onset. Once debris settle on the canal walls, they have no further effect on the ampullary receptor, unless the clot fills a portion of the canal, with a consequently greater effect on cupular dynamics (80). Whereas the former mechanism may account for the lack of persistent dynamic perturbation of canal activity by dislodged otoconia in classical PSC-canalolithiasis, with consequently missing abnormalities in vestibular tests assessing canal function (75,77), the latter finding could likely explain the transient VOR-gain impairment for the affected SC in case of pDBN (37).
[FIGURE 12 | Statistically significant differences at the Mann-Whitney U-test are reported and highlighted with * for p < 0.05. Horizontal dashed lines represent the border between normal and pathologic AR values for cervical VEMPs (35%); values within the gray areas represent abnormal measurements. Values at a distance from the median greater than 1.5 and 3 times the IQR are plotted individually as dots (weak outliers) and asterisks (strong outliers), respectively. AR, asymmetry ratio; ASC, anterior semicircular canal; BPPV, benign paroxysmal positional vertigo; CRP, canal repositioning procedures; cVEMPs, cervical vestibular-evoked myogenic potentials; F, female; M, male; pDBN, positional downbeat nystagmus; PSC non-amp, posterior semicircular canal non-ampullary arm.]
In our opinion, this hypothetical mechanism represents the most likely explanation for our findings. In fact, abnormal VOR-gain values were detected in 43/59 cases with pDBN due to vertical SC-BPPV, giving vHIT a sensitivity of 72.9% in detecting the affected SC, irrespective of the canal involved (Figure 4A). In all 43 cases, the SC presenting with deficient VOR-gain values matched the canal involved by BPPV, except for subject #12, who was affected by ASC-canalolithiasis despite presenting with an impaired ipsilateral PSC VOR-gain. In this case, transitory pDBN with left-torsional components was related to left ASC-BPPV rather than contralateral apogeotropic PSC-BPPV, as the patient had been treated a few days before for a typical variant of left PSC-canalolithiasis. Moreover, she developed a deficient VOR-gain value for the left PSC, normalizing after proper CRP for ASC-BPPV, suggesting that the otoliths, though eliciting superior ampullary afferents, could have dampened the dynamic responses of the ipsilateral PSC. We hypothesized a common crus canalolithiasis for this patient, in whom geometrical abnormalities in canal disposition, such as an ASC with a large diameter prevailing over the PSC in the constitution of the common crus, could occur. Nevertheless, 25.4% of overall patients presented with transitory nystagmus (Figure 2C), suggesting that otoconia may also be freely moving within either the ASC or the PSC non-ampullary lumen in some cases. Among them, only 26.7% of cases presented with a VOR-gain value <0.7 (Figure 6A), confirming that pDBN with a longer time constant is mostly related to the "incomplete jam" theory. The two different mechanisms theorized (canalolithiasis vs. incomplete jam) could likely account for the different behaviors of pDBN and of affected SC activity in these BPPV variants, also explaining how vertical SC function could have been normal in a series of patients diagnosed with ASC-BPPV presenting with mainly transitory pDBN (81). In fact, when examining only the data of the 44 patients presenting with persistent pDBN, where an incomplete jam is thought to occur, the diagnostic sensitivity of vHIT increased up to 88.6% (39/44 cases) (Figure 6A). Additionally, when analyzing pDBN features, VOR-gain values for the affected SC in cases presenting with persistent pDBN were found to be significantly more impaired than in cases with transitory/paroxysmal pDBN, further confirming the different behavior of high-frequency ampullary responses depending on the degree of otoconial entrapment (Figure 9C). Moreover, in all cases (43/59) presenting with an impaired VOR-gain for the affected SC, canal function normalized after proper CRP with either resolution or conversion into a typical BPPV, irrespective of the underlying diagnosis (Figure 5), confirming a strong linkage between abnormalities of high-frequency canal activity and dislodged otoconia, consistent with the "incomplete plug" theory. These assumptions are also in accordance with the significant improvement in VOR-gain detected only for SCs exhibiting a deficient VOR-gain at presentation, whereas normally active SCs did not significantly modify VOR measurements after repositioning (Figure 7B).
It is noteworthy that the medians of overall VOR-gain values improved highly significantly not only for the affected SC, but also for the contralateral vertical canal functionally coupled with the involved SC, despite the latter presenting with a VOR-gain within normal ranges (Figure 5). This is in line with studies on contralesional canal activity following acute vestibular loss showing, on average, a slightly reduced VOR-gain also on the healthy side. It is still a matter of debate whether peripheral phenomena alone may account for this finding (mainly a functional loss of the "push-pull" mechanism) or whether central compensation (mainly a cerebellar "shut-down") could also reduce canal activity on the healthy side (82-84). The latter phenomenon may likely explain the reduced VOR-gain values detected also in the other vertical SC contralateral to the lesion side and in those patients with long-lasting symptoms (the majority in our cohort). Additionally, canal function, despite being normal, slightly improved even for the ipsilateral SCs among cases with an involved ASC (Figure 5C). Although this finding may be due to chance, given the small-sized cohort of ASC-BPPV cases, it may be assumed that ASC-BPPV could more likely result in a global labyrinthine perturbation compared to apogeotropic PSC-BPPV.
Nevertheless, no differences were found among presenting VOR-gains for the affected SC according to the canal and side involved, the patients' gender, or previous history of BPPV or head trauma (Figures 8A-E). Similarly, neither different onset times, outcomes nor the time needed for resolution or conversion of pDBN into paroxysmal nystagmus was found to impact the presenting function of the involved SC (Figures 10A,B,D). The significantly higher presenting VOR-gain for the affected SC in patients requiring more than 2 CRP to recover or to convert into a typical BPPV, compared to those treated with a two-step maneuver, could be explained by the fact that cases in the former group were more likely to have a normal VOR-gain, resulting in more difficult localization of the otoconia with vHIT (Figure 10C). On the other hand, the routine use of high-frequency measurement of canal function to detect the involved canal may account for the smaller number of cases with canal switch in our cohort (Figures 1C, 6D) compared to other series, in which conversion into a typical BPPV variant was used as a diagnostic tool (19,20,24,70). Interestingly, no significant differences in VOR-gain values following CRP could be found among patients exhibiting different times of symptom onset, outcomes, numbers of physical treatments or times required for pDBN resolution or conversion (Figures 10E-H), suggesting that possible residual dizziness in these patients may be ascribed to causes other than peripheral ones (85).
As previously mentioned, persistent positional vertical/torsional nystagmus (either upbeat or downbeat, depending on anatomy) evoked in DH or SHH positions has been related to PSC-cupulolithiasis (60-62) or to a modified density ratio between the PSC cupula and the surrounding endolymph (63). Therefore, it might be reasonable to assume that the presenting findings in some patients of our cohort could be ascribed to such mechanisms. Nevertheless, we found some points of discordance between the VOG/vHIT findings and the cupulolithiasis/buoyancy theory that made us lean toward BPPV involving the non-ampullary arm of the PSC resulting in an incomplete jam. Firstly, we did not detect any direction-changing nystagmus on modifying the head-bending angle in the upright position or in contralateral DH positioning. We could only record a slight transient nystagmus reversal when returning upright from head-hanging positionings in a small subset of patients of our cohort. In a hypothetical case of PSC-cupulolithiasis, we would have expected to find the above-mentioned findings if the PSC cupula had been overloaded by attached otoliths and bent downward (either toward the canal or toward the ampulla, depending on anatomy and head-bending angle), persistently exciting or inhibiting PSC afferents, respectively (60,61). Moreover, unlike what we observed, we would also have expected to record a neutral head position in which the axis of the affected cupula aligns with gravity, suppressing positional nystagmus (60,63). Additionally, most cases presenting with a hypoactive PSC recovered following CRP properly designed for BPPV involving the PSC non-ampullary arm, without canal conversion. Conversely, in case of otoconia attached on the side of the cupula overlooking the long arm of the canal, whichever maneuver was performed should always be expected to convert pDBN into paroxysmal upbeating nystagmus, as the debris should necessarily detach and become free-floating within the membranous PSC before returning to the utricle. Moreover, according to mathematical models, cupulolithiasis requires a much greater amount of particles compared to canalolithiasis (79,80), so its conversion into a canalolithiasis should result in strong positional nystagmus that could hardly go unnoticed by patients. Alternatively, debris hypothetically either settling on the opposite side of the cupula or shifting within the PSC short arm should theoretically result in worse symptoms while upright rather than in evident nystagmus in DH positioning, unlike what was recorded in our cohort of patients (62,86). Finally, those cases presenting with a reduced VOR-gain for the ASC normalizing after CRP must necessarily have exhibited endolymphatic perturbations altering the activity of the superior ampullary receptor, rather than PSC-cupulolithiasis. Although other possible explanations could be assumed, in our opinion all these findings suggest that PSC-cupulolithiasis was less likely to occur than apogeotropic PSC-BPPV.
Finally, when considering the different VOR behavior for the affected canal between cases exhibiting spontaneous DBN or not, the former group presented with a far more reduced function than the latter (Figure 9E). These data are in accordance with the assumed mechanism of a canalith jam, in which an otoconial clot is thought to completely plug a narrow portion of the membranous duct, blocking endolymphatic flows (56). In this condition, a continuous alteration of the hydrostatic pressure between the otolith clump and the cupula may occur, leading to a persistent cupular deflection, thus explaining the sudden conversion of positional nystagmus into stationary nystagmus irrespective of head position occurring during physical treatment (55,56). As already described for HSC canalith jam (87,88), this mechanism may prevent both high- and low-frequency responses of the affected SC (namely, to head impulses and otoconial shifts, respectively) by blocking endolymphatic flows, much like surgical plugging (89), thus explaining the severe impairment of canal VOR-gain. Although canalith jam has been described for the HSC in several reports, occurring either spontaneously (87,88,90,91) or as a result of inappropriate CRP (92-94), a similar condition involving the PSC has recently been implied as the hypothetical pathomechanism for spontaneous DBN receding after proper physical treatment (95). Whereas spontaneous nystagmus resulting from HSC canalith jam overlaps with the presenting signs of acute vestibular loss, spontaneous VOG findings due to PSC involvement should be mainly torsional/vertical, aligning with the vertical SC axis and thus mimicking CNS pathologies. In these conditions, instrumental equipment for vestibular testing may play a key role in the differential diagnosis, since other end-organ dysfunctions or additional signs of central origin should always coexist with the VOR-gain impairment for the vertical SC in CNS disorders (36). Table 2 summarizes each possible scenario (regular canalolithiasis vs. incomplete jam vs. complete canalith jam) accounting for the different patterns of pDBN due to BPPV and vHIT measurements, with the corresponding assumed pathomechanisms.
Although our instrumental assessment included both cervical and ocular VEMPs to air-conducted sounds, they were neither routinely performed to search for possible AR differences among patients nor routinely tested both before and after CRP to look for amplitude changes following proper repositioning. Moreover, due to the lack of bone-conducted stimuli in our equipment, a reliable measurement of both saccular and utricular function could not be obtained. On the other hand, an analysis of variations in VEMP amplitudes among BPPV patients would have gone beyond the aim of our investigation. These methodological biases could likely account for the lack of statistically significant results among the VEMP data (Figures 11, 12).
Our investigation presents some other limitations. First of all, being a multicentre investigation, each involved otoneurologist collected data by him/herself, and inter-observer agreement for ambiguous cases (in particular for the three-dimensional evaluation of nystagmus) was never assessed. Second, although corrective saccades were always checked to avoid the inclusion of artifacts in hypoactive VOR-gain plots, our analysis of vHIT data focused almost solely on SC VOR-gain values, whereas a morphological study of saccades (covert vs. overt, latency, distribution, peak velocity, inter-aural differences, etc.) was not pursued, being beyond the aims of this study. Moreover, we considered as deficient only VOR-gains below normative data, without considering gain asymmetry between coupled pairs of SCs. In addition, the subgroup of patients with ASC involvement was significantly smaller than the population with apogeotropic PSC-BPPV, possibly leading to misleading conclusions when comparing subsample data. The inclusion in the analysis of 6 subjects in whom the affected SC could not be identified could also have altered the final results of our investigation. Though they presented with pDBN exhibiting the same characteristics as the remainder of cases with BPPV and recovered with positionings, it cannot be excluded that they were not affected by vertical SC-BPPV. Finally, although our cohort of patients with pDBN collected over a 1-year period was similar in size to other series, the small sample analyzed does not permit definitive conclusions on the sensitivity of vHIT in detecting the affected canal in these patients. Further prospective investigations with a larger number of subjects with pDBN will be needed to better determine the role of vHIT in vertical canal BPPV presenting with pDBN.
CONCLUSIONS
According to our data, vHIT may play a key role in the diagnosis of the affected canal in BPPV involving a vertical SC presenting with pDBN. In fact, in contrast to typical BPPV with paroxysmal positional nystagmus, where particles are free to move along the membranous ducts, in case of persistent pDBN an incomplete jam is likely to occur. Unlike a complete canalith jam, where otoconia are thought to plug the entire canal lumen, impeding endolymphatic flows in both the high- and low-frequency domains due to the continuous pressure exerted by the clot on a persistently displaced cupula, in these conditions particles partially entrapped within a narrow portion of the canal likely behave as a "low-pass filter" for the ampullary receptor. This phenomenon may impair high-frequency dynamic responses for the affected canal while allowing low-frequency endolymphatic movements, thus explaining both the reduced VOR-gain for the affected SC on vHIT despite pDBN and the normalization of head impulse data following symptom resolution or pDBN conversion into typical paroxysmal nystagmus. These findings should encourage clinicians to routinely use vHIT in case of pDBN, including high-frequency testing of canal function in the test battery for these patients, particularly in cases lacking torsional components, where detection of the affected canal and differential diagnosis with CNS disorders may be challenging.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Area Vasta Nord Emilia Romagna Institutional Review Committee. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
AC, PM, and SM led the conception of the study, conducted most of the data acquisition and interpretation, and made significant contributions to the writing and editing of the manuscript. AC conducted the data analysis and created the figures. CB and MR were involved in project conception and manuscript editing. SD, SQ, ER, and EA contributed to data acquisition and manuscript review. EA, MM, AG, and GL were involved in manuscript review. All authors approved the final version of the manuscript.
"year": 2020,
"sha1": "3c9c0e899338220288c021585deea99d129696e2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2020.578588/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c9c0e899338220288c021585deea99d129696e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Acetabular Cup Revision Arthroplasty Using Morselized Impaction Allograft
The rate of acetabular cup revision arthroplasty is gradually rising, along with the increased risk of osteolysis and prosthesis loosening over time and the increase in life expectancy. The goals of revision total hip arthroplasty are: i) implant stability through reconstruction of large bone defects, ii) restoration of the range of motion and biomechanics of the hip joint, and iii) normalization of uneven limb lengths. In acetabular cup revision arthroplasty, stable fixation of the acetabular components is difficult in the presence of severe bone loss (e.g., evidence suggests that it is challenging to achieve satisfactory results in cases of Paprosky type 3 or higher bone defects using conventional techniques). The author of this study performed acetabular revision to manage patients with large bone defects by filling them with morselized impaction allografts. These allografts were irradiated, frozen-stored femoral heads acquired from a tissue bank, and were applied to areas of acetabular bone defect, followed by insertion of a cementless cup. When this procedure was insufficient to obtain primary fixation, a tri-cortical or structural allograft using a femoral head was carried out. Structural stability and bone incorporation were confirmed via long-term follow-up. This study aims to review conventional surgical techniques and verify the utility of these surgical procedures by analyzing the author's surgical methods and discussing case reports.
The author of this study intends to address the theoretical background, surgical procedures and outcomes of acetabular revision arthroplasty with morselized impaction allograft and cementless cup.
CLASSIFICATION OF ACETABULAR DEFICIENCIES
The accurate classification of bone stock conditions and acetabular defects is important when deciding on an adequate surgical technique for acetabular reconstruction; these assessments are made based on the anteroposterior, lateral and Judet views on plain radiography and on computed tomography (CT) scans of the hip. The most commonly used classification systems are the D'Antonio classification 7) adopted by the American Association of Orthopaedic Surgeons (AAOS) and the Paprosky classification 8) (Table 1, 2). The AAOS system is based on intraoperative assessment of acetabular defects, while the Paprosky system is based on preoperative plain radiographs of the pelvis. CT scans can improve the accuracy of the preoperative (Paprosky) classification made using plain radiography. In addition to the AAOS and Paprosky systems, other classification schemes have been proposed by Saleh et al. 9), Gustilo and Pasternak 10), Gross et al. 11), and Parry et al. 12). The author of this study mainly used the Paprosky classification to plan the surgical procedure based on plain radiography and CT scans.
BONE GRAFT
There are several different types of bone grafts (e.g., autograft, allograft, and bone substitute). Autografts are known to achieve better clinical outcomes and incorporation than allografts, with the benefits of facilitated bone formation and no immune response 13). Importantly, however, autografts have a number of limitations as well (e.g., an insufficient amount of graft, poor bone quality in elderly patients, and the requirement for additional incisions). When the degree of bone loss is severe, the use of allograft is unavoidable because large bone defects cannot be restored using an autograft alone.
The types of allografts used in revision hip arthroplasty are typically divided into structural and morselized allografts. Structural allografts are used for reconstruction of structural or uncontained bone defects, while morselized allografts are usually used to manage non-structural or cavitary bone defects. The benefits of using allografts include: i) excellent applicability and ii) no residual sequelae in the area where bone grafts are harvested (as occur when autografts are used). However, the potential adverse events associated with allografts (the absence of osteoblasts and bone-inducing factors, the risk of disease transmission, and bone graft fractures) 14) should be carefully considered prior to their use. In revision surgery, allografts frozen and stored at -80°C are typically used after collection and radiation sterilization at a dose of 25 kGy. Furthermore, the type of allograft should be chosen depending on: i) the size and area of the bone defect, ii) the condition of the recipient site for the bone graft, and iii) the type of bone defect (i.e., contained or segmental).
Regarding the histological fate of bone graft materials, whether autografts or allografts, incorporation is important and is mediated through a series of processes closely linked with the host bone. The host bone supplies blood and living osteoblasts, which are critical factors in the incorporation and regeneration of the dead graft bone. Bone grafts stimulate the cellular activity of the host bone, leading to new bone formation within and around the graft material, and serve as a scaffold for bone regeneration. Important factors for bone incorporation include: i) firm fixation of the bone graft to the host bone, ii) the surface area between graft and host bone, iii) the vascularity of the host bone, iv) the weight-bearing conditions within the graft, and v) the size and structure of the graft 15).
Primary fixation of a structural bone graft is easy to achieve; however, the graft surface has a higher density, making the ingrowth of blood vessels challenging. Furthermore, stress fractures or bone resorption may occur because bone incorporation proceeds with blood vessel ingrowth into the graft surface only, while the internal sections of the grafted bone remain mostly dead until later.
Although initial fixation is more difficult to achieve with the morselized impaction allografting approach primarily used in this study, complete incorporation can be obtained, as morselized allografts form a homogeneous surface without gaps against the rough surface of the host bone. New bone deposition on the dead bone trabeculae can be facilitated without loss of mechanical strength, through a minimized immune response achieved by removing fatty marrow with sufficient cleansing and through the easy ingrowth of new blood vessels into the graft 16).
High Hip Center Technique
This technique is primarily used to manage huge acetabular bone defects in the posterior column; a cementless acetabular cup is inserted into the upper part of the acetabulum with severe bone loss. A high hip center is defined as placement of the acetabular component with the hip center more than 35 mm above the inter-teardrop line 17). When using this technique, primary stability of the acetabular component and greater than 70% host bone contact with the cup should be obtained. Superior migration of the hip center causes no definitive problems with hip biomechanics, but superolateral migration of the hip center must be avoided because of its adverse effects on hip biomechanics 18). Since superior migration of the hip center may result in leg length shortening and abductor weakness leading to dislocation of the hip joint, the use of a long neck-head, a calcar replacement stem and a high-offset stem is recommended. Excessive reaming into the posterior column should be avoided and all possible causes of impingement should be eliminated 19,20). This surgical technique is recommended only in limited situations and should be avoided if possible.
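As a trivial formalization of the definitions above, the following Python sketch (names and structure hypothetical) encodes the >35 mm high-hip-center criterion and the >70% host-bone-contact target.

```python
# Illustrative check of the high hip center definition and the host-bone
# contact target described above; thresholds are taken from the text,
# function names are assumptions.

HIGH_CENTER_MM = 35.0     # hip center height above the inter-teardrop line
MIN_HOST_CONTACT = 0.70   # fraction of cup surface in contact with host bone

def is_high_hip_center(center_height_mm):
    """True if the reconstructed hip center sits more than 35 mm above
    the inter-teardrop line."""
    return center_height_mm > HIGH_CENTER_MM

def meets_contact_target(host_contact_fraction):
    """True if host bone contact with the cup exceeds 70%."""
    return host_contact_fraction > MIN_HOST_CONTACT

print(is_high_hip_center(42.0), meets_contact_target(0.75))  # True True
```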
Bipolar Cup with Chip Bone Graft
This surgical technique is used for staged acetabular reconstruction in the presence of huge defects of the acetabular bone stock. In this procedure, acetabular reconstruction is not completed in a single operation; instead, a bipolar cup is fixed at a secondary surgery after restoration of the bone stock and incorporation of the bone graft into the host bone.
The surgical method can be used in AAOS type I, II, and III and Paprosky type 1, 2, and 3 bone defects, but is not desirable in AAOS type IV bone defects. A porous-coated cementless cup is typically used after chip bone grafting when the cup surface is in contact with more than 50% of the host acetabular bone and solid fixation of the acetabular cup can be attained with screws; alternatively, the use of a bipolar cup can be considered when firm fixation of the acetabular cup cannot be attained 21-23). The primary concern with this surgical method is ensuring stable placement of the bipolar cup onto relatively healthy host bone through over-reaming of the acetabular rim. When bone loss is severe in the superomedial acetabular wall, it is important to prevent postoperative medial displacement of the bipolar cup by using a sufficient amount of allograft.
A study by McFarland et al. 24) showed that although clinical improvement was achieved in 83% of cases, radiological outcomes at an average of 1.3 years of follow-up revealed osteolysis around the acetabular component in most cases. Takatori et al. 25) also noted that the use of a bipolar cup should be limited, since medial displacement of the cup of greater than 10 mm occurred in 40% of cases during an average follow-up of 7 years. This surgical technique has rarely been used in clinical settings.
Structural Allograft
Structural allografts are often used in patients with AAOS type III and IV defects of the acetabular bone. In this procedure, the structural bone is fixed at the site of the acetabular bone defect and the remaining gaps between the graft and host bones can then be filled with autografts. Cemented and cementless cups and acetabular reinforcement rings are also used. Total acetabulum, distal femur, proximal tibia and femoral head allografts are commonly used as structural allograft materials. Failure rates are usually high when structural grafts support more than 50% of the acetabular component 26). Relatively good short-term results have been attained, but mid- and long-term failure rates range between 4% and 47% 27). The major cause of failure is bone resorption during remodeling of the grafted bone, which leads to structural instability. When performing this procedure, the high long-term failure rates observed when structural grafts support large areas of the acetabular component should be considered.
Morselized Allograft with Acetabular Reinforcement Ring
This surgical technique is commonly used for elderly patients with Paprosky type 2 and 3 structural defects in which greater than 50% host bone contact with the cup cannot be attained. The defects are filled with impacted morselized allograft and covered with an acetabular reinforcement ring fixed with screws, and a polyethylene liner 2 to 3 mm smaller than the ring is fixed with cement. There are two types of Müller acetabular reinforcement rings: i) rings fixed to the ilium alone with screws, and ii) reinforced types (e.g., the Ganz and Octopus rings) that are fixed to the ilium with screws and gain additional stability from a hook placed in the cotyloid notch inferiorly. Another ring type is the Burch-Schneider antiprotrusio cage, which is fixed to both the ilium and the ischium 28). The rate of short-term acetabular cup loosening is about 24% 29), and the revision rates at mid- and long-term follow-up are 20% and 44%, respectively 30). Despite the use of a solid metal ring, Berry and Müller 29) described problems arising between the more flexible pelvic bone and the rigid implant, as well as stress shielding of the graft bone. In cases with pelvic discontinuity, the posterior column is fixed with a pelvic reconstruction plate and the acetabular reinforcement ring is then used for fixation. This is a commonly used surgical option, but it must be applied carefully and accurately; for this reason, the author of this study rarely uses this method.
Morselized Allograft with Cemented Acetabular Cup Fixation
In 1984, Slooff et al. 31) introduced a modified morselized impaction allografting technique. In summary, the contained acetabular defect is impacted with allograft chips (average size of 1 cm), with any segmental defect first covered by a metal mesh or a thin cancellous bone layer. Cement is then applied directly onto the impacted graft bed for cup fixation, without the use of an acetabular reinforcement ring over the allograft bone fragments.
The key principles of this technique are: i) reconstruction of hip biomechanics by placing the cup on the anatomic teardrop, ii) impaction of the segmental defect with a metal wire mesh to achieve containment, iii) impaction of the cavitary defect with a morselized allograft to replace bone loss around implants, and iv) impaction of bone chips using bone cement to increase stability. The benefits of this surgical technique include: i) minimized polyethylene wear, ii) restoration of bone stock with satisfactory graft incorporation, and iii) stable interdigitation of cement with morselized grafts 32) .
This surgical method can be used in AAOS type I, II, and III and Paprosky type 1, 2, and 3 bone defects, but is not desirable in AAOS type IV defects. When performing this technique, the following cautions should be carefully considered: i) massive segmental defects should be converted to cavitary defects using metal meshes, ii) small-sized chip bone grafts are impacted into the site of the cavitary defect, and iii) the polyethylene acetabular cup is cemented onto the graft.
A wide variety of radiological and clinical outcomes relating to this technique have been reported. The studies that introduced this surgical technique reported favorable results. Studies on aseptic loosening of the acetabular component by Trumm et al. 33), Highcock et al. 34), and Slooff et al. 35) demonstrated a low re-revision rate of 4% to 6%. Buttaro et al. 36) documented a success rate of 90% in 23 hips at 3-year follow-up. Comba et al. 37) obtained favorable results in 96% of 142 hips at 4-year follow-up. Schreurs et al. 38) considered cemented acetabular cup fixation a good surgical option on the basis of an 85% survival rate at an average follow-up of 12 years. On the contrary, Jasty and Harris 39) reported a failure rate of 75% at 6-year follow-up, and Pellicci et al. 40) and Kavanagh et al. 41) also obtained unsatisfactory results, with a re-revision rate of 22%, sepsis in 2.5%, recurrent dislocation in 4%, and aseptic loosening in 16%. Nevertheless, this technique is considered fairly safe.
Cementless Cup with Bone Graft
This is a widely used surgical technique that restores the anatomic hip center and facilitates incorporation of the bone graft. To use a cementless cup, ensuring as much contact as possible with healthy host bone is critical. Although the extent of viable host bone contact required for a cementless cup remains controversial, it is generally considered that the cup surface should contact at least 50% of healthy host bone in cavitary defects. In addition, it is desirable to fix the cup by inserting extra-long screws from different directions. Greater than 70% host bone contact with a cementless cup is commonly recommended in segmental defects 42,43).
This surgical method can be used in AAOS type I, II, and III and Paprosky type 1, 2, and 3 bone defects, but is not indicated in AAOS type IV defects or in patients with Paget's disease, metabolic bone disease, acetabular necrosis, or bone tumor 42). The success of this surgery is considerably affected by preservation of the posterior column. Moreover, solid impaction of morselized bone chips is important, and initial stability must be attained by seating the cup on the remaining host bone of the acetabular rim and floor.
Multiple mid- and long-term studies report relatively good clinical results for fixation with cementless compared with cemented cups. Silverton et al. 44) documented a radiological failure rate of 7% in a study with a median follow-up of 8.3 years. In a study by Sun et al. 45), radiological findings revealed osteolysis in 24.6% of patients at an average follow-up of 8.2 years, but the implant survival rate was 92.1%. Leopold et al. 46) noted a survival rate of 98% and a noninfectious acetabular loosening rate of 1.8% at a median follow-up of 10.5 years. Rosenberg 47) reported a survival rate of 84% and no revisions due to loosening at an average follow-up of 11 years after revision in 138 hips. The author of this study prefers the surgical method using morselized impaction allograft and a cementless cup and intends to delineate this procedure more clearly in the future.
Revision with Trabecular Metal Augmentation
Trabecular metal augmentation was introduced to improve biological rather than purely mechanical fixation and is used in revision when the contact surface between implant and host bone is clearly reduced by osteolysis. The technique facilitates bone ingrowth by filling the bone defect with tantalum, and trabecular metal can serve as an alternative to structural allograft. Since tantalum has high volumetric porosity, a low modulus of elasticity, and a high coefficient of friction, it can ensure primary implant stability 48). Furthermore, the technique is simple and quick, and can achieve biological fixation with bone ingrowth without the risk of bone resorption after grafting.
This surgical method can be used in Paprosky type 3 defects: type 3A, with severe bone loss in the superior aspect of the acetabulum and less than 50% host bone contact, and type 3B, combining type 3A features with pelvic discontinuity.
Siegmeth et al. 49) performed acetabular revision using trabecular metal augments and trabecular metal cups in 34 cases; stable fixation requiring no additional surgery was achieved in 32, and re-revision was required in only two cases at more than 2 years of follow-up. Other authors obtained comparable results and suggest that this technique is a surgical option for achieving bone ingrowth around the cup when the contact surface between cup and host bone is small and firm screw fixation is impossible 50,51). Boscainos et al. 52) reported more than 32 months of follow-up in 14 patients who underwent revision using a trabecular metal cup-cage construct: all patients gained stable implant fixation, and only two underwent re-revision, due to dislocation.
However, since clinical experience with trabecular metal augments is limited to short-term follow-up, further long-term investigations are warranted to address outstanding issues (e.g., stability, wear debris between the cup and metal augments, fatigue failure, the amount of host bone required in revision, and difficult recovery).
Cup-cage Reconstruction
The cup-cage (cage-in-cup) technique is a recent surgical option for acetabular fixation. In this method, a second-generation porous-coated cup is used for pelvic fixation. When implant stability is insufficient, an acetabular cage can be placed over the superior portion of the cup and fixed to the ilium and ischium with screws. Acetabular revision arthroplasty using a cup-cage construct is a useful technique for managing Paprosky type 3A and 3B defects and pelvic discontinuity. In a case report by Ballester Alfaro and Sueiro Fernández 53), no failures were observed among 5 patients who underwent revision using cup-cage constructs and trabecular metal augments (average follow-up of 26 months). Further studies are warranted for this technique, which is rarely used in Korea and has so far been explored only with short-term follow-up.
Author's Surgical Technique
The author of this study prefers a transgluteal approach, with a trans-trochanteric osteotomy added when necessary. After exposure of the surgical site, the loose acetabular component, osteolytic soft tissue, and all bone cement are completely removed. After assessment of the defect area, reaming is continued with progressively larger-diameter reamers. Reaming is carried out carefully to ensure maximum stability without damaging periacetabular structures. Sufficient medialization of the acetabular component is obtained while conserving as much contact with the original bone bed as possible. In particular, the contact area between the cup and host bone should be maximized by progressively increasing reamer diameters as tolerated, to optimize contact pressure in the anterior, inferior, and posterior areas. The allografts used are harvested from femoral heads collected from patients with femoral neck fracture, then frozen and stored at −80°C for more than 6 months in a tissue bank. Bacterial cultures are taken before the allografts are frozen and stored, and screening tests are performed on each donor. Cartilage and cortical bone are removed from the allograft, and the separated cancellous bone is cut into pieces of about 1 cm with bone scissors. To minimize the immune response, the allograft is repeatedly washed with saline solution using pulsatile lavage to eliminate as much fat and blood as possible, and then dried with a skin towel. The dried morselized allograft is mixed with the patient's blood. The morselized bone graft is impacted into the defect using impactors and reverse reaming (Fig. 1). With proper reaming, morselized impaction allografting yields satisfactory results even with major bone loss affecting more than 60% of the acetabulum. A hemispherical jumbo cup 2 mm larger than the last reamer is press-fit and firmly fixed superiorly with 4 to 7 cancellous screws to encourage bony ingrowth (Fig. 2). When severe bone loss is managed with morselized impaction allograft alone, a jumbo cup with a diameter of 66 to 74 mm is commonly used.
For mild bone defects, the author uses allograft only when a bone graft is required. Adequate fixation can be achieved in some cases when morselized impaction allograft is applied to a Paprosky type 3B defect with more than 60% bone loss. However, when sufficient fixation cannot be attained because of a severe segmental defect in the medial wall and deficient bone stock, one of two techniques is used to obtain early fixation. In the first, a tricortical bone graft is harvested from the iliac tuberosity, the most abundant source of cancellous bone, via a 5 to 7 cm incision, and then fixed to the superior acetabular margin using 2 to 3 screws; this technique is similar to the slotted acetabular augmentation described by Staheli 54). The remaining bone defect is then packed with morselized allograft and cancellous autograft, and reaming is applied to help impact the graft (Fig. 3). In the second, initial fixation is obtained by adequate reaming combined with a structural femoral head allograft, and the remaining defect is contoured with femoral head allograft. The femoral head structural graft is fixed with 2 to 3 cancellous screws. Compressive load on the graft bone should be kept to a minimum, with as much of the load as possible borne by the remaining acetabular rim. Congruency of the acetabulum is ensured by packing morselized allograft around the structural allograft (Fig. 4). Even with more than 60% bone loss at the boundary between Paprosky type 3A and 3B defects, the author's revision technique can be performed if pelvic discontinuity is not severe, if the posterior column and acetabular dome remain after maximal reaming, or if a jumbo cup can be seated on the acetabular margin despite a discontinuity of the acetabular rim (Fig. 2). Reaming to exactly the diameter of the acetabular component should be ensured to obtain satisfactory fixation.
Passive hip and knee joint exercises are allowed from the second or third postoperative day, and partial weight bearing ambulation normally begins between 3-4 days and 6 weeks after surgery, depending on the rigidity of fixation. Partial weight bearing with a heavier load is recommended from the sixth postoperative week, and full weight bearing is allowed three months after surgery. However, in cases with extensive allograft implantation, non-weight bearing ambulation can be maintained, taking the patient's age and systemic status into consideration.
With respect to ossification when a cementless acetabular cup and morselized impaction allograft are used: first, bony ingrowth at the contact surface with autograft promotes fixation with autograft; second, host bone-derived new blood vessels and osteoblasts grow into the dead space at the contact area with the morselized impaction allograft, and progressive ossification enables fixation with the allograft acting as a scaffold. Slooff et al. 31) demonstrated bony ingrowth from a cementless acetabular cup in contact with allograft in laboratory animal studies. As described in the studies of Lee et al. 15,55-57), bone ingrowth appears to occur indirectly without a radiolucent line, as shown in the author's long-term follow-up.
The advantages of the author's technique over previous techniques are the greater early fixation and incorporation achieved with the press-fit technique and morselized impaction grafting; avoiding direct weight load on the structural allograft also minimizes the risk of collapse. The procedure, as stated by Lee et al. 15), has been used since the early 1990s. In 2012, bony union and graft bone changes in hip joints, rather than fixation loss, were reported at a minimum follow-up of 10 years (range, 10 years to 20 years 2 months) after morselized impaction allografting of acetabular bone defects. That study reviewed 98 hips (93 patients) followed for at least 10 years to examine: i) new bone formation, ii) changes in the radiolucent zone between allograft and cup, iii) changes in the allograft margin, and iv) formation of trabecular bone, by comparing enlarged anteroposterior and lateral radiographs taken at regular follow-up visits. According to the radiographic results at final follow-up, a radiolucent zone was observed between the host bone and graft bone in 3 cases and around the acetabular component in 12 cases, but the width of all radiolucent zones was less than 2 mm. On follow-up radiographs taken between the 3rd and 6th postoperative months, radiolucency of the graft bone was increased in 38 cases and decreased in 58 cases. Newly formed trabecular bone and incorporation between host bone and graft bone were confirmed in all but 15 hips. There were no nerve injuries, including to the sciatic nerve, and no revision surgery due to deep infection. Postoperative dislocation occurred in 1 hip and was managed with an abduction brace after closed reduction. Remodeling of the graft bone appears to be affected by patterns of stress change. Therefore, the author's technique with structural autograft and a cementless cup is suggested as a good surgical option for acetabular revision arthroplasty, provided accurate surgical technique is applied.
In 2004, Lee et al. 15) reported the outcomes of morselized allografting in revision for acetabular bone defects, reviewing 77 cases (81 hips) of acetabular revision with morselized allograft bone on anteroposterior and lateral radiographs taken at final follow-up (range, 6 years to 12 years 10 months). Fixation was well maintained in 30 of 31 cases. Compared with the immediate postoperative radiographs, radiodensity increased in 32 cases and decreased in 48 cases. A radiolucent zone was observed between the host bone and graft bone in 2 cases and around the acetabular component in 9 cases; all radiolucent zones were less than 2 mm wide. Moreover, remodeling of the medial graft bone was observed along with acetabular bone resorption.
In a study published in 2011, Lee et al. 56) reported the radiographic and clinical results of 62 of 71 hips that underwent surgery with morselized impaction allograft and a cementless cup between 1992 and 2000, based on radiographic images and Harris hip scores at a minimum of 10 years (range, 10 years to 14 years 8 months). The mean Harris hip score was 92 at final follow-up and re-revision was performed in 3 cases. There were no injuries to blood vessels or nerves, including no sciatic nerve palsy, and two patients with recurrent dislocation within the first postoperative week were managed with an abduction brace. The 12-year survival rate was 95.8%. These outcomes demonstrate that satisfactory long-term results can be obtained with the author's surgical technique.
CONCLUSION
For reconstruction of severe acetabular bone loss, restoration of the acetabular bone defect using morselized impaction allograft and a cementless jumbo cup is a useful surgical option that achieves stable short-term fixation and satisfactory long-term surgical outcomes. | 2018-06-14T00:34:25.238Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "3217ad8eff883a0e811971e18d04bc879a25fb52",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc5990533?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "3217ad8eff883a0e811971e18d04bc879a25fb52",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
45363617 | pes2o/s2orc | v3-fos-license | Brownian motion and magnetism
We present an interesting connection between Brownian motion and magnetism. We use this to determine the distribution of areas enclosed by the path of a particle diffusing on a sphere. In addition, we find a bound on the free energy of an arbitrary system of spinless bosons in a magnetic field. The work presented here is expected to shed light on polymer entanglement, depolarized light scattering, and magnetic behavior of spinless bosons.
In this paper we present a general method of solving these problems by using a connection between Brownian motion and magnetism. The qualitative idea is to use a magnetic field as a "counter," to measure the area enclosed in a Brownian motion. We derive a relation between the distribution of areas in a Brownian motion and the partition function of a magnetic system, which can be used to cast light on both subjects. Despite its apparent simplicity, this relation does not seem to have been noticed or exploited so far. Our main purpose here is to illustrate its usefulness. We first discuss the planar problem solved earlier. We then go on to solve the (as yet unsolved) problem of diffusion on the sphere. We also exploit the relation to learn about the magnetic properties of bosonic systems. Here we recover previously known results and arrive at some others. We conclude the paper with a few remarks.
Let a diffusing particle start from a point on a plane at time τ = 0. Given that the path is closed at time β (not necessarily for the first time), what is the conditional probability that it encloses a given area A? By "area" we mean the algebraic area, including sign: the area enclosed to the left of the diffusing particle counts as positive and the area to the right as negative. This problem has been posed and solved 2 by polymer physicists, since it provides an idealized model for the entanglement of polymers. We present a method of solving this simple problem.
Let {x(τ), 0 ≤ τ ≤ β, x(0) = x(β)} be any realization of a closed Brownian path on the plane. As is well known, Brownian paths are distributed according to the Wiener measure: 3 if f[x(τ)] is any functional on paths, the expectation value of f is given by

\langle f \rangle = \frac{1}{\mathcal{N}} \int \mathcal{D}[\vec{x}(\tau)]\, f[\vec{x}(\tau)]\, \exp\left[-\frac{1}{2}\int_0^\beta \dot{\vec{x}}^2(\tau)\, d\tau\right].   (1)

In Eq. (1) the functional integrals 4 are over all closed paths (the starting point is also integrated over) and 𝒩 is the corresponding normalization. (We set the diffusion constant equal to one half throughout this paper.) Let A[x(τ)] be the algebraic area enclosed by the path x(τ). Clearly, the normalized probability distribution of areas P(A) is given by

P(A) = \langle \delta(A[\vec{x}(\tau)] - A) \rangle.   (2)

The expectation value φ̄ of any function φ(A) of the area is given by ∫ P(A) φ(A) dA. As is usual in probability theory we focus on the generating function P̃(B) of the distribution P(A):

\tilde{P}(B) = \int dA\, P(A)\, e^{ieBA},   (3)

which is simply the Fourier transform of P(A). For future convenience we write the Fourier transform variable as eB. The distribution P(A) can be recovered from its generating function by an inverse Fourier transform. From Eqs. (2) and (3) above we find

\tilde{P}(B) = \langle e^{ieBA[\vec{x}(\tau)]} \rangle.   (4)

Notice that BA can be expressed as

B\, A[\vec{x}(\tau)] = \oint \vec{A}(\vec{x}) \cdot d\vec{x},   (5)

where A(x) is any vector potential whose curl is a homogeneous magnetic field B.
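A minimal numerical sketch (not part of the paper) of this setup: closed Brownian bridges are sampled with diffusion constant one half, their algebraic areas are computed with the shoelace formula, and the histogram is compared against Lévy's closed-form area law P(A) = (π/2β) sech²(πA/β), which is the known answer for this problem although it is not quoted here. All sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, n_steps, n_paths = 1.0, 400, 5000
dt = beta / n_steps

# Closed Brownian paths (bridges), diffusion constant 1/2: step variance = dt
steps = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps, 2))
w = np.cumsum(steps, axis=1)
tau = (np.arange(1, n_steps + 1) * dt)[None, :, None]
bridge = w - (tau / beta) * w[:, -1:, :]     # pin the endpoint: x(beta) = x(0)

x, y = bridge[:, :, 0], bridge[:, :, 1]
# Algebraic (signed) area of each closed polygonal path via the shoelace formula
area = 0.5 * np.sum(x * np.roll(y, -1, axis=1) - y * np.roll(x, -1, axis=1),
                    axis=1)

hist, edges = np.histogram(area, bins=60, range=(-1.5, 1.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
levy = (np.pi / (2 * beta)) / np.cosh(np.pi * centers / beta) ** 2
print(np.max(np.abs(hist - levy)))   # small; shrinks further as n_paths grows
```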
Equations (1), (4), and (5) yield

\tilde{P}(B) = \frac{1}{\mathcal{N}} \int \mathcal{D}[\vec{x}(\tau)]\, \exp\left\{-\int_0^\beta \left[\tfrac{1}{2}\dot{\vec{x}}^2 - ie\vec{A}(\vec{x})\cdot\dot{\vec{x}}\right] d\tau\right\}.   (6)

By inspection of Eq. (6) we arrive at

\tilde{P}(B) = \frac{Z(B)}{Z(0)},   (7)

where Z(B) is the partition function (Z(B) = Tr{exp[−βH(B)]}) for a quantum particle of charge e in a homogeneous magnetic field B at an inverse temperature β. This is the central result of this paper and it relates Brownian motion and magnetism. As the reader can easily verify, the relation (7) holds even if there is an arbitrary biasing potential. The plane can also be replaced by a sphere or (R³)^N, the configuration space of N particles in R³. In the last case, the area of interest is the sum of the weighted areas of the projections of the closed Brownian paths onto the x−y plane. Now we demonstrate the utility of Eq. (7) by computing the distribution of areas for diffusion on a plane. The partition function Z(B) for a particle of unit mass in a constant magnetic field is easily computed from the energies E_n = (n + 1/2)eB and the degeneracy (number of states per unit area) eB/2π of the Landau levels 5 (throughout this paper we set ℏ = c = 1):

\frac{Z(B)}{Z(0)} = \frac{\beta e B / 2}{\sinh(\beta e B / 2)}.   (8)

Let us now address the problem posed at the beginning of this paper: what is the distribution P(Ω) of solid angles enclosed by a diffusing particle on a unit sphere? Unlike the planar case, P(Ω) is a periodic 6 function with period 4π. The generating function P̃_g of the distribution of solid angles is given by

\tilde{P}_g = \int_0^{4\pi} d\Omega\, P(\Omega)\, e^{ig\Omega/2},   (9)

with g an integer, and P(Ω) is expressed in terms of P̃_g by a Fourier series rather than an integral (3). Relation (7) now takes the form

\tilde{P}_g = \frac{Z_g}{Z_0},   (10)

where Z_g is the partition function for a particle of charge e on a sphere subject to a magnetic field created by a monopole of quantized strength G = g/e (Ref. 7) at the center of the sphere. The energy levels of this system are easily computed: 8 E_j = (1/2)[j(j+1) − g²], where j, the total angular momentum quantum number, ranges from |g| to infinity, and the jth level is (2j+1)-fold degenerate. The partition function is consequently given by

Z_g = \sum_{j=|g|}^{\infty} (2j+1)\, \exp\left\{-\tfrac{\beta}{2}\left[j(j+1) - g^2\right]\right\}.   (11)

Combining (9), (10), and (11) and rearranging the summations we arrive at

P(\Omega) = \frac{1}{4\pi Z_0} \sum_{l=0}^{\infty} e^{-\beta l(l+1)/2} \left\{ (2l+1) + 2\,\mathrm{Re}\!\left[\frac{(2l+1)\,\zeta}{1-\zeta} + \frac{2\zeta}{(1-\zeta)^2}\right] \right\},   (12)

where ζ = ζ(l, β, Ω) = exp[−1/2{β(2l + 1) + iΩ}]. The function (12) is plotted numerically for various values of β in Fig. 1. The qualitative nature of these plots is easily understood. For small values of β the particle tends to make small excursions and its path encloses solid angles close to 0 or 4π, and consequently the plots are peaked around these two values. As the available time β increases, other values of Ω also become probable and the peaks tend to spread and the curves to flatten out. Finally, in the limit β → ∞ the particle has enough time to enclose all possible solid angles with equal probability. These plots give the answer to the question raised at the beginning of the paper.
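The series can equally well be summed in the unrearranged form P(Ω) = (1/4π) Σ_g (Z_g/Z_0) e^{−igΩ/2}. The sketch below (an illustration, not the authors' code) evaluates it that way for a unit-mass particle on the unit sphere with e = 1, so that g is an integer; the cutoffs g_max and j_terms are assumptions chosen for convergence at the β values shown.

```python
import numpy as np

def Z_g(g, beta, j_terms=400):
    """Monopole partition function on the unit sphere (unit mass, e = 1):
    E_j = (j(j+1) - g^2)/2 with j >= |g| and degeneracy 2j + 1."""
    j = np.arange(abs(g), abs(g) + j_terms, dtype=float)
    return np.sum((2.0 * j + 1.0) * np.exp(-0.5 * beta * (j * (j + 1.0) - g * g)))

def P_omega(omega, beta, g_max=200):
    """P(Omega) = (1/4pi) * sum_g (Z_g/Z_0) exp(-i g Omega/2); period 4*pi."""
    z0 = Z_g(0, beta)
    total = np.ones_like(omega)             # the g = 0 term, Z_0/Z_0 = 1
    for g in range(1, g_max + 1):           # Z_g = Z_{-g}, so pair the terms
        total += 2.0 * (Z_g(g, beta) / z0) * np.cos(0.5 * g * omega)
    return total / (4.0 * np.pi)

omega = np.linspace(0.0, 4.0 * np.pi, 9)
for beta in (0.1, 1.0, 10.0):
    print(beta, np.round(P_omega(omega, beta), 4))
```

For small β the output is peaked near Ω = 0 and 4π, flattening toward the uniform value 1/4π as β grows, which matches the qualitative description of Fig. 1.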
Now we turn to the magnetic properties of spinless bosons. An N-particle system in three dimensions placed in a homogeneous external magnetic field B along the z direction has the Hamiltonian

H = \sum_{a=1}^{N} \frac{1}{2}\left[\vec{p}_a - e\vec{A}(\vec{x}_a)\right]^2 + \sum_a V(\vec{x}_a), \qquad \vec{\nabla}\times\vec{A} = B\hat{z}.

Now consider a diffusion on Q = (R³)^N biased by the potential V(x_a). The Wiener measure is now appropriately modified:

\langle f \rangle_V = \frac{1}{\mathcal{N}} \int \mathcal{D}[q(\tau)]\, f[q(\tau)]\, \exp\left\{-\int_0^\beta \left[\tfrac{1}{2}\dot{q}^2 + \sum_a V(\vec{x}_a(\tau))\right] d\tau\right\}.

The area whose distribution we are interested in is defined as follows. Let q(τ) be a closed curve in Q; q(τ) determines trajectories of particles {x_a(τ), a = 1, 2, ..., N}, and

A[q(\tau)] = \sum_{a=1}^{N} \frac{1}{2} \oint \left(x_a\, dy_a - y_a\, dx_a\right).

The area functional has the following interpretation. If the final positions of the N particles are the same as the initial ones (direct processes), A[q(τ)] is simply the sum of the areas enclosed by the projections of the particle trajectories on the (x − y) plane. If the final positions differ from the initial ones by a permutation (exchange processes), the projections of the particle trajectories still define closed curves on the (x − y) plane, and A[q(τ)] is defined as the sum of the areas enclosed by these closed curves.
As before we find that P̃(B), the Fourier transform of the distribution P(A) ≡ ⟨δ(A[q(τ)] − A)⟩_V of areas, is given by Eq. (7). It is crucial for our argument that the particles obey Bose statistics, 9 for only then is P(A) a true probability distribution. Since P̃ is then the Fourier transform of a probability distribution, for any real numbers u_1, ..., u_n the n×n matrix D_ij = P̃(u_i − u_j) is positive semidefinite, and in particular

\Delta^{(n)} \equiv \det\left[\tilde{P}(u_i - u_j)\right] \ge 0.   (13)

For the simplest nontrivial case n = 2, the inequality (13) with B = u_1 − u_2 leads to

Z(B) \le Z(0),   (14)

or equivalently, F(B) ≥ F(0). Since the free energy of the system increases in the presence of a magnetic field, the material is diamagnetic. This universal diamagnetic behavior of spinless bosons at all temperatures is known in the mathematical physics literature. 11 However, our approach may be accessible to a wider community of physicists. Our approach relating Brownian motion to magnetism enriches both fields and provides each field with intuition derived from the other. For instance, the zero-field susceptibility χ = −∂²F(B)/∂B²|_{B=0} of the magnetic system is related to the variance of the distribution of areas in the diffusion problem:

\beta \chi(0) = -e^2 \langle A^2 \rangle.   (15)

It is curious that the zero-field susceptibility can be interpreted as the variance of the distribution of areas. Since the variance cannot be negative, it follows that χ, the zero-field susceptibility, cannot be positive, and so these systems are diamagnetic.
Next consider the case n = 3. The 3×3 matrix D_ij will then be a function of u = u_1 − u_2 and v = u_2 − u_3 (u_1 − u_3 being expressible in terms of u and v). If we set u = 0 (i.e., set u_1 = u_2 = 0, u_3 = −v), we find that ∆^(3)(u,v)|_{u=0} = 0. It then follows from the inequality (13) that ∆^(3)(u,v) has a minimum at u = 0 for all v. This implies that ∂²∆^(3)/∂u²|_{u=0} ≥ 0; writing this out, where the prime means derivative with respect to the magnetic field, we find

\left[\tilde{P}'(v)\right]^2 \le U(0)\left[1 - \tilde{P}(v)^2\right].   (16)

As can be seen by taking the limit B → 0, U(0) = −P̃''(0) = −βχ(0). We define a critical field B_c = π/[2√(−βχ(0))]. The inequality (16) implies a bound on the partition function. Notice that P̃' lies in a cone defined by the lines of slope −(π/2B_c)√(1 − P̃²) and (π/2B_c)√(1 − P̃²). It follows that

\tilde{P}(B) \ge \cos\left(\frac{\pi B}{2 B_c}\right), \quad \text{i.e.,} \quad Z(B) \ge Z(0)\cos\left(\frac{\pi B}{2 B_c}\right) \quad \text{for } |B| \le B_c.   (17)

The diamagnetic inequality due to Simon and Nelson 11 gives an upper bound on the partition function Z(B) of a system of spinless bosons. The new inequality stated in (17) gives a lower bound on Z(B) (see Fig. 2), or equivalently, an upper bound on the free energy. As an explicit check on this new bound on the free energy we considered a simple system: a charged particle in a magnetic field subject to a harmonic oscillator potential. The calculated partition function of this system is close to, but above, the lower bound set by (17). Needless to say, our bound is derived for an arbitrary interacting system of spinless bosons. The new bound presented here, along with the earlier diamagnetic inequality (14), 11 places strong restrictions on the partition function of a bosonic system in the presence of a magnetic field. We find a curious and immediate consequence of these restrictions: if the zero-field susceptibility of the system vanishes, then Eqs. (14) and (17) imply that Z(B) = Z(0), i.e., the system is nonmagnetic at all fields.
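A numerical illustration of the harmonic-oscillator check mentioned above, under stated assumptions: for a charged particle (e = 1, unit mass) in a two-dimensional harmonic potential of frequency ω₀ and a perpendicular field B, the exact spectrum is the Fock-Darwin one, E = ω₊(n₊ + 1/2) + ω₋(n₋ + 1/2) with ω± = √(ω₀² + B²/4) ± B/2, giving a closed-form partition function. The sketch estimates χ(0) by finite differences and verifies both the diamagnetic upper bound (14) and the lower bound (17).

```python
import numpy as np

def lnZ(B, beta=1.0, w0=1.0):
    # Exact Fock-Darwin log partition function: 2D oscillator (unit mass,
    # e = 1, frequency w0) in a perpendicular field B
    root = np.sqrt(w0 ** 2 + B ** 2 / 4.0)
    wp, wm = root + B / 2.0, root - B / 2.0
    return -np.log(4.0 * np.sinh(beta * wp / 2.0) * np.sinh(beta * wm / 2.0))

beta, h = 1.0, 1e-3
# -beta*chi(0) = -(lnZ)''(0), estimated by a central finite difference
curv = -(lnZ(h) - 2.0 * lnZ(0.0) + lnZ(-h)) / h ** 2
Bc = np.pi / (2.0 * np.sqrt(curv))            # critical field of the text

for B in np.linspace(0.0, 0.99 * Bc, 6):
    upper = lnZ(0.0)                                               # bound (14)
    lower = lnZ(0.0) + np.log(np.cos(np.pi * B / (2.0 * Bc)))      # bound (17)
    assert lower - 1e-6 <= lnZ(B) <= upper + 1e-6
print("both bounds hold up to B_c =", Bc)
```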
The key result of this paper is a connection between two apparently distinct classes of problems: Brownian motion and magnetism. This allows us to compute the distribution of solid angles enclosed in Brownian motion on a sphere. As mentioned earlier, this problem comes up when computing the distribution of Berry phases in a random magnetic field. A more classical context is depolarized light scattering.
As is well known, a light ray following a space curve picks up a geometric phase, 12,13 equal to the solid angle swept out by the direction vector. If a light ray inelastically scatters off a random medium, its direction vector does a random walk on the unit sphere of directions. The distribution P (Ω) computed here is relevant to the extent of depolarization in such an experiment. 14 In the domain of magnetism we find an independent way of arriving at the diamagnetic inequality 11 which states that the free energy of a system of spinless bosons always increases in the presence of a magnetic field. Spinless charged bosonic systems occur in the context of superconductors (which are perfect diamagnets) and neutron stars. 15 We believe that the community of physicists working in these areas may not be aware of the general results available in the mathematical literature. For instance, the diamagnetism of bosons may be relevant 16 to the interpretation of recent experiments 17 on high-T c superconductivity.
Throughout this paper we have only discussed homogeneous magnetic fields. It is easy to generalize our discussion to take into account arbitrary inhomogeneous fields: all one does is consider the distribution of weighted areas. An obvious application of this is the computation of the probability of entanglement of a polymer with a background lattice of polymers. We expect the new method outlined here to shed light on open problems in polymer entanglement involving more complicated configurations of polymers than the simplest one solved so far. One can also use the relation (7) to compute the distribution of winding numbers in diffusion in a multiply connected space.
It is a pleasure to thank N. Kumar for bringing up the problem of diffusion on a sphere and several discussions on this work; Barry Simon for his help in finding Ref. | 2018-04-03T06:03:43.930Z | 1994-11-01T00:00:00.000 | {
"year": 2005,
"sha1": "7b84c3d8ceb92abaed6228b6fa9c2998bf578e39",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0506631",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b141d919af72d1ea83fc74afe8a4c8caf264fb22",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Mathematics"
]
} |
258928869 | pes2o/s2orc | v3-fos-license | Abstract DGPRÄC
The perioperative use of vasopressors in free flap surgeries is controversially debated. The predominant concern is that pedicle blood supply will be negatively affected, thus leading to increased post-operative complications. However, little is known about the relationship between vasopressor use and its impact on the intrinsic blood supply of free flaps. Comorbidities that diminish blood circulation may entail a higher risk of necrosis from vasopressor use due to decreased intrinsic free flap blood supply. The aim of this study is to establish the role of perioperative vasopressors in free flap necrosis based on flap localization and patients' comorbidities.
Materials and methods:
Twenty patients with complete facial paralysis (duration of paralysis 8.45±3.63 months) received dual reinnervation with CFNG and MNT. The functional outcome of the procedure was evaluated with the physician-graded outcome metric eFACE. The objective artificial intelligence-driven software packages Emotrics and FaceReader were used for oral commissure measurements and emotional expression assessment, respectively.
Results:
The mean follow-up was 31.75±23.32 months. In the eFACE score, the nasolabial fold depth and oral commissure at rest improved significantly (p<0.05) toward a more balanced state after surgery.
Postoperatively, there was a significant decrease in oral commissure asymmetry while smiling (19.22±6.1 mm to 12.19±7.52 mm). For emotional expression, the median intensity score of happiness, as measured by the FaceReader software, increased significantly while smiling (0.28, IQR 0.13-0.64). Seven (35%) patients reported a spontaneous smile. In 5 (25%) patients, a secondary static suspension of the mouth with a fascia lata strip had to be performed because of unsatisfactory resting symmetry.
Conclusion:
The MNT provided reliable voluntary motion and the CFNG contributed to good resting tone in the majority of patients, thus merging the advantages of both neural sources without additional morbidity. Despite early reinnervation, spontaneity remains less predictable to achieve.
Materials and methods:
We retrospectively evaluated the treatment strategy and recurrence rate of hidradenitis suppurativa.
We included all eligible patients of legal age treated between February 2003 and October 2021 with a diagnosis of hidradenitis suppurativa and the necessity for surgical treatment. All patients with surgical treatment and direct wound closure by suture were excluded. Bacterial load and flora were analyzed for primary and secondary reconstruction in combination with negative-pressure wound therapy.
Patient data were analyzed for recurrence rate and remission time according to different reconstructive techniques.
Results:
In 44 affected anatomical sites (n = 23 patients), 15 patients were treated with negative-pressure wound therapy. Bacterial load and flora were lower in the last wound swab of patients with multi-stage procedures (22 localizations) compared with the first wound swab, independent of the use of negative-pressure wound therapy. Wound closure, whether by a direct or a multi-stage procedure, was achieved by local fasciocutaneous flaps (n = 12), secondary intention healing (n = 7), secondary intention healing with buried chip skin grafts (n = 10), or split-thickness skin grafts (n = 15). Radical excision combined with split-thickness skin grafts showed the lowest recurrence rate at follow-up (16%; n = 4).
Conclusion:
Radical excision of hidradenitis suppurativa as the gold standard of surgical treatment, combined with negative-pressure wound therapy in multi-stage procedures, ultimately reduced bacterial load and flora in our study. The use of split-thickness skin grafts showed the lowest recurrence rate.
Materials and methods:
We retrospectively analyzed 106 patients who received free flap treatment between 2006 and 2020, stratified by age, sex, body mass index (BMI), and smoking status. We assessed the role of mean arterial pressure (MAP) and perioperative catecholamine use during free flap surgeries using univariate and multivariate analyses.
Results:
The use of fasciocutaneous flaps, especially on the breast and extremities in patients with vascular disease, carried the highest risk of marginal flap necrosis (OR: 1.64, p=0.01). Musculocutaneous free flaps transplanted onto central body defects were less affected by catecholamine use (OR: 1.01, p=0.05).
In summary, fasciocutaneous flaps used on extremities in patients with cardiovascular disease showed the greatest vulnerability to high vasopressor use. Additionally, we identified that a low MAP (<65 mmHg) in patients with peripheral artery disease (PAD) led to increased marginal flap necrosis (OR: 1.32, p=0.02).
Conclusion:
To minimize the risk of marginal flap necrosis in patients with cardiovascular disease, we recommend limiting the use of catecholamines or minimizing flap size, particularly when fasciocutaneous flaps are used to cover defects on the extremities. Maintaining a mean arterial pressure above 65 mmHg in patients with PAD was beneficial.

Limb regeneration in mammals has been shown to be less comprehensive than in amphibian models. Part of the reason for this difference in regenerative capacity is the immune system.
The effects of immune cells, such as macrophages, on regeneration have been well examined; however, the role of Natural Killer (NK) cells remains largely unknown. Here, we demonstrate the importance of NK cells in regeneration and show that their effects depend on tissue of origin.
Materials and methods:
Amputation of the distal one-third of the terminal phalanges was performed on age-matched 8-10-week-old immunodeficient NSG mice. Two digits per hind paw (digits 2 and 4) were amputated; digit 3 served as an uninjured control. Adoptive cell transfer (ACT) of flow cytometry-purified splenic NK (SpNK) or thymic NK (ThNK) cells from C57BL/6 mice was performed via tail vein injection.
Changes in hard and soft tissue volume were assessed using microCT and histology. To determine immune cell presence and receptor expression in the regenerating digit tip, we used immunofluorescent staining and flow cytometry.
Results:
We confirmed NK cell recruitment to the regenerating digit tip. NK cell cytotoxicity was observed against osteoclast and osteoblast progenitors. ACT of ThNK cells induced apoptosis with a reduction of osteoclasts, osteoblasts, and proliferative cells, resulting in inhibition of regeneration. By contrast,
ACT of splenic NK cells showed reduced cytotoxicity towards progenitor cells and improved regeneration. Adoptive transfer of NK cells deficient in NK cell activation genes identified that promotion of regeneration by SpNK cells requires Ncr1, whereas inhibition by ThNK cells is mediated
via Klrk1 and perforin.
Conclusion:
These findings yield insight into mammalian digit tip regeneration and demonstrate the importance of NK cells for regenerative ability. Successful future therapies aimed at enhancing regeneration will require a deeper understanding of how progenitor cells are protected from NK cell cytotoxicity.

Sarcomas are a rare and highly heterogeneous group of malignancies. Sarcomas affecting bone require an interdisciplinary approach to resection and reconstruction. However, microsurgical reconstruction strategies must not negatively impact oncological safety and survival. Here, we analyzed the oncosurgical safety, including survival and overall complication rates, in patients with bone-associated sarcomas who underwent complex microsurgical procedures for limb salvage.
Materials and methods:
We performed a retrospective chart review of all patients treated for soft tissue and bone sarcoma at our institution, focusing on bone involvement and microsurgical reconstruction between 2000 and 2019.
This subgroup was further investigated for tumor resection status, 5-year survival rate, length of hospital stay, and overall complication and amputation rates.
Conclusion:
Safe and function-preserving treatment of bone-associated sarcoma is challenging. Primary reconstruction of sarcoma-related defects with microsurgical techniques enables limb-sparing surgery and adequate oncosurgical cancer treatment without increasing the risk of local recurrence or prolonging the hospital stay.

Fluid resuscitation is of great importance in the management of burns. Various formulae have been described for calculating fluid management, especially in severely burned patients. Although the Parkland formula is widely used, its effectiveness and clinical value are debated controversially. We investigated the impact of the fluid volume calculated by the Parkland formula on the outcome of burned patients.
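For orientation only (the abstract itself does not state it), the standard Parkland estimate is 4 mL of crystalloid per kg of body weight per %TBSA burned over the first 24 h, with half given in the first 8 h; a trivial sketch:

```python
def parkland_24h_ml(weight_kg: float, tbsa_percent: float) -> float:
    """Parkland estimate: 4 mL crystalloid x kg x %TBSA over the first 24 h,
    half of it within the first 8 h post-burn."""
    return 4.0 * weight_kg * tbsa_percent

total = parkland_24h_ml(70.0, 30.0)  # e.g. 70 kg patient, 30% TBSA
print(total, total / 2.0)            # 8400.0 mL in 24 h, 4200.0 mL in 8 h
```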
Materials and methods:
All patients with thermal injuries in the German burn registry (VR-DGV project ID 2020/01) between January 2016 and December 2020 with an affected body surface area of more than 15% were analysed.
Patients were divided into two age groups: paediatric and adult. Outcome was compared with the volume of fluid given relative to the Parkland formula, and the analysis was performed with logistic regression.
Results:
480 children (0-15 years old) and 2096 adults (16-100 years old) were included in the analysis. In the paediatric group, logistic regression showed a low correlation between the Parkland formula and the given volume: 0.78 (95% CI 0.72-0.83). Fluid management in all centres deviated from the Parkland calculation, with the majority over-resuscitating regardless of the size of the centre.
The deviation appeared to depend on the cause of the accident. Giving less volume than calculated by the Parkland formula reduced the length of hospital stay. Only 5 children died, all of whom had received more fluid than the Parkland estimate. In the adult group, a negative deviation from the Parkland formula appeared to correlate with a shorter length of hospital stay, whereas it had no influence on mortality.
Conclusion:
This retrospective analysis provided no clear evidence of the validity of the Parkland formula. Even large data sets from a burn registry, such as that of the German Society for Burn Medicine, are limited in answering the "fluid question" definitively. Extended parameters such as urine output and blood pressure would be of great interest, as would a randomized study with low, medium, and high fluid intake groups. | 2023-05-28T06:16:31.186Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "5a6b23c2e54f9dee8302a915e699fdfb0ee8296f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5a6b23c2e54f9dee8302a915e699fdfb0ee8296f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
1589769 | pes2o/s2orc | v3-fos-license | Algebraic Formulation for the Dispersion Parameters in an Unstable Planetary Boundary Layer: Application in the Air Pollution Gaussian Model
An alternative formulation for the dispersion parameters in a convective boundary layer is presented. The development consists of a simple algebraic relation for the dispersion parameters, originating from the fitting of experimental data, in which the turbulent velocity variances and the Lagrangian decorrelation time scales are derived from the turbulent kinetic energy convective spectra. Assuming homogeneous turbulence for elevated regions in an unstable planetary boundary layer (PBL), the present approach, which provides the dispersion parameters, has been compared with observational data as well as with results obtained by classical complex integral formulations. This comparison shows that the vertical and lateral dispersion parameters obtained from the simple algebraic formulas reproduce, in an adequate manner, the spread of contaminants released from an elevated continuous source in an unstable PBL. Furthermore, the agreement with the dispersion parameters provided by an integral formulation indicates that the hypothesis of using an algebraic formulation as a surrogate for the dispersion parameters in the turbulent convective boundary layer is valid. In addition, the algebraic vertical and lateral dispersion parameters were introduced into an air pollution Gaussian diffusion model and validated with the concentration data of the Copenhagen experiments. The results of this Gaussian model, incorporating the algebraic dispersion parameters, are shown to agree with the measurements of Copenhagen.
INTRODUCTION
Our preoccupation with air pollution is a consequence of the explicit evidence that air contaminants negatively affect the health and welfare of human beings. Air contaminant concentrations influence the health of humans and animals, damage vegetation and materials, reduce visibility and solar radiation, and affect weather and climate [1].
The study and employment of operational short-range atmospheric dispersion models for environmental impact assessment have proved of great use in evaluating ecosystem perturbation on many distinct scales [2]. Therefore, short-range atmospheric dispersion models including a physical description of the Planetary Boundary Layer (PBL) are fundamental tools to evaluate the noxious effects of air pollutants on human health and on urban and agricultural environments [3]. Generally, such air quality short-range models are useful in predicting contaminant concentration magnitudes in atmospheric boundary layers generated by different forcing mechanisms and consequently with distinct degrees of complexity.
In operational applications, classical Gaussian diffusion models are largely employed to assess the impact of existing and proposed sources of air contaminants on local and urban air quality [1]. The simplicity of the Gaussian analytical model makes this approach particularly suitable for regulatory use in mathematical modeling of air pollution. Indeed, such models are quite useful in short-term forecasting. The lateral and vertical dispersion parameters, σ_y and σ_z respectively, represent the key turbulent parameterization in this approach, since they contain the physical ingredients that describe the dispersion process and, consequently, express the spatial extent of the contaminant plume under the effect of turbulent motion in the PBL [4].
The following simple algebraic relation has been employed to fit the observed dispersion parameters (σ_y, σ_z) in the PBL under different stability conditions [5-10]:

\sigma_\alpha^2(t) = \frac{\sigma_i^2\, t^2}{1 + t/(2 T_{L_i})},   (1)

where α = x, y, z and i = u, v, w; σ_i is the Eulerian standard deviation of the turbulent wind field, T_{L_i} is the Lagrangian decorrelation time scale, and t is the travel time of the fluid particle. Formulation (1) is an empirical relationship that satisfies the short- and long-time limits of Taylor's statistical diffusion theory. Its derivation was obtained directly from the fitting of experimental data [11].
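A small sketch (with illustrative parameter values) of the interpolation formula (1) as reconstructed above, verifying that it reproduces Taylor's ballistic limit σ_α ≈ σ_i t for t ≪ T_L and the diffusive limit σ_α² ≈ 2σ_i²T_L t for t ≫ T_L:

```python
import numpy as np

def sigma_alpha(t, sigma_i, T_L):
    """Eq. (1): sigma_alpha^2 = sigma_i^2 t^2 / (1 + t / (2 T_L))."""
    return sigma_i * t / np.sqrt(1.0 + t / (2.0 * T_L))

sigma_i, T_L = 0.6, 100.0                  # illustrative values (m/s, s)
t_short, t_long = 1e-3 * T_L, 1e3 * T_L
print(sigma_alpha(t_short, sigma_i, T_L) / (sigma_i * t_short))       # -> 1
print(sigma_alpha(t_long, sigma_i, T_L)
      / np.sqrt(2.0 * sigma_i ** 2 * T_L * t_long))                   # -> 1
```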
Recently, Degrazia et al. [12] showed that the velocity autocorrelation function derived from the functional form (1) satisfies the principal mathematical requirements suggested by Hinze [13] for homogeneous turbulence. Furthermore, this autocorrelation function also satisfies the inertial subrange conditions suggested by Tennekes [14] and Manomaiphiboon and Russell [15]. Based on Kolmogorov's theory [16], this means it captures the n^{-2} frequency falloff in the inertial subrange.
Most of the turbulence parameterizations employed in advanced dispersion models are based on PBL similarity theories [17-20]. Therefore, dispersion parameters described in terms of a similarity theory are directly related to the basic physical quantities describing the turbulence state of the PBL.
Through classical statistical diffusion theory [21], it is possible to relate turbulent parameters in the PBL to the spectral distribution of turbulent kinetic energy. Following this methodology, Degrazia et al. [22,23] developed a model for the turbulent spectra in a convective boundary layer and proposed a formulation for the Lagrangian decorrelation time scales and turbulent velocity variances described in terms of the unstable PBL similarity theory.
The following study aims at using Lagrangian decorrelation time scales and turbulent velocity variances, described in terms of the characteristics of the turbulent field in a convective boundary layer, to obtain simple algebraic expressions for the dispersion parameters. The hypothesis to be tested in the present analysis is that complex integral formulations for the dispersion parameters, which are only numerically solvable, can be represented by simple algebraic relations constructed from Eq. (1). To demonstrate that this hypothesis is valid, we compare the values of the lateral and vertical dispersion parameters evaluated from the algebraic expression (1) with those numerically obtained from an integral formulation. Furthermore, the algebraic and integral dispersion parameters are compared with measured values of σ_y and σ_z. As an additional purpose, this paper presents the formulation of a simple short-range Gaussian model that evaluates ground-level concentrations from elevated sources in a boundary layer dominated by moderate convection. The performance of this Gaussian model, incorporating the simple algebraic relationships and the integral formulations for the lateral and vertical dispersion parameters, is compared with ground-level concentrations from atmospheric dispersion experiments carried out in the Copenhagen area under moderately unstable conditions [24].
ALGEBRAIC AND INTEGRAL FORMULATION FOR THE DISPERSION PARAMETERS
The dimensional Eulerian velocity spectra under unstable conditions in the PBL can be described as a function of convective scales as follows [19]:

\frac{n S_i^E(n)}{w_*^2} = \frac{1.06\, c_i\, f\, \psi^{2/3}\, (z/z_i)^{2/3}}{(f_m^*)_i^{5/3}\left[1 + 1.5\, f/(f_m^*)_i\right]^{5/3}},   (2)

where c_i = α_i α_u (2πk)^{−2/3}, with α_u = 0.5 ± 0.05 and α_i = 1, 4/3, 4/3 for the u, v and w components, respectively [25]; for the lateral and vertical components this gives c_v = c_w = 0.4, values which derive from the isotropy condition in the inertial subrange. Here k = 0.4 is the von Kármán constant, f = nz/U is the nondimensional frequency, z is the height above the ground, U(z) = U is the horizontal mean wind speed at height z, (f_m^*)_i is the reduced frequency of the convective spectral peak, z_i is the height of the base of the inversion layer capping the daytime convective boundary layer, and w_* is the convective velocity scale. The nondimensional molecular dissipation rate function is defined as ψ = ε z_i / w_*³, where ε is the mean dissipation rate of turbulent kinetic energy per unit time per unit mass of fluid, whose order of magnitude is determined only by the scales that characterize the energy-containing eddies. Field observations in a convective PBL show that ψ ≈ 0.65 [26].
The analytical integration of Eq. (2) over the whole frequency domain leads to the turbulent velocity variance

\sigma_i^2 = \frac{1.06\, c_i\, \psi^{2/3}\, (z/z_i)^{2/3}}{(f_m^*)_i^{2/3}}\, w_*^2,   (3)

which is employed to normalize the spectrum, so that the normalized spectrum can be written as

F_i^E(n) = \frac{S_i^E(n)}{\sigma_i^2}.   (4)

Based on a model for the spectra of turbulent kinetic energy and Taylor's statistical diffusion theory, Degrazia et al. [23] derived a mathematical expression for the Lagrangian decorrelation time scale. For non-homogeneous turbulence this decorrelation time scale can be expressed as

T_{L_i} = \frac{\beta_i}{4}\, F_i^E(0),   (5)

where β_i = γU/σ_i [27-29] is defined as the ratio of the Lagrangian to the Eulerian decorrelation time scale, F_i^E(0) represents the normalized spectrum in which the high frequencies have been filtered out, and, following Wandel and Kofoed-Hansen [27], γ = √π/4 for fully developed isotropic homogeneous turbulence. The substitution of Eq. (3) and Eq. (4) into Eq. (5), using β_i = γU/σ_i, yields

T_{L_i} = \frac{\sqrt{\pi}}{16}\, \frac{z}{(f_m^*)_i\, \sigma_i}.   (6)
The substitution of the Eq. ( 3) and Eq. ( 4) into Eq.( 5) and using i = U i yields the following expression Finally, the substitution of Eq. ( 3) and Eq. ( 6) into Eq.( 1) leads to the following generalized algebraic expression for the dispersion parameters where Thusly, the vertical dispersion parameter from elevated sources in an unstable PBL that is first considered.By elevated, we mean that at this height the turbulence structure can be idealized as vertically homogeneous with the length scale of the energy-containing eddies being proportional to the convective boundary layer height z i , so that the peak vertical wavelength can be written as m Therefore, the vertical dispersion parameter for convective conditions can be obtained from Eqs. ( 7) and ( 8), employing c w = 0.4 and = 4 , it is expressed as Experimental observation in a convective boundary layer exhibiting horizontal homogeneity shows that the peak lateral wavelength can be represented by m ( ) v = 1.5z i [17].
From this observational consideration it follows that

(f_m^*)_v = \frac{z}{1.5\, z_i}.   (10)
COMPARISON OF THE PROPOSED PARAM-ETERIZATION WITH A CLASSICAL INTEGRAL FORMULATION DESCRIBING y AND z
To demonstrate that the model as given by ( 7) is valid, we compare the vertical and lateral dispersion parameters provided respectively by ( 9) and (11), with the following classical integral formulation proposed by Pasquill and Smith [30] expressing a Lagrangian in terms of the ratio of the Eulerian energy spectrum to the Eulerian velocity variance as the kernel of a Fourier transform in frequency space: Substituting Eqs. ( 3) and (4) and using i = U i into Eq.( 12), follows that Thus, the vertical dispersion parameter for convective condition can be derived from Eqs. ( 13) and ( 8), employing c w = 0.4 and = 4 . This integral formulation for z can be written as i z 2 Furthermore, the integral formulation for the lateral dispersion parameter using c v = 0.4 , = 4 and Eq.(10) into Eq.( 13), yields Despite the difference between Eqs. ( 14) and ( 15), originated from the turbulence energy spectrum, and Eqs. ( 9) and (11), which constitute experimental fittings, Figs. ( 1) and (2) show the existence of a good degree of agreement between the algebraic and integral formulation.Furthermore, in Figs.
(1) and (2) the expressions ( 9), ( 14), ( 11) and ( 15) are compared to the dispersion parameters ( z and y ) measured in the Copenhagen experiments.In the Copenhagen experiments the contaminants were released without buoyancy from a tower at a height of 115 m and collected at the ground-level positions at a maximum of three crosswind arcs of tracer sampling units.The sampling units were positioned 2 -6 km from the point of release [24].The meteorological conditions during the dispersion experiments, ranged from moderately unstable to convective, 43.04 (convective) < z i L < 1.42 (moderately unstable) [31].Additionally, using statistical indices Tables 1 and 2 exhibits a comparison of the dispersion parameters z , y ( ) meas- ured in Copenhagen experiments [24,31] with those calculated by the equations ( 9), ( 11), ( 14) and (15).From this statistical comparison it can be seen that the simple algebraic formulations reproduce fairly well the observed values of the lateral and vertical dispersion parameters.An explanation for the distinct statistical indices is given in the appendix.Therefore, the present investigation indicates that the dispersion parameters given by simple algebraic interpolation formulas can represent the turbulent dispersion of contaminants released from elevated sources in an unstable boundary layer.The great advantage of using the algebraic expressions for z end y is the fact that these formulas, under the computational point of view, are forty times faster than the numerical integrations and, consequently, they will be useful in the solution of large and complex atmospheric diffusion models.
COMPARISON WITH EXPERIMENTAL CONCENTRATION DATA
We evaluate the performance of the algebraic and integral parameterizations for the σ_z and σ_y dispersion parameters by applying the Gaussian plume model to the Copenhagen experimental concentration data set. For this comparison we used the measured values of the ground-level crosswind-integrated concentration and the centerline ground-level concentration, normalized by the source emission rate [24].
The Gaussian expressions for the ground-level crosswind-integrated concentration and the normalized ground-level concentration along the plume centerline are respectively given by [1]

\bar{C}^y(x,0) = \sqrt{\frac{2}{\pi}}\, \frac{Q}{\sigma_z U}\, \exp\left(-\frac{h^2}{2\sigma_z^2}\right)   (16)

and

\bar{C}(x,0,0) = \frac{Q}{\pi\, \sigma_y \sigma_z U}\, \exp\left(-\frac{h^2}{2\sigma_z^2}\right),   (17)

where Q is the source strength (emission rate) and h is the effective height of release above the ground.
The fundamental physical quantities in Eq. (16) and Eq. (17) are the vertical and lateral dispersion parameters, obtained from Eqs. (9), (11), (14) and (15). The substitution of these expressions into (16) and (17) directly provides the values of the ground-level concentrations of contaminants released from elevated continuous point sources located in a moderately unstable to convective PBL. Therefore, as a validation of the algebraic formulas for the vertical and crosswind spread of the plume developed in this study, the parameterizations given by Eqs. (9), (11), (14) and (15) are incorporated in the Gaussian plume model defined by Eqs. (16) and (17).
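A compact sketch of this substitution (with illustrative meteorological values; Q, U, h, z_i, w_* are inputs, not Copenhagen data): Eqs. (9) and (11) supply σ_z and σ_y, which are fed into Eqs. (16) and (17).

```python
import numpy as np

def gaussian_glc(x, Q, U, h, zi, wstar, psi=0.65):
    """Centerline ground-level concentration, Eq. (17), with the algebraic
    sigma_z and sigma_y of Eqs. (9) and (11)."""
    X = x * wstar / (U * zi)                               # nondimensional distance
    sz = zi * np.sqrt(0.63 * psi**(2/3) * X**2 / (1 + 2.0 * psi**(1/3) * X))
    sy = zi * np.sqrt(0.56 * psi**(2/3) * X**2 / (1 + 2.2 * psi**(1/3) * X))
    cwi = np.sqrt(2.0 / np.pi) * Q / (sz * U) * np.exp(-h**2 / (2 * sz**2))  # (16)
    return cwi / (np.sqrt(2.0 * np.pi) * sy)   # Eq. (17) = Eq. (16)/(sqrt(2 pi) sy)

# Copenhagen-like release height h = 115 m; the meteorology here is illustrative
print(gaussian_glc(x=3000.0, Q=1.0, U=5.0, h=115.0, zi=1200.0, wstar=1.5))
```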
In Tables 3 and 4 the observed ground-level concentrations are exhibited together with those computed from the Gaussian model employing the formulations given by (9), (11), (14) and (15). Furthermore, Figs. (3) and (4) show, respectively, the scatter diagrams of observed and predicted ground-level crosswind-integrated and centerline concentrations using the Gaussian model with vertical and lateral dispersion parameters given by Eqs. (9) and (11) (algebraic formulations) and (14) and (15) (integral formulations).
Finally, the datasets were evaluated with the statistical indices ([32], see appendix). Observing Figs. (3) and (4) and the statistical indices in Tables 5 and 6, one can conclude that the Gaussian model (Eqs. 16 and 17) incorporating the algebraic (Eqs. 9 and 11) and integral formulations (Eqs. 14 and 15) predicts the Copenhagen ground-level observed concentrations quite well. The overall good agreement between the Gaussian model predictions using the algebraic formulas for the dispersion parameters and the field data of ground-level concentration, as well as the comparison with the integral formulation for σ_z and σ_y, confirms that the simple algebraic relations (9) and (11) contain a realistic description of the energy-containing eddies that control turbulent dispersion in the unstable PBL.
CONCLUSIONS
Simple algebraic formulations for the lateral and vertical dispersion parameters in an unstable PBL are derived. The development is based upon an empirical algebraic relation in which the turbulent velocity variances and the Lagrangian decorrelation time scales are obtained from the turbulent kinetic energy spectra. By considering the turbulent field structure of the convective boundary layer as fairly homogeneous — that is, the length scale of energy-containing eddies proportional to the convective PBL height and the dimensionless turbulent kinetic energy dissipation rate constant — the present model for the dispersion parameters has been compared with experimental data and with values provided by classical integral formulations which are only numerically solvable. This comparison shows that the lateral and vertical dispersion parameters calculated from the simple algebraic formulas (Eqs. (9) and (11)) can describe the turbulent diffusion process in an unstable PBL. Furthermore, by using a Gaussian plume model and a dataset of diffusion experiments performed in an unstable PBL, ground-level contaminant concentrations calculated using the dispersion parameters given by the simple algebraic formulations (Eqs. (9) and (11)) were compared to the ones obtained with the classical integral formulation derived by Pasquill and Smith [30] (Eqs. (14) and (15)).
The validations used in this study show that the Gaussian short-range dispersion model employing the algebraic formulas for the lateral and vertical dispersion parameters reproduces well the measured concentrations from contaminants released from elevated continuous point sources situated in a moderately unstable to convective boundary layer. Therefore, the simple algebraic formulation for the dispersion parameters can be used as a surrogate parameterization for the complex integral formulation.
As a consequence of the analyticity of the expressions, their use in dispersion models removes problems associated with computational time and mathematical approximation. Therefore, the new algebraic dispersion parameters may be suitable for applications in regulatory short-range air pollution dispersion models.
APPENDIX (nomenclature of the spectral model): α_i = 1, 4/3, 4/3 for the u, v and w components respectively [25]; κ = 0.4 is the von Kármán constant; f = nz/U(z) is the nondimensional frequency; z is the height above the ground; U(z) = U is the horizontal mean wind speed at height z; n is the frequency; (f_m*)_i is the reduced frequency of the convective spectral peak; z_i is the height of the base of the inversion layer capping the daytime convective boundary layer; w_* is the convective velocity scale; and the nondimensional molecular dissipation rate function is defined as Ψ_ε = ε z_i/w_*³. The dimensionless travel time X is given by the ratio of the travel time (x/U) to the convective time scale (z_i/w_*), X = x w_*/(U z_i). For the lateral (σ_y) and vertical (σ_z) dispersion parameters, c_w = c_v = 0.4. These c_i values derive from the isotropy condition in the inertial subrange. Furthermore, Wandel and Kofoed-Hansen [27] have rigorously shown that, for the case of fully developed isotropic homogeneous turbulence, β_i = (√π/4) U/σ_i. | 2016-10-26T03:31:20.546Z | 2008-08-12T00:00:00.000 | {
"year": 2008,
"sha1": "525dce033a05411145c8992737bdd0407061ab63",
"oa_license": "CCBY",
"oa_url": "https://openatmosphericsciencejournal.com/VOLUME/2/PAGE/153/PDF/",
"oa_status": "HYBRID",
"pdf_src": "Grobid",
"pdf_hash": "525dce033a05411145c8992737bdd0407061ab63",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
215759737 | pes2o/s2orc | v3-fos-license | Betulinic acid triggers apoptosis and inhibits migration and invasion of gastric cancer cells by impairing EMT progress
Gastric cancer (GC) is one of the most prevalent types of malignancies. Betulinic acid (BA) is a natural pentacyclic triterpene with a lupane structure. However, to the best of our knowledge, there is no research study on the anti‐tumour and anti‐metastasis effect of BA on GC. In this study, we assessed the anti‐cancer effect of BA on human GC cells in vitro and in vivo. We first investigated the cytotoxicity and anti‐proliferation effect of BA on GC cells of SNU‐16 and NCI‐N87. The results indicated that BA had significant cytotoxic and inhibitory effects on GC cells in a dose‐ and time‐dependent manner. To further study the cytotoxic action of BA on GC cells, we assessed the apoptotic induction effect of BA on SNU‐16 cells and found that BA distinctly induced apoptosis in SNU‐16 cells. In addition, BA inhibited the migratory and invasive abilities of SNU‐16 cells. Western‐blot analysis revealed that BA suppressed the migration and invasion of GC cells by impairing epithelial‐mesenchymal transition progression. Furthermore, in vivo experiments showed that BA could delay tumour growth and inhibit pulmonary metastasis, which is consistent with the results of in vitro studies. Overall, we evaluated the anti‐cancer effect of BA on human GC cells in vivo and in vitro, and the present study provides new evidence on the use of BA as a potential anti‐cancer drug for GC treatment.
Significance of the study: BA significantly suppressed proliferation and triggered apoptosis in GC cells. Additionally, BA remarkably inhibited migration and invasion of GC cells by impairing the epithelial‐mesenchymal transition signalling pathway. It is worth noting that BA drastically retarded tumour growth in the xenograft mouse model of GC. Our results indicated that BA can be considered a candidate drug for GC therapy.
Many GC patients already have metastases at the time of diagnosis, and their 5-year survival rates are extremely low. 7 In 2018, 679 000 new cases and 498 000 cases of GC-related mortality were reported by the Chinese National Cancer Center. 8 With the use of multimodal treatments for GC, the overall 5-year survival rate has been found to stabilize at 40% worldwide. [9][10][11] Therefore, it is crucial to understand the molecular mechanisms of GC progression and metastasis for early diagnosis and treatment. Currently, modern treatments of GC, including surgery combined with radiotherapy, chemotherapy and targeted therapy, still have some drawbacks, such as low therapeutic effect, high toxicity, recurrence and even metastasis. 12,13 Therefore, it is important to find safe and effective drug candidates for GC treatment.
Natural products have been used to treat human diseases since ancient times, and they are vital to drug discovery and development. 14,15 Various anti-infection and anti-cancer drugs have been derived from natural products. [16][17][18][19] Additionally, the rapid development of new drugs that are more effective and have fewer adverse effects is a common goal shared by scientists and clinicians. 20,21 Traditional Chinese medicine is a huge treasury of remedies with low toxicity and high sensitivity, with the ability to stabilize tumour growth and improve GC prevention and treatment.
Betulinic acid (BA) (Figure 1A) (3β-hydroxy-lup-20(29)-en-28-oic acid) is a natural pentacyclic triterpene with a lupane structure, which is derived from plant sources such as acuminatissima leaves, white birch bark and wild jujube seeds. 22,23 Various studies have reported that BA has a variety of valuable medicinal effects, including anti-bacterial, anti-cancer, anti-malarial, anti-viral and anti-inflammatory. [24][25][26] Furthermore, BA has shown tumorigenesis inhibition in many kinds of cancers, including lung, colon, breast, prostate and pancreatic cancer. [27][28][29][30] In the present study, our results showed that BA distinctly triggered apoptosis and suppressed the proliferation, migration and invasion of GC cells in vitro. The in vivo anti-tumour efficacy was consistent with that observed in the in vitro studies.
| Cell lines and cell culture
SNU-16 and NCI-N87 cells were purchased from the American Type Culture Collection (ATCC). Cells were cultured in RPMI1640 medium (Gibco, Grand Island, New York) containing 10% fetal bovine serum (FBS) (Gibco, Grand Island, New York). All cells were kept in an atmosphere of 5% CO2 at 37°C.
[FIGURE 1: BA inhibits viability and proliferation of gastric cancer cells. A, The chemical structure of BA. B and C, The growth curves of SNU-16 and NCI-N87 cells treated with different concentrations of BA for 48 and 72 hours. D and E, The colony formation of SNU-16 and NCI-N87 cells treated with different concentrations of BA. Significant differences are indicated as *P ≤ .05; **P ≤ .01; ***P ≤ .001. BA, betulinic acid]
| Colony-formation assays
Approximately 500 GC cells were seeded into a six-well plate and cultured in media with various concentrations of BA (0, 2.5, 5, 10, 20, 40 and 80 μM). The cells were incubated at 37 C for 8 days.
When larger clones were found in the control group, the incubation was stopped. The culture medium was discarded, and cells were washed with phosphate-buffered saline (PBS), fixed with 800 μL of methanol per well for 10 to 20 minutes, and stained with 0.1% crystal violet at 37°C for 10 minutes. Finally, the stained cell clones in the dishes were imaged and counted. Each treatment was repeated three times.
| Western blot analysis
Total protein was extracted from the treated cells using 500 μL of radioimmunoprecipitation assay (RIPA) buffer with 1 mM phenylmethanesulfonyl fluoride (PMSF). The cell samples were immediately sonicated for 2 minutes to break the cell membranes and were then centrifuged. The supernatant containing the proteins was collected and stored at −20°C until further use. The proteins were separated on 10% polyacrylamide gels and then transferred onto a polyvinylidene fluoride membrane. The membranes were then washed with TBS containing 0.1% Tween-20 (TBST) and blocked with 5% nonfat milk (wt/vol) for 1 hour at room temperature.
After washing with TBST three times, the membranes were incubated with primary antibodies (E-cadherin, N-cadherin and β-actin) overnight at 4°C. Next, the membranes were washed again with TBST and incubated with horseradish peroxidase-labelled immunoglobulin G (dilution, 1:5000) at room temperature for 1 hour. Finally, the protein bands were visualized using enhanced chemiluminescence.
| Xenograft tumour model in nude mice
The animal experiments in this study were approved by the institutional Ethical Committee of the First Affiliated Hospital of Gannan Medical College (see the Ethics Statement below).
| Immunohistochemistry
The tumour samples were resected from the mice, fixed in 10% formaldehyde, embedded in paraffin, and sectioned (5 μm thickness). The paraffin tumour sections were analysed by Immunohistochemistry (Ki-67 and MMP-2). Then, the sections were dehydrated and fixed, and the slides were sealed with neutral gum. Pictures were obtained using the Leica microscope (Leica, DM4000B).
| Statistical analysis
All experiments were conducted at least three times. Data are shown as mean ± standard deviation (SD). A two-tailed Student's t test and a one-way analysis of variance (ANOVA) were used for statistical analysis of the data.
Significant differences are indicated by P-values as follows: *P < .05; **P < .01; ***P < .001.

Furthermore, colony-formation assays were carried out to study the anti-proliferation effect of BA on SNU-16 and NCI-N87 cells. As shown in Figure 1D,E, the colony-formation ability of SNU-16 and NCI-N87 cells was significantly inhibited by BA treatment. Taken together, these results show that BA exerted a sufficient anti-proliferation effect on GC cells.
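A minimal sketch of the statistical analysis described above, using SciPy; the measurements here are invented placeholders, and the paper does not state which software was actually used.

```python
from scipy import stats

# Placeholder measurements (e.g. colony counts), not the paper's actual data:
control = [98, 105, 101]
treated = [62, 58, 66]        # one BA concentration
treated_high = [30, 27, 33]   # a higher BA concentration

# Two-tailed Student's t test between control and one treatment group:
t_stat, p_two_groups = stats.ttest_ind(control, treated)

# One-way ANOVA across all groups:
f_stat, p_anova = stats.f_oneway(control, treated, treated_high)

print(f"t test P = {p_two_groups:.4f}, ANOVA P = {p_anova:.4f}")
```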
| DISCUSSION
GC is one of the most prevalent types of malignancies. 1 It has high morbidity and mortality, and is the third leading cause of cancer-related mortality worldwide. 2,3 Therefore, the need for new and effective anti-GC drug treatments is urgent. Even though chemical drugs such as Dox and PTX can effectively destroy the DNA in tumour cells, severe side effects are inevitable. 31 Natural products from plants and animals exhibit high efficiency, low toxicity and other benefits. 32 It is widely reported that BA is cytotoxic to various types of human cancer cells. [28][29][30] In the present study, we assessed the anti-tumour effect of BA on GC cells. 33 Lewinska et al. showed that BA-mediated changes in the glycolytic pathway promote cytotoxic autophagy and apoptosis in phenotypically different breast cancer cells. 34 Furthermore, BA has proved to possess immunomodulatory activity through the production of pro-inflammatory cytokines and the activation of macrophages. 35 Therefore, the other anti-tumour mechanisms of BA should be further investigated in the future.
In addition, the migration, invasion and EMT process of cancer cells are closely related to metastasis. BA was reported to inhibit breast cancer metastasis by targeting GRP78-mediated glycolysis and the ER stress apoptotic pathway. 36 Similarly, it has been reported that BA inhibits stemness and EMT of pancreatic cancer cells via activation of AMPK signalling. 37 Furthermore, Nuclear Factor-κB is responsible for the anti-proliferation and anti-migration effects of BA in urothelial tumorigenesis. 38 In conclusion, we assessed the anti-tumour effect of BA on human GC cells by conducting in vitro and in vivo experiments and found that BA induced apoptosis and suppressed the migration and invasion of GC cells. Overall, the present study shows that BA might serve as an anti-tumour drug for GC therapy.
ETHICS STATEMENT
All of the animal experiments in this study were performed according to the National Institutes of Health (Bethesda, MD, USA) guidelines and were approved by the Ethical Committee of First Affiliated Hospital of Gannan Medical College (Ganzhou, China).
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
AUTHOR CONTRIBUTIONS
Yun Chen and Yun Zhou designed the experiments; Xiongjian Wu and
Chi Liu were involved in performing the designed experiments; Chi Liu and Yun Zhou were responsible for analysing the data and writing and approving the manuscript.
DATA AVAILABILITY STATEMENT
The data sets used or analyzed in this study are available from the corresponding author on reasonable request. | 2020-04-15T13:06:49.255Z | 2020-04-13T00:00:00.000 | {
"year": 2020,
"sha1": "06782e8dc903f50b04de7e7974cb10a91278d630",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cbf.3537",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c270bf26c44d3e2eb870de224a1d8fdd5c0df1e9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
119632766 | pes2o/s2orc | v3-fos-license | Local convergence of large random triangulations coupled with an Ising model
We prove the existence of the local weak limit of the measure obtained by sampling random triangulations of size $n$ decorated by an Ising configuration with a weight proportional to the energy of this configuration. To do so, we establish the algebraicity and the asymptotic behaviour of the partition functions of triangulations with spins for any boundary condition. In particular, we show that these partition functions all have the same phase transition at the same critical temperature. Some properties of the limiting object -- called the Infinite Ising Planar Triangulation -- are derived, including the recurrence of the simple random walk at the critical temperature.
Introduction
In 2003, in order to define a model of generic planar geometry, Angel and Schramm studied the limit of uniform triangulations on the sphere, [8]. They proved that this model of random maps converges for the Benjamini-Schramm topology defined in [11], and also called the local topology. The limiting object is a probability distribution on infinite triangulations, now known as the UIPT, for Uniform Infinite Planar Triangulation. Soon after, Angel [6] studied some properties of the UIPT. He established that the volume of the balls of the UIPT of radius R scales as R 4 and that the site-percolation threshold is 1/2. Similar results (but with quite different proofs) were then obtained for quadrangulations by Chassaing and Durhuus [26] and Krikun [48]. Since then, the local limit of random maps has become an active area of research. The UIPT is now a well-understood object: the simple random walk on the UIPT is known to be recurrent [41], precise estimates about the volume and the perimeter of the balls of radius r are available [33], geodesic rays are known to share infinitely many cutpoints [35] and percolation is fairly well understood [6,7,14,15,31,40]. We refer to the recent survey by Le Gall [51] or the lecture notes by Miermont [56] for nice entry points to this field, and to [60] for a survey of the earlier combinatorial literature on random maps.
The results cited above deal with models of maps that fall in the same "universality class", identified in the physics literature as the class of "pure 2d quantum gravity": the generating series all admit the same critical exponent and the volume of the balls of the local limits of several of those models of random maps are known to grow as R 4 . To capture this universal behaviour, a good framework is to consider scaling limits of random maps (of finite or infinite size) in the Gromov Hausdorff topology. Indeed, for a wide variety of models the scaling limit exists and is either the Brownian map [3,4,16,50,53,55] or the Brownian plane [9,32].
To escape this pure gravity behaviour, physicists long ago understood that one should "couple gravity with matter", that is, consider models of random maps endowed with a statistical physics model. From a combinatorial point of view, evidence for the existence of other universality classes was first given by constructing models, like tree-rooted maps or triangulations endowed with Ising configurations, whose generating series exhibit a different asymptotic behaviour at criticality. One of the first such results, and the most relevant for our work, appears in [21], where Boulatov and Kazakov initiated the study of Ising models on random triangulations (following some earlier work by Kazakov [47]). They established the existence of a phase transition, the critical value of the model and the corresponding critical exponents. Their result is based on the expression of the generating series of the model as a matrix integral and the use of orthogonal polynomial methods. Their result was later rederived via bijections with trees by Bousquet-Mélou and the third author [23] and by Bouttier, di Francesco and Guitter [24], and more recently via a tour-de-force in generatingfunctionology by Bernardi and Bousquet-Mélou [13] building on a seminal series of papers by Tutte on the enumeration of colored maps, synthesized in [62].
Main results
The aim of this paper is to build on these latter ideas to prove the local convergence of random triangulations endowed with Ising configurations. To state our main result, let us first introduce some terminology. Precise definitions will be given in Section 2.1. For T a rooted finite triangulation of the sphere, a spin configuration on (the vertices of) T is an application σ : V(T) → {⊕, ⊖}. We denote by T_f the set of finite triangulations endowed with a spin configuration. For (T, σ) ∈ T_f, we write m(T, σ) for its number of monochromatic edges. Then, for n ∈ N and ν > 0, let P_n^ν be the probability distribution supported on elements of T_f with 3n edges, defined by:

P_n^ν((T, σ)) = ν^{m(T,σ)} / Z_n(ν),  where  Z_n(ν) = Σ_{(T',σ') ∈ T_f : |T'| = 3n} ν^{m(T',σ')}.   (1)

Writing ν = exp(−2β), this is the probability distribution obtained when sampling a triangulation with 3n edges together with a spin configuration on its vertices with a probability proportional to the energy in the Ising model, defined by exp(−β Σ_{(v,v') ∈ E(T)} σ(v)σ(v')). In particular, the model is ferromagnetic for ν > 1 and antiferromagnetic for 0 < ν < 1. The case ν = 1 corresponds to uniform triangulations. Following Benjamini and Schramm [11], we equip the set T_f with the local distance d_loc. For (T, σ), (T', σ') in T_f, set:

d_loc((T, σ), (T', σ')) = (1 + sup{R ≥ 0 : B_R(T, σ) = B_R(T', σ')})^{−1},

where B_R(T, σ) is the submap of T composed of its faces having at least one vertex at distance smaller than R from its root vertex, with the corresponding spins. The only difference with the usual setting is the presence of spins on the vertices: in addition to the equality of the underlying maps, we require equality of the spins as well.
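To make the weighting in (1) concrete, here is a minimal sketch (with a made-up toy map, not one of the paper's objects) of how the number of monochromatic edges and the corresponding Boltzmann weight ν^{m(T,σ)} would be computed from an edge list and a spin assignment.

```python
def monochromatic_edges(edges, spin):
    """Count edges whose two endpoints carry the same spin.

    edges: list of (u, v) pairs (a loop u == v counts as monochromatic);
    spin:  dict mapping each vertex to +1 or -1.
    """
    return sum(1 for u, v in edges if spin[u] == spin[v])

def ising_weight(edges, spin, nu):
    """Unnormalized weight nu**m(T, sigma) of a spinned map, as in Eq. (1)."""
    return nu ** monochromatic_edges(edges, spin)

# Toy example: a triangle with one flipped spin.
edges = [(0, 1), (1, 2), (2, 0)]
spin = {0: +1, 1: +1, 2: -1}
print(monochromatic_edges(edges, spin), ising_weight(edges, spin, nu=2.0))  # 1, 2.0
```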
The closure (T, d_loc) of the metric space (T_f, d_loc) is a Polish space, and elements of T \ T_f are called infinite triangulations with spins. The topology induced by d_loc is called the local topology. As is often the case with local limits of planar maps, we will be especially interested in the elements of T that are one-ended, that is, the infinite triangulations (T, σ) ∈ T for which (T, σ) \ B_R(T, σ) has a unique infinite connected component for every R.
Our main theorem is the following result: Theorem 1. For every ν > 0, the sequence of probability measures P ν n converges weakly for the local topology to a limiting probability measure P ν ∞ supported on one-ended infinite triangulations endowed with a spin configuration.
We call a random triangulation distributed according to this limiting law the Infinite Ising Planar Triangulation with parameter ν or ν-IIPT.
Our approach to prove this convergence result is akin to Angel and Schramm's initial approach for the UIPT: in particular it requires precise information about the asymptotic behaviour of the partition function of large Ising triangulations, with an arbitrary fixed boundary condition, see Theorem 6. This result, which does not follow from earlier results [21,13,23], constitutes a significant part of this work and is of independent interest. One of the main technical challenges to obtain this result is to solve an equation with two catalytic variables. This is done in Theorem 13 using Tutte's invariants method, following the presentation of [13].
As expected, these partition functions all share the same asymptotic behaviour, which presents a phase transition for ν equal to ν c := 1 + √ 7/7. This critical value already appeared in [21,13,23], and we call critical IIPT the corresponding limiting object. The study of this critical IIPT is the main motivation for this work, since, as mentioned above, it is believed to belong to a different class of universality than the UIPT. However, these two models share some common features, as illustrated by the following theorem: Theorem 2. The simple random walk on the critical IIPT is almost surely recurrent.
Our strategy to prove this result does not rely on the specificity of ν c , but requires a numerical estimate which prevents us from extending this result to a generic ν. However, the same proof would work for any fixed ν between 0.3 and 2 (see Remark 31) and we conjecture that the IIPT is recurrent for every value of ν.
Finally, as a byproduct of the proof of Theorem 1, we prove a spatial Markov property for the ν-IIPTs (Proposition 26) and some of its consequences. We also provide a new tightness argument (see Lemma 17) that seems simple enough to be adapted to other models since it does not require explicit computations as was the case in previous works.
Connection with other works
Our results should be compared to the recent preprint of Chen and Turunen [29] where they consider random triangulations with spins on their faces, at a critical parameter similar to our ν c and with Dobrushin boundary conditions (i.e. with a boundary formed by two monochromatic arcs, similarly as in Figure 5). In the first part of their paper, the authors compute explicitly the partition function of this model by solving its Tutte's equation, obtaining a result comparable to Theorem 13. While their proof also relies on the elimination of one of their two catalytic variables, it does not use Tutte's invariant like ours. However, as was explained to us by Chen, their algebraicity result and Theorem 13 are equivalent and can be deduced from one another by a clever argument based on the relation between the Tutte polynomial of a planar map and that of its dual.
In the second part of their paper, Chen and Turunen show that their model has a local limit in distribution when the two components of the Dobrushin boundary tend to infinity one after the other. The fact that they consider these particular boundary conditions allows them to make explicit computations on Boltzmann triangulations and to construct explicitly the local limit using the peeling process along an Ising interface. They also derive some properties of this interface.
At the discrete level, the Ising model is closely related, via spin cluster interfaces, to the O(n) model: this latter model has been studied on triangulations or bipartite Boltzmann maps via a gasket decomposition approach in a series of papers [18,19,20,17,25,28], revealing a remarkable connection with the stable maps of [52]. In particular this approach allows one to identify a dense phase, a dilute phase and a generic phase for the loop configuration. We believe that our approach is suitable to study the geometry of the spin clusters of the Ising model and might shed some additional light on this connection with stable maps. We plan to return to this question soon in a sequel of the present paper.
Let us end this introduction by mentioning the conjectured links between models of decorated maps and Liouville Quantum Gravity (LQG), which is a one-parameter family of random measures on the sphere [38]. Physicists believe that most models of decorated maps converge to the LQG for an appropriate value of the parameter. In particular, the Ising model should converge to the √ 3-LQG. Such a convergence has been established in the case of "pure quantum gravity", corresponding to uniform planar maps and γ = 8/3, in the impressive series of papers by Miller and Sheffield [57,58,59]. Obtaining such a result for a model of decorated maps outside the pure-gravity class seems out of reach for the moment. However -building on the so-called mating-of-trees approach initiated by Sheffield [61] and which has allowed to obtain various local convergence results for models of decorated maps (see e.g. [27,12,44,46,45]) -Gwynne, Holden and Sun [42] managed to prove that for some models of decorated maps, including the spanning-tree decorated maps, bipolar oriented maps and Schnyder wood decorated maps, the volume growth of balls in their local limit is given by the "fractal dimension" d γ , for the conjectured limiting γ-LQG (see also [43] for a recent survey on this topic by the same authors).
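Watabiki's prediction, recalled in the next paragraph, can be checked numerically; the following snippet (purely illustrative, not part of the paper) reproduces the values d_{√(8/3)} = 4 and d_{√3} ≈ 4.212 quoted below.

```python
import math

def watabiki(gamma):
    """Watabiki's predicted fractal dimension d_gamma for gamma-LQG."""
    q = 1.0 + gamma**2 / 4.0
    return q + math.sqrt(q**2 + gamma**2)

print(watabiki(math.sqrt(8.0 / 3.0)))  # pure gravity: exactly 4
print(watabiki(math.sqrt(3.0)))        # Ising case: about 4.212
```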
The value of d_γ is only known in the pure gravity case, and d_{√(8/3)} = 4. For other values of γ, only bounds are available. As of today, the best ones have been established by Ding and Gwynne in [37]. Except when γ is close to 0, these bounds are compatible with Watabiki's famous prediction for d_γ [63]:

d_γ^{Wat} = 1 + γ²/4 + √((1 + γ²/4)² + γ²).

As far as we understand, the Ising model does not fall into the scope of this mating-of-trees approach and, so far, we are not able to derive information on the volume growth of balls in the IIPT. Watabiki's prediction gives d_{√3} ≈ 4.212, and the bounds of Ding and Gwynne are compatible with this value. If we believe in the connection between the critical IIPT and √3-LQG, this is a strong indication that its volume growth should be bigger than 4. We hope that the present work will provide material for the rigorous study of metric properties of two-dimensional quantum gravity coupled with matter.

We also consider triangulations with holes, which are planar maps such that every face has degree 3, except for a given number of special faces enclosed by simple paths that will be called holes. The size of a planar map M is its number of edges and is denoted by |M|.
The maps we consider are always endowed with a spin configuration: a given map M comes with an application σ from the set V(M) of its vertices to the set {⊕, ⊖}. An edge {u, v} of M is called monochromatic if σ(u) = σ(v) and frustrated otherwise. The number of monochromatic edges of M is denoted by m(M).
Let p be a fixed positive integer and ω = ω₁ ⋯ ω_p be a word of length p on the alphabet {⊕, ⊖}. The set of triangulations of a p-gon of size n is denoted by T_n^p (boundary edges are counted). Likewise, the set of finite triangulations of the p-gon is denoted by T_f^p. Moreover, we write T_f^ω for the subset of T_f^p consisting of all triangulations of the p-gon endowed with a spin configuration such that the word on {⊕, ⊖} obtained by listing the spins of the vertices incident to the root face, starting with the target of the root edge, is equal to ω (see Figure 1).
We now introduce the generating series that will play a central role in this paper and are the subject of our main algebraicity theorem. For any positive integer p, the generating series of triangulations of a p-gon endowed with an Ising model with parameter ν is defined as:

Z_p(ν, t) = Σ_{(T,σ) ∈ T_f^p} ν^{m(T,σ)} t^{|T|}.

For every fixed word ω ∈ {⊕, ⊖}^p, we also set

Z_ω(ν, t) = Σ_{(T,σ) ∈ T_f^ω} ν^{m(T,σ)} t^{|T|}.

In particular, the generating series of triangulations of a p-gon with positive boundary conditions is given by:

Z_p^+(ν, t) = Z_{⊕^p}(ν, t),

where ⊕^p denotes the word made of p times the letter ⊕.
To normalize the probability P_n^ν defined by (1) in the introduction, we consider Z(ν, t), the generating series of triangulations of the sphere. It is linked to the generating series of the 1-gon and of the 2-gon by the following relation:

Z(ν, t) = (2/t) ( Z_{⊕⊕}(ν, t)/ν + Z_{⊕⊖}(ν, t) + Z_⊕(ν, t)²/ν ).   (2)

Indeed, if the root edge of a triangulation of the sphere is not a loop, by opening it we obtain a triangulation of the 2-gon, giving the first two terms in the sum (we divide Z_{⊕⊕} by ν in order to count the root edge as a monochromatic edge only once). On the other hand, if the root edge is a loop, we can decompose the triangulation into a pair of triangulations of the 1-gon, giving the last term in the sum (again divided by ν, since the root loop is monochromatic and would otherwise be counted twice). In both cases, the factor 2/t is here to count the root edge only once and to take into account the fact that the root vertex can have spin ⊖ (obviously Z_⊕ = Z_⊖, Z_{⊕⊕} = Z_{⊖⊖} and Z_{⊕⊖} = Z_{⊖⊕}).
Figure 2: How to transform a triangulation of the sphere into a triangulation of the 2-gon (left) or into two triangulations of the 1-gon (right). The p-gon is shaded.
Definition of Ising-algebraicity and main algebraicity result
Since the number of edges of a triangulation of a p-gon is congruent to −p modulo 3 (each triangular face has 3 half-edges), the series t^p Z_p (or t^p Z_ω if ω has length p) can also be seen as a series in the variable t³, which counts the vertices of the triangulation (minus 1; this is a direct consequence of Euler's formula). The different generating series introduced in the previous section all share common features. In particular, they all have the same radius of convergence. This will be proven later but, since we need the value of this common radius of convergence to state our results, let us define it now. This quantity ρ_ν satisfies P₁(ν, ρ_ν) P₂(ν, ρ_ν) = 0, where P₁ and P₂ are the two explicit polynomials given in (3) and (4). To properly define ρ_ν as a function of ν, we have to specify which branch of P₁ or of P₂ to consider. The situation, which is illustrated in Figure 3 and detailed in the proof of Proposition 10, is the following. The polynomial P₂ has real roots only for ν ∈ (0, 3]. Of its two branches, only one takes positive values. This branch will be called the first branch of P₂; it is given by an explicit expression in ν involving the principal value √· of the square root. As a function of ν, it is decreasing, continuous and positive for ν ∈ (0, ν_c]. The polynomial P₁ has a unique real root for ν < 3 and three real roots for ν ≥ 3. Among these real roots, two can take positive values for ν > 0, and they meet at ν = 1 + 2√2. We define w₁(ν) as the only real root of P₁ for ν < 3, its larger positive root for ν ∈ [3, 1 + 2√2], and its smaller positive root for ν ≥ 1 + 2√2. This defines a branch of P₁ that we call the first branch of P₁. As a function of ν, it is decreasing, continuous and positive for ν ≥ ν_c.
Definition 3. The function ρ_ν is then defined as follows. Let w₂ be the unique branch of P₂ that is positive for ν ∈ (0, ν_c] and w₁ be the unique branch of P₁ that is positive and decreasing for ν ≥ ν_c. For every ν > 0 we set:

ρ_ν = w₂(ν) for ν ∈ (0, ν_c]  and  ρ_ν = w₁(ν) for ν ≥ ν_c.

This defines a continuous and decreasing bijection from (0, +∞) onto (0, +∞) (see Figure 3 for an illustration); the value at ν_c is ρ_{ν_c} = w₁(ν_c) = w₂(ν_c). The following property is going to be ubiquitous in the rest of the paper:

Definition 4. A series S ≡ S(ν, t) is said to be Ising-algebraic, with parameters A, B and C introduced below, if:

1. For any positive value of ν, the generating series S, seen as a series in t³, is algebraic, and ρ_ν = (t_ν)³ is its unique dominant singularity.
2. The series S satisfies the following singular behaviour: there exist non-zero constants A(ν), B(ν) and C(ν) such that:

• For ν ≠ ν_c,

S(ν, t) = A(ν) + B(ν)(1 − t³/ρ_ν) + C(ν)(1 − t³/ρ_ν)^{3/2} + o((1 − t³/ρ_ν)^{3/2}).

This is the standard critical behaviour of planar map series, with an exponent 3/2.

• But, at ν = ν_c, the nature of the singularity changes and:

S(ν_c, t) = A(ν_c) + B(ν_c)(1 − t³/ρ_{ν_c}) + C(ν_c)(1 − t³/ρ_{ν_c})^{4/3} + o((1 − t³/ρ_{ν_c})^{4/3}).
• But, at ν = ν c , the nature of the singularity changes and: Ising-algebraic series all share the same asymptotic behaviour: is Ising-algebraic with parameters A, B and C, then for any ν > 0, we have, as n → ∞: where Proof. This is a direct consequence of the general transfer theorem [39, Thm VI.3, p.390].
Our main algebraicity result is the following theorem:

Theorem 6. Denote by {⊕, ⊖}⁺ the set of finite nonempty words on the letters ⊕ and ⊖. For any ω ∈ {⊕, ⊖}⁺, the series t^{|ω|} Z_ω(ν, t) is Ising-algebraic with some parameters A_ω, B_ω and C_ω. In particular, for any ν > 0, there exists κ_ω(ν) ∈ R_{>0} such that, as n → ∞,

[t^{3n}] t^{|ω|} Z_ω(ν, t) ∼ κ_ω(ν) n^{−α} ρ_ν^{−n}.

Similarly, for triangulations of the sphere, we have as n → ∞:

[t^{3n}] Z(ν, t) ∼ κ(ν) n^{−α} ρ_ν^{−n},

with α = 5/2 if ν ≠ ν_c and α = 7/3 if ν = ν_c.
Main steps of the proof of Theorem 6
The rest of this section is devoted to the proof of Theorem 6. First we recall in Section 2.3 the result of Bernardi and Bousquet-Mélou [13] about triangulations with a (non-simple) boundary of size 1 or 3. We then show how a squeeze-lemma-type argument allows us to extend their result to various models of triangulations, provided that algebraicity is proved. Then, the main piece of work is to prove that the generating series of triangulations of a p-gon with positive boundary conditions are algebraic, see Section 2.4. Finally, a double induction on the length of the boundary and on the number of ⊕ on the boundary allows us to conclude the proof, see Section 2.4.3.
Generating series of triangulations with a boundary, following [13]
Let Q denote the set of triangulations with a boundary (not necessarily simple), and Q_p the subset of these triangulations with boundary length equal to p. Following [13], we define the corresponding generating series Q_p(ν, t), with a weight ν per monochromatic edge and t per edge; in particular we consider Q₁ and Q₃. Explicit expressions for Q₁ and Q₃ have been established by Bernardi and Bousquet-Mélou:

Theorem 7 (Theorem 23 of [13]). Define U ≡ U(ν, t) as the unique formal power series in t³ having constant term 0 and satisfying the algebraic equation (8). Then, there exist explicit polynomials R₁ and R₃ in U, with coefficients rational in ν, expressing tQ₁ and t³Q₃ as rational functions of ν and U (these expressions are referred to as (9) below).

Remark 8. Theorem 23 of [13] gives a different parametrization than the one given in Theorem 7. The two are linked by a simple change of variables (which already appears in [13]). With this change of variables, the values of R₁ and R₃ can be written explicitly; for instance, they involve terms such as −2(100ν³ + 1173ν² + 2098ν + 1237)(ν + 1)²U³ + 4(ν + 1)(8ν⁴ + 218ν³ + 637ν² + 708ν + 285)U² ⋯

These explicit parametrizations allow us to study the singularities of the series Q₁ and Q₃, as is partly done in [13] but without a full proof. One of the main missing steps is a complete study of the singular behaviour of U. This is the purpose of the following lemma:

Lemma 9. For every fixed ν > 0, the series U, defined by (8) and seen as a series in t³, has nonnegative coefficients and radius of convergence ρ_ν. In addition, it is convergent at ρ_ν, has a unique dominant singularity (at ρ_ν), and has the following singular behaviour: there exists a positive explicit constant ℵ(ν) such that

U(ρ_ν) − U(t³) ∼ ℵ(ν)(1 − t³/ρ_ν)^{1/2} for ν ≠ ν_c, and U(ρ_{ν_c}) − U(t³) ∼ ℵ(ν_c)(1 − t³/ρ_{ν_c})^{1/3} at ν = ν_c.

Proof. All the computations are available in the companion Maple file [2]. From now on, we always consider U as a power series in t³. We first prove that its coefficients are nonnegative. Let us write F(w) = t³ · tQ₁(t) with w = t³. It is the generating series of triangulations of the 1-gon with weight w per vertex and ν per monochromatic edge. As a power series in w, the series F has obviously nonnegative coefficients, since it is the generating series of triangulations of the 1-gon with a distinguished vertex. To prove that the coefficients of U are nonnegative, we will prove a striking identity, (10), relating F and U, from which the nonnegativity follows. This identity has a combinatorial interpretation, but discussing it here would take us too far from the subject of this article; we plan to return to it in a future work. Let us denote by U• the derivative of U with respect to w. By differentiating (8) with respect to w, we obtain an expression for U• in terms of U, and from there a tedious but basic computation yields (10).
We now prove that the radius of convergence w₀(ν) of U is equal to ρ_ν. First, from (10), we deduce that w₀ is also the radius of convergence of F. Therefore, we can see that w₀(ν) is a non-increasing function of ν. In addition, it is continuous. Indeed, a triangulation of the 1-gon with n + 1 vertices has at most 3n monochromatic edges, so that for ν₁ ≤ ν₂ and for every w ≥ 0 we have:

F_{ν₁}(w) ≤ F_{ν₂}(w) ≤ F_{ν₁}((ν₂/ν₁)³ w).

Therefore w₀(ν₁) ≥ w₀(ν₂) ≥ (ν₁/ν₂)³ w₀(ν₁), which proves that w₀(ν) is continuous. Known results about triangulations without spins ensure that w₀(1) > 0. Combined with the previous inequalities, this implies that w₀(ν) > 0 for every ν > 0.
Since the series U has nonnegative coefficients, its radius of convergence is a singularity by Pringsheim's Theorem. Therefore it is amongst the roots of the discriminant of the algebraic equation satisfied by U . This discriminant factorises into P 1 (ν, w) · P 2 (ν, w) given by (3) and (4) and we have to identify the correct root.
First, we start by identifying the values of ν for which P₁ and P₂ have a common positive root. The resultant of these two polynomials in w factorises into several terms. The factor of degree 4 has no positive root and is irrelevant to us. Another factor vanishes at ν = 3, but for ν = 3 it is easy to verify that the common root of P₁ and P₂ is negative. This leaves the factor of degree 2 in ν, whose roots are ν_c and 1 − √7/7. Again, when ν = 1 − √7/7, it is easy to verify that the common root of P₁ and P₂ is negative. This leaves ν_c, for which we can verify that the common root of P₁ and P₂ is ρ_{ν_c}; since all the other roots of P₁ and P₂ are not positive real numbers, it implies that w₀(ν_c) = ρ_{ν_c}.
We now turn our attention to values of ν different from ν_c. In those cases, we know that ρ_ν is a root of P₁ or of P₂, but cannot be a common root. It remains to identify the correct root. The discriminant of P₂ is −27648ν²(ν + 1)³(ν − 3)³. Thus, for ν > 3, both roots of P₂ are imaginary. An easy analysis shows that for ν ∈ (0, 3], only one of its two roots can take nonnegative values: it is the root w₂(ν) defined above (the first branch of P₂), which is by Definition 3 equal to ρ_ν for every ν ∈ (0, ν_c].
The discriminant of P₁ is 82556485632 ν¹⁸ ⋯. Therefore, for ν < 3, P₁ has a unique (possibly multiple) real root, and for ν > 3, it has three real roots (one is a double root for ν = 1 + 2√2). The situation is illustrated in Figure 3. Among the three branches of P₁, the one that is real for every ν > 0 is decreasing for ν ≥ ν_c. We denoted this branch earlier by w₁(ν). By Definition 3, it is equal to ρ_ν for every ν ≥ ν_c. The other branches of P₁ are also real for ν ≥ 3; one stays negative, and the other can be positive, is increasing in ν and intersects w₁ at ν = 1 + 2√2 (it is called the second branch of P₁ in Figure 3).
From the previous description of the roots of P₁ and P₂, we have w₀(3) = w₁(3), since it is the only positive root for this value of ν. The fact that w₀ is nonincreasing in ν then implies that w₀(ν) = w₁(ν) = ρ_ν for every ν ≥ ν_c. A simple check for ν = 1 shows that w₀(1) = w₂(1) and, since w₀ is continuous and w₁ and w₂ are only equal at ν_c, we have w₀(ν) = w₂(ν) = ρ_ν for every ν ∈ (0, ν_c]. We now turn to the claim that U has a unique dominant singularity. We have to identify the roots of P₁ and P₂, other than the radius of convergence ρ_ν, that lie on the circle of convergence, and test whether they correspond to singularities. The following computations are done in the Maple companion file [1]. On several occasions, we will need to know the value of U(ρ_ν). It is classical that the value of U at its radius of convergence is the smallest positive root of the characteristic equation associated with the algebraic equation (8) for U(t³); this characteristic equation is a rational fraction whose numerator has three factors, denoted ψ₁, ψ₂ and ψ₃ in (12). The three factors have common roots for ν ∈ {1 − √7/7, 1, ν_c, 3}. For ν = 3, the smallest positive root is 1/8, which is a root of ψ₃ and not of the other two factors. For ν = ν_c, only ψ₂ and ψ₃ have a common root; it is also the smallest positive root and is therefore U(ρ_{ν_c}). For ν = 1, ψ₁ and ψ₃ have 1/2 as a common root, but the smallest positive one is a root of ψ₂. Finally, for ν = 1 − √7/7, ψ₂ and ψ₃ have a common root, but the smallest positive one is a root of ψ₂ alone. In conclusion, U(ρ_ν) is the smallest positive root of ψ₂ for ν ≤ ν_c, and the smallest positive root of ψ₃ for ν ≥ ν_c. Furthermore, we have U(ρ_ν) < 1/2 for every ν > 0.
We first look at the roots of P₂. For ν ≤ ν_c, the radius of convergence is the positive root of P₂. For these values of ν, P₂ has two real roots, and it is easy to check for which values of ν these two roots are opposite one from another. It only happens when ν = 1, and in that case we know that U has no other dominant singularity than its radius of convergence, since it corresponds to the derivative of the generating series of triangulations with a critical site percolation.

When ν ≥ 3, P₂ has two complex conjugate roots and ρ_ν is a root of P₁. We can compute the modulus of the roots of P₂ and see that it is increasing and larger than ρ_ν at ν = 3; therefore, for ν ≥ 3, no root of P₂ has the same modulus as ρ_ν.

It remains to check the roots of P₂ for ν ∈ (ν_c, 3). For these values of ν, the roots of P₂ are real and differ from ρ_ν. The only possibility for them to be on the circle of convergence is to be equal to −ρ_ν. A Puiseux expansion of the solutions of (8) at the explicit values of these two roots is possible and gives two possible branches. To identify which one of these two branches corresponds to U, we look at the constant term of their Puiseux expansion. Since U has nonnegative coefficients, |U(z)| ≤ U(ρ_ν) for any z ∈ C with |z| = ρ_ν. Out of these two branches, one is singular but gives a value at the root larger than U(ρ_ν), which was computed previously. The other one is the branch corresponding to U and is not singular.
We now turn to the roots of P₁. First, when ν ≥ 3, the polynomial P₁ has three real roots. By computing the resultant in ρ of P₁(ρ) and P₁(−ρ), we identify the values of ν for which P₁ has two opposite roots. There are six values of ν for which this occurs. Three of these values are larger than 3. Two of these values correspond to roots outside the circle of convergence (meaning that ρ_ν is the third root of P₁ for these values of ν). The last possibility is ν = 1 + (2/9)√(136 − 10√10), for which ρ_ν and −ρ_ν are both roots of P₁. We then check that for this specific value of ν, the series U is not singular at −ρ_ν.

For ν ∈ [ν_c, 3], the roots of P₁ are ρ_ν and two complex conjugates. If one of the complex roots is on the circle of convergence, all three roots of P₁ have the same modulus. We can easily compute the cube of this common modulus from the coefficients of P₁. It is then easy to check that this quantity is never equal to ρ_ν³, meaning that for this range of values of ν, the three roots of P₁ never have the same modulus.
Finally, it remains to check the roots of P₁ for ν ∈ (0, ν_c). Unfortunately, there are three values of ν for which some of the roots of P₁ have modulus ρ_ν (which we recall is a root of P₂ for these values of ν). We will show that the roots of P₁ are never singularities of U for ν < ν_c, using Newton's polygon method. We denote by w₃(ν) any root of P₁ for ν < ν_c. Exact expressions for w₃ are too complicated to directly compute a singular expansion of U around w₃ with a computer. Instead, using polynomial eliminations, we compute a polynomial Pol(W, V), whose coefficients only depend on ν, such that Pol(w₃ − w, U(w₃) − U(w)) = 0. From (8), we can define a polynomial alg_U(X, Y) such that alg_U(w, U(w)) = 0. We then define A(Y) to be the resultant of alg_U(X, Y) and P₁(X) with respect to X, so that, for every ν, we have A(U(w₃)) = 0. The polynomial A(Y) factorizes into two factors: one of them is the polynomial ψ₃(Y) of (12), which gives the value of U(ρ_ν) when ν ≥ ν_c, and the other, of degree 9 in Y, will be denoted by Ã(Y). We have to establish whether U(w₃) is a root of ψ₃ or of Ã in order to continue. We know that for ν < ν_c, U(ρ_ν) is the explicit root of ψ₂ identified above. Finally, we also saw that ψ₃ has a positive root for ν ≥ ν_c, and that it is strictly larger than U(ρ_ν) for ν < ν_c. The discriminant of ψ₃ is negative for ν < 3, so ψ₃ has two imaginary conjugate roots. We can write an equation for the common squared modulus |w|² of these complex roots from the coefficients of ψ₃, by eliminating the real root and the sum of the two imaginary roots. This equation is given by

−16(ν + 1)⁴|w|⁶ + 12(ν + 1)²(ν + 3)|w|⁴ − 6(ν + 1)(ν + 3)|w|² + 4 = 0.
The resultant of this polynomial (in |w|) with ψ₂ (which gives U(ρ_ν)) vanishes only when ν = 3. We can also check that the roots of ψ₃ for ν = 1 all equal 1/2 > U(ρ₁). In conclusion, the roots of ψ₃ all have modulus strictly larger than U(ρ_ν) for ν < ν_c. This in turn shows that, for ν < ν_c, if w₃ is on the circle of convergence of U, then U(w₃) is a root of Ã and not a root of ψ₃.

We now construct the announced polynomial in (W, V): using again polynomial eliminations from alg_U, Ã and P₁, we obtain a polynomial that vanishes at (w₃ − w, U(w₃) − U(w)) for ν < ν_c. This polynomial factorizes into two factors, one of degree 18 in W and one of degree 9. It is easy to check that only the factor of degree 9 vanishes at W = V = 0. We call Pol this factor, so that Pol(w₃ − w, U(w₃) − U(w)) = 0 for ν < ν_c. We can then apply Newton's polygon method to Pol to see that U is not singular at w₃ for ν < ν_c.
To finish the proof, we have to establish the singular behaviour of U at ρ_ν. We can do so in a very similar fashion as above. Recall that for ν ≥ ν_c, U(ρ_ν) is a root of ψ₃. We define, in the same way as above, a polynomial in (W, V) vanishing at (ρ_ν − w, U(ρ_ν) − U(w)). This polynomial factorizes into two factors, one of degree 6 in W and one of degree 3. It is easy to check that only the factor of degree 3 vanishes at W = V = 0. We call Pol₃ this factor, so that Pol₃(ρ_ν − w, U(ρ_ν) − U(w)) = 0 for ν ≥ ν_c. We can then apply Newton's polygon method to Pol₃ to see that U has a typical square-root singularity at ρ_ν for ν ≥ ν_c, as announced, except maybe at ν = ν_c and ν = 3 where the relevant coefficient of Pol₃ vanishes. The case ν = 3 also gives a square-root singularity, and the case ν = ν_c gives a singularity with exponent 1/3, as required.
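Newton's polygon method, invoked twice in this proof, can be automated. The sketch below is our own illustration (it does not come from the paper's Maple files [1,2]): it extracts the candidate leading Puiseux exponents of a branch V(W) of a plane algebraic curve from the lower convex hull of the exponents of the defining polynomial.

```python
from fractions import Fraction

def _cross(o, a, b):
    """2D cross product of vectors oa and ob (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def puiseux_slopes(support):
    """Candidate exponents mu of branches V ~ c * W**mu of a plane curve
    P(W, V) = 0 near (0, 0), read off the Newton polygon of P.

    support: iterable of (i, j) pairs, exponents of monomials W**i * V**j
             with nonzero coefficient.
    """
    # Plot each monomial as the point (j, i): x-axis = V-degree, y-axis = W-degree.
    pts = sorted(set((j, i) for i, j in support))
    hull = []  # lower convex hull, built left to right
    for p in pts:
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    slopes = []
    for (j1, i1), (j2, i2) in zip(hull, hull[1:]):
        if j2 != j1:
            mu = Fraction(i1 - i2, j2 - j1)  # minus the slope of the hull edge
            if mu > 0:
                slopes.append(mu)
    return sorted(slopes)

# The cusp V**2 - W**3 = 0 has the single branch V ~ W**(3/2):
print(puiseux_slopes([(3, 0), (0, 2)]))  # [Fraction(3, 2)]
```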
Proposition 10. The series tQ₁(ν, t) and t³Q₃(ν, t) are Ising-algebraic.

Proof. First, the fact that tQ₁ and t³Q₃ have rational expressions in ν and U (recall (9)) ensures that both series are algebraic, by closure properties of algebraic functions, since U itself is algebraic. From the proof of Lemma 9, we know that |U(t³)| < 1/2 for all |t| < t_ν. Therefore, the series U and (1 − 2U)^{−1}, seen as series in t³, have the same unique dominant singularity at ρ_ν. The form of tQ₁ and t³Q₃ given in (9) then implies that the only singularities of these two series are those of U.
The singular behaviour of tQ 1 is easily computable by integration (see [39], Theorem VI.9 p. 420) from (10), where we recall that F (w) = t 3 tQ 1 (t) with w = t 3 . The singular behaviour of t 3 Q 3 at ρ ν could be obtained by plugging the explicit singular expansion of U into expression (9) and tracking cancellations. A more direct and satisfying proof without tedious computations can also be obtained with Lemma 11 of the following section, whose proof only relies on the Ising algebraicity of tQ 1 that we just established. Indeed, this lemma provides the singular behaviour of t 3 Q 3 at ρ ν and we know that this series has no other dominant singularity.
Asymptotic behaviour for triangulations by pinching
Let Q p,P be the subset of triangulations with a boundary of length p, whose boundary satisfies a property P depending only on the length, shape and spins of the boundary (in particular not on the vertices, faces or edges in the interior regions). Let Q P p denote the generating function of triangulations in Q p,P .
Lemma 11. For any ν > 0 and any positive integer p, the series t^p Q_p^P, seen as a series in t³, has radius of convergence ρ_ν. In addition, there exist constants α_p(ν), β_p(ν) and γ_p(ν) such that Q_p^P satisfies the following singular expansion at ρ_ν:

t^p Q_p^P(ν, t) = α_p(ν) + β_p(ν)(1 − t³/ρ_ν) + γ_p(ν)(1 − t³/ρ_ν)^{3/2} + o((1 − t³/ρ_ν)^{3/2}) if ν ≠ ν_c,

with the exponent 3/2 replaced by 4/3 when ν = ν_c.
Remark 12.
We stress the fact that the Lemma does not state that t p Q P p is Ising-algebraic. Indeed, the simple bounds used in the proof do not rule out the possibility that Q P p as a function of t 3 has other non real dominant singularities and that these singularities induce an oscillatory behaviour of [t 3n ]t p Q p,P . This is clearly illustrated with the case of Q 1 (t) when viewed as a function of t instead of t 3 . Therefore, to establish the Ising-algebraicity of t p Q P p with the help of this Lemma, we need to establish that it also has a unique dominant singularity.
Proof. We first observe that there exist positive constants k_p^P and k̃_p^P such that, for all n ≥ p, the total weight of Q^{p,P}_n is at most k_p^P times that of Q¹_{n+p+2}, and the total weight of Q¹_n is at most k̃_p^P times that of Q^{p,P}_{n+2p+1}. There is indeed an injection from Q^{p,P}_n into Q¹_{n+p+2}: given an element of Q^{p,P}_n, attach a triangle to each side of the boundary, glue all these triangles together and add two edges to create an outer face of the appropriate degree. Conversely, given a boundary satisfying the property P, we can first open the root edge and insert a quadrangulation of the 1-gon inside the opened region at the corner corresponding to the root vertex. Then, we can add a vertex inside each face of the boundary and join it with an edge to each vertex of the corresponding face. See Figure 4 for an illustration. This gives an injection from Q¹_n into Q^{p,P}_{n+2p+1}. These bounds ensure that ρ_ν is the radius of convergence of Q_p^P seen as a series in t³, and Pringsheim's theorem then ensures that it is a singularity. The singular expansion follows from the classification of the possible singular behaviours of algebraic functions. Indeed, such functions have a Puiseux expansion in a slit neighbourhood of a singularity ζ, of the form f(z) = Σ_{k ≥ k₀} c_k (z − ζ)^{k/κ} with k₀ and κ integers, see [39, Thm VII.7 p.498]. Our bounds ensure that, at the singularity ρ_ν, we have κ = 2 if ν ≠ ν_c and κ = 3 for ν = ν_c, and the expansion is of the form announced in the Lemma.
For further use, notice that the case of the generating functions Z p and Q p of triangulations with a boundary (simple or not) of length p is included in the statement of the Lemma.
Triangulations with positive boundary conditions
We now state and prove our main technical result:

Theorem 13. The series Z⁺ satisfies an explicit polynomial equation, stated as (24) below, whose coefficients involve an explicit polynomial Pol with integer coefficients, given in (25).
Proof. The difficulty of this result stems from the fact that we need two catalytic variables to write a functional equation satisfied by Z + . A technical application of Tutte's invariants method, introduced by Tutte (see [62]) and further developed in [13] allows us to derive the result. All the computations are available in the companion Maple file [1].
1-A functional equation with two catalytic variables:
The series Z⁺ is a series with one catalytic variable. However, it is necessary to introduce a second catalytic variable to study it. Indeed, when writing Tutte-like equations by opening an edge of the boundary, a sign can appear on the newly explored vertex. It is then necessary to take into account triangulations with signs on the boundary. However, things are not hopeless, as we can restrict ourselves to triangulations with a boundary consisting of a sequence of ⊕ followed by a sequence of ⊖. Indeed, opening along the edge between ⊕ and ⊖ in such a triangulation can only produce triangulations with the same type of boundary conditions. Figure 5 illustrates the different possibilities. Now, let us denote by Z^{+,−}(x, y) the generating series of triangulations with boundary conditions of the form ⊕^p⊖^q with p + q ≥ 1, the variable x marking the number of ⊕ and the variable y marking the number of ⊖:

Z^{+,−}(x, y) = Σ_{p+q ≥ 1} Z_{⊕^p⊖^q}(ν, t) x^p y^q.

Note that this series is symmetric in x and y, and Z⁺(x) = Z^{+,−}(x, 0). We also need the specialisation Z₁^{+,−}(x) := [y] Z^{+,−}(x, y), the coefficient of y in Z^{+,−}(x, y). The different possibilities illustrated in Figure 5 translate into a system of functional equations, (14) below.
2-Kernel method:
Following the classical kernel method, we rewrite the system (14) in kernel form, with a kernel K(x, y). The next step is to find two distinct formal power series with coefficients in Q(x) which cancel the kernel, i.e. such that K(x, Y_i(x)) = 0 for i = 1, 2. However, the equation K(x, Y(x)) = 0 can be rewritten so as to see that it has a unique solution in Q(x)[[t]], by computing its coefficients in t inductively. To get a second solution, we relax the hypothesis and allow series whose coefficients are not polynomials in x. Note that this is possible because the series tZ⁺(y/t) is a well-defined power series in t: indeed, the polynomial in x given by [tⁿ](tZ⁺(x)) has degree at most n − 1, except for n = 2, which contains the term νx² coming from the map reduced to a single edge. Following advice given by Bernardi and Bousquet-Mélou in [13], we perform the change of variables x = t + at², because of the term t/x in the kernel. Using the fact that Z⁺(t + at²) = O(t³) and tZ⁺(y/t) = νy² + O(t), it is clear that if Y is a solution of the kernel equation after this change of variables, its constant term is either 0 or a/ν, and its coefficients in t can then be computed inductively. This shows that the kernel equation has indeed two distinct solutions Y₁ and Y₂, with constant terms 0 and a/ν respectively.
3-Computation of invariants
By writing the kernel equation for both Y₁ and Y₂ and eliminating between the two relations, and following Tutte, we say that a certain quantity is an invariant, since it takes the same value for Y₁ and Y₂; we denote this common value by I. To find a second invariant we have to dig deeper and look at all the other equations. First, notice that the kernel equation gives a relation between x, Z₁^{+,−}(x) and each Y_i. Solving this new system for x and Z₁^{+,−}(x), and using equation (14), which links Z⁺(y), Z₁^{+,−}(y) and y, together with the expression of I, we can express Z₁^{+,−}(Y_i/t) in terms of Y₁, Y₂ and I, for i = 1, 2. We can then use either of these last two identities to express x and Z₁^{+,−}(x) in terms of Y_i and I. Using once again the kernel equation, we can express Z⁺(x) in terms of x, Y₁ and Z⁺(Y₁/t), and therefore solely in terms of Y₁, Y₂ and I. Finally, putting our expressions (17), (18) and (19) into the second equation of (14) verified by Z⁺(x) gives an equation (20) linking Y₁, Y₂ and I. At this point, we almost have a second invariant and just have to isolate Y₁ and Y₂ in (20) to get it. Following the guidelines of [13], we want to perform a change of variables to transform equation (20) into an equation separating Y₁ and Y₂. First, setting Y_i = X_i − (1/3)β(I) yields equation (21).
Now we just have to transform the last equation into an equality with no radicals to get our second invariant J(y). Of course, if we eliminate from J(y) the terms depending on y only through I(y), we still get an invariant; we denote it by J̃(y). The two invariants J and J̃ contain the same information, and we will work with J̃ to shorten computations.
4-J̃ is a polynomial function of I
Borrowing again from Tutte and Bernardi–Bousquet-Mélou [13], we now show that J̃(y) is a polynomial in I(y) with explicit coefficients depending only on ν and t. To that aim, we first notice from expression (16) that I(y) can be written as a simple singular part in y plus a series R(y) having no pole at y = 0. Hence, from the form of J̃, we can find Laurent series C₀(t), C₁(t) and C₂(t) (depending on ν) such that the series

H(y) = J̃(y) − C₂(t) I(y)² − C₁(t) I(y) − C₀(t)

has coefficients in t which are rational in y and vanish at y = 0. The computation of these coefficients is straightforward: we first eliminate the term in 1/y² of J̃(y), then the term in 1/y of J̃(y) − C₂(t) I(y)², and finally the constant term of J̃(y) − C₂(t) I(y)² − C₁(t) I(y). This determines the C_i's explicitly. We see from the expressions of the C_i's and of I(y) and J̃(y) that H(y) is in fact a power series in t with coefficients that vanish at y = 0. Supposing that H(y) is not 0, we can write H(y) = Σ_{n ≥ n₀} h_n(y) tⁿ with h_{n₀} ≠ 0. On the one hand, we have [t⁰]Y₁ = 0, so that [t^{n₀}] H(Y₁) = 0. On the other hand, [t⁰]Y₂ = a/ν, and h_{n₀}(y) is different from 0 by assumption and does not depend on a, since H itself does not. Therefore we have H(Y₁) ≠ H(Y₂), which contradicts the fact that H(y) is an invariant. This means that H(y) = 0.
Algebraicity and singularity of triangulations with positive boundary
The equation with one catalytic variable (13) could be solved by the general methods of Bousquet-Mélou and Jehanne [22], or even by guess-and-check à la Tutte. We can also rely on the expressions for Q_1 and Q_3, obtained in [13] and recalled in Theorem 7, to solve it, and even obtain a rational parametrization of Z^+, see [1]. However, we will only need the following result: Proposition 14. Recall from Theorem 7 that U ≡ U(ν, t) is the unique power series in t having constant term 0 and satisfying (8). Each series Z^⊕_p = Z^+_p is algebraic over Q(ν, t). More precisely, there exist polynomials R^ν_p in U, whose coefficients are rational in ν, such that the stated identity holds for all p ≥ 1. Proof. We proceed by induction on p; the result is clear for Z^+_1 (which is equal to Q_1) by Theorem 7. For p = 2, on the one hand, we can write a Tutte-like equation for triangulations contributing to Z^{⊕⊕} and to Z^{⊕⊖}: peeling the root edge of those triangulations yields an equality between the corresponding series. Since Q_1(t)Q_2(t) enumerates the triangulations with a boundary of length 3 rooted on a loop with spin ⊕, we also have a second relation. On the other hand, Z^{⊕⊕}(t) + Z^{⊕⊖}(t) enumerates the triangulations with a simple boundary of length 2. Combining these two relations and Q_1(t) = νtQ_2(t), we obtain an expression which, with the expressions of tQ_1 and t^3 Q_3 given in Theorem 7, implies the statement for t^6 · t^2 Z^+_2. We now carry out the induction. By the result of Theorem 13, and more precisely by setting y ← ty in equation (24) and then dividing it by t^2, we get an identity of series. For p ≥ 3, identifying the coefficients of y^p on both sides leads to a relation between consecutive series. Multiplying this identity by t^{3p}(1 − 2U)^{3p} gives a recursion relation for the polynomials R^ν_p(U). Using the fact that 32ν^3 t^3 (1 − 2U)^2 is, by (8), a polynomial in U and ν with integer coefficients finishes the proof. With these expressions of the series Z^+_p in terms of U, following the exact same chain of arguments as in the beginning of the proof of Proposition 10, we obtain that these series are algebraic and that their singularities are singularities of U. Combined with Lemma 11, this yields the following crucial result: Corollary 15. For any p, the series Z^+_p is Ising-algebraic.
Triangulations of the p-gon with arbitrary fixed boundary condition
Our starting point is the standard root-edge deletion equation for triangulations of a p-gon with a given boundary word.
Proposition 16.
Let ω = ω_1 ⋯ ω_k be a non-empty word on {⊕, ⊖} and let a, b be in {⊕, ⊖}; then we have equation (28). Proof. Let T be an element of T^{bωa}_f. Figure 6 illustrates the 4 possibilities for the configuration of the inner face incident to the root edge of T (i.e. the edge between the spins a and b, by our rooting convention). The deletion of the root edge of T translates into an equation for the corresponding generating series, which yields (28).
We can finally prove our main algebraicity result. Proof of Theorem 6. Fix ν > 0. The series Z_ω have a cyclic symmetry with respect to ω: indeed, if ω′ is a cyclic permutation of ω, we clearly have Z_{ω′} = Z_ω. In addition, if ω̄ denotes the image of ω under the involution that exchanges ⊖ and ⊕, we also have Z_{ω̄} = Z_ω. For every non-empty word ω on {⊕, ⊖}, we denote by |ω|_⊖ the number of times ⊖ appears in ω. We proceed by induction on (|ω|, |ω|_⊖) in lexicographic order. By Corollary 15, we know that the result holds when |ω|_⊖ = 0. Equations (26) and (27) and the invariance of Z_ω under cyclic permutations or spin inversion ensure that the statement is true if |ω| ≤ 3. From (29), we see that for every non-empty word ω and any a, b ∈ {⊕, ⊖}, we can express Z_{a⊖bω} as a linear combination of products of some Z_{ω′}, where either |ω′| < |a⊖bω| or |ω′|_⊖ < |a⊖bω|_⊖ (or both). This implies that the singularities of Z_{a⊖bω} are included in the set of singularities of those Z_{ω′}. By the induction hypothesis and Lemma 11, this concludes the proof of the Ising-algebraicity of Z_ω for every fixed ω. The asymptotic behavior of the coefficients of the series then follows directly by applying Proposition 5.
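The cyclic and spin-inversion symmetries used in this proof are easy to exploit computationally: one can reduce any boundary word to a canonical representative before computing or tabulating Z_ω. A minimal sketch, with '+' and '-' standing for ⊕ and ⊖:

```python
# Sketch: canonical representative of a boundary word under cyclic rotation
# and global spin inversion, the two symmetries of Z_omega used above.
def canonical(word: str) -> str:
    """word over {'+', '-'}, standing for the spins {⊕, ⊖}."""
    rotations = lambda w: [w[i:] + w[:i] for i in range(len(w))]
    flipped = word.translate(str.maketrans('+-', '-+'))
    return min(rotations(word) + rotations(flipped))

assert canonical('++-') == canonical('-++')   # cyclic permutation
assert canonical('+-+') == canonical('-+-')   # spin inversion
```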
Local weak topology and tightness of the root degree
To prove Theorem 1, we will first prove that the sequence of probability measures {P^ν_n} is tight for the topology of local convergence. Fix a sequence (l_r)_{r ≥ 1} of positive real numbers and denote by K_{(l_r)} the subset of T consisting of the triangulations whose maximal vertex degree in the ball of radius r is at most l_r, for every r ≥ 1. Then K_{(l_r)} is a compact subset of (T, d_loc). To prove tightness, we will therefore prove that for every r the maximum degree L_r in a ball of radius r of a random triangulation with law P^ν_n is tight with respect to n. First, let us do so for the root vertex degree. Lemma 17. Let X_n be the degree of the root vertex under P^ν_n. The sequence of random variables (X_n)_{n ≥ 1} is tight.
Remark 18. In Section 5.2, we will prove that the limiting distribution of the X n 's has exponential tails for ν = ν c (and the proof works in fact for ν close enough to ν c , see the remark following Proposition 30). It may be possible to extend this statement to every X n , with a uniform exponential upper bound for the tails. This is usually the approach to prove tightness results in random maps (see for example Lemmas 4.1 and 4.2 in [8]).
It turns out that things become fairly technical in our setting, and we are still unable to prove exponential tails for every value of ν. However, though it is a much weaker statement, Lemma 17 is sufficient to prove tightness, and moreover it has a very simple and robust proof that we were not able to find in the literature.
Proof of Lemma 17. Fix n ≥ 1 and ν > 0. To simplify notation, we write P instead of P^ν_n. We define P̂ as the law of a random triangulation distributed according to P with a uniformly marked edge; that is, for any triangulation of the sphere T with 3n edges and any edge e of T, we give the pair (T, e) probability P(T)/(3n). Denote by δ the root vertex and by e the marked edge of a triangulation sampled according to P̂. Since an edge adjacent to the root vertex can contribute to its degree by at most 2, we can bound E[deg(δ)] in terms of the P̂-probability that the marked edge is adjacent to the root vertex. Now, by duplicating and opening the marked edge and the root edge (see Figure 7), we can see that there is an injection from the set of triangulations of size 3n with a marked edge adjacent to the root vertex into triangulations with no marked edges. More precisely, we have the following cases when cutting along the two edges: • Neither edge is a loop. We get either a triangulation of the 4-gon or, if the edges have the same endpoints, a pair of triangulations of the 2-gon.
• Both edges are loops. We get either a pair of triangulations of the 1-gon if the marked edge is the root edge or a triplet of triangulations otherwise (two of the 1-gon and one of the 2-gon).
• One edge is a loop and the other is not. We get a pair of triangulations, one of the 1-gon and one of the 3-gon.
Therefore, taking into account every case and the possible creation of new monochromatic edges, we obtain an upper bound on the probability that the marked edge is adjacent to the root vertex. Together with equation (30), this yields that E[deg(δ)] is bounded uniformly in n, giving the tightness of the sequence of random variables.
To go from the tightness of the root degree to the tightness of the maximal degree in balls, we need some sort of invariance of the root degree under re-rooting. We will see in the next section that, in fact, the distribution of the maps themselves is invariant under re-rooting along a simple random walk, which is more than we need (see Lemma 19).
Invariance along a simple random walk and tightness
To formally introduce an invariance property by re-rooting, we need some additional notation. Let T be a rooted triangulation with spins (finite or infinite) and denote by e_0 the oriented root edge. A simple random walk on T is an infinite random sequence (e_0, e_1, …) of oriented edges of T defined recursively as follows. Conditionally given (e_i, 0 ≤ i ≤ k), we let e_{k+1} be an oriented edge whose origin is the endpoint e_k^+ of e_k and whose endpoint is chosen uniformly among the deg(e_k^+) possible choices. We call the sequence (e_0, e_1, …) a simple random walk on T started at the root edge, and denote by P_T its law. Finally, if e is an oriented edge of T, we denote by T^{(e)} the triangulation T re-rooted at e.
If λ is a probability distribution on T, we denote by Θ^{(k)}(λ) the distribution of a random triangulation sampled according to λ and re-rooted at the k-th step of a simple random walk; it is defined by the natural averaging formula over walks, for every Borel subset A of T. We invite the interested reader to check the work of Aldous and Lyons [5], where this framework is introduced for any unimodular measure (see also [10,30,36] for related discussions specific to random maps).
The following lemma is an easy adaptation of [36,Proposition 19] and its proof is mutatis mutandis the same. See also [8,Theorem 3.2] for an analogous statement with a slightly different proof. We insist on the fact that this result holds independently of the tightness or convergence of the measures P ν n .
Lemma 19. The laws P^ν_n and any of their subsequential limits P^ν are invariant under re-rooting along a simple random walk, in the sense that for every k ≥ 0 and any n ∈ N ∪ {∞} we have Θ^{(k)}(P^ν_n) = P^ν_n.
Proof. See the proof of [36, Proposition 19], which carries over word for word to our setting.
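For intuition, the re-rooting operation Θ^(k) is straightforward to simulate on a concrete map. The sketch below represents a simple map by an adjacency list; handling loops and multiple edges properly would require half-edge bookkeeping, which we omit here.

```python
# Sketch: re-rooting along a simple random walk on oriented edges, as in the
# definition of Theta^(k). Adjacency lists with multiplicities are assumed.
import random

def srw_reroot(adj, root_edge, k, rng=random):
    """adj: dict vertex -> list of neighbours; root_edge: oriented (u, v)."""
    e = root_edge
    for _ in range(k):
        origin = e[1]                          # origin of e_{i+1} is the tip of e_i
        e = (origin, rng.choice(adj[origin]))  # uniform among deg(origin) choices
    return e

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # the triangle graph
print(srw_reroot(adj, (0, 1), k=5))      # the edge e_5 of the walk
```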
Proof of Theorem 1
As in the previous section, thanks to the behavior of our generating series, things are not much more complicated than in the uniform setting, and we can follow the original approach of Angel and Schramm [8]. Recall the definition of rigid triangulations (see [8, Section 4.2] for details), which are triangulations with holes such that one cannot fill the holes in two different ways to obtain the same triangulation of the sphere. First we show that subsequential limits of the P^ν_n's share common properties. The following proposition is analogous to [8, Corollary 3.4 and Proposition 4.10] and the proofs are almost identical, so we only give the main arguments. Proposition 21. Every subsequential limit P^ν of (P^ν_n)_{n ≥ 1} has almost surely one end. In addition, for every finite rigid triangulation ∆ with ℓ ≥ 1 holes without common edges and respective boundary conditions given by ω^{(1)}, …, ω^{(ℓ)} ∈ {⊕, ⊖}^+, the probability of {∆ ⊂ T} is given by (31), where ω = (ω^{(1)}, …, ω^{(ℓ)}) and m(ω) denotes the total number of monochromatic edges of the boundaries ω^{(1)}, …, ω^{(ℓ)}. We recall that the constants κ and κ_{ω^{(i)}} are defined in Theorem 6. Moreover, the probability that the i-th hole contains the infinite part of the triangulation is proportional to the i-th term in the sum.
Proof. First, the one-endedness is an easy adaptation of Lemma 3.3 and Corollary 3.4 of [8]. Indeed, if a subsequential limit has more than one end, then, under this law, there exist k > 0 and ε > 0 such that there is a simple cycle with k edges containing the root that separates the triangulation into two infinite parts with probability larger than ε. This in turn means that for any integer A and infinitely many n, the probability under P^ν_n of having a simple cycle with k edges containing the root that separates the triangulation into two parts with at least A edges each is larger than, say, ε/2. Denote by L(k, A) such an event. Its probability under P^ν_n is bounded by a sum in which the first summation fixes the spins of the cycle, the term ν^{−m(ω)} avoids counting the monochromatic edges of the cycle twice, and the number of edges on each side of the cycle is 3n_i − k, including the boundary. From Theorem 6, we know that the coefficients in this identity all share the same asymptotic behavior, and if A is large enough we get an upper bound with a constant depending only on ν and k, with α being 5/2 or 7/3 depending on ν. A classical analysis of the right-hand side of (32) shows that this probability is of order O(A^{−α+1}) and thus goes to 0, meaning that the triangulation cannot have more than one end. The second statement is a straightforward computation. Indeed, by decomposing triangulations T such that ∆ ⊂ T into ∆ and ℓ triangulations with respective boundaries ω^{(1)}, …, ω^{(ℓ)}, and avoiding counting the edges on the boundary of ∆ twice, we get the corresponding identity. In particular, we used the fact that the holes of ∆ have no common edges. This hypothesis could be avoided by changing the definition of m(ω), but we prefer not to do so in order to keep technicalities to a minimum. The series t^{|ω^{(i)}|} Z_{ω^{(i)}}(ν, t) are all Ising-algebraic and have a singular expansion at their singularity with α(ν) = 5/2 or 7/3 depending on ν. Therefore, the product ∏_{j=1}^{ℓ} t^{|ω^{(j)}|} Z_{ω^{(j)}}(ν, t) is also Ising-algebraic, with a singular expansion of the same form at its unique singularity. From there, it is straightforward to obtain the asymptotics of its coefficients, with α(ν) = 5/2 or 7/3 depending on ν. This in turn yields (31). Fix i ∈ {1, …, ℓ}. If N_i + N̄_i = 3n − |∆| + |ω|, the probability under P^ν_n that the i-th hole has N_i edges and the other holes have a total of N̄_i edges can be written explicitly when N_i is fixed. Summing over the sizes of the other holes finishes the proof of the proposition.
Theorem 1 now follows directly from the tightness of the laws P ν n (Proposition 20) and from Proposition 21 which implies that the sequence has a unique possible subsequential limit.
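The one-endedness argument above rests on the elementary estimate that the sum of n_1^{−α} n_2^{−α} over decompositions n_1 + n_2 = n with both parts at least A is of order A^{−α+1}. This is easy to confirm numerically; the values of n and A below are arbitrary choices for illustration.

```python
# Sketch: numerical check that sum_{n1+n2=n, n1,n2 >= A} n1^-a * (n-n1)^-a
# decays like A^(1-a); here a = 5/2, one of the two exponents of Theorem 6.
a, n = 2.5, 200_000

def tail(A):
    return sum(n1**-a * (n - n1)**-a for n1 in range(A, n - A + 1))

for A in (50, 100, 200, 400):
    # the rescaled value should be roughly constant in A
    print(A, tail(A) * A**(a - 1))
```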
Basic properties of the limit
We introduce another probability distribution on the set of finite triangulations, denoted P bol and called the Boltzmann law. This probability measure is often found to be of central importance in local limits of planar maps. For example it appears in the limiting law of uniform triangulations without spins in [8] where it is called the free distribution, or in [33].
Definition 22. The critical Boltzmann distribution P_bol is the probability measure on the set of finite triangulations obtained by weighting each finite triangulation of the sphere proportionally to its Ising weight at the critical point; the normalizing constant is finite thanks to Theorem 6. We will always denote by T_bol a Boltzmann triangulation of the sphere, that is, a random finite triangulation of the sphere with law P_bol. For any finite word ω on {⊕, ⊖}, define similarly the probability measure P^ω_bol on T^ω_f by the analogous formula for any T ∈ T^ω. We call a random triangulation with law P^ω_bol a Boltzmann triangulation with boundary condition ω and denote it by T^ω_bol.
Proposition 23. For K a finite rigid triangulation with holes whose boundary conditions are given by ω = (ω^{(1)}, …, ω^{(ℓ)}), the probability P_bol(K ⊂ T_bol) admits an explicit product expression in terms of the series Z_{ω^{(i)}}. In addition, conditionally on the event {K ⊂ T_bol}, the parts of T_bol filling each hole of K are independent random triangulations with a boundary, distributed as Boltzmann triangulations with respective boundary conditions given by ω.
Proof. This is a straightforward computation, analogous to the one performed in the proof of Proposition 21. Indeed, a finite triangulation T such that K ⊂ T can be decomposed into K and a collection of triangulations with respective boundary conditions ω^{(i)}; this decomposition yields the first claim.
Proposition 23 allows us to interpret the ball probabilities (31) as an absolute continuity relation between P_∞ and P_bol. Indeed, for ∆ a ball of radius r of some finite triangulation and ω = (ω^{(1)}, …, ω^{(ℓ)}) its boundary words, this probability can be written as the ratio (33). This observation motivates the following definition. Definition 24. Fix a finite triangulation T and r ≥ 0. If B_r(T) is the whole triangulation T, set ω_r(T) = ∅; otherwise set ω_r(T) = (ω^{(1)}, …, ω^{(ℓ(T,r))}) to be the spin configurations on the boundary of B_r(T), and define M_r(T) accordingly. Inspired by [33, Theorem 4], formula (33) can be directly reformulated as follows. Proposition 25. The random process (M_r(T_bol))_{r ≥ 0} is a martingale with respect to the filtration generated by (B_r(T_bol))_{r ≥ 0}. Moreover, if F is any nonnegative measurable function on the set of triangulations with holes, the corresponding change-of-measure identity holds for every r ≥ 1. We conclude this section by stating the spatial Markov property for the IIPT. First, we need to introduce the analog of the IIPT for triangulations with fixed boundary condition. Let ω be a non-empty word on {⊕, ⊖}. We can define the probability measure P^ω_n on T^ω_{3n−|ω|} by the natural analog of the definition of P^ν_n.
A slight modification of the proof of Theorem 1 shows that the sequence (P^ω_n)_{n ≥ 1} converges weakly in (T^ω, d_loc) to a probability measure supported on one-ended infinite triangulations with boundary condition ω. We denote this limiting probability measure by P^ω_∞ and call it the law of the Ising Infinite Planar Triangulation with boundary condition ω. As in the uniform setting, this law appears naturally in the spatial Markov property of the IIPT. Proposition 26 (Spatial Markov property for the IIPT). Fix K a finite rigid triangulation with holes (the holes can have common vertices but no common edges), endowed with a spin configuration such that the boundary conditions of its holes are given by ω = (ω^{(1)}, …, ω^{(ℓ)}).
On the event {K ⊂ T ∞ }, let us denote by T i the component of T ∞ inside the i-th hole of K.
Then, almost surely, only one of these components is infinite, and the probability that it is T_i is proportional to the i-th term of the sum in (31). Finally, if we fix i ∈ {1, …, ℓ}, then conditionally on the event {K ⊂ T_∞, T_j is finite for j ≠ i}:
1. The random triangulations with boundary conditions (T_j)_{1 ≤ j ≤ ℓ} are independent;
2. The random triangulation T_i is distributed as the IIPT with boundary condition ω^{(i)};
3. For j ≠ i, the random triangulation T_j is distributed as a Boltzmann triangulation with boundary condition ω^{(j)}.
Proof. Everything follows directly from Proposition 21.
Generating series of triangulations with simple boundary
We start with a technical lemma about the generating series of triangulations with simple boundary. For every p > 0, we set κ_p = Σ_{|ω|=p} κ_ω. Since we only use this lemma to prove Theorem 2, and since our proof of this theorem does not work for all ν (see Remark 31), we restrict ourselves to ν = ν_c for the sake of simplicity.
Proof. Let the error terms ε_p(x) be defined by the singular expansions of the series, holomorphic in the open disc D of radius 1 centered at x = 1, and such that ε_p(x) → 0 as x → 0^+. While the latter expansions are a priori non-uniform (in the sense that ε_p(x) depends on p), we can still define the formal power series A(y), B(y) and C(y) from their coefficients and ask about their respective radii of convergence.
Our strategy to determine these radii of convergence is to relate A(y), B(y) and C(y) to the formal power series Z(t, ty) of Q[[y, t^3]], obtained by collecting the series of triangulations with simple boundary, with y marking the boundary length. More precisely, we would like to view Z(t, ty) as an analytic function of t^3, with y a complex parameter fixed in an appropriate domain, and to perform an expansion as t^3 → t^3_{ν_c}, with t^3 = t^3_{ν_c}(1 − x) and lim_{x→0^+} ε(y, x) = 0. Now the series Z(t, ty) is the unique formal power series solution of an explicit algebraic equation P(Z(t, ty), y, U(t)) = 0 (36) for some polynomial P(z, y, u) of degree 4 in z, where U(t) is the series introduced in Theorem 7: this equation can be deduced from [13, Lemma 31], using the fact that our Z(t, ty) is exactly the series R(0, ty) there. In particular, we shall consider the unique formal power series ζ(y, u) solution of the equation P(ζ(y, u), y, u) = 0 (37), so that, as formal power series, Z(t, ty) = ζ(y, U(t)).
In particular, Z(t, ty) as a complex function can be identified near the origin (t, y) = (0, 0) with the branch of the analytic variety defined by P having the expected Taylor expansion. We will use this to prove in Lemma 28 that Z(t, ty) is analytic in a polydisc D(0, t_{ν_c}) × D(0, y_c) and singular at the point (t_{ν_c}, y_c), where y_c is as in Lemma 27.
On the other hand, using Equation (36) (or Equation (37)), we can study each branch Z̃(t, ty) of the analytic variety defined by P near the point t = t_{ν_c}, and derive explicit descriptions of their coefficients in an expansion in powers of (t_{ν_c} − t). Indeed, taking t = t_{ν_c} in Equation (36), or u = U_c := U(t_{ν_c}) in Equation (37), we obtain an algebraic equation satisfied by Ã(y) = Z̃(t_{ν_c}, t_{ν_c}y) = ζ(y, U_c). Moreover, we will identify B̃(y) = lim_{t→t_{ν_c}} (Z̃(t, ty) − Ã(y))(t_{ν_c} − t)^{−1} and C̃(y) = lim_{t→t_{ν_c}} (Z̃(t, ty) − Ã(y) − B̃(y)(t_{ν_c} − t))(t_{ν_c} − t)^{−4/3} in terms of partial derivatives of P evaluated at z = Ã(y) and u = U(t_{ν_c}), via some explicit polynomials P_B and P_C. Finally, the positivity properties of Z(t, ty) allow us to discriminate between the possible branches and to characterize A(y) as the unique formal power series in the variable y canceling a well-chosen factor Q of the polynomial P. In particular, the equation for A(y) implies that A(y) has y_c as radius of convergence, and from the irreducible rational expressions of B(y) and C(y) we can check that no cancellation occurs and that these two series also have y_c as radius of convergence. All computations are available in the companion Maple file [1].
Lemma 28. The series Z(t, ty) is analytic in the larger domain D(0, t_{ν_c}) × D(0, y_c), and singular at (t_{ν_c}, y_c).
Proof. On the one hand, this formal power series is by definition an element of Q(ν_c)[y][[t]], and for |y| ≤ 1 the series is term-by-term dominated by the series Z_3(ν_c, |t|): indeed, since ν_c > 1, the weight of a triangulation T with boundary is dominated by that of the triangulation ∆(T) obtained by triangulating the outer face of T from a new vertex. Since Z_3 has radius of convergence t_{ν_c}, we already know that: • Z(t, ty) is absolutely convergent in the polydisc D(0, t_{ν_c}) × D(0, 1).
For any fixed y, let t c (y) denote the radius of convergence of the series Z(t, ty) in the variable t.
In view of the positivity of the coefficients of Z(t, ty), the function t c (y) is a weakly decreasing function of y for y positive, and it is at most equal to t νc since t νc is the radius of convergence of t · Z 1 (ν c , t) = [y]Z(t, ty): in particular t c (y) = t νc for y ∈ (0, 1). For any y > 0, Z(t, ty) is a series with positive coefficients, so by Pringsheim's theorem it must be singular at t = t c (y). In particular if t c (y) < t νc , then U (t) is regular in D(0, t c (y)) and, as a function of u, ζ(y, u) admits an analytic continuation in an open domain containing (0, U (t c (y))) and it is singular at u c (y) = U (t c (y)). Recall from Lemma 9 that U (t) has nonnegative coefficients. Hence, it is an increasing function of t. Consequently, u c (y) must be a nonincreasing function of y, with u c (y) = U (t νc ) for y ∈ (0, 1).
As a consequence of the previous analysis, we can look for u_c(y) among the decreasing branches of the root variety of the discriminant ∆(y, u) = discrim_z P(z, y, u) of the polynomial P(z, y, u) with respect to z. This discriminant factors into three irreducible factors of degree at most three in y, which can thus be explicitly analyzed: for y < y_c = (3/5)(1 + √7), all real positive branches either have u > U(t_{ν_c}) or are increasing. At y = y_c, three discriminant branches meet at u = U(t_{ν_c}), which is the minimal positive root of ∆(y_c, u). We therefore conclude that t_c(y) = t_{ν_c} for y ∈ (0, y_c); in other terms, for any fixed y ∈ (0, y_c), the series Z(t, ty) has radius of convergence t_{ν_c}. This implies that the series is absolutely convergent in the polydisc D(0, t_{ν_c}) × D(0, y_c), which concludes the proof.
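The discriminant analysis carried out in this proof can be reproduced with any computer algebra system. Since the paper's degree-4 polynomial P(z, y, u) lives in the companion Maple file [1] and is not reproduced above, the sketch below uses a placeholder polynomial merely to show the mechanics:

```python
# Sketch: locating candidate singular points via the discriminant in z.
# P below is a hypothetical stand-in for the paper's polynomial P(z, y, u).
import sympy as sp

z, y, u = sp.symbols('z y u')
P = z**4 - (2 + y*u)*z**2 + u*z + y      # placeholder, degree 4 in z
disc = sp.discriminant(P, z)             # vanishes where branches of z meet
print(sp.factor(disc))                   # analyze its branches in (y, u)
```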
Root degree distribution and recurrence
Since the IIPT is the local weak limit of uniformly rooted maps, by the criterion of Gurel-Gurevich and Nachmias [41] it is enough, in order to establish its recurrence, to prove that the distribution of the root degree (i.e. the number of half-edges incident to the root) of the IIPT has exponential tails.
To study this degree, let us look at the structure of the hull of radius 1 around the root; see Figure 8 for an illustration. This hull, denoted by B̄_1(T_∞), is by definition the ball B_1(T_∞) completed by the finite connected components of T_∞ \ B_1(T_∞). It is therefore a triangulation (with spins) with one hole, which corresponds to the part of ∂B_1(T_∞) separating the root vertex from infinity in the map. Such maps (or, more precisely, slight modifications of them) are called triangulations of the cylinder and have been extensively studied in [34,35,49,54], to which we refer for a more detailed analysis. In particular, each edge of ∂B̄_1(T_∞) belongs to a face of T_∞ having the root vertex as third vertex. The slots between two consecutive such faces are filled with independent Boltzmann triangulations with the proper boundary conditions; see Figure 8 for an illustration. The degree of the root vertex in T_∞ is then the sum of the degrees of the root vertex in each of these Boltzmann triangulations filling the slots of the hull of radius 1. Therefore, we only have to prove that the distribution of the root degree of these Boltzmann triangulations and the boundary length |∂B̄_1(T_∞)| have exponential tails, which is done in Propositions 29 and 30.
Proposition 29.
There exist two constants c > 0 and λ < 1 such that, for every p ≥ 1, the probability that |∂B̄_1(T_∞)| = p is at most cλ^p. Proof. As illustrated in Figure 8 and described above, the hull of radius 1 can be decomposed into its faces sharing an edge with the boundary, and slots. The slots are filled with Boltzmann triangulations of the 2-gon with boundary condition (⊕, ⊕) or (⊕, ⊖). Special care has to be taken if the root is a loop: the slot containing it is slightly different and can be decomposed into a Boltzmann triangulation of the 1-gon and a Boltzmann triangulation of the 3-gon with boundary condition (⊕, ω_1, ⊕), where ω = ω_1 ⋯ ω_p gives the boundary condition of the hull of radius 1. The spatial Markov property stated in Proposition 26 hence yields an upper bound in which the constant does not depend on p, and the result follows from the value of y_c given in Lemma 27.
Let us now turn our attention to the root-degree of Boltzmann triangulations.
Proposition 30.
Let ω be a non-empty word on {⊕, ⊖} and let D_ω be the degree of the root of a Boltzmann triangulation with boundary condition ω and parameter ν = ν_c. Then there exist two constants c and λ < 1 such that, for every ω and every k ≥ 1, P(D_ω ≥ k) ≤ cλ^k. Proof. In [34, Proposition 30], a similar result is obtained for Boltzmann triangulations without spins (corresponding to the case ν = 1 in our setting). Following the same approach, we stochastically dominate the root degree by the total number of particles in a subcritical branching process. To that aim, we explore a Boltzmann triangulation with a peeling process that focuses on exploring the neighbours of the root edge.
Fix ω a non-empty word and let T^ω_bol be a Boltzmann triangulation with boundary condition ω = (ω_1, …, ω_p). Recall that the root face of this triangulation lies on the right-hand side of its root edge. When the face adjacent to the left-hand side of the root edge is revealed, several possibilities can occur. These possibilities are illustrated in Figure 9, and their respective probabilities can be expressed in terms of the generating series of triangulations with simple boundary conditions evaluated at their radius of convergence. Let us enumerate them: 1. If p = 2, then T^ω_bol may be reduced to the edge-triangulation. This happens with an explicit probability, and the exploration stops if this event occurs.
2. The third vertex of the revealed face is an inner vertex of T^ω_bol and has spin a ∈ {⊕, ⊖}. This happens with a probability expressed in terms of Z_{aω}(ν, t_ν) and Z_ω(ν, t_ν), and the rest of the triangulation is distributed as T^{aω}_bol.
3. The third vertex of the revealed face belongs to the boundary of T^ω_bol, say it is the i-th vertex starting from the target of the root edge, with spin ω_i. This happens with probability ν^{δ_{ω_1=ω_p}} t_ν Z_{(ω_1,…,ω_i)}(ν, t_ν) · Z_{(ω_i,…,ω_p)}(ν, t_ν) / Z_ω(ν, t_ν), and the two triangulations remaining to be explored are independent and distributed respectively as T^{(ω_1,…,ω_i)}_bol and T^{(ω_i,…,ω_p)}_bol. Since we are only interested in the root degree of T^ω_bol, we can further distinguish two subcases: (a) The third vertex is not the root vertex (meaning p > 1 and i ≠ p). In that case only the subtriangulation distributed according to T^{(ω_i,…,ω_p)}_bol contains the root vertex of T^ω_bol, and we discard the other remaining part. (b) The third vertex is the root vertex (meaning i = p). In this case the two remaining subtriangulations contain the root vertex and we have to explore both of them; we say that the exploration branches. Notice that in this case the two subtriangulations are distributed as T^ω_bol and T^⊕_bol, and that the probability of this event simplifies to ν^{δ_{ω_1=ω_p}} t_ν Z_⊕(ν, t_ν). When the exploration is complete, every edge adjacent to the root vertex has been discovered, and each exploration step (taking into account the steps of every branch of the exploration) increases the degree by 1 or 2 (for loops). Therefore, the degree of the root vertex of T^ω_bol is bounded from above by twice the total number of particles in a multitype branching process B, where the types of the particles are finite words on {⊕, ⊖} ending with ⊕ (which is always the spin of the root vertex), and whose transition probabilities are given by: • Case 1: A particle of type (a, ⊕) has no child with probability ν^{1(a=⊕)} t_ν / Z_{a⊕}(ν, t_ν).
• Case 3b: A particle of type ω has two children of respective types ⊕ and ω, with probability ν^{δ_{ω_1=ω_p}} t_ν Z_⊕(ν, t_ν). The branching process B has an infinite number of types, which makes it difficult to analyze. We introduce another branching process, denoted B′, that stochastically dominates B, has only finitely many types (and as few as possible!) and is subcritical. Since only particles of types ⊕⊕ and ⊖⊕ can die, and since branching always gives birth to particles of type ⊕, we keep these three types and group the types of length larger than two together. To get an interesting bound, we end up keeping five types, denoted ⊕, ⊕⊕, ⊖⊕, Ω⊕⊕ and Ω⊖⊕, where the last two regroup the corresponding original types of length larger than two.
The offspring distribution for type ⊕ in B′ is the same as in B. Namely, an individual of type ⊕ has: • Two children of type ⊕ with probability νt_ν Z_⊕(ν, t_ν).
The offspring distributions for types ⊕⊕ and ⊖⊕ in B′ are the same as in B, where all particles of type length larger than two are merged. Namely, for a fixed a in {⊖, ⊕}, an individual of type a⊕ has: • No children with probability ν^{1(a=⊕)} t_ν / Z_{a⊕}(ν, t_ν).
• One child of type Ωa⊕ with the remaining probability. The second probability is taken to be larger than the branching probability in B (hence the factor (1 ∨ ν)).
With these choices, we can couple the two branching processes B and B′ so that the total number of particles in B′ is larger than the total number of particles in B, in the following natural way. A particle of type t (ending with ⊕) in B is projected in B′ to a particle of type ⊕ if t = ⊕, of type a⊕ if t = a⊕ with a ∈ {⊕, ⊖}, or of type Ωa⊕ if t has length at least 3 and ends with a⊕, again with a ∈ {⊕, ⊖}.
The offspring distributions of particles of types ⊕, ⊕⊕ and ⊖⊕ are equal in B and in B′. Fix a ∈ {⊕, ⊖} and ω ∈ {⊕, ⊖}^+; because of the differences between the offspring distribution of a particle of type Ωa⊕ and that of a particle of type ωa⊕, the coupled branching processes B and B′ may differ in two ways: • A particle of type ωa⊕ may have only one child, of type ωa⊕, in B, whereas its counterpart in the coupled branching process B′ has two children: one of type Ωa⊕ and one of type ⊕.
• A particle of type ωa⊕ may have one child of type a⊕, whereas its counterpart in the coupled branching process B′ has one child of type Ωa⊕.
To prove that the total number of particles (TNOP) in B is dominated by the TNOP in B′, it is then enough to prove that the TNOP of B′ started with a single particle of type a⊕ is dominated by the TNOP of B′ started with a single particle of type Ωa⊕ (for the same value of a). Given the definition of the offspring distributions, a branching process B′ started with a particle of type Ωa⊕ has to produce at least one particle of type a⊕ (again for the same value of a) in order to go extinct. It is hence clear that the TNOP of a branching process B′ is stochastically greater when the process is started with a particle of type Ωa⊕ than with a particle of type a⊕, which proves the claimed domination.
To prove that B′ is subcritical, we write the matrix M of the mean number of children of each type for B′, with the ordering (⊕, ⊕⊕, Ω⊕⊕, ⊖⊕, Ω⊖⊕) (all generating series Z_ω being evaluated at t_ν). To finish the proof of the proposition, we check that the spectral radius of M is strictly smaller than 1. Since we have explicit formulas for each quantity appearing in M, we can easily compute its spectral radius at any specified ν. For ν = ν_c, we obtain 0.98985 < 1!
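Checking subcriticality then amounts to a spectral-radius computation on a 5 × 5 nonnegative matrix, a one-liner once the entries of M are known. The entries below are placeholders, since the explicit formulas of M in terms of the series Z_ω are not reproduced here:

```python
# Sketch: subcriticality test for a multitype branching process via the
# spectral radius of its mean matrix. The matrix below is a placeholder
# for the paper's matrix M with ordering (⊕, ⊕⊕, Ω⊕⊕, ⊖⊕, Ω⊖⊕).
import numpy as np

M = np.array([
    [0.30, 0.20, 0.00, 0.20, 0.00],
    [0.10, 0.25, 0.10, 0.00, 0.00],
    [0.20, 0.10, 0.30, 0.00, 0.10],
    [0.10, 0.00, 0.00, 0.25, 0.10],
    [0.00, 0.10, 0.20, 0.10, 0.30],
])
rho = max(abs(np.linalg.eigvals(M)))
print(f"spectral radius = {rho:.5f}; subcritical: {rho < 1}")
```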
"year": 2018,
"sha1": "07866d33f9b19495ceff5a000bb79be4fac4751d",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://doi.org/10.1090/tran/8150",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "2a98839ae1a486f6cf3ca892d6f49c5bf58fd808",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Functional transcriptome analysis of the postnatal brain of the Ts1Cje mouse model for Down syndrome reveals global disruption of interferon-related molecular networks
Background: The Ts1Cje mouse model of Down syndrome (DS) has partial triplication of mouse chromosome 16 (MMU16), which is partially homologous to human chromosome 21. These mice develop various neuropathological features identified in DS individuals. We analysed the effect of partial triplication of the MMU16 segment on global gene expression in the cerebral cortex, cerebellum and hippocampus of Ts1Cje mice at 4 time-points: postnatal day (P)1, P15, P30 and P84.

Results: Gene expression profiling identified a total of 317 differentially expressed genes (DEGs), selected from various spatiotemporal comparisons, between Ts1Cje and disomic mice. A total of 201 DEGs were identified from the cerebellum, 129 from the hippocampus and 40 from the cerebral cortex. Of these, only 18 DEGs were identified as common to all three brain regions and 15 were located in the triplicated segment. We validated 8 selected DEGs from the cerebral cortex (Brwd1, Donson, Erdr1, Ifnar1, Itgb8, Itsn1, Mrps6 and Tmem50b), 18 DEGs from the cerebellum (Atp5o, Brwd1, Donson, Dopey2, Erdr1, Hmgn1, Ifnar1, Ifnar2, Ifngr2, Itgb8, Itsn1, Mrps6, Paxbp1, Son, Stat1, Tbata, Tmem50b and Wrb) and 11 DEGs from the hippocampus (Atp5o, Brwd1, Cbr1, Donson, Erdr1, Itgb8, Itsn1, Morc3, Son, Tmem50b and Wrb). Functional clustering analysis of the 317 DEGs identified interferon-related signal transduction as the most significantly dysregulated pathway in Ts1Cje postnatal brain development. RT-qPCR and western blotting analysis showed that both Ifnar1 and Stat1 were over-expressed in the P84 Ts1Cje cerebral cortex and cerebellum as compared to wild-type littermates.

Conclusions: These findings suggest that over-expression of interferon receptors may lead to over-stimulation of the Jak-Stat signaling pathway, which may contribute to the neuropathology in the Ts1Cje or DS brain. The role of interferon-mediated activation or inhibition of signal transduction, including the Jak-Stat signaling pathway, has been well characterized in various biological processes and disease models, including DS, but information pertaining to the role of this pathway in the development and function of the Ts1Cje or DS brain remains scarce and warrants further investigation.

Electronic supplementary material: The online version of this article (doi:10.1186/1471-2164-15-624) contains supplementary material, which is available to authorized users.
Background
Down Syndrome (DS) is a genetic disorder resulting from trisomy or partial trisomy of human chromosome 21 (HSA21). This syndrome is a non-heritable genetic disorder that occurs at a prevalence of approximately 1 in 750 live births [1]. DS has been associated with more than 80 clinical manifestations, including cognitive impairment or intellectual disability, craniofacial features, cardiac abnormalities, hypotonia and early onset Alzheimer's disease [2,3]. In terms of cognitive impairment, DS individuals have an average Intelligence Quotient (IQ) value of 50 [4] as well as learning impairment involving both long-term and short-term memory [5]. DS individuals also present with reduced brain size, brain weight, brain volume, neuronal density, and neuronal distribution with neurons that are characterized by shorter dendritic spines, reduced dendritic arborization and synaptic abnormalities [6][7][8].
There are various hypotheses that attempt to explain the genotype-phenotype relationship of DS. The gene dosage imbalance hypothesis states that an increased copy number of genes on HSA21 leads to an overall increase in gene and protein expression and a subset of these directly result in the traits associated with DS [1]. In contrast, the amplified developmental instability hypothesis suggests that the dosage imbalance of genes on HSA21 results in a general disruption of genomic regulation and expression of genes involved in development, which upsets normal homeostasis and results in many of the traits associated with DS [9]. A further proposed hypothesis is known as the critical region hypothesis and is based on genetic analyses performed on individuals with partial trisomy of HSA21. This line of thinking suggests that a small set of genes within the Down Syndrome Critical or Chromosomal Region (DSCR) are responsible for the development of common DS phenotypes [10]. However, this hypothesis is not supported by experiments on DS individuals, which demonstrated that the DSCR is more likely to be a susceptible region for DS phenotypes, rather than a single critical region causing all DS phenotypes [11][12][13]. In reality, it is unlikely that the DS traits are caused by one genetic mechanism but instead are due to a combination of mechanisms, with the added complexity of further genetic and epigenetic controls [14]. Some researchers have suggested that dosage imbalance of certain genes may not have any effect on the DS phenotype as they are "dosage compensated" under certain circumstances [1].
Significant genetic homology exists between HSA21 and mouse chromosome 16 (MMU16) [15], MMU17 and MMU10 [16], which has allowed the generation of mouse models of DS and testing of genotype-phenotype correlation hypotheses. There are a few strains of mice that are trisomic for segments of MMU16 that are homologous to HSA21 including Ts65Dn [mitochondrial ribosomal protein L39, (Mrpl39)-zinc finger protein 295, (Znf295)] [17], Ts1Yey [RNA binding motif protein 11, (Rbm11)-Znf295] [18], Ts1Cje [superoxide dismutase 1, soluble, (Sod1)-Znf295] [19] and Ts1Rhr [carbonyl reductase 1, (Cbr1)-myxovirus (influenza virus) resistance 2, (Mx2)] [12] strains. In addition, the Ts2Yey [protein arginine N-methyltransferase 2, (Prmt2)-pyridoxal (pyridoxine, vitamin B6) kinase, (Pdxk)] strain [20] is trisomic for MMU10 segments, whereas the Ts3Yey [ribosomal RNA processing 1 homolog B (S. cerevisiae), (Rrp1b)-ATP-binding cassette, sub-family G (WHITE), member 1, (Abcg1)] [20] and Ts1Yah [U2 small nuclear ribonucleoprotein auxiliary factor (U2AF) 1, (U2af1)-Abcg1] [21] strains are trisomic for segments of MMU17. Each of these mouse models was found to perform differently in cognitive and hippocampal long-term potentiation (LTP) or long-term depression (LTD) tests and exhibit differences in brain morphology and behavioural phenotypes as well as neuropathology [22]. As such, there is currently no perfect mouse model to study the DS brain. In 2010, Yu and colleagues [20] generated a mouse model [Dp (10)1Yey/+;Dp (16)1Yey/+;Dp (17)1Yey/+] with regions that are syntenic to all of HSA21. This mouse model is characterised by several DS-related neuropathological features including cognitive impairment and reduced hippocampal LTP. Unfortunately, the mice develop hydrocephalus, a phenotype that is rarely associated with DS, and 25% of these animals die between 8 to 10 weeks of age [20]. The Ts1Cje mouse model, also known as T(12;16)1Cje, was developed in 1998 and carries a partial trisomy of MMU16 resulting from a translocation of a segment of MMU16 spanning across the superoxide dismutase 1 (Sod1) gene to the zinc finger protein 295 (Znf295) gene onto MMU12 [19,23]. This trisomic region is syntenic to HSA21. Recent literature reports a significant correlation between Ts1Cje mice phenotypes and DS individuals, including altered hippocampus-dependent learning and memory [24][25][26], craniofacial defects [27] and reduced cerebellar volume [23,28]. This makes Ts1Cje a suitable model to study the neurobiology networks and mechanisms that contribute to the neuropathology in DS individuals. Olson and colleagues [28] reported that the Ts1Cje mouse is defective in both prenatal and postnatal neurogenesis. We have recently demonstrated that adult Ts1Cje mice start with a similar number of adult neural stem cells as their control littermates, but later develop fewer neuronal progenitors, neuroblasts and neurons [29]. In that study we also reported that differentiated Ts1Cje neurons harbour fewer neurites and have an increased number of astrocytes, which demonstrates that the Ts1Cje mouse has defective neurogenesis and neuronal development. Similar observations have been reported by different studies that showed impaired adult neurogenesis in the subventricular zone (SVZ) and impaired embryonic neurogenesis in Ts1Cje neocortices [30]. 
The Ts1Cje hippocampus also exhibits abnormal short- and long-term synaptic plasticity [26], as well as an impairment that is restricted to the spatially oriented domain, since short- and long-term novel object recognition memory is conserved [25].
Many genomic studies have been conducted on various tissues from mouse models of DS. To date, gene expression studies on Ts1Cje have mostly been done on the postnatal cerebellum up to day 30 [23,31,32]. Gene expression analyses on Ts1Cje whole brain at postnatal day 0 [33], and on neocortical neurospheres at embryonic day 14.5 [34] have also been reported. We have previously analysed the global gene expression in Ts1Cje adult neural stem cells (P84) [29]. All previous studies have been completed on specific brain regions or the whole brain and have not encompassed the entire postnatal brain development period. In addition, gender differences and hormonal influences may also be a confounding factor in some of these gene expression studies as not all reported the gender of their subjects and littermate controls. In order to understand the effect of segmental MMU16 trisomy on the postnatal Ts1Cje brain and the complex mechanisms that may result in neuropathology, we performed a comprehensive spatiotemporal gene expression profiling analysis of 3 brain regions (cerebral cortex, cerebellum and hippocampus) at 4 different time points (Postnatal day (P)1, P15, P30 and P84). These regions were selected for analysis as they are most commonly reported to be affected by neuropathology in DS and mouse models [35]. Furthermore, mice at postnatal day (P)1, P15, P30 and P84, correspond to postnatal brain development and function during the neonatal, juvenile, young adult and adult periods.
Methods
Ethics statement, animal breeding, handling and genotyping
Breeding procedures, husbandry and all experiments performed on mice used in this study were carried out according to protocols approved by the Walter and Eliza Hall Institute Animal Ethics Committee (Project numbers 2001.45, 2004.041 and 2007.007) and the Faculty of Medicine and Health Sciences, Universiti Putra Malaysia Animal Care and Use (ACU) committee (Approval reference: UPM/FPSK/PADS/BR-UUH/00416). All sex-matched disomic and trisomic littermates involved in the study were generated by mating Ts1Cje males with C57BL/6 female mice. All mice were kept in a controlled environment with an equal light/dark cycle. Standard pellet diet and water were provided ad libitum. Genomic DNA was extracted from mouse tails and genotyped using multiplex PCR primers for neomycin (neo) and glutamate receptor, ionotropic, kainate 1 (Grik1) as an internal control, as described previously [19], with gel electrophoresis replaced by high-resolution melting analysis.
Tissue procurement, RNA extraction, quality control and microarray analysis
Procurement of the cerebral cortex, hippocampus and cerebellum was performed on 3 Ts1Cje and 3 disomic female littermates at 4 time points (P1.5, P15, P30 and P84) according to a method described previously [36]. Only female mice were utilized in the study to avoid the downstream effects of Y-linked genes on neural sexual differentiation [37]. Total RNA was purified from each tissue, with assessment of RNA quality and quantification of purified RNA performed according to methods described previously [29]. Each RNA sample was processed using the Two-Cycle Target Labeling Assay and hybridized onto Affymetrix GeneChip® Mouse Genome 430 2.0 arrays (Affymetrix, USA) according to the manufacturer's protocols. Fluorescent signals were detected using a GeneChip® Scanner 3000 (Affymetrix, USA) and expression data were pre-processed and normalized using the gcRMA algorithm [38]. All datasets were normalized by comparing Ts1Cje trisomic mouse brains to their disomic littermates.
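gcRMA itself is an R/Bioconductor method, so it cannot be reproduced faithfully here; the sketch below only illustrates the quantile-normalisation idea underlying RMA-type preprocessing, on a made-up probe-by-sample intensity matrix.

```python
# Sketch: quantile normalisation, the distribution-matching step used by
# RMA-type pipelines (gcRMA adds GC-content background correction, omitted).
import numpy as np

X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])              # rows: probes, columns: samples
ranks = X.argsort(axis=0).argsort(axis=0)    # per-column ranks
mean_quantiles = np.sort(X, axis=0).mean(axis=1)
X_norm = mean_quantiles[ranks]               # all columns share one distribution
print(X_norm)
```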
Differentially expressed genes (DEGs), gene ontology and pathway analyses
The Empirical Bayes t-statistic [39] was used to analyse differential expression of genes between groups according to a method described previously [29]. Briefly, stringent criteria were employed to select differentially expressed genes (DEGs), namely t-statistic values of ≥ 4 or ≤ −4 and an adjusted P-value of ≤ 0.05. Selected DEGs were collectively analysed for functional ontologies using the Database for Annotation, Visualisation and Integrated Discovery (DAVID) [40]. High classification stringency was used to analyse the gene lists, with the following settings: a kappa similarity threshold of 0.85, a minimum term overlap of three, initial and final group memberships of two, a multiple linkage threshold of 0.50, and a modified Fisher exact p-value (enrichment) threshold of 0.05. All DEGs were analysed according to brain regions and/or time-points.
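For concreteness, the DEG selection rule (|t| ≥ 4 and adjusted P ≤ 0.05) is a simple filter on a limma-style results table. The table and its column names below are illustrative assumptions, not the paper's actual pipeline output:

```python
# Sketch: applying the stated DEG cutoffs to a toy results table.
import pandas as pd

results = pd.DataFrame({          # hypothetical probe-level statistics
    "probe":     ["a", "b", "c", "d"],
    "t":         [5.2, -4.4, 2.1, -8.0],
    "adj_P_Val": [0.01, 0.04, 0.20, 0.001],
})
degs = results[(results["t"].abs() >= 4) & (results["adj_P_Val"] <= 0.05)]
print(degs)                       # probes a, b and d pass the cutoffs
```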
Quantitative real time polymerase chain reaction (RT-qPCR)
RT-qPCR was performed to validate the expression of DEGs using cDNAs that were generated from the same RNAs used for microarray analysis. First-strand cDNA was synthesized from 3000 ng total RNA using random hexamers and the SuperScript™ III Reverse Transcriptase Kit (Invitrogen, USA) according to the manufacturer's protocol. Primers were designed and probes selected using ProbeFinder version 2.34 (except for Stat1, where ProbeFinder version 2.45 was used) at the Universal ProbeLibrary Assay Design Center (Roche Applied Science, http://lifescience.roche.com/). RT-qPCR was performed in triplicate using the LC480 Master Probe Mix (Roche Diagnostics, Switzerland) and Universal ProbeLibrary (UPL) probes (Roche Diagnostics, Australia) according to published methods [29,36] (see Additional file 1 for a full list of primers and UPL probes used). Conditions for the RT-qPCR, calculation of the quantification cycle for each signal, determination of PCR efficiencies, reproducibility (R² values) and relative quantification of target gene expression in Ts1Cje and disomic samples were performed essentially according to methods described previously [36]. Successful assays were defined by a PCR efficiency of between 90% and 110% and an R² value > 0.98.
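The 90–110% efficiency window corresponds to the usual standard-curve calculation E = 10^(−1/slope) − 1 on a dilution series. The Cq values below are invented solely to show the computation:

```python
# Sketch: PCR efficiency and R^2 from a 10-fold dilution standard curve,
# with the acceptance criteria stated in the text. Cq values are made up.
import numpy as np

log10_input = np.array([0, -1, -2, -3, -4])   # log10 relative template input
cq = np.array([18.1, 21.5, 24.8, 28.2, 31.5]) # hypothetical Cq measurements
slope, intercept = np.polyfit(log10_input, cq, 1)
r2 = np.corrcoef(log10_input, cq)[0, 1] ** 2
eff = 10 ** (-1 / slope) - 1                  # amplification efficiency
print(f"efficiency = {eff:.1%}, R^2 = {r2:.4f}")
print("assay passes:", 0.90 <= eff <= 1.10 and r2 > 0.98)
```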
Microarray datasets and differentially expressed genes (DEGs)
To investigate the effect of partial trisomy on postnatal brain development and function in Ts1Cje mice, we performed 72 whole-genome expression analyses using GeneChip® Mouse Genome 430 2.0 Arrays (Affymetrix, Santa Clara, USA). The analyses encompassed comparison of three brain regions (cerebral cortex, cerebellum and hippocampus) at 4 different time points (Postnatal (P)1, P15, P30 and P84) in Ts1Cje and disomic female mice. These datasets are publicly accessible from the Gene Expression Omnibus website under the series accession number GSE49050 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE49050).
To investigate the overall characteristics of genes in the trisomic region, we plotted their log2 fold-change (M) for trisomic versus disomic mice against the average log2 expression (A) (Figure 1). Probe-sets that were not expressed or showed no differences between the groups of mice were plotted near 0. There was consistently a larger number of probe-sets located in the trisomic region with M values greater than 0.58, signifying their 1.5-fold upregulation in various brain regions and developmental stages compared to probe-sets located in disomic regions of the genome. Our observation therefore supports the gene dosage imbalance hypothesis, which specifies that an increased copy number of genes will lead to an overall increase in their expression by 50%.
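The M and A statistics of Figure 1 follow directly from the probe-set intensities; below is a minimal sketch of the quantities plotted and of the 1.5-fold cutoff M > 0.58, using invented intensities:

```python
# Sketch: MA statistics, M = log2(T/D) and A = (1/2)*log2(T*D), with the
# 0.58 cutoff (log2 of 1.5). Intensities are illustrative values only.
import numpy as np

T = np.array([520.0, 130.0, 980.0, 75.0])   # Ts1Cje probe-set intensities
D = np.array([350.0, 120.0, 610.0, 80.0])   # disomic probe-set intensities
M = np.log2(T / D)
A = 0.5 * np.log2(T * D)
print("M:", M.round(2), "A:", A.round(2))
print("probe-sets above the 1.5-fold line:", int(np.sum(M > 0.58)))
```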
Genes located within the trisomic region have a 50% increase in copy number compared to genes located within disomic regions. According to the gene dosage imbalance hypothesis, we expect only a small fold-change difference in the level of gene expression between the Ts1Cje and disomic groups, resulting in a small number of globally differentially expressed genes (DEGs) based on our stringent selection criteria (see Methods). The analysis revealed 317 DEGs based on all spatiotemporal comparisons completed between the Ts1Cje and disomic mice (Table 1; Additional file 2). Of these DEGs, 41 are located on the MMU16 triplicated segment (Table 2) and all of the significant probe-sets were found to be upregulated by 1.4- to 4.8-fold, which again supports the gene dosage imbalance hypothesis.
Functional clustering of DEGs based on gene ontologies
To dissect the ontologies that are enriched in the list of DEGs, we employed a top-down screening approach to analyze any disrupted molecular networks on a global level, followed by refined analyses involving specific brain regions or developmental stages. An initial analysis of the 317 DEGs revealed 7 significant functional clusters that were associated with interferon-related signaling pathways (23 DEGs, 6 ontologies), innate immune pathways (9 DEGs, 4 ontologies), the Notch signaling pathway (4 DEGs, 1 ontology), neuronal signaling pathways (9 DEGs, 2 ontologies), cancer-related pathways (11 DEGs, 4 ontologies), cardiomyopathy-related pathways (3 DEGs, 2 ontologies) and dynamic regulation of cytoskeleton pathways (7 DEGs, 2 ontologies). The functional clustering analysis was repeated using the lists of DEGs from each brain region regardless of developmental stage and subsequently at each developmental stage. The DEGs found at each developmental stage were found to be significantly enriched for the same pathways identified in the list of 317 DEGs (see Additional file 3). The results of the top-down functional screening approach are illustrated in Figure 3.

Figure 1 MA plots of trisomic and disomic microarray probe-sets from 3 different brain regions (cerebral cortex, cerebellum and hippocampus) at 4 postnatal (P) time points (P1, P15, P30 and P84). The Y-axis represents the M value, which is the log2 ratio (log2(T/D)), whereas the X-axis represents the A value, which is the mean log2 intensity ((1/2) × log2(T × D)). T and D represent the intensities of microarray probe-sets for Ts1Cje and disomic samples, respectively. Each blue dot represents a single probe. Red dotted lines denote the cutoff at M values of 0.58, signifying 1.5-fold upregulation of microarray probe-sets.
Based on the analysis involving all 317 DEGs, only 3, namely Ifnar1, Ifnar2 and interferon gamma receptor 2 (Ifngr2), from the triplicated MMU16 region were enriched in the functional clusters that were identified (Figure 3). These DEGs were found within two annotation clusters for six interferon-related signaling pathways, including the interferon alpha signaling pathway, natural killer cell mediated cytotoxicity, cytokine-cytokine receptor interaction, the toll-like receptor signaling pathway, the Janus kinase (Jak)-signal transducer and activator of transcription (Stat) signaling pathway and the inflammation mediated by chemokine and cytokine signaling pathways. Interestingly, these DEGs are surface interferon receptors and were also found to be enriched for the same functional clusters in all regions of the brain assessed regardless of developmental stage. This suggests that trisomy of Ifnar1, Ifnar2 and Ifngr2 is crucial in causing dysregulation of interferon-related pathways, which may in turn contribute to the developmental and functional deficits in the Ts1Cje brain. Disomic DEGs that were clustered with the 3 interferon receptors include activin receptor IIB (Acvr2b), caspase 3 (Casp3), collagen, type XX, alpha 1 (Col20a1), ectodysplasin A2 isoform receptor (Eda2r), epidermal growth factor receptor (Egfr), c-fos induced growth factor (Figf), growth differentiation factor 5 (Gdf5), histocompatibility 2, K1, K region (H2-K1), interleukin 17 receptor A (Il17ra), interferon regulatory factor 3 (Irf3), interferon regulatory factor 7 (Irf7), inositol 1,4,5-triphosphate receptor 3 (Itpr3), lymphocyte cytosolic protein 2 (Lcp2), leptin receptor (Lepr), nuclear factor of activated T-cells, cytoplasmic, calcineurin-dependent 4 (Nfatc4), regulator of G-protein signaling 13 (Rgs13), signal transducer and activator of transcription 1 (Stat1) and Tnf receptor-associated factor 6 (Traf6). We consider these as important candidates for further analysis to understand the neuropathology of DS. We propose that differential regulation of these disomic genes will lead to a number of further cascades of low-level gene dysregulation within the Ts1Cje brain. For example, we found Egfr to be interconnected in various dysregulated molecular pathways represented by different functional clusters including the calcium signaling pathway, neuroactive ligand-receptor interaction and the MAPK signaling pathway, as well as pathways in cancers such as pancreatic and colorectal cancers, which involve focal adhesion and regulation of actin cytoskeleton (Figure 3).
We were also interested to elucidate all potential molecular pathways represented by the 18 DEGs that were common to all brain regions analysed throughout development (Atp5o, Brwd1, Chaf1b, Cryzl1, Dnah11, Donson, Dopey2, Erdr1, Ifnar1, Ifnar2, Itgb8, Itsn1, Morc3, Mrps6, Pigp, Psmg1, Tmem50b and Ttc3). Functional clustering analysis of these genes showed that interferon-related pathways were enriched, which was mainly attributed to the presence of Ifnar1 and Ifnar2. Combining our functional analyses, our data suggest that interferon-related pathways are globally dysregulated and therefore important in causing neurological deficits within the Ts1Cje mouse brain.
Western blotting
Both microarray and RT-qPCR analyses demonstrated significant differences in Ifnar1, Ifnar2 and Stat1 expression levels in the P84 cerebral cortex and cerebellum. To evaluate the effect of mRNA levels on protein synthesis, we measured the expression level of these proteins in cerebral cortex and cerebellum lysates prepared from P84 Ts1Cje and wild-type mice (Figure 7). Based on the pixelation analysis of the bands, Ifnar1 and Stat1 were found to be significantly (p ≤ 0.05) over-expressed in the Ts1Cje cerebellum as compared to wild type, with 2.69- and 4.93-fold increases, respectively. In Ts1Cje cerebral cortices, we observed 1.55- and 1.73-fold upregulation of Ifnar1 and Ifnar2 expression, respectively, when compared to wild type. However, neither was statistically significant based on pixelation analysis (see Additional file 4).
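A pixelation (densitometry) comparison of the kind reported here reduces to a fold-change and a two-sample test on normalised band intensities. The values below are invented for illustration only:

```python
# Sketch: fold change and significance from band densitometry, mirroring the
# pixelation analysis in the text. Intensity values are hypothetical.
import numpy as np
from scipy import stats

wt = np.array([1.00, 0.91, 1.12])   # normalised wild-type band intensities
ts = np.array([2.55, 2.81, 2.70])   # normalised Ts1Cje band intensities
fold = ts.mean() / wt.mean()
t_stat, p = stats.ttest_ind(ts, wt)
print(f"fold change = {fold:.2f}, p = {p:.4f}")
```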
Discussion
This study aimed to identify disruptions in molecular pathways caused by the partial trisomy of mouse chromosome 16 (MMU16) harbored by Ts1Cje mice, which results in neuropathology similar to that observed in people with DS. We provide the most comprehensive molecular expression catalogue for the Ts1Cje developing postnatal brain to date. Previous studies have focused on single brain regions or the whole brain at limited developmental stages [23,29,[31][32][33][34].
We completed a stringent microarray analysis throughout postnatal development (P1.5, P15, P30 and P84) of the cerebral cortex, cerebellum and hippocampus of Ts1Cje versus disomic littermates. The majority of the trisomic probe-sets showed an approximately 50% (1.5-fold) increase in expression in Ts1Cje mice as compared to disomic controls. Our data are in agreement with previously reported microarray analyses involving Ts1Cje and disomic littermate control primary neural stem and progenitor cells [29] and Ts1Cje P0 mouse whole brains [33] or the cerebellum [32], which demonstrated a dosage-dependent over-expression of genes on the triplicated segment of MMU16. According to the spatial analysis, the number of DEGs identified in the cerebellum and hippocampus was consistently higher than in the cerebral cortex at all time points. It is widely accepted that the cerebral cortex is the most highly developed part of the brain, is responsible for the majority of information processing and higher cognitive functions, and is the most recent addition in evolutionary terms. We hypothesise that the smaller number of DEGs in this region throughout postnatal development represents the higher level of genetic control required for the cerebral cortex to function at a level that allows survival. Further evidence that supports this theory includes a meta-analysis [41] demonstrating that the human cortex has a reproducible genomic aging pattern whilst the cerebellum does not. This reproducibility reflects a higher level of gene expression control in the cortex compared to the cerebellum, even through the degenerative process of aging, to maintain a certain level of function.

Figure 4 RT-qPCR validation of selected DEGs in the cerebral cortex. Red lines or asterisks denote RT-qPCR data whereas black lines or asterisks denote microarray data. *p < 0.05, **p < 0.01 and ***p < 0.001 based on Empirical Bayes t-statistic test.

Figure 5 RT-qPCR validation of selected DEGs in the cerebellum. Red lines or asterisks denote RT-qPCR data whereas black lines or asterisks denote microarray data. *p < 0.05, **p < 0.01 and ***p < 0.001 based on Empirical Bayes t-statistic test.

The Ts1Cje mouse model contains a partial monosomy of MMU12 following partial translocation of MMU16 onto this site. An ~2 Mb segment of the telomeric end of MMU12 is deleted [23], and consequently seven genes were deleted (Abcb5, Dnah11, Itgb8, Macc1, Sp4, Sp8, and Tmem196) [42]. Our data showed that dynein axonemal heavy chain 11 (Dnah11) is significantly up-regulated in all three brain regions and four postnatal developmental time points, with a log2 expression ratio that ranged from 5.4 to 7.7. This over-expression of Dnah11 is consistent with previously reported cerebellum microarray expression results [23] and is probably specific to the Ts1Cje mouse model [23,33], since similar over-expression in DS patients or the Ts65Dn mouse model has not been observed [43][44][45][46]. Over-expression of the Dnah11 gene is likely caused by the position effect of an upstream regulatory element following translocation onto MMU12 in the Ts1Cje genome. In our study, the expression levels of Sp8 and Itgb8 are down-regulated (Additional file 2: Table S2), as they are monosomic in Ts1Cje [42]. Sp8, trans-acting transcription factor 8, is important for patterning in the developing telencephalon, specification of neuronal populations [47] and also neuromesodermal stem cell maintenance and differentiation via Wnt3a [48].
Meanwhile, Itgb8, integrin beta 8, is crucial for neurogenesis and the regulation of neurovascular homeostasis [49]. This down-regulation of Sp8 and Itgb8 may affect DS neuropathology features to a certain extent in the Ts1Cje mouse brain. The remaining four monosomic genes in Ts1Cje mice [ATP-binding cassette, sub-family B (MDR/TAP), member 5 (Abcb5); metastasis associated in colon cancer 1 (Macc1); trans-acting transcription factor 4 (Sp4); and transmembrane protein 196 (Tmem196)] were not found to be dysregulated in our data.
Our data are also in agreement with a previously reported meta-analysis that was performed on DS patient tissues, cell lines and mouse models at different developmental stages [50]. Fifteen of the top 30 DS trisomic genes with direct dosage effects reported in the meta-analysis [50] were also selected as DEGs in our analysis [Cbr1; carbonyl reductase 3 (Cbr3); Donson; Down syndrome critical region gene 3 (Dscr3); E26 avian leukemia oncogene 2, 3' domain (Ets2); phosphoribosylglycinamide formyltransferase (Gart); Ifnar2; Ifngr2; Psmg1; regulators of calcineurin 1 (Rcan1); Son; synaptojanin 1 (Synj1); Tmem50b; Ttc3 and Wrb]. The expression of dual-specificity tyrosine-(Y)-phosphorylation regulated kinase 1a (Dyrk1a), a well-studied gene in DS individuals and mouse models, has been found to be inconsistent across various expression profiling studies involving the brain of Ts1Cje mice. Dyrk1a was not differentially regulated in our dataset, and our finding is in agreement with two other studies on the embryonic Ts1Cje neurosphere [34] and early postnatal Ts1Cje whole brains [33], but is in contrast to that of Laffaire et al [23], who observed Dyrk1a over-expression in the cerebellum of early postnatal Ts1Cje mice. According to our dataset, Rcan1, which is located in the Down syndrome critical region (DSCR), was over-expressed in the P1 cerebral cortex and P15 hippocampus of Ts1Cje mice. Rcan1-null mice demonstrated deficits in spatial learning and memory, implicating its role in late-phase long-term potentiation and memory formation [51]. In addition, RCAN1-1S over-expression in the hippocampal neuronal cell line HT22 resulted in hyperphosphorylation of tau [52], which positions Rcan1 as an important candidate for further investigation of DS-related Alzheimer's disease features. Functional clustering of various DEGs based on DAVID ontologies highlighted a global dysregulation of interferon-related molecular networks in all brain regions, attributed mainly to the dysregulated expression of the trisomic genes Ifnar1 and Ifnar2. These genes code for the IFN alpha/beta receptor subunits 1 and 2, respectively. However, Ifngr2, which encodes one of the two subunits of the IFN gamma receptor, was differentially upregulated in the cerebellum only. A role for all three interferon receptors and their dysregulation has been described in mouse models of DS. For example, in mouse fetuses trisomic for MMU16 (Ts16), which includes the interferon alpha and gamma receptor genes, knockout of these genes improved growth compared to Ts16 fetuses and generated cortical neurons with viability similar to that of their euploid counterparts [53]. In the present study, the upregulation of these receptors suggests that the Ts1Cje mouse would have a lower response threshold or hyper-responsiveness to interferons or cytokines, resulting in activation of downstream intracellular signaling pathways that contribute to the observed neuropathology, particularly in the cerebellum.

Figure 6. RT-qPCR validation of selected DEGs in the hippocampus. Red lines or asterisks denote RT-qPCR data whereas black lines or asterisks denote microarray data. *p < 0.05, **p < 0.01 and ***p < 0.001 based on Empirical Bayes t-statistic test.
In addition to Ifnar1, Ifnar2 and Ifngr2, our analysis showed that other Jak-Stat-associated genes such as Stat1 (P84), Lepr (P1) and two interferon response factor genes, Irf3 (P15) and Irf7 (P84), were upregulated in the Ts1Cje cerebellum. Irf3 and Irf7 have been shown to induce type 1 interferons, which subsequently stimulate Jak-Stat signal transduction pathways, leading to upregulated transcription of various interferon-stimulated genes [54-56]. Leptin and its receptor, Lepr, have been shown to be involved in leptin-dependent adult hippocampal neurogenesis [57] and to mediate neuroprotection of dopaminergic cells via activation of the Jak-Stat, mitogen-activated protein kinase (MEK)/extracellular signal-regulated kinase (ERK) and growth factor receptor-bound protein 2 (GRB2) signaling pathways in a mouse model of Parkinson's disease [58]. The role of the Jak-Stat signaling pathway in the brain, however, is unclear. Jak-Stat signaling has recently been implicated in neurogenesis/cell-fate determination [59,60], astrogliogenesis [61,62] and synaptic plasticity [63,64] within the nervous system of rats and fruit flies, but not specifically in the development and progression of neuropathology in mouse models or individuals with DS.

All selected DEGs are trisomic genes located on chromosome 16, except for Erdr1 (a disomic gene located on chromosome X), Itgb8 (a monosomic gene located on chromosome 12), Sod1 (a trisomic gene located on chromosome 16, although one of its copies is non-functional due to truncation), and Stat1 (a disomic gene located on chromosome 1). *p < 0.05, **p < 0.01 and ***p < 0.001 based on Empirical Bayes t-statistic test.

Figure 7. Western blotting analysis of Ifnar1 (66 kDa), Ifnar2 (55 kDa) and Stat1 (91 kDa) in the cerebral cortex and cerebellum of adult (P84) Ts1Cje and wild type littermates. Each band represents one Ts1Cje or wild type mouse in the respective brain region.

Elevation of STAT1 activities has been shown to promote astrogliogenesis during the neurogenic phase of development [61]. We have previously demonstrated that Ts1Cje mice have a number of defects in adult neurogenesis, including a severe reduction in the number of neurons produced and an increased number of astrocytes [29]. Our current protein analysis further confirmed the over-expression of Ifnar1 and Stat1 in the cerebellum of adult (P84) Ts1Cje mice compared to their wild type littermates. Therefore, we hypothesize that over-activation of Jak-Stat signal transduction, due to increased sensitivity towards interferons via over-expression of interferon receptors, may lead to a preference for the glial-fated path in Ts1Cje neural precursors that contributes to the neuropathology observed in Ts1Cje mice. The role of the trisomic genes Ifnar1, Ifnar2 and Ifngr2 and the disomic gene Lepr in the upregulation of Stat1, Irf3 and Irf7 and the subsequent activation of Jak-Stat signaling in the Ts1Cje mouse brain, particularly the cerebellum, remains elusive and warrants further investigation. From the list of validated trisomic DEGs, Brwd1, Donson, Tmem50b and Itsn1 were upregulated in all brain regions, which concurs with previous studies [65-72]. Both Brwd1 and Donson are not well studied and have not been associated with the progression and development of neuropathology in DS. Brwd1 encodes a nuclear protein that plays a role in transcriptional regulation related to diverse biological functions [65,66]. Donson, on the other hand, encodes a protein of unknown function. Fusion transcripts encoded by exons from Donson and another trisomic DEG, Atp5o, have been reported, but their role/function also remains unknown [67]. Tmem50b encodes an intracellular membrane protein expressed mainly in the endoplasmic reticulum and Golgi apparatus of the rodent brain [68]. At the subcellular level, Tmem50b is expressed in rat and mouse glial fibrillary acidic protein (GFAP)-positive cells and to a lesser degree in neuronal microtubule-associated protein 2 (MAP2)- or beta-tubulin II-positive cells in vitro, suggesting a role for this gene in astroglial cell development or function. Upregulation of ITSN1 has been demonstrated previously in the prosencephalon of DS fetuses compared with controls [69]. Itsn1 is also expressed in both proliferating and differentiating neurons in the mouse brain [69] and has been shown to regulate endocytosis events, probably via the formation of clathrin-coated vesicles, which are important for recycling synaptic vesicles [70]. Endocytosis anomalies such as enlarged endosomes in neurons were identified as an early neuropathological feature in the brain of Ts65Dn mice and individuals with DS and Alzheimer's disease [71,72]. Over-expressed Itsn1 and amyloid beta (A4) precursor protein (App) may contribute to the early development of Alzheimer's disease in DS individuals by accelerating beta amyloid and neurofibrillary tangle accumulation via increased endocytosis activity in neurons.
Our microarray data demonstrate that many other trisomic DEGs, such as Atp5o, Cbr1, Dopey2, Erdr1, Hmgn1, Morc3, Mrps6, Son and Wrb, are upregulated in Ts1Cje mouse brain regions. The molecular and cellular functions of these DEGs have not been comprehensively characterized in the brain, and therefore their potential roles in the onset and progression of the neuropathology observed in DS remain poorly understood. Of these DEGs, the expression profiles of Cbr1, Dopey2, Erdr1, Hmgn1 and Mrps6 are in agreement with previous studies of DS mouse models [31,32,73-75]. The chromatin-binding protein Hmgn1 is a negative regulator of methyl CpG-binding protein 2 (MeCP2) expression via chromatin structure changes and histone modification in the MeCP2 promoter [76]. As MeCP2 has widespread effects on gene expression, especially in neurological diseases such as Rett syndrome [77], over-expressed Hmgn1 will down-regulate MeCP2 expression, which may disrupt downstream gene expression necessary for normal brain development. Dopey2 has been proposed as a candidate gene responsible for mental retardation in DS individuals because its expression was found in brain regions involved in learning and memory processes [75,78-80]. Transgenic mice over-expressing Dopey2 demonstrated an increased density of cortical cells, suggesting that this protein may play an important role in brain morphogenesis and may therefore contribute to the neuropathology of DS when over-expressed [78,80]. These under-characterised DEGs are important candidates that should be investigated further to understand various neuropathological features of DS.
Conclusion
Our study aimed to define the disrupted molecular pathways caused by partial triplication of MMU16 during postnatal brain development in the Ts1Cje mouse model of DS. Global analysis of transcriptomes from different regions of the Ts1Cje brain supported a gene-dosage effect of the majority of the trisomic genes, which led to dysregulation of the disomic genome. Interferon-related pathways were identified as the most significantly dysregulated molecular networks, and these changes were attributed mainly to the upregulation of the interferon receptors, which are encoded by the trisomic genes Ifnar1, Ifnar2 and Ifngr2. Upregulation of Ifnar1 and Stat1 proteins in the adult Ts1Cje cerebral cortex and cerebellum suggests that interferon receptor over-expression may lead to over-stimulation of the Jak-Stat signaling pathway. The role of interferon-mediated activation or inhibition of signal transduction has been well characterized in various biological processes and disease models, including DS, but information pertaining to its role in the development and function of the Ts1Cje or DS brain remains scarce and warrants further investigation. | 2016-05-04T20:20:58.661Z | 2014-07-22T00:00:00.000 | {
"year": 2014,
"sha1": "c5402be1718886f9ae7342118b0249668cce2378",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-15-624",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3d61843706065025d8bed8bf74ed6afefed4a70b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
226965992 | pes2o/s2orc | v3-fos-license | Identification of Genomic Alterations of Perineural Invasion in Patients with Stage II Colorectal Cancer
Purpose The molecular mechanism of perineural invasion (PNI) in stage II colorectal cancer (CRC) remains to be clearly defined. This study aims to identify the genomic aberrations related to PNI in stage II CRC. Patients and Methods Using array-based comparative genomic hybridization (array-CGH), primary tumor tissues and paracancerous normal tissues of stage II CRC with PNI and without PNI were analyzed. We identified genomic aberrations using Genomic Workbench and MD-SeeGH and validated the aberrations of selected genes by real-time polymerase chain reaction (PCR). Gene ontology (GO) and pathway analyses were performed to determine the most likely biological effects of these genes. Results The most frequent gains in stage II CRC were at 7q11.21-q11.22, 8p11.21, 8p12-p11.23, 8q11.1-q11.22, 13q12.13-q12.2, and 20q11.21-q11.23, and the most frequent losses were at 17p13.1-p12, 8p23.2, and 18q11.2-q23. Four high-level amplifications at 8p11.23-p11.22, 18q21.1, 19q11-q12, and 20q11.21-q13.32 and homozygous deletions at 20p12.1 were discovered in stage II CRC. Gains at 7q11.21-q22.1, 16p11.2, 17q23.3-q25.3, 19p13.3-p12, and 20p13-p11.1, and losses at 11q11-q12.1, 11p15.5-p15.1, 18p11.21, and 18q21.1-q23 were more commonly found in patients with PNI by frequency plot comparison together with detailed genomic analysis. It was also observed that gains at 8q11.1-q24.3, 9q13-q34.3, and 13q12.3-q13.1, and losses at 8p23.3-p12, 17p13.3-p11.2, and 21q22.12 occurred more frequently in patients without PNI. Further validation showed that the expression of FLT1, FBXW7, FGFR1, SLC20A2 and SERPINI1 was significantly up-regulated in the NPNI group compared to the PNI group. GO and pathway analysis revealed some genes enriched in specific pathways. Conclusion These genomic changes may help reveal the mechanisms underlying PNI in stage II CRC and provide candidate biomarkers.
Introduction
Colorectal cancer (CRC) has been ranked third in terms of cancer incidence and second in terms of cancer mortality, according to the International Agency for Research on Cancer (IARC). 1 Management of CRC patients is commonly defined by the TNM stage at diagnosis, which is based on the depth of tumor wall invasion, lymph node involvement and distant metastasis. 2 However, the TNM stage alone does not accurately predict the prognosis or distinguish whether the patient should receive adjuvant chemotherapy, particularly in patients with stage II CRC. Among CRC, TNM stage II constitutes a very wide spectrum, and the 5-year overall survival of surgically resected patients ranges between 75% and 80%. 3,4 Many clinicopathological features have been associated with a high risk of recurrence and metastasis in stage II CRC, among which perineural invasion (PNI) has been associated with a poor outcome, and the postoperative survival rate of stage II CRC patients with PNI is thought to be more similar to that of stage III. [5][6][7][8] Complex signaling between tumor cells, the nerves, and stromal cells is probably related to the pathogenesis of PNI. [9][10][11][12] Several previous studies have identified that over-expression of the ITGAV gene, a higher degree of PIWIL2 expression, down-regulated E-cadherin expression, CDX2 loss, and the loss of certain tight junction proteins are associated with higher progression and spread of CRC. [13][14][15] However, the study of the molecular mechanism of PNI and the internal relation between PNI and tumor metastasis is still largely in its infancy, and related research has not been conducted in patients with stage II CRC. Our interest is to detect frequent DNA copy number changes and identify genomic alterations in stage II CRC patients with PNI. Different from conventional comparative genomic hybridization (CGH), which cannot detect changes in small chromosomal regions, array-based CGH (array-CGH) allows analysis of DNA copy number aberrations (DCNAs) at the gene level; it has been used for rapid genome-wide screening of genetic aberrations such as gains and losses in solid tumors and has proven to be a valuable and convenient method. In the present study, the genomic alterations of stage II CRC both with PNI and without PNI were investigated by array-CGH.
Patients and Tumor Tissues
Fresh tumor tissues and corresponding paracancerous normal tissues from 100 stage II CRC patients treated in the Department of Colorectal Surgery, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, between 2010 and 2015 were included in this study. They were divided into two groups: a PNI group of 50 cases with PNI and a no perineural invasion (NPNI) group of 50 cases without PNI. PNI was defined as tumor cells surrounding >33% of the nerve circumference without invading through the nerve sheath, or tumor cells within any of the three layers of the nerve sheath.
The clinicopathological characteristics of the patients in this study are summarized in Table 1 and the two groups were comparable in terms of age, sex, location, T stage and differentiation. None of the patients had received neoadjuvant therapy and all of them underwent radical operation (R0 resection). The study protocol was approved by the Institutional Review Board for Human Use at Cancer Hospital, Chinese Academy of Medical Sciences, and informed consent for sampling and molecular analysis was obtained from all the patients.
Genomic DNA Extraction
Genomic DNA was isolated from tumor tissues and the corresponding paracancerous normal tissues using the Qiagen DNeasy Blood & Tissue Kit according to the manufacturer's instructions (Qiagen, Hilden, Germany).
Array-CGH Analysis
Array-CGH analysis was carried out in five cases with PNI and five cases without PNI using standard Agilent protocols (Agilent Technologies, Santa Clara, CA). DNA from normal tissues was used as a reference for tumor DNA, and all the DNA was digested with Alu I and Rsa I restriction enzymes (Promega, Warrington, UK). Tumor DNA (~500-1000 ng) was labelled with cyanine-5 dUTP, and the same amount of normal tissue-matched reference DNA was labelled with cyanine-3 dUTP (Agilent Technologies, Santa Clara, CA). Mixing and hybridization were performed on an Agilent 44K human genome CGH microarray (Agilent) for 40 h after clean-up. Then, washing, scanning, and data extraction were performed as described earlier. 16
Microarray Data Analysis
A specially designed microarray reader system with software Agilent Genomic Workbench (Agilent Technologies, Santa Clara, CA) and MD-SeeGH (www.flintbox.com), was used for analyzing the microarray data. Agilent Genomic Workbench was used to calculate the log2 ratio for every probe and to identify genomic aberrations. A mean log2 ratio >0.75 of all probes in a chromosome region was considered as a high-level DNA amplification, a mean log2 ratio >0.25 and ≤0.75 as a genomic gain, a mean log2 ratio <−0.25 and ≥−0.75 as a hemizygous loss, and a mean log2 ratio <−0.75 as a homozygous deletion.
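To make the thresholding rule concrete, here is a minimal Python sketch (not the authors' software; the function name and example values are ours) that maps a region's mean log2 ratio to an aberration call using the thresholds above:

```python
# Minimal sketch: classify a chromosome region's mean log2(tumor/reference)
# ratio using the thresholds stated in the text (illustrative only).
def classify_aberration(mean_log2_ratio: float) -> str:
    if mean_log2_ratio > 0.75:
        return "high-level amplification"
    if mean_log2_ratio > 0.25:
        return "gain"
    if mean_log2_ratio < -0.75:
        return "homozygous deletion"
    if mean_log2_ratio < -0.25:
        return "hemizygous loss"
    return "no change"

for r in (1.1, 0.4, 0.0, -0.5, -0.9):
    print(r, "->", classify_aberration(r))
```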
Real-Time PCR
To further verify the expression of selected genomic aberrations identified by array-CGH, we performed real-time PCR in all 100 cases. The PCR reactions were carried out with Power SYBR Green PCR Master Mix on the ABI 7300 (Applied Biosystems, Warrington, UK). The amplification procedure was as follows: an initial denaturation at 95°C for 2 min, followed by 40 cycles of 95°C for 15 s and 60°C for 1 min. GAPDH was used as the internal control, and the relative expression level of each gene was calculated by the relative quantification ($2^{-\Delta\Delta C_T}$) method. ΔCT was calculated by subtracting the average GAPDH CT from the average CT of the gene of interest.
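The $2^{-\Delta\Delta C_T}$ arithmetic can be illustrated with a short sketch (the CT values here are hypothetical, not data from this study; the function name is ours):

```python
# Illustrative 2^(-ddCT) calculation for one gene.
# dCT = CT(gene) - CT(GAPDH) within a group; ddCT = dCT(test) - dCT(control).
def relative_expression(ct_gene_test, ct_ref_test, ct_gene_ctrl, ct_ref_ctrl):
    d_ct_test = ct_gene_test - ct_ref_test   # delta-CT in the test group
    d_ct_ctrl = ct_gene_ctrl - ct_ref_ctrl   # delta-CT in the control group
    dd_ct = d_ct_test - d_ct_ctrl            # delta-delta-CT
    return 2 ** (-dd_ct)                     # fold change

print(relative_expression(24.1, 18.0, 25.6, 18.2))  # ~2.5-fold up-regulation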
Gene Ontology and Pathway Analysis
The "clusterProfiler" package was recruited to perform the functional annotation of all significantly differentially expressed genes (DEGs), and gene ontology (GO) enrichment analysis including cellular component, molecular function, and biological processes was performed. In organisms, different genes coordinate with each other to exercise their biological functions. Pathway-based analysis was performed to further understand the biological functions of genes. The most important biochemical metabolic pathway and signal transduction pathway involved in genes was determined by significant enrichment of Pathways. The Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathway was the main database for Pathway significance enrichment analysis.
Statistical Analysis
Statistical analysis was performed with SPSS software, version 19.0 for Windows (SPSS Inc., Chicago, IL, USA). Distribution of the data was checked for normality using the Kolmogorov-Smirnov test. Non-parametric continuous variables were compared using the Mann-Whitney U-test.
Normally distributed continuous variables were presented as the mean and standard deviation and were compared using Student's t-test. Qualitative variables were given as the number and percentage and were compared with the χ2-test. A two-sided P value lower than 0.05 was considered statistically significant. Among the 10 cases analyzed by array-CGH (Table 3), fewer than 40 genetic alterations were confirmed in five stage II CRC cases (50%), and 40-84 DNA copy number changes were revealed in four cases (40%). The number of DNA copy changes was not different between the patients with PNI and without PNI (P = 0.294).
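The test-selection logic described above can be sketched with SciPy (the arrays are illustrative, not study data, and the SPSS workflow may differ in detail):

```python
# Sketch: choose between Student's t-test and Mann-Whitney U based on a
# normality check, mirroring the procedure described in the text.
from scipy import stats

a = [2.1, 2.5, 1.9, 2.8, 2.2]   # e.g., expression in the PNI group
b = [3.0, 3.4, 2.9, 3.8, 3.1]   # e.g., expression in the NPNI group

# Kolmogorov-Smirnov test of the standardized data against a normal.
normal_a = stats.kstest(stats.zscore(a), "norm").pvalue > 0.05
normal_b = stats.kstest(stats.zscore(b), "norm").pvalue > 0.05

if normal_a and normal_b:
    stat, p = stats.ttest_ind(a, b)        # Student's t-test
else:
    stat, p = stats.mannwhitneyu(a, b)     # Mann-Whitney U-test
print(f"p = {p:.4f} ({'significant' if p < 0.05 else 'not significant'})")
```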
Genomic Changes Associated with PNI in Stage II CRC
The genetic alterations linked with PNI status detected by array-CGH were analyzed using the frequency plot comparison and significance analysis of microarrays (SAM) methods. Gains at 7q11.21-q22.1, 16p11.2, 17q23.3-q25.3, 19p13.3-p12, and 20p13-p11.1, and losses at 11q11-q12.1, 11p15.5-p15.1, 18p11.21, and 18q21.1-q23 were more commonly found in patients with PNI (Table 4). The results showed that the expression of FLT1 (P < 0.01), FBXW7 (P < 0.001), FGFR1 (P < 0.05), SLC20A2 (P < 0.001) and SERPINI1 (P < 0.001) was significantly up-regulated in the NPNI group compared to the PNI group (Figure 3), which was consistent with the array-CGH analysis.
GO and Pathways Enrichment
In order to determine the most likely biological effects of these genes, we performed GO analysis on the CGH data. GO analysis revealed that the genes changed in stage II CRC belonged to classes of genes participating in the following: organic substance biosynthesis, regulation of metabolic processes, molecular functions, regulation of macromolecule biosynthesis, binding, biosynthetic processes, regulation of macromolecule metabolic processes and metabolic processes (Figure 4). We analyzed the genes of each of the two groups and found that the genes related to PNI mainly participated in DNA binding, olfactory receptor activity, sensory perception of smell, and related biological processes. Meanwhile, the genes related to NPNI mainly belonged to homophilic cell adhesion via plasma membrane adhesion molecules, flavonoid glucuronidation, flavonoid biosynthetic processes and cellular glucuronidation (Figure 5).
The related genes were annotated and enriched by pathway analysis, and it was found that the genes changed in stage II CRC were mainly involved in the following pathways: signal transduction, gene expression, metabolism, immune system, metabolism of proteins, signaling by GPCR, generic transcription pathway, metabolic pathways, GPCR downstream signaling, and other basic metabolic processes. The KEGG pathway analysis revealed that these genes were mainly represented in metabolic pathways (Figure 4). We also analyzed the pathways in each of the two groups ( Figure 5).
Discussion
PNI was first described in a primary head and neck tumor in 1862 by Neumann and refers to tumor invasion of nervous structures and spread along nerve sheaths. 17 With developments in the microanatomy of the peripheral cutaneous nerve, the definition of PNI has continued to change. 12,18 Many different definitions of PNI are in use, and there is still no agreement on a clear definition of PNI-positivity. However, the broadest definition of PNI, widely used in the literature and also used in our study, is that tumor cells surround >33% of the nerve circumference without invading through the nerve sheath, or lie within any of the three layers of the nerve sheath. 19 The incidence of PNI is reported to be 14-32% in CRC, which is much lower than in pancreatic cancer (98%), cholangiocarcinoma (75-85%), prostate cancer (75%), and gastric cancer (60%). 20 However, numerous reports have confirmed and quantified the strong negative prognostic impact on recurrence and survival in CRC when PNI is noted. 8 Due to the controversy regarding adjuvant therapy in stage II CRC patients, the prognostic significance of PNI in stage II CRC appears particularly important in clinical practice. 18 Pathological features such as PNI, perforation, serosal extension, low tumor differentiation, a low number of examined lymph nodes, and venous or lymphatic invasion have been associated with a poor prognosis; thus, these patients may derive a potentially greater benefit from adjuvant chemotherapy. 21 Although it has been reported that stage II CRC patients with PNI who received chemotherapy had a significantly improved survival rate compared to those who did not, the target genes and molecular mechanisms underlying the association between PNI and stage II CRC remain unclear. 22 Using CGH, many studies have investigated the genetic alterations in CRC and identified chromosome regions and genes correlated with carcinogenesis and tumor progression. It is known that the genomic changes of the adenoma-carcinoma sequence include the activation of K-Ras and the inactivation of at least three tumor suppressor genes, namely loss of APC (chromosome region 5q21), loss of p53 (chromosome region 17p13), and loss of heterozygosity for the long arm of chromosome 18 (18q LOH). 23 Interestingly, losses at 8p, 17p, 18p, and 18q and gains at 8q and 20q have been reported in patients with CRC. Multiple high-level amplifications at 20q were also seen, centering at 20q13.32. Kim et al confirmed that the gelsolin (GSN) gene at 9q33.2 was associated with PNI in CRC through the finding that the invasion potential was >2-fold greater in GSN-overexpressing LoVo cells than in control cells. 24 It was also reported that patients with low GSN expression had a significantly higher 5-year recurrence-free survival (RFS) rate than those with GSN overexpression (73.6% vs 64.7%, p=0.038), which suggested its potential value as a predictor of recurrence or as a therapeutic target in CRC patients. It was found that CRC patients with PNI showed overexpression of the ITGAV gene at 2q31-q32 compared to CRC patients without PNI (p=0.028), and the expression of the corresponding ITGAV protein was also validated in that study (p=0.001). 25 Oh et al revealed a significant correlation between a high degree of PIWIL2 gene expression at chromosome 8 and PNI in CRC (p=0.027), and PIWIL2 may contribute to a poor prognosis in CRC. 26 The loss of expression of the paracellular tight junction proteins claudin-1, -4, and -7 was demonstrated to be related to PNI, tumor invasion depth, stage of the disease, tumor grade, lymphovascular invasion, and lymph node status in an investigative study. 15
We further selected candidate genes for validation by real-time PCR; these genes have been reported in other diseases. FLT1 has been shown to play direct roles in biological and pathological events associated with cellular proliferation, transformation, migration, apoptosis, and vascularization. 27 In our study, FLT1 was expressed at significantly greater levels in the NPNI group than in the PNI group (P < 0.01). Reduced FBXW7 expression levels and loss-of-function mutations have been found in a wide range of human cancers, and the overall point mutation frequency is 6% to 35% in human cancers, with tissue specificity. 28 Previous studies disclosed that low expression of FBXW7 was associated with lymph node metastasis and advanced TNM stage and acted as an independent prognostic indicator of poor outcome in patients with colorectal cancer. 29 Our data further revealed that FBXW7 expression was negatively associated with PNI (P < 0.001). Amplification of the FGFR1 gene is reported to be associated with worse clinical outcomes compared with no FGFR1 amplification, 30 but we found that high FGFR1 expression was frequently observed in stage II CRC patients without PNI (P < 0.05). Mutations in SLC20A2 are a major cause of primary familial brain calcification (PFBC), 31 and our study also identified a link to stage II CRC patients without PNI. SERPINI1 is suggested as an epithelial-mesenchymal transition-associated gene and is reported to be related to hepatocellular carcinoma and CRC. 32 Furthermore, the results of this study showed that amplification of SERPINI1 was more common in stage II CRC patients without PNI (P < 0.001). Moreover, GO analysis confirmed that the genes related to PNI mainly participated in DNA binding and olfactory receptor activity, while the genes related to NPNI mainly belonged to homophilic cell adhesion via plasma membrane adhesion molecules and flavonoid glucuronidation. The Reactome pathway analysis revealed that the genes related to PNI were mainly represented in the pathways of signal transduction, gene expression, and metabolism, suggesting that PNI may affect the prognosis of CRC through these processes and that related target genes may belong to those pathways.
Conclusion
In conclusion, our data provide detailed genomic aberrations related to PNI in stage II CRC. Further studies are necessary to clarify the candidate target genes and to explore their implications in stage II CRC.
Ethics Statements
The study was carried out in accordance with the Helsinki Declaration and its later amendments regarding the involvement of human subjects and the use of human tissues for research. | 2020-11-12T09:08:24.212Z | 2020-05-03T00:00:00.000 | {
"year": 2020,
"sha1": "3306d3b0a76f0a7592b9a1588e2bf3474f3e842d",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=63626",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "11ffebfcb73e74552ee9d17dd9f899079f549d81",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
110337858 | pes2o/s2orc | v3-fos-license | Durability Reliability Demonstration Test Methods
Abstract A complete product design cycle should include the following three stages: target setting, product design and development, and product validation. In the first target setting stage, it is required to understand the voice of customers (VOC), define the extreme customer usage conditions, acquire the duty cycle or customer usage data for the target, and cascade the top-down VOC targets from vehicle to system and component levels. In the second stage of product design and development, the multi-objective design optimization in the virtual analysis domain needs to be performed, where the analytical tools to design the products for various design criteria should have been developed and in place for analyses. In the final product validation stage, the test methods with a limited sample size are required to demonstrate the reliability of the product population for meeting the specific design functional and performance requirements. Since the test results exhibit significant statistical variations, these test methods (named the reliability demonstration test methods) are the most important mechanism to ensure a successful product launch for durability, reliability and quality. Again, due to the limited sample size for testing, a specific reliability and confidence target should be predetermined to be demonstrated by these test methods. The focus of this paper is to review the theoretical background and discuss the pros and cons of commonly used reliability demonstration test methods. Depending on the failure criterion in product validation testing, these reliability demonstration test methods can be categorized into the methods for non-repairable and repairable systems. For a non-repairable system, the test is considered complete, censored or suspended as the system fails, while a repairable system allows the actions required to restore/renew a failed system to operational status. The methods of interest for non-repairable systems include the test-to-failure method, the test-to-bogey method, the extended life test bogey method, and the step-stress accelerated life test method. Repairable systems are often modeled with the non-homogeneous Poisson process which includes the Duane and the Crow reliability growth models.
AMSAA - Army Materiel Systems Analysis Activity

C - Lower-bound confidence

CB - Two-sided confidence bounds on a variable

$\chi^2_{\alpha,1}$ - Chi-squared distribution with one degree of freedom for a two-sided confidence interval

$\beta$ - Shape parameter (or Weibull slope)

$\theta$ - Scale parameter (or characteristic life)

$\theta_C$ - True Weibull scale parameter corresponding to one-sided lower-bound confidence C

$\alpha$, $b$ - Duane's curve-fitting "positive" parameters

$\Lambda$ - Log-likelihood function

$\partial\Lambda/\partial\beta$ - Derivative of the log-likelihood function with respect to $\beta$

$\partial\Lambda/\partial\theta$ - Derivative of the log-likelihood function with respect to $\theta$

A successful product design development process needs to serve as the foundation of how a company will develop products that delight its customers and improve their loyalty by providing value through optimization of quality, reliability and durability. A good product should be designed based on the voice of customers and meet customers' satisfaction. Many different product development processes have been developed over the decades, specifying the series of activities that make up the system engineering approaches. Among these models, the system V model, first developed in the 1980s, is a systematic engineering process and is emerging as a standard product development roadmap for complex products. A simplification of this V model is illustrated in Figure 1, showing that the process is composed of four key elements:

Cap - System engineering management over the entire life cycle for integration management and configuration control.

Left leg - Requirements and architecture (top-down activity), defining requirements and the product to be built.

Rungs - Design and development (horizontal activity), managing the system element development process.
Right leg - Integration and validation (bottom-up activity), managing the assembly or integration of system elements.
The left leg of requirements and architecture addresses the creative process of partitioning the initial product concept into a structure of system elements and allocating and cascading customer, company, and regulatory requirements, level by level, to the lowest-level system elements. It is crucial in this stage to decompose a complex product into an architecture of smaller, simpler, definable system elements and to understand the voice of customers (VOC), define the extreme customer usage conditions, acquire the duty cycle or customer usage data for the target, and cascade the top-down VOC targets from product to system, subsystem, component and part levels. Once the requirements and architecture of the system elements are defined, activities shift to the rungs: development and optimization of designs. This is a design verification and launch stage, where there is a need to define, refine, or create interface and system element designs and to perform multi-objective design optimization in the virtual analysis domain. Once the system elements are optimized, the right leg of integration and validation provides the opportunity to test the interfaces and to mitigate interactions between system elements for all the requirements. In this leg, test methods with a limited sample size are required to demonstrate the reliability of the product population for meeting the specific design functional and performance requirements. Since the test results exhibit significant statistical variation, these test methods (named the reliability demonstration test methods) are the most important mechanism to ensure a successful product launch for durability, reliability and quality. Again, due to the limited sample size for testing, a specific reliability and confidence target should be predetermined to be demonstrated by these test methods.
There are two unique test phases within a reliability development process: reliability development testing and reliability demonstration testing. Reliability development testing is performed to identify problems and evaluate the effectiveness of corrective redesign actions. The primary purpose is to improve reliability performance rather than measure the reliability. The most important actions are to log all failures generated during testing and assure corrective actions are taken. This is verified through testing to ensure that failure-prone aspects of the design are fixed. Reliability quantification during this development phase is aimed at measuring improvement. The purpose of a reliability demonstration test is to determine whether a designed product meets or exceeds the established minimum reliability requirement. Testing material samples or components for fatigue life or strength is a common procedure. However, practically no two nominally identical samples will produce identical test results. This is because of inherent variation in the material, which induces variation in the creation, location and propagation of dislocations and cracks, as well as inevitable variations in manufacturing and assembly. Overall, however, the population of nominally identical specimens does exhibit certain properties or traits that can be statistically characterized. Practically, one cannot test the entire population, and hence typically only certain random samples are tested. Engineering statistics helps one understand the behavior of the population by examining data from sample sets much smaller than the population. The objective of statistics is to make inferences about a population based on information contained in a sample. Two types of estimation are in common use: (1) point estimation and (2) interval estimation. The point estimate is so named because a single number represents the estimate. The interval estimate is the calculation of a region that is intended to enclose the true value of the population parameter estimated using the data in the sample.
It should be recognized that when population parameters are estimated on the basis of finite samples, errors of estimation are unavoidable. The significance of such errors is not reflected in the point estimation (estimation of a parameter that defines the population's distribution). A point estimate of a parameter is not very meaningful without some measure of the possible error in the estimate. This estimated value rarely coincides with the true value of the parameter being estimated. Therefore, an interval that is expected to include the true value of the parameter with some specified odds, at a prescribed confidence level, is necessary. This interval is called the confidence interval. Thus, a 90% confidence interval for a given parameter implies that in a long sequence of replications of an experiment, the computed limits of the interval will include the true value of the parameter about 90% of the time. The fraction of 90% is called the confidence level. The confidence interval can be one-sided or two-sided. It is true that the higher the degree of confidence, the larger the resulting interval. In reliability life testing with a demonstrated target, for example an R95C90 requirement, this means that the 90% confidence lower limit of a product reliability estimate at a measurement point should be greater than 95%. The reliability and confidence are related when one attempts to project reliability determined from a sample to the population. This paper intends to review the theoretical background and discuss the pros and cons of the commonly used points analysis methods, confidence interval estimation methods and reliability demonstration test methods. Depending on the failure criterion in product validation testing, the reliability demonstration test methods can be categorized into methods for non-repairable and repairable systems. For non-repairable systems, the commonly used reliability demonstration test methods include (1) the test-to-failure method (the Weibull analysis method), (2) the Weibull analysis of reliability data with few or no failures, (3) the attribute test method, (4) the extended life test method, and (5) the step-stress accelerated life test method. For repairable systems, the two non-homogeneous Poisson process approaches (the Duane model and the Crow-AMSAA model) are discussed.
Points Analysis Methods
Point estimation is concerned with the calculation of a single number/value from a set of observed data to represent a statistical parameter of the underlying population. For a Weibull distribution, this would include the estimation of its shape and slope parameters. This section describes two basic parameter estimation methods (the median rank regression method and maximum likelihood estimation method) which are used in most commercially available software packages.
Presented below are the four important Weibull functions. The Weibull cumulative distribution function for the fraction of the product population failing by time or life $t$ is

$$F(t) = 1 - \exp\left[-\left(\frac{t}{\theta}\right)^{\beta}\right] \quad (1)$$

where $\beta$ is the shape parameter (or Weibull slope) and $\theta$ is the scale parameter (or characteristic life). The characteristic life is defined as the life/time at the 63.2nd percentile of the cumulative distribution function. For $\beta = 1$, the Weibull distribution becomes the exponential distribution. The Weibull reliability function for the fraction of the population surviving beyond time or life $t$ is

$$R(t) = \exp\left[-\left(\frac{t}{\theta}\right)^{\beta}\right] \quad (2)$$

The Weibull probability density function is

$$f(t) = \frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta - 1}\exp\left[-\left(\frac{t}{\theta}\right)^{\beta}\right] \quad (3)$$

The Weibull hazard function is the instantaneous failure rate, defined as

$$h(t) = \frac{f(t)}{R(t)} = \frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta - 1} \quad (4)$$
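As a concrete illustration, here is a minimal Python sketch of Equations (1)-(4) (function names and example values are ours, not from the paper):

```python
# Minimal sketch of the four Weibull functions, Eqs. (1)-(4),
# for shape parameter beta and scale parameter theta.
import math

def weibull_cdf(t, beta, theta):          # Eq. (1): fraction failed by t
    return 1.0 - math.exp(-((t / theta) ** beta))

def weibull_reliability(t, beta, theta):  # Eq. (2): fraction surviving t
    return math.exp(-((t / theta) ** beta))

def weibull_pdf(t, beta, theta):          # Eq. (3)
    return (beta / theta) * (t / theta) ** (beta - 1) * weibull_reliability(t, beta, theta)

def weibull_hazard(t, beta, theta):       # Eq. (4): instantaneous failure rate
    return (beta / theta) * (t / theta) ** (beta - 1)

# At t = theta the CDF equals 1 - exp(-1), i.e., the 63.2nd percentile.
print(weibull_cdf(1000.0, 2.0, 1000.0))   # ~0.632
```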
Median Rank Regression (MRR) Method
Median rank regression is a very popular method for estimating Weibull scale and shape parameters, as evidenced by this being the default method in numerous commercial statistical software packages. It gained popularity due to the ease of calculations and programming as compared to the Maximum Likelihood method.
Essentially, this procedure fits a least-squares regression line through the test data (failure) points on a probability plot. The Weibull cumulative distribution function can be rearranged as

$$1 - F(t) = \exp\left[-\left(\frac{t}{\theta}\right)^{\beta}\right] \quad (5)$$

If natural logarithms are applied twice to both sides of Equation (5), it yields

$$\ln \ln\left[\frac{1}{1 - F(t)}\right] = \beta \ln t - \beta \ln \theta \quad (6)$$

Equation (6) can be construed in the form $\hat{Y} = \hat{A} + \hat{B}X$, with $Y = \ln t$ and $X = \ln \ln\{1/[1 - F(t)]\}$, and solved by linear regression analysis. Therefore, the Weibull parameter estimates are $\hat{\beta} = 1/\hat{B}$ and $\hat{\theta} = \exp(\hat{A})$. Please note that the parameter estimates are written with a ^ above them, indicating that they are estimated from a sample of limited size, while the actual $\beta$ and $\theta$ represent the parameters of the population. Per Abernethy [1], the procedure for calculating the MRR estimates is as follows:

1. Rank the n life data points (failures and suspensions) in ascending order, where the r failures and n - r suspensions are listed.
2. Assign reverse ranks to the data points, from n down to 1.

3. Use the following equation, developed by Johnson [4], to adjust the rank orders of the failures for the presence of suspensions:

$$\text{Adjusted rank}_i = \frac{(\text{Reverse rank}_i) \times (\text{Previous adjusted rank}) + (n + 1)}{\text{Reverse rank}_i + 1} \quad (7)$$
4. Determine the median rank for each adjusted failure order using the following discrete cumulative distribution formula $F_i$ developed by Benard and Bos-Levenbach [5]:

$$F_i = \frac{i - 0.3}{n + 0.4} \quad (8)$$

where $i$ is the adjusted rank of the $i$-th failure.

5. Estimate the Weibull parameters by the linear regression of Equation (6) applied to the r failures and their corresponding median rank values.
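The whole MRR procedure can be sketched in a few lines of Python (an illustration of the steps above, not a commercial implementation; the data values are hypothetical):

```python
# Illustrative median rank regression with Johnson's rank adjustment, Eq. (7),
# and Benard's approximation, Eq. (8).
import math

def mrr_weibull(data):
    """data: list of (life, is_failure) sorted ascending by life."""
    n = len(data)
    prev_adj = 0.0
    points = []
    for idx, (life, is_failure) in enumerate(data):
        reverse_rank = n - idx
        if not is_failure:
            continue  # suspensions only inflate later adjusted ranks
        prev_adj = (reverse_rank * prev_adj + n + 1.0) / (reverse_rank + 1.0)
        median_rank = (prev_adj - 0.3) / (n + 0.4)           # Eq. (8)
        x = math.log(math.log(1.0 / (1.0 - median_rank)))    # X = ln ln 1/(1-F)
        y = math.log(life)                                   # Y = ln t
        points.append((x, y))
    r = len(points)
    mx = sum(p[0] for p in points) / r
    my = sum(p[1] for p in points) / r
    B = (sum((p[0] - mx) * (p[1] - my) for p in points)
         / sum((p[0] - mx) ** 2 for p in points))
    A = my - B * mx
    return 1.0 / B, math.exp(A)   # (beta_hat, theta_hat)

# Example: five failures and one suspension (lives in hours).
beta, theta = mrr_weibull([(150, True), (280, True), (320, False),
                           (410, True), (550, True), (700, True)])
print(round(beta, 2), round(theta, 1))
```

Note that suspensions never receive plotting positions themselves; they only shift the adjusted ranks of subsequent failures through Johnson's formula.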
Maximum Likelihood Estimation (MLE) Method
The maximum likelihood estimation (MLE) method is a statistical procedure for parameter estimation by maximizing the likelihood or probability that a certain set of failures or suspensions occurs as observed, assuming an underlying Weibull distribution for the population. Further details of the maximum likelihood estimation method can be found elsewhere [1,2].
A general conceptual description of the maximum likelihood estimation method is provided here. For example, the Weibull joint probability density function for n independent, identically distributed random failure observations $t_1, t_2, \ldots, t_n$ is given below:

$$f(t_1, t_2, \ldots, t_n \mid \beta, \theta) = \prod_{i=1}^{n} f(t_i \mid \beta, \theta) \quad (9)$$

The likelihood function is this same function with the observed values regarded as fixed and the distribution parameters as the variables, defined as

$$L(\beta, \theta \mid t_1, \ldots, t_n) = \prod_{i=1}^{n} f(t_i \mid \beta, \theta) \quad (10)$$

In practice, the logarithm of the likelihood function, called the log-likelihood function, is used because it enables easier computations. The log-likelihood function is expressed as follows:

$$\Lambda = \ln L = \sum_{i=1}^{n} \ln f(t_i \mid \beta, \theta) \quad (11)$$

The Weibull parameters are estimated by maximizing the log-likelihood function, since the logarithm is a monotonic transformation. This can be written as the argument of the maximum, i.e., the pair $(\hat{\beta}, \hat{\theta})$ for which the log-likelihood function attains its largest value:

$$(\hat{\beta}, \hat{\theta}) = \underset{\beta,\, \theta}{\arg\max}\; \Lambda(\beta, \theta) \quad (12)$$
The two statistical parameters that maximize the log-likelihood function are determined by solving

$$\frac{\partial \Lambda}{\partial \beta} = 0, \qquad \frac{\partial \Lambda}{\partial \theta} = 0 \quad (13)$$

Presented below is the Weibull parameter estimation method with censored data. Given a total sample size n, where r is the number of failure data points ($t_i,\ i = 1, \ldots, r$) and k (= n - r) is the number of suspension data points ($T_j,\ j = 1, \ldots, k$), the likelihood function can be written as

$$L(\beta, \theta \mid t_1, \ldots, t_r, T_1, \ldots, T_k) = \prod_{i=1}^{r} f(t_i \mid \beta, \theta) \prod_{j=1}^{k} R(T_j \mid \beta, \theta) \quad (14)$$

With the introduction of the Weibull probability density and reliability functions, the log-likelihood function becomes

$$\Lambda = \sum_{i=1}^{r}\left[\ln\beta - \beta\ln\theta + (\beta - 1)\ln t_i - \left(\frac{t_i}{\theta}\right)^{\beta}\right] - \sum_{j=1}^{k}\left(\frac{T_j}{\theta}\right)^{\beta} \quad (15)$$

The MLE method differentiates the log-likelihood function with respect to $\beta$ and $\theta$, equates the resulting expressions to zero, and simultaneously solves for both $\beta$ and $\theta$:

$$\frac{\partial \Lambda}{\partial \beta} = 0, \qquad \frac{\partial \Lambda}{\partial \theta} = 0 \quad (16)$$

Therefore, the maximum likelihood estimate $\hat{\beta}$ is obtained by solving the following equation:

$$\frac{\sum_{i=1}^{r} t_i^{\hat{\beta}} \ln t_i + \sum_{j=1}^{k} T_j^{\hat{\beta}} \ln T_j}{\sum_{i=1}^{r} t_i^{\hat{\beta}} + \sum_{j=1}^{k} T_j^{\hat{\beta}}} - \frac{1}{\hat{\beta}} - \frac{1}{r}\sum_{i=1}^{r} \ln t_i = 0 \quad (17)$$

and the maximum likelihood estimate of $\hat{\theta}$ is obtained from

$$\hat{\theta} = \left[\frac{\sum_{i=1}^{r} t_i^{\hat{\beta}} + \sum_{j=1}^{k} T_j^{\hat{\beta}}}{r}\right]^{1/\hat{\beta}} \quad (18)$$
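A sketch of solving Equation (17) numerically for $\hat{\beta}$ and then evaluating Equation (18) (the data values are illustrative, and the bracketing interval for the root finder is an assumption that should cover the root):

```python
# Sketch: solve Eq. (17) for beta_hat with censored data, then get
# theta_hat from Eq. (18), using SciPy's Brent root finder.
import math
from scipy.optimize import brentq

failures = [150.0, 280.0, 410.0, 550.0, 700.0]   # t_i (illustrative)
suspensions = [320.0, 900.0]                      # T_j (illustrative)

def eq17(beta):
    xs = failures + suspensions
    num = sum(x ** beta * math.log(x) for x in xs)
    den = sum(x ** beta for x in xs)
    return num / den - 1.0 / beta - sum(map(math.log, failures)) / len(failures)

beta_hat = brentq(eq17, 0.05, 20.0)               # bracket assumed wide enough
theta_hat = (sum(x ** beta_hat for x in failures + suspensions)
             / len(failures)) ** (1.0 / beta_hat)  # Eq. (18)
print(round(beta_hat, 3), round(theta_hat, 1))
```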
Confidence Interval Estimation Methods
Interval estimates can be contrasted with point estimates. A point estimate is a single value given as the estimate of a population parameter or a reliability value that is of interest. A confidence interval estimate specifies a range within which the true population parameter or a reliability value is likely to occur for a certain percentage of the time or the population. For example, a 90% confidence interval for a true reliability value implies that in a long sequence of replications of an experiment, the calculated limits of the interval will include the true reliability value for 90% of the time or 90% of the population. This fraction 90% is called the confidence level, which is usually chosen to be 90, 95, or 99%. The desired level of confidence is determined by each company's policy. The confidence interval can be one-sided or two-sided. In reliability engineering, one usually uses one-sided lower confidence intervals rather than two-sided ones, but both can be easily related to each other.
The confidence interval width is influenced by:

Size of sample: A larger sample size will lead to a better estimate of the population parameter, because large samples are more similar to each other and carry more information, leading to narrower confidence intervals.

Level of confidence: A higher level of required confidence widens the confidence interval to ensure that the true population parameter lies within the estimated confidence interval.

Population variability: A population with large variation leads to samples with high variation, resulting in wider confidence intervals.
There are many methods used to establish (both one-sided and two-sided) confidence intervals for a given data set. This section covers three important methods used to estimate confidence intervals: the likelihood ratio method, the Fisher matrix method, and the Monte Carlo pivotal statistics method.
The Likelihood Ratio Method
The method is described in detail by Abernethy [1] and Nelson [2]. If $L(\beta, \theta)$ is the likelihood function for the population and $L(\hat{\beta}, \hat{\theta})$ is the likelihood function for the sample dataset, then the likelihood ratio function is defined as

$$\lambda = \frac{L(\beta, \theta)}{L(\hat{\beta}, \hat{\theta})} \quad (19)$$

Taking logarithms and multiplying by a constant (-2) gives

$$-2\ln\lambda = -2\left[\Lambda(\beta, \theta) - \Lambda(\hat{\beta}, \hat{\theta})\right] \quad (20)$$

Equation (20) represents the square of the likelihood ratio function, which is the square of the error estimate between the likelihood function for the population and that for the sample. This can be best described by a chi-squared distribution $\chi^2_{\alpha,1}$ with one degree of freedom for a two-sided confidence interval as follows:

$$-2\ln\lambda \le \chi^2_{\alpha,1} \quad (21)$$

The confidence bounds on a parameter such as time t or reliability R can be determined in a similar fashion as described above, but this requires that the parameter $\theta$ in the likelihood function $L(\beta, \theta)$ be replaced by t or R. Given the Weibull reliability function, the following relation exists:

$$\theta = \frac{t}{\left[-\ln R(t)\right]^{1/\beta}} \quad (22)$$

Therefore, the likelihood ratio functions defined by $L(\beta, t)$ and $L(\beta, R)$ can be used to estimate the confidence bounds on time t and reliability R, respectively.
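A simplified sketch of a likelihood-ratio bound on $\theta$ follows (here $\beta$ is held fixed at an assumed MLE for brevity; a full treatment profiles both parameters, and all data values are illustrative):

```python
# Sketch: one-parameter likelihood-ratio interval on theta at 90% confidence.
import math
from scipy.stats import chi2
from scipy.optimize import brentq

failures = [150.0, 280.0, 410.0, 550.0, 700.0]   # illustrative lives
beta_hat = 2.2                                    # assumed shape MLE

def loglik(beta, theta):   # Weibull log-likelihood, complete data
    return sum(math.log(beta / theta) + (beta - 1) * math.log(t / theta)
               - (t / theta) ** beta for t in failures)

# For fixed beta, Eq. (18) gives the theta that maximizes the likelihood.
theta_hat = (sum(t ** beta_hat for t in failures)
             / len(failures)) ** (1.0 / beta_hat)
crit = chi2.ppf(0.90, df=1)    # chi-squared threshold per Eq. (21)

def boundary(theta):           # zero where -2 ln(lambda) equals the threshold
    return -2.0 * (loglik(beta_hat, theta) - loglik(beta_hat, theta_hat)) - crit

lower = brentq(boundary, 0.3 * theta_hat, theta_hat)
upper = brentq(boundary, theta_hat, 3.0 * theta_hat)
print(f"theta_hat = {theta_hat:.1f}, 90% CI = ({lower:.1f}, {upper:.1f})")
```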
The Fisher Matrix Method
A very popular method for estimating confidence bounds is through the use of Fisher's information matrix, sometimes simply called the information matrix. The Fisher matrix was named after Sir Ronald Fisher to honor all his contributions to the field of modern statistics. Like all estimation methods, it has its advantages and disadvantages. This method has the advantage of being computationally very simple, allowing for quick estimation of confidence bounds. However, it produces inaccurate results for small sample sizes. In addition, it is not recommended for data sets containing suspensions, and it is recommended that this analysis be used for populations having ten or more failures [1]. The method is described in detail by Nelson [2]. This matrix, essentially the negative Hessian, consists of the negative second partial derivatives of the log-likelihood function as follows:

$$F = -\begin{bmatrix} \dfrac{\partial^2 \Lambda}{\partial \beta^2} & \dfrac{\partial^2 \Lambda}{\partial \beta\, \partial \theta} \\[2ex] \dfrac{\partial^2 \Lambda}{\partial \theta\, \partial \beta} & \dfrac{\partial^2 \Lambda}{\partial \theta^2} \end{bmatrix} \quad (24)$$

Based on the Weibull log-likelihood function and its first derivatives with respect to $\beta$ and $\theta$ described previously, the second and cross derivatives of the log-likelihood function in the Fisher matrix can be expressed as follows:

$$\frac{\partial^2 \Lambda}{\partial \beta^2} = -\frac{r}{\beta^2} - \sum_{i=1}^{r}\left(\frac{t_i}{\theta}\right)^{\beta}\left[\ln\left(\frac{t_i}{\theta}\right)\right]^2 - \sum_{j=1}^{k}\left(\frac{T_j}{\theta}\right)^{\beta}\left[\ln\left(\frac{T_j}{\theta}\right)\right]^2 \quad (25)$$

$$\frac{\partial^2 \Lambda}{\partial \theta^2} = \frac{r\beta}{\theta^2} - \frac{\beta(\beta + 1)}{\theta^2}\left[\sum_{i=1}^{r}\left(\frac{t_i}{\theta}\right)^{\beta} + \sum_{j=1}^{k}\left(\frac{T_j}{\theta}\right)^{\beta}\right] \quad (26)$$

$$\frac{\partial^2 \Lambda}{\partial \beta\, \partial \theta} = -\frac{r}{\theta} + \frac{1}{\theta}\sum_{i=1}^{r}\left(\frac{t_i}{\theta}\right)^{\beta}\left[\beta\ln\left(\frac{t_i}{\theta}\right) + 1\right] + \frac{1}{\theta}\sum_{j=1}^{k}\left(\frac{T_j}{\theta}\right)^{\beta}\left[\beta\ln\left(\frac{T_j}{\theta}\right) + 1\right] \quad (27)$$

It should be noted that

$$\frac{\partial^2 \Lambda}{\partial \beta\, \partial \theta} = \frac{\partial^2 \Lambda}{\partial \theta\, \partial \beta} \quad (28)$$

By taking the inverse of the Fisher matrix, it is possible to calculate the variance and covariance of the statistical parameters $\beta$ and $\theta$. The inverse of the Fisher information matrix, also referred to as the covariance matrix, is written as

$$F^{-1} = \begin{bmatrix} \operatorname{Var}(\hat{\beta}) & \operatorname{Cov}(\hat{\beta}, \hat{\theta}) \\ \operatorname{Cov}(\hat{\theta}, \hat{\beta}) & \operatorname{Var}(\hat{\theta}) \end{bmatrix} \quad (29)$$

Evaluated at $\hat{\beta}$ and $\hat{\theta}$, Equations (24) and (29) are the local estimated Fisher information matrix and covariance matrix, respectively. The local estimates will be used to obtain approximate confidence bounds. The concept of confidence bound estimation is described here. In general, $G(\beta, \theta)$ is a function representing a statistical distribution with two parameters $\beta$ and $\theta$. The local mean $E[G]$ and variance $\operatorname{Var}(G)$ of this function at $\hat{\beta}$ and $\hat{\theta}$ can be approximated by

$$E[G] \approx G(\hat{\beta}, \hat{\theta}) \quad (30)$$

$$\operatorname{Var}(G) \approx \left(\frac{\partial G}{\partial \beta}\right)^2 \operatorname{Var}(\hat{\beta}) + \left(\frac{\partial G}{\partial \theta}\right)^2 \operatorname{Var}(\hat{\theta}) + 2\left(\frac{\partial G}{\partial \beta}\right)\left(\frac{\partial G}{\partial \theta}\right)\operatorname{Cov}(\hat{\beta}, \hat{\theta}) \quad (31)$$

where the estimated variances and covariance of the statistical parameters $\beta$ and $\theta$ are determined from the local Fisher information matrix. Therefore, the approximate two-sided confidence bounds on the function G can be written as

$$G_{1,2} = G(\hat{\beta}, \hat{\theta}) \pm z_{1-\alpha/2}\sqrt{\operatorname{Var}(G)} \quad (32)$$

where $z_{1-\alpha/2}$ is the standard normal variable with a cumulative probability of $1 - \alpha/2$.
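A sketch assembling the local Fisher matrix from Equations (25)-(27) and inverting it per Equation (29) (the data and the assumed MLEs are illustrative):

```python
# Sketch: numerical Fisher matrix (Eqs. (24)-(27)) and covariance (Eq. (29)).
import math
import numpy as np

failures = [150.0, 280.0, 410.0, 550.0, 700.0]
suspensions = [320.0, 900.0]
b, th = 1.9, 530.0           # assumed (beta_hat, theta_hat) for illustration
r = len(failures)
xs = failures + suspensions

d2_bb = -r / b**2 - sum((x/th)**b * math.log(x/th)**2 for x in xs)        # Eq. (25)
d2_tt = r*b/th**2 - b*(b+1)/th**2 * sum((x/th)**b for x in xs)            # Eq. (26)
d2_bt = -r/th + (1/th) * sum((x/th)**b * (b*math.log(x/th) + 1) for x in xs)  # Eq. (27)

fisher = -np.array([[d2_bb, d2_bt], [d2_bt, d2_tt]])   # Eq. (24)
cov = np.linalg.inv(fisher)                            # Eq. (29)
print("Var(beta) =", cov[0, 0], " Var(theta) =", cov[1, 1],
      " Cov =", cov[0, 1])
```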
Presented below are some examples of estimated confidence bounds on the Weibull parameters, reliability and time or life t.
Confidence Bounds on Weibull Parameters
Both the shape parameter estimate $\hat{\beta}$ and the scale parameter estimate $\hat{\theta}$ of a Weibull distribution must be positive; therefore $\ln\hat{\beta}$ and $\ln\hat{\theta}$ are assumed to be normally distributed. Based on this assumption of log-normally distributed $\hat{\beta}$ and $\hat{\theta}$, the two-sided confidence bounds on the parameters can be written as

$$\beta_{1,2} = \hat{\beta}\exp\left[\pm\frac{z_{1-\alpha/2}\sqrt{\operatorname{Var}(\hat{\beta})}}{\hat{\beta}}\right] \quad (33)$$

$$\theta_{1,2} = \hat{\theta}\exp\left[\pm\frac{z_{1-\alpha/2}\sqrt{\operatorname{Var}(\hat{\theta})}}{\hat{\theta}}\right] \quad (34)$$
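A short sketch of Equations (33)-(34), with variance values assumed to come from a covariance matrix like the one above:

```python
# Sketch: log-normal-based two-sided bounds on beta and theta.
import math
from scipy.stats import norm

beta_hat, theta_hat = 1.9, 530.0      # assumed MLEs (illustrative)
var_beta, var_theta = 0.35, 6400.0    # assumed from the covariance matrix
z = norm.ppf(0.95)                     # two-sided 90% bounds

w_b = math.exp(z * math.sqrt(var_beta) / beta_hat)    # Eq. (33)
w_t = math.exp(z * math.sqrt(var_theta) / theta_hat)  # Eq. (34)
print("beta in", (beta_hat / w_b, beta_hat * w_b))
print("theta in", (theta_hat / w_t, theta_hat * w_t))
```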
Confidence Bounds on Reliability
In order to estimate the bounds on reliability, the Weibull reliability function needs to be rearranged as follows:

$$\ln\ln\left[\frac{1}{R(t)}\right] = \beta\left(\ln t - \ln\theta\right) \quad (35)$$

Next, a new parameter u is introduced, where

$$u = \beta\left(\ln t - \ln\theta\right) \quad (36)$$
Therefore, the reliability equation becomes

$$R(t) = \exp\left[-\exp(u)\right] \quad (37)$$

and the two-sided confidence bounds on the parameter u are given by

$$u_{1,2} = \hat{u} \pm z_{1-\alpha/2}\sqrt{\operatorname{Var}(\hat{u})} \quad (38)$$

The local estimated variance of u is calculated as follows:

$$\operatorname{Var}(\hat{u}) = \left(\frac{\partial u}{\partial \beta}\right)^2\operatorname{Var}(\hat{\beta}) + \left(\frac{\partial u}{\partial \theta}\right)^2\operatorname{Var}(\hat{\theta}) + 2\left(\frac{\partial u}{\partial \beta}\right)\left(\frac{\partial u}{\partial \theta}\right)\operatorname{Cov}(\hat{\beta}, \hat{\theta}) \quad (39)$$

with the partial derivatives

$$\frac{\partial u}{\partial \beta} = \ln t - \ln\theta = \frac{u}{\beta} \quad (40)$$

$$\frac{\partial u}{\partial \theta} = -\frac{\beta}{\theta} \quad (41)$$

so that

$$\operatorname{Var}(\hat{u}) = \left(\frac{\hat{u}}{\hat{\beta}}\right)^2\operatorname{Var}(\hat{\beta}) + \left(\frac{\hat{\beta}}{\hat{\theta}}\right)^2\operatorname{Var}(\hat{\theta}) - 2\left(\frac{\hat{u}}{\hat{\beta}}\right)\left(\frac{\hat{\beta}}{\hat{\theta}}\right)\operatorname{Cov}(\hat{\beta}, \hat{\theta}) \quad (42)$$

Finally, the two-sided confidence bounds on reliability can be determined from the two-sided confidence bounds on u:

$$R_{1,2} = \exp\left[-\exp\left(u_{2,1}\right)\right] \quad (43)$$
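A sketch of these u-transform bounds, Equations (36)-(43), with assumed MLEs and covariance entries (all values illustrative):

```python
# Sketch: Fisher-matrix bounds on reliability at a given time t.
import math
from scipy.stats import norm

beta, theta = 1.9, 530.0                    # assumed MLEs
var_b, var_t, cov_bt = 0.35, 6400.0, -18.0  # assumed covariance entries
t, z = 300.0, norm.ppf(0.95)                # 90% two-sided bounds

u = beta * (math.log(t) - math.log(theta))            # Eq. (36)
var_u = ((u / beta) ** 2 * var_b + (beta / theta) ** 2 * var_t
         - 2 * (u / beta) * (beta / theta) * cov_bt)   # Eq. (42)
u_lo = u - z * math.sqrt(var_u)                        # Eq. (38)
u_hi = u + z * math.sqrt(var_u)
R_lo, R_hi = math.exp(-math.exp(u_hi)), math.exp(-math.exp(u_lo))  # Eq. (43)
print(f"R({t:.0f}) in ({R_lo:.3f}, {R_hi:.3f})")
```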
Confidence Bounds on Time t
The Weibull reliability function needs to be rewritten so that time t is a function of R, $\beta$ and $\theta$, expressed as follows:

$$\ln t = \ln\theta + \frac{1}{\beta}\ln\ln\left(\frac{1}{R}\right) \quad (44)$$
A new parameter v is introduced below:

$$v = \ln t \quad (45)$$

and, alternatively,

$$v = \ln\theta + \frac{1}{\beta}\ln\ln\left(\frac{1}{R}\right) \quad (46)$$

The two-sided confidence bounds on the new parameter v are

$$v_{1,2} = \hat{v} \pm z_{1-\alpha/2}\sqrt{\operatorname{Var}(\hat{v})} \quad (47)$$

where the local estimated variance of v is determined by

$$\operatorname{Var}(\hat{v}) = \left(\frac{\partial v}{\partial \beta}\right)^2\operatorname{Var}(\hat{\beta}) + \left(\frac{\partial v}{\partial \theta}\right)^2\operatorname{Var}(\hat{\theta}) + 2\left(\frac{\partial v}{\partial \beta}\right)\left(\frac{\partial v}{\partial \theta}\right)\operatorname{Cov}(\hat{\beta}, \hat{\theta}) \quad (48)$$

because the partial derivatives of v from Equation (46) are

$$\frac{\partial v}{\partial \beta} = -\frac{1}{\beta^2}\ln\ln\left(\frac{1}{R}\right) \quad (49)$$

$$\frac{\partial v}{\partial \theta} = \frac{1}{\theta} \quad (50)$$

Finally, the two-sided confidence bounds on time t at a specified reliability can be obtained from the two-sided confidence bounds on v:

$$t_{1,2} = \exp\left(v_{1,2}\right) \quad (51)$$
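The mirror-image calculation, bounds on life at a target reliability per Equations (46)-(51), can be sketched with the same assumed inputs:

```python
# Sketch: Fisher-matrix bounds on the life t at a target reliability R.
import math
from scipy.stats import norm

beta, theta = 1.9, 530.0
var_b, var_t, cov_bt = 0.35, 6400.0, -18.0   # assumed covariance entries
R, z = 0.90, norm.ppf(0.95)                  # B10-style life, 90% bounds

v = math.log(theta) + math.log(math.log(1.0 / R)) / beta   # Eq. (46)
dv_db = -math.log(math.log(1.0 / R)) / beta ** 2           # Eq. (49)
dv_dt = 1.0 / theta                                        # Eq. (50)
var_v = (dv_db ** 2 * var_b + dv_dt ** 2 * var_t
         + 2 * dv_db * dv_dt * cov_bt)                     # Eq. (48)
t_lo = math.exp(v - z * math.sqrt(var_v))                  # Eqs. (47), (51)
t_hi = math.exp(v + z * math.sqrt(var_v))
print(f"t(R={R}) in ({t_lo:.1f}, {t_hi:.1f})")
```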
The Monte Carlo Pivotal Statistics Method
Both the Fisher matrix method and likelihood ratio method provide for very narrow (i.e. optimistic or nonconservative) confidence bounds for sample sizes that are less than 10. Therefore, it is recommended that one use the Monte Carlo pivotal statistics method to estimate the confidence bounds for a sample size smaller than 10. This method was in part developed by Wes Fulton and Robert Abernethy with Chrysler LLC. A pivotal statistic is a quantity that is parameter-free, meaning it is independent of the properties/characteristics (such as mean, standard deviation or slope, shape parameters) of the distribution while representing the data points of the sample. It is essentially a function of the sample data but does not depend on the unknown parameters of the distribution [6].
For a normal distribution with mean $\mu$, standard deviation $\sigma$, and an observation t, a standard normal variable z can be defined as

$$z = \frac{t - \mu}{\sigma} \quad (52)$$

It is noted that z follows a normal distribution with a mean of 0 and a standard deviation of 1, irrespective of the values of $\mu$ and $\sigma$ calculated from the samples. This is said to follow a "standard normal distribution". Thus, the standard normal variable z is effectively a pivotal statistic for any data that fits a normal distribution.
Extending the idea of a pivotal statistic to a Weibull distribution, a standard Weibull variable $z_p$ can be defined as

$$z_p = \hat{\beta}\left(\ln t - \ln\hat{\theta}\right) \quad (53)$$

where $\hat{\beta}$ and $\hat{\theta}$ are the two Weibull parameters estimated from the test samples. (This is the direct analogue of Equation (52), since $\ln t$ follows a smallest extreme value distribution with location $\ln\theta$ and scale $1/\beta$.)
Ready-to-use tables are provided in [8] for ease of confidence interval calculation for Weibull-distributed samples up to a size of 20, with no suspensions.
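A sketch of how such pivotal tables can be generated by Monte Carlo: because $z_p$ is parameter-free, simulating from a unit Weibull suffices for a given sample size (this illustrates the idea only, not the published tables):

```python
# Sketch: Monte Carlo percentiles of z_p = beta_hat * ln(t / theta_hat)
# evaluated at the true B10 life, for samples of size n with no suspensions.
import math
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
n, reps = 8, 2000
t10 = -math.log(0.90)            # true B10 life when beta = theta = 1

zs = []
for _ in range(reps):
    sample = rng.weibull(1.0, size=n)            # unit Weibull data
    b, _, th = weibull_min.fit(sample, floc=0)   # MLE of shape and scale
    zs.append(b * math.log(t10 / th))
print("5th/95th percentiles of z_p:", np.percentile(zs, [5, 95]))
```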
Reliability Assessment Methods for Non-Repairable Systems
It should be mentioned that the materials presented here have been drawn heavily from Abernethy [1], Nelson [2] and Lu [3].
The Test-to-Failure Method (the Weibull Analysis Method)
The test-to-failure method has the advantage of knowing the product life distribution and failure modes under the same duty cycle test loads. Given the fact that the fatigue life failures follow the Weibull distribution, the failure data is analyzed by fitting a Weibull distribution to the data. This method relies on testing as many specimens as possible until failure occurs. The primary purpose is to learn how and when components fail in order to identify weakest links to improve reliability/durability.
With such a process, there must be at least two failures to estimate the Weibull parameters and confidence bounds. For statistically significant confidence interval estimates, there must be a much larger sample size. However, testing larger sample sizes would be more expensive as well as time consuming. Availability and cost of the parts, test rigs/setups and the test engineers required to run the tests are some of the issues with such a method. Therefore, there is a tradeoff between accuracy and time that needs to be addressed; some of the methods discussed hereafter provide alternative solutions for this problem.
Weibull Analysis of Reliability Data with Few or No Failures
For the methods described thus far, there must be at least two failures to estimate the Weibull parameters and confidence bounds. However, one may have few or zero failures in real life, resulting in inaccurate estimations or the inability to estimate parameters for practical purposes. An accurate estimation method for point parameters and confidence bounds, which can be applied with very few or no failures, is presented here.
For a given sample of n units, where r is the number of failures and k (= n - r) is the number of suspensions, the failure times or lives are denoted as $t_1, t_2, \ldots, t_n$. It is also assumed that the Weibull shape parameter $\beta$ is given (or known from experience or historical data), the failure times are independent and identically distributed, and the failure times may be intermixed among the censored times.
Nelson [9] derived the following formula to estimate the one-sided lower-bound confidence limit at level C% for the true Weibull scale parameter:

$\eta_C = \left[ \dfrac{2\sum_{i=1}^{n} t_i^{\beta}}{\chi^2_{C;\,2r+2}} \right]^{1/\beta}$,

where $\chi^2_{C;\,2r+2}$ is the C-th percentile of the Chi-square distribution with 2r + 2 degrees of freedom. Hence the reliability at t cycles with a one-sided lower C90 bound is given by $R(t) = \exp[-(t/\eta_{C90})^{\beta}]$.
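A short sketch of this calculation, assuming the reconstruction above is the intended form of Nelson's formula; the function name and the example times are illustrative only.

```python
import numpy as np
from scipy.stats import chi2

def weibayes_lower_eta(times, beta, r, conf=0.90):
    """One-sided lower confidence limit on the Weibull scale parameter,
    given an assumed shape parameter beta. `times` holds all failure and
    suspension times; r is the number of failures (r = 0 is allowed)."""
    ttt = np.sum(np.asarray(times) ** beta)    # total "Weibull time" on test
    return (2.0 * ttt / chi2.ppf(conf, 2 * r + 2)) ** (1.0 / beta)

# Example: 10 units suspended at 1000 h with no failures, beta assumed to be 2.0
eta_l = weibayes_lower_eta([1000.0] * 10, beta=2.0, r=0, conf=0.90)
rel_at_500 = np.exp(-(500.0 / eta_l) ** 2.0)   # lower-bound reliability at 500 h
print(eta_l, rel_at_500)
```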
The Attribute Test Method
The attribute test method is a "success/failure", "go/no go", or "acceptable/not acceptable" type of test, and is often referred to as the binomial test or the test-to-bogey method. A product is subjected to a minimum durability test or performance criterion, or bogey. If a test sample makes it to the bogey, it is a success; if it does not, it is a failure. For this "success/failure" type of situation, the binomial distribution applies. The binomial method has some limitations: (1) it requires numerous test samples, (2) no test failures are allowed, and (3) the failure modes and variability are not disclosed.
For a given sample size n, it is assumed that the n tests are independent and repeated under identical test conditions. For each individual test, the reliability or probability of success, denoted by R, is the same. Since each individual trial results in either success or failure, the probability of failure is (1 − R). The binomial probability formula gives the probability of r failures out of n tests:

$P(r) = \binom{n}{r} (1-R)^{r} R^{\,n-r}$, where $\binom{n}{r} = \dfrac{n!}{r!\,(n-r)!}$.
The probability of at most r failures out of n tests can be related to the lower-bound confidence level C through

$1 - C = \sum_{i=0}^{r} \binom{n}{i} (1-R)^{i} R^{\,n-i}$.

The lower confidence level is a random quantity that falls above the corresponding population value with probability C, which can be chosen to be 90%, 95%, or 99%. In general, the confidence limit based on few failures will be quite wide, indicating that the estimate has great uncertainty. For the zero-failure case this relationship reduces to $R^{n} = 1 - C$, i.e., $n = \ln(1-C)/\ln(R)$ (Equation (63)), which is the commonly used equation for the attribute test method to estimate the minimum number of samples required, with no failures, to demonstrate that the product meets a reliability R at a confidence level C. For example, the use of Equation (63) with R = 0.9 and C = 0.9 gives n = 22. Therefore, all 22 samples need to be tested and suspended at the bogey without any failures in order to demonstrate, with 90% confidence, that at least 90% of the product population will satisfy the test criterion or bogey.
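The success-run count can be computed in one line; a minimal sketch (the function name is illustrative):

```python
import math

def success_run_sample_size(R, C):
    """Minimum number of units that must pass the bogey with zero failures
    to demonstrate reliability R at confidence C (from R**n = 1 - C)."""
    return math.ceil(math.log(1.0 - C) / math.log(R))

print(success_run_sample_size(0.90, 0.90))  # -> 22, matching the example above
```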
The attribute test method is an easy reliability demonstration test method, but it can be costly because of the many samples required for prototype testing. Most importantly, the product life distribution and failure modes are not revealed.
The Extended Life Test Method
One of the situations encountered frequently in testing involves a tradeoff between sample size and testing time. If the test product is expensive, the number of test products can be reduced by extending the time of testing on fewer products. Extended testing is a method to reduce sample size by testing the samples to a time that is higher than the test bogey requirement without failures. It is also referred to as the test to extended bogey method.
Assuming that there are no failures in the sample set and that an estimate of the Weibull slope β is known, the theory is derived from the equivalence of the Weibull distribution and the success-run theory (Equation (63)). Let $n_1$ be the number of test products suspended at the extended time $t_1$ and $n_2$ be the number of test products suspended at the test bogey time $t_2$. Since the success-run theory relates the sample size, the lower confidence bound, and the reliability at the bogey time, the reliability at $t_1$ with a lower-bound confidence C and sample size $n_1$ is expressed by

$R(t_1)^{\,n_1} = 1 - C$.

Similarly, the reliability at $t_2$ with a lower-bound confidence C and sample size $n_2$ is expressed by

$R(t_2)^{\,n_2} = 1 - C$.

Equating the two expressions through the Weibull relationship $R(t_1) = R(t_2)^{(t_1/t_2)^{\beta}}$ yields

$n_1 = \dfrac{n_2}{(t_1/t_2)^{\beta}}$.
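A sketch of the resulting sample-size tradeoff under the stated assumptions (zero failures, known Weibull slope β); the numbers are illustrative:

```python
import math

def extended_test_sample_size(n_bogey, t_bogey, t_ext, beta):
    """Number of units to test (with zero failures) to the extended time
    t_ext that is equivalent to n_bogey units tested to the bogey t_bogey,
    assuming a known Weibull slope beta: n1 = n2 / (t1 / t2)**beta."""
    return math.ceil(n_bogey / (t_ext / t_bogey) ** beta)

# 22 units at a 1000-h bogey (R90C90) versus fewer units tested twice as long
print(extended_test_sample_size(22, 1000.0, 2000.0, beta=2.0))  # -> 6
```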
The Step-Stress Accelerated Life Test Method
In general, the test-to-failure method provides the life distribution of a product, but the test time to failure is usually long. The binomial test method is a good testing method for a pass/fail criterion at a bogey, but it requires a larger sample size and does not reveal the life distribution. The extended testing method allows additional test time to be traded for a smaller sample size requirement, provided the Weibull slope of the life distribution is given.
However, the extended testing method cannot provide the life distribution of a product. There is therefore a need for a test method, called the step-stress accelerated life testing (SSALT) method, which can provide the life distribution of a product and can also accelerate the test time to failure. This method of accelerated life testing is a good way to obtain the time-to-failure distribution in a relatively short amount of time. However, the SSALT method needs to be used with caution because there is a likelihood that the failure mode may change due to the higher stress amplitude level. The test is carried out by testing the product at a derived stress level for a fixed amount of time. At the end of that time, if there are surviving products, the stress level is increased in a stepwise fashion and held for another period of time. This process is repeated until all of the test parts have failed. The cumulative exposure model [1] has been widely used to determine the reliability of the test products under SSALT. The model assumes that the cumulative life distribution at any constant stress amplitude level follows a two-parameter Weibull distribution and that the Weibull shape parameter is the same at each stress level. Since 1990, many extensions of Nelson's cumulative exposure model [10][11][12][13][14] have been developed.
Presented below is an introduction to the Nelson cumulative exposure model. A step-stress accelerated test can be described mathematically in terms of the stress levels and the hold time at each level. The concept of Nelson's cumulative exposure model is illustrated schematically in Figure 2.
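As a hedged illustration of the cumulative exposure idea, assuming a common Weibull shape β and a known scale η at each stress level, the scaled exposure d/η accumulates across the steps, so the fraction failed after any step can be computed in a few lines. This is a minimal sketch of the concept only, not the full parameter-estimation procedure discussed next.

```python
import math

def cem_cdf(steps, beta):
    """Cumulative fraction failed under Nelson's cumulative exposure model
    for a step-stress test. `steps` is a list of (duration, eta) pairs,
    where eta is the Weibull scale at that stress level and the shape
    beta is common to all levels."""
    u = sum(d / eta for d, eta in steps)   # accumulated scaled exposure
    return 1.0 - math.exp(-(u ** beta))

# 500 h at a stress where eta = 5000 h, then 200 h where eta = 1000 h
print(cem_cdf([(500.0, 5000.0), (200.0, 1000.0)], beta=1.5))
```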
The three parameters involved in Nelson's cumulative exposure model can be estimated by the median rank regression method, given an S-N curve for the material relating the stress amplitude to the cycles to failure.
RELIABILITY ASSESSMENT METHODS FOR REPAIRABLE SYSTEMS
The reliability growth test planning and management strategy assumes that, as the product design matures, potential failure modes for the product will be identified through controlled testing in a series of phases. The corrective fixes for some or all of the identified failure modes are then implemented to reduce the likelihood that the revised product design will fail. The implementation of the corrective actions determines the reliability growth management strategy. There are three basic approaches that affect the analysis and decision-making process:

Test-Fix-Test: Fixes are implemented during the test after the failure modes and the corrective actions have been identified. In this case, the test may be stopped until the corrective action is implemented, and reliability growth is tracked within the given test phase.

Test-Find-Test: Failure modes are identified but the fixes are not implemented until after the completion of the test phase. In this case, reliability growth is tracked after the completion of a given test phase, and the improved product design will be in place for the beginning of the next test phase.

Test-Fix-Test with Delayed Fixes: Some fixes are implemented during the test while other corrective actions are delayed until the completion of the test phase. In this case, reliability growth is tracked both during and after the completion of the test phase.
The ultimate goal of these approaches is to ensure that the data are captured from the first test phase through subsequent test phases until reliability goals have been achieved and the product can be released.
This section describes the reliability assessment techniques for repairable systems based on the test-fix-test strategy. Assuming that the failures are not necessarily independent or identically distributed, one may use the nonhomogeneous Poisson process to assess the reliability of the system, describing failure or incident events in a continuum such as time or mileage. Two popular nonhomogeneous Poisson process models, known as reliability growth models, are discussed: the Duane model [15] and the Crow-AMSAA model [16][17][18]. Please note that the test-find-test and the test-fix-find-test methods can be found in Reference [19].
The Duane Model
The first reliability growth model was developed in 1964 by James T. Duane [15], a GE reliability engineer who observed that the cumulative failure rate C(t) is linear and decreasing with the cumulative operating time t on a log-log plot, as illustrated in Figure 3. This relationship is known as the "Duane postulate". If N(t) is the cumulative number of failures up to time t, the cumulative failure rate C(t) by time t is defined as

$C(t) = N(t)/t$.

Based on Duane's postulate, the following empirical power-law function holds:

$C(t) = \lambda t^{-\alpha}$,  (78)

where $\lambda$ and $\alpha$ are the two positive curve-fitting parameters.
Taking logarithms of Equation (78) yields

$\log C(t) = \log \lambda - \alpha \log t$.

Thus, the value of $\alpha$ is the magnitude of the slope of the line between C(t) and t when plotted in log-log coordinates. Typically, the value of $\alpha$ for vehicle reliability tests ranges from 0.2 to 0.4 [20].
The Duane postulate can be expressed in terms of the cumulative mean time between failures M(t) as

$M(t) = \dfrac{1}{C(t)} = \dfrac{t^{\alpha}}{\lambda}$.  (80)

Generally, the cumulative mean time between failures (MTBF) expression is preferred because the upward slope reflects reliability improvement. The positive slope $\alpha$ is sometimes referred to as the "growth rate". Figure 4 shows the cumulative MTBF versus test time. The instantaneous failure intensity or rate c(t) at time t is the derivative of the cumulative number of failures N(t) with respect to time t; it is also called the recurrence failure rate. Therefore,

$c(t) = \dfrac{dN(t)}{dt} = (1-\alpha)\,\lambda t^{-\alpha}$,

and the instantaneous MTBF m(t) is given by

$m(t) = \dfrac{1}{c(t)}$.  (84)

Assuming reliability growth occurs, the instantaneous MTBF will be greater than the cumulative MTBF because improvements have been introduced. Comparing Equations (80) and (84), one can determine the following relationship:

$m(t) = \dfrac{M(t)}{1-\alpha}$.

Finally, on the basis of an assumed exponential failure distribution, one can predict the instantaneous reliability of a system on test as $R(t) = \exp[-c(t)\,t]$. There is also a need to set the target reliability growth curve of a system in order to quantify the current reliability achievement at any time during a design development stage and to predict the likely achievement by the end of the program. Here is an example to demonstrate how to set the target MTBF curve. Consider a test program for a specific design stage with 4000 hours of test time and a target instantaneous MTBF of 500 hours. Based on the assumption that the growth rate equals 0.4, the target cumulative and instantaneous MTBF curves can be established, as illustrated in Figure 6.
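A minimal least-squares sketch of the Duane fit for a single system with recorded cumulative failure times; the times are illustrative, not data from [15] or [20].

```python
import numpy as np

def fit_duane(failure_times):
    """Least-squares fit of the Duane postulate C(t) = lambda * t**(-alpha)
    on a log-log scale, from the cumulative failure times of one system."""
    t = np.asarray(failure_times, dtype=float)
    C = np.arange(1, len(t) + 1) / t             # cumulative failure rate N(t)/t
    slope, intercept = np.polyfit(np.log10(t), np.log10(C), 1)
    return -slope, 10.0 ** intercept             # alpha, lambda

t_fail = [35.0, 96.0, 190.0, 320.0, 512.0, 790.0, 1200.0]
alpha, lam = fit_duane(t_fail)
T = t_fail[-1]
mtbf_cum = T ** alpha / lam                      # M(T) = 1/C(T)
mtbf_inst = mtbf_cum / (1.0 - alpha)             # m(T) = M(T)/(1 - alpha)
print(alpha, mtbf_cum, mtbf_inst)
```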
The Crow-AMSAA Model
Larry H. Crow [16][17][18] noted that the Duane model could be statistically represented as a non-homogeneous Poisson process (NHPP) model with a Weibull failure intensity function. This statistical extension became what is known as the Crow-AMSAA model, which was first employed by the U.S. Army Materiel Systems Analysis Activity (AMSAA). It allows statistical procedures, including a goodness-of-fit test, to be applied when using this model for reliability growth. The Crow-AMSAA model is designed for tracking the reliability within a test phase, not across test phases.
Crow assumed that the instantaneous failure intensity c(t) can be approximated by the Weibull hazard function,

$c(t) = \dfrac{\beta}{\eta}\left(\dfrac{t}{\eta}\right)^{\beta-1}$,

where $\eta$ is the Weibull scale parameter and $\beta$ is the shape parameter. Letting $\lambda = 1/\eta^{\beta}$, the instantaneous failure intensity at time t can be rewritten as

$c(t) = \lambda \beta t^{\beta-1}$.

Therefore, the expected (average) number of failures N(t) within the test interval [0, t] is given by

$N(t) = \int_0^t c(u)\,du = \lambda t^{\beta}$.

The Crow-AMSAA model is a probabilistic interpretation of the Duane postulate. Duane observed that the cumulative number of failures for a system at total operating time t can be approximated by $\lambda t^{\beta}$. The Crow-AMSAA model, however, assumes that the actual number of failures observed up to the total operating time t is a random variable described by a Weibull process, where the average number of failures by time t is $\lambda t^{\beta}$.
The probability that the number of failures equals n in a time interval [0, t] is given by the following Poisson distribution:

$P[N(t) = n] = \dfrac{(\lambda t^{\beta})^{n}}{n!}\, e^{-\lambda t^{\beta}}$.

This can be used for the development and use of statistical procedures for reliability growth assessments, so the Crow-AMSAA model is referred to as a non-homogeneous Poisson process with mean $\lambda t^{\beta}$. For $\beta = 1$, the instantaneous failure intensity c(t) is a constant, a sign of static reliability, and the number of failures follows a homogeneous (constant-rate) Poisson process with mean $\lambda T$. The period where a system exhibits constant failure intensity is often called the "useful life" of a system, corresponding to the horizontal part of the bathtub curve. For $\beta > 1$, the instantaneous failure intensity c(t) is increasing, a sign of system reliability deterioration; this is characteristic of the wear-out portion of the bathtub curve. For $\beta < 1$, the instantaneous failure intensity c(t) is decreasing, an indication of system reliability growth, as occurs in the infant mortality portion of the bathtub curve.
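For reference, the Poisson probability above can be evaluated directly; the parameter values are illustrative.

```python
from math import exp, factorial

def crow_amsaa_prob(n, t, lam, beta):
    """P[N(t) = n] for the Crow-AMSAA NHPP with mean lam * t**beta."""
    m = lam * t ** beta
    return m ** n / factorial(n) * exp(-m)

print(crow_amsaa_prob(3, 100.0, lam=0.05, beta=0.8))
```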
Point estimation using the maximum likelihood method (MLE)
The procedures described here are to be used to analyze data from tests that are terminated at a predetermined time. The parameters $\lambda$ and $\beta$ in the instantaneous failure intensity function can be determined by the maximum likelihood method as follows. Let k be the total number of systems of interest. For the q-th system, $q = 1, 2, \ldots, k$, let

$T_q$ = ending (or current) time of the q-th system,
$N_q$ = total number of failures experienced by the q-th system,
$t_{i,q}$ = system time of the q-th system at the i-th occurrence of failure, $i = 1, 2, \ldots, N_q$.
The successive failure times thus satisfy $0 < t_{1,q} < t_{2,q} < \cdots < t_{N_q,q} \le T_q$. The maximum likelihood estimates of $\lambda$ and $\beta$ are the values $\hat{\lambda}$ and $\hat{\beta}$ derived by Crow [16][17] and Lu and Rudy [18].
The probability density of the i-th event at $t_{i,q}$, given that the (i−1)-th event survived to $t_{i-1,q}$, for the q-th system over the duration $[0, T_q]$ is

$f(t_{i,q}) = \lambda \beta t_{i,q}^{\beta-1} \exp\!\left[-\lambda\left(t_{i,q}^{\beta} - t_{i-1,q}^{\beta}\right)\right]$,

and the likelihood function of the q-th system is

$L_q = \left(\prod_{i=1}^{N_q} \lambda \beta t_{i,q}^{\beta-1}\right) \exp\!\left(-\lambda T_q^{\beta}\right)$.

Thus, the likelihood function for all systems can be expressed as $L = \prod_{q=1}^{k} L_q$. Taking the natural log of both sides, the log-likelihood function becomes

$\Lambda = \sum_{q=1}^{k}\left[N_q \ln(\lambda\beta) + (\beta-1)\sum_{i=1}^{N_q} \ln t_{i,q} - \lambda T_q^{\beta}\right]$.

Differentiating the log-likelihood with respect to $\lambda$ and setting the result equal to zero yields the estimate of $\lambda$:

$\hat{\lambda} = \dfrac{\sum_{q=1}^{k} N_q}{\sum_{q=1}^{k} T_q^{\hat{\beta}}}$.  (100)

Differentiating the log-likelihood with respect to $\beta$ and setting the result equal to zero gives the estimate of $\beta$:

$\hat{\beta} = \dfrac{\sum_{q=1}^{k} N_q}{\hat{\lambda}\sum_{q=1}^{k} T_q^{\hat{\beta}} \ln T_q - \sum_{q=1}^{k}\sum_{i=1}^{N_q} \ln t_{i,q}}$.

With $\hat{\lambda}$ and $\hat{\beta}$, one may also estimate the expected number of failures at system time t, $\hat{N}(t) = \hat{\lambda} t^{\hat{\beta}}$; the instantaneous failure intensity at system time t, $\hat{c}(t) = \hat{\lambda}\hat{\beta} t^{\hat{\beta}-1}$; and the reliability of a system operating another interval d without failure at system time t,

$R(d \mid t) = \exp\!\left[-\hat{\lambda}\left((t+d)^{\hat{\beta}} - t^{\hat{\beta}}\right)\right]$.
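For the common single-system, time-terminated case, the estimators reduce to the closed form $\hat{\beta} = N / \sum_i \ln(T/t_i)$ and $\hat{\lambda} = N / T^{\hat{\beta}}$. A minimal sketch with illustrative failure times (not data from [16][17][18]):

```python
import numpy as np

def crow_amsaa_mle(failure_times, T):
    """Closed-form MLE for a single system observed on [0, T]
    (time-terminated test)."""
    t = np.asarray(failure_times, dtype=float)
    N = len(t)
    beta = N / np.sum(np.log(T / t))
    lam = N / T ** beta
    return lam, beta

t_fail = [35.0, 96.0, 190.0, 320.0, 512.0, 790.0, 1200.0]
lam, beta = crow_amsaa_mle(t_fail, T=1500.0)
c_inst = lam * beta * 1500.0 ** (beta - 1.0)   # instantaneous intensity at T
print(lam, beta, 1.0 / c_inst)                 # achieved MTBF at end of test
```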
Confidence bounds estimation using the Fisher matrix
Given a function $G(\lambda, \beta)$ of the two parameters of a statistical distribution, its estimated mean and variance can be approximated, respectively, by

$\hat{G} = G(\hat{\lambda}, \hat{\beta})$,

$\mathrm{Var}(\hat{G}) \approx \left(\dfrac{\partial G}{\partial \lambda}\right)^{2} \mathrm{Var}(\hat{\lambda}) + \left(\dfrac{\partial G}{\partial \beta}\right)^{2} \mathrm{Var}(\hat{\beta}) + 2\,\dfrac{\partial G}{\partial \lambda}\,\dfrac{\partial G}{\partial \beta}\,\mathrm{Cov}(\hat{\lambda}, \hat{\beta})$.

It is assumed that the statistical function $\hat{G}$, evaluated at the parameters estimated by the maximum likelihood method (MLE), follows a lognormal distribution (i.e., ln(λ) and ln(β) follow a normal distribution). Then, with the calculated mean and variance of the function, the approximate two-sided confidence bounds on the function can be written as

$\mathrm{CB} = \hat{G}\,\exp\!\left[\pm z_{1-\alpha/2}\,\sqrt{\mathrm{Var}(\hat{G})}\,/\,\hat{G}\right]$.

The variance and covariance of the parameters are determined with the Fisher information matrix,

$F = \begin{bmatrix} -\dfrac{\partial^{2}\Lambda}{\partial\lambda^{2}} & -\dfrac{\partial^{2}\Lambda}{\partial\lambda\,\partial\beta} \\[4pt] -\dfrac{\partial^{2}\Lambda}{\partial\beta\,\partial\lambda} & -\dfrac{\partial^{2}\Lambda}{\partial\beta^{2}} \end{bmatrix}$,

where $\Lambda$ is the natural log-likelihood function, and the covariance matrix of $(\hat{\lambda}, \hat{\beta})$ is $F^{-1}$. It should be noted that $\dfrac{\partial^{2}\Lambda}{\partial\lambda\,\partial\beta} = \dfrac{\partial^{2}\Lambda}{\partial\beta\,\partial\lambda}$.
The approximate two-sided confidence bounds for the instantaneous failure intensity at time t with a confidence level $1-\alpha$ are given by

$\mathrm{CB} = \hat{c}(t)\,\exp\!\left[\pm z_{1-\alpha/2}\,\sqrt{\mathrm{Var}(\hat{c}(t))}\,/\,\hat{c}(t)\right]$.  (121)
Two-sided confidence bounds for the cumulative failure intensity at t
The mean and variance of the cumulative failure intensity at time t are calculated from $\hat{C}(t) = \hat{\lambda} t^{\hat{\beta}-1}$ and the delta-method expression above. The approximate two-sided confidence bounds for the cumulative failure intensity at t with a confidence level $1-\alpha$ are given by

$\mathrm{CB} = \hat{C}(t)\,\exp\!\left[\pm z_{1-\alpha/2}\,\sqrt{\mathrm{Var}(\hat{C}(t))}\,/\,\hat{C}(t)\right]$.  (126)
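A sketch combining the Fisher matrix, the delta method, and Equation (121) for a single time-terminated system, assuming the reconstructed log-likelihood above; replacing the gradient with that of $\hat{C}(t) = \hat{\lambda} t^{\hat{\beta}-1}$ yields the bounds of Equation (126).

```python
import numpy as np
from scipy.stats import norm

def intensity_bounds(failure_times, T, t, conf=0.90):
    """Two-sided Fisher-matrix bounds on the instantaneous failure intensity
    c(t) = lam * beta * t**(beta - 1) for a single time-terminated system,
    using the delta method and the lognormal assumption of Equation (121)."""
    ti = np.asarray(failure_times, dtype=float)
    N = len(ti)
    beta = N / np.sum(np.log(T / ti))              # single-system MLE
    lam = N / T ** beta
    # Observed Fisher information (negative Hessian of the log-likelihood)
    F = np.array([[N / lam ** 2, T ** beta * np.log(T)],
                  [T ** beta * np.log(T),
                   N / beta ** 2 + lam * T ** beta * np.log(T) ** 2]])
    cov = np.linalg.inv(F)                         # Var/Cov of (lam_hat, beta_hat)
    c = lam * beta * t ** (beta - 1.0)
    grad = np.array([beta * t ** (beta - 1.0),
                     lam * t ** (beta - 1.0) * (1.0 + beta * np.log(t))])
    var_c = grad @ cov @ grad                      # delta method
    z = norm.ppf(1.0 - (1.0 - conf) / 2.0)
    half = np.exp(z * np.sqrt(var_c) / c)
    return c / half, c * half

print(intensity_bounds([35.0, 96.0, 190.0, 320.0, 512.0, 790.0, 1200.0],
                       T=1500.0, t=1500.0))
```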
Non-Repairable Systems
The test-to-failure method (the Weibull analysis method)

The test-to-failure method presented in Section 2.1 is one of the most common reliability demonstration methods. It has the advantage of providing the product life distribution and the failure modes under realistic loading conditions. Point estimation methods such as Median Rank Regression (MRR) and Maximum Likelihood Estimation (MLE) produce point estimates of the Weibull parameters. Depending on the sample size and the presence or absence of suspensions, one can use several methods to estimate confidence intervals on the parameters, such as the likelihood ratio method, the Fisher matrix method, and the Monte Carlo pivotal statistics method.
It must be noted that the likelihood ratio method is best used for medium to large samples with few or no suspensions. The Fisher matrix method is recommended for medium to large samples as well; it can also easily be used to estimate confidence bounds on the fatigue life/time as well as on the reliability. The Monte Carlo pivotal statistics method is suggested for small samples with at least 3 failures. The test-to-failure method does present accuracy problems when there are none or very few failures observed in testing.
The Weibull analysis of reliability data with few or no failures
The Nelson model presented in Section 2.2 is very useful in reliability demonstration for trading off sample size, reliability, statistical confidence bounds, and total test time. The Weibull slope should be known before the testing; it can be estimated from historical data, experience, or engineering knowledge. All products on test must complete the planned test time without failure in order to successfully demonstrate the reliability/confidence target.
The attribute test method
The attribute test method is a binomial test method which can be interpreted as "success-failure", "go-no go", or "acceptable-not acceptable" testing. It is typically used to verify minimum reliability levels for new products prior to production release. The test requires multiple test samples, does not allow any failures to occur during testing, and does not disclose the failure modes or variability.
The extended life test method
The extended life test method is derived from the Weibull distribution and the success-run theorem of the binomial distribution. It assumes that there are no failures in the sample set during testing and that an estimate of the Weibull slope is known. This is a useful method involving a trade-off between sample size and test time, but like the attribute test method, the extended life test method does not reveal the failure modes or the life distribution.
The step-stress accelerated life test method
This is an accelerated test method to obtain the time-to-failure distribution in a relatively short time, but it should be used with caution because there is a likelihood that the failure mode may change due to the higher stress amplitude level.
Repairable Systems
The reliability growth model allows one to predict the reliability of a repairable system by analyzing failure data in the development phase of the system. If a failure can occur at any instant, it may occur more than once in a given time interval. In such a case, the number of failures in any time interval follows a Poisson process. If the instantaneous recurrence rate is not constant, the Poisson process is called a nonhomogeneous Poisson process (NHPP); otherwise, it is a homogeneous Poisson process. The nonhomogeneous Poisson process can be applied to the infant mortality and wear-out portions of the bathtub curve in a product development life cycle, and the homogeneous Poisson process to the design-life portion. The Duane and Crow-AMSAA models are two popular generic nonhomogeneous Poisson process models that together cover all three portions of the bathtub product development life curve. Both models were presented in Sections 3.1 and 3.2.
"year": 2015,
"sha1": "dc684cf92001f7e6ff33ba2d3ad2dec919982c05",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.proeng.2015.12.621",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d773aead7c06598e531603dd3cbdd1f5491ca0d9",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
To study the Ayurveda perspective of Covid 19
The Covid 19 pandemic has truly tested the patience of mankind. Ayurveda describes epidemic diseases, their causes, pathophysiology, and treatment, under the headings of 'marak' or 'janpadoddhvansa' disease. Adharma, the pollution of water, air, earth, food, and medicine, and seasonal variations are its major causes. The concept of immunity is important in this respect. This article tries to elaborate the disease Covid 19 with the help of Ayurveda.
Introduction
Since November 2019, the world has been facing the entirely new experience of the Corona pandemic. Many doctors, health professionals, nurses, police officers, sanitation workers, and civilians lost their lives in the initial phase. The WHO, research workers, and doctors have tried to understand this disease, but every few days they come across new facts. I decided to understand this disease, 'corona', with the help of Ayurveda. In this article, I discuss what Ayurveda thinks about such a pandemic situation and what kinds of measures we can take against Corona. Ayurveda, being an ancient science, definitely guides us in fighting this fatal disease. In my clinic, Shree panchakarma and beauty clinic, I have treated some patients for corona prevention and treatment, but the data are too limited for any firm conclusion. In this article, I discuss some facts about corona.
Materials and Methods
Data were methodically collected from the classical texts of Ayurveda as well as from related pharmaco-clinical research articles and dissertations published to date, using PubMed and manual searches of bibliographies as sources.
Today, the world is facing the Covid 19 pandemic, and some nations are now facing the 3rd or 4th wave of the Covid outbreak. Covid 19 is an illness caused by the novel coronavirus 2, now called severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Epidemic diseases are described in Ayurveda as 'Janpadoddhavansa'. A large population, irrespective of bala (strength), food habits, behavior, or psychological state, affected by the same disease at the same time, which may destroy the community, is called a 'janapadoddhvasa' disease. 'Janpad' means a large population or community. 'Uddhavasa' means to be destroyed. Janapadodhwansa therefore denotes a situation involving the destruction or death of a large population spread over a small locality, a country, or a part of the world.
Nidan-Hetu of Covid 19
Sushruta explained epidemic diseases well. In the 6th chapter of Sutrasthan, Sushruta narrated 'vyapanna rutu' (vitiated seasons) as a cause of epidemic diseases. Nowadays, we are facing many changes in climate and 'rutuviparyay', meaning conditions opposite to those of the particular season; this is happening all over the world. Sushruta tells us that these changes in the seasons occur due to 'adharma'. 'Dharma' means the right things: the disciplines, rules, and commitments that we are supposed to follow while living in a civil society. When we do not follow these, it is called 'Adharma'. Nowadays our lifestyle and selfishness have reduced the importance of dharma, and everyone is living for their own happiness instead of caring for Mother Nature. As per Ayurveda, this may be the reason for the seasonal variation. Seasonal variation leads to the vitiation of water, herbs, and medicines, which leads to outbreaks of epidemics. The term 'marak', meaning epidemic, was introduced by Sushruta. 1 The Atharvaveda also explains the spread of disease by two types of worms, one that we can see with the eyes and another that we cannot see. The worms are mixed with mountains, forests, food products, animals, and liquids, and ultimately enter the body through food, water, and wounds. To treat this with earth, water, fire, and sun, some Mantras are explained.
Bahujana sadharana hetu 2-4 (common etiological factors pertaining to the entire community) are water, air, desha (land), and kala (seasonal variation). As per the texts, these are common etiological factors for all communicable diseases; of these, air and kala are responsible for Covid 19.
A] Evitable factors: Adharma (violation of stipulated behavior); Prajnyapradha (wrong behavior); Shastraprabhava (wars, weapons); curses; the smell of poisonous flowers; Bhutasanghata (pathogens, uncleanliness); sexual contact with an affected person; physical touch, close breathing, or sharing meals, beds, benches, ornaments, clothes, etc. with an affected person; 1 low immunity.

B] Inevitable disastrous factors: abnormal variation in seasonal cycles; air vitiation; cosmic changes.
Symptoms
Asymptomatic: people with good immunity may not have any symptoms.
Mild symptoms: patients have non-specific symptoms of the upper respiratory tract such as cough, cold, sore throat, nasal congestion, fever, malaise, and headache. They do not have signs of dehydration, sepsis, or shortness of breath. Moderate symptoms: high-grade fever, dyspnea, anosmia, nausea, abdominal discomfort, low oxygen saturation, and pulmonary ground-glass opacities on lung CT with a score of more than 10.
Fatal symptoms: very low oxygen saturation, inability to breathe, increased inflammatory markers, cytokine storm, multiple organ involvement, cardiac symptoms, etc.
As per Ayurveda, 'vyadhi-kshamatva' plays an important role in protecting us from these epidemic diseases. 6 Vyadhikshamatva is the body's capacity to inhibit a disease from manifesting and increasing its symptoms. Chakrapani, the commentator on Charaka, describes the concept of vyadhikshamatva as the body's natural response that prevents the contact of disease-producing factors with the body and also inhibits the entry and progress of disease in the body. Charaka explains that this vyadhikshamatva is not equal for all individuals. Obese people, people with bad food and behavior habits, and malnourished, weak, or physically disturbed people have less immunity. Vyadhikshamatva is directly dependent upon the strength of the individual. Ayurveda describes parameters for the examination of strength, such as body proportion, muscle strength, muscle tone, the strength of the sense organs and functioning organs, the state of the mind, the capacity to tolerate thirst, hunger, sun, and wind exposure, the capacity to work and exercise, the capacity of digestion, etc. [7][8][9] Vyadhikshamatva is of 3 types: sahaj, meaning present from birth; kalaj, developing over a period of time (the young have good vyadhikshamatva compared with children and the elderly, and in the winter season we have good strength); and third, adaptive vyadhikshamatva, meaning that with specific lifestyle changes and the use of medicines we can increase our vyadhikshamatva. 2 If vyadhikshamatva is good, then even if a patient is exposed to the virus, he may not suffer from the disease, or he may be symptomless or have only minimal symptoms.
# Panchakarma treatments: body purification is done with panchakarma to avoid the accumulation of dosha in the body, thereby preventing disease. Vamana and nasya play a major role in preventing Covid 19.
# A light, unctuous, warm, freshly cooked diet fortified with spices like black pepper, long pepper, cardamom, cinnamon, ajwain, cumin, garlic, tulsi, ginger, and turmeric is preferred. A change in diet is not recommended; the food to which a person has been habituated since birth is nourishing for that particular person and is therefore recommended. 8 # Drink lukewarm water, medicated water (shadangodak: musta, parpat, usheer (vetiver), chandan (sandalwood), and dry ginger), or water mixed with honey.
# Steam inhalation: helps to prevent the accumulation of phlegm in the upper respiratory tract.
# Neti: among the six purification methods of Yoga, neti is a very easy and effective method to clean the upper respiratory tract. In Pune, at Dinanath Mangeshkar hospital, all 600 doctors serving Covid 19 patients regularly performed neti, and not a single doctor was infected with corona.
# Community sanitization: spraying a decoction made from nimb, nirgudi, shigru, curry leaves, and nilgiri, with alum, camphor, and cow's urine added, is effective for community sanitization.
Discussion
Ayurveda explains epidemic diseases well. Epidemic diseases were described in detail by Sushruta, and the causes and treatment of epidemic disease have been addressed since the Vedic age. We can prevent these pandemic diseases by following the behavioral regimen of 'sadvritta palan' and swasthvritta measures such as the daily regimen, night regimen, and seasonal regimen. Adharma is the major cause of the epidemics called 'marak', meaning deadly diseases. Seasonal variations and non-seasonal changes in climate occur mainly due to man's selfishness and his failure to care for Mother Nature. Abhyang, steam inhalation, nasya, neti, exercise, good food habits, good behavior habits, panchakarma purification, and rasayana consumption are key factors for increasing 'vyadhikshamatva'. We have to prevent Covid 19 infection through personal hygiene, cleanliness, and sanitization, and by avoiding person-to-person contact. We have to pay attention to increasing our 'bala' (strength) with the help of Rasayana therapy, good food and behavior habits, and mental peace. Home and air sanitization can be done with dhupan of nimb, nilgiri, tulsi, black pepper, etc. Covid 19 is a sannipatic jwara in which all doshas are vitiated, so we should treat the patient carefully.
Conclusion
Covid 19 is a pandemic disease. In Ayurveda, epidemic diseases are described under the heading of 'sankramak vyadhi'. Such a disease may be fatal and is called 'Marak', meaning a deadly disease; the etiology, causes, pathophysiology, symptoms, and treatment of common epidemic diseases are explained in Ayurveda. 'Swasthvritta' explains the way to remain healthy. Educating people about swasthvritta measures and advising them to follow the daily regimen, the seasonal regimen, exercise, good food and behavior habits, yoga, pranayama, yogic purification methods, and good morals will play a major role in preventing epidemic diseases. Seasonal panchakarma and Rasayana administration will also play a major role in preventing epidemic diseases. Cleanliness, niyam, dhupan, dhumpan, and following nature are some of the key factors.
Conflict of Interest
None.
"year": 2021,
"sha1": "2af5ca570178e4f40410fe5815368083eaf67eba",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.jpmhh.org/journal-article-file/13173",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2af5ca570178e4f40410fe5815368083eaf67eba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"History"
]
} |
Correlation Analysis and Prediction of the Physical and Mechanical Properties of Coastal Soft Soil in the Jiangdong New District, Haikou, China
Introduction
Estimating the mechanical parameters for civil construction projects based on the measured physical parameters of the soil is crucial for proposing appropriate design parameters, establishing scientific and reasonable calculation models, selecting foundation pits with favorable safety and economic adaptability, and determining the related support modes [1]. Moreover, it can help to avoid tedious, time-consuming, and expensive laboratory measurements while also reducing construction time and cost. Therefore, it is of significant theoretical and practical importance to collect mathematical statistics on the physical and mechanical properties of regional and representative strata and to establish a predictive model for them [2][3][4].
Many scholars have conducted a great deal of research on the correlation between the physical and mechanical indexes of geomaterials and achieved substantial results. As early as the 1950s, Chinese researchers systematically summarized the correlations of soft soil's shear strength indexes with the void ratio and plasticity index, which have played an important role in engineering construction in Shanghai [5]. From the related engineering geological data of Shanghai, several sets of practical correlations between physical and mechanical properties were derived [6]. Bai et al. [7] investigated the effect of the plasticity index on the compression deformation parameters of saturated soft clay and derived linear fitting relations of the compression index, swelling index, and secondary consolidation coefficient with the plasticity index. Tian et al. [8] focused on clay soil in Beijing and compiled statistics on its physical and mechanical indexes. Li et al. [9] analyzed the relations of the internal friction angle of soft soil in the southern part of Kunming with various physical indexes. Xiaoliang et al. [10] investigated the relations of soil's compression indexes with the number of loading or unloading cycles and the saturation. Jing et al. [11] derived the correlations of the variations of the internal friction angle and cohesive force with physical parameters including the plasticity and liquidity indexes. Xuchang et al. [12] preliminarily investigated the correlation between the physical and mechanical performance indexes of soil in Yangzhou. Linping et al. [13] investigated the correlation between the physical and mechanical indexes of clay soil in the Binhai new district of Tianjin. Xianwei et al. [14] presented a relevance and correlation analysis of the physical and mechanical indexes of Zhanjiang clay.
The above studies extensively analyze the correlation between the physical and mechanical properties of soil, which greatly simplifies geotechnical engineering analysis. However, most of these studies mainly focus on analyzing the correlation of individual factors. Considering multiple factors simultaneously and obtaining accurate correlations are challenging due to the complexity and uncertainty of the soil [15].
Machine learning methods like artificial neural networks (ANN), support vector machines, and random forests have gained significant attention in geotechnical engineering due to their ability to efficiently and accurately map highly nonlinear problems [16, 17]. Zhang et al. [18] developed a nonparametric ensemble artificial intelligence approach to calculate the Es of soft clay; the mean squared error and correlation coefficient of the model applied to the testing set were 0.13 and 0.91. Pham et al. [19] investigated and compared the performance of four machine learning methods, particle swarm optimization-adaptive network based fuzzy inference system (PANFIS), genetic algorithm-adaptive network based fuzzy inference system (GANFIS), support vector regression, and ANN, for predicting the strength of soft soils, and concluded that, out of the four models, PANFIS is a promising technique for the prediction of the strength of soft soils [19]. Taffese and Abegaz [20] used machine learning techniques to predict the compaction and strength properties of amended soil. Li et al. [21] compared the performance of random forest regression and artificial neural networks, two commonly used machine learning methods, for predicting soil properties and found that the random forest regression method generally yielded smaller prediction errors [21].
Previous studies on machine learning mainly focus on predicting individual indices such as compression parameters or strength parameters; the literature lacks machine learning models that can comprehensively predict both compression and shear strength indices. Moreover, these studies have not yet analyzed the prediction of compression and shear strength indices specifically for the soft soils of Haikou City, Hainan Province. Jiangdong new district in Haikou, as a pilot demonstration region of the Hainan free trade port, has begun a great number of engineering plans and constructions at present, and the construction scale of various projects will be continuously expanded in the future. In order to better exploit and develop underground space, reasonably save engineering construction cost, shorten the engineering construction period, and accumulate rich regional empirical parameters, this study focused on the fourth member sedimentary soil of the Tertiary Haikou Formation (with the lithology of cohesive soil and extensive distribution in the Jiangdong district of Haikou) and conducted geotechnical tests for statistical analysis. The main contributions of this study can be summarized as follows.
(1) A dataset was prepared based on investigation reports in Haikou. (2) The correlations among the physical indexes, compressibility indexes, and shear strength indexes of this stratum were studied by means of mathematical statistical analysis. (3) A random forest regression algorithm from machine learning was used to develop a model that can predict the soil's compression and shear strength indicators. (4) The predictive performance of the ML method was compared with the engineering measured data to evaluate the model accuracy.

The project is a subproject of the Comprehensive Survey of Urban Geology in Jiangdong New District, Haikou, organized and implemented by the Hainan Provincial Bureau. According to the regional geological data of the Jiangdong New Area, the rock and soil mass that overlies the sedimentary soil layer in the fourth member of the Tertiary Haikou Formation is predominantly composed of quaternary sea-land alternating sedimentary soil. This layer is known to be problematic for engineering purposes, as it consists mostly of severely liquefied sand and seismic soft soil. The sedimentary soil in the fourth member of the Tertiary Haikou Formation serves as the primary pile-end bearing layer for regional engineering construction, as well as the main layer for the development and utilization of underground space. The sampling was conducted to a depth of 100 m, with a total of 182 boreholes drilled.
Sampling and Test Statistics
2.2. Test Method. The limit moisture contents were measured with a cone penetrometer and by rolling. The moisture content at which a 76 g cone sank by 10 mm was taken as the liquid limit, while the moisture content at which fractures appeared as the soil thread was rolled to 3 mm was taken as the plastic limit. The difference between the liquid limit and the plastic limit is defined as the plasticity index. The compressibility indexes were measured with the standard consolidation test (at a pressure of 100-200 kPa). The consolidated quick direct shear test was performed according to the Standard for Geotechnical Testing Method (GB/T 50123-1999).
Sampling Method and Statistical Analysis.
During the field drilling process, rotary drilling with a mud protection wall was adopted. Rock samples were collected with a core barrel (φ91 mm) while soil samples were collected with a single-action triple tube sampler. In this study, 279 sedimentary soil samples from the fourth member of the Tertiary Haikou Formation were collected for statistics. The soil samples were mainly cohesive soil (silty clay or clay). Considering current engineering applications and the distribution range and possible depth of the soil layer, the sampling depth was controlled within 100 m (ranging from 7.50 to 97.00 m).
According to the parameter statistical method as described in Code for Investigation of Geotechnical Engineering (GB50021-2001, the 2019 Edition), the statistics of basic physical and mechanical parameters of this layer of soil were obtained and listed in Table 1.
Apparently, the variation coefficients of the physical indexes, including soil density, moisture content, and void ratio, were all smaller than 0.300, suggesting that the division of the soil layer is reasonable. The variation coefficients of the compressibility and shear strength indexes were mostly larger than 0.300. This is mainly because partial disturbance or stress release may occur in the samples after preparation, leading to great variations in the sample parameters.
Overall, the test data of the samples were quite reliable and practical. It was feasible to perform correlation analysis on these data for regional design suggestions and empirical calculations.
Overall Analysis of Correlation of Soil's Parameters
The engineering characteristics of soil in soil mechanics are mainly directly reflected by its physical and mechanical indexes. Therefore, the statistical analysis and summary of the measured indexes of the same stratum in a region is of great practical significance for the accumulation of regional geological experience and engineering practice experience.
Based on previous statistical analysis experience with geotechnical data, the correlations among soil's physical and mechanical indexes can generally be described by linear models [22]. This study adopted least-squares linear fitting and unary linear regression for the analysis. First, the correlations among the various indexes of the collected soil samples were judged. An overall correlation analysis between the soil sampling position and the various test indexes was performed, and the results are shown in Table 2, with the indexes ranked from weak to strong correlation. The detailed parameters were then analyzed.
Based on the above statistics of correlation coefficients and significance test results, the following conclusions can be drawn.
(1) Apart from moderate correlations with wet density, moisture content, and void ratio, the sampling depth was weakly correlated with the consistency, plasticity index, compressibility, and shear strength indexes.

(2) Wet density and moisture content were highly correlated with the void ratio, satisfying the basic conversion relations among the three-phase indexes. Wet density, moisture content, and void ratio were moderately-strongly correlated with the compressibility and shear strength indexes.

(3) The compressibility indexes, including the modulus of compression and the compression coefficient, showed opposite correlations with all parameters. This is consistent with the definitions of soil's compressibility indexes: a greater modulus of compression suggests a stronger deformation resistance, which corresponds to a smaller compression coefficient.

(4) The shear strength indexes showed consistent positive/negative correlations with the other parameters. They exhibited moderate-strong negative correlations with the moisture content and void ratio, as well as weak-moderate negative correlations with the liquidity and plasticity indexes.

(5) The compressibility indexes were overall moderately-strongly correlated with the shear strength indexes. More favorable compressibility indexes suggest higher shear strength indexes and better engineering geological properties of the soil.
Based on the above preliminary analysis results, the correlations of soil's three-phase measured indexes with the sampling depth, shear strength, and compressibility indexes were analyzed in depth. In addition, a high-pressure consolidation test was performed on 120 soil samples to explore the correlation between the preconsolidation pressure and the foundation's bearing capacity.
Correlation between Soil's Three-Phase Indexes and Sampling Depth
The correlations among the measured wet density, moisture content, and void ratio and the sampling depths of all soil samples were investigated; the statistical scatter diagrams are shown in Figures 1-4. Overall, the wet density was in direct proportion to the sampling depth, while the moisture content and void ratio were inversely proportional to the sampling depth. From a preliminary analysis, the soil samples were collected from old clay and can be regarded as normally consolidated to overconsolidated soil. The mechanical indexes, such as the plasticity, compressibility, and shear strength indexes, were relatively stable. As the sampling depth and overlying soil pressure increased, the natural density increased gradually while the moisture content and void ratio dropped gradually. In terms of the negative/positive correlations, as the sampling depth increased, both the compressibility and shear strength indexes improved. This also conforms to the sedimentary rule of the underconsolidated-consolidated-overconsolidated transition of soil.
Meanwhile, it can be observed from the scatter diagrams that the void ratio and moisture content showed identical variation rules overall. Figures 3 and 4 show the scatter diagram of the correlation between the void ratio and the moisture content. A linear correlation can be observed, suggesting that the pores in the soil were almost entirely filled by water, with almost no gas-filled voids. The results fit well with the measured saturation values, which ranged from 79 to 100 with a mean value of 93.93. The soil can therefore be judged as saturated soil.
In terms of the consolidation process, when soil is consolidated to a certain degree, the pores and voids are almost fully compressed or filled by bound water. Accordingly, both the moisture content and the void ratio become gradually fixed. As shown in Figures 1-3, for the soil samples with a sampling depth of less than 50 m, the correlations of wet density, moisture content, and void ratio with the sampling depth were higher than those for the samples deeper than 50 m. After eliminating the samples deeper than 50 m, linear fitting was performed on the correlations of wet density, moisture content, and void ratio with the sampling depth; the fitting formulas and correlation coefficients are listed in Figures 5-7.
Based on the above analysis results, for the soil samples from the fourth member of the Tertiary Haikou Formation in Jiangdong new district, Haikou, the empirical relations describing the correlations of wet density, moisture content, and void ratio with the sampling depth can be summarized as follows:

(1) When h ≤ 50 m, ρ, ω, and e vary linearly with h according to the fitting formulas given in Figures 5-7. When h > 50 m, ρ, ω, and e can be directly set to 1.969, 28.5, and 0.787, respectively; these values are basically coincident with the statistical averages of 1.930, 27.6, and 0.784 for the 48 sample groups deeper than 50 m.

(2) The empirical relation between the void ratio and the moisture content is given by the linear fitting formula shown in Figure 4.
Correlations of Void Ratio with Soil's Mechanical and Deformation Indexes
According to the previous results, for the soil samples collected from this layer, the void ratio is moderately-strongly correlated with the compressibility and shear strength indexes; the detailed statistics are shown in Figures 8-11.
Generally, under additional stress, free-state groundwater can be discharged from the pores of the soil on account of its fluidity, thereby leading to volume reduction and inducing compression [23, 24]. This accounts for soil compressibility, and the resulting deformation is referred to as consolidation. For ordinary foundations, the settlement and deformation are always designed by considering both the compression modulus and the compression coefficient [25].
Soil's shear strength refers to the soil's capability to resist shear failure and equals the shear stress on the sliding surface when shear failure occurs in the soil. Certainly, whether the soil reaches the shear failure state not only depends on the soil properties but is also closely correlated with the applied stress combination [26]. Therefore, the indexes should be selected in combination with the actual engineering conditions (mainly, the drainage condition). According to the three-phase measured results, the collected soil samples were saturated soil and the water in the soil was mostly bound water, with poor drainage. The quick direct shear indexes in this study are therefore of great significance for practical applications [27].
However, in actual production, compressibility and shear tests are always time-consuming, and sample collection is difficult. At the preliminary engineering design phase, the compressibility and shear strength indexes can be reasonably derived in combination with the burial depth and void ratio, for further estimation of the foundation settlement and stability. This is quite significant for the design of engineering exploration schemes and foundations.
It can be observed from Figures 8-11 that, for the fourth member sedimentary soil of the Haikou Formation from Jiangdong new district, Haikou, the empirical formulas for the compressibility indexes and the shear strength indexes (direct shear test) in terms of the void ratio are given by the fitting relations in the corresponding figures.

A soil layer in the natural world has undergone an ever-changing consolidation history over a long geological period; the soil has endured a maximum pressure and reached a certain degree of consolidation. This maximum pressure is exactly the above-mentioned preconsolidation pressure. Considering that overconsolidated soil samples were collected, 120 samples with measured preconsolidation pressures and consolidation indexes were selected for statistics, in order to gain a better understanding of the soil's sedimentary history, estimate the foundation settlement, evaluate the characteristic value of the foundation bearing capacity, and propose reasonable and economical foundation schemes. The correlation between the preconsolidation pressure and the sampling depth, as well as the correlations of the void ratio with the preconsolidation pressure and the consolidation index, were analyzed; the results are shown in Figures 12-14.
As shown in Figure 12, the preconsolidation pressure was quite weakly correlated with the sampling depth, indicating that the soil was subjected to low external forces during its sedimentary history and remained in a relatively stable state. The statistics of P_c can be described as follows: a range from 178.5 to 2003.7 kPa, a mean value of 1074.4 kPa, and a variation coefficient of 0.408.
Generally, a high-pressure consolidation test is performed to measure the preconsolidation pressure and the compressibility indexes; the test process and parameter calculation are quite time-consuming. It can be observed from Figures 13 and 14 that the consolidation index was well correlated with the void ratio and moderately correlated with the preconsolidation pressure. The corresponding empirical fitting relations, given in Figures 13 and 14, can be used at the beginning or preliminary stage of engineering construction.
Correlation between Shear Strength and Compressibility Indexes
The compressibility index reflects soil's consolidation-induced deformation, while the shear strength index reflects soil's shear-induced deformation [28, 29]. Under the deformation induced by both consolidation and shear, the pores in the soil are compressed, water flows out, and soil particles move [30, 31]. Despite the different deformation mechanisms, these two sets of parameters are correlated to a certain degree, as listed in Table 2. The internal correlation should be analyzed, as it can also be used for cross-verification in parameter design at the early stage of engineering construction. Figures 15-18 display the statistical results. It can be observed that the linear correlation coefficients of the modulus of compression with the internal friction angle and the cohesive force were 0.5317 and 0.7888, respectively, suggesting moderate-strong correlation; the correlations of the compression coefficient with the internal friction angle and the cohesive force can be reasonably fitted by power functions, with correlation coefficients of 0.6489 and 0.8152, respectively, suggesting strong correlation. Overall, a strong correlation between the compressibility and shear strength indexes can be observed.
The main reasons can be described below.
(1) The soil samples can be regarded as saturated clay soil. During the consolidation-induced deformation process, the pores were almost entirely filled by water, with quite small voids. Water discharge under squeezing played a dominant role in the consolidation deformation under compression.

(2) Soil particles were extruded during the soil's vertical consolidation deformation; mutual shearing and friction among the particles occurred during the deformation process.

(3) The water among the soil particles was mostly bound water, with strong adsorption to the soil particles; great shear friction arises during the water drainage process.

(4) Soil compression on the macroscopic level corresponds to shear deformation among the soil particles on the microscopic level. Accordingly, the compressibility and shear strength indexes show strong correlation.
In actual engineering applications, when the measured indexes show great differences owing to severe disturbance during sample collection, the indexes can be cross-validated against the empirical fitting relations given in Figures 15-18.
Predict Soft Soil Parameters Using Random Forest Regression
Based on the analysis presented above, it is evident that the soil's three-phase indexes and other factors are correlated with its compression and shear strength indicators. The previous sections have quantitatively examined these relationships. However, Figures 1-18 show a significant amount of dispersion in the measured data. Consequently, the quantitative relationship curves for the various parameters fall short of accurately representing the data, and estimating the compression and strength properties of soils based only on a single metric and a simple linear fit is problematic. The deformation and strength indicators of soil exhibit complex nonlinear relationships with the soil properties. Thus, there is a need to comprehensively characterize the relationships between the soil properties and its deformation and strength indicators. Given the exceptional performance of machine learning in fitting nonlinear complex relationships, the random forest algorithm was chosen as a regression method suited to the high-dimensional soil parameter data. The training process of random forest involves "randomness" and "ensemble" effects, enabling it to capture the randomness and diversity of the soil parameters. The specific modeling process (Figure 19) is described below.

(1) Preparing the dataset. Based on the results of Sections 3-6 on the correlation of soil parameters, the following parameters are selected as inputs for the model: depth of sampling, wet density, moisture content, void ratio, plasticity index, and liquidity index. The output variables for prediction are the compression modulus, compression coefficient, cohesion, and internal friction angle. Overall, a total of 185 data points were assembled for analysis. The dataset was then divided into a training set, comprising 70% of the data, to facilitate model training, and a test set, accounting for 30% of the data, to assess the model's generalization ability.

(2) The bootstrap sampling method is used to randomly select samples with replacement from the original data set collected from the engineering project site. This method aims to reduce the sample dependency of the data set, thereby improving the robustness of the model. In each sampling, a subset of features is randomly selected; by controlling the number and types of features, it is possible to effectively reduce the complexity of the model and avoid overfitting.

(3) A decision tree model is developed from the chosen samples and features using the CART algorithm. CART builds a binary tree structure recursively by partitioning the input space into subsets based on the values of the input features. The tree is constructed so that each internal node represents a decision based on a feature, and each leaf node represents the output (a class label for classification or a numerical value for regression) for the corresponding subset of data. Since this study focuses on regression analysis, the decision tree construction process employs the Gini coefficient as the split criterion to identify the most suitable feature for node splitting; the Gini impurity measures the likelihood of misclassification, i.e., for a given node, the probability of misclassifying a randomly chosen element if it were randomly labeled according to the distribution of classes in the node. Consequently, each subtree is able to effectively capture the regression characteristics inherent in the data.

(4) Bootstrap sampling is used to construct subsequent decision trees until the predefined number of trees is reached (set at 500 for this study). During the construction of each tree, the previously created trees are combined through ensembling, and their averaged predictions are taken as the final prediction result.
(5) The model's performance is evaluated using the mean squared error (MSE) between the actual data and the model's predictions. The formula for MSE is given by MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)², where n is the total number of samples, y_i is the measured result, and ŷ_i is the predicted result of the model. After training, the model's prediction accuracy for compression modulus, compression coefficient, cohesion, and internal friction angle is shown in Table 3. Table 3 illustrates that the MSE values of the CART-based random forest regression model for predicting each parameter are all below 0.1. This implies that the model fits each parameter considerably better than the correlation fitting curves discussed earlier in the section.
The test set data are fed into the trained model to calculate the predicted output. These predicted values are then compared with the measured values; the results of this comparison can be found in Figures 20-23. The figures reveal a strong consistency between the predicted and measured values of compression modulus, compression coefficient, cohesion, and angle of internal friction. This consistency indicates that the model possesses a certain degree of generalization capability, allowing it to be applied in predicting the parameters of soft soil in the fourth layer of the third series of the Haikou Formation.
Conclusions
This study focused on the fourth member sedimentary soil of the Haikou Formation collected from Jiangdong new district, Haikou, and conducted soil tests on 279 samples. Through analysis, fitting formulas were derived describing the correlations of the three-phase indexes with sampling depth, of the void ratio with the compressibility indexes, shear strength indexes, and preconsolidation pressure, and of the compressibility indexes with the shear strength indexes. A random forest model was established to synthesize the above parameters, realizing the prediction of the compression and shear strength indexes. The present research results can serve as a reference for an in-depth understanding of basic physical and mechanical parameters, the design of exploration schemes and foundations, and the calculation of settlement deformation. The main conclusions are described below.
(1) At a sampling depth below 50 m, the three-phase measured indexes (wet density, moisture content, and void ratio) of the soil samples were well correlated with the sampling depth; as the sampling depth exceeded 50 m, the three-phase measured indexes were almost fixed.
(2) Overall, the soil's void ratio correlated well with the compressibility and shear strength indexes. As the void ratio decreased, the compression modulus increased, the compression coefficient dropped, and both cohesive force and internal friction angle increased. A smaller void ratio was indicative of greater preconsolidation pressure and a smaller consolidation index.
(3) Among the correlations between compressibility and shear strength indexes, the compression modulus was linearly correlated with cohesive force and internal friction angle, while the correlations of the compression coefficient with cohesive force and internal friction angle can be described by power functions.
(4) The data were first subjected to correlation analysis for random forest model parameter selection. Subsequently, the random forest model was developed to predict the compressibility and shear strength indexes of the soft soil. The model demonstrated a high level of accuracy in predicting the indexes and exhibited excellent generalization ability. The research outcomes are particularly helpful in the planning and initial design stages of soft soil projects, saving time and cost.
In summary, the machine learning algorithm based on random forest regression can predict the bearing capacity and deformation parameters of coastal soft soil well. However, because the causes and environments of soils differ, this model can only be applied to the Jiangdong New District of Haikou. In the future, a large amount of supporting data will be needed to obtain a machine learning model with a wider range of applicability.
2.1. Engineering Situation and Data Source. The present geotechnical data were sourced from the project Investigation and Evaluation of Underground Space Development and Utilization Potential for Jiangdong New District, Haikou.
TABLE 1: Statistics of physical and mechanical indexes of the fourth member sedimentary soil of the tertiary Haikou Formation.
TABLE 2: Statistics of the Pearson's correlation coefficient among the samples' measured parameters (sampling depth, wet density, moisture content, porosity ratio, plasticity index, and liquidity index). Calculated correlation coefficients of over 0.8, 0.6-0.8, 0.4-0.6, 0.2-0.4, and below 0.2 are indicative of high correlation, strong correlation, moderate correlation, weak correlation, and extremely weak or no correlation, respectively.
FIGURE 1: Scatter plot of wet density and sampling depth.
FIGURE 3: Scatter plot of void ratio and sampling depth.
FIGURE 4: Scatter plot of void ratio and water content.
FIGURE 5: Scatter plot of wet density and sampling depth.
FIGURE 6: Scatter plot of water content and sampling depth.
FIGURE 7: Scatter plot of void ratio and sampling depth.
FIGURE 8: Scatter plot of void ratio and compression modulus.
FIGURE 10: Scatter plot of void ratio and cohesion.
FIGURE 11: Scatter diagram of void ratio and internal friction angle.
FIGURE 12: Scatter diagram of preconsolidation pressure and sampling depth.
FIGURE 15: Scatter plot of cohesion and compression modulus.
FIGURE 16: Scatter plot of cohesion and compression coefficient.
FIGURE 17: Scatter diagram of internal friction angle and compression modulus.
FIGURE 18: Scatter diagram of internal friction angle and compression coefficient.
FIGURE 20: Comparison of predicted and measured values of compression modulus.
FIGURE 21: Comparison of predicted and measured values of compression coefficient.
FIGURE 22: Comparison of predicted and measured values of cohesion.
FIGURE 23: Comparison of predicted and measured values of internal friction angle.
TABLE 3: MSE between the measured values and the predicted values of the random forest regression models. | 2024-01-06T16:12:11.942Z | 2024-01-03T00:00:00.000 | {
"year": 2024,
"sha1": "a7186bf8af54b8170bf6cad468b593eb519a1987",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ace/2024/9985210.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e5632993f5d4864e90b343d0a80fb5160cd22d51",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Geology"
],
"extfieldsofstudy": []
} |
210946949 | pes2o/s2orc | v3-fos-license | Academic Performance, Communication, and Psychosocial Development of Prelingual Deaf Children with Cochlear Implants in Mainstream Schools
Background and Objectives To assess the academic performance, communication skills, and psychosocial development of prelingual deaf children with cochlear implants (CIs) attending mainstream schools, and to evaluate the impact of auditory speech perception on their classroom performance. Subjects and Methods As participants, 67 children with CI attending mainstream schools were included. A survey was conducted using a structured questionnaire on academic performance in the native language, second language, mathematics, social studies, science, art, communication skills, self-esteem, and social relations. Additionally, auditory and speech performances on the last follow-up were reviewed retrospectively. Results Most implanted children attending mainstream school appeared to have positive self-esteem and confidence, and had little difficulty in conversing in a quiet classroom. Also, more than half of the implanted children (38/67) scored above average in general academic achievement. However, academic achievement in the second language (English), social studies, and science was usually poorer than general academic achievement. Furthermore, nearly half of the implanted children had difficulty in understanding the class content (30/67) or conversing with peers in a noisy classroom (32/67). These difficulties were significantly associated with poor speech perception. Conclusions Improving the listening environment for implanted children attending mainstream schools is necessary.
Introduction
Profound deafness in childhood affects the development of auditory speech perception, speech production, and language skills. Failure to develop adequate oral communication skills can have a significant negative effect on academic performance and psychosocial development in these children. A cochlear implant (CI) is an electronic prosthetic device that is surgically placed in the inner ear to provide useful sound perception by electrically stimulating the auditory nerve. It has been proved to be an effective management device for children with bilaterally severe-to-profound sensorineural hearing loss receiving limited benefit from hearing aids (HAs). Many studies have demonstrated the benefits of CIs concerning speech perception, production, and language development due to the enhanced audition provided by CIs [1][2][3][4][5]. Thus, the academic performance and psychosocial development of children with CIs is expected to be better than their non-implanted peers.
Over the past two to three decades, there has been a significant trend towards placing students with special educational needs in mainstream schools rather than in segregated special schools and special classes [6]. Besides, the proportion of children with CIs in mainstream schools has been steadily increasing [7][8][9]. Thus, classroom performance and social integration are some of the major challenges faced by children with CIs in mainstream schools. However, only a few studies have addressed the academic performance and psychosocial development of these children in mainstream schools, and have reported mixed results. Some studies reported that children with CIs in mainstream schools had satisfactory academic achievement [10,11], while others reported that their academic performance lagged behind their normal-hearing peers [12,13]. Information on the subject-specific academic performance of these children is also lacking [13,14]. As with academic performance, wide variability in psychosocial development has been reported. Some studies reported positive findings regarding the social well-being and social functioning of children with CIs in mainstream schools [12,15,16] while others revealed ongoing difficulties in social participation with normal-hearing peers [17][18][19].
For successful social integration of children with CIs in inclusive educational settings, the overall classroom performance of children with CIs in mainstream schools, and the factors affecting this performance need to be determined. Thus, our study aimed to assess the academic performance, communication skills, and psychosocial development of prelingual deaf children with CIs attending mainstream schools, and to evaluate the impact of auditory speech perception on their classroom performance. Given the lack of evaluation tools, we used structured questionnaires to examine academic performance in several subjects, communication skills, self-esteem, and the social relations of implanted children. We also reviewed medical speech evaluation records retrospectively.
Participants
A prospective cohort study of prelingual deaf children with CIs was performed from August to September 2015. All children underwent CI before the age of 5 years at Samsung Medical Center and had at least 5 years of experience with the CI. Of the 149 children, 72 agreed to participate in the in-person or telephone interview along with their parents. Of the 72 children, 67 children attending mainstream schools were enrolled. Participants included 30 boys and 37 girls with an age range of 6 to 17 years [mean±standard deviation (SD)=10±3 years] at the time of the study. Before implantation, CI candidates underwent auditory brainstem response and auditory steady-state response testing to predict their hearing thresholds. All the children had bilateral profound sensorineural hearing loss (>90 dB) across the speech frequency range and showed limited benefit from HAs in best-aided conditions. Fifty-one children were implanted with the Nucleus device (Cochlear Corporation, Lane Cove, New South Wales, Australia), and 16 children were implanted with the Clarion device (Advance Bionics Corporation, Sylmar, CA, USA). First devices were implanted at an age ranging from 13 months to 4 years 10 months (mean±SD=31±13 months). The duration of implant usage ranged from 5 to 13 years (mean±SD=8±2 years).
Ethics statement
All the participants were recruited and assessed in the Hearing Laboratory at the Samsung Medical Center. Written informed consent for participation was obtained from each participant. The Institutional Review Board of Samsung Medical Center (IRB No. 2015-08-126-007) approved this study. All the study protocols were performed following the relevant guidelines and regulations.
Auditory and speech performance evaluation
Auditory and speech performances on the last follow-up at the Samsung Medical Center, at least 3 years after the implantation, were reviewed retrospectively. Aided pure-tone thresholds were measured in the sound field using warble tones presented from a loudspeaker located one meter away from the children at 0 degrees azimuth by an experienced audiologist [20]. The auditory perception was evaluated by the categories of auditory performance (CAP) scores. CAP comprises 8 hierarchical divisions of auditory perceptive ability: 0, the lowest level, describes no awareness of environmental sounds; and 7, the highest level, describes the use of a telephone with a familiar speaker. Speech perception in both auditory-only (AO) and audiovisual conditions was evaluated using consonant-vowel-consonant monosyllabic and bi-syllabic words. Stimuli were presented using a monitored live voice of an experienced speech therapist. The presentation level was approximately 70 dB SPL, as determined with a handheld sound level meter. Children were instructed to repeat each presented stimulus. Speech acquisition and production were assessed using the speech production scale (SPC) and a modification of Ling's 7 distinct stages [21,22] based on normative data of Korean phoneme development. The SPC assesses phonetic (articulation) and phonologic (meaningful speech) acquisition levels at each stage in young children with CI. The 'staircase' diagram of SPC is illustrated in Supplementary Material 1 (in the online-only Data Supplement).
Questionnaires
A survey of academic performance, communication skills, and psychosocial development was conducted through in-person or telephone interviews. A structured questionnaire was designed for children and their parents. Background information about device strategies (unilateral, bilateral, and bimodal CI users) and usage, communication modes (oral and/or sign language), and school type was evaluated. Five categories of school enrollment were recognized: ordinary classes in mainstream schools, special classes in mainstream schools, special schools, schools for the deaf, and vocational schools. Grade retention, defined as the need to repeat a year due to failing grades, was also assessed. Academic achievement was assessed using quartiles and included the survey of overall academic performance, language (Korean and English), mathematics, social studies, science, and artistic performance (art and music). Communication development was assessed using closed questions on various situations at school and at home. Lastly, self-esteem and social relations were assessed by inquiring about friends and future hopes. An English translation of the questionnaire is provided in Supplementary Material 2 (in the online-only Data Supplement).
Statistical analysis
Results were analyzed using SPSS 18.0 (SPSS Inc., Chicago, IL, USA). Clinical information (age, sex, side of first CI, age at first CI, and duration of first CI use) and the results of auditory and speech performance evaluations were compared between the three listening modes using the Kruskal-Wallis and the chi-square tests. In case of a significant difference between the three listening modes, a post-hoc analysis using the Mann-Whitney test or chi-square test was performed to evaluate the differences between two different listening modes (i.e., unilateral CI users vs. bimodal listeners, unilateral CI users vs. bilateral CI users, and bimodal listeners vs. bilateral CI users). Chi-square test or Fisher's exact test was used to compare the distribution of responses to items on academic achievement, communication skills, and psychosocial development according to the three listening modes. To evaluate the impact of auditory speech perception ability on classroom performance, a multivariate logistic model was used to compare the bi-syllabic word recognition scores (WRSs) in AO condition and classroom performance after adjusting for the current age, sex, and listening modes. The Bonferroni correction was applied to the p-value because of multiple testing. Adjusted p-values were calculated by multiplying the original p-values by the number of comparisons (3 for listening modes). A p-value or adjusted p-value of less than 0.05 was considered statistically significant.
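The statistical workflow described here can be sketched in a few lines of Python; this is an illustration of the methods, not the authors' SPSS analysis, and the data file and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import kruskal, mannwhitneyu

df = pd.read_csv("ci_children.csv")  # hypothetical survey/performance data

# Kruskal-Wallis test of CAP scores across the three listening modes.
groups = [g["cap_score"].values for _, g in df.groupby("listening_mode")]
h_stat, p = kruskal(*groups)

# Post-hoc pairwise Mann-Whitney tests with Bonferroni correction:
# multiply each p-value by the 3 comparisons and cap at 1.
if p < 0.05:
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        _, p_pair = mannwhitneyu(groups[i], groups[j])
        print(f"pair {i}-{j}: adjusted p = {min(p_pair * 3, 1.0):.3f}")

# Logistic regression of poor classroom performance on the bi-syllabic
# WRS, adjusted for current age, sex, and listening mode.
model = smf.logit("poor_performance ~ wrs_bisyllabic + age"
                  " + C(sex) + C(listening_mode)", data=df).fit()
print(model.summary())
```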
Demographics
Among 67 children with CIs, 17 children (25%) wore only one CI (unilateral CI users), 29 children (43%) were experienced bimodal listeners with a CI in one ear and an HA in the other (HA+CI), and 21 children (31%) received two implants in sequential procedures (bilateral CI users, CI+CI) with the interval between the first and second implantations ranging from 1-8 years (mean±SD interval=3±2 years). All the children reportedly wore their device(s) more than 70% of the day and used oral communication. Supplementary Table 1 (in the online-only Data Supplement) includes the clinical and device information of each implanted child. Table 1 presents the comparison of the clinical information between the three listening modes (unilateral CI user, bimodal listener, and bilateral CI user). No significant difference was found in age and first CI information (side of first CI, age at first CI, and duration of first CI use) between the three listening modes. However, the sex ratio was significantly different between the three listening modes (p=0.035). Table 1 also presents the comparisons of the auditory and speech performances between the three listening modes. No significant difference was found in aided pure-tone thresholds of the first device between the three listening modes. Supplementary Table 2 (in the online-only Data Supplement) includes the individual aided pure-tone thresholds at 250, 500 Hz, 1, 2, and 4 kHz. Most children showed good performance in speech evaluation 3 years after the implantation. However, there were significant differences in CAP scores (p=0.006), WRSs (p<0.05 for monosyllabic and bi-syllabic), and SPSs (p=0.001 for phonetic level and p=0.002 for phonologic level) between the three different modes. Overall, the bimodal listeners and bilateral CI users showed significantly superior auditory perception (CAP scores and WRSs) and speech production compared with unilateral CI users in post-hoc analysis (Table 1). There were no significant differences in auditory perception and speech production between bimodal listeners and bilateral CI users in the post-hoc analysis.
Academic performance
Of the total 67 implanted children, 61 and 6 children were enrolled in ordinary classes with normal-hearing peers and a special class in a mainstream school, respectively. Grade retentions were reportedly experienced by 12 children (18%) one or more times. Table 2 shows participants' clinical information and speech performance. Eight of these children were unilateral CI users, 1 child was a bimodal listener, and 3 were bilateral CI users. Two children had Noonan syndrome, 1 child had congenital heart disease, and 1 child had enlarged vestibular aqueduct syndrome. Fig. 1A shows the academic performance of the 67 implanted children attending mainstream schools in each subject.
When the academic achievement was assessed using quartiles, 28, 10, 17, and 12 implanted children had general academic achievement in the 1-25% (top 25%), 26-50%, 51-75%, and 76-100% (bottom 25%) of their class, respectively. Table 3 shows communication skills at school and at home, and psychosocial development of the 67 implanted children attending mainstream schools. Over half (55%) of the children understood all or almost all of the teaching content (Q1 at school). In a quiet classroom, most implanted children (79%) reported little or no difficulty during one-on-one conversations (Q2 at school). Also, most implanted children (88%) experienced little or no stress conversing with an unfamiliar person at school (Q4 at school). However, nearly half of the implanted children (48%) reported some difficulty during individual conversations in a noisy classroom (Q3 at school). Difficulty in communicating during class (Q1 at school) or in a quiet classroom (Q2 at school) differed significantly depending on the listening mode. In post-hoc analysis with Fisher's exact test, bilateral CI users understood the class content better than bimodal listeners (adjusted p=0.037) and were more comfortable with individual conversations in a quiet classroom than unilateral CI users (adjusted p=0.02).
Communication skills and psychosocial development
Implanted children generally experienced more difficulty communicating at school than at home. At home, 91% and 76% of implanted children reported little or no difficulty communicating in a quiet and noisy background, respectively (Q1 and Q2 at home). However, only 34% of the implanted children were able to understand all or almost all the words of their family while concentrating on other things (Q3 at home). No significant difference was found in communication skills at home between the three listening modes.
A survey of psychosocial development revealed that most implanted children (96%) had a friend with normal hearing (Q1), and 96% of implanted children were unafraid of conversation (the response option "I try to avoid the conversation as much as possible" was rarely chosen).
The impact of auditory speech performance on classroom performance
Bivariate logistic regression models were constructed to examine the relationship between bi-syllabic WRS in AO condition and classroom performance (Table 4). Classroom performance (academic achievement and communication at school) was divided into good and poor performance based on the questionnaire responses. Good performers were defined as children who scored above average (within the top 50%) on the academic achievement questionnaires, or who chose the top 1 or 2 response options on the communication skills questionnaires. The remaining children were defined as poor performers. A multivariate logistic regression model was constructed while adjusting for potential confounders (current age, sex, and listening modes). In the crude model, high scores on the bi-syllabic WRSs were significantly associated with a decrease in the odds of having poor academic performance in all subjects except art and music (all p<0.05). However, academic achievement in art and music was not significantly associated with the bi-syllabic WRSs. In the adjusted model, the bi-syllabic identification scores were significantly associated with academic achievement in Korean, social studies, and science (all p<0.05). High scores on the bi-syllabic WRSs were also significantly associated with a decrease in the odds of having communication difficulties at school (all p<0.05 in the adjusted model).
Discussion
In this study, most implanted children (61 of 67) attended ordinary classes with normal-hearing peers in a mainstream school. In Korea, students requiring special education are usually placed in mainstream schools rather than in separate special schools and special classes. Previous studies have emphasized the following principal benefits of inclusive education of implanted children: naturalistic access to typical linguistic and behavioral models of hearing peers, and social acceptance by hearing peers [6,23,24]. Children with CIs in mainstream schools reported having friends with mostly normal hearing (Q1 in Table 3). This result reflects their social interactions with peers during school life. Thus, the inclusive education of implanted children in mainstream schools appears to provide opportunities for contact and social interactions with normal peers. Based on the responses to the psychosocial development questions (self-esteem and social relations section in Table 3), most implanted children in mainstream schools seemed to have a positive attitude towards self-esteem and confidence.
In a mainstream education setting, half the implanted children scored above average in general academic achievement (Fig. 1A). Consistent with a previous study [13], academic achievement was best in mathematics. However, academic achievement in a second language (English) and social studies was usually poorer than general academic achievement (Fig. 1A). Particularly, our cohort reported difficulty in listening comprehension and pronunciation in English. When interviewed, the children also had difficulty understanding abstract concepts in social studies and science. Children with lower scores on the bi-syllabic identification test were significantly more likely to have poor academic performance in social studies and science (Table 4). This may be because these subjects are often taught with complex verbal explanations. Children with CI have been reported to experience delays in concept formation [25]. Therefore, it is necessary to provide these children with various supplementary materials or pre-teach them the subject-relevant vocabulary so that the new information can be integrated meaningfully into the prior knowledge framework.
Most implanted children had little difficulty in conversing in a quiet background at school or home (Table 3). However, communication at school was considerably more challenging than at home in a noisy background. In a noisy background, 24% of the implanted children reported having some difficulty communicating at home, while nearly half of the implanted children (48%) had some difficulty conversing at school. Also, nearly half of the implanted children (45%) had difficulty in understanding the class. Although children with CIs can perceive speech, they may have difficulty in understanding during group or outdoor activities. Therefore, providing a good listening environment by considering the seating arrangements or using hearing assistive technologies such as personal FM systems for some children with poor speech perception is necessary for improving classroom performance.
This study had several limitations. First, the academic performance outcomes are mostly derived from parents' opinions rather than from scores on validated measures directly testing the children's outcomes. Careful observation of implanted children's classroom performance is required in both national and school-based achievement testing for tracking studies. Second, this study included a heterogeneous group of children in terms of age, implantation age, CI device characteristics, presence or absence of additional disabilities, and the range of received educational supports. This study did not consider the CI device characteristics or educational supports. However, when adjusted for current age, sex, and listening modes, this study demonstrated that speech performance was significantly associated with academic performance. Educating children with CIs in mainstream schools appears to positively impact self-esteem and confidence in social integration in the future. However, 12 of the 67 implanted children attending mainstream schools experienced one or more grade retention episodes. The academic performance of CI children in mainstream schools was mostly satisfactory; however, they scored below average in some subjects. Furthermore, they reported some difficulty in understanding the class content and holding a conversation in a noisy background. Most of these difficulties were associated with poor speech perception. Therefore, improving the listening environment for implanted children in mainstream schools is necessary.
"year": 2020,
"sha1": "8cd2dad68299b148be8341dadd37dc2b357977f0",
"oa_license": "CCBYNC",
"oa_url": "http://www.ejao.org/upload/pdf/jao-2019-00346.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1e2b806d959ac8541ea0b70648a801332243c0e8",
"s2fieldsofstudy": [
"Medicine",
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119276101 | pes2o/s2orc | v3-fos-license | Mapping properties of the Hilbert and Fubini--Study maps in K\"ahler geometry
Suppose that we have a compact K\"ahler manifold $X$ with a very ample line bundle $\mathcal{L}$. We prove that any positive definite hermitian form on the space $H^0 (X,\mathcal{L})$ of holomorphic sections can be written as an $L^2$-inner product with respect to an appropriate hermitian metric on $\mathcal{L}$. We apply this result to show that the Fubini--Study map, which associates a hermitian metric on $\mathcal{L}$ to a hermitian form on $H^0 (X,\mathcal{L})$, is injective.
Here H(X, L) denotes the space of positively curved hermitian metrics on L, which is infinite dimensional, and B k denotes the space of positive definite hermitian forms on H 0 (X, L k ), which is finite dimensional.
We can define the following two maps:
• the Hilbert map Hilb : H(X, L) → B k , defined by the L 2 -inner product associated with h ∈ H(X, L);
• the Fubini-Study map F S : B k → H(X, L), defined as the pullback of the Fubini-Study metric on the projective space P(H 0 (X, L k ) * ).
The result that we prove in this paper is the following.
Theorem 1.1. Hilb is surjective if L k is very ample, and F S is injective if k is sufficiently large.
Since Hilb is a map from an infinite dimensional manifold to a finite dimensional manifold, it seems natural to speculate that it is surjective. Similarly, it also seems natural to speculate that F S is injective. Indeed, these statements seem to be widely believed among the experts in the field. However, the proofs of these facts do not seem to be explicitly written in the literature previously, to the best of the author's knowledge. We shall provide the proofs of these "folklore" statements, when we take the exponent k to be large enough.
For F S, we shall in fact prove a stronger quantitative result for the injectivity (cf. Lemma 3.1), which was applied in [8] to find a point in B k that is close to the minimum of the modified balancing energy. Generalisations to several variants of the Hilb map (cf. Proposition 2.3) will also be discussed at the end of §2.
Acknowledgements
Part of this work was carried out in the framework of the Labex Archimède (ANR-11-LABX-0033) and of the A*MIDEX project (ANR-11-IDEX-0001-02), funded by the "Investissements d'Avenir" French Government programme managed by the French National Research Agency (ANR). Much of this work was carried out when the author was a PhD student at the Department of Mathematics of the University College London, which he thanks for the financial support. Theorem 1.1 and its proof form part of the author's PhD thesis submitted to the University College London. The author thanks Julien Keller and Jason Lotay for helpful discussions.
Proof of surjectivity of Hilb
Before we start the proof, we shall define the Hilb and F S maps more precisely as follows (cf. [6]).
Definition 2.1. The Hilbert map Hilb : H(X, L) → B k is defined by an L 2 -pairing (see the reconstruction below), where we write N for dim C H 0 (X, L k ) and V for ∫ X c 1 (L) n /n!.
The Fubini-Study map F S : B k → H(X, L) is defined via the pullback of the Fubini-Study metric (see the reconstruction below), where {s i } is an H-orthonormal basis for H 0 (X, L k ).
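The displayed formulas in these definitions did not survive extraction. The following is a plausible reconstruction based on the standard definitions in the literature; normalization conventions vary, so the constants should be treated as assumptions rather than as the author's exact choices:
\[
  \operatorname{Hilb}(h)(s, t) \;=\; \frac{N}{V} \int_X h^k(s, t)\, \frac{\omega_h^n}{n!},
  \qquad s, t \in H^0(X, L^k),
\]
and $FS(H)$ is the hermitian metric on $L$ whose $k$-th power satisfies
\[
  \sum_{i=1}^{N} |s_i|^2_{FS(H)^k} \;\equiv\; 1 \quad \text{on } X,
\]
for an $H$-orthonormal basis $\{s_i\}$ of $H^0(X, L^k)$.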
We prove the first part of Theorem 1.1. The main line of the proof presented below is similar to §2 in the paper by Bourguignon, Li, and Yau [5].
Proof. Since L k is very ample, we have the Kodaira embedding ι : X ↪ P(H 0 (X, L k ) * ) ≃ P N −1 . First of all pick homogeneous coordinates {Z i } on P N −1 ; all matrices appearing in what follows will be with respect to this basis {Z i }. This then defines a hermitian metric h := h F S(I) on O P N −1 (1) and the Fubini-Study metric ω F S(I) on P N −1 . Suppose that we write dµ Z for the volume form on P N −1 defined by ω F S(I) , and dµ BZ for Suppose that we write which we compactify to J by adding a topological boundary ∂J : and hence, recalling tr(Ψ 0 (B)) = 1 and writing B t for the transpose of B, we get We claim that it defines a diffeomorphism between J • and H • . It is easy to check that Ψ 0 is a smooth bijective map from J • to H • . Its linearisation δΨ 0 | B at B can be computed as is a constant multiple of Ψ 0 (B). Noting that Ψ 0 (B) is a positive definite hermitian matrix, we can show by direct computation that f B (A) cannot be a constant multiple of Ψ 0 (B) unless A is a constant multiple of B.
Thus the linearisation of Ψ 0 is nondegenerate at each point in J • , and hence Ψ 0 defines a diffeomorphism between J • and H • with a nontrivial degree at every point in H • . We also see that, using Ψ 0 (B) = Ψ 0 (αB) for α > 0, Ψ 0 extends continuously to the boundary, mapping elements of ∂J into ∂H, such that the degree of the map Ψ 0 : ∂J → ∂H is nontrivial. Now suppose that we write ι * X (dµ BZ ) for the measure induced from dµ BZ which is supported only on ι(X) ⊂ P N −1 , and consider a continuous map Ψ : We first show that Ψ extends continuously to the boundary. Recall that ι * X (dµ BZ ) is, as a measure on X, equal to ι * (ω n F S(H) /n!), and observe also proves that Ψ maps a sequence {B ν } in J • approaching ∂J to a sequence which accumulates at a point in ∂H. We can now define a 1-parameter family of continuous maps Ψ t := J → H by Ψ t (B) := tΨ(B) + (1 − t)Ψ 0 (B) (this can be viewed as using a measure tι * X (dµ ZB ) + (1 − t)dµ BZ in the integrals above). By what we have established above, Ψ t is a continuous 1-parameter family of maps between J and H which maps ∂J into ∂H. Since Ψ 0 is a diffeomorphism between J • and H • and has a nontrivial degree on the boundary and Ψ maps sequences approaching ∂J to sequences accumulating at points in ∂H, Ψ : ∂J → ∂H has a nontrivial degree. We thus see that Ψ is surjective since the degree of a continuous map is a homotopy invariant (cf. [1, Theorems 12.10 and 12.11]).
Finally, we recall that ι * X (dµ BZ ) = ι * (ω n F S(H) /n!) is equal to k n ω F S(H) /n!. Note also that, writing h k for ι * h , we have where we wrote s i := ι * Z i . Observe also that there exists β ∈ C ∞ (X, R) such that ω n F S(H) = e β ω n h . We have thus proved that, fixing a basis {s i } for H 0 (X, L k ), for any positive definite hermitian matrix G there exists a function φ ∈ C ∞ (X, R) such that We thus aim to find a function f ∈ C ∞ (X, R), such that e −f h k is positively curved and Hilb(e −f h k )(s i , s j ) = N V X e β+φ h k (s i , s j ) ω n h n! , to finally establish the claim. For this, it is sufficient to solve for f the following nonlinear PDE: which is solvable by the Aubin-Yau theorem (cf. [2] and [11,Theorem 4,p383]).
We now recall that there are several variants of the Hilb map that also appear in the literature [3,4,7,9,10]. We define the Hilb ν map, where the volume form dν is one of the following.
1. dν is a fixed volume form on X; an example of this is when X is Calabi-Yau, in which case we can use the holomorphic volume form Ω ∈ H 0 (X, K X ) to define dν := Ω ∧ Ω̄;
2. dν is anticanonical: a hermitian metric h on −K X defines a volume form dν ac (h), where we note dν ac (e −ϕ h) = e −ϕ dν ac (h);
3. dν is canonical: a hermitian metric h on K X defines a dual metric on −K X , which defines a volume form dν c (h) with dν c (e −ϕ h) = e ϕ dν c (h).
We prove the following analogue of Lemma 2.2, namely Proposition 2.3. Proof. Fixing a basis {s i } i and a hermitian metric h on L, (2) implies that for any positive definite hermitian matrix G ij there exists ϕ ∈ C ∞ (X, R) such that the corresponding identity holds. Observe that for each of the three choices dν, dν ac (h), dν c (h) of the volume form dν, there exists a function ψ ∈ C ∞ (X, R) such that ω n h /n! = e ψ dν, and hence the claim follows.
Remark 2.4. The above proof does not show that h has positive curvature; the associated curvature form ω h may not be a Kähler metric.
Proof of injectivity of F S
We establish the following "quantitative injectivity" to prove the second part of Theorem 1.1.
Lemma 3.1. Suppose that we choose k to be large enough, and that H, H ′ ∈ B k satisfy where || · || op is the operator norm, i.e. the maximum of the moduli of the eigenvalues. In particular, considering the case ǫ = 0, we see that F S is injective for all large enough k.
i |s i | 2 F S(H) k , and hence, by recalling (1), with respect to any hermitian metric h on L, by noting that we may multiply both sides of (3) by any strictly positive function e kφ . We now fix this basis {s i }, and the operator norm or the Hilbert-Schmidt norm used in this proof will all be computed with respect to this basis. We now choose N hermitian metrics h 1 , . . . , h N on L k as follows. Recall now that, by Lemma 2.2, for any N -tuple of strictly positive numbers λ = (λ 1 , . . . , λ N ) there exists φ λ ∈ C ∞ (X, R) such that the hermitian metric h ′ := exp(φ λ )h satisfies X |s i | 2 and observe that the modulus of each entry is at most 1, and that ||Λ|| op ≤ 2 and ||Λ −1 || op ≤ 2 if k is large enough. Then, multiplying both sides of (3) by exp(kφ i ) and integrating over X with respect to the measure ω n hi /n!, we get the following system of linear equations | 2018-06-26T11:31:31.000Z | 2017-05-31T00:00:00.000 | {
"year": 2020,
"sha1": "ae670a7a123209bdf3344ae64997386b0791486c",
"oa_license": null,
"oa_url": "https://afst.centre-mersenne.org/item/10.5802/afst.1635.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "ae670a7a123209bdf3344ae64997386b0791486c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119143151 | pes2o/s2orc | v3-fos-license | The Balian-Low type theorems on $L^2(\mathbb{C})$
In this paper it is shown that $\|Zg\|_2$ and $\|\bar{Z}g\|_{2}$ cannot both be simultaneously finite if the twisted Gabor frame generated by $g\in L^2(\mathbb{C})$ forms an orthonormal basis or an exact frame for $L^2(\mathbb{C})$. The operators $Z=\frac{d}{d z}+\frac{1}{2}\bar{z}$ and $\bar{Z}=\frac{d}{d \bar{z}}-\frac{1}{2}z$ are associated with the special Hermite operator $L=-\Delta_z+\frac{1}{4}|z|^2-i\left(x\frac{d}{dy}-y\frac{d}{dx}\right)$ on $\mathbb{C}$, where $\Delta_z$ is the standard Laplacian on $\mathbb{C}$ and $z=x+iy$. Also the amalgam version of BLT is proved using Weyl transform and the distinction between BLT and amalgam BLT is illustrated by examples. The twisted Zak transform is introduced and using it several versions of the Balian-Low type theorems on $L^2(\mathbb{C})$ are established.
Introduction
The Balian-Low theorem (BLT) is one of the fundamental and interesting results in time-frequency analysis. It says that a function g ∈ L 2 (R) generating a Gabor Riesz basis cannot be localized in both the time and frequency domains. Precisely, if g ∈ L 2 (R) and if a Gabor system G(g, a, b) := {e 2πimbt g(t − na)} m,n∈Z with ab = 1 forms an orthonormal basis for L 2 (R), then ∫ R |t g(t)| 2 dt · ∫ R |ξ ĝ(ξ)| 2 dξ = ∞. This result was originally stated by Balian [3] and independently by Low in [20]. The proofs given by Balian and Low each contained a technical gap, which was filled by Coifman et al. [9], who also extended the BLT to the case of Riesz bases. Battle [4] provided an elegant and entirely new proof based on the operator theory associated with the classical uncertainty principle.
For general Balian-Low type results, historical comments and variations of BLT we refer to [7,10].
Balian-Low type results have been proved for multi-window Gabor systems by Zibulski and Zeevi [30] and for superframes by Balan [2]. The BLT and its variations for symplectic lattices in higher dimensions (see [11,15]), for the symplectic form on R 2d (see [5]) and on locally compact abelian groups (see [13]) are obtained in the literature. For further results on BLT we refer to [1,6,12,16,21,22] and [29]. In this paper we establish the BLT and some of its variations on L 2 (C) using the operators Z and Z̄ associated with the special Hermite operator L on C. The Weyl transform and the twisted convolution are closely related to the Fourier transform on the Heisenberg group and play a significant role in proving our main results. Therefore we review the representation theory on the Heisenberg group to see various objects of interest arising from it.
One of the simplest and most natural examples of non-abelian, non-compact groups is the famous Heisenberg group H, which plays an important role in several branches of mathematics. The Heisenberg group H is a unimodular nilpotent Lie group whose underlying manifold is C × R and the group operation is defined by (z, t) · (w, s) = (z + w, t + s + (1/2) Im(z w̄)).
The Haar measure on H is given by dzdt.
The group Fourier transform of f ∈ L 1 (H) is defined as f̂(λ) = ∫ H f(z, t) π λ (z, t) dz dt. Note that for each λ ∈ R \ {0}, f̂(λ) is a bounded linear operator on L 2 (R). Under the operation "group convolution" L 1 (H) turns out to be a non-commutative Banach algebra. Let f λ denote the inverse Fourier transform of f in the t-variable. Therefore f̂(λ) = ∫ C f λ (z) π λ (z, 0) dz.
For f, g ∈ L 1 (C), the twisted convolution is defined by f × g(z) = ∫ C f(z − w) g(w) e (i/2) Im(z w̄) dw. Under twisted convolution L 1 (C) is a non-commutative Banach algebra. For f ∈ L 1 ∩ L 2 (C) the Weyl transform of f can be explicitly written as W(f) = ∫ C f(z) π(z) dz with π(z) := π 1 (z, 0), which maps L 1 (C) into the space of bounded operators on L 2 (R), denoted by B. The Weyl transform W (f ) is an integral operator with an explicit kernel. If f ∈ L 2 (C), then W (f ) ∈ B 2 , the space of all Hilbert-Schmidt operators on L 2 (R), and satisfies the Plancherel formula; in polarized form this holds for all f, g ∈ L 2 (C). The inversion formula for the Weyl transform is f(z) = tr(π(z) * W (f )), where π(z) * is the adjoint of π(z) and tr is the usual trace on B. For a detailed study of the Weyl transform we refer to the texts of Thangavelu [27,28].
Let H k denote the Hermite polynomial on R, defined by H k (x) = (−1) k e x² (d k /dx k ) e −x² , and h k denote the normalized Hermite functions on R defined by h k (x) = (2 k k! √π) −1/2 H k (x) e −x²/2 . Let A = −d/dx + x and A * = d/dx + x denote the creation and annihilation operators in quantum mechanics, respectively. The Hermite operator H is defined as H = −d²/dx² + x² = (1/2)(AA * + A * A). The Hermite functions {h k } are the eigenfunctions of the operator H with eigenvalues 2k + 1, k = 0, 1, 2 · · · . Using the Hermite functions, the special Hermite functions on C are defined by φ m,n (z) = (2π) −1/2 (π(z) h m , h n ), where z = x + iy ∈ C and m, n = 0, 1, 2 · · · . The functions {φ m,n : m, n = 0, 1, 2 · · · } form an orthonormal basis for L 2 (C). The special Hermite functions are the eigenfunctions of a second order elliptic operator L on C. To define this operator L we need the operators Z = d/dz + (1/2) z̄ and Z̄ = d/dz̄ − (1/2) z. The functions φ m,n are eigenfunctions of the special Hermite operator L = −∆ z + (1/4)|z|² − i(x d/dy − y d/dx) with eigenvalues (2n + 1), where ∆ z denotes the Laplacian on C. We list some of the properties (see [27,28]) of the operators Z and Z̄ in the following proposition, which will be useful at several places. (4) The adjoint Z * of Z is −Z̄.
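As a sanity check on property (4), the adjoint relation can be verified by a short worked computation via integration by parts, using only the definitions of Z and Z̄ above:
\[
  \langle Zf, g \rangle
  = \int_{\mathbb{C}} \Big( \frac{\partial f}{\partial z} + \tfrac{1}{2}\bar{z}\, f \Big)\, \overline{g}\; dz
  = \int_{\mathbb{C}} f\; \overline{\Big( -\frac{\partial g}{\partial \bar{z}} + \tfrac{1}{2} z\, g \Big)}\; dz
  = \langle f, -\bar{Z} g \rangle,
\]
so that $Z^{*} = -\bar{Z}$ on a suitable dense domain of $L^2(\mathbb{C})$.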
Our goal in this paper is to obtain the Balian-Low type theorem (BLT) and its variations on L 2 (C). The motivation to prove the BLT on L 2 (C) arises from the classical Heisenberg uncertainty principle on L 2 (R). Let P and M be the position and the momentum operators, defined on a suitable domain; one common normalization is recalled below.
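The display defining P and M did not survive extraction; the following normalization is an assumption (chosen to match the Fourier transform with kernel $e^{-2\pi i x \xi}$), together with the classical inequality it yields:
\[
  Pf(x) = x f(x), \qquad Mf(x) = \frac{1}{2\pi i} f'(x),
  \qquad
  \|Pf\|_{2}\, \|Mf\|_{2} \;\geq\; \frac{1}{4\pi}\, \|f\|_{2}^{2}
\]
for all $f$ in the Schwartz class.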
Observe that the Laplacian L 0 on R can be written in terms of these operators. The expression for the special Hermite operator L is similar to that of the Laplacian, but the operators Z and Z̄ are not self-adjoint. However, we obtain the following variation of Heisenberg's uncertainty inequality for L 2 (C).
In view of the above facts we obtain the following BLT for exact frames on L 2 (C).
Using the Plancherel formula for the Weyl transform we obtain an expression analogous to the conclusion of the classical BLT. Observe that, unlike the Fourier transforms of functions in L 1 (R), the Weyl transforms of functions in L 1 (C) are bounded operators on L 2 (R). Therefore it is a bit technical to deal with the Weyl transform and to estimate the oscillations of the twisted Zak transform on L 2 (C) in terms of ‖Zf‖ 2 and ‖Z̄f‖ 2 .
The paper is organized as follows. In section 2, we provide the necessary background for proving the BLT and discuss basic properties of frames. In section 3, we define twisted Gabor frames and the twisted Zak transform and deduce some of their properties; we also prove the amalgam BLT. A frame {f k } is exact if it ceases to be a frame when any single element f n is deleted, that is, {f k } k≠n is not a frame for any n. For any frame {f k } there exists a dual frame {f̃ k }, so that every f ∈ H has a series representation f = Σ k ⟨f, f̃ k ⟩ f k . The concepts of a Riesz basis and an exact frame coincide in a separable Hilbert space.
2.2.
Gabor frames and density. For a, b > 0, g ∈ L 2 (R d ) and n, k ∈ Z d define M bn g(x) := e 2πibnx g(x) and T ak g(x) := g(x − ak). The collection of functions G(g, a, b) = {M bn T bk g : The associated frame operator called the Gabor frame operator has the form If g ∈ L 2 (R d ) generates a Gabor frame G(g, a, b) then there exists a dual window (canonical One of the important and interesting concept in frame theory is to obtain the necessary condition on the lattice parameters a, b so that the Gabor system G(g, a, b) constitute a frame.
The algebraic structure of the lattice Λ = {(ak, bn) : k, n ∈ Z d } has been exploited to derive the necessary condition for a Gabor system G(g, a, b) to be complete, a frame or an exact frame in terms of the product ab. The following results are known for Gabor frames in one dimension case (d = 1) with a rectangular lattice Λ = aZ × bZ. In [26], Rieffel proved that the Gabor system G(g, a, b) is incomplete for any g if ab > 1. Daubechies [9] proved Rieffel's result for the case when ab is rational and exceeds one. Assuming further decay on g and ĝ, Landau [19] proved that G(g, a, b) cannot be a frame for L 2 (R) if ab > 1.
For a, b ∈ R d , g ∈ L 2 (R d ) and a lattice Λ ⊂ R 2d , Ramanathan and Steger [25] proved the incompleteness of Gabor systems that are uniformly discrete (i.e. there is a minimum distance δ between elements of Λ) in terms of the Beurling density, defined as follows. Let Λ ⊂ R d be uniformly discrete and let B be the ball of volume one in R d centered at the origin. For each r > 0, let ν + (r) and ν − (r) denote the maximum and minimum number of points of Λ that lie in any translate of rB, i.e. ν + (r) = max x∈R d #(Λ ∩ (x + rB)) and ν − (r) = min x∈R d #(Λ ∩ (x + rB)); these are finite for every r > 0. The upper and lower densities are defined by D + (Λ) = lim sup r→∞ ν + (r)/r d and D − (Λ) = lim inf r→∞ ν − (r)/r d . In [18], Landau showed that these quantities are independent of the particular choice of the body B. Let g ∈ L 2 (R d ), and let Λ ⊂ R 2d be a uniformly discrete set.
By the density theorem, there is a clear separation between overcomplete frames and undercomplete Riesz sequences with Riesz bases corresponding to the critical density lattices that satisfy D(Λ) = 1. The classical BLT [7] on L 2 (R) says that the window g of any Gabor Riesz basis G(g, a, b) must either not be smooth or must decay poorly at infinity.
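As a standard worked example of these definitions (included as an illustration, not taken from the paper): for the full-rank rectangular lattice $\Lambda = a\mathbb{Z}^d \times b\mathbb{Z}^d \subset \mathbb{R}^{2d}$, a translate of $rB$ has volume $r^{2d}$ and contains roughly one point of $\Lambda$ per fundamental cell of volume $(ab)^d$, so
\[
  \nu^{\pm}(r) = \frac{r^{2d}}{(ab)^{d}} + O(r^{2d-1})
  \quad \Longrightarrow \quad
  D^{+}(\Lambda) = D^{-}(\Lambda) = \frac{1}{(ab)^{d}} .
\]
For $d = 1$ this recovers the familiar critical density: $D(\Lambda) = 1$ exactly when $ab = 1$.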
Twisted Zak transform and Amalgam BLT
For a = b = 1 the properties of twisted translation are listed below (see [23]).
Then one can define twisted Gabor tight frames, Riesz bases and the frame operator analogously. It is natural to ask about the density result as in Theorem 2.3 for twisted Gabor frames. For a, b > 0 and g ∈ L 2 (C), the sequence {T t (am,bn) g : m, n ∈ Z} is complete in L 2 (C) if and only if the system {ρ(p, q)g : (p, q) ∈ Λ ⊂ R 4 } is complete in L 2 (R 2 ), where p = (am, bn), q = (bn, −am). In this case the uniform Beurling density is D(Λ) = 1/(ab) 2 . So by Theorem 2.3, if ab > 1 then the twisted Gabor system G t (g, a, b) = {T t (am,bn) g : m, n ∈ Z} is incomplete in L 2 (C). Therefore without loss of generality we consider the case when a = b = 1 throughout the paper. Now we define the twisted Zak transform, which will be an important tool in proving our main results.
where 1 is the constant function 1. Since we are interested in obtaining the BLT for twisted Gabor frames, we define the twisted Zak transform with a slight modification in the following way.
where k̄ is the complex conjugate of k and Im(wk̄) is the imaginary part of wk̄.
Clearly Z t f is well-defined for continuous functions with compact support, and the defining sum converges. The idea of the proof is similar to that for the Zak transform on L 2 (R), as in [8].
The unitary nature of the twisted Zak transform allows one to transfer certain conditions on frames for L 2 (C) into conditions on L 2 (Q × Q). More precisely, {f k } is complete or a frame or an exact frame or an orthonormal basis for L 2 (C) if and only if the same is true for {Z t f k } in L 2 (Q × Q). As in the case of the Zak transform on L 2 (R), we obtain similar properties of the twisted Zak transform on L 2 (C) in the following lemma. However, our main results are still valid if the Zak transform on L 2 (R 2 ) is applied in place of the twisted Zak transform on L 2 (C).
Then the following properties hold. Proof. The proof of the lemma follows similarly to that for the Zak transform on L 2 (R) (see [7,8,14] or [17]). We only prove part (viii). Assume that Z t f (z, w) ≠ 0 for all (z, w) ∈ C 2 .
Since Z t f is continuous and nonvanishing on the simply connected domain C 2 , there is a continuous choice of argument ϕ. Then for each z and w there are integers l z and k w such that ϕ(z, 1) = ϕ(z, i) + 2πl z and ϕ(i, w) = ϕ(0, w) + 2πk w − 2πr. Since ϕ(z, 1) − ϕ(z, i) and ϕ(i, w) − ϕ(0, w) + 2πr are continuous functions of z and w respectively, l z = l (say) and k w = k (say) for all z, w ∈ C, which leads to a contradiction; hence Z t f must have a zero. The amalgam norm is defined as follows, with the obvious modification for q = ∞.
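The display defining the amalgam norm did not survive extraction; the standard definition (a reconstruction, with the indexing convention an assumption) is
\[
  \|f\|_{W(L^{p},\ell^{q})}
  = \Bigg( \sum_{k \in \mathbb{Z}^{2}}
      \Big( \int_{Q+k} |f(z)|^{p}\, dz \Big)^{q/p} \Bigg)^{1/q},
  \qquad Q = [0,1) \times [0,1),
\]
with the essential supremum over $Q + k$ in place of the inner integral when $p = \infty$, and the supremum over $k$ in place of the sum when $q = \infty$.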
For p ≥ 1, consider the amalgam space defined by W (C 0 , ℓ p ) = {f ∈ W (L ∞ , ℓ p ) : f is continuous}. The amalgam BLT in terms of W (C 0 , ℓ 1 ) and a subspace of B 2 is obtained in the following theorem.
Theorem 3.6. (Amalgam BLT) Let g ∈ L 2 (C). If the twisted Gabor system G t (g, 1, 1) is an exact frame for L 2 (C) then g ∉ W (C 0 , ℓ 1 ) and W (g) ∉ W. Proof. Suppose that g ∈ W (C 0 , ℓ 1 ). Then by the definition of the twisted Zak transform, Z t g is continuous. By Lemma 3.4 (viii), Z t g must have a zero. Therefore |Z t g| −1 is unbounded and, by Lemma 3.4 (v), G t (g, 1, 1) cannot be a frame. Now assume that G t (g, 1, 1) is an exact frame and W (g) ∈ W. Then by the inversion formula for the Weyl transform g(z) = tr(π(z) * W (g)), so g ∈ W (C 0 , ℓ 1 ), which leads to a contradiction.
The BLT and the amalgam BLT are two distinct results. There exists a function g ∈ L 2 (C) satisfying the BLT but not the amalgam BLT, and vice versa.
The following examples illustrate the difference between the BLT and amalgam BLT.
Let z = x + iy, and define g : C → R by Then clearly g ∈ W (C 0 , ℓ 1 ). Further, Clearly W (g) ∈ B 2 . From the inversion formula for Weyl transform it follows that W (g) ∈ W.
Next we show that ‖Z̄g‖ 2 = ∞. Note that for each m, n ∈ N and (x, y) ∈ (m, m + 1) × (n, n + 1), the integrand can be estimated from below. Example 3.8. We shall construct a function f such that Zf and Z̄f ∈ L 2 (C) but f ∉ W (C 0 , ℓ 1 ) and W (f ) ∉ W. For sufficiently large k (say k > N ) choose a k = b k suitably, and define the continuous function g k accordingly, where g ′ is the classical derivative of g, defined except at countably many points.
Again if W (f ) ∈ W then the inversion formula for Weyl transform gives f ∈ W (C 0 , ℓ 1 ), which is a contradiction.
Now we investigate the relationships between the operators Z, Z̄ and the continuity of the twisted Zak transform. A version of the BLT assuming the Wiener amalgam condition is obtained in the following theorem: Theorem 3.9. If g ∈ L 2 (C) and Zg, Z̄g ∈ W (C 0 , ℓ 2 ), (3.4) then {T t (m,n) g} cannot be a twisted Gabor frame for L 2 (C).
Proof. Given that g is continuous, the fundamental theorem of calculus for complex variables and the ML-inequality can be applied. Now we claim that g ∈ W (C 0 , ℓ 2 ). To prove the claim it is sufficient to show Σ k |g(z k + k)| 2 < ∞. Proof. (i) Choose the smallest positive integer N ǫ with the required property. Applying the mean value theorem to the Schwartz class function φ on R, we have the corresponding estimate for some θ ∈ (0, 1). Writing 2ξφ(ξ) = (A + A * )φ(ξ) and 2φ ′ (ξ) = (A * − A)φ(ξ), by (i) we get (ii). (iii) From (i) and (ii) we get the claim. We use the following notation to estimate the upper bound for the oscillation of the twisted Zak transform over small cubes. Let x = (t, w) ∈ R 2 and r > 0. Then Q(x; r) is the square centered at x with radius r, i.e.
Thus the square Q = [0, 1) × [0, 1) can be represented as Q(1/2, 1/2; 1). Let z 0 ∈ [1/2, 3/2] and w 0 , ǫ ∈ C be given. Let f ǫ , f be as in Lemma 4.1. Then there exists an N ǫ ∈ N such that the corresponding estimate holds, where T t ǫ,j G(z, w) is the twisted translation of G in the jth variable for j = 1, 2.
Proof. As in the proof of Theorem 4.2, applying the Cauchy-Schwarz inequality to the left hand side of (4.1) and (4.2), the proof follows immediately. Further, using the fact that ‖f χ Q(z0,r) ‖ 2 → 0 as r → 0, we obtain the corresponding limit. Assume that both Zg and Z̄g ∈ L 2 (C). We will show that our assumption together with (4.3) leads to a contradiction in the following three steps.
Putting C(r) = C 1,g (r) + C 2,g (r) we get (4.6). Then the inequality (4.5) can be obtained from (4.4) by applying the Cauchy-Schwarz inequality to the last term of the above calculation.
Uncertainty Principle approach to BLT
Motivated by the proofs of the BLT for orthonormal bases and Riesz bases (see [4,10]), we prove the analogue of the BLT on L 2 (C). We start with the proof of Theorem 1.3, which is a variation of the Heisenberg uncertainty inequalities for L 2 (C).
Proof of Theorem 1.3: Expand f = Σ m,n ⟨f, φ m,n ⟩ φ m,n (z). Using the properties of the operators Z and Z̄ we get the stated inequality. Proof. Assume that Zg, Zg̃, Z̄g, Z̄g̃ ∈ L 2 (C). Since {g m,n } is a twisted Gabor frame for L 2 (C), this assumption leads to a contradiction. Remark 5.3. If the twisted Gabor frame {g m,n } forms an orthonormal basis then g = g̃ and the above theorem is precisely the analogue of Battle's proof of the BLT in [4]. The BLT will follow from the weak BLT if Z̄g ∈ L 2 (C) ⇔ Zg̃ ∈ L 2 (C) and Zg ∈ L 2 (C) ⇔ Z̄g̃ ∈ L 2 (C).
However we show that the BLT and the weak BLT are actually equivalent.
Proposition 5.4. If g ∈ L 2 (C) and {g m,n } is an exact twisted Gabor frame for L 2 (C), then there is a unique g̃ ∈ L 2 (C) such that Z t g̃ = 1/Z t g.
Since {g m,n } is complete in L 2 (C) and h, g̃ ∈ L 2 (C), it follows that h = g̃.
(3) The functions Lg and g cannot both be in L 2 (C). | 2017-08-01T13:38:41.000Z | 2017-06-01T00:00:00.000 | {
"year": 2017,
"sha1": "a5259357794805b87300ab1b3b87913842da78eb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1706.00294",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a5259357794805b87300ab1b3b87913842da78eb",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
244346538 | pes2o/s2orc | v3-fos-license | Tandem Mass Tag labelling quantitative acetylome analysis of differentially modified proteins during mycoparasitism of Clonostachys chloroleuca 67–1
Lysine acetylation (Kac) is an important post-translational modification (PTM) of proteins in all organisms, but its functions have not been extensively explored in filamentous fungi. In this study, a Tandem Mass Tag (TMT) labelling lysine acetylome was constructed, and differentially modified Kac proteins were quantified during mycoparasitism and vegetative growth in the biocontrol fungus Clonostachys chloroleuca 67–1, using liquid chromatography-tandem mass spectrometry (LC–MS/MS). A total of 1448 Kac sites were detected on 740 Kac proteins, among which 126 sites on 103 proteins were differentially regulated. Systematic bioinformatics analyses indicate that the modified Kac proteins were from multiple subcellular localizations and involved in diverse functions including chromatin assembly, glycometabolism and redox activities. All Kac sites were characterized by 10 motifs, including the novel CxxKac motif. The results suggest that Kac proteins may broadly regulate protein interaction networks during C. chloroleuca parasitism of Sclerotinia sclerotiorum sclerotia. This is the first report of a correlation between Kac events and the biocontrol activity of C. chloroleuca. Our findings provide insight into the molecular mechanisms underlying C. chloroleuca control of plant fungal pathogens regulated by Kac proteins.
Kac events might be involved in 67-1 mycoparasitism. Acetylation may affect the biological activities of the biocontrol fungus by regulating the expression of mycoparasitism-related genes and/or influencing the activities of proteins that contribute to signal transduction, defense responses and mycoparasitic processes 28,31 . Proteomic strategies established in our previous studies yielded sound data in the plant pathogenic fungi Phytophthora sojae and B. cinerea 36,37 , and were adapted to the beneficial fungus C. chloroleuca. In this research, the lysine acetylome of strain 67-1 during mycoparasitism of sclerotia was constructed, and Kac proteins and sites were identified and characterized. The results provide a comprehensive view of the molecular mechanisms regulated by Kac events in the biocontrol activity of C. chloroleuca against plant fungal pathogens.
Results
Identification and characterization of Kac proteins and sites in C. chloroleuca 67-1. Profiling of Kac modifications in 67-1 during the mycoparasitic process and during vegetative growth was conducted. The mycelia of 67-1 collected at different infection stages were combined as one sample to reflect the mycoparasitic process of C. chloroleuca, while mycelia without the induction of sclerotia served as a control. A quantitative lysine acetylome of C. chloroleuca 67-1 was generated by TMT labelling, affinity purification, and LC-MS/MS. The results of repeatability tests showed that the quantitative data were statistically consistent (Fig. S1). Mass errors of most Kac peptides were within 3 ppm, which is consistent with precise MS analysis. The peptides ranged in length from 8 to 18 amino acids (Fig. 1A, C), which is consistent with the expected fragments for trypsin-based enzymatic digestion.
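A minimal sketch of the two quality checks described above (peptide mass error in parts per million and tryptic peptide length). The peptide sequences and masses below are hypothetical; only the ±3 ppm and 8–18 residue criteria come from the text.

```python
# Sketch of the quality checks described above: peptide mass error in ppm
# and peptide length filtering. Sequences and masses are illustrative only.

def mass_error_ppm(observed_mz: float, theoretical_mz: float) -> float:
    """Relative mass error in parts per million."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

peptides = [
    # (sequence, observed m/z, theoretical m/z) -- hypothetical values
    ("ASDFKACLER", 1182.6043, 1182.6019),
    ("MKTAYIAK", 912.5101, 912.5093),
]

for seq, obs, theo in peptides:
    ppm = mass_error_ppm(obs, theo)
    length_ok = 8 <= len(seq) <= 18   # expected tryptic length range
    accurate = abs(ppm) <= 3.0        # +/- 3 ppm criterion from the text
    print(f"{seq}: {ppm:+.2f} ppm, length_ok={length_ok}, accurate={accurate}")
```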
Analysis of Kac motifs in C. chloroleuca.
To further investigate the characteristics of acetylation sites in C. chloroleuca 67-1 during the mycoparasitic process, conserved sequence motifs in the 1431 identified peptides were evaluated, revealing 10 conserved sequences surrounding Kac sites. Most of the conserved residues are located downstream of Kac sites, with asparagine (N), histidine (H), lysine (K), tyrosine (Y), arginine (R), serine (S), threonine (T) and phenylalanine (F) in the +1 position, lysine (K) in the +4 position, and cysteine (C) conserved upstream in the -3 position (Fig. 2, Table S2). Among the 10 motifs, four are highly conserved in both eukaryotic and prokaryotic organisms. Of the remaining six motifs (KacK, KacN, KacR, KacS, KacT and CxxKac), KacS had previously been detected only in the acetylomes of Trichinella spiralis and Aspergillus flavus, while KacN, KacK, KacR and KacT had been reported only in T. spiralis 38,39 ; none had been found in previous analyses of plant pathogens and biocontrol fungi. The CxxKac motif appears to be unique to C. chloroleuca, detected herein at 26 Kac sites on 25 Kac proteins associated with binding domains of pyridoxal-5'-phosphate (PLP)-dependent transferases and histidine kinases involved in oxytetracycline biosynthesis and cytokinin activities; however, most of these proteins have not yet been characterized. The results indicate that some Kac events are of special importance for fungi, and more Kac proteins and sites remain to be explored in future work.
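A minimal sketch of how centred sequence windows around modified lysines can be prepared, the usual input for motif tools such as Motif-x (which the study used as a web service). The protein sequence, site positions and window half-width below are hypothetical, not taken from the paper.

```python
# Sketch of preparing centred sequence windows around Kac sites for a
# Motif-x-style analysis. Protein and site positions are hypothetical.

def kac_window(sequence: str, site: int, half: int = 6, pad: str = "_") -> str:
    """Return a (2*half+1)-residue window centred on a 1-based Kac site,
    padded with underscores at protein termini."""
    assert sequence[site - 1] == "K", "site must point at a lysine"
    left = sequence[max(0, site - 1 - half):site - 1].rjust(half, pad)
    right = sequence[site:site + half].ljust(half, pad)
    return left + "K" + right

protein = "MCASKACDEFGHKNRSTKYW"   # hypothetical protein sequence
for pos in (5, 13, 18):            # hypothetical acetylated lysine positions
    print(pos, kac_window(protein, pos))
```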
Characteristics of differentially regulated Kac sites and proteins in C. chloroleuca. Among the quantified Kac sites, 80 were up-regulated and 46 were down-regulated during the mycoparasitic process in C. chloroleuca 67-1, compared with vegetative growth (P < 0.05, Fig. 3, Table S3). Gene Ontology (GO) analysis of the three main functional categories (biological process, molecular function and cellular component) was performed. In the biological process category, 38% of differentially regulated Kac proteins were associated with metabolic process, 27% were related to single-organism process, and 23% were linked to cellular process. Additionally, proteins were found to be involved in responses, localization and biological regulation (Fig. 4A, Table S4). In the cellular component category, 39% of the proteins were associated with the cell wall and cell envelope, 25% were linked to organelles, 20% were related to macromolecular complexes, and 14% were membrane-associated (Fig. 4B). In the molecular function category, catalytic activity and binding activity were the most important processes, accounting for 90% of the identified differentially regulated Kac proteins (Fig. 4C). Analysis of the subcellular localization of the differentially regulated acetylated proteins in C. chloroleuca during sclerotia induction showed that most proteins were located in the cytoplasm (32%), followed by the mitochondria (22%) and the nucleus (21%), while 9% were found to be extracellular (Fig. 4D, Table S5). In the Eukaryotic Orthologous Group (KOG) classification, ~70% of differentially regulated acetylated proteins were related to metabolism and cellular processes such as carbohydrate transport and metabolism, and energy production and conversion. However, the functions of 10.9% of the modified proteins were not clear (Fig. 5, Table S6). The results of GO, KOG and subcellular localization analyses were consistent; differentially regulated Kac sites and proteins are involved in diverse functions during C. chloroleuca mycoparasitism, especially metabolism, oxidation-reduction processes, and binding.
C. chloroleuca enrichment analysis.
To detect the enrichment trends of the differentially regulated Kac sites and proteins, GO and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed. GO analysis showed that up-regulated proteins were strongly linked to binding activities, multiple metabolism processes, and oxidoreductase activities, such as GTP binding and metabolism of monosaccharides (especially hexose and ribose phosphate), and many function as components of enzyme complexes (Fig. 6A, Table S7). In these pathways, guanine nucleotide-binding protein, glucose-6-phosphate isomerase (PGI), triose-phosphate isomerase (TPI), glyceraldehyde-3-phosphate dehydrogenase (GAPD), 6-phosphogluconate dehydrogenase (G6PD), catalase/peroxidase HPI, cytochrome C oxidase, cytochrome P450, and chitinase were markedly enriched. By contrast, Kac proteins associated with dimerization, chromatin assembly, and nucleosome organization were significantly down-regulated, especially histone 2A variant and histone H3. Dihydrolipoamide succinyltransferase (DLST) and dihydrolipoyl dehydrogenase (DLDH), involved in the glyoxylate cycle, were also markedly down-regulated (Table S8). KEGG pathway enrichment analysis indicated that many up-regulated Kac proteins were associated with glycerolipid metabolism and the pentose-phosphate pathway (PPP) (Fig. 6B), and the down-regulated proteins were consistent with those identified by GO enrichment analysis (Table S8). In addition, the identified Kac protein domains were predicted to be histones, peptidases, oxidoreductases, aldehyde dehydrogenase (ALDH), and FAD/NAD(P)-binding domains (Fig. 7), all closely related to various cellular activities based on GO and KEGG enrichment analyses. In general, Kac events are active during chromatin assembly, energy metabolism, the tricarboxylic acid cycle (TCA) and glycometabolism in C. chloroleuca induced by S. sclerotiorum.
C. chloroleuca protein interaction network analysis. A protein interaction network was established using STRING. Two groups comprising 11 and 7 proteins were associated with the nuclear nucleosome pathway and the PPP, respectively (Fig. 8, Table S9). These results further confirm our conclusion that Kac proteins associated with chromatin assembly and glycometabolism are essential during C. chloroleuca mycoparasitism.
Discussion
Although Kac is a widespread and highly conserved PTM of proteins in all organisms, its functions have not been extensively explored in filamentous fungi 37 . In the current study, 740 Kac proteins were identified, accounting for 15% of all proteins in C. chloroleuca 67-1; this proportion of Kac proteins is much higher than previously reported for P. sojae, B. cinerea, F. graminearum and B. bassiana 31,32,35,36 . To the best of our knowledge, this is the first report on the correlation between Kac events and the biocontrol activity of C. chloroleuca. As an important mycoparasite, C. chloroleuca has great potential for controlling a range of plant fungal diseases under various environmental conditions 35 . Many research efforts have been made towards understanding its mycoparasitic strategies. It is well known that ATP and NADPH are produced in the PPP, and NADPH has reducing power for anabolism and maintains the redox balance of cells, while intermediate products are used for biosynthesis 40 . Therefore, we believe that acetylation of proteins involved in carbohydrate metabolism and energy production and conversion may be essential during C. chloroleuca mycoparasitism, and we speculate that the expression levels of catalase/peroxidase enzymes are differentially up-regulated in the mycoparasite in response to stimulation by S. sclerotiorum. In addition, the glyoxylate cycle, which complements the TCA, can increase the utilization of acetyl-CoA and the production of succinic acid to boost the energy supply. Additional experiments will be needed to conclusively prove these findings.
Consistent with the GO and KEGG enrichment analyses of the Kac proteins, the domain enrichment analysis demonstrated that these proteins were predominantly predicted to contain histone, peptidase, oxidoreductase, ALDH and FAD/NAD(P)-binding domains. Previous studies proposed that histone deacetylases utilizing NAD+ as a cofactor are sensitive to nutrient levels in cells 40 . When energy is limited, NAD+ levels increase and histone deacetylases are activated, and a series of metabolic signals are transduced by deacetylation 17,40,41 . We speculate that Kac events may monitor the intracellular nutrient and energy status during C. chloroleuca vegetative growth and the mycoparasitic process. In addition, NAD+ is also a coenzyme of dehydrogenases, and it is essential in glycolysis, gluconeogenesis, the TCA, and the respiratory chain. All these findings strongly suggest that lysine acetylation is important for the biological control activity of C. chloroleuca against plant fungal pathogens.
In conclusion, these findings represent the first extensive data on lysine acetylation in C. chloroleuca. These data not only indicate that the regulatory scope of lysine acetylation is broad in C. chloroleuca, but also expand our current knowledge of the molecular mechanisms underlying C. chloroleuca control of plant fungal pathogens regulated by Kac proteins.
Methods
Protein extraction. Plates of 67-1 without sclerotia served as a control, experiments were conducted three times, and a total of six samples were frozen immediately in liquid nitrogen and stored at -80 °C 35 . Mycelia from C. chloroleuca were ground to powder in liquid nitrogen using a mortar and pestle, and transferred to 2 mL tubes containing lysis buffer with 10 mM dithiothreitol, 1% protease inhibitor cocktail, 3 μM trichostatin A (TSA) and 50 mM nicotinamide (NAM). The samples were ultrasonicated on ice using a High-intensity Ultrasound Processor (Scientz, Ningbo, China). An equal volume of Tris-saturated phenol (pH 8.0) was added and mixed by vortexing for 5 min. The mixture was centrifuged at 5,000 g at 4 °C for 10 min, and the upper phenol phase was transferred into a new tube containing four volumes of ammonium sulphate-saturated methanol. The samples were incubated at -20 °C for 6 h and then centrifuged at 4 °C for 10 min. The precipitated proteins were collected and washed with ice-cold methanol followed by three washes with ice-cold acetone. The proteins were re-dissolved in 8 M urea and protein concentrations were determined using a BCA Protein Assay Kit (Beyotime, Shanghai, China).
Trypsin digestion of C. chloroleuca 67-1 protein samples. The protein samples were reduced with dithiothreitol at a final concentration of 5 mM at 56 °C for 30 min, then alkylated with iodoacetamide at a final concentration of 11 mM at room temperature in darkness for 15 min. Trypsin was added at a ratio of 1:50 (trypsin/protein, w/w) and incubated overnight, and 1:100 trypsin was then added and incubated for 4 h to thoroughly digest the protein samples.
TMT labelling. The peptides were desalted using a Strata X C18 SPE Column (Phenomenex, Torrance, CA, USA), vacuum-dried, and reconstituted in 0.5 M triethylammonium bicarbonate (TEAB). The samples were labelled using a TMT 6-plex Labelling Kit (Thermo Fisher Scientific, Rockford, IL, USA) according to the manufacturer's instructions.
HPLC fractionation. The labelled peptides were eluted with a gradient of 8-32% acetonitrile (ACN) using a Thermo Betasil C18 Column (Thermo Fisher Scientific). A total of 60 fractions were collected (one per minute) by reversed-phase HPLC, combined into four groups, and dried using a vacuum freeze centrifuge (Eppendorf, Hamburg, Germany).
Quantitative analysis of differentially modified proteins. The influence of protein abundance on the modification signals was eliminated by quantitative proteome normalization, the fold-change for differential modification of Kac sites under different treatments was calculated, and the p-value of the differential modification ratio was determined by t-test; proteins with p < 0.05 and fold-change > 1.5 were considered significantly up-regulated, while those with fold-change < 1/1.5 were considered significantly down-regulated. A volcano plot was drawn in which the horizontal axis represents the fold-change of protein differences after log2 conversion and the vertical axis represents the p-value after -log10 transformation for the significance tests.
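A minimal sketch of the differential-modification test just described: a two-sample t-test per site and the 1.5-fold thresholds from the text, plus the two volcano-plot axes. The replicate intensity values are hypothetical.

```python
# Sketch of the fold-change / t-test / volcano classification described
# above. Intensities are hypothetical; thresholds come from the text.
import numpy as np
from scipy import stats

myco = np.array([4.1e6, 3.8e6, 4.4e6])   # mycoparasitism replicates
veg = np.array([2.3e6, 2.6e6, 2.1e6])    # vegetative-growth replicates

fold_change = myco.mean() / veg.mean()
t_stat, p_value = stats.ttest_ind(myco, veg)

log2_fc = np.log2(fold_change)            # volcano-plot x axis
neg_log10_p = -np.log10(p_value)          # volcano-plot y axis

if p_value < 0.05 and fold_change > 1.5:
    call = "up-regulated"
elif p_value < 0.05 and fold_change < 1 / 1.5:
    call = "down-regulated"
else:
    call = "unchanged"

print(f"FC={fold_change:.2f}, p={p_value:.4f}, "
      f"log2FC={log2_fc:.2f}, -log10p={neg_log10_p:.2f} -> {call}")
```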
Affinity enrichment of Kac peptides.
Analysis of the repeatability of quantitative data. To analyze the data derived from the three repeated experiments, principal component analysis (PCA), relative standard deviation (RSD) and Pearson's correlation coefficient were used, and the repeatability of the modification quantification was evaluated.
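A minimal sketch of two of the replicate-consistency measures named above, pairwise Pearson correlation between replicates and per-site RSD, computed on a small hypothetical intensity matrix.

```python
# Sketch of the replicate checks described above: pairwise Pearson
# correlation and per-site relative standard deviation (RSD) across
# three replicates. All intensity values are hypothetical.
import numpy as np

# rows = Kac sites, columns = replicates
reps = np.array([
    [1.00e6, 1.05e6, 0.97e6],
    [2.40e5, 2.20e5, 2.55e5],
    [7.80e6, 8.10e6, 7.60e6],
])

pearson = np.corrcoef(reps.T)   # 3x3 replicate-vs-replicate correlation
rsd = reps.std(axis=1, ddof=1) / reps.mean(axis=1) * 100  # % RSD per site

print("replicate Pearson matrix:\n", np.round(pearson, 3))
print("per-site RSD (%):", np.round(rsd, 2))
```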
Database searching. The resulting LC-MS/MS data were searched against the C. chloroleuca 67-1 mycoparasitism-related gene database 35 concatenated with the reverse decoy database using MaxQuant (v1.5.2.8), and a common contaminant database was appended to eliminate the effects of potential contaminants. Trypsin/P was specified as the cleavage enzyme, up to four missed cleavages were allowed, the minimal peptide length was set at 7 residues, and the maximal number of modification sites per peptide was set at 5. The mass tolerance for precursor ions was set at 20 ppm and 5 ppm for First search and Main search, respectively, and for fragment ions it was set at 0.02 Da. Carbamidomethyl on Cys was selected as a fixed modification, while oxidation on Met, acetylation on Lys and acetylation on the protein N-terminus were selected as variable modifications. TMT 6-plex was selected as the quantitative method. The false discovery rate (FDR) thresholds for proteins, peptides and modification sites were adjusted to 1%, and the site localization probability was set to no less than 0.75.
Bioinformatics analysis. The Kac proteins with differentially modified sites were analyzed using multiple bioinformatic tools. GO analysis was performed for functional classification and enrichment with the UniProt-GOA database (http://www.ebi.ac.uk/GOA/). InterProScan and InterPro (http://www.ebi.ac.uk/interpro/) were used to classify and enrich Kac protein domains, respectively, and InterProScan was also used to analyze proteins unannotated by GO. WoLF PSORT (http://wolfpsort.seq.cbrc.jp/) was used to predict the subcellular localization of Kac proteins. The differentially modified proteins were mapped using KOG analysis (http://genome.jgi.doe.gov/help/kogbrowser.jsf), and functional pathways of the Kac proteins were annotated and enriched using KEGG analysis (http://www.genome.jp/kegg/). All protein sequences with differential modifications were searched against STRING (version 10.5) for protein-protein interactions, and interactions with high confidence scores (> 0.7) were retained. The top 50 Kac proteins with the closest interactions were selected for graph-theoretical clustering algorithm and molecular complex detection (MCODE) analysis. The online software Motif-x (http://motif-x.med.harvard.edu/) was used to predict the motif sequences of all identified Kac positions.
In these analyses, all database protein sequences were used as the background database parameter, and other parameters were set to default values. For each category, a two-tailed Fisher's exact test was employed to assess the significance of the enrichment.
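A minimal sketch of the two-tailed Fisher's exact test named above, applied to one 2x2 contingency table comparing differentially modified Kac proteins against the background database for a single functional category. All counts are hypothetical.

```python
# Sketch of the enrichment test described above: a two-tailed Fisher's
# exact test on one hypothetical category. Counts are illustrative only.
from scipy.stats import fisher_exact

in_cat_diff = 18     # differentially modified proteins in the category
out_cat_diff = 85    # differentially modified proteins outside it
in_cat_bg = 120      # background proteins in the category
out_cat_bg = 4600    # background proteins outside it

table = [[in_cat_diff, out_cat_diff],
         [in_cat_bg, out_cat_bg]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```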
"year": 2021,
"sha1": "a5610eda7dff777116c40e9d5f29efbb53158a76",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-01956-2.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "e05e2c2b7c9700374cae1b55b6de8598369951bd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
PATCHWORK STATES: THE LOCALIZATION OF STATE TERRITORIALITY ON THE SOUTH SUDAN–UGANDA BORDER, 1914–2014
This paper takes a localized conflict over a non-demarcated stretch of the Uganda–South Sudan boundary in 2014 as a starting point for examining the history of territorial state formation on either side of this border since its colonial creation in 1914. It argues that the conflict was an outcome of the long-term constitution of local government territories as patches of the state, making the international border simultaneously a boundary of the local state. Some scholars have seen the limited control of central governments over their borderlands and the intensification of local territorialities as signs of African state fragmentation and failure. But the article argues that this local territoriality should instead be seen as an outcome of ongoing state-formation processes in which state territory has been co-produced through local engagement and appropriation. The paper is thus of wider relevance beyond African or postcolonial history, firstly in contributing a spatial approach to studies of state formation which have sought to replace centre–periphery models with an emphasis on the centrality of the local state. Secondly it advances the broader field of borderlands studies by arguing that international boundaries have been shaped by processes of internal territorialisation as well as by the specific dynamics of cross-border relations and governance. Thirdly it advocates a historical and processual approach to understanding territory, arguing that the patchwork of these states has been fabricated and reworked over the past century, entangling multiple, changing forms and scales of territory in the ongoing constitution of state boundaries.
surprising and unprecedented. 2 Despite recurrent armed rebellions spilling across the border, the Kuku and Ma'di themselves had a long history of peaceful relations, intermarriage, trade and common farming livelihoods in the green hills above the west bank of the Nile. Never formally demarcated, this international border has been described as the epitome of an 'artificial' colonial boundary, resisted or ignored by local communities with enduring cross-border solidarities. 3 'We are the same people, same blood', representatives of the closely allied national governments had declared in 2011, seeking to defuse the rising local tensions over the border. 4 So why had this boundary become such a focus of tension and conflict on the ground, and why was it the local (rather than central) government administrations on either side that were leading the competing assertions of state territorial limits?
Answering these questions entails addressing more fundamental questions about the nature and history of state territory in these countries, with implications for how we approach state formation as a spatial process more broadly. This article draws on oral histories from Moyo and Kajokeji and largely district-level documentation to reorient our understanding of border-making and state-formation processes from the centre to the local. This approach reveals the historical and contemporary role of local-level actors in constituting state territorial sovereignty by investing in their own jurisdictional and political patches. The result has been the ongoing emergence of these states as patchworks of local government territories, making the international border simultaneously a boundary of the local state.
This argument runs counter to the tendency of scholarly and media analysis to see the intensification of local territoriality (whether understood as the resilience of primordial ethnic divisions 5 or as newer populist political reactions to globalization 6) as a sign of the fragmentation and failure of states in Africa and beyond. 7 Such approaches often imply that state control of territory depends on the capacity and will of the centre to project or 'broadcast' its power across the periphery. 8 Yet the work of borderlands scholars has increasingly demonstrated that states can also be constructed from the outside in, as borderland inhabitants play a crucial role in giving meaning and value to national boundaries on the ground. 9 To some extent, however, such approaches continue to replicate centre-periphery spatial models, even if they seek to reverse these by placing borderlands at the centre of the analysis. Rather than emphasizing the distinctive features of international boundaries, this article suggests that the Moyo-Kajokeji border dynamics reflect broader internal processes of boundary-making across Uganda and South Sudan. 10 And while these borderlands undoubtedly represent some of the most marginalized and alienated peripheries of either state, this centre-periphery geography has been cross-cut by the horizontal tensions between neighbouring territories and partially counteracted by the resulting local appeals to higher state authority.
This article thus explores the territorial implications of approaches to state formation that have sought to replace centre-periphery models with an emphasis on the centrality of the local state. The term 'patchwork' is intended not only to describe patterns of state territoriality but also as a spatial metaphor for the way in which state authority has been coproduced through local engagement and appropriation. As scholars have increasingly argued in relation to other periods and places, this 'localization of state power' has been central to state formation rather than a sign of state weakness or fragmentation. 11 It is not unusual, of course, to describe states and nations as patchworks of regions, localities, ethnicities or federated units.
But often the implication is that states have been superimposed onto prior, smaller territorial identities or polities. 12 Instead this article uses the term 'patchwork' more precisely to characterize the mutual constitution of local and national state territories. The metaphor is particularly apt because a patchwork involves the fabrication of both the individual patches and the overall quilt. The 'patches' in Sudan and Uganda did not lie intact waiting for state power to stitch them together, but have been imagined, drawn, cut and sewn -and recut and resewn -by multiple actors engaged and invested in state territoriality at the local level. 13 In turn, the state is not a mere 'décor' or 'façade', as has been argued in relation to the postcolonial African state more widely, 14 but is fundamentally made of these constituent patches. A patchwork can be reworked, unstitched and restitched, thus emphasizing territory as a 'work' always in progress. It can be rough and messy, allowing for the fraying of seams and entanglements of prior geographies in the production of state territory. As political geographers emphasize, 'entanglement' is a similarly useful spatial metaphor for describing the 'threadings, knottings and weavings' of power relations as these are 'spun out across and through the material spaces of the world'. 15 The material environment has also given patterns to the political construction of space: 16 natural features are seen as a source of historical evidence in continuing debates over the South Sudan-Uganda border, for example, and played an active role in determining working boundary lines made on the ground by local officials. 17 A patchwork state is not, then, simply a conglomeration of smaller entities or identities (as any state might appear), but a dynamic work-in-process in specific contexts where local territories emerge through and as a fundamental part of state formation. Such processes are particularly apparent in many African contexts, where twentieth-century colonialism entailed an unprecedented attempt to contain, govern and define people within bounded territories. 18 The shift from personal, plural and overlapping jurisdictions to territorial sovereignty occurred as a much longer-term process in Western Europe, for example. 19 But European governments took an increasing interest in defining and defending their boundaries and, from the late eighteenth century, in unifying and controlling the fabric of national space. 20 In contexts like Sudan and Uganda, by contrast, governance strategies have instead worked to accentuate and multiply the internal seams. While territory may be an outcome of state formation everywhere, its particular patchwork patterns in these contexts reflect political and economic dynamics that have rendered territorial control and identification more important for political power and resource access at the local level than at the national level. These dynamics will be explored through the rest of the article and include recurrent decentralization programmes, changing land values and the territorialization and politicization of ethnicity as the basis for citizenship.
The sporadic and limited interest of central governments in defining or demarcating the boundary between Uganda and Sudan (now South Sudan) is apparent in the archival record and previous studies of the border. 21 The new perspective presented in this article on local-level investment in state boundaries derives from new documentary and oral sources as well as from the analytical approach to state formation outlined above. Interviews and local-level documentation produced in Moyo and Kajokeji immediately before and after the 2014 conflict narrated the history of the international border through a series of key markers in both space and time, in order to evidence contemporary territorial claims. The conflict context no doubt produced newly virulent rival assertions of the border -both oral histories and the archival record simultaneously emphasized the historical prevalence of more peaceful cross-border relations. But the very production of rival historical narratives is an aspect of the local investment in the international boundary with which this article is concerned. As Newman and Paasi argue, it is through such narratives that boundaries are constructed; 22 telling the story of the border through key moments in time was working to fix its location in people's spatial imaginaries, demonstrating the temporal and processual nature of territory. The article therefore takes these narratives seriously and in their own terms, not to suggest that the boundary has always been as significant or contentious as in recent years, nor to try to verify one or other side of the story, but because they reveal what local actors have identified as the key moments and factors in a long-term process by which the international boundary came to be a focus of conflict -something that they, too, often described as a historical puzzle. These key moments also appear in the archival records, along with the repeated initiatives by local government officials, chiefs, councillors and politicians to try to define and demarcate the international boundary in the face of central government disinterest. Strikingly, both oral and documentary sources also revealed similar struggles over internal boundaries, as the unfulfilled colonial vision of bounded 'tribal' territories has [. . .] Even a later British colonial official in Uganda pronounced this 'the world's most idiotic' boundary description, asking 'If the Kuku tribe decide to move, do they carry the international boundary with them?' 24 Yet this is a question which has continued to be asked of internal administrative jurisdictions in these states -are they exercised over (mobile) subjects or over bounded territories? The continuing disputes over this question demonstrate that the constitution of territory is an ongoing and incomplete process. 25 By defining part of the Sudan-Uganda boundary in such tribal terms and more generally assuming ethnic identities to be territorial, the British colonial administrations established a basis for changing and emerging local definitions of ethnic territory to work their way into rival assertions of the boundary line. Section I of this article therefore begins by exploring recent, predominantly oral, local accounts of territorial history in this borderland to emphasize the impossibility of entirely disentangling 'indigenous' from 'government' territorialities or uncovering a distinct precolonial basis for the current patchwork.
Section II argues that the colonial ambition to map and fix people within boundaries was repeatedly undone by the limits of central government capacity or will, and by the resistance of borderland inhabitants, but that the patchwork state territories nevertheless began to take shape through the initiative of local government actors between the 1920s and 1940s. Section III contends that, despite an increased central government investment in asserting territorial sovereignty around and after Sudanese and Ugandan independence, it was largely at the local state level that the international boundary was a focus of concern and action in the period from the 1950s to 1980s. The final section of the article focuses on the parallel policies of decentralization espoused by both governments in recent decades. This period has seen an intensification of local government territoriality, encouraged also by the channelling of international development resources to the local level and by the commercialization of land and natural resources. As boundary disputes have proliferated, the entanglement of multiple territorial layers and logics has become more evident than ever in the ongoing production of these states as patchworks.
Africanist historians often contrast pre-colonial/indigenous and colonial forms of territoriality, and associate the latter with the imposition of linear boundaries and more ethnically defined and exclusionary ideas of territory. 26 Yet interviewees in Kajokeji and Moyo were swift to assert that 'boundaries are known' and to emphasize the deep historicity of the territorial arrangements that upheld their competing definitions of the international boundary. Their accounts, however, drew on multiple threads of legitimation for their territorial claims, from oral traditions of ancestral migration to the markers of colonial and post-colonial boundaries. There were some national differences in these accounts: Sudanese in Kajokeji reached much further back in history to support their claims that 'the southern boundary of the Kuku tribe' reached well south of the Ugandan constitutional definition, even claiming that there had once been a boundary signpost on Lake Albert, or in Moyo town. 28 At the same time they asserted ancestral Kuku land rights in the borderlands, based on the histories of individual 'clans'. These different claims show that the threads of contemporary territoriality cannot simply be disentangled to reveal an underlying precolonial patchwork, though the idea that such a patchwork exists has long been asserted as the basis for boundary definitions by a range of state actors.
The South Sudanese references to historic boundary markers located deep in what is now Ugandan territory -though dismissed as ludicrous by Ugandans -are rooted in the messy history of imperialism and boundary adjustments in the region. 29 This was a frontier zone even in the mid to late nineteenth century, at the violent edges of ivory and slave trading emanating both northwards from the east African coast via the lacustrine kingdoms, and southwards from Sudan and Egypt, closely followed by the expanding frontier of Turco-Egyptian imperialism. The latter reached into what is now northern Uganda until the overthrow of Turco-Egyptian rule by Mahdist forces in the 1880s. Initially a focus of intense interest for European explorers, by the 1890s the upper Nile had become the object of competing European imperialisms. The Belgians were first to establish a presence on the ground and a claim to territory that would eventually be restricted by the 1906 Anglo-Congolese Agreement to a lifetime lease of the 'Lado Enclave' to King Leopold administered from Kajokeji in Sudan. But the British administrations of Sudan and Uganda soon decided to adjust their boundary to include the West Nile region in Uganda in return for Sudan acquiring territory east of the Nile. This history of dramatically shifting boundaries explains some of the more extreme South Sudanese claims to Ugandan territory now, and is also expressed in local stories of boundary marker stones being carried back and forth by various individuals from each side until they were eventually left somewhere in the middle. 30 That boundaries move with people is also implicit in the territorial logic embedded in clan histories and ideas of spiritual authority over land. Oral traditions focus not on tribal origins of 'the Kuku' or 'the Ma'di' but on the origins and relations of the numerous clans that are now said to make up these ethnic groups. Each exogamous clan is said to be descended patrilineally from a particular heroic ancestor who migrated from elsewhere to settle as the 'firstcomer' in the place now defined as clan territory (usually several square miles in extent). One of his direct descendants inherits ritual responsibility for this clan land as its 'custodian' or 'landlord'. Oral traditions also tell of other people who came later and were invited to settle around the firstcomers to act like a protective 'fence'; again there is the idea that people can constitute a boundary. 31 'It is like a zariba [fenced/fortified enclosure]: those in the north, south, east and west defend us and we landlords are here in the middle. And [we] intermarry with these tribes until they become one now'. 32 In common with many other African oral traditions, these accounts reveal the inclusive and flexible nature of clan kinship and territoriality, in which clans sought to build strength in numbers, or 'wealth-in-people', by absorbing newcomers through marriage, alliance or subordination. 33 Nowadays clan territories are often asserted to be clearly bounded: 'Every person knows the boundaries, because we are divided into clans and the clan boundaries are known'. 34 Yet at the same time boundary-drawing is considered a morally and spiritually dangerous exercise, connoting an antisocial divisiveness. Like falsely claiming land or disputing boundaries, it is seen to provoke the dangerous spiritual forces associated with the soil and streams.
Land custodians are said to point out boundaries to people, but 'they don't put marks like a signpost; they use natural things, like trees, streams, hills. If any of us uses a hoe and starts making a boundary, that is already a curse' -'that is a sign of division and it will bring curses, death'. 35 In effect, this means that boundaries are constituted in the knowledge and memory of clan land custodians and other respected elders (preserving a central role for them in boundary disputes) more than they are visible on the ground -a technique of territoriality not entirely different from the existence of boundaries as lines on maps, interpretable on the ground only by those with the necessary technical expertise and equipment. 36 Clan traditions thus suggest the existence of territory before or beneath the creation of states, in the sense that the firstcomer/custodian families claim to have long exercised exclusive -albeit largely ritualized and latent -authority over land and its resources within bounded clan territories, which form an intricate small-scale patchwork across the region. But this depiction is complicated by the limits of clan-based authority and existence of multiple wider forms of political power, such as rainmakers or local allies of the nineteenth-century commercial and military forces. 37 Oral traditions are also influenced by changing understandings of clan territoriality in the twenty-first century as the increasing monetization of land transactions has given new value to the ritual and historical expertise claimed by land custodians. As we shall see, increasing disputes over administrative boundaries have further entangled and politicized clan boundaries in the assertion of larger ethnic territories. Yet the strikingly consistent thing about the clan traditions is that they tell of multiple origins: all the founding fathers of clans in both Kajokeji and Moyo are said to have come from other places and from different ethnic origins; the same clans are also found now within different ethnic groups. 38 The process of becoming Kuku or Ma'di appears to have emerged through co-residence in a particular area and the gradual ascendancy of one or other language and identity. This is a process that was certainly encouraged, if not coerced, by colonial administrations. European colonialists arrived in the region on the expansionist tide of confidence in their technological capacity to control vast territory and with a zeal for imposing 'a geometry of lines and areas', as they already had across much of Europe and beyond. 39 Yet boundaries were not simply decided by arbitrary haggling and line-drawing in European boardrooms. 40 For the broader cartographic obsession of the nineteenth century also sought the control and categorization of space within territories, exemplified in the Great Trigonometrical Survey of India -though the illusory power of maps frequently belied the limits of imperial knowledge and the complex realities on the ground. 41 In much of Africa the illusion of colonial order rested on 'tribal' categorizations. From the earliest stages of boundary negotiation in what would become the Sudan-Uganda borderlands, the same logics of 'tribal mapping' were thus at work in both internal and inter-colonial territorial ordering.
During the protracted Anglo-Belgian negotiations over the Lado Enclave, King Leopold's negotiator produced 'an elaborate tribal map of the southern Sudan', derided by British negotiators as 'a fantastic combination of the King's imagination and Junker's explorations made some twenty years before'. 42 The British were soon engaged in their own attempts at such mapping, and decided in 1911 that the new boundary between the Anglo-Egyptian Sudan and the Uganda Protectorate 'should be a tribal one'. 43 To the west of the Nile, the subsequent Boundary Commission was instructed to identify a line that would separate Bari language-speakers, that is Kuku and Kakwa, from the Ma'di and Lugbara, despite a Ugandan report of close relations between Kuku and Ma'di. 44 The first British administrator in Kajokeji, Captain Chauncey Stigand, was particularly fond of stereotyping entire tribes, and had already decided in 1911 that the Kuku around his headquarters were of a 'peaceful disposition' while the Ma'di were 'a treacherous and cowardly people'. He also reported that the Kayu/Ayo stream was 'the boundary between the Madi and Kuku country', a definition that would be included in the Commission's decision. 45 Yet in a more detailed account published posthumously, Stigand noted the very recent migrations in the area (reporting that some Kuku had previously lived south of the Kayu stream), and the ethnic 'mixture' in many areas. 46 Captain Kelly, Chief Commissioner of the Sudan-Uganda Boundary Commission, casually acknowledged that creating a tribal boundary might necessitate 'the transplanting of a few villages' and that 'it should remain with the officials conversant with actual local conditions to arrange the exact line which will most conveniently separate the mixed population'. Despite this presumptive confidence in the colonial capacity for territorial reorganization, he admitted that 'the boundary recommended is not based on first-hand knowledge'. 47 The Sudan Director of Surveys suggested 'that a definite settlement should stand over until a reliable map has been prepared', and the Sudan government therefore only agreed to the publication of the Uganda Order on 21 April 1914 'as a provisional measure'. 48 The governments of Sudan and Uganda did not make any immediate effort to clarify the boundary, however. Instead, as the Commission had advised, it was left to provincial and district administrators on the ground to try to make sense of it. From the outset, these local officials were as or more preoccupied with mapping and organizing internal territory. The first British administrator of the new West Nile District of Uganda (including Moyo), A. E. Weatherhead, met Stigand soon after his arrival and was clearly influenced by the latter's categorization of tribes in the area and his goal of amalgamating small communities under chiefs in order ultimately to build 'tribal' administration. 49 Weatherhead complained about the ethnic mixing in the district, including Kuku among Ma'di -a situation he set out to remedy by trying to establish clear boundaries between 'tribes', if necessary by moving settlements. 50 Once again, however, colonial confidence outweighed actual knowledge: a later British officer serving in the same district described a map drawn by Weatherhead in 1920 as 'not merely inaccurate, but completely wrong. Whole tribes are shown in the wrong places, rivers flow in the reverse directions, and distances are mistaken by hundreds per cent'.
51 The creation of the Sudan-Uganda boundary was flawed from the outset by the gap between European cartographic confidence and actual geographic knowledge, and more fundamentally by the assumption that a 'tribal' chequerboard should be the basis for territorial governance within and between colonial states. This set up a lasting tension between 'the social definition of territory' among pre-colonial clans, and the 'territorial definition of society' imposed by colonial states; 52 between the flexibility and fluidity of clan affiliation, authority and settlement patterns and the colonial vision of a permanent and precise tribal boundary. The partial entanglement of clan territories into a new patchwork of ethnic, administrative and national territorialities would be a complicated and gradual (indeed still ongoing) process that was little noticed or remarked by colonial officials. Yet this process would draw clan territoriality into even the highest levels of inter-colonial border negotiations by the 1930s.
After the First World War, colonial territorial ambitions were largely confined within agreed boundaries and directed towards the ordering of space within these. In this 'age of territory', Maier emphasizes the spreading and powerful idea that identity space and political space should be congruent. 53 In the African context this manifested in colonial attempts to construct territorial hierarchies of chiefdoms and ethnically defined local government districts, within which subjects could be controlled and taxed. 54 Boundaries between colonial territories were now subject less to European rivalries than to the same imperatives of containment and regulation of colonial subjects that drove internal territorial ordering. This was given added urgency in the case of the Sudan-Uganda boundary by concerns about the northward spread of sleeping sickness. But even in this era, colonial ambitions to map, impose and regulate boundaries did not follow through into the creation of a clear boundary line between Sudan and Uganda, or succeed in confining people within territorial chiefdoms, districts or colonies. That these territories began to materialize owes far more instead to the initiative of local administrators and chiefs with more direct interests in clarifying or extending the boundaries of their jurisdictional patches.
The sleeping sickness campaign was one of the most centrally directed interventions in the borderlands, yet the boundary that it created was not a clear line but a wide uninhabited no-man's-land, which in the long run has contributed more confusion than clarity to the borderline. The creation of boundaries as sanitary cordons has earlier parallels in European history. 55 In colonial Africa, the whole approach to sleeping sickness was territorial and focused on preventing its spread across borders. 56 This justified the creation of 'an uninhabited belt of ten miles on each side of the Sudan-Uganda boundary' through a coercive resettlement programme. 57 'My uncles resisted moving, so the British administration set their houses on fire to drive them out of the land by force'. 58 Colonial officers and Anglican missionaries in Sudan reported continuing cross-border movement on hidden pathways by people 'visiting their Uganda relatives', despite the threat of punishment. 59 But the extended relocation of the borderland inhabitants was nevertheless brought up by interviewees as a key aspect of boundary creation. While sleeping sickness was in itself a very real concern, it also provided district officials with the opportunity to resettle people in more concentrated villages along the roads and closer to the government-recognized chiefs. Both colonial administrations had established a new institution of chiefship, known among Bari-speakers like the Kuku as the matat lo gela/miri, 'the chief of the whites/government'. 61 In both Kajokeji and Moyo, the recognized chiefs had some prior authority as rainmakers or war leaders. But the idea of a single chief having executive authority over multiple clans was alien, and their new role as the tax collectors and enforcers of colonial orders was often a fraught one. The governments gave them their own courts and police to enhance their authority, and sought to establish territorial chiefdoms within which they would collect taxes and maintain roads. But chiefly jurisdiction retained considerable uncertainty as to whether it was strictly territorially defined, as the governor of Sudan's Equatoria province (in which Kajokeji sub-district was located) complained in 1947: 'We must have things in terms of territorial as opposed to tribal or clan administration, though of course the ideal is for the two to administer [sic]. We cannot permit persons to live in one Chiefs [sic] area and owe allegiance to another chief. If people want to change their chief they must also be prepared to move their villages and cultivations.' 62 Recent oral accounts reflect this uncertainty: 'Our people settled around the valleys and hills, but the British moved them to live along the roads . . . So proper demarcation of boundaries was not easy -you find people from a particular clan were living far away from their indigenous community. So the chief had to go a long way to [. . .]'. 60 Perhaps reflecting change over time, others asserted the opposite: 'People knew in the British time where the tax collection boundaries were because chiefs cannot collect tax in another area'. 64 In an attempt to keep control of people within chiefdoms, chiefs had to keep registers of taxpayers (that is, adult men) and update these annually; compulsion and remuneration for tax collection gave chiefs a vested interest in trying to keep people within their jurisdiction.
65 As the Sudan administration began to condone cross-border labour migration to the plantations in southern Uganda, it was chiefs and elders in Equatoria who complained about its 'very unsettling effect' and the loss of the young men's labour and taxes. 66 In the later 1930s, chiefs were also made responsible for issuing the official sleeping sickness passes required to cross the border legally (and which were contingent on the payment of poll tax), giving them a further role in border governance. 67 The devolution of tax collection to local chiefs and district officials would be a recurrent factor motivating local-level attempts to clarify and enforce the Sudan-Uganda boundary, in order to define the boundaries of local taxation regimes. It prompted the first local-level attempt to demarcate a clearer line in the early 1930s, when the neighbouring British district commissioners (DCs) conducted a 'border march' with their chiefs to agree 'provisionally, where the boundary was' by marking 'prominent trees' and 'rocky outcrops'. 68 The creation of the border thus involved the entwining of British officials' and chiefs' visions of its geography from the outset, as well as being shaped by natural features.
The DCs' demarcation, or 'red line' (see Map, line 4), became the basis for fresh intergovernmental attempts to agree on a final definition of the boundary in the 1930s. But both governments were clearly influenced by the claims of their own subjects, and so local territorial interests reached a surprisingly high level of government dialogue: while the Governor of Uganda asserted Ma'di claims to fishing rights on the Nile, the Governor-General of the Sudan expressed 'grave misgiving' that the boundary 'would deprive the Sudan tribes of the ancestral rainmaking sites to which they attach so much importance', and even named several specific clans with claims to territory as far south as Mount Midigo (Map, lines 2-3). 69 No final settlement was reached, however. As the sleeping sickness restrictions were lifted and people returned to the border areas in the 1940s, the uncertainty of the borderline became more contentious. In 1943, twelve Kuku hunters were killed, reportedly just south of the Kayo/Ayo stream. The authorities reacted swiftly to try to prevent retaliatory conflict and an individual Ma'di man was executed for the killing. 70 But it is striking how frequently interviewees recounted this incident without prompting, as an origin of ongoing Kuku-Ma'di tensions and a motive for revenge on both sides. 71 Hunting conventions were one of the primary ways in which authority over land is said to have been recognized: 'In the past when people hunted or trapped animals, they give the foreleg to the landlord, or he will curse you. So everyone knows whose land it is'. 72 The incident may thus have reflected ongoing disputes and uncertainties over land and boundaries as people returned from extended displacement. 73 Hints such as these in both the colonial records and local memory indicate that clan territoriality was on occasion asserted vociferously enough to reach the attention of local and even central governments, and that there is a long history to contemporary disputes over the relation between clan territory and the international boundary. Far from clarifying the borderline, the colonial governments had established considerable uncertainty over it by their unresolved negotiations and their creation of a wide no-man's-land as a cordon sanitaire. The boundary that came closest to being accepted was the 'red line' made by district-level officials and chiefs in around 1930 (recorded as the '1936 ad hoc administrative agreement', line 4 on Map), driven by local administrative imperatives and the convenience of using prominent natural features. But this does not appear to have been mapped in any detail and it clearly left unresolved questions over the reach of chiefs' jurisdictions as well as clan territorial claims in the borderlands. Central government interest in the Uganda-Sudan boundary was at its height in the 1950s and 1960s. Sudan's independence in 1956 followed an uprising in parts of Southern Sudan, where the new, largely northern Sudanese, administration was therefore preoccupied with pursuing the remnants of what it termed 'mutineers' and 'outlaws', many of whom took refuge in the Uganda borderlands. 75
But efforts to establish a boundary commission were hampered by the fact that Uganda's border with Kenya also needed to be resolved, and the British government in Kenya was wary of stirring up the unresolved issue of its own borders with Sudan and Ethiopia. Several years of high-level government correspondence over the Sudan-Uganda border still failed to produce a resolution by the time Uganda became independent in 1962. 76 Even in this era of nationalism and centralizing state authoritarianism, it was thus largely at the local level of government that more practical attempts would be made to create and administer the international boundary. At this district level, the transfer of rule from British to Ugandan or Sudanese administrators, councillors and political representatives produced an intensified interest and investment in local state territory, and it was these interests that would primarily drive disputes over both internal and external borders.
By the late colonial period, districts were becoming the primary territorial units not only of local government administrations but also of emerging political organization and representation. Local government reforms from the late 1930s were not successful in their aim of diverting African political energies from nationalism, but they established the district as a focus for political action and ambition -often understood, in Uganda at least, in ethnic terms. 77 One effect was to produce demands for new independent districts, and to generate tensions over district boundaries, sparked by localized jurisdictional interests. 78 For example, disputes over tax collection along the Madi-Acholi district boundary in Uganda were reported to be driven primarily by chiefs, while their people enjoyed close relations and had little interest in the boundary. 79 New motives for administrative independence were also emerging among local elites through the district-level organization of cotton cooperatives and ginneries, and by the basis of political constituencies in administrative boundaries. 80 [. . .] District' as a result of Aringa fears that Madi District sought to annex the county. 82 Districts in Southern Sudan were not usually defined in such overtly ethnic terms, though their territories were still shaped by colonial understandings of tribal boundaries. 83 But the identification of chiefs and councillors with their district was clearly strengthening in the 1940s and 1950s, 84 and here too on occasion disputes over hunting rights or settlement and taxation could prompt the revisiting of district boundaries. 85 Indeed the colonial district territories have retained considerable salience up to now in South Sudan, despite later rearrangements: in 2014, the new rebel opposition proposed a federal government structure based on the twenty-one colonial districts and their boundaries. 86 In Uganda the colonial district boundaries have been retained even as districts have been internally subdivided in recent years (usually along former sub-district/county boundaries). 87 Colonial administration in both countries thus established a patchwork pattern of chiefdoms and local government units with lasting effect. Chiefs, local government officials and councillors in Kajokeji and neighbouring Moyo and Aringa were increasingly invested in the territoriality of their administrations, and hence in the international boundary that would be created by first Sudan's independence in 1956 and then Uganda's in 1962.
In 1958, it was disputes over 'the jurisdiction of local chiefs and the collection of taxes' that prompted a meeting between the neighbouring district officials over the boundary between Kajokeji and Aringa County. 88 They discovered that their maps differed but agreed to adopt the Ugandan version of the line 'because it was easier to follow on the ground', demonstrating again that administrative pragmatism and natural features did more to shape the boundary than high-level directives. 89 This was apparent again two years later, when a Sudan chief tried to collect taxes from Kuku people who had already paid taxes to the Uganda government. 90 The British DC of Moyo insisted that Jale Hill, on the road between Moyo and Kajokeji, had been accepted as 'the locally recognized border' for the past twenty years. 91 At a meeting held at Kajokeji in 1960, however, the Sudanese representatives disputed the boundary line at both Jale and Keriwa hills. The meeting nevertheless agreed that the existing 'administrative line should be recognized as a purely temporary expedient'; that the current tax arrangements in the borderlands should be preserved and that new settlement in a four-mile-wide border zone be deterred. The escalating civil war in Sudan subsequently brought refugees and military activities into the borderlands. Many of the refugees took advantage of their close relations across the border to settle locally in Moyo and West Nile. But already in 1964, a District Intelligence report recommended the removal of Sudanese refugees from West Nile to prevent them laying claim or bringing conflict to 'Uganda's soil'. 94 Over the next two years, the borderlands became increasingly insecure, as the 'Anyanya' rebel movement activity and Sudan government counterinsurgency intensified. In 1966, Sudan army soldiers were reported to have crossed the border at Afoji where they killed one refugee and took others to Sudan. The DC Madi protested this 'invasion' to the Ugandan Prime Minister and requested a Ugandan army presence in the district, and for the refugees to be relocated further inside Uganda. 95 Local government officials thus appealed to the idea of state territorial sovereignty to claim greater central government support in the borderlands.
That support came in the form of an army operation to relocate the Sudanese refugees away from the borderlands, at the same time as the Ugandan army was increasingly co-operating with the Sudan army against the Anyanya rebels. 96 This is another episode in the border history that is bitterly recalled in current narratives in Kajokeji. Yet in Moyo too, interviewees expressed considerable ambivalence about the coerciveness of the operation. 97 At the time, the Ugandan DC of West Nile had to respond to complaints about it in the district council: 'The operation was intended to take away refugees from the border and put them in an area far from the border and where they could be registered and known. We must know who are living in Uganda. At present people are entering Uganda at will, like a market, and as if this is a "no-man's" land. It has been very difficult to plan services for our people because we just don't know who are living in Uganda. Even this Council has already experienced this from shortage of drugs at the dispensaries.' 98 This statement epitomizes a recurrent local government discourse, protesting at unregulated cross-border movement and citing service provision and administrative imperatives as the basis for asserting territorial sovereignty.
Yet the border itself has frequently taken the form of a no-man's-land rather than a clear line. This was again exacerbated by the intensifying Sudanese conflict in late 1966: 'the border up to a radius of four miles inside Uganda became more and more dangerous to live in and people are increasingly deserting their homes'. 99 Even in the midst of such insecurity, there are hints that the borderline was already a source of tension in some of the villages disputed up to now between Moyo and Kajokeji: 'It is also rumoured that some refugees of Afoji and Chunyu have refused to move inward and claimed that those places belong to the Sudan and if the Madi would try to interfere with their settlement they are prepared to fight by any means. It is believed that the Anyanya would be willing to assist them in case of any fight' [emphasis added]. 100 Meanwhile in 1967, the government of Uganda under Milton Obote published its own definition of the Sudan boundary in its new constitution, running across the summits of Keriwa and Jale hills, the latter now said to be marked with a surface beacon (Map, line 6). 101 There was no repeat of the vociferous Sudanese protests against such Ugandan claims in 1960; by 1967, Sudan was under a caretaker coalition government, and both its international reputation in the region and its territorial control in the border areas had been increasingly eroded by the rebels. 103 Indeed the local concerns over the border would receive diminishing interest from either national government from now on. Ugandan policy was shifting as Obote's army chief, Idi Amin, pursued closer relations with the Anyanya and their supporter, Israel. 104 Amin, who would seize power from Obote in the coup of January 1971, was himself of a 'liminal identity' from the westernmost Uganda-Sudan borderlands, 105 and his period of rule did much to further blur the international boundary; Hansen suggests that 'he regarded the national frontier as penetrable and subordinate to ethnic considerations'. 106 He heavily recruited Southern Sudanese as well as West Nile Ugandans into his military and security forces and administration. The 1972 peace agreement in Sudan led to the re-opening of the border and gradual return of Sudanese refugees, and in subsequent years, peace and 'cordial relations' were reported, with 'free movement and contact' across the border. 107 Once again it was left to local and provincial authorities to handle the implications of this movement. A border meeting between the local administrations at Keriwa in 1974 resolved to tighten controls on cross-border movement of people, livestock and trade goods, and agreed that 'People at Keriwa village are to pay taxes where they want to and the respective Chiefs to issue receipts', suggesting that chiefly jurisdictions were still uncertain in the borderland. 108 Soon after, Sudanese crossing into Uganda in this area complained of being taxed again by the Ugandan local authorities, despite carrying Sudanese poll-tax receipts. 109 As the Ugandan economy collapsed in the later years of Amin's government, the informal economy, or 'magendo', emerged as a major and enduring source of income and survival, creating new vested interests in cross-border trade and smuggling. 110 As Tidemand points out, however, the collapse of the formal economy in Uganda had limited impact on district administrations, since their revenue base was 'graduated taxes and market dues rather than taxes on formal sector incomes'. 111
This also heightened the concerns about local tax collection on both sides of the border, as local governments struggled to raise this revenue in the absence of central government support. 112 While Amin's regime may have done more to blur and subvert the international boundary than to define it, it conversely also contributed to the hardening of internal boundaries in Uganda and thus to the overall strengthening of a patchwork geography. From 1972 the government began redrawing district and regional boundaries, claiming 'to meet the aspirations of various small societies which had hitherto been pressed into unwanted associations with their neighbours'. 113 The move was ultimately part of attempts to secure greater centralized control over the districts, however, and local administrative positions were increasingly taken over by military personnel. Divide-and-rule tactics even within Amin's home region of West Nile contributed to the fragmentation of any regional identity and the 'contraction of boundaries', 114 a process that would only accelerate in later decades.
Such local differentiation did not prevent reprisals against the people of West Nile in general following Amin's overthrow in 1979, leading to their flight across the Sudanese and Congolese borders. Some Ma'di refugees settled among the Kuku of Kajokeji and may even have 'adopted a Kuku identity for a period'. 115 By around 1986, Sudan People's Liberation Army (SPLA) attacks forced a reverse migration once again, with both Ma'di and Kuku returning to Uganda. Again, many Kuku were able to settle among relatives and friends in Moyo and neighbouring districts. 116 This capacity to self-settle near the border and to shift ethnic or national identity has been an important strategy for the borderland inhabitants. But refugee movements also created tensions among them and sharpened national identities. The relocation of Sudanese refugees from Moyo to neighbouring districts in the late 1990s was referred to by several interviewees as a cause of deteriorating relations, as one man from Kajokeji emphasized: 'In 1987 I left [Kajokeji] and ran just across the border and joined school there [in Moyo]; I even went to school with some of the current leaders there. There were always some tensions; we were seen as refugees. In 1997, when I was in Senior 4, we were evicted from Moyo; all the refugees were sent away from Moyo and Metu because we were foreigners . . . It was a really bad experience; it soured relations. We were taken to Waka camp in 1997-8, and there were a lot of problems with insecurity because of the West Nile Bank Front attacks.' 117 The Ugandan rebel West Nile Bank Front targeted the refugee settlements from its bases across the border in Sudan because it suspected refugees of supporting the SPLA, as did the Ugandan government. More general refugee-host tensions emerged over resources, services and jobs in the humanitarian agencies, and Ugandans reportedly associated crime and insecurity with the refugee presence. 118 While refugee movements might in some ways blur borders then, in other ways they could provoke the defence of territorial interests among 'hosts', harden the distinction between 'nationals' and 'foreigners' (particularly in relation to land rights), and thus increase the value of territorial belonging and homeland for refugees. 119 In recent years, debates over the disputed border areas often focus on whether particular groups of people were settled in these areas temporarily as refugees or were internally displaced within their own country. 120 One such argument runs: 'Following the outbreak of war in Uganda in 1979, the Ma'di took refuge in Dwani Wano. They were received warmly and allowed to settle among the people before they were repatriated to Uganda. During this time the Ma'di refugees paid taxes to Sudanese authorities. Now the district local authorities in Moyo have extended claim over Dwani Wano through which the disputed road passes. Yet the land is undoubtedly Kuku land in which is located salt water where rituals used to be performed.' 121 This statement from the 'Kuku community' reveals the entanglement of a ritual landscape of clan-based authority with the logics of state territoriality defined by the boundaries of taxation regimes. These threads would be woven ever more closely into the patchwork of local state territoriality from the 1990s, even as the patches themselves were being cut up and re-stitched.
IV 'TOO MANY CUSTODIANS OF BORDERS'? DECENTRALIZING TERRITORIAL SOVEREIGNTY SINCE THE 1990S
With the end of the Cold War and growing attention to 'globalization' and regional integration policies in Europe, Maier suggests that territorial priorities were becoming seen as 'anachronistic' in the West by the late twentieth century. 122 The revival of the East African Community in 2000 promised a similar softening and opening of borders here, while peace agreements in Sudan and Uganda in 2005 and 2006 enabled a massive acceleration of cross-border trade and investment. In the same period, however, disputes proliferated along this international border, and over internal boundaries in both Uganda and South(ern) Sudan, signifying an intensification rather than disappearance of territoriality -epitomized perhaps above all in the secession of South Sudan from Sudan in 2011. 123 Yet within the new state and its neighbour Uganda, the overall effect has been to strengthen state territorial sovereignty, as local authorities and citizens appeal to its logics and assert its boundaries in pursuit of local interests. The localization of state power and territoriality has received particular impetus from programmes of decentralization in both Uganda and South Sudan since the 1990s. 124 In Uganda, the National Resistance Movement/Army of Yoweri Museveni followed up its military victory in 1986 with the consolidation of a five-tier system of Local (initially 'Resistance') Councils (LCs), from village to district levels, with substantial financial decentralization to the district councils. These reforms have thus re-intensified the concentration of power and politics at the district level, leading to heightened competition for positions and to proliferating demands for the creation of new districts, often expressed in ethnic terms. The result was an increase from thirty-three districts in 1986 to 121 in 2017. 125 Uganda's LC system has been much heralded for bringing local democracy, development and genuine decentralization after the centralized authoritarianism of Obote and Amin. But critics have also argued that new district creation has become an electioneering and patrimonial strategy, rewarding local politicians loyal to the ruling party, and confining much political debate and competition to the district rather than national level. 126 Campaigns for new districts have 'allowed local extremists to assume power and exacerbate ethnic tensions', leading to increasing conflicts over old and new boundaries. 127 The channelling of aid and development directly to the districts by international agencies has also furthered the 'build-up of assets' at this level. 128 This reflects broader processes by which development programmes assume a territorial definition of recipient communities or localities and thus enhance the value of controlling local administrative territories. 129 Similar processes are evident across the border in South Sudan, where by the late 1990s the SPLM/A had begun to establish its own local government system in the 'liberated' areas of Equatoria, including Kajokeji. Districts were renamed counties, and new sub-county divisions created, often based on chiefdoms. These structures would be formalized by the Local Government Act of 2009 and inherited by the new state in 2011. 130 There has been little sign here of the extent of decentralization occurring in Uganda. But local governments still became the focus for competition over their limited resources, in the form of intermittently salaried positions and control over local taxes, court revenues, land transactions and aid projects. 
As in Uganda, there has therefore been a rapid fragmentation and proliferation of new counties and lower units, including chiefdoms, and widespread disputes and conflicts over boundaries. 131 The increasing value of controlling local government territories has been furthered by growing concern and competition over land in both countries, and the associated politicization of customary land governance. First in Uganda and later in South Sudan, customary land tenure has been given novel constitutional and legal recognition by the current ruling regimes, accompanied by declarations that 'land belongs to the people'. Yet at the same time, state and military interests have frequently ridden roughshod over these rights in the commercial exploitation of land and natural resources, generating new insecurities over land tenure among ordinary people. 132 In addition, the rapid growth of cities, towns and smaller market centres along roads has created new pressures on land in particular areas, and fuelled an unprecedented market for leases or land titles, formal and informal. The new value of land was further enhanced by a revival of commercial farming, land leases for government or NGO infrastructure and development projects, and a vague but confident anticipation of 'investors', which was particularly intensified by oil and mineral prospecting in north-western Uganda. 133 Both northern Uganda and South Sudan were the focus for massive externally funded programmes of post-conflict reconstruction after 2005, which if nothing else contributed to a boom in infrastructural development, construction and cross-border trade. 134 This combination of factors had an obvious impact on customary land governance just when it had also been given new legal recognition. Local governments established land committees and worked with clan land authorities and customary chiefs to handle increasingly lucrative land transactions and disputes. 135 Competition for land was generating more exclusionary definitions of land rights, based more strictly on patrilineal descent, fuelling disputes over history, genealogy and law even among close neighbours and relatives. 136 Similar principles were extended to the level of administrative boundaries, as neighbouring local governments laid claim to territory and key sites like markets on the basis of ethnic and customary land boundaries. Rights and access to resources were becoming defined by whether or not one could claim ancestral belonging to a particular clan and ethnic territory, so that migrants and minorities feared having fewer rights unless they could claim their own sub-territory with its own administration. As well as driving the proliferation of new administrative units, these logics presume that administrative boundaries should align with clan and ethnic boundaries. The historical knowledge and spiritual authority claimed by clan land authorities has thus gained new political and even commercial value as the basis for defining control over and rights to land and territory. 137 The effect has been to further rework and entangle clan territoriality in the stitching, cutting and re-stitching of the seams between local administrative patches in both states, and to engage more people than ever in this work as they seek to protect or extend their own land rights.
Horizontal tensions between neighbouring territorial administrations have tended to work ultimately to reinforce vertical political relations, thus strengthening rather than fragmenting state power. By 2014, both Moyo District in Uganda and Kajokeji County in South Sudan were embroiled in multiple boundary disputes, including between Moyo and neighbouring Yumbe District (formerly Aringa County). Threats of bloodshed over this boundary compelled central government intervention in the form of a delegation of junior ministers and lands ministry personnel, which met with district representatives in Moyo in 2017. The delegation sought to assert the sovereignty and technical capacity of central state institutions: 'the custodian of all borders is the Ministry of Lands; otherwise there are too many custodians of borders'. But even state technocratic solutions were vulnerable to local appropriation, according to district spokespersons who claimed that the GPS machine used in a previous demarcation attempt had been programmed to understand only one of the local languages -a potent expression of the way that seemingly neutral technologies of state territoriality could become entangled in local territorial rivalries. The delegation repeatedly criticized local politicians and district administrations for inciting conflict, including by creating new administrative subunits in the disputed areas: 'We realize the two local governments have rushed to those areas to put villages and give names in their own tribal languages'. 138 As we have seen, there is a long history to the idea that boundaries could be carried by people, and to the strategy of using local administration, taxation, infrastructure and services to stretch the seams of the territorial patchwork. Even while the Sudanese war was still ongoing in the 1990s, tensions were reported in the long-disputed international borderland at Keriwa, where, according to an SPLA officer, 'the local people (Sudanese) believe that these areas have been encroached upon' by the Uganda local government authority having 'extended services -schools, health clinics and roads to these areas' and 'even gone further to encourage the local Sudanese people to pay tax known in Uganda as "Machoro" '. Border meetings held in 1997 had failed to resolve the issue, because the meetings were only 'locally initiated' -once again, the border was being left to local governance. 139 A similar grievance was later voiced by representatives of Liwolo Payam of Kajokeji County, who claimed that part of their territory and people had become the sub-county of Keriwa in Yumbe District: 'The Uganda government brought services, so people considered Uganda as the only government which helps them, but people in that sub-county are on their own land, not Ugandan land; they did not go there as refugees. But the Uganda government created positions for them as LCs, village chiefs -that is the Uganda administration.' 140 Ugandan authorities argue that the ethnicity of the population should be irrelevant to the international border line. 141 The escalation of conflict over the international boundary from around 2007 was similarly triggered by potent assertions of administrative sovereignty, such as the naming of villages, the construction of a road and telecommunication mast, and most notably the extension of the Uganda national census -the most powerful 'instrument of modern territoriality', according to Gray 142 -to disputed border areas in 2014. Commercial farming initiatives by local elites in the fertile border zone also contributed to the escalating tensions. 143
The Kajokeji-Moyo boundary, with its poor road connections, was of much less significance to higher authorities than the major border crossing points on either side of it at Kaya and Nimule, where customs revenue was the focus of competition among different levels of government. 145 Local government leaders in both Moyo and Kajokeji therefore complained at the lack of interest or security provision from their own central governments, who were said to see the dispute as 'just a local border issue'. 146 But they also repeatedly petitioned the national governments for support and to demand border demarcation, appealing to the idea of territorial sovereignty. 147 In 2011, for example, the Moyo district chairman wrote to the Minister of Internal Affairs complaining of South Sudanese incursions across the border and requesting government action: 'To make it abundantly clear that the laws governing this nation are adhered to by all who are within the territorial boundaries of the Republic of Uganda'. 148 Similarly a district council member for Moyo emphasized that 'it is also important to have marks so that the border is clear, because this is a country'. 149 Higher authorities responded by reiterating the joint directive of the South(ern) Sudanese and Ugandan presidents in 2009 that major economic activities or projects in the border zone should be suspended until a boundary commission had resolved the borderline. Once again then, central government policy worked to produce an effective no-man's-land along the border, which is also conspicuous in the mile-wide gap between border posts on the main road between Kajokeji and Moyo. At the same time, higher-level authorities urged 'amicable solutions' at the local level and dialogue between chiefs and elders to resolve any border disputes, reprising the long history of local government responsibility for the international boundary. 150 In 2015, a South Sudanese minister explained this in terms of the more pressing problems of armed rebellion and economic crisis faced by the central government: 'We have boundary issues with all our neighbours, but this is not the time to address them -only that this administration in Moyo stirred up the issue'. 151 The colonial resort to an unmapped ethnic definition of the Kajokeji-Moyo boundary has given particular prominence a century later to ethnic identity and the politicization of clan territories in the borderlands, contributing to conflict along ethnic lines, and to the attempted conflation of national citizenship and territorial sovereignty with ethnic and clan identities. Nearby stretches of the boundary were not defined in such ethnic terms, where other factors such as control of the navigable Nile outweighed the colonial preference for tribal boundaries: like several other ethnicities, the Ma'di east of the Nile were thus divided between Sudan and Uganda. Yet here too, tensions have arisen in recent years as new settlement patterns and lucrative cross-border markets have politicized clan-based land claims, leading to intra-ethnic disputes. In one case, rival Ma'di clans have been supported by neighbouring district administrations in Uganda, with one clan accusing the other of being South Sudanese and hence having no right to land in Uganda. 152 Here the boundary line was more clearly delimited, and the people on both sides share the same ethnicity.
Yet here too, local administrative, economic and political ambitions are entangling clan territorialities -which may actually traverse the international boundary -in the assertion of national territorial sovereignties and citizenship.

V CONCLUSION

International boundaries need to be understood not just as the result of cartographic impositions by and between states, or in terms of centre-periphery relations, or even as products of specific cross-border dynamics, relations and resources, but also as part of the internal territorialization processes across states on either side. These internal processes in Uganda and South Sudan reveal that state territory and sovereignty are co-produced through local-level as well as national political work, by local actors and institutions investing in defining and defending their 'patch' of jurisdiction and constituency.
The stretch of boundary on which this article has focused exemplifies this argument -central governments over the past century have had little direct interest in locating a clear boundary within the no-man's-land created by recurrent state policies and conflicts. Instead it has been left to local institutions and actors to assert the boundaries of state sovereignty via their own jurisdictions. There are obviously stark contrasts with more heavily policed and taxed boundaries around the world, where international borders may be clearly demarcated and where central state institutions may exert more direct control. But even in such contexts, the meaning of external state borders is at least partially produced by the territorial organization within states as well as between them. This is apparent, for example, in Anssi Paasi's case study of a Finnish locality on the Russian border, which, as the East-West frontier, was formally closed and securitized during the Cold War. Yet here too, the local history and meaning of the border for its inhabitants was 'inseparable' from the production and institutionalization of other scales of territory, from the village or commune to the province, region and state. 153 Even in the context of France -often seen to have initiated and epitomized the formation of the unitary, centralized, territorial nation state -state decentralization and heritage policies in recent decades have invigorated and articulated 'multiple local territories' with the 'territoire/s of the nation'. 154 But while state formation everywhere may involve the construction of local and regional as well as national territory, these processes have been particularly intense, entangled and mutually reinforcing in the context examined in this article. From colonial indirect rule to ongoing programmes of decentralization, state sovereignty in South Sudan and Uganda has been produced through the interests of local as well as national elites in the control of territory. The resulting patchwork territoriality of these states exhibits horizontal tensions that to some extent work against centre-periphery tensions, helping to hold the fabric of the state together even as they might seem to pull it apart. As Boone writes, local government institution-building across Africa has 'tied distinct rural peripheries . . . into the national space', through 'patterns of segmented authority whereby regions . . . were tied to the center, but at the same time separated from each other by the very institutions of the state (as under colonial rule)'. 155 Rather than seeing such horizontal tensions as the fragmentation and failure of states then, we should see them as the product of state strategies (at multiple scales) for exercising control over people and territory over the past century. And rather than seeing in the segmentation of local government units a reversion to ethnic solidarities, we might better explore this as a process of 'spatial socialization' over that period, 156 in which discourses of tradition, indigeneity and historical memory are employed in the construction of new forms of territorial identification with administrative boundaries. 157 But this shift to an increasingly 'territorial definition of society' has been an incomplete and ultimately 'ambiguous' process, as Gray argues for the case of colonial Gabon. 158
The boundaries themselves have never been stable, for as Paasi reminds us, 'territorial units and regions, states and nations -and their representations -are in a continual state of flux, rising and disappearing in perpetual regional transformations'. 159 The persistence of the idea that people can carry boundaries with them -as well as the ongoing fragmentation of administrative territories -demonstrates the continuing instability of boundaries and identifications in the South Sudan-Uganda borderlands. The structures of political control can also prove to be 'subversive of the territorial integration they were intended to promote', leading to fractures, secessions or coups. 160 The story told in this article is not then a straightforward account of the role of borderlanders in constructing state territory, any more than it is a story of state fragmentation and failure. Rather, it is an illustration of the much more ambiguous, unpredictable and fluctuating processes whereby state territoriality has been localized and local territorialities have been worked and reworked in messy and tangled ways into the fabric of states.
Durham University
Cherry Leonardi | 2019-09-17T01:05:05.024Z | 2020-07-13T00:00:00.000 | {
"year": 2020,
"sha1": "d7aa0b22f40d16b56af4b10d1cc2b97fdafec6cc",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/past/article-pdf/248/1/209/33558671/gtz052.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f867ff570da1e096c92343dd09f76b9661947b3a",
"s2fieldsofstudy": [
"Political Science",
"History"
],
"extfieldsofstudy": [
"Political Science"
]
} |
58989063 | pes2o/s2orc | v3-fos-license | Load balancing factor using greedy algorithm in the routing protocol for improving internet access
Load balancing is well suited to distributing internet access across a Wi-Fi area so that devices with limited capacity can be used optimally and on target. A greedy algorithm, properly applied to an access point device, is capable of resolving excessive load on a single resource by taking the best available choice at every stage of the optimization process. The access point device itself is also a success factor: the optimization of Wi-Fi access distribution on the access point is strongly influenced by the parameters set on it. In this research, a very high number of users accessing a single Wi-Fi area was also found to influence the likelihood of failed login attempts. The results of this study can be applied to any access point device with a user limit, while internet access speed remains bounded by the provided bandwidth capacity.
Introduction
The shortest-path problem can be solved by several algorithms, including the Warshall algorithm, the greedy algorithm, and the Bellman-Ford algorithm. Shortest paths may also be subject to constraints such as forbidden paths. In the greedy path-avoidance formulation, we are given a weighted graph G, vertices s and t, and a set X of forbidden paths in G; the task is to find a shortest path P from s to t such that no subpath of P belongs to X, where P is permitted to repeat vertices and edges [1].
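To make this formulation concrete, the following minimal Python sketch runs a greedy (Dijkstra-style) shortest-path search. It is not the algorithm of [1]: for simplicity it treats X as a set of forbidden edges rather than forbidden paths, and the function and variable names are our own.

```python
import heapq

def greedy_shortest_path(graph, s, t, forbidden_edges):
    """Greedy (Dijkstra-style) shortest path from s to t.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    forbidden_edges: set of (u, v) pairs the path may not traverse.
    Returns (path, cost), or (None, inf) if t is unreachable.
    """
    dist = {s: 0}
    prev = {}
    heap = [(0, s)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)   # greedy choice: closest frontier node
        if u in done:
            continue
        done.add(u)
        if u == t:
            break
        for v, w in graph.get(u, []):
            if (u, v) in forbidden_edges:
                continue             # skip forbidden edges
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if t not in done:
        return None, float("inf")
    path, node = [t], t              # walk predecessors back to s
    while node != s:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[t]
```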
Load balancing is a mechanism for dividing computational load across multiple servers. It serves to optimize resource use, minimize response time, and avoid overloading any single resource. Because resources can substitute for one another, load balancing can also reduce service disruption [2].
A classful routing protocol does not carry subnet mask information in its routing table. RIP version 1 (Routing Information Protocol) is a classful routing protocol and was the first routing protocol widely used on internet networks. RIP is useful for local and medium-sized networks. RIP is known as a distance-vector routing protocol: it uses hop count as its routing metric and allows a maximum of 15 hops. A hop count of 16 is treated as an infinite distance, marking a route as unreachable and excluding it from the routing process. RIP therefore operates on networks of limited size [3].
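As an illustration of the hop-count metric, a distance-vector table update in Python might look as follows. This is a simplified sketch only, omitting RIP details such as split horizon, timers and triggered updates; all names are hypothetical.

```python
RIP_INFINITY = 16  # a hop count of 16 marks a route as unreachable

def rip_update(table, neighbor, advertised):
    """Bellman-Ford style relaxation as used by distance-vector RIP.

    table: dict dest -> (hops, next_hop) for this router.
    advertised: dict dest -> hops, as received from `neighbor`.
    Returns True if any route improved.
    """
    changed = False
    for dest, hops in advertised.items():
        new_hops = min(hops + 1, RIP_INFINITY)  # one extra hop via neighbor
        cur_hops, _ = table.get(dest, (RIP_INFINITY, None))
        if new_hops < cur_hops:
            table[dest] = (new_hops, neighbor)
            changed = True
    return changed
```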
Formulation of the problem
Based on the background described, the problem to be solved is the distribution of Wi-Fi access on an access point device so that all users receive a balanced and equitable share, and the available bandwidth of the internet facility can be allocated optimally. This is examined through the load balancing factor, using the greedy algorithm in Wi-Fi access based on classful routing protocols, to find strategies for operating a network of limited size with optimum results.
Research methods
The research was conducted on the campus of the University of Medan Area through observation, in order to obtain accurate information and data on internet access users at one point of the Wi-Fi network, on an access point device with a user limit.
Observations were also conducted thoroughly on the distribution of the bandwidth provided from the main server to all available Wi-Fi networks at each access point, so that the data obtained could later serve as input for solving the problems found in each Wi-Fi network. [Figure: flowchart of the research procedure for Wi-Fi access on an access point.] Testing was concentrated at a single hotspot area where many users access the internet, so that the results of load balancing could be seen accurately and on target. The author tested using an ASUS RT-N12HP access point kit. This device can serve internet access for at most 25 (twenty-five) users. The supporting application for testing was the Winbox MikroTik utility version 3.13, which is used to remotely manage the ASUS RT-N12HP access point in GUI (Graphical User Interface) mode. The GUI mode makes it easier to monitor and manage the bandwidth used by internet users in the hotspot area. The greedy algorithm is used in this test to optimize internet access for the limited access point capacity, so that internet users in the hotspot area are served in a more targeted and sustainable way. Whether the load balancing factor using the greedy algorithm in Wi-Fi access based on classful routing protocols with a limited network size can run optimally and efficiently is presented in the results below.
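The paper does not publish its exact configuration, so the following Python sketch illustrates only one plausible greedy strategy for the setting described: up to 25 users share a fixed bandwidth pool, and the smallest demands are granted first so that each step makes the locally best choice (a max-min fair allocation). All names and numbers are hypothetical.

```python
def greedy_allocate(total_kbps, demands, max_users=25):
    """Greedily split a bandwidth pool among at most `max_users` users.

    demands: dict user -> requested bandwidth (kbps).
    Returns dict user -> granted bandwidth (kbps).
    """
    # Greedy choice: admit and satisfy the smallest demands first.
    admitted = sorted(demands.items(), key=lambda kv: kv[1])[:max_users]
    allocation, remaining = {}, float(total_kbps)
    for i, (user, demand) in enumerate(admitted):
        share = remaining / (len(admitted) - i)  # equal share of what is left
        grant = min(demand, share)
        allocation[user] = grant
        remaining -= grant
    return allocation

# Example: a 10 Mbps pool shared by three users
print(greedy_allocate(10000, {"u1": 1000, "u2": 8000, "u3": 8000}))
# -> {'u1': 1000, 'u2': 4500.0, 'u3': 4500.0}
```

Satisfying small requests first never reduces what larger users could have received, which is why this greedy order yields a max-min fair split of the pool.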
Results and discussion
Testing was carried out by changing the structure of internet access (login) on the hotspot already available in the MikroTik router features, using the HTML programming language that is the default for the MikroTik router's hotspot pages, so that users are redirected to the login page for the user authentication process.
Testing in this study was conducted in four stages: login access testing, access point device testing, internet usage duration testing, and testing with 30 internet access users.
a. Testing Login Access
At this stage, access testing is done using a web-based interface built with HTML. The greedy algorithm is applied here to change the login access structure of the internet hotspot, which is the user's main entry point for accessing the internet.

b. Testing the Access Point Device

The ASUS RT-N12HP access point is well suited to the needs of a more efficient distribution of Wi-Fi access. In several tests, the ASUS RT-N12HP reached a coverage radius of up to 150 meters, so that the distribution of internet access could be better and more efficient.
c. Testing Duration of Internet Access Usage
In distributing internet access to users, it is necessary to manage the ideal capacity in one area and an effective usage period. This test determines the maximum user limit. The author therefore divides incoming user access into two statuses: active users and passive users.
Active users are defined as those whose internet usage is real-time and continuously online. Passive users are those whose internet use is inconsistent, meaning that their online access occurs with idle gaps. Passive internet users are managed by their online duration, identified by an average bandwidth of 0.00 K/s up to 1.00 K/s over a span of 10 minutes. The test results show that internet access using the greedy algorithm ran according to the expected optimization: with the configured time range and duration, active users and passive users could be classified appropriately and systematically.
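Based on the thresholds just described, the active/passive classification can be sketched in Python as follows; the 10-minute window and the 0.00-1.00 K/s band come from the text above, while the function and variable names are our own.

```python
def classify_user(samples_kps, window_min=10, threshold_kps=1.00):
    """Label a user from per-minute average bandwidth readings (K/s).

    A user whose average bandwidth over the last `window_min` minutes
    lies within 0.00-1.00 K/s is considered passive; otherwise active.
    """
    recent = samples_kps[-window_min:]
    avg = sum(recent) / len(recent)
    return "passive" if avg <= threshold_kps else "active"

print(classify_user([0.2, 0.0, 0.5, 0.1, 0.0, 0.3, 0.0, 0.9, 0.4, 0.0]))  # passive
```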
The required parameters are drawn from the internet service users in the tested area, whose continuous activity is monitored so that the access provided by the device becomes more optimal and appropriate.
d. Test Results for 30 Login Access Users
As explained previously, the load balancing factor test using the greedy algorithm in Wi-Fi access based on classful routing protocols was adjusted to the capacity limitation of the ASUS RT-N12HP access point, which supports only 25 user accesses. Testing was done beyond the access capacity of the device, with 5 (five) additional users, for a total of 30 (thirty) users, under the assumption that users hold one of two statuses: active or passive.
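One plausible greedy admission policy for this scenario is sketched below in Python: when the 25-slot device is full, the slot of a passive (idle) user is reclaimed so that a 26th to 30th login can still be served. This is a hedged illustration, not the authors' actual configuration.

```python
def admit(sessions, new_user, capacity=25):
    """Admit `new_user` onto a device limited to `capacity` sessions.

    sessions: dict mapping a logged-in user to "active" or "passive".
    Greedy choice: if the device is full, reclaim the slot of a
    passive (idle) user rather than rejecting the new login outright.
    """
    if len(sessions) < capacity:
        sessions[new_user] = "active"
        return True
    idle = next((u for u, s in sessions.items() if s == "passive"), None)
    if idle is not None:
        del sessions[idle]            # disconnect the idle user
        sessions[new_user] = "active"
        return True
    return False  # every slot is held by an active user; reject the login
```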
The tabulated results show that the ASUS RT-N12HP access point, although limited to 25 users, was successfully extended to 30 users under the assumption that the 5 additional users are passive. The table shows that incoming user access exceeded the capacity limit provided by the device, reaching 30 users, so that internet access for users became more optimal. The graph of internet service user logins over Wi-Fi obtained during the test gives excellent and measurable results. In monitoring, the website-based application provided by the MikroTik router can classify users into daily, weekly, monthly and yearly segments.
Testing shows that the load balancing factor using the greedy algorithm in Wi-Fi access based on classful routing protocols with a limited network size fundamentally benefits internet access users when demand exceeds the capacity of the equipment provided.
e. The Systematic Network
In general, the goal is to set and monitor the connection flow through the access point device as it continues to be utilized by users, viewed from a crowded area of internet user access. This study was conducted at an active, crowded point, namely the Campus Library.
Nothing was changed in the existing network topology or network structure. [Figure: overview of the systematic network in operation at the time of the research.]
Conclusions
The research focused on only a single hotspot area with many internet users, located at the Central Library of the Rectorate Building, University of Medan Area. From the daily data obtained by the author, visitors using the internet facility reach more than 500 users every day during the active lecture period. | 2019-01-24T15:48:16.182Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "7b92ffd9194eb9ffbc35974b26e11d8e60582c1c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/420/1/012127",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f3a9659b8f5ce9cd629fed4c9dc51d77d8c568ac",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
212752415 | pes2o/s2orc | v3-fos-license | Nano- And Microfiber-Based Fully Fabric Triboelectric Nanogenerator For Wearable Devices.
The combination of the triboelectric effect and static electricity in the form of a triboelectric nanogenerator (TENG) has been extensively studied. TENGs using nanofibers have advantages such as high surface roughness, a porous structure, and ease of production by electrospinning; their shortcomings, however, include high cost, limited yield, and poor mechanical properties. Microfibers can be produced at mass scale and low cost; they are solvent-free, their thickness is easily controlled, and they have relatively better mechanical properties than nanofiber webs. Herein, a nano- and micro-fiber-based TENG (NMF-TENG) was fabricated using a nylon 6 nanofiber mat and melt-blown nonwoven polypropylene (PP) as triboelectric layers. The advantages of nanofibers and microfibers are thereby maintained and mutually complemented. The NMF-TENG was manufactured by electrospinning nylon 6 onto the nonwoven PP and then attaching Ni-coated fabric electrodes to the top and bottom of the triboelectric layers. The morphology, porosity, pore size distribution, and fiber diameters of the triboelectric layers were investigated. The triboelectric output performance was confirmed while controlling the pressure area and the basis weight of the nonwoven PP. This study proposes a low-cost fabrication process for NMF-TENGs with high air permeability, durability, and productivity, which makes them applicable to a variety of wearable electronics.
Introduction
Fiber-based electronic devices are attracting extraordinary attention due to their flexibility, lightness, comfort, and applicability to a variety of industries and products, including physical or chemical sensors [1,2], biomedical monitoring [3,4], soft robotics [5,6], and wearable devices [7,8]. Furthermore, various portable smart devices are playing major roles in daily life, and these devices require lighter, smaller, or larger-capacity power sources. Related research topics such as sensors [9,10], energy harvesting [11][12][13][14], piezoelectric nanogenerators (PENGs) [15][16][17], and triboelectric nanogenerators (TENGs) have received significant attention [18][19][20][21][22][23] because a large amount of available energy is generated by human movement and clothing friction. The triboelectric effect is caused by contact between different materials, and it can induce strong electrostatic charges. As static electricity can lead to ignition, dust explosions, and electrical shocks, unintended static electricity is generally considered to have a negative impact in industry. However, the combination of the triboelectric effect and static electricity has been extensively studied in the form of TENGs [24][25][26][27][28] because it produces a sufficiently large amount of electrical energy to be used as a generator. An all-nanofiber-based stretchable TENG (S-TENG) with polyvinylidene fluoride (PVDF) and thermoplastic polyurethane (TPU) nanofiber membranes was reported by Zhao et al. [29] for energy harvesting. This S-TENG, which according to the analysis achieved full surface-to-surface separation, exhibited excellent triboelectric output performance. Zhu et al. [30] introduced a microfiber-based TENG in 2018. This TENG consisted of ZnO-coated polypropylene (PP) microfibers with a spacer, and it exhibited high transfer charge and output voltage. The properties of a TENG depend on various elements, such as the surface morphology, dielectric constant, spacer, and triboelectric potential difference between the triboelectric materials [31][32][33][34].
Since the fiber-based TENG was introduced by Zhong et al. in 2014 [35], related studies have been actively reported [15,16,36]. Notably, the TENG using nanofibers has been widely investigated because of its advantages, such as ease of production by electrospinning and high surface roughness [37][38][39][40][41]. Owing to the porous structure of nanofibers, which can contain a large volume of air with high dielectric constant, they have a large contact area, which can enhance the triboelectric effects and produce a high-output generator [42][43][44]. However, TENGs are generally designed with structures that include spacers to enhance the electrical performance, and this leads to increased fabrication process steps, cost, and total volume of devices. In addition, nanofibers can be produced only by electrospinning with high-cost and limited yield, which also results in relatively low mechanical properties due to the difficulty in improving strength through the orientation of polymers [45].
The melt blowing process is a well-known method of producing nonwoven fabrics and can be applied to various thermoplastics, including polyethylene terephthalate (PET) [46], polyolefin [47], and polylactic acid (PLA) [48]. Melt blown nonwoven fabrics, which generally consist of microfibers, can be produced with low cost on a mass scale in large areas, are solvent-free, their thickness is easily controlled, and have relatively better mechanical properties than nanofiber webs [49,50].
In this study, in order to maintain and mutually complement the advantages of nanofibers and microfibers and avoid their shortcomings, a nano-and micro-fiber-based TENG (NMF-TENG) was fabricated using a polyamide (PA), especially nylon 6, nanofiber mat and melt blown nonwoven polypropylene (PP) as triboelectric layers. The NMF-TENG is composed of the nylon 6 solution electrospun on the nonwoven PP and Ni-coated fabric electrodes. The nanofiber mat and nonwoven PP contain a large volume of air in the porous structure, and thus, the NMF-TENG does not require a spacer. The morphology, porosity, pore size distribution, and fiber diameters of the triboelectric layers were characterized. Moreover, the electrical output performances of NMF-TENGs were investigated. This study proposes a low-cost fabrication process of NMF-TENGs with high triboelectric output performance, air-permeability, durability, and productivity. Therefore, their application to a variety of wearable electronics is expected.
Materials
A PP pellet (HP561X, melt-blown grade, Polymirae, Korea) with 800 g/10 min of melt flow rate (MFR) and 0.9 g/cm 3 of density was used as a polymer for the melt-blowing process. Nylon 6 (1011 BRT) was received from Hyosung (Korea). Formic acid and acetic acid as solvents were purchased from Samchun (Korea).
Fabrication of Nonwoven Triboelectric Layers
The melt blowing of PP was performed with a pilot scale melt blown line (KIT MB 2005, Hills Inc., Melbourne, FL, USA). The PP pellet was melted at 250 • C in the extruder and extruded through the spinneret. The air gap was 0.4 mm, and the nozzle to collector distance was 300 mm. The air temperature was set at 260 • C. The fabricated melt blown nonwoven basis weight was varied between 15, 30, and 50 gsm.
To fabricate the nanofiber mat, the nylon 6 was dissolved in the mixture solvent of formic acid and acetic acid (8:2) in a concentration of 15 wt%. The dissolved solution was ejected on the obtained nonwoven PP through the metal nozzle (25G) connected to a high-voltage generator (NNC-HV30, NanoNC, Seoul, Korea). The applied voltage and feed rate in the solution were 15 kV and 10 uL/min, respectively. The distance from the metal nozzle to the collector was 15 cm. After that, the prepared nanofiber mat was dried at 50 • C under vacuum for 24 h.
Fabrication of the NMF-TENGs
The NMF-TENGs were fabricated based on the vertical contact mode among the fundamental operation modes. The NMF-TENG contains two parts: the triboelectric layer with nonwoven PP (161-475 µm thick) and nylon 6 nanofiber mat (80 µm thick), and a Ni-coated conductive fabric (110 µm thick), as the top and bottom electrodes. The triboelectric layer was cut into 50 mm × 50 mm, and the Ni fabric electrode was cut to 45 mm × 65 mm. The Ni fabric electrodes were attached to the top and bottom of the triboelectric layer. Finally, the NMF-TENGs were easily fabricated without spacers.
Characterization
The surface images of the nonwoven PP and nylon 6 nanofiber mat were observed by field emission scanning electron microscopy (FE-SEM) (SU8010, Hitachi Co., Tokyo, Japan) with an acceleration voltage of 10 kV after sputter coating with osmium (Os). The pore size distribution of these specimens was measured by a capillary flow porometer (CFP-1500-AEX, PMI Inc., Ithaca, NY, USA) according to the ASTM F316-03 standard. The specimen size was 20 mm × 20 mm, and the pore diameter was obtained by the following equation: d = Cγ/p, where d is the pore diameter (µm), γ is the surface tension (mN/m), p is the pressure (Pa), and C is a constant (2860) [51]. The porosity of the nonwoven PP and nylon 6 nanofiber mat were characterized by a mercury porosimeter (Autopore IV 9500, Micromeritics, Norcross, GA, USA). To analyze the electrical output performance of the NMF-TENGs, the open-circuit voltage (V oc ) and the short-circuit current (I sc ) were measured and recorded by a digital oscilloscope (Keysight, DSOX4024A, Santa Rosa, CA, USA) and a source meter (Keysight, B2901A, Santa Rosa, CA, USA), respectively.
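For reference, the ASTM F316-03 relation d = Cγ/p stated above can be evaluated with a small helper function. This is an illustration of the formula as given in the text, with hypothetical names; it is not part of the porometer's software.

```python
def pore_diameter_um(gamma_mn_per_m, pressure_pa, c=2860):
    # ASTM F316-03 limiting pore diameter: d = C * gamma / p
    # gamma in mN/m, pressure in Pa, C = 2860 -> d in micrometres
    return c * gamma_mn_per_m / pressure_pa
```

For example, a wetting liquid with γ = 15.9 mN/m at a pressure of 1420 Pa gives d ≈ 32 µm, on the order of the maximum pore size reported below for the nonwoven PP.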
Results and Discussion
The manufacturing process of the NMF-TENG is shown in Figure 1a. In the first step, nonwoven PP was manufactured in the pilot scale equipment. After that, the nanofiber mats were electrospun on the nonwoven PP fabric, and an integrated NMF-TENG without a spacer was fabricated. As shown in Figure 1b, the nonwoven PP was randomly distributed. In the case of the nanofibers, the structure of the fabricated mats was more dense and compact than that of the nonwoven PP. Figure 1c shows the chemical structure both of PP and nylon 6, and it also presents the cross-sectional FE-SEM image of the NMF-TENG specimen. Although it is combined without a spacer between the two triboelectric materials, there is no full contact between the rough fibers owing to the porous structure of the nonwoven fabric. The structural characteristics of the nonwoven PP and nylon 6 nanofiber mat are summarized in Figure 2 and Table 1. In the case of the nonwoven PP, its average diameter was in the range of 2.4-2.7 µm as the basis weight of the nonwoven PP increased. The thickness of the nonwoven PP increased up to approximately two times with the increasing weight. The air permeability decreased with increasing weight, and it is assumed that the air pathway increases in the thickness direction. In Figure 2a, the pore size distribution of the PP membrane with the various basis weights was exhibited. The pore size showed a relatively broad range, which was not affected by the increase in basis weight of the nonwoven PP. The average and maximum pore sizes were 16 and 33 µm. The porosity of the nonwoven PP was slightly increased, up to 80%, as the basis weight of the nonwoven PP increased. It was confirmed that the air permeability and porosity of the nonwoven PP is higher than that of other film-based materials commonly used in TENGs. In addition, the pore area of the PP50 basis weight related to the air volume was significantly increased, and the area of fiber contact between the materials could also be increased with an applied force. For the nylon 6 nanofiber mat, the fiber diameter was 283 nm, which was ten times smaller than that of the nonwoven PP. The pore size was sharply distributed around 320 nm, and the maximum pore size was 800 nm. From the previous FE-SEM image, the structure of the nylon 6 nanofiber mat was relatively denser than that of the nonwoven PP. The porosity of the nanofiber mat was approximately 87%, a high pore volume. From those results, we confirmed that both the nonwoven PP and nylon 6 nanofiber mats have a highly porous and open pore structure through the thickness direction. A schematic of the NMF-TENG under the vertical contact mode is shown in Figure 3a. The nylon 6 nanofiber mat and nonwoven PP are chosen as the negative and positive triboelectric layers, respectively. The nylon 6 nanofiber mat is successfully deposited by electrospinning onto the nonwoven PP surface, and the Ni fabric electrodes are attached to the top and bottom of the triboelectric layers. Figure 3b shows a schematic of the NMF-TENG to which an external force is applied and released. The triboelectric layers of the NMF-TENG contain numerous pores with a volume of air and critical contact points between the fibers. As the external force with vertical components is applied, the triboelectric layers are deformed, which results in a decrease in the air volume and an increase in critical contact points between the frictional materials. 
Therefore, by applying a mechanical force such as pressing and releasing, surface triboelectric charges are generated due to changes in the contact area between frictional materials having different polarities. Figure 3c shows a schematic of the electricity generation mechanism in the vertical contact mode. In the original state, there is no generation of frictional charges or potential differences between the two electrodes [52]. When an external force is applied to the top surface of the NMF-TENG, the triboelectric layer is deformed as described above, and then triboelectric charges are generated by the change in contact area between the nylon 6 nanofibers and PP microfibers. When the external force is released, the opposite charges are separated, and electrons flow through the external circuit, inducing the potential difference. As a result of this sequence, the open-circuit voltage and short-circuit current shown in Figure 3d are generated during one cycle by the external force applied to the NMF-TENG surface. The effect of the PP basis weight on the electrical performance of the NMF-TENGs was investigated by applying an external force to NMF-TENGs with different PP basis weights. The fabricated NMF-TENGs were periodically pressured and released under the constant force of 5 N at a frequency of 8 Hz.
The NMF-TENGs generated excellent output voltage and current, as shown in Figure 4, despite the absence of a spacer in the structure. The electrical output performance of PP15 shows V oc of 1.55 ± 0.12 V and I sc of 161.01 ± 10.12 nA. The electrical performance of PP30 shows V oc of 2.79 ± 0.13 V and I sc of 243.88 ± 12.42 nA, which is an improved output performance over that of PP15. Furthermore, when the PP basis weight is increased to 50 gsm, V oc and I sc are further improved to 3.54 ± 0.13 V and 374.39 ± 22.28 nA, respectively. Based on these results, the electrical output performance of the NMF-TENG was improved with an increase in the PP basis weight from 15 to 50 gsm. These results arise because the high basis weights of nonwoven PP contain a high porosity and large volume of air with a high dielectric constant, thereby enhancing the triboelectric effect. In this study, the NMF-TENG fabricated by the use of nonwoven PP50 and a nylon 6 nanofiber mat is selected to examine the triboelectric properties. The electrical output performances of the fabricated NMF-TENG were investigated by applying various external forces. The NMF-TENG was pressured and released for three cycles with increasing force to 0.5, 2, and 5 N in a mechanical pressure machine (SnM Tech, Korea), as shown in Figure 5a. Figure 5b illustrates the electrical response of the NMF-TENG in terms of voltage as the force increases. As shown in the results, it can be observed that the output voltage of the NMF-TENG increases to 1.26 ± 0.01, 2.38 ± 0.24, and 3.47 ± 0.06 V as the pressure varies between 0.5, 2, and 5 N, respectively. Figure 5c shows the electrical response of the NMF-TENG in terms of current. The results indicate that the output current increases to 107.76 ± 7.07, 150.23 ± 9.71, and 247.58 ± 10.71 nA as the pressure gradually increases. As mentioned above, the increase in electrical performance of the NMF-TENG in response to an increasing pressure is related to a change in the interfacial contact area between the friction materials. Moreover, the electrical charge potential of the NMF-TENG was realized by finger (Figure 5d), blade (Figure 5e), and palm tapping (Figure 5f) with the human's hand, and the generated output voltages were 12.52 ± 0.87, 23.10 ± 0.60, and 33.93 ± 2.09 V, respectively, despite the application of a similar pressure, from approximately 14 to 16 N. The results indicate that the area of pressure applied to the surface of the NMF-TENG is related to the output performance. It means that the charge potential of the inner friction area is increased through the increase in pressure area. To demonstrate the capability of the NMF-TENG for energy harvesting, it was used as a power source to charge capacitors through a bridge rectifier under a periodic external force of 5 N at 8 Hz, as shown in Figure 5g. As plotted in Figure 5h, the capacitor with 0.1 µF is rapidly and steeply charged by the NMF-TENG. However, as the capacitance becomes larger, a characteristic linear charging behavior occurred. Furthermore, we replaced the capacitor with LEDs in the bridge rectifier, and only the power source generated by the NMF-TENG was used to turn on the LEDs, as shown in Figure 5i. The full fabric-type NMF-TENG can be considered as a power source for wearable devices.
Conclusions
In summary, we have developed a nano-and micro-fiber-based TENG (NMF-TENG) by using a nylon 6 nanofiber mat and melt blown nonwoven PP as the triboelectric layers to enhance the electrical performance. The NMF-TENG was fabricated by electrospinning nylon 6 on the nonwoven PP with Ni coated fabric electrodes. Owing to the porous structure of the triboelectric layers containing a large volume of air with a difference in electronegativity, our NMF-TENG could generate triboelectric effects even with very thin layers and without a spacer. The morphology, porosity, pore size distribution, and fiber diameters of the triboelectric layers were characterized. Moreover, the electrical output performance of NMF-TENGs was investigated. When PP15, PP30, and PP50 were applied to the triboelectric layer with a nylon 6 nanofiber mat, the electrical output performance of the NMF-TENG was improved with the increase in PP basis weight from 15 to 50 gsm. Furthermore, the electrical charge potential realized by hand tapping with different pressure areas was 33.93 V for palm, 23.10 V for blade, and 12.52 V for finger tapping. It means that the charge potential of the inner friction area is increased through the increase in pressure area. To demonstrate the capability of the NMF-TENG for energy harvesting, it was used as a power source to turn on LEDs.
The triboelectric output performance of NMF-TENGs, combined with their high permeability, provides the opportunity to apply them to smart and wearable device power systems and self-powered electronic systems. Scaling the NMF-TENG to large areas and its application to clothing will be studied in the near future.
"year": 2020,
"sha1": "2955eb4f35ba165c6a43f2c2e9b08ceef9c4bc17",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/12/3/658/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "58e0942677fb03946956dde317ed5bbe96a472a3",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Prevalence of helminthic infections and determinant factors among pregnant women in Mecha district, Northwest Ethiopia: a cross sectional study
Background Intestinal parasites are among the most common infections in developing countries, and their prevalence and impact are high in pregnant women. The aims of this study were to determine the prevalence of helminthic infection and to evaluate its determinant factors during pregnancy. Methods A cross-sectional study was conducted in Mecha district from November 2015 to January 2016. Data were collected by interviewing each pregnant woman and collecting a stool sample from her. Descriptive statistics and binary logistic regression were used. Results A total of 783 pregnant women were included. The prevalence of intestinal parasites among pregnant women was 70.6% [95% CI: 67–74%]. Ascaris lumbricoides (32.7%) was the predominant intestinal parasite species. The odds of intestinal parasitic infection were 2.94-fold higher in the absence of a latrine (AOR: 2.94 [95% CI: 1.5–5.8]), 3.33-fold higher in the absence of a regular hand-washing habit (AOR: 3.33 [95% CI: 1.54–7.14]), 6.87-fold higher among women who did not wear shoes (AOR: 6.87 [95% CI: 3.67–12.9]), 2.32-fold higher among illiterate women (AOR: 2.32 [95% CI: 1.04–5.26]), and 2.65-fold higher among women who ate raw vegetables (AOR: 2.65 [95% CI: 3.23–9.9]). The odds of infection were also higher in rural areas (AOR: 2 [95% CI: 5–10]) and in women aged less than 21 years (AOR: 6.48 [95% CI: 2.91–14.4]). Conclusion The prevalence of helminthic infection was high in this study. Latrine utilization, hand-washing habits, eating raw vegetables, and walking barefoot were the major determinant factors for the high prevalence. Therefore, health education and improvements in sanitary infrastructure could achieve long-term and sustainable reductions in helminth prevalence.
Background
Intestinal parasites, especially geohelminths, are the most common and widespread human parasites in the developing world [1]. Thousands of rural and impoverished villagers are often chronically infected with different species of parasitic worms [2]. More than 1.5 billion people, or 24% of the world's population, are infected with soil-transmitted helminths worldwide. Infections are widely distributed in tropical and subtropical areas, with the greatest numbers occurring in sub-Saharan Africa, the Americas, China, and East Asia [3].
Intestinal parasitic infection is very common in Ethiopia [4], and the magnitude of infection varies from place to place [5]. Intestinal parasitic infections are the second most common cause of outpatient morbidity in the country [6]. The high prevalence of parasitic infection in Ethiopia is due to unsafe and inadequate water provision, unhygienic living conditions, the absence of proper latrine utilization, and the habit of walking barefoot [7,8].
Pregnant women are also at high risk of parasitic infection due to their close contact with children [9]. A recent study of pregnant women indicated that pregnancy is associated with an increased prevalence of parasitic infections compared to non-pregnant women [10].
Helminth infections are associated with a modest decrease in hemoglobin levels and with indicators of poor nutritional status. Helminthic infections such as hookworm, trichuriasis, and schistosomiasis have been shown to contribute directly to severe anemia through blood loss and micronutrient deficiencies [11]. Low hemoglobin levels are associated with areas of high hookworm prevalence [12], and hookworm is the leading cause of pathologic blood loss in endemic areas [13].
Anemia accounts for 20% of maternal deaths globally [14]. In these highly endemic regions, anemia is common among pregnant women and is often multifactorial. Anemia has a devastating effect on pregnant women and has been associated with stillbirth, prematurity, and low birth weight [15].
Although many factors cause anemia, intestinal parasites such as hookworm, Trichuris trichiura, and Schistosoma are strongly associated with anemia in pregnant women in endemic parts of Ethiopia. These parasites cause anemia directly by feeding on red blood cells, or indirectly by causing bleeding, consuming micronutrients, and infiltrating the blood-forming organs. The complications of intestinal parasitic infection during pregnancy include stillbirth, prematurity, and low birth weight. To minimize this burden, studying the prevalence of intestinal parasitic infections in pregnant women is essential. Therefore, the aims of this study were to determine the prevalence and determinants of intestinal helminths among pregnant women in Northwest Ethiopia.
Methods and materials
A cross-sectional study was conducted in Mecha district, located 40 km north of Bahir Dar city, with about 376,000 residents. The data were collected from November 2015 to January 2016. The target population was all pregnant women residing in Mecha district. Pregnant women who were absent during the data collection period or unable to give a stool sample were excluded from the study.
The sample size was calculated using the single population proportion formula, assuming a 95% CI, a 50% proportion of intestinal parasites in pregnant women, a 5% margin of error, a 10% non-response rate, and a design effect of 2, which yielded 846 pregnant women. A multistage sampling technique was used: first, 10 kebeles (the smallest administrative unit in Ethiopia) were selected from the 40 kebeles of Mecha district by simple random sampling; then pregnant women were selected from these 10 kebeles by simple random sampling, using the kebele health extension workers' registration lists as the sampling frame.
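As a quick check of the figure above, the sketch below reproduces the single population proportion calculation in Python. Rounding up after adding the 10% non-response allowance is an assumption about the authors' convention, but it recovers the reported 846.

```python
# Single population proportion sample size, with the values given above.
import math

z = 1.96            # z-score for a 95% confidence level
p = 0.50            # assumed proportion with intestinal parasites
d = 0.05            # margin of error

n0 = z**2 * p * (1 - p) / d**2      # ~384.2 before adjustments
n = n0 * 2                           # design effect of 2 -> ~768.3
n_final = math.ceil(n * 1.10)        # add 10% non-response allowance
print(n_final)                       # 846, matching the reported figure
```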
Data were collected by interviewing each pregnant woman and collecting a stool sample from her. Fifteen health extension workers were recruited for data collection and 5 clinical nurses for supervision. Each stool sample was preserved with 10 mL of sodium acetate-acetic acid-formalin (SAF) solution and transported to the Bahir Dar regional laboratory for analysis. One gram of stool was collected from each woman and processed by a concentration technique: the sample was mixed well and filtered through a funnel with gauze, centrifuged for one minute at 2000 rpm (revolutions per minute), and the supernatant was discarded. Then 7 mL of normal saline was added and mixed with a wooden stick, 3 mL of ether was added and mixed well, and the tube was centrifuged for 5 min at 2000 rpm. Finally, the supernatant was discarded and the whole sediment was examined for parasites [16].
To ensure the quality of this research, training was given to all data collectors and supervisors. A pre-test was conducted on 50 pregnant women, after which the necessary corrections were made to the questionnaire. The whole data collection procedure was closely supervised by field supervisors and investigators.
A woman was considered to practice proper hand hygiene if she washed her hands after visiting the toilet, before cooking food, and before feeding her child. A household was considered to properly utilize a toilet if its members used the toilet consistently.
The data were entered into the computer using Epi Info software and exported to SPSS software for analysis. Descriptive statistics were used to estimate the prevalence of intestinal parasites, and binary logistic regression was used to identify their determinants; variables with a p-value less than 0.05 were declared determinants of intestinal parasites. Adjustments were made for age, gravidity, parity, ANC visits, religion, ethnicity, residence, ingestion of raw vegetables, latrine utilization, hand-washing practice, walking barefoot, and educational status.
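For readers reproducing this kind of analysis outside SPSS, the sketch below shows one way to fit the binary logistic regression and report adjusted odds ratios (AORs) with 95% CIs in Python. The column names in the usage comment are hypothetical placeholders for the adjustment set listed above, not variables from the study dataset.

```python
# Hedged sketch of the adjusted-odds-ratio analysis described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def adjusted_odds_ratios(df: pd.DataFrame, outcome: str, predictors: list):
    """Fit a binary logistic regression and exponentiate the coefficients."""
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df[outcome], X).fit(disp=0)
    ci = model.conf_int()  # columns 0 and 1 hold the CI bounds
    table = pd.DataFrame({
        "AOR": np.exp(model.params),
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
        "p": model.pvalues,
    })
    return table.drop(index="const")

# usage (placeholder columns): adjusted_odds_ratios(data, "infected",
#     ["no_latrine", "no_handwashing", "barefoot", "illiterate",
#      "raw_vegetables", "rural", "age_le_21"])
```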
Results
A total of 783 pregnant women were included, giving a response rate of 92.55%. Of the infections, 60.8% were of high intensity, 18.3% of moderate intensity, and 21% of low intensity (Table 2).
The absence of a latrine increased the odds of intestinal parasitic infection by 2.94-fold. The odds were 3.33-fold higher among pregnant women without a regular hand-washing habit. Not wearing shoes increased the odds by 6.87-fold, and illiteracy increased them by 2.32-fold. The odds of helminthic infection were 2.65-fold higher in pregnant women who ate raw vegetables.
Pregnant women aged 21 years or younger were 6.48 times more likely to be infected with helminths than older women, and the odds of intestinal parasitic infection were 2-fold higher in rural areas (Table 3).
Discussion
The prevalence of helminthic infection was 70.6% (95% CI: 67–74%). This result is comparable with previous studies conducted in Uganda [17] and Venezuela [18], but higher than previous studies conducted in Ethiopia [19-21] and Kenya [22]. This might be due to differences in socio-demographic factors and lack of awareness of parasitic infection prevention. The current high prevalence of Ascaris lumbricoides (32.7%) is comparable with previous studies conducted in southern Ethiopia [20], Venezuela [18], and Kenya [22], but higher than studies done in western Ethiopia [21]. This difference may be due to differences in altitude and in awareness of parasitic disease prevention.
Hookworm infected 14.2% of the pregnant women. This result is higher than studies conducted in southern Ethiopia [23] and the Niger Delta region of Nigeria [24], but lower than previously reported data from western Ethiopia [21].
The prevalence of Schistosoma among pregnant women was 17.4%, which is higher than in studies conducted in Southeast Ethiopia [19,23].
This study identified intestinal helminths as an underestimated public health problem among pregnant women and showed that socioeconomic factors play an important role in the establishment and spread of these infections in the community.
Intestinal parasitic infection among pregnant women was determined by age, walking barefoot, latrine utilization, hand-washing habits, and the habit of eating raw vegetables.
Illiteracy increased the odds of intestinal parasitic infection in pregnant women by 2.32-fold. This result is in accordance with a previous study conducted in Kenya [25] and might reflect the better health-seeking behavior of literate pregnant women. The odds of infection were 2-fold higher among pregnant women living in rural areas, in agreement with a finding from southern Ethiopia [23]; rural pregnant women have less access to primary healthcare interventions. The odds of infection were 3.33-fold higher among pregnant women without a regular hand-washing habit, in line with a previous study conducted in Nigeria [26]; proper hand washing breaks the chain of transmission for intestinal parasites. Not wearing shoes increased the odds of infection by 6.87-fold, similar to a previous study conducted in Ethiopia [21]; wearing shoes prevents soil-transmitted helminths such as hookworm from entering the susceptible host. Ingestion of raw vegetables increased the odds of infection by 2.65-fold, consistent with previous findings from Southeast Ethiopia [19,27]; raw vegetables act as vehicles for transporting intestinal parasites [28-30].
Conclusion
A high prevalence of intestinal parasites was observed in pregnant women. Walking barefoot, living in a rural area, illiteracy, age less than 21 years, the absence of a latrine, lack of a regular hand-washing habit, and ingestion of raw vegetables were the major determinant factors. Therefore, health education and improvements in sanitary infrastructure could achieve long-term and sustainable reductions in helminth prevalence.
Acknowledgments
We would like to acknowledge the Federal Democratic Republic of Ethiopia for financially sponsoring this research work. The funder had no role in data collection, analysis, interpretation, writing of the manuscript, or the decision to submit the manuscript for publication. We would also like to acknowledge the Mecha Woreda health office for its unreserved cooperation during the data collection stage. Last but not least, we acknowledge all organizations and individuals who contributed to this work.
Author contributions BEF conceived the experiment; BEF and THJ performed the experiment; BEF planned the data collection process; BEF and THJ analyzed and interpreted the data. BEF and THJ wrote the manuscript and approved the final draft for publication.
Funding
This research work was financially supported by the Federal Democratic Republic of Ethiopia Ministry of Health. The funder had no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare that they have no competing interests.
Ethics approval and consent to participate Ethical clearance was obtained from the Amhara National Regional State health bureau (protocol number 314/10-2015). Permission was obtained from the Mecha Woreda health office. Written informed consent was obtained from each study participant. Names were not written on the questionnaire, and the confidentiality of the data was properly maintained. Pregnant women with intestinal parasitic infection were referred to the nearby health center for further investigation.
Consent for publication
Not applicable.
"year": 2018,
"sha1": "35b5cab00dd05befd2a7c35c45f3a8db48ccda3b",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-018-3291-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "35b5cab00dd05befd2a7c35c45f3a8db48ccda3b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Influences on Consumer Engagement with Sustainability and the Purchase Intention of Apparel Products
Apparel and textile products are filling landfills and contributing to extensive waste across the world. Much of this textile waste arises because the typical consumer is not aware of the care, disposal, and sustainable options for textile products. To identify consumers' intention to engage in sustainable practices and to purchase sustainable apparel, this study measured consumers' attitudes, subjective norms, and perceived behavioral control. Data were collected from a sample of 397 participants through a Qualtrics online survey disseminated on Amazon's MTurk. The multiple regression analysis yielded three findings of note: (1) a positive attitude toward recycling and the environment is related to a higher intention to engage in sustainable behavior, (2) a positive attitude toward green apparel products leads to a higher intention to purchase sustainable products, and (3) family and friends and the convenience of finding sustainable apparel products in stores also influence the purchase of sustainable apparel. This study thus provides significant insights into both the intention to engage in sustainable behavior and the intention to purchase sustainable products, and serves as a foundation for future studies of sustainable engagement and purchase intention toward sustainable products.
Introduction
The availability of fast fashion has encouraged less expensive clothing to become popular, leading to overconsumption. Compared to previous decades, manufacturing is now typically outsourced to facilities abroad that can produce cheap apparel at high volumes. Almost 98% of apparel products sold in the USA are made in other countries, and the average cost of an apparel product is less than $15. It is clear that this increased manufacture and consumption of apparel has an adverse effect on the environment [1]. Many discarded apparel products are of poor quality and construction or are no longer worn due to changes in trends or personal style [2]. Between 1999 and 2009, post-consumer textile waste in the United States increased from 8.3 million tons to 11.3 million tons. The amount of textile waste per year continued to increase and was expected to reach 16.1 million tons by 2019. Although most of this waste is 100% recyclable, around 85% of it continues to be thrown into landfills [1].
The apparel industry has a negative environmental impact, but this can be reduced: the reuse and recycling of textiles has been found to be more beneficial than incineration and landfilling. For the industry to be considered sustainable, the impact per apparel product must be reduced by 30-100% by 2050. To achieve this, consumers must prolong each apparel product's usable life, as many products are discarded while still usable [3]. Worn or stained apparel is often thrown away, while products in good condition are more likely to be recycled [4]. This waste warrants a closer look at sustainable consumption, defined as the individual's understanding of the long-term impacts of their own consumption behavior [5,6].
Consumerism
Consumerism revolves around the premise that material products are valuable and that life may not be complete without possessions, which may outshine social relationships or experiences. In particular, apparel products are purchased for hedonic purposes beyond their inherent utilitarian purpose, and shopping is considered a recreational activity for millions of consumers [8]. Beyond the act of shopping and consuming apparel products, many consumers only think of environmental problems from a supply perspective and do not consider the linkages between consumption and environmental degradation. Consumption must be considered in conversations about sustainability, as encouraging recycling and reducing waste is not enough [9]. Currently, consumers have some sustainable choices available but may be reluctant to make sustainable choices in their consumption habits in favor of products that have rapid turnover [10].
Sustainability in the Fashion Industry
Sustainable apparel is defined as 'clothing which incorporates one or more aspects of social and environmental sustainability' [11], but a sustainable supply chain considers the triple bottom line. For consumers who are interested in sustainable products, supporting companies that practice the triple bottom line is essential [11]. Over one billion apparel and accessory products are produced every year, adding over $3 trillion to the global economy. The cost of producing these items involves extensive resources, including water, cotton, and energy. In addition, three-fifths of the products purchased are discarded within one year [12]. While sustainable initiatives begin with production, it must also include a change in consumer consumption patterns. Since trends in apparel products change quickly, it is difficult to promote the reuse or extended use of the product [13]. If all of the trashed apparel and textile products were recycled, the Environmental Protection Agency estimates that the reduced impact would be the equivalent of the carbon dioxide emissions of 7.3 million cars [14].
Consumers have limited awareness of the unsustainable impact of apparel consumption and have a limited understanding of sustainability in general. Despite attempts to educate consumers on the challenges of sustainable apparel consumption, it has become clear that the premise of sustainability itself will not elicit changes in consumption patterns. In order for change to take place, consumers must understand the role that the care for and disposal of apparel products have on environmental sustainability [15].
Delaying the disposal of an apparel product helps sustainability. However, it is essential to permanently reduce needless waste through making thoughtful purchases and using already purchased apparel to the end of its lifecycle [16]. Recycling reduces the volume of textile waste in landfills, as well as the resources such as water, fibers, and chemical dyestuffs. There is a significant lack of recovery of apparel waste when attempting to recycle textiles, as consumers do not typically have the necessary knowledge on how to dispose of their apparel in a sustainable way, including the proper recycling method [17].
Some communities in the United States have attempted to facilitate the recycling of apparel through recycling contests and corresponding prizes. Overall, consumers' positive emotions found when recycling can overshadow the negative emotions associated with being wasteful. Thus, a call for research on the factors that influence waste, reuse, and recycling was made by Sun and Trudel [4], which would lead to actionable initiatives for policymakers.
Consumers must also communicate with others about sustainability in order for it to 'catch on' among their peer groups. In a study by Youn and Jung, consumer data on sustainability were analyzed, and it was determined that consumers talked most about "eco-friendly", "recycle", and "ethical". These terms are broad in nature and signal a way for retailers to communicate with consumers [18]. Researchers must also continue research on sustainable apparel to help the industry, even though active research on sustainability in the apparel industry has increased dramatically since 2005 [19].
Theory of Planned Behavior
The theory of planned behavior (TPB) is a widely known model for predicting and explaining human behavior. The TPB was adapted from the theory of reasoned action (TRA), which rests on two general premises: first, individuals process information and act in a rational manner, and once intention is established, the behavior will result; second, attitude and subjective norms build intention [20]. Since the TPB focuses on the decision-making process and the motivations behind human behavior, three of its constructs were investigated as predictors of behavioral intention: attitude, subjective norms, and perceived behavioral control [21]. The TPB traditionally also includes the actual behavior in which the consumer would engage; however, actual behavior is difficult to measure, and extant literature, including Kang et al. [22], has likewise excluded it. Thus, this study focused on intention and excluded the behavior variable from the model.
Attitude
Attitude can be defined as the degree to which a person has a favorable or unfavorable evaluation [23] of a product, service, or behavior. The definition can also include the perception of engagement in a behavior, as well as the consequences that may result from the behavior. Attitude has also been found to positively impact behavioral intention [16]. Thus, if people have a positive attitude toward sustainability and the environment, it is more likely that they will make positive changes in their consumption decisions. Overall, attitude will help us to understand the challenges that consumers have when adopting a more sustainable lifestyle [24].
Attitude has been found to be directly related to behavior in regard to sustainability and, more specifically, recycling. Consumers feel good when recycling, helping to form a positive attitude toward the behavior. While environmental concerns may not be the main motivating factor, a positive attitude toward recycling leads to sustainable behavior [25].
Hypothesis 1 (H1). Positive attitudes towards recycling apparel items lead to a higher intention to engage in sustainable behavior.
Hypothesis 2 (H2). Positive attitudes towards the environment lead to a higher intention to engage in sustainable behavior.
Hypothesis 3 (H3). Positive attitudes towards green products lead to a higher intention to purchase sustainable apparel.
Subjective Norms
A subjective norm is defined as the perceived social pressure from family and friends to perform a set behavior. People have an innate drive for approval, and significant others' opinions and actions influence resulting behavior [21]. This component of the TPB measures individuals' feelings about the social pressures they encounter when engaging in certain behaviors. In relation to this study, it has been found that people have been influenced to engage in sustainable behaviors if peers demonstrated sustainable behavior [23]. Consumers want to be seen doing the right thing by their peers, leading to the subjective norm serving as a strong predictor on a behavioral outcome. Thus, consumers' intention to recycle and buy sustainable products is predicted to depend on the subjective norm [24].
Consumers feel social pressure to purchase more sustainable products, and this has a key impact on sustainable consumption. A more significant impact is also found when consumers want a positive social image. Thus, subjective norms have positively influenced behavioral intentions towards purchasing sustainable products [26].
Hypothesis 4 (H4). Family and friends influence the intention to engage in sustainable practices.
Hypothesis 5 (H5). Family and friends influence the intention to purchase sustainable apparel.
Perceived Behavioral Control
Perceived Behavioral Control (PBC) can be defined as the perceived ease or difficulty of completing a behavior. To achieve the behavior, there are many elements that must be controlled, including the resources, skills, and abilities to reach the outcome. If someone perceives less control over an outcome, then the behavioral intention decreases for that activity [21].
Two main influences, convenience and price, have been found to be primary contributors to PBC when purchasing products. For sustainable products to be viable options for consumers, the products must be of good value and easily accessible. The higher price of green products can be a deterrent for consumers. However, consumers have been found to pay a higher price if the products are of higher quality than other options. Brand recognition also helps support the selection of green products. Despite the higher price point, a primary driver of green consumption is the perceived benefit to the environment [26].
Hypothesis 6 (H6). Convenience of purchasing green products will have a positive influence on intention to engage in sustainable behaviors.
Hypothesis 7 (H7). Convenience of purchasing green products will have a positive influence on the intention to purchase sustainable apparel.
Hypothesis 8 (H8). Price of green products will have a positive influence on the intention to engage in sustainable behaviors.
Hypothesis 9 (H9). Price of green products will have a positive influence on the intention to purchase sustainable apparel.
Behavioral Intention to Engage and Purchase
Behavioral intention serves as an antecedent to predicting an outcome behavior and indicates the likelihood of actual engagement in that behavior. When intentions are strong, there is a higher probability of people carrying out the behavior. In relation to recycling, the behavioral intention would be a process that fits into consumers' lifestyles; recognizing processes in which people can engage can help identify potential outcomes that may work for others [21]. To test the relationships between behavioral intention and the other variables in this study, the TPB model was followed.
Methods
This quantitative study used thirty-seven five-point Likert-type survey questions adapted from previous literature [22-24] and reviewed by a panel of apparel industry experts. The survey items were presented to participants in a traditional online survey format, with the adapted questions representing the TPB at the beginning of the survey and demographic questions at the end. The adapted questions were drawn from extant literature, some of which used the TPB model as a foundation. Overall, nine questions represented attitude, two subjective norms, nine PBC, six intention to engage, and four intention to purchase. Survey questions were coded (1 = Definitely Not, 2 = Not, 3 = Neither Yes nor No, 4 = Yes, 5 = Definitely Yes) and averaged to represent each construct for analysis. The survey questions, their connections to the TPB, and the corresponding codes are available in Appendix A.
The survey was distributed in 2019 to consumers of legal age, regardless of geographic location in the USA or other demographic variables. Access to Amazon Mechanical Turk's workers was used to recruit a convenience sample of participants, in which the survey was presented on MTurk and housed on Qualtrics. All participants were paid $0.10 for completing the survey, and the only condition to participation (beyond legal age) included their routine purchasing of consumer products, either in-store or online. The condition was put in place to ensure the consumer has dealt with the need to dispose of products throughout a given year. As the data were collected, responses in Qualtrics were automatically coded in preparation for analysis. Within 24 hours, a total of 403 responses were collected. Upon review of the data, six responses were removed due to consistent response patterns or missing responses. All survey items were found to have a Cronbach's Alpha of 0.7 or higher, as shown in Table 1.
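A minimal sketch of the scoring and reliability steps just described: items coded 1-5 are averaged into a construct score, and internal consistency is checked with Cronbach's alpha against the 0.7 threshold reported in Table 1. The `items` array is a hypothetical stand-in for one construct's response matrix, not the study data.

```python
# Construct scoring and Cronbach's alpha for one block of survey items.
import numpy as np

def construct_score(items: np.ndarray) -> np.ndarray:
    """Average each respondent's item responses into one construct score."""
    return np.asarray(items, dtype=float).mean(axis=1)

def cronbach_alpha(items: np.ndarray) -> float:
    """Standard Cronbach's alpha: k/(k-1) * (1 - sum item vars / total var)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of item totals
    return k / (k - 1) * (1 - item_vars / total_var)
```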
Multiple regression analyses using SPSS software were used to determine the acceptance or rejection of hypotheses. As all regression assumptions were met, the researchers determined that regression was a sufficient test over structural equation modeling. Correlations for each variable are also available in Table 2. Despite obtaining a convenience sample, participants in this study represented a diverse subset of the population. About 56% identified themselves as female, and the remaining 44% identified themselves as male. A majority of the sample (49%) was between the ages of 25 and 34, while those aged between 35 and 44 made up the second-largest segment (23%), and those aged between 19 and 24 made up the third-largest segment (13%). About 50% of participants had completed a bachelor's degree, 20% had obtained a high school diploma or a GED, 13% did not finish high school, and 12% had obtained an associate's degree. Income was also highly diverse, as those with an annual income of less than $20,000, $20,001-$35,000, $35,001-$50,000, and $50,001-$75,000 all resulted in approximately 20% of the sample.
Results
Each independent variable was measured against the variables of 'Intention to Engage' and 'Intention to Purchase'. The testing of the relationships between variables is detailed in Figure 1.
First, the independent variables of attitude, subjective norms, and perceived behavioral control were tested against the dependent variable of intention to engage through a regression analysis (Intention to Engage = a + b1 * attitude + b2 * subjective norm + b3 * PBC + e). Coefficients can be found in Table 3. Testing participants' intention to engage in sustainable behaviors generated many positive results and yielded an explained variance of 64.8% (R² = 0.648, F(5, 374) = 140.437, p < 0.001). Participants' positive attitudes towards recycling items led to a higher intention to engage in sustainable behavior (t = 7.639***, p < 0.001); thus, Hypothesis H1, that 'positive attitudes towards recycling apparel lead to a higher intention to engage in sustainable behavior', was accepted. Similarly, participants' positive attitudes towards the environment led to a higher intention to engage in sustainable behavior (t = 8.488***, p < 0.001); consequently, Hypothesis H2, that 'positive attitudes towards the environment lead to a higher intention to engage in sustainable behavior', was also accepted. The convenience of purchasing green products was also found to have a positive influence on the intention to engage in sustainable behavior (t = 2.059*, p < 0.05); therefore, Hypothesis H6, that 'convenience of purchasing green products will have a positive influence on intention to engage in sustainable behavior', was accepted.
On the other hand, participants' family and friends' influence did not have a positive effect on their intention to engage in sustainable behavior (t = 0.705, p > 0.05); thus, Hypothesis H4, that 'family and friends influence the intention to engage in sustainable behavior', was rejected. The price of green products did not have a positive influence on the intention to engage in sustainable behavior either (t = 1.570, p > 0.05), so Hypothesis H8, that 'the price of green products will have a positive influence on the intention to engage in sustainable behavior', was rejected as well. Of importance, attitude toward the environment has a strong effect on the intention to engage. Second, the independent variables of attitude, subjective norms, and perceived behavioral control were tested against the dependent variable of intention to purchase through a regression analysis (Intention to Purchase = a + b1 * attitude + b2 * subjective norm + b3 * PBC + e). Coefficients can be found in Table 4. Testing participants' intention to purchase yielded positive results, with an explained variance of 49.7% predicting the participants' intention to purchase sustainable apparel (R² = 0.497, F(4, 382) = 96.445, p < 0.001). Participants' positive attitudes towards green products led to a higher intention to purchase sustainable apparel (t = 3.736***, p < 0.001); therefore, Hypothesis H3, that 'positive attitudes towards green products lead to a higher intention to purchase sustainable apparel', was accepted. In addition, the influence of family and friends affected the intention to purchase sustainable apparel (t = 3.478*, p < 0.05); thus, Hypothesis H5, that family and friends influence the intention to purchase sustainable apparel, was accepted as well. The convenience of purchasing green products had a positive influence on the intention to purchase sustainable apparel (t = 7.443***, p < 0.001); thus, Hypothesis H7, that 'convenience of purchasing green products will have a positive influence on the intention to purchase sustainable apparel', was accepted. On the other hand, the price of green products did not have a positive influence on the intention to purchase sustainable apparel (t = 1.535, p > 0.05); therefore, Hypothesis H9, that 'the price of green products will have a positive influence on intention to purchase sustainable apparel', was rejected. Of importance, convenience has a strong effect on the intention to purchase. The regression result for each hypothesis is available in Table 5.
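To make the two regressions above easy to replicate, the sketch below re-creates them in Python. The authors used SPSS; here, random placeholder scores stand in for the 397 survey responses, so the variable names and data are assumptions, and only with the real survey data would the fits reproduce the reported R² values of 0.648 and 0.497.

```python
# Hedged re-creation of the two TPB regressions with placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
cols = ["att_recycling", "att_environment", "att_green", "subjective_norm",
        "convenience", "price", "intention_to_engage", "intention_to_purchase"]
df = pd.DataFrame(rng.uniform(1, 5, size=(397, len(cols))), columns=cols)

m_engage = smf.ols("intention_to_engage ~ att_recycling + att_environment"
                   " + subjective_norm + convenience + price", data=df).fit()
m_purchase = smf.ols("intention_to_purchase ~ att_green + subjective_norm"
                     " + convenience + price", data=df).fit()
print(m_engage.rsquared, m_purchase.rsquared)  # ~0.648 and ~0.497 with real data
```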
A variance of 2.1% was found between the attitude variable and participants' gender when considering positive attitudes towards recycling: female participants were more likely than male participants to have positive attitudes towards recycling and thus to engage in sustainable behavior (r = 0.149, p = 0.002). A variance of 5.9% was found between the attitude variable and participants' age when considering green products: older participants had a higher intention to purchase sustainable apparel than younger participants (r = −0.234, p < 0.001). Next, a variance of 51.0% was found between the subjective norm variable and participants' age: older participants were more likely to be influenced by family and friends (r = −0.203, p < 0.001). Finally, a variance of 1.5% was found between intention to purchase and participants' age: older participants were more likely to purchase sustainable apparel than younger participants (r = −0.128, p = 0.006).
Discussion
Textiles can be reused and recycled in several ways to reduce the amount of waste that is burned or added to landfills, which in turn helps to conserve natural resources, limit pollution, and save energy. Improvements to current recycling rates can include: (1) better infrastructure between textile producers and recyclers, (2) adding curbside collection programs for consumers, (3) increasing end-use markets for waste recyclables, and (4) educating consumers on the steps they can take to be sustainable [27]. Connecting many of these improvements, the single-stream process of recycling paper, glass, plastic, and cans has been successful in recent years. Single-stream processes are convenient for the consumer, as the materials do not need to be sorted before recycling occurs; this convenience has been a huge step forward for recycling engagement across thousands of United States households [28]. A positive next phase would include curbside pickup of textiles across numerous communities. Additional funding for developing recycling techniques, in connection with government regulations, would be beneficial for increasing sustainability, and the growth of buy-back programs across the industry would be a big step toward gaining additional buy-in from consumers [17].
The current study found that participants had positive attitudes toward recycling, the environment, and green products. Participants also indicated their willingness to engage in sustainable behaviors and their intention to buy sustainable apparel. These relationships highlight that holding a positive attitude toward an action can result in an intention to act. Attitude was also found to be the strongest predictor of the intention to purchase green products. Positive consumer attitudes and a greater concern for the environment lead to stronger efforts to reduce environmental impacts [23] and a stronger purchase intention. If consumers feel as if they can positively impact the environment, they are more likely to engage in more sustainable consumption. These consumers also have a higher likelihood of purchasing green products, as they feel that their individual consumption behaviors have a direct impact [22].
In the current study, the subjective norm yielded results inconsistent with previous literature [26], as family and friends were not found to influence intention to engage in sustainable practices. In contrast, family and friends were found to influence participants' intention to purchase sustainable apparel. This influence is likely to derive from indirect cues that are taken when family and friends purchase quality, a sustainable brand, or avoid purchasing fast fashion. These observations may educate consumers on the products that they purchase.
When the consumption of sustainable apparel is supported, family and friends may be more impactful if they engage in similar sustainable practices. In this study, family and friends who purchased apparel made with organic, low-impact-dyed, or recycled materials had a positive influence on participants' purchase intention. Sustainable consumption also increases the tendency for consumers to give and receive secondhand apparel among family and friends, and it is believed that more sustainable consumption will lead to the reuse, upcycling, reselling, or donating of unwanted apparel items [13]. As consumers spend more time caring for their garments, an emotional attachment and engagement in more sustainable practices may result.
Previous literature has indicated that social norms are not direct influences on behavior but have an indirect impact through personal norms [20]. This study also supports the disconnect between social norms and behavior, as influences from family and friends were not found to have a significant relationship with the intention to engage in sustainable practices. However, family and friends were found to influence the intention to purchase sustainable apparel. This may be due to the visibility that sustainable apparel has when it is being worn by family and friends. People that have a stronger social conscience are reported to be more aware of environmental challenges, more involved with recycling, and more willing to purchase sustainable apparel [20]. While the influence of family and friends may be indirectly important, previous literature has also indicated that approval from significant others may not be as impactful as previously thought [23]. It is also important to note that consumers may be purchasing sustainable apparel based on their own decisions and interests instead of a direct influence from family and friends [22].
In the current study, perceived behavioral control has strongly supported the basis of convenience rather than price. Convenience played a significant role in both the intention to engage in sustainable behavior and the intention to purchase sustainable apparel. Paul et al. [23] also determined that communicating convenience and the availability of sustainable products is an important aspect of PBC and sustainable product purchase intentions. Based on this information, it is also important to note that the perceived availability of sustainable apparel is viewed to be limited as compared to more unsustainable apparel products.
Of significance, price did not have a significant influence on the intention to engage in sustainable behavior or the intention to purchase sustainable products. This result is of great interest, as price is typically a strong factor when people are shopping. However, it does not seem to be an influential factor when people are seeking sustainable options. Kang et al. [22] also found something similar, as consumers indicated that making a meaningful difference has a greater impact on their actions than concerns about price, availability, location, or consumption. Thus, price is irrelevant for consumers that feel they can make a difference through their own sustainable practices. Consumers that are knowledgeable of sustainability issues are not deterred by the price of sustainable apparel and will support sustainable initiatives if they feel they can make a difference for the environment [22].
Relationships between the TPB variables and demographic characteristics also drew significant insights in this study. Female participants were found to have more positive attitudes toward recycling than male participants and were also found to engage in more sustainable behaviors. Cho et al. [13] also had a similar result which stated that females tend to engage in more sustainable apparel consumption, as they are more frugal and fashion-conscious. Females have also been found to be more interested and engaged in general sustainable consumption processes [13].
A significant relationship was also found between subjective norms and participant age, as younger participants were less influenced by family and friends than older participants. Even though children learn consumer socialization behaviors through family members, it seems that younger consumers actively reject product recommendations from family and friends [29]. In the current study, younger participants were also found to have less positive attitudes toward green products and a lower intention to purchase sustainable apparel than older participants. It may be hypothesized that factors such as having a stronger ecological conscience are related to age, as consumers are likely to have engaged in new or refined behaviors throughout their lives. Thus, older consumers may continue to learn from family, friends, and other outside sources. The positive relationship between older consumers and their likelihood to adopt new behaviors is a positive sign for younger generations as well.
Overall, sustainable consumption must feel relevant to consumers' lives and must enhance consumers' social image. If these conditions are met, consumers are more likely to develop a positive attitude toward sustainable options, feel more pressure from peers to engage in purchasing sustainable apparel, and overcome challenges related to sustainable consumption [22]. However, additional research is needed on the topic of relevancy based on consumers' specific social images. As the current study highlights, consumers have positive attitudes toward recycling, the environment, and green products, while older consumers are more influenced by family and friends. Thus, there are numerous opportunities for retailers, marketers, policymakers, and governments to step in and support sustainable initiatives.
Implications
Of significance in this study, participants indicated that convenience is of greater importance than alternative sustainable processes, even when participants indicated that they were concerned about the social and environmental impact of consumption. Knowledge of consumer behavior toward sustainability helps policymakers, retailers, product developers, and marketing managers make appropriate decisions for their communities, companies, and consumers. The design of new products and packaging should also be reviewed for more sustainable options to increase recycling rates [4] and quality. The insights gained can also help educate and persuade consumers to engage in convenient processes that help achieve more sustainable practices and improve their current habits.
The utility of this study is primarily with retailers. Retailers can determine the various viewpoints that their consumers may have toward sustainability and adjust their assortments and marketing campaigns accordingly. It is important to know where consumer behavior and product demand is headed in order to make a profit.
We continue to be a long way from full consumer awareness of sustainable practices, as some consumers are ignorant of the global impact that consumption has had on the planet [15]. To make a difference in the industry, consumers, retailers, manufacturers, and other partners must join forces and share information on sustainability. The industry must determine sustainability standards that will support retailers and help consumers make informed decisions. Governments across the world must work to set basic laws to protect the environment. These laws need to consider the use of the planet's resources, minimize excessive consumption, and improve current waste disposal methods that are significantly polluting our planet. As outlined by Markkula and Moisander [30] and Harris et al. [15], policymakers must focus more on large-scale actions, including cultural and social contexts, instead of simply informing and educating consumers.
This study also serves as a foundation for researchers, as extant literature primarily focuses on consumer interests toward sustainability and not the perceptions that consumers have toward their own behaviors. Results from this study also further the use of the TPB in sustainability literature and has provided an avenue to solidify the theory. The results of this study also provide new topics for research, including the need to delve deeper into how demographics impact intention to engage.
Limitations
There are a few limitations to this study. First, intention is a widely accepted predictor of behavior but may not fully represent the actual behavior that would unfold. In the context of this study, people may not actually engage in the purchasing of green products, as there may be a lack of confidence in the performance of the product, and the higher price point of the green product may dissuade the consumer when making a purchasing decision [26].
Second, participants were only recruited from Amazon's Mechanical Turk, which may be biased toward specific populations, including people who are comfortable using the internet. Due to the nature of online surveys, participants may have also rushed through the survey without much consideration or may have selected more preferential answers. To help eliminate possible issues, data that had repetitive answers were removed from the data analysis. In addition, the post soliciting participants was only distributed once, leading to a possible bias based on when the participant engages in the survey (e.g., people seeking income during the day versus people who seek entertainment in the evening, etc.). Future research on this topic should seek a random sample to further test the TPB and hypotheses formed in this study. Overall, since the compensation for this study was relatively modest, it also is not believed that participants were biased when responding to the survey questions.
Third, participants may have engaged in virtue signaling when selecting responses within the survey. Virtue signaling is 'to take a conspicuous, but essentially useless action, ostensibly to support a good cause by actually showing off how much more moral you are than everyone else' [31]. In this study, participants may have wanted to demonstrate themselves in a more positive light than what exists in reality. However, these surveys were anonymous, confidential, and completed individually, and were also completed in a short amount of time. Due to the instinctual quick nature of filling out the survey, it is believed that participants will have indicated their true feelings on the topic. The survey items were also stated in a neutral manner with the goal of making the survey unbiased and open to genuine responses.
Conclusions
This study highlighted how consumers who are invested in sustainability will be driven to purchase sustainable apparel and intend to engage in sustainable behaviors. In contrast, consumers who are indifferent toward sustainability will do less to recycle and will engage in fewer sustainable actions. Influences, including family and friends, play a role in selecting sustainable options and sustainable processes. Adopting more sustainable behaviors must be convenient and adopted by more people for real change. Consumers must also fully understand the long-term benefits, including the long-term negative impacts that could unfold if sustainable practices are not widely adopted. Therefore, this study serves as a foundation for using the TPB when investigating the intention to engage in sustainable behavior and the intention to purchase sustainable products. Future research must further examine the topic of apparel sustainability by using a random panel of diverse participants to further determine barriers toward sustainable engagement and the purchase of green products.
"year": 2021,
"sha1": "d42d7d92d22e066d029232e1a422c32f2c17864e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/19/10655/pdf?version=1632565655",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9c5263a3217ba4d73f9ca10c92797c9ec7d4125e",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
Localized gravitational energy in a Schwarzschild field
An interpretation of general relativity is developed in which the energy used to lift a body in a static gravitational field increases its rest mass. Observers at different gravitational potentials would experience different mass reference frames. It is shown that bodies falling in a Schwarzschild field exhibit the relativistic mass/energy relationship from special relativity. This new result is independent of the choice of coordinates. The proposed approach provides a physical explanation for gravitational energy, which is localized as a scalar function intrinsic to general relativity. Applying this model to the Robertson-Walker metric demonstrates that time-varying fields induce a net energy transfer between bodies that is not exhibited in static fields.
I. INTRODUCTION
Over 30 years ago Rafael Vera introduced fascinating concepts for mass [1]. They have not gained widespread acceptance, possibly because they were thought to be incompatible with general relativity (GR). In this paper, a careful accounting of observer reference frames shows that Vera's ideas are consistent with GR in a Schwarzschild field. This combination is an attempt to begin to resolve the long-standing perplexity over the localization of gravitational energy.
One of Vera's ideas is that the rest mass of a particle increases with its gravitational potential [1]. Therefore, gravitational energy would be localized with the masses of bodies. This must occur in such a way that proper (locally measured) rest mass is invariant with position, as perceived from a local observer's mass reference frame. The mass/velocity relationship for a body falling along a geodesic in a Schwarzschild field is shown to be the old-fashioned expression for relativistic mass from special relativity. This new result is independent of the choice of coordinates, as confirmed for the isotropic and harmonic metrics. This approach strengthens the correspondence between special relativity and GR, and rehabilitates the undervalued concept of relativistic mass.
Reference frames are usually associated with spatial coordinates and time. The introduction of reference frames for mass is essential for solving this problem.
Applying Vera's concepts to time-varying gravitational fields is problematic. It is shown that his concept of an invariant falling mass does not apply to such fields. Unlike static fields, time-varying fields induce a net longrange energy transfer between bodies.
The paper proceeds with the following steps: A dimensionless scalar β_ab is introduced and its important identities are found. A mass transformation is derived for a one-dimensional, static gravitational field, then generalized to transformations between mass reference frames. The transformations are applied to the Schwarzschild solution and other metrics. The mass/velocity relationship for falling bodies is evaluated for radial and oblique motion. It is shown that the proposed interpretation conserves energy and mass within a Schwarzschild field while the conventional interpretation does not. Time-varying fields are studied and limitations on the proposed interpretation are imposed. As with Maxwell's equations, in which a time-varying electric field causes effects not seen in static fields, a time-varying gravitational field also causes effects not exhibited by static fields. This work is a matter of interpretation and is intended to complement GR.
II. BACKGROUND
What happens to the energy used to lift a body in a gravitational field? During the development of GR, Einstein derived an expression t_μν for the stress-energy of the gravitational field. That expression is a pseudotensor that varies with the choice of coordinates in an unseemly way. (For a review see [2,3].) Many attempts have been made to better understand gravitational energy in GR. Tolman [4] defined gravitational energy as a separate term in the energy-momentum tensor T_μν. In 1935, Whittaker [5] proposed the term "potential mass" to describe the contribution of gravitational potential energy to the gravitating mass of a particle. After much deliberation, the interpretation arose that gravitational energy is not localized in GR [6,7]. Thus, it is meaningless to ask where gravitational energy resides. In an influential textbook [8], Misner, Thorne and Wheeler sought to resolve this quandary with the words, "Anybody who looks for a magic formula for 'local gravitational energy-momentum' is looking for the right answer to the wrong question. Unhappily, enormous time and effort were devoted in the past to trying to 'answer this question' before investigators realized the futility of the enterprise." Yet Bondi insists that non-localizable forms of energy are inadmissible in GR [9].
Although today most relativists seem satisfied with the conventional treatment of gravitational energy as unlocalizable, there remains considerable ambiguity in the interpretation of gravitational energy within GR. Many authors believe that gravitational energy, and even mass, are stored in fields [10-12]. Some conclude that gravitational self-energy, which is a type of gravitational energy, contributes to the masses of bodies [13,14]. Vera [1], Savickas [15], and Ghose and Kumar [16] interpret rest mass as depending on local gravitational potential, while Ohanian and Ruffini [10] argue that rest mass is a constant independent of space and time. Since energy cannot simultaneously be distributed in fields and contribute to particle masses, some of these views appear to be incompatible. Hayward states, "It comes as a surprise to many that there is no agreed definition of gravitational energy (or mass) in general relativity" [17].
A few dedicated researchers continue to study gravitational energy and related concepts such as quasi-local energy momentum (e.g., [2,18-25]), driven partly by the belief, as stated by Mirshekari and Abbassi [26], that "one of the old and basic problems in general relativity which is still unsolved is the localization of energy."
III. CONCEPT FOR GRAVITATIONAL ENERGY
As proposed or implied by others (e.g., [1,15,16]) it is hypothesized that the energy used to lift a body in a static gravitational field increases its rest mass. It is also assumed that the mass of a body remains constant while falling in a static field [1]. This interpretation is illustrated in Fig. 1 with the following steps: The energy ΔE used to lift a body increases its rest mass by ΔE/c². When the lifted body is allowed to fall from a higher elevation, it maintains a constant mass during its freefall, equal to its mass at the point it began its fall. As the body passes its starting point with some velocity, it has a greater mass than an identical body at rest there, in general agreement with special relativity. When the fall is stopped, the body loses its extra mass and returns to the rest mass it had before being lifted. The mass that the body loses during its impact is transferred to other bodies (and to itself) in the form of vibrational energy, heat, elastic or inelastic energy, etc. Total mass would always be conserved during lifting, falling, and collisions. No mass or energy would be stored in the gravitational field.
With this interpretation, the rest masses of all bodies would depend on position in a gravitational field. In accordance with the equivalence principle, any experiments conducted at different elevations should be unable to detect any difference in mass, since all objects would be affected in exactly the same proportion. This interpretation does for potential energy what special relativity did for kinetic energy: it gives it a mass equivalent. The history of various concepts of gravitational energy is a fascinating topic not covered herein.
The concepts shown in Fig. 1 apply to static fields. However, it is shown later that the mass of a body cannot remain constant while falling in a time-varying field, in which gravitational energy must be conveyed between bodies via gravitons and/or gravitational waves. The essential element of Fig. 1 that remains applicable in time-varying fields is that energy and mass are primarily localized as part of the masses of bodies rather than distributed in fields.
For the purpose of this paper, no distinction needs to be made between inertial mass, active gravitational mass, and passive gravitational mass. They are assumed to be the same [27,28]. Also, the term "gravitational energy" refers to the gravitational potential energy of a body and is not to be confused with the more general gravitational energy-momentum tensor T_μν.
As an example of the conservation of mass after an inelastic collision, thermal energy increases the velocity of the molecules of a body, with corresponding relativistic mass given by
m = m_o/√(1 − v²/c²), (1)
so that the mass of a falling body is in part converted into the mass associated with additional kinetic energy KE of the molecules [29]. After the body cools off, it returns to its original mass. Although the terms "rest mass" and "relativistic mass" have fallen out of favor [30-32], they are used herein out of necessity. Here the term "rest mass" describes the mass of a motionless body in any reference frame fixed to a central mass in a static gravitational field. Since reference frames span different gravitational potentials, rest masses may vary with elevation. "Proper rest mass" is the rest mass of a body when it is co-located with the observer measuring it; proper rest mass is presumed to be invariant. "Relativistic mass" refers to the mass of a body moving relative to an observer fixed with respect to the central mass, as perceived from that observer's mass reference frame. Rather than dwell on semantic arguments over the acceptability of the term "relativistic mass," it is more productive to focus on physical arguments and their consequences.
Several variable mass theories of gravity have been proposed [33-36]. However, they do not directly relate changes in mass with potential energy.
The standard model extension (SME) has been developed partly to study possible variations in fundamental constants [37-39]. In the SME, the relative masses of different materials might vary with gravitational potential, in disagreement with GR. That possibility is not considered herein. Modern versions of the Eötvös experiments [40,41] support the hypothesis that different materials are affected by gravitational fields in the same way.
IV. DIMENSIONLESS SCALAR β
Because it is necessary to understand static fields before exploring time-varying fields, this work begins with a detailed analysis of the Schwarzschild solution. For static gravitational fields it is useful to define the scalar dimensionless ratio β_ab as follows:
β_ab ≡ dt_a/dt_b, (2)
where dt_a is the time interval between two events measured at some point A and dt_b is the time interval between those two events measured from some other point B, using clocks that are motionless relative to the field. If β_ab < 1, time passes more slowly at point A than at B. (In a static field it is easy to account for the time for a signal to travel between A and B.) From the form of (2) the following identities for β can be found:
β_aa = 1, (3)
β_ba = 1/β_ab, (4)
and β_ab β_bc = β_ac. (5)
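These identities are easy to confirm numerically. The following Python sketch is illustrative only and not part of the original derivation: it anticipates the Schwarzschild form of β obtained in Sec. VI, with an arbitrarily chosen central mass and radii.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s
M = 1.989e30    # central mass (one solar mass, chosen arbitrarily)

def beta_r_inf(r):
    # Schwarzschild beta between a stationary clock at r and one at infinity
    return math.sqrt(1.0 - 2.0 * G * M / (r * C**2))

def beta(a, b):
    # beta_ab between stationary clocks at radii a and b, via identity (5)
    return beta_r_inf(a) / beta_r_inf(b)

r1, r2, r3 = 1.0e7, 5.0e7, 2.0e8   # radii well outside the horizon

assert math.isclose(beta(r2, r1), 1.0 / beta(r1, r2))           # identity (4)
assert math.isclose(beta(r1, r2) * beta(r2, r3), beta(r1, r3))  # identity (5)
print("beta identities hold")
```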
V. GENERAL MASS TRANSFORMATION
The next step is to derive a general transformation for rest mass in a static, one-dimensional gravitational field, from the perspective of observers who are motionless relative to the field. Based on Einstein's equivalence principle, an observer at any location should measure the same rest mass for a given body when it is at her location. This could be achieved experimentally by comparing the mass of the body to the mass of a kilogram standard carried alongside.
Consider an observer located at some arbitrary reference elevation O. Let the proper rest mass of some body be m_o. The mass of that body when it is at some other elevation x, relative to the reference frame at O, is defined to be m_xo, an important notation for what follows. The difficulties of measuring mass at another location must be overlooked for now. When the body is at O, the observer there will measure its mass to be m_oo ≡ m_o. When the body is carried to some other elevation x, an observer there will measure its mass to be m_xx = m_o.
Consider a static field causing a proper acceleration g(x) in the −x direction. Let x represent proper distance, which will generally differ from the coordinates used in some solution to the field equations. Consider a body with proper rest mass m_o being slowly lifted against the field. Then the proper energy required to lift the body some small distance dx is given by
dE = m_o g(x) dx. (6)
In the local reference frame at x, the small increase in mass as the body is lifted is given by
dm_xx = dE/c² = m_o g(x) dx/c². (7)
From the reference frame at O, the body at x has mass m_xo, while an observer at x measures the mass of the body to be m_o. The ratio of those masses will apply to any body or mass increment at x. Therefore, the incremental mass dm at x observed from O and from x are related as follows:
dm_xo/dm_xx = m_xo/m_o. (8)
Combining (7) and (8) yields
dm_xo/m_xo = g(x) dx/c². (9)
This can be integrated to yield the general mass transformation between elevations in a one-dimensional, static gravitational field:
m_xo = m_o exp[(1/c²) ∫_O^x g(x′) dx′]. (10)
This equation, also found in [15], indicates that the mass of a body lifted against the field to x is greater than the mass of an identical body at O. Since there is no preferred elevation in a gravitational field, point O can be selected to represent any point. The equation also shows that proper mass does not vary with elevation.
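A quick numerical sketch shows how small the effect is in an everyday field. The fragment below is illustrative only; the uniform 1-g field, the 10 km lift, and the function name are our own choices, not the paper's.

```python
import math

C = 2.998e8   # speed of light, m/s

def m_xo(m_o, g, x):
    # Mass at elevation x relative to the frame at O, Eq. (10),
    # for the illustrative case of a uniform field of proper acceleration g.
    return m_o * math.exp(g * x / C**2)

m_o, g, x = 1.0, 9.81, 1.0e4        # 1 kg lifted 10 km in a uniform 1-g field
exact = m_xo(m_o, g, x)
weak  = m_o * (1.0 + g * x / C**2)  # weak-field limit: rest mass plus Phi/c^2
print(exact - m_o, weak - m_o)      # both ~1.09e-12 kg of added rest mass
```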
In a weak field, (10) reduces to
m_io ≈ m_o + Φ_i/c², (11)
where Φ_i is the classical gravitational potential energy of the body.
Next the general transformations between mass reference frames can be found. Consider the general case of two elevations A and B. Let O → A and x → B. From the form of the preceding equation, it can be shown that any rest mass m_o at B relative to the mass reference frame at A, and vice-versa, satisfies identities that are similar to those for β, so that
m_ba/m_o = m_o/m_ab, (12)
and
(m_ab/m_o)(m_bc/m_o) = m_ac/m_o, (13)
where m_ab, for example, is the mass of a body at A, with proper rest mass m_o, relative to the mass reference frame at B. Observers at different gravitational potentials experience different mass reference frames that are related by these formulas.
VI. MASS TRANSFORMATION OF THE SCHWARZSCHILD SOLUTION
Next, the mass transformation is applied to the exterior Schwarzschild solution of GR, described by the following metric:
ds² = (1 − 2GM/(rc²)) c²dt² − (1 − 2GM/(rc²))⁻¹ dr² − r²(dθ² + sin²θ dφ²), (14)
where G is the gravitational constant and M is the central mass. The coordinate time dt is the time interval of a stationary clock at infinity and ds = c dτ, where dτ is proper time of a clock corresponding to the spacetime interval ds. Setting dr = dθ = dφ = 0 in (14) for a motionless clock at r yields
β_r∞ = dt_r/dt_∞ = √(1 − 2GM/(rc²)), (15)
where dτ → dt_r is the time interval of a stationary clock at r and dt_∞ = dt is the time of a stationary clock at infinity. Thus, β_r∞ is the ratio of the time rates at r and infinity, corresponding to the definition for β in (2). That is the origin of the subscripts in (15).
Equation (15) provides β at r relative to an observer at infinity. At some other radius R,
β_R∞ = √(1 − 2GM/(Rc²)). (16)
From identities (4) and (5) an expression for the β's relating elevations r and R can be found:
β_rR = β_r∞/β_R∞ = √[(1 − 2GM/(rc²))/(1 − 2GM/(Rc²))]. (17)
Besides using the identities, this can also be derived from the relative time rates at r and R.
Since the mass transformation (10) is based on proper acceleration g(x), the next step is to find proper acceleration relative to a motionless observer in a Schwarzschild field. For radial motion, Hobson et al. [42] find the following geodesic equation of motion:
r̈ = −GM/r², (18)
where ṙ ≡ dr/dτ. In the general case, τ is the time measured by a clock that may be either moving or stationary.
In the case of a stationary clock at r, the terminology τ → t_r is used to differentiate between those two cases. For a motionless clock at r, dt/dτ ≡ dt_∞/dt_r = 1/β_r∞. Coordinate r in the Schwarzschild metric has little physical significance. Of greater relevance is the proper distance given by ds, which in the general case may correspond to a moving meter stick or one that is motionless relative to the central mass. To distinguish between those two possibilities, variable x is used to denote proper distance in the reference frame of a motionless observer at r. By setting dt = dθ = dφ = 0, letting ds → dx in (14), and using the appropriate sign for a spacelike spacetime interval, the following relationship is found:
dx = dr/√(1 − 2GM/(rc²)) = dr/β_r∞, (19)
which relates proper distance dx to the radial coordinate distance dr. Using the resulting formula for dx in place of dr in (18) yields the corresponding equation of motion (20) in terms of proper distance. Consider a body at the instant it is released to freefall, before it has attained appreciable velocity. Then dτ ≈ dt_r, the proper velocity is dx/dτ ≈ 0, and only the right hand term in (20) contributes. Using the resulting equation in combination with (18) and (19), the proper acceleration of gravity for a slowly falling body at r in a Schwarzschild field is given by
g(r) = GM/[r² √(1 − 2GM/(rc²))] = GM/(r² β_r∞). (21)
This is the acceleration that must be resisted when slowly lifting a body in the field. Note that g(r) is positive towards the central mass. Suppose a small body with proper rest mass m_o is lifted radially from r to R in a Schwarzschild field. The mass of the body at R, relative to the reference frame at r, can be found by substituting (19) and (21) into (10), which yields
m_Rr = m_o exp[(1/c²) ∫_r^R GM dr′/(r′² β²_r′∞)]. (22)
The surprisingly simple solution to this integral is
m_Rr = β_Rr m_o. (23)
Since β_Rr > 1 if R > r, the mass of the lifted body at R is perceived to be greater than m_o from the reference frame at r. In a weak field, m_Rr is its rest mass plus (or minus) the mass equivalent of the change in classical potential energy. If r is chosen to be the reference frame at an infinite distance from the central mass and if R → r, then the relative mass of a body at any point r perceived from the reference frame at infinity is given by
m_r∞ = β_r∞ m_o. (24)
Thus, stationary bodies in the vicinity of a central mass have a decreased mass relative to a reference frame at infinity. It is apparent that β_r∞ plays the role of scale factor for mass as well as time.
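The claim that the integral in (22) collapses to (23) can be checked by direct quadrature. The following sketch is ours, with arbitrary parameter values; it assumes SciPy is available.

```python
import math
from scipy.integrate import quad

G, C, M = 6.674e-11, 2.998e8, 1.989e30

def beta(r):                       # beta_r_inf, Eq. (15)
    return math.sqrt(1.0 - 2.0 * G * M / (r * C**2))

def g(r):                          # proper acceleration, Eq. (21)
    return G * M / (r**2 * beta(r))

r, R = 1.0e7, 1.0e9                # lift from r to R, both outside the horizon

# Eq. (22): integrate g along proper distance, using dx = dr / beta(r)
integral, _ = quad(lambda rr: g(rr) / (beta(rr) * C**2), r, R)
m_Rr_numeric = math.exp(integral)              # in units of m_o
m_Rr_closed  = beta(R) / beta(r)               # Eq. (23): beta_Rr

print(m_Rr_numeric, m_Rr_closed)               # agree to numerical precision
```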
The preceding results for the Schwarzschild solution suggest the following general transformation for rest mass in any static gravitational field:
m_xo = m_o exp[(1/c²) ∫_O^x g(x′) dx′], (25)
with the integral of the proper acceleration taken along the path between O and x. This generalization, offered without proof, does not depend on the choice of coordinates; it is not used again in this paper. The proof of (25) for any static field g(x) requires topics not covered here.
Most studies of localization focus on the gravitating mass rather than test particles. Nevertheless, mass reference frames reveal an interesting property of the gravitating mass in a Schwarzschild field. In some ways, the Schwarzschild metric is written from the perspective of a reference frame at infinity: Coordinate time t represents a clock at infinity and radial distance r converges to flat space coordinates at infinity. Therefore, the central mass at the origin should be M ≡ M_o∞, its mass from the reference frame at infinity. From a reference frame at r, all masses appear increased from (23), so that M_or = M_o∞/β_r∞. Substituting this into (21) yields
g(r) = G M_or/r², (26)
where r is the proper circumference divided by 2π. Thus, Newton's law of gravity is preserved for any local observer if the magnitude of the central mass in his reference frame is used. This result is independent of the choice of coordinates.
VII. RELATIVISTIC MASS OF A FALLING BODY
Now that the mass transformation has been derived for a Schwarzschild field, the mass/velocity relationship for a falling body can be found. Equation (23) provides the mass of a body that has been lifted from r to R. Based on the concept in Fig. 1, if that body is allowed to fall from R back to r, it will maintain the same mass during its fall. It will therefore arrive at r with the relativistic mass m v = m Rr given by (23).
In GR it is straightforward to derive the velocity of a body falling radially from R to r in a Schwarzschild field. The radial geodesic equations of motion [42] are
(1 − 2GM/(rc²)) dt/dτ = k (27)
and
ṙ² = c²(k² − 1) + 2GM/r. (28)
Expressing ṙ in terms of dr/dt and setting constant k so that the coordinate velocity dr/dt = 0 at R, the coordinate velocity of the body when it arrives at r is
dr/dt = −(c β²_r∞/β_R∞) √(β²_R∞ − β²_r∞). (29)
Using prior formulas, the proper velocity v_r of a body at r can be related to its coordinate velocity as follows:
v_r ≡ dx/dt_r = (1/β²_r∞) dr/dt, (30)
where v_r is measured with respect to proper coordinates x fixed relative to the central mass. The preceding two equations can be combined to yield the proper velocity of a body falling from R, measured at r by a motionless observer there:
v_r = −c √(β²_R∞ − β²_r∞)/β_R∞ = −c √(1 − β²_rR). (31)
Using (23) to provide the locally measured mass of the falling body m_v = m_Rr, it is found that the relationship between proper velocity and total mass of the falling body at r is given by
m_v = m_o/√(1 − v_r²/c²). (32)
This is the well known relativistic mass/velocity relationship of special relativity, derived for a body falling in a Schwarzschild field.
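Numerically, the consistency of (23), (31), and (32) is immediate. The short check below is our own illustration, with the same arbitrary mass and radii as before.

```python
import math

G, C, M = 6.674e-11, 2.998e8, 1.989e30

def beta(r):
    return math.sqrt(1.0 - 2.0 * G * M / (r * C**2))

r, R = 1.0e7, 1.0e9
beta_Rr = beta(R) / beta(r)                    # Eq. (23): mass carried down, per m_o

# Eq. (31): local speed of a body dropped from rest at R, measured at r
v_r = C * math.sqrt(1.0 - (beta(r) / beta(R))**2)

gamma = 1.0 / math.sqrt(1.0 - (v_r / C)**2)    # SR factor from Eq. (32)
print(gamma, beta_Rr)                          # equal: m_v = gamma*m_o = m_Rr
```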
VIII. OTHER INITIAL CONDITIONS
The mass/velocity relationship of (32) was derived for a body released from rest at R. What if some initial, proper radial velocity v_R is imparted to the body at R as it begins its fall? Then an observer at R will perceive the body to have an increased mass as it begins its fall, as given by
m_vRR = m_o/√(1 − v_R²/c²). (33)
From the reference frame at r, the mass of the body at R will be increased above its rest mass m_Rr in the same proportion. That is the value of the body's mass that remains constant during the fall from R to r, and will be measured as m_v when it arrives at r. Using the equation of motion (27) and other equations previously derived, it can be shown that, regardless of any initial velocity imparted to the body at R, the mass/velocity relationship of (32) applies when the body arrives at r.
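This independence from the initial velocity can be demonstrated numerically. In the sketch below, which is ours and not the paper's, the conserved constant k from (27) is set to γ(v_R) β_R∞, and the local velocity at r follows from v_r = ṙ/k, a consolidation of the definitions above.

```python
import math

G, C, M = 6.674e-11, 2.998e8, 1.989e30
beta = lambda r: math.sqrt(1.0 - 2.0 * G * M / (r * C**2))

r, R = 1.0e7, 1.0e9
for v_R in (0.0, 0.1 * C, 0.5 * C):            # several initial velocities at R
    gamma_R = 1.0 / math.sqrt(1.0 - (v_R / C)**2)
    k = gamma_R * beta(R)                      # conserved quantity from Eq. (27)
    rdot = math.sqrt(C**2 * (k**2 - 1.0) + 2.0 * G * M / r)   # Eq. (28)
    v_r = rdot / k                             # locally measured speed at r
    lhs = 1.0 / math.sqrt(1.0 - (v_r / C)**2)  # gamma implied by Eq. (32)
    rhs = k / beta(r)                          # mass carried down from R, per m_o
    print(v_R / C, lhs, rhs)                   # lhs equals rhs for every v_R
```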
IX. GRAVITATIONAL REDSHIFT
For static fields there are two distinct physical explanations for gravitational redshift that can be found in the literature. The most common view is that the redshift is caused by a loss (gain) in the energy of a photon as it climbs (descends) in a gravitational field. Another view, argued by Okun, Selivanov and Telegdi [43,44] and noted by others (e.g., [45,46]) is that the redshift is caused by a change in the energy levels of lifted atoms. These viewpoints are incompatible; if both were true the redshift would be doubled.
The concept of mass reference frames supports the view that gravitational redshift is caused by a change in the energy levels (masses) of lifted atoms. The energy of a photon would not change while traveling along a null geodesic in a static gravitational field [1,15]. If a photon is emitted at elevation O with proper energy E_o, it will arrive at a higher elevation x with the same energy. However, all stationary masses and energies at x are increased relative to O. To an observer at x, the photon arrives with a reduced energy compared to a photon at x emitted by the same process. Using a uniform field approximation of g(x) ≈ g_o and E = hf, it can be shown that a photon with proper energy E_o emitted at O arrives at x with a frequency shift of
Δf/f_o ≈ −g_o x/c² (34)
relative to the frequency the photon would have if it were emitted at x with proper energy E_o. The proper frequency of the photon at x is established by its proper energy there, in combination with Planck's constant, and is not an inherent property of the photon. Experimental evidence of the local position invariance of Planck's constant [47] is consistent with this interpretation. A review of gravitational redshift experiments (e.g., [48]) indicates that they are unable to distinguish whether photons lose energy during travel, or appear to be redshifted due to changes in mass/energy reference frames. The proposed interpretation also satisfies the thought-experiment of Nordtvedt on the conservation of energy in gravitational redshift experiments [49]. The proposed existence of mass reference frames appears to be consistent with predictions and observations of gravitational redshift.
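For scale, (34) can be evaluated for a laboratory-sized lift. The numbers below are our illustration, using the 22.5 m tower height of the classic Pound-Rebka experiment; the paper itself does not work this example.

```python
g0 = 9.81        # m/s^2, uniform-field approximation near Earth's surface
c  = 2.998e8     # m/s
x  = 22.5        # m, tower height of the Pound-Rebka experiment

shift = -g0 * x / c**2        # Eq. (34): fractional frequency shift
print(shift)                  # about -2.46e-15, the size of the measured redshift
```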
This discussion applies only to static fields. The timevarying field created by the expansion of the universe causes a cosmological redshift that is distinct from that of static fields.
X. OTHER METRICS FOR A CENTRAL MASS
Consistent with Birkhoff's law [50], many metrics describe the field of a central mass, each related to the Schwarzschild solution by a coordinate transformation. Two such metrics are the isotropic and harmonic metrics [51]. The isotropic metric for a central mass can be written in the form
ds² = β² c²dt² − (1 + GM/(2r_I c²))⁴ [dr_I² + r_I²(dθ² + sin²θ dφ²)], (35)
where
β = (1 − GM/(2r_I c²))/(1 + GM/(2r_I c²)) (36)
and the isotropic radial coordinate r_I is related to the Schwarzschild coordinate by
r = r_I (1 + GM/(2r_I c²))². (37)
If a body is lifted from r_I to R in the isotropic metric, and then allowed to fall back to r_I, the mass/velocity relationship of (32) is found at r_I.
If this exercise is repeated for the harmonic metric [51], that result is found once again. What this confirms is that mass reference frames are independent of the choice of coordinates, as should be expected. It can also be shown that at the same point in space, as determined by well-known coordinate transformations between the three metrics (e.g., [51]), both β_r∞ and the local gravitational acceleration g(r) are identical for all three metrics.
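The coordinate independence of β is straightforward to check for the isotropic case. The fragment below is our sketch: it maps an isotropic radius to its Schwarzschild counterpart through the coordinate relation quoted above and compares the two expressions for β at the same physical point.

```python
import math

G, C, M = 6.674e-11, 2.998e8, 1.989e30
mu = G * M / C**2                        # geometric mass, meters

def beta_schw(r):
    return math.sqrt(1.0 - 2.0 * mu / r)

def beta_iso(r_I):
    return (1.0 - mu / (2.0 * r_I)) / (1.0 + mu / (2.0 * r_I))

r_I = 1.0e7
r = r_I * (1.0 + mu / (2.0 * r_I))**2    # Eq. (37): same physical point
print(beta_schw(r), beta_iso(r_I))       # identical: beta does not depend on coordinates
```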
What this suggests is that, in static fields, gravitational energy is stored as part of the masses of bodies in accordance with the simple scalar function β ab , which transforms in a well-behaved way and is independent of the choice of coordinates.
XI. NON-RADIAL MOTION
Up to this point, only radial motion relative to a central mass has been considered. Next it is shown that motion in any oblique direction is not only consistent with the concept of mass reference frames, but corresponds to the Schwarzschild metric.
Consider a stationary observer fixed at some radius r from a central mass. To that observer, special relativity and (32) indicate that a moving or orbiting body at r should have a mass m_vrr of
m_vrr = m_o/√(1 − v_r²/c²), (38)
where m_vrr denotes relativistic mass m_v of the body at r in the reference frame at r. From a reference frame at infinity, the mass of the moving body at r appears smaller, since all masses at r appear decreased relative to those at infinity by a factor β_r∞, from (24). From that frame, the mass of the moving body at r is perceived to be
m_vr∞ = β m_o/√(1 − v_r²/c²), (39)
where the subscripts on β_r∞ are dropped for now. If the body is in free-fall around the central mass, moving in any direction with no other external forces, its total mass will remain constant during its fall or orbit. Therefore, the body's total mass from the reference frame at infinity is constant as follows:
β/√(1 − v_r²/c²) = K, (40)
where K is a constant of the motion. This formula makes it easy to find the velocity at any radius r. If K > 1 the body has sufficient relativistic mass, and associated velocity, to escape the central mass. If K < 1 the orbit will be bound. This result is similar to that of [52].
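The boundary K = 1 reproduces the familiar escape velocity. Solving (40) for v at K = 1 gives v_esc = c√(1 − β²), which reduces exactly to the Newtonian √(2GM/r); the check below is our illustration.

```python
import math

G, C, M = 6.674e-11, 2.998e8, 1.989e30
beta = lambda r: math.sqrt(1.0 - 2.0 * G * M / (r * C**2))

def K(r, v):
    # Constant of the motion, Eq. (40)
    return beta(r) / math.sqrt(1.0 - (v / C)**2)

r = 1.0e9
v_esc = C * math.sqrt(1.0 - beta(r)**2)    # K = 1 marks escape
print(v_esc, math.sqrt(2.0 * G * M / r))   # equals the Newtonian sqrt(2GM/r)
print(K(r, 0.9 * v_esc) < 1.0)             # slower bodies are bound (K < 1)
```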
Using the expressions v_r ≡ dx/dt_r and dt_r = βdt, (40) can be rearranged to yield
c²dt_r² − dx² = (β⁴c²/K²) dt², (41)
where dt is the time of a clock at infinity. Let dτ represent the time of a clock on the moving body. Those two times are related by
dτ = (β²/K) dt. (42)
Using (40) and (42), it can be shown that the left hand side of (41) is c²dτ² = ds², so that
ds² = β²c²dt² − dx². (43)
As previously defined, dx is proper length in the motionless reference frame at r. For any given metric, dx can be related to the coordinates used in that solution. For example, for the standard form of the Schwarzschild solution
dx² = (1 − 2GM/(rc²))⁻¹ dr² + r²(dθ² + sin²θ dφ²). (44)
This can be found from the metric at a synchronous instant in time (dt = 0), which gives the spacelike spacetime interval ds² → dx² when the appropriate sign is used. Substituting this into (43) yields
ds² = (1 − 2GM/(rc²)) c²dt² − (1 − 2GM/(rc²))⁻¹ dr² − r²(dθ² + sin²θ dφ²). (45)
This is the Schwarzschild metric of (14). Other metrics, like the isotropic and harmonic metrics, can similarly be obtained when the corresponding expression for dx² is used. Although this is not an independent derivation of the metrics, it demonstrates the correspondence between mass reference frames and the metrics of GR. Since this result is based on (38) it also indicates that a body moving in any oblique direction will exhibit the mass/velocity relationship of special relativity in a local frame.
XII. FINAL SUPPORTING ARGUMENT FOR STATIC FIELDS
The derivation of (32) confirms that mass and energy are conserved during a lift and fall cycle under the proposed interpretation. That provides a compelling argument to support the existence of mass reference frames: As shown below, the prevailing interpretation of gravitational energy in GR does not conserve energy and mass during a lift and fall.
Consider a body being slowly lifted in a Schwarzschild field. The mass equivalent of the proper lift energy expended over a small distance dx is given by (7). If total mass is indeed invariant between all elevations, then m_xo = m_o, contrary to (9). Then the mass equivalent of the energy used to lift a body from O to x would be given by
Δm = (m_o/c²) ∫_O^x g(x′) dx′. (46)
If this is integrated in a Schwarzschild field, the total mass equivalent of the energy expended while lifting the body from r to R would be given by
Δm = m_o ln β_Rr. (47)
In weak fields, m_o + Δm approaches m_Rr in (23). When the body falls from R to r it must arrive with a relativistic mass given by (32) in the reference frame of a stationary observer at r. It is easy to show that the lift energy/mass and the additional relativistic energy/mass acquired by the body when it returns to r are not conserved, as follows:
m_o + Δm = m_o(1 + ln β_Rr) < m_o β_Rr = m_v. (48)
It takes less energy/mass to lift a body than arrives back at the starting point. Thus, without the use of mass reference frames, energy and its mass equivalent are not conserved within a lift/fall cycle. This does not indicate any flaw in GR, but suggests that mass reference frames are necessary to complement the theory.
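The mismatch in (48) grows visible in a strong field. The sketch below is our illustration only, with a deliberately deep starting radius chosen so the difference between the two bookkeeping schemes is evident.

```python
import math

G, C, M = 6.674e-11, 2.998e8, 1.989e30
beta = lambda r: math.sqrt(1.0 - 2.0 * G * M / (r * C**2))

r, R = 1.0e4, 1.0e9                 # a strong-field lift (r is just outside the horizon)
b = beta(R) / beta(r)               # beta_Rr > 1

m_o = 1.0
m_lift = m_o * (1.0 + math.log(b))  # Eq. (48): invariant-mass bookkeeping
m_back = m_o * b                    # Eq. (32): relativistic mass on return
print(m_lift, m_back)               # m_lift < m_back: the cycle fails to balance
```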
XIII. MACH'S PRINCIPLE
To the extent that distant stars contribute to the local gravitational potential at any point, the proposed interpretation implies that the magnitude of the inertia of any body is directly influenced by masses throughout the universe. Thus, the proposed interpretation brings GR into closer harmony with Mach's principle. Nevertheless, the concepts developed herein are more likely to fuel the recurring debate on Mach's principle than to resolve it.
XIV. TIME-VARYING FIELDS
This paper focuses on static fields, which encompass the most important tests of GR. Nevertheless, it is important to show how the proposed interpretation is limited in time-varying fields. Such fields introduce issues that are discussed only briefly here.
The analysis of time-varying fields begins with the simple thought-experiment shown in Fig. 2. Consider two masses M_1 = M_2 located deep in intergalactic space and separated by distance d. Initially, both masses are approximately co-moving with the cosmological fluid. The attractive gravitational force between the bodies is F. Imagine that a long cable is attached to M_2, with the other end attached to a winch W secured to a distant, massive object. The winch pulls on the cable with a tension of T = 2F, so that mass M_2 moves away from M_1 at the same rate that M_1 falls towards M_2. Both masses will accelerate towards W while maintaining distance d between them. Once they have achieved some large velocity v, let mass M_2 be pulled a great distance away from M_1, leaving M_1 moving at velocity v in empty space. In this example, body M_1 has been accelerated by gravity to velocity v, imparting relativistic energy/mass to the body. Where has that energy come from? If Vera's interpretation of falling bodies from Fig. 1 is applied, then the total mass remains constant and the original rest mass m_o has been converted into relativistic mass m_v = m_o. If the body's motion is then stopped, it will lose the relativistic part of its mass and drop to a new rest mass of m′_o < m_v. Then m′_o < m_o in a region where the mass reference frame is unchanged. The body's rest mass has been decreased by the motion induced in Fig. 2. Conversely, the rest mass of M_2 would necessarily increase. That interpretation would require the rest masses of all bodies and elementary particles to vary with their prior motion in time-varying gravitational fields. In the absence of experimental evidence of an appreciable variability in the rest masses of protons or electrons, it is evident that Vera's interpretation of a falling body does not work in a time-varying field.
Fortunately, all is not lost. The most essential of Vera's concepts is that gravitational energy is stored as part of the masses of bodies rather than in empty fields. In Fig. 2, the energy expended by the winch travels directly to M_2 via the cable. Half of that energy then travels through space almost immediately to M_1, limited only by c. No energy would be stored in fields over the long term.
What can be deduced from this exercise is that gravitational energy must travel through space in a time-varying field. Bondi described the movement of energy across the empty space between two bodies [9]. In static fields, energy may be exchanged between gravitating masses, but the net change in the mass of a test particle during freefall should be zero. Thus the proposed interpretation of gravitational energy is compatible with time-varying fields, but is limited accordingly. This approach is consistent with the transport of energy by gravitational waves, and also suggests that gravitons, if they exist, should convey mass/energy. The transfer of energy in Fig. 2 is fundamental and does not arise from the concept of mass reference frames.
To show how the proposed interpretation works in time-varying fields, it is next applied to the most important cosmological metric of GR, the Robertson-Walker metric, which can be written in the following form:
ds² = c²dt² − a²(t)[dχ² + sin²χ(dθ² + sin²θ dφ²)], (49)
where t is the cosmological time of all observers comoving with the cosmological fluid, a(t) is the scale factor representing the expansion of the Universe, and dχ represents radial coordinate distance. Proper distance dx in the radial direction is related to dχ by dx = a(t)dχ at any synchronous time t.
Suppose that at some time t_1 a particle is given an initial proper radial velocity V relative to the cosmological fluid. At some later time t, it will arrive at a second point with proper velocity v_r ≡ dx/dt. From the geodesic equations of motion (e.g., [42]), the proper radial velocity of the body at t can be found:
v_r(t) = V a(t_1)/√[a²(t)(1 − V²/c²) + a²(t_1) V²/c²]. (50)
As expected, the equation confirms that if a particle starts out with an initial velocity V = 0 relative to the cosmological fluid it will continue to move with the cosmological fluid without relative acceleration. Moreover, a photon with an initial proper velocity of V = c will maintain a proper velocity of c.
As previously discussed for the Schwarzschild solution, suppose a particle is slowly "lifted" between two points. What (50) indicates is that no matter how small the initial velocity V > 0, the particle's proper velocity will never subsequently fall to zero. It will forever continue to drift slowly across the Universe. This means that all points in the expanding universe have the same gravitational potential. There is no gravitational force or pseudo-force to lift against when slowly moving a particle. Therefore, all observers co-moving with the cosmological fluid experience the same mass reference frame.
However, (50) shows that as the universe expands, the proper velocity of the particle decreases monotonically below its initial proper velocity V. Therefore, its relativistic mass will decrease with time, relative to cosmological observers. Using (50) the relativistic mass of a particle as a function of time is
m_v(t) = m_o/√(1 − v_r²(t)/c²), (51)
where m_o is its rest mass. Thus the relativistic mass decreases with time. The change in relativistic mass with time can be found from (50) and (51):
dm_v/dt = −(ȧ/a)(v_r²/c²) m_v. (52)
This loss of relativistic mass is directly proportional to the rate of expansion of the Universe. In a static universe, ȧ(t) = 0 and there is no loss of mass. This result is consistent with Vera's interpretation that the mass of a falling body remains constant in a static field, while requiring the transfer of mass in a time-varying field. It is important to recognize that the loss of mass in (52) does not arise from the concept of mass reference frames, but is a direct consequence of the Robertson-Walker metric and is implicit in GR. So what happens to the relativistic mass lost by a body moving relative to the cosmological fluid? The concepts developed herein suggest that mass/energy must travel through space to other bodies, most likely at the speed of light. For the Robertson-Walker metric, the receiving bodies would be the masses comprising the expanding Universe.
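The limiting behaviors of (50) and (51) are easy to verify. The sketch below is our own; the scale-factor ratios, velocities, and function names are arbitrary choices made for illustration.

```python
import math

C = 2.998e8   # m/s

def v_r(ratio, V):
    # Eq. (50): proper velocity when a(t)/a(t1) = ratio, given initial V
    a1, a = 1.0, ratio
    return V * a1 / math.sqrt(a**2 * (1.0 - (V / C)**2) + a1**2 * (V / C)**2)

def m_v(ratio, V, m_o=1.0):
    # Eq. (51): relativistic mass at scale-factor ratio a(t)/a(t1)
    v = v_r(ratio, V)
    return m_o / math.sqrt(1.0 - (v / C)**2)

# A comoving particle (V = 0) stays comoving; the V = c limit keeps v = c
print(v_r(2.0, 0.0), v_r(2.0, C) / C)      # 0.0 and 1.0

# Relativistic mass decays toward m_o as the universe expands
V = 0.8 * C
print(m_v(1.0, V), m_v(10.0, V))           # 1.667 -> close to 1.0

# Cosmological redshift limit: m_v(t)/m_V -> a(t1)/a(t) as V -> c
V = 0.999999 * C
print(m_v(4.0, V) / m_v(1.0, V))           # ~0.25 = a(t1)/a(t)
```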
Similarly, this approach can be applied to any metric to quantify the mass/energy lost or gained by falling bodies. The concepts developed herein only begin to explore this topic.
Equation (51) provides another way to calculate cosmological redshift. If m_V is the relativistic mass of a particle at time t_1 when it has initial velocity V, then the limit of m_v(t)/m_V as V → c is a(t_1)/a(t), which in terms of energy corresponds to the expected cosmological redshift of a photon [42].
XV. CONCLUSIONS
The proposed interpretation of gravitational energy provides a means to explore the disposition of gravitational energy in solutions to the field equations. The localization of gravitational energy in static fields is explained by simple concepts in which the energy is localized with the masses of bodies. Observers at different gravitational potentials would experience different mass reference frames. With this approach, it is shown that the relativistic mass of a falling body can be calculated directly from GR. Thus, the correspondence between general and special relativity is enhanced. Unlike the pseudotensor t_μν, this interpretation is independent of the choice of coordinates. The use of mass reference frames also conserves mass and energy during a lift/fall cycle, unlike the concept of invariant mass. Inasmuch as there currently exists no widely-accepted treatment of localized gravitational energy in GR, it is hoped that the proposed interpretation will be considered worthy of further study and discussion. Much work remains to be done.
Many attempts have been made to incorporate a scalar field into gravitational theory. Examples are the scalar-tensor theory of Brans and Dicke [53,54] and the dilaton field [55]. What this paper suggests is that, in static fields, GR already includes an intrinsic scalar field β_ab that accounts for the energy of the gravitational field.
Unlike static fields, time-varying fields require a net exchange of energy and mass between gravitating bodies. With this interpretation, gravitational waves and/or gravitons would convey energy and mass.
In Newtonian gravity, potential energy allows total energy to be conserved when lifting a body. The equivalence of a small amount of mass to a large amount of energy allows total energy and mass to be conserved without it, consistent with everyday experience. | 2012-11-27T05:44:51.000Z | 2012-11-27T00:00:00.000 | {
"year": 2012,
"sha1": "928a0d0b0f2e6a1c9bba4d05f3511647d3472ebb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "928a0d0b0f2e6a1c9bba4d05f3511647d3472ebb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
248621043 | pes2o/s2orc | v3-fos-license | Level of Knowledge of Self-Protection from Sexual Exploitation
10.31004/obsesi.v6i5.1859 Abstract Background: This research begins from the common view that sexual education is a taboo subject to discuss with children before they become adults, and from the many cases of sexual harassment reported in recent times. The purpose of this study is to enable early childhood children to protect themselves from sexual exploitation. The research method used is quantitative and quasi-experimental. The research subjects were the experimental group, class B1, and control group B2, with a total of 24 children. The research location is TK Pertiwi 26-13 Bogares Kidul, Pangkah District, Tegal Regency. The analysis of the experimental group's pre-test and post-test assessments yielded t = 5.548, p = 0.00. These results show a difference between before and after the intervention: before the intervention the children could not protect themselves from sexual crimes, while afterwards they showed good self-protection against sexual crimes. This means that role-play sex education can increase self-protection against sexual exploitation in early childhood.
Introduction
The life of a nation becomes the initial basis for determining life in the future (Suhsmi & Ismet, 2021). For this reason, planning for the next generation of the nation is needed to prepare children's growth and development properly (Tanu, 2019). Therefore children need protection from all elements (Pemerintah Republik Indonesia, 2014). No one is permitted to commit, ignore, carry out, order others to carry out, let alone be involved in, the economic and/or sexual exploitation of children.
But lately, in newspapers and on television, we can find many cases of sexual violence that have also happened to children, and this makes parents and educators anxious. The sad thing is that the exploitation is carried out by people whom the child knows (Zutema & Nurwati, 2020). Furthermore, researchers also examined data from claims submitted to the Indonesian Child Protection Commission (ICP): there were 234 cases of exploitation during April 2021. Based on this, ICP also urges all elements to conduct education to prevent the causes of exploitation (Ansori, 2021).
Then, based on data recorded at the Metro Jaya Regional Police in uncovering cases of child exploitation: since January 2021, Polda Metro Jaya has received 10 Police Reports (LP) and named 15 suspects in child exploitation cases. The victims of these cases amounted to 286 people, 91 of whom were still in early childhood (Ernes, 2021). Most cases of child sexual exploitation occur because children do not know and do not realize that it is wrong for adults to touch their private parts (Soesilo, 2021). This happens because children have never been taught to recognize their body parts, especially their genitals (Alucyana et al., 2020). Children are not introduced to which private parts are and are not allowed to be touched, and they do not understand how to protect themselves if they are subjected to such an act (Hi. Yusuf, 2020). Sex education should be provided by parents and schools, but in reality sex education is still not applied at home or at school (Azzahra, 2020). For this reason, it is necessary to have sex education in schools.
Sex education is an effort to introduce the names and functions of body parts, an understanding of gender differences, an interpretation of sexual relations, and knowledge of the rules and norms that exist in society regarding gender (Dewiani et al., 2020). Prevention of sexual abuse against children in the field of education aims to enable children to identify dangerous situations and prevent sexual abuse, as well as to teach children the forms of inappropriate touch, how to refuse or end interactions with suspicious people, and how to ask for help (Fitriani et al., 2021).
Sex education should indeed be taught from an early age to counter reproductive problems that remain widespread (Djiwantoro, 2004). Sex education should start as early as possible and continue regularly until the child begins to grow up (Dianawati, 2003). Learning about reproduction cannot be separated from spirituality, and everything must be based on religion (Rahman & Muliati, 2018). For that reason, it is necessary to form responsible individuals. Presenting sex education to children is a way to impart knowledge about the role of the reproductive organs while instilling moral ethics (Saripah et al., 2021).
As information about reproduction and sexual relations, sex education should not be complex (Irwanto et al., 1999). General sexuality education covers aspects of personality, social culture, psychology, and religion, and also teaches someone to be able to protect themselves (Husin & Guntara, 2021). The problem today is that many people think that sexual education is a taboo subject to discuss with children before they become adults. In addition, many cases of child sexual abuse occur because children do not know and do not realize that being touched on their private parts by adults is wrong treatment. This is because children do not have good and sufficient information about sex education.
One of the solutions used to overcome these problems is to use the role-play method. With the role-play method, children do not feel bored or restless in learning activities (Alucyana, 2018). This is because participants who participate actively learn three times more effectively than with a passive approach. The role-play method provides a fun atmosphere; children are actively involved in learning, feel as if they are playing during lessons, better understand the subject matter, and achieve improved learning outcomes.
Therefore, researchers are interested in using the role-play method to provide education about self-protection and the prevention of cases of child exploitation. This follows the theory of Jill Hahfiel and Wahab, who introduced the role-playing method. Role-playing is often intended as an application of experiential teaching (Jill Hahfiel and Wahab, 1998). With the role-playing method, students can appreciate the role they play and are able to place themselves in other situations that the teacher intends, especially situations concerning school life, family, and community behavior around the students (Mulyono, 2011). Theoretically, the role-playing method requires the involvement of some or all students in playing a character or object; this condition requires students not to remain silent, so they will be active, not static, but dynamic (Hamzah B Uno, 2012). Role-playing is a planned learning activity designed to achieve specific educational goals (Ilmanuddin & Siregar, 2019).
For this reason, the material taught in this counseling includes, first, material that introduces body parts and then the differences in body parts between boys and girls. The second material introduces situations that lead to sexual exploitation, such as seductive behavior, holding body parts, peeking at other people's body parts, undressing, and holding genitals. The third material covers ways to refuse offers, inducements, or coercion from others that make the child feel afraid or uncomfortable and that lead to acts of sexual exploitation. This study aims to determine children's ability to protect themselves from sexual exploitation before and after receiving sexual education with the role-play method. This research is useful for protecting early childhood from sexual abuse. Correct knowledge and understanding of sex will help children develop a sense of responsibility from an early age. The hypothesis in this study is that there are differences in self-protection from sexual exploitation in early childhood between before and after sex education with the role-play method.
Methodology
The study uses a quasi-experimental method. The experimental design employed was a non-randomized pretest and post-test group design. The research design is shown in Table 1. For more details, the research design is illustrated in Figure 1. This research uses subjects from groups B1 and B2 of TK Pertiwi 26-13 Bogares Kidul, where group B1 is the experimental group and class B2 is the control group. Group B2 was chosen as the control group because they already had better protection. The measuring instrument in this research was modified from the relevant research conducted by Hastjarjo (2019), namely the scale of sexual exploitation protection. The researcher modified the items used with reference to the subjects in the research. There are three aspects in the scale instrument, namely the anticipation of situations that tend toward sexual exploitation, the ability to take action when in a situation inclined toward sexual exploitation, and the ability to report situations that are prone to sexual exploitation. The sexual protection scale in early childhood consists of 25 items. The data analysis used in this research is quantitative analysis. Quantitative data analysis to test the hypothesis was carried out using statistical analysis in the form of a paired sample t-test technique. Differences in self-protection from the sexual exploitation of children due to differences in treatment outcomes were observed repeatedly, namely before treatment and after treatment, between the experimental group that received the sexual education training program and the control group.
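For readers who want to reproduce this kind of analysis, a minimal sketch of a paired sample t-test in Python follows. The scores below are invented for illustration, since the raw 25-item scale data are not published with this paper.

```python
from scipy import stats

# Hypothetical pre/post self-protection scores for one group of children;
# the study's actual item-level data are not available.
pre  = [10, 12,  9, 14, 11, 13,  8, 10, 12,  9, 11, 13]
post = [16, 17, 15, 19, 16, 18, 14, 15, 18, 13, 17, 19]

t_stat, p_value = stats.ttest_rel(post, pre)   # paired sample t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("mean gain =", sum(b - a for a, b in zip(pre, post)) / len(pre))
```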
Results and Discussion
The research subjects, divided into the experimental group and the control group in terms of gender and age, are presented in Table 2.
Hypothesis Testing Analysis Results
The results of data analysis between the group that received treatment (experimental group) and the group that did not receive treatment (control group) at the initial condition of the prior measurement (pre-test) found that t = 0.247, p = 0.809 > 0.05 (not significant). This means that there is no significant difference between the groups in the initial conditions at the pre-test.
The results of subsequent data analysis using a paired sample test for the group that did not receive treatment (control group), between the measurement before (pre-test) and after (post-test), found that t = 1.915, p = 0.082 > 0.05 (not significant). This means that there is no significant difference in this group between before and after the treatment period.
Then the results of the analysis for the experimental group that received treatment, between measurements before (pre-test) and after treatment (post-test), found that t = 5.548, p = 0.000 < 0.05 (significant). This means that there is a significant difference in the experimental group between before and after the treatment was given. Furthermore, to see the difference in self-protection scores, the average self-protection scores are presented in the accompanying table. The table shows that the experimental group experienced an increase in the average score of self-protection between measurements before (pre-test) and after (post-test) sexual education with role play. The average increase in self-protection was 5.35. We also present the quasi-experimental data through a pie chart, as shown in Chart 1.
Chart 1. Self Protection level of pre-test and post-test respondents
From pie chart 1, it can be seen that the level of knowledge of self-protection from sexual exploitation before children were given treatment in the form of sex education using the role-play method was very low, only around 32 percent. However, after being treated with sex education using the role-play method, the level of children's self-protection against sexual exploitation increased to 68 percent.
From the results of the analysis, it can be concluded that the hypothesis proposed in this research is accepted. There is a significant difference in self-protection in early childhood between before and after receiving sex education with role-play media.
Research preparation
Researchers carried out the process of preparing role-play educational materials. The researcher modified the scale of self-protection from sexual exploitation from a previous study developed by Malikah (2011). The material was adapted to the context of cases of sexual exploitation that are often experienced by children or reported in the media. This is in line with research conducted by Santosa Budi (2009), which explains how important education from pre-school age is. Furthermore, Ratnawati (2021) mentions that one solution to prevent cases of sexual violence in early childhood is sex education. And Dini et al. (2021) describe the importance of parents and schools introducing sex education from an early age. For this reason, in this study the researchers directed the content toward self-protection from sexual exploitation in early childhood. The research instrument consisted of 25 items.
Research Implementation
After all research preparations are completed, the next stage is the implementation of the sex education role-play program. This is in line with the theory of Jill Hahfiel and Wahab (1998), which explains that there is a need for attractive sex education for children. This is as researched by Fitriani et al. (2021), who conducted research on sex education for early childhood. Their research used books, while the present study used the role-playing method. Furthermore, research was conducted by Soesilo (2021), Anwar & Alfina (2021), and Munisa (2019). In those studies, sex education was carried out through parenting programs that targeted parents, while the present study delivers sex education through role-playing learning methods. Next, the research conducted by Alamsyah et al. (2021), which measured knowledge of sex education with Fan Flashcard media, found a considerable influence on increasing mothers' knowledge of sex education; this differs from the present study because the sex education here was delivered through the role-play method. These five studies show that this research is novel because it delivers sex education through role-playing by the children themselves, so that children can explore knowledge about sex education through role-playing. For this reason, the researcher explains that this program was carried out in an attractive form using the role-playing method.
Implementation using three materials
The first material concerns development, specifically the sexual development of children, namely efforts to identify which body parts may be touched and which may not, and who may and may not touch or see them. The second material is about identifying situations that lead to acts of sexual exploitation, including seductive behavior, holding prohibited body parts, peeking at other people's body parts, undressing, and holding genital parts. The third material is about self-protection from sexual exploitation. Efforts to identify and avoid sexually exploitative behavior relate to several circumstances, including protection through attention to how one dresses, protection when jostling in crowds, and protection by rejecting offers, inducements, or coercion from others that make the child feel afraid or uncomfortable and that lead to acts of sexual exploitation.
After the process of providing the program with the three materials was carried out in the experimental group, the participants were immediately given a post-test by giving a sexual protection scale again. In the control group, a post-test was also carried out by giving a sexual protection scale without being given any previous treatment. This process is carried out to see the effectiveness of providing sexual education programs with the role-play method in early childhood.
The results obtained in this research show that there is a significant difference in self-protection in early childhood between before and after receiving sex education with role-play media. Statistical analysis of the experimental group that received treatment, between measurements before (pre-test) and after therapy (post-test), found that t = 5.548, p = 0.000 < 0.05 (significant). This means that there is a significant difference in the experimental group between before and after the treatment was given. In the experimental group, there was an increase in the average self-protection score between the pre-test and post-test measurements of sex education by the role-play method, with an average increase in self-protection score of 5.35.
There are several factors that support the improvement of protective abilities in children, chief among them the role-play method used during the sex education process. The strength of the role-play method is that it conveys material through play that involves all aspects of children's abilities: cognitive, language, and motor skills. For example, in giving the third material, the subjects were asked to practice it directly, involving all aspects of the child's ability to rehearse certain attitudes and actions to avoid sexual exploitation in early childhood.
The role-play method narrows the distance between abstract material and the realities of life, presenting a more realistic picture of situations that are genuinely threatening today. This is consistent with cognitive development in childhood, which begins with learning to deal with problems in a concrete way. This learning process is a function of the interaction between children and the outside environment, so that they become better trained and, with assistance from others, can handle the problems they face.
Relevance to reality means providing material that is as close as possible to the subject's reality, including the examples given and the language used. The material is presented with many examples of its important parts. The facilitator often repeats the key responses: say no when a stranger offers something, and shout for help if someone touches or exposes the private area.
The success of sex education using the role-play method shows the positive influence of sex education using the role-play method as an effort to protect oneself from sexual exploitation in early childhood. Sex education using a comprehensive role-play method that includes biological, sociocultural, psychological, and spiritual dimensions, including teaching children to be able to protect themselves and make responsible decisions is very important for children.
One way to increase the ability to protect themselves from sexual exploitation in early childhood is to provide sex education using the role-play method. The results of this research show that sex education using the role-play method can effectively increase the ability to protect oneself from sexual exploitation in early childhood. Sex education using the role-play method can be implemented and become part of the school. Schools can prepare and synergize sex education using this role-play method in the learning process and in the policies that have been set at the school.
Sex education using the role-play method consists of the knowledge and skills provided as part of an effort to protect children from sexual exploitation. This role-play method of education can become a program in the school curriculum. The material provided in this program can be synergized with other materials for the children. It must be introduced to children from the beginning so that it can serve as self-protection against the threat of sexual exploitation of children.
Conclusion
The level of knowledge and the self-protection ability of early childhood children who received sex education with the role-play method increased in a better direction. This shows that there is a significant difference in self-protection in early childhood between before and after receiving sex education using the role-play method.
"year": 2022,
"sha1": "7a0d59e779fa39555299ba4d1c4150557c75ad4f",
"oa_license": "CCBYSA",
"oa_url": "https://obsesi.or.id/index.php/obsesi/article/download/1859/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "eae3648426206cb882ae9cdd4d6aa3166d823499",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
10869270 | pes2o/s2orc | v3-fos-license | Decreased activation of inflammatory networks during acute asthma exacerbations is associated with chronic airflow obstruction
Asthma exacerbations are associated with subsequent deficits in lung function. Here, we tested the hypothesis that a specific pattern of inflammatory responses during acute exacerbations may be associated with chronic airway obstruction. Gene coexpression networks were characterized in induced sputum obtained during an acute exacerbation, from asthmatic children with or without chronic airflow limitation. The data showed that activation of Th1-like/cytotoxic and interferon signalling pathways during acute exacerbations was decreased in asthmatic children with deficits in baseline lung function. These associations were independent of the identification of picornaviruses in nasal secretions or the use of medications at the time of the exacerbation. Th2-related pathways were also detected in the responses, but variations in these pathways were not related to chronic airways obstruction. Our findings demonstrate that decreased activation of Th1-like/cytotoxic and interferon pathways is a hallmark of acute exacerbation responses in asthmatic children with evidence of chronic airways obstruction.
Introduction
In most cases of asthma, the first symptoms of the disease occur during childhood,1 and in a subgroup of children, the disease is associated with deficits in airway function that track with age and predict persistence of asthma symptoms into adult life. [2][3][4] There is now strong evidence suggesting that these deficits are acquired during the course of the disease,5 are more likely to become apparent during childhood,6 and that chronic administration of inhaled corticosteroids does not prevent them.7 Moreover, at least one-third of older asthmatics develop chronic obstructive pulmonary disease (COPD),8 and among that third, airflow limitation is often present in early adult life.9 Thus, a better understanding of the molecular mechanisms that may be associated with airflow limitation in children with asthma could help design new treatments to prevent persistent asthma and asthma-associated COPD.
Recent longitudinal studies have suggested that, among children and adults with asthma, those with a higher incidence of acute exacerbations are more likely to have subsequent deficits of lung function growth10 or excess decline,11 respectively. The mechanisms that underlie the associations between exacerbations and chronic lung disease in humans are unknown. Evidence of infection with rhinoviruses has been reported in 50-70 % of asthma exacerbations,12 and accumulating data suggest that immune responses to viruses may be altered in asthmatics. When infected with rhinovirus, bronchial epithelial cells obtained from adult atopic asthmatics have deficits in interferon type I and III responses.13, 14 Impaired interferon gamma and augmented Th2 responses in blood and BAL T cells from asthmatics are associated with rhinovirus-induced clinical illness severity and viral load. 15 Rhinovirus-induced interferon gamma-to-IL-5 response ratios in PBMC cultures in vitro are positively correlated with forced expiratory volume in 1 second (FEV1), 16 and these same response ratios in sputum are inversely related to viral clearance in vivo. 17 A recent study in mice demonstrated that respiratory viral infections can trigger the development of a chronic asthma/COPD-like lung disease after the virus is cleared to trace levels. 18 In this mouse model, invariant natural killer T (iNKT) cells play a central role in disease pathogenesis; nevertheless, there are conflicting data on the role of iNKT cells in the pathogenesis of human asthma. 19,20 We postulated that children whose asthma is associated with chronic airflow limitation could have a specific pattern of inflammatory responses in the airways that predisposes them to the development of deficits in airway function. 21 To test this hypothesis, we induced sputum at the time of a moderate exacerbation and 7-14 days later in a group of children with asthma followed prospectively, and compared global patterns of gene expression in sputum cells in those with or without baseline airflow limitation. Our results show that the activation of Th1-like/cytotoxic and interferon signalling pathways during exacerbations is decreased in asthmatic children with evidence of chronic airflow obstruction. Moreover, regardless of the presence or absence of chronic airflow obstruction, acute exacerbations were associated with markers of iNKT cells, and the latter were highly correlated with cytokines that promote Th1/cytotoxic responses (IL-12A, IL-21).
Wurzburg, Germany). The ratio between FEV1 and forced vital capacity (FVC) was used to assess airflow limitation at baseline, as suggested by Rasmussen et al.22 Skin test reactivity to local allergens was measured (Alternaria, dust mite mix [Dermatophagoides farinae plus Dermatophagoides pteronyssinus], olive, Bermuda, careless weed, mulberry, cockroach, cat, dog, mouse, and ragweed). After enrollment, the children were followed for 18 months or until their first asthma exacerbation, whichever came first.
A moderate exacerbation was defined with criteria equivalent to those recently proposed by the ATS/ERS consensus.23 If a child experienced symptoms of cough, dyspnea, chest tightness, and/or wheeze, they were instructed to initiate use of albuterol (2 puffs, 90 mcg/puff) by MDI every 20 minutes for up to 1 hour and then every 4 hours if necessary. For those who could use a peak flow meter, a reading of < 80 % of personal best was considered indicative of an exacerbation. If the subject could not get symptom relief after 3 treatments, parents were instructed to contact the study center immediately. These three conditions (symptoms of exacerbation, low peak flow readings, or lack of relief or persistence of symptoms after three treatments) were considered indicative of an exacerbation. Participants who met these criteria were scheduled for a visit at the study center within 24 hours. Ascertainment of an exacerbation was ultimately made by a study physician. This research was approved by the Institutional Review Board of the University of Arizona and informed consent was obtained for all participants.
Sputum induction
At the time of the acute exacerbation visit, a physical examination was performed by a study physician, lung function was assessed by spirometry, nasal secretions were collected to test for evidence of a picornavirus infection (online methods), and sputum was induced based on the techniques recommended by Gershman et al24 with slight modification. Following the measurement of spirometry pre- and post-bronchodilator, a peak flow measurement was made to determine the subject's baseline. Participants then inhaled 3 % saline for 1.5 minutes, after which any accumulated saliva was discarded, followed immediately by coughing and expectoration of sputum. Peak flow was checked to assure that there had not been a 15 % or greater drop following saline inhalation. This procedure was repeated at 2-minute intervals six times. Following the final inhalation of 3 % saline and expectoration, a full spirometric maneuver was done in place of peak flow to assure no decline from baseline values. Only children whose FEV1 was > 70 % of predicted were eligible for sputum induction. Seven to fourteen days after the acute episode, children were seen again in the study clinic and the physical examination, nasal washes, and sputum collection were repeated as described above.
Sputum processing and RNA stabilization
Sputum was stored at 4 °C and processed within one hour of collection. Briefly, whole sputum was diluted with a volume of 0.1 % Dithiothreitol (DTT-Sputolysin 10 %; Calbiochem Corp, La Jolla, CA) equal to that of the weight of the original sample in grams, and incubated on a shaking water bath at 37 °C for 15 minutes with intermittent aspiration via pipetting every 5 minutes. Homogenized sputum was centrifuged (800 g) for 10 minutes. The cell pellet was immediately resuspended in RNAlater Stabilization Reagent (Qiagen, Valencia, CA) as per manufacturer's recommendations. Total cell counts and differentials were determined from an aliquot of the original homogenized sample. Slides were prepared (Cytospin; Shandon; Runcorn, UK) and stained with a Wright-Giemsa stain. Differential cell counts were made by a blinded observer. One hundred cells were counted for each sample. Differential cell counts are expressed as percentages of total cells.
Microarray-based expression profiling studies
Total RNA from sputum samples stored in RNALater was extracted with TRIzol (Invitrogen, Carlsbad, CA) followed by RNeasy (QIAgen, Valencia, CA). In preliminary studies based on Bioanalyzer analysis, we noted some variation in the integrity of RNA from sputum. However, the microarray and real time quantitative reverse transcription PCR (qRT-PCR) protocols employed in the study are based on random priming and are thus tolerant to variations in RNA quality. Total RNA samples (n=20) were labelled and hybridized to Human Gene ST 1.0 microarrays (Affymetrix, Santa Clara, CA), at the Arizona Cancer Center Genomics Core, the University of Arizona. The microarray data are available from the Gene Expression Omnibus repository (accession GSE19903).
The microarray data were preprocessed in Expression Console software (Affymetrix, Santa Clara, CA) employing the probe logarithmic intensity error algorithm with GC background subtraction, quantile normalization, and iterPLIER summarization. The preprocessed data were imported into the R language for statistical computing (http://www.r-project.org/), and variance stabilization was performed by adding the small constant 16 to all the data points, followed by log2 transformation.
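For readers who want to see the shape of this preprocessing, the following is a minimal Python/NumPy sketch (ours, not the authors' Affymetrix/R pipeline) of the two generic steps just described, quantile normalization and the +16/log2 variance stabilization; the function names and the toy matrix are illustrative assumptions.

import numpy as np

def quantile_normalize(expr):
    # Force every sample (column) to share the same empirical distribution:
    # each column's values are replaced by the row-wise mean of the sorted columns.
    order = np.argsort(expr, axis=0)
    mean_dist = np.sort(expr, axis=0).mean(axis=1)
    out = np.empty_like(expr, dtype=float)
    for j in range(expr.shape[1]):
        out[order[:, j], j] = mean_dist
    return out

def variance_stabilize(expr, offset=16.0):
    # Add a small constant to damp the variance of low intensities, then log2.
    return np.log2(expr + offset)

expr = np.array([[5.0, 7.0], [120.0, 90.0], [4000.0, 3500.0]])  # probes x samples
processed = variance_stabilize(quantile_normalize(expr))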
Reverse engineering gene network analysis
The microarray data were filtered to select highly variable genes (the top 10 % on the microarray, 3247 genes) as well as the top 1500 genes that differed between the respective responses (i.e., in subjects with (n=10) or without (n=10) deficits in enrollment FEV1/FVC ratios) according to their statistical ranking from a Bayesian t.test analysis.25 Network analysis was then performed on the filtered dataset in all subjects employing the weighted gene coexpression network analysis (WGCNA) algorithm.26-28 The algorithm calculates absolute Pearson correlations for all pairwise gene-gene combinations across the test samples. The correlations are then raised to a power to emphasize stronger over weaker correlations. Genes that had a low overall correlation with the coexpression network were removed from the analysis (approximately 25 % of initial genes removed). The topological overlap of the gene-gene correlations was calculated to quantify the extent to which genes have similar overall patterns of correlations with other genes. The topological overlap similarity measure was subtracted from one to convert it into a distance measure and then analyzed by hierarchical clustering to group highly correlated genes into subnets (modules). The modules were defined from the dendrogram output of the cluster analysis employing an automated algorithm (cutreeDynamic). The overall expression of the modules was compared between the respective responses employing Gene Set Analysis without correcting for multiple testing. 29 To determine if the modules were stable, a randomly selected sample was removed from the analysis just prior to the hierarchical clustering stage, and new modules were defined. This process was repeated an additional four times, and the stability of the modules was calculated as the proportion of genes from the original cluster that were detected in the same cluster, averaged over the five iterations.
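The stepwise process just described can be illustrated with a compact Python sketch (assuming numpy and scipy are available). This is a simplification of WGCNA, not the R implementation the authors used: in particular, a fixed module count stands in for the cutreeDynamic step, and the soft-thresholding power is a placeholder.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def coexpression_modules(expr, power=6, n_modules=12):
    # expr: samples x genes matrix of filtered expression values
    adj = np.abs(np.corrcoef(expr, rowvar=False)) ** power  # soft-thresholded |Pearson|
    np.fill_diagonal(adj, 0.0)
    k = adj.sum(axis=0)                       # gene connectivities
    shared = adj @ adj                        # shared neighbours (paths of length 2)
    tom = (shared + adj) / (np.minimum.outer(k, k) + 1.0 - adj)  # topological overlap
    dist = 1.0 - tom                          # convert similarity to a distance
    np.fill_diagonal(dist, 0.0)
    condensed = dist[np.triu_indices_from(dist, k=1)]
    tree = linkage(condensed, method='average')
    return fcluster(tree, t=n_modules, criterion='maxclust')  # module label per gene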
Bioinformatics analysis of molecular signatures
The list of genes in the Th1-like/cytotoxic pathway (Table S2) was interrogated for significant overlaps with the collection of 1,892 curated molecular signatures from the Molecular Signatures Database. The database contains annotated pathways from online databases, and molecular signatures from published microarray studies, thus captures a broad range of biological, cellular, and clinical states. Statistically significant overlaps were identified based on the Hypergeometric distribution.30 This analysis was performed online (http://www.broadinstitute.org/gsea/index.jsp).
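The overlap test described above reduces to a one-sided hypergeometric tail probability; here is a minimal sketch (assuming scipy), with made-up set sizes for illustration only.

from scipy.stats import hypergeom

def overlap_pvalue(universe, signature, query, overlap):
    # P(X >= overlap), where X counts genes shared by a random query of the
    # same size and the curated signature, drawn from the gene universe.
    return hypergeom.sf(overlap - 1, universe, signature, query)

# Hypothetical numbers: 20000-gene universe, 300-gene curated signature,
# 150-gene query pathway, 25 genes in common.
p = overlap_pvalue(universe=20000, signature=300, query=150, overlap=25)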
qRT-PCR validation studies
Total RNA was reverse transcribed with a combination of random nonamers and oligo-dT priming using the Quantitect reverse transcription kit with integrated genomic DNA removal (QIAgen, Valencia, CA). qRT-PCR analysis was performed with Quantitect SyBr green (QIAgen, Valencia, CA) on the 7900 thermocycler (Applied Biosystems, Foster City, CA). The primer assay sequences for FCER1A were obtained from Primer Bank (http://pga.mgh.harvard.edu/primerbank/index.html), and all other assays were obtained from QIAgen. Quantification was based on the relative standard curve method, and standards were prepared for each assay by serially diluting qRT-PCR products. The specificity of the qRT-PCR assays was confirmed by dissociation curve analysis and by testing negative RT control reactions. To select a housekeeping gene for normalization of the qRT-PCR data, we interrogated the microarray data for the following set of housekeeping genes: ACTB, ALAS1, B2M, EEF1A1, GAPDH, GUSB, HMBS, HPRT1, PGK1, PPIA, RPL13A, RPL27A, RPL37A, RPLP0, RRN18S, SDHA, TBP, TFRC, TUBB, YWHAZ. We selected HMBS31 because it had the lowest variance. We also confirmed by qRT-PCR that the variance of HMBS expression was an order of magnitude lower than that of one of the most stably expressed genes in the genome (i.e., EEF1A1)32 and several orders of magnitude lower than RRN18S (18S rRNA). It is noteworthy that HMBS is expressed at low levels,31 thus providing more adequate adjustment for variations in RNA quality than highly expressed housekeeping genes.
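As a sketch of the normalization logic (ours, not the authors' code): pick the candidate housekeeping gene with the lowest variance across the expression data, then express each target quantity relative to it. The candidate list below is a subset, for brevity.

import numpy as np

CANDIDATES = ["ACTB", "B2M", "GAPDH", "HMBS", "PPIA", "TBP"]

def pick_housekeeper(expr, gene_names, candidates=CANDIDATES):
    # expr: genes x samples matrix; choose the candidate with the lowest variance.
    index = {g: i for i, g in enumerate(gene_names)}
    present = [g for g in candidates if g in index]
    return min(present, key=lambda g: np.var(expr[index[g]]))

def normalize(target_quantity, housekeeper_quantity):
    # Relative standard curve method: report target relative to the housekeeper.
    return np.asarray(target_quantity, float) / np.asarray(housekeeper_quantity, float)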
Statistical design and methodology
This study was designed with a two-stage analytical strategy. First, a case-control approach was employed; two groups of 10 subjects each from both extreme tails of the FEV1/FVC distribution at enrollment were selected for the microarray profiling studies. In the second phase, qRT-PCR validation studies were performed on all available exacerbation sputum samples from the whole population (n=40), and in these analyses, FEV1/FVC at enrollment, but also during the exacerbation and at convalescence, was treated as a quantitative trait.
Undetectable qRT-PCR data points were substituted with half the lowest value. Proportions were compared using the Pearson chi-square or Fisher exact test. Association was assessed using the non-parametric Spearman rank-order correlation coefficient. Tobit regression was used to adjust for confounders when the dependent variable was left-censored. Linear regression was used to adjust for controller medication use when the dependent variable was normally distributed. T-tests and one-way ANOVA were used to assess differences between groups for normally distributed data. Mann-Whitney U and Kruskal-Wallis H tests were used to assess differences between groups for non-normally distributed data. The Wilcoxon rank test was used for non-parametric paired data assessments. The data were analyzed using SPSS 17.0 (SPSS Inc, Chicago, IL), STATA 10.0 (StataCorp, College Station, TX) and GraphPad Prism (GraphPad Software, Inc, CA).
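Two of these conventions are easy to make concrete in a short Python sketch (assuming numpy and scipy; the data values are invented): substituting undetectable points with half the lowest value, and the Spearman correlation used for association.

import numpy as np
from scipy.stats import spearmanr

def substitute_undetected(values):
    # Replace NaN (undetectable qRT-PCR points) with half the lowest observed value.
    v = np.asarray(values, dtype=float)
    v[np.isnan(v)] = np.nanmin(v) / 2.0
    return v

expression = substitute_undetected([2.4, np.nan, 8.1, 1.6])
fev1_fvc = [78.8, 85.0, 91.3, 80.1]
rho, p_value = spearmanr(expression, fev1_fvc)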
Study population and follow-up
The study population consisted of 218 mild/moderate persistent asthmatic children aged 6-18 years who had at least one acute asthma exacerbation during the previous year. Of these 218 children, 117 experienced an exacerbation during the observation period, and sputum samples from 40 of these subjects were available for this study. Of all baseline characteristics, only ethnicity, maternal smoking, and frequency of ever having been hospitalized for asthma were different between included and excluded children who had exacerbations (Table 1).
At enrollment, modest correlations were observed between FEV1/FVC ratio and height and age (rho=−0.33, p-value =0.035 and rho=−0.37, p-value=0.020, respectively). There was no statistically significant relation between ethnicity, gender, parental asthma, current smoking, skin test reactivity, detection of picornavirus in nasal secretions, ever hospitalized for asthma, or concurrent medication use and the baseline FEV1/FVC ratio (Table S1).
Gene coexpression network analysis of exacerbation responses; variations associated with FEV1/FVC ratio
Two groups of 10 subjects each from both extreme tails of the FEV1/FVC distribution at baseline were selected (see Methods) and their mean ± SD FEV1/FVC ratios were 78.8 ± 2.1 and 91.3 ± 2.5, respectively. Total RNA was extracted from induced sputum samples obtained from these subjects during an exacerbation, and the samples were labelled and hybridized to microarrays. A holistic approach was employed to analyse the microarray data, based on reverse engineering gene network analysis. [26][27][28] The mechanics of the network analysis algorithm are detailed in Methods; briefly, the algorithm employs a stepwise analytical process to interrogate variations in gene coexpression patterns across all the samples to identify subnets of coexpressed genes or modules. These modules contain the molecular components that execute biological functions and underpin physiological and diseased states.26-28 As illustrated in Figure 1A, network analysis of exacerbation responses in sputum identified twelve modules of coexpressed genes. To determine if any of the modules were differentially expressed in responses from subjects with or without deficits in FEV1/FVC ratios, the modules were tested with Gene Set Analysis29 -a statistical procedure that tests for the association of a set of genes with a trait of interest. This analysis provided preliminary evidence that three modules were associated with enrollment FEV1/FVC ratios (modules II, III, and IV, Figure 1B), and accordingly these modules were selected for further study.
Th2-related pathways were not detected in the network analyses; however, a conventional differential expression analysis of the microarray data suggested that FCER1A and IL-5 were elevated in the responses from subjects with low FEV1/FVC ratios ( Figure S1). Therefore we also selected Th2-related genes (FCER1A, IL-5, IL-13) for further study.
Gene expression patterns during acute exacerbations and at 7-14 days convalescence
The microarray analyses above were based on sputum samples obtained during an acute exacerbation, thus it is not known if the gene expression signals are transient/activation-associated, or constitutive. To obtain more detailed information in this regard, we profiled expression of representative genes with known immunological functions from the identified pathways, by real time qRT-PCR in paired acute and convalescent samples, which were available for a subset of the study population (n=18). The responses for all 18 subjects combined are shown in Table 2 (see Figure 2 for illustrative examples), and these data demonstrated that representative genes in the Th1-like/cytotoxic pathway, the interferon signalling pathway, and the Th2 pathway (FCER1A, IL-5) were significantly elevated in acute versus convalescent comparisons, and there was also a trend for increased levels of IL-13 (p-value=0.051). In contrast, genes in the epithelial differentiation pathway were not significantly modulated in acute versus convalescent comparisons (Table 2). Of note, expression of the housekeeping gene HMBS, which was utilized to normalize the qRT-PCR data to adjust for variations in RNA quantity and quality, was not different in acute versus convalescent comparisons (p-value=0.92).
Stratification of the subjects (from Table 2/Figure 2) into FEV1/FVC tertiles demonstrated that the responses for representative genes from the Th1-like/cytotoxic and interferon signaling pathways were much more consistent and/or intense in subjects with higher baseline FEV1/FVC ratios (Figures S2-S6). Moreover, statistical analyses detected consistent associations between FEV1/FVC ratios at enrollment and exacerbation responses for genes from the Th1-like/cytotoxic and interferon signaling pathways, but not from the epithelial differentiation or Th2-related pathways (Figure S7).
qRT-PCR analysis of exacerbation responses and variations in FEV1/FVC ratios
We next sought confirmation of the preliminary associations between exacerbation response patterns and enrollment FEV1/FVC ratios detected by microarray in Figure 1B, employing real time qRT-PCR analysis of all available samples from the whole population. Of note, these subjects exhibit a broad range of FEV1/FVC ratios, thus in the analyses below FEV1/FVC was treated as a quantitative trait. The non-parametric analyses illustrated in Table 3 confirmed that expression of representative genes from the Th1-like/cytotoxic and interferon signaling pathways during exacerbations was strongly and positively correlated with enrollment FEV1/FVC ratios. However, we could not demonstrate that similar correlations were observed with FEV1/FVC ratios at the time of the exacerbations, when the sputum samples themselves were obtained. There were no associations between expression of genes from the epithelial differentiation and Th2-related pathways with FEV1/FVC ratios assessed at any stage. No association was detected with FEV1/FVC ratios and expression of the house keeping gene HMBS.
Inclusion in the analyses of sputum samples which contain a higher proportion of contaminating squamous cells may decrease power to detect true associations, and this could potentially explain the lack of association between the epithelial and/or Th2-related pathways with FEV1/FVC ratios. To address this issue, we excluded samples which contained more than 30 % squamous cells, repeated the analyses, and the results were unchanged (Table S6).
Variations in gene expression could potentially be explained by variations in the clinical characteristics of the study population. However, no consistent associations were found between gene expression levels and skin test reactivity, or evidence of concurrent picornavirus infection, or use of specific medications at the time of exacerbation (Table S7).
Exacerbation responses, specific medications, and FEV1/FVC ratios
Linear regression modelling was employed to investigate associations between exacerbation response patterns and FEV1/FVC ratios, whilst adjusting for medications used at the time of the exacerbation. Consistent with the unadjusted analyses in Table 3, we identified strong and positive associations between the Th1-like/cytotoxic and interferon pathways and enrollment/convalescent FEV1/FVC ratios, and again, no associations were detected between FEV1/FVC ratios and either Th2 or epithelial differentiation pathways (Table S8).
Cellular immune signatures enriched within the Th1-like/cytotoxic pathway
In order to attempt to infer the cellular origins of the gene expression signals in the Th1-like/cytotoxic pathway, we employed a bioinformatics "in-silico" approach using a publicly available database (see Methods). Previous studies have documented high expression levels of the cytotoxic pathway in CD4 T cells, CD8 T cells, and NK cells from blood,36 and in these same cell populations isolated from the airways during respiratory viral infections.37 Our bioinformatics analyses (Table S9) showed that the list of genes in the Th1-like/cytotoxic pathway was significantly enriched for molecular signatures associated with antiviral responses (p-value = 1.13 × 10^-9), CD4 T cells (p-value = 1.05 × 10^-9), cytotoxic CD8 T cells (p-value = 1.63 × 10^-5), T cell receptor signalling (p-value = 2.7 × 10^-4), and NK cell mediated cytotoxicity (p-value = 1.77 × 10^-5). However, we also identified a signature for iNKT cells (p-value = 1.04 × 10^-5) in the Th1-like/cytotoxic pathway. In addition, two genes (SLAM, SAP/SH2D1A, Table S2) which are central to the development of iNKT cells38 are also part of the Th1-like/cytotoxic network.
To obtain more detailed information on the role of T cells and iNKT cells in exacerbation responses, we investigated expression of T cell receptor transcripts by qRT-PCR in the sputum samples.20 Expression of the T cell receptor constant chain (TRBC2) -a marker of T cells including iNKT cells, was detected in all exacerbation and convalescent samples, and expression levels were significantly elevated during exacerbations ( Figure 3A). Expression of the iNKT cell markers Vα24 and Vβ11 was detected in 80 % and 70 % of the exacerbation responses respectively (data not shown), and once again expression levels were significantly elevated during exacerbations ( Figure 3B/C).
Finally, we investigated associations between expression of T cell and iNKT cell markers, exacerbation response patterns, and FEV1/FVC ratios. These analyses demonstrated that TRBC2 and iNKT cell markers were not related to FEV1/FVC ratios (data not shown), however the former was strongly and positively correlated with the activation of Th1-like/ cytotoxic and interferon pathways, and the latter were strongly correlated with IL-12A and IL-21 responses (Table S10, see Figure 3E/F for illustrative examples).
Discussion
Childhood asthma is a heterogeneous condition characterized by recurrent episodes of reversible airway obstruction. In a substantive proportion of children with asthma, the disease is associated with progressive airflow limitation, as assessed by the development of deficits in FEV1/FVC ratio, a useful index of the remodelling state of the airway unrelated to anthropometric data.22 What determines this irreversible loss of lung function is unknown, but a recent longitudinal study in children and adults with asthma of recent onset suggested that those who had exacerbations during follow-up were at increased risk.10 The purpose of this study was thus to determine if children with baseline airflow obstruction had a pattern of inflammatory responses during acute asthma exacerbations that was different from that of children without baseline airway obstruction. We demonstrated that decreased activation of Th1-like/cytotoxic and interferon signaling pathways during acute asthma exacerbations was strongly associated with chronic airways obstruction. These associations were independent of atopy, the detection of concurrent picornavirus infections, and the use of medications at the onset of the exacerbation. Th2-related pathways were also detected in the exacerbation responses (Figure 2, Figure S1), but variations in these pathways were not related to FEV1/FVC ratios. Interestingly and unexpectedly, the patterns of association observed between acute gene expression and FEV1/FVC ratios at baseline and during convalescence were not observed with the FEV1/FVC ratio measured during the acute episode. We speculate that determinants of the severity of the acute obstructive response may be different from those that influence long-term effects on lung function. Finally, bioinformatics analyses of cellular immune signatures enriched within the Th1-like/cytotoxic pathway suggested a role for T cells and iNKT cells in the exacerbation responses, and we showed that the activation of Th1-like/cytotoxic and interferon signaling networks was strongly correlated with a marker of T cells (TRBC2). Markers of iNKT cells (Vα24, Vβ11) were also associated with acute exacerbations, and although they were not related to airways obstruction, they were strongly correlated with IL-12A and IL-21 responses. These results thus suggest that impairment of acute Th1-like/cytotoxic and interferon signaling responses, presumably to viruses and other environmental exposures, may play a major role in the development of chronic airways obstruction in asthma.
Our findings confirm and extend previous studies demonstrating deficient rhinovirus-induced interferon responses in asthmatic adults in vivo.14, 15, 17 Of particular interest were the studies by Wark et al13 who compared interferon beta responses, induction of apoptosis, and viral release after rhinovirus infection in bronchial epithelial cells obtained from asthmatic adults with and without airflow limitation (with only the former requiring treatment with inhaled corticosteroids) and normal controls. They found that, whereas interferon beta and apoptotic responses were impaired in both groups with asthma, viral release from these cells was significantly increased only among asthmatic subjects with airflow limitation. Contoli et al14 reported deficits in interferon lambda responses and increased viral shedding by rhinovirus-infected bronchial epithelial cells obtained from subjects with asthma who had mild baseline airflow limitation. These results thus suggested that deficits in several innate inflammatory responses might facilitate virus replication and cytolysis, with increased infection of neighboring airway cells, thus inducing exaggerated secondary responses, which in turn may activate remodeling and abnormal repair mechanisms. Our study addresses this potential complexity by adopting an unbiased approach to the assessment of inflammatory networks that are set in motion during real-life asthma exacerbations in children. We demonstrate that the Th1-like and interferon pathways function in the broader molecular context of the cytotoxic pathway (granulysin, granzyme B, perforin-1) and common gamma chain cytokine signalling pathways (IL-2Ra, IL-15, IL-21), which are known to enhance the differentiation, proliferation, and/or cytotoxic functions of CD4 T cells, CD8 T cells, NK cells, and iNKT cells (Table 2). It is also noteworthy that interferon beta, in addition to inducing a robust antiviral state via upregulation of the archetypal interferon-induced genes (Mx1, PKR, PML), also potently activates cytotoxic responses.39 Thus, activation of a complex network of interrelated immune mechanisms seems to be necessary to protect subjects with asthma from the development of airflow limitation after acute asthma exacerbations.
The detection of iNKT cells in sputum during exacerbations is not all that surprising, given their established role in immune responses against respiratory infections18, 40 and to a broad range of other pathogens. 41 Our findings demonstrate that markers of iNKT cells are strongly associated with IL-12A and IL-21 responses (Table S10, Figure 3E/F). Studies in animal models have suggested a pathogenic role for iNKT cells in asthma. 18,48 However, immune responses in mice do not always mimic those in humans,49, 50 and data on iNKT cells in human asthma are limited and conflicting.19, 20 iNKT cells were significantly increased during exacerbations in our data, but this association was modest due to evidence for the persistence of these cells in sputum 7-14 days after the exacerbation in a subset of the subjects (Figure 3B/C). This was in contrast to a broader marker for T cells, which was reduced to relatively low levels at 7-14 days convalescence (Figure 3A). Unfortunately, the numbers of subjects and duration of follow-up were insufficient to determine if persistence of markers for the presence of iNKT cells was associated with chronic airflow limitation, as suggested by mouse models. Further studies are needed to examine the relationship between acute asthma exacerbations, chronic airway obstruction, and numbers of iNKT cells and T cells in the airways.
This study has limitations which should be acknowledged. In our study, the abnormalities in lung function were already present at enrollment, and thus, a cause-effect relation between the inflammatory patterns we observed and these abnormalities cannot be proven using our study design. However, a study design in which these patterns were measured before the development of airflow limitation and the latter was assessed prospectively thereafter would require testing young children in whom induced sputum cannot be obtained and many years of follow-up. Also, the expression profiling studies and network analyses were performed on a mixed cell population from induced sputum, which is subject to sampling variability and contamination with saliva and squamous cells. Follow up studies on highly purified cell populations should thus increase the sensitivity and precision/resolution of these analyses. Finally, sputum induction was not performed during severe exacerbations because it was deemed unsafe and unethical, and thus the mechanisms operating in severe exacerbations may be different from those reported herein.
These limitations notwithstanding, our findings for the first time provide a global perspective of the immunological networks that are operating during naturally occurring acute asthma exacerbations in children, and broadly characterize variations in these networks which are associated with deficits in baseline FEV1/FVC ratios. Our results identify a large range of logical candidates with well characterized immunomodulatory properties ( Table 2) for intervention studies to prevent the development of chronic airflow limitation in children with asthma.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
Figure 1. Exacerbation responses are associated with baseline FEV1/FVC ratios
Gene expression was profiled by microarray in sputum samples obtained during an acute exacerbation from asthmatic children with (n=10) or without (n=10) deficits in enrollment/baseline FEV1/FVC ratios. A: Modules of coexpressed genes were identified by reverse engineering gene network analysis of the microarray data set (n=20). B: The modules were tested for differential expression in responses from subjects with or without deficits in baseline FEV1/FVC ratios. Positive or negative values on the vertical axis indicate that a module is enriched with genes that are increased or decreased respectively (based on gene-level Bayesian t.test statistics25) in responses from subjects with deficits in FEV1/FVC ratios as compared with those with normal FEV1/FVC ratios. The p-values are derived from a statistical analysis at the module level utilizing the Gene Set Analysis test without adjustment for multiple testing.29 ** p-value < 0.01; * p-value < 0.05.
Figure 2. Normalized gene expression levels were measured by qRT-PCR in paired sputum samples obtained from asthmatic children (n=18) during an acute exacerbation (Exa) and 7-14 days later (Conv). Statistical analysis by Wilcoxon signed rank test. Undetected data points were substituted with half the lowest value. Expression of representative genes from the Th1-like/cytotoxic, interferon signaling, and Th2-related pathways is upregulated in sputum during acute exacerbations in comparison to 7-14 days convalescence.
Table 2. Normalized gene expression levels as measured by qRT-PCR were compared in paired sputum samples obtained from asthmatic children (n=18) during an exacerbation (Exa) or 7-14 days convalescence (Conv). Statistical analyses by Wilcoxon signed rank test. Undetected data points were substituted with half the lowest value. See Table S5.
Table 3. Expression levels of Th1-like/cytotoxic and interferon signaling genes in sputum during acute exacerbations are correlated with enrollment and convalescent FEV1/FVC ratios, but not with exacerbation FEV1/FVC ratios. | 2017-11-08T00:48:25.063Z | 2010-03-09T00:00:00.000 | {
"year": 2010,
"sha1": "57e50fc6c409d9338804caee245616233311085c",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/mi201013.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "57e50fc6c409d9338804caee245616233311085c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14811995 | pes2o/s2orc | v3-fos-license | Optimal stretching for lattice points under convex curves
Suppose we count the positive integer lattice points beneath a convex decreasing curve in the first quadrant having equal intercepts. Then stretch in the coordinate directions so as to preserve the area under the curve, and again count lattice points. Which choice of stretch factor will maximize the lattice point count? We show the optimal stretch factor approaches $1$ as the area approaches infinity. In particular, when $0<p<1$, among $p$-ellipses $|sx|^p+|s^{-1}y|^p=r^p$ with $s>0$, the one enclosing the most first-quadrant lattice points approaches a $p$-circle ($s=1$) as $r \to \infty$. The case $p=2$ was established by Antunes and Freitas, with generalization to $1<p<\infty$ by Laugesen and Liu. The case $p=1$ remains open, where the question is: which right triangles in the first quadrant with two sides along the axes will enclose the most lattice points, as the area tends to infinity? Our results for $p<1$ lend support to the conjecture that in all dimensions, the rectangular box of given volume that minimizes the $n$-th eigenvalue of the Dirichlet Laplacian will approach a cube as $n \to \infty$. This conjecture remains open in dimensions four and higher.
Introduction
This article tackles a variant of the Gauss circle problem motivated by shape optimization results for eigenvalues of the Laplacian, as explained in the next section. The circle problem asks for good estimates on the number of integer lattice points contained in a circle of radius r > 0. Gauss showed this lattice point count equals the area of the circle plus an error of magnitude O(r) as r → ∞. The current best estimate, due to Huxley [13], improves the error bound to O(r^{θ+ε}) for θ = 131/208, which is still quite far from the exponent θ = 1/2 conjectured by Hardy [10].
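A brute-force check of the Gauss count is easy to script. The following Python sketch (ours, for illustration only) counts lattice points in the disk of radius r and compares with the area, exhibiting the O(r) error; floating-point floor/sqrt can misclassify points exactly on the boundary, which is harmless for this illustration.

import math

def disk_lattice_count(r):
    # Sum, over each integer x, the number of integer y with x^2 + y^2 <= r^2.
    R = int(math.floor(r))
    total = 0
    for x in range(-R, R + 1):
        total += 2 * int(math.floor(math.sqrt(r * r - x * x))) + 1
    return total

r = 1000.0
error = disk_lattice_count(r) - math.pi * r * r  # stays of size O(r)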
One may count lattice points inside other curves than circles, and may further seek to maximize the number of lattice points with respect to families of curves all enclosing the same area. Such maximization problems are the focus of this paper, concerning curves and lattice points in the first quadrant.
Figure 1. The family of p-circles x^p + y^p = 1. Theorem 1 and Example 3 handle the convex case 0 < p < 1. The concave case 1 < p < ∞ was treated in [17]. The straight line case p = 1 remains open.

Consider a convex decreasing curve in the first quadrant that intercepts the horizontal and vertical axes. For example, fix 0 < p < 1 and consider the p-ellipse

(sx)^p + (y/s)^p = r^p,   (1)
where r, s > 0. This p-ellipse is obtained by stretching the p-circle from Figure 1 in the coordinate directions by factors s and s^{-1} and then dilating by the scale factor r. Note the p-ellipse has semi-axes rs and rs^{-1}, and has area A(r) depending only on the "radius" r, not on the stretch parameter s. Write N(r, s) for the number of positive-integer lattice points lying below the curve, and for each fixed r, denote by S(r) the set of s-values maximizing N(r, s). In other words, s ∈ S(r) maximizes the first-quadrant lattice point count among all p-ellipses having area A(r).
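To see the phenomenon numerically, here is a short Python sketch (ours; assuming numpy) computing N(r, s) for the p-ellipse (1) and scanning a grid of stretch factors; for large r the maximizing s should drift toward 1.

import numpy as np

def count(r, s, p):
    # N(r, s): positive integers (j, k) with (s j)^p + (k / s)^p <= r^p.
    total, j = 0, 1
    while (s * j) ** p < r ** p:
        k_max = s * (r ** p - (s * j) ** p) ** (1.0 / p)  # largest admissible k
        total += int(np.floor(k_max))
        j += 1
    return total

def best_stretch(r, p, grid=np.linspace(0.5, 2.0, 301)):
    counts = [count(r, s, p) for s in grid]
    return grid[int(np.argmax(counts))]  # a maximizer in S(r), up to grid resolution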
Our main theorem implies that these maximizing s-values converge to 1 as r goes to infinity. That is, the p-ellipses that contain the most positive-integer lattice points must have semi-axes of almost equal length, for large r, and thus can be described as "asymptotically balanced". This result in Example 3 is an application of Theorem 1, which handles much more general convex decreasing curves.
For nonnegative-integer lattice points, meaning we include also the lattice points on the axes, the problem is to minimize rather than maximize the number of enclosed lattice points. For that problem too we prove optimal curves are asymptotically balanced.
A key step in the proof is to establish a precise estimate on the number of positiveinteger lattice points under the graph of a convex decreasing function, in Proposition 6. This estimate builds on the corresponding estimate for concave functions, namely the work of Laugesen and Liu [17] on the case 1 < p < ∞ and generalizations, which in turn was based on work of Krätzel [15]. Our proof starts by observing that the convex and concave problems are complementary, as one sees by enclosing the convex curve in a suitable rectangle and regarding the lattice points above the curve as being lattice points beneath the "upside down" concave curve.
Eigenvalues of the Laplacian, and open problems
In this expository section we connect lattice point counting results to shape optimization problems on eigenvalues of the Laplacian. Open problems for eigenvalues arise naturally in this context.
Eigenvalues of the Laplacian. The asymptotic counting function maximization problem was initiated by Antunes and Freitas [2], who solved the problem for positiveinteger lattice points inside standard ellipses. That is, they established the case p = 2 of the previous section. Their result was formulated in terms of shape optimization for Laplace eigenvalues, as we proceed to explain.
For a bounded domain Ω ⊂ R^d, the eigenvalue problem for the Laplacian with Dirichlet boundary conditions is

-Δu = λu in Ω,   u = 0 on ∂Ω,

where the eigenvalues form an increasing sequence

0 < λ_1(Ω) ≤ λ_2(Ω) ≤ λ_3(Ω) ≤ · · · → ∞.

The relationship between the domain Ω and its associated eigenvalues is complicated. A classical problem is to determine the domain having given volume that minimizes the n-th eigenvalue. A ball minimizes the first eigenvalue, by the Faber-Krahn inequality, and the union of two disjoint balls having the same radius minimizes the second eigenvalue, by the Krahn-Szego inequality. Domains that minimize higher eigenvalues do exist [7,9], although the minimizing domains are not known explicitly.
In two dimensions, a disk is conjectured to minimize the third eigenvalue, and more generally it is an open problem to determine whether a ball in d dimensions minimizes the (d + 1)-st eigenvalue [11, p. 82]. Minimizing domains have been studied numerically by Oudet [18], Antunes and Freitas [1], and Antunes and Oudet [4], [12,Chapter 11].
A challenging open problem is to determine the asymptotic behavior as n → ∞ of the domain (or domains) minimizing the n-th eigenvalue. To gain insight, let us write M(λ) for the number of eigenvalues less than or equal to the parameter λ, and recall that the Weyl conjecture claims

M(λ) = (2π)^{-d} ω_d vol(Ω) λ^{d/2} - (1/4)(2π)^{-(d-1)} ω_{d-1} vol(∂Ω) λ^{(d-1)/2} + o(λ^{(d-1)/2}),

where ω_d is the volume of the unit ball in R^d. This asymptotic formula for the counting function was verified by Ivrii [14] under a generic assumption for piecewise smooth domains, namely that the periodic billiards have measure zero. The appearance of the perimeter in the second term of this formula might suggest that the domain minimizing the n-th eigenvalue (or maximizing the counting function M(λ)), under our assumption of fixed volume, should converge to a ball because the ball has minimal perimeter by the isoperimetric theorem.
This heuristic does not amount to a proof, though, since the order of operations is wrong: our task is not to fix a domain and then let n → ∞ (λ → ∞), but rather to minimize the eigenvalue over all domains for n fixed (maximize the counting function for λ fixed) and only then let n → ∞ (λ → ∞).
It is an open problem to determine whether the eigenvalue-minimizing domain converges to a ball as n → ∞. The problem is easier if the perimeter is fixed, and in that case Bucur and Freitas [8] showed that eigenvalue minimizing domains do indeed converge to a disk, in dimension two.
Antunes and Freitas [2] solved the problem in the class of rectangles under area normalization, as follows. Let R(s) be the rectangle (0, π/s) × (0, sπ), whose area equals π^2 for all s. For each n, choose a number s_n > 0 such that R(s_n) minimizes the n-th Dirichlet eigenvalue of the Laplacian. That is, choose s_n such that

λ_n(R(s_n)) = min_{s>0} λ_n(R(s)).
Antunes and Freitas showed s n → 1 as n → ∞, meaning that the rectangles R(s n ) converge to a square. The analogous result for three-dimensional rectangular boxes was later established by van den Berg and Gittins [6]. The problem remains open in dimensions four and higher. Once again, the problem is easier if the surface area is fixed, and in that case Antunes and Freitas [3] showed that rectangular boxes which minimize the n-th Dirichlet eigenvalue of the Laplacian must converge to a cube, in any dimension.
The eigenvalues of the Laplacian on a rectangle are closely connected to lattice point counting: the eigenfunction u = sin(jsx) sin(ky/s) on the rectangle R(s) has eigenvalue λ = (js)^2 + (k/s)^2, for j, k > 0, and this eigenvalue is less than or equal to some number r^2 if and only if the lattice point (j, k) lies inside the ellipse with semi-axes s^{-1} and s and radius r. Thus the result of Antunes and Freitas on asymptotically minimizing the n-th eigenvalue among rectangles of given area is essentially equivalent to asymptotically maximizing the number of first-quadrant lattice points enclosed by ellipses of given area - and that is how their proof proceeded.
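This equivalence can be checked numerically. The sketch below (ours, assuming numpy) lists the Dirichlet eigenvalues (js)^2 + (k/s)^2 of the rectangle R(s) and scans for the stretch minimizing λ_n; by the Antunes-Freitas result the minimizer should approach 1 as n grows. The truncation jmax and the grid are assumptions of the sketch.

import numpy as np

def lambda_n(s, n, jmax=200):
    # Dirichlet eigenvalues of R(s) = (0, pi/s) x (0, s*pi), sorted ascending.
    j = np.arange(1, jmax + 1)
    spectrum = (s * j[:, None]) ** 2 + (j[None, :] / s) ** 2
    return np.sort(spectrum.ravel())[n - 1]

def minimizing_stretch(n, grid=np.linspace(0.5, 2.0, 301)):
    values = [lambda_n(s, n) for s in grid]
    return grid[int(np.argmin(values))]  # approximates s_n on the grid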
A conjecture on product domains. The conjecture for rectangular boxes in higher dimensions is supported by results in this paper, as follows. More generally, fix a bounded domain Ω ⊂ R^d, and for s > 0 define a product domain

P(s) = (s^{-1/d} Ω) × (s^{1/d} Ω) ⊂ R^{2d},

whose volume is independent of s. For each n, choose s_n to minimize the n-th Dirichlet eigenvalue of the Laplacian on the product domain. It is natural to ask whether s_n → 1 as n → ∞, and our results suggest this might be the case.
Observe that the eigenvalues of P(s) are given by s^{2/d}λ_j(Ω) + s^{-2/d}λ_k(Ω) for j, k > 0. Without loss of generality, assume Ω has volume (2π)^d/ω_d. Then the first-order Weyl approximation is λ_n(Ω) ∼ n^{2/d}. Using this approximation, we may approximate the eigenvalues of P(s) by s^{2/d}j^{2/d} + s^{-2/d}k^{2/d}. That is, for r > 0 the number of "approximate eigenvalues" less than r^{2/d} is given by the number of positive-integer lattice points inside the p-ellipse (1), with p = 2/d. If d ≥ 3 then p = 2/d < 1, and so our Example 3 applies to the approximate eigenvalues. Thus if s_n were chosen to minimize the n-th "approximate eigenvalue" of P(s), then s_n would converge to 1 as n → ∞. This observation suggests the same might hold true for the s_n-value minimizing the actual n-th eigenvalue of the product domain. In particular, it seems reasonable to believe that the analogue of the Antunes-Freitas (and van den Berg-Gittins) result will hold for rectangular boxes in all even dimensions ≥ 6, and presumably also in odd dimensions ≥ 5. The evidence is hardly conclusive, of course, since not every rectangular box has the product form P(s) and furthermore we used only the leading-order term in the Weyl asymptotic.
The preceding argument does not apply in 4 dimensions: even if a 4-dimensional box can be expressed as a product of two 2-dimensional boxes, taking d = 2 gives the borderline case p = 2/d = 1, and for p = 1 the lattice point maximizing value s n does not seem to approach 1 as n → ∞ [17, Section 9]. Thus one might expect the conjecture on rectangular boxes to be hardest to prove in 4 dimensions.
More general domains. Among more general convex domains with just a little regularity, Larson [16] has shown the ball asymptotically maximizes the Riesz means of the Laplace eigenvalues, for Riesz exponents ≥ 3/2 in all dimensions. If the exponent could be lowered to 0 in this result, then the ball would asymptotically maximize the counting function of individual eigenvalues. Incidentally, Larson also shows the cube is asymptotically optimal among polytopes, for the Riesz means.
Thus the current state of knowledge is that asymptotic optimality holds for the individual eigenvalues if one restricts to rectangular boxes in 2 or 3 dimensions, and holds among more general convex domains and polytopes if one restricts to weaker eigenvalue functionals, namely the Riesz means of exponent ≥ 3/2.
Assumptions and definitions
By convention, the first quadrant is the open set {(x, y) : x, y > 0}. Take Γ throughout the paper to be a convex, strictly decreasing curve in the first quadrant that intercepts the x- and y-axes at x = L and y = M, as illustrated in Figure 2. Write Area(Γ) for the area enclosed by the curve Γ and the x- and y-axes.
Represent the curve as the graph of y = f(x), so that f is a convex, strictly decreasing function for x ∈ [0, L], with f(0) = M and f(L) = 0. Denote the inverse function of f(x) by g(y) for y ∈ [0, M]. Clearly g is also convex and strictly decreasing.
Compress the curve by a factor of s > 0 in the horizontal direction and stretch it by the same factor in the vertical direction to obtain the curve Γ(s) = graph of sf (sx).
The area under Γ(s) equals the area under Γ. Then scale the curve by parameter r > 0 to obtain

rΓ(s) = graph of rsf(sx/r).

Define the counting function for rΓ(s) by

N(r, s) = number of positive-integer lattice points lying inside or on rΓ(s).

For each r > 0, we consider the set

S(r) = {s > 0 : N(r, s) ≥ N(r, t) for all t > 0}

consisting of the s-values that maximize the number of first-quadrant lattice points enclosed by the curve rΓ(s). The set S(r) is well-defined because for each fixed r, the counting function N(r, s) equals zero whenever s is sufficiently large or sufficiently close to 0.
Results
Recall g is the inverse function of f , as illustrated in Figure 2.
Theorem 1. Assume the intercepts of Γ are equal (L = M). Then the optimal stretch factor for maximizing N(r, s) approaches 1 as r tends to infinity, with

s = 1 + O(r^{-e}) for all s ∈ S(r),

where the exponent is e = min(1/6, a_1, a_2, b_1, b_2). Further, the maximal lattice count has asymptotic formula

max_{s>0} N(r, s) = r^2 Area(Γ) - rL + O(r^{1-2e}).

The theorem is proved in Section 8. The C^2-smoothness hypothesis can be weakened to piecewise smoothness, cf. [17], although for simplicity we will not do so here.
The theorem simplifies considerably when the second derivatives are positive and monotonic all the way up to the endpoints:

Corollary 2. If the intercepts of Γ are equal (L = M), then the optimal stretch factor for maximizing N(r, s) approaches 1 as r tends to infinity, with

s = 1 + O(r^{-1/6}) for all s ∈ S(r).

The corollary follows by taking a_1 = b_1 = 1/2, a_2 = b_2 = 1/4, e = 1/6 in the theorem and noting that f''(L) > 0 and g''(M) > 0 by assumption.
Example 3 (Optimal p-ellipses for lattice point counting). Fix 0 < p < 1, and consider the p-circle Γ : |x|^p + |y|^p = 1, which has equal intercepts L = M = 1. That is, the p-circle is the unit circle for the ℓ^p-metric on the plane. Then the p-ellipse rΓ(s) : |sx|^p + |s^{-1}y|^p ≤ r^p has first-quadrant counting function

N(r, s) = #{(j, k) : j, k ≥ 1 are integers with (sj)^p + (k/s)^p ≤ r^p}.

We will show that the p-ellipse containing the maximum number of positive-integer lattice points must approach a p-circle in the limit as r → ∞, with

s = 1 + O(r^{-e}) for all s ∈ S(r),

where e = min{1/6, p/2}. To verify that the p-circle satisfies the hypotheses of Theorem 1, we let α = β = 2^{-1/p}, so that α < 1/2 = L/2 and β < 1/2 = M/2. Here f(x) = (1 - x^p)^{1/p}, and for 0 < x < 1 we have

f''(x) = (1 - p)x^{p-2}(1 - x^p)^{(1/p)-2} > 0,
f'''(x) = (1 - p)x^{p-3}(1 - x^p)^{(1/p)-3}[(p - 2) + (1 + p)x^p].

If 0 < p ≤ 1/2 then f''' < 0 on the interval (0, 1), and so f'' is monotonic. If 1/2 < p < 1 then f''' vanishes at exactly one point in the interval (α, 1), namely at α_1 = [(2 - p)/(1 + p)]^{1/p}, and so f'' is monotonic on the subintervals (α, α_1) and (α_1, 1). Further, we choose a_1 = a_2 = p/2 and let δ(r) = r^{-2a_1} = r^{-p} for all large r, and verify directly that the hypotheses of Theorem 1 hold with these choices. The calculations are the same for g, and so the desired conclusion for p-ellipses with 0 < p < 1 now follows from Theorem 1. The case 1 < p < ∞ was treated earlier by Laugesen and Liu [17].

Theorem 4 (Optimal convex curve is asymptotically balanced). Assume the hypotheses of Theorem 1 hold and the intercepts of Γ are equal (L = M). Then the optimal stretch factor for minimizing Ñ(r, s), the counting function that also includes the lattice points on the axes, approaches 1 as r tends to infinity, with

s = 1 + O(r^{-e}) for each minimizing stretch factor s.

Further, the minimal lattice count has asymptotic formula

min_{s>0} Ñ(r, s) = r^2 Area(Γ) + rL + O(r^{1-2e}).
The theorem holds in particular when the second derivatives of f and g are positive and monotonic all the way up to the endpoints, thus yielding a corollary analogous to Corollary 2. Also, Theorem 4 applies in particular when the curve Γ is a p-ellipse with 0 < p < 1, since we verified the hypotheses already in Example 3.
Concave curves, such as p-ellipses with 1 < p < ∞, were handled earlier by Laugesen and Liu [17]. The standard ellipse case (p = 2) was done first by van den Berg, Bucur, and Gittins [5], who used it to show that the rectangle of given area maximizing the n-th Neumann eigenvalue of the Laplacian will converge to a square as n → ∞.
Two-term upper bound on counting function
In order to control the stretch factor when proving our main results later in the paper, we now develop a two-term upper bound on the lattice point counting function. The leading order term of the bound is simply the area inside the curve, and thus is best possible, while the second term scales like the length of the curve and so has the correct order of magnitude.
Recall Γ is the graph of y = f(x), where f is convex and strictly decreasing on [0, L], with f(0) = M, f(L) = 0. We do not assume f is differentiable in the next proposition.

Proposition 5. If rs^{-1}L ≥ 2, then N(r, s) ≤ r^2 Area(Γ) - f(L/2)rs/2.

Proof. It is enough to prove the case r = s = 1 for L ≥ 2, because then the general case of the proposition follows by applying the special case to the curve rΓ(s) (which has horizontal intercept rs^{-1}L and defining curve y = rsf(sx/r)). Clearly N(1, 1) equals the total area of the squares of sidelength 1 having upper right vertices at positive-integer lattice points inside the curve Γ. The union of these squares is contained in the region under Γ, since the curve is decreasing.
Consider the right triangles of width 1 formed by left-tangent lines on Γ, as shown in the figure. To complete the proof, we estimate the total area of these triangles from below, using that L/2 ≥ 1 and f(L) = 0.
Two-term counting asymptotics with explicit remainder
What matters in the following proposition is that the terms on the right side of the estimate in part (b) can be shown later to have order less than O(r), and thus can be treated as remainder terms. Also, it matters that the s-dependence in the estimate can be seen explicitly.
Proof. Part (a). In what follows, remember L and M are not integers, since Γ is assumed not to pass through any integer lattice points. The idea is to count lattice points in the "complementary region" lying above the convex curve Γ and inside the rectangle [0, ⌈L⌉] × [0, ⌈M⌉], because then one may invoke known estimates for a region with concave boundary, e.g. [17]. Notice F and G are inverses, with y = F(x) if and only if x = G(y). Write Γ̃ for the graph of F (or G), so that Γ̃ decreases from its y-intercept at (0, ⌈M⌉) to its x-intercept at (⌈L⌉, 0). Define α̃ = ⌈L⌉ - α and β̃ = ⌈M⌉ - β. Then α̃ > 0 because we assumed α < ⌈L⌉. Applying F to both sides of the inequality α̃ < ⌈L⌉ gives β̃ > F(⌈L⌉) = ⌈M⌉ - M, and so β < M. Similarly, we find β̃ > 0 and α < L. Also, 0 < δ < α̃ and 0 < ε < β̃ by the hypotheses in Part (a).
Hence from Part (a) we obtain the conclusion of Part (b) provided the curve rΓ(s) does not pass through any integer lattice points.
If the curve does pass through some lattice points, then simply consider a decreasing sequence r_i ↘ r for which each curve r_iΓ(s) contains no lattice points, and also modify the functions δ(·) and ε(·) to be continuous at r. Then the desired result follows by passing to the limit in the case of the theorem already proved, noting that N(r, s) ≤ N(r_i, s).
Elementary bounds on the optimal stretch factors
We develop some r-dependent bounds on the optimal stretch factors. Later, in the proof of Theorem 1, we will show the stretch factors in fact converge to 1.
By assumption r^2 ≥ 1/x_0y_0, and so the curve rΓ(s_0) encloses the point (1, 1). Hence the maximum of the counting function s ↦ N(r, s) is greater than zero. We will use that fact to constrain the s-values where the maximum can be attained.
The curve rΓ(s) has x-intercept at rs^{-1}L, which is less than 1 if s > rL, and so in that case the curve encloses no positive-integer lattice points. Similarly, if s < (rM)^{-1}, then rΓ(s) has height less than 1 and contains no lattice points in the first quadrant. The integer-valued function s ↦ N(r, s) is clearly bounded, and we saw in the first part of the proof that it is positive for some choice of s_0. Thus N(r, s) attains its positive maximum at some s-value between (rM)^{-1} and rL. Now suppose s ∈ (rL/2, rL]. We will prove N(r, s) < N(r, s/2), which implies s is not a maximizer for the counting function and so s ∉ S(r). By counting lattice points (j, k) with j = 1 and j = 2, we find

N(r, s/2) ≥ ⌊(rs/2)f(s/2r)⌋ + ⌊(rs/2)f(s/r)⌋.

Also, counting lattice points (j, k) with j = 1 shows that ⌊rsf(s/r)⌋ = N(r, s) (lattice points with j ≥ 2 cannot lie beneath the curve rΓ(s) because 2s/r > L). We conclude N(r, s/2) > N(r, s), as we wished to show. An analogous argument proves that s ∉ S(r) when s ∈ [(rM)^{-1}, 2(rM)^{-1}), that is, when s^{-1} ∈ (rM/2, rM].
Proof of Theorem 1
We apply the three-step method of Laugesen and Liu [17], which in turn was inspired by the method of Antunes and Freitas [2] for the case where Γ is a quarter circle.
First we estimate the remainder terms in Proposition 6(b), which by the hypotheses of Theorem 1 satisfy

|N(r, s) - r^2 Area(Γ) + r(sM + s^{-1}L)/2| ≤ C(s + s^{-1})r^{1-2e}   (2)

whenever r ≥ max(s/L, 1/sM). Here the implied constants depend only on the curve Γ and not on s. Next we show S(r) is bounded above and away from 0. Applying (2) with s = 1 gives that

N(r, 1) ≥ r^2 Area(Γ) - cr/2

for all large r, where the constant c > 0 depends only on the curve Γ. Suppose r is large enough that this estimate holds, and also that r exceeds the constant C in Lemma 8. Let s ∈ S(r). Then r ≥ 2s/L by Lemma 8, and so Proposition 5 (which uses convexity of the curve Γ) applies to give N(r, s) ≤ r^2 Area(Γ) - f(L/2)rs/2. Naturally N(r, 1) ≤ N(r, s), because s ∈ S(r) is a maximizing value. Thus combining the preceding inequalities shows that s ≤ c/f(L/2), and so the set S(r) is bounded above for all large r. Interchanging the roles of the horizontal and vertical axes, we similarly find s^{-1} is bounded, and hence S(r) is bounded away from 0 for all large r.
Lastly we show S(r) approaches {1} as r → ∞. Let s ∈ S(r), so that by the above, s and s^{-1} are bounded above for all large r. Then the right side of (2) has the form O(r^{1-2e}), with the implied constant being independent of s; recall the exponent e was defined in Theorem 1. Since r ≥ 2 max(s/L, 1/sM) by Lemma 8, we see from (2) and the inequality N(r, 1) ≤ N(r, s) that

r(sM + s^{-1}L)/2 ≤ r(M + L)/2 + O(r^{1-2e}).

Taking L = M gives s^{-1} + s ≤ 2 + O(r^{-2e}), and so s = 1 + O(r^{-e}) by Lemma 9 below, which proves the first claim in the theorem. For the second claim, when s ∈ S(r) we have

N(r, s) = r^2 Area(Γ) - rL + O(r^{1-2e})

by (2), using also that 1 ≤ s + s^{-1}.

Lemma 9 (An elementary comparison used above).
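The statement of Lemma 9 does not survive in this copy; a version consistent with how it is invoked above (a bound s + s^{-1} ≤ 2 + δ upgraded to s = 1 + O(√δ)) can be sketched as follows, with the explicit constant our choice rather than the authors':

If s > 0 and s + s^{-1} ≤ 2 + δ with 0 ≤ δ ≤ 1, then |s - 1| ≤ 2√δ.

Sketch: s + s^{-1} - 2 = (s - 1)^2/s ≤ δ, while s ≤ s + s^{-1} ≤ 2 + δ ≤ 3; hence (s - 1)^2 ≤ δs ≤ 3δ and |s - 1| ≤ √(3δ) ≤ 2√δ. Applied with δ = O(r^{-2e}), this yields s = 1 + O(r^{-e}).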
Proof of Theorem 4. The number of lattice points lying on the axes and inside rΓ(s) is ⌊Lr/s⌋ + ⌊Mrs⌋ + 1 = Lr/s + Mrs + ρ(r, s), where the error satisfies |ρ(r, s)| ≤ 1. Thus the counting functions $\bar{N}(r, s)$ and N(r, s) (which, respectively, include and exclude the count of points on the axes) are connected by the formula $\bar{N}(r, s)$ = N(r, s) + r(s⁻¹L + sM) + ρ(r, s).
Thus by estimate (2) we obtain the analogous estimate (4) for $\bar{N}$, whenever r ≥ max(s/L, 1/sM). Next we show that S(r) is bounded above and bounded below away from 0. Applying (4) with s = 1 establishes that r²Area(Γ) + cr/2 ≥ $\bar{N}(r, 1)$ for all large r, where the constant c > 0 depends only on the curve Γ. Suppose r is large enough that this estimate holds. Let s ∈ S(r). Then Proposition 10 applies to give $\bar{N}(r, s)$ ≥ r²Area(Γ) + Mrs/2.
Since s is a minimizer for the counting function $\bar{N}(r, \cdot)$ we must have $\bar{N}(r, 1)$ ≥ $\bar{N}(r, s)$, and so the inequalities above imply that s ≤ c/M. In other words, the set S(r) is bounded above for all large r. Swapping the roles of the horizontal and vertical axes, we find by the same reasoning that s⁻¹ is bounded above, and hence the set S(r) is bounded below away from 0, for all large r.
Finally, we show S(r) approaches {1} as r → ∞. Let s ∈ S(r), so that s and s⁻¹ are bounded above by Step 2, provided r is large. Then the right side of estimate (4) is bounded by O(r^{1−2e}), with the implied constant being independent of s and depending only on the curve Γ. The claims then follow from two applications of estimate (4), where we used that (s⁻¹ + s)/2 = 1 + O(r^{1−2e}) by the above.
"year": 2017,
"sha1": "f661fb43f59bc26ed391a96cb7313df153f5ecf5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1701.03217",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f661fb43f59bc26ed391a96cb7313df153f5ecf5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Packing chromatic number of subcubic graphs
A packing $k$-coloring of a graph $G$ is a partition of $V(G)$ into sets $V_1,\ldots,V_k$ such that for each $1\leq i\leq k$ the distance between any two distinct $x,y\in V_i$ is at least $i+1$. The packing chromatic number, $\chi_p(G)$, of a graph $G$ is the minimum $k$ such that $G$ has a packing $k$-coloring. Sloper showed that there are $4$-regular graphs with arbitrarily large packing chromatic number. The question whether the packing chromatic number of subcubic graphs is bounded appears in several papers. We answer this question in the negative. Moreover, we show that for every fixed $k$ and $g\geq 2k+2$, almost every $n$-vertex cubic graph of girth at least $g$ has the packing chromatic number greater than $k$.
Introduction
For a positive integer i, a set S of vertices in a graph G is i-independent if the distance in G between any two distinct vertices of S is at least i + 1. In particular, a 1-independent set is simply an independent set.
A packing k-coloring of a graph G is a partition of V(G) into sets V_1, . . . , V_k such that for each 1 ≤ i ≤ k, the set V_i is i-independent. The packing chromatic number, χ_p(G), of a graph G is the minimum k such that G has a packing k-coloring. The notion of packing k-coloring was introduced in 2008 by Goddard, S.M. Hedetniemi, S.T. Hedetniemi, Harris and Rall [15] (under the name broadcast coloring), motivated by frequency assignment problems in broadcast networks. The concept has attracted considerable attention recently: there are more than 25 papers on the topic (see e.g. [1,5,6,7,8,9,10,11,12,13,23] and references in them). In particular, Fiala and Golovach [10] proved that finding the packing chromatic number of a graph is NP-hard even in the class of trees. Sloper [23] showed that there are graphs with maximum degree 4 and arbitrarily large packing chromatic number.
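A minimal checker for this definition (an illustrative sketch assuming the networkx library):

```python
from itertools import combinations
import networkx as nx

def is_packing_coloring(G, coloring):
    """Verify a packing coloring: any two vertices sharing color i must
    lie at distance at least i + 1 in G."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for u, v in combinations(G.nodes, 2):
        if coloring[u] == coloring[v] and dist[u].get(v, float("inf")) <= coloring[u]:
            return False
    return True

# On a path, the repeating pattern 1, 2, 1, 3 is a packing 3-coloring:
P = nx.path_graph(8)
coloring = {v: [1, 2, 1, 3][v % 4] for v in P.nodes}
print(is_packing_coloring(P, coloring))  # True
```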
The question whether the packing chromatic number of all subcubic graphs (i.e., the graphs with maximum degree at most 3) is bounded by a constant was not resolved. For example, Brešar, Klavžar, Rall, and Wash [8] write: 'One of the intriguing problems related to the packing chromatic number is whether it is bounded by a constant in the class of all cubic graphs'. It was proved in [8,21,23] that it is indeed bounded in some subclasses of subcubic graphs. On the other hand, Gastineau and Togni [13] constructed a cubic graph G with χ_p(G) = 13, and asked whether there are cubic graphs with a larger packing chromatic number. Brešar, Klavžar, Rall, and Wash [7] answered this question in the affirmative by constructing a cubic graph G′ with χ_p(G′) = 14. The main result of this paper answers the question in full: indeed, there are cubic graphs with arbitrarily large packing chromatic number. Moreover, we prove that 'many' cubic graphs have 'high' packing chromatic number:
Theorem 1. For each fixed integer k ≥ 12 and g ≥ 2k + 2, almost every n-vertex cubic graph of girth at least g has packing chromatic number greater than k.
The theorem will be proved in the language of the so-called configuration model, F_3(n). We will discuss this concept and some important facts about it in the next section. In Section 3 we give upper bounds on the sizes c_i of maximum i-independent sets in almost all cubic n-vertex graphs of large girth. The original plan was to show that for a fixed k and large n, the sum c_1 + . . . + c_k is less than n. But we were not able to prove this (and maybe it is not true). In Section 4, we give an upper bound on the size of the union of a 1-independent, a 2-independent, and a 4-independent set, which is less than c_1 + c_2 + c_4. This allows us to prove Theorem 1 in the last section.
Notation
We mostly use standard notation. If G is a (multi)graph and v, u ∈ V(G), then E_G(v, u) denotes the set of all edges in G connecting v and u, and e_G(v, u) = |E_G(v, u)|.
The Configuration Model
The configuration model is due, in different versions, to Bender and Canfield [2] and Bollobás [4]. Our work is based on the version of Bollobás. Let V be the vertex set of the graph; we are going to associate a 3-element set to each vertex in V. Let n be an even positive integer. Let V_n = [n] and consider the Cartesian product W_n = V_n × [3]. A configuration/pairing (of order n and degree 3) is a partition of W_n into 3n/2 pairs, i.e., a perfect matching of the elements of W_n. There are (3n − 1)!! such matchings. Let F_3(n) denote the collection of all (3n − 1)!! possible pairings on W_n. We project each pairing F ∈ F_3(n) to a multigraph π(F) on the vertex set V_n by ignoring the second coordinate. Then π(F) is a 3-regular multigraph (which may or may not contain loops and multi-edges). Let π(F_3(n)) = {π(F) : F ∈ F_3(n)} be the set of 3-regular multigraphs on V_n. We will call the elements of V_n vertices, and the elements of W_n points.
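For illustration, a minimal sketch of sampling a uniform pairing from F_3(n) and projecting it to the multigraph π(F):

```python
import random
from collections import Counter

def random_cubic_pairing(n, seed=None):
    """Sample a uniform pairing F in F_3(n): a random perfect matching
    of the 3n points W_n = {(v, h) : v in [n], h in [3]} (n must be even)."""
    rng = random.Random(seed)
    points = [(v, h) for v in range(n) for h in range(3)]
    rng.shuffle(points)
    return [(points[2 * i], points[2 * i + 1]) for i in range(3 * n // 2)]

def project(pairing):
    """pi(F): drop the second coordinate, yielding the edge multiset of a
    3-regular multigraph (loops and multi-edges may occur)."""
    return Counter(tuple(sorted((a[0], b[0]))) for a, b in pairing)

F = random_cubic_pairing(10, seed=1)
G = project(F)
print(sum(G.values()))  # 15 = 3n/2 edges counted with multiplicity
```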
Definition 2. Let G_g(n) be the set of all cubic graphs with vertex set V_n = [n] and girth at least g, and let G′_g(n) = {F ∈ F_3(n) : π(F) ∈ G_g(n)}.
We will use the following result:
Theorem 3 (Wormald [25], Bollobás [4]). For each fixed g ≥ 3,
Remark. When we say that a pairing F has a multigraph property A, we mean that π(F) has property A.
Since dealing with pairings is simpler than working with labeled simple regular graphs, we need the following well-known consequence of Theorem 3.
Proof. Suppose property A holds for π(F) for almost all F ∈ F_3(n). Let H(n) denote the set of graphs in G_g(n) that do not have property A, and let H′(n) = {F ∈ F_3(n) : π(F) ∈ H(n)}. Let B(n) denote the set of pairings F ∈ F_3(n) such that π(F) does not have property A. Then H′(n) ⊆ B(n). Hence, by the choice of A, |H′(n)| ≤ |B(n)| = o(|F_3(n)|). By (1) and (3), together with Theorem 3, the right-hand side of (4) tends to 0 as n tends to infinity.
We will use the following theorem of McKay [22].
Definition 6. A 3-regular tree is a tree in which each vertex has degree 3 or 1. A (3, k, a)-tree is a rooted 3-regular tree T with root a of degree 3 such that the distance in T from each of the leaves to a is k.
Definition 7. For a positive integer s and a vertex a in a graph G, let B_G(a, s) denote the ball of radius s centered at a, i.e., the set of all vertices of G at distance at most s from a.
We first prove simple bounds on c 2k (G) and c 2k+1 (G) when G ∈ G 2k+2 (n).
Lemma 8. Let j be a fixed positive integer and g ≥ 2j + 2. Then: (i) c_{2j}(G) ≤ n/(3 · 2^j − 2) for every G ∈ G_g(n). (ii) For every ε > 0, there exists an N > 0 such that for each n > N,
Proof. (i) Let C_{2j} be a 2j-independent set in G with |C_{2j}| = c_{2j}(G). Since the distance between any distinct a, b ∈ C_{2j} is at least 2j + 1, the balls B_G(a, j) for all distinct a ∈ C_{2j} are disjoint. Moreover, since g ≥ 2j + 2, each ball B_G(a, j) induces a (3, j, a)-tree T_a, and hence has 1 + 3(2^j − 1) = 3 · 2^j − 2 vertices. This proves (i).
(ii) Let C_{2j+1} be a (2j + 1)-independent set in G with |C_{2j+1}| = c_{2j+1}(G). As in the proof of (i), the balls B_G(a, j) for distinct a ∈ C_{2j+1} are disjoint, and each B_G(a, j) induces a (3, j, a)-tree T_a. But in this case, in addition, the balls with centers in distinct vertices of C_{2j+1} are at distance at least 2 from each other. Let S_i be the set of vertices in T_a at distance i from a. Then |S_0| = 1, and |S_i| = 3 · 2^{i−1} for each 1 ≤ i ≤ j. Let I_a be the union of the sets S_i over all 0 ≤ i ≤ j with i ≡ j (mod 2); then I_a is an independent set of size 2^{j+1} − 1. Therefore I := ∪_{a ∈ C_{2j+1}} I_a is an independent set in G and |I| = (2^{j+1} − 1)c_{2j+1}(G). This together with Theorem 5 and Corollary 4 implies (ii).
Figure 1: A (3, 3, a)-tree T_a.
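A quick numeric check of the two counting formulas used in this proof:

```python
# |S_0| = 1 and |S_i| = 3 * 2**(i-1); the whole (3, j, a)-tree has
# 3 * 2**j - 2 vertices, and the union of levels S_i with i = j (mod 2)
# has 2**(j+1) - 1 vertices.
for j in range(1, 8):
    levels = [1] + [3 * 2 ** (i - 1) for i in range(1, j + 1)]
    assert sum(levels) == 3 * 2 ** j - 2
    start = 0 if j % 2 == 0 else 1          # indices i with i = j (mod 2)
    assert sum(levels[start::2]) == 2 ** (j + 1) - 1
print("ok")
```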
Lemma 9. Let k be a fixed positive integer and x be a real number with
Proof. To prove the lemma, we will show that the total number of 2k-independent sets of size xn in π(F), over all F ∈ G′_{2k+2}(n), does not exceed q(n, k, x). Below we describe a procedure constructing, for every set C of size xn in [n], all pairings in G′_{2k+2}(n) for which C is 2k-independent. Not every obtained pairing will be in G′_{2k+2}(n), but every F ∈ G′_{2k+2}(n) such that C is a 2k-independent set in π(F) will be a result of this procedure: 1. We choose a vertex set C of size xn from [n]. There are $\binom{n}{xn}$ ways to do it.
2. In order for C to be 2k-independent and π(F) to have girth at least 2k + 2, all the balls of radius k with centers in C must be disjoint, and for each a ∈ C, the ball B_{π(F)}(a, k) must induce a (3, k, a)-tree. Thus, we have $\binom{(1−x)n}{3xn}$ ways to choose the set of neighbors of C, call it N(C), then (3xn)! ways to determine which vertex in N(C) will be the neighbor for each point in π⁻¹(C), and 3^{3xn} ways to decide which point of each vertex in N(C) is adjacent to the corresponding point in π⁻¹(C). Each vertex of N(C) has 2 free points left at this moment, and in total the set π⁻¹(N(C)) now has 2 · 3xn = 6xn free points.
3. Similarly to the previous step, consecutively for i = 1, 2, . . . , k − 1, we decide which vertices and points are in the set π⁻¹(N^{i+1}(C)) of the vertices at distance i + 1 from C, as follows. Before the ith iteration, we have 3x · 2^i n free points in the 3x · 2^{i−1} n vertices of π⁻¹(N^i(C)). We choose 3x · 2^i n vertices out of the remaining (1 − (3 · 2^i − 2)x)n vertices to include into N^{i+1}(C); then we have (3x · 2^i n)! ways to determine which vertex in N^{i+1}(C) will be the neighbor of each free point in π⁻¹(N^i(C)), and 3^{3x·2^i n} ways to decide which point of each vertex in N^{i+1}(C) is adjacent to the corresponding point in π⁻¹(N^i(C)).
This proves the bound.
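The product of the factors enumerated in steps 1-3 can be assembled mechanically; the sketch below does only that (the full bound q(n, k, x) also accounts for pairing the remaining free points, which we omit, and the integer arguments implicitly assume xn and the level sizes are integers):

```python
from math import comb, factorial

def steps_1_to_3_product(n, k, c):
    """Product of the factors from steps 1-3: choose the set C (|C| = c),
    then grow the disjoint radius-k tree balls level by level."""
    total = comb(n, c)          # step 1: choose C
    used = c                    # vertices placed so far
    free = 3 * c                # free points on the current level
    for _ in range(k):          # steps 2-3: levels 1, ..., k
        new = free              # one new vertex per free point
        total *= comb(n - used, new) * factorial(new) * 3 ** new
        used += new
        free = 2 * new          # each new vertex keeps 2 free points
    return total

print(steps_1_to_3_product(20, 1, 2))
```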
The number of (2k + 1)-independent sets is bounded by an analogous procedure; steps 2 and 3 read the same way:
2. In order for C to be (2k + 1)-independent and π(F) to have girth at least 2k + 2, all the balls of radius k with centers in C must be disjoint, and for each a ∈ C, the ball B_{π(F)}(a, k) must induce a (3, k, a)-tree. Thus, we have $\binom{(1−x)n}{3xn}$ ways to choose the set of neighbors of C, call it N(C), then (3xn)! ways to determine which vertex in N(C) will be the neighbor for each point in π⁻¹(C), and 3^{3xn} ways to decide which point of each vertex in N(C) is adjacent to the corresponding point in π⁻¹(C). Each vertex of N(C) has 2 free points left at this moment, and in total the set π⁻¹(N(C)) now has 2 · 3xn = 6xn free points.
3. Similarly to the previous step, consecutively for i = 1, 2, . . . , k − 1, we decide which vertices and points are in the set π⁻¹(N^{i+1}(C)) of the vertices at distance i + 1 from C, as follows. Before the ith iteration, we have 3x · 2^i n free points in the 3x · 2^{i−1} n vertices of π⁻¹(N^i(C)). We choose 3x · 2^i n vertices out of the remaining (1 − (3 · 2^i − 2)x)n vertices to include into N^{i+1}(C); then we have (3x · 2^i n)! ways to determine which vertex in N^{i+1}(C) will be the neighbor of each free point in π⁻¹(N^i(C)), and 3^{3x·2^i n} ways to decide which point of each vertex in N^{i+1}(C) is adjacent to the corresponding point in π⁻¹(N^i(C)).
4 Bound on |C_1 ∪ C_2 ∪ C_4|
Definition 13. For a fixed graph G, let c_{1,2,4}(G) be the maximum of |C_1 ∪ C_2 ∪ C_4|, where C_1, C_2 and C_4 are disjoint subsets of V(G) such that C_i is i-independent for all i ∈ {1, 2, 4}.
In this section we prove an upper bound for c 1,2,4 (G).
Proof. Let G satisfy the conditions of the lemma, and let C_1, C_2 and C_4 be disjoint sets such that C_i is i-independent for i ∈ {1, 2, 4} and |C_1 ∪ C_2 ∪ C_4| = c_{1,2,4}(G). Since C_2 is 2-independent, each vertex in C_1 has at most one neighbor in C_2. Let Q be the set of vertices in C_1 that do not have neighbors in C_2, and q = |Q|. Let L be the set of edges in G − C_1 − C_2 and ℓ = |L|. For brevity, the vertices in Q will be called Q-vertices, and the edges in L will be called L-edges. Let s = |C_1| + |C_2|.
We will prove the lemma in a series of claims. Our first claim is (23). To show (23), we count the edges connecting C_1 ∪ C_2 with V(G) − (C_1 ∪ C_2) in two ways: counted from outside this number is 3(n − s) − 2ℓ, while counted from inside it is 3s − 2(|C_1| − q), since the only edges inside C_1 ∪ C_2 join C_1 − Q to C_2. Solving for s, we get s = n/2 − (ℓ − |C_1| + q)/3. Since q, ℓ ≥ 0 and |C_1| ≤ c_1, this together with (22) yields (23), as claimed.
Since g(G) ≥ 9, for every a ∈ V(G), the ball B_G(a, 2) induces a (3, 2, a)-tree T_a. When handling such a tree T_a, we will use the following notation (see Fig. 2): N_1(a) = {a_1, a_2, a_3} and N_2(a) = {a_{1,1}, a_{1,2}, a_{2,1}, a_{2,2}, a_{3,1}, a_{3,2}}. For j ∈ {0, 1, 2}, let S_j = {a ∈ C_4 : the total number of L-edges and Q-vertices in T_a is j}, and let U = C_4 − ∪_{j=0}^{2} S_j. Our next claim is (25). Indeed, let 0 ≤ j ≤ 2 and a ∈ S_j. If a vertex a_i ∈ N_1(a) is not in (C_1 ∪ C_2) − Q, then either a_i ∈ Q or aa_i ∈ L. Thus, by the definition of S_j, at least 3 − j of the vertices a_i belong to (C_1 ∪ C_2) − Q. Since each a_i ∈ (C_1 ∪ C_2) − Q either is in C_2 or has a neighbor in C_2 ∩ {a_{i,1}, a_{i,2}}, we get at least 3 − j vertices in C_2 ∩ V(T_a). This proves (25).
Corollaries 10 and 12 imply that for each j ∈ J, there exists an N_j > 0 such that for each n > N_j, |{G ∈ G_g(n) : c_j(G) > b_j n}| < (ε/10) · |G_g(n)|.
Proof of Theorem 1. Let k ≥ 12 be a fixed integer and g ≥ 2k + 2. We need to show that for every ε > 0, there exists an N > 0 such that for each n > N, |{G ∈ G g (n) : χ p (G) ≤ k}| < ε · |G g (n)|.
Remark. It seems that with a bit more sophisticated calculations, one can prove the claim of Theorem 1 not only for almost all cubic graphs with girth at least 2k + 2, but for almost all cubic n-vertex graphs.
"year": 2017,
"sha1": "210840e0ef46cde2eb2822e57440971e1bdd35da",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "210840e0ef46cde2eb2822e57440971e1bdd35da",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Association of EMR Adoption with Minority Health Care Outcome Disparities in US Hospitals
Objectives Disparities in healthcare among minority groups can result in disparate treatments for similar severities of symptoms, unequal access to medical care, and a wide deviation in health outcomes. Such racial disparities may be reduced via use of an Electronic Medical Record (EMR) system. However, there has been little research investigating the impact of EMR systems on the disparities in health outcomes among minority groups. Methods This study examined the impact of EMR systems on the following four outcomes of black patients: length of stay, inpatient mortality rate, 30-day mortality rate, and 30-day readmission rate, using patient and hospital data from the Medicare Provider Analysis and Review and the Healthcare Information and Management Systems Society between 2000 and 2007. The difference-in-difference research method was employed with a generalized linear model to examine the association of EMR adoption on health outcomes for minority patients while controlling for patient and hospital characteristics. Results We examined the association between EMR adoption and the outcomes of minority patients, specifically black patients. However, after controlling for patient and hospital characteristics we could not find any significant changes in the four health outcomes of minority patients before and after EMR implementation. Conclusions EMR systems have been reported to support better coordinated care, thus encouraging appropriate treatment for minority patients by removing potential sources of bias from providers. Also, EMR systems may improve the quality of care provided to patients via increased responsiveness to care processes that are required to be more time-sensitive and through improved communication. However, we did not find any significant benefit for minority groups after EMR adoption.
The purpose of EMR systems is to retain, organize, and communicate medical information in a uniform manner. EMRs have the potential to make healthcare providers' decisions more consistent and objective, while ensuring that best practices are pursued [3]. EMRs provide evidence-based decision support to the healthcare provider at the point of service and can lead to timely and informed medical decision making. Such standardization has the potential to provide benefits to members of racial and ethnic minority populations that have disproportionately experienced sub-standard care.
While a growing number of studies have examined the use of EMRs to improve the efficiency of medical care delivery, surprisingly few studies have explored the impact of EMRs on reducing health disparities between racial and ethnic groups. Thus, the aim of this study was to investigate the differential impact of EMR adoption on the outcomes of care among members of minority groups. Our hypothesis was that the adoption of an EMR system by a hospital or hospital system would provide benefits to members of disadvantaged minority groups who had historically experienced lower-quality medical outcomes compared to non-Hispanic white patients.
The hospital EMR adoption rate increased from 9.4% in 2008 to 59.4% in 2013 [4]. The Healthcare Information and Management Systems Society (HIMSS) has tracked changes during the implementation of EMR systems. Using their collected information, we investigated whether the adoption of EMR systems reduced racial and ethnic disparities in health outcome measurements by comparing EMR-adopting hospitals to hospitals that had yet to implement an EMR system. These care quality measures were calculated using linked Medicare claims data for patients receiving care in each hospital for two years before and after the adoption of an EMR system, compared to patient data from hospitals that had not adopted an EMR over the same time period. Thus, our main methodological approach was to use a difference-in-difference design to examine changes in outcome disparities. Analyses were adjusted for the characteristics of patients and hospitals using data from the Medicare Beneficiary Summary File and the HIMSS data.
Poor access to care among African Americans and Hispanics is reflected in substantially lower rates of employment-based insurance coverage, lack of a regular source of medical care, and ethnic/cultural mismatches with care providers. These factors may also reduce the continuity of care among these disadvantaged populations relative to non-Hispanic whites and may diminish the engagement and trust between patient and provider, reducing the quality of medical decision-making. The Institute of Medicine found racial disparities across a wide range of disease areas, clinical services, and clinical settings, even after clinical factors, including age, disease progression, comorbid conditions, and severity of illness, were taken into account [2]. They concluded that prejudices, biases, and negative racial stereotypes may be a potential source of the disparity and proposed "evidence based guidelines" among the solutions. In addition, there are abundant research findings that racial and ethnic disparities in health care can be explained in part by minorities' disproportionate receipt of care from certain institutions where all patients, irrespective of race, tend to experience worse outcomes [5]. There is some concern that regulatory policies aimed to reduce poor outcomes may disproportionately impact minority-serving institutions, perhaps increasing racial disparities [6].
Previous studies have demonstrated disparities in both the outcomes and process of care. Compared with white patients with diabetes, African Americans with diabetes were less likely to have received the recommended processes of care metrics, including glycated hemoglobin (HbA1c) and lipid measurements [7,8]. Another study [9] found that compared with white patients, black and Hispanic patients experienced substantially longer wait times from the start of dialysis to being added to the kidney transplant waiting list.
Furthermore, previous health services research suggests that there are racial disparities in health outcomes. Some studies have found that black patients experienced a higher 30-day readmission rate (24.8%) compared to white patients (22.6%) according to Medicare beneficiary data [5,10]. Other studies found that black patients admitted with pneumonia had a longer length of stay than white patients (incidence rate ratio: 1.19). Moreover, black patients had a higher inpatient mortality rate than white patients after cardiovascular procedures and cancer resections; the odds ratios of inpatient mortality for black patients relative to white patients are 1.21 (carotid endarterectomy), 1.21 (aortic valve replacement), 1.16 (coronary artery bypass graft), 1.32 (abdominal aortic aneurysm repair), 1.08 (resection for lung cancer), 1.32 (cystectomy of the bladder), 1.57 (esophagectomy), and 1.27 (pancreatic resection).
Despite the projection that the minority population is expected to rise to 56% of the total US population, prior research on racial and ethnic disparities in healthcare has failed to identify methods of reducing these disparities [11]. This study was an expansion of the work by Lee et al. [12], which investigated the relationship between EMR adoption and health outcomes of the general population; however, they did not focus on the outcome disparities among minorities. Our study investigated the association between EMR adoption and the health outcomes of minorities, including length of stay, inpatient mortality rate, 30-day mortality rate, and 30-day readmission rate. This study was approved by the Institutional Review Board (IRB) at the University of Texas Medical Branch (IRB No. 09-054).
Data Sources
In this study, we employed four primary data sources: 1) HIMSS data, 2) Provider of Service (POS) files, 3) Medicare enrollment files and 4) the 5% Medicare Provider Analysis and Review (MEDPAR) data from 2000 to 2007. The HIMSS data was sampled from the American Hospital Association (AHA) annual survey of hospitals, which provides information on health IT applications for more than 3,000 US hospitals. Medicare enrollment files provide patient socio-demographic characteristics, such as sex, age, insurance and race. The POS file provides information on providers, including characteristics of institutional providers. The MEDPAR file contains claims data on Medicare beneficiaries hospitalized in Medicare-certified inpatient hospitals and skilled nursing facilities (SNF).
Establishment of the Study Cohort
This study identified more than 2,600 unique acute-care hospitals with more than 100 beds between 2000 and 2007. We excluded hospitals if they were not consecutively observed over the entire 8-year study period, if they did not have a Medicare provider number, or if they adopted any components of a basic EMR before 2002. After excluding hospitals that did not meet the eligibility criteria, 708 acute-care hospitals were finally included in our analyses.
Using HIMSS data, we identified the year of adoption of a basic EMR, defined as a computerized patient record that is supported by a clinical data repository and has clinical decision-support capabilities [13,14]. The four outcomes were length of stay, inpatient mortality rate, 30-day mortality rate, and 30-day readmission rate. The sample included patients who were older than 65 years, had not been enrolled in HMOs, and had both Medicare Parts A and B for the entire 12 months before admission.
1) Outcome variables
(1) Length of stay
This sample only included stays shorter than 365 days in which the patient was discharged alive. To correct for data skewness, we excluded admissions in which the length of stay was more than 3 standard deviations from the mean. The final sample size for length of stay was 360,105.
(2) Inpatient mortality and 30-day mortality rate
This sample included hospital stays no longer than 365 days for all patients, including patients who died in the hospital or were discharged alive. The inpatient mortality rate was calculated as the number of patients who died during hospital stays divided by the total number of admitted patients. The 30-day mortality rate was defined as the number of patients who died within 30 days of admission divided by the total number of patients admitted. The final sample size for mortality was 403,566.
(3) 30-day readmission rate
The accumulation of claims from a beneficiary's date of admission to an inpatient hospital to the date of discharge represents one stay in the MEDPAR file. We only retained data for hospitalizations lasting less than 365 days in which the patient was discharged alive. The 30-day readmission rate was defined as the number of patients discharged and readmitted to any acute hospital within 30 days of discharge divided by the total number of patients discharged alive. For patients with multiple readmissions, one admission per year was randomly selected. The final sample size for 30-day readmission was 237,081.
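For illustration, a minimal sketch of flagging 30-day readmissions from a stay-level table (hypothetical data and column names; MEDPAR-specific handling of transfers, multiple readmissions, and eligibility windows is omitted):

```python
import pandas as pd

# Hypothetical claims table: one row per inpatient stay.
stays = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "admit":     pd.to_datetime(["2004-01-02", "2004-01-20", "2004-02-01",
                                 "2004-03-05", "2004-06-01"]),
    "discharge": pd.to_datetime(["2004-01-05", "2004-01-25", "2004-02-10",
                                 "2004-03-09", "2004-06-04"]),
    "discharged_alive": [True, True, True, True, True],
})

stays = stays.sort_values(["patient_id", "admit"])
next_admit = stays.groupby("patient_id")["admit"].shift(-1)
gap = (next_admit - stays["discharge"]).dt.days
stays["readmit_30d"] = stays["discharged_alive"] & (gap <= 30)

# 30-day readmission rate over stays discharged alive:
rate = stays.loc[stays["discharged_alive"], "readmit_30d"].mean()
print(stays[["patient_id", "readmit_30d"]], rate)
```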
2) Independent variables
Medicare enrollment files were utilized to obtain patients' demographic information, including age, sex, and race. Information on discharge diagnosis related group (DRG) was obtained from the Center for Medicare & Medicaid Services and MEDPAR files. To construct a variable indicating comorbid conditions measured by the Elixhauser index, we utilized inpatient and physician claims from MEDPAR, Outpatient Statistical Analysis files, and carrier files measured for the 12 months prior to initial hospitalization [15]. Structural characteristics of hospitals were obtained from the POS files. We measured teaching status as a binary variable defined as either none or any. We measured hospital size based on bed size and categorized hospitals into quartiles. Ownership status of hospitals was categorized as not-for-profit, profit, or government.
Statistical Analysis
To examine the association of EMR adoption with health outcomes for minority patients, we used a difference-in-difference research design based on observational data, comparing outcomes two years before EMR adoption and two years after EMR adoption. The difference-in-difference is a statistical technique used to estimate treatment effects by comparing before-and-after treatment differences in outcome between a control and a treatment group [16]. In our study, the treatment group was composed of hospitals adopting EMR systems between 2002 and 2005, and the control group contained hospitals that did not adopt an EMR system during the same period.
During the study period, 425 hospitals adopted an EMR. Because the 159 hospitals that adopted EMR systems in 2002 accounted for 37% of the total 425 adopting hospitals, we randomly matched 106 out of the 283 total (37%) non-EMR-adoption hospitals for this particular year in the non-EMR-adoption group. We considered two groups of hospitals: hospitals adopting EMR systems (treatment group) and hospitals not adopting EMR systems (control group). In the treatment group, we selected hospitals that had adopted EMR systems in their third year so as to compare the two years before and the two years after EMR system adoption. Then, we interacted 1) the EMR group, 2) the years after EMR adoption, and 3) black race to determine whether EMR adoption could improve outcomes for black patients. Our regression model is expressed as

Y_it = α + β_1 EMR_it + β_2 after_EMR_it + β_3 Black_i + β_4 EMR_it × after_EMR_it + β_5 EMR_it × after_EMR_it × Black_i + β_6 Pat_Charac + β_7 Hosp_Charac + Time + ε_it,

where i represents hospitals, t represents time of year, and Y represents the outcome; EMR represents EMR-adoption hospitals, coded as 1 if EMR was adopted and 0 if not; after_EMR represents years after EMR adoption; Black represents patients of African-American descent; Pat_Charac represents patient characteristics, including sex, age, race (either white or black), and comorbidity; and Hosp_Charac represents hospital characteristics, including teaching status, bed size, and ownership.
The key independent variables are the interaction terms of EMR, after_EMR, and Black. If β_4, the coefficient of EMR × after_EMR, was significantly negative (or the odds ratio was less than 1), we could conclude that EMR adoption was negatively associated with patient outcomes. Alternatively, if β_5, the coefficient of EMR × after_EMR × Black, was significantly negative (or the odds ratio was less than 1), we could conclude that EMR adoption improved patient outcomes for black patients. We used generalized linear models (GLMs) to investigate the link between EMR adoption and reduction in disparities in four health outcomes for black patients (i.e., length of stay, inpatient mortality rate, 30-day mortality rate, and 30-day readmission rate) after taking the clinical (i.e., comorbidities) and demographic (i.e., age, sex, and race) characteristics of patients, the structural (i.e., ownership type, teaching status, and bed size) characteristics of hospitals, and time (year) into account. We used a GLM with binomial distribution and logit link for the models for (1) inpatient mortality rate, (2) 30-day mortality rate, and (3) 30-day readmission, and a GLM with normal distribution and log link for (4) length of stay. All statistical analyses were performed using STATA 14 (StataCorp, College Station, TX, USA).
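A minimal sketch of this triple-interaction GLM on synthetic data (assuming Python with statsmodels rather than the authors' Stata code; variable names and the generated data are illustrative only):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "emr":    rng.integers(0, 2, n),   # 1 = hospital in the EMR-adoption group
    "after":  rng.integers(0, 2, n),   # 1 = observation after adoption year
    "black":  rng.integers(0, 2, n),   # 1 = black patient
    "age":    rng.integers(65, 95, n),
    "female": rng.integers(0, 2, n),
    "comorb": rng.poisson(3, n),       # Elixhauser-style comorbidity count
})
df["died"] = rng.integers(0, 2, n)     # inpatient mortality (placeholder)
df["los"] = rng.gamma(2.0, 3.0, n)     # length of stay in days (placeholder)

# beta_4 is 'emr:after' and beta_5 is 'emr:after:black' in formula notation.
mort = smf.glm("died ~ emr * after * black + age + female + comorb",
               data=df, family=sm.families.Binomial()).fit()
los = smf.glm("los ~ emr * after * black + age + female + comorb",
              data=df, family=sm.families.Gaussian(sm.families.links.Log())).fit()
print(mort.params[["emr:after", "emr:after:black"]])
print(los.params[["emr:after", "emr:after:black"]])
```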
III. Results
The study cohort included patients admitted to 425 hospitals adopting EMR systems and 283 hospitals not adopting EMR systems over the study period as indicated in Tables 1 and 2. Table 3 presents patient, disease, and hospital characteristics.
Effect of EMR on Outcome Disparities
A larger proportion of patients were female (59%), aged between 76-85 (43%), and white (90.7%). The mean of the comorbidity index was 2.9. Slightly less than half of the hospitals were teaching hospitals, while not-for-profit hospital ownership accounted for the majority (78.1%). The number of beds in each type of hospital was equally distributed. Table 3 also shows descriptive statistics for the four outcomes: the mean length of stay was 5.62 days, inpatient mortality 4.4%, 30-day mortality 13.5%, and 30-day readmission 16.3%. Table 4 presents the difference-in-difference GLM regression results for length of stay and inpatient mortality rate. In the first column (length of stay), the variables positively associated with length of stay are black race, older age groups (65-75 and 86 and over), higher comorbidity, and greater number of beds. As seen in the second column (inpatient mortality rate), we found that patients who were black, male, had higher comorbidity, and belonged to an older age group (76-85 and 86 and over) had an increased likelihood of experiencing inpatient mortality. In addition, government hospital ownership and a bed count of 261-362 were other factors linked to higher mortality rates. Table 5 presents GLM regression results for 30-day readmission and 30-day mortality after discharge. As seen in the first column (30-day readmission rate), we found that patients who were black, male, had higher comorbidity, and belonged to an older age group (76-85 and 86 and over) were more likely to have a higher 30-day readmission rate. Also, hospitals of not-for-profit ownership were more likely to have a higher 30-day readmission rate. As seen in the second column (30-day mortality rate), patient characteristics associated with higher 30-day mortality rates were male sex, older age, and higher comorbidity. The hospital characteristic associated with higher 30-day mortality rates was not-for-profit ownership.
However, we could not find any significant effect of EMR adoption, of the years after EMR adoption, or of the interaction terms among the EMR-adoption group, the years after EMR adoption, and race in any of the four outcome regressions.
IV. Discussion
EMR systems have been reported to support better-coordinated care [3,12,17,18], thus encouraging appropriate treatment for minority patients by removing potential sources of bias from the providers [19]. Health IT systems could improve clinical care by detecting important clinical and sociodemographic risk factors for various conditions particularly relevant to minority patients. EMR systems may improve the quality of care provided to patients via increased responsiveness to care processes that are required to be more time-sensitive and through improved communication [20].
While a growing number of studies have examined the association between EMRs and health outcomes, surprisingly few studies have explored the impact of EMRs on outcome disparities among minorities. Thus, to fill this gap in the current literature, this study investigated the impact of EMR system adoption on disparities in health outcomes of black patients. However, we did not observe any significant changes in health outcomes before and after EMR system implementation. Moreover, we did not find evidence that EMR system implementation was associated with outcomes of black patients. A possible reason for the lack of any significant association between EMR and healthcare outcomes of black patients could be the short-term effect of EMR adoption. We compared the outcomes two years before and after EMR adoption. Thus, the absence of meaningful changes in health outcomes of black patients after EMR implementation might reflect a short-term effect of EMR implementation on patient outcomes. It has been reported that it may take several years for hospitals adopting EMR systems to fully capitalize on the clinical benefits after EMR implementation [18]. In particular, benefits from EMR on health outcomes may take longer to appear than benefits on the process of care for minority populations. For example, Lee [21] found that greater health IT investment leads to shorter waiting times, and the waiting time reduction was greater for non-white than for white patients. He concluded that minority populations could benefit from health IT in terms of process of care [21]. Also, progress toward eliminating healthcare disparities among minority groups may remain limited, even though the evidence of healthcare disparity among minority groups is substantial [22]. Acevedo-Garcia et al. [22] argued that an effort to eliminate disparities among specific groups is currently underway but will take more time. For example, this effort has focused on training providers to offer appropriate services through the use of EMR systems and to improve coordination of care [22].
There were some limitations in this study. First, for this work, EMR was defined as a computerized patient record system (CPRS) supported by clinical decision-support (CDS) capabilities and a clinical data repository (CDR) [13,14]. Thus, the definition of EMR adoption was less restrictive than those used in previous studies [23][24][25]. For example, Miller and Tucker [23] employed HIMSS data, but defined EMR adoption as four complete components of adoption, including CDR, computerized physician order entry (CPOE), CDSS, and digitized physician documentation. Also, prior studies have used a more limited definition of EMR [24,25] that included IT systems such as electronic medication administration records (eMAR) and nursing documentation. This study used a more expansive definition of EMR systems because the HIMSS data was incompatible with health IT systems such as physician documentation. Second, we did not have access to in-depth data indicating the level of provider proficiency with, or reluctance toward, the use of EMR systems in hospitals. Hence, if care providers were resistant to using an EMR system or were not IT-savvy for various reasons after EMR adoption, our results may underestimate the effect of EMR systems.
Lastly, there may have been unobserved confounding factors that might have impacted our findings. For example, organizational and management behavior may be correlated with health IT adoption. Although we controlled for structural characteristics of the hospital and patient clinical and demographic characteristics, EMR adoption behavior may vary in ways we could not observe in the data set, leading to bias in our results.
Conflict of Interest
No potential conflict of interest relevant to this article was reported.
"year": 2016,
"sha1": "b535f3826a36b19aa1a52e962dafe6988aff0b3e",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4871840?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "b535f3826a36b19aa1a52e962dafe6988aff0b3e",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Critical care: Are we customer friendly?
Abstract Objective: Assessing and enhancing family satisfaction are imperative for the provision of comprehensive intensive care. There is a paucity of Indian data exploring family's perception of Intensive Care Unit (ICU) patients. We wanted to explore family satisfaction and whether it differed in families of patients admitted under intensivists and nonintensivists in our semi-open ICU. Methodology: We surveyed family members of 200 consecutive patients, between March and September 2009, who were in the ICU for >3 days. An internationally validated family satisfaction survey was adapted and administered to a family member on day 4 of the patient's stay. The survey consisted of 15 questions in five categories: patient care, medical counseling, staff interaction, visiting hours, and facilities, and was set to a Likert scale of 1-4. Mean, median, and proportions were computed to describe answers for each question and category. Results: A total of 515 patients were admitted during the study period, of which 200 patients stayed in the ICU >3 days. One family member each of the 200 patients completed the survey, with a 100% response rate. Families reported the greatest satisfaction with patient care (94.5%) and least satisfaction with visiting hours (60.5%). Chi-square tests performed for each of the five categories revealed no significant difference between satisfaction scores of intensivists' and nonintensivists' patients. Conclusion: Family members of ICU patients were satisfied with current care and communication, irrespective of whether they were admitted under intensivists or nonintensivists. Family members preferred an open visiting hours policy over a time-limited one.
Introduction
Each year, around 5 million patients are admitted to Intensive Care Units (ICUs) in India.
Considering that the average family size of Indian families is 4.6 [1], 23 million people have to deal with the illness of a family member in the ICU each year. Family members of patients in the ICU face an unfamiliar, stressful environment at a time they are often least prepared for it [5]. High-quality medical care should be both patient and family centered. In our society, family support carries abundant significance. However, in reality, families' expectations and needs from healthcare providers become secondary to the patient's medical care [6]. Understanding and meeting the needs of the family members of the critically ill are an important responsibility of the ICU team. In the ICU, where the majority of patients are critically ill and are unable to participate in decision making about treatments, the family's perspectives become central to understanding and measuring satisfaction with the medical care provided [7]. Measurement of family satisfaction has, hence, been proposed as one of several quality indicators of ICU care [8]. In the critical care setting, studies on patients' family satisfaction are few in number and limited in scope [7]. Culturally and socially, Indian families differ significantly from those in the west; their expectations, needs, and factors contributing to their stress are likely to be different from those of western families. This study aims to further our understanding so as to create a better experience for the families of patients under our care in the Indian setting.
We conducted this study in our multi-disciplinary ICU to determine levels of satisfaction of family members with the care process and the actual care provided, and to assess whether it differed in families of patients admitted directly under intensivists from those admitted under nonintensivists. We also sought to find out if family satisfaction changed when patients had a prolonged ICU stay.
Methodology
This is a prospective, questionnaire-based study. We surveyed family members of 200 consecutive patients, from March to September 2009.
Setting
The study was conducted in a tertiary care hospital in Chennai. The ICU is a 65-bedded, multi-disciplinary ICU, which admitted both medical and surgical patients. The ICU functions as a semi-open model unit. Physicians who were not necessarily intensive care specialists were also primarily responsible for the patient care; however, treatment decisions were often made after discussions with the ICU team.
The healthcare team included the primary team admitting the patient, an ICU team comprising an ICU consultant and shift duty doctors, and bedside nurses and technicians, with an average nurse-to-patient ratio of 1:1.5. There were also 2 patient/family counselors. Families were counseled independently by both the admitting physician and the ICU team. Joint family meetings were held when deemed appropriate. Family visitation was restricted to 2 h a day: 1 h in the morning and 1 h in the evening. Extra visitations were granted to families when requested or when more family participation in care was thought to be helpful. Families were allowed to be at the bedside when end-of-life care was being provided for terminally ill patients.
Inclusion and exclusion
Patients who stayed in the ICU for more than 3 days were included in the study. The minimum stay of 3 days in the ICU was chosen to ensure that the family member had adequate time and exposure to the ICU setting. Only one family member in each patient's family, the key decision maker, was identified as the spokesperson and surveyed. Family members <18 years of age were excluded.
Sample size
We decided to include 200 consecutive patients arbitrarily, as we did not have any previous studies showing us a response rate or prevalence of specific variables.
Data collection
The questionnaire was administered on day 4 of the patient's ICU stay. A research assistant recruited family members consecutively, using the inclusion criteria. The questionnaire was administered in the privacy of a counseling room. All participants were specifically assured that their results would be kept confidential. For patients who stayed more than 3 weeks, the same questionnaire was administered again on the 22nd day.
Survey questionnaire
An internationally validated family satisfaction survey [9] was adapted and modified to suit our setting. The questionnaire was administered as a semi-structured interview.
The questionnaire included the demographic details of patients such as age, gender, and date of admission; the family member's relationship to the patient (optional); the physician under whom the patient was admitted; and satisfaction scale items, which included self-rated levels of satisfaction with five identified key aspects of care related to the overall ICU experience, such as how the patient and the family member were treated, communication by the ICU team, visiting hours, and the atmosphere in the waiting room.
The survey consisted of 15 questions in five categories: patient care, medical counseling of families (communication to attendants), staff interaction, visiting hours policy, and facilities. The answers were set to a Likert scale of 1-4; scoring was based on the scale, with 1 denoting excellent/completely satisfied and 4 very poor/very dissatisfied. Space was provided for suggestions and comments (optional).
As the study was part of an ongoing quality improvement effort, ethical committee approval was not sought. The respondents were informed that participation was voluntary, and consent was implied by the completion of the survey.
Data analysis
Collected data for all the parameters were coded and analyzed with the statistical software SPSS 17.0 (SPSS IBM, USA). Descriptive statistics were calculated to describe the distributions of individual items and the summary scores.
Means, medians, standard deviations, frequency tables, rates, and proportions were computed to describe the answers for each question and each category. The percentage of positive responses for each item was also computed. Answers scored 3 or 4 were considered a negative perception (not satisfactory).
The scores were also standardized using the standardization formula: Standardized Score = (Observed Score − Minimum Score)/(Maximum Score − Minimum Score). The resultant scores on the scale of 0-100 were cut into halves using 50 as the midpoint. Chi-square tests were used to compare the satisfaction levels between families of patients admitted under intensivists and nonintensivists. t-tests were performed to compare the mean satisfaction before and after a long stay.
For all the statistical tests, a P < 0.05 was considered statistically significant.
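A minimal sketch of the standardization (the 15-item, 1-4 Likert raw range of 15-60 is our assumption about how totals were formed; given that 1 = completely satisfied, lower standardized scores indicate greater satisfaction):

```python
def standardize(observed, min_score=15, max_score=60):
    """Rescale a raw score to 0-100 via (observed - min) / (max - min).
    With 15 items on a 1-4 scale the raw total ranges over 15-60
    (an assumed range, not stated in the paper)."""
    return 100 * (observed - min_score) / (max_score - min_score)

raw_total = 24
score = standardize(raw_total)
print(score, "satisfied half" if score < 50 else "dissatisfied half")
```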
Results
A total of 515 patients were admitted during this period. Of these, 200 family members of patients who stayed in the ICU >3 days were interviewed. One family member each for the 200 patients completed the survey, with a 100% response rate.
Chi-square tests were performed for each of the five categories between satisfaction scores of intensivists' and nonintensivists' patients. The results revealed no statistically significant differences between the two groups [Table 3 and Figure 2]. There were 7 patients who stayed in the ICU for more than 3 weeks during the study period. There was no statistical significance between their "before and after" satisfaction scores.
Quantitative and qualitative data/written comments
We analyzed the written comments, as they may add important insights not captured by the scores.
More than half of the respondents in our survey provided comments; there were 103 comments in total (51.5%). Most comments related to visiting hours, followed by communication to attendants, followed by facilities provided for the families (such as the waiting room, rest rooms, etc.). 21 of the 103 comments (20.3%) were appreciations of the overall care provided [Table 4].
The numbers of positive and negative comments seemed to be in concordance with category-specific and overall satisfaction scores [Table 5]. Most of the comments/suggestions were regarding visiting hours being inadequate and inconvenient.
Discussion
In our single-center study, we found that the majority of family members were satisfied with the overall ICU care [12]. When examining individual item scores, "Patient care" scored the highest, and "staff interaction" scored the second highest. Communication to attendants scored third in our study, while, in several studies, this had scored the least [7,13]. Malacrida et al. [12], in a survey on family satisfaction of patients who died in the ICU, found that a high percentage of respondents were satisfied with the care but primarily complained about the information received and the way it was communicated. A study on family satisfaction with end-of-life care emphasizes the need for better communication and greater access to physicians and suggests that these factors are strongly associated with satisfaction [14]. In contrast to these studies, our results reveal that 84.5% of patient families provided a positive response with regard to the communication they received about their loved one's care. It is possible that Indian patient families' expectations differ from those in the west, explaining our results. One study showed that families of ICU nonsurvivors were more satisfied than families of survivors [15]. However, we did not look at the differences in family satisfaction among ICU survivors and nonsurvivors.
Our study revealed lower scores regarding facilities provided for families and ICU visitation hours. Other studies have shown similar results, with family satisfaction scores consistently low regarding ICU atmosphere and the waiting room [7,11]. The lowest satisfaction scores in our survey were seen with the visiting hours policy. The influence of visiting hours policy on patient satisfaction has been unclear, and data have been conflicting. Perceptions of ICU caregivers and patient families seem to differ. Fumis et al. have shown that family members in an open-visit ICU reported high satisfaction [16]. However, in a study by Stricker et al., fewer visiting hours per day were not associated with lower satisfaction [11]. Some authors are of the opinion that a restricted visitation policy in the ICU may be less compassionate and not necessary [18]. Moreover, relatives' presence at the bedside of ICU patients has also been shown to be beneficial to the patients, without resulting in significant adverse events [18,19]. In a study on perceptions of an open visitation policy by ICU workers, the majority of ICU workers answered that an open visitation policy impairs the organization of the care given to the patient and interferes with their work, though they thought that an open visitation policy might help the patient's recovery by decreasing anxiety and stress [2]. In one study, restricted visiting policies were preferred by the staff, especially by the nurses, because they were concerned that opening an ICU to visitors could interfere with their care process [20]. An open visitation policy may be common in the pediatric ICU setting but is still uncommon in adult ICUs [21,22], and the impact of visitation policies on family satisfaction, their effect on the care process, and their influence on ICU staff performance are all debatable.
In their study, Schwarzkopf et al. [23] integrated quantitative and qualitative analyses, showing that while families may overall be highly satisfied, they still have suggestions for improvement. Similarly, although we received higher scores for most of the questions, more than half of the respondents provided written comments with appreciations, suggestions, and negative feedback, which implied that there is still room for overall improvement.
Our study shows that there was no difference in satisfaction scores between patients who were admitted under intensivists and nonintensivists. It is likely that we were underpowered to show any differences between groups, considering the relatively small proportion of patients directly admitted under intensivists. Moreover, counseling for all patients was done by both the admitting physician and the ICU team, equalizing any disparity in the quality of counseling. There also seemed to be no difference in scores before and after a long stay; however, the number of patients in this group was not sufficiently high to derive any inference from this. This study is, to our knowledge, the first survey to explore family satisfaction in Indian ICUs. The 100% response rate of consecutive patients (who stayed more than 3 days in the ICU) has ensured that there was no bias excluding patients based on sickness, socioeconomic status, level of literacy, or any other constraint. We have explored overall family satisfaction and reported key insights in areas that need improvement.
We recognize that there may be limitations in our study. First, the questionnaire was not a formally validated one but was an adapted version based on an international questionnaire. Second, respondent details and their relation to the patient were optional and so were not available for the majority of the responses. Third, patients' diagnoses, severity of sickness, and ventilation status were not analyzed against satisfaction scores. Similarly, satisfaction scores of ICU survivors and nonsurvivors, and the impact of end-of-life decisions, if any, on family satisfaction, were not analyzed separately. Finally, these findings were from a single center and cannot be generalized to all systems, considering the wide variability in the care process provided.
Conclusion
Family members of ICU patients overall seem to be satisfied with our current services. There were no differences in family satisfaction whether the patients were admitted under intensivists or nonintensivists. Family members prefer a more open visiting hours policy than a time-limited one. Domains of low satisfaction provide a target to improve the quality of care both for the patients and their families.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Table 1: Questions in the survey (columns: Questions, Minimum, Maximum, Mean, SD, SE)
SD: Standard deviation; SE: Standard error
Table 3: Satisfaction scores of patients admitted under intensivists and nonintensivists
*Chi-square tests were used to compare the satisfaction levels between families of patients admitted under intensivists and nonintensivists.
"year": 2015,
"sha1": "fff128545d5c4814ee135cfde6fa2c255b5ca864",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0972-5229.164796",
"oa_status": "BRONZE",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cc0322458d976242fa5d8b2397f9a1c761326324",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Quantum battery based on superabsorption
A quantum battery is a device where energy is charged by using a quantum effect. Here, we propose a quantum battery with a charger system composed of $N$ qubits by utilizing a collective effect called superabsorption. Importantly, the coupling strength between the quantum battery and the charger system can be enhanced due to entanglement. While the charging time scales as $\Theta\left(N^{-1/2}\right)$ when applying a conventional scheme, we can achieve a charging time of $\Theta\left(N^{-1}\right)$ in our scheme. Our results open the path to ultra-fast charging of a quantum battery.
I. INTRODUCTION
Quantum thermodynamics is an emerging field to extend the conventional thermodynamics to microscopic systems where not only thermal but also quantum fluctuations should be taken into account [1][2][3][4][5][6][7]. One of the aims of quantum thermodynamics is to investigate whether quantum devices provide enhancement of performance over classical devices.
A quantum heat engine is one of the promising devices with an enhancement over classical ones by using quantum properties [8][9][10][11][12][13][14][15][16]. It is possible to obtain a quadratically scaling power P = Θ(N²) by using the quantum-enhanced heat engine with entanglement, while a conventional separable engine shows a power of P = Θ(N), where N denotes the number of qubits [15]. Here, the key feature of this scheme is to adopt a collective effect called superabsorption, which was proposed in Ref. [17]; its proof-of-concept experiment was recently demonstrated using barium atoms in an optical cavity [18].
A quantum battery is also a prominent research subject in quantum thermodynamics, aiming to charge energy into quantum systems. As is the case for conventional batteries (such as lithium-ion batteries) using electrochemical reactions to convert chemical energy into electrical energy, the main issue for the quantum battery is to improve the performance of the charging and discharging processes [19][20][21][22]. Such a quantum battery was first proposed in Ref. [19]. In Ref. [20], it was found that the use of entangling operations can improve the performance of the quantum battery compared with one using only separable operations. Here, the performance is defined as the storable energy in a quantum battery per unit time. Also, in Ref. [22], it was found that Dicke quantum batteries composed of collective N-qubit systems give a scaling √N times larger compared to independent N-qubit batteries. In these studies, external pulses are applied to charge the isolated quantum battery.
On the other hand, the battery can also be charged through an interaction with an environment [23,24]. In Ref. [24], when N quantum batteries interact with N charger systems prepared in a steady state with a population inversion, the charging time scales as Θ(N) with a collective charging process. Here, the charging time is defined as how long it takes for the quantum battery to reach a steady state. In Ref. [25], the improvement of quantum-battery performance due to collective effects was experimentally confirmed, with a charging time scaling as Θ(N^{-1/2}), where N denotes the number of atoms.
Here, we propose a quantum battery using a charger system composed of an entangled N-qubit system. Our quantum battery provides a charging time scaling as Θ(N^{-1}). This is in stark contrast to the conventional scheme, where the charging time scales as Θ(N^{-1/2}) using N three-level systems as a charger. The key factor behind the enhanced charging time is the use of superabsorption in the charging process. The charger system is prepared in an entangled N-qubit state called a Dicke state via an interaction with the environment, and the quantum battery can strongly interact with the charger system in a collective way. An energy exchange between the charger and the battery occurs, and the battery, initially prepared in the ground state, is eventually raised to an excited state in a time scaling as Θ(N^{-1}).
Our paper is organized as follows. In Sec. II, we review a charging model with one three-level charger and a one-qubit battery. We also explain a charging model using an N-qubit charger initially prepared in a separable state, where the charging time scales as Θ(N^{-1/2}). In Sec. III, we explain our charging scheme with a charging time scaling as Θ(N^{-1}), using an N-qubit charger initially prepared in an entangled state. In Sec. IV, we conclude our discussion.
II. BATTERY CHARGING WITH SEPARABLE N THREE-LEVEL CHARGERS
Let us review the conventional charging model with separable states. We consider a charger system consisting of a three-level system and a quantum battery consisting of a qubit. The total Hamiltonian H¹_tot is given as follows.
where E_i (i = 0, 1, 2, with E_0 < E_1 < E_2) is the eigenenergy of the charger system, Δ is the energy of the quantum battery, and g is the coupling strength between the charger and the quantum battery. We prepare the initial system |ψ¹(0)⟩ = |1⟩_C|0⟩_B, where the charger is prepared in |1⟩_C and the battery in |0⟩_B. A steady state of the charger system after coupling with two thermal baths can be |1⟩_C for suitably adjusted parameters, where one thermal bath induces a transition between |0⟩_C and |2⟩_C while the other induces a transition between |2⟩_C and |1⟩_C; this is called a population inversion, where the population of the first excited state becomes higher than that of the ground state [14,17,24]. The purpose of the charging is to obtain the battery state |1⟩_B from the initial state. The battery state ρ¹_B(t) can be described as ρ¹_B(t) = Tr_C[e^{−iH¹_tot t}|ψ¹(0)⟩⟨ψ¹(0)|e^{iH¹_tot t}] = cos²(gt)|0⟩⟨0|_B + sin²(gt)|1⟩⟨1|_B, where Tr_C denotes the partial trace over the charger system. When we turn off the interaction at gt = π/2, we obtain the state |1⟩_B, which means that the charging is done. Next, let us explain a scheme using N three-level systems as a charger. We consider a charger system with N three-level systems and a quantum battery with a qubit. Strictly speaking, we need three-level systems for the chargers, because this allows us to use a population inversion when the charger system reaches a steady state after coupling with the thermal baths, as mentioned above. However, once we obtain the population inversion for the charger system, the dynamics between the quantum battery and the charger system is confined to the subspace spanned by |0⟩_C and |1⟩_C, so we consider only this subspace for simplicity.
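As a quick numerical check of this single-charger exchange, here is a minimal sketch of the resonant swap restricted to the two-state subspace {|1⟩_C|0⟩_B, |0⟩_C|1⟩_B}; the 2×2 interaction matrix is an assumed stand-in for the full three-level Hamiltonian, whose explicit form is not reproduced here.

```python
# Minimal sketch: resonant excitation swap between one charger level pair and
# a qubit battery, in the subspace {|1>_C |0>_B, |0>_C |1>_B}.
import numpy as np
from scipy.linalg import expm

g = 1.0                                    # charger-battery coupling (arbitrary units)
H = g * np.array([[0.0, 1.0],              # basis: [|1>_C|0>_B, |0>_C|1>_B]
                  [1.0, 0.0]])
psi0 = np.array([1.0, 0.0])                # charger excited, battery empty

for gt in [0.25 * np.pi, 0.5 * np.pi]:
    psi = expm(-1j * H * gt / g) @ psi0
    p_charged = abs(psi[1]) ** 2           # population of |0>_C|1>_B
    print(f"gt = {gt:.3f}: P(battery in |1>_B) = {p_charged:.3f}")
# The population follows sin^2(gt) and reaches 1 at gt = pi/2.
```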
We define the collective operators J_z and J_± as J_z = (1/2)Σᵢ σ_z^{(i)} and J_± = Σᵢ σ_±^{(i)}, where the sums run over the N charger qubits. The total Hamiltonian H^sep_tot is given as follows.
where ω_A denotes the frequency of the qubits for the charger system and the quantum battery, and g is the coupling strength between the charger and the quantum battery. This Hamiltonian has been experimentally realized using a superconducting qubit and an electron-spin ensemble [26][27][28][29]. We prepare the initial system |ψ^N(0)⟩ = |11⋯1⟩_C|0⟩_B, where the charger is prepared with all qubits excited, |11⋯1⟩_C, and the battery is prepared in |0⟩_B. The purpose of the charging is to obtain the battery state |1⟩_B from the initial state. The battery state ρ^N_B(t) can be described as ρ^N_B(t) = cos²(√N gt)|0⟩⟨0|_B + sin²(√N gt)|1⟩⟨1|_B. From this analysis, the necessary time to obtain |1⟩_B from |0⟩_B is t = π/(2√N g). This means that the charging time for the battery scales as Θ(N^{-1/2}) in this model. Such behavior was theoretically predicted in [30,31].
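The √N speed-up can be checked numerically. The sketch below assumes the standard resonant collective interaction H = g(J₊σ₋ + J₋σ₊) in the rotating frame; since the explicit Hamiltonian is not reproduced in the extracted text, this form is an assumption consistent with the √N matrix element quoted above.

```python
# Sketch: charging with a separable, fully excited N-qubit charger, with the
# charger written in the Dicke basis |N/2, M>.
import numpy as np
from scipy.linalg import expm

def jminus(N):
    """Collective lowering operator J_- in the basis |N/2, M>, M = N/2, ..., -N/2."""
    J = N / 2.0
    Jm = np.zeros((N + 1, N + 1))
    for i, M in enumerate(np.arange(J, -J, -1.0)):   # states that can be lowered
        Jm[i + 1, i] = np.sqrt(J * (J + 1) - M * (M - 1))
    return Jm

g = 1.0
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # battery sigma_-, basis {|0>_B, |1>_B}
for N in [1, 4, 16, 64]:
    Jm = jminus(N)
    H = g * (np.kron(Jm.T, sm) + np.kron(Jm, sm.T))  # J_+ sigma_- + J_- sigma_+
    psi0 = np.zeros(2 * (N + 1)); psi0[0] = 1.0      # |M = N/2>_C |0>_B
    t = np.pi / (2 * np.sqrt(N) * g)                 # predicted charging time
    psi = expm(-1j * H * t) @ psi0
    p1 = np.sum(np.abs(psi.reshape(N + 1, 2)[:, 1]) ** 2)
    print(f"N = {N:3d}: P(|1>_B) at t = pi/(2 sqrt(N) g) -> {p1:.4f}")
```

The battery population reaches 1 at t = π/(2√N g) for every N, because the fully excited charger and the battery form an exact two-level Rabi problem with frequency √N g.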
III. BATTERY CHARGING WITH SUPERABSORPTION
Here, we introduce our scheme to charge the quantum battery with a charging time scaling as Θ(N^{-1}), using N charger qubits.
A. Hamiltonian
We consider the charger system and the quantum battery system. The former is composed of N qubits, while the latter is composed of two qubits. The total Hamiltonian H_tot is given as follows.
where H_N (H_B) denotes the Hamiltonian of the N-qubit system (two-qubit system), H_I denotes the interaction Hamiltonian between the charger (N qubits) and the quantum battery (two qubits), ω_A denotes the frequency of each of the N qubits, Ω denotes the total coupling constant among the N qubits, and g denotes the coupling constant between the charger and the battery. Here, let us introduce J_{±1} and J_{±2}, which are parts of the ladder operators. They are defined as follows.
Here, we introduce the Dicke states, which are the simultaneous eigenstates of J² and J_z. These can be written as |J, M⟩, with corresponding eigenvalues J(J+1) and M. In this paper, we take the Dicke states as |M⟩ = |N/2, M⟩ in the subspace with total angular momentum J = N/2, and we assume N is odd. We assume the strong-coupling condition |Ω| > ω_A. Also, we assume ω_A > 0 and Ω < 0. These conditions allow us to construct a Λ-type structure for the Dicke states between |3/2⟩_C, |1/2⟩_C, and |−1/2⟩_C, as shown in FIG. 3. In this case, |1/2⟩_C has the highest energy in the charger system. This means that J_{+1} and J_{−2} play the role of inducing a transition from a higher-energy state to a lower-energy state in the charger system, whereas J_{−1} and J_{+2} induce a transition from a lower-energy state to a higher-energy state. We use a rotating-wave approximation (RWA) for gN ≪ ω_A. In the RWA, we typically ignore terms that oscillate with a high frequency; in our case, terms such as (σ_+)J_{+2} are dropped. Using the RWA, we obtain the interaction Hamiltonian containing only the near-resonant terms. In this case, the Hamiltonian of the N-qubit system can be diagonalized as follows. On the other hand, |ω_A + 2Ω| + δ denotes the frequency of qubit 2, which is detuned from Δ_{3/2} by δ.
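The superabsorption enhancement at the middle of the Dicke ladder can be made concrete by tabulating the collective ladder matrix elements. The sketch below uses the standard Dicke coefficients, |⟨J, M+1|J₊|J, M⟩|² = J(J+1) − M(M+1); identifying these with the enhancement factors a_M that appear later is our reading of the paper's notation.

```python
# Sketch: the squared collective matrix element is ~N^2/4 for the mid-ladder
# transitions used in the Lambda structure (|1/2> <-> |3/2>), but only ~N at
# the edge of the ladder.
import numpy as np

def a_up(J, M):
    """Squared matrix element of J_+ acting on |J, M>."""
    return J * (J + 1) - M * (M + 1)

for N in [11, 101, 1001]:                 # odd N, as assumed in the text
    J = N / 2.0
    mid = a_up(J, 0.5)                    # |1/2> -> |3/2>
    edge = a_up(J, J - 1.0)               # |J-1> -> |J>
    print(f"N = {N:5d}: mid-ladder a = {mid:12.1f} (~N^2/4), edge a = {edge:8.1f} (~N)")
```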
B. Perturbation analysis
Let us derive an effective Hamiltonian. We move to the interaction picture and consider the unitary operator U(t, 0) = T exp(−i∫₀ᵗ H_I(t′)dt′), where T denotes time ordering and H_I(t′) denotes the interaction Hamiltonian in the interaction picture. We expand U(t, 0) to second order in the coupling constant g.
Under the assumptions gN ≪ δ ≪ ω_A < |Ω| and t ≫ 1/(gN), the first-order interaction term induces transitions between states with a large energy difference, so its contributions oscillate with a high frequency. The second-order term, which includes a resonant transition, therefore becomes the relevant term, and we obtain the effective Hamiltonian H_eff. Here, when the initial state of the battery is confined to the subspace spanned by {|01⟩_B, |10⟩_B}, the dynamics of the battery state is also confined to this subspace as long as the second-order term is the relevant one (see the Appendix for the details of the derivation). Although a similar approximation has been used in quantum optics [32], we are the first to apply this technique to a quantum-battery model with superabsorption. We now discuss the dynamics of the system under the effective Hamiltonian. We prepare the initial state |ψ(0)⟩ = |−1/2⟩_C|10⟩_B (see FIG. 4). The purpose of the charging is to obtain the battery state |01⟩_B from the initial state. Considering the charging process with the effective Hamiltonian H_eff, the battery state ρ^eff_B(t) can be described as ρ^eff_B(t) = Tr_N[e^{−iH_eff t}|ψ(0)⟩⟨ψ(0)|e^{iH_eff t}] = cos²(λt)|10⟩_B⟨10| + sin²(λt)|01⟩_B⟨01|. (25) Here, λ = (g²/δ)√(a_{1/2})√(a_{3/2}), and Tr_N denotes the partial trace over the charger system. We set δ = 10gN to satisfy the condition gN ≪ δ. Since √(a_{1/2})√(a_{3/2}) ∝ O(N²) and δ ∝ O(N), the necessary time to obtain |01⟩_B from the initial state |10⟩_B scales as O(N^{-1}). This means that the charging time for the battery scales as O(N^{-1}). We use a phenomenon called superabsorption, whereby the coupling strength is collectively enhanced around the middle of the Dicke ladder structure. However, we used approximations to derive the effective Hamiltonian; to check their validity, we perform numerical simulations in the next subsection.
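A quick numerical reading of Eq. (25): with λ = (g²/δ)√(a_{1/2} a_{3/2}) and δ = 10gN, the full-swap time π/(2λ) should fall off as 1/N. The sketch below again assumes a_M = J(J+1) − M(M−1) for the transition |M−1⟩ → |M⟩, which is our interpretation of the paper's enhancement factors.

```python
# Sketch: scaling of the effective swap rate lambda and the charging time.
import numpy as np

g = 1e-3                                  # matches the value used in Sec. III C
for N in [11, 51, 101, 201]:
    J = N / 2.0
    a = lambda M: J * (J + 1) - M * (M - 1)   # assumed form of a_M (mid-ladder ~N^2/4)
    delta = 10 * g * N
    lam = (g ** 2 / delta) * np.sqrt(a(0.5) * a(1.5))
    t_charge = np.pi / (2 * lam)          # time for sin^2(lam t) to reach 1
    print(f"N = {N:4d}: lambda = {lam:.3e}, t_charge = {t_charge:.3e}, "
          f"N * t_charge = {N * t_charge:.3e}")
# N * t_charge tends to a constant, i.e. the charging time scales as Theta(1/N).
```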
C. Numerical analysis
The total Hamiltonian for the numerical simulations is as above, where we use the interaction Hamiltonian H_I without the rotating-wave approximation. We consider the dynamics of the battery state (see FIG. 7). From the numerical results, we confirm that there is an oscillation between |10⟩_B and |01⟩_B, while the population leakage to the other states is negligible. Secondly, we numerically calculate how the charging time depends on the number of qubits N, as shown in FIG. 8. We introduce the ergotropy, a measure of the extractable energy from the battery. We assume that the charging is done when the ergotropy of the quantum battery reaches 80% of its maximum value, and we call the necessary time for this the charging time τ_N. In FIG. 8, we numerically confirm that τ_N scales as O(N^{-1}) in the regime gN ≪ δ ≪ ω_A < |Ω|. However, in the region δ ≳ ω_A, i.e., N ≳ N*, where N* is defined by δ = ω_A (so N* = ω_A/(10g) for δ = 10gN), we cannot finish the charging process because the target ergotropy cannot reach 80% of its maximum value. This comes from the fact that the detuning δ is not strong enough to confine the dynamics to the subspace spanned by {|10⟩_B, |01⟩_B}; in other words, the first-order term in Eq. (A9) has a non-negligible contribution to the dynamics. Our effective Hamiltonian thus becomes invalid when the number of qubits exceeds N*. We set the parameters as g = 10^{-3}, δ = 10gN, and Ω = −2.3ω_A.
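For reference, the ergotropy has a simple closed form for finite-dimensional states: W(ρ, H) = Tr(ρH) − Σ_k r_k ε_k, with the populations r_k of ρ sorted in decreasing order and the energies ε_k of H in increasing order. The sketch below computes it for a qubit battery; the Hamiltonian and populations are illustrative, not the paper's simulation.

```python
# Sketch: ergotropy, the maximal energy extractable from a state by a unitary.
import numpy as np

def ergotropy(rho, H):
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]   # populations, descending
    e = np.sort(np.linalg.eigvalsh(H))           # energies, ascending
    return np.real(np.trace(rho @ H)) - np.sum(r * e)

# Example: a partially charged qubit battery with H = (w/2) sigma_z.
w = 1.0
H = 0.5 * w * np.diag([-1.0, 1.0])               # basis {|0>, |1>}
for p in [0.5, 0.8, 1.0]:                        # excited-state population
    rho = np.diag([1 - p, p])
    print(f"P(|1>) = {p:.1f}: ergotropy = {ergotropy(rho, H):.3f}"
          f" (maximum = {w:.1f}; 'charged' once it exceeds 0.8 * maximum)")
```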
IV. CONCLUSION
In conclusion, we propose a quantum battery with a charger system composed of N entangled qubits. We utilize superabsorption for an entanglement-enhanced energy exchange between the charger system and the quantum battery. Our scheme provides a charging time scaling as Θ(N^{-1}), while a conventional scheme provides a charging time scaling as Θ(N^{-1/2}). Our results pave the way for the realization of an ultra-fast quantum battery.

Appendix: Derivation of the effective Hamiltonian

In this section, we derive the effective Hamiltonian. We adjust parameters to satisfy the following conditions for all μ_i, i ∈ {1, 2, 3, 4}. First, we expand the first-order interaction term. It is worth mentioning that the first-order interaction term induces transitions between states with a large energy difference. In this case, we have terms that oscillate with a high frequency, and these tend to be small. On the other hand, these terms also carry a collective enhancement factor of √(a_M). We evaluate whether they are negligible as a whole.
By choosing suitable parameters, these terms are negligible as we show below.
Therefore, we can drop the term ∫₀ᵗ H_I(t′)dt′ for gN ≪ δ ≪ ω_A < |Ω|. Next, we expand the second-order term of the interaction and calculate the first term of Eq. (A3).
By solving Eq. (A10), we obtain the state at time t under the effective Hamiltonian. This kind of approximation has been used in quantum optics [32], but we are the first to apply this technique to a quantum-battery model with superabsorption. | 2022-05-10T06:47:55.833Z | 2022-05-08T00:00:00.000 | {
"year": 2022,
"sha1": "69bea4359f8912dab6f0b001b964632c1b43ec45",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2205.03823",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e7f599a7a0d5926f0f0d6853cd5af97c3e9a0f00",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
253420786 | pes2o/s2orc | v3-fos-license | Improving Noisy Student Training on Non-target Domain Data for Automatic Speech Recognition
Noisy Student Training (NST) has recently demonstrated extremely strong performance in Automatic Speech Recognition (ASR). In this paper, we propose a data selection strategy named LM Filter to improve the performance of NST on non-target domain data in ASR tasks. Hypotheses with and without a language model are generated, and the CER differences between them are used as a filter threshold. Results reveal significant improvements of 10.4% compared with no-data-filtering baselines. We achieve 3.31% CER on the AISHELL-1 test set, which is, to our knowledge, the best result obtained without any other supervised data. We also perform evaluations on the supervised 1000-hour AISHELL-2 dataset, where a competitive result of 4.73% CER is achieved.
INTRODUCTION
In recent years, Semi-Supervised Learning (SSL) has attracted substantial research interest in many fields of deep learning, such as Automatic Speech Recognition (ASR) [1,2,3], Computer Vision [4,5,6], and Natural Language Processing [7,8,9]. Among these methods, Noisy Student Training (NST) has recently demonstrated extremely strong performance in image classification [6] by introducing noise and randomness into traditional teacher-student learning [10,11]. This method has further demonstrated its robustness in the ASR field [12,13,14]. After being combined with pre-training methods [15], NST has been shown to be a vital component for achieving SOTA results on a number of datasets, e.g., Librispeech [16].
However, NST has not been widely investigated in ASR tasks where the domain of the supervised data does not match the unsupervised data. Noise and domain play an important role in ASR [17], and the abundant unsupervised data from social media may not always match the domain of the desired task. Thus, proper data selection techniques are required to remove noise and select data that is close to the target domain [18]. The most common filter in ASR is the confidence score, which selects the most trustworthy transcriptions based on confidence estimation and a threshold [19,20,21]. However, this method is not always reliable in scenarios with large amounts of unlabelled data and domain mismatch. Another recent unsupervised data selection technique is investigated in [18], where a contrastive language model is applied as a data selector to improve the target-domain ASR task.

† Wen Ding is the corresponding author. This work was done during Yu Chen's internship at NVIDIA. Thanks to Yuekai Zhang and Hainan Xu for helpful suggestions. This work has been open-sourced in the WeNet toolkit.
In this paper, we propose a novel data selection strategy named LM Filter, which utilizes model differences to select more valuable non-target domain data and improve the performance of NST. We leverage the concept of a contrastive LM and the data selection method in [22]. Our LM Filter uses LM-based hypotheses to gradually remove noisy data within each NST iteration. The filter condition is relaxed over the NST iterations so that the model advances gradually in due order. This method has the following benefits:

• No additional data selection models are required. Model differences can be obtained from different decoding strategies (e.g., with/without LM).
• Labels are not required to perform the data selection; it is entirely unsupervised.
• Less time and fewer resources are needed to run the NST method, and it converges in fewer iterations.
Experiments with AISHELL-1 [23] as supervised data and WenetSpeech [24] as unsupervised data indicate a significant improvement of 10.4% compared with no-data-filtering baselines. When AISHELL-2 [25] and WenetSpeech are combined as unsupervised data, a 3.31% character error rate (CER) is achieved on the AISHELL-1 test set, which is, to our knowledge, the best result without any other supervised data on this test set. The LM Filter further demonstrates its robustness on larger datasets such as AISHELL-2 (supervised) and WenetSpeech (unsupervised), achieving a promising result of 4.73%, a 13.6% improvement over the baseline.
The rest of the paper is organized as follows. Section 2 briefly introduces the basic concepts and methods of NST in ASR. Our proposed data selection strategy, the LM Filter, is presented in Section 3. Experimental details are given in Section 4. Finally, we draw our conclusions in Section 5.
NOISY STUDENT TRAINING FOR ASR
Noisy Student Training [16] is an iterative self-training method evolved from teacher-student learning; its pipeline is illustrated in Fig. 1. Initially, a teacher model is trained with supervised data and pseudo-labels are generated by the teacher. Data augmentation methods such as SpecAug [26] and speed perturbation are then applied during training, and the student model is trained using both augmented supervised data and pseudo-data.
In our pipeline, we follow the design in which the student model always has the same configuration as the teacher and adopts dropout and stochastic depth, so that the trained student becomes more robust and general than the teacher. After training finishes, the student model is assigned as the new teacher and the whole pipeline iterates. After several rounds of training, models trained to tolerate noise and augmentation tend to perform better overall. A schematic of this loop is sketched below.
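The following Python skeleton captures the loop just described; `train`, `transcribe`, `augment`, and `select` are injected placeholders standing in for the actual ASR recipe (e.g., a WeNet setup), not real APIs.

```python
# Schematic sketch of the Noisy Student Training loop described above.
def noisy_student_training(labeled, unlabeled, n_iters,
                           train, transcribe, augment, select):
    teacher = train(augment(labeled))                  # initial supervised teacher
    for it in range(n_iters):
        pseudo = [(x, transcribe(teacher, x)) for x in unlabeled]
        kept = select(pseudo, iteration=it)            # data-selection hook (e.g. LM Filter)
        # The student shares the teacher's configuration but trains with extra
        # noise: SpecAugment, speed perturbation, dropout, stochastic depth.
        teacher = train(augment(labeled) + augment(kept))
    return teacher
```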
DATA SELECTION STRATEGY
Data selection and filtering play a significant role in SSL, especially in out-of-domain situations. This circumstance occurs frequently in industry, where labelled data in the desired domain is limited, and in low-resource tasks.
Initially, standard NST is performed with AISHELL-1 as supervised data and AISHELL-2 as unsupervised data, without any additional filtering strategy. The generated pseudo-labels show a quite promising CER of 8.38%; but when the unsupervised data is switched to WenetSpeech, which has a different domain and recording settings, the pseudo-label CER increases dramatically to 47.1%, which is unacceptable for training. The LM Filter is therefore proposed to improve the performance of NST when non-target domain data is provided.
Our hypothesis is that if a language model believes a sentence does not require any further modification, then this sentence has a higher probability of being a correct pseudo-label. Here we introduce two definitions and examples to better explain how our LM Filter works.
• CER-Hypo is the CER between the student model's hypothesis with greedy decoding and its hypothesis decoded with a language model.
• CER-Label is the CER between the student model's hypothesis decoded with a language model and the true label.
We evaluated our method on a Mandarin corpus using CER, but the same definitions can be applied to other languages, e.g., English, by replacing CER with WER. Two cases are listed in Fig. 2. In case 1, the difference between the greedy-decoding hypothesis and the LM-decoded hypothesis is 1 character (chars "数" and "诉"), so the CER-Hypo is 16.67%. The CER-Label is also 16.67% in this case, since it takes 1 substitution to transform the hypothesis into the true label (chars "申" and "胜"). In case 2, the sentence is more challenging for the initial student model: the model learns partial acoustic features, but the transcripts are mostly wrong, and the LM tends to make more modifications due to the low probability of such a sentence in the corpus. Both the CER-Hypo and CER-Label are extremely high in this case. A large number of cases suggests that CER-Hypo and CER-Label have a strong positive correlation: sentences with lower CER-Hypo tend to have lower CER-Label. Our LM Filter uses CER-Hypo as a threshold (e.g., 10%) to filter out data with high CER-Label. We also observe that unsupervised data whose domain is similar to the supervised data is more likely to have low CER-Hypo values. For unsupervised data from Youtube, topics similar to AISHELL such as "readings" and "news" tend to have lower CER-Hypo, while non-target domains such as "drama" and "variety" are more likely to be removed by the LM Filter. We additionally propose a speaking-rate filter for the WenetSpeech dataset, computed as the hypothesis length divided by the audio duration; music and song audio, common in drama and variety shows, can be effectively removed by this filter. A sketch of the combined filter follows.
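A minimal sketch of the per-utterance decision, assuming a character-level Levenshtein distance for CER and normalization by the LM hypothesis length; the threshold and speaking-rate bounds are illustrative, not the paper's tuned settings.

```python
# Minimal sketch of the LM Filter decision for a single utterance.
def cer(ref: str, hyp: str) -> float:
    """Character error rate: Levenshtein(ref, hyp) / len(ref)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,              # delete a reference char
                         cur[j - 1] + 1,           # insert a hypothesis char
                         prev[j - 1] + (r != h))   # substitution (0 on match)
        prev = cur
    return prev[-1] / max(len(ref), 1)

def keep_utterance(greedy_hyp, lm_hyp, duration_s,
                   cer_thresh=0.10, rate_bounds=(1.0, 8.0)):
    cer_hypo = cer(lm_hyp, greedy_hyp)             # LM-rescored text as reference
    rate = len(lm_hyp) / max(duration_s, 1e-6)     # speaking rate (chars / second)
    return cer_hypo <= cer_thresh and rate_bounds[0] <= rate <= rate_bounds[1]
```

Relaxing the filter condition across NST iterations, as described above, corresponds to passing a progressively larger `cer_thresh` to this function.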
Datasets and domain description
We evaluate our proposed data selection strategy on the following three datasets: AISHELL-1, AISHELL-2, and WenetSpeech. AISHELL-1 is a 178-hour open-sourced Mandarin speech corpus with strictly annotated and inspected transcriptions, mainly covering five topics: finance, technology, sports, entertainment, and news. AISHELL-2 consists of 1k hours of Mandarin speech with the same device and recording-environment settings as AISHELL-1; the major topics of the two datasets are similar, but the transcripts and audio of the test sets differ. WenetSpeech has 10k hours of speech whose transcripts are generated by OCR on video data from Youtube and Podcast, which lack inspection and accuracy. Its domains are diverse and mostly consist of drama, variety shows, and audiobooks.
Experiment settings
First, we use AISHELL-1 as the supervised dataset and treat AISHELL-2 and WenetSpeech as unsupervised data. Initially, 1k hours of WenetSpeech data are randomly selected to match the size of AISHELL-2; the size of the WenetSpeech data is then increased up to 4k hours to test the saturation point for unsupervised data. Finally, we switch the supervised dataset to AISHELL-2 to evaluate the performance of our data selection strategy with an industrial-scale supervised dataset. The upper bound of the supervised-to-unsupervised data ratio is set to 1:9.
The neural architectures of the teacher and student models are the same: a 16-layer Conformer model [27]. Our language model is a 5-gram model whose corpus contains the training texts as well as extra wiki texts. All experiments are conducted with the WeNet toolkit [28] on NVIDIA A100 GPUs. We perform 7 iterations of NST with and without the data selection strategy on WenetSpeech and 5 iterations on AISHELL-2.
Baselines
The supervised baseline using only AISHELL-1 data, which serves as the initial teacher for the NST iterations, is shown in Table 1. Supervised training is then done on AISHELL-1 data mixed with supervised AISHELL-2 and WenetSpeech; these two results are considered ceilings for our model's performance. Standard NST experiments are then conducted without a data selection strategy, using AISHELL-1 as supervised data and AISHELL-2 as unsupervised data; the results are shown in Table 1. A 3.99% CER is achieved after the first NST iteration, because these two datasets have similar domains and recording settings: the closer the topics and configurations, the better the NST algorithm performs. Under an ideal data distribution, a filtering approach is not required; however, for the majority of recognition tasks this condition does not hold. After the first NST iteration with WenetSpeech pseudo-labels, the CER increases to 5.52%, which is even higher than the supervised baseline using only AISHELL-1 data. To reduce the CER of the pseudo-labels and make training easier in the early stages, an appropriate filter is required.
Data selection strategy performances
The performance of the LM Filter with supervised AISHELL-1 data and unsupervised WenetSpeech data is shown in Table 2. The best CER of 4.31% is achieved on the test set after 7 iterations, a relative 11.13% CER reduction compared with the supervised baseline. In addition to the test-set CER, the following three metrics are used to assess the quality of the pseudo-labels: Pseudo CER, the CER of the pseudo-labels; and Filtered CER and Filtered Hours, the CER and the duration of the filtered unsupervised data. Results in the initial iteration indicate that the LM Filter significantly decreases the Pseudo CER from 47.1% to 25.18%, which makes the pseudo-labels satisfactory for further training. The Pseudo CER and Filtered CER decrease as the number of iterations rises, and the LM Filter permits more filtered data to be fed into the model. This suggests that the LM Filter gradually admits noisier information and that our student model can make even greater use of non-target domain data.
Discussions
Multiple NST iterations: Multiple NST iterations are conducted to show that our proposed data selection strategy achieves better performance and converges faster. In contrast, tags for drama and variety shows, which are not common in AISHELL, yielded inferior pseudo-labels. The quality of the pseudo-labels further affects the filtered data size and the convergence speed of the NST iterations. Among all the topics, we also find that sources like audiobooks and Podcast are most likely to provide pseudo-labels of higher quality. Effectiveness on a large dataset: To further demonstrate the LM Filter's effectiveness on a large supervised dataset, we conduct experiments using AISHELL-2 as supervised data. The CER results are shown in Table 3. On the AISHELL-2 test set, a 13.6% relative improvement is achieved, which further demonstrates the LM Filter's scalability to larger, industrial-scale supervised data. Detailed performance is shown in plot (c) of Fig. 3, which illustrates trends similar to AISHELL-1.
CONCLUSIONS
In this paper, a novel data selection strategy named LM Filter is proposed to improve the performance of NST on non-target domain data; it utilizes the model differences between decoding strategies. Results reveal significant improvements of 10.4% compared with baselines without data filtering. We obtain 3.31% CER on the AISHELL-1 test set, which is, to our knowledge, the best result achieved without any further supervised data. In addition, we perform evaluations on the 1k-hour AISHELL-2 dataset and achieve 4.73% CER on its test set, which further demonstrates the robustness of the LM Filter with larger supervised data. | 2022-11-10T06:42:39.545Z | 2022-11-09T00:00:00.000 | {
"year": 2022,
"sha1": "4aef4472a1609f0fd911ef6379839eae1991b0e7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4aef4472a1609f0fd911ef6379839eae1991b0e7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
232243859 | pes2o/s2orc | v3-fos-license | Characteristics Sought When Hiring Faculty in Family Medicine Residency Programs
Department chairs and residency program directors in family medicine have noted that faculty shortages are a significant concern for medical education. 1 Similar concerns have been described by the Association of Academic Health Centers, the Association of American Medical Colleges, and the Society of Teachers of Family Medicine. [1][2][3] A 2019 study of family medicine residency program directors and department chairs revealed that family medicine departments had an average of 3.9 vacancies in the preceding year. 5 Further, departments were able to fill only 66% of vacancies, with at least one position remaining open for over 1 year. 4 Moreover, Corrice et al reported that 30% of family medicine faculty left positions over a 3-year period. 5 Current vacancies, high turnover, and the ongoing growth of family medicine residency programs suggest that many programs are or will be recruiting faculty over the next few years.
The factors that lead an individual to apply for an academic position are not well understood. Although values, mentorship, and debt may be part of a complex set of factors that influence physicians, the specific factors that lead to the selection of an academic career are still largely undefined. [6][7] Furthermore, once an individual has applied for a faculty position, little is known about what characteristics interviewers may prize in a faculty candidate. We found no studies that examined how family medicine faculty and resident physicians rate the importance of various skills and characteristics when considering hiring a new faculty member. [6][7] Multiple studies suggest that gender bias exists in the hiring process across disciplines in science, some suggesting that a male candidate may be preferred even if his qualifications are inferior to a female candidate's. [8][9][10][11][12][13][14] Also, an analysis of 7,326 teaching evaluations revealed that evaluations of males tended to include words such as "big picture," "run rounds," "master," "art," and "master clinician," whereas evaluations of females tended to include words such as "empathetic," "delight," "warm," and "attention to detail." 15

From the University of Kansas School of Medicine - Wichita - Department of Family and Community Medicine.
Characteristics Sought When Hiring Faculty in Family Medicine Residency Programs
Gretchen Irwin, MD, MBA; Kari Nilsen, PhD; Raghuveer Vedala, MD; Rick Kellerman, MD

BACKGROUND AND OBJECTIVES: Faculty shortages are a significant concern in family medicine education. Many family medicine residency programs need to recruit faculty in the coming years. As a result, family medicine faculty and resident physicians will be interviewing candidates to fill these vacancies. Little is known about the characteristics valued in a family medicine residency faculty candidate.
METHODS: Using a cross-sectional survey of family medicine faculty and resident physicians in family medicine residency programs in Kansas, we attempted to define which characteristics current faculty members and resident physicians value most during the faculty hiring process.
RESULTS:
Of 187 invited respondents, 93 completed the survey (49.7% response rate). Twenty-five characteristics, grouped into five domains of relationship building, clinical, teaching, research and administrative skills, were rated as either not important, important, or very important. Building and maintaining healthy relationships was the most important characteristic for faculty, residents, males, and females. Administrative characteristics were the lowest ranked domain in our survey.
DISCUSSION:
These results provide an important snapshot of the characteristics valued in faculty candidates for family medicine residency programs. Understanding the paradigm used by existing faculty and resident physicians in family medicine residency programs when considering new faculty hires has an important impact on faculty recruitment and faculty development programs.

While differences exist in how males and females are perceived by learners or during the hiring process, a question remains whether male and female interviewers seek out different characteristics in applicants for family medicine residency faculty positions.
This study seeks to describe characteristics that are deemed important in a faculty applicant for a family medicine residency faculty position and to further describe the impact of faculty or resident interviewer status as well as male or female gender on what characteristics are valued. We hypothesized that faculty and residents may differ in the characteristics valued in a faculty member. Similarly, we hypothesized that males and females may value different characteristics. Such differences in preferences could result in hiring decisions that are influenced by the demographic makeup of the interview committee. Understanding what is valued in faculty applicants may aid residency programs in designing an optimal process and metrics for recruitment of faculty.
Methods
This cross-sectional survey of all family medicine faculty and resident physicians in the four family medicine residency programs in Kansas (three community based and university affiliated, one university based) was conducted from February 2019 through April 2019. The University of Kansas School of Medicine Institutional Review Board approved the study. All family medicine faculty and resident physicians associated with family medicine residency programs in the state of Kansas were invited to participate in an online survey. All potential responders were given a link to the online survey within an email invitation created using SurveyMonkey. Participants were informed that completing the survey would act as consent to participate. All respondents were assured that no data were collected that would allow individuals or specific residency programs to be identified. Nonresponders were emailed up to three times over a 6-week period before the survey data collector was closed.
Using the promotion and tenure guidelines available to faculty at the University of Kansas School of Medicine, we developed 25 characteristics of faculty candidates representing persons skilled in one of five domains: administrative, clinical, relationship building, research, and teaching. The survey asked respondents to use a Q-sort methodology to rank the importance of each of the 25 characteristics listed in Tables 2 and 3. 16 The specific instructions on the survey were: "You have a vacancy on your current residency faculty and have been asked to recommend which candidate should be hired for the vacancy. Assume the candidates otherwise have the same […]" We also collected demographic data about the respondents.
We conducted data analysis using SPSS Statistics for Windows, Version 25.0 (Armonk, NY: IBM Corp). We analyzed the demographic characteristics of survey respondents using descriptive statistics. We evaluated responses by role in the program, comparing faculty and resident responses, as well as by gender, comparing male and female responses. Differences in the proportion of each subgroup that indicated a characteristic was very important were analyzed using t tests.
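As a sketch of this comparison, each respondent can be coded with a 0/1 indicator for rating a given characteristic "very important," and the faculty vs. resident proportions compared with an independent-samples t test. The indicator vectors below are illustrative, not the study's data.

```python
# Sketch: t test on the proportion of respondents rating a characteristic
# "very important" (1 = very important, 0 = otherwise), by subgroup.
import numpy as np
from scipy import stats

faculty  = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])
resident = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])

t, p = stats.ttest_ind(faculty, resident)
print(f"faculty proportion = {faculty.mean():.2f}, "
      f"resident proportion = {resident.mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```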
Results
A total of 187 individuals were invited to participate in the online survey. Of these, one person opted out and 101 individuals opened the email. Of the individuals who opened the email, 93 completed the survey, for a participation rate of 92.1% and a response rate of 49.7% of all invited. Similarly, administrative characteristics had the lowest means for faculty and residents with no administrative characteristic having a mean of greater than 1.60 for faculty and 1.51 for residents. Table 2 describes each characteristic as well as the mean and standard deviation for faculty and resident groups of respondents who classified characteristics as not important (1), important (2), or very important (3). Residents rated "proficient with many ambulatory and inpatient procedural skills," "proficient with procedural skills that no one else on the faculty possesses," and "expert on bedside clinical teaching" as statistically significantly more important that did faculty. Faculty rated "recipient of awards for providing high quality," "patient-centered care" and "active member of hospital committees" as statistically significantly more important than did residents. Table 3 describes each characteristic as well as the mean and standard deviation for male and female groups of respondents who classified characteristics as not important (1), important (2), or very important (3). There were no statistically significant differences in the importance rating of each characteristic when comparing male and female respondents.
Discussion
Family physicians prize relationships with patients and colleagues. Perhaps not surprisingly, strong relationship-building characteristics are among the highest valued traits in a candidate for a faculty position. When all 25 characteristics were ranked by mean score, all five relationship characteristics were within the top eight items on the list. Conversely, all five administrative characteristics were among the lowest eight items on the list.
Although this study provides information about which characteristics are valued by faculty and resident physicians, it does not provide information about why the respondents selected the characteristics that they did. From our experience, we suspect that many factors may contribute to our findings. Perhaps administrative characteristics are perceived as easier to teach through targeted faculty development, making the need to find a candidate who already possesses such skills less important. Faculty and residents may believe that administrative skills can be taught, whereas relationship-building skills may be more difficult to develop. Similarly, faculty and residents may consider administrative tasks to be less critical for the average faculty member. Also, the number of administrative tasks required of a successful faculty member may be underrecognized and thus not sought after in a candidate. It should be noted that our survey asked about characteristics valued in a faculty candidate. Responses might change, and administrative prowess may be more highly valued, in an applicant for an associate program director or medical director role.
Faculty and resident respondents placed statistically significantly different importance on five of the 25 characteristics. Residents placed more importance on faculty with broad and unique procedural skills as well as expertise in bedside clinical teaching. Faculty, meanwhile, placed more importance on a faculty candidate having been recognized with awards for high-quality care and having served on hospital committees.
In future studies we hope to examine why a difference between faculty and residents exists as it relates to the importance of various characteristics. We suspect the increased importance of procedural skills and clinical teaching may suggest that residents value proficiency in the clinical arena as a marker of a successful faculty member as opposed to skills in didactic teaching, administrative or research work. Studies of residents considering academic careers have suggested that residents may avoid academic careers because of lack of clinical readiness or ability to be successful in multiple domains. 17 A lack of a unique or broad procedural skill set or lack of confidence in bedside clinical skills may be seen as lack of readiness by residents. Residents may believe that without first being a strong clinician, a faculty member will not be successful. Faculty, meanwhile, may place less importance on these skills, recognizing that while clinical skills are important, bedside clinical proficiency and broad procedural skills alone will not translate into a successful faculty member. Furthermore, faculty may place less emphasis on unique procedural skills as practical concerns such as credentialing, crosscoverage issues, or obtaining needed equipment and supplies may overshadow the value of having a single faculty member who performs a given procedure.
Faculty placed more value on active participation on hospital committees than did residents. While all of the administrative characteristics were not highly valued by either faculty or residents, participation on hospital committees can be important for the ongoing success of a family medicine residency program. Faculty may be more aware of the political relationships that are important for the health and success of a family medicine residency program and thus may value input and participation on hospital committees more highly than the residents do.
Males and Females
Interestingly, we found no statistically significant differences in the importance ratings of male and female respondents for any of the 25 characteristics of a faculty candidate. Other studies have demonstrated that while female candidates may be held to strict criteria during the hiring process, male candidates may be assumed to be able to acquire required skills that they do not yet possess. 18 While our study reveals that both males and females highly value a candidate's ability to build and maintain healthy professional relationships, we did not assess what criteria would be used to determine that a candidate possessed that skill, or whether male and female candidates would be evaluated differently.
Limitations
Our study has several limitations. The survey was created using domains and characteristics typically evaluated during a promotion and tenure process. While this provided a useful framework, there may be characteristics critically important in faculty candidates that were not evaluated because they did not fit into this framework. We invited only family medicine residents and faculty within the state of Kansas to participate, and the faculty characteristics valued by these individuals may not be the same as those valued at other institutions or in other geographic locations. We collected minimal demographic data, and other attributes of respondents may have influenced our results. Further, the current resident and faculty makeup of a program may influence the responses that a faculty member or resident provided on the survey; for example, if a faculty member with a specific set of clinical characteristics was needed in the program at the time of the survey, the respondent may have ranked clinical characteristics higher than he or she would have otherwise done. Although we asked respondents to rank characteristics that would be important in a faculty candidate, we did not define a hypothetical role that the faculty member would fill; responses may have varied if a job description had been provided or the roles the hypothetical candidate would need to fill had been defined. Nonetheless, these data provide an important snapshot of the characteristics valued in faculty candidates for family medicine residency programs.
Conclusions
Understanding the paradigm used by existing faculty and resident physicians in family medicine residency programs when considering new faculty hires has important implications for faculty recruitment and faculty development programs. Our study suggests that faculty, residents, males, and females place emphasis on relationship-building, clinical, and teaching skills over research or administrative prowess. Also, residents place more importance than faculty on procedural and bedside teaching skills when evaluating a faculty candidate. Faculty development and resident academic career preparation programs may need to focus on building research and administrative skills if the hiring process tends to select candidates with alternative strengths.
Moreover, knowing that importance is placed on specific skills may help programs design an interview and hiring process that clarifies those skills in each applicant. If a faculty member is required to have skills in areas that are not typically rated as important, such as expertise in finance and budgeting, a dedicated investigation of the applicant's abilities in this area would be important, as interviewer questions may otherwise not focus on this skill.
As more and more family medicine residency programs face the need to recruit and hire faculty, an enhanced understanding of the values of current faculty and resident physicians may provide important information to aid the process. Future studies will aim to survey more residency programs to determine if our data is limited to Kansas or is more generalizable. Undertaking studies designed to assess how interviewers evaluate potential candidates for important characteristics will be important to fully develop hiring interventions. | 2021-03-17T06:17:27.922Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "4714880e6fb6e41e85369c8ae0864959f08513b3",
"oa_license": null,
"oa_url": "https://journals.stfm.org/media/3734/irwin-2020-0059.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c80991727ad2ea1c43819ffef95e07e29e482e69",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243798227 | pes2o/s2orc | v3-fos-license | Optimization methods for correcting the duration of the anovulatory period of cattle
Post-calving anestrus leads to serious economic losses and is a reason for animal culling. Estrogen deficiency and a hypofunctional state of the ovaries due to negative energy balance, as well as disruptions in the timing of postpartum uterine involution, are considered the leading causes of an extended anovulatory period in fresh cows. We compared the therapeutic efficacy of the Ovsynch scheme and the Ovariovit preparation when used in first-calf heifers with post-calving anestrus against a background of ovarian hypofunction. With the Ovsynch scheme, first-insemination fertility was 50% and estrous cyclicity recovered in 80% of first-calf heifers. With Ovariovit, first-insemination fertility was 60% and the estrous cycle was restored in 100% of first-calf heifers.
Introduction
Post-calving anestrus is a common occurrence on dairy farms. Even in the absence of postpartum complications, lactational dominance significantly suppresses the reproductive function of a highly productive cow in the first months after calving. A long service period leads to serious economic losses, is a reason for animal culling, and shortens the period of the animals' economic use.
Normally, the first wave of follicular growth after calving appears on days 5-7, ovulation of the first dominant follicle should occur by days 18-20, and, accordingly, signs of estrus should appear (1). When a cow does not come into estrus for a long time, this signals post-calving anestrus.
On livestock farms, up to 38% of cows have an extended anovulatory period, including as a result of negative energy balance [2]. With a lack of energy in the body, the secretion of gonadotropins decreases, which in turn impairs the ability of the follicles to produce the estradiol necessary for their maturation and ovulation; the time to first ovulation, and hence to estrus, is significantly lengthened [3,4].
Since ovulation of the dominant follicle is possible only when the estradiol level is sufficient to stimulate the preovulatory release of LH and FSH, a decrease in the basal level of estradiol is considered one of the leading causes of postpartum anestrus [2,5,6,7].
In addition to energy deficit, heat stress, changes in the functions of individual organs (liver, adrenal cortex), low levels of precursors (acetate, cholesterol), as well as steroid substances (phytoestrogens, phytoecdysteroids) and fats supplied with feed may lead to lower estradiol levels [8,9,10]. Significant estrogen losses during birth can also contribute to a decrease in their level in cows' blood in the postpartum period [11].
Postpartum diseases associated with disrupted timing of uterine involution are also among the causes of postpartum anestrus. The incidence of postpartum endometritis is positively correlated with cases of uterine subinvolution and ovarian dysfunction. For example, of 222 fresh Black Pied Holstein cows with a productivity of 7,000 kg to 11,000 kg, 46% showed signs of endometritis, 39% had ovarian hypofunction, and 13% had cysts; complete involution was observed in only 12% of cows [2].
A relationship has been established between the course of postpartum involutional processes and the hormone-synthesizing function of the ovaries: whereas rhythmic ovarian activity is noted during a normal postpartum period, a pathological course shows a one-week delay of the third wave of follicular growth (at days 25-30), resulting in cystic atresia of the follicles [12]. A lack of the prostaglandin PGF2α synthesized by a healthy uterine mucosa lowers the contractility of the myometrium and simultaneously slows luteolysis, disrupting ovarian function.
To make planning of the production cycle possible at livestock enterprises, hormonal synchronization schemes (Presynch, Ovsynch, Double Ovsynch) are widely used; they include injections of PGF2α and gonadoliberins (GnRH) and allow estrus to be scheduled in most animals at a set time.
Frequent use of GnRH can lead to depletion of the receptor apparatus and loss of sensitivity of the pituitary gonadotropic cells secreting FSH and LH. In such cases, stimulatory therapy does not produce the desired result.
To resume cyclicity, it is extremely important to ensure adequate interaction of the regulatory hormones in the hypothalamus-pituitary-ovary system against the background of correcting the metabolic imbalance. The combination of the Ovariovit and Liarsin preparations has proven itself well in the hypofunctional state of the ovaries. Liarsin is a metabolic agent: it optimizes energy metabolism and prevents acidosis and ketosis. Ovariovit, containing phytoestrogens and inositol, activates ovarian function and the production of gonadotropic hormones, promotes the formation of a dominant follicle, and normalizes the estrous cycle.
The aim of this study was to compare the therapeutic efficacy of the Ovsynch scheme and the Ovariovit preparation when used in first-calf heifers with post-calving anestrus. The objectives of the study were to assess the effect of the Ovsynch scheme and the Ovariovit preparation on follicular growth, the manifestation of estrus, the restoration of the reproductive cycle, and the fertility of the first insemination of first-calf heifers.
Materials and methods
The study was carried out at the APC "Druzhba" (Smolensk region, Pochinkovsky district) according to the approved plan for clinical study PKI OV-R-01/2020. The study included 20 first-calf Brown Swiss heifers of average body condition with signs of post-calving anestrus against a background of ovarian hypofunction and without symptoms of clinical or latent endometritis or hyperthermia. Emaciated animals, animals above average body condition, heifers with luteal or follicular cysts, and heifers treated with hormonal drugs within 40 days before group formation were excluded from the study. According to the farm protocol, all heifers received only Magestrofan (2.0 ml, i.m., once) within 12 hours after calving.
At the time of group formation, the animals had shown no signs of estrus for 41-79 days after calving.
Ovarian hypofunction was confirmed by rectal and ultrasound studies. Ultrasound examinations were performed using a PS-301V veterinary ultrasound scanner (Partner-Vet LLC, RF).
Feeding and housing conditions (tie-stall with daily walking in a dedicated yard) were identical in the experimental and control groups. Feeding consisted of a total mixed ration: roughage (silage, haylage, hay) 55-60%; concentrates (flattened grain, rape and soybean cake, compound feed with a protein content of at least 19%) 40%; vitamin-mineral premix 1-1.5%; salt, chalk, and soda 1% each. Water was freely available.
The effectiveness of ovarian function stimulation was assessed by the manifestation of estrus and the growth of the dominant follicle (by ultrasound and rectal examination). The timing and fertility of the first insemination and the effectiveness of estrous cyclicity restoration were also assessed. Additionally, adverse events and complications were recorded.
Results and discussion
At the time of inclusion in the clinical study, all heifers were diagnosed with ovarian hypofunction: on rectal examination, the ovaries were small (0.5-1.0 cm), smooth, and of dense consistency. Ultrasound examination revealed no cysts; only small primordial follicles were visible, which did not protrude above the ovarian surface and were not detectable on rectal examination.
In the control group, all animals showed estrus 10 days after treatment according to the Ovsynch scheme; rectal and ultrasound examination revealed a growing dominant follicle in one of the ovaries. All animals were inseminated according to the Ovsynch schedule. Pregnancy was confirmed in 5 heifers (50%). Three of the remaining five non-pregnant cows returned to estrus within a month; in two (20%), estrous cyclicity did not recover, and no growing follicles were found in these animals on rectal and ultrasound examination (Table 1).
In the experimental group, all animals also reached estrus. One heifer showed signs of estrus on day 4 from the start of treatment, and two on days 10-11 (after the first and second Ovariovit injection, respectively). Thus, 7 first-calf heifers (70%) required the full course of treatment (three injections of Ovariovit), while in 30% ovarian function recovered faster. Estrus in the experimental group was observed on average by day 15.2 ± 2.0 (range 4-27 days), with growing dominant follicles visualized on rectal and ultrasound examination (Table 1).
Insemination of first-calf heifers in the experimental group was carried out upon visual detection of estrus and standing heat, as well as in the presence of dominant follicles on ultrasound. After the first insemination (on the day of estrus), pregnancy was confirmed in 60% of the animals (Table 1). The non-pregnant cows (40%) again showed signs of estrus in the following month. Thus, estrous cyclicity was restored in 100% of first-calf heifers in this group (Table 1, Figure 1).

Table 1 (fragment): fertility of the first insemination, 50% (control) vs. 60% (experimental); proportion of animals with a restored estrous cycle, 80% (control) vs. 100% (experimental).

Thus, the use of both the Ovsynch scheme and the Ovariovit preparation ensured activation of ovarian function and growth of the dominant follicle, followed by ovulation and the manifestation of estrus signs in all animals within a month. The fertility rate of the heifers' first insemination did not differ significantly between the groups, but in the experimental group (Ovariovit) it was 10% higher. It should be noted that, despite the initially pronounced follicular growth in all first-calf heifers under hormonal stimulation, in the following month some non-inseminated animals in the control group again showed signs of ovarian hypofunction, i.e., estrous cyclicity was not restored in 20%. After the use of Ovariovit in the experimental group, follicular growth and signs of estrus were observed in all non-inseminated first-calf heifers; restoration of the estrous cycle occurred in 100% of animals.
Adverse reactions and side effects: during the study period, no adverse reactions and/or adverse events were observed in either the experimental or the control group. | 2021-10-18T17:31:20.284Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "30a6e076445168a239f53b0e143268b8c305013e",
"oa_license": "CCBY",
"oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2021/08/bioconf_fsraaba2021_06020.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4862914597d2d139c025e7db5f7b1e875308e1f8",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
233225393 | pes2o/s2orc | v3-fos-license | A Case Report of Reconstruction of the Great Toe with a Homodigital Island Flap
Summary: Homodigital flaps are frequently used in the reconstruction of defects in the fingers. Their use in the coverage of defects of the toes is not commonly reported because such defects are usually treated with skin grafts, proximally based local flaps, or even amputation and shortening of the toe. We describe the implementation of a homodigital island flap of the great toe for reconstruction of a traumatic defect of the dorsal aspect of its distal phalanx.
Soft tissue defects of the distal phalanx of the great toe are both uncommon and difficult to treat. When it comes to isolated defects of this area, the usual course of action is simply shortening or amputating the toe, leaving it permanently disfigured.
When the decision is made to cover the defect, the use of skin grafts or local flaps gives variable aesthetic and functional results. Grafts require a superficial defect, without exposure of bone or tendon, which is usually not the case in toes, either after trauma or after excision of lesions. Reconstruction with local flaps usually requires employing tissue from proximal areas of the foot, such as the dorsum, thus compromising a large donor area relative to the size of the defect. 1,2 The robust arterial network of the great toe allows a fairly large amount of skin from its dorsal aspect to be raised and used to cover defects of its distal end as a homodigital flap. 3 It is supplied by both the first dorsal metatarsal and the first plantar metatarsal arteries, which communicate via numerous branches. Adequate dorsal flow is usually present when the Doppler signal from the dorsalis pedis to the first webspace is strong.
CASE PRESENTATION
A 22-year-old man was referred to us by the orthopedics department for a traumatic defect of the dorsal aspect of the distal phalanx of his right great toe. The bone of the distal phalanx was fractured and the nail bed had been avulsed. He had also sustained a traumatic amputation of the distal phalanx of the second toe. The wound on the first toe had been debrided and the nail bed completely removed, leaving the distal phalanx exposed dorsally. A K-wire was placed to stabilize the fractured bone (Figs. 1, 2). In addition, revision amputation of the distal phalanx of the second toe had been performed, without a remaining skin deficit. After completion of the above treatment by the orthopedics department and the referral to us, our team devised a treatment plan for the defect of the first toe. After clinical and laboratory examination, we decided to employ a local homodigital flap for the coverage of this defect.
Using Doppler ultrasound to detect the dorsal distal arterial arc, a pedicled homodigital island flap from the proximal phalanx of the great toe was raised, approximately the size of the defect, perfused by reverse flow through the dorsal distal arcade, without the need to sacrifice the plantar digital arteries. The flap was transposed to the distal dorsal aspect of the distal phalanx, covering the exposed bone and inset in the anatomical area of the nailbed. The edges of the flap were sutured only subcutaneously, thus leaving them inverted, resembling the anatomy of the nail body.
The patient had expressed concerns over the aesthetic appearance of the donor site, as he desired the best possible outcome in terms of depth and color match. Also, because the dissection of the flap was done with an effort to include as much bulk as possible, the resulting defect had only a very thin layer of intact paratenon. After his informed consent, we decided to employ a 2-staged operation, using artificial dermis (Nevelia bi-layer matrix), to both maximize the cosmetic result and avoid graft failure. The matrix was replaced with a split-thickness skin graft 3 weeks after the above operation (Figs. 3, 4).
The patient had an uncomplicated recovery, and the flap showed no signs of local stress or infection. The K-wire was removed by the orthopedics department at a later stage. At 6 months, the patient had no significant problems with ambulation and expressed his satisfaction with the appearance of the reconstruction, since the manner of inset described above made the flap resemble a new nail, especially when the patient was standing, thus allowing him to wear sandals without concerns (Fig. 5).
DISCUSSION
Soft tissue loss involving the great toe is usually treated conservatively, with simple methods of skin grafting or partial amputation. In most cases, the functionality of the foot is maintained, but the patient is left with a disfigured great toe, which is obviously shortened. When the defect also involves the nail, the deformity is even more noticeable, even from a distance.
Homodigital reverse island flaps have been used extensively for the reconstruction of the fingers but are rarely reported in toe reconstruction. This method was first reported by Niranjan et al 3 for great toe defects and has also been used for reconstruction of neuropathic great toe ulcers in diabetics. 4 A variation as a lateral toe pulp flap of the great toe has also been reported by Cheng et al. 5 The above case highlights the potential of this flap as a "substitute" for the great toe's nail, since the aesthetic result is significantly better than that of the other, simpler methods of closure, without compromising functionality or a large donor surface. It is also a reminder that local flaps are very useful in the plastic surgeon's armamentarium and should never be forgotten or underestimated as simple and effective reconstructive tools.
CONCLUSIONS
Traumatic wounds of the great toe can be covered with local homodigital flaps after debridement, especially when their size allows for it. These flaps are rarely used in the reconstruction of the toes. We advocate their more frequent implementation when the appropriate indications are met, which will lead to simple, effective treatment and quicker recovery of patients.
Antonia Fotiou, MD Plastic Surgery Department and Burns Intensive Care Unit
Nicosia General Hospital 4 Antigonou Larnaca 6036 Cyprus E-mail: antonia-fotiou@hotmail.com | 2021-04-14T13:59:38.786Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "6c32f4f42ecad3cb3e3559e613050dc309b876c3",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/gox.0000000000003503",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c32f4f42ecad3cb3e3559e613050dc309b876c3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255072846 | pes2o/s2orc | v3-fos-license | Interpolation in 16-Valued Trilattice Logics
In a recent paper we have defined an analytic tableau calculus PL 16 for a functionally complete extension of Shramko and Wansing's logic based on the trilattice SIXTEEN 3 . This calculus makes it possible to define syntactic entailment relations that capture central semantic relations of the logic, such as the relations |= t , |= f , and |= i that each correspond to a lattice order in SIXTEEN 3 , and |=, the intersection of |= t and |= f . It turns out that our method of characterising these semantic relations, as intersections of auxiliary relations that can be captured with the help of a single calculus, lends itself well to proving interpolation. All entailment relations just mentioned have the interpolation property, not only when they are defined with respect to a functionally complete language, but also in a range of cases where less expressive languages are considered. For example, we will show that |=, when restricted to L tf , the language originally considered by Shramko and Wansing, enjoys interpolation. This answers a question that was recently posed by M. Takano.
Introduction
In Muskens and Wintein [4] we have presented an analytic tableau calculus PL 16 for a functionally complete extension of the logic considered in Shramko and Wansing [8]. Both Shramko and Wansing's original logic and our extension are based on the trilattice SIXTEEN 3 , and PL 16 can capture three semantic entailment relations, |= t , |= f , and |= i , that each correspond to one of SIXTEEN 3 's three lattice orderings. 1 The calculus has a relatively simple formulation: only one rule scheme is needed for each of the three negations present in the logic, while each of the three conjunctions and each of the three disjunctions comes with two rule schemes.
In this paper we build upon [4] and study interpolation in Shramko and Wansing's trilattice logics. Using what is essentially Maehara's method we will prove a variant of his lemma for PL 16 . Interpolation theorems for |= t , |= f , |= i , and the intersection |= of |= t and |= f readily follow if these notions are interpreted as relations between sentences of the functionally complete language L tfi . We will also consider restrictions of these relations to those sublanguages of L tfi that have the property that if one of the conjunctions or disjunctions of the language is present then so is its dual. All these restrictions enjoy interpolation. In particular, |= is shown to have the (perfect) interpolation property on Shramko and Wansing's original language L tf , which answers a question by Takano [11].
The rest of the paper will be set up as follows. We will first give concise definitions of SIXTEEN 3 , of the functionally complete language L tfi and its semantics, and of the tableau system PL 16 . Once the stage is set in this way we will state and prove our interpolation results-first for logics based on L tfi and then for the restrictions. A short conclusion will end the paper.
The Trilattice SIXTEEN 3
The introduction of SIXTEEN 3 in Shramko and Wansing [8] was motivated by a wish to generalise the well-known four-valued Belnap-Dunn logic (Belnap [1,2], Dunn [3]). The latter is based on the values T = {1} (true and not false), F = {0} (false and not true), N = ∅ (neither true nor false), and B = {0, 1} (both true and false) and can be viewed as a generalisation of classical logic-a move from {0, 1} with its usual ordering to P({0, 1}) with two lattice orders. Shramko and Wansing in fact repeat this move, going from the set of truth-values P({0, 1}) = {T, F, N, B} to its power set P(P({0, 1})), now with three lattices. While the four-valued logic is meant to model the reasoning of a computer that is fed potentially incomplete or conflicting information, the 16-valued logic that results models networks of such computers (for more complete information, see the papers cited above, Wansing [12], or Shramko and Wansing [9], for example).
While the logic is thus based on P({T, F, N, B}) and can have a direct formulation on the basis of this set of truth-values, it is in fact slightly more convenient to follow Odintsov [5], who represents subsets of {T, F, N, B} with the help of matrices of the following form.
| n f |
| t b |
Here each element of the matrix is a 0 or a 1 and signals the presence or the absence of an element of {T, F, N, B}. Rivieccio [7] linearises this notation, obtaining the more manageable ⟨b, f, t, n⟩. We shall follow him in this and define 16 as {0, 1}^4. Any A ⊆ {T, F, N, B} will be represented by a quadruple ⟨S B , S F , S T , S N ⟩ ∈ 16 such that S X = 1 iff X ∈ A, for X ∈ {T, F, N, B}. With this representation in place, the three lattice orderings of the trilattice can be defined as follows (we let ≤ f be the inverse of the relation originally defined in [8], so that it becomes a nonfalsity ordering, not a falsity ordering, see also [5,6]). Figure 1 depicts the orderings ≤ t and ≤ f on 16, while Figure 2 shows ≤ i and the intersection ≤ t ∩ ≤ f . The node names employed in these pictures belong to the object language defined in Table 1 below (with tb denoting ⟨1, 0, 1, 0⟩, for example). While the definition above provides lattice orderings, the next definition gives the lattices via their meet and join operations. The official definition of SIXTEEN 3 is based upon these operations.
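The displayed definitions of the three orderings did not survive extraction. The sketch below reconstructs them component-wise from the quadruple representation just introduced; the polarity assigned to each component under ≤ t and ≤ f is our assumption (chosen so that tb = ⟨1, 0, 1, 0⟩ comes out as the ≤ t -maximum and so that the 'glue' formulas used later in the paper behave as described), not a quotation of the lost display. The meet and join operations anticipated in the next definition then come out as component-wise minima and maxima.

```python
from itertools import product

# The sixteen values as quadruples <S_B, S_F, S_T, S_N> in {0, 1}^4.
SIXTEEN = list(product((0, 1), repeat=4))

# Assumed polarity of the components (B, F, T, N) along the three orders:
# +1 means the component grows as one moves up the order, -1 that it shrinks.
# <=_f is read as a nonfalsity ordering, as the text stipulates.
POLARITY = {
    't': (+1, -1, +1, -1),   # truth order: B and T up, F and N down
    'f': (-1, -1, +1, +1),   # nonfalsity order: B and F down, T and N up
    'i': (+1, +1, +1, +1),   # information order: plain component-wise order
}

def leq(k, x, y):
    """x <=_k y, read component-wise with the assumed polarities."""
    return all(a <= b if s > 0 else a >= b
               for a, b, s in zip(x, y, POLARITY[k]))

def meet(k, x, y):
    """Greatest lower bound under <=_k."""
    return tuple(min(a, b) if s > 0 else max(a, b)
                 for a, b, s in zip(x, y, POLARITY[k]))

def join(k, x, y):
    """Least upper bound under <=_k."""
    return tuple(max(a, b) if s > 0 else min(a, b)
                 for a, b, s in zip(x, y, POLARITY[k]))

# Sanity check: under these assumptions tb = <1,0,1,0> is the <=_t-maximum
# and nf = <0,1,0,1> the <=_t-minimum.
assert all(leq('t', v, (1, 0, 1, 0)) and leq('t', (0, 1, 0, 1), v)
           for v in SIXTEEN)
```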
It is worthwhile to observe that, for each pairwise distinct x, y ∈ {t, f, i}, the following contraposition, monotonicity, and involution properties hold.
The Language L tfi and its Semantics
The language L tfi is defined by the following BNF form (where p comes from some countably infinite set of propositional constants):
ϕ ::= p | ∼ t ϕ | ∼ f ϕ | ∼ i ϕ | (ϕ ∧ t ϕ) | (ϕ ∨ t ϕ) | (ϕ ∧ f ϕ) | (ϕ ∨ f ϕ) | (ϕ ∧ i ϕ) | (ϕ ∨ i ϕ)
This language receives an interpretation as follows.
Definition 4. A valuation function is a function V from the sentences of L tfi to 16 that commutes with the connectives, i.e. it sends each ∧ k to the meet and each ∨ k to the join of the k-lattice, and each ∼ k to the corresponding negation operation on 16. L tfi sentences ϕ and ψ are logically equivalent if V (ϕ) = V (ψ), for all V .
Muskens and Wintein [4] show that L tfi is functionally complete. Indeed, it is possible to denote each of the elements of 16 with the help of an L tfi sentence, as in the following definition.
Definition 5. Let p 0 be some fixed propositional constant. The formulas in the first column of Table 1 will be defined by the corresponding entries in the second column. For any of these abbreviations ξ and any p, we will write ξ p for the result of replacing each p 0 in ξ by p.
It is not difficult to verify that, for any valuation V , any ξ in the first column of Table 1, and any p, V (ξ p ) equals the corresponding entry in the third column.
We now come to the definition of the semantic consequence relations. As was already announced in the introduction, the relations |= t , |= f , and |= i are directly based upon ≤ t , ≤ f , and ≤ i respectively, while |= is the intersection of |= t and |= f . Definition 6. For k ∈ {t, f, i}, let ϕ |= k ψ iff V (ϕ) ≤ k V (ψ) for all valuations V , and let |= be |= t ∩ |= f .
Further decomposition of these relations is in fact possible and useful. This decomposition will be in terms of the relations |= B , |= F , |= T , and |= N , defined below. We follow the convention that, for any V and ϕ, V B (ϕ) refers to the first element of V (ϕ), V F (ϕ) to its second element, V T (ϕ) to its third, and V N (ϕ) to its fourth (so that V (ϕ) = V B (ϕ), V F (ϕ), V T (ϕ), V N (ϕ) ).
Definition 7. For each x ∈ {T, B, F, N}, define the auxiliary entailment relation |= x by letting, for each two L tfi sentences ϕ and ψ, ϕ |= x ψ iff V x (ϕ) ≤ V x (ψ) for all valuations V . It is not difficult to see, on the basis of these definitions and the ones in Definition 1, that the equivalences in the following proposition hold.
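Since a sentence's value depends only on the finitely many letters it contains, each of the relations just defined can be checked by brute force over valuations. The following sketch does this for the auxiliary relations and for |= k ; it reuses SIXTEEN, leq, meet, and join (and the import of product) from the previous sketch, and, because we have not reconstructed the truth functions of the three negations, it covers only the negation-free fragment.

```python
# Formulas: ('p', name) | ('and', k, left, right) | ('or', k, left, right),
# with k in {'t', 'f', 'i'}; the three negations are omitted here.
def vocab(phi):
    return {phi[1]} if phi[0] == 'p' else vocab(phi[2]) | vocab(phi[3])

def value(V, phi):
    """Extend a valuation V (a dict from letters to quadruples) to formulas."""
    if phi[0] == 'p':
        return V[phi[1]]
    op = meet if phi[0] == 'and' else join
    return op(phi[1], value(V, phi[2]), value(V, phi[3]))

def valuations(letters):
    for vs in product(SIXTEEN, repeat=len(letters)):
        yield dict(zip(letters, vs))

COMPONENT = {'B': 0, 'F': 1, 'T': 2, 'N': 3}

def entails_aux(x, phi, psi):
    """phi |=_x psi: V_x(phi) <= V_x(psi) for every valuation V."""
    c = COMPONENT[x]
    return all(value(V, phi)[c] <= value(V, psi)[c]
               for V in valuations(sorted(vocab(phi) | vocab(psi))))

def entails(k, phi, psi):
    """phi |=_k psi: V(phi) <=_k V(psi) for every valuation V."""
    return all(leq(k, value(V, phi), value(V, psi))
               for V in valuations(sorted(vocab(phi) | vocab(psi))))
```

With two letters this enumerates 16^2 = 256 valuations, so the check is instant; phi |= psi can then be tested as the conjunction of entails('t', phi, psi) and entails('f', phi, psi).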
The Calculus PL 16 and Satisfiability
In order to capture these semantic entailment relations, Muskens and Wintein [4] define the calculus PL 16 . Entries in this calculus are signed formulas x : ϕ, where ϕ is an L tfi formula and x is one of the signs b, f, t, n, b̄, f̄, t̄, and n̄. While the role of these signed sentences in the calculus is a purely formal one, they also have an intuitive meaning. b : ϕ, for example, can be read as saying that the first (i.e. B) component of the value of ϕ is 1; b̄ : ϕ says that it is 0. The other signs can be interpreted similarly.
Definition 8. The following are expansion rules of the calculus PL 16 . The general form of these rules is ϑ/B 1 , . . . , B n , where ϑ is a signed sentence, called the top formula of the rule, and each B i is a set of signed sentences, called a set of bottom formulas of the rule; each instantiation of a rule such as (∧ 1 i ) can be expressed in this general form. On the basis of these rules tableaux can be obtained in the usual way (see [4] for a precise definition). A tableau branch will be closed if it contains signed sentences x : ϕ and x̄ : ϕ for x ∈ {n, f, t, b}, while a tableau is closed if all its branches are closed.
As we shall see shortly there is an intimate connection between the PL 16 rules just given and the following notion of satisfiability.
Definition 9. Let Θ be a set of signed L tfi sentences and let V be an L tfi valuation. V satisfies Θ iff the following statements hold: for each x ∈ {b, f, t, n} with corresponding component X ∈ {B, F, T, N}, if x : ϕ ∈ Θ then V X (ϕ) = 1, and if x̄ : ϕ ∈ Θ then V X (ϕ) = 0.
A set of signed sentences will be called satisfiable if some V satisfies it, unsatisfiable otherwise.
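The clauses of Definition 9 were partly lost in extraction; the sketch below encodes the reading suggested by the informal gloss of the signs given earlier (an unbarred sign asserts that the corresponding component of the value is 1, a barred sign that it is 0). It is an assumption about Definition 9, not a quotation of it.

```python
SIGN_COMPONENT = {'b': 0, 'f': 1, 't': 2, 'n': 3}

def satisfies(val, signed_sentences):
    """Assumed reading of Definition 9; a barred sign is encoded as 'x~'.

    `val` maps a formula to its quadruple value under some fixed valuation
    (e.g. lambda phi: value(V, phi), with `value` from the previous sketch).
    """
    for sign, phi in signed_sentences:
        barred = sign.endswith('~')
        comp = SIGN_COMPONENT[sign.rstrip('~')]
        if val(phi)[comp] != (0 if barred else 1):
            return False
    return True

def branch_closed(signed_sentences):
    """Closure condition of the text: x : phi and its bar on one branch."""
    s = set(signed_sentences)
    return any((x + '~', phi) in s for (x, phi) in s if not x.endswith('~'))
```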
It is shown in [4] that a finite set of sentences is unsatisfiable if and only if it has a closed tableau. In this paper we will stay entirely on the semantic side of this equation, but will make use of the following relation between the PL 16 rules and satisfiability, which follows from an easy inspection of the relevant definitions.
Hence if Θ is a set of signed L tfi sentences and ϑ/B 1 , . . . , B n is a PL 16 rule, then Θ ∪ {ϑ} is unsatisfiable if and only if each of Θ ∪ B 1 , . . . , Θ ∪ B n is unsatisfiable. We also note that the following connection between unsatisfiability and our auxiliary entailment relations obtains: for x ∈ {b, f, t, n} with corresponding X ∈ {B, F, T, N}, ϕ |= X ψ iff {x : ϕ, x̄ : ψ} is unsatisfiable.
A Maehara Style Theorem and Interpolation in L tfi
Interpolation theorems usually come in two flavours, depending on whether the logical language that was defined is capable of naming truth-values with the help of zero-place connectives or not. Classical propositional logic, for example, has the property that whenever ϕ |= 2 ψ (with |= 2 the classical entailment relation), there is an interpolant χ such that ϕ |= 2 χ, χ |= 2 ψ, and all propositional letters occurring in χ also occur in both ϕ and ψ. This is so if the language contains ⊥ or ⊤ as zero-place connectives; otherwise a condition is needed that excludes cases where ϕ and ψ have no propositional letters in common. The usual condition is that ϕ is not a contradiction and that ψ is not a tautology.
A similar condition will not always work here. Consider the relation |= t and let p and q be two (distinct) propositional letters. Then f p |= t ftb q clearly holds, f p is not a contradiction in any sense (f p ⊭ t nf, for example), and ftb q is not a tautology (tb ⊭ t ftb q ), but since there are no formulas that do not contain any propositional letters there cannot be an interpolant. One obvious way to get rid of this somewhat artificial conundrum would be to reintroduce, say, tb as a zero-place connective, but here we will stick to our earlier set-up of the language in [4] and will state conditions on interpolation where necessary. These conditions will be stated in terms of the existence of shared vocabulary.
We will prove a general Maehara-style theorem in this section, but will first prepare the ground by laying down some conventions with respect to signs.
Definition 10. If x ∈ {n, f, t, b} then x̄ is the opposite of x, and x is the opposite of x̄. The opposite of any sign x ∈ {n, f, t, b, n̄, f̄, t̄, b̄} will be denoted by x̄. If S is any set of signs, then {x̄ | x ∈ S} will be denoted as S̄ and will also be called the opposite of S. A signed sentence x : ϕ will be called S-signed or signed in S if x ∈ S and a set of signed sentences Θ will be said to be S-signed or signed in S if each of its elements is signed in S. We will formulate our theorem not just for the functionally complete language, but also for (virtually) all sublanguages of L tfi . Languages will be identified with their basic set of connectives, as usual.
Note that the only rules in PL 16 that change the signs of signed formulas are the negation rules (∼ t ), (∼ f ), and (∼ i ). In Figure 3 we have summarised them. The eight signs of the calculus form the nodes of a labelled graph that is arranged in such a way that whenever x and y are vertices connected with an edge labeled ∼ k , any signed sentence x : ϕ can be obtained from y : ∼ k ϕ with the help of rule (∼ k ); and vice versa, the graph is undirected. The graph lets us see at a glance which signed sentence can be obtained from which by a given negation rule. There clearly are an infinite number of paths between any two nodes x and y, but we find it expedient to define canonical short paths between them and canonical strings of negations labelling these paths.
Definition 11. We denote the empty string with ε. Define C to be the following set of strings of negations.
If τ ∈ C then τ is called a canonical string of negations. Consider Figure 3 and let x and y be signs. There is a unique σ ∈ C labelling a path in Figure 3 from x to y; σ is called the (canonical) x, y-string. If in Figure 3 there is a path labelled ∼ k ∼ ℓ from x to y (k, ℓ ∈ {t, f, i}), there is also a path labelled ∼ ℓ ∼ k from x to y. Also, if there is a path labelled ∼ k ∼ k from x to y, then x = y. It follows that if there is any string of negations from a language L ⊆ L tfi labelling a path from x to y, there is also a canonical x, y-string of L negations. Another observation is that, for any x and y, the x, y-string is identical to the x̄, ȳ-string. The following proposition is easily seen to be true. Of course, if one or more negations are not present in L ⊆ L tfi , there may be no x, y-string of L negations (and hence no path labelled with negations from L at all) between two given nodes. We introduce the notion of L-reachability.
Definition 12. Let L ⊆ L tfi , and let x and y be signs. x and y are in the L-reachability relation if the x, y-string contains only negations from L. L-reachability clearly is an equivalence relation. For each L and each sign x, let [x] L be the set {y | y is L-reachable from x}. For ease of reference, Table 2 gives an overview of the various partitions of the set of signs into L-reachability classes; note that the first partition corresponds to the cube in Figure 3 as a whole, the next three partitions each correspond to opposing faces of that cube, the following three to sets of edges, and the last to its set of vertices. A sketch of computing these classes follows.
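The classes [x] L can be computed by flooding the sign graph along edges whose labels belong to L. Since Figure 3 itself is not reproduced in this text, the edge map is left as a parameter below; any dictionary encoding that figure's labelled (undirected) edges will do.

```python
def reachability_class(x, L_negations, edges):
    """[x]_L: all signs reachable from x via edges labelled by negations in L.

    `edges` should map (sign, negation) pairs to signs according to the
    labelled graph of Figure 3, with every edge entered in both directions;
    it is a parameter here because the figure is not reproduced in this text.
    """
    seen, frontier = {x}, [x]
    while frontier:
        y = frontier.pop()
        for k in L_negations:
            z = edges.get((y, k))
            if z is not None and z not in seen:
                seen.add(z)
                frontier.append(z)
    return seen
```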
We define a general notion of interpolant. In the following, as in the rest of the paper, Voc(ϕ) will be used for the set of propositional letters occurring in ϕ and Voc(Θ) will be the set of propositional letters occurring in signed sentences in Θ. Definition 13. Let L ⊆ L tfi , let Θ 1 and Θ 2 be sets of signed L-sentences, let z be any sign, and let p be a proposition letter. An L-sentence χ is called a z, p-interpolant of Θ 1 and Θ 2 in L if Voc(χ) ⊆ (Voc(Θ 1 ) ∩ Voc(Θ 2 )) ∪ {p}, while both Θ 1 ∪ {z : χ} and Θ 2 ∪ {z̄ : χ} are unsatisfiable (if in fact Voc(χ) ⊆ Voc(Θ 1 ) ∩ Voc(Θ 2 ), we call χ a z-interpolant). We now state and prove a general theorem for the calculus. The proof is in fact an adaptation of Maehara's method, most often used in the context of Gentzen sequent calculi, to the present setting.
Theorem 1. Let L ⊆ L tfi , let S be a set of signs that is included in a single L-reachability class, let Θ 1 be a set of S-signed sentences, while Θ 2 is a set of S̄-signed sentences, and Θ 1 ∪ Θ 2 is unsatisfiable. Let z ∈ S and let p be a proposition letter. Then there is a z, p-interpolant of Θ 1 and Θ 2 in L. Proof. We will proceed by induction on the number of connectives occurring in signed sentences in Θ 1 ∪ Θ 2 . For the base step, assume that Θ 1 ∪ Θ 2 only contains signed propositional letters.
In general, if a set of signed sentences Ξ has only elements of the form y : q, with q a propositional variable, and, for no q and y, {y : q, ȳ : q} ⊆ Ξ, then Ξ is easily shown to be satisfiable. By contraposition we find that {x : r, x̄ : r} ⊆ Θ 1 ∪ Θ 2 , for some x and r.
We consider two main subcases and in each define a z, p-interpolant χ.
I. x : r ∈ Θ 1 and x̄ : r ∈ Θ 2 , for some x and r. In this case we can let χ = σr, where σ is the x, z-string. Since x and z are both elements of S, σ only contains negation symbols from L. Note that in this case χ is in fact a z-interpolant of Θ 1 and Θ 2 in L.
II. {x : r, x̄ : r} ⊆ Θ 1 or {x : r, x̄ : r} ⊆ Θ 2 , for some x and r. Then L must contain at least one conjunction or disjunction and so either tb, or nf, or nt, or fb, or nftb, or ∅ is definable in L (compare Table 1). In the first case (in which ∨ t ∈ L) we can consider a number of further subcases. In each of these subcases Voc(χ) ⊆ (Voc(Θ 1 ) ∩ Voc(Θ 2 )) ∪ {p}, while Θ 1 ∪ {z : χ} and Θ 2 ∪ {z̄ : χ} are unsatisfiable. The cases where conjunctions or disjunctions other than ∨ t are present in L are entirely similar and left to the reader.
For the induction step, assume that Θ 1 and Θ 2 satisfy the constraints mentioned in the theorem, while the unsatisfiable Θ 1 ∪ Θ 2 contains n + 1 connectives and the theorem holds for all Θ 1 and Θ 2 such that Θ 1 ∪ Θ 2 contains at most n connectives. Let ϑ ∈ Θ 1 ∪ Θ 2 be a signed sentence containing at least one connective. There is a unique tableau rule ρ such that ϑ is an instantiation of its top formula. We prove the induction step by cases, taking into account 1) which rule ρ matches ϑ and 2) whether ϑ ∈ Θ 1 or ϑ ∈ Θ 2 . This gives 30 cases, but they cluster in two similarity groups. Note that all rules have the property that if their top formula is an L sentence signed in S, their bottom formulas will also be signed in S.
Since a. and c. are unsatisfiable, e. and f. below are too, and from this we deduce that g. is unsatisfiable.
There are now two possibilities. The first is that z is one of the signs mentioned in the side condition of (∧ 2 t ), i.e. z ∈ {n, f, t, b}. Then z̄ ∈ {n̄, f̄, t̄, b̄}, i.e. z̄ is one of the signs mentioned in the side condition of (∧ 1 t ). Using (∧ 1 t ) we see that h. is unsatisfiable since g. is and using (∧ 2 t ) it follows that i. is unsatisfiable because b. and d. are. We conclude that χ 1 ∧ t χ 2 is a z, p-interpolant of Θ 1 and Θ 2 in this case.
If, on the other hand, z ∈ {n̄, f̄, t̄, b̄}, we reason as follows. Since x ∈ {n, f, t, b}, while x ∈ S and z ∈ S, it must be the case that ∼ t ∈ L. This means that ∼ t (∼ t χ 1 ∧ t ∼ t χ 2 ), a sentence equivalent to χ 1 ∨ t χ 2 (note that we have not assumed that ∨ t ∈ L), is an L sentence. Using (∨ 1 t ) we conclude from g. that Θ 1 ∪ {z : χ 1 ∨ t χ 2 } is unsatisfiable, while from b. and d. it follows with the help of (∨ 2 t ) that Θ 2 ∪ {z̄ : χ 1 ∨ t χ 2 } is. Therefore the sets j. and k. are unsatisfiable and hence ∼ t (∼ t χ 1 ∧ t ∼ t χ 2 ) is the z, p-interpolant that was sought after.
Let us turn to the entailment relations we are interested in and to the auxiliary relations in terms of which they are characterised. We first define what it means for these relations to have the interpolation property on a sublanguage of L tfi .
Definition 14. Let R be one of the entailment relations under consideration and let L ⊆ L tfi . R is said to have the interpolation property on L if, for any ϕ, ψ ∈ L such that ϕRψ and Voc(ϕ) ∩ Voc(ψ) ≠ ∅, there is a χ ∈ L with Voc(χ) ⊆ Voc(ϕ) ∩ Voc(ψ) such that ϕRχ and χRψ. R is said to have the perfect interpolation property on L if the condition that Voc(ϕ) ∩ Voc(ψ) ≠ ∅ can be dropped, i.e. if, for any ϕ, ψ ∈ L such that ϕRψ, there is a χ ∈ L with Voc(χ) ⊆ Voc(ϕ) ∩ Voc(ψ) such that ϕRχ and χRψ.
The auxiliary relations |= T , |= F , |= N , and |= B indeed have the interpolation property on all sublanguages L of L tfi (note that for the functionally complete language itself this also follows from Takano [10]). If at least one of the negations is missing from L, they have the perfect interpolation property. Can this result be extended to the entailment relations |= t , |= f , |= i , and |= that we are after? The answer is that in many cases we can find interpolants for these entailment relations that are certain truth-functional combinations of interpolants for the auxiliary relations in terms of which they can be analysed. Before we show the general procedure, let us first make a few simple observations. The first has to do with perfect interpolation: if a relation R that is included in |= T has the interpolation property on L, it automatically has the perfect interpolation property on L. Proof. Let R be as described. Suppose ϕ and ψ are L sentences such that ϕRψ. Then ϕ |= T ψ and Lemma 1 gives an interpolant χ such that Voc(χ) ⊆ Voc(ϕ) ∩ Voc(ψ). Since no sentence can have an empty vocabulary, it follows that Voc(ϕ) ∩ Voc(ψ) ≠ ∅. So, since R has the interpolation property on L, it has the perfect interpolation property on L.
The second observation concerns the relation |=.
From this proposition the following useful lemma follows directly.
Lemma 3. If |= t or |= f has the (perfect) interpolation property on a language L, then |= likewise has the (perfect) interpolation property on L.
Interpolation Results for Sublanguages of L tfi
The language L tfi is functionally complete and hence maximally expressive given the underlying semantics. This makes it relatively easy to construct interpolants. Do less expressive languages still have the interpolation property? The question is not without interest, as it concerns languages such as L tf , the language of Shramko and Wansing [8], which in [4] we have shown to be expressively equivalent to the languages L →t tf and L →f tf considered in Odintsov [5].
We will give affirmative answers for these and a range of other languages here, but will restrict attention to those sublanguages of the functionally complete one that are closed under duals in the following sense: L is closed under duals if, for each k ∈ {t, f, i}, ∧ k ∈ L if and only if ∨ k ∈ L.
So, in all languages under consideration conjunctions and disjunctions come in pairs. Let us first discuss languages that do not contain all of these pairs. For these certain dualities arise. First a definition.
Definition 16. For each sign x and each k ∈ {t, f, i}, x * k will denote the unique sign such that, for all ϕ and all valuations V , V satisfies {x : ∼ k ϕ} if and only if V satisfies {x * k : ϕ}. The reader may want to compare this definition with the side conditions of the (∼ k ) tableau expansion rules. On languages that do not have all conjunction/disjunction pairs some entailment relations are coextensive.
Proposition 7. Let L ⊆ L tfi be a language such that, for some k ∈ {t, f, i}, L ∩ {∧ k , ∨ k } = ∅. Then ϕ |= k ψ if and only if ψ |= k ϕ, for all L sentences ϕ and ψ. An immediate consequence of this duality (and Proposition 1) is that certain entailment relations collapse to equivalence and as a consequence have the interpolation property. Proposition 8. Let L be as in Proposition 7 (k ∈ {t, f, i}). Then ϕ |= k ψ implies V (ϕ) = V (ψ), for all valuations V and L sentences ϕ and ψ. It follows that |= k enjoys interpolation on L.
From this the following proposition about the limiting case of languages only containing negations follows immediately.
Another consequence of Propositions 1 and 7 is that in the absence of ∧ k and ∨ k (k ∈ {t, f, i}) the characterisations of entailment relations |= ℓ , where ℓ ≠ k, can be simplified.
Proposition 9. Let L be as before, and let ϕ and ψ be L sentences. Then the following equivalences hold.
Moreover, if two conjunction/disjunction pairs are missing, the only remaining entailment relation that does not collapse to equivalence will in fact be coextensive with |= T , as the following proposition shows.
Proof. Let ϕ and ψ be L sentences. Again use Propositions 1 and 7 in order to show that ϕ |= k ψ ⇐⇒ ϕ |= T ψ. That |= k has the interpolation property follows from Lemma 1. That |= ℓ enjoys interpolation, for ℓ ∈ {t, f, i} and ℓ ≠ k, follows from Proposition 8.
Propositions 9 and 11 imply that |= t , |= f , and |= i enjoy interpolation on all relevant L that have at most one conjunction/disjunction pair. So, from this point on we can focus on languages closed under duality that contain at least two conjunction/disjunction pairs.
But what if negations are missing? We have already seen that interpolation results for languages lacking one or more negations can immediately be strengthened to results about perfect interpolation, but now must take into account that it is no longer a given that formulas constantly denoting elements of 16 are definable. Suppose, for example, that L is a language not containing ∼ i and ϕ is an L-sentence. Then a straightforward induction on sentence complexity gives that if V (p) = ⟨0, 0, 0, 0⟩ for every p ∈ Voc(ϕ), we also have that V (ϕ) = ⟨0, 0, 0, 0⟩. Similarly, V (ϕ) = ⟨1, 1, 1, 1⟩ if V (p) = ⟨1, 1, 1, 1⟩ for every p ∈ Voc(ϕ). It follows that no L-formula can have a constant denotation. Since formulas with constant denotation were used to 'glue' interpolants together in Proposition 6, we need to adapt the method.
In languages that contain only a single negation we see a property similar to the one just described. Consider, for example, a language L that only contains the ∼ i negation and let ϕ be any sentence of L. Then we see that, if V B (p) = 0 and V N (p) = 1 for every p occurring in ϕ, we also have V B (ϕ) = 0 and V N (ϕ) = 1.
Let us analyse the situation a bit further. Here are some useful definitions.
Definition 17. A form is a partial function F : {B, F, T, N} ⇀ {0, 1} with a non-empty domain. If V is a valuation and ϕ is a formula then V is called an F-valuation on ϕ if, for all x ∈ dom(F ), V x (ϕ) = F (x). If P is a set of propositional letters then V is an F-valuation on P if V is an F-valuation on all p ∈ P . A form F is fixed for a formula ϕ if V is an F-valuation on ϕ whenever V is an F-valuation on Voc(ϕ), for all V . F is fixed for a language L if F is fixed for all L-sentences. Table 3 gives, for each L ⊆ L tfi , a collection of forms fixed for L, depending on the value of L ∩ {∼ t , ∼ f , ∼ i }. For example, {⟨B, 0⟩, ⟨N, 1⟩} is a form fixed for languages containing only the ∼ i negation, while for languages that contain only ∼ t and ∼ f , {⟨B, 0⟩, ⟨F, 0⟩, ⟨T, 0⟩, ⟨N, 0⟩} is fixed. This corresponds to two of the situations just described. The proof of the following proposition is a straightforward induction on the complexity of L formulas in each case. Proposition 12. If L ∩ {∼ t , ∼ f , ∼ i } is as in the left column of Table 3, then the corresponding forms on the right are fixed for L.
(Table 3, partially recovered rows: for {∼ t , ∼ f } the forms {⟨B, 0⟩, ⟨F, 0⟩, ⟨T, 0⟩, ⟨N, 0⟩} and {⟨B, 1⟩, ⟨F, 1⟩, ⟨T, 1⟩, ⟨N, 1⟩}; for {∼ t , ∼ i } the forms {⟨B, 1⟩, ⟨F, 1⟩, ⟨T, 0⟩, ⟨N, 0⟩} and {⟨B, 0⟩, ⟨F, 0⟩, ⟨T, 1⟩, ⟨N, 1⟩}.)
We will use certain conjunctions and disjunctions of literals for 'glueing' interpolants together. Here is a definition.
Definition 18. A literal over the propositional letter p is any formula σp, where σ is a (possibly empty) string of negations. A literal σp is in canonical form if σ ∈ C, where C is as in Definition 11. Let L ⊆ L tfi . A literal over p in canonical form that is also an L-formula is called a canonical L-literal over p. If P is a set of propositional letters, we let Lit L (P ) := {ϕ | ϕ is a canonical L-literal over some p ∈ P } .
The following proposition makes a connection between values that are not fixed by some form and literals witnessing that fact.
Proposition 13. Let L ⊆ L tfi , let p be a propositional letter, and let x ∈ {B, F, T, N}. For each valuation V , one of the two following statements holds.
(a) For some F that is fixed for L, V is an F-valuation on p and x ∈ dom(F ).
(b) There is a canonical L-literal λ over p such that V x (λ) ≠ V x (p).
Proof. Note that (a) holds in case L ∩ {∼ t , ∼ f , ∼ i } = ∅. In all other cases, we suppose that (a) does not hold, pick the unique form F that is fixed for L such that ⟨x, V x (p)⟩ ∈ F , conclude that V is not an F-valuation on p, and construe the desired literal that witnesses (b).
The individual cases are routine and are left to the reader; each is very similar to the others.
While we will not use the fact, it is worthwhile to note that whenever L ∩ {∼ t , ∼ f , ∼ i } is as in the left column of Table 3 and some form F is fixed for L, F is the union of corresponding forms on the right. This can be proved in a way akin to the proof of the preceding proposition. Here is a sketch. If {∼ t , ∼ f , ∼ i } ⊆ L then no F is fixed for L (for the reason we have just seen) and the statement is trivially true. Suppose that {∼ t , ∼ f , ∼ i } ⊄ L and F is not a union of forms in the entry for L on the right of Table 3. Then there is a ⟨x, y⟩ ∈ F such that the unique form F′ on the right with ⟨x, y⟩ ∈ F′ is not a subset of F . This means that there is a ⟨x′, y′⟩ ∈ F′ such that ⟨x′, y′⟩ ∉ F . In each concrete case it is now easy to find an F-valuation V on some p and a canonical L literal λ over p such that V x (λ) ≠ y, which shows that F is not fixed for L. Details are left to the reader. It is now easy to see that the forms fixed for a given L are exactly those unions of forms mentioned in the entry for L in Table 3 that are functions.
Proposition 13 can be used to show that, while it is impossible to define the top and bottom elements of the three lattices if not all negations are present, we can have approximations.
Proposition 14. Let L ⊆ L tfi be a language containing ∧ k and ∨ k (k ∈ {t, f, i}). For each nonempty but finite set P of propositional letters, there are L-formulas τ k P and β k P , containing only propositional letters from P , such that, for each x ∈ {B, F, T, N} and each valuation V , one of the following two statements holds.
(a) There is an F that is fixed for L, x ∈ dom(F ) and V is an F -valuation on P . [In this case
Proof. Define τ k P as the ∨ k disjunction of all formulas in Lit L (P ) and β k P as the ∧ k conjunction of all formulas in Lit L (P ). Let V be a valuation, let x ∈ {B, F, T, N}, and suppose that (a) does not hold, so that V is not an F-valuation on P for any F fixed for L with x ∈ dom(F ). By Proposition 13 there are a p ∈ P and a canonical L-literal λ over p such that V x (λ) ≠ V x (p). Inspection of Definition 2 reveals that (b) holds.
Let us stress that in the (b) case of the preceding proof it is not necessarily the case that V (τ k P ) = ⊤ k or V (β k P ) = ⊥ k . Counterexamples are easily arrived at. The 'pointwise' formulation is really essential here, as it is in the applications of the proposition below.
So we have formulas that approximate the constantly denoting formulas that we want, modulo certain exceptions. Will the exceptions spoil our game? They will not and the following proposition gives the essential reason.
Proposition 15. Let L ⊆ L tfi and let ϕ, ψ, and χ be L formulas such that ϕ |= x ψ for some x ∈ {T, F, N, B}, while Voc(χ) ⊆ Voc(ϕ) ∩ Voc(ψ). Let F be fixed for L with x ∈ dom(F ) and let V be an F-valuation on Voc(ϕ) ∩ Voc(ψ). Then V x (ϕ) ≤ V x (χ) and V x (χ) ≤ V x (ψ). Proof. We prove the first inequality; the second is shown similarly. If F (x) = 1 then V x (χ) = 1 and we are done. Assume that F (x) = 0. Define the valuation V′ by letting, for each y ∈ {T, F, N, B} and each p, V′ y (p) = F (y) if p ∈ Voc(ψ) and y ∈ dom(F ), while V′ y (p) = V y (p) otherwise. Then V′ is an F-valuation on Voc(ψ) and V′ x (ψ) = 0. Since ϕ |= x ψ, it follows that V′ x (ϕ) = 0. But V and V′ agree on Voc(ϕ), so V x (ϕ) = 0 and the statement holds.
We now have enough material to prove the remaining interpolation statements. Let us first consider the case that all conjunctions and disjunctions are present. We then get a generalisation of Proposition 6 whose proof is close to the latter's, but with the twist that it uses the considerations above in order to get the necessary 'glue'.
Proposition 16. Let L ⊆ L tfi be a language closed under duals such that all three conjunction/disjunction pairs are in L. Then the entailment relations |= t , |= f , and |= i each have the interpolation property on L.
Proof. Suppose ϕ and ψ are L sentences such that ϕ |= t ψ and Voc(ϕ) ∩ Voc(ψ) ≠ ∅. Let P be short for Voc(ϕ) ∩ Voc(ψ) and let τ t P , τ f P , β t P , and β f P be as in Proposition 14. Let b ≈ := τ t P ∧ i β f P , f ≈ := β t P ∧ i β f P , t ≈ := τ t P ∧ i τ f P , n ≈ := β t P ∧ i τ f P , and let χ be the following sentence.
With the help of Proposition 14 it is easily seen that, for all valuations V , and all x ∈ {B, F, T, N}, at least one of the two following statements is true.
(a) V is an F -valuation on P , for some F fixed for L with x ∈ dom(F ).
In the (b) case, note that, in view of Definition 2, only the V x values of t ≈ , b ≈ , f ≈ , and n ≈ are relevant for the value of V x (χ), so that we have the following.
It can be concluded that, for all valuations V , V (ϕ) ≤ t V (χ) and V (χ) ≤ t V (ψ), i.e. that ϕ |= t χ and χ |= t ψ. It follows that |= t enjoys interpolation on L. That |= f and |= i also do follows by almost identical argumentation.
The remaining case is the one in which exactly one conjunction and its dual are absent from the language. Its proof makes essential use of Proposition 10. Otherwise it is very much like the previous proof.
Proposition 17. Let L ⊆ L tfi be a language closed under duals such that exactly one conjunction/disjunction pair is missing from L. Then |= t , |= f , and |= i each have the interpolation property on L. Proof. Let L be as described. Consider the case that L ∩ {∧ i , ∨ i } = ∅, so that {∧ t , ∨ t , ∧ f , ∨ f } ⊆ L.
The proof for |= f is virtually identical, and also gives an interpolant of the form (χ 1 ∧ f τ t P ) ∨ f (χ 2 ∧ f β t P ). That |= i enjoys interpolation on L follows from Proposition 8.
This concludes the case that L ∩ {∧ i , ∨ i } = ∅. The two remaining cases are very similar. In case L ∩ {∧ f , ∨ f } = ∅ one arrives at interpolants of the form (χ 1 ∧ t τ i P ) ∨ t (χ 2 ∧ t β i P ) , while the case that L ∩ {∧ t , ∨ t } = ∅ leads to interpolants of the form (χ 1 ∧ i τ f P ) ∨ i (χ 2 ∧ i β f P ) . In all cases χ 1 and χ 2 are interpolants for appropriate auxiliary entailment relations. Details are left to the reader.
We sum up our results in the following theorem, which is just a combination of Propositions 9, 11, 16, 17, and Lemmas 2 and 3. The theorem affirmatively answers the question that was asked in Takano [11]: does |= enjoy perfect interpolation on L tf ? Concrete interpolants are easily extracted from our proofs. In particular, if ϕ and ψ are L tf sentences such that ϕ |= ψ, we can conclude that also ϕ |= t ψ. From the proof of Proposition 17 it follows that ϕ |= t χ |= t ψ, where χ is of the form (χ 1 ∧ f τ t P ) ∨ f (χ 2 ∧ f β t P ). Here χ 1 and χ 2 are perfect interpolants for ϕ |= T ψ and ϕ |= B ψ respectively and can be extracted from the proof of Theorem 1. τ t P is the ∨ t disjunction of all canonical L tf literals over the (nonempty) shared vocabulary P of ϕ and ψ, while β t P is a similar ∧ t conjunction. From Lemma 3 it follows that in fact ϕ |= χ |= ψ, so that we have extracted the interpolant that was sought after.
Conclusion
The analytic tableau calculus PL 16 provides several propositional logics based on the trilattice SIXTEEN 3 with a syntactic characterisation. Entailment relations of interest are typically characterisable as intersections of certain auxiliary entailment relations and/or their converses and verifying or disproving an entailment may require the development of several tableaux.
In this paper we have shown that several entailment relations of obvious interest enjoy interpolation. Our methods have been constructive: in concrete cases interpolants can be found by first finding interpolants for some of the relevant auxiliary entailment relations and by then glueing these together in certain ways. The method works for a language that can express all truth functions over 16, but also for all sublanguages closed under duals. This includes the language originally considered by Shramko and Wansing [8]. | 2022-12-25T14:50:47.433Z | 2017-08-22T00:00:00.000 | {
"year": 2017,
"sha1": "b5c4356f55ad6779528f160f37ef77325e149b56",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11225-017-9742-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "b5c4356f55ad6779528f160f37ef77325e149b56",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
245736482 | pes2o/s2orc | v3-fos-license | A Comparison of IgG Index and Oligoclonal Band in the Cerebrospinal Fluid for Differentiating between RRMS and NMOSD
As the oligoclonal band in the cerebrospinal fluid (CSF-OCB) in predicting relapsing-remitting multiple sclerosis (RRMS) is less sensitive in Asian populations than that in westerners, it remains elusive whether the IgG index could serve as an alternative. The purpose of this study was to compare these two methods of differentiating between RRMS and neuromyelitis optica spectrum disorder (NMOSD) in Chinese patients. A total of 171 patients (81 RRMS and 90 NMOSD) were retrospectively recruited, of whom 82 (56 RRMS and 26 NMOSD) received the CSF-OCB testing additionally. When the onset age was ≤38.5 years, IgG index with the threshold of 0.67 had a significant agreement (k = 0.4, p < 0.001) with the diagnosis while CSF-OCB failed to discriminate (k = 0.1, p = 0.578). However, when the onset age was >38.5 years, both IgG index with the threshold of 0.8 and CSF-OCB were moderately consistent with the diagnosis (both k > 0.4, p < 0.05). In total, our optimized algorithm had the sensitivity, specificity, and predictive accuracy of 0.778, slightly outperforming the CSF-OCB model. Accordingly, a combination of the onset age and IgG index could serve as an alternative to CSF-OCB for differentiating between RRMS and NMOSD in Chinese patients.
Introduction
The presence of "oligoclonal bands" (OCBs) in the cerebrospinal fluid (CSF) has been identified as a key immunodiagnostic biomarker for multiple sclerosis (MS), detected in about 95% of clinically definite cases in western countries [1,2]. CSF-OCBs are the soluble clonal immunoglobulins resulting from the intrathecal immune response against antigens that are still not completely known. Although the CSF-OCB does not imply dissemination in time, it can substitute for the requirement for demonstration of this measure [3], accelerating the diagnosis for MS without waiting for an additional attack. However, isoelectric focusing (IEF) gel electrophoresis, a frequently used sensitive and accurate technique for detecting CSF-OCB [4], is not always available or affordable in many rural areas. More importantly, in patients with MS, CSF-OCB-positive cases in many Asian countries account for a much lower proportion than those in western countries [5][6][7], suggesting that the sensitivity of this laboratory marker may vary between races. This, on the other hand, would also lead to a delayed diagnosis in a considerable proportion of Asian patients. In this scenario, the IgG index, a traditional and expedient method measuring the intrathecal IgG synthesis, seems to be an alternative.
Although previous studies have noted that the IgG index is less sensitive than the CSF-OCB by IEF in predicting MS [1,4,8,9] and an elevated IgG index does not necessarily herald the presence of CSF-OCB [3], these results are based on westerners and a fixed threshold, e.g., 0.7, for all the individuals. In fact, the IgG index may also be influenced by other factors apart from diseases, e.g., age. This is because theoretically, the age-dependent dysfunction or delayed recovery of the blood-brain barrier may facilitate autoreactive B cells that evade tolerance checkpoints and aberrantly break tolerance to autoantigens to migrate from the periphery to the central nervous system (CNS), where they possibly differentiate into antibody-secreting cells (ASCs) (i.e., plasmablasts and plasma cells) [10] and contribute to an elevated IgG index in certain conditions. Moreover, as the onset age is an important classifier between CNS-demyelinating dysimmunities, e.g., patients with MS frequently had younger ages at onset than those with neuromyelitis optica spectrum disorder (NMOSD) [11][12][13][14], a combination of this measure and the IgG index may improve the diagnostic accuracy to some extent.
Accordingly, in this study, we aimed to investigate the clinical significance of the IgG index in predicting Chinese patients with MS by systematically comparing it with CSF-OCB for differentiating the common CNS-demyelinating conditions based on age stratification, hoping to provide a preliminary diagnostic aid for the neuroimmunology clinic.
Participants
Patients' data were retrospectively reviewed from the electronic medical record system of the department of neurology in Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, between December 2019 and October 2021. Only participants who fulfilled the following criteria were included: a diagnosis of relapsing-remitting MS (RRMS) according to the 2017 revisions of the McDonald criteria [3] or of aquaporin-4 antibody (AQP4-Ab) positive NMOSD based on the 2015 international consensus [15]; and available CSF data with IgG index. Cases who received immunosuppressive treatment or disease-modifying therapies (DMTs) for at least 6 consecutive months immediately before lumbar puncture (LP) were excluded from our study, along with those who were administered rescue therapies (pulsed glucocorticoids, plasma exchange, and intravenous immunoglobulins) within one month before LP. AQP4-Ab was detected by cell-based assay (CBA).
IgG Index and CSF-OCB
All recruited patients received the test for the IgG index within one month of the clinical attack, some of whom additionally received CSF-OCB detection performed by IEF (SEBIA HYDRASYS); the CSF and blood samples for the two tests were obtained at the same time. The IgG index was detected using a rate immunonephelometry technique (SIEMENS BN-II) and calculated as the quotient of the QIgG (CSF-to-serum IgG ratio) and Qalb (CSF-to-serum albumin ratio). Detection of CSF-OCBs was defined as positive if patterns 2 or 3 were present [4,16].
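As a minimal sketch of the stated computation (assuming consistent units within each quotient):

```python
def igg_index(csf_igg, serum_igg, csf_albumin, serum_albumin):
    """IgG index = Q_IgG / Q_alb, with Q_IgG = CSF IgG / serum IgG
    and Q_alb = CSF albumin / serum albumin."""
    q_igg = csf_igg / serum_igg
    q_alb = csf_albumin / serum_albumin
    return q_igg / q_alb
```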
Statistical Analyses
The Mann-Whitney test was used to compare continuous distributions between groups, with Fisher's exact test for nominal data. Bivariate and partial correlations, as well as their significance, were assessed by Spearman's rank correlation analyses. Cohen's kappa (K) was used to assess the agreement of the IgG index and CSF-OCB with the diagnosis. A K < 0 refers to poor agreement, 0.01-0.2 slight, 0.21-0.4 fair, 0.41-0.6 moderate, 0.61-0.8 substantial, and 0.81-1.0 approximately perfect agreement. The optimal cutoff value or threshold associated with the outcome (diagnosis) was determined by receiver operating characteristic (ROC) curve analysis. The significance of the stratified variable in the crosstab was assessed by Cochran and Mantel-Haenszel (CMH) tests. All data were analyzed using IBM SPSS 23 software, with a 2-sided p < 0.05 considered significant.
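The two key computations can be sketched with standard tooling. The paper does not state which criterion its ROC analysis used to select the cutoff, so the Youden index used below is an assumption:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_curve

def roc_optimal_cutoff(y_true, scores):
    # Youden's J statistic: maximize tpr - fpr over candidate thresholds.
    # (An assumption; the paper only says the cutoff came from ROC analysis.)
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    return thresholds[np.argmax(tpr - fpr)]

def agreement_with_diagnosis(diagnosis, marker_positive):
    # Cohen's kappa between a dichotomised marker and the final diagnosis.
    return cohen_kappa_score(diagnosis, marker_positive)
```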
Results
A total of 171 patients (81 with RRMS and 90 with AQP4-Ab NMOSD) were recruited, and their clinical laboratory profiles are summarized in Table 1, with 108 and 82 participants who additionally received the tests for the serum complement component (C3 and C4) and CSF-OCB, respectively. The sex ratio was similar between the two groups (p = 0.22), while the patients with RRMS had a significantly lower onset age and age at LP than those with AQP4-Ab NMOSD (both p < 0.001). Additionally, the IgG index and the number of positive CSF-OCB in the RRMS group were markedly higher than those in the NMOSD counterpart (both p < 0.001), whereas a relatively elevated Qalb was observed in the latter group (p = 0.005). Notably, neither of the complement components C3 nor C4 in the serum was statistically different between groups (both p > 0.1). To clearly demonstrate the correlations between variables, Spearman's correlation analysis was used and showed that the diagnosis was significantly associated with the age at onset, with the correlation coefficient of 0.54, followed by the age at LP (0.516), the number of positive CSF-OCB (−0.418), and the IgG index (−0.411). This suggested that age, especially at the onset, could serve as the preliminary classifier between RRMS and AQP4-Ab NMOSD.
Identification of the Optimal Cutoff Values between Groups by ROC Curve Analysis
A preliminary classification was built based on the onset age using ROC curve analysis, which revealed that 38.5 years could serve as the optimal cutoff value for differentiating between the two diseases, with a sensitivity of 0.741 and specificity of 0.8 (p < 0.001). Then, in the early onset age (onset age ≤ 38.5 y) group, the optimal threshold of the IgG index for prediction was 0.67 (sensitivity: 0.783 and specificity: 0.667, p = 0.001), while in the late-onset age (onset age > 38.5 y) group, this figure was 0.8 (sensitivity: 0.762 and specificity: 0.806, p < 0.001); i.e., an elevated IgG index was defined as a value > 0.67 when the onset age was ≤ 38.5 y or a value ≥ 0.8 when the onset age was > 38.5 y.
Comparison of the IgG Index and CSF-OCB for Differentiating between RRMS and AQP4-Ab NMOSD Based on the Stratification of the Onset Age
Crosstabs based on the stratification of the onset age were built to compare the differences between the IgG index and CSF-OCB in diagnostic prediction (Table 2). In the early onset age group, the CSF-OCB failed to differentiate between RRMS and AQP4-Ab NMOSD (K = 0.1, p = 0.578), whereas it could achieve moderate agreement with the final diagnosis in the late-onset age group (K = 0.43, p = 0.015), with significant CMH tests (both p < 0.01). By comparison, the IgG index could predict the diagnosis in both early and late-onset age groups with a fair-to-moderate agreement (both K ≥ 0.4 and p < 0.001), and the CMH tests were also significant (both p < 0.001). Notably, when patients with different onset ages were pooled, both the IgG index and CSF-OCB could well classify the two conditions (both p < 0.001), with a relatively higher agreement observed when the IgG index served as the classifier (IgG index vs. CSF-OCB: 0.56 vs. 0.4).
(Table 2 note.) * A "−" refers to an IgG index lower than the defined threshold, while a "+" indicates a value greater than the defined threshold. Abbreviations: RRMS = relapsing-remitting multiple sclerosis; NMOSD = neuromyelitis optica spectrum disorder; CSF-OCB = oligoclonal band in the cerebrospinal fluid; Sen = sensitivity; Spe = specificity; PPV = positive predictive value; NPV = negative predictive value; PA = predictive accuracy.
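The performance measures reported alongside Table 2 follow from the four cells of each crosstab; a minimal sketch using their standard definitions:

```python
def crosstab_metrics(tp, fp, fn, tn):
    """Sen, Spe, PPV, NPV, and PA from a 2x2 crosstab of a dichotomised
    marker (rows: +/-) against the final diagnosis (columns)."""
    total = tp + fp + fn + tn
    return {
        'Sen': tp / (tp + fn),
        'Spe': tn / (tn + fp),
        'PPV': tp / (tp + fp),
        'NPV': tn / (tn + fn),
        'PA':  (tp + tn) / total,
    }
```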
In the early onset age group, although CSF-OCB had a slightly higher positive predictive value (PPV) than IgG index (0.935 vs. 0.887), the latter outperformed the former in sensitivity, specificity, negative predictive value (NPV), and predictive accuracy (PA). However, with the increase in age at onset (>38.5 y), in the scheme in which CSF-OCB served as the discriminator, the sensitivity and PPV decreased, while the specificity, NPV, and PA were elevated, with similar trends also observed in the scheme in which IgG index was employed as a distinguisher. Meanwhile, the IgG index algorithm had a markedly high NPV (0.921) yet a lower PPV (0.533) than its CSF-OCB counterpart. Nonetheless, in the pooled group, concerning all the assessments except PPV, the IgG index had an improved level of performance, compared with CSF-OCB, in differentiating between the two diseases.
Notably, if the patients with RRMS who had only one clinical attack were excluded, the optimal cutoff value of the onset age remained 38.5 y, but the CSF-OCB failed to predict the diseases in either early or late-onset age group (both K < 0.3, p > 0.1), while the IgG index still could (both K > 0.3, p < 0.01) (Supplementary Table S1). Moreover, in the populations with early and late-onset age, the CSF-OCB and IgG index were in moderate (K = 0.6, p < 0.001) and substantial (K = 0.64, p < 0.001) agreement with each other, respectively.
Interestingly, if the age at LP served as the preliminary classifier, with the optimal threshold of 44.5 y, almost the same results as those achieved by the onset age were observed (Supplementary Table S2). However, if the threshold of the IgG index was set at 0.7, regardless of age, the specificity and predictive accuracy would drop to 0.6 and 0.69, respectively, while the sensitivity was similar (0.79). Again, if the presence of intrathecal IgG synthesis was defined according to Reiber's hyperbolic function [17], the agreement was almost the same as that achieved by the IgG index with the threshold of 0.7 but lower than that in our scheme, in both early and late-onset age groups (Supplementary Table S3).
Establishment of a Discriminative Model
According to these results above, a simple discriminative practice scheme based on the onset age and IgG index was constructed (Figure 1), with both the sensitivity and specificity of 0.778, slightly higher than those based on CSF-OCB (sensitivity: 0.679 and specificity: 0.769). Interestingly, when the onset age and IgG index were included as continuous independent variables in the binary logistic analysis, the overall percentage correct was 78.4%, almost the same as ours (77.8%).
Figure 1. A two-step algorithm for discriminating between RRMS and NMOSD based on the onset age and IgG index. Abbreviations: RRMS = relapsing-remitting multiple sclerosis; NMOSD = neuromyelitis optica spectrum disorder.
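A minimal sketch of the two-step rule of Figure 1, using the cutoffs reported above; that an elevated index points to RRMS rather than NMOSD follows from the group comparison in Table 1:

```python
def classify(onset_age_years, igg_index):
    """Two-step discriminative scheme: stratify by onset age (38.5 y),
    then dichotomise the IgG index (> 0.67 if onset age <= 38.5 y,
    >= 0.8 otherwise)."""
    if onset_age_years <= 38.5:
        elevated = igg_index > 0.67
    else:
        elevated = igg_index >= 0.8
    return 'RRMS' if elevated else 'NMOSD'
```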
Sex-Related Differences in CSF Analysis between RRMS and NMOSD
As the blood-CSF barrier permeability may vary between males and females, we further investigated this difference and found that female sex was correlated with a lower Qalb and total protein, as well as albumin level, in CSF of both MS and NMOSD groups (Table 3). However, when the onset age was controlled, the female sex was still borderline associated with lower CSF protein levels in the MS group, but these markers did not vary between sexes in the NMOSD group (Supplementary Table S4), with the similar results observed when the age at LP was controlled.
Discussion
In this study, we compared the IgG index and CSF-OCB for distinguishing RRMS from AQP4-Ab NMOSD and revealed that the IgG index could better discriminate between the two diseases based on the stratification of the onset age and different cutoff values. When the age at onset was no more than 38.5 y, the IgG index with the threshold of 0.67 could achieve fair agreement (K = 0.4, p < 0.001) with the diagnosis, while the CSF-OCB failed (K = 0.1, p = 0.578). However, when the patient was older than 38.5 y at onset, both the IgG index with the cutoff value of 0.8 and the CSF-OCB could predict the final diagnosis with moderate consistency (both K > 0.4, p < 0.05). These results suggest that it may not be appropriate to classify Chinese patients with early onset ages by the presence of CSF-OCB, whereas the predictive values of the IgG index and CSF-OCB were similar in those with late-onset ages. On the other hand, these results also imply that the IgG index still has a role in discriminating between RRMS and NMOSD. In centers where AQP4 antibody (by CBA) and CSF-OCB (by IEF) assays are not available, or in cases where these tests are not affordable for candidate patients with suspected RRMS or NMOSD, our practical scheme could be applied as a preliminary diagnostic aid or may indicate which patients should be prioritized for antibody testing.
Traditionally, a single threshold for an elevated IgG index, applied to all individuals regardless of age, is used to predict potential intrathecal IgG synthesis; in many laboratories in China, including our center, this value is usually set at 0.7. Nevertheless, we found that this scheme had lower specificity and predictive accuracy than our algorithm, with similar sensitivity between the two. Further analysis revealed that the IgG index was borderline associated with the age at onset (correlation coefficient: 0.198, p = 0.062) in patients with NMOSD, while this association was statistically absent in those with RRMS (correlation coefficient: 0.045, p = 0.689) (Supplementary Table S4), suggesting that the correlation between the onset age and the IgG index may vary between the two diseases.

Generally, Qalb is considered a measure of the blood-CSF barrier function and is age dependent [18]. Although IgG is larger than albumin in molecular size [17], QIgG, to a lesser extent, could have similar properties to Qalb in the absence of intrathecal IgG synthesis. These expectations were borne out in our study: both Qalb and QIgG were positively correlated with the onset age or age at LP in both the RRMS and NMOSD groups (Supplementary Table S4, both p < 0.001), implying an age-dependent, attenuated blood-CSF barrier function. This dysfunction may subsequently facilitate the passive transfer of albumin and immunoglobulin, as well as the migration of immune cells, including antibody-secreting cells (ASCs), from the periphery to the CNS, and lay a foundation for intrathecal immunoglobulin synthesis. This may partially account for the elevated IgG index with increasing age in patients with NMOSD, leading to a rise in the optimal cutoff value of the IgG index (0.8) between NMOSD and RRMS when candidates were older than 38.5 y at onset. In this context, schemes that simply employ the presence of intrathecal IgG synthesis as the discriminator would lead to low specificity. By contrast, our optimized algorithm maximized the discrimination by defining different thresholds of the IgG index based on the stratification of the onset age, regardless of the presence of intrathecally synthesized IgG, and demonstrated a potentially improved discriminative ability (Supplementary Table S3). However, it is still elusive why the IgG index was not associated with the onset age or age at LP in RRMS herein (Supplementary Table S4).

In fact, although intrathecally synthesized immunoglobulins and the presence of CSF-OCBs could imply a diagnosis of MS, the underlying mechanisms and their pathogenic significance remain intriguing. Antibody deposition can be observed histologically in MS lesions [19], and rituximab, a B-cell-depleting agent, has proved effective in limiting disease activity [20][21][22][23], suggesting that B-cell-driven humoral dysimmunity may also play a critical role in the pathogenesis of MS. Nevertheless, the lack of specific pathogenic antibodies implies that other functions of B cells apart from antibody secretion, including antigen presentation to Th cells and cytokine production, may be more important [24]. On the other hand, the onset age is an optimal classifier between RRMS and NMOSD, with the highest association with the diagnosis in our study (correlation coefficient: 0.54, p < 0.001, Table 1). Therefore, a predictive scheme based on the IgG index and the age at onset is probably rational, as also supported by the significant CMH tests.
Notably, an increased IgG index did not necessarily herald the presence of CSF-OCB, because detection of CSF-OCB was considered positive if patterns 2 or 3 were present [16], although the two markers were highly correlated in both the RRMS and NMOSD groups (both p < 0.05, Supplementary Table S4). Since the presence of CSF-OCB neither establishes the diagnosis of RRMS nor excludes NMOSD, especially in Asian countries, where the positivity of CSF-OCB in the population with RRMS is not as high as that in western ones [5][6][7], a scheme in which the IgG index discriminates between the two diseases by predicting the presence of CSF-OCB would obviously lead to decreased sensitivity and specificity. However, theoretically, in extreme cases in which the blood-CSF barrier is fully disrupted, the maximum of the IgG index would be 1 in the absence of intrathecal IgG synthesis, i.e., an IgG index ≥ 1 can definitely predict a positive CSF-OCB. Moreover, as CSF-OCB can substitute for the requirement for dissemination in time, accelerating the diagnosis of MS in a considerable number of patients with only one clinical attack, the inclusion of these patients may elevate the IgG index in the RRMS group to some extent, given the high correlation between the IgG index and CSF-OCB. When they were excluded, CSF-OCB failed to classify the diseases correctly in either the early or the later-onset age group (both K < 0.3, p > 0.1), while the IgG index still could (both K > 0.3, p < 0.01) (Supplementary Table S1), outperforming CSF-OCB.
Interestingly, although patients with NMOSD had a significantly higher Qalb value than those with RRMS (Table 1), Qalb did not differ when the onset age (p = 0.864) or the age at LP (p = 0.983) was controlled in the partial correlation analysis (Supplementary Table S5), suggesting that the blood-CSF barrier function may be similarly influenced in the two diseases at the same age; thus, the age-dependent, attenuated barrier function seems to be the main contributor to this difference in Qalb. On the other hand, this could also imply that the two distinct immune responses are likely to exert a negligible discrepancy on the barrier function. Alternatively, Qalb may not be sensitive enough to detect local blood-CSF barrier breakdown, assuming that this disruption is indeed remarkable in patients with NMOSD.
Both the onset age and age at LP herein were significantly correlated with the diagnosis (p < 0.001, Table 1), yet the former seemed superior, as it had a slightly higher correlation coefficient (onset age vs. age at LP: 0.54 vs. 0.516). Theoretically, the age at LP is additionally associated with the disease duration, which could dilute its predictive effect, as patients with both RRMS and NMOSD are vulnerable to relapse and can have long disease durations, particularly when the durations vary significantly. When the age at LP instead of the onset age served as the preliminary classifier in our model, its optimal cutoff value was 44.5 y. This algorithm would achieve the same overall diagnostic accuracy (0.778) for NMOSD and RRMS candidates as the scheme in our study, possibly owing to the similar median durations of the two conditions (RRMS vs. NMOSD: 3 y vs. 3.5 y, Table 1). Both discriminative schemes are statistically acceptable, whereas, in clinical practice, the onset age often tends to serve as the preliminary classifier due to its relatively higher correlation with the diagnosis.
Sex-related differences in blood-CSF barrier permeability were previously observed in CSF analyses between hospital and general populations [25], between patients with MS, other inflammatory neurological disorders and non-inflammatory ones [26], as well as between those with schizophreniform and affective psychosis [27]. Although females are generally more susceptible to dysimmunities, male patients had a significantly higher total protein in CSF than females across all ages [28]. In line with this finding, we also noted that female patients had lower protein levels, including total protein, IgG and albumin, in CSF than males, in both the RRMS and NMOSD groups (Table 3). Nonetheless, further analysis revealed that these sex-related differences also varied between the diseases when the onset age (Supplementary Table S4) or the age at LP was controlled, suggesting that age may exert a different effect on this sex-related variance in blood-CSF barrier permeability between NMOSD and RRMS.
Although vasculocentric deposition of immunoglobulin and activated complement components has been observed in NMOSD lesions [29][30][31], we noted that the serum concentrations of complement components C3 and C4 did not significantly differ between patients with NMOSD and RRMS, with similar findings for the IgG level in the serum and CSF (Table 1), suggesting that the depletion of these immune mediators resulting from the different CNS immune responses is likely to be negligible at the individual level.
Our study was limited by the relatively small number of patients, especially those with NMOSD who received CSF-OCB testing, possibly owing to selection bias in clinical practice, which may lead to a potentially incorrect estimate of the diagnostic value of this testing method. In addition, admittedly, dichotomizing a continuous variable is not the optimal solution statistically [32] but has practical significance, as it can help clinicians form a rapid, preliminary impression of candidate patients. Nevertheless, we further validated that our results were in agreement with those produced by the binary logistic regression model, suggesting that this algorithm is rational. Future research with more patients, multiple centers, other Asian countries and additional parameters is needed.
Conclusions
In summary, the IgG index may still have a role in discriminating between RRMS and AQP4-Ab NMOSD, as CSF-OCB in China is not as sensitive as in western countries. Different thresholds defined based on the stratification of the onset age or age at LP can improve the predictive value of the IgG index and help achieve accuracy and sensitivity that are at least non-inferior, or even superior, to those of CSF-OCB herein. Accordingly, this optimized scheme may be applied as an alternative in cases where CSF-OCB testing by IEF is unavailable or unaffordable. Notably, although the diagnosis of MS has been facilitated by advances in MRI, it remains a great challenge to differentiate it from MS mimics, especially in those with only spinal or optic nerve injuries or one clinical attack, due to the lack of a specific antibody similar to AQP4-Ab in NMOSD. Comprehensively collecting and assessing the data from a candidate patient is required before a probable diagnosis is drawn. Occasionally, time will tell. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/brainsci12010069/s1, Table S1: Comparison of the IgG index and CSF-OCB for differentiating between RRMS patients with more than one clinical attack and patients suffering from AQP4-Ab NMOSD based on the stratification of the onset age, Table S2: Comparison of the IgG index and CSF-OCB for differentiating between patients with RRMS and AQP4-Ab NMOSD based on the stratification of the age at LP, Table S3: The agreement with diagnosis achieved by different methods; Table S4: Spearman's bivariate correlation analysis between NMOSD and RRMS; Table S5: Partial correlation analyses between Qalb and diagnosis while controlling the onset age or age at LP. | 2022-01-06T16:28:44.664Z | 2021-12-31T00:00:00.000 | {
"year": 2021,
"sha1": "49916dc901d52bf4db1220394599bdfa48da058b",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8773790",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "74315e48f1be34abe3c06d63c4ebaf43067953fa",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16728067 | pes2o/s2orc | v3-fos-license | Does a modified STarT Back Tool predict outcome with a broader group of musculoskeletal patients than back pain? A secondary analysis of cohort data
Objectives The STarT Back Tool has good predictive performance for non-specific low back pain in primary care. We therefore aimed to investigate whether a modified STarT Back Tool predicted outcome with a broader group of musculoskeletal patients, and assessed the consequences of using existing risk-group cut-points across different pain regions. Setting Secondary analysis of prospective data from 2 cohorts: (1) outpatient musculoskeletal physiotherapy services (PhysioDirect trial n=1887) and (2) musculoskeletal primary–secondary care interface services (SAMBA study n=1082). Participants Patients with back, neck, upper limb, lower limb or multisite pain with a completed modified STarT Back Tool (baseline) and 6-month physical health outcome (Short Form 36 (SF-36)). Outcomes Areas under the receiver operating characteristic curve (AUCs) tested the discriminative ability of the tool's baseline score for identifying poor 6-month outcome (SF-36 lower tertile Physical Component Score). Risk-group cut-points were tested using sensitivity and specificity for identifying poor outcome using (1) Youden's J statistic and (2) a clinically determined rule that specificity should not fall below 0.7 (false-positive rate <30%). Results In PhysioDirect and SAMBA, poor 6-month physical health was 18.5% and 28.2%, respectively. Modified STarT Back Tool score AUCs for predicting outcome in back pain were 0.72 and 0.79, neck 0.82 and 0.88, upper limb 0.79 and 0.86, lower limb 0.77 and 0.83, and multisite pain 0.83 and 0.82 in PhysioDirect and SAMBA, respectively. Differences between pain region AUCs were non-significant. Optimal cut-points to discriminate low-risk and medium-risk/high-risk groups depended on pain region and clinical services. Conclusions A modified STarT Back Tool similarly predicts 6-month physical health outcome across 5 musculoskeletal pain regions. However, the use of consistent risk-group cut-points was not possible and resulted in poor sensitivity (too many with long-term disability being missed) or specificity (too many with good outcome inaccurately classified as 'at risk') for some pain regions. The draft tool is now being refined and validated within a new programme of research for a broader musculoskeletal population. Trial registration number ISRCTN55666618; Post results.
INTRODUCTION
The Keele STarT Back Tool is designed to stratify patients with low back pain according to their risk of future physical disability, in order that prognostic subgroups can receive matched treatment. 1 For example, individuals at a low risk of persistent disabling problems can be reassured and discouraged from receiving unnecessary treatments and investigations, while those at high risk can be matched to treatment which combines physical and psychological approaches. [2][3][4] A large randomised trial testing a risk stratification approach (use of the STarT Back Tool and matched treatments) for low back pain in comparison to best current care demonstrated superior clinical and cost outcomes. 5 In addition, an implementation study testing risk stratification for patients with low back pain in routine general practice demonstrated significant improvements in physical function, reductions in time off work and sickness certification rates, and reductions in healthcare costs compared to usual non-stratified care. 2 Since low back pain accounts for only 17% of all UK primary care musculoskeletal consultations in general practice, 6 if a similar screening tool could be used for patients with other common pain presentations, such as neck pain and knee pain, then there could be potential for stratified care to make a greater impact for patients and healthcare services.

Strengths and limitations of this study
▪ First study to demonstrate that modified STarT Back Tool items are similarly predictive of 6-month physical health across different musculoskeletal pain regions.
▪ Within two large independent cohorts it was consistently shown that a modified STarT Back Tool similarly predicts 6-month physical health outcome in other musculoskeletal pain regions as well as low back pain.
▪ A limitation of the study was that the original STarT Back Tool was not included in these two data sets, so a direct comparison between the performance of the original and modified STarT Back Tool versions for patients with low back pain was not possible.

A previous systematic review of 45 cohort studies 7 reported that prognostic factors are often similar across different musculoskeletal presentations, with 11 factors predicting poor outcome at follow-up for at least two different musculoskeletal pain problems. Other studies have similarly shown that a generic set of baseline factors (pain intensity, episode duration, pain interference, depression and comorbid pain problems) predicts risk of a poor outcome across different pain regions, including back pain, headache, facial pain and knee pain, regardless of the specific location of pain or underlying pathology. [8][9][10][11][12] These studies indicate that it might be possible to use the same prognostic factors as those included within the STarT Back Tool to discriminate risk status for a much larger group of musculoskeletal pain patients than those consulting with low back pain. The key benefit of using a single tool to stratify patients with a wide range of musculoskeletal conditions rather than multiple site-specific prognostic screening tools is its simplicity for use in busy clinical practice.
While the likely value and acceptability of extending risk stratification to patients with other common musculoskeletal pain presentations are as yet unknown, evidence suggests that the majority of general practitioners (GPs) consider prognosis to be important in their clinical decision-making for musculoskeletal treatment. 13 Despite the widespread support for prognostic information, the clinical reality is that predicting outcome in these patients is not always easy and patients' risk status is not typically included within medical records. 14 GPs are not alone in wanting information about patients' likely prognosis over time, as >80% of musculoskeletal patients also want prognostic information from their GP, although less than a third actually receive this information. 14 Existing musculoskeletal prognostic tools are available (eg, Linton and Hallden 15 and Von Korff et al 16 17 ). However, these prognostic tools were not designed or tested to support clinical decisions in primary care about matched treatments (stratified care); only the STarT Back Tool has been specifically developed and tested to guide patient treatment matching.
The aim of this study was therefore to investigate the performance of a modified STarT Back Tool for predicting future physical health outcome for a broader group of musculoskeletal pain patients. Specific objectives were to compare the predictive performance of a modified STarT Back Tool for patients with musculoskeletal pain in different body regions and assess the consequences (false-positive and false-negative rates) of using existing STarT Back Tool score cut-points for classifying patients as medium/high risk across different pain regions (neck, back, upper limb, lower limb and multisite pain).
METHODS
Design
This study involved prespecified further analysis of existing data sets from two prospective cohorts of adults with musculoskeletal conditions consulting in two different services in the National Health Service, UK. Full ethical approval for both these studies was obtained and patients provided written informed consent prior to their research participation.
Patient population
1. The PhysioDirect trial included 2249 adult musculoskeletal patients taking part in a randomised trial comparing a PhysioDirect service (telephone-based physiotherapy assessment and advice) with usual physiotherapy care. [18][19][20] Primary outcome data (physical health measured using the SF-36v2 physical component score) at 6-month follow-up and baseline modified STarT Back Tool score were available for 1887 patients (84%) and were included in this analysis. The trial was conducted in four NHS community physiotherapy services in four different areas of England (Bristol, Somerset, Stoke-on-Trent and Cheshire). Adults (aged ≥18 years) who were referred by 94 GPs (covering a wide range of geographical areas and populations), or who referred themselves for physiotherapy for a musculoskeletal problem, were eligible for the trial. Patients completed postal questionnaires at baseline and 6 months after randomisation. Details about the PhysioDirect patient sample have been published. 18 For the study reported here, we used patients from the control and intervention arms.
2. The SAMBA study was an observational cohort of adults attending an NHS musculoskeletal clinical assessment and treatment service at the primary-secondary care interface. 21 22 The study population included 2166 patients referred from primary care and subsequently triaged to musculoskeletal and back pain interface clinics in Stoke-on-Trent Primary Care Trust (PCT) over a 12-month period. Primary outcome data at 6-month follow-up (physical health measured using the SF-36v2 physical component score) and the modified STarT Back Tool score at baseline were available for 1082 patients (50%), who formed the study population for this evaluation. All adults (aged ≥18 years) capable of giving written informed consent were eligible to participate in the study. Patients completed study questionnaires before their first appointment, during which consent was obtained, and 6 months after that initial clinic appointment. Details of the SAMBA study sample have been published. 22

Modifying the STarT Back Tool
The original STarT Back Tool includes nine items of which five concern psychosocial factors (fear, catastrophising, anxiety, depression and bothersomeness). The PhysioDirect trial and SAMBA study included the STarT Back Tool's psychosocial items within their baseline questionnaires. 1 These items were used without modification as they were developed from generic tools and are not specific to low back pain. However, the four further items of the original STarT Back Tool that capture three physical factors (referred pain from the back down the leg, comorbid pain in the neck and shoulder, and physical function with walking and dressing items) are specific to low back pain and therefore these items in their original form needed to be replaced by similar items that were applicable to all musculoskeletal patients. We therefore used proxy items for these outcome domains that were available in both data sets. The STarT Back Tool's two 'function' items (walking and dressing) were replaced by items from the generic EQ-5D 23 ('I have some problems in walking about', Y/N and 'I have some problems washing or dressing myself', Y/N), and we used item 7 from the SF-12 24 ('How much bodily pain have you had?' with positive responses defined as 'extremely' or 'very severe') instead of the original STarT Back Tool item for comorbid pain in the neck or shoulder.
It was not possible to replace 'referred pain from the back down the leg' with an item that was suitable for all musculoskeletal pain, and so this construct of the 'spread of pain' was omitted from the modified tool. To score the modified STarT Back Tool, responses from these eight items were summed (range 0-8) for all patients in both data sets. The original STarT Back Tool cut-off of 0-3 positive items was used to classify patients as at low risk and 4 or more as at medium or high risk, as sketched below. There were no reference standards for psychological distress in either the PhysioDirect or SAMBA data sets, and so in this analysis we did not seek to examine the ability of the modified STarT Back Tool to identify a high-risk-only group. We believe that there is a strong clinical rationale for identifying musculoskeletal cases that are 'at risk' of a poor prognosis, which reflects the combined medium-risk and high-risk subgroup. In our previous IMPaCT Back study 2 implementing risk stratification in general practice, the clinicians used a 6-item STarT Back Tool which only discriminated between low-risk and a combined medium-risk/high-risk group to decide which patients to refer, or not, to physiotherapy. In that study, the physiotherapists who received 'at risk' patients then used the full 9-item STarT Back Tool to discriminate the distressed patients who needed a psychologically informed physiotherapy treatment approach.
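To make the scoring above concrete, here is a minimal Python sketch of how the modified 8-item tool is scored as described: eight binary (positive/negative) items are summed to give a 0-8 score, and the original cut-off classifies 0-3 as low risk and 4 or more as medium/high risk. The item representation is illustrative shorthand, not the questionnaire wording.

def modified_tool_risk(item_responses):
    """item_responses: 8 booleans (True = positive response)."""
    assert len(item_responses) == 8, "modified tool uses 8 binary items"
    score = sum(bool(r) for r in item_responses)
    risk = "low" if score <= 3 else "medium/high"
    return score, risk

# Example: five positive psychosocial items plus one positive EQ-5D
# function item gives a score of 6, i.e. medium/high risk.
print(modified_tool_risk([True, True, True, True, True, True, False, False]))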
Defining the body regions of pain
Participants were asked to indicate the primary site of their musculoskeletal pain for which they had sought treatment. From this information, patients were categorised as having one of the following regional pain problems: neck, back (thoracic or lumbar), upper limb, lower limb or multisite pain (pain in more than one region).
Defining physical health outcome
The standardised summary score for the Physical Component Score (PCS) of the Short Form 36 (SF-36) Health Survey is population normalised (0 is worst physical health and 100 is best physical health) and was classified by tertiles (≤33, 34-66, >66), as has been used previously, 25 26 with a 6-month poor outcome defined using the most severe tertile (≤33). Outcome was defined as poor physical health at 6-month follow-up using the SF-36 PCS because this was the most appropriate physical function outcome score available in both studies, and it has demonstrated good validity and responsiveness in this population. [27][28][29]

Statistical analysis
All analyses were conducted separately for the two data sets and a descriptive comparison of the modified baseline STarT Back Tool scores (mean and SD) and the proportion with poor 6-month physical health outcome (SF-36 PCS ≤33) was calculated. Descriptive statistics using means and SDs were used to examine the distribution of the modified STarT Back Tool score and investigate potential floor or ceiling effects (>10% at either the lowest or the maximum score). 30 Predictive performance (discrimination) was assessed by calculating receiver operating characteristic (ROC) curve areas under the curve (AUCs) for baseline modified STarT Back Tool total scores against 6-month poor physical health outcome (dichotomised as poor/good) for each of the five different bodily pain presentations, and their equality was compared using STATA's 'roccomp' command to establish whether AUC differences were statistically significant.
To examine whether the optimal subgroup cut-point on the modified STarT Back Tool total score to discriminate low from medium/high risk for poor 6-month physical health outcome was consistent across the five different pain regions and across the two data sets, we used two methods based on the sensitivity and specificity of each potential cut-point. First, we used Youden's J statistic, which is calculated as sensitivity+specificity−1 for each potential cut-point; the optimal cut-point is the tool score with the highest value. 31 32 Second, we a priori agreed that specificity should not fall below 0.7, as lower values would mean that more than 30% of patients with a good outcome would be classified as medium/high risk (false positives) and potentially overtreated, which was considered an unacceptable level for an efficient matched treatment approach.
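Both selection rules can be expressed in a few lines. The following Python sketch, under the assumption that 'at risk' means a tool score at or above the candidate cut-point, computes sensitivity and specificity against the dichotomised 6-month outcome for every possible cut-point, picks the Youden-optimal one, and separately the best cut-point whose specificity stays at or above 0.7. It is illustrative only and is not the STATA code used in the study.

import numpy as np

def cutpoint_search(scores, poor_outcome, min_spec=0.7):
    """scores: tool scores; poor_outcome: booleans (True = poor outcome)."""
    scores = np.asarray(scores)
    poor = np.asarray(poor_outcome, dtype=bool)
    rows = []
    for c in np.unique(scores):
        at_risk = scores >= c
        sens = (at_risk & poor).sum() / poor.sum()
        spec = (~at_risk & ~poor).sum() / (~poor).sum()
        rows.append((c, sens, spec, sens + spec - 1.0))  # last entry: Youden's J
    youden_best = max(rows, key=lambda r: r[3])
    eligible = [r for r in rows if r[2] >= min_spec]     # specificity floor
    constrained_best = max(eligible, key=lambda r: r[3]) if eligible else None
    return youden_best, constrained_best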
In this study, we were not able to identify optimal subgroup cut-points on the modified STarT Back Tool to distinguish between medium-risk and high-risk patients as there were no reference standards for psychological distress in the two available data sets. The original STarT Back Tool used these reference standards to identify distress 'caseness' at baseline, and identified the optimal cut-point to screen for these distressed 'cases' using a psychological subscale score. Without these reference standards for psychological distress, we were limited to determining optimal subgroup cut-points on the total scale score between low and medium/high risk alone.
Distribution of the modified STarT Back Tool scores in both data sets
In the PhysioDirect trial sample (n=1887), the 8-item modified STarT Back Tool score at baseline was normally distributed with a mean (SD) of 3.35 (2.09); 8.4% had the lowest score (0) and 2.2% had the maximum score (8). The distribution of primary pain regions was reported by clinicians as: lower limb 31.1%, back 28.7%, upper limb 23.5%, neck 11.8% and multisite pain 4.8%. The 6-month SF-36 PCS mean (SD) was 43.7 (10.9) with 18.5% having a 'poor outcome' in their physical health at 6-month follow-up. The mean age was 48 years old and 60% were female.
In the SAMBA study sample (n=1082), the 8-item modified STarT Back Tool score at baseline was not normally distributed but had roughly equal numbers of all possible scores with a mean (SD) of 3.95 (2.65); 12.6% had the lowest score (0) and 10.9% had the maximum score (8). The distribution of primary pain sites was reported by patients as: lower limb 30.8%, back 26.7%, upper limb 23.8%, multisite pain 13.4% and neck 5.4%. The 6-month SF-36 PCS mean (SD) was 38.41 (12.76) with 28.2% having a 'poor outcome' in their physical health at 6-month follow-up. The mean age was 51 years old and 57% were female.
Predictive performance of the modified STarT Back Tool score across pain regions in both data sets Predictive performance of the modified STarT Back Tool as determined by ROC curve AUCs ranged from 0.72 to 0.83 and was not found to be statistically different across different pain regions in the PhysioDirect trial (p=0.098) and SAMBA study (p=0.130) (presented in figures 1 and 2).
Optimal modified STarT Back Tool score cut-offs in both data sets
Table 1 reports sensitivity, specificity and Youden's J statistic for each possible modified STarT Back Tool score cut-point at baseline for each pain region. The results demonstrate that the optimal STarT Back Tool baseline score cut-point for discriminating 'poor outcome' at 6-month follow-up was not consistent across pain regions. For example, among (PhysioDirect) patients with neck, back and multisite pain, the optimal STarT Back Tool cut-point for discriminating 'poor outcome' was 5, whereas this was 4 for those with upper limb and lower limb as their primary pain site.
DISCUSSION
This is the first study to demonstrate that a modified STarT Back Tool is similarly predictive of 6-month physical health (defined by the worst tertile of the SF-36) across different musculoskeletal pain regions. Predictive performance determined by AUCs for the 8-item modified STarT Back Tool total score was in fact slightly higher for neck, upper limb, lower limb and multisite pain than for back pain, although the differences were not statistically significant. The results therefore demonstrate that the prognostic factors included within the STarT Back Tool are predictive of 6-month physical health across a range of musculoskeletal pain regions, not just back pain. However, the results demonstrated that the optimal baseline STarT Back Tool score cut-point for identifying individuals with poor physical health outcome was consistent neither across different pain regions nor across clinical services (community physiotherapy services in the PhysioDirect trial and primary-secondary care interface services in the SAMBA study). This finding was consistent regardless of the method used to determine the optimal modified STarT Back Tool score cut-point (Youden's J statistic or an a priori defined maximum false-positive rate of 30%). This implies that the existing original STarT Back Tool score cut-point (4 or more out of 9) used to allocate patients with low back pain to the medium-risk/high-risk subgroups cannot simply be applied to patients with other musculoskeletal pain presentations or in different clinical services. This is likely to be due to differences in patient characteristics across services such as episode duration, which is known to influence the performance of the original STarT Back Tool. 33 It is also likely that individual modified STarT Back Tool items are not equally applicable to patients with pain in the five regions. 34 For example, the item about walking difficulties is likely to be less relevant, and therefore less predictive of physical health outcome, for patients with upper limb pain than for those with lower limb or spinal pain. A key message from this study is the value and importance of testing the capabilities of the STarT Back Tool in different settings and patient populations and not presuming that existing primary care subgroup cut-points will be the same in other groups. If wider validity is demonstrated, this will help strengthen the case for the general applicability of the tool.
The findings of this study concur with previous evidence suggesting that the same set of prognostic variables can be used to estimate the prognosis of patients with different musculoskeletal pain presentations. 7 15 17 The STarT Back Tool uses biopsychosocial constructs known to predict persistent disability among patients with low back pain, such as difficulty with walking and dressing, pain elsewhere, fear avoidance, pain catastrophising, anxiety and low mood. 1 However, the STarT Back Tool is not just a prognostic index, but is used to stratify patients for different matched treatments. An important issue highlighted by this analysis is that if clinicians simply modify the STarT Back Tool for use with other musculoskeletal pain patients, they are at risk of matching patients to inappropriate treatments. It is also apparent that future translation and validation studies of the STarT Back Tool should not adopt the same STarT Back Tool score cut-points as used in the original UK STarT Back Tool study 1 without first testing whether these cut-points are appropriate for their own clinical populations. Based on these findings, our team has begun to further refine and validate an improved stratification tool, the Keele STarT MSK Tool, which will be specifically designed for use with primary care patients consulting with the five most common musculoskeletal pain presentations in a new programme of research. While our study was not able to examine optimal high-risk subgroup cut-offs for 'distressed' patients, a previous cross-sectional study 34 in a US physical therapy population has compared the relationships between a modified STarT Back Tool and psychological measures in people with different pain regions. It found that, regardless of the body region of pain, higher modified STarT Back Tool scores were associated with higher levels of kinesiophobia, catastrophising, fear avoidance, anxiety and depressive symptoms. The strengths of our analyses reported here include the large sample sizes of the PhysioDirect and SAMBA studies and the opportunity to examine optimal cut-points in patients with different pain sites and in different NHS musculoskeletal services. An additional strength was that both studies used the same measure of physical health (SF-36), had the same 6-month follow-up time-point and included patients whose pain could be classified into the same musculoskeletal pain regions. Given the potential weakness of using Youden's J statistic to define optimal cut-points for discriminating between low and medium/high risk, we also used a clinically determined guide (maximum false-positive rate), which showed similar inconsistencies in the optimal cut-off between regional pain sites and clinical settings. One weakness is that the original STarT Back Tool was not included in these two data sets, which meant a direct comparison between the performance of the original and modified versions for patients with low back pain was not possible. The choice of poor physical health outcome at 6 months using the lowest tertile of the SF-36 was also relatively arbitrary, but served the purpose of this analysis to compare outcome between different regional pain sites, making the exact definition of poor outcome less critical to the study aims. It should be noted that the different levels of poor clinical outcome between the PhysioDirect (18.5%) and SAMBA (28.2%) studies could be due to the different settings and designs of these two studies, and it is possible this may have influenced the findings.
The implications from this analysis are that, despite the good predictive performance of the modified STarT Back Tool in patients with pain in different regions of the body, clinicians need to carefully consider the choice of cut-points when using a modified STarT Back Tool for musculoskeletal pain regions other than low back pain. The results suggest that existing cut-points may lead to inefficiency in healthcare resource use, with too many patients with likely long-term disability being missed, or too many patients with a good physical health outcome being inaccurately classified as 'at risk', which may result in overtreatment of low-risk groups.
CONCLUSIONS
A modified version of the STarT Back Tool has similar predictive performance when used for patients with musculoskeletal pain in different body regions. However, the cut-points used to identify patients with a poor physical health outcome at 6-month follow-up are not consistent across pain regions or clinical services. Further research is underway to refine and validate a new Keele STarT MSK Tool which will form part of a new stratified care approach to be tested in a randomised controlled trial. | 2018-04-03T01:06:46.011Z | 2016-10-01T00:00:00.000 | {
"year": 2016,
"sha1": "34449b51a0ef7d2897b0a54c5e8e3e9c38902d33",
"oa_license": "CCBY",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/6/10/e012445.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "34449b51a0ef7d2897b0a54c5e8e3e9c38902d33",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
62899733 | pes2o/s2orc | v3-fos-license | Discrete dislocation dynamics simulations of dislocation-$\theta'$ precipitate interaction in Al-Cu alloys
The mechanisms of dislocation/precipitate interaction were studied by means of discrete dislocation dynamics within a multiscale approach. Simulations were carried out using the discrete continuous method in combination with a fast Fourier transform solver to compute the mechanical fields. The original simulation strategy was modified to include straight dislocation segments by means of the field dislocation mechanics method and was applied to simulate the interaction of an edge dislocation with a $\theta'$ precipitate in an Al-Cu alloy. It was found that the elastic mismatch has a negligible influence on the dislocation/precipitate interaction in the Al-Cu system. Moreover, the influence of the precipitate aspect ratio and orientation was reasonably well captured by the simple Orowan model in the absence of the stress-free transformation strain. Nevertheless, the introduction of the stress-free transformation strain led to dramatic changes in the dislocation/precipitate interaction and in the critical resolved shear stress to overcome the precipitate, particularly in the case of precipitates with small aspect ratio. The new multiscale approach to study the dislocation/precipitate interactions opens the possibility to obtain quantitative estimations of the strengthening provided by precipitates in metallic alloys taking into account the microstructural details.
Introduction
Plastic deformation in metallic alloys is carried by dislocation slip, and strengthening is achieved with obstacles that hinder the motion of dislocations. These obstacles can take the form of dislocations generated during deformation, solute atoms in the lattice, grain boundaries, etc., but precipitation hardening is well established as the most effective mechanism to increase the yield strength of metallic alloys (Ardell, 1985). Obviously, the strengthening provided by the dispersion of second phases depends on the chemical composition, size, shape, orientation, spatial distribution, etc. of the precipitates, which have been optimized over the years through costly, experimental trial-and-error approaches. Nevertheless, these strategies are being overcome by recent advances in multiscale modelling approaches based on the coupling of ab initio and atomistic simulations with computational thermodynamics and phase-field models that allow an accurate prediction of the precipitate features as a function of the alloy chemical composition and thermo-mechanical treatment (Liu et al., 2013, 2017; Ji et al., 2014).
In addition to these tools, the design of metallic alloys in which precipitate strengthening has been maximized requires the development of multiscale modelling strategies that are able to account for the mechanisms of dislocation/precipitate interaction. In the case of very small precipitates (< 10 nm), the analyses can be based on atomistic simulations, which can establish whether the precipitates are sheared or by-passed by the dislocations, and determine the associated energy barrier (Singh and Warner, 2010; Bonny et al., 2011; Saroukhani et al., 2016). For computational reasons, this becomes impossible for larger precipitates (> 100 nm), which are overcome by the formation of dislocation loops. The analysis of this process was pioneered by Orowan using a constant line tension model, which computed the critical resolved shear stress (CRSS) necessary to overcome a periodic square array of spherical precipitates impenetrable to dislocations (Orowan, 1948). Later, Bacon et al. (1973) included the effect of the interaction stresses between the dislocation segments, while other authors expanded the results of Orowan to deal with random distributions of obstacles (Foreman and Makin, 1966; Kocks, 1966).
While these approaches can provide qualitative trends, they cannot be quantitative because the precipitate geometry and orientation as well as the details of the dislocation/precipitate interaction are not taken into account, and numerical approaches have therefore been used in recent years. Xiang et al. (2004); Xiang and Srolovitz (2006) used a level-set representation of the dislocation line to simulate the interaction of both edge and screw dislocations with spherical precipitates. Matrix and precipitate were elastic and isotropic with the same elastic constants, and it was assumed that the precipitates could or could not be sheared by dislocations, the latter by including a strong repulsive force on the dislocation within the precipitate. Moreover, the effect of a misfit dilatational strain between the matrix and the precipitate was included. The simulations revealed the richness and complexity of the dislocation/precipitate interactions and postulated new by-pass mechanisms. However, it should be noted that these simulations did not take into account the crystallography of slip, leading to limitations in the precise modelling of the dislocation mobility (cross-slip, climb).
The influence of crystallographic slip was taken into account by Monnet (2006) and Monnet et al. (2011), who used discrete dislocation dynamics (DDD) simulations to obtain the CRSS necessary to overcome spherical precipitates, which were made impenetrable to the dislocation by adding a friction stress within the precipitate. Further DDD simulations included the effect of the image stresses induced by the elastic modulus mismatch between the matrix and the spherical precipitate on the CRSS (Takahashi and Ghoniem, 2008; Takahashi and Terada, 2011; Shin et al., 2003) using the superposition technique developed by Van der Giessen and Needleman (1995). Takahashi and Ghoniem (2008) reported the influence of the shear modulus mismatch on the CRSS when the spherical precipitates were sheared, while Shin et al. (2003) addressed the case of the formation of dislocation loops. The influence of the image stresses was higher in the former case, but increased in the latter as several dislocation loops accumulated around the precipitate, increasing the hardening rate. However, the superposition method requires solving the elastic boundary value problem (using either the finite element or the boundary element method) in each time step of the DDD simulation to obtain the image stresses. Further DDD simulations to study dislocation/precipitate interactions (many of them focused on the case of shearable γ′ precipitates in Ni-based superalloys) have ignored the modulus mismatch (Yashiro et al., 2006; Vattré et al., 2009; Yang et al., 2013; Hafez Haghighat et al., 2013; Huang et al., 2012; Zlek et al., 2017; Monnet, 2015). Only more recently, Gao et al. (2015) carried out DDD simulations of dislocation/precipitate interaction that took into account the effect of the modulus misfit as well as of the misfit stresses arising from the lattice mismatch between the γ and γ′ phases in Ni-based superalloys. The Fast Fourier Transform (FFT) method, much faster than the traditional methods, was used in this case to compute the image stresses.
While these results have improved our understanding of precipitation strengthening, they often ignore the actual details of the precipitate shape and orientation, of the dislocation mobility, as well as of the complex stress field around the precipitates (misfit and stress-free transformation strains). More recent analyses are trying to overcome these limitations by obtaining this critical information from simulations at lower length scales. For instance, Lehtinen et al. (2016) carried out DDD simulations of dislocation/precipitate interaction in BCC Fe in which the parameters of the simulation (dislocation mobility, shear modulus, dislocation core energy) were obtained from atomistic simulations. The precipitates were spherical and the dislocation-precipitate interaction was modelled by a Gaussian potential that was calibrated from atomistic simulations.
This investigation presents a comprehensive multiscale modelling strategy based on DDD to study the mechanisms of dislocation/precipitate interaction. The methodology is applied to Al-Cu alloys but it is general and can be extended to any other metallic alloy. The details of the θ′ (Al2Cu) precipitates (size, shape and orientation) as well as the stress-free transformation strains around the precipitate were obtained in a previous investigation by the coupling of ab initio and atomistic simulations with computational thermodynamics and phase-field models (Liu et al., 2017) and were in good agreement with the experimental data. In addition, the elastic constants and the dislocation mobility laws were obtained from atomistic simulations, and this information was used to determine the actual mechanisms of dislocation/precipitate interaction in this system by means of DDD simulations in which all the relevant physical processes were accounted for. In particular, the influence of the precipitate shape, orientation, modulus mismatch and stress-free transformation strain on the dislocation/precipitate interaction mechanisms was analyzed, and their influence on the CRSS was determined and compared with the predictions of the classical models for dislocation/precipitate interaction.
The paper is organized as follows: the characteristics of the dislocations in the Al matrix and of the θ′ precipitates (obtained using different simulation strategies) are presented in section 2, while the DDD simulation strategy is detailed in section 3. The results of the DDD simulations are shown and discussed in section 4, while the conclusions are drawn in section 5.
Material system
θ′ precipitates are the key strengthening phase in Al-Cu alloys aged at high temperature (Polmear, 1995). θ′ is a stoichiometric phase with chemical composition Al2Cu and tetragonal structure (space group I4/mmm, a_θ′ = 0.404 nm, c_θ′ = 0.580 nm). The unit cells of the α-Al matrix (space group Fm3m, a_α = 0.404 nm) and of θ′ are shown in Figs. 1 a) and b), respectively. Previous studies of the transformation path of the θ′ precipitate from the α-Al lattice have shown three successive steps, which are shown in Fig. 2 (Dahmen and Westmacott, 1983; Nie and Muddle, 1999; Nie, 2014). The Al atoms in layers 2 and 3 of the α-Al lattice are first shifted in opposite directions by a distance a_α/6. This step is followed by a homogeneous shear deformation of the cell by an angle arctan(1/3) and, finally, by the shuffling of one Cu atom to the center of the cell and diffusion of the other Cu atom into the Al matrix. According to this transformation path, the lattice correspondence between the parent phase (α-Al) and the θ′ precipitate is given by [013]_α → [001]_θ′ and [010]_α → [010]_θ′, and the transformation matrix, T, that relates the lattice vectors in α-Al, e_α, and in the θ′ precipitate, e_θ′ (T e_α = e_θ′), is given in Gao et al. (2012) and Liu et al. (2017). The transformation matrix includes both strains and rigid-body rotations, and the corresponding stress-free transformation strain (SFTS), ε0, can be computed as

ε0 = (1/2)(T^T T − I),

where I stands for the identity matrix.

Figure 1: Unit cells of a) α-Al and b) θ′ (Nie, 2014). Red and blue spheres stand for Al and Cu atoms, respectively. From Liu et al. (2017).

Figure 2: Transformation path from α-Al to θ′ precipitates during high-temperature ageing, in three steps (Nie, 2014). Red and blue spheres stand for Al and Cu atoms, respectively. From Liu et al. (2017).

The nucleation and growth of θ′ precipitates during high-temperature ageing has recently been modelled in 3D by means of a multiscale phase-field approach that takes into account the contributions of the chemical free energy, the interface energy and the elastic energy due to the SFTS (Liu et al., 2017). The chemical free energy was given by computational thermodynamics results, while the interface energy and the lattice parameters of both phases (which determine the elastic energy associated with the SFTS) were obtained from density functional theory simulations. The computed lattice parameters were a_α = 0.405 nm, a_θ′ = 0.408 nm, c_θ′ = 0.5701 nm, very close to the experimental data reported above (Nie, 2014), and it was assumed that a_α ≈ a_θ′ to compute the SFTS. The multiscale simulation predicted that the θ′ precipitates grew with an orientation relationship (001)_α ∥ (001)_θ′, [100]_α ∥ [100]_θ′. The precipitates were circular disks and the broad face of the disk was coherent with the Al matrix and parallel to either the (100), (010) or (001) planes of the FCC Al lattice, leading to three different orientation variants, while the edges of the circular plates were semi-coherent. Four different deformation variants were possible for each orientation variant of the precipitate due to the four-fold symmetry of the (100) planes in the FCC lattice, leading to a total of 12 deformation variants, in agreement with the experimental observations (Dahmen and Westmacott, 1983; Nie, 2014), which were characterized by their corresponding SFTS matrices, which can be found in Liu et al. (2017). The simulations predicted a precipitate diameter in the range 120-180 nm and a thickness of 4-8 nm, with an average aspect ratio of ≈ 26. These results were in close agreement with experimental data in the literature for peak-aged Al-4 wt.% Cu alloys (Liu et al., 2017; Zhu et al., 2000; Biswas et al., 2011).
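As a numerical illustration of the SFTS expression above, the following Python sketch evaluates ε0 = (1/2)(T^T T − I) for a transformation matrix T. The matrix entries below are placeholders (the identity, giving zero transformation strain); the actual entries for each θ′ variant are given in Gao et al. (2012) and Liu et al. (2017) and are not reproduced here.

import numpy as np

T = np.eye(3)  # placeholder: substitute the published transformation matrix

# Green-Lagrange form: symmetric by construction, so the rigid-body
# rotation contained in T does not contribute to the SFTS.
eps0 = 0.5 * (T.T @ T - np.eye(3))
print(eps0)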
The elastic constants of the α-Al matrix and of the θ′ precipitates were determined using Density Functional Theory (DFT) with the Quantum Espresso plane-wave pseudopotential code (Giannozzi et al., 2009). The exchange-correlation energy was evaluated with the help of the Perdew-Burke-Ernzerhof approach (Perdew et al., 1996) within the generalized gradient approximation. Ultrasoft pseudopotentials were used to reduce the basis set of plane wavefunctions used to describe the real electronic functions (Vanderbilt, 1990). After careful convergence tests, a cutoff of 37 Ry was found to be sufficient to reduce the error in the total energy below 1 meV/atom. A k-point grid separation of 0.03 Å^-1 was employed for the integration over the Brillouin zone according to the Monkhorst-Pack scheme (Monkhorst and Pack, 1976).
The elastic constants were obtained by applying a given strain to the unit cell in the ground state and calculating the corresponding stress after the atom coordinates in the unit cell were relaxed. Taking into account the crystal symmetries, the cubic α-Al unit cell was deformed in the direction normal to the cube face and in shear to compute the three independent elastic constants. The BCT cell of the θ′ precipitate was deformed along two normal directions perpendicular to two faces of the tetragonal lattice and in two shear directions to compute the six independent elastic constants. Six strain levels (varying from -0.003 to 0.003) were used for each deformation pattern to obtain a reliable linear fit of the stress-strain relationship. The elastic constants of α-Al and of the θ′ precipitates obtained by DFT are given in Tables 1 and 2, respectively. The ones for α-Al were very close to the experimental data in the literature (Vallin et al., 1964; Sinko and Smirnov, 2002). To the best of the authors' knowledge, no experimental data are available for θ′.

Table 1: Elastic constants (in GPa) of α-Al at 0 K obtained from DFT. The experimental values extrapolated to 0 K (Sinko and Smirnov, 2002) are included for comparison.
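The linear fit described above amounts to a one-parameter least-squares problem per elastic constant. A minimal Python sketch, with made-up stress values standing in for the actual DFT output, is:

import numpy as np

strain = np.array([-0.003, -0.002, -0.001, 0.001, 0.002, 0.003])
stress = 114.0 * strain  # placeholder sigma_11 values in GPa, not DFT data

# The elastic constant is the slope of the stress-strain line.
C11, intercept = np.polyfit(strain, stress, 1)
print(f"C11 = {C11:.1f} GPa")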
Discrete Dislocation Dynamics Strategy
The dislocation/precipitate interaction is analyzed by means of DDD simulations following the discrete continuous model (DCM) developed by Lemarchand et al. (2001). In this approximation, the dislocations are treated as plate-like inclusions with an eigenstrain that corresponds to the plastic strain associated with the area sheared by the dislocation. The dislocation loops are discretized in segments which move depending on the stresses acting on the segments and the mobility rules, and the plastic strain is computed from the dislocation glide. In the original model, the DDD code was coupled with a finite element code that computed the displacement field that is the solution of the boundary value problem, taking into account the plastic strain provided by the DDD simulations (Lemarchand et al. (2001)). This framework neither requires analytical expressions for the displacement fields of the dislocation segments (and, thus, can be easily extended to anisotropic materials), nor does its computational cost increase with the square of the number of dislocation segments. However, computational efficiency is limited by the fine finite element discretizations necessary to achieve accurate results, particularly in the case of precipitates with very large aspect ratio. These limitations were overcome recently by Bertin et al. (2015), who used the Fast Fourier Transform (FFT) method to compute the mechanical fields and solve the boundary value problem for periodic cases. Moreover, the heterogeneous stress distributions that appear due to the elastic modulus mismatch between the matrix and the precipitate and the stresses induced by the SFTS can be easily incorporated into the simulations.
Dislocations are discretized into segments. The dislocation mobility follows a linear viscous law, where the velocity of node i of the dislocation line, v_i, is given by

v_i = F_i / B,

where B is the viscous drag coefficient, which depends on the dislocation character, and F_i is the force acting on node i, given by

F_i = Σ_j f_ij,

where f_ij is the force acting on the segment ij, which is computed according to

f_ij = ∫_ij N_i(x) f^pk_ij(x) dl,

where N_i is the interpolation function associated with node i and f^pk_ij is the Peach-Koehler force, given by

f^pk_ij = (σ · b_ij) × t_ij,

where σ is the local stress tensor, b_ij is the Burgers vector of the segment ij and t_ij the unit vector parallel to the dislocation line.
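A minimal sketch of the mobility law and the Peach-Koehler force defined above follows; the assembly of the nodal force with the interpolation functions N_i over the adjacent segments is omitted, and the numerical values (including the drag coefficient) are purely illustrative:

```python
import numpy as np

def node_velocity(F_i, B):
    """Viscous mobility law v_i = F_i / B for a dislocation node."""
    return np.asarray(F_i) / B

def peach_koehler(sigma, b, t):
    """Peach-Koehler force per unit length, f = (sigma . b) x t."""
    return np.cross(np.asarray(sigma) @ np.asarray(b), np.asarray(t))

# Example: edge dislocation line along z under pure shear sigma_xy.
sigma = np.array([[0.0, 5e6, 0.0],
                  [5e6, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])      # stress tensor (Pa)
b = np.array([0.2863e-9, 0.0, 0.0])      # Burgers vector (m)
t = np.array([0.0, 0.0, 1.0])            # line direction
f = peach_koehler(sigma, b, t)           # ~[1.43e-3, 0, 0] N/m
print(f, node_velocity(f, B=2.0e-5))     # drag coefficient assumed (Pa s)
```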
Calculation of the stress fields
The stress field within the simulation domain is computed using the FFT algorithm. The mechanical state of the system is determined by solving the mechanical equilibrium equations in the domain V with periodic boundary conditions,

div(σ(x)) = 0 in V, with σ(x) = C(x) : (ε(x) − ε^p(x) − δ(x) ε^0) and ⟨ε(x)⟩_V = E,

where C denotes the fourth-order elasticity tensor, ε the total strain, ε^p the plastic strain, ε^0 the SFTS, ∂V stands for the boundary of domain V with normal n (on which the strain field is periodic and the traction σ·n anti-periodic), and E is the imposed macroscopic strain. The SFTS is a homogeneous eigenstrain within the precipitate (that only depends on the precipitate variant) and, thus, δ(x) is an indicator function that is equal to 1 when x is within the precipitate and 0 otherwise. This discontinuous strain field may lead to Gibbs oscillations when using an FFT solver. They were attenuated by the use of discrete gradient operators in Fourier space (in this particular case, the rotated discrete gradient operator proposed by Willot (2015)). The fluctuations in the simulations were not significant, as shown in the stress fields below.
The plastic strain ε^p(x) is computed directly from dislocation motion in the DCM (Bertin et al., 2015), while ε^0 is given in Section 2. The polarization scheme proposed by Moulinec and Suquet (1998) is used to solve the mechanical equilibrium problem in each step of the DDD simulation. This is achieved through the introduction of a reference medium with stiffness C^0, so that the total stress can be written as σ(x) = C^0 : ε(x) + τ(x), where τ(x) is the polarization tensor, given by τ(x) = δC(x) : ε(x) − C(x) : (ε^p(x) + δ(x) ε^0), with δC(x) = C(x) − C^0. The SFTS tensor ε^0(x) takes the value corresponding to the precipitate variant within the precipitate and is equal to 0 outside of the precipitate (Bertin and Capolungo, 2018). From the expression for the total stress in (8) and the mechanical equilibrium condition (7), the mechanical fields can be obtained using the FFT algorithms detailed in Bertin et al. (2015) and Bertin and Capolungo (2018).
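The fixed-point structure of the Moulinec-Suquet scheme can be illustrated with a scalar (conductivity or anti-plane) analog of the tensorial elasticity problem solved in the paper. Everything below (grid size, contrast, fields) is an assumption of this illustration, not the actual solver:

```python
import numpy as np

N = 128
k = np.ones((N, N))                  # matrix "stiffness"
k[48:80, 60:68] = 5.0                # stiffer plate-like inclusion
k0 = 0.5 * (k.min() + k.max())       # reference medium (basic-scheme choice)
G = np.array([1.0, 0.0])             # imposed macroscopic gradient (role of E)

xi = np.stack(np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij"))
xi2 = (xi ** 2).sum(axis=0)
xi2[0, 0] = 1.0                      # avoid division by zero at the origin

g = np.broadcast_to(G[:, None, None], (2, N, N)).copy()  # initial gradient
for it in range(500):
    tau = (k - k0) * g               # polarization field in real space
    tau_h = np.fft.fft2(tau, axes=(1, 2))
    # Green operator of the reference medium: Gamma0_ij = xi_i xi_j/(k0 |xi|^2)
    proj = (xi * tau_h).sum(axis=0) / (k0 * xi2)
    g_new_h = -xi * proj
    g_new_h[:, 0, 0] = G * N * N     # enforce the imposed mean gradient
    g_new = np.real(np.fft.ifft2(g_new_h, axes=(1, 2)))
    if np.abs(g_new - g).max() < 1e-8:
        g = g_new
        break
    g = g_new

print(f"converged after {it} iterations; mean flux = {(k * g)[0].mean():.4f}")
```

At each iteration the polarization is evaluated in real space and the Green operator of the reference medium is applied in Fourier space, which is exactly the alternation exploited by the DDD-FFT framework.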
Introduction of single dislocation lines
Only dislocation loops can initially be introduced in the DCM, but this configuration is not appropriate to analyze the interaction of a single dislocation line with the precipitate. This limitation was overcome in the cubic domain (with periodic boundary conditions) by introducing within the domain a rectangular prismatic loop parallel to one of the cube faces (Fig. 3). Two opposite sides of the prismatic loop (shown as discontinuous lines in the figure) were moved in opposite directions until they reached the boundaries of the domain and annihilated each other, because they have opposite line directions, leaving two straight dislocations that form a dipole within the domain. One of the dislocations was fixed during the simulation (yellow line in the figure) while the other was free to move following the mobility rules established in the following section. The Field Dislocation Mechanics (FDM) method was then used to cancel the stress field created in the domain by the fixed dislocation, following the procedure reported in Berbenni et al. (2014); Brenner et al. (2014); Djaka et al. (2017).

Figure 3: Introduction of a prismatic dislocation loop parallel to one of the cube faces. Two opposite sides of the prismatic loop (shown as discontinuous lines in the figure) were moved in opposite directions until they reached the boundaries of the domain and annihilated each other, leaving two straight dislocations that form a dipole within the domain. One of the dislocations (yellow line) was fixed during the simulation while the other was free to move in the slip plane (shaded) and interacted with the precipitate.
The FDM method involves the Stokes-Helmholtz decomposition of the elastic distortion into incompatible and compatible parts. The existence of a non-zero dislocation density in the material is accounted for by the incompatible part, while the compatible part ensures that the boundary conditions and the stress equilibrium conditions are fulfilled (Acharya, 2001). The incompatible elastic distortion is included in the FDM method through Nye's dislocation tensor α (Nye, 1953), which is defined as α_ij = b_i t_j, where b_i is the net Burgers vector in the direction e_i per unit surface S and t_j the dislocation line direction along e_j.
The incompatibility equation and the conservation law are expressed, respectively, by curl(U^e) = α and div(α) = 0, where U^e is the elastic distortion tensor. Applying the Stokes-Helmholtz decomposition to the elastic distortion tensor, U^e = U^{e,⊥} + U^{e,∥}, where U^{e,⊥} and U^{e,∥} stand for the incompatible and compatible parts of the elastic distortion, respectively. Taking into account that div(U^{e,⊥}) = 0 and applying again the operator curl to expression (13), after some manipulation, it yields a Poisson-type equation div(grad(U^{e,⊥})) = −curl(α), which can be expressed in component form as U^{e,⊥}_{ij,kk} = −ϵ_{jkl} α_{il,k}, with ϵ_{jkl} the permutation symbol. The Poisson equation (16) can be solved in Fourier space. To this end, the dislocation density α(x) is transformed into Fourier space, α̂(ξ), and the incompatible elastic distortion is obtained in Fourier space as Û^{e,⊥}_{ij}(ξ) = i ϵ_{jkl} ξ_k α̂_{il}(ξ)/|ξ|² for ξ ≠ 0. Once Û^{e,⊥}(ξ) is known, the inverse Fourier transform is computed to obtain the incompatible elastic distortion in real space. Finally, the incompatible elastic strain ε^{e,⊥} is the symmetric part of U^{e,⊥}. The incompatible elastic strain is introduced as a plastic strain in the Lippmann-Schwinger equation, which is solved using the FFT algorithm following the same procedure used in Section 3.1.
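The Fourier-space solution above can be sketched as follows for a prescribed Nye tensor field. The voxel values assigned to the dislocation line (a magnitude of b/l² on α₁₃) and the 2π conventions are assumptions of this illustration:

```python
import numpy as np

N, l = 32, 1.0e-9          # grid size and voxel size (assumptions)
b = 0.2863e-9              # Burgers vector magnitude (m)

# Nye tensor field: edge dislocation line along e_3, Burgers vector along
# e_1, so only alpha_13 is nonzero on the dislocation voxels.
alpha = np.zeros((3, 3, N, N, N))
alpha[0, 2, N // 2, N // 2, :] = b / l ** 2

eps = np.zeros((3, 3, 3))  # Levi-Civita permutation symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

xi = 2 * np.pi * np.stack(
    np.meshgrid(*[np.fft.fftfreq(N, d=l)] * 3, indexing="ij"))
xi2 = (xi ** 2).sum(axis=0)
xi2[0, 0, 0] = 1.0         # avoid division by zero at the origin

a_h = np.fft.fftn(alpha, axes=(2, 3, 4))
# U_hat_{ij} = i eps_{jkl} xi_k alpha_hat_{il} / |xi|^2
U_h = 1j * np.einsum("jkl,kxyz,ilxyz->ijxyz", eps, xi, a_h) / xi2
U_h[:, :, 0, 0, 0] = 0.0   # the incompatible part has zero mean
U_perp = np.real(np.fft.ifftn(U_h, axes=(2, 3, 4)))
strain_perp = 0.5 * (U_perp + U_perp.transpose(1, 0, 2, 3, 4))
print(strain_perp.shape)   # (3, 3, N, N, N)
```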
In order to screen the stress field of the fixed straight edge dislocation with Burgers vector b, the corresponding Nye tensor is given by α = −(b ⊗ t)/l², where the magnitude −b is opposite to that of the dislocation (so as to cancel its stress field) and l is the voxel size of the discretization. This value of the Nye tensor is applied in the voxels where the immobile dislocation is located, and it is zero in the rest of the simulation domain.
Dislocation mobility
The drag coefficient B that characterizes the dislocation mobility was recently determined for Al by Cho et al. (2017). They carried out molecular dynamics simulations of straight dislocation segments with different characters and determined B as a function of temperature in the regime in which the dislocation mobility is controlled by the viscous friction force arising from phonon damping. It was found that the drag coefficient of a mixed dislocation cannot be obtained by a linear interpolation between those of edge and screw dislocations (Fig. 4). The maximum drag coefficients were found for the screw dislocation and for a mixed dislocation whose Burgers vector forms an angle of 60° with the dislocation line. The drag coefficients obtained from the molecular dynamics simulations were fitted to two parabolic functions of the character angle, where B(0) and B(π/2) stand for the drag coefficients of pure screw and edge dislocations, respectively. This drag coefficient was used in the DDD simulations for the Al matrix. The drag coefficient in the θ precipitates was assumed to be infinite and, therefore, the dislocations could not shear the precipitates and were forced to by-pass them.
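A sketch of such a piecewise-parabolic fit is given below. The (angle, drag) data points are invented placeholders, since the actual values come from the molecular dynamics simulations of Cho et al. (2017), and the exact functional form used in the paper may differ:

```python
import numpy as np

theta = np.deg2rad([0, 15, 30, 45, 60, 75, 90])              # character angle
B_md = np.array([28., 24., 22., 24., 30., 25., 20.]) * 1e-6  # Pa s (placeholders)

# Split at the local maximum near 60 deg and fit one parabola per branch.
split = np.argmax(B_md[1:]) + 1
c_lo = np.polyfit(theta[:split + 1], B_md[:split + 1], 2)
c_hi = np.polyfit(theta[split:], B_md[split:], 2)

def drag(t):
    """Drag coefficient interpolated from the two parabolic branches."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= theta[split], np.polyval(c_lo, t), np.polyval(c_hi, t))

print(drag(np.deg2rad([10, 60, 80])))
```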
Results and discussion
DDD simulations were carried out using a cubic domain of 729 × 729 × 729 nm with periodic boundary conditions, which was discretized with a grid of 128 × 128 × 128 voxels. The axes of the cubic domain were aligned with the [110], [112] and [111] directions, and an edge dislocation of the (111)[110] slip system was introduced in the simulation box following the strategy described above. The precipitate was inserted at the center of the simulation box as a circular disk parallel to either the (001) or the (010) plane, which stand for the respective habit planes. θ precipitates also grow along the (100) habit plane, but this dislocation/precipitate configuration is equivalent to that of the (010) precipitates. The slip plane of the dislocation intersects the center of the precipitate. The initial configuration is represented in Fig. 5 for both orientations of the precipitate. For the precipitate parallel to the (001) plane, the section of the precipitate along the glide plane was parallel to the Burgers vector (Fig. 5a), whereas it formed an angle of 60° for the precipitate in the (010) plane (Fig. 5b). A shear strain rate is applied to the cubic domain along the [110] direction, as shown in Fig. 5. The precipitate volume fraction was held constant and equal to 3.1 × 10⁻⁴ in the simulations, so the interaction between precipitates can be neglected. The elastic constants of the Al matrix and of the θ precipitate in Tables 1 and 2, respectively, were used, while the dislocation mobility in Al was given by the drag coefficient B in eq. (19). It was assumed that the precipitate was impenetrable to dislocations. All the simulations presented below were carried out at an applied strain rate of 10⁴ s⁻¹ because the results obtained at this strain rate are equivalent to those obtained under quasi-static conditions, as shown in the Appendix.

Figure 5: Initial configuration of the edge dislocation and the θ precipitate for the DDD simulations. (a) Precipitate habit plane (001). The angle between the Burgers vector and the section of the precipitate along the glide plane is 0°. (b) Precipitate habit plane (010). The angle between the Burgers vector and the section of the precipitate along the glide plane is 60°. The orientation of the dislocation line and the precipitate in the slip plane are shown for both configurations below each figure.
Mechanisms of dislocation-precipitate interaction
The mechanisms of dislocation-precipitate interaction, and the particular role played by the SFTS around the precipitate, can be understood from the shear stress-strain curves obtained from the DDD simulations. In this section, the stress-strain curves and the path followed by the dislocation are analyzed for each precipitate variant. The precipitate diameter in these simulations was 156 nm and the aspect ratio 26:1, in agreement with the results of the phase-field simulations for θ' in Al-Cu alloys (Liu et al., 2017). Simulations were carried out with and without the effect of the SFTS to assess the influence of this factor on the mechanics of dislocation/precipitate interactions. Owing to the symmetries of the FCC lattice, the interaction of the dislocation with the 12 deformation variants induced by the presence of the SFTS reduces to 6 independent cases, two corresponding to the 0° configuration (Fig. 5a) and four to the 60° configuration (Fig. 5b).
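As a quick sanity check of the geometry quoted above, the volume of a 156 nm disk with a 26:1 aspect ratio in a 729 nm periodic cube can be compared with the stated volume fraction; the ideal circular-disk shape is an assumption of this check:

```python
import numpy as np

d = 156e-9                 # precipitate diameter (m)
t = d / 26                 # thickness from the 26:1 aspect ratio (m)
box = 729e-9               # cube edge length (m)

v_fraction = np.pi * (d / 2) ** 2 * t / box ** 3
print(f"volume fraction = {v_fraction:.2e}")   # ~3.0e-4, close to 3.1e-4
```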
0° orientation
The shear stress-strain curve of the simulation in the 0° orientation without SFTS is plotted in Fig. 6. The configuration of the dislocation line around the cross-section of the precipitate in the glide plane is also included in the figure for different values of the applied strain. In the initial stages of deformation, marked with (i) in the figure, dislocation glide takes place at very low stress and the dislocation line remains straight, indicating that there is no influence of the precipitate. Linear hardening is observed afterwards in region (ii) as the dislocation starts to bow around the precipitate. The dislocation overcomes the precipitate by the formation of an Orowan loop, as shown in (iii), and, as the dislocation leaves the domain, another dislocation enters through the opposite boundary due to the periodic boundary conditions (region iv), leading to hardening.
The dislocation-precipitate interaction for the 0° configuration changes in the presence of the stress fields around the precipitate induced by the SFTS, which can be found in Table 1 of Liu et al. (2017). The SFTS components in that table are given in a reference system that follows the orientation relationship between the matrix and the precipitate, and they have to be rotated to the reference frame in Fig. 5. The two SFTS considered for this precipitate configuration, ε⁰₁ and ε⁰₂, are given in Liu et al. (2017).
In the case of ε⁰₁ (Fig. 7a), the stress field around the precipitate leads to an initial repulsion between the dislocation and the precipitate, which is reflected in the initial hardening of the stress-strain curve in region (i) and in the shape of the dislocation line. After this initial barrier is overcome, one small segment of the dislocation line is attracted to the precipitate (region ii) and the dislocation starts to bow around the precipitate (region iii), but the dislocation loop is not symmetric due to the SFTS. The Orowan loop is finally created around the precipitate (regions iv and v) and the process is repeated as a new dislocation enters the domain (region vi). The stress field created by the SFTS ε⁰₂ leads to a different behavior, as shown in Fig. 7b. The dislocation line is initially attracted to the precipitate (region i) and a minimum in the stress-strain curve is found when the dislocation line comes into contact with the precipitate (region ii). Linear hardening is observed afterwards as the dislocation bows around the precipitate (region iii) and overcomes the obstacle by the formation of an Orowan loop (region iv). However, the final Orowan loop is not attached to the precipitate surface and its shape is different from the ones found in Figs. 6 and 7a.
The presence of the SFTS considerably increased the CRSS (i.e., the maximum stress in the shear stress-strain curve) necessary to overcome the precipitate. According to the line tension model, the CRSS is controlled by the minimum radius of curvature of the dislocation line during the Orowan process, which decreased in the presence of the SFTS because of the anisotropy introduced in the development of the Orowan loops. In addition, the CRSS in the presence of ε⁰₁ was slightly higher than the one in the presence of ε⁰₂.
60° orientation
Similar DDD simulations were carried out when the precipitate was in the 60° configuration. In the absence of the SFTS, the dislocation overcomes the precipitate by the formation of an Orowan loop (Fig. 8) and the regions of the shear stress-strain curve are equivalent to those found in Fig. 6 for the 0° orientation in the absence of the SFTS. In this case, the dislocation line advances toward the precipitate and rotates until it is in full contact with the broad face of the precipitate (region ii). Afterwards, the dislocation arms advance until an Orowan loop is formed around the precipitate (region iii).
In the 60° orientation, there are four independent SFTS, ε⁰₃ to ε⁰₆, given in eq. (23). The dislocation-precipitate interactions in the presence of the stress fields created by these SFTS are plotted in Fig. 9, together with the corresponding shear stress-strain curves. In all cases, the dislocation line tends to rotate and become parallel to the broad face of the precipitate, and an Orowan loop is formed afterwards as the dislocation arms propagate at both sides of the precipitate. However, the approach of the dislocation to the precipitate and the formation of the Orowan loop are modulated by the SFTS.
In the case of ε⁰₃ (Fig. 9a), the stress field near the precipitate initially attracts the dislocation toward the precipitate (region i), but this is followed by a strong repulsion between the dislocation line and the broad face of the precipitate (region ii), leading to the formation of a half loop whose ends are in contact with the precipitate (regions iii and iv). The interaction between the stress field of the dislocation and the stress field created by the SFTS in this case is shown in the contour plots of τ_yz in Fig. 10. The repulsion between the dislocation and the precipitate due to the presence of the SFTS leads to the formation of an ellipsoidal Orowan loop that is only in contact with the edges of the precipitate (regions v and vi). Interestingly, the minimum radius of curvature of the dislocation line during the Orowan process was larger than that in the case without SFTS (Fig. 8), and the CRSS for the 60° configuration with SFTS ε⁰₃ was smaller than that obtained in the absence of the SFTS (Fig. 8).
In the case of ε⁰₅ (Fig. 9c), the dislocation line is initially repelled by the precipitate (region i) but is strongly attracted afterwards towards the broad face of the precipitate (region ii). The final Orowan loop is in contact with the precipitate along the whole matrix/precipitate interface (regions iii and iv). This leads to a very small radius of curvature of the dislocation, and the CRSS in this case is much higher than the one in the absence of the SFTS (Fig. 8). The situations in the presence of the two other SFTS (Figs. 9b and d) are in between those reported above, and the CRSS in these cases was equal (ε⁰₄) or slightly higher (ε⁰₆) than that in the absence of the SFTS.
Influence of the aspect ratio of the precipitates
Although θ precipitates in Al-Cu alloys have a large aspect ratio, this geometric feature may be changed by the addition of alloying elements that modify the interfacial energy between the Al matrix and the precipitate (Mitlin et al., 2000; Yang et al., 2016; Duan et al., 2017). Thus, it is interesting to analyze the influence of the precipitate aspect ratio on the mechanisms of dislocation-precipitate interaction in the presence of the SFTS. To this end, DDD simulations in the 0° and 60° configurations were carried out with precipitates with aspect ratios in the range 26:1 to 1:1, while the precipitate volume fraction (3.1 × 10⁻⁴) was held constant.
In the absence of SFTS, the dislocations overcome the precipitate by the formation of an Orowan loop and the corresponding results are not plotted for the sake of brevity. The shear stress-strain curves corresponding to the 0° configuration with SFTS given by ε⁰₁ and ε⁰₂ are plotted in Figs. 11a and b, respectively. The corresponding contour plots of the shear stress τ_yz in the initial configuration are shown in Figs. 12a and b.
In the first case (Fig. 12a), the stress field induced by the SFTS repels the dislocation (region i) and prevents the dislocation line from coming into contact with the precipitate (region ii). As a result, the effective precipitate diameter that controls the radius of curvature of the dislocation arms to form an Orowan loop is increased (region iii). On the contrary, the stress field created by ε⁰₂ (Fig. 11b) strongly attracts the dislocation line (regions i and ii) and the dislocation spontaneously overcomes the precipitate by the formation of a very tight Orowan loop (region iii). This process is repeated as further strain is applied to the simulation domain (regions iv and v).
The mechanisms of dislocation-precipitate interaction in the case of the 60° configuration with smaller aspect ratios are qualitatively similar to those reported above and are not included for the sake of brevity. It is, however, important to assess the influence of the SFTS and of the precipitate aspect ratio on the CRSS for both precipitate orientations. These results are plotted in Figs. 13a and b for the 0° and 60° orientations, respectively. The CRSS obtained from the simulations with and without SFTS are included in each figure, together with the predictions of the Orowan model for the CRSS, τ_O = Gb/L, where G (= 29.9 GPa) is the shear modulus of the Al matrix in the slip plane parallel to the Burgers vector b (= 0.2863 nm) and L is the distance between precipitates along the dislocation line (Fig. 13). In the case of the 0° orientation, the variation of the precipitate aspect ratio from 26:1 to 1:1 (while the precipitate volume fraction was held constant) did not change L significantly (from L = 718 nm to 636 nm, respectively), while L varied slightly more in the 60° configuration (from L = 587 nm for 26:1 to L = 637 nm for 1:1 aspect ratio). Thus, the CRSS given by eq. (24) was almost constant for the 0° configuration and decreased slightly as the aspect ratio was reduced in the 60° configuration (Fig. 13).
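Evaluating the Orowan estimate with the distances quoted above is straightforward. Note that eq. (24) in the original may contain additional geometric factors; the simple form τ_O = Gb/L reconstructed here is an assumption consistent with the quantities listed in the text:

```python
G = 29.9e9      # shear modulus of the Al matrix (Pa)
b = 0.2863e-9   # Burgers vector (m)

for label, L in [("0 deg, 26:1", 718e-9), ("0 deg, 1:1", 636e-9),
                 ("60 deg, 26:1", 587e-9), ("60 deg, 1:1", 637e-9)]:
    print(f"{label}: tau_O = {G * b / L / 1e6:.1f} MPa")
```

The resulting values (roughly 12 to 15 MPa) change only mildly with the aspect ratio, in line with the nearly constant Orowan predictions shown in Fig. 13.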
The predictions of the Orowan model were in good agreement with the DDD simulations in the absence of the SFTS in the 60° configuration, but they overestimated by ≈ 20% the CRSS for precipitates with large aspect ratio oriented at 0°. These results are consistent with the main hypotheses of the Orowan model, which assumes that the precipitates are spherical (small aspect ratio) and that the dislocation line forms a circular loop between precipitates. The introduction of the SFTS led to large variations in the CRSS, which were more pronounced for precipitates with small aspect ratios. Depending on whether the stress field associated with the SFTS attracted or repelled the dislocation, the CRSS could increase or decrease dramatically for the precipitates in the 0° orientation (Fig. 13a) when the aspect ratio was 1:1. As shown in Fig. 11, the stress field associated with the SFTS controlled the shape of the dislocation loop at the instability point and, thus, the magnitude of the CRSS. Similar results were obtained in the case of the precipitate oriented at 60° with an aspect ratio of 1:1 (Fig. 13b) for ε⁰₅ and ε⁰₃. It should be noted, however, that the SFTS is an important factor in determining the shape of the precipitate, as shown by Liu et al. (2017). The SFTS used in these simulations led to precipitates with large aspect ratios (> 10) and may not be realistic for equiaxed precipitates.
The influence of the SFTS decreased as the precipitate aspect ratio increased because the dislocation loop configuration depended not only on the SFTS but also on the precipitate shape, always leading to an elongated loop parallel to the main axis of the precipitate (Figs. 7 and 9). Nevertheless, it should be noted that the presence of the SFTS increased the CRSS of precipitates with an aspect ratio of 26:1 in all cases (with the exception of the SFTS ε⁰₃ in the 60° configuration) with respect to the values obtained by DDD simulations without the SFTS. These results indicate that the stress fields around the precipitate (due to the SFTS or to thermal stresses generated upon cooling from the ageing temperature as a result of the thermal expansion mismatch between the matrix and the precipitates) have to be taken into account to make quantitative predictions of the strengthening provided by precipitates in metallic alloys. It should finally be noted that the presence of the SFTS ε⁰₆ in the 60° configuration changed the shape of the Orowan loop around the precipitate when the aspect ratio increased from 1:1 to 2:1, and again for larger values of the aspect ratio, leading to a complex variation of the CRSS with this parameter for this particular SFTS. Moreover, the stress fields around precipitates can interact with each other for large precipitate volume fractions (far from the dilute conditions of this investigation), leading to complex interaction patterns between dislocations and precipitates.
Influence of the elastic mismatch between matrix and precipitate
All the results presented above were obtained using different values of the elastic properties for the matrix and the precipitate, according to the DFT results in Tables 1 and 2. However, it is interesting to assess the influence of the elastic heterogeneity on the stress-strain curves and on the CRSS. Thus, two simulations were carried out for the 0° and 60° orientations (without SFTS) for precipitates with an aspect ratio of 26:1 in which the elastic properties of the precipitate were identical to those of the matrix. The corresponding stress-strain curves for these homogeneous simulations are plotted in Figs. 14a and b for the precipitates oriented at 0° and 60°, respectively, together with the results obtained with the actual elastic constants of the matrix and the precipitate. The differences in the mechanisms and in the CRSS were negligible, in agreement with previous investigations (Shin et al., 2003).
Concluding Remarks
The mechanisms of dislocation/precipitate interaction were analyzed by means of a novel DDD strategy based on the DCM (Lemarchand et al., 2001) in combination with an FFT solver to compute the mechanical fields (Bertin et al., 2015). This framework neither requires analytical expressions for the displacement fields of the dislocation segments (and can thus be easily extended to anisotropic materials), nor does its computational cost increase with the square of the number of dislocation segments. Moreover, very fine discretizations (necessary to model precipitates with large aspect ratios) can be used owing to the efficiency of the FFT solver, and the influence of the image stresses (induced by the elastic modulus mismatch between the matrix and the precipitate) and of the SFTS can be easily incorporated into the simulations. The original DDD strategy (Bertin et al., 2015) was modified to include straight dislocation segments by means of the FDM model, which is the appropriate configuration to analyze the interaction of a single dislocation line with a precipitate. The novel DDD model was used to analyze the mechanisms of dislocation/precipitate interaction and the corresponding CRSS in Al-Cu alloys. The orientation, size and shape of the θ precipitates as well as the SFTS associated with the different precipitate variants were obtained from recent multiscale modelling simulations based on the phase-field model (Liu et al., 2017), while the elastic constants of the Al matrix and of the precipitates were calculated by DFT and the dislocation mobility as a function of the dislocation character was obtained from molecular dynamics simulations (Cho et al., 2017). This leads to a multiscale modeling strategy in which all the parameters in the DDD simulations are obtained from calculations at lower length scales.
The DDD simulations provided for the first time a detailed account of the influence of the precipitate aspect ratio, orientation, SFTS and elastic mismatch between the matrix and the precipitate on the dislocation path to form an Orowan loop and on the CRSS to overcome the precipitate. It was found that the elastic mismatch has a negligible influence on the dislocation/precipitate interaction in the Al-Cu system, while the influence of the precipitate aspect ratio and orientation was reasonably captured by the simple Orowan model in the absence of the SFTS. Nevertheless, the introduction of the SFTS led to dramatic changes in the dislocation/precipitate interaction and in the CRSS. This effect decreased as the precipitate aspect ratio increased but it was still very important (above 50% of the CRSS for some precipitate variants) for θ' precipitates with the typical aspect ratio found in Al-Cu alloys. Thus, this investigation reveals the large influence of the SFTS on the mechanics of dislocation/precipitate interaction, an important factor that has not been previously considered in the analysis of precipitation hardening.
Finally, the methodology presented in this paper opens the possibility to explore in more detail the mechanisms of dislocation/precipitate interaction in metallic alloys with realistic values of the precipitate size, shape and aspect ratio as well as of the elastic mismatch and of the dislocation mobility. Such simulations will be able to provide quantitative assessments of the strengthening provided by the precipitates, taking into account the influence of the SFTS and of the thermal stresses that develop upon cooling from the ageing treatments at high temperature. In addition, they can be extended to deal with larger volume fractions of precipitates to account for the interaction between the SFTS of different precipitates and to model the propagation of a dislocation through a forest of precipitates, including the effect of cross-slip. These topics will be the subject of future investigations.
Acknowledgments
This investigation was supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (Advanced Grant VIRMETAL, grant agreement No. 669141). LC was funded by the US Department of Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.
Appendix A. Influence of strain rate
Dislocation dynamics simulations are normally carried out at high strain rates (10² to 10⁵ s⁻¹) for computational reasons, and this limitation often raises the question of whether the results obtained are applicable under quasi-static conditions. In order to analyze this effect, several simulations were carried out using a relaxation strategy that allows studying the dislocation dynamics under quasi-static conditions. In this approach, a strain increment is applied to the simulation box at a high strain rate, in this case 4.0 × 10⁷ s⁻¹, and the energy of the system is relaxed afterwards during several steps at constant applied strain. The shear stress is reduced during the relaxation, and the process is finished when the difference in the shear stress between two consecutive relaxation steps is lower than a certain tolerance, so that the system can be considered to be in equilibrium. Then, a new strain increment is applied and the whole relaxation process is repeated. The shear stress-strain curve obtained following this process is plotted in Fig. A.15a for the case of the interaction of an edge dislocation with a precipitate with a diameter of 156 nm and an aspect ratio of 26:1 in the 0° configuration. The blue line with open symbols shows the successive strain increments followed by the relaxation of the shear stress, and the red line with solid symbols stands for the quasi-static shear stress-strain curve. The shear stress-strain curves obtained at different strain rates (10⁴ and 10⁵ s⁻¹) for this case are plotted in Fig. A.15b, together with the quasi-static curve from Fig. A.15a. The comparison between these curves shows that the results obtained at an applied strain rate of 10⁴ s⁻¹ were very close to the quasi-static simulations and, thus, the DDD simulations presented in this paper were carried out at an applied strain rate of 10⁴ s⁻¹.
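The strain-increment/relaxation driver described above can be sketched as follows; step_ddd is a hypothetical stand-in for one step of the coupled DDD-FFT solver (it is not part of any actual code base), and the toy solver below only demonstrates the control flow:

```python
def quasi_static_curve(step_ddd, d_gamma=1e-4, n_increments=50,
                       rate_hi=4.0e7, tol=1e4, max_relax=1000):
    """Return (strain, stress) points of the relaxed stress-strain curve."""
    gamma, curve = 0.0, []
    for _ in range(n_increments):
        gamma += d_gamma
        tau = step_ddd(gamma, rate_hi)       # fast strain increment
        for _ in range(max_relax):           # relax at constant applied strain
            tau_new = step_ddd(gamma, 0.0)
            done = abs(tau_new - tau) < tol  # stress change below tolerance
            tau = tau_new
            if done:
                break
        curve.append((gamma, tau))
    return curve

# Toy demonstration with a fake solver whose stress decays towards a target.
def fake_step(gamma, rate, state={"tau": 0.0}):  # mutable default keeps state
    target = min(3.0e7, 8.0e10 * gamma)
    state["tau"] += 0.5 * (target - state["tau"])
    return state["tau"]

print(quasi_static_curve(fake_step, n_increments=3)[-1])
```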
"year": 2018,
"sha1": "ac3b2adf8b24fda894bd094edaaff7e76c6fc0a4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1805.07597",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ac3b2adf8b24fda894bd094edaaff7e76c6fc0a4",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
Application of Linear Discriminant Function Analysis in Classification of Bovine Mastitis
The dairy industry is facing a great setback due to the high prevalence and incidence of mastitis in milch animals. The aim of this study was to identify the factors discriminating between 'clinical and subclinical mastitis', 'clinical and chronic mastitis' and 'subclinical and chronic mastitis' using linear discriminant function analysis. The present study was conducted at the Large Animal Clinic of Madras Veterinary College (MVC) Hospital, Chennai. Out of two hundred and eighty milch animals examined during the study period, sixty cows were affected by mastitis. Results revealed that, while discriminating clinical and subclinical mastitis, the variables breed of the animal, hand pre-washing prior to milking and udder washing before milking discriminated 0.748, 0.413 and 0.644 times, respectively, towards subclinical mastitis. Stage of lactation and provision of bedding material discriminated 0.462 and 0.547 times, respectively, towards clinical mastitis. A total of seven variables was found to be significant for discriminating between clinical and chronic mastitis. The Fisher's linear discriminant functions obtained might be used to classify further cases of mastitis presented at the Madras Veterinary College hospital.
Introduction
Mastitis is the most important and expensive disease of the dairy industry (Sharif and Muhammad, 2009). This disease is characterized by inflammation of the mammary gland in response to injury, for the purpose of destroying or neutralizing the infectious agents and preparing the way for healing and a return to normal function. Inflammation can be caused by many types of injury, including infectious agents and their toxins, physical trauma or chemical irritants. In the dairy cow, mastitis is always caused by microorganisms, usually bacteria, that invade the udder, multiply in the milk-producing tissues, and produce toxins that are the immediate cause of injury. Elevated leukocyte or somatic cell counts produced by the inflammatory response cause a reduction in milk production and alter milk composition. These changes in turn adversely affect the quality and quantity of dairy products (Jones and Bailey, 2009).
Contagious mastitis can be divided into three types: clinical mastitis, subclinical mastitis and chronic mastitis (Awale et al., 2012). Clinical mastitis results in alterations of milk composition and appearance, decreased milk production and the presence of the cardinal signs of inflammation (pain, swelling and redness, with or without heat, in infected mammary quarters). It is readily apparent and easily detected.
Subclinical infections are those in which no visible changes occur in the appearance of the milk or the udder, but milk production decreases, bacteria are present in the secretion and the milk composition is altered (Jones and Bailey, 2009). In chronic mastitis, an inflammatory process exists for months and may continue from one lactation to another (Rabello et al., 2005).
The discriminant function is a multivariate technique for studying the extent to which different groups diverge from one another. The aim of this study was to identify the factors discriminating between 'clinical and subclinical mastitis', 'clinical and chronic mastitis' and 'subclinical and chronic mastitis' using linear discriminant function analysis.
Materials and Methods
The present study was conducted at the Large Animal Clinic of Madras Veterinary College (MVC) Hospital, Chennai. The primary data were collected from milch cows presented in the outpatient ward of the MVC hospital. In addition, farm visits to the respective farmers were made to obtain additional information on the bovine management practices they followed.
The total number of animals that arrived at the Large Animal Clinic of Madras Veterinary College (MVC) during the study period of four months, from October 2016 to January 2017, was considered as the total population (N) for the present study. Out of the two hundred and eighty milch animals examined during the study period, sixty cows were affected by mastitis. A pre-tested questionnaire was prepared and detailed information about the mastitis-infected animals was collected from the farmers.
Total farm details, including details of the barn, management aspects, previous history of disease (if any) and hygienic aspects, were collected through the personal interview method. Viguier et al. (2009) stated that the severity of the inflammation can be classified into sub-clinical, clinical and chronic forms. They added that chronic mastitis, a rarer form of the disease, results in persistent inflammation of the mammary gland. Kurjogi and Kaliwal (2014) determined the prevalence of clinical mastitis in cows by examination of changes in the udder, namely redness, rise in temperature, swelling, hardness of the udder, changes in milk colour, and reduction in milk quality and quantity. Subclinical mastitis was confirmed using the California mastitis test (CMT). Abebe et al. (2016) used the California mastitis test (CMT) as a screening test for sub-clinical mastitis.
The variables that significantly discriminate between the types of mastitis were identified using the linear discriminant function. Discriminant analyses were performed using IBM® SPSS® 20.0 for Windows®. The discriminant function is a multivariate technique for studying the extent to which different individuals diverge from one another. The objective of a discriminant analysis is to classify objects, by a set of independent variables, into one of two or more mutually exclusive and exhaustive categories (Alayande and Adekunle, 2015).
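An equivalent two-group discriminant analysis can be reproduced with open-source tools. The feature matrix below is synthetic; in the study, the columns would be the management variables (breed, hand pre-washing, udder washing, stage of lactation, bedding material, and so on):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 30  # synthetic cases per group
X = np.vstack([rng.normal(0.0, 1.0, size=(n, 5)),   # e.g., clinical cases
               rng.normal(0.8, 1.0, size=(n, 5))])  # e.g., subclinical cases
y = np.array([0] * n + [1] * n)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant coefficients:", lda.coef_.round(3))
print("correctly classified:", (lda.predict(X) == y).mean())
```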
Results and Discussion
Discrimination between 'clinical and subclinical mastitis', 'clinical and chronic mastitis' and 'subclinical and chronic mastitis' was performed using linear discriminant function analysis. The variables that significantly discriminate between the types of mastitis were identified by taking two types of mastitis at a time. The results obtained are discussed under separate subheadings. The data generated from 1378 birds on linear body parameters and weight were subjected to discriminant analysis by Gwaza et al. (2013), who estimated group statistics, tests of equality of group means, canonical correlation coefficients, Wilks' lambda, the structure matrix and classification statistics.
Factors discriminating between clinical and subclinical mastitis

Table 1 gives the eigenvalue (1.620) and the canonical correlation (0.786) for this comparison. The eigenvalue reflects the amount of variance explained by a function, and the square of the canonical correlation can be considered as R² (coefficient of multiple determination). That means 61.8 percent (0.618) of the information about this discrimination was explained by the independent variables chosen for the study. The remainder (1 − 0.618) is Wilks' lambda.
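The quantities quoted above are directly related: the squared canonical correlation gives R², and, for a single discriminant function, Wilks' lambda is its complement, while the eigenvalue λ satisfies R² = λ/(1 + λ):

```python
cc = 0.786    # canonical correlation from Table 1
lam = 1.620   # eigenvalue of the discriminant function
r2 = cc ** 2                              # coefficient of multiple determination
wilks = 1.0 - r2                          # Wilks' lambda for a single function
assert abs(lam / (1 + lam) - r2) < 1e-2   # the two reported figures agree
print(f"R^2 = {r2:.3f}, Wilks' lambda = {wilks:.3f}")  # 0.618, 0.382
```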
Coefficients of the discriminant function are given in Table 2. From a total of twenty-one independent variables considered, five variables were found to be significant. Breed of the animal, hand pre-washing prior to milking and udder washing before milking discriminated 0.748, 0.413 and 0.644 times, respectively, towards subclinical mastitis. Shittu et al. (2012) reported an association between subclinical mastitis and breed characteristics. The other two variables, stage of lactation and provision of bedding material, discriminated 0.462 and 0.547 times, respectively, towards clinical mastitis. The linear discriminant function obtained can be used to predict further discrimination between clinical and subclinical mastitis.
Chi-square values showed that the model was significant at the one percent level of significance. 91.1 percent of the original grouped cases were correctly classified (predicted) by the discriminant model, as seen in Table 3. Milewska et al. (2015) used a discriminant model for predicting achievement and failure of pregnancy. They concluded that the discriminant analysis allowed for the creation of a model with 51.22 percent correctness of prediction of achieving pregnancy during in vitro fertilization treatment and with 74.07 percent correctly predicted failures of pregnancy.
Factors discriminating between clinical and chronic mastitis

Table 1 gives the eigenvalue (3.883) and the canonical correlation (0.892) for this comparison. The eigenvalue reflects the amount of variance explained by the function, and the square of the canonical correlation can be considered as R² (coefficient of multiple determination). That means 79.5 percent (0.795) of the information about this discrimination was explained by the independent variables chosen for the study. The remainder (1 − 0.795) is Wilks' lambda.
Coefficients of the discriminant function are given in Table 2. From a total of twenty-one independent variables considered, seven variables were found to be significant. Breed of the animal, lactation number, hygiene of the farm, injury to the udder prior to mastitis and udder drying after washing discriminated 0.765, 0.620, 0.909, 0.872 and 0.432 times, respectively, towards chronic mastitis. The other variables, hand pre-washing prior to milking and provision of bedding material, discriminated 0.523 and 0.987 times, respectively, towards clinical mastitis.
The linear discriminant function obtained can be used for further discrimination between clinical and chronic mastitis. Chi-square values showed that the model was significant at the one percent level of significance. 95.3 percent of the original grouped cases were correctly classified by the discriminant model, as seen in Table 4.
Factors discriminating between subclinical and chronic mastitis
Table 1 gives the eigenvalue (1.581) and the canonical correlation (0.783) for this comparison. The eigenvalue reflects the amount of variance explained by the function, and the square of the canonical correlation can be considered as R² (coefficient of multiple determination). That means 61.3 percent (0.613) of the information about this discrimination was explained by the independent variables chosen for the study. The remainder (1 − 0.613) is Wilks' lambda.
Coefficients of the discriminant function are given in Table 2. From a total of twenty-one independent variables considered, three variables were found to be significant. Injury to the udder prior to mastitis and udder washing before milking discriminated 0.890 and 0.667 times, respectively, towards chronic mastitis. The breed of the animal discriminated 0.672 times towards subclinical mastitis. The linear discriminant function obtained can be used for further discrimination between subclinical and chronic mastitis. Chi-square values showed that the model was significant at the one percent level of significance. 87.5 percent of the original grouped cases were correctly classified by the discriminant model, as seen in Table 5. In conclusion, the Fisher's linear discriminant functions obtained might be used to classify further cases of mastitis presented at the Madras Veterinary College hospital.
"year": 2020,
"sha1": "191a88b987cd387f659314cfac45b06400663665",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/9-11-2020/Safeer%20M.%20Saifudeen%20and%20G.%20Senthilkumar.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "191a88b987cd387f659314cfac45b06400663665",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Let Me Take Over: Variable Autonomy for Meaningful Human Control
As Artificial Intelligence (AI) continues to expand its reach, the demand for human control and the development of AI systems that adhere to our legal, ethical, and social values also grows. Many (international and national) institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making the effective control over systems a challenge. “Human oversight” is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement and that such a misconception may limit the use of automation where it can otherwise provide so much benefit across industries. We therefore propose the development of systems with variable autonomy—dynamically adjustable levels of autonomy—as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
INTRODUCTION
As the use of Artificial Intelligence (AI) grows, we continue to see increased societal calls for human control and for an AI development pipeline that follows our legal, ethical, and social values. In particular, many public and governmental organisations have been producing guidelines for the development and deployment of responsible AI systems (Jobin et al., 2019). These documents, while providing high-level guidance on the core values that should drive system development and deployment, often offer little explicit guidance on how to interpret and operationalise such values. This focus on the high level thus provides no single definition of what it means to adhere to these values, making it challenging to first implement and then assess whether systems adhere to the societal criteria set down in those documents.
An example of differing interpretations can be found in the idea of human oversight, a prominent theme across guidelines and other initiatives, with an emphasis on respecting and fostering human autonomy and agency. Technical approaches for the inclusion of human oversight over systems, such as human-in-the-loop and human-on-the-loop, have been much discussed in the academic literature (Amershi et al., 2014), policy documents (European Commission, 2019), and popular science communication (Wang, 2019). However, when it comes to responsible AI, the notion of human oversight extends beyond mere technical human control over a deployed system: it also includes the responsibility that lies in the development and deployment process, which entirely consists of human decisions and is therefore part of human control. The concept of meaningful human control, developed for the critical area of autonomous weapons, extends beyond mere oversight by including design and governance layers into what it means to have effective control (Horowitz and Scharre, 2015; Santoni de Sio and van den Hoven, 2018; Van der Stappen and Funk, 2021). Meaningful human control refers to control frameworks in which humans, not machines, remain in control of critical decisions; e.g., in the case of autonomous weapons, where the concept was first introduced, humans decide, and bear the responsibility for, when the weapon may apply lethal force.
In this paper, we argue that the core values of accountability, responsibility and transparency are necessary to ensure meaningful human control in the wider socio-technical sense. Indeed, in this definition, meaningful human control requires taking into consideration the relevant human agents, relevant moral reasons, and appropriate level of responsiveness to those reasons (Santoni de Sio and van den Hoven, 2018). In fact, meaningful human control over a system is not achieved by simply having human presence to authorise the use of force (Santoni de Sio and van den Hoven, 2018; Van der Stappen and Funk., 2021). Rather, it requires the interaction between the user and system to be done in a transparent, traceable manner. If an action is challenged or otherwise requires evaluation, then at least one responsible human along the causal chain can be identified and held accountable. At the same time, the system needs to be developed in a responsible manner by taking into consideration any soft and hard policy and integrating the means for the system to be responsive to the human moral reasons that are relevant to the given circumstance.
Further, we introduce variable autonomy to operationalise meaningful human control. Variable autonomy refers to the ability to dynamically adjust the levels of autonomy of the system, i.e., the level of autonomy can switch anywhere between full autonomy and complete teleoperation, including both extremes (Chiou et al., 2019). In a system with variable autonomy, the user can take (or relinquish) control over certain (or all) subsystems. As part of our contribution, we argue that in order to effectively design systems that allow for dynamically adjusting the autonomy level, we need to consider the same aspects of accountability, responsibility and transparency that constitute meaningful human control. Indeed, by their nature, systems with variable autonomy must include explicit deliberations of the dimensions of autonomy that are afforded, the contexts encountered and the human operator's knowledge and ability: precisely the considerations that are required to have meaningful human control.
The paper is structured as follows: first, we discuss different approaches to human control in the literature. Next, we discuss how accountability, responsibility and transparency build up to meaningful human control in the socio-technical sense. We then introduce variable autonomy (VA) and show how VA ensures meaningful human control. Finally, we reflect on how VA might look like in current applications of AI and propose some ways forward.
HUMAN-IN/ON/OUT-OF-THE-LOOP AND HUMAN CONTROL
Human oversight is a key component in the design of AI systems that support human autonomy and decision-making. This is highlighted by the AI High-Level Expert Group in the European Commission (2019) "Ethics guidelines for Trustworthy AI," where human-in-the-loop (HITL), humanon-the-loop (HOTL) and human-in-command (HIC) are presented as governance mechanisms that can potentially aid in achieving human oversight. The keyword loop may originate from control theory, where the system is engaged in a continuous cycle of measuring and adjusting itself towards achieving a desired state (Norman, 1990). However, in the context of socio-technical systems like the ones discussed in this paper, the idea of "the loop" widens to contain the entire lifecycle of the system, by spanning across all its phases from development to deployment and beyond.
In this section, we discuss established frameworks for the inclusion of humans in the loop. Indeed, the human's optimal position relative to the loop will vary depending on the human's role as well as the overall context in which the human and the system operate (Grønsund and Aanestad, 2020). Furthermore, we discuss why a static notion of human presence or oversight does not suffice for maintaining human control, and why alternative, more adaptable solutions are needed.
Human-In-the-Loop
In HITL, the human plays an integral role throughout the entire operation, influencing every decision cycle of the system. This is desirable and often necessary in environments where near-optimal performance is required and machine performance suffers, such as those that are dynamic, highly complex or uncertain (Marble et al., 2004; Leeper et al., 2012). For instance, interactive machine learning methods can be used to solve problems for which insufficient data exist or to help deal with complex datasets that capture rare events. The human is brought into the loop during the learning phase of the algorithm to provide indispensable expert knowledge that it cannot acquire on its own (Holzinger, 2016). This model serves as a powerful tool not only to improve system accuracy and efficiency, but also to regulate the system's behaviour. However, requiring human input at every step in the decision cycle can be inefficient and introduce bottlenecks in the system (van der Stappen and Funk, 2021). Furthermore, the human may not have enough information, or courses of action, to effectively influence the system at every decision (Horowitz and Scharre, 2015). When human involvement is not necessary at every decision step, the HOTL model can be sufficient to regulate system behaviour.
Human-On-the-Loop
In HOTL, the human steps back during the execution of the operation to assume a supervisory role (Chen and Barnes, 2014), influencing the system by monitoring its behaviours and interjecting only as needed. This has many benefits in industrial robotics, for instance, where one human supervisor can oversee multiple assembly robots, checking performance and interrupting only if system failure occurs. In order to intervene, the supervisor must be able to maintain awareness of the system's status as well as the environment in which it operates. Coordination becomes increasingly difficult to manage as systems grow more complex, especially when multiple agents are involved in the operation (Scerri et al., 2003). Furthermore, if the human does not place realistic trust expectations on the system, i.e., either overtrusts it and rarely intervenes or distrusts it and intervenes too often, the performance of the system will be compromised, potentially even leading to safety concerns (Lee and See, 2004).
Human-Out-of-the-Loop
In situations where humans lack the local knowledge, expertise, or timely reaction needed to respond optimally to the environment, human-out-of-the-loop is more appropriate. In these circumstances, autonomy is more of a necessity than a convenience (Castelfranchi and Falcone, 2003). For instance, advanced driver-assistance systems in vehicles promote road safety by detecting and alerting the driver of incoming collision threats and overriding control if necessary. Human error and slow response to time-critical operations motivate the need for full autonomy, where the human is pushed entirely out of the control loop, allowing the system to independently execute its task (Kaber and Endsley, 2004). In the case of monotonic systems, human oversight remains even when out of the loop because the behaviours are explicit and known. This is not necessarily the case for AI systems that can deviate from what is expected, e.g., in the case of multi-agent systems, where the human designer cannot influence unexpected emergent behaviour of the organisation (Van der Vecht et al., 2007). HIC addresses this by requiring human involvement in defining the conditions for the system's governance, operation, and use, as well as in determining the appropriate evaluation and sanction processes, ensuring that human oversight is not lost.
Still, fully autonomous systems cannot always eliminate the risk of human error. The out-of-the-loop performance problem emphasises the issues of skill degradation and reduced situational awareness, which limit the human operator's ability to manually interfere in system operations in case of failure (Endsley and Kiris, 1995). Moreover, autonomous systems can propagate biases learned from human data and can reinforce any systematic discrimination found in society. This point was highlighted in the New York State Department of Financial Services (2021) "Report on the Apple Card Investigation", which states that credit scores today perpetuate societal inequality even when calculated in compliance with the law. Striving only for full autonomy can divert attention away from the goal of developing human-centric AI, where human agency is supported and never undermined. This is precisely a reason why human oversight is important.
Choosing one model of human oversight over another is entirely dependent on the context in which the system is deployed, the independent capabilities of the system, the user's trust in the system (Muir, 1994), as well as the potential risks imposed on society. Indeed, it is often not only a single human who is in the loop, but rather a larger group. Rahwan (2018) uses the term "society-in-the-loop" to refer to the combination of HITL control with a social contract. The challenge thus becomes one of balancing stakeholders' competing interests and managing issues of coordination. In many real-world applications, the environment is continuously changing and the demand for human or system involvement, i.e., the mechanism for oversight, will vary (Marble et al., 2004).
Still, it should be emphasised that human presence is not sufficient for meaningful control from a responsibility standpoint. One may have insufficient information to influence the process rationally, or limited control over parts of the system and no ability to influence other critical components of the causal chain (Horowitz and Scharre, 2015). For meaningful human control, the decision-making system must be able to both track relevant moral reasons and trace back to an individual along the chain who is aware and accepting of the responsibility (Santoni de Sio and van den Hoven, 2018). Moreover, contexts that change will demand changing levels of responsiveness, a key characteristic of variable autonomy, which is further described in Section 4 (Marble et al., 2004).
ACCOUNTABILITY, RESPONSIBILITY, AND TRANSPARENCY FOR MEANINGFUL HUMAN CONTROL
Responsible AI rests on three main pillars: Accountability, Responsibility, and Transparency (Dignum, 2019). In this section, we discuss these three values in relation to the human control of AI systems. First, we discuss why we should always be striving for accountability and the importance of identifying the relevant actors along the causal chain of responsibility. Then, we discuss how transparency aids in tracing back to said actors, who can ultimately be held accountable, and how transparency on its own provides no guarantee of accountability, robustness, or the observation of good practices.
Accountability and Responsibility
As incidents occur, and sometimes reoccur, the ability to effectively hold the responsible parties answerable for a system's behaviour is essential for maintaining the public's trust in the technology (Knowles and Richards, 2021). Yet, multiple scholars have raised concerns over an ongoing accountability gap, i.e., current moral and legal frameworks fail to explicitly answer who should be held responsible for the actions taken by an autonomous system and how (Raji et al., 2020; Santoni de Sio and Mecacci, 2021). Although the systems themselves cannot be granted legal personhood and held accountable, the organisations and individuals who may be benefiting from their development, deployment, and use can (Solaiman, 2017). Those organisations and individuals are part of a "chain of responsibility" and need to be able to explain and justify their decisions (Dignum, 2017). After all, accountability is the state of being answerable for a system's actions and its potential impacts (Narayanan, 2018). However, the exact scope of the justification given for the actions and impact of a system depends on who is asking for it. Bovens (2007) breaks down accountability into five distinct types, each with its own enforcement mechanisms and means of control over an actor's behaviour:
1. legal accountability is when civil or administrative courts enforce legislation;
2. professional accountability is when professional bodies enforce codes-of-conduct and good-design practices;
3. political accountability is when elected representatives, e.g., in a parliament, scrutinise, and intervene in, the actions taken by other politicians and politically appointed civil servants;
4. administrative accountability is when independent external administrative and financial supervisory bodies (e.g., auditing offices) exercise oversight;
5. social accountability is when the public or non-governmental organisations hold organisations and individuals accountable for their actions. While direct sanctions are not possible, social responsibility may lead to boycotting and other indirect measures against someone's actions. Social responsibility is linked to self-regulation activities, as organisations try to maintain their social standing (Jobin et al., 2019).
Each type of accountability can be seen as a means of control: it compels, under the "threat" of being held accountable, responsible actors to adhere to specified regulations and practices (Bovens, 2007). These responsible actors are all part of a "chain of responsibility", which includes anyone influencing and influenced by the technologies and policies that are used to develop and govern our systems: from researchers to developers to deployers to users to policymakers (Dignum, 2017). For stakeholders to act, they first need to acknowledge and understand their own responsibilities. Education and governance initiatives, e.g., introducing the need for professional certification, can help make those responsibilities explicit.
It is, after all, foundational for the effective governance of these systems to recognise that we cannot separate the technology, or the artefact that embeds that technology, from the wider socio-technical system of which it is a component (Mittelstadt and Floridi, 2016; Dignum, 2019). Only then can we increase our ethical and legal foresight and establish practices of accountability and responsibility by looking at technical solutions, socio-organisational activities, as well as processes performed in connection with the technology. Responsibility practices at the socio-organisational level include, for example, the use of software development practices for code robustness, maintainability, and reusability (Raji et al., 2020). At the same time, responsibility as a technical solution includes ensuring technical robustness (Baldoni et al., 2021), the means of linking a system's decisions to its key stakeholders, and the ability of a system to reason about its actions within a specified moral framework (Dignum, 2017).
While responsibility is forward-looking, i.e., acting to deter incidents and violations of our ethical and legal values from occurring, accountability is "a form of backward-looking responsibility and provides an account of events after they occurred" (Van der Stappen and Funk, 2021). To perform its function, accountability requires not only the methods of holding people to account, e.g., legislation in the case of legal accountability, but also the means of tracing actions back to the appropriate responsible party (Van der Stappen and Funk, 2021). Our ability to effectively maintain meaningful human control with accountability relies on having the appropriate auditing and traceability mechanisms in place to track the events that led to a system's behaviour and to determine which of the stakeholders is responsible for those system actions (Horowitz and Scharre, 2015; Santoni de Sio and van den Hoven, 2018). Approaches such as algorithmic transparency and traceability, which we discuss next, can help us do that.
Transparency
Transparency is the single most frequently referred to principle in the 84 policy documents reviewed by Jobin et al. (2019), 73 of which promote its need for building socially beneficial AI systems. Yet, transparency can have different interpretations depending on the context in which it is encountered (Winfield et al., 2021). In particular, and where it concerns meaningful human control, transparency can be seen to mean different things:
Transparency as Trust
Transparency is often considered as the means of providing an understanding of the emergent behaviour of an agent as it interacts with its environment (Theodorou et al., 2017). This emergent behaviour, together with our inherent lack of a theory of mind for machines, makes autonomous systems far too complex to effectively debug and understand with "traditional" techniques for testing information systems. User studies have demonstrated how the display of transparency-related information can help users adjust their mental models (Rotsidis et al., 2019; Wortham, 2020) and calibrate their trust in the machine (Dzindolet et al., 2003; Hoff and Bashir, 2015; Mercado et al., 2015). By knowing when to trust, or distrust, the system, the user can make informed decisions on when to accept or reject actions taken by the system and, therefore, exercise more effective control over it, improving both the safe operation and the performance of the human-machine system (Lyons, 2013).
Transparency as Verifiability
Others have linked transparency to traceability, i.e., the ability to keep a record of information related to a decision. Traceability is particularly important for verification and validation (Fisher et al., 2013) and the overall testing of a system. Traceability is also fundamental to enabling incident investigators to identify the responsible parties (Santoni de Sio and van den Hoven, 2018; Winfield and Jirotka, 2017). However, for the effective attribution of accountability, we need to look not only into the decisions the AI system made, but also into the ones made in its wider socio-technical context. Therefore, auditing frameworks need to look beyond the technical component and instead verify the decisions, policies, and actions taken by all key stakeholders across a system's lifecycle (Raji et al., 2020).
Transparency as Fairness
Transparency is also presented as a means of pursuing fairness in algorithmic decision-making (Jobin et al., 2019). The data that are used to develop learning systems reflect social biases that are perpetuated and amplified with the system's continued use. Caliskan et al. (2017) demonstrated that algorithms trained on language corpora acquire harmful historic biases and reinforce them. Certainly, data is not the only source of bias embedded in AI. Design decisions are directed by human moral agency, which cannot be free of bias. Humans use heuristics to form judgements in decision-making, and while these heuristics can be neutral and useful for efficient input processing, they are culturally influenced (Dignum, 2017; Hellström et al., 2020). This presents the risk of formulating harmful biases that are reinforced through practice. Identifying and addressing unwanted biases to ensure fairness requires transparency at every stage of the AI lifecycle.
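As a deliberately simplified illustration of one such fairness check, the Python sketch below computes a demographic-parity gap over binary decisions; the function name, the choice of criterion, and the toy data are our own illustrative assumptions, not something prescribed by the works cited above.

```python
def approval_rate_gap(decisions, groups):
    """Demographic-parity gap: the difference in approval rates between
    two groups. `decisions` holds 0/1 outcomes; `groups` holds the
    protected-attribute value ("A" or "B") for each case."""
    a = [d for d, g in zip(decisions, groups) if g == "A"]
    b = [d for d, g in zip(decisions, groups) if g == "B"]
    if not a or not b:
        raise ValueError("both groups must be represented in the data")
    return sum(a) / len(a) - sum(b) / len(b)

# Prints ~0.33: group A is approved 33 percentage points more often.
print(approval_rate_gap([1, 1, 0, 1, 0, 0], ["A", "A", "A", "B", "B", "B"]))
```

A single number like this obviously cannot establish fairness on its own, which is precisely the point of this section: such a metric is only meaningful alongside transparency about how the data and the decision pipeline were produced.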
Transparency as Contestability
Still, we should not consider algorithmic transparency a panacea. In fact, complete algorithmic transparency may not always be possible or desirable to implement due to technical, economic, or social factors (Ananny and Crawford, 2018). Moreover, focusing on algorithmic transparency ignores the fact that AI systems are part of a complex socio-technical ecosystem. Algorithmic transparency without sufficient openness about stakeholder decisions, interests, and overall context provides little more than a peephole into a limited part of the whole socio-technical system. Contrary to popular belief, providing transparency, or even explanations, from the system does not mean that we can effectively contest its decisions (Aler Tubella et al., 2020; Lyons et al., 2021). Contestability, i.e., the ability to contest decisions, requires looking beyond why a decision was made. Instead, to adequately demonstrate that a contested decision was both correct and permissible, we need to investigate the wider context in which the decision was made. Socio-legal factors, e.g., the fairness of the decision, or even the actions of other actors and systems, need to be taken into consideration. Our right to contest decisions made about us is not only protected by the GDPR (Regulation (EU), 2016), but should also be considered an important aspect of human control, further motivating systems with variable autonomy.
VARIABLE AUTONOMY
The term variable autonomy (or adjustable autonomy) is frequently seen in the robotics literature to describe human-robot teams in which the level of autonomy (LOA) of the robot varies depending on the context: from complete human operator control to full robot autonomy (Chiou et al., 2019). Variable autonomy (VA) approaches are therefore adopted with the aim of maximising human control without burdening the human operator with an unmanageable amount of detailed operational decisions (Wolf et al., 2013; Chiou et al., 2016). Because of this versatility, VA approaches are, for example, put forward for exploratory contexts (Dorais et al., 1998; Bradshaw et al., 2003; Valero-Gomez et al., 2011) where conditions are uncertain and broadband connection is unstable, or for controlling multi-robot systems where the operator's workload is affected by the number of robots under their supervision (Sellner et al., 2006; Wang and Lewis, 2007).
Beyond robotics, VA is also discussed in the context of multi-agent systems (MAS), where interacting autonomous agents participate in the pursuit of a collective organisational goal (Van der Vecht et al., 2007). This type of system requires some coordination (emergent or explicit) between actors. One extreme type of coordination involves fully autonomous agents that generate their own local decisions without any point of control to influence the emergent MAS behaviour. The other extreme is fully controlled coordination, which implies a single point of control that explicitly determines and assigns tasks to each actor. In the latter, each agent still carries out its assigned task autonomously, but it does not decide for itself what actions to perform. With incomplete information about the environment in which the agents are deployed, fully controlled coordination is susceptible to failure, and flexibility at the local level is required. This motivates the consideration of VA to dynamically adjust coordination rules, as well as role and interaction definitions, within the system.
Dimensions of Variable Autonomy
Variable autonomy approaches vary in terms of which aspects of autonomy are adjusted, by whom (human, agent, or both), how (continuous or discrete), why (pre-emptive or corrective), and when (design phase, operation, etc.). On the one hand, autonomy is composed of many facets that can be curtailed (Castelfranchi and Falcone, 2003;Bradshaw et al., 2004): these include the level of permissions (adjusting which actions the system can undertake autonomously), obligations (number of tasks allocated to the system) or capabilities (regulating access to information or to other agents). Thus, a first dimension of enacting variable autonomy involves concretising exactly which aspects of autonomy are in fact variable. On the other hand, the adjustments of the level of autonomy can be performed by the human in what is known as human-initiative, or by either the human or the agent in mixed-initiative approaches (Marble et al., 2003). Specifying who has the ability to allocate autonomy determines the level of human involvement at the meta-level of autonomy control and requires forethought on which considerations trigger a possible change in autonomy levels. Indeed, in human-initiative approaches the operator needs to be presented with the relevant information to decide on autonomy adjustment. Additionally, in mixed initiative approaches the system needs to be programmed with the conditions that trigger a change in autonomy level. Deciding who gets to change the level of autonomy and when is therefore a key dimension in VA architectures.
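To make these dimensions concrete, the sketch below encodes them as a small design-time policy object in Python. Every name here (AutonomyAspect, Initiative, VariableAutonomyPolicy, the trigger predicates) is a hypothetical illustration of the dimensions just described, not a construct taken from the cited literature.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class AutonomyAspect(Enum):
    PERMISSIONS = auto()   # which actions the system may take autonomously
    OBLIGATIONS = auto()   # how many tasks are allocated to the system
    CAPABILITIES = auto()  # access to information or to other agents

class Initiative(Enum):
    HUMAN = auto()  # only the operator may adjust the level of autonomy
    MIXED = auto()  # either the operator or the agent may propose a change

@dataclass
class VariableAutonomyPolicy:
    """Documents, at design time, what is adjustable, by whom, and when."""
    adjustable: set           # subset of AutonomyAspect members
    initiative: Initiative
    # Context predicates that propose a LOA change, e.g.
    # {"connection_lost": lambda state: state["latency_ms"] > 500}
    triggers: dict = field(default_factory=dict)

    def proposed_changes(self, state):
        """Names of all triggers that fire for the current context."""
        return [name for name, check in self.triggers.items() if check(state)]
```

Forcing these choices into an explicit artefact at design time is what makes the later accountability argument possible: the record of which aspects were adjustable, and who was authorised to adjust them, exists before the system is ever deployed.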
Both the dimension of which aspects of autonomy are adjustable and the dimension of who gets to adjust them and when are considered in the literature in terms of their influence on the design and effectiveness of VA systems (Kaber et al., 2001; Castelfranchi and Falcone, 2003; Scerri et al., 2003). When designing for VA, it is necessary to decide where the system lies along these two dimensions. This means that the design process explicitly includes considerations on identifying and documenting which aspects of autonomy are adjustable (including system permissions, access to information, etc.) and on the contexts that trigger a change in LOA. In fact, explicit deliberation on system capabilities and human control in different scenarios (ideally taken in conjunction with all stakeholders affected by a system) is precisely what is required for accountability in intelligent system design (Dignum, 2017; Theodorou et al., 2017), making VA an instance of accountability by design. Furthermore, in both human-led and mixed approaches, the level of autonomy can be adjusted depending on the context. It has been demonstrated that, for some models in which the human has the ability to take over and change the LOA at any point during the system's use, robot performance and use are improved due to the human's ability to act directly at the error level (Valero-Gomez et al., 2011). In such scenarios it is crucial that the human be aware of where their attention is needed, and of how to quickly tackle the problem when they take over (Sellner et al., 2006). This necessitates transparency, where not only must the appropriate quantity of information about the system be available, but the information must also be delivered at the appropriate time and in the appropriate manner, such that it is immediately understood and processed by the relevant human assigned to intervene. Whereas this aspect is a challenge in the implementation of VA, it immediately aligns such systems with the transparency standards increasingly demanded by society, such as those outlined in the European Commission (2019) "Ethics Guidelines for Trustworthy AI".
Variable Autonomy for Responsible AI
The implementation of VA is often discussed in relation to the operational requirements that ensure one (or many) human operator(s) can maintain control over the system on a technical level (influence over a system to adjust its actions). We argue that these same deliberations, when extended to the wider socio-technical level, give VA an upper hand in terms of accountability, responsibility, and transparency. For a system with VA to be effective, roles and responsibilities must first be explicitly defined. A role encompasses the set of well-defined tasks that any given entity is expected to independently execute within well-defined conditions of the overall system (Zambonelli et al., 2000). Only by explicitly defining which entities are capable and responsible for which tasks can it be appropriately determined at runtime who transfers control of what and to whom. In order for these entities to adequately fulfil their roles and responsibilities within the system, there must also be an appropriate means for information-exchange such that the current state of the system and state of the environment are well understood. Only by establishing this means of information exchange can the appropriate actor within the system determine when a transfer of control is needed and why.
Variable Autonomy for Accountability and Responsibility
The requirement of making explicit who does what and when extends beyond the roles of human operator and machine. In a socio-technical setting, all key stakeholders who both influence and are influenced by the system should be involved in assigning roles and responsibilities to the relevant actors. Such roles include (but are not limited to) designers, developers, operators, bystanders, and policy makers. With such definitions clearly in place, the value of accountability (a form of backward-looking responsibility) is fulfilled because an account of events and the responsible actors involved can be presented as needed.
Variable Autonomy for Transparency
Permissions and access to information are determined by role, such that each actor is capable of determining when and where their action is needed, as well as why they are required to act. In order to allow the relevant actor to (re-)gain awareness of the status of the system and environment, an account of the relevant events that have occurred must be accessible and available. The system must therefore be inspectable at the appropriate level of abstraction for the relevant entity (an operator and a developer, for instance, will have different views). That is, there must be a means of exchanging just enough information, at the appropriate time, between the appropriate actors, in an appropriate manner. With access to information that describes the reasons behind the decisions that were made, the system fulfils the value of transparency because the relevant individual is able to gain an understanding of where their attention is needed and how to appropriately respond.
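A minimal sketch of what such role-appropriate inspectability could look like, assuming a shared event log and a hypothetical role-to-abstraction-level mapping (none of these names come from the cited works):

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float
    actor: str     # which entity produced the event
    level: str     # abstraction level: "operational", "diagnostic", ...
    message: str

# Hypothetical mapping from role to the abstraction levels it may inspect.
ROLE_VIEWS = {
    "operator":  {"operational"},                 # where is attention needed?
    "developer": {"operational", "diagnostic"},
    "auditor":   {"operational", "diagnostic", "decision_trace"},
}

def view_for(role, log):
    """Return only the events a given role needs, at its level of detail."""
    allowed = ROLE_VIEWS.get(role, set())
    return [e for e in log if e.level in allowed]
```

The design choice illustrated here is that "just enough information" is a property of the role, not of the log itself: the same record supports the operator's quick intervention and the auditor's full decision trace.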
VARIABLE AUTONOMY IN PRACTICE: CREDIT-SCORING SYSTEM
In this section, we apply our proposal of VA to the case of the Apple Card, which was under investigation by the New York State Department of Financial Services (NYSDFS, 2021) after allegations that their credit-lending system, provided to them by Goldman Sachs, discriminated against women (Nasiripour and Natarajan, 2019). We study this use-case to highlight a growing public awareness of companies attempting to hide behind AI to avoid corporate responsibility. First, we describe the events that led to the allegations of gender bias and the conclusions that were drawn from the NYSDFS investigation. Then, we demonstrate how the requirements of VA outlined in the previous section can address the same issues that triggered the involvement of law enforcement in the first place. We conclude with some considerations about alternative solutions and reflect on how they compare to the VA approach we propose.
The NYSDFS launched their investigation into Apple and Goldman Sachs after many applicants voiced concerns of gender bias reflected in decisions made by the Apple Card credit-lending algorithm. This criticism was raised after the system granted male applicants significantly higher credit limits than their female spouses who, in some cases, had better credit scores. Numerous attempts to appeal resulted in the same response: the decision was made by an algorithm and there was no way to challenge its output. Apple representatives insisted that they do not discriminate, and yet they failed to provide a reasonable explanation for the disparity between credit limits.
While the New York State Department of Financial Services (2021) did not conclude that Apple and Goldman Sachs exhibited any unlawful discrimination against women, a lack of transparency and poor responsiveness to customer appeals were implicated. The Department emphasised that these two features are of particular importance when customer insight into the basis for their credit terms is little to none. Apple and Goldman Sachs failed to provide meaningful control over the situation, as the deployed system did not allow for the ability to track moral reasons for an outcome or to trace back to a responsible individual who could both understand the outcome and explain it to the contesting party in a timely manner. Apple's policy at the time required applicants to wait 6 months before appealing a decision made by the system. Only after the authorities intervened did the relevant actors present reasons for each individual outcome. If applicants had been able to contest the decisions effectively, e.g., speak with a representative who could explain the outcome instead of being told that "the computer said so", then the investigation might have been avoided. We argue that variable autonomy applied to such a case would demand transparency by design, ensuring that the relevant actors can intervene at the right moment and respond appropriately to the contesting individual, thus providing meaningful control over the system.
The first ailment of the Apple credit card programme was a lack of transparency. An effective VA approach requires transparency such that all actors along the causal chain are known, and their responsibilities made explicit prior to deployment of the system. Additionally, each actor must be able to obtain an adequate understanding of where their action is required if they are to fulfil their roles and responsibilities. This necessitates appropriate access to the information that is of particular relevance to each individual actor's role. Apple and Goldman Sachs' system presented applicants with insufficient explanations for the decisions that were made. Moreover, Apple representatives could not provide reasons beyond "the algorithm said so" because they had no insight into the system that Goldman Sachs supplied to them, i.e., the system was a "black box". It can be argued, however, that Apple accepted a role and its responsibilities when they launched the credit card programme. Without sufficient information about the system on which the programme's success heavily relied, they could not fulfil their responsibilities or maintain control.
The second ailment was poor contestability. VA's requirement of explicitly defining the roles and responsibilities of all actors along the causal chain primes the system for presenting an account of occurrences as needed. We propose the need for a more robust design approach in which Apple representatives are given appropriate role assignments within the wider socio-technical system that match the responsibilities they possess. This way, the individual who is tasked with inspecting the system at the appropriate level (there might be multiple levels of abstraction) and accounting for the decision steps that led to any given outcome would be known. This individual could respond as needed to a contested decision, and the involvement of law enforcement could have been avoided.
If a decision is found truly to be biased, then steps can be taken towards amending the fault within the system. With VA, humans can assume control, understand the state in which the system errs, and make a more informed decision for the contesting applicant, as well as for all other applicants using the same system. A discriminatory system is a failed system, and control must be transferred to an entity that can be challenged and held to account.
Other solutions to cases where an opaque system is suspected to be biased include the use of debiasing techniques and conformance testing. If the dataset used by credit lenders is suspected to be imbalanced, then one solution is to re-train the system on a more representative dataset (Noor, 2020). However, careful methods of data collection and bias-testing in pre-processing stages cannot account for all cases, so there should be robust mechanisms in place to handle the potential for failure. Moreover, data collection methods can be time-consuming and expensive, especially if they are to be performed at every occurrence of a detected fault. In any case, representative data cannot ensure a bias-free model, especially from a credit-scoring perspective (Hassani, 2020). Historic social biases can be reflected in the data and reinforced by their use in credit score calculations. Constraints can be applied to the model itself in an attempt to correct for bias, but it is difficult to ensure fairness across all categories without compromising performance (Kleinberg et al., 2016; Hassani, 2020). Another solution is to perform regular conformance testing, such as scheduled audits of the system. While this is useful for accountability (Raji et al., 2020), it is also a time- and labour-intensive task that requires major efforts from both parties and cannot always take place.
Still, only throwing more data at the problem or auditing the system periodically solves none of the issues that triggered the investigation into the Apple Card in the first place: the lack of transparency and the poor response to appeals. More robust governance mechanisms need to be in place prior to system deployment. Human-in-the-loop and human-on-the-loop are governance mechanisms that are inefficient in this case because they are labour-intensive and require humans to continuously oversee the system; this is simply not feasible. The human-out-of-the-loop model provides faster decision-making at a cheaper cost for financial institutions, but it is high-risk in uncertain situations. Therefore, a dynamic solution is more reasonable. Time investments can be made in the training of all actors within the system to inform them of their roles and responsibilities and ensure they are fit to serve. With responsibilities specified, each human knows their position along the chain, what parts of the system they can access, what parts they cannot, and who they need to contact in case an intervention is required, e.g., an appeal. The appropriate actor can be traced along the chain to localise the issue, providing reasons for the outcome and satisfying meaningful control. The VA solution is versatile and encompasses the values of accountability, responsibility, and transparency. By adhering to these values, Apple and Goldman Sachs would have maintained meaningful control over the system. However, the development of such a VA system is not without open challenges, and careful consideration is needed throughout its design, implementation, and use. There are several open questions to consider in the development of systems with VA. The answers to these questions are contextual and will vary between systems. They will also depend on the values of the stakeholders involved in the design. Moreover, successful coordination between actors is heavily dependent on both internal (the inner workings of the system) and external (environmental) factors that influence overall system stability. For a system with VA to be effective, decisions that require human (or machine) input must first be identifiable. Then, the appropriate entity to transfer control of these decisions to must be capable, available, and authorised to make those decisions without incurring significant costs to the system due to, e.g., decision-making delays or miscoordination (Scerri et al., 2003).
REFLECTIONS AND ADDITIONAL CONSIDERATIONS
In high-risk situations, the assumption that the human will be capable of taking over control immediately and without disruption can result in severe miscoordination and ultimately system failure. It is therefore important for the system to consider that the human is not guaranteed to respond to a request for input (Scerri et al., 2003). In other, lower-risk situations, user neglect is more tolerable, particularly when the alternative is disaster. Neglect tolerance is therefore an important consideration for the design and development of VA transfer-of-control mechanisms. Allowing agents to reason about decision uncertainty, costs, and constraints is one way to optimise this transfer-of-control problem (Scerri et al., 2003); a minimal sketch of this reasoning is given below. Other hybrid approaches combine logic reasoning with machine learning methods to solve the same problem. In such systems, the authority to transfer control need not be reserved only for the human but can also be mixed-initiative. A transfer of control that is triggered by the system is desirable in circumstances where, e.g., the human is unresponsive, under extreme stress, or in a suppressed cognitive state (Parasuraman et al., 1999).
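A minimal sketch of such expected-cost reasoning, loosely inspired by the transfer-of-control strategies of Scerri et al. (2003); the function, its parameters, and the example numbers are hypothetical simplifications rather than a faithful reproduction of that work:

```python
def should_transfer_to_human(p_response, quality_human, quality_agent, wait_cost):
    """Compare the expected utility of asking the human against acting
    autonomously.

    p_response    -- probability the human responds in time (neglect tolerance)
    quality_human -- expected decision quality if the human does respond
    quality_agent -- expected decision quality of the autonomous fallback
    wait_cost     -- cost incurred while waiting for a (possibly absent) reply
    """
    expected_if_asked = (p_response * quality_human
                         + (1 - p_response) * quality_agent) - wait_cost
    return expected_if_asked > quality_agent

# A responsive operator and a cheap wait favour deferring to the human:
print(should_transfer_to_human(0.9, 1.0, 0.6, 0.05))   # True
# An unresponsive operator in a time-critical moment does not:
print(should_transfer_to_human(0.1, 1.0, 0.6, 0.20))   # False
```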
The human's cognitive state is another important consideration in the development of VA systems. How situational awareness can be achieved and maintained warrants further research, as the form and means of presenting information are system-specific considerations. Cues from safety-critical systems, for instance, might vary in the amount of information they convey depending on the situation. This is critical for avoiding infobesity, i.e., overloading the human's cognitive abilities, which risks having the opposite effect on situational awareness (Endsley and Kiris, 1995). Finally, human factors research on trust demonstrates the need for transparency to enable the calibration of trust; otherwise, the human may misplace their trust in the system, resulting in the system's misuse or disuse (Lee and See, 2004).
CONCLUSIONS AND FUTURE WORK
As the deployment of AI systems continues to expand across industries, it is becoming increasingly important to ensure that control over any intelligent system is maintained in a way that is both meaningful and practical to its use. In this paper, we described the challenges in maintaining human oversight using governance mechanisms such as human-in-the-loop and human-on-the-loop. We argued that these mechanisms for control do not suffice for the maintenance of what is understood to be meaningful human control, as they do not necessarily encompass the requirements of tracking moral reasoning and tracing accountable individuals along the causal chain of responsibility. Moreover, dynamic contexts will demand systems with adaptable levels of human responsiveness. We further discussed the importance of effective governance over intelligent systems by highlighting accountability, responsibility, and transparency as the three main pillars of the responsible and trustworthy development and use of AI.
We presented the concept of variable autonomy as a means of ensuring the effective governance, and subsequent alignment, of systems with our socio-ethical and legal values. We introduced the design and implementation considerations needed: for example, the importance of clearly defining the roles and responsibilities of all actors along the causal chain of the system (from designer to end-user), such that all actors are aware of the set of tasks they are responsible for and the circumstances under which they must execute said tasks. This necessitates a means of information availability and exchange between relevant actors such that they are enabled to fulfil their assigned roles. Such are the requirements for VA systems to adhere to the values of accountability, responsibility, and transparency, which in turn ensure meaningful human control.
Moving forward, we intend to apply a quantitative analysis of VA systems for meaningful control. Further study is needed in determining the optimal action selection sequence for transfer of control given uncertainties, costs and constraints imposed on the system. In particular, we are interested in investigating the use of hybrid systems with VA, combining logic reasoning with machine learning methods to optimise this transfer of control problem. We will investigate these combined methods not only to determine who to transfer control to and when, but also in what manner. These are a few of the questions that we aspire to answer as a step towards determining how best to integrate VA into systems at large, encouraging their responsible development and deployment across all industries.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
LM did the majority of the writing. AT provided the original idea and contributed to the writing of the article. AAT contributed with text and comments. VD provided overall vision and comments.
FUNDING
This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement 952026. The AI4EU project, funded under the European Union's Horizon 2020 research and innovation programme under grant agreement 825619, has funded Aler Tubella's efforts. This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Theodorou was also funded by the Knut and Alice Wallenberg Foundation project RAIN, under grant 2020:0221. | 2021-09-14T13:27:08.630Z | 2021-09-14T00:00:00.000 | {
"year": 2021,
"sha1": "ae56ecb9d95fb10548c86b2fc1440c47071b138c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/frai.2021.737072/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae56ecb9d95fb10548c86b2fc1440c47071b138c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
238255033 | pes2o/s2orc | v3-fos-license | The epidemiology of uveal melanoma in Germany: a nationwide report of incidence and survival between 2009 and 2015
Purpose To calculate the overall incidence of uveal melanoma in Germany and to compare incidences in different German states. In addition, we computed the overall and cancer-specific survival rates nationwide. Methods Incidence data for the period between 2009 and 2015, covering the entire German population, were collected through the German Center for Cancer Registry. ICD-O-3 topography codes C69.3–C69.4 and histology codes for melanoma subtypes were used to collect the incidence data. 95% confidence intervals (95% CI) were calculated for all rates. Survival was estimated using the Kaplan–Meier method. The log-rank test was used for survival comparisons. Results This study comprised 3654 patients with uveal melanomas, including 467 (12.8%) with iridial and ciliary body tumors. The overall age-standardized incidence rate (ASIR) was 6.41 per million persons. The ASIR was generally higher in males than in females (6.67 (95% CI 6.37–6.98) vs. 6.16 (95% CI 5.88–6.45) per million). Higher crude incidence rates were noted in the northeastern states (12.5 per million (95% CI 10.5–14.7) in Mecklenburg-Vorpommern) compared with the southwestern states (2.1 per million (95% CI 1.7–2.6) in Hessen). The 5-year overall survival stood at 47%, while the cancer-specific survival stood at 84%. Multivariate analysis showed that women, younger patients, and patients living in Berlin achieved significantly higher overall survival. Conclusion The overall ASIR of uveal melanoma in Germany indicates that the disease is more common in males and that it follows the same geographical distribution previously noted in central European countries, with the highest incidence in the northern parts of Germany. Supplementary Information The online version contains supplementary material available at 10.1007/s00417-021-05317-7.
Introduction
Uveal melanoma is the most common primary ocular malignancy of adulthood [1]. It most frequently arises from the choroid and less commonly from the ciliary body and iris. In the USA, the age-standardized incidence was estimated at 5.1 per million, a figure that has remained constant over the years [2]. The disease rarely affects children [3]. The incidence increases with age and plateaus or declines after the age of 75 [2,4]. Previous studies have shown a relation between incidence and era of birth, gender, ethnicity, and geographical location. White populations and males were shown to have higher incidences of the disease [5]. Higher latitudes were associated with an increased incidence of uveal melanoma in the USA [6]. A study conducted between 1983 and 1994 reported similar latitude-associated differences in the incidence of uveal melanoma, with incidences of 2 per million in southern Europe, compared with more than 8 per million in northern countries [4]. This study, however, examined data from only one German state, Saarland, which represents only 1% of the German population. Saarland is located in the far southwest, on the border with France, and belonged to former West Germany, which had a different healthcare system from that of East Germany prior to unification in 1990 [7,8]. Therefore, we expected to find a disparity in the incidences of uveal melanoma and treatment outcomes between eastern and western states, including Saarland. We aimed to determine the crude and age-standardized incidence rates of uveal melanoma in Germany at the national level between 2009 and 2015. We further aimed to investigate disease characteristics and treatment outcomes, including the nationwide overall and cancer-specific survival rates of uveal melanoma patients.
Study population and methods
Data from 2009 to 2015, covering the entire population of Germany, were gathered from the German population-based cancer registries through the German Center for Cancer Registry at the Robert Koch Institute together with the Association of Population Cancer Registries in Germany (GEKID) [9]. Data collected pertained to patients 15 years of age or older. Further details on the methodologies of the German Cancer Registry are available elsewhere [10].
Patients were identified as having uveal melanoma by using the ICD-O-3 topography codes, including choroid (ICD-O-3 topography code C69.3), the ciliary body (and iridial) code (C69.4), and histology codes for melanoma and malignant behavior codes. Patients initially coded as suffering from retinal melanomas (n = 10) were recoded as having choroidal disease. This was done with the knowledge of previous practices where there was often miscoding of this group of patients [2]. We used the International Classification of Diseases version 10 for the purposes of coding causes of death. Cause-specific survival rates were determined by analyzing deaths caused by choroid, ciliary body, and retinal disease (C693, C694, and C692, respectively). We excluded patients with unknown or benign disease behavior as well as those whose diagnoses were reported on death certificates only (DCO) to ensure quality of the data collected. No patients were reported as having had uveal melanoma based on DCO in this cohort. The TNM classification version 7 was used to determine staging in 88.1% of the tumors, followed by version 6 in 11.4%. Staging was determined in a minority of cases through the use of the TNM versions 8 (n = 13) or 5 (n = 4) [11]. For practical reasons, we have joined them together in the analysis.
Population estimates as well as the German Standard Population Report of the 2011 Census, both provided by the Federal Statistical Office, were used to calculate crude (CR) and age-standardized incidence rates (ASIR) [12]. We plotted the incidences of uveal melanoma in the federal states at annual intervals according to the annual population estimates provided by the Federal Statistical Office. For the purposes of this study, the German states of Schleswig-Holstein, Hamburg, Lower Saxony, Bremen, North Rhine-Westphalia, Berlin, Brandenburg, Mecklenburg-West Pomerania, and Saxony-Anhalt were grouped as northern states. Conversely, the states of Hesse, Rhineland-Palatinate, Baden-Württemberg, Bavaria, Saarland, Saxony, and Thuringia were grouped as southern states. In order to examine the burden of disease in the regions of the former East Germany, the states of Brandenburg, Saxony, Thuringia, Mecklenburg-West Pomerania, and Saxony-Anhalt were grouped as eastern states. Data from the formerly divided Berlin were represented separately. Data for each group were then further subdivided by age and gender, and further analysis was conducted accordingly. Further information on data collection and analysis can be found elsewhere [13].
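As a minimal illustration of the direct age standardization described above, the following Python sketch weights each age group's crude rate by its share of the standard population; the numbers are made up and are not the study's data.

```python
def age_standardized_rate(cases, population, standard_pop):
    """Directly age-standardized incidence rate per million.

    cases        -- observed case counts per age group
    population   -- person-years at risk per age group
    standard_pop -- standard population per age group (e.g., from the
                    German Standard Population Report of the 2011 Census)
    """
    total_std = sum(standard_pop)
    rate = sum((c / p) * (s / total_std)
               for c, p, s in zip(cases, population, standard_pop))
    return rate * 1_000_000

# Illustrative three-age-group example (hypothetical values):
print(age_standardized_rate(cases=[2, 10, 30],
                            population=[4e6, 5e6, 3e6],
                            standard_pop=[20e6, 30e6, 15e6]))  # ~3.4 per million
```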
Software and statistical analysis
IBM SPSS version 27 was used to conduct the descriptive statistical analysis [14]. Microsoft Excel for Office 365 was used to organize data and calculate incidence rates [15]. Tableau version 2020.1.2 was used to map the results on OpenStreetMaps and create incidence graphs [16]. The Kaplan–Meier method was used to calculate survival rates. The log-rank statistic was used to compare survival rates among different groups. A p-value of 0.05 or lower was considered significant for two-tailed tests. We calculated confidence intervals at the 95% level [17,18]. Directly standardized rates were calculated using methods described elsewhere [17]. Annual percent change (APC) was calculated using the Joinpoint software version 4.9.0, with the permutation test used for model selection [19]. We also conducted multivariate Cox regression analyses, using the "survminer" (version 0.4.9) and "survival" (version 3.2-10) packages in R version 4.0.4 (2021-02-15; R Foundation for Statistical Computing), to estimate the hazard ratios for the influence of age at diagnosis, sex, topography, and geographical location on overall and cancer-specific survival.
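The survival analyses were performed in R, as stated above. For illustration only, the sketch below reproduces the same sequence of steps (Kaplan–Meier estimation, a log-rank comparison, and a multivariate Cox model) in Python with the lifelines library, on a hypothetical toy data set.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months observed, death indicator, covariates.
df = pd.DataFrame({
    "months": [12, 60, 34, 80, 5, 47, 23, 66],
    "died":   [1, 0, 1, 0, 1, 0, 1, 0],
    "age":    [70, 54, 66, 72, 75, 61, 69, 52],
    "male":   [1, 0, 1, 0, 0, 1, 1, 0],
})

# Kaplan-Meier estimate of overall survival.
kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["died"])
print(kmf.survival_function_)

# Log-rank comparison of survival between the sexes.
m, f = df[df["male"] == 1], df[df["male"] == 0]
print(logrank_test(m["months"], f["months"], m["died"], f["died"]).p_value)

# Multivariate Cox regression: hazard ratios for age and sex.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])
```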
Patients' characteristics
Of the 3654 uveal melanoma diagnoses reviewed in this study, 3187 (87.2%) were choroidal melanomas (Table 1). Mean age at presentation was 65.41 (95% CI: 64.98–65.84) years, while the median stood at 67.5 (range = 83.5, interquartile range: 18.69) years. No significant differences in age at presentation were detected between men and women (t = − 1.9, p-value = 0.055) or between patients with choroidal versus ciliary body tumors (t = − 1.7, p-value = 0.8) (Fig. 1, Supplementary Table 1). The overall age-standardized incidence of uveal melanoma was 6.41 per million (95% CI 6.21–6.62). The incidence of uveal melanoma was higher among men than women (6.67 (95% CI 6.37–6.98) vs. 6.16 (95% CI 5.88–6.45) per million). Northern states had higher crude incidence rates than southern states (7.65 (95% CI 7.33–7.97) vs. 5.21 (95% CI 4.95–5.48)) (Fig. 2, Supplementary Table 5). Mecklenburg-Vorpommern, a northern state, had the highest crude incidence rate among all German states (12.5 per million; 95% CI 10.5–14.7), while Hessen, a south-central state, had the lowest incidence rate (2.1 per million; 95% CI 1.7–2.6). Overall, incidences of uveal melanoma were higher in males than in females (Fig. 1). The ASIR in males peaked at 1.32 per million in the 70–74 age group; in females, it peaked in the same age group at 1.1 per million. Of all patients, 55.7% were diagnosed at ≥ 65 years of age, while only 7.5% were diagnosed at 15–45 years of age. The ASIR fluctuated over the years, reaching a peak of 7.8 per million in 2010 and ending in a trough of 5.2 per million in 2015. The overall trend showed a slightly decreasing incidence, with an APC of − 2.8 (95% CI − 9.6 to 4.5, p-value = 0.359) (Fig. 1B). The details of the incidence rates are presented in Supplementary Tables 2, 3, 4, and 5.
Therapy
Records reported details of treatment for 253 (6.9%) patients. Of these, 79 (31.2%) received radiation therapy only, 60 (23.7%) underwent operations, and 39 (15.4%) were treated with both irradiation and a surgical intervention (Supplementary Table 7).
Discussion
Melanoma is the most common primary intraocular malignancy in adults, but it may affect orbital tissues at lower rates. Singh et al. [2] reported an age-standardized incidence of 5.1 per million in the US population. This reported value nearly matched our findings with regard to central Germany. The age-specific incidence in the US population increased with age to reach a peak in females at between 65 and 69 years of age and in males at between 70 and 74 years of age. The previously mentioned US Surveillance, Epidemiology, and End Results Program "SEER" study showed a lower relative incidence of the disease among African Americans and Asian Americans. Unfortunately, the German national registry does not specify the ethnicity of patients. It would be worthwhile to conduct a study that examines ethnic variations in uveal melanomas, given the recent flow of refugees from both the Middle and Far East regions [20]. A study that included data from European cancer registries between 1983 and 1994, including Saarland (a German state, as aforementioned), reported incidence rates ranging from < 2 per million in southern Spain and Italy to > 8 per million in Norway and Denmark in the north [4]. Our study showed a similar distribution within the mentioned range, but within the same country. The observed higher incidences in northern and eastern Germany compared with the southern and western regions of the country may be attributable to ethnic variations, with the movement of populations with pigmented or less fair skin over generations from western and southern Europe on the one hand, and, on the other, an ethnic make-up in the northern states similar to that of the Nordic countries. Fair skin has been associated with an increased incidence of uveal melanoma, possibly due to unusual exposure to sunlight [21]. However, the relationship to sunlight exposure is contradictory in the literature [6,22]. In this study, patterns of distribution of uveal melanoma were illustrated in maps, rather than as point gradients, because some German states extend over several longitudes.
The highest reported population-based incidence worldwide was among Australian men, at a rate of 10.9 per million, while the lowest population-based incidence was reported in Japan, with 0.3 per million [23]. The mentioned high rate in Australia may be attributable to its predominantly White population of European origin, which is susceptible to the disease. A previous study used two different methods to calculate the incidence of uveal melanoma by integrating data from the population registry of Münster, a city in the federal state of North Rhine-Westphalia, with data from two case-control studies in that region. Two estimates, 2.3 and 8.6 per million, were reported [24]. We calculated an incidence of 5.7 per million for the same state, which is close to the mean of the two estimates from the earlier study. The earlier study by Virgili et al. also reported an incidence of 4.5 per million in the state of Saarland, approximating the incidence rate reported for the same state by our study (4.3 per million) [4].
Contrary to the increasing incidence of skin melanoma in the USA and other countries, uveal melanoma has shown varying annual incidence trends. For example, while Sweden has experienced a declining incidence of uveal melanoma over the years [22], Canada has had a small annual increase in incidence over time [25]. Our study has found a slight decline in incidence over time, a finding similar to that from north European countries. Our study showed varying incidence trends in each German state as well. We believe that further follow-up studies are vital for a better understanding of long-term trends.
Older age at diagnosis, as well as a diagnosis of ciliary body tumors, was associated with worse overall survival. Previous studies have reported increased incidences of uveal melanomas as well as worse prognoses in older age groups [26,27]. The better survival in younger age groups was attributed to the underlying histological features of uveal melanomas commonly associated with younger age. Other factors that may have influenced the measured incidence rates include social and psychological factors that drove certain patients to seek diagnosis and treatment. Furthermore, specific mutations, such as the SF3B1 mutation, were found to be associated with younger age, choroidal involvement, and a better prognosis [28]. On the other hand, BAP1 mutations were found to be associated with older age and worse prognosis. Notably, ciliary body involvement was found to be an independent prognostic factor for lower survival in a number of studies [26,29,30]. However, in our study, the cancer-specific survival was not significantly attributable to topography. The lower survival in the other studies was attributed to a higher rate of metastasis [31]. The risk of metastasis in these cases was reportedly related to tumor size, microvascular patterns, and monosomy 3 and 8q gain [32]. The better survival rates for women were a notable finding of our study. This, as a number of studies have suggested, could be attributed to women's hormonal profile, varied genetic predispositions compared with men, and/or differences in occupational factors, including the amount of sun exposure [21, 33–35]. Other studies have attributed the differences to a tendency towards tumor extension, or to the involvement of ciliary body extensions [36]. We believe that these clinical aspects should be considered in parallel with the relevant molecular, genetic, and environmental backgrounds of each patient.
In general, patients from Berlin showed both higher overall and cancer-specific survival, while those from the eastern and western states of Germany showed similar 5-year overall and cancer-specific survival. On the other hand, the graphs showed lower overall survival beyond the 5-year period, indicating a possible long-term effect of health status, an effect of the age distribution between eastern and western states, or other factors that were not considered in this study. However, our analysis did not yield any differences in the age distribution of uveal melanomas between different regions of Germany.
Most of the patients in our study were reported to have been diagnosed with the non-specified 8720/3 malignant melanoma histological subtype. This may give an impression that pathological studies were either not carefully done or were erroneously reported by the registrars. On the contrary, this highlights the dependency on clinical diagnosis for the management of uveal melanomas, currently the cornerstone of the diagnosis of these tumors [37]. Clinical diagnosis has been reported to have had an accuracy of up to 99% [38]. This is clear for medium-sized to large melanomas, but small ones are often hard to diagnose. Moreover, this unspecified histological coding can be attributed to the success of clinical management, resulting in the unavailability of pathological samples. On the other hand, some authors have proposed that this was related to the increasing complexity of incidence calculation [39].
The low DCO rate (not presented) indicates a relatively high level of reliable data that can be used in further research. It shows the accuracy of the reporting system in delivering accurate information at presentation or during management, allowing a chance for further follow-up. On the other hand, the discrepancy between the method of confirming diagnoses and histopathological staging indicates a gap in the reporting of the histopathological staging of uveal melanomas. Establishing a central cancer registry for uveal melanomas could help to complement the national general cancer registry more accurately.
This study is the first to report on the nationwide incidence of uveal melanomas in Germany. As with all registry-based studies, it has limitations. Some differences between states can be attributed to the degree of registry completion. Two of the southern registries (Hessen and Baden-Wuerttemberg) were founded in 2007-2009 and, therefore, are still being built up. Moreover, variations in the details reported by registries, including treatment and outcome, could have resulted from a lag time between diagnoses and reporting times. Furthermore, the small number of patients within some groups and states may result in inaccurate subgroup and trend analyses. Further efforts should be invested in follow-up studies and training ophthalmologists and cancer registrars on reporting uveal melanomas and other cancers as well.
In conclusion, the age-standardized incidence of uveal melanoma in Germany was 6.41 per million. Men showed higher standardized incidences and lower survival than women. Patients with choroidal disease showed higher survival than those with ciliary body or iridial tumors. Patients from the former East Germany showed 5-year survival rates similar to those from the former West Germany. | 2021-10-04T13:32:26.107Z | 2021-10-04T00:00:00.000 | {
"year": 2021,
"sha1": "9c5baea09ac17dd8084dc608aaba69329a215d71",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00417-021-05317-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "9c5baea09ac17dd8084dc608aaba69329a215d71",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250311024 | pes2o/s2orc | v3-fos-license | Exploring psychopathy traits on intertemporal decision-making, neurophysiological correlates, and emotions on time estimation in community adults
There are certain characteristics of psychopathy that may be related to changes in intertemporal choices. Specifically, traits such as impulsivity or lack of inhibitory control may be associated with a more pronounced discounting function in intertemporal choices (IC) and, in turn, this function may be based on changes in the basic mechanisms of time estimation (TE). Therefore, this study aimed to examine potential differences in neurophysiological correlates, specifically through N1, P3, and LPP measurements, which may be related to TE and IC, examining their modulation according to psychopathic traits, different emotional conditions, and different decision-making conditions. This experimental study included 67 adult participants (48 women) from the northern region of Portugal, who performed an intertemporal decision-making task and, of those, 19 participants (16 women), with a mean age of 25 years (SD = 5.41) and a mean of 16 years of schooling (SD = 3.37) performed the time estimation task. The instruments/measures applied were MoCA, used as a neurocognitive screening tool; the Triarchic Psychopathy Measure (TriPM), a self-report instrument with 58 items that map the core features of psychopathy along three facets – boldness, meanness, and disinhibition – and considers them continuously distributed among the general population; intertemporal decision-making and time estimation tasks – for the time estimation task, the stimuli consisted of 45 color images extracted from the Nencki Affective Picture System (NAPS). In the TE task, there was an almost significant effect of disinhibition on the values of θ, with higher values on this variable associated with greater values of θ in the unpleasant emotional condition. In the IC task, there were no significant effects of any psychopathy measure on the values of the gains and losses ratios. In addition, the analysis of the neurophysiological correlates of the IC task did not reveal a main effect of the decision-making condition, nor effects of any psychopathy measure on the N1 and P3 amplitudes. The analysis of the neurophysiological correlates of the TE task revealed that higher meanness values are associated with smaller N1 amplitude in the pleasant emotional condition, whereas higher disinhibition values are associated with greater N1 amplitude in the pleasant emotional condition. Still in this task, higher disinhibition values were associated with a smaller LPP amplitude in the unpleasant emotional condition. The increase in the distribution of attention resources towards time and/or the increase in activation states, including those originated by responses to emotional stimuli, may be the main factor that alters the way impulsive individuals and, presumably, individuals with high psychopathy, consider time when making decisions.
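The "discounting function" referred to in this abstract is most commonly formalised in the intertemporal-choice literature as Mazur's hyperbolic model, V = A / (1 + kD), where steeper discounting (a larger k) is the pattern typically linked to impulsivity. The article does not state which model it fits, so the Python sketch below, including the grid-search estimator, is purely illustrative and not the authors' analysis.

```python
import numpy as np

def hyperbolic_value(amount, delay, k):
    """Subjective value of a delayed reward, V = A / (1 + k * D).
    A larger discount rate k means the delayed reward loses value faster."""
    return amount / (1 + k * delay)

def fit_k(delays, indifference_points, amount):
    """Grid-search the k that best fits observed indifference points
    (a hypothetical estimation procedure for illustration only)."""
    ks = np.logspace(-4, 1, 500)
    sse = [sum((hyperbolic_value(amount, d, k) - v) ** 2
               for d, v in zip(delays, indifference_points))
           for k in ks]
    return ks[int(np.argmin(sse))]

# Example: a 100-unit reward judged worth 50 units at a 30-day delay
# implies k of roughly 1/30.
print(fit_k(delays=[30], indifference_points=[50], amount=100))
```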
Introduction
Psychopathy is a personality structure characterized by behavioral, emotional, and interpersonal problems (Patrick et al., 2009; Venables et al., 2014). At the emotional level, individuals with psychopathy often exhibit characteristics such as a lack of self-blame or guilt, superficial emotions, and a lack of empathy (Tuvblad et al., 2017). At the behavioral level, these individuals exhibit antisocial traits, impulsiveness, parasitic lifestyles, and difficult-to-control behaviors (Thompson et al., 2014). At the interpersonal level, traits such as superficial charm, manipulation, and egocentricity are evident (Tuvblad et al., 2017). As such, psychopathy is viewed as a personality construct that is often associated with antisocial behavior, use of alcohol and other psychoactive substances, and membership in antisocial peer groups (Wu and Barnes, 2013). This personality structure is also often associated with childhood experiences and traumas: children who experienced abuse are more likely to develop behavioral problems in childhood and personality problems, namely of a psychopathic type, in adulthood (Wu and Barnes, 2013).
When comparing people with and without a diagnosis of mental illness, the former have higher rates of offending and are more likely to reoffend (Santana, 2016). The prevalence of psychopathy in the general population ranges from 0.6% to 4.0%, with evidence of higher prevalence in men than in women (Tuvblad et al., 2017). Understanding psychopathy is therefore critical, as it is associated with disruptive behaviors such as crime, aggression, recidivism, substance abuse, and sexual offending (Somma et al., 2016). Cleckley (1941) defined psychopaths in his work The Mask of Sanity, based on his own clinical experience with such individuals. He described 16 characteristics evidenced by psychopathic individuals, dividing them into three distinct categories: positive adjustment (intelligence, rationality, and absence of delusions or nervousness), behavioral deviance (untrustworthiness, irresponsibility, promiscuity, antisocial and impulsive behavior, and a lack of life goals), and emotional and interpersonal deficits (lack of remorse/shame, egocentricity, disloyalty, loss of insight, poverty of emotional reactions, and absence of genuine feelings) (Cleckley, 1941; Crego and Widiger, 2016; Patrick et al., 2009).
Men and women differ in the behavioral presentation of psychopathy (e.g., Falkenbach et al., 2017). Men with high psychopathy tend to exhibit externalizing behaviors, such as antisocial and aggressive behavior, and often use psychoactive substances (Falkenbach et al., 2017). Women with high psychopathy are often more aggressive and emotionally unstable, as well as manipulative and seductive, and frequently manage to deceive others, even close family members, to achieve their goals (Colins et al., 2016). Women with high levels of psychopathy are also more likely to suffer from anxiety, depression, and borderline personality disorder (Colins et al., 2016).
Psychopathy has been characterized using personality traits as described above (Verschuere et al., 2018), and tools such as the Psychopathy Checklist-Revised (PCL-R) have been used to assess it, grouping its traits into four facets: interpersonal, affective, lifestyle, and antisocial (Thompson et al., 2014). These facets are in turn grouped into two factors: Factor 1 includes the interpersonal and affective facets, and Factor 2 includes the lifestyle and antisocial facets (Thompson et al., 2014). Specifically, at the interpersonal level, Factor 1 includes traits such as superficial charm, grandiosity, pathological lying, and a manipulative style (Thompson et al., 2014; Verschuere et al., 2018). At the affective level, it includes shallow affect, callousness/lack of empathy, failure to accept responsibility for one's behavior, and absence of remorse/guilt (Thompson et al., 2014; Verschuere et al., 2018). Within Factor 2, the antisocial facet includes early behavioral problems, poor impulse control, juvenile delinquency, revocation of conditional release, and criminal versatility (Thompson et al., 2014; Verschuere et al., 2018). The lifestyle facet includes sensation-seeking/proneness to boredom, impulsivity, irresponsibility, a parasitic lifestyle, and absence of long-term goals (Thompson et al., 2014; Verschuere et al., 2018).
In another approach, Patrick et al. (2009) hypothesized that psychopathy is based on biological and behavioral traits, and developed a triarchic model of psychopathy comprising three dimensions: boldness, meanness, and disinhibition (Patrick et al., 2009; Somma et al., 2016). Disinhibition is manifested by difficulty controlling impulses and behavior, lack of planning, susceptibility to influence, and the need for instant gratification (Patrick et al., 2009; Somma et al., 2016). Meanness is characterized by lack of empathy, contempt, lack of close attachments, and a tendency to seek stimulation through cruelty to others (Patrick et al., 2009; Somma et al., 2016). Boldness is manifested in the ability to remain calm in the face of danger and to recover quickly from stress, and includes high self-confidence, social efficacy, and tolerance of threat (Patrick et al., 2009; Somma et al., 2016).
There are certain characteristics of psychopathy that may be related to changes in intertemporal choices. Specifically, traits or characteristics such as impulsivity or lack of inhibitory control may be associated with a more pronounced discounting function in intertemporal choices and, in turn, this function may be based on changes in the basic mechanisms of time perception, namely of time estimation. For example, Ainslie (1974) calls the smaller, more immediate reward "impulsive", and the larger, delayed reward "self-controlled". Based on this distinction, it is possible to formulate the thesis that individuals with lower scores on the disinhibition facet of psychopathy manifest a less pronounced discounting function (they are more self-controlled) than individuals with higher scores on the disinhibition facet (they are more impulsive). If so, the latter may have a less accurate time estimation mechanism.
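To make this contrast concrete, the sketch below illustrates hyperbolic delay discounting, the model most commonly used to formalize a "more pronounced discounting function" (the k values and monetary amounts are arbitrary assumptions for illustration; this is not material from the study):

```python
# Hyperbolic delay discounting: V = A / (1 + k * D).
# Larger k means steeper discounting, i.e., more "impulsive" valuation.
def hyperbolic_value(amount, delay_days, k):
    return amount / (1.0 + k * delay_days)

# Choice between 10 EUR now and 15 EUR in 30 days, for a self-controlled
# chooser (small k) and an impulsive chooser (large k).
for k in (0.005, 0.05):
    delayed_value = hyperbolic_value(15.0, 30, k)
    choice = "delayed 15 EUR" if delayed_value > 10.0 else "immediate 10 EUR"
    print(f"k={k}: delayed reward worth {delayed_value:.2f} now -> {choice}")
```

With k = 0.005 the delayed reward is still worth about 13.04 and is chosen; with k = 0.05 it shrinks to 6.00 and the immediate 10 EUR wins, which is the "impulsive" pattern described above.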
Little is known about time perception in individuals with high psychopathy. However, it is known that patients with Borderline Personality Disorder (BPD) appear to estimate significantly longer time intervals between events than healthy controls (Berlin et al., 2005), suggesting an accelerated subjective sense of time in these patients. The results reported in the literature are inconsistent, however. Berlin and Rolls (2004) found that patients with BPD perceive time more accurately than patients with schizophrenia, producing time intervals closer to the target intervals (10, 30, 60, and 90 s), though significantly shorter than those produced by healthy controls. These patients also overestimated time intervals, supporting the argument for an accelerated subjective time perception, although no statistically significant differences were observed compared to controls. In a further study, people with borderline or schizotypal personality disorder also showed preserved time perception compared to healthy individuals (Berlin et al., 2010). Despite these inconsistencies, some core features of BPD, particularly impulsivity, may be associated with changes in time perception, especially an accelerated subjective sense of time, but the available evidence is inconclusive.
Regarding Antisocial Personality Disorder (ASPD), Schulreich et al. (2013), in a study designed to examine a dual-process model of psychopathy, implemented an experimental paradigm in which participants had to judge when 1 s had passed and, at the end, received positive or negative feedback based on their performance. Egocentric impulsivity, assessed with the Psychopathic Personality Inventory-Revised (PPI-R) (Alpers and Eisenbarth, 2008; Lilienfeld and Andrews, 1996), was the only personality trait that had an impact on time perception. Participants with higher impulsivity scores produced longer time-interval estimates (Havik et al., 2012), and individuals with higher impulsivity also estimated longer time intervals than individuals with lower impulsivity (Correa et al., 2010). Consequently, patients with personality disorders, or at least those with markedly impulsive traits, tend to overestimate time intervals.
When deciding, it is necessary to weigh costs and benefits that occur at different points in time. Since the perceived value and utility of monetary outcomes change depending on when they are obtained, adjustments are needed to make these intertemporal amounts comparable. This adjustment is made using a discount rate, from which discount factors can be calculated; these, in turn, convert amounts received at different times into comparable present values (Dasgupta et al., 2000). Koopmans (1960) pioneered the formal treatment of time preference, although research on the subject had been conducted as early as 1912 (Fishburn and Rubinstein, 1982). Several subsequent contributions have significantly expanded knowledge on this topic. The concept of delay discounting derives from the behavior of economic agents and is expressed by a set of standard economic axioms. These axioms characterize individuals' intertemporal preferences through formalized assumptions that are at the heart of any discounting model; as the behavioral assumptions change, so does the discounting model.
One of the key behavioral axioms defines impatience and procrastination. If the outcome is positive, receiving it after a shorter delay is preferred to a longer one (impatience). If the outcome is zero, the person is indifferent to the time period in which it occurs. If the outcome is negative, and therefore aversive, a longer delay is preferred, which corresponds to procrastination.
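As a worked illustration of this axiom (a standard textbook formulation, not taken from the study), the exponential discount factor converts an amount A_t delivered after t periods into its present value:

```latex
DF(t) = \frac{1}{(1+r)^{t}}, \qquad PV = DF(t)\, A_t
```

With r = 0.10, a gain of 100 at t = 1 is worth 100/1.10, about 90.9, today, and only 100/1.21, about 82.6, at t = 2, so a positive outcome is preferred sooner (impatience); by the same arithmetic, the present cost of a loss of 100 shrinks the longer it is delayed, so deferral (procrastination) is preferred.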
Findings from intertemporal decision-making studies suggest that people generally avoid risk when choosing between options with possible versus certain outcomes, and that the time interval between choosing and realizing a win or loss is an important factor in decision-making, because individuals prefer to profit first and lose later. However, few studies have focused on the details of decision-making in individuals with overt psychopathic personality traits, and only one study has examined intertemporal decision-making in these individuals (Blackburn et al., 2012). The literature suggests that frontal-limbic processes are affected in psychopathy, which can support patterns of disinhibition and impulsivity that influence the way these individuals make decisions and organize them over time. It is plausible that individuals with greater disinhibition would show a more marked preference than normal for smaller immediate gains and larger delayed losses. This tendency would not be as pronounced in individuals with greater boldness and meanness, as they are not as impulsive as the former.
Changes in intertemporal choice processes are thought to be associated with a wide range of decision-making dysfunctions and failures in planning (Angeletos et al., 2001). Since the neurophysiological correlates of these dysfunctions are common to psychopathy (Fowles, 1980;Perry and Carroll, 2008;Plichta et al., 2009), it is possible to assume there are also changes in intertemporal choice processes among individuals with high psychopathy. However, a pattern of preference, which is more pronounced than the norm, for a smaller immediate reward or a larger loss in the future, among individuals with higher scores on the disinhibition facet of psychopathy, still needs to be demonstrated. Once demonstrated, it is important to understand whether it is associated with characteristics of psychopathy related to basic emotional and cognitive aspects, such as time perception, which can be studied through time estimation tasks. The neurophysiological correlates of these basic mechanisms can be examined through electroencephalography (EEG) techniques and the extraction of event-related potentials (ERP).
In its raw form, EEG provides crude measures of brain activity, and EEG tracings represent the accumulated activity of different neural sources (Luck, 2014). However, neural responses related to specific sensory, cognitive, and motor events can be extracted from the EEG, usually through a simple averaging technique. These responses are called ERPs, to indicate that they are electrical potentials associated with specific events or stimuli (Luck, 2014). Concretely, ERPs are extracted from the electroencephalographic signal by time-locking epochs to the occurrence of a relevant stimulus and averaging across many trials (Luck, 2005). Generally, the mean or peak amplitude, as well as the peak latency, of the resulting waves is measured (Polich, 2012).
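A minimal sketch of this averaging technique, assuming a generic two-dimensional EEG array and known event positions (illustrative only, not the study's pipeline):

```python
import numpy as np

def extract_erp(eeg, event_samples, sfreq, tmin=-0.2, tmax=0.8):
    """Average baseline-corrected epochs time-locked to events.

    eeg: array (n_channels, n_samples); event_samples: stimulus onsets.
    """
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([eeg[:, s - pre:s + post] for s in event_samples])
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)   # ERP: (n_channels, n_times)

rng = np.random.default_rng(0)
fake_eeg = rng.normal(size=(1, 10_000))       # one fake channel at 250 Hz
erp = extract_erp(fake_eeg, event_samples=[1000, 2000, 3000], sfreq=250)
print(erp.shape)                              # -> (1, 250)
```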
The literature refers to four main early stimulus-evoked ERP components in the case of visual stimuli (C1, P1, N1, and P2), which are recorded at parietal-occipital electrodes, roughly in the first 250 ms after stimulus presentation. Specifically, N1 is a negative wave that peaks between 150 and 200 ms (Folstein and Van Petten, 2008; Houston and Stanford, 2001; Lijffijt et al., 2009) and reflects the selective amplification of sensory information to encode the stimulus (Hillyard et al., 1998). The N1 amplitude is also sensitive to attention (Näätänen and Picton, 1987), and there is evidence that the amplitude of the auditory N1 is increased in individuals with more pronounced psychopathic traits on facets 1 (interpersonal) and 4 (antisocial) of the Psychopathy Checklist-Revised (PCL-R) (e.g., Anderson et al., 2015). Many studies have shown that stimulus parameters, such as luminance, contrast, and spatial attention, influence N1 (Hillyard et al., 1998; Mangun, 1995; Luck et al., 2000), as does the task being performed (Hopf et al., 2002; Vogel and Luck, 2000) and the state of activation. Increased N1 amplitudes have been associated with high levels of impulsivity. For example, impulsive-aggressive participants exhibited an increased N1 in response to visual stimuli, indicating enhanced attentional orienting (Gehring and Willoughby, 2002). More recently, increased N1 amplitude has been observed when participants with pronounced anxiety traits made an immediate decision (vs. a delayed decision), which also indicates that more attentional resources are being allocated. This result can be considered evidence that early attentional orienting contributes to the impulsive choices of anxious individuals (Xia et al., 2017). N1 and P3 have been used as indicators of attention, cognitive performance, and elaborative processing, both in healthy individuals and in clinical or subclinical samples, including individuals with high psychopathy. P3 is a prominent neural signature used to index higher-order cognitive processing and has even been used as a diagnostic tool.
P3 is a positive wave that occurs between 300 and 1000 ms after stimulus presentation; it has also been called the late positive complex (Barceló, 2003; Brydges and Barceló, 2018; Donchin and Coles, 1988; Friedman et al., 1978, 2001; Simson et al., 1977), because it includes several components with different time courses, topographies, and functional correlates. P3 has been associated with attentional processing (Fan and Han, 2008; Martín, 2012) and indexes the brain activity underlying the revision of a mental model induced by a stimulus (Donchin and Coles, 1988). If a stimulus provides information inconsistent with the mental model, the model is updated, and the amplitude of P3 is proportional to the amount of cognitive resources recruited during the update (Martín, 2012). Previous results show that higher levels of activation or greater task relevance lead to higher P3 amplitudes, reflecting a greater allocation of attention (Nieuwenhuis, 2011). This wave is sensitive to a variety of global factors, and its amplitude and latency change throughout life (Picton, 1992; Polich, 2004, 2007; Polich and Kok, 1995; Verleger, 1997). Specific experimental conditions (such as oddball tasks with frequent, rare, and target stimuli) allow the separation of two main components with different scalp distributions, which correlate with different functions: a frontal P3a that reflects the orienting of attention to unexpected events in the environment, and a centro-parietal P3b that may reflect rapid information processing when attentional and working memory mechanisms are involved (Barceló and Cooper, 2017; Polich, 2007). Thus, P3b has been associated with context updating (Vogel et al., 1998), the evaluation of stimuli (Kutas et al., 1977), the speed of allocation of attentional resources (Polich, 2007), processes related to response selection (Ouyang et al., 2011, 2013, 2015; Saville et al., 2011, 2014, 2015; Verleger, 1997, 2010), and the closing of a cognitive epoch (Gajewski and Falkenstein, 2011). The estimated neural origins of P3a are in the dorsolateral prefrontal cortex, the temporoparietal junction (Soltani and Knight, 2000; Friedman et al., 2001), and the anterior cingulate cortex (Fallgatter et al., 2002, 2004; Polich, 2007). P3b seems to be generated in temporo-parietal cortical regions (Di Russo et al., 2016; Polich, 2007; Soltani and Knight, 2000). It has been suggested that P3 reflects the response of the locus coeruleus-norepinephrine system to the outcome of internal decision-making processes and the consequent noradrenergic potentiation of information processing (Nieuwenhuis et al., 2005). There is already robust knowledge of the effects of various experimental conditions on P3, although there is no clear consensus on the neural and cognitive processes that P3 reflects. Gao and Raine (2009) published a meta-analysis of 38 studies (N = 2,616) assessing P3 modulation in antisocial behavior. A small but significant effect size was reported: compared to controls, the antisocial group exhibited reduced P3 amplitude and longer latency. The main findings suggest that prosocial behavior may be compromised in antisocial individuals due to a lack of inhibitory control, as well as impaired allocation of attentional resources to detect infrequent but relevant stimuli.
Through a systematic review, Pasion et al. (2018) found a clear relationship between antisocial behavior and a decreased P3 amplitude, with this decrease being primarily explained by impulsive-antisocial psychopathy traits. Conversely, affective-interpersonal traits are only associated with reduced P3 amplitudes in tasks with affective-emotional content, while, in cognitive tasks, there is evidence of an enlarged P3 (Pasion et al., 2018). Thus, it is assumed that the higher the scores on the disinhibition facet of psychopathy, the smaller the P3 amplitude in intertemporal decision-making tasks.
P3 and the LPP have been identified as electrophysiological responses to emotional events (van Dongen et al., 2018). The LPP, which many consider a variant of P3 in visual tasks, is a positive potential evoked by emotional stimuli and reflects top-down processes such as emotional regulation. The LPP is commonly identified at central and parietal sites, namely in emotional picture viewing tasks, approximately between 400 and 1000 ms (Hajcak et al., 2009, 2011). The LPP can continue for several seconds after stimulus presentation; it is characterized by a relative positivity at centro-parietal electrodes for emotional vs. neutral stimuli, and it reflects the allocation of attentional resources to salient events (Schupp et al., 2006; Wiens et al., 2011, 2012). Individuals with high psychopathic traits exhibit impaired emotional processing, mainly of negative stimuli, as revealed by deficits in the recognition of negative emotions (Dawel et al., 2012; Jusyte and Schönenberg, 2017; Schönenberg et al., 2016) and reduced autonomic responses after the presentation of negative stimuli (Fairchild et al., 2010; Flor et al., 2002; Levenston et al., 2000; López et al., 2013; Rothemund et al., 2012; Vaidyanathan et al., 2011). Despite these behavioral deficits, recent studies on the amplitude of the LPP evoked by visual emotional stimuli in individuals with high psychopathic traits have reported conflicting results (Medina et al., 2016). For example, unpleasant stimuli evoked a smaller LPP amplitude than neutral stimuli among individuals with high psychopathic traits, compared to those with low psychopathic traits, whereas both groups showed similar LPP amplitudes in response to pleasant and neutral stimuli (Medina et al., 2016). In other studies, individuals with high psychopathic traits did not exhibit differences between emotional and neutral stimuli (Carolan et al., 2014), whereas individuals with low psychopathic traits showed greater LPP amplitude for emotional than for neutral stimuli (Carolan et al., 2014; Hajcak et al., 2010). In addition, other studies have revealed no differences between groups with high and low psychopathic traits in LPP amplitudes evoked by emotional stimuli (e.g., Eisenbarth et al., 2013). A recent meta-analysis (Vallet et al., 2019) suggests a reduced LPP evoked by unpleasant stimuli and a normal LPP response to pleasant and neutral stimuli that would be specific to individuals diagnosed with psychopathy or, at least, with high psychopathic traits. Therefore, this study aimed to examine potential differences in the neurophysiological correlates of attentional processes, namely through N1, P3, and LPP measures, which may be related to time estimation and intertemporal decision-making, as well as to examine their modulation by different emotional conditions.
Although mainly exploratory, the following predictions were formulated for this study:
(a) the higher the score on the boldness and meanness facets, the smaller the N1 amplitude related to the images in the time estimation task, regardless of the emotional condition, indicating deficient processing of these stimuli associated with those facets of psychopathy;
(b) the higher the score on the disinhibition facet of psychopathy, the greater the N1 amplitude in the time estimation task, regardless of the emotional condition;
(c) the higher the score on the disinhibition facet of psychopathy, the greater the estimation of time intervals (overestimation), especially in the unpleasant emotional condition;
(d) no effect of emotional condition on time estimation is expected in relation to the boldness and meanness facets of psychopathy;
(e) the higher the score on the disinhibition facet of psychopathy, the greater the preference for smaller immediate gains over larger future gains, as well as for larger future losses over smaller immediate losses, indexed both by explicit choices (i.e., by the gains and losses ratios; see methodology) and by shorter response times under these conditions;
(f) the greater the preference for larger future gains or smaller immediate losses, the greater the N1 and P3 amplitudes related to the choice options, indicating greater cognitive effort in decision-making;
(g) the longer the estimated time, the greater the preference for smaller immediate gains and larger future losses;
(h) the higher the score on the disinhibition facet of psychopathy, the lower the P3 amplitude in the intertemporal decision-making task;
(i) the higher the score on the disinhibition facet of psychopathy, the lower the LPP amplitude in the unpleasant emotional condition, in the time estimation task;
(j) the higher the score on the boldness and meanness facets of psychopathy, the lower the LPP amplitude in the pleasant and unpleasant emotional conditions, but not in the neutral condition, in the time estimation task.
Participants
This study initially included 67 adult participants (48 women) from the northern region of Portugal, with a mean age of 26.6 years (SD = 5.99) and a mean of 15.7 years of schooling (SD = 3.33); however, nine participants were excluded because the wave morphology of their ERPs was undetectable. Thus, the final sample comprised 58 adult participants (40 women), with a mean age of 26.4 years (SD = 5.24) and a mean of 15.8 years of schooling (SD = 3.06), who performed an intertemporal decision-making task; of those, 19 participants (16 women), with a mean age of 25 years (SD = 5.41) and a mean of 16 years of schooling (SD = 3.37), performed the time estimation task. No other participant was eliminated under the remaining exclusion criteria (screened through self-report), namely neuropathologies, psychopathologies, or sensory and motor deficits, as well as self-reported substance abuse or use of medication that could interfere with performance of the experimental tasks. Inclusion criteria were Portuguese nationality and an age between 18 and 65 years. Neuropsychological assessments were conducted on all participants (N = 58).
This study was approved by the Ethics Committee of the Faculty of Psychology and Educational Sciences of the University of Porto, and, after a description of the study and its objectives, written informed consent was obtained from all participants. No financial compensation was awarded for participation in the study.

Neurocognitive screening measure

The Montreal Cognitive Assessment (MoCA) was used as a neurocognitive screening tool; its Portuguese version had an internal consistency of .775, measured using Cronbach's alpha (Freitas et al., 2011). The MoCA was developed specifically for the assessment of milder forms of cognitive impairment (Freitas et al., 2014). According to a validation study conducted with the Portuguese population, this instrument has an optimal cut-off point of 22 for mild cognitive impairment (Freitas et al., 2014).
Psychopathy measure
The Triarchic Psychopathy Measure (TriPM; Patrick et al., 2009; Portuguese version by Paiva et al., 2020) is a self-report instrument with 58 items that map the core features of psychopathy along three facets (boldness, meanness, and disinhibition) and considers them continuously distributed in the general population. The boldness subscale includes the adaptive characteristics of psychopathy, such as social dominance, low anxiety, and an adventurous spirit; the disinhibition subscale contains externalizing features, such as impulsivity and deficits in the affective regulation of anger and hostility; and the meanness subscale includes secondary externalizing items, such as lack of empathy and of close ties, insensitivity, and callousness. The TriPM is brief, easy to administer, open access, applicable to large groups, and has already been translated into 12 languages. A higher score on any of the subscales means that a greater number of features of the measured facet are present, or that these features are more pronounced. In the original study, the internal consistency (Cronbach's α) of the subscales was boldness (α = .87), meanness (α = .85), and disinhibition (α = .87) (Patrick, 2010); in the Portuguese version, it was boldness (α = .82), meanness (α = .85), and disinhibition (α = .81). The measures considered in this study were the scores on the three TriPM subscales.

Experimental tasks and stimuli

For the time estimation task, the stimuli consisted of 45 color images extracted from the Nencki Affective Picture System (NAPS; Marchewka et al., 2014), divided into three groups: (1) positive valence images, which constitute the pleasant condition; (2) negative valence images (M = 1.94, SD = 0.25), which constitute the unpleasant condition; and (3) neutral images (M = 5.07, SD = 0.14), which constitute the neutral condition. This classification was based on the valence ratings provided with the NAPS image database (Marchewka et al., 2014).
In each trial of the time estimation task, participants judged the time elapsed during the exposure of images of a given emotional valence, organized into three blocks (pleasant, unpleasant, and neutral images) administered without pause between them and in counterbalanced order, to control for order effects and proactive interference (carry-over effects). Within each block, the interval durations varied across six integer values (from 2 to 7 s) and their order was pseudorandomized so that no two consecutive trials had intervals of the same duration. Fifteen trials were presented for each time interval, totaling 90 trials per block (15 images × 6 intervals). Responses were free-form (to prevent participants from becoming aware of the fixed set of available interval durations). There was a 2.5 s period for the response, followed by an inter-stimulus interval of 500 ms, to prevent expectation effects and proactive interference from one trial to the next. Participants were instructed not to count aloud, nor to use any type of body movement that could assist in time estimation.
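One simple way to implement the constraint that no two consecutive trials share a duration is block randomization; the sketch below is our illustration of such a scheme, not the authors' actual randomization code:

```python
import random

def pseudorandom_intervals(durations=(2, 3, 4, 5, 6, 7), reps=15, seed=0):
    """Return 90 interval durations with no immediate repetitions."""
    rng = random.Random(seed)
    order = []
    for _ in range(reps):
        block = list(durations)
        rng.shuffle(block)
        # Durations within a block are distinct, so only the block
        # boundary can create a repeat; swap it away if it does.
        if order and block[0] == order[-1]:
            block[0], block[1] = block[1], block[0]
        order.extend(block)
    return order

trials = pseudorandom_intervals()
print(len(trials), trials[:12])   # 90 trials per block of the task
```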
The intertemporal decision-making task was developed specifically for this study and applied as explained in the procedure.
In each trial of the intertemporal decision-making task, the participant had to decide between a certain amount of immediate gain or loss and another amount of delayed gain or loss (after a week or a month) (cf. Table 1).
Thirty trials were administered for each decision pair. As measures of this task, a gains ratio (Gr = frequency of immediate, lower-value choices / frequency of delayed, higher-value choices, in gains trials) and a losses ratio (Lr = frequency of immediate, lower-value choices / frequency of delayed, higher-value choices, in losses trials) were calculated, such that Gr > 1 indicates a preference for smaller, immediate gains, whereas Lr < 1 indicates a preference for larger, delayed losses; both cases suggest more impulsive choices. Mean response times (RT) were also calculated under the conditions of immediate gain, delayed gain, immediate loss, and delayed loss.
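For clarity, these ratios could be computed from a participant's choice log as follows (the trial data below are invented for illustration):

```python
def choice_ratio(choices):
    """choices: 'immediate' / 'delayed' labels for one trial type."""
    immediate = choices.count("immediate")
    delayed = choices.count("delayed")
    return immediate / delayed if delayed else float("inf")

gain_trials = ["immediate"] * 20 + ["delayed"] * 10   # invented data
loss_trials = ["immediate"] * 10 + ["delayed"] * 20   # invented data
Gr, Lr = choice_ratio(gain_trials), choice_ratio(loss_trials)
print(f"Gr = {Gr:.2f} (>1: prefers smaller immediate gains)")
print(f"Lr = {Lr:.2f} (<1: prefers larger delayed losses)")
```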
Both the experimental tasks, as well as the blocks within each task, were administered in a counterbalanced way between the participants, to reduce effects of order and proactivity (carry-over), without a pause between the blocks. All participants had the opportunity to perform six training trials to familiarize themselves with the tasks.
Both tasks were programmed using E-Prime 2.0 (Psychology Software Tools, Inc.). All stimuli were presented at the center of a 17″ monitor, positioned approximately 80 cm in front of the participants.
EEG data acquisition and processing
EEG data were collected using a NetAmps 300 amplifier from Electrical Geodesics Inc. (Electrical Geodesics Inc., Eugene, OR, USA) and a HydroCel Geodesic Sensor Net cap with 128 channels. The scalp electrodes were referenced to Cz, and data were collected at a sampling rate of 500 Hz, using Netstation V4.5.2 (2008, EGI, Electrical Geodesics Inc., Eugene, OR, USA).
The raw EEG data were pre-processed in EEGLAB (version 11; Delorme and Makeig, 2004), a MATLAB toolbox (2017, The MathWorks Inc., Natick, MA, USA). The sampling rate was reduced to 250 Hz and the data were filtered using a 0.2 Hz high-pass filter and a 30 Hz low-pass filter. The noisiest channels were eliminated (up to a maximum of 10% of the electrodes) and the data were subjected to decomposition by independent component analysis (ICA). Blinks, saccades, and cardiac activity were corrected by subtracting the corresponding independent component activity from the data, followed by visual inspection to ensure that the correction did not alter the signal beyond the time windows of the artifacts. The eliminated electrodes were interpolated, and the signal was re-referenced to the average of the electrodes. The EEG recordings were segmented into 1000 ms epochs (−200 to 800 ms with reference to the onset of the event of interest) and visually inspected for manual rejection of segments with artifacts not corrected by the ICA decomposition. All epochs underwent baseline correction (200 ms pre-stimulus), and the potentials were time-locked to the appearance of the stimuli in each of the experimental tasks. We also aimed to analyze response-locked potentials, but not enough segments were obtained to allow this analysis.
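For readers who wish to reproduce a comparable pipeline, here is a hedged sketch using MNE-Python (the authors used EEGLAB/MATLAB; the file name, bad-channel label, stimulus channel, and excluded ICA components below are placeholders, not values from the study):

```python
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # placeholder file
raw.resample(250)                       # downsample from 500 Hz to 250 Hz
raw.filter(l_freq=0.2, h_freq=30.0)     # 0.2 Hz high-pass, 30 Hz low-pass
raw.info["bads"] = ["E45"]              # noisy channels (<= 10% of electrodes)

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]                    # blink/saccade/ECG components (assumed)
ica.apply(raw)

raw.interpolate_bads()                  # restore the removed channels
raw.set_eeg_reference("average")        # re-reference to the electrode average

events = mne.find_events(raw)           # assumes a stimulus trigger channel
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.8,
                    baseline=(-0.2, 0.0), preload=True)
```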
ERP data analysis
The average ERP per condition was inspected to ensure that the expected morphology of the potentials of interest was present. The mean amplitude of each potential of interest was extracted by averaging 200 voltage samples around the peak amplitude identified in the 100-200 ms time window for N1 and the 300-1000 ms window for P3, in the intertemporal decision-making task. For the time estimation task, the 100-200 ms window was used for N1 and the 400-1000 ms window for the LPP. The same electrode cluster (electrodes 61, 62, 67, 72, 77, and 78) was used to quantify N1, P3, and the LPP. Given the limited signal-to-noise ratio of the present study, we decided to use mean amplitudes as a more reliable way to quantify the ERP components of interest (Luck, 2014).
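The quantification step can be sketched as follows (sampling rate, window widths, and the toy waveform are assumptions for illustration, not the study's exact parameters):

```python
import numpy as np

def mean_amplitude(erp, times, window, n_around=25, negative=False):
    """Mean voltage around the peak found within a search window.

    erp: 1-D single-channel ERP; window: (t_start, t_end) in seconds.
    """
    idx = np.flatnonzero((times >= window[0]) & (times <= window[1]))
    peak = idx[np.argmin(erp[idx])] if negative else idx[np.argmax(erp[idx])]
    lo, hi = max(peak - n_around, 0), min(peak + n_around, erp.size)
    return erp[lo:hi].mean()

times = np.linspace(-0.2, 0.8, 250)                 # 1 s epoch at 250 Hz
toy_erp = -np.exp(-((times - 0.15) ** 2) / 0.001)   # N1-like dip at 150 ms
n1 = mean_amplitude(toy_erp, times, window=(0.10, 0.20), negative=True)
print(f"N1 mean amplitude: {n1:.3f}")
```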
Data analysis
In the first phase of data treatment and analysis, measures of central tendency (M) and dispersion (SD) were calculated. To test the predictions, repeated measures analyses of covariance (repeated measures ANCOVA) were performed using the Statistical Package for the Social Sciences (SPSS, version 26, IBM Corp., Armonk, NY).
For the analysis of behavioral data in the time estimation task, the emotional condition (pleasant, unpleasant, neutral) was considered an intra-subjects variable and the scores of the TriPM subscales (psychopathy measures) were considered covariables, to examine their effect on time estimation, measured using θ (estimated time/real time).
In the intertemporal decision-making task, the decision-making condition (gains, losses) was considered an intra-subjects variable and the scores on the TriPM subscales (psychopathy measures) were covariables, to examine their effect on the choice preference of participants (dependent variable), measured by the gains or losses ratios. In a separate model, the type of choice (smaller immediate gains, larger delayed gains, smaller immediate losses, and larger delayed losses) was considered an intra-subjects variable and the scores of the psychopathy subscales were covariables, to examine their effect on response times (RT) (dependent variable).
For the analysis of the neurophysiological data of the time estimation task, the emotional condition (pleasant, unpleasant, neutral) was considered an intra-subjects variable, the scores of the TriPM subscales (measure of psychopathy traits) were covariates and the mean amplitudes of the N1 and LPP components were dependent variables (in separate models).
Concerning the analysis of the neurophysiological data of the decision-making task, the choice condition (gains now-week, gains now-month, losses now-week, losses now-month) was considered an intra-subjects variable, the scores on the TriPM subscales (measure of psychopathy traits) were covariates, and the mean amplitudes of the N1 and P3 components were the dependent variables (in separate models).
Partial η² was calculated as a measure of effect size (Cohen, 1992), and the Holm-Bonferroni post hoc test was selected for multiple comparisons, as it is more powerful than the Bonferroni test while still controlling the family-wise error rate (Holm, 1979).
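The Holm-Bonferroni procedure is available off the shelf; for example (the p-values below are invented for illustration):

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.012, 0.030, 0.041, 0.200]                 # invented p-values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for p, pa, r in zip(pvals, p_adj, reject):
    print(f"raw p = {p:.3f}  Holm-adjusted p = {pa:.3f}  reject H0: {r}")
```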
The assumption of normality was assessed with the Shapiro-Wilk test and, when violated, the skewness (Sk) and kurtosis (Ku) coefficients were examined. Since the absolute values of these coefficients remained below the reference thresholds of 2 for Sk and 7 for Ku (Kim, 2013), parametric tests were retained. The sphericity assumption was assessed using Mauchly's test; when it was violated, the Greenhouse-Geisser correction was applied and the epsilon value (ε) was reported.
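These assumption checks can be sketched on simulated data as follows (the sample below is synthetic; note that scipy reports excess kurtosis by default):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=58)                 # synthetic sample, n = 58

w, p = stats.shapiro(x)                 # Shapiro-Wilk normality test
sk, ku = stats.skew(x), stats.kurtosis(x)
print(f"Shapiro-Wilk p = {p:.3f}; Sk = {sk:.2f}, Ku = {ku:.2f}")
# Per the criterion reported above (Kim, 2013), |Sk| < 2 and |Ku| < 7
# are treated as acceptable for retaining parametric tests.
```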
Lastly, in order to explore the relationship between time estimation and intertemporal choices, a Pearson's correlation coefficient was performed, between the time estimation measures in the three emotional conditions (θ pleasant, θ unpleasant, and θ neutral), the scores on the facets of psychopathy (boldness, meanness, and disinhibition), the response times for smaller immediate gains, larger delayed gains, smaller immediate losses and larger delayed losses, and the gains (Gr) and losses (Lr) ratios for the intertemporal decision-making task. We also explored (a) relationships between the thetas (in the pleasant, unpleasant, and neutral conditions) with the ERP amplitudes, and (b) the relationships of Gr and Lr with the ERP amplitudes.
Descriptive statistics
Time estimation task

Table 2 presents the descriptive statistics of the θ values observed in each of the emotional conditions of the time estimation task; to facilitate the comparison between conditions, these results are also displayed in Figure 1. Table 3 presents the descriptive statistics of the N1 amplitude in each emotional condition of the time estimation task (see also Figure 2). Table 4 presents the descriptive statistics of the LPP amplitude in each emotional condition of the time estimation task (see also Figure 3).

Intertemporal choice task

Table 5 presents the descriptive statistics of the gains ratio (Gr) values for the gains condition and the losses ratio (Lr) values for the losses condition, observed in the decision-making task; these results are also displayed in Figure 4 (effect of the decision-making condition, gain vs. loss, measured by Gr and Lr; the error bars denote the 95% confidence interval). Table 6 presents the descriptive statistics of the response times according to the type of decision (smaller immediate gains, larger delayed gains, smaller immediate losses, and larger delayed losses); these results are also shown in Figure 5 (the error bars indicate the 95% confidence interval). Table 7 displays the descriptive statistics of the N1 amplitude according to the decision-making condition (gains now-week, gains now-month, losses now-week, losses now-month) in the intertemporal decision-making task; these results are also presented in Figure 6 (the error bars indicate the 95% confidence interval). Table 8 presents the descriptive statistics of the P3 amplitude in each decision-making condition (gains now-week, gains now-month, losses now-week, losses now-month); to facilitate the comparison across the four conditions, these results are also displayed in Figure 7 (the error bars indicate the 95% confidence interval).
Behavioral results
The repeated measures ANCOVA, in which the emotional condition (pleasant, unpleasant, neutral) was an intra-subjects factor and the scores on the TriPM subscales (boldness, disinhibition, and meanness) were covariables, did not reveal a main effect of emotional condition, F(1.44, 21.54) = 0.244, p = .712, ηp² = .016, ε = .718, nor significant effects of boldness and meanness on the values of θ (both F < 1); however, it revealed an almost significant effect of disinhibition, F(1, 15) = 3.475, p = .082, ηp² = .188, with higher values on this variable being associated with higher values of θ in the unpleasant emotional condition.
The interactions of each of the psychopathy measures (boldness, disinhibition, and meanness) with the emotional condition did not reveal significant effects on the values of θ in any of the cases (all F < 1). Since there were no significant effects, we did not proceed with post hoc analyses.
The repeated measures ANCOVA, in which the response times condition was an intra-subjects factor and the scores on the boldness, disinhibition, and meanness subscales of the TriPM were covariables, did not reveal a main effect of the response times condition, F(1.36, 20.3) = 0.062, p = .875, ηp² = .004, ε = .678, nor significant effects of any of the psychopathy measures, i.e. the covariables, on the values of the response times (ms) (all F < 2.138, p > .164).
The interactions of each psychopathy measure (boldness, disinhibition, and meanness) with the response times condition did not reveal significant effects on the response times in any of the cases (all F < 1). Since there were no significant effects, we did not proceed with post hoc analyses.
Neurophysiological results
The repeated measures ANCOVA, in which the emotional condition (pleasant, unpleasant, neutral) was an intra-subjects factor and the scores on the boldness, disinhibition, and meanness subscales of the TriPM were covariables, did not reveal a main effect of emotional condition, F(2, 24) = 0.095, p = .910, ηp² = .008, nor significant effects of boldness; however, the effects of meanness, F(1, 12) = 3.71, p = .078, ηp² = .236, and disinhibition, F(1, 12) = 4.28, p = .061, ηp² = .263, on N1 amplitude in the pleasant emotional condition were almost significant, with higher meanness values being associated with lower N1 amplitude, and higher disinhibition values with greater N1 amplitude, in that condition. The interactions of each psychopathy measure (boldness, disinhibition, and meanness) with the emotional condition (pleasant, unpleasant, neutral) did not reveal significant effects on N1 amplitude in any of the cases (all F < 1). Since there were no significant effects, we did not proceed with post hoc analyses.
The repeated measures ANCOVA, in which the emotional condition (pleasant, unpleasant, neutral) was an intra-subjects factor and the scores on the boldness, disinhibition, and meanness subscales of the TriPM were covariables, did not reveal a main effect of emotional condition, F(2, 24) = 0.002, p = .998, ηp² = .002, nor significant effects of boldness and meanness on LPP amplitude (all F < 2.326, p > .153); however, disinhibition showed an almost significant effect, F(1, 12) = 4.054, p = .067, ηp² = .253, with greater disinhibition values being associated with lower LPP amplitude in the unpleasant emotional condition.
The interactions of each psychopathy measure (boldness, disinhibition, and meanness) with the emotional condition (pleasant, unpleasant, neutral) did not reveal significant effects on the LPP amplitude in any of the cases (all F < 1). Since there were no significant effects, we did not proceed with post hoc analyses.
The amplitudes of the ERPs in the time estimation task were calculated on Pz (Figure 8), per emotional condition (pleasant, unpleasant, neutral).
Behavioral results
The repeated measures ANCOVA, in which the decision-making condition (gains, losses) was an intra-subjects factor and the scores on the boldness, disinhibition, and meanness subscales were covariables, did not reveal a significant effect of intertemporal decision-making, F(1, 54) = 0.462, p = .500, ηp² = .008, nor significant effects of any of the psychopathy measures, i.e. the covariables, on the values of Gr and Lr (all F < 1).
The interactions of each psychopathy measure (boldness, disinhibition, and meanness) with decision-making (gains, losses) did not reveal significant effects on the choice preference (measured by Gr or Lr) (all F < 2.095, p > .154). Since there were no significant effects, we did not proceed with post hoc analyses.
The repeated measures ANCOVA, in which the type of decision (smaller immediate gains, larger delayed gains, smaller immediate losses, and larger delayed losses) was the intra-subjects factor and the scores on the boldness, disinhibition, and meanness subscales of the TriPM were covariables, did not reveal a main effect of type of decision, F(3, 156) = 0.041, p = .989, ηp² = .007, nor significant effects of any of the psychopathy measures, i.e. the covariables, on the values of RT (ms) (all F < 1.415, p > .240). The interactions of each psychopathy measure with the type of decision did not reveal significant effects on RT values (ms), in any of the cases (all F < 2.005, p > .116). Since there were no significant effects, we did not proceed with post hoc analyses.
Neurophysiological results
The repeated measures ANCOVA, in which the decision-making condition (gains now-week, gains now-month, losses now-week, losses now-month) was an intra-subjects factor and the scores on the boldness, disinhibition, and meanness subscales of the TriPM were covariables, did not reveal a main effect of the decision-making condition, F(2.39, 129.24) = 1.286, p = .282, ηp² = .023, ε = .798, nor significant effects of any of the psychopathy measures, i.e. the covariables, on N1 amplitude (all F < 1). The interactions of each psychopathy measure (boldness, disinhibition, and meanness) with the decision-making condition (gains now-week, gains now-month, losses now-week, losses now-month) revealed no significant effects on the amplitude of N1 in any of the cases (all F < 1.281, p > .283). Since there were no significant effects, we did not proceed with post hoc analyses.
The repeated measures ANCOVA, in which the decision-making condition (gains now-week, gains now-month, losses now-week, losses now-month) was an intra-subjects factor and the scores on the boldness, disinhibition, and meanness subscales of the TriPM were covariables, revealed an almost significant main effect of the decision-making condition on the amplitude of P3, F(2.82, 151.99) = 2.395, p = .075, ηp² = .042, ε = .638; however, no significant effects of any of the psychopathy measures, i.e. the covariables, on the amplitude of P3 were found (all F < 1.175, p > .283).
The interactions of each psychopathy measure (boldness, disinhibition, and meanness) with the decision-making condition (gains now-week, gains now-month, losses now-week, losses now-month) revealed no significant effects on the amplitude of P3 in any of the cases (all F < 1.949, p > .128). Since there were no significant effects, we did not proceed with post hoc analyses.
The Pearson correlation matrix showed an almost significant positive relationship between the scores on the disinhibition subscale of the TriPM and P3 amplitude in the losses now-month decision condition (r = .236, p = .074), as well as between the scores on the meanness subscale and P3 amplitude in the gains now-month decision condition (r = .245, p = .064).
There was also a significant positive relationship between P3 amplitude in the gains now-week decision condition and the scores on both the disinhibition subscale (r = .317, p = .015) and the meanness subscale (r = .288, p = .028).
The ERP amplitudes in the intertemporal decision-making task were calculated at Cz for the four choice conditions: gains now-week, gains now-month, losses now-week, and losses now-month (Figure 9).
Relationship between intertemporal choices and time estimation
The Pearson correlation coefficient revealed a positive and significant correlation between the θ of the pleasant emotional condition and the response time for the smaller immediate gain (r = .547, p < .05) (Table 9).
Interpretation
There are characteristics of psychopathy that may be related to changes in intertemporal choices. Specifically, characteristics such as impulsiveness or lack of inhibitory control may be associated with a more pronounced discounting function in intertemporal choices and, in turn, this function may be based on changes in the basic mechanisms of time perception, namely time estimation. Thus, this study aimed to examine potential differences in neurophysiological correlates, specifically through N1, P3, and LPP measurements, which may be related to time estimation and intertemporal decision-making, examining their modulation according to psychopathic traits, different emotional conditions, and different decision-making conditions. Although essentially exploratory, we formulated the following predictions for this study:
(a) the higher the score on the boldness and meanness facets, the smaller the N1 amplitude related to the images in the time estimation task, regardless of the emotional condition, indicating deficient processing of these stimuli associated with those facets of psychopathy;
(b) the higher the score on the disinhibition facet of psychopathy, the greater the N1 amplitude in the time estimation task, regardless of the emotional condition;
(c) the higher the score on the disinhibition facet of psychopathy, the larger the time interval estimate (overestimation), especially in the unpleasant emotional condition;
(d) no effect of emotional condition on time estimation is expected in association with the boldness and meanness facets of psychopathy;
(e) the higher the score on the disinhibition facet of psychopathy, the greater the preference for smaller immediate gains over larger delayed gains, as well as for larger delayed losses over smaller immediate losses, indexed either by explicit choices (i.e., by the gains and losses ratios; see methodology) or by shorter response times under these conditions;
(f) the greater the preference for larger delayed gains or smaller immediate losses, the greater the N1 and P3 amplitudes related to the choice options, indicating greater cognitive effort in decision-making;
(g) the longer the estimated time, the greater the preference for smaller immediate gains and larger delayed losses;
(h) the higher the score on the disinhibition facet of psychopathy, the lower the P3 amplitude in the intertemporal decision-making task;
(i) the higher the score on the disinhibition facet of psychopathy, the lower the LPP amplitude in the unpleasant emotional condition, in the time estimation task;
(j) the higher the scores on the boldness and meanness facets of psychopathy, the lower the LPP amplitude in the pleasant and unpleasant emotional conditions, although not in the neutral condition, in the time estimation task.

Figure 9. Brain potentials related to the choice between gains now-week (green continuous tracing), gains now-month (green dashed tracing), losses now-week (red continuous tracing), and losses now-month (red dashed tracing), obtained from Cz, in the intertemporal decision-making task.

Table 9. Correlation between scores on the facets of psychopathy (boldness, disinhibition, and meanness), time estimation measures (in the pleasant, unpleasant, and neutral emotional conditions), intertemporal choice measures (gains and losses ratios, Gr and Lr), and the N1, P3, and LPP amplitudes in both experimental tasks (n = 19 in the time estimation task and n = 58 in the decision-making task).
Fifty-eight participants (40 women) performed an intertemporal decision-making task and, of these, 19 (16 women) performed a time estimation task, with simultaneous recording of EEG data. In the time estimation task, participants underestimated time in all emotional conditions. Despite studies showing that emotions alter the subjective sense of time (Campbell and Bryant, 2007; Droit-Volet and Meck, 2007; Smith et al., 2011; Tse et al., 2004; Wittmann and van Wassenhove, 2009), namely between emotional and neutral contexts (Dirnberger et al., 2012), there was no main effect of the emotional condition. Moreover, there were no significant effects of boldness and meanness, nor of the interaction of these facets with the emotional conditions, on the values of θ (which meets hypothesis d), but there was an almost significant effect of disinhibition on the values of θ, with higher values on this variable associated with higher values of θ in the unpleasant emotional condition. This means that the greater the disinhibition, the greater the tendency to overestimate time in the unpleasant emotional condition, which suggests confirmation of hypothesis c. It is likely that the participants in this study did not present pronounced traits of impulsivity or, at least, pronounced psychopathic traits of disinhibition.
The default delay discounting paradigms are based on explicit choices between immediate vs. delayed options. Impulsive individuals usually choose smaller immediate rewards over larger delayed rewards, possibly because individuals with pronounced impulsive traits have an accelerated sense of time (Berlin et al., 2005). A longer perception of time is associated with higher costs, which leads to the selection of alternatives with more immediate results (Frederick et al., 2002;Kalenscher and Pennartz, 2008;Loewenstein and Prelec, 1992;Pimentel et al., 2012).
Emotional factors may also guide intertemporal decisions, namely by influencing the attention devoted to each of the choices. For example, on the one hand, the emotional salience of an immediate monetary reward influences motivational value and, on the other hand, delayed rewards are more "intangible" (Rick and Loewenstein, 2008;Rolls, 1999). This explanation is in line with recent neurobiological reports of self-control mechanisms that emphasize the role of selective attention in choices that have future consequences (Figner et al., 2010;Hare et al., 2009Hare et al., , 2011. Assuming this emotional salience affects decision-making, the greater the insensitivity to that salience, as expected in individuals of high meanness and boldness, the greater the preference for larger delayed gains and smaller immediate losses. As for disinhibition, the associated impulsiveness would predict the opposite pattern of choices. In the intertemporal decision-making task, this study revealed no significant effects of any psychopathic measure (boldness, disinhibition, and meanness) on the values of the gains and losses ratios. There were also no significant effects of any of the psychopathy measures on the response times observed in each type of choice. As such, hypothesis (e) is invalidated.
The N1 amplitude is sensitive to attention (Näätänen and Picton, 1987) and would be expected to be increased in individuals with more pronounced disinhibition traits (e.g., Anderson et al., 2015). The repeated measures ANCOVA, in which the emotional condition (pleasant, unpleasant, neutral) was an intra-subjects factor and the scores on the TriPM subscales of boldness, disinhibition, and meanness were covariables, did not reveal a main effect of the emotional condition, nor significant effects of boldness. However, the effects of meanness and disinhibition on the N1 amplitude proved to be almost significant in the pleasant emotional condition of the time estimation task, with higher meanness values being associated with lower N1 amplitude, and higher disinhibition values being associated with greater N1 amplitude, in that condition, suggesting the confirmation of hypotheses a and b.
Furthermore, previous findings among individuals prone to hypomania revealed greater N1 differentiation between immediate and delayed rewards, in addition to greater N1 amplitudes for rewards themselves (Mason et al., 2012). Increased N1 amplitudes have also been associated with high levels of impulsivity. For example, impulsive-aggressive participants exhibited an increased N1 in response to visual stimuli, indicating improved attentional orientation (Gehring and Willoughby, 2002). In this study, in the intertemporal decision-making task, the repeated measures ANCOVA, in which the decision-making condition (gains now-week, gains now-month, losses now-week, losses now-month) was an intra-subjects factor and the scores on the TriPM subscales of boldness, disinhibition, and meanness were covariables, did not reveal a main effect of the decision-making condition, nor significant effects of any psychopathy measure (i.e., the covariables) on the amplitude of N1. The interactions of each psychopathy measure (boldness, disinhibition, and meanness) with the decision-making condition did not reveal significant effects on the N1 amplitude in any of the cases. The same repeated measures ANCOVA revealed a nearly significant effect of the decision-making condition on the amplitude of P3, whereas the interactions of each psychopathy measure with the decision-making condition did not show significant effects on the amplitude of P3 in any of the cases. These data suggest the invalidation of hypothesis (f).
The results regarding the P3 amplitude were not as expected. In a systematic review, Pasion et al. (2018) found that most studies report reduced P3 amplitude in samples of high-psychopathy individuals when compared to control groups. Sufficient evidence was found that the impulsive-antisocial characteristics of psychopathy are the main predictor of reduced P3 amplitude. Moreover, evidence of a dissociable effect of the interpersonal-affective characteristics of psychopathy was found, depending on the nature of the tasks: these characteristics predicted increased P3 amplitude in cognitive tasks, but emotional-affective tasks were associated with an attenuated P3 amplitude (Pasion et al., 2018). Thus, it was assumed that the higher the scores on the disinhibition facet of psychopathy, the lower the amplitude of P3 in the intertemporal decision-making task. The repeated measures ANCOVA did not reveal a significant effect of any psychopathy subscale on P3 amplitude in the intertemporal choice task. Nonetheless, the analysis of the correlation matrix shows that, when considering the isolated relationship of each facet with P3 amplitude, that is, without controlling for the influence of the other facets, there is an almost significant positive correlation between disinhibition and P3 amplitude in the losses now-month condition, as well as an almost significant positive correlation between meanness and P3 amplitude in the gains now-month condition. A significant positive correlation was also found between the P3 amplitude in the gains now-week condition and both disinhibition and meanness. These results may indicate that the greater the meanness, the greater the cognitive effort and attention devoted to the conditions involving a choice between immediate gains and gains delayed a week or a month. Regarding disinhibition, the higher the score on this psychopathy facet, the greater the cognitive effort and attention devoted to the choice between losses now or losses delayed one month. To some extent, it is possible these results are compatible with a deficient behavioral inhibition system in high-meanness individuals, making them less sensitive to losses, although normally reward-oriented. These data do not support hypothesis (h).
P3 and LPP have been identified as electrophysiological responses to emotional events (van Dongen et al., 2018). LPP is a positive potential evoked by emotional stimuli and reflects top-down processes, such as emotional regulation. Individuals with high psychopathic traits exhibit impaired emotional processing, mainly in the processing of negative stimuli (Dawel et al., 2012; Jusyte and Schönenberg, 2017; Schönenberg et al., 2016) and reduced autonomic responses after the presentation of negative stimuli (Fairchild et al., 2010; Flor et al., 2002; Levenston et al., 2000; López et al., 2013; Rothemund et al., 2012; Vaidyanathan et al., 2011). Despite these observed behavioral deficits, recent studies on LPP amplitude evoked by visual emotional stimuli in individuals with high psychopathic traits reported conflicting results (Medina et al., 2016). For example, unpleasant stimuli evoked lower LPP amplitude than neutral stimuli in individuals recruited from the community with high psychopathic traits compared to those with low psychopathic traits (Medina et al., 2016). However, both groups exhibited similar LPP amplitude in response to pleasant and neutral stimuli (Medina et al., 2016). In other studies, individuals with high psychopathic traits, recruited from the community, did not show differences between emotional and neutral stimuli (Carolan et al., 2014), but individuals with low psychopathic traits showed a greater LPP amplitude for emotional stimuli than for neutral stimuli (Carolan et al., 2014; Hajcak et al., 2010). In addition, other studies have revealed no differences between groups with high and low psychopathic traits in LPP amplitudes evoked by emotional stimuli (e.g., Eisenbarth et al., 2013). A recent meta-analysis (Vallet et al., 2019) suggests a reduction of the LPP evoked by unpleasant stimuli and a normal LPP response to pleasant and neutral stimuli, specific to individuals with psychopathy. We based the formulation of our hypotheses on this most recent study. The repeated measures ANCOVA, in which the emotional condition (pleasant, unpleasant, neutral) was an intra-subjects factor and the scores on the boldness, disinhibition, and meanness subscales were covariates, did not reveal a main effect of the emotional condition, nor significant effects of boldness and meanness on LPP amplitude, in the time estimation task. However, the effect of disinhibition on LPP amplitude proved to be almost significant, with higher disinhibition values being associated with a lower LPP amplitude in the unpleasant emotional condition, suggesting confirmation of hypothesis (i) and invalidation of hypothesis (j).
In the case of intertemporal decision-making tasks, one study examined the electrophysiological correlates of this type of decision (Blackburn et al., 2012). In that study, N1 and the feedback-related negativity (FRN) were the components of interest. N1 was one of the components we analyzed in this task, but we were unable to analyze the FRN, because the task did not allow us to induce it. The FRN is induced when there is error feedback, when the correct answer is not known, and when a choice result is suboptimal (below the ideal) and passively violates the reward prediction, suggesting a monitoring system that may not be restricted to actions. Only brain potentials related to the appearance of stimuli were extracted (response-related potentials were also extracted in both tasks, but the reduced number of epochs did not allow their analysis), since the experimental decision-making task was not programmed to provide feedback and thus did not allow the extraction of this type of potential.
Discussion
Time perception can be understood as a basic ability of the human mind, and time is an important dimension when individuals make decisions involving gains and losses at different times, in the present or the future. For example, the waiting time before a beneficial result may be received is seen as a cost and is weighed against the benefits of the result. The role that time perception can play in intertemporal decision-making is not well known, nor whether changes in that perception, which are associated with high impulsivity (as is typical of people who score high on the disinhibition facet of psychopathy), are also reflected in intertemporal choices. In addition, the existing research suggests that one's emotional state may interfere with basic mechanisms of time perception, but it remains to be clarified whether psychopathy traits that are associated with a lower resonance to emotional stimulation, such as boldness, make time perception mechanisms more immune to said stimulation, as well as how they influence intertemporal choices.
Although there is literature, albeit limited, on time perception in conditions where impulsivity is present, nothing is known about time perception in individuals with high psychopathy. Moreover, time perception and emotions are inextricably linked to a multitude of external and internal events. Although many studies have shown that individuals are able to accurately measure the passage of time in the range of milliseconds to hours, it remains to be known how our sense of time is altered by emotions (Buhusi and Meck, 2005; Gibbon et al., 1997; Goguen, 2004). Indeed, the analysis of the complex interaction between emotion and time perception remains relatively scarce (e.g., Schirmer, 2004). Given the known emotional deficits in high-psychopathy individuals, exploring the relationship between the core traits of this personality structure and time perception and emotional interference becomes an interesting research question. On the one hand, it is plausible that psychopathy traits related to impulsivity, such as disinhibition, would contribute to an overestimation of time. On the other hand, the low emotional resonance associated with other traits, such as boldness or meanness, suggests that emotional stimulation would interfere less with time perception in these individuals. Thus, the aim of the present study was to examine potential differences in neurophysiological correlates, specifically through N1, P3, and LPP measurements, which may be related to time estimation and intertemporal choices, examining their modulation according to psychopathic traits, different emotional conditions, and different decision-making conditions. To this end, 67 adult participants (48 women) performed an intertemporal decision-making task, of which 19 participants (16 women) performed a time estimation task.
Most studies used idiosyncratic emotional stimuli, which caused problems with interpretation and generalization. Recently, several studies have used sets of standardized stimuli, such as the International Affective Picture System (IAPS; Lang et al., 1997) and the Nencki Affective Picture System (NAPS; Marchewka et al., 2014), and have begun to pay special attention to valence (pleasant, unpleasant, neutral) and intensity or activation (low, high). The pattern of results found by Angrilli et al. (1997), with the IAPS, seemed complex: there was no main effect of activation or valence on the time estimate, but there was a significant interaction between the two dimensions. In the high activation condition, the duration of negative images was overestimated, while that of positive images was underestimated. In the low activation condition, negative images were underestimated, and positive images were overestimated. This opposite effect of valence as a function of the level of arousal suggests that two different mechanisms are triggered by levels of activation: a controlled attention mechanism for low activation and an automatic mechanism related to motivational survival systems for high arousal (Angrilli et al., 1997). Our data reveal another direction: there was time underestimation in all emotional conditions, which did not meet what was expected (greater overestimation of time in the unpleasant emotional condition than in the conditions of pleasant and neutral stimulation). Negative images elicited a stronger orientation response than positive images; more attention was paid to negative images than to positive images, and the former were judged as being shorter.
Emotions are organized around two basic and independent motivational systems, responsible for avoidance and approach behaviors: the behavioral inhibition system and the behavioral activation system. On the one hand, the behavioral inhibition system is activated mainly in threatening contexts, and basic behavior is built on withdrawal and escape, or attack, especially when the first two alternatives are impractical. On the other hand, the behavioral activation system is activated in contexts such as support, procreation, and nutrition, translating into basic behaviors such as provision of food, sexual intimacy, and care. Consequently, unpleasant conditions of high activation, which require defensive behaviors, involve the ability to produce a rapid reaction (attack, escape), causing increased activation of the autonomic nervous system (e.g., pupil dilation, increased blood pressure, muscle contraction) and, possibly, the concomitant acceleration of the "internal clock", leading to an overestimation of the passage of time. Our data suggest the emotional stimuli used induced low activation. In fact, when the passage of time is experienced as faster, the readiness for action is quick. It is likely that mechanisms related to attention prevail over longer durations due to the expected decrease in the autonomic response a few seconds after the presentation of the stimulus (Droit-Volet and Meck, 2007).
The effects of emotions can change systematically over time. Negative images in the high arousal condition activate the behavioral inhibition system. Consequently, compared to positive images, the "internal clock" will function relatively faster under conditions of high activation for unpleasant images, which causes an overestimation. On the other hand, negative images in the low activation condition will be underestimated, possibly because the capture of attention by these images means that less attention is given to the internal timing system and fewer impulses accumulate. In conditions of low activation, the capture of attention by the characteristics that define the emotional valence of the stimulus diverts processing resources away from the timing system itself (Buhusi and Meck, 2006; Fortin, 2003). Thus, time perception seems to be a sensitive index of the basic function of emotions, depending not only on their positive or negative valence, but also on their activation potential.
Time perception is a fundamental factor when individuals must make decisions and consider the results associated with their choices. Rewards that are received earlier are often preferred over future rewards, because the subjective value of a result is discounted as a function of delay (Ainslie, 1975; Kirby et al., 1999). Results of decision-making experiments show that individuals avoid risk when they must choose between options associated with likely outcomes vs. certain outcomes. Specifically, individuals choose something certain over rewards with a probabilistic outcome, even when the probabilistic alternatives have an equal or even greater expected value (Kahneman and Tversky, 1979). The length of time between choosing and receiving the reward is another important factor that influences our decisions. A delayed result of a choice reduces the subjective value of a reward, a phenomenon called delay discounting (Kirby and Santiesteban, 2003; Laibson, 1997). Individuals prefer to receive rewards sooner rather than later. In this study, in the intertemporal decision-making task, there was no main effect of the type of decision-making condition (gains, losses), nor significant effects of any of the psychopathic measures on the values of the gains and losses ratios. The interactions of each of the psychopathic measures with the type of intertemporal decision-making also did not show significant effects on the values of the gains and losses ratios in any of the cases. There was also no main effect of the type of decision-making (gains of smaller immediate value, gains of larger delayed value, losses of smaller immediate value, and losses of larger delayed value), nor significant effects of any of the psychopathic measures on the response times. The interactions of each psychopathic measure with decision-making also did not show significant effects on the response time values in any of the cases.
Like previous studies, this study presents some limitations. The method was conditioned by instrumental circumstances: the participants provided their answers on a 9-key response box (keys 1 to 9). Therefore, the time estimation task was programmed for exposure times between 2 and 7 s (the 8-second option would have made it too long) and the response options were limited to between 1 and 9 s. If a response box allowing more options had been available, the differences between the participants would possibly have been more pronounced. Moreover, social desirability was not assessed or controlled. Since the TriPM contains a considerable number of items pertaining to deviant attitudes and acts, particularly in the meanness and disinhibition subscales, it is important that future studies assess the presence of social desirability and test possible moderating effects of this variable.
In tasks such as time estimation, stimulus-related ERPs do not provide information about the neurophysiological correlates of time estimation itself; they are simply the brain's response to pleasant, unpleasant, and neutral images (at the moment the stimulus is presented and the brain response captured, the participant does not even know how long the stimulus, i.e., the image, will be exposed). As explained above, we only analyzed stimulus-related ERPs, because we were unable to extract enough segments from the EEG recordings to obtain response-related potentials of acceptable quality.
In future studies, it would be important to provide a response box with more options, as well as to analyze response-related ERPs and increase the sample size. Given the scarcity of studies conducted to date, future research should consider a dissociation of P3 components to better discriminate the neural correlates of impulsive behavior and, specifically, of psychopathy. It would also be relevant to examine whether a reduction in LPP limited to negative stimuli could discriminate psychopathy or, at least, individuals with more pronounced disinhibition traits compared to other traits. Thus, it would be possible to consider LPP as a potential neuromarker to characterize different phenotypic manifestations of psychopathy.
In summary, in the time estimation task, there was no main effect of the emotional condition. There were also no significant effects of boldness and meanness, nor of the interaction of these facets with the emotional conditions, on the values of θ. However, there was an almost significant effect of disinhibition on the values of θ, with higher values on this variable associated with greater values of θ in the unpleasant emotional condition. In the intertemporal decision-making task, there were no significant effects of any psychopathy measure (boldness, disinhibition, and meanness) on the values of the gains and losses ratios. There were also no significant effects of any psychopathy measure on the response times observed in each type of choice. In addition, the analysis of the neurophysiological correlates of the intertemporal decision-making task did not reveal a main effect of the decision-making condition (gains now-week, gains now-month, losses now-week, losses now-month), nor effects of any psychopathy measure on the N1 and P3 amplitudes. The interactions of each psychopathy measure (boldness, disinhibition, and meanness) with the decision-making condition did not show significant effects on the amplitude of N1 and P3 in any case. The analysis of the neurophysiological correlates of the time estimation task revealed that higher meanness values are associated with smaller N1 amplitude in the pleasant emotional condition, whereas higher disinhibition values are associated with greater N1 amplitude in the pleasant emotional condition. Still in this task, higher disinhibition values were associated with a smaller LPP amplitude in the unpleasant emotional condition.
In sum, the increase in the distribution of attention resources towards time and/or the increase in activation states, including those originated by responses to emotional stimuli, may be the main factor that alters the way impulsive individuals and, presumably, individuals with high psychopathy, consider time when making decisions. According to the cognitive models of time perception, the overestimation of a certain time duration may be a consequence of a greater focus on time and increasing activation. On many occasions, impulsive individuals, especially when they are distracted, do not overestimate time, which is an argument against a fundamental dysfunction of the "internal clock". Conversely, these individuals are more likely to experience a slowing of time during situations in which they are unable to express their impulsive urges, for example, when an individual must wait for a delayed reward and is faced with the passage of time. However, more research is needed to determine the causal relationships between decision-making, emotional response, and time perception. Studies with different populations of individuals provide evidence that the notion of time is an important factor in understanding altered decision-making.
Declarations
The authors do not have financial, personal, or professional conflicts of interest. After the local ethics committee approved the study, it was conducted according to APA ethical standards.
Author contribution statement Diana Moreira: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Andreia Azeredo: Performed the experiments; Contributed reagents, materials, analysis tools or data.
Susana Barros: Contributed reagents, materials, analysis tools or data.
Fernando Barbosa: Analyzed and interpreted the data.
Data availability statement
Data will be made available on request.
"year": 2022,
"sha1": "346465792db90273d2cacbcb2d2690572a405d3f",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "346465792db90273d2cacbcb2d2690572a405d3f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Speech-Text Pre-training for Spoken Dialog Understanding with Explicit Cross-Modal Alignment
Recently, speech-text pre-training methods have shown remarkable success in many speech and natural language processing tasks. However, most previous pre-trained models are usually tailored for one or two specific tasks, but fail to conquer a wide range of speech-text tasks. In addition, existing speech-text pre-training methods fail to explore the contextual information within a dialogue to enrich utterance representations. In this paper, we propose Speech-text Pre-training for spoken dialog understanding with ExpliCiT cRoss-Modal Alignment (SPECTRA), which is the first-ever speech-text dialog pre-training model. Concretely, to consider the temporality of speech modality, we design a novel temporal position prediction task to capture the speech-text alignment. This pre-training task aims to predict the start and end time of each textual word in the corresponding speech waveform. In addition, to learn the characteristics of spoken dialogs, we generalize a response selection task from textual dialog pre-training to speech-text dialog pre-training scenarios. Experimental results on four different downstream speech-text tasks demonstrate the superiority of SPECTRA in learning speech-text alignment and multi-turn dialog context.
Introduction
In recent years, speech-text pre-training, which learns universal feature representations from a large training corpus (Chen et al., 2018; Li et al., 2021; Bapna et al., 2021), has achieved significant success in both uni-modal (Schneider et al., 2019; Dosovitskiy et al., 2020) and multi-modal (Lu et al., 2019; Radford et al., 2021) downstream tasks. Existing speech-text pre-training works mainly employed multi-modal self-supervised pre-training objectives, such as cross-modal masked data modeling (Li et al., 2021; Kang et al., 2022a) and cross-modal contrastive learning (Sachidananda et al., 2022; Elizalde et al., 2022), which align the speech utterance representation to the corresponding text sentence representation. Despite the remarkable progress of previous speech-text pre-training models, there are still several technical challenges to constructing an effective and unified speech-text pre-training model for spoken dialog understanding, which are not addressed well in prior works. First, previous models are mainly tailored for specific speech-text tasks, such as speech-to-text translation (Liu et al., 2020b) and speech-language understanding, failing to conquer a wide range of speech-text tasks. Although Tang et al. (2022) proposed a unified speech-text pre-training for speech translation and recognition, it fails to exploit the temporality of an input speech sequence and cannot learn the fine-grained speech-text alignment.
* Equal contribution. This work was conducted when Tianshu Yu and Haoyu Gao were interning at Alibaba.
† Min Yang and Yongbin Li are corresponding authors.
1 For reproducibility, we release our code and pre-trained model at: https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/SPECTRA.
Second, limited effort has been devoted to bridging the gap between plain speeches/texts and human conversations. In particular, existing speech-text pre-training methods fail to explore the context information within a dialog. Nevertheless, spoken dialog understanding needs to effectively process context information so as to help the system better understand the current utterance, since humans may omit previously mentioned entities/constraints and introduce substitutions to what has already been mentioned.
In this paper, we propose Speech-text dialog Pre-training for spoken dialog understanding with ExpliCiT cRoss-Modal Alignment (SPECTRA), which is the first-ever speech-text dialog pre-training model. We illustrate the framework of our method in Figure 1 and the details in Figure 2. The backbone of SPECTRA is composed of a text encoder, a speech encoder, and a fusion module, learning semantic/acoustic information and the interaction between them, and is pre-trained on a large-scale real-world multi-modal (speech-text) dialog corpus. We propose two pre-training objectives to learn better context-aware speech/text representations for spoken dialog understanding (Dai et al., 2022; Zhang et al., 2022b). Specifically, to consider the temporality of the speech modality, we design a novel temporal position prediction task to capture the speech-text alignment by predicting the start and end time of each textual word in the corresponding speech waveform. In addition, to learn the characteristics of spoken dialogs (Gao et al., 2023; Qian et al., 2023), we devise a cross-modal response selection objective to consider the context information within each dialog.
Our contributions are summarized as follows: • To the best of our knowledge, we are the first to propose a speech-text dialog pre-training model for spoken dialog understanding, which fully exploits the characteristics of multimodal (speech/text) dialogs.
• We introduce two pre-training objectives (temporal position prediction and multi-modal response selection) to effectively learn speech-text alignment and dialog context information.
• We conduct extensive experiments on five benchmark datasets belonging to four downstream speech-text tasks, including emotion recognition in conversation (ERC), multimodal sentiment analysis (MSA), spoken language understanding (SLU), and dialog state tracking (DST). We believe that the release of the pre-trained model and source code would push forward the research in this area.
Related Work
Uni-modal Pre-training In recent years, pre-trained language models (PLMs), such as BERT (Kenton and Toutanova, 2019), RoBERTa (Liu et al., 2019), and GPT (Radford et al., 2019a), have been proposed and applied to many NLP tasks, yielding impressive performances. PLMs benefit from the rich linguistic knowledge in large-scale corpora (He et al., 2022c,a). Inspired by the success of PLMs in NLP tasks, several speech pre-training models, such as Wav2vec (Schneider et al., 2019), HuBERT (Hsu et al., 2021), and WavLM (Chen et al., 2022), were proposed to learn high-quality universal speech representations from massive speech data.
Multimodal Pre-training Compared to multimodal pre-training for vision-and-language tasks, speech-text pre-training is relatively less explored.
SpeechBERT (Chuang et al., 2020) jointly trained multimodal representations based on a single BERT for spoken question-answering. CTAL (Li et al., 2021) extended the original Transformer to a cross-modal setting by modifying the attention mechanism of the Transformer decoder. ST-BERT combined a pre-trained acoustic model with BERT and took phoneme posteriors and subword-level tokenized text as input. Kang et al. (2022b) explored a multimodal pre-training model in extremely low-resource data scenarios. CLAM (Sachidananda et al., 2022) employed contrastive and multi-rate information inherent in audio and lexical inputs to align acoustic and lexical information. STPT (Tang et al., 2022) proposed a multi-task learning framework to integrate different modalities in speech-text pre-training. Prior work also proposed a spoken language understanding model, which trained a semantically rich BERT-based conversation model along with a speech-based model. Different from previous works, SPECTRA is the first-ever speech-text dialog pre-training model, which bridges the gap between plain texts/speeches and human conversations.
Figure 2: The overview of SPECTRA. The left part shows the illustration of the temporal position prediction task and the cross-modal response selection task. The right part shows the overall structure of the pre-trained model.
Method
In this section, we introduce the model architecture and pre-training objectives of SPECTRA. Figure 2 shows the overall structure of our model SPECTRA, which consists of a text encoder, a speech encoder, and a modality fusion module. During pre-training, we first convert paired text and speech inputs into uni-modal embeddings, which are then fed into the text encoder and speech encoder respectively to obtain uni-modal representations. Finally, we concatenate text representations and speech representations as input of our modality fusion module to get fused representations for speech-text pre-training.
Data Preparation
Before diving into our model, we first prepare input text and speech sequences for our model. Let D = {T 1 , T 2 , ..., T n } denote a conversation with n dialog turns, where every single dialog turn T i consists of a slice of raw speech waveform s i and its corresponding text t i = {w i1 , w i2 , ..., w il i }. Here, w ij is the j-th word of t i , and is annotated with its corresponding start/end time in the speech, denoted as s ij and e ij , respectively. For the i-th turn, we construct a sample X i by combining the current turn (t i , s i ) with the textual dialog history {t i−k , ..., t i−2 , t i−1 } and the previous speech dialog history s i−1 . In this way, each sample X i consists of k+1 turns of text and 2 turns of speeches, where the speeches correspond to the latest 2 turns of text. Note that we only use 2 turns of speech in pre-training for efficiency, since the length of the speech representation is much longer than that of its corresponding text representation.
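A minimal sketch of this sample construction (our paraphrase; function and variable names are hypothetical) makes the windowing explicit:

```python
# Build pre-training samples X_i from a dialog of n turns. Each turn is a
# (text, speech_waveform) pair; a sample keeps up to k+1 turns of text but
# only the latest 2 turns of speech, as described above.
def build_samples(dialog, k=7):
    samples = []
    for i in range(1, len(dialog)):          # need at least one previous turn
        lo = max(0, i - k)
        texts = [turn[0] for turn in dialog[lo:i + 1]]   # t_{i-k}, ..., t_i
        speeches = [dialog[i - 1][1], dialog[i][1]]      # s_{i-1}, s_i
        samples.append({"texts": texts, "speeches": speeches})
    return samples

dialog = [("hello", b"..."), ("hi there", b"..."), ("how are you", b"...")]
for x in build_samples(dialog):
    print(len(x["texts"]), "text turns, 2 speech turns")
```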
Text Embeddings
For each input element, its vector representation is a summation of the corresponding token embedding, absolute position embedding and segment embedding.
Specifically, we first concatenate all text sentences of each sample X i in temporal order to construct the text input: I i = {<s>, t i−k , </s>, ..., t i−1 , </s>, t i , </s>}. Note that we use the special token <s> to mark the start of the whole sequence, and </s> to mark the end of each turn. Then, we encode each token in I i using a pre-trained RoBERTa (Liu et al., 2019) tokenizer. We assign the learnable segment embedding e t,1 to the tokens of t i and the last </s> token, and e t,0 to the rest of the tokens. The detailed tokenizing and encoding process is described in Appendix A.
We denote x i as the input text embeddings of I i .
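The following sketch illustrates this input construction, using the Hugging Face tokenizer as a stand-in for the paper's RoBERTa tokenizer (our code; the exact bookkeeping in Appendix A may differ):

```python
# Assemble I_i and its segment ids: <s> opens the sequence, </s> closes each
# turn; tokens of the current turn t_i plus its closing </s> get segment 1,
# everything else (including the leading <s>) gets segment 0.
from transformers import RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-base")

def build_text_input(texts):
    ids, segments = [tok.bos_token_id], [0]            # leading <s>
    for turn_idx, t in enumerate(texts):
        turn_ids = tok.encode(t, add_special_tokens=False) + [tok.eos_token_id]
        seg = 1 if turn_idx == len(texts) - 1 else 0   # current turn t_i
        ids.extend(turn_ids)
        segments.extend([seg] * len(turn_ids))
    return ids, segments

ids, segs = build_text_input(["hi there", "how are you"])
print(tok.convert_ids_to_tokens(ids))
print(segs)
```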
Uni-modal Encoders
Text Encoder Inspired by the remarkable success of uni-modal pre-trained models on various downstream tasks, we employ RoBERTa (Liu et al., 2019) as our text encoder. We pass x i into the text encoder to obtain the sequence representations: H t,i = TextEncoder(x i ), where H t,i ∈ R n×d h denotes the output hidden states of the last layer of RoBERTa, n is the length of input I i , and d h is the dimension of the hidden state.
Speech Encoder We design our speech encoder based on the WavLM structure (Chen et al., 2022) with three key modules: a feature extractor, a feature projection module, and a Transformer encoder module. The feature extractor consists of 8 temporal convolutional layers and a layer normalization. We implemented the first seven convolutional layers to be the same as WavLM, and added another convolutional layer with 512 channels, a stride of 5, and a kernel size of 5, in order to shorten the length of the output speech features. As a result, each output token of the speech features represents approximately 200 ms of speech with a stride of 100 ms. The feature projection layer is a layer normalization followed by a fully connected layer converting the size of the speech features from 512 to d h . The Transformer encoder module is equipped with a convolution-based relative position embedding layer and 12 WavLM Transformer layers. For each sample, we directly input the speech waveforms s i−1 and s i into our speech encoder, and denote the outputs of the feature projection layer for s i−1 and s i as f i−1 and f i , respectively. Then, we obtain a speech sequence a i by concatenating f i−1 and f i together with a separation token [SEP] and a starting token [CLS]: a i = {[CLS], f i−1 , [SEP], f i }, where a i ∈ R (m i−1 +m i +2)×d h denotes the concatenated sequence, and m i−1 and m i are the lengths of the feature sequences for s i−1 and s i , respectively. We pass a i as the input of the Transformer encoder module to get the speech sequence representations: H s,i = TransformerEncoder(a i ), where H s,i ∈ R (m i−1 +m i +2)×d h denotes the hidden states of the last Transformer layer.
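As a sketch of the speech sequence assembly (ours; it assumes [CLS] leads the sequence and [SEP] sits between the two turns, and treats the special tokens as learnable embedding vectors):

```python
import torch

# Join the two projected speech feature sequences into a_i of length
# m_{i-1} + m_i + 2 by prepending a [CLS] embedding and separating the turns
# with a [SEP] embedding. Shapes are illustrative.
d_h = 768
cls_emb = torch.nn.Parameter(torch.randn(1, d_h))
sep_emb = torch.nn.Parameter(torch.randn(1, d_h))

def assemble_speech_input(f_prev, f_cur):
    # f_prev: (m_{i-1}, d_h), f_cur: (m_i, d_h)
    return torch.cat([cls_emb, f_prev, sep_emb, f_cur], dim=0)

a_i = assemble_speech_input(torch.randn(40, d_h), torch.randn(55, d_h))
print(a_i.shape)  # torch.Size([97, 768]) = 40 + 55 + 2
```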
Modality Fusion Module
To integrate the two modalities, we employ a single self-attention Transformer layer as our modality fusion module. We first concatenate the text sequence representation H t,i and the speech sequence representation H s,i together. Then, we assign the text and speech representations learnable modality embeddings e m,0 and e m,1 , respectively, and add the modality embeddings to the concatenated representations as the input of our modality fusion module. Finally, we obtain the output hidden representations of the modality fusion module, H i ∈ R (n+m i−1 +m i +2)×d h , as the speech-text joint representations.
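A compact sketch of such a fusion module (ours; the number of attention heads and other hyperparameters are assumptions):

```python
import torch
import torch.nn as nn

# One self-attention Transformer layer over the concatenated sequence, with
# learnable modality embeddings e_{m,0} (text) and e_{m,1} (speech) added
# before concatenation, as described above.
d_h = 768

class FusionModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.modality_emb = nn.Embedding(2, d_h)   # 0 = text, 1 = speech
        self.layer = nn.TransformerEncoderLayer(
            d_model=d_h, nhead=12, batch_first=True)

    def forward(self, h_text, h_speech):
        h_text = h_text + self.modality_emb.weight[0]
        h_speech = h_speech + self.modality_emb.weight[1]
        return self.layer(torch.cat([h_text, h_speech], dim=1))

fused = FusionModule()(torch.randn(2, 50, d_h), torch.randn(2, 97, d_h))
print(fused.shape)  # (2, 147, 768)
```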
Pre-training Tasks
We introduce two novel pre-training objectives for our SPECTRA model, empowering SPECTRA to capture speech-text alignment and multimodal dialog context effectively.
Temporal Position Prediction
Existing speech-text pre-training works mainly learn from prior visual-text pre-training models. These works ignore that speeches are temporal sequences, and thus fail to learn fine-grained speech-text alignment. In this work, we propose a novel temporal position prediction (TPP) objective, which utilizes the textual part of the hidden representations H i to predict the starting and ending time of each word in the speech waveform. In particular, for each word w ij in utterance t i with its start/end time annotations s ij /e ij , we denote its first/last token in H i as h s ij /h e ij . The goal of the TPP pre-training objective is to predict its starting and ending time in s i with h s ij and h e ij , respectively. We use a squared error loss to optimize the TPP task: ℓ ij = (h s ij W start − s ij /L a ) 2 + (h e ij W end − e ij /L a ) 2 , where W start , W end ∈ R d h ×1 are learnable parameters and L a is the maximum speech length limit. By normalizing s ij and e ij over L a , we guarantee that the starting and ending time falls into [0,1].
Here, we only calculate the TPP loss for the words in the last two turns of dialog (i.e., t i−1 and t i ) for each sample X i . We calculate the average TPP loss over all words within those two turns as the TPP loss of dialog X i : L TPP = (1/(l i−1 + l i )) Σ w ij ∈ t i−1 ∪ t i ℓ ij , where l i−1 and l i denote the total lengths of transcripts t i−1 and t i in sample X i .
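A sketch of the TPP loss for one sample (ours, following the equations above; tensor shapes are illustrative):

```python
import torch

# Linear heads on the hidden states of each word's first/last token regress
# its start/end time, normalized by the maximum speech length L_a; the loss
# is averaged over all words of the last two turns.
d_h, L_a = 768, 10.0                      # L_a in seconds
W_start = torch.nn.Parameter(torch.randn(d_h, 1))
W_end = torch.nn.Parameter(torch.randn(d_h, 1))

def tpp_loss(h_start, h_end, starts, ends):
    # h_start/h_end: (num_words, d_h); starts/ends: (num_words,) in seconds
    pred_s = (h_start @ W_start).squeeze(-1)
    pred_e = (h_end @ W_end).squeeze(-1)
    return ((pred_s - starts / L_a) ** 2 + (pred_e - ends / L_a) ** 2).mean()

loss = tpp_loss(torch.randn(12, d_h), torch.randn(12, d_h),
                torch.rand(12) * L_a, torch.rand(12) * L_a)
print(loss.item())
```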
Cross-modal Response Selection
Inspired by the success of response selection tasks in textual dialog systems (Bao et al., 2019), we design a cross-modal response selection objective. For each sample X i , we randomly replace the text query t i or the speech query s i with utterances or speech from other dialogs in the dataset. In this way, for each sample X i , we can obtain three kinds of corrupted samples as negatives: (1) only the speech query is randomly substituted; (2) only the text query is randomly substituted; (3) both text and speech queries are randomly substituted. Note that both text and speech queries remain unchanged in the positive samples, as illustrated in Figure 2. Since the output of the first <s> token can be viewed as the representation of the whole speech-text sample, we apply a softmax function following a fully connected layer on top of the hidden state of the <s> token as a four-way classifier, predicting which case the current example belongs to. We utilize the cross-entropy loss to optimize the cross-modal response selection task, denoted as L CRS .
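A sketch of the four-way negative construction (ours; label conventions and field names are hypothetical):

```python
import random

# Labels: 0 = intact pair, 1 = speech query replaced, 2 = text query replaced,
# 3 = both replaced. Replacements are drawn from other samples in the corpus.
def make_crs_example(sample, corpus, rng=random):
    label = rng.randrange(4)
    text_q, speech_q = sample["text_query"], sample["speech_query"]
    if label in (1, 3):
        speech_q = rng.choice(corpus)["speech_query"]
    if label in (2, 3):
        text_q = rng.choice(corpus)["text_query"]
    return {"history": sample["history"], "text_query": text_q,
            "speech_query": speech_q, "label": label}

corpus = [{"history": [], "text_query": f"t{i}", "speech_query": f"s{i}"}
          for i in range(5)]
print(make_crs_example(corpus[0], corpus))
```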
Cross-modal Masked Data Modeling
Following previous works (Li et al., 2021), we also adopt the cross-modal representations H f for crossmodal masked language modeling (CMLM) and cross-modal masked acoustic modeling (CMAM) objectives. For masked language modeling, we follow the setup of RoBERTa (Liu et al., 2019) to dynamically mask out textual input tokens with a probability of 15%. For masked acoustic modeling, we follow Baevski et al. (2020) and Liu et al. (2020a) to mask continuous speech frames.
We modify the implementation of the original masked acoustic modeling method in previous works to increase the average number of masked speech frames in each sample. We provide the details of masked acoustic modeling in Algorithm 1 in Appendix B. The speech token masking step is performed between the feature extractor and feature projection. We employ the cross-entropy loss for the CMLM task (L CMLM ) and the mean absolute error loss for the CMAM task (L CMAM ).
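A rough sketch of span masking for speech frames (ours only; the paper's exact Algorithm 1 lives in its appendix and may differ in how spans are sampled):

```python
import numpy as np

# Zero out contiguous spans of speech frames; the model reconstructs them
# under a mean-absolute-error loss. mask_prob and span length are guesses.
def mask_speech_frames(num_frames, mask_prob=0.15, span=10, rng=np.random):
    mask = np.zeros(num_frames, dtype=bool)
    num_starts = max(1, int(num_frames * mask_prob / span))
    starts = rng.choice(num_frames - span, size=num_starts, replace=False)
    for s in starts:
        mask[s:s + span] = True
    return mask

m = mask_speech_frames(200)
print(m.sum(), "of", len(m), "frames masked")
```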
Joint Pre-training Objective
We combine the four pre-training objectives to form a joint pre-training objective for speech-text pre-training: L = L TPP + L CRS + L CMLM + L CMAM .
Fine-tuning on Downstream Tasks
We fine-tune SPECTRA on four downstream tasks, including multimodal sentiment analysis (MSA), emotion recognition in conversation (ERC), spoken language understanding (SLU), and dialog state tracking (DST).
We use the hidden state of the <s> token in H i , denoted as h i , and pass it through a prediction head with two fully-connected layers and a GELU activation (Hendrycks and Gimpel, 2016) between them to get the prediction: ŷ i = W (2) σ(W (1) h i + b (1) ) + b (2) , where σ denotes the GELU activation function, and W (1) , b (1) , W (2) , b (2) are new learnable parameters in the fine-tuning stage, with b (2) ∈ R d o . The output size d o for the MSA task is 1, and for ERC and SLU it is the corresponding number of classes. We adopt the squared error loss as the fine-tuning loss function for MSA. The cross-entropy loss is utilized for the rest of the tasks.
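A sketch of this prediction head (ours; the intermediate width d_h is an assumption):

```python
import torch
import torch.nn as nn

# Two fully connected layers with a GELU in between, applied to the hidden
# state of the <s> token. d_o = 1 for MSA (squared-error loss); for ERC and
# SLU it is the number of classes (cross-entropy loss).
d_h = 768

class PredictionHead(nn.Module):
    def __init__(self, d_o):
        super().__init__()
        self.fc1 = nn.Linear(d_h, d_h)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(d_h, d_o)

    def forward(self, h_s):                 # h_s: hidden state of <s>
        return self.fc2(self.act(self.fc1(h_s)))

head = PredictionHead(d_o=6)                # e.g., 6-way ERC on IEMOCAP
print(head(torch.randn(4, d_h)).shape)      # (4, 6)
```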
Pre-training Data
In this paper, we adopt Spotify100K (Clifton et al., 2020) to pre-train SPECTRA, which is a real-world scene speech-text dialog dataset. Spotify100K contains 105,360 podcast episodes, with nearly 60,000 hours of speeches covering a variety of genres, subject matter, speaking styles, and structure formats. The corpus also provides automatically-generated word-level textual transcripts, marking the starting and ending time in the speech for each word.
For a fair comparison with previous speech-text pre-training studies, we only use the first 960 hours of speech as well as the corresponding transcripts to pre-train our SPECTRA model.
Experimental Setup
Baselines In addition to state-of-the-art downstream models tailored for MSA, ERC, SLU, and DST (see Sections 4.3-4.6), we also compare SPECTRA with three types of pre-training models, including the text modality pre-training model RoBERTa (Liu et al., 2019), the speech modality pre-training model WavLM (Chen et al., 2022), and the speech-text multimodal pre-training model CTAL (Li et al., 2021).
Experimental Settings during Pre-training
We use the first 960 hours of speech and textual transcripts of the Spotify100K dataset for pre-training. We cut the speech waveform into slices of a maximum length of 10 seconds and view each slice with the corresponding transcripts as a single dialog turn, forming 356,380 dialog turns in total. By using these dialogs and setting k to a maximum of 7, we construct 350,784 samples, where each sample consists of 2~8 dialog turns of texts and 2 turns of speeches. Besides, we use the pre-trained models RoBERTa-base and WavLM-base+ to initialize our text and speech encoder, respectively. Since our speech encoder has one more convolution layer than WavLM-base+, we only initialize the first seven convolution layers with pre-trained parameters and randomly initialize the last layer. Both text and speech encoders have 12 Transformer layers with a hidden size d h of 768. We pre-train our SPECTRA model for 100 epochs on 8 Tesla-A100 GPUs with a batch size of 20 per GPU. We use AdamW (Loshchilov and Hutter, 2018) to optimize our model with a peak learning rate of 1 × 10 −4 and a linear warmup for the first 1% of updates.
Experimental Settings during Fine-tuning
For the SpokenWoz dataset, each dialog turn consists of two utterances, one from the user and the other from the system. For the other datasets, each dialog turn is a single utterance. For all datasets, we truncate the speech length of each dialog turn to a maximum of 10 seconds. We fine-tune our pre-trained checkpoint on each downstream dataset using an AdamW (Loshchilov and Hutter, 2018) optimizer with a peak learning rate of 2 × 10 −5 and a cosine annealing warmup.
Fine-tuning on MSA
For the MSA task, our model aims to predict the positive or negative sentiment polarity of the given multi-modal input. We conduct experiments on two multi-modal datasets, MOSI (Zadeh et al., 2016) and MOSEI (Zadeh et al., 2018), to evaluate the effectiveness of our model for the MSA task. We adopt the accuracy of positive/negative sentiment classification (denoted as Acc 2 ) as the evaluation metric for our model and the baselines. The experimental results are reported in Table 1.
From the results, we can observe that our model achieves substantially better performance than previous state-of-the-art (SOTA) methods on both datasets. In particular, for the MOSI dataset, the accuracy increases by 3.10% over the strongest baseline MIB (Mai et al., 2022). In addition, as shown in Table 2, our SPECTRA also significantly outperforms the speech modality pre-training model WavLM and speech-text pre-training model CTAL.
Fine-tuning on ERC
ERC task requires the model to predict the emotion category of an utterance given a speech clip with its transcripts and dialog history. Here, we fine-tune our model with the widely-used IEMOCAP dataset (Busso et al., 2008), and follow the settings with Chudasama et al. (2022) to perform a 6-way classification task. For each sample, we construct 11 turns of text and 2 turns of speech with a maximum text length of 512.
In Table 1, we report the accuracy of six-way classification for our model and previous SOTA method M2FNET (Chudasama et al., 2022). In addition, from Table 2, we can observe that our method outperforms uni-modal pre-training models, as well as speech-text pre-training baseline CTAL. Compared with the uni-modal baselines RoBERTa and WavLM, our model benefits from multi-modal pre-training tasks that capture interactions and alignments between modalities. Compared with CTAL, our model is equipped with better speech-text alignment and multi-turn dialog context information with the help of TPP and CRS pre-training tasks.
Fine-tuning on SLU
We also conduct experiments on the spoken language understanding (SLU) task, which aims to predict the user intent (Lin and Xu, 2019) given a spoken utterance with the textual transcript. We use MIntRec (Zhang et al., 2022a) as the experimental dataset for SLU and adopt classification accuracy as the evaluation metric. From Tables 1 and 2, we can observe that SPECTRA obtains significantly better results than previous methods. In particular, our SPECTRA model improves the results of RoBERTa and the previous SOTA method MAG-BERT (Rahman et al., 2020) by 1.55% and 2.47%, respectively. Compared to WavLM and CTAL, our model can better capture semantic information in textual data and the context information within each dialog.
Fine-tuning on DST
For dialogue state tracking, we use an in-house, large-scale, cross-modal dataset called SpokenWoz. The dataset was collected by crowdsourcing recordings through phone calls using the Appen platform (https://appen.com/). Transcriptions were obtained using a commercial ASR system, and speech-text pairs were annotated using a schema similar to MultiWoz (Eric et al., 2019). SpokenWoz consists of 204k turns, 5.7k dialogs, and 249 hours of recordings. We adopt joint goal accuracy (JGA) as the evaluation metric, which compares the predicted and ground-truth dialogue states at each turn. We follow Trippy (Heck et al., 2020) and substitute its context model BERT with our SPECTRA model.
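For reference, joint goal accuracy can be computed as follows (our sketch; dialog states are represented as slot-value dictionaries):

```python
# A turn counts as correct only if the full predicted dialog state (all
# slot-value pairs accumulated so far) matches the ground truth exactly.
def joint_goal_accuracy(pred_states, gold_states):
    correct = sum(p == g for p, g in zip(pred_states, gold_states))
    return correct / len(gold_states)

pred = [{"hotel-area": "north"}, {"hotel-area": "north", "hotel-stars": "4"}]
gold = [{"hotel-area": "north"}, {"hotel-area": "north", "hotel-stars": "5"}]
print(joint_goal_accuracy(pred, gold))  # 0.5 -- the second turn mismatches
```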
As shown in Table 1, our model outperforms the previous SOTA method, SPACE+WavLM. In addition, our model also surpasses the three pretraining baselines by a noticeable margin. This demonstrates better speech-text alignment is critical to tackling complicated conversations.
Ablation Study
To better understand the effectiveness of our SPECTRA pre-training method, we investigate the influence of pre-training components and dialog history on the overall performance of SPECTRA. We report the ablation test results in Table 2.
Impact of Pre-training To demonstrate the efficiency of multi-modal pre-training, we directly use uni-modal encoders and randomly initialize the modality fusion module. We observe a significant performance drop by comparing (a) "w/o multimodal pre-training" to other pre-training settings on all five datasets. In particular, setting (a) directly collapses on the ERC task, which is a complicated and conversational scenario. This verifies the necessity of cross-modal pre-training and aligning speech-text modalities. In addition, by comparing SPECTRA and setting (b) "using less pre-training data", we can find that using more pre-training data can further improve the performance of our model. Impact of TPP and CRS By comparing the setting (c) "w/o TPP" to SPECTRA, the performances on all five datasets drop to different extents, which verifies the generalization and effectiveness of our TPP pre-training task. Specifically, the performance drops significantly on SpokenWoz, which requires the model to have a stronger ability to align two modalities. This demonstrates that our TPP pre-training task empowers the model with stronger alignment modeling ability. For setting (d) "w/o CRS" with SPECTRA, the performance drops significantly on multi-turn dialog tasks such as ERC and DST. This suggests that the CRS task is essential to model multi-turn dialog context.
Impact of Dialog History
In setting (e) "using 1 turn of textual dialog history", each instance consists of 2 turns of paired speech and text.The model performance drops substantially on ERC and DST downstream tasks by comparing it with SPECTRA. This demonstrates that increasing dialog history in the pre-training stage is beneficial to the tasks that require multi-turn dialog context.
Case Study
To have a straightforward understanding of how we learn cross-modal interaction in our proposed SPECTRA model, we conduct a case study by providing two cases sampled from the MIntRec dataset. These two cases are incorrectly predicted by the model pre-trained without TPP but correctly predicted by our SPECTRA model. In Figure 3, we visualize the self-attention weights of the fusion layer in our model as well as the model pre-trained without TPP (denoted as w/o TPP). From Figure 3(a) and 3(c), we observe that there are rich crossmodal interactions in the fusion layer of the proposed SPECTRA model. Our model can capture fine-grained information between text and speech for more accurate classification. In contrast, we also visualize the self-attention weights of the w/o TPP model in Figure 3(b) and 3(d). Both cases show that text and speech sequences seldom connect to each other in self-attention layers.
In Table 3, we also illustrate the intent prediction results obtained by SPECTRA and w/o TPP. From the results, we can observe that our model can attend to both text and speech sequences effectively to predict correct intent results. However, w/o TPP is confused by the wrong labels since it hardly attends to speech tokens, which indicates that it has the propensity to omit useful information that exists in speech exclusively.
Conclusion
In this paper, we proposed our model SPECTRA, the first speech-text dialog pre-training model. Considering the temporality of the speech and text modalities, we introduced a novel temporal position prediction pre-training task to learn word-level speech-text alignment. To capture multi-modal dialog context in our model, we generalized the response selection task to multi-modal scenarios. Extensive experiments show that our pre-training method can learn better cross-modal interactions as well as multi-modal contextual information and significantly outperforms other strong baselines. In the future, we would like to extend speech-text dialog pre-training to more modalities or generative tasks.
Limitations
We analyze the limitations of this work, so as to further improve the performance of our model in future work. Based on our empirical observations, we reveal several limitations, which can be divided into three primary categories. (1) First, our proposed SPECTRA method relies on large-scale spoken dialog corpora with explicit word-level speech-text alignment annotation, such as Spotify100K. This limits the generality of our model on other spoken dialog corpora.
In the future, we would like to develop a semi-supervised pre-training method to leverage both labeled and unlabeled datasets. (2) Second, our method is mainly designed for speech-text understanding and has not been fully explored for generative tasks. We plan to devise a dialog generation pre-training objective to empower the model with better generation ability. (3) Third, this work only involves the speech and text modalities. We are interested in handling more modalities, such as images or videos, to enrich the cross-modal information in joint representations.
"year": 2023,
"sha1": "8dd433ed5539f50d33465d1302c82a24396219cf",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2023.acl-long.438.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "59e3c597f8bfebdb9655d32d778cf4ec49e5a9f2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Role of Transmembrane Proteins for Phase Separation and Domain Registration in Asymmetric Lipid Bilayers
It is well known that the formation and spatial correlation of lipid domains in the two apposed leaflets of a bilayer are influenced by weak lipid–lipid interactions across the bilayer’s midplane. Transmembrane proteins span through both leaflets and thus offer an alternative domain coupling mechanism. Using a mean-field approximation of a simple bilayer-type lattice model, with two two-dimensional lattices stacked one on top of the other, we explore the role of this “structural” inter-leaflet coupling for the ability of a lipid membrane to phase separate and form spatially correlated domains. We present calculated phase diagrams for various effective lipid–lipid and lipid–protein interaction strengths in membranes that contain a binary lipid mixture in each leaflet plus a small amount of added transmembrane proteins. The influence of the transmembrane nature of the proteins is assessed by a comparison with “peripheral” proteins, which result from the separation of one single integral protein into two independent units that are no longer structurally connected across the bilayer. We demonstrate that the ability of membrane-spanning proteins to facilitate domain formation requires sufficiently strong lipid–protein interactions. Weak lipid–protein interactions generally tend to inhibit phase separation in a similar manner for transmembrane as for peripheral proteins.
Introduction
Lipids in membranes tend to mix nonideally [1]. Many lipid mixtures are known for their ability to phase separate or form domains [2]. Of special interest is domain formation in biomembranes because of its putative functional role associated with the membrane raft hypothesis [3][4][5]. The plasma membrane of mammalian cells is asymmetric and multicomponent, but its lipid composition has often been described-in a first-order approximation-as consisting of phosphatidylcholine (PC) and sphingomyelin (SM) in the outer leaflet, phosphatidylserine (PS) and phosphatidylethanolamine (PE) in the inner leaflet, and cholesterol as being able to populate both leaflets [6]. It is well known from experiments in model membranes that the lipids in the outer leaflet appear to represent a mixture of saturated and unsaturated lipids with cholesterol that forms liquid-ordered (lo) domains [7,8]. No such tendency is observed for the lipids in the inner leaflet [9]. However, there are some hints-concluded mostly from computer simulations [10,11]-that suggest lo domains could also exist in the inner leaflet of the plasma membrane and that they are spatially registered with those in the outer leaflet [6].
The raft hypothesis remains controversial [12,13], but it has sparked a large number of experimental [14,15], computational [16], and theoretical [17] studies about domain formation in model membranes, with an increasing focus on inter-leaflet domain coupling in asymmetric bilayers [7,[18][19][20]. Despite being in their fluid state, sufficiently large domains located in the apposed leaflets of a lipid bilayer tend to register due to a domain mismatch energy on the order of 0.1 − 0.2 k B T/nm 2 [21] (k B T is the thermal energy unit: Boltzmann constant times absolute temperature). There is experimental evidence that the mismatch energy is large enough to not only register preexisting domains, but to even induce domains in one leaflet by an existing domain in the apposed leaflet [22]. The origin of the mismatch energy has been suggested to be mostly entropic [23,24], stemming from a more efficient dynamic penetration of the bilayer's midplane by the lipid tails in the registered as compared to the unregistered domain arrangement. Sufficiently small domains may antiregister to minimize the line tension by hydrophobic domain matching [25,26]. Recent theoretical modeling on the mean-field level of a lattice gas has addressed the calculations of phase diagrams in asymmetric membranes [25,[27][28][29][30][31][32].
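A quick back-of-the-envelope estimate (ours) shows why this seemingly small per-area penalty registers large domains while leaving small ones free to antiregister:

```python
import math

# Mismatch energy of an unregistered circular domain of radius r, using the
# quoted penalty of 0.1-0.2 kBT/nm^2. Already tens of kBT for r ~ 10 nm, so
# thermal fluctuations cannot keep large domains out of registry.
for r in (2, 5, 10, 20):                       # domain radius in nm
    area = math.pi * r ** 2
    print(f"r = {r:2d} nm: {0.1 * area:6.1f} - {0.2 * area:6.1f} kBT")
```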
Here, the domain mismatch energy penalty drives domain registration, but domain formation itself is driven foremost by interactions of the lipids in the same leaflet. This can lead to a rich phase behavior depending on the lipid-lipid interaction strength within each leaflet and the strength of the inter-leaflet domain coupling.
As mentioned, lipid domains can be coupled across the membrane "thermodynamically" through a domain mismatch energy. Here, registered domains are energetically (but not structurally) connected across the bilayer. There is another possibility that has been suggested [33-35] but not further pursued: transmembrane proteins or peptides, or membrane-spanning lipids (such as bolalipids [36]), provide a "structural" domain coupling mechanism that may act in conjunction with the above-mentioned thermodynamic mechanism of energy penalties for mismatching domains. Obviously, membrane-spanning proteins are able to physically connect the domains they are associated with across the membrane, irrespective of the inter-leaflet domain interaction energy. In addition, one single transmembrane protein has a lower in-plane translational entropy in a membrane as compared to two equivalent "peripheral" proteins that result from the separation of the transmembrane protein into two independent units. This lower entropy, too, is expected to favor domain registration. On the other hand, transmembrane proteins of different hydrophobic lengths invoke hydrophobic mismatch penalties in membranes [37] that will affect their ability to induce phase separation. This was observed, for example, by Ackerman and Feigenson [38] in a coarse-grained molecular dynamics simulation of a four-component lipid membrane in the presence of additional transmembrane WALP peptides of varying lengths. Independently of their length, however, all WALP peptides were observed to increase domain alignment. The structural coupling mechanism is not confined to transmembrane proteins; it also applies to bolalipids [39] and even to lipids with long tails, such as monosialotetrahexosylganglioside (GM1) [40] and other long saturated acyl chains [11], that interact with the lipids in the apposed leaflet.
The objective of the present work is to propose and analyze a minimal model for phase separation in a mixed lipid bilayer that is subject to two distinct inter-leaflet coupling mechanisms: a thermodynamic one due to the presence of a compositional mismatch between the two leaflets and a structural one due to the presence of transmembrane proteins. The term "transmembrane protein" stands as a representative for any type of membrane-spanning molecule that is able to interact with the lipids in both leaflets, including integral proteins, transmembrane peptides, bolalipids, and even long-chain lipids. We propose a bilayer-type lattice model, with two two-dimensional lattices stacked one on top of the other. Lipids of two different types (referred to as A and B) occupy one lattice site each, whereas transmembrane proteins consist of two lattice sites that span the bilayer. Hence, each leaflet contains a ternary mixture consisting of two different lipid types and the protein. We introduce all relevant lipid-lipid and lipid-protein interactions and analyze the model on the mean-field level by calculating spinodal surfaces, critical points, and tri-critical points, as well as coexistence regions and tie lines in some cases. We demonstrate the ability of transmembrane proteins to facilitate phase separation and to register domains across the bilayer. Our work represents a first attempt to approach an understanding of the three-dimensional phase diagram of a mixed protein-containing bilayer; the richness of the features in the phase diagram justifies the simplicity of our lattice approach, including the neglect of effects due to hydrophobic mismatch, membrane bending, and multi-body interactions.
Theory
We consider two-dimensional lattice models for the external ("ext") and internal ("int") leaflets of a lipid bilayer that contains a fixed number of transmembrane proteins. The two lattices have the same coordination number z (for example, z = 4 for a square and z = 6 for a triangular lattice) and reside on top of each other so that each lattice site on the external lattice contacts exactly one lattice site on the internal lattice. Each lattice has a total of M lattice sites; the external one hosts P transmembrane proteins, $A_{\rm ext}$ lipids of type A, and $B_{\rm ext} = M - P - A_{\rm ext}$ lipids of type B. Similarly, the internal lattice hosts P transmembrane proteins, $A_{\rm int}$ lipids of type A, and $B_{\rm int} = M - P - A_{\rm int}$ lipids of type B. Because transmembrane proteins span the entire bilayer, the protein positions in the two leaflets are exactly the same for each microstate. The illustration of one specific microstate in Figure 1 shows the correlations of protein numbers and positions across the two lattices.
Free Energy of a Lipid Membrane That Contains Transmembrane Proteins
We consider a mean-field Helmholtz free energy F = U − TS of our lattice model at fixed temperature T. Its internal energy $U = U_{\rm ext} + U_{\rm int} + U_{\rm coupl}$ reflects nearest-neighbor interactions within each lattice ($U_{\rm ext}$ and $U_{\rm int}$) and a coupling term across the two lattices ($U_{\rm coupl}$). The entropy $S = k_B\ln\Omega$ (with $k_B$ denoting Boltzmann's constant) accounts for the number $\Omega$ of available states. On the level of the random mixing approximation [41,42], the in-plane nearest-neighbor interaction energies $U_{\rm ext}$ in the external layer and $U_{\rm int}$ in the internal layer can be expressed in terms of effective lipid A–lipid B, lipid A–protein, and lipid B–protein interaction strengths in the external and internal leaflets. These effective strengths reflect the actual interaction strengths between lipid A–lipid A ($\omega^{\rm ext}_{AA}$ and $\omega^{\rm int}_{AA}$), lipid A–lipid B ($\omega^{\rm ext}_{AB}$ and $\omega^{\rm int}_{AB}$), lipid B–lipid B ($\omega^{\rm ext}_{BB}$ and $\omega^{\rm int}_{BB}$), lipid A–protein ($\omega^{\rm ext}_{AP}$ and $\omega^{\rm int}_{AP}$), lipid B–protein ($\omega^{\rm ext}_{BP}$ and $\omega^{\rm int}_{BP}$), and protein–protein ($\omega^{\rm ext}_{PP}$ and $\omega^{\rm int}_{PP}$), all expressed in units of $k_BT$. The physical situation we are interested in is the presence of lipid A–lipid B interactions in each leaflet and preferential interactions of the proteins with only one lipid type. To this end, we may simply assume $\omega^{\rm ext}_{BP} = \omega^{\rm ext}_{AA} = \omega^{\rm ext}_{BB} = \omega^{\rm ext}_{PP} = 0$ for the external leaflet and $\omega^{\rm int}_{BP} = \omega^{\rm int}_{AA} = \omega^{\rm int}_{BB} = \omega^{\rm int}_{PP} = 0$ for the internal leaflet. This would leave us with $\chi^{\rm ext}_L = z\omega^{\rm ext}_{AB}/2$, $\chi^{\rm ext}_P = z\omega^{\rm ext}_{AP}/2$, $\chi^{\rm int}_L = z\omega^{\rm int}_{AB}/2$, and $\chi^{\rm int}_P = z\omega^{\rm int}_{AP}/2$, whereas $\tilde\chi^{\rm ext}_P$ and $\tilde\chi^{\rm int}_P$ both vanish. More generally, we assume everywhere in the present work $\tilde\chi^{\rm ext}_P = \tilde\chi^{\rm int}_P = 0$, whereas $\chi^{\rm ext}_L$, $\chi^{\rm ext}_P$, $\chi^{\rm int}_L$, $\chi^{\rm int}_P$ may all be non-vanishing. If the interactions are symmetric across the bilayer, we are left with only two interaction strengths, which we refer to as $\chi_L = \chi^{\rm ext}_L = \chi^{\rm int}_L$ and $\chi_P = \chi^{\rm ext}_P = \chi^{\rm int}_P$. We also consider cases of asymmetric interactions where $\chi^{\rm ext}_L \neq \chi^{\rm int}_L$ or $\chi^{\rm ext}_P \neq \chi^{\rm int}_P$. The case $\chi^{\rm ext}_L \neq \chi^{\rm int}_L$ accounts for the different propensities of the lipids in the two leaflets of a plasma membrane to undergo phase separation [9].
Symmetry demands that the lowest order term of the inter-leaflet coupling energy across the membrane be quadratic; on the basis of the random mixing approximation, we obtain $U_{\rm coupl}/k_BT = \Lambda(M-P)\left[(A_{\rm ext}-A_{\rm int})/M\right]^2$, where the coupling constant $\Lambda$ reflects the inter-leaflet interaction of lipid A with lipid B ($\omega_{AB}$), of lipid A with lipid A ($\omega_{AA}$), and of lipid B with lipid B ($\omega_{BB}$). It is convenient to define the three mole fractions $\phi = A_{\rm ext}/M$, $\psi = A_{\rm int}/M$, and $\alpha = P/M$ (Equation (5)). Using Stirling's approximation $\ln(x!) \approx x\ln x - x$ in the expression for S, as well as the definitions in Equation (5), we find the total Helmholtz free energy $f = F/(Mk_BT)$, in units of $k_BT$ and per lattice site, as given in Equation (6), where we again note that we assume $\tilde\chi^{\rm ext}_P = \tilde\chi^{\rm int}_P = 0$. Without that assumption, the free energy in Equation (6) would contain additional contributions proportional to $\tilde\chi^{\rm ext}_P$ and $\tilde\chi^{\rm int}_P$. A specific goal of the present work is to quantify the phase behavior that follows from the function $f = f(\phi, \psi, \alpha)$ specified in Equation (6), subject to fixing the interaction strengths $\chi^{\rm ext}_L$, $\chi^{\rm int}_L$, $\chi^{\rm ext}_P$, $\chi^{\rm int}_P$, $\Lambda$, in the thermodynamic limit of fixed temperature T and an infinitely large membrane size, $M \to \infty$. The compositional variables $\phi$, $\psi$, and $\alpha$ can vary independently within the ranges $0 \le \alpha \le 1$, $0 \le \phi \le 1-\alpha$, and $0 \le \psi \le 1-\alpha$. However, the interesting protein mole fraction, on which we focus in the present work, is that of small $\alpha$. We generally consider $0 \le \alpha < 0.05$. This seems reasonable because only the most protein-rich biological membranes, such as the purple membrane of Halobacterium halobium, have protein-to-lipid molar ratios larger than 1:50 [43]. Larger mole fractions may occur locally [44], but their consideration would not only add another layer of complexity to the present work; it would also raise concerns about the structural stability of membranes that the present simple lattice model is not designed to address. The mole fractions $\phi$, $\psi$, and $\alpha$ constitute three independent degrees of freedom. This renders the phase diagram three-dimensional, with a maximum of four phases that can coexist. We note two symmetries. The first one, $f(\phi, \psi, 0) = f(1-\phi, 1-\psi, 0)$, applies to a lipid bilayer that does not contain proteins. The second one, $f(\phi, \psi, \alpha) = f(\psi, \phi, \alpha)$, is valid for a membrane with symmetric interactions, $\chi^{\rm ext}_L = \chi^{\rm int}_L$ and $\chi^{\rm ext}_P = \chi^{\rm int}_P$.
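To make the model concrete, the following minimal Python sketch evaluates a free energy of the form described above. Since Equation (6) is not reproduced here, the expression below is our reconstruction from the stated entropy, random-mixing, and coupling terms; the function name and the numerical safeguards are our own additions, and the sketch should be read as illustrative rather than as the paper's verbatim formula.

```python
import numpy as np

def free_energy(phi, psi, alpha, chiL_ext, chiL_int, chiP_ext, chiP_int, Lam):
    """Mean-field free energy per lattice site, f = F/(M kB T), for a bilayer
    with transmembrane proteins. Hedged reconstruction of Eq. (6): entropy
    from placing the proteins once (they span both leaflets), random-mixing
    in-plane interactions, and the inter-leaflet mismatch coupling."""
    xlogx = lambda x: np.where(x > 0, x * np.log(np.clip(x, 1e-300, None)), 0.0)
    b_ext = 1.0 - alpha - phi          # mole fraction of lipid B, external leaflet
    b_int = 1.0 - alpha - psi          # mole fraction of lipid B, internal leaflet
    # mixing entropy (per site, in units of kB); the -(1-alpha)ln(1-alpha)
    # term arises because protein positions coincide in the two leaflets
    entropy = (xlogx(phi) + xlogx(b_ext) + xlogx(alpha)
               + xlogx(psi) + xlogx(b_int) - xlogx(1.0 - alpha))
    # in-plane effective interactions (the chi-tilde terms are zero, as in the text)
    energy = (chiL_ext * phi * b_ext + chiP_ext * alpha * phi
              + chiL_int * psi * b_int + chiP_int * alpha * psi)
    coupling = Lam * (1.0 - alpha) * (phi - psi) ** 2   # inter-leaflet mismatch
    return entropy + energy + coupling
```

One consistency check: for $\alpha = 0$ and $\Lambda = 0$, the expression reduces to two uncoupled regular solutions, each with a critical point at $\phi = 1/2$ and $\chi_L = 2$, as required.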
Spinodals and Critical Points
To investigate the phase behavior, we first consider the spinodal surface, which can be calculated from the vanishing of the determinant, $\det A = 0$, of the stability matrix A, the matrix of second derivatives of f with respect to $\phi$, $\psi$, and $\alpha$ (Equation (7)). Points $\{\phi, \psi, \alpha\}$ inside the spinodal surface, where A is no longer positive definite, are locally unstable. Tie lines with end points $\{\phi_1, \psi_1, \alpha_1\}$ and $\{\phi_2, \psi_2, \alpha_2\}$ are determined by the familiar common tangent plane construction [42] (Equation (8)), which requires the three partial derivatives of f, as well as the tangent-plane intercept, to coincide at the two end points. The existence of three distinct points that satisfy the common tangent plane construction defines three-phase coexistence. We also note that the limit of tie lines with vanishingly small separation between the two coexisting compositions $\{\phi_1, \psi_1, \alpha_1\}$ and $\{\phi_2, \psi_2, \alpha_2\}$ defines a critical point. Critical points are located on the spinodal surface (as determined by Equation (7)); in addition, they fulfill the operator equation (9), where $B_\phi$, $B_\psi$, $B_\alpha$ are the cofactors of A along one arbitrarily chosen row or column; Equation (10) states the result of taking the middle row. We are not aware of previous approaches that express the critical point condition in the operator form of Equation (9). In Appendix A, we briefly discuss the derivation of Equation (9) and state equivalent criteria that appear elsewhere in the literature [45-47].
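In practice, both conditions can be generated symbolically. The sketch below uses SymPy together with the reconstructed free energy from the previous sketch (again our assumption, not the paper's verbatim Equation (6)): it builds the stability matrix A, the spinodal condition det A = 0, and the critical-point condition as the cofactor-weighted directional derivative of det A along the middle row, as in Equation (10). The example interaction values are ours.

```python
import sympy as sp

phi, psi, alpha = sp.symbols('phi psi alpha', positive=True)
chiLe, chiLi, chiPe, chiPi, Lam = 2.1, 2.1, 1.2, 1.2, 0.05  # example values

# hedged reconstruction of f (same form as the earlier numerical sketch)
be, bi = 1 - alpha - phi, 1 - alpha - psi
f = (phi*sp.log(phi) + be*sp.log(be) + alpha*sp.log(alpha)
     + psi*sp.log(psi) + bi*sp.log(bi) - (1 - alpha)*sp.log(1 - alpha)
     + chiLe*phi*be + chiPe*alpha*phi + chiLi*psi*bi + chiPi*alpha*psi
     + Lam*(1 - alpha)*(phi - psi)**2)

x = (phi, psi, alpha)
A = sp.Matrix(3, 3, lambda i, j: sp.diff(f, x[i], x[j]))  # stability matrix
detA = A.det()                         # detA = 0 defines the spinodal surface

# critical points: additionally sum_j B_j * d(detA)/dx_j = 0, with B_j the
# cofactors of A along one row (here the middle row, as in Eq. (10))
B = [A.cofactor(1, j) for j in range(3)]
crit = sum(B[j]*sp.diff(detA, x[j]) for j in range(3))

spinodal = sp.lambdify(x, detA, 'numpy')   # evaluate detA on a (phi, psi) grid
critical = sp.lambdify(x, crit, 'numpy')
```

Contours of `spinodal(...) = 0` at fixed α reproduce spinodal lines such as those in Figure 2, and simultaneous zeros with `critical` locate the critical points.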
Example for Calculation of Spinodals and Critical Points
In Figure 2, we show two examples for the calculation of spinodal surfaces according to $\det A = 0$ (with A specified in Equation (7)) and of their critical points (Equation (9), with the cofactors specified in Equation (10)). To this end, we choose $f(\phi, \psi, \alpha)$ according to Equation (6) with fixed $\Lambda = 0.05$. Note that the magnitude of $\Lambda$ can be obtained by dividing the domain mismatch energy by the cross-sectional area per lipid (typically 0.7 nm²). However, the domain mismatch energy is not well known. In the Introduction, we referred to the range $0.1$–$0.2\,k_BT$/nm² estimated by Risselada et al. [21] through molecular dynamics simulations, but Putzel et al. [29] argued the domain mismatch energy could be an order of magnitude lower. Clearly, our value $\Lambda = 0.05$ should be regarded as a rough estimate of a quantity that remains poorly understood.
The left diagram of Figure 2 refers to the absence of proteins, $\alpha = 0$. It displays four spinodal lines (curves in blue color labeled "a"-"d") and the solution of Equation (9) (the curve in red color), all calculated at fixed $\chi^{\rm ext}_L = 2.1$ and the four different choices $\chi^{\rm int}_L = 2.1$ (spinodal labeled "a"), $\chi^{\rm int}_L = 2.045$ ("b"), $\chi^{\rm int}_L = 2.0272$ ("c"), and $\chi^{\rm int}_L = 1.95$ ("d"). Note that there is only a single red curve, independent of $\chi^{\rm int}_L$, because the third derivatives entering Equation (9) eliminate the terms quadratic in the compositions through which the lipid-lipid interaction strengths enter. Intersections of the blue and red curves mark the critical point locations. The spinodal marked "a" exhibits two critical points, "b" four critical points, "c" two tri-critical points, and "d" zero critical points. Figure 2 will serve as a useful reference in the Results section below.
Numerical Calculation of Coexisting Phases
In order to compute phase diagrams, we need to determine the location of coexisting phases. To this end, it is convenient to minimize the composite thermodynamic free energy $f_{\rm th}$ of a potentially phase-separated membrane; in Equation (11), we allow for up to four coexisting phases, each entering with its own composition and population fraction subject to the lever rule. Hence, the minimization of $f_{\rm th} = f_{\rm th}(\phi, \psi, \alpha)$ according to Equation (11) with respect to 12 independent variables and fixed interaction parameters $\chi^{\rm ext}_L$, $\chi^{\rm int}_L$, $\chi^{\rm ext}_P$, $\chi^{\rm int}_P$, $\Lambda$ completely specifies the phase behavior and thus can be used to compute all coexisting phases that correspond to a given point $\{\phi, \psi, \alpha\}$ in the phase diagram. We point out that the accurate calculation of complete three-dimensional phase diagrams as a function of all parameters, and their meaningful visualization, is a formidable task beyond the scope of the present work. Instead, we focus on a few examples that illustrate the role of transmembrane proteins for domain registration across the bilayer.
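A two-phase special case of this minimization can be sketched in a few lines. In the code below (our own illustration: the symmetric interaction values, the overall composition, and the compact form of f are all assumptions, and f is the same reconstruction used earlier), phase 2 is eliminated via the lever rule so that only one phase composition and one population fraction remain as variables.

```python
import numpy as np
from scipy.optimize import minimize

chiL, chiP, Lam = 2.1, 1.2, 0.05     # symmetric interactions (example values)

def f(phi, psi, alpha):
    # compact symmetric version of the reconstructed free energy per site
    xl = lambda x: x*np.log(x) if x > 0 else 0.0
    be, bi = 1 - alpha - phi, 1 - alpha - psi
    return (xl(phi) + xl(be) + xl(alpha) + xl(psi) + xl(bi) - xl(1 - alpha)
            + chiL*(phi*be + psi*bi) + chiP*alpha*(phi + psi)
            + Lam*(1 - alpha)*(phi - psi)**2)

def f_th(p, avg):
    """Lever-rule free energy of a trial two-phase split; the full Eq. (11)
    allows up to four phases in the same way."""
    phi1, psi1, a1, nu = p
    if not (0.0 < nu < 1.0):
        return 1e6
    x1 = np.array([phi1, psi1, a1])
    x2 = (np.array(avg) - nu*x1)/(1.0 - nu)          # lever rule for phase 2
    ok = (np.all(x1 > 0) and np.all(x2 > 0)
          and x1[0] + x1[2] < 1 and x1[1] + x1[2] < 1
          and x2[0] + x2[2] < 1 and x2[1] + x2[2] < 1)
    if not ok:
        return 1e6                                   # infeasible split
    return nu*f(*x1) + (1 - nu)*f(*x2)

avg = (0.70, 0.30, 0.04)                             # overall {phi, psi, alpha}
res = minimize(f_th, x0=[0.80, 0.40, 0.03, 0.5], args=(avg,),
               method='Nelder-Mead')
# res.x[:3] and its lever-rule partner are the tie-line end points; if the
# overall point is stable, the two trial phases collapse onto each other.
```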
Free Energy of a Lipid Membrane That Contains Peripheral Proteins
The expected mechanism of how transmembrane proteins couple domains across a lipid bilayer is a structural one, based on the ability of the proteins to protrude into (and interact with) both the external and internal membrane leaflets. To assess how effective this mechanism is, we compare transmembrane proteins with "peripheral" proteins, where we obtain 2P peripheral proteins by cutting each of the P transmembrane proteins in the middle. The two peripheral proteins that result from one single transmembrane protein are able to independently relocate in their host leaflet; see the illustration in Figure 3.
Two modifications of the free energy are associated with transitioning from transmembrane to peripheral proteins. The first is an increase in the number of available states due to the presence of twice as many proteins, which gives rise to the additional contribution $\alpha\ln\alpha + (1-\alpha)\ln(1-\alpha)$ in the free energy per lattice site. The second is a modification of the inter-leaflet coupling term due to the presence of lipid-protein interactions across the bilayer. To deduce the latter, we first introduce inter-leaflet interaction strengths between lipid A and protein ($\omega_{AP}$), between lipid B and protein ($\omega_{BP}$), and between protein and protein ($\omega_{PP}$). These interaction strengths are present in addition to the inter-leaflet interaction strengths of lipid A with lipid A ($\omega_{AA}$), of lipid B with lipid B ($\omega_{BB}$), and of lipid A with lipid B ($\omega_{AB}$) as introduced in Equation (4). Recalling that we operate on the level of the random mixing approximation, we recognize that the total inter-leaflet interactions are given by $U_{\rm coupl}/(Mk_BT) = \Lambda(\phi-\psi)^2$, where the coupling constant $\Lambda$, defined in Equation (4), is independent of $\omega_{AP}$, $\omega_{BP}$, and $\omega_{PP}$. The fact that the inter-leaflet interactions of the peripheral proteins are irrelevant is an immediate consequence of employing the random mixing approximation and the conservation of the number of proteins in the external and internal leaflets. As a consequence, the coupling term in Equation (6) for transmembrane proteins, $\Lambda(1-\alpha)(\phi-\psi)^2$, must be replaced by the coupling term $\Lambda(\phi-\psi)^2$ for peripheral proteins. Hence, we can write the mean-field free energy per lattice site in the presence of peripheral proteins as $\tilde f(\phi,\psi,\alpha) = f(\phi,\psi,\alpha) + \alpha\ln\alpha + (1-\alpha)\ln(1-\alpha) + \Lambda\alpha(\phi-\psi)^2$ (Equation (14)), where $f(\phi, \psi, \alpha)$ has been inserted from Equation (6). In summary, separating all transmembrane proteins in the membrane into pairs of peripheral proteins that reside in opposite leaflets corresponds to adding the mixing entropy term $\alpha\ln\alpha + (1-\alpha)\ln(1-\alpha)$ and an inter-leaflet-interaction contribution $\Lambda\alpha(\phi-\psi)^2$. We expect the latter to be small because we assume $\alpha \ll 1$ throughout this work.
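In code, the peripheral-protein free energy is then a small wrapper around the `free_energy` sketch given earlier (both function names are ours, and the wrapper relies on that earlier reconstruction of Equation (6)):

```python
from math import log

def free_energy_peripheral(phi, psi, alpha,
                           chiL_ext, chiL_int, chiP_ext, chiP_int, Lam):
    """Eq. (14): the transmembrane free energy plus the extra protein
    mixing entropy and the residual inter-leaflet coupling correction."""
    xlogx = lambda x: x*log(x) if x > 0 else 0.0
    return (free_energy(phi, psi, alpha,
                        chiL_ext, chiL_int, chiP_ext, chiP_int, Lam)
            + xlogx(alpha) + xlogx(1.0 - alpha)
            + Lam*alpha*(phi - psi)**2)
```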
A Simple Example for Phase Stability in the Presence of Transmembrane versus Peripheral Proteins
In order to most clearly illustrate the different influence of transmembrane versus peripheral proteins on the phase behavior, we consider the specific situation $\chi^{\rm int}_L = \chi^{\rm ext}_L = \chi^{\rm int}_P = \chi^{\rm ext}_P = \chi$, in which the proteins interact with lipids of type A in the same way as lipids of type B do. The motivation behind this choice is to eliminate the difference between lipid B and peripheral protein, rendering each leaflet effectively a binary system. We obtain an especially simple free energy expression by demanding identical compositions of the two leaflets, $\phi = \psi$. This removes the inter-leaflet coupling term and thus reduces the free energy difference between a membrane with transmembrane proteins and one with peripheral proteins to the mixing term $\alpha\ln\alpha + (1-\alpha)\ln(1-\alpha)$.
More specifically, for peripheral proteins, this leads to the free energy $\tilde f(\phi, \phi, \alpha)$, which amounts to two identical contributions from the two symmetric leaflets. According to Equations (7) and (9), the critical point is then located at $\phi = 1/2$ with a critical interaction strength $\chi = 2$: phase separation can only be observed for $\chi > 2$, independent of $\alpha$. Indeed, $\alpha$ is merely a dummy variable because there is no difference anymore between lipid B and peripheral protein: they only differ by their name.
We contrast this with the presence of transmembrane proteins, for which the free energy amounts to $f(\phi, \phi, \alpha)$. Here, the critical point shifts with the protein content; Equation (16) places it at $\phi = 1/2$ with a critical interaction strength that decreases with $\alpha$, implying that phase separation can already be observed for $\chi < 2$ if proteins are present. Let us focus on the case $\phi = 1/2$ because it always passes exactly through the critical point as $\alpha$ is varied. We can choose $\alpha$ from its minimal value $\alpha = 0$ to its maximal value $\alpha = 1-\phi = 1/2$. In the former case, the membrane consists of 50% lipid A and 50% lipid B, which produces a critical nonideality parameter $\chi = 2$. In the latter case, 50% lipid A and 50% transmembrane protein are present, implying a critical nonideality parameter $\chi = 1$. Because every transmembrane protein interacts with two leaflets, the lipid-protein interaction strength is effectively doubled. This implies a reduction of the critical nonideality parameter from $\chi = 2$ to $\chi = 1$. In the intermediate case, for $0 < \alpha < 1/2$, the critical point is predicted by Equation (16) to decrease linearly with increasing mole fraction of transmembrane proteins. The presence of lipid-protein interactions is crucial for the ability of the transmembrane proteins to facilitate phase separation. If we repeat the same calculation as in the preceding paragraph, yet with $\chi^{\rm int}_L = \chi^{\rm ext}_L = \chi_L$ and $\chi^{\rm int}_P = \chi^{\rm ext}_P = 0$, we obtain the same critical point irrespective of the proteins being transmembrane or peripheral. Here, the proteins do not exhibit any interactions with the lipids; they merely dilute the two-component lipid mixture, and they do so in the same way for transmembrane and peripheral proteins. This elevates the critical value of $\chi$ and thus suppresses phase separation.
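The two limiting binary cases invoked above can be checked symbolically. The sketch below is our own verification, not taken from the paper: it writes down the effective binary free energies implied by the argument (in the transmembrane limit the interaction term is doubled because the protein contacts both leaflets) and recovers the critical points from the vanishing of the second and third derivatives.

```python
import sympy as sp

s, chi = sp.symbols('s chi', positive=True)

# Peripheral limit: lipid B and protein are indistinguishable, so each
# leaflet is a binary regular solution; critical point from f'' = f''' = 0.
f_per = s*sp.log(s) + (1 - s)*sp.log(1 - s) + chi*s*(1 - s)
print(sp.solve([sp.diff(f_per, s, 2), sp.diff(f_per, s, 3)],
               [s, chi], dict=True))    # -> s = 1/2, chi = 2

# Transmembrane limit at alpha = 1 - phi (only lipid A and protein present):
# the membrane-spanning protein interacts through both leaflets (factor 2).
f_tm = s*sp.log(s) + (1 - s)*sp.log(1 - s) + 2*chi*s*(1 - s)
print(sp.solve([sp.diff(f_tm, s, 2), sp.diff(f_tm, s, 3)],
               [s, chi], dict=True))    # -> s = 1/2, chi = 1
```

The doubled interaction term in the transmembrane limit is exactly what halves the critical value of χ.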
Phase Behavior in the Absence of Proteins
We start our analysis by recalling the previously analyzed case $\alpha = 0$, where no proteins are present in the membrane [24,27]. To this end, Figure 4 displays two phase diagrams, both calculated for $\alpha = 0$ and fixed $\Lambda = 0.05$. They show spinodal lines in blue color, with the location of critical points (if present) marked as blue bullets. They also show tie lines as straight solid lines in black, with the two coexisting phases indicated by black bullets at the two ends of each tie line. Regions enclosed by three connected tie lines (present in the left diagram) exhibit three-phase coexistence. The left diagram was calculated for $\chi^{\rm ext}_L = \chi^{\rm int}_L = 2.1$ and the right diagram for $\chi^{\rm ext}_L = 2.1$ and $\chi^{\rm int}_L = 1.95$. We first discuss the left diagram, which can be viewed as a specific example for a class of systems with $\chi^{\rm ext}_L = \chi^{\rm int}_L = \chi_L$. First, the existence of three-phase coexistence requires a sufficiently small coupling constant $\Lambda < \Lambda^\ast$, with $\Lambda^\ast$ as specified in [27]. For $\Lambda > \Lambda^\ast$ and $\Lambda < \Lambda^\ast$, the maximum number of coexisting phases is two and three, respectively. The former may be referred to as the strong-coupling regime. At $\Lambda = \Lambda^\ast$, the phase diagram contains two tri-critical points at $\phi = \phi^\ast$ and $\psi = \psi^\ast = 1-\phi^\ast$. Second, for $\phi = \psi$, the membrane is symmetric and the inter-leaflet coupling term vanishes. The phase behavior is then unaffected by $\Lambda$ and is determined solely by the non-ideality parameter $\chi_L$. That is, the spinodal points for $\phi = \psi$ are defined by the quadratic equation $2\phi(1-\phi) = 1/\chi_L$. Third, the presence of the two three-phase regions for a lipid bilayer with intermediate asymmetry (where $|\phi - \psi|$ is neither very small nor large) reflects the competition between inter-leaflet coupling and the tendencies of each leaflet to phase separate. Indeed, two of the three coexisting phases exhibit the same compositional difference, whereas the remaining third phase (which has the largest compositional difference) serves as host for the "non-matching" lipids. We finally note that the critical points (marked by the blue bullets) are inside the three-phase coexistence regions, which renders them irrelevant for the thermodynamically observed phase behavior.
Next, we discuss the right diagram of Figure 4. The parameters $\chi^{\rm ext}_L = 2.1$ and $\chi^{\rm int}_L = 1.95$ imply that the external, but not the internal, leaflet tends to phase separate on its own. Hence, phase separation is completely suppressed at small and large $\phi$, where the outer leaflet resides outside its binodal region. We still observe two-phase coexistence regions, but no three-phase coexistence. Even a different choice of $\Lambda > 0$ will not give rise to three-phase coexistence. Instead, when $\Lambda$ grows (starting from $\Lambda = 0.05$ at fixed $\chi^{\rm ext}_L = 2.1$ and $\chi^{\rm int}_L = 1.95$), the spinodal detaches from the borders of the phase diagram (that happens at $\Lambda = 0.1$, with two critical points appearing at $\{\phi, \psi\} = \{0.5, 0\}$ and $\{\phi, \psi\} = \{0.5, 1\}$) and then forms an increasingly more circular shape. In the limit $\Lambda \to \infty$, the spinodal is a circle of radius 0.079 centered at $\phi = \psi = 0.5$, with the two critical points $\{0.556, 0.444\}$ and $\{0.444, 0.556\}$ attached. The tilt of the tie lines in the right diagram of Figure 4 is a consequence of the inter-leaflet coupling. The observed positive slope of the tie lines emerges from $\Lambda > 0$ and the ensuing tendency to minimize the local compositional difference across the bilayer. Any non-vanishing tilt implies distinct compositions of the two coexisting phases in both leaflets. Hence, phase separation in the external leaflet induces phase separation in the internal leaflet. The maximal degree of this "enslaved" phase separation in the inner leaflet is adopted for the tie line that passes through the point $\phi = \psi = 1/2$.
How the two phase diagrams in Figure 4 transform into each other upon decreasing $\chi^{\rm int}_L$ from 2.1 to 1.95 is revealed by the spinodals shown in the left diagram of Figure 2: a tri-critical point exists at $\chi^{\rm int}_L = 2.027$, which separates the presence of three-phase regions (for $\chi^{\rm int}_L > 2.027$) from their absence (for $\chi^{\rm int}_L < 2.027$). It is thus interesting to note that for, say, $\chi^{\rm ext}_L = 2.1$, $\chi^{\rm int}_L = 2.02$, and $\Lambda = 0.05$, there exists no three-phase coexistence despite the presence of a non-vanishing inter-leaflet coupling parameter and despite the tendency of both leaflets to phase separate.
Phase Behavior in the Presence of Proteins
Here, we investigate the influence of transmembrane proteins on the phase behavior and compare it with that of peripheral proteins. For transmembrane proteins, we use $f(\phi, \psi, \alpha)$ according to Equation (6), and for peripheral proteins $\tilde f(\phi, \psi, \alpha)$ according to Equation (14).
Consider first the symmetric case with $\chi^{\rm ext}_L = \chi^{\rm int}_L = \chi_L = 2.1$, $\chi^{\rm ext}_P = \chi^{\rm int}_P = \chi_P$, and $\alpha = 0.04$. We initially discuss the left two diagrams of Figure 5. They differ only in the protein type: transmembrane for the top left diagram and peripheral for the bottom left diagram. Each diagram shows eight spinodals (displayed in blue color) with the locations of the associated critical points (blue bullets), calculated for $\chi_P = 0$ (the innermost spinodal) and increasing in increments of 0.3 until reaching $\chi_P = 2.1$ (the outermost spinodal). The light-blue bullets mark additional critical points without the corresponding spinodal lines being displayed. The innermost spinodal in the upper left diagram, calculated for $\chi_P = 0$ and $\alpha = 0.04$, has already been displayed in the right diagram of Figure 2. We recall that two sets of three critical points, residing in close vicinity to each other, are located on that innermost spinodal. We have calculated a number of tie lines for the innermost spinodal and added them to the phase diagram (black lines): clearly, the phase diagram exhibits two three-phase regions, but their small size prevents them from being visible, given our choice of tie line positions. Note that, for $\chi_P = 0$, all coexisting phases have the same protein mole fraction, $\alpha = 0.04$, thus preserving the two-dimensional nature of the phase diagram.
Starting from the innermost spinodal in the upper left diagram of Figure 5 (calculated for $\chi_P = 0$ and $\alpha = 0.04$) and slightly increasing $\chi_P$, the three critical points of each set merge into a single one. Indeed, the second innermost spinodal has only two critical points left, and so does the third one (which is calculated for $\chi_P = 0.6$). Immediately after that, for $\chi_P$ slightly larger than 0.6, two tri-critical points appear and then give rise to a total of four additional critical points. To visualize this, we have added multiple light-blue bullets that mark critical point locations between the two spinodals for $\chi_P = 0.6$ and $\chi_P = 0.9$. Hence, the next spinodal, calculated for $\chi_P = 0.9$, contains two sets of three critical points. Two of the six critical points have moved outside the phase diagram boundaries for $\chi_P = 1.2$, and four of the six critical points have moved outside the phase diagram boundaries for all subsequent spinodals (the three outermost spinodals, calculated for $\chi_P = 1.5$, $\chi_P = 1.8$, and $\chi_P = 2.1$). The lower left diagram of Figure 5 exhibits a scenario similar to the upper left diagram, with two differences. First, the innermost spinodal (which has $\chi_P = 0$) is very similar in size and shape. All differences result entirely from the different coupling terms (that is, $(1-\alpha)\Lambda(\phi-\psi)^2$ for transmembrane proteins versus $\Lambda(\phi-\psi)^2$ for peripheral proteins) but not from the different mixing entropies of the proteins. However, the innermost spinodal in the lower left diagram exhibits only two (instead of six) critical points. That is, the phase diagram for $\chi_P = 0$ and $\alpha = 0.04$ exhibits three-phase regions for transmembrane proteins, but not for peripheral proteins. As in the upper diagram, we have added a number of tie lines to the innermost spinodal; here, no three-phase region is present in the phase diagram. Second, the spinodals for the peripheral proteins enclose smaller regions than the corresponding spinodals for transmembrane proteins. This second observation is a major finding of the present work. It quantifies the ability of the transmembrane proteins, even when present at small mole fractions, to induce membrane phase separation, whereas, for equivalent peripheral proteins, the membrane remains uniform. In addition, transmembrane proteins tend to induce or widen three-phase coexistence regions. For example, all tie lines displayed in the left two diagrams of Figure 5 correspond to two-phase coexistence where the two phases have the same degree of inter-leaflet mismatch ($|\phi - \psi|$ is the same in both phases). A third phase, one with larger inter-leaflet mismatch, does not form because of the prohibitively large inter-leaflet domain-coupling energy. However, transmembrane proteins are able to couple mismatching domains structurally, thus counteracting the unfavorable inter-leaflet domain-coupling mechanism.
The outermost spinodals in the upper and lower left diagrams of Figure 5 refer to $\chi_P = 2.1$, where the interactions of the transmembrane proteins with the A-lipids are the same as the interactions of the B-lipids with the A-lipids (that is, $\chi_L = \chi_P = 2.1$). Peripheral proteins then behave the same as B-lipids, reducing each leaflet effectively to a binary mixture of A-lipids with the union of indistinguishable B-lipids and proteins. Hence, all differences in the phase diagram arise from the presence (for transmembrane proteins) or absence (for peripheral proteins) of the inter-leaflet connectivity between the protein segments, without being further affected by the different lipid-lipid and lipid-protein interactions within each leaflet. We have discussed this for the special case of a symmetric membrane ($\phi = \psi$) in Section 2.6. The outermost spinodals in the upper and lower left diagrams of Figure 5 clearly demonstrate the widening of the region where the membrane is unstable due to the inter-leaflet connectivity of the transmembrane proteins.
We have selected one particular spinodal from the upper and lower left diagrams of Figure 5 (the fourth one counted from outside, corresponding to $\chi_P = 1.2$) and calculated a number of tie lines; the results are shown in the two right diagrams of Figure 5. Note that the phase diagrams are three-dimensional, with tie lines and three-phase regions extending out of the plane of the displayed $\{\phi, \psi, \alpha = 0.04\}$ section of the phase diagram. Hence, unlike in Figure 4, each unstable point $\{\phi, \psi\}$ has its own individual tie line (or three-phase region). This makes a meaningful visualization of the phase diagram more challenging. In the two right diagrams of Figure 5, we have simply selected a few points (marked by triangles) located on the spinodal (the solid blue lines with the critical points marked as blue bullets) and calculated the corresponding phase behavior. In all cases, we found two-phase coexistence as characterized by tie lines. The $\alpha$-values of the end points of the tie lines are color-coded, with green hues for $\alpha < 0.04$ and red hues for $\alpha > 0.04$; see the legend on the bottom right diagram of Figure 5. From the effective lipid A-protein interaction terms in the free energy, $\chi^{\rm ext}_P\alpha\phi + \chi^{\rm int}_P\alpha\psi$, it follows that proteins preferentially reside in a phase rich in lipid B (that is, small $\phi$ and small $\psi$), given that $\chi^{\rm ext}_P > 0$ and $\chi^{\rm int}_P > 0$. This has two consequences in the phase diagrams. First, coexisting phases with smaller $\phi$ and $\psi$ values tend to have larger protein content (i.e., red triangles on the bottom left end and green triangles on the top right end of each tie line). Second, phase separation tends to become more pronounced in regions of larger $\phi$ and $\psi$. Regarding the latter, compare, for example, the cases $\phi = 0$ and $\phi = 1-\alpha = 0.96$ for transmembrane proteins (the upper right diagram in Figure 5). Clearly, there is no phase separation for $\phi = 0$, but there is phase separation for $\phi = 0.96$. Comparing the free energy in Equation (6) for the two cases reveals the additional term $\chi^{\rm int}_P\alpha(1-\alpha)$ for $\phi = 1$. This term is thus responsible for the additional destabilization of the membrane. The physical origin of the difference is that, for $\phi = 0$, the upper leaflet contains only lipid type B, whereas, for $\phi = 1$, the upper leaflet contains only lipid type A. There is no preferential interaction of the proteins with lipid B, but there is a tendency for segregation when the protein resides in a matrix of lipid A. Hence, we observe a stronger tendency for phase separation at $\phi = 1$ as compared to $\phi = 0$.
Similar considerations are also valid for membranes with asymmetric interactions, $\chi^{\rm ext}_L \neq \chi^{\rm int}_L$ or $\chi^{\rm ext}_P \neq \chi^{\rm int}_P$. We exemplify this in Figure 6 for a membrane with $\chi^{\rm ext}_L = \chi^{\rm ext}_P = 2.1$ and $\chi^{\rm int}_L = \chi^{\rm int}_P = 1.95$. As in the two right diagrams of Figure 5, we show a $\{\phi, \psi, \alpha = 0.04\}$ section of the phase diagram with the spinodal line and a selected number of tie lines displayed. The left and right diagrams in Figure 6 correspond to transmembrane and peripheral proteins, respectively. In fact, the only difference when comparing the upper right diagram of Figure 5 with the left diagram of Figure 6 (for transmembrane proteins), and the lower right diagram of Figure 5 with the right diagram of Figure 6 (for peripheral proteins), is the parameter change from $\chi^{\rm int}_L = \chi^{\rm int}_P = 2.1$ to $\chi^{\rm int}_L = \chi^{\rm int}_P = 1.95$. Hence, because the latter is below the critical point, we no longer observe phase transitions in the small and large $\phi$-regions of the phase diagram. By comparing the phase diagram in the right diagram of Figure 4 with those in Figure 6, we directly observe the influence of adding transmembrane or peripheral proteins of mole fraction $\alpha = 0.04$ and interaction strengths $\chi^{\rm ext}_P = 2.1$ and $\chi^{\rm int}_P = 1.95$. We also note that the spinodal lines in Figure 6 reproduce the small and large $\psi$-regions of those in Figure 5 for $\chi_P = 2.1$ (the outermost spinodals in the two left diagrams of Figure 5). As a result of decreasing $\chi^{\rm int}_P$ from 2.1 to 1.95, no critical points are present anymore, and phase separation always produces exactly two coexisting phases. Most importantly, and similar to our discussion of Figure 5, transmembrane proteins produce substantially larger unstable regions in the phase diagram as compared to peripheral proteins. We emphasize that the phase diagrams shown in Figures 5 and 6 contain only limited information: spinodals and tie lines at a few selected positions of a $\{\phi, \psi, \alpha = 0.04\}$ section. Complete phase diagrams contain information about all coexisting phases at every point $\{\phi, \psi, \alpha\}$ in the three-dimensional phase diagram. Visualizing full phase diagrams in a meaningful way, and analyzing them comprehensively for the set $\chi^{\rm int}_L$, $\chi^{\rm ext}_L$, $\chi^{\rm int}_P$, $\chi^{\rm ext}_P$, $\Lambda$ of interaction parameters, is a work in progress.
Conclusions
Our work is a first step toward analyzing the interplay between "thermodynamic" and "structural" coupling of domains across a lipid membrane. "Thermodynamic" coupling results from the inter-leaflet interactions of the lipids, whereas the coupling becomes "structural" for transmembrane proteins (or other membrane-spanning components such as bolalipids) that stretch across the entire lipid bilayer. We have considered a mean-field, lattice-based model with nearest-neighbor pair interactions. This type of model is highly simplistic and in many ways oversimplifies a protein-containing lipid membrane. However, it captures the principal effect of introducing membrane-spanning molecules (which we refer to as transmembrane proteins) into a lipid bilayer and their influence on the membrane phase behavior. We find that this influence depends on the interaction strength of the transmembrane proteins with the lipids in the two leaflets. Weakly interacting proteins suppress phase separation by merely diluting the lipids. If the lipid-protein interaction strength resembles that among the lipids (which leads to domain formation in the first place), then transmembrane proteins are indeed able to couple domains and enhance or even induce their formation. Our present work may trigger a number of extensions. Perhaps one of the most interesting is related to the ability of immobilized pinning sites to limit the growth of lipid domains [48], which is one among a number of mechanisms [49,50] that have been suggested to rationalize the nanoscopic size of membrane rafts. Cytoskeletal coupling of the inner leaflet of a plasma membrane creates pinning sites, but it is the outer leaflet that has a high propensity to phase separate.
We emphasize the simplicity of our model. For example, all energy penalties due to hydrophobic mismatch (either between the lipids among each other or between the lipids and proteins) are lumped into effective interaction parameters. Hence, our model is not capable of predicting how domain coupling depends on lipid chain length or on the hydrophobic protein thickness. Moreover, we represent lipids by the sites of two coupled two-dimensional lattices, thus ignoring any molecular specificities such as head group size, degree of lipid chain unsaturation, etc. The special role of cholesterol does not enter explicitly into our model; nor do membrane bending [51], line tension effects of domains [52,53], protein-mediated coupling of domains to their surroundings [54-56], or any multi-body interactions. Finally, we treat our lattice model on the mean-field level, which ignores any type of correlations between interacting membrane constituents [57]. However, even with all these approximations, our simple model leads to a surprisingly rich phase behavior.
We also acknowledge financial support through the Phospholipid Research Center, Heidelberg, Germany.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Appendix A. Thermodynamic Criterion for Critical Point
Equation (9) (together with Equation (10)) defines the location of critical points on the spinodal surface. It can be derived starting from the tangent plane construction in Equation (8) by expressing the two coexisting compositions as $\phi_1 = \phi + \Delta\phi$, $\psi_1 = \psi + \Delta\psi$, $\alpha_1 = \alpha + \Delta\alpha$, $\phi_2 = \phi - \Delta\phi$, $\psi_2 = \psi - \Delta\psi$, $\alpha_2 = \alpha - \Delta\alpha$. Upon carrying out a series expansion up to first order in $\Delta x = \{\Delta\phi, \Delta\psi, \Delta\alpha\}$, we can express the three equal-slope equations in the first line of Equation (8) as $A\,\Delta x = 0$, thus reproducing the equation $\det A = 0$ that defines the spinodal. With that, the first non-vanishing term in the expansion with respect to $\Delta\phi$, $\Delta\psi$, $\Delta\alpha$ of the second line of Equation (8) is of third order. The null eigenvector $\Delta x$ of A allows us to express two of its components by the remaining one; for example, $\Delta\phi = \Delta\phi(\Delta\psi)$ and $\Delta\alpha = \Delta\alpha(\Delta\psi)$. Inserting these into the third-order expansion of the second line of Equation (8) yields Equation (A1), where our choice of the independent variable in expressing the eigenvector components determines which row (or, equivalently, column) the cofactors $B_\phi$, $B_\psi$, $B_\alpha$ refer to. Equation (A1) is equivalent to Equation (9). We have not observed Equation (9) to be stated previously in the literature. However, multiple equivalent ways to express Equation (9) are well known; see the discussions by Akasaka [46] and by Bell and Jäger [47]. Among those is a set of equations that goes back to Heidemann and Khalil [45] and results from a third-order stability analysis of a Taylor series expansion of the Helmholtz free energy about the equilibrium state. Another method involves the vanishing of the determinant $\det B = 0$, where the matrix B is produced from matrix A (see Equation (7)) by replacing any one of its rows by the row vector $\{\partial(\det A)/\partial\phi,\ \partial(\det A)/\partial\psi,\ \partial(\det A)/\partial\alpha\}$ [58].
"year": 2019,
"sha1": "6871eb0930546a1ae8e5dd2b5eb61db55ca7e9e3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/9/8/303/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6871eb0930546a1ae8e5dd2b5eb61db55ca7e9e3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Analytical theory of extraordinary transmission through metallic diffraction screens perforated by small holes
In this letter, the problem of extraordinary transmission (ET) of electromagnetic waves through opaque screens perforated with subwavelength holes is addressed from an analytical point of view. Our purpose was to find a closed-form expression for the transmission coefficient in a simple case in order to explore and clarify, as much as possible, the physical background of the phenomenon. The solution of this canonical example, apart from matching quite well the numerical simulations given by commercial solvers, has provided new insight into extraordinary transmission as well as Wood's anomaly. Thus, our analysis has revealed that one of the key factors behind ET is the continuous increase of excess electric energy around the holes as the frequency approaches the onset of some of the higher-order modes associated with the periodicity of the screen. The same analysis also helps to clarify the role of surface modes (or spoof plasmons) in the onset of ET. © 2009 Optical Society of America
OCIS codes: (050.0050) Diffraction and gratings; (050.0050) Diffraction theory.
References and links
1. T. W. Ebbesen, H. J. Lezec, H. F. Ghaemi, T. Thio, and P. A. Wolff, "Extraordinary optical transmission through sub-wavelength hole arrays," Nature (London) 391, 667–669 (1998).
2. H. A. Bethe, "Theory of diffraction by small holes," Phys. Rev. 66, 163–182 (1944).
3. H. F. Ghaemi, T. Thio, D. E. Grupp, T. W. Ebbesen, and H. J. Lezec, "Surface plasmons enhance optical transmission through subwavelength holes," Phys. Rev. B 58, 6779–6782 (1998).
4. D. E. Grupp, H. J. Lezec, T. W. Ebbesen, K. M. Pellerin, and T. Thio, "Crucial role of metal surface in enhanced transmission through subwavelength apertures," Appl. Phys. Lett. 77, 1569–1571 (2000).
5. L. Martín-Moreno, F. J. García-Vidal, H. J. Lezec, K. M. Pellerin, T. Thio, D. E. Grupp, J. B. Pendry, and T. W. Ebbesen, "Theory of extraordinary optical transmission through subwavelength hole arrays," Phys. Rev. Lett. 86, 1114–1117 (2001).
6. M. Beruete, M. Sorolla, I. Campillo, J. S. Dolado, L. Martín-Moreno, J. Bravo-Abad, and F. J. García-Vidal, "Enhanced millimeter-wave transmission through subwavelength hole arrays," Opt. Lett. 29, 2500–2502 (2004).
7. J. A. Porto, F. J. García-Vidal, and J. B. Pendry, "Transmission resonances on metallic gratings with very narrow slits," Phys. Rev. Lett. 83, 2845–2848 (1999).
8. F. J. García-de-Abajo, R. Gómez-Medina, and J. J. Sáenz, "Full transmission through perfect-conductor subwavelength hole arrays," Phys. Rev. E 72, 016608 (2005).
9. J. B. Pendry, L. Martín-Moreno, and F. J. García-Vidal, "Mimicking surface plasmons with structured surfaces," Science 305, 847–848 (2004).
10. A. P. Hibbins, M. J. Lockyear, I. R. Hooper, and J. R. Sambles, "Waveguide arrays as plasmonic metamaterials: transmission below cutoff," Phys. Rev. Lett. 96, 073904 (2006).
11. F. Medina, F. Mesa, and R. Marqués, "Equivalent circuit model to explain extraordinary transmission," in IEEE MTT-S Int. Microw. Symp. Dig., Atlanta, GA, 213–216, June 2008.
12. F. Medina, F. Mesa, and R. Marqués, "Extraordinary transmission through arrays of electrically small holes from a circuit theory perspective," IEEE Trans. Microwave Theory Tech. 56, 3108–3120 (2008).
13. C. Genet and T. W. Ebbesen, "Light in tiny holes," Nature 445, 39–46 (2007).
14. F. J. García-de-Abajo, "Colloquium: Light scattering by particle and hole arrays," Rev. Mod. Phys. 79, 1267–1290 (2007).
15. R. Gordon, "Bethe's aperture theory for arrays," Phys. Rev. A 76, 053806 (2007).
16. J. D. Jackson, Classical Electrodynamics, 3rd ed. (Wiley, New York, 1999).
Introduction
A few years ago, Ebbesen et al. [1] reported a phenomenon of extraordinary transmission (ET) through metallic screens periodically perforated with sub-wavelength holes. This physical effect was originally attributed to the excitation of surface plasmons on the diffraction screen [3,4,5], in apparent contradiction with Bethe's theory for small apertures [2]. In a first period, this phenomenon was mainly related to the plasma-like behavior of metals at optical frequencies. However, ET has also been observed at millimeter wave frequencies [6], where metals can no longer be considered solid plasmas (but rather quasi-perfect conductors). This last experimental observation can be explained by means of diffraction theories [7,8], which emphasize the role of screen periodicity in ET, with the specific behavior of the opaque material being secondary to the phenomenon. Subsequently, the surface plasmon concept was rescued to explain ET after considering that plasmon-like surface waves (which some people call spoof plasmons) can be supported by structured metallic surfaces [9,10] even in the perfect conductor limit. More recently, some of the authors of the present paper proposed a comprehensive equivalent circuit model based on the theory of discontinuities in hollow waveguides [11,12]. This model accounts for the most salient features observed in ET experiments as well as for fine details of the transmission spectrum obtained from exhaustive numerical computations. For a detailed and comprehensive discussion on the topic, the reader might consult [12] (and references therein) as well as the excellent reviews by C. Genet et al. [13] and by F. J. García de Abajo [14].
The above works, among others, show that ET can be addressed from many different perspectives: surface wave excitation, diffraction theory, equivalent circuit models, and so on, with each one providing a different approach to the problem. The main aim of this letter is to gain insight into the physics behind ET through the development of an accurate analytical solution of a canonical example. A first and very valuable attempt to develop this analytical solution was recently reported in [15]. However, it will be shown that although many qualitative conclusions of that analysis are correct, the numerical results presented in that paper are inaccurate, probably due to inappropriate approximations in the derivations. We feel that the present analytical solution not only provides accurate numerical results but also sheds new light on the problem and makes apparent the interconnection between the previous perspectives.
For simplicity, we will first analyze a zero-thickness perfect conducting plate with square holes placed in a square periodic array. Although the present analysis can readily be extended to more complex geometries, the straightforward physical interpretation of this simple structure allows for a better understanding of the physical effects. Following the rationale in [12,15], the first step in our analysis is the transformation of the diffraction problem into the problem of a small diaphragm inside a TEM waveguide. Then, the problem is solved employing well-known results of diffraction and waveguide theories, and the accuracy of the results is validated by careful electromagnetic simulations. However, as already mentioned, the main purpose of this work is not to provide a tool for calculations (something that can easily be obtained from numerical techniques already implemented in common commercial solvers) but rather to provide physical insight. To better achieve this goal, an equivalent circuit (rather than a circuit model) is proposed. The equivalent circuit will implicitly contain all the information already provided by the analytical solution, but it has the additional advantage of making its physical interpretation much easier. The role of the different waveguide modes in the onset of ET will be analyzed and connected with the frequency dependence of the different elements of the proposed equivalent circuit. The same analysis will also be applied to elucidate the role of surface waves (or spoof plasmons) in the onset of ET. Finally, the present proposal will allow us to link the reported results, which come basically from a diffraction theory analysis, to the circuit theory approach proposed in [11,12]. As anticipated, the unit cell of the periodic problem under normal incidence is equivalent to a waveguide with perfect electric conducting plates at the upper and bottom interfaces, perfect magnetic conducting plates at both lateral sides, and a square diaphragm located, say, at z = 0 (see Figs. 1(c)-(d)). Due to the symmetry of this structure, and assuming an incident field of amplitude equal to unity, the field component $E_y$ at $z = 0^+$ can be expanded into a Fourier series, Eq. (1), where T is the transmission coefficient, $A^{TE}_{nm}$ and $A^{TM}_{nm}$ are the coefficients of the (below cutoff) TE and TM waveguide modes excited at the discontinuity, and $f_{nm}(x, y) = \cos(2n\pi x/a)\cos(2m\pi y/a)$. Using waveguide theory [16], the electric field component $E_x$ can be written as a similar series, Eq. (2), with $g_{nm}(x, y) = \sin(2n\pi x/a)\sin(2m\pi y/a)$. The coefficients of the expansion (1) can be obtained from the orthogonality properties of the functions $f_{nm}$. However, for small holes and not very large values of n and m (taking into account that $E_y$ must be zero at the metallic screen), the approximation (3) applies [15], where the subindices w and h stand for the waveguide and hole cross sections, respectively, and $a^2$ is the area of the waveguide cross section. Thus, the expansion coefficients are finally found as in Eq. (4). For small holes, $E_x$ should be almost zero at the hole (and zero on the metallic screen). Therefore, Eq. (5) follows from (2) and (4). The transmission coefficient T can now be obtained after imposing the appropriate boundary conditions for the transverse magnetic field. Since the scattered field is produced by the electric currents induced in the diffraction screen, which are confined to the z = 0 plane, it is deduced from symmetry that all the tangential components of the scattered magnetic field must vanish at the aperture. This conclusion comes from the fact that such induced currents are vectors invariant under reflection in the z = 0 plane, whereas the scattered magnetic field is a pseudo-vector, whose tangential components must change sign after reflection in that plane. Therefore, the total tangential magnetic field in the hole must equal the incident field [16]; that is, $H_x = -Y_0 = -\sqrt{\epsilon_0/\mu_0}$ and $H_y = 0$. Once the tangential magnetic field at the aperture has been evaluated, upon substitution of (4)-(5) in (1), the transmission coefficient T can be obtained by solving Eq. (6), where $Y^{TE}_{nm}$ and $Y^{TM}_{nm}$ are the TE and TM modal admittances given in [16] (Eqs. (7) and (8)).
1(c)-(d)).Due to the symmetry of this structure, and assuming an incident field of amplitude equal to unity, the field component E y at z = 0 + can be expanded into the following Fourier series: where T is the transmission coefficient, A TE , A TM are the coefficients of the (below cutoff) TE and TM waveguide modes excited at the discontinuity, and f nm (x, y) = cos(2nπx/a) cos(2mπy/a) .Using waveguide theory [16], the electric field component E x can be written as with g nm (x, y) = sin(2nπx/a) sin(2mπy/a) .The coefficients of the expansion (1) can be obtained from the orthogonality properties of functions f nm .However, for small holes and not very large values of n and m (taking into account that E y must be zero at the metallic screen), the following approximation applies [15]: where subindex w and h stands for the waveguide and hole sections respectively, and a2 is the waveguide section.Thus, it is finally found that For small holes, E x should be almost zero at the hole (and zero on the metallic screen).Therefore, from (2) and (4): (5) The transmission coefficient T can now be obtained after imposing the appropriate boundary conditions for the transverse magnetic field.Since the scattered field is produced by the electric currents induced in the diffraction screen, which are confined to the z = 0 plane, it is deduced from symmetry that all the tangential components of the scattered magnetic field must vanish at the aperture.This conclusion comes out from the fact that such induced currents are vectors invariant by reflection in the z = 0 plane, whereas the scattered magnetic field is a pseudo-vector, whose tangential components must change of sign after reflection in such plane.Therefore, the total tangential magnetic field in the hole must be equal to the incident field [16], that is, H x = −Y 0 = − ε 0 /μ 0 and H y = 01 .Once the tangential magnetic field at the aperture has been evaluated, upon substitution of (4)-( 5) in (1) the transmission coefficient T can be obtained after solving the following equation: where Y TE and Y TM are the TE and TM modal admittances given by [16] Since the maximum "resolution" of Eq. ( 1) is limited by the minimum wavelength, λ n = a/n, the upper limits of the series in (6) can be determined by imposing a "resolution" equal to the hole size b.This leads to N, M ≈ a/b, which completes the determination of T from (6).Results for the transmission coefficient for several values of a/b computed from (6) are shown in Fig. 2 together with data coming from full-wave electromagnetic simulations using the commercial software CST Microwave Studio.Both set of results agree quite well not only qualitatively but also quantitatively.The figure also shows other previous analytical results on the same structure [15]. 1 for different values of the ratio a/b versus the ratio ( f W − f )/ f W , where f W = c/a is the Wood's anomaly frequency, with c being the light velocity in free space.Solid Lines correspond to data from (6).Dotted lines correspond to data from CST.For comparison purposes, the numerical value for the ET frequency provided in [15] for a/b = 7.07 (i.e.holes covering a 2% of the total area) is shown with an arrow Now, the equivalent circuit shown in Fig. 
Now, the equivalent circuit shown in Fig. 1(e) is proposed for the waveguide discontinuity problem shown in Figs. 1(c) and (d). The transmission coefficient for this equivalent circuit configuration can be found from the solution of Eq. (9), where $\omega = 2\pi f$ is the angular frequency. That equation clearly shows that total transmission is obtained at the frequency $\omega_0 = 1/\sqrt{LC}$. Considering now that the evanescent TE (TM) mode admittances are imaginary and positive (negative) [16], a direct comparison between (9) and (6) leads to the expressions (10) and (11) for the capacitive, $B_C$, and inductive, $B_L$, susceptances. This result is somewhat expected, since it is well known that evanescent TM (TE) modes present a capacitive (inductive) behavior as they store mainly electric (magnetic) energy. The above transformations make it apparent that the equivalent circuit of Fig. 1(e) is not a simple model but merely another way to express the previously obtained analytical results in a circuit-like fashion. The connection between the proposed equivalent circuit and the diffraction theory approach to ET is then apparent. Furthermore, since the equivalent circuit of Fig. 1(e) is actually a particularization (for infinitesimal screen thickness) of the equivalent circuit reported in [11,12], the previous analysis can be considered a "theoretical validation" of the equivalent circuit theory proposed in those papers. The frequency dependence of the modal admittances appearing in (10) and (11) gives rise to some relevant facts. Near the Wood's anomaly wavelength, $\lambda = a$ (in our case, it also corresponds to the cutoff frequency of the $TM_{02}$ mode), the admittance of the $TM_{02}$ mode suddenly grows to infinity, which makes the term associated with this mode dominant in the capacitive susceptance series (10). Under the same circumstances, however, the admittance of the $TE_{20}$ mode goes to zero, which means that there is no singularity in the inductive susceptance $B_L$. As is well known from Bethe's theory, for normal incidence and frequencies far below Wood's anomaly, a small hole has an inductive behavior (it can be modeled by an equivalent magnetic dipole). Nevertheless, as the frequency approaches Wood's anomaly, we have just seen that the absolute value of the associated capacitive susceptance $B_C$ grows to infinity; at a certain frequency, it will cancel out the inductive susceptance associated with the hole (namely, $B_C + B_L = 0$) and will give rise to total transmission. It is also interesting to note that, within the same frequency range, all admittances except $Y^{TM}_{02}$ have a smooth frequency dependence. In that case, the inductive susceptance $B_L$ (11) is found to be roughly proportional to $(a/b)^2$. However, the capacitive susceptance, which is dominated by $Y^{TM}_{02}$, will be proportional to $a/b$. This means that $|B_C|/B_L \propto b/a$ as $\lambda \to a$, which implies that the smaller the hole, the smaller the absolute value of $B_C$ is with regard to $B_L$. In other words, the smaller the hole, the closer ET is to Wood's anomaly. An additional observation can be made after considering that the absolute value of the capacitive susceptance still increases for frequencies above ET until it diverges at Wood's anomaly. At this last frequency, the LC tank in the equivalent circuit of Fig. 1(e) becomes a short circuit and total reflection appears. Therefore, the equivalent circuit of Fig. 1(e), along with the transformations (10) and (11), satisfactorily explains both ET and Wood's anomaly in periodically perforated zero-thickness screens.
Next, we study the behavior of the corresponding electromagnetic field at frequencies near Wood's anomaly. According to (4) and (5), the amplitudes of the different evanescent modes (measured as the amplitudes of the electric field component $E_y$) excited around the diffraction screen are roughly of the same order. However, given that the absolute value of the admittance of the $TM_{02}$ mode is much larger than the admittance of any other mode, the near field will be dominated by the magnetic field component $H_x$ of this mode, as well as by its associated axial electric field $E_z$. Since the $TM_{02}$ mode is near cutoff, its associated attenuation constant is small and its near field configuration will extend relatively far from the diffraction screen. This near-field picture is valid not only at ET (total transmission) but also at Wood's anomaly (total reflection), and therefore it cannot by itself explain ET. Only when the combined effect of all the remaining evanescent modes is also considered can ET be properly explained. That is, ET occurs only when the admittance of the $TM_{02}$ mode becomes exactly so large (and its attenuation constant exactly so small) that the excess of electric energy stored in this mode equals the excess of magnetic energy stored by all the remaining evanescent modes. We feel that this is one of the most important conclusions of our analysis: ET is closely related to Wood's anomaly because in both cases the energy stored in the evanescent $TM_{02}$ mode is much larger than the energy stored in any other evanescent mode. However, whereas Wood's anomaly appears when this energy becomes infinite, ET appears when the excess of electric energy associated with this $TM_{02}$ mode cancels out the excess of magnetic energy associated with the remaining evanescent modes excited at the hole. The previous analysis can also help to clarify the role of surface waves or spoof plasmons [9,10] in the onset of ET. For this purpose we should consider the solutions of the equivalent circuit of Fig. 1(e) in the absence of excitation. These solutions provide the frequency at which the surface wave supported by the periodic structure has a propagation constant with zero real part, so that phase matching with the normally impinging TEM wave is possible and surface plasmons are then excited. In such a case, only outgoing TEM waves can exist, and the left- and right-hand side transmission lines, which account for free space, can be substituted by resistances accounting for radiation into free space: $R_0 = 1/Y_0 = Z_0 = \sqrt{\mu_0/\epsilon_0} \approx 377\,\Omega$. The frequency of excitation of surface plasmons with the appropriate wavevector, $k = 2\pi/a$, can now be identified with the frequency of resonance of the loaded LC resonator shown in Fig. 3. This frequency of resonance, $\tilde\omega = \omega' - i\omega''$, must be complex in order to account for radiation losses, and can be computed as the solution of the implicit equation (12) (note that both C and L depend on frequency via the TM and TE admittances). The real part of the complex frequency is the frequency of excitation of the surface wave (for $k = 2\pi/a$), and its imaginary part gives the lifetime of the wave through $\tau = 1/\omega''$. Clearly, if $R_0$ is much larger than the reactances associated with $B_C$ and $B_L$, the frequency of excitation of the surface waves will be very close to the frequency of ET (although both frequencies will never coincide).
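A minimal numerical sketch of this complex resonance is given below. It reuses the schematic susceptance model introduced earlier, so the geometry and the modal weights are again hypothetical, and it reads Eq. (12) as the vanishing of the total node admittance (tank plus the two radiation resistances), which is one plausible reading of the Fig. 3 circuit rather than the paper's verbatim equation.

```python
import numpy as np
from scipy.optimize import fsolve

c0, eps0, mu0 = 299792458.0, 8.8541878128e-12, 4e-7*np.pi
Y0 = np.sqrt(eps0/mu0)
a, b = 5e-3, 1e-3                      # hypothetical geometry, a/b = 5
Nmax = int(round(a/b))

def Y_tank(fc):
    """Tank admittance at complex frequency fc = f' - i f''."""
    w, omega, Y = (b/a)**2, 2*np.pi*fc, 0.0 + 0.0j
    for n in range(Nmax + 1):
        for m in range(Nmax + 1):
            if (n, m) == (0, 0):
                continue
            kc = 2*np.pi*np.sqrt(n**2 + m**2)/a
            g = np.sqrt(kc**2 - (omega/c0)**2 + 0j)   # Re(g) > 0 branch
            if m > 0:
                Y += 1j*w*omega*eps0/g                # capacitive (TM-like)
            if n > 0:
                Y += w*g/(1j*omega*mu0)               # inductive (TE-like)
    return Y

def residual(x):
    fc = x[0] - 1j*x[1]
    r = Y_tank(fc) + 2*Y0              # free oscillation of the R0-loaded tank
    return [r.real, r.imag]

fW = c0/a
fp, fpp = fsolve(residual, x0=[0.95*fW, 1e-3*fW])     # needs a decent guess
print(fp/fW, fpp/fW)                   # surface-wave frequency and damping rate
```

In the small-hole limit one expects the real part to approach the ET frequency and the imaginary part to become negligible, which is the trend reported in Table 1.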
Table 1 shows a comparison between these resonance frequencies and the ET frequencies for the cases analyzed in Fig. 2. This table shows that the higher the ratio a/b, the closer the two frequencies appear. In fact, both frequencies share more than five identical significant digits for a/b > 5. Also, the imaginary part ω″ becomes almost negligible for high values of this ratio, which shows that this frequency exactly coincides with the frequency of ET in the limit of very small holes (a/b → ∞). However, for smaller values of this ratio, both frequencies, although close, show a significant deviation. Accordingly, the imaginary part ω″ becomes more significant as the ratio a/b decreases. Larger holes would yield even larger differences between the frequencies associated with the surface plasmon and the extraordinary transmission. Until now we have considered infinitely thin screens. However, the proposed theory can easily be extended to diffraction screens of finite thickness t. In this case the circuit model of Fig. 1(c) must be modified in order to include the evanescent waveguide formed by the hole. If the hole is small, it is assumed that only the dominant TE10 mode is significantly excited inside the hole, and hence the effect of higher order modes is neglected. Thus the hole is modeled as an evanescent transmission line with admittance equal to that of this TE10 mode. For square holes, this admittance (defined as the average current through the hole divided by the average voltage across the hole) coincides with the wave admittance of the aforementioned TE10 mode: Y_TE10 = iY0 √[(λ/2b)^2 − 1]. Moreover, only a fraction of the current flowing through the diffraction screen will go through the holes. This fraction can be roughly estimated as the fractional length along the x-axis (the axis perpendicular to the current) occupied by the holes, namely b/a. Thus, from power conservation, the admittance Y seen at the input of the hole in the plane of the diffraction screen can be obtained from the ideal-transformer relation Y = n^2 Y_TE10, which gives Y = (a/b)^2 Y_TE10. The circuit element providing this admittance transformation is an ideal transformer with turns ratio n = a/b. Therefore, this ideal transformer must be included between the resonant tank modeling the step discontinuity and the transmission line modeling the hole. The resulting equivalent circuit is shown in Fig. 4.
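A minimal sketch of the admittance transformation just described, with placeholder dimensions (the period a and the small square hole of side b below are assumed only for illustration):

    import numpy as np

    Y0 = 1.0 / 377.0
    a = 5.0e-3            # period (m), placeholder
    b = a / 7.07          # hole side (m), placeholder small-hole case

    lam = np.linspace(1.01 * a, 3.0 * a, 1000)   # wavelengths below the hole cutoff
    # Evanescent TE10 wave admittance of the square hole (purely imaginary):
    Y_TE10 = 1j * Y0 * np.sqrt((lam / (2 * b)) ** 2 - 1)
    # The ideal transformer with turns ratio n = a/b maps it to the screen plane:
    n = a / b
    Y_seen = n ** 2 * Y_TE10
    print(Y_seen[0], Y_seen[-1])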
In Fig. 5 the computed results for a screen of thickness t = a/7 are shown and compared with the results obtained for a zero-thickness screen. For comparison purposes, the results obtained from numerical simulation using CST Microwave Studio are also shown. As can be seen, there is good qualitative agreement between theory and simulations. This agreement includes the presence of two transmission peaks, a well-known effect in moderate-thickness screens (see [14] and references therein). Taking into account the logarithmic scale, the quantitative agreement between theory and simulations is also quite good (more than four digits in the frequency of resonance). The source of the small numerical disagreement could be attributed to the assumed approximate value of the transformer ratio. In summary, an analytical solution for ET through thin diffraction screens has been presented. Our analysis, based on the equivalence with a waveguide discontinuity problem, shows that ET and Wood's anomaly can both be explained from the peculiar behavior of the evanescent TM02 mode excited at the holes. Since this behavior is imposed by the screen periodicity, the analysis shows that it is this periodicity, rather than the physical nature of the screen, that underlies ET. Our analysis is also in agreement with the circuit theory of ET recently proposed by some of the authors. The analysis can also be applied to elucidate the role played by surface waves (or spoof plasmons) in ET. It has been shown that a radiating surface wave can be excited at a frequency very close to, but not exactly at, the ET frequency. This result suggests that radiating surface waves can play a significant role in the transitory states at the onset and at the end of a monochromatic ET steady state. Finally, the analysis was extended to diffraction screens of finite thickness, thus showing that the present theory also applies to ET in thick screens. The purpose of the equivalent circuit is not to compete in accuracy with numerical solvers, but to give a different physical insight into the physics of ET.
Fig. 1 .
Fig. 1. Perfect conductor screen perforated with square holes: front view (a) and two lateral cuts (b). Front (c) and lateral (d) views of the structure unit cell or equivalent waveguide. (e) Equivalent circuit for the discontinuity in the waveguide. It has been assumed that t → 0.
Fig. 2 .
Fig. 2. Transmission coefficient of the structure shown in Fig. 1 for different values of the ratio a/b versus the ratio (f_W − f)/f_W, where f_W = c/a is the Wood's anomaly frequency, with c being the velocity of light in free space. Solid lines correspond to data from (6). Dotted lines correspond to data from CST. For comparison purposes, the numerical value for the ET frequency provided in [15] for a/b = 7.07 (i.e., holes covering 2% of the total area) is shown with an arrow.
Fig. 3 .
Fig. 3. Equivalent circuit for the computation of the frequency of excitation of surface waves with k = 2π/a.
Fig. 4 .
Fig. 4. Equivalent circuit for the structure of Figs. 1(a)-(b) with finite thickness (t ≠ 0).
Fig. 5 .
Fig. 5. Computed transmission for a screen of thickness t = a/7, compared with the zero-thickness results.
Table 1 .
Normalized resonance and extraordinary transmission frequencies, (f_W − f)/f_W, for the cases studied in Fig. 2. | 2018-04-03T03:41:30.086Z | 2009-03-30T00:00:00.000 | {
"year": 2009,
"sha1": "3805600d19d57659a83233412c321917f906a927",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.17.005571",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3805600d19d57659a83233412c321917f906a927",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
241060997 | pes2o/s2orc | v3-fos-license | Joint Topical Word Embedding for Detecting Drift in Social Media Text
Social media texts like tweets and blogs are collaboratively created by human interaction. Fast change in trends leads to topic drift in the social media text. This drift is usually associated with words and hashtags. However, geotags play an important part in determining topic distribution with location context. The rate of change in the distribution of words, hashtags and geotags cannot be considered uniform and must be handled accordingly. This paper builds a topic model that associates a topic with a mixture of distributions of words, hashtags and geotags. A stochastic gradient Langevin dynamic model with varying mini-batch sizes is used to capture the changes due to the asynchronous distribution of words and tags. Topical word embeddings with co-occurrence and location contexts are specified as hashtag context vectors and geotag context vectors, respectively. These two vectors are jointly learned to yield topical word embedding vectors related to tag contexts. Topical word embeddings over time conditioned on hashtags and geotags predict location-based topical variations effectively. When evaluated with Chennai and UK geolocated Twitter data, the proposed joint topical word embedding model enhanced by the social tags context outperforms other methods. The average and relative perplexity (Knights et al. 2009) over snapshots are computed for evaluating dynamic topic models. Topical clustering is compared for evaluating different topical embedding models (Niu et al.). Topic drift is evaluated using KL divergence (Cai et al.) and MSD (Yang & Donnat 2017) measures.
INTRODUCTION
Topics identified in a document make sense of groups of words. Topic drift is different from semantic drift or semantic change: the first is related to the change in the group-wise distribution of words over time, whereas the second reflects the change in the usage of words over time. Tweets and micro blogs have diverse contents like text, urls, hashtags, geotags, mentions, images, emoticons and time stamps. User groups and user interests are connected with locations and have faster dynamics based on location-specific events. Word and hashtag distributions in tweets fluctuate at regular intervals during festivals (such as Diwali and Christmas) and sports events (such as the Olympics). These changes may or may not lead to topic drift. Hence, topic drift in tweets is a complex issue and must be approached from a different perspective.
From the literature, it is evident that topic drift can be examined by modeling time with word co-occurrence patterns (Wang & McCallum 2006), identifying topic boundaries (Liu et al. 2013), detecting subtopics (Fei et al. 2015), quantifying the impact of a topic on a location (Bernabe-Moreno et al. 2015) and representing the context as a cluster of hashtags (Alam et al. 2017). Social media text reflects cultural change in the social environment that leads to topic drift, where location plays a major role. Geotagged tweets reflect the behavior of people in that region. Location proximity relates messages to the same event in addition to time (Atefeh & Khreich 2015) and is used for detecting location-based topical variation. Zhang et al. (2015) combined clustering and a topic model to study topic clusters of documents from a geolocation. The most commonly used method for modelling topics is latent Dirichlet allocation (LDA), which represents a discrete unstructured text document as a random mixture over latent topics and a topic as a distribution over words (Blei et al. 2003). The evolving nature of the contents of social media text is tremendously high. Hence, a suitable dynamic modeling approach is required for capturing the topic dynamics from a large number of tweets that are partitioned across either discrete or continuous time slots. The dynamic topic models (DTM) described using variational Kalman filtering and non-parametric wavelet regression (Blei & Lafferty 2006), online inference using a stochastic EM algorithm (Iwata et al. 2010), Gibbs sampling with stochastic gradient Langevin dynamics (Bhadury et al. 2016) and the continuous-time dynamic topic model using Brownian motion (Wang et al. 2006) illustrate how the posterior inference of topics over time can be performed effectively.
Continuous-time dynamic topic models are suitable for generating topics from sequential collections of documents (Wang et al. 2006) and may not be appropriate for tweets, because Twitter APIs cannot be used to find tweets older than a week. In addition, a topic in tweets depends on multiple attributes with varying dynamic distributions. Hence, the dynamic topic model used for Twitter data should be discrete (Bhadury et al. 2016), capable of integrating the varying dynamic behavior of words, tags, urls and mentions over time, and should incorporate additional contexts related to the mixture of distributions of words, tags, urls, mentions and geolocations. Hashtags are labels for tweets that share a common topic, generated by internet users to categorize concepts. Hashtags are associated with the co-occurrence context. A geotag is metadata that describes the geographic location of tweets and is connected with location contexts. These tags enable high-quality topical word embeddings enhanced with hashtags and geolocations. These embeddings can be learned incrementally over time for investigating topic drift in tweets.
The proposed model includes a tags-context-based topic model to generate the topic distribution over words conditioned on hashtags and geotags, and a discrete-time stochastic gradient Langevin dynamic model with varying mini-batch sizes for capturing the changes in the asynchronous distribution of words, hashtags and geotags. Another contribution of this paper is a topical word embedding model that enhances the proposed topic model to yield topical word embedding vectors trained on contexts related to hashtags and geotags.
RELATED WORK
The methods used for analyzing topic drift, existing topic models for Twitter data, the available discrete-time dynamic topic models and the use of topical word embedding are presented.
Topic Drift in Twitter Data
Cataldi et al. (2010) extracted terms from tweets, estimated user authority based on a directed graph of active authors, computed a 'term life cycle' using aging theory and selected emerging terms based on their age. They constructed a 'topic graph' that links emerging terms with their co-occurring terms. An emerging term connected with semantically related words leads to a subgraph of the topic graph showing drift in topics. Saha & Sindhwani (2012) employed online nonnegative matrix factorization (NMF) to generate topics of streaming text like blogs and tweets, and formulated a temporal regularization operator for topic evolution. Fei et al. (2015) proposed a 'cluster based subtopic detection algorithm' to cluster tweets into subtopics for examining topic drift. However, hashtags are not considered in analyzing the drift. Rosa et al.
(2011) performed supervised topical clustering of tweets into predefined categories using hashtags as topic indicators. After clustering at coarse and fine levels, they observed the difference between clusters in training and testing data. Alam et al. (2017) represented a topic as a 'word distribution' and context as a 'hashtag distribution' and studied the context over time with continuous and discrete time distributions. The time attribute for topic evolution can be directly added to the topic model (Wang & McCallum 2006) for a continuous time distribution. Liu et al. (2013) employed LDA with Gibbs sampling for extracting topic words and measured the coherence of topical content as the change in alignment of topic word sequences over time. However, other topic indicators such as urls, geolocations and mentions have not been considered. The location context has an impact on topic distribution equivalent to the co-occurrence context with hashtags.
Bernabe-Moreno et al. (2015) studied user interactions with topics and investigated the impact of a topic in a location over a period of time in tweets. However, each location has its own influence on the topic distribution in tweets, and the location attribute should be integrated into the topic model for extracting topics from tweets.
Extension of Topic Models for Tweets
Labeled LDA (Ramage et al. 2009, 2010) was described by categorizing Twitter content into four types of dimensions (substance, social, style and status) using hashtags, mentions and emoticons in tweets. Zhao et al. (2011) proposed Twitter LDA for generating topics from tweets by distinguishing the word distributions as 'general words' and 'topic words' and categorized topics as longstanding, entity-oriented and event-oriented for opinion mining. Other extensions focus on identifying locations from tweets. Topic models for examining topics over time in tweets must integrate the topic's distribution over words, hashtags and locations, to track the influence of the co-occurrence as well as the location contexts. A dynamic mechanism can be incorporated into the topic model to study topic drift in tweets that are partitioned into different snapshots over time.
Tracking Dynamic Topics
Traditional time series models are suitable for continuous data, whereas topic models operate on discrete text; a dynamic topic model for tweets should be able to process the varying dynamic behavior of hashtags and geotags in addition to words.
Topical Word Embedding
Topic models can be enriched using a WE model. The probability distribution of words 'w' in document 'D' sampled under topic 'z' is described as p(w | D) = Σ_z p(w | z) p(z | D). Gibbs sampling is suitable for generating posterior probabilities by sampling variables from the conditional distribution. With LDA, the joint probability of the topic proportion 'θ', a set of 'k' topics 'z' and a set of 'Nd' words is given (Blei et al. 2003) as p(θ, z, w | α, β) = p(θ | α) ∏_{n=1..Nd} p(z_n | θ) p(w_n | z_n, β) (2).
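The generative process behind equation (2) can be written down directly; the following sketch uses symmetric Dirichlet hyperparameters and toy vocabulary sizes as illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    K, V, N_d = 5, 1000, 20          # topics, vocabulary size, words per document
    alpha, beta = 0.1, 0.01          # symmetric Dirichlet hyperparameters (assumed)

    phi = rng.dirichlet([beta] * V, size=K)    # topic-word distributions, one per topic
    theta = rng.dirichlet([alpha] * K)         # per-document topic proportion

    doc = []
    for _ in range(N_d):
        z = rng.choice(K, p=theta)             # sample topic  z_n ~ Mult(theta)
        w = rng.choice(V, p=phi[z])            # sample word   w_n ~ Mult(phi_z)
        doc.append((z, w))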
LDA_Tags
The proposed LDA_Tags differs from the topic model described by Rosa et al. (2011) in that the context of both hashtags and geotags can be incorporated into the topic model for the analysis of topic drift in the media text. Hashtags 'y' and geotags 'x' are distributed over topics based on the Dirichlet distribution (θ), and topic 'z' has a multinomial distribution (Φ) over words.
Figure 3. LDA_Tags
Now the document is modeled as a mixture of topics (z) and the topics as a mixture of distributions over the words (w), hashtags (hv) and geotags (gv). With LDA_Tags, the probability distributions of word 'w', hashtag 'y' and geotag 'x' in document 'D' sampled under topic 'z' are described analogously to what Blei et al. (2003) did for words.
Algorithm 2. LDA_Tags
Vocabularies of words (W), hashtags (hv), geotags (gv) and the hyperparameters (α, β) define the topic model. The topic proportion is sampled once per document, whereas a hashtag y, geotag x, topic z and word w are sampled for each word 'w' in the document 'd' (Algorithm 2). Now, the joint probability of the topic proportion (θ) with a set of 'k' topics and a set of Nd words, hashtags and geotags for the given hyperparameters (α, β) is defined by modifying equation (2) above as p(θ, z, w, y, x | α, β) = p(θ | α) ∏_{n=1..Nd} p(z_n | θ) p(w_n | z_n, β) p(y_n | z_n) p(x_n | z_n), where hv is the hashtag vocabulary, gv is the geotag vocabulary, and the last two factors are the topic-conditional hashtag and geotag distributions.
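A sketch of how the generative step extends under LDA_Tags: for each token a topic is drawn, and a word, hashtag and geotag are then drawn from that topic's distributions. The per-topic hashtag and geotag distributions and their priors below are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    K, V, H, G = 5, 1000, 200, 50    # topics; word, hashtag and geotag vocabularies
    alpha, beta = 0.1, 0.01

    phi_w = rng.dirichlet([beta] * V, size=K)  # topic-word distributions
    phi_h = rng.dirichlet([beta] * H, size=K)  # topic-hashtag distributions (assumed prior)
    phi_g = rng.dirichlet([beta] * G, size=K)  # topic-geotag distributions (assumed prior)
    theta = rng.dirichlet([alpha] * K)

    tweet = []
    for _ in range(15):
        z = rng.choice(K, p=theta)
        w = rng.choice(V, p=phi_w[z])          # word sampled under topic z
        y = rng.choice(H, p=phi_h[z])          # hashtag sampled under the same topic
        x = rng.choice(G, p=phi_g[z])          # geotag sampled under the same topic
        tweet.append((z, w, y, x))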
In Twitter data, hashtags have faster dynamics compared to those of words and geotags.
New hashtags are created every day and their distributions are unpredictable. Hence, varying granularities must be processed for examining the distributions of words, hashtags and geotags.
Topic Dynamics with Hashtag and Geotag Contexts
The proposed dynamic topic model is based on a stochastic process, which represents the system with random variables whose probability distributions change randomly over time.
The proposed dynamic topic model is described using stochastic gradient Langevin dynamics (SGLD), an incremental gradient descent for minimizing an objective function which runs through a subset of samples. The SGLD update for the parameter θ at the t-th iteration is given (Welling & Teh 2011) as Δθ_t = (ε_t / 2) [∇ log p(θ_t) + (N / m) Σ_{i=1..m} ∇ log p(x_i | θ_t)] + η_t, with η_t ~ N(0, ε_t) (10), where the bracketed term with the rescaled mini-batch sum is the stochastic-gradient (SG) part and η_t is the Langevin-dynamics (LD) noise; N is the set of data items, m is the mini-batch size, x_i are the mini-batch data items, and ε_t is the step size or learning rate. The dynamic parameters are inferred using methods like Gibbs sampling, which resamples each random variable iteratively given the remaining variables. However, different mini-batch sizes of words, hashtags and geotags for different snapshots are required to be computed.
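A minimal SGLD sketch on a toy Gaussian model, following the update above (the model, prior and step-size schedule are placeholders, not the LDA_Tags posterior):

    import numpy as np

    rng = np.random.default_rng(2)

    def sgld_step(theta, data, m, eps, grad_log_prior, grad_log_lik):
        # One SGLD update (Welling & Teh 2011) with mini-batch size m.
        N = len(data)
        batch = data[rng.choice(N, size=m, replace=False)]
        # stochastic gradient, rescaled from the mini-batch to the full data set
        sg = grad_log_prior(theta) + (N / m) * sum(grad_log_lik(theta, x) for x in batch)
        noise = rng.normal(0.0, np.sqrt(eps))   # injected Langevin noise
        return theta + 0.5 * eps * sg + noise

    data = rng.normal(2.0, 1.0, size=5000)      # toy data: x ~ N(theta_true, 1)
    theta = 0.0
    for t in range(1000):
        eps = 1e-4 / (1 + t) ** 0.55            # decreasing step-size schedule
        theta = sgld_step(theta, data, m=100, eps=eps,
                          grad_log_prior=lambda th: -th / 10.0,   # prior N(0, 10)
                          grad_log_lik=lambda th, x: x - th)      # likelihood N(th, 1)
    print(theta)   # samples concentrate near the data mean (about 2.0)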
Varying Mini-batch Size
The dynamic topic model for LDA_Tags must handle the difference in the rates of change of the distributions of words, hashtags and geotags. A larger mini-batch size may not be able to track contextual change in words and hashtags effectively, and a smaller mini-batch size may lead to unnecessary processing of location-specific distributions. An increase in mini-batch size leads to a decrease in convergence rate and reduces the communication cost (Li et al. 2014). Hence, both small and large mini-batch sizes can be optimally initialized and can be varied based on the distributions in previous snapshots for learning the change in the posterior distribution. The mini-batch sizes for words (m_w^{t+1}), hashtags (m_h^{t+1}) and geotags (m_g^{t+1}), substituted for 'm' in equation (10), are computed from the change in the respective distributions over the previous snapshots. Dynamic topic model parameters for the distributions of words and tags cannot be integrated and must be computed separately: (i) the sampling parameter for the mean at time 't', α_t ~ N(α_t | α_{t−1}, σ²); (ii) the sampling parameter for the topic-document proportion at time 't'.
Kullback-Leibler Divergence of Topic
Topic drift can be described by the Kullback-Leibler (KL) divergence (Cai et al. 2014) of the distribution of a topic z over words 'w', conditioned on hashtag 'y' and geotag 'x', between consecutive snapshots: KL(t, t+1) = Σ_w p_t(w | z, y, x) log [p_t(w | z, y, x) / p_{t+1}(w | z, y, x)]. If the KL divergence of a topic is high during a particular period, it denotes a drift in the topic during that interval.
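In practice, the divergence between a topic's word distributions in consecutive snapshots can be computed directly; the two Dirichlet draws below stand in for the estimated distributions:

    import numpy as np
    from scipy.stats import entropy

    rng = np.random.default_rng(3)
    p_t = rng.dirichlet(np.ones(1000))      # topic-word distribution at snapshot t
    p_t1 = rng.dirichlet(np.ones(1000))     # topic-word distribution at snapshot t+1

    kl = entropy(p_t, p_t1)                 # KL(p_t || p_t1); large values flag drift
    print(kl)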
Topical Word Embedding with Hashtag and Geotag Contexts
The WE model predicts the context words for a given word, whereas TWE predicts the context words for a given word and topic. However, the proposed TWE model is able to predict the context words for the given (word, topic) pair conditioned on the hashtag and geotag contexts (Equation 25). Both HCV and GCV can be jointly learned as performed by Niu et al.
The joint learning of the topical embedding using both hashtag and geotag context vectors is described below (Figure 6).
Figure 6. Joint Learning of Hashtag and Geotag Contexts
Once the model is trained with the two sets of word_topic pairs [(<wc1_zh>_t, <wc2_zh>_t, ...)], the softmax function normalizes the prediction scores over the vocabulary and is described (Liu et al. 2015) as p(c | w, z) = exp(u_c · v_{w,z}) / Σ_{c'} exp(u_{c'} · v_{w,z}), where v_{w,z} is the topical word embedding of the (word, topic) pair and u_c is the output vector of the candidate context word c.
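One practical way to obtain topical word embeddings in the spirit of Liu et al. (2015) is to append the assigned topic id to each token and train an ordinary skip-gram model, so that each (word, topic) pair gets its own vector; the tiny corpus and topic assignments below are placeholders:

    from gensim.models import Word2Vec

    # Each token is tagged with its sampled topic id ("word_t<k>").
    sentences = [
        ["jallikattu_t3", "marina_t3", "protest_t3", "chennai_t3"],
        ["job_t1", "hiring_t1", "o2job_t1", "london_t1"],
    ]
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)
    print(model.wv.most_similar("jallikattu_t3", topn=2))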
Topic Drift
Topics at a given period may evolve from the topics at the previous time interval.
Dataset
Chennai
Topic Model -Results
Topics with posterior probabilities of words, hashtags and geotags are generated using the proposed topic model.
Evaluation
The evaluation is performed for the topic model, the topic dynamic model, topical word embedding and topic drift. For topic models, the common parameter used for evaluation is perplexity (Blei et al. 2003).
Topic Model Evaluation
Based on the perplexity computation by Blei et al. (2003), the perplexity for LDA_hash, LDA_geo and LDA_Tags is computed as perplexity(D_test) = exp(−Σ_d log p(w_d) / Σ_d N_d). The perplexity of the proposed LDA_Tags decreases when the number of topics is increased from 10 to 50. A comparison between LDA (only word distributions), LDA_hash (word and hashtag distributions), LDA_geo (word and geotag distributions) and LDA_Tags (word, hashtag and geotag distributions) with the Chennai data shows that the lowest perplexity is achieved with LDA_Tags (Figure 7). However, this alone does not establish statistical significance.
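The held-out perplexity itself is a one-line computation once the per-document log-likelihoods are available; the numbers below are placeholders:

    import numpy as np

    def perplexity(log_likelihoods, token_counts):
        # exp(-sum_d log p(w_d) / sum_d N_d), as in Blei et al. (2003)
        return np.exp(-np.sum(log_likelihoods) / np.sum(token_counts))

    print(perplexity(np.array([-1200.0, -950.0, -1100.0]),
                     np.array([200, 160, 180])))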
Figure 7. Perplexity vs. Topics
To find the significance of the topic model with tags context, a one-way ANOVA (Table 5) with the Chennai data is performed for the perplexity parameter over time. Since the baseline LDA does not include tags, it is not considered in the ANOVA analysis. The null hypothesis assumed here is: "The mean of perplexities over three intervals is the same for all topic models". The perplexity means of the three topic models (Table 5) over three periods are statistically different between groups, with the minimum value (143.2254) for LDA_Tags. It is also found that the F-measure between groups (Table 6) is 46.0966 and the p-value is less than 0.05 (0.0002). Hence, the hypothesis is rejected. The p-values from the within-group details clearly show that LDA_Tags is significant compared to the others and gives improved performance for the tags-based context. Log posterior estimates of the probability distributions of words, hashtags and geolocations vary with time (Figures 8a, 8b), which confirms their dynamic nature and the need for varying mini-batch sizes. There is more variation in the distribution of words compared to hashtags and geolocations in the Chennai data (Figure 8a). However, hashtags have more dynamic variation in distribution compared to the nominal change in words and geolocations in the UK data (Figure 8b).
This may be due to the abundant and frequent usage of social media by Twitter users in the UK compared to Chennai.
Dynamic Model Evaluation
The relative perplexity of recent with respect to previous intervals (Knights et al. 2009) is also calculated. The performance of the dynamic topic model with varying mini-batch sizes (words m_w, hashtags m_h, geolocations m_g) is compared with that of random and fixed (100 for words, hashtags and geolocations) mini-batch sizes. The average perplexity varies at nearly the same rate, and lower values are obtained for SGLD_vm compared to SGLD_fm and SGLD_rm (Figure 9a). However, the relative perplexity is high for SGLD_vm with the Chennai data, and there is no distinction between the dynamic models with the UK data, which is due to the smaller interval between snapshots.
Hence, SGLD_vm gives better performance compared to other models.
Topical Word Embedding Evaluation
Evaluation of TWE is achieved by visualizing the embedding vectors of the top 5 topics using the t-SNE distribution (Figure 10). The plots of TWEs without joint learning, using topics from LDA (Figure 10a), LDA_hash (Figure 10b) and LDA_geo (Figure 10c), and with joint learning (LDA_Tags), show that better grouping of topics is achieved with LDA_Tags (Figure 10d). The topic 'job' shows very high divergence on 19-4-16 and 20-4-16 (Figure 11b), showing the impact of drift on 'job'.
This is due to the association of 'job' with hashtag '#o2job'.
KL Divergence of Topics
The mean squared deviation of the topical embedding vectors of ten topics over time shows that topics have less deviation during May compared to January and August (Figure 12a).
However, the MSD of the topic 'jallikattu' is high during January (Figure 12a), which correctly predicts the duration of the protest, in contrast to the low divergence value in Figure 11a.
KL divergence also wrongly detects drift in 'gonna' and 'actor', which is not actually present. With the UK data, the topics 'job', 'wind', 'weather' and 'books' have a left-side skew and 'london' and 'sales' have a right-side skew, showing indications of possible drift (Figure 12a). Among these six topics, 'sales' has the most variation, denoting drift in 'sales' during the interval. This is due to the co-occurrence of 'sales' with '#internet'. The proposed topical embedding model correctly predicts the drift in 'sales', whereas KL divergence shows no deviation for the topic 'sales' (Figure 12b). To confirm the results, the TWEs of the Chennai data (January) were examined for topical variation of 'jallikattu'. Visualization using t-SNE (Figure 13) clearly shows the grouping of 'marina' with the topic 'jallikattu' due to the trends in the Twitter data during the protest.
Figure 13. Topical Word Embedding -t-SNE
This ensures that the proposed model detects the drift in topics more accurately than other models based on topic distribution over words and hashtags. This is possible with the joint learning of the hashtag and geotag contexts. The topic drift detection accuracy of models based on contexts of only words (LDA), words+hashtags (LDA_hash), words+geotags (LDA_geo) and words+hashtags+geotags (LDA_Tags) has been compared for the top 100 topics with different mini-batch size options (Figure 14).
Figure 14. Accuracy
It is found that higher accuracy (68% for the Chennai data and 63.2% for the UK data) is achieved with the proposed joint learning TWE model with hashtags and geotags and with varying mini-batch sizes, compared with the other context models.
CONCLUSION
The proposed topical embedding model based on tags context represents topics and words in the same semantic space effectively. The dynamic topic model SGLD with varying mini-batch sizes performs well for computing the dynamic topic parameters, with lower average perplexity and higher relative perplexity, for identifying topic drift in social media text with tags as topic indicators. Contextual changes in the topic distribution with hashtags and geolocations are better detected by the proposed topic model with tags contexts. The MSD of topical embedding vectors detects topic drift more precisely than the KL divergence of the topic distribution. The work can be extended in the future by adding different time-level discretizations for the data, i.e., a coarse level for words and geotags, and a fine level for hashtags. Individual hashtags can be replaced by hashtag groups to accommodate more hashtags in the topical embedding model. Geolocations may also be grouped to study the topic change due to community groups. Other cultural factors and events can also be related to the location-based impact on the drift.
Conflict of interest:
All authors declare that they have no conflict of interest. | 2020-10-28T19:15:55.791Z | 2020-10-12T00:00:00.000 | {
"year": 2020,
"sha1": "db41f5616074225d763d0c6f0b99eb093911b14f",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-90835/v2.pdf?c=1603251834000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cda9889b44f89c4a0f491e7df8cf0e1b39c514ac",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
250566802 | pes2o/s2orc | v3-fos-license | Survey of Techniques on Data Leakage Protection and Methods to address the Insider threat
Data leakage is a problem that companies and organizations face every day around the world, mainly the data leakage caused by the internal threat posed by authorized personnel who manipulate confidential information. The main objective of this work is to survey the literature to detect the existing techniques to protect against data leakage and to identify the methods used to address the insider threat. For this, a literature review of scientific databases was carried out for the period from 2011 to 2022, which resulted in 42 relevant papers. It was found that 60% of the studies are concentrated from 2017 to date and that 90% come from conferences and journal publications. Significant advances were detected in protection systems against data leakage with the incorporation of new techniques and technologies, such as machine learning, blockchain, and digital rights management policies. In 40% of the relevant studies, significant interest was shown in avoiding internal threats. The most used techniques in the analyzed DLP tools were encryption and machine learning.
Introduction
In terms of information security, insider threat refers to the risk posed by an organization's employees, partners, or customers to the organization's information [1]. Data leakage is the disclosure of information to unauthorized entities or individuals [2], commonly caused by an intentional or unintentional insider threat [3], [4], [5]. Data leakage protection (DLP) systems, or DLPS, are designed primarily to monitor data flow in an organization and apply predefined measures on terminal devices or networks within the organization [2]. The measures range from logging activities and sending alerts to end users and administrators, to quarantining data or blocking it altogether. DLP tools can monitor data at rest and in motion to detect sensitive information [3], [6].
In both corporate and hospital environments, the security of classified information is vital; the cost to companies of the lack of DLP technologies is estimated at over $200 per employee per year, and the human factor accounts for 35% of the causes of security breaches, including malicious and unintentional activities of both employees and third parties [7]. Not all sectors are equally affected by the costs of data leakage, the most sensitive being the healthcare and banking sectors due to the large volume of personal data they both handle [8]. The Spanish report [9] shows several aspects that give rise to data leakage in the healthcare sector, with malicious insider threats and unintentional employee actions being evident. Motivated by all of the above, the main objective of this work is to survey the literature to detect existing techniques to protect against data leakage and to identify the methods used to address the insider threat.

Studies similar to this one focus on reviewing the functions of DRM products popular in 2011 and available on the market, quantitatively evaluating the impact of the use of these products [10]; analyzing the existing digital forensics and incident management literature with the aim of contributing to the knowledge gaps in incident management in the cloud environment [11]; outlining lines of research based on a systematic review focused on blockchain technology applied to eHealth [12]; and examining the state of the art in security, privacy, and big data protection research [13]. In [14] there is a survey about sensitive data leakage prevention and anti-theft technologies for protecting the information security of e-government users, and in [15] a study of monitoring strategies for confidential documents based on virtual file systems (VFS). In [16] a systematic review of the literature focused on management functions in information security is carried out. The recent study [17] presents a review focused on the mobile agent model for data leakage prevention; that review only considered papers published in the journal "Communications and Network" and conference papers published between 2009 and 2019, and mobile agent-based distributed intrusion prevention and detection systems were analyzed in terms of their design, capabilities, and shortcomings. Other studies focus on reviewing blockchain strategies for secure and shareable computing, examining the state of blockchain security in the literature from the point of view of information system security issues classified into three levels (process level, data level, and infrastructure level) [18], and surveying the literature to analyze how blockchain systems can overcome potential cybersecurity barriers to achieve intelligence in Industry 4.0 [19].

Research on data protection has increased with the introduction of telecommuting due to the pandemic and the need to move data to external devices and networks. Similar work exists in reviews related to data protection, but it is worth noting that there is no recent study focused on grouping the work developed in the last ten years on DLP tools in which special attention is given to the techniques used in DLP tools and the methods to combat the insider threat. The main contributions of this article are the following: (1) it highlights the most used techniques in DLP tools; (2) it summarizes the methods found in the literature to face the insider threat, with the aim of promoting the transformation of protection against data leaks in this sense, to make it more secure; and (3) it exposes the limitations, advances, and applications of DLPS, in order to encourage the development of new tools. This paper addresses the following research questions:
RQ1.
What techniques or technologies are used as DLP tools? This question is answered in Sections 2 and 5, with a presentation of the main tools found in Section 2 and an analysis of their frequency of use in the relevant studies in Section 5. RQ2. How is the insider threat addressed in the DLP tools found in the literature?
The answer to this question is presented in section 3, which summarizes how insider threat is addressed in the literature analyzed.
RQ3.
What are the limitations, advances, and applications of DLP systems? This question is answered in Section 5.3, where the main limitations, advances, and applications of DLP systems in the period studied are presented.
The rest of the document is organized as follows: Sect. 2 describes the main techniques and technologies used in DLP. Section 3 presents the methods to address insider threats found in the literature and Sect. 4 describes the methodology followed for the literature review. Section 5 discusses the results obtained and the main limitations, advances, and applications of DLPS. Finally, this article is concluded, and future work is presented.
Techniques and Technologies in DLP
Several studies propose novel DLPS integrating different techniques and technologies to try to ensure optimal protection of confidential information. This section gives an overview of the most used techniques and technologies in the papers relevant to this study.
Intelligent documents
This technique consists of encapsulating within the document both the data it contains and the security mechanisms that control the use of such data [20], [21]. The security mechanisms can specify whether content may be deleted, edited, or read, and which user is authorized to perform each operation. This technique makes it possible to record where, when, and by whom the content of the document is accessed [22]. It is a technique generally used in DRM systems and very useful in combination with DLPS.
Biometric information
This technique is widely used in DLPS to identify the user accessing the information and thus try to ensure that it is a legitimate user with permissions to access the information [26], [27], [28].
Hypervisor
Hypervisor-based memory introspection looks for the presence of sensitive raw data in memory on both client and server machines, transcending the dependency on pre-existing security perimeters. This solution has a high computational cost: a hypervisor-based tool deploys one or more virtual machines to monitor system calls, which consumes considerable hardware resources such as memory and processing [29].
DRM for document protection
Digital Rights Management (DRM) systems: this term refers to a set of policies, techniques and tools that guide the proper use of digital content. A DRM system is based on ensuring that only the intended recipients can view sensitive files regardless of their location, thus ensuring data protection beyond the boundaries controlled by DLP systems, so that an organization is always in control of its information [30], [31].
The integration of DLPS and DRM policies ensures that vulnerabilities are minimized and that an organization can immediately deny access to any file, regardless of its location [6]. In [31], [32] and [33], the enterprise digital rights management (eDRM) system is presented, which provides persistent protection for documents using cryptographic methods and also includes document protection features that are easy for the enterprise to use. In the study [34] the authors reveal the importance of DRM solutions to prevent unauthorized users, inside or outside the boundaries of the organization, from reading an accidentally sent document, as well as their limitations towards certain types of documents: they can prevent neither the further propagation of the file on the external network once leaked, nor an expert hacker from attempting to decrypt the file's content. In [35] DRM systems are compared with the proposed DLPS (UC4Win). In [36] the authors reveal some of the problems faced by DRM systems as a document security solution, exposing that they are difficult or inapplicable to the organization's IT infrastructure and that they rely on certain plugins and on how these plugins may be used.
Encryption
The most widely used technique in DLPS is cryptography, since it is the main basis of security; it is based on the conversion of data from a readable format to an encrypted format. Any encryption algorithm is equivalent to a mutating substitution algorithm, the substitution unit being the concept of "block" and the substitution table being something non-fixed (and therefore mutating). The robustness of the algorithm is given by the mutability, which prevents statistical attacks [3].
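A minimal sketch of symmetric encryption as a DLP building block, using the Fernet recipe from the Python cryptography package purely as an illustration (none of the surveyed tools is claimed to use this particular scheme):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in a DLPS the key would come from key management
    f = Fernet(key)

    token = f.encrypt(b"confidential report contents")  # data at rest stays unreadable
    assert f.decrypt(token) == b"confidential report contents"  # only key holders recover it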
Hash
A widely used DLPS approach is exact file hash matching. This method is based on the verification of outbound traffic by comparing the hash values of the intercepted traffic with those of existing sensitive data [2]. If a match is detected between the values, a leak is detected by the system. This approach has the problem that any modification of the original document may result in a completely different hash value, which would not allow the system to detect the confidential document [20].
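A sketch of exact hash matching as described above (the sensitive-hash set is a placeholder); note that changing a single byte of the file defeats the check, which is precisely the limitation just mentioned:

    import hashlib

    def sha256_of(path, chunk=65536):
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            while block := fh.read(chunk):
                h.update(block)
        return h.hexdigest()

    # Hashes of known sensitive files (placeholder value).
    SENSITIVE_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def is_leak(path):
        return sha256_of(path) in SENSITIVE_HASHES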
Virtual file system (VFS)
A VFS is an abstraction layer on top of a real file system (RFS), that is, an intermediate layer between system calls and the RFS driver [15]. VFSs also provide the ability to perform operations before and after reading, writing, etc. In exchange for this intermediate "translation" between the applications and the actual file system, some of the original RFS performance is lost.
Challenges or context-based keys
Challenges replace a stored key with a calculated key, eliminating the security problem of key storage and distribution [3], [21], [22], while allowing the user to be identified through biometric data and the location of the computer to be determined by nearby Wi-Fi signals or GPS, among other benefits that this technique allows.
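A sketch of a context-derived key, assuming an illustrative set of parameters (biometric template hash, device fingerprint, location identifier); the derivation below is a generic PBKDF2 construction, not the scheme of any specific paper:

    import hashlib

    def context_key(biometric_id: bytes, device_fp: bytes,
                    location: bytes, salt: bytes) -> bytes:
        # Derive the key from contextual parameters instead of storing it.
        material = b"|".join([biometric_id, device_fp, location])
        return hashlib.pbkdf2_hmac("sha256", material, salt, 200_000)

    key = context_key(b"iris-template-hash", b"device-1234", b"lab-wifi-bssid", b"salt")
    # A request from another device or location yields a different key, so the
    # ciphertext cannot be opened outside the expected context.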
In [26] an approach is presented to control the use of confidential documentation, through the capture of biometric signals from users who interact with the object (document), correlating this information with the content accessed by users, without storing biometric information, but the correlation between the two. In this way, when a loss of information occurs, the organization will be able to know which user accessed the information, minimizing the risk of an attack on the biometric data.
The authors of [23] propose a DLPS based on Windows file system mini-filters to control the use of classified documentation by controlling OS I/O operations. The proposed system blocks I/O operations from any external storage device. In addition, a strategy is adopted to restrict the movement of classified information by adding any process that performs a read request on the path where the classified information is stored to a blacklist and blocking subsequent write attempts from that process.
The authors of [56] propose a Document Semantic Signature (DSS) approach to address the insider threat. To obtain the DSS, the content of a document is extracted and summarized, updating the DSS dynamically whenever the information is modified. The DLPS monitors the newly generated information by tracking its transfer or exfiltration by comparing the DSS of such information and the DSS of sensitive information. The study takes into account the possibility that an employee with access to confidential information can change the content using synonyms to evade the DLPS, which is based on keyword-based leak detection, and the proposed system addresses this problem. The system was tested with a public dataset achieving encouraging results.
In the study conducted by the authors of [57], a prototype of an anti-leakage system based on the enterprise cloud is presented. The system uses keyword-based content monitoring and filtering techniques. Once the keywords, which represent confidential information in a document to be sent, are detected, the user and the network administrator are alerted to the possible data leakage, and a trace of the incident is written to a log.
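A toy version of such keyword-based filtering of outgoing content (the keyword policy and the alert action are placeholders, not the actual prototype of [57]):

    import re

    KEYWORDS = {"confidential", "payroll", "patient record"}  # placeholder policy terms

    def scan_outgoing(text):
        # Flag outgoing text that contains policy keywords.
        found = [k for k in KEYWORDS if re.search(re.escape(k), text, re.IGNORECASE)]
        if found:
            print(f"ALERT: possible leak, matched {found}")  # alert user and admin
        return found

    scan_outgoing("Attached is the Payroll summary for Q3.")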
In [27] it is proposed to use eye-tracking technology for information protection. This technology allows obtaining user behavior information such as gaze location, gaze tracking, and points of interest. In information security it can be used to identify the user interacting with confidential information through biometric eye data, and to obtain metadata on the user performing operations of creating, sending, modifying, and receiving confidential information, for use in cases of conflicts detected by the DLP system; in addition, it can serve to improve the security and integrity of documents based on information about which parts of the document are of greatest interest to the user. Table 1 summarizes the contributions of the works found in the literature focused on the development and implementation of DLPS, as well as the techniques and technologies employed.
Methods to address the Insider threat
The main concern of recent times in information security is the internal threat posed by the employees, partners, and collaborators of the organizations originating confidential information. One of the main measures adopted in the literature is the control of information use, which goes beyond access control [35], allowing operations that enable the leakage of confidential information to be restricted and its use to be regulated.
The authors of [54] highlight the importance of strengthening the security of the confidential document management system in the face of the threat of company employees to confidential information; to address this situation, they propose a security model for confidential documents with a distribution control strategy. The first is based on storing the content encrypted with a symmetric encryption algorithm, ensuring that only the authorized user is able to decrypt the content; access control information is stored that allows to know the degree of authority of the user to use such confidential information and records each operation that the user performs; in addition, a hash function is used to ensure the integrity of the content. To control the distribution of confidential information, a client-server strategy is used in which a client will not be able to distribute confidential documentation without permission from the server, in which the control policies defined by the administrator are used and a monitor is installed on the client's computer that allows the server to control the operations performed by the user and prohibit unauthorized operations.
In the study [35] a DLPS based on usage control and dynamic data flow monitoring (UC4Win) is presented. This system can monitor process calls to the Windows API in order to prevent or modify data flows that pose a threat of confidential information leakage.
In [55] a scheme is proposed based on mandatory kernel-level encryption on the write operation and decryption on the open operation, implemented through middleware to ensure that data remain encrypted in memory. In addition, usage control policies are established, such as read-only, save, export, write, backup, and impression rights. For access control, a method of mutual authentication and key agreement between client and server is proposed, using the SM2 algorithm for its management.
Title
Contribution Techniques and Technologies TaintEraser: Protecting Sensitive Data Leaks Using Application-Level Taint Tracking [37] TaintEraser is presented, which allows you to track user data that you consider sensitive in the different applications that interact with them.
TaintEraser CLOUD SHREDDER: Removing the Threat to On-Road Data Disclosure on Laptops in the Cloud Computing Era [38] This study presents a new approach to eliminate the threat of ubiquitous Internet access and cloud computing "Cloud Shredder".
Cloud Shredder
Hypervisor-based Background Encryption [39] This study proposes a hypervisor-based approach that enables instant disk encryption without interfering with user activities.
BitVisor (Hypervisor + Encryption) Hypervisor-based protection of sensitive files in a compromised system [40] Special purpose hypervisor intended to protect sensitive files on a compromised operating system.
Filesafe (Hypervisor)
A Survey on Data Loss Prevention Techniques [6] A form of DLP is presented by storing them securely with: "On-the-fly encryption security in storage" (O-E-Sis).
O-E-sis (Encryption)
Architectural Design and Realization for Management of End Point DLP [41] Software solution developed for DLP terminal protection. It is an architecture designed with kernel hook in the Windows operating system and coded in the C language.
System-Call- Table (SSDT Hooking) Designing and developing a free Data Loss Prevention system [42] The authors of this study propose the use of the MyDLP and OpenDLP tools as a free DLP solution.
MyDLP + OpenDLP MLDED: Multi-Layer Data Exfiltration Detection System [43] The proposal of this study is MLDED, which is a multi-level data exfiltration detection system. This system allows detecting data exfiltration outside the organization through Hashing, Keyword Extraction, and Tagging techniques.
MLDED (Hashing + Keyword
Extraction + Tagging) The Design and Implementation of User Autonomous Encryption Cloud Storage System Based on Dokan [44] This study proposes Dokan for the design and implementation of an encrypted cloud storage system.
Dokan (VFS) + Encryption + OpenStack
Linebased end-to-display encryption for secure documents [45] A line-based encryption method for the design of an end-to-end display cipher is presented. With this technique, data loss can be avoided by using pixel-domain encryption and additional hardware to decrypt the graphics stream, decrypting data between the endpoint and the display, ensuring that data at rest are always encrypted.
On-screen encryption
Biometric/Cryptographic Keys Binding Based on Function Minimization [46] This study proposes a cryptographic system in which the key depends on biometric patterns, so it is necessary to have a valid biometric template in the system to generate the key.
Biometric cryptosystem (Encryption)
Enterprise Digital Rights Management for Document Protection [31] This paper studies an enterprise Digital Rights Management System (eDRM) based on cryptography that protects documents and provides useful functions for information security.
eDRM (Encryption)
Hypervisor-Based Sensitive Data Leakage Detector [47] "HyperSweep" is the DLPS proposed in this study, based on virtual machine memory introspection technology, aiming to check the memory contents of a guest system for sensitive information.
HyperSweep (Virtual Machine + Hypervisor KVM) Secure data storage and intrusion detection in the cloud using MANN and dual encryption through various attacks [48] An intrusion detection system using Machine Learning (ML) is proposed to detect if a document contains intrusive data, being this the case, it is stored in a secure site.
ML + Encryption + Steganography techniques
Using malware for the greater good: Mitigating data leakage [34] DocGuard is the method proposed in this study to protect against accidental data leakage. It consists of making antimalware and antivirus software active on storage systems, detect the leaked file and block access to it.
DocGuard
Off-line enterprise rights management leveraging biometric key binding and secure hardware [28] This study presents a modification of the biometric key binding scheme proposed in [46] which is mainly used to protect document encryption keys that are stored personal devices.
Biometric cryptosystem
A combined data storage with encryption and keyword-based data retrieval using the scds-tm model in cloud [49] "SCDS-TM" is the cloud storage system proposed in this study. This system attempts to ensure data confidentiality, integrity, and functionality through elliptic curve cryptography and proof of correctness of the storage.
SCDS-TM (Encryption)
Transparent Encryption with Windows Minifilter Driver [24] Implementation of a DLP solution that protects any data that is about to leave an endpoint using a Windows Minifilter driver framework. The application provides a plain text view of files even if they are stored in encrypted form on disk.
Minifilter + Encryption
It has also been seen in the results obtained that DRM tools are based on the control of copies of protected information and therefore gain value for the control of use and the protection of information from the threat of collaborators and partners. The proposal of [33] and [32] allows the implementation of an information system independent of the servers containing the control policies, which had to be accessed with conventional systems. It controls the use of and access to the document through a license (an XML document separate from the confidential information) containing the security rules and the configuration of the various security modules necessary for the management of the document. The rules are encrypted by means of public and private keys stored and known by the user.
The authors of [58] analyze three models of traditional document security management, exposing the limitations of each of them, and to try to overcome them they propose a system based on storage in the private enterprise cloud, with a system of authorization and encryption of documents in a virtual machine that encrypts all the document that is written in it, as well as light clients with a common terminal in the virtual machine that will guarantee that all written documents are encrypted and to decrypt them will have to be done through the same encryption system that will guarantee that the user leaves a trace of the operation carried out. External users will need an electronic certificate to decrypt the document.
In [36] the main problems of different solutions for information protection within an organization are identified, among which DLP and DRM solutions are described. A solution based on active documents and DRM is proposed that allows the control of document usage, mainly copy, paste, cut, delete, and print operations, inside and outside the organization of origin. The transfer channels considered in this work were removable storage, e-mails, and shared folders. This work does not implement the system, but proposes an idea of how to solve the problem of data leakage with active documents.
The authors of the study [25] propose as a solution to the internal threat a free DLPS based on detecting confidential information at the exit of the USB ports by means of machine learning and blocking the copy operation; for this purpose it integrates modules in kernel space (mini-filters). The system is developed for the Windows OS, as it is the most widely deployed in business environments.
The study [1] focuses on the insider threat that can be intentionally caused by an employee; for this, they propose "Efficient DLP-Visor", which is a context-based DLPS. The system is a thin hypervisor that intercepts calls in kernel space. The proposed DLPS makes it possible to detect data leaks even when the employee in question is the system administrator himself. Basically, the system works as follows: the administrator sets a file system path where sensitive information is stored; the DLPS logs any process that opens or reads a document from that path as critical, and any file written by such a process is logged as sensitive, as is any process that receives information from a critical process. The DLPS tracks critical processes by capturing kernel-mode calls and blocks the relevant operations of those processes.
In [3] and [21] a DLPS for the protection of confidential information is proposed. The proposed system allows access control through the derivation of the encryption key from a combination of a set of parameters; these parameters can be the biometric identification of the user accessing the information, the geographic location, the electronic fingerprint of the device, and the date and time, among others. Although this proposal does not specifically present usage control, it is robust due to the ability to require several parameters to derive the key.
Title Contribution Techniques and Technologies
Data loss prevention (DLP) using MRSH-v2 algorithm [20] Analysis of the different types of methods for implementing DLP tools, determining that exact file matching is the best approach, using for demonstration the implementation of the MRSH-v2 algorithm to show the capabilities of this method.
MRSH-v2
E-REA Symmetric Key Cryptographic Technique [50] The authors of this study propose a cryptographic algorithm to protect data from malicious attacks and provide security during data transfer. It is a symmetric key encryption algorithm that represents the improvement of the reverse encryption algorithm.
Encryption A Forecasting-Based DLP Approach for Data Security [51] A DLP solution is presented that allows classification of users accessing confidential information according to the number of accesses, based on past data, to predict the future through statistical models.
Statistical analysis
Design and Development of a Dynamic and Efficient PII Data Loss Prevention System [52] Introduces the DLP tool, PII Guardian, designed to detect and prevent known attacks. PII Guardian provides preventive actions by classifying detected data breaches according to their impact.
Rule-based approach
Cloud security framework and key management services collectively for implementing DLP and IRM [53] This study proposes an open source framework in the cloud area for building a data loss prevention tool.
Cloud security framework + Key management + Encryption
The following keywords were used for the literature search: "Security" AND ("DLP" OR ("Data" AND ("Leak" OR "Loss") AND ("Prevention" OR "Protection"))). These terms are searched in Abstract/Title/Keywords from 2011 to 2022. Figure 1 shows the search strategy used in this research; the search criteria used are provided by the search engine of each of the scientific databases.
Study selection and relevant papers
Once the terms have been entered in the search engines of the databases, the articles to be analyzed are selected by reading the titles of the results obtained (in this case 158). Repeated entries in more than one database were eliminated (56 articles). Selection criteria were applied in the analysis of the abstracts of 102 articles to classify those to be analyzed in full; the selection criteria were as follows: (1) studies of novel proposals of techniques and technologies for DLP; (2) studies analyzing techniques and technologies for DLP. This yielded 65 articles for complete analysis; then those studies aimed at systems for malware and rootkit protection, image cryptography and steganography were eliminated, as well as reviews of techniques and technologies, since they are works related to this one but not relevant to the analysis. A total of 42 articles remained for analysis. The procedure described is shown in the PRISMA diagram in Fig. 2, where the paper selection process can be seen, and how, out of a total of 158 papers found, 42 papers were relevant for analysis in this paper.
The procedure described is shown in in Fig. 2 the PRISMA diagram, where the paper selection process can that allows the control of document usage, mainly copy, paste, cut, delete, and print operations, inside and outside the organization of origin. The transfer channels considered in this work were removable storage, e-mails, and shared folders. This work does not implement the system, but proposes an idea on how to solve the problem of data leakage with active documents.
Given the persistent concern in organizations and enterprises regarding the internal threat to data leakage protection, it has attracted the interest of the research community in an attempt to circumvent it. The recent study conducted in [59], presents a system CITD for the detection of insider threats based on the behavior of workers according to their role and machine learning. The system was tested in three real organizations to reduce false positives that allow improvements in the tool.
Methodology
This paper applies the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method [60] in a literature review that analyzes existing techniques and technologies for DLP, focused on the leakage of electronic documents and classified information. Three stages of the PRISMA application are covered in this study: literature search, selection of relevant articles, and data extraction.
Literature search
For this research, the search focused on articles related to the techniques and technologies used for DLP published in impact journals, conference proceedings, and book sections, mainly in the scientific databases Google Scholar, Science Direct, IEEE Xplore, Web of Science, Scopus, and the ACM Digital Library, covering 2011 to April 2022. These databases span relevant scientific information in multiple engineering fields, allowing access to articles published in scientific and academic journals, repositories, archives, and other collections of scientific texts.
The following keywords were used for the literature search: "Security" AND ("DLP" OR ("Data" AND ("Leak" OR "Loss") AND ("Prevention" OR "Protection"))). These terms were searched in Abstract/Title/Keywords from 2011 to 2022. Figure 1 shows the search strategy used in this research; the search criteria were applied through the search engine of each scientific database.
Study selection and relevant papers
Once the terms had been entered in the search engines of the databases, the articles to be analyzed were selected by reading the titles of the results obtained (158 in this case). Entries repeated in more than one database were eliminated (56 articles). Selection criteria were then applied to the abstracts of the remaining 102 articles to identify those to be analyzed in full; the criteria were: (1) studies proposing novel techniques and technologies for DLP, and (2) studies analyzing techniques and technologies for DLP. This yielded 65 articles for complete analysis, from which studies aimed at malware and rootkit protection, image cryptography, and steganography were then removed, as were reviews of techniques and technologies, since they are related to this work but not relevant to the analysis. A total of 42 articles remained for analysis.
The procedure described is shown in the PRISMA diagram in Fig. 2, where the paper selection process can be seen: out of a total of 158 papers found, 42 were relevant for analysis in this paper.
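To make the screening pipeline concrete, the sketch below applies the boolean keyword filter and checks the PRISMA arithmetic reported above; the record fields and helper names are hypothetical illustrations, not part of the original study.

```python
# Hypothetical sketch: apply the review's boolean keyword filter to candidate
# records and reproduce the PRISMA screening arithmetic (counts as reported).

def matches_query(record: dict) -> bool:
    """True if title/abstract/keywords satisfy:
    "Security" AND ("DLP" OR ("Data" AND ("Leak" OR "Loss")
                              AND ("Prevention" OR "Protection")))."""
    text = " ".join([record.get("title", ""),
                     record.get("abstract", ""),
                     " ".join(record.get("keywords", []))]).lower()
    has = lambda term: term.lower() in text
    return has("security") and (
        has("dlp")
        or (has("data") and (has("leak") or has("loss"))
            and (has("prevention") or has("protection")))
    )

demo = {"title": "A DLP system for cloud security",
        "abstract": "We present a data loss prevention approach.",
        "keywords": ["security", "DLP"]}
assert matches_query(demo)

# PRISMA flow: identified -> deduplicated -> screened -> eligible -> included.
identified = 158
after_dedup = identified - 56   # 102 abstracts screened
eligible = 65                   # met selection criteria (1) or (2)
included = 42                   # after excluding off-topic studies
assert after_dedup == 102 and eligible - included == 23
```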
Discussion of results
This section discusses the results obtained after applying the above methodology: it classifies the relevant studies by year and type of publication, analyzes the number of relevant publications per year and their origin to determine where the topic is most widely disseminated, and then examines the main techniques used in DLPS, discussing their limitations, advances, and applications according to the reviewed literature.
Figure 3 shows the frequency of publications by year for the relevant studies found in the period 2011-2022. The year 2019 reaches the highest number of publications in this period. In general, the number of papers published per year ranges from 1 to 8, with a statistical mode of 3 and a mean of approximately 4, which means that in the years 2011 and 2019 the mean was exceeded. Approximately 60% of the relevant articles for this study fall in the period between 2017 and 2022, which shows a significant interest in recent years in the security of sensitive digital information.
Figure 4 shows the number of published papers according to their origin: 60% of the relevant papers come from conferences and approximately 30% from journals, demonstrating the deep interest in the academic field in protection against data leakage.
Figure 5 shows the most frequently used techniques in the literature. Among the most used is cryptography, at 40%, and ML is present in 12% of the articles studied, evidencing the progress of DLPS in the use of this technique for the classification of sensitive documentation. Others, such as hypervisors, biometric information capture, and intelligent documents, are each present in 10% of the 42 relevant papers. In the literature, these techniques and technologies are widely used in combination with each other. For example, in systems where mini-filters and VFS or middleware are used, documents are often encrypted for storage in memory. Also, when active documents are used, hash algorithms are incorporated to guarantee the integrity of the information, as is ML to classify the information according to its degree of confidentiality and apply security and access policies accordingly. In DLPS, these and other complementary techniques can provide strong protection for confidential information.
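A quick arithmetic check of these figures, under the assumption of 42 relevant articles spread over the 12-year search window:

```python
# Consistency check of the reported statistics, assuming 42 relevant
# articles over the 12 publication years 2011-2022 (illustrative only).
total, years = 42, 2022 - 2011 + 1
print(total / years)          # 3.5 articles/year, matching the reported
                              # mean of "approximately 4"
print(round(0.60 * total))    # ~25 articles fall in 2017-2022
```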
Limitations, advances, and applications
Limitations that have emerged over the years include the almost complete dependence on the quality of the security policies used and on a precise definition of the data to be protected, as well as the over-approximations required in the dynamic monitoring of data flows [35]. In [36], four challenges facing document security are identified, one of them being human negligence. DLPS cannot overcome this challenge because, as a security measure, they rely on usernames, passwords, and security policies to secure information, without taking into account that the user may be the very source of the leak; these are the same tools used to enforce the security policy of any organization, so a username and password are not enough. The tracking of unmarked documents [37], or documents not classified as confidential, also represented a major limitation of DLPS at the time.
Some of these problems have already been addressed by incorporating new techniques and technologies into DLPS, such as ML for document classification. A recent study [61] proposes a multilayer framework for insider threat detection based on a hybrid method composed of two predictive models, with an accuracy above 97%. Another application of ML in data protection is network intrusion detection, as seen in [62], [63], and [64]. Further advances include DRM systems for tracking sensitive information outside the organization, biometric information for user identification, and context-based keys that take the date, place, and time of information access into account. An important advance is the incorporation of blockchain to protect the DLPS logs where information on detected anomalies is stored: storing these logs in the Hyperledger Fabric ledger in real time prevents manipulation of the logs by authorized users attempting to eliminate evidence of data leakage [65].
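The tamper-evidence that motivates blockchain-backed logs can be illustrated with a minimal hash-chained log; the sketch below shows the principle only and is not the Hyperledger Fabric mechanism used in [65].

```python
# Minimal tamper-evident log: each record commits to the hash of its
# predecessor, so any later modification of a stored anomaly record breaks
# the chain. Illustrative sketch only.
import hashlib, json, time

def append_log(chain: list, event: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False                       # record was altered
        if i and rec["prev"] != chain[i - 1]["hash"]:
            return False                       # chain was broken
    return True

log: list = []
append_log(log, {"type": "copy_to_usb", "user": "u42", "file": "plan.docx"})
assert verify(log)
```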
In terms of DLPS applications, the studies reviewed focus on the security of sensitive information at the enterprise level, and most trends and developments lean in this direction. However, the authors of [21] propose a DLP solution using context-based encryption to prevent information leakage in drones, and in the poster [66] the authors propose a data leak detection tool for a health information system based on memory introspection. A recent study proposes a blockchain-based architecture that allows the secure transfer of electronic health records between different health care systems, verifying the integrity and consistency of requests for, and responses to, electronic health records [67].
Conclusions
This research presents a literature survey in which a total of 42 relevant studies were obtained. The survey allowed answering three research questions that met the objective proposed in this study. A deep interest in countering the insider threat was detected in more than 40% of the analyzed studies. The DLPS with the highest incidence in this regard provide access control and usage control of confidential information by controlling the operations that allow data leakage (copy, opening, writing, and reading), together with privacy policies, and DRM in the case of partners and collaborators. These tools mainly use biometric information capture, interception of calls in kernel space using hypervisors, VFS, middleware, and mini-filters, as well as security policies encapsulated in documents. In the analysis of the most used techniques and technologies, we found encryption, with 40% use in the studies analyzed. Significant progress is seen in DLP tools with the incorporation of techniques such as ML for the classification of sensitive information and the detection of anomalous activity, in addition to blockchain for the protection of DLPS records. No article was found in the literature that provides open-access DLPS code for reuse and improvement by other researchers. Few studies focused on data security in the healthcare sector, and only one applying DLP to the Internet of Things (IoT) was found in the search results. We therefore propose, as future lines of work, studies on the security and protection of the electronic health record, as well as the development and implementation of a DLPS focused on the insider threat, building on the works found that meet the requirements of being lightweight and unobtrusive, where access to information does not depend on stored user credentials, and with free access to the source code so that other researchers can adapt it to their needs and provide validations and improvements. To this end, we propose a study of the techniques and technologies that allow the development of virtual file systems, for the implementation of a secure file system as a DLP tool, together with a study of lightweight encryption and decryption algorithms suitable for the needs of a virtual file system. Another intended line of research is the application of DLPS to IoT, since this technology advances every day and IoT devices are major collectors of personal data. | 2022-07-16T15:10:01.911Z | 2022-07-14T00:00:00.000 | {
"year": 2022,
"sha1": "880cd3225f45014098c11d4b48110f1d53cf2490",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10586-022-03668-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "25569c0ef7eae038346ab3ee38278d827a3429f6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
84586730 | pes2o/s2orc | v3-fos-license | Certain Metals Trigger Fibrillation of Methionine-oxidized α-Synuclein
The aggregation and fibrillation of α-synuclein has been implicated as a key step in the etiology of Parkinson's disease and several other neurodegenerative disorders. In addition, oxidative stress and certain environmental factors, including metals, are believed to play an important role in Parkinson's disease. Previously, we have shown that methionine-oxidized human α-synuclein does not fibrillate and also inhibits fibrillation of unmodified α-synuclein (Uversky, V. N., Yamin, G., Souillac, P. O., Goers, J., Glaser, C. B., and Fink, A. L. (2002) FEBS Lett. 517, 239–244). Using dynamic light scattering, we show that the inhibition results from stabilization of the monomeric form of Met-oxidized α-synuclein. We have now examined the effect of several metals on the structural properties of methionine-oxidized human α-synuclein and its propensity to fibrillate. The presence of metals induced partial folding of both oxidized and non-oxidized α-synucleins, which are intrinsically unstructured under conditions of neutral pH. Although the fibrillation of α-synuclein was completely inhibited by methionine oxidation, the presence of certain metals (Ti3+, Zn2+, Al3+, and Pb2+) overcame this inhibition. These findings indicate that a combination of oxidative stress and environmental metal pollution could play an important role in triggering the fibrillation of α-synuclein and thus possibly Parkinson's disease.
Parkinson's disease (PD) is the second most common neurodegenerative disorder after Alzheimer's disease. Clinical symptoms of PD (tremor, rigidity, and bradykinesia) are attributed to the progressive loss of dopaminergic neurons from the substantia nigra. Some surviving nigral dopaminergic neurons contain cytosolic filamentous inclusions known as Lewy bodies and Lewy neurites (1, 2), a major fibrillar component of which was shown to be the presynaptic protein α-synuclein (3). The mutations A53T and A30P in α-synuclein have been identified in autosomal-dominantly inherited, early onset PD (4, 5). Furthermore, the production of α-synuclein in transgenic mice (6) or in transgenic flies (7) leads to motor deficits and neuronal inclusions reminiscent of PD. All this implicates α-synuclein in the pathogenesis of PD.
α-Synuclein is a small (14 kDa), highly conserved presynaptic protein that is abundant in various regions of the brain (8, 9). Structurally, purified α-synuclein belongs to the rapidly growing family of intrinsically unstructured or natively unfolded proteins (10, 11), which have little or no ordered structure under physiological conditions due to a unique combination of low overall hydrophobicity and large net charge (12). α-Synuclein readily assembles into amyloid-like fibrils in vitro with morphologies and staining characteristics similar to those extracted from disease-affected brain (11, 13-18). Fibrillation occurs via a nucleation-dependent polymerization mechanism (14, 17) with a critical initial structural transformation from the unfolded conformation to a partially folded intermediate (11). The cause of PD is unknown, but considerable evidence suggests a multifactorial etiology involving genetic susceptibility and environmental factors. Recent work has shown that, except in extremely rare cases, there appears to be no direct genetic basis of PD (19). However, several studies have implicated environmental factors, especially pesticides and metals (20). In agreement with these observations, it has been recently reported that direct interaction of α-synuclein with metal ions (21) or pesticides leads to accelerated fibrillation (22-24).
Oxidative injury is also suspected as another causative agent in the pathogenesis of PD (25, 26). The accumulation of nitrated α-synuclein (i.e., protein containing the product of tyrosine oxidation, 3-nitrotyrosine) in Lewy bodies has been demonstrated (27-29). Accumulation of another product of tyrosine oxidation, dityrosine, has been detected in vitro during experiments on the aggregation of α-synuclein in the presence of copper and H2O2 (30) or catecholamines (31) and leads to accelerated fibrillation of α-synuclein (32). The methionine side chain is the most readily oxidized amino acid in α-synuclein, and its four methionines, Met-1, Met-5, Met-116, and Met-127, are easily oxidized in vitro in the presence of H2O2. Interestingly, however, oxidation of the methionine residues of α-synuclein to the sulfoxides, rather than accelerating fibrillation, was found to prevent it (33). Furthermore, and most importantly, the presence of methionine-oxidized α-synuclein was found to completely inhibit fibrillation of the unmodified protein at ratios of ≥4:1 (33). Given the potential role of metals in the pathological aggregation of α-synuclein and the known strong coordination of some metals to sulfoxides, we decided to investigate the structural and fibrillation properties of Met-oxidized α-synuclein in the presence of several metals to shed more light on the combined effect of environmental factors (metals) and oxidative damage (methionine oxidation to the sulfoxide) on α-synuclein.
MATERIALS AND METHODS
Expression and Purification of Human α-Synuclein-Human recombinant α-synuclein was expressed in the Escherichia coli BL21(DE3) cell line transfected with the pRK172/α-synuclein wild-type plasmid (kind gift of M. Goedert, MRC Cambridge) and purified as described previously (33). Purity of the α-synuclein was determined by SDS-polyacrylamide gel electrophoresis, UV absorbance spectroscopy, and mass spectrometry.
Supplies and Chemicals-Thioflavin T (ThT) was obtained from Sigma. ZnSO4 and CaCl2 (analytical grade) were from Fisher. Analytical grade Ti2(SO4)3, CuCl2, and Hg(CH3CO2)2 were from Aldrich, whereas AlCl3 and PbO2 were from Mallinckrodt Chemical Works and Matheson Coleman & Bell, respectively. All other chemicals were of analytical grade from Fisher. All buffers and solutions were prepared with nanopure water and stored in plastic vials.
Oxidation of α-Synuclein by Hydrogen Peroxide-Oxidation of α-synuclein by H2O2 was performed as described previously (33).
Circular Dichroism (CD) Measurements-CD spectra were recorded on an AVIV 60DS spectrophotometer (Lakewood, NJ) using α-synuclein concentrations of 1.0 mg/ml and a 0.1-mm path length cell. Spectra were recorded from 250 to 190 nm with a step size of 1.0 nm, a bandwidth of 1.5 nm, and an averaging time of 10 s. For all spectra, an average of five scans was obtained. CD spectra of the appropriate buffers were recorded and subtracted from the protein spectra.
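When reducing such spectra, raw ellipticities are conventionally normalized to mean residue ellipticity; assuming the standard textbook conversion (the paper itself does not state it), with the observed ellipticity θ_obs in millidegrees, MRW the mean residue weight (molecular mass divided by the number of residues), l the path length in cm, and c the concentration in mg/ml:

```latex
[\theta] = \frac{\theta_{\mathrm{obs}} \times \mathrm{MRW}}{10 \, l \, c}
\qquad \left[\mathrm{deg \cdot cm^{2} \cdot dmol^{-1}}\right]
```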
Electron Microscopy-Transmission electron micrographs were collected using a JEOL JEM-100B microscope operating at an accelerating voltage of 80 kV. Typical nominal magnifications were ×75,000. Samples were deposited on Formvar-coated 300-mesh copper grids and negatively stained with 1% aqueous uranyl acetate.
Fibril Formation Assay-Fibril formation of oxidized and non-oxidized α-synuclein in the presence of various metals was monitored using the ThT assay in a fluorescence plate reader (Fluoroskan Ascent) as described previously (33). Standard conditions were 35 μM α-synuclein, pH 7.5, 20 mM Tris-HCl buffer, 37˚C, with agitation. ThT fluorescence was excited at 450 nm, and the emission wavelength was 482 nm.
Estimation of Hydrodynamic Dimensions-Dynamic light scattering was used to determine the Stokes radii with a DynaPro Molecular Sizing Instrument (Protein Solutions, Lakewood, NJ) using a 1.5-mm path length, 12-μl quartz cuvette. Prior to measurement, solutions were filtered with a 0.1-μm Whatman Anodisc-13 filter.
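Instruments of this kind derive the Stokes radius from the measured translational diffusion coefficient via the Stokes-Einstein relation; a minimal sketch of the conversion, with illustrative (not measured) values:

```python
# Stokes radius from a measured diffusion coefficient via Stokes-Einstein:
# R_s = k_B * T / (6 * pi * eta * D). All numbers below are illustrative.
from math import pi

K_B = 1.380649e-23          # Boltzmann constant, J/K

def stokes_radius(D_m2_per_s: float, T_kelvin: float = 310.0,
                  eta_pa_s: float = 6.9e-4) -> float:
    """Hydrodynamic (Stokes) radius in meters; eta defaults to the
    viscosity of water near 37 C."""
    return K_B * T_kelvin / (6 * pi * eta_pa_s * D_m2_per_s)

# A small, natively unfolded ~14 kDa protein might show D ~ 7e-11 m^2/s,
# giving an R_s of a few nanometers.
print(stokes_radius(7e-11) * 1e9, "nm")   # ~4.7 nm
```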
RESULTS
The Effect of Methionine Oxidation and Metal Binding on α-Synuclein Conformation-We first examined the effect of methionine oxidation on the conformation of α-synuclein and then the effects of selected metals on the conformation of unmodified and Met-oxidized α-synuclein. Fig. 1 compares far-UV CD spectra measured for the non-oxidized (Fig. 1A) and Met-oxidized (Fig. 1B) forms of human α-synuclein in the absence or presence of several polyvalent cations. The spectra show that oxidized α-synuclein is slightly more unfolded than non-oxidized α-synuclein in the absence of cations. This is manifested by a small increase in negative ellipticity in the vicinity of 198 nm and somewhat lower intensity in the vicinity of 222 nm. This increased degree of disorder has been attributed to the decreased hydrophobicity of the oxidized methionines, leading to a decrease in the overall hydrophobicity of the protein (33). Previously, we demonstrated that the interaction of metal cations with natively unfolded α-synuclein induced a partially folded conformation (21). This transition was attributed to the counter-ion-induced neutralization of the coulombic charge-charge repulsion within the very negatively charged protein at neutral pH (21). In agreement with this observation, Fig. 1 shows that in the presence of metals, definite changes occur in the far-UV CD spectra of both non-oxidized and oxidized forms of α-synuclein. In particular, a decrease in the minimum at 196 nm was accompanied by an increase in negative intensity around 222 nm, reflecting metal binding-induced formation of secondary structure (Fig. 1). Significantly, Fig. 1 shows that binding of the metals induced comparable structural changes in both oxidized and unmodified proteins, most probably reflecting the stabilization of identical partially folded conformations. Thus, Met-oxidized α-synuclein is slightly more unfolded than the non-oxidized protein, but in the presence of metal ions it adopts a similar partially folded conformation. Our previous studies have shown that formation of such a partially folded conformation correlates with accelerated fibrillation, as is seen with the effect of metals on non-oxidized α-synuclein (21).
The Effect of Metal Binding on Fibrillation of Methionine-oxidized α-Synuclein-Next, we determined the effect of the metals on the fibrillation of Met-oxidized α-synuclein. ThT is a fluorescent dye that interacts with amyloid fibrils, leading to an increase in fluorescence intensity in the vicinity of 480 nm (34). Fig. 2 compares fibrillation patterns of non-oxidized (Fig. 2A) and oxidized α-synuclein (Fig. 2B) in the absence and presence of several metal cations, monitored by ThT fluorescence. Fibril formation for the non-oxidized α-synuclein at neutral pH was characterized by a typical sigmoidal curve. In agreement with earlier studies (24), the fibrillation rate increased dramatically in the presence of all metal cations investigated (Fig. 2A). The list of previously analyzed cations (Li+, K+, Na+, Cs+, Ca2+, Co2+, Cd2+, Cu2+, Fe2+, Mg2+, Mn2+, Zn2+, Co3+, Al3+, and Fe3+) has been extended to consider the effect of Hg2+, Pb2+, and Ti3+. Interestingly, Hg2+ and Pb2+, which are of particular relevance to environment-induced Parkinsonism, are among the most effective accelerators of α-synuclein fibrillation. This underlines, once again, a potential link between heavy metal exposure, enhanced α-synuclein fibrillation, and Parkinson's disease. In contrast, there was no evidence of fibril formation by methionine-oxidized α-synuclein at neutral pH (Fig. 2B). Previously, we showed that the inhibitory effect of methionine oxidation on α-synuclein fibrillation can be eliminated under conditions of low pH, due to the formation of a partially folded intermediate reflecting protonation of the carboxylate groups (33). In view of this observation, and the observation that metal cations induce partial folding of oxidized α-synuclein (Fig. 1), one might expect that fibrillation of the methionine-oxidized protein would occur in the presence of metals. In accord with this hypothesis, methionine-oxidized α-synuclein readily formed fibrils in the presence of certain metal ions, such as Ti3+, Al3+, Zn2+, and Pb2+ (Fig. 2B and Table I). However, not all metals were able to accelerate the fibrillation of methionine-oxidized α-synuclein: for example, Hg2+, Cu2+, and Ca2+, although able to induce the partially folded conformation in the oxidized protein, did not induce its fibril formation (at least not within the time scale examined). Moreover, Fig. 2 and Table I show that in the presence of Zn2+ and Pb2+, fibrillation of the oxidized α-synuclein was as accelerated as for the non-oxidized protein, whereas Al3+ and Ti3+ showed a less pronounced effect. The morphology of the fibrillar material formed by the non-oxidized and oxidized α-synuclein in the presence of several metal cations was analyzed by transmission electron microscopy, and both forms of α-synuclein formed typical amyloid fibrils, as shown in Fig. 3.
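Sigmoidal ThT traces of this kind are conventionally summarized by fitting a sigmoidal function to extract a lag time and an apparent growth rate; the sketch below illustrates the generic analysis on synthetic data and is not the authors' code.

```python
# Fit F(t) = F0 + A / (1 + exp(-(t - t50)/tau)) to a ThT trace; lag time is
# commonly estimated as t50 - 2*tau and the apparent rate as 1/tau.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, f0, a, t50, tau):
    return f0 + a / (1.0 + np.exp(-(t - t50) / tau))

# Synthetic trace standing in for plate-reader data (time in hours).
t = np.linspace(0, 100, 200)
f_obs = sigmoid(t, 5.0, 90.0, 40.0, 5.0) + np.random.normal(0, 1.5, t.size)

popt, _ = curve_fit(sigmoid, t, f_obs, p0=[5, 90, 40, 5])
f0, a, t50, tau = popt
print(f"lag time ~ {t50 - 2 * tau:.1f} h, apparent rate ~ {1 / tau:.3f} 1/h")
```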
Dynamic Light Scattering Experiments to Monitor Hydrodynamic Size-There are a number of possible mechanisms whereby methionine oxidation could inhibit α-synuclein fibrillation. One of these would be through stabilization of off-pathway oligomers, and another would be through the capping of nascent fibrils. To investigate these possibilities, we monitored the association state of Met-oxidized α-synuclein during its incubation, in the absence and presence of metal ions, using dynamic light scattering (Fig. 4). Given the nature of the experimental measurements, populations of oligomers of less than 5-10% are not considered significant. Since the data shown in Fig. 4 are only for soluble protein, the total concentrations may differ among the panels of the figure.
Met-oxidized α-synuclein remained monomeric for >100 h under standard incubation conditions (35 μM α-synuclein, pH 7.5, 37˚C, with agitation), as shown in Fig. 4D, indicating that neither oligomers nor fibrils were formed in statistically significant amounts. In contrast, unmodified α-synuclein remained predominantly monomeric for the first 20 h (corresponding to the lag time) but then showed dimers and higher oligomers at longer times (in addition to fibrils), as shown in Fig. 4A. Thus, the conversion of methionine to its sulfoxide must, in some way, prevent formation of the critical partially folded intermediate conformation and subsequent association into fibrils. In the presence of Zn2+, which leads to fibril formation from the Met-oxidized α-synuclein, the monomer is the only species initially present. However, at later times, in addition to fibrils, soluble oligomers were detected, amounting to as much as 30% of the total protein and having an Rs of ~40 nm, similar to the size of the oligomers observed with the unmodified protein. In contrast, in the presence of Ca2+, which does not lead to fibrils with Met-oxidized α-synuclein, only the monomer was detected during the incubation. Since these two metals reflect the two types of metal ion-induced effects observed in the other properties investigated, their behavior is considered representative of the other metal ions.
DISCUSSION
Oxidative stress is believed to be a factor in the etiology of Parkinson's disease, and the methionine residues of α-synuclein are the most easily oxidized side chains in the protein. Therefore, our previous discovery that methionine-oxidized α-synuclein, which is expected to represent one of the most common products of oxidative damage to α-synuclein, fails to form fibrils and inhibits fibrillation of unmodified α-synuclein was rather surprising, although oxidation of the single methionine residue in Aβ has also been shown to attenuate fibrillation of Aβ (35).
Previously, we have shown that formation of a partially folded intermediate is a critical initial step of α-synuclein fibrillogenesis (11) and that α-synuclein fibrillation is accelerated under conditions favoring this intermediate (11, 36) or in the presence of metal cations (21). A contributing factor to the inhibition of methionine-oxidized α-synuclein fibrillation is believed to be the slightly increased stabilization of the natively unfolded conformation (33). The data presented here are consistent with the conclusion that interaction of methionine-oxidized α-synuclein with certain metals modulates its conformational properties and propensity for fibrillation. Whereas all the metals studied are able to induce partial folding in this intrinsically unstructured (natively unfolded) protein, not all cations are equal in their abilities to eliminate the inhibitory effect of methionine oxidation on α-synuclein fibrillation. In particular, the fibrillation rates were very close for oxidized and non-oxidized α-synuclein in the presence of Zn2+ and Pb2+; however, fibrillation was still inhibited in the presence of Hg2+, Cu2+, and Ca2+. This observation indicates that factors other than electrostatic interactions must play an important role in overcoming the inhibition of α-synuclein fibrillation caused by methionine oxidation. One such factor is undoubtedly the known propensity of certain metals to coordinate strongly with sulfoxides, leading to very stable complexes (37). In particular, for some metal ions, bridging between two sulfoxides is favored. Such intermolecular or intramolecular coordination of two (or more) methionine sulfoxides could significantly affect the fibrillation. In particular, we propose that stable intermolecular bridging metal complexes would significantly promote fibrillation: thus, the presence of Zn2+ or Pb2+ leads to intermolecular cross-bridging, which facilitates the association of Met-oxidized α-synuclein and leads to its subsequent fibrillation. Metals such as Hg2+ and Cu2+, which may also form sulfoxide bridges, may be limited to intramolecular coordination due to different ligand bonding. The results show that under those conditions where fibrillation occurs, large soluble oligomers are present at the latter stages of the lag time and during the fibril growth stage of the aggregation process.
With regard to the biological relevance of these observations, it is becoming clear that many factors can affect the rate of α-synuclein fibrillation, suggesting that in dopaminergic neurons there is a balance between factors that can accelerate fibrillation and those that inhibit or prevent it. It is likely that there are chaperones or chaperone-like species that are important in minimizing α-synuclein aggregation under normal conditions. In our earlier study, showing that the addition of Met-oxidized α-synuclein inhibited fibrillation of the non-oxidized form (33), we suggested that the methionine residues in α-synuclein may be used by the cells as a natural scavenger of reactive oxygen species, since (a) methionine can react with essentially all of the known oxidants found in normal and pathological tissues; (b) α-synuclein is a very abundant brain protein; (c) it has recently been shown that the concentration of α-synuclein can be increased significantly as a result of the neuronal response to toxic insult (23); and (d) methionine sulfoxide residues in proteins can be cycled back to their native methionines by methionine sulfoxide reductase (38), a process that might protect other functionally essential residues from oxidative damage (39). It should be noted, however, that the efficiency of this regeneration system must take into account the finding that methionine oxidation forms the sulfoxide in two diastereoisomeric forms and that stereoselective oxidation can sometimes occur, depending on both the structural restraints in the region of the methionine and on the oxidant itself (40). Each methionine sulfoxide isomer can be reduced back to its original methionine state, provided that the corresponding complementary reductase is present and active (41).
The balance between the protective antioxidant role of the methionine residues that is enhanced by this recycling and the protective antifibrillation effect of oxidized methionine residues in α-synuclein may fail under conditions of environmental pollution, due to exposure of a person to lead, aluminum, zinc, titanium, and other metals. We assume that in the presence of enhanced concentrations of such industrial pollutants, toxic insult-induced up-regulation of α-synuclein may no longer play a protective role; rather, it may represent a risk factor, leading to metal-triggered fibrillation of the methionine-oxidized protein. | 2019-03-21T13:03:42.177Z | 2003-07-25T00:00:00.000 | {
"year": 2003,
"sha1": "9d56c6dcbdd1501ab296d7dde590092318b95efa",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/278/30/27630.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "7dbfc50aed902f298e5636edd0a7e409c01276ef",
"s2fieldsofstudy": [
"Chemistry",
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
32889608 | pes2o/s2orc | v3-fos-license | Structural studies demonstrating a bacteriophage-like replication cycle of the eukaryote-infecting Paramecium bursaria chlorella virus-1
A fundamental stage in viral infection is the internalization of viral genomes in host cells. Although extensively studied, the mechanisms and factors responsible for the genome internalization process remain poorly understood. Here we report our observations, derived from diverse imaging methods, on genome internalization of the large dsDNA Paramecium bursaria chlorella virus-1 (PBCV-1). Our studies reveal that the early infection stages of this eukaryote-infecting virus occur by a bacteriophage-like pathway, whereby PBCV-1 generates a hole in the host cell wall and ejects its dsDNA genome in a linear, base-pair-by-base-pair process, through a membrane tunnel generated by the fusion of the virus internal membrane with the host membrane. Furthermore, our results imply that PBCV-1 DNA condensation, which occurs shortly after infection, probably plays a role in genome internalization, as hypothesized for the infection of some bacteriophages. The subsequent perforation of the host photosynthetic membranes presumably enables trafficking of viral genomes towards host nuclei. Previous studies established that at late infection stages PBCV-1 generates cytoplasmic organelles, termed viral factories, where viral assembly takes place, a feature characteristic of many large dsDNA viruses that infect eukaryotic organisms. PBCV-1 thus appears to combine a bacteriophage-like mechanism during early infection stages with a eukaryotic-like infection pathway in its late replication cycle.
Introduction A fundamental and general stage in viral infection is the transfer of the viral genome into the host cell. After attachment to the cell membrane, viruses that infect animal cells depend on various entry pathways, mainly consisting of endocytosis, pinocytosis, phagocytosis and variants of these strategies [1,2]. Thus, un-coating of viral genomes occurs inside the host cell. In contrast, most bacteriophages eject their genome into their bacterial host through the cell wall and membrane layers [3,4], eventually leaving an empty capsid at the periphery of the bacterial cell.
Paramecium bursaria chlorella virus-1 (PBCV-1) is the prototype of the genus Chlorovirus (family Phycodnaviridae) that infects chlorella-like green algae and along with viruses in the Mimiviridae, Asfarviridae, Poxviridae, Iridoviridae and Marseilleviridae families, is a member of the nucleocytoplasmic large eukaryote-infecting dsDNA viruses clade [5,6]. Viruses belonging to this clade have recently attracted interest due to their unusual size, structural complexity, large genomes and elaborate infection cycles [7,8].
PBCV-1 is an icosahedral virion (190 nm in diameter) that, like bacteriophages, needs to penetrate a thick host cell wall and cellular membranes to initiate infection [9,10]. The virus contains a single spike-like structure at one vertex [11], which makes the first contact with the wall of its host cell [12], the unicellular photosynthetic alga Chlorella variabilis NC64A. PBCV-1 attachment is followed by host cell wall degradation at the point of contact by a virus-packaged enzyme(s) [9]. As reported here, following wall degradation the viral internal membrane fuses with the host membrane, thus generating a membrane-lined tunnel through which the ~331 kbp linear dsDNA viral genome and viral proteins are ejected into the host cytoplasm [10], leaving an empty viral capsid on the cell surface [9], a trait characteristic of bacteriophages. Once ejected, the viral genome is rapidly translocated to the host nucleus, as indicated by the finding that transcription of viral genes is detected in infected cells at 7 min post infection (PI) [13]. This finding, along with the fact that the virus neither encodes nor packages a recognizable RNA polymerase, supports the notion that at least initial viral DNA replication and transcription processes occur in the host nuclei. This notion is also consistent with recent observations revealing major morphological modifications of the host nucleus during PBCV-1 infection [14]. Indeed, no extensive morphological changes of host nuclei are detected during the replication cycle of the giant Mimivirus or the Vaccinia virus, whose entire replication cycles take place in the cytoplasm [15,16].
These observations raise several fundamental questions. The large internal pressure generated by the highly condensed genome in bacteriophages, along with pull forces exerted by bacterial DNA-binding proteins such as RNA polymerases present in the cytoplasm, have been suggested to contribute to viral DNA ejection [4, 17-21]. Neither of these factors can account for the ejection of the PBCV-1 genome, as the pressure generated by the PBCV-1 genome, although substantial, is significantly less than that characteristic of bacteriophages [22], and no DNA-binding proteins are expected to be present in chlorella cytoplasm. Thus, what are the mechanisms responsible for PBCV-1 genome ejection? Moreover, PBCV-1 genomes ejected into the host cytoplasm are rapidly translocated to the host nucleus [23]. The translocation issue is intriguing since, as shown in this study, PBCV-1 host cells are packed with thylakoid membranes that surround most of the cell periphery and hence generate a formidable barrier for DNA translocation, as are all intracellular membrane structures [24].
To obtain insights into the initial events of the PBCV-1 infection cycle we used advanced super-resolution fluorescence and electron microscopy techniques, including Stochastic Optical Reconstruction Microscopy (STORM) that enables sub-diffraction resolution [25], Scanning-Transmission Electron Microscopy (STEM) tomography, and specific DNA labeling technologies such as Electron Microscopy In Situ Hybridization (EMISH). We demonstrate that shortly after attachment to the host cell wall, PBCV-1 perforates the wall and generates a membrane-lined tunnel through the fusion of viral membrane and host cytoplasmic membrane. Viral genomes are then ejected through this tunnel and rapidly translocated to the host nucleus, possibly through viral-induced perforations of thylakoid membranes. Previous studies revealed that at late infection stages PBCV-1 generates cytoplasmic organelles, termed viral factories, in which viral assembly takes place, a feature characteristic of many eukaryote-infecting dsDNA viruses [8, 14, 26-36]. These findings, along with those reported here, which underline the bacteriophage-like traits revealed by PBCV-1 at early infection stages, imply that PBCV-1 uniquely combines a bacteriophage-like infection mechanism during early infection stages with a eukaryotic-like infection pathway in its late replication stages.
Viral membrane fusion with host membrane generates a portal for DNA delivery
Previous studies of PBCV-1-infected chlorella cells demonstrated that shortly after attachment, PBCV-1 degrades the host cell wall and ejects its genome into the host cytoplasm [9]. To obtain deeper insights into viral DNA delivery, we used double-tilt Scanning Transmission Electron Microscopy (STEM) tomography of high-pressure-frozen and freeze-substituted (HPF-FS) PBCV-1-infected chlorella cells. Our tomography studies revealed that degradation of the cell wall at the virion attachment site is followed by the extension of the viral membrane towards the host cell (Fig 1). This deformation is accompanied by the protrusion of the host cellular membrane outwardly at the viral attachment site (Fig 1A-1G; white arrowheads), which is likely to result from the large turgor pressure within the host cells [10].
The concomitant deformation of viral and host membranes leads to a tight proximity between these membranes (Fig 1A and 1B), thus enabling fusion of the two membranes, which in turn results in the formation of a narrow membrane-lined tunnel of ~5 nm inner diameter and ~32 nm length (Fig 1, blue arrowheads). In addition, the PBCV-1 genome appears to undergo massive reorganization during its ejection (asterisks in Fig 1B and 1D), rearranging from an apparently homogeneous morphology that is spread throughout most of the internal viral core (Fig 1A and 1B) to a mass that is positioned at the center of the capsid (Fig 1C and 1D). Upon completion of DNA ejection, empty capsids are left attached to the cell wall (Fig 1E-1G), frequently near multi-layered thylakoid membranes (Fig 1G; red arrowheads). In addition, the STEM tomograms revealed that the viral tunnel persists throughout the entire course of DNA delivery into the cell without changing its internal diameter. Our STEM data do not provide, however, an unambiguous answer on the fate of the tunnel following genome delivery. Fig 1H and 1I show a 3-D surface reconstruction derived from the STEM tomogram depicted in Fig 1A (S1 Movie). The infecting virus is attached to the cell wall (brown layer) and creates a hole in the wall, presumably using virus-packaged enzymes. The tunnel generated following host and viral membrane protrusions and subsequent fusion is highlighted in Fig 1H and 1I (white arrowheads and blue structures, respectively).
Viral genomes are rapidly transported to the host nucleus
Previous studies of the PBCV-1 infection cycle provided circumstantial evidence that following ejection of the viral genome into the host cytoplasm, PBCV-1 DNA and viral proteins are rapidly translocated towards and into the host nucleus [10,23]. Details on how this translocation occurs remain unknown. Specifically, the inherent hurdles associated with trafficking of the large PBCV-1 genomes through the crowded cytoplasm are highlighted in the STEM-derived model (Fig 1H and 1I, S1 Movie) that reveals numerous vesicles, cisternae and Golgi stacks in the host cytoplasm. The multiple, densely-packed chloroplast membrane stacks that surround most of the host cell (Fig 2) are likely to impose an additional and particularly demanding hurdle that viral DNA needs to overcome in its trajectory towards the host nucleus.
As indicated above, the internal diameter of the membrane tunnel generated by the fusion of the host and internal viral membranes is very narrow (~5 nm), presumably allowing concomitant transfer of only a single dsDNA helix along with putative viral DNA-binding proteins [22]. This result was unexpected given that the entire 331 kbp dsDNA PBCV-1 genome is translocated through the tunnel in only a couple of minutes. Such a rapid genome transfer is indicated by the finding that empty capsids attached to chlorella cells are detected already at two minutes following exposure of the cells to the PBCV-1 viruses (Fig 1E-1G).
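A back-of-the-envelope calculation makes the implied translocation rate explicit, assuming the full genome transits in roughly the two minutes suggested by the appearance of empty capsids:

```python
# Rough translocation rate implied by ejecting 331 kbp through the tunnel
# in ~2 min; 0.34 nm rise per base pair of B-form DNA. Purely illustrative.
genome_bp = 331_000
transit_s = 120.0
rate_bp_per_s = genome_bp / transit_s          # ~2,800 bp/s
contour_nm = genome_bp * 0.34                  # ~113,000 nm of DNA
speed_nm_per_s = contour_nm / transit_s        # ~900 nm/s through a 5 nm pore
print(rate_bp_per_s, speed_nm_per_s)
```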
To localize viral genomes and follow their trajectories, we used both immuno-DNA labeling of cryo-preserved specimens and Electron Microscopy In Situ Hybridization (EMISH) technology, which allows viral DNA to be identified specifically. Briefly, the EMISH methodology relies on hybridization of digoxygenin-labeled DNA probes with viral DNA sequences, followed by treatment with anti-digoxygenin antibodies. In addition to detecting cellular DNA localized in the host nucleus and chloroplast, anti-DNA antibodies revealed labeled DNA extending from infecting PBCV-1 particles (Fig 3A and 3B, black arrowhead), suggesting a viral genome in the process of being ejected into the cytoplasm and translocated towards the nucleus. Indeed, DNA was also detected near the host nucleus (Fig 3C and 3D and S1A and S1B Fig). Within 6 min, PBCV-1 genomes were detected in the vicinity of the nucleus and inside it (Fig 3E and 3F, S2 Fig). Significantly, this is the first visual evidence that PBCV-1 DNA actually enters the nucleus.
Fig 1. Host and viral membrane fusion generates a tunnel through which viral DNA is ejected. A-G.
PBCV-1-infected chlorella cells at 1.5-2 min PI were immobilized with HPF-FS and thick sections were analyzed by STEM tomography. A. A 5.2 nm tomographic slice from a 220 nm-thick STEM tomogram showing the close proximity between the viral and host internal membranes resulting from their convergence at the infection site. B. A 7.8 nm tomographic slice of a high magnification of the inset in panel A. C. A 5.2 nm tomographic slice from a different 220 nm STEM tomogram. D. High magnification of the inset of panel C. The generation of a continuous tunnel is evident. E, F. Two different 5.2 nm STEM tomography slices from the same tomogram showing the same PBCV-1-infected cells with almost completely empty capsids in which the membrane tunnel is still detected. G. A 5.2 nm tomographic slice from a 216 nm-thick STEM tomogram exhibiting an empty capsid attached near thylakoid membrane stacks (red arrowheads). In all panels the membrane tunnel and the protrusion of the host membrane are marked with blue and white arrowheads, respectively. Asterisk: viral DNA. H, I. Volume rendering representation of the STEM tomogram shown in panel A. The 3D surface representation highlights the barriers that viral DNA has to overcome to reach the host nucleus (including cell wall, plasma membrane, cytoplasmic vesicles, Golgi, and photosynthetic membranes that were not captured in this tomogram). A PBCV-1 virion is attached to the cell wall (brown). The host membranes as well as cytoplasmic vesicles are marked in blue. The capsid is depicted in yellow, the internal viral membrane and the membrane tunnel (arrowheads) are shown in blue. Viral DNA is shown in green. Scale bars: A, C, G: 100 nm; B, D, E, F: 50 nm.
The results depicted in Fig 3 imply that viral genomes are translocated as condensed structures rather than as extended, linear molecules (note the dense labeling of viral DNA in Fig 3D and 3F). Specifically, analysis of EMISH sections derived from 18 PBCV-1-infected chlorella cells revealed 16 cases of condensed DNA morphologies and two extended structures. In addition, general anti-DNA immuno-TEM studies of nine infected cells indicated seven cells with clearly condensed morphologies and two cells in which the structure of viral genomes could not be precisely defined. Since slices used for EMISH and immuno-TEM studies were obtained from random sectioning of infected cells at diverse cell volumes, these results strongly support the notion that viral genomes are translocated as condensed structures. This finding is consistent with the notion that condensed DNA conformations enable trafficking in the dense cytoplasm milieu by facilitating the bypass of cellular obstacles [8]. Notably, in immuno-DNA labeling assays, mock-infected cells revealed DNA labeling in the nucleus and chloroplasts, consistent with the presence of DNA in these organelles (Fig 4A and 4B), but no cytoplasmic DNA labeling. In addition, EMISH analysis of mock-infected cells hybridized with PBCV-1 DNA probes did not reveal any viral DNA sequences in the host cytoplasm or nuclei (Fig 4C and 4D). Further validation of the specificity of viral DNA probes was obtained with PCR and hybridization assays on thin transmission electron microscopy sections of mature virions (S3A and S3B Fig).
Attempts to detect viral genomes inside infected cells using conventional fluorescence microscopy were unsuccessful due to diffraction resolution limit. Therefore, we used the Stochastic Optical Reconstruction Microscopy (STORM) technology that allows for the localization and identification of single-emitting fluorophores and reconstruction of high resolution images [25]. Our STORM studies consisted of immuno-labeling PBCV-1 capsids at 1.5-2 min PI with anti-capsid antibodies, followed by counterstaining with SYTOX Orange for DNA detection and localization. Fig 5A shows a capsid attached to the cell wall at the opposite side of the nucleus, and a condensed DNA structure extending from the capsid. Further STORM analyses of PBCV-1-infected chlorella cells reveal condensed viral DNA extending from capsids towards the nucleus (Fig 5B). It should be noted that the viral capsids depicted in Fig 5 are either empty or almost empty, as no DNA staining was detected. Altogether, the STORM results support our immuno-DNA labeling Electron Microscopy studies, which imply that, following ejection, viral genomes effectively overcome cellular obstacles in their trajectory towards the host nucleus, presumably by assuming a condensed morphology.
Viral genomes are detected in photosynthetic membrane stacks
Empty viral capsids attached near the host chloroplast are frequently observed. This observation underscores the question of how large viral genomes bypass the multilayered thylakoid membranes. An intriguing answer is provided by the observations that viral DNA is detected inside chloroplasts and that a discontinuity of the thylakoid membrane stacks is discerned at the point of viral DNA localization (Fig 6), implying either the use of preexisting gaps in the thylakoid membrane stacks or direct degradation of the membranes. As implied by TEM thin sections and STEM tomograms, the diameter of the perforations in the thylakoid membranes is ~5 nm. As our EMISH studies revealed viral genomes in the host chloroplasts, we conducted immuno-fluorescence assays at late PI time points to examine the notion that viruses attached to the cell wall next to chloroplasts are indeed capable of inserting their DNA through the multilayer thylakoid membranes (Fig 7). Our imaging observations, which comprise the entire volume of infected chlorella cells, demonstrate that even virions adsorbed near the host chloroplasts appear to generate viral factories. These virions are therefore infective, thus supporting the notion that viral genomes can be translocated through the thylakoid membrane stacks in their trajectory towards the host nucleus.
To exclude the notion that viral DNA can bypass the chloroplast to reach the nucleus, we carried out Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) studies. This methodology enables capturing the entire volume of a chlorella cell at high resolution. S2 Movie reveals volume imaging of a mock-infected cell, demonstrating that the chloroplast occupies the entire volume, from top to bottom on one side of the cell, implying that a virus attached near the chloroplast must deliver its DNA through it, without the ability to bypass it in order to reach the nucleus. This finding supports the results of our immuno-fluorescence assays, suggesting that these viruses are infective (Fig 7).
Discussion
A model summarizing the early stages of PBCV-1 infection and highlighting the similar patterns of bacteriophage and PBCV-1 early infection cycles is depicted in Fig 8. We demonstrate that within a couple of minutes after exposing the unicellular photosynthetic chlorella cells to PBCV-1, the virus attaches to the host wall, using a spike located at a unique PBCV-1 icosahedral vertex [11] (Fig 8A), and digests the wall at the attachment site ( Fig 8B). The spike complex is subsequently removed [12], thus creating an opening in the vertex and enabling the generation of a portal. The viral genome is then ejected into the host cytoplasm through a 32 nm-long tunnel that, as shown here, is generated by the fusion of the virus internal membrane with the host membrane (Fig 8C and 8D). Notably, such an infection process, including a spike-dependent viral-host attachment that is followed by the removal of the spike complex and formation of a membrane tube was reported for some bacteriophages, such as PRD1 that, like PBCV-1, contains an internal membrane [37,38].
We suggest that membrane fusion is promoted by the large internal pressure within the host cell (discussed below), which enables protrusion of the host cellular membrane towards the viral membrane through the virus-generated aperture in the host wall (Fig 8C). The inner diameter of the tunnel, which persists throughout the process of genome delivery, is ~5 nm. Such a narrow portal is intriguing as it enables transfer of only a single double-helix DNA at a time. This linear, base-pair-by-base-pair DNA translocation represents an additional feature characteristic of bacteriophage genome ejection that proceeds through a narrow 'nanotube' membrane [38,39]. Significantly, this process differs from genome release pathways of other members of large dsDNA viruses such as Mimivirus, which proceeds through a large portal that allows a concomitant release of the entire genome [32], or Vaccinia virus, which similarly has a single-step release of the entire genome [40].
Genome ejection in many bacteriophages proceeds through a two-step process, whereby a first 'push' stage is promoted by the very high pressure generated by the tight genome packaging within capsids, which amounts to 60 atmospheres (atm) (~6 MPa) [4,20,41]. The second stage has been suggested to involve genome pulling mediated by several putative mechanisms, including transcription-based internalization and hydrodynamic effects [3,4,18,21,42,43].
Estimates of the internal pressure within capsids that are based on DNA packaging densities imply that although the pressure in PBCV-1 virions is lower than typical pressures in bacteriophages, it is substantial [22] and as such is likely to contribute to the internalization process of the PBCV-1 genome. An additional barrier to PBCV-1 genome ejection is the turgor pressure inside chlorella cells, which is higher than that of bacteria [44], suggesting that host internal pressure represents an additional barrier to PBCV-1 genome ejection. However, this hurdle is at least partially mitigated by the viral-encoded potassium ion channels (Kcv) located in the viral internal membrane [45]. Fusion of the virus membrane with the host plasma membrane (Fig 8C and 8D) results in a rapid depolarization of the host membrane, enabling efflux of ions and water out of the chlorella cells [46] and thus reducing the host cell turgor pressure. Significantly, several studies demonstrated that the initial infection stages of diverse bacteriophages involve depolarization of the bacterial host membranes, leading to massive efflux of positively charged ions (mainly K+, presumably accompanied by the efflux of additional cations as well as of diverse anions that have not yet been characterized) and water molecules [3, 37, 47-49]. The findings that bacteriophages as well as the eukaryote-infecting PBCV-1 utilize a membrane-depolarization pathway to overcome host turgor pressure further support the notion of a bacteriophage-like process of PBCV-1 infection.
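A crude packing-fraction estimate, assuming an inner capsid diameter of ~160 nm (a round-number guess; see [22] for the rigorous analysis), illustrates why the PBCV-1 pressure is expected to be substantial yet below bacteriophage values:

```python
# Compare genome volume to capsid volume. B-DNA: 0.34 nm rise per bp and
# ~1 nm radius. The ~160 nm inner diameter is an assumption; PBCV-1 is
# 190 nm in outer diameter.
from math import pi

genome_bp = 331_000
dna_vol = pi * 1.0**2 * genome_bp * 0.34          # ~3.5e5 nm^3
capsid_vol = (4 / 3) * pi * (160 / 2) ** 3        # ~2.1e6 nm^3
print(f"packing fraction ~ {dna_vol / capsid_vol:.2f}")
# ~0.16, versus roughly 0.4-0.5 typical of tailed bacteriophages,
# consistent with a lower but non-negligible internal pressure.
```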
The finding that the viral genome maintains a condensed morphology located at the center of the virion (Fig 1C and 1D) is intriguing, as thermodynamic considerations would have predicted a genome conformation dispersed throughout the viral core. While the source of this unique structure remains unclear, we propose that the multiple and abundant DNA-binding proteins previously shown to be present in the PBCV-1 core [19] promote a condensed DNA morphology.
TEM immuno-labeling and super-resolution fluorescence studies reported here indicate that shortly after being released into the host cytoplasm, the PBCV-1 genome assumes a condensed morphology. Such condensation is implied by the dense and highly clustered DNA labeling that is depicted in Fig 3 and is characteristic of condensed, rather than of dispersed, morphologies, as indeed demonstrated by the heavy DNA labeling revealed by virions shown in this figure. This condensation, which is shown here for the first time to occur during genome internalization of a eukaryotic-infecting virus, presumably plays a crucial role in PBCV-1 infection. In addition to promoting genome internalization, a compact DNA morphology is likely to facilitate passage of the large PBCV-1 genome towards and into the host nucleus within the crowded host cytoplasm. Notably, DNA condensation was proposed to represent a significant pulling force during bacteriophage genome ejection [19,43,50,51]. Once released into the host cytoplasm, the PBCV-1 genome is rapidly translocated towards and into the host nucleus where it is replicated and subsequently released into a cytoplasmic factory where viral assembly occurs [14,52]. As shown in this study, chlorella cells are packed with chloroplasts containing thylakoid membranes that surround most of the cell periphery, underlining the question how do PBCV-1 genomes overcome this major hurdle during their trajectory towards the host nucleus. Our studies reveal that shortly after PBCV-1 infection, host thylakoid membranes are perforated, thus paving a pathway for the virus genome towards the host nucleus. Indeed, viral DNA sequences are present in the host thylakoid membranes (Fig 6). Significantly, a proteome study revealed that PBCV-1 packages two viral-encoded putative phospholipases [53]. It is tempting to speculate that PBCV-1 uses these phospholipases to perforate the thylakoid membranes in order to generate a trajectory towards the host nucleus. This notion is consistent with reports indicating that almost immediately after PBCV-1 infection, substantial reduction in photosynthesis occurs [54,55]. Notably, after ejecting their genome, empty PBCV-1 virions remain attached to the host wall, as is the case for bacteriophages.
Taken together, the results reported here reveal that the initial infection process of the chlorovirus PBCV-1 is remarkably similar to the process used by many tailed bacteriophages, yet differs from the process used by eukaryotic-infecting viruses that initiate infection through internalization of the entire virion or a substantial part of the particle. The PBCV-1 infection cycle proceeds through perforation of the host cell wall, the cell plasma membrane, and the thylakoid membranes, thus overcoming the obstacle imposed by these cellular components on the translocation of viral DNA towards the nucleus. Previous studies established that at late infection stages, PBCV-1 generates cytoplasmic organelles, termed viral factories, where viral assembly takes place, a feature characteristic of many eukaryotic-infecting large dsDNA and (+)RNA viruses [14,15,27,30,31,34-36,56-58]. Thus, PBCV-1 uniquely combines a bacteriophage-like mechanism during its early infection stages with a eukaryotic-like virus infection pathway in its late replication stages.
Cells, viruses and sample preparation
Chlorella variabilis NC64A cells were grown under continuous light and shaking on a modified Bold's basal medium (MBBM) [54]. PBCV-1-infected as well as mock-infected cells were prepared for electron microscopy studies, including STEM tomography, FIB-SEM and immuno-electron microscopy [14]. The multiplicity of infection (MOI) was 10 in all experiments, with the exception of the STORM studies, in which it was 20, and the immuno-fluorescence studies, in which it was 1.
Electron microscopy in situ hybridization (EMISH)
I. Sample preparation. Chlorella infected and mock-infected cells were fixed using 4% paraformaldehyde and 0.5% glutaraldehyde (v/v) in MBBM for 2h at room temperature (RT) and washed with phosphate-buffered saline (PBS). Cells were centrifuged and pellets were embedded in 3.4% agar. Dehydration was carried out in ethanol followed by infiltration with increasing concentrations of the methacrylate-based embedding resin HM20 in dry ethanol. Resin was polymerized using 0.5% di-benzoyl peroxide at 70˚C for 72h. Thin sections were mounted on pioloform-coated Nickel grids.
II. DNA extraction from PBCV-1 and chlorella cells. Purified viruses were treated with DNase (RNase-Free DNase, Qiagen) for 1 h at 37 °C to remove host-contaminating DNA. Viruses were centrifuged and the pellets were resuspended in lysis buffer (10 mM Tris-HCl, pH 8.0, 0.1 mM EDTA, 0.4% SDS, 2 mM DTT, 200 μl phenol) containing glass beads, and vortexed for 3 min. Viral lysates were centrifuged and the soluble, DNA-containing phase was vortexed with phenol:chloroform (1:1); DNA was precipitated with EtOH from the aqueous phase. Pellets were washed with 70% ethanol, air-dried and re-suspended in water. For DNA extraction from chlorella cells, 300 ml of mock-infected cells (~1.3 × 10^7 cells/ml) were centrifuged. Pellets were re-suspended in 10 mM Tris-HCl, pH 8.0, 0.1 mM EDTA. Cell lysis as well as further processing of algal DNA were carried out as described above for PBCV-1 genomes.
III. Determination of viral DNA purity and specificity. DNA extracted from PBCV-1 virions was subjected to PCR analysis to validate its purity from host DNA contaminants. PCR reactions were performed with initial DNA denaturation at 95˚C for 2 min followed by 30 cycles of the following steps: denaturation at 95˚C for 15 sec, annealing at 58˚C for 45 sec, and 1 min at 72˚C for primer elongation. Final elongation was carried out at 72˚C for 7 min. S3 Fig (Panel A) shows PCR results demonstrating that viral DNA does not contain contaminating chlorella DNA. See Table 1 for primers used for PCR analyses.
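For orientation, the cycling program above implies the following approximate run time. This is our back-of-the-envelope sketch in R (variable names are ours, and instrument ramp times are ignored), not part of the published protocol:

# Total run time of the PCR program described above (ramp times ignored)
initial_denaturation <- 2 * 60   # 95 C for 2 min, in seconds
per_cycle <- 15 + 45 + 60        # denaturation + annealing + elongation, in seconds
n_cycles <- 30
final_elongation <- 7 * 60       # 72 C for 7 min, in seconds
(initial_denaturation + n_cycles * per_cycle + final_elongation) / 60   # = 69 min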
IV. Generation of viral specific DNA probes. After validating that viral DNA was free of cellular DNA it was treated with restriction enzymes XhoI, XbaI, and EcoRI (New England Biolabs) and subsequently purified with Qiagen Nucleotide Removal Kit (Qiagen). To generate DNA probes, 300 ng of digested DNA products were labeled using a DIG-DNA labeling kit (Roche) according to the supplier protocol.
V. EMISH studies. Thin sections of mock-infected cells and PBCV-1-infected cells were deposited on small drops of 100 μg/ml proteinase K (Sigma) in 20 mM Tris-HCl pH 7.9, 5 mM CaCl2 for 30 min at 37 °C. Grids were washed 3 times in DDW and boiled for 3 min in 2xSSC in 70% formamide. Grids were transferred to cold hybridization solution containing 50% formamide, 10% dextran sulfate, 400 μg/ml salmon sperm DNA (Sigma) and 5.6 ng/μl PBCV-1 DNA probes. Hybridization was carried out for 16 h at 37 °C. Grids were washed with 2xSSC, incubated with 1% BSA in PBS for 20 min at RT and then with sheep anti-digoxigenin antibody conjugated to 10 nm colloidal gold beads for 2 h at RT. Grids were washed with PBS and post-stained with 2% uranyl acetate and Reynolds' lead citrate. Samples were visualized using an FEI Spirit Tecnai T-12 and micrographs were recorded with an Eagle 2K×2K FEI CCD camera (Eindhoven, the Netherlands).
VI. STEM tomography. Sections of ~250 nm were transferred to 150-mesh copper grids supported by carbon-coated (Edwards) Formvar film, and decorated with 12 nm gold beads on both sides. Tilt series were acquired with an FEI Tecnai G2 F20 TEM operated at 200 kV. Automatic sample tilting, focusing and image-shift correction were performed with Xplore3D software (FEI). Double tilt series were acquired at 1.5° increments over an angular range of −65° to +65°, with a Gatan bright-field detector in the nanoprobe STEM mode. 3D reconstructions were computed from tilt series using a weighted back-projection algorithm. Tomograms were post-processed either with a median or a smoothing filter using IMOD (see Ref. 33 for additional details).
Generation of antibodies for the PBCV-1 major capsid protein Vp54
To detect PBCV-1 in immuno-fluorescence assays we raised antibodies against the major capsid protein, Vp54, using a short peptide sequence, NDDRYNYRRMTDC, derived from the Vp54 sequence. Images were acquired with a Photometrics CoolSNAP HQ2 CCD (Roper Scientific) and deconvolved with the SoftWorx package using high noise filtering and 10 iterations. Image analysis and processing were conducted with ImageJ and Adobe Photoshop CS4-extended software.
Super-resolution STORM imaging [25]

Chlorella cells were infected with PBCV-1 for 1-2 min at an MOI of ~20 and fixed with 4% paraformaldehyde for 15 min at RT. Cells were washed with PBS and transferred to poly-lysine-coated glass dishes (MatTek Corp.), blocked with 4% BSA-PBS solution for 30 min at RT and exposed to anti-capsid antibody for 1 h. Following washes in PBS, goat anti-mouse IgG conjugated to Alexa488 (Life Technologies) diluted in 4% BSA-PBS solution was added for 30 min for labeling viral capsids. Cells were washed in PBS and counter-stained with 5 nM SYTOX Orange (Life Technologies) in DDW. Images were collected on a Vutara SR200 STORM microscope (Bruker). Before performing super-resolution imaging, virus locations were identified by conventional fluorescence using Alexa488 labeling and 488 nm laser excitation (~5 kW/cm²). DNA structures labeled with SYTOX Orange were imaged in super resolution using 561 nm laser excitation at a power density of ~15 kW/cm². Images were recorded using a 60×, NA 1.2 water-immersion objective (Olympus) and an Evolve 512 EMCCD camera (Photometrics). Data were analyzed with Vutara SRX software.

Movie legend (Fig 7): Remarkably, the movie highlights the notion that viral DNA cannot bypass the chloroplast if a virus attaches adjacent to it. This is underscored by the fact that the chloroplast is tightly attached to the plasma membrane and occupies most of the cellular volume. The contrast of the images was inverted to match the TEM and STEM images. Each section is ~10 nm thick. The movie was created using ImageJ. | 2018-04-03T02:15:06.061Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "4faae763f8e63ea0a0282dc79ac8b0241084e994",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1006562&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4faae763f8e63ea0a0282dc79ac8b0241084e994",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
258290802 | pes2o/s2orc | v3-fos-license | Risk factors differ for viable and low viable crushed piglets in free farrowing pens
Newborn piglets have a high risk of being crushed by the sow, and this risk implies welfare and economic consequences. The aim of this study was to investigate the importance of differentiating between low viable (secondary crushing losses) and viable crushed (primary crushing losses) piglets for the evaluation of risk factors for crushing related to characteristics of the sow, the litter, and the environment. Eleven Swiss farmers recorded sows’ production data (parity class, gestation length, numbers of live-born and stillborn piglets), data (age, sex, weight, cause of death, and signs of weakness) for every live-born piglet that died in the first week after birth (piglet loss), and ambient temperature. Piglet losses were assigned to five categorical events: piglet loss, subdivided into not crushed and crushed, the latter being further subdivided into low viable crushed and viable crushed. Piglets recorded by the farmer as crushed were assigned to the events low viable crushed and viable crushed based on the piglet’s body weight and signs of weakness (diseases, malformations). Data of 9,543 live-born piglets from 740 litters were eventually used to statistically model the hazard of dying at any given time in the first week after birth due to one of these events (mixed-effects Cox model). Five potential risk factors were analyzed as co-variates: parity class, gestation length, number of live-born piglets, number of stillborn piglets, and daily number of hours with ambient temperature >30°C. We identified two risk factors for dying from the event viable crushed that were not identified as risk factors for low viable crushed, namely shorter gestation length and higher daily number of hours with ambient temperature > 30°C. Vice-versa, we identified additional live-born piglets in the litter as risk factor for low viable crushed, but not for viable crushed. Our results show the importance of differentiating between low viable and viable crushed piglets for the interpretation of risk factors for crushing losses. Therefore, we suggest that for breeding purposes and in research, this differentiation should be made.
Introduction
For economic and welfare reasons, one of the main goals in pig production is to decrease pre-weaning mortality (PWM) of piglets (1-4). The principal cause of death in the period from birth until weaning is crushing by sows, as consistently described in the scientific literature and reviewed by Muns et al. (2). It accounts for around 50% of all piglet deaths, usually happening in the first week after birth (1,5,6). Crushing is described as the final act in a complex chain of interactions between the piglets, the sow, and the environment (2,7). However, several studies reported that only between 18 and 70% of the crushed piglets were healthy and potentially viable (1,7-9). These findings suggest that a considerable percentage (30-82%) of crushed piglets were predisposed to being crushed because of weakness (7). Consequently, the mechanical damage due to crushing is the exclusive cause of death in only a part of the cases (10).
Hypothermia, starvation, and diseases are factors that weaken a piglet (2,10), leading directly or indirectly to death. The weaker a piglet, the less capable it is of reacting to posture changes of the sow and of avoiding being crushed or trampled (8,10-16). To protect piglets from the risk of being crushed, farrowing crates are used almost everywhere in the world (17-19). Multiple studies showed that pre-weaning mortality is higher in non-crated than in crated housing systems for the farrowing and lactating sow (reviewed by 17,18,20). However, some studies reported that the overall survival rate of piglets was not higher in crated systems than in the tested non-crated systems (12,13,21). As shown in two studies (12,21), piglets have a higher risk of being crushed but a lower risk of dying from causes other than crushing in non-crated systems. Although piglets of weak constitution might be crushed in pens without farrowing crates, they are likely to die from other weakness-related causes of death in crated pens (12,13).
To reduce crushing losses and PWM in general, the causes of piglets' death need to be studied in detail (4,5,22). The differentiation of the crushed piglets into healthy and weak individuals is thereby of importance, because risk factors for crushing may vary for small, underweight piglets compared to viable, well-fed ones (8). Crushing is considered the primary cause of death for a crushed, viable and healthy piglet of normal weight (8). In contrast, crushing is considered to be the secondary cause of death for a crushed piglet with signs of weakness or low viability such as underweight, malformations, or diseases (3,8,21,23,24). To date, primary and secondary crushing losses were differentiated in only a few studies (e.g., 3,8,21,23,24).
Examples of risk factors for crushing are environmental factors such as season and temperature and sow factors such as parity class (2). Studies found contradictory results regarding the effects of these environmental and maternal factors on piglet survival. For example, Weber et al. (13) found more crushing losses in summer than in the other seasons in Switzerland, while Rangstrup-Christensen et al. (8) observed the lowest percentage in summer in Denmark. Additionally, Rangstrup-Christensen et al. (8) detected a higher risk for crushing in multiparous than in primiparous sows. However, Pandolfi et al. (25) found that piglets were less likely to die with signs of crushing in later parities than in the first or second one. Besides differences in the study design, the environmental conditions, and the genetics of the sows, the lack of differentiation between primary and secondary crushing losses might explain these discrepancies.
In addition to ambient temperature and parity class, a large litter size is frequently discussed as a risk factor for general PWM (26)(27)(28) and for crushing losses (27, 29). Moreover, a short gestation length (30) and a high number of stillborn piglets were found to be associated with a higher PWM risk (21,31,32). The five risk factors addressed so far (ambient temperature, parity class, gestation length, number of live-born piglets, and number of stillborn piglets) are suitable for a study based on farmers' records. They require little interpretation by the farmer and, therefore, are potentially highly accurate (23).
The aim of this study was to investigate the relevance of a differentiation between low viable and viable crushed piglets for the evaluation of risk factors for crushing losses related to characteristics of the sow, the litter, and the environment. We hypothesized that there are differences in risk factors between the events labelled as viable crushed, i.e., being crushed in viable state, and low viable crushed, i.e., being crushed in low viable state.
Additionally, we expected that risk factors for dying from other causes than crushing (not crushed), typically related to weakness, would be more similar to those for low viable crushed than to those for viable crushed.
Setting of the study
The study is based on data provided by 11 Swiss farmers who collected data on piglet mortality in the first week after birth (0-7 days after birth) by using a detailed protocol. They participated voluntarily in the study and received no financial compensation. Data collection started between May 2018 and July 2019, lasted 5-6 months, and ended after the majority (75-100%) of the producing sows on the farm had been recorded at least once, or the end of the study period was reached (December 2019). The farms had an average herd size of 84.5 producing sows (range: 20-168). Small (<50 sows; n = 3), medium (50-100 sows; n = 4), and large (>100 sows; n = 4) herds, as defined for Swiss conditions, were evenly represented. Most farms (n = 8) used F1 crosses between Swiss Large White (SLW) and Swiss Landrace (SL) as damline and pure breed SLW (n = 10) as sireline. Three farms used pure breed SLW sows or Duroc boars, and some farms used more than one damline (n = 1) or sireline (n = 3). One exception was a farm on which a large share of pure SL pigs was used. Over the whole lactation period, sows were kept in free farrowing and lactating pens with a total area of at least 5.5 m², as required by the Swiss Animal Protection Ordinance (33). Mean pen size on the study farms was 7.2 m² (±0.31). Different types of free farrowing pens were used on the different farms and in some cases within the same farm. Nine farms used pens with no option for temporary crating: four farms used simple pens and five farms used FAT2 pens (34) with a separation between dunging and nesting areas. Two farms used pens allowing temporary crating, but on one farm this option was never used and on the other farm it was used in exceptional cases only (leg weakness or aggression of the sow against her piglets).
Structure of the protocol
The farmers were given written instructions on how to record data on the protocol sheets. This included photographs to illustrate terms and definitions. Farmers were instructed to record dead piglets with fully intact slippers (eponychium) on the claws as stillborn (23,35-37). Most farmers were already familiar with similar protocols used for breeding and production data. Each protocol contained specific information for a sow and a given litter. The sow was identified by the ear tag number and the number of the farrowing room. The upper part of the protocol asked for information on the sow's parity class, the anticipated and actual farrowing date, the number of live-born and stillborn piglets, the number of cross-fostered piglets, and the number of piglets alive after 7 days. In the middle part of the protocol, data on the age, sex, weight, and cause of death were recorded for every live-born piglet that died in the first week after birth. Additionally, for crushed piglets, diseases, malformations, and the information whether or not the piglet was cross-fostered had to be recorded. Finally, in free-form text boxes in the lower part of the protocol, the farmers were asked to fill in information on health problems and medical treatments applied to the sow and her litter. All farmers were provided with the same model of weighing scale (Küchenwaage elektro Prima Vista, Landi Schweiz AG, Dotzigen, Switzerland) to measure the weight of dead piglets. To record the ambient temperature, they were given a temperature logger (UA-001-64 Hobo Pendant 64K Temp-Alarm Data Logger, Onset Computer Corporation, Bourne, Massachusetts) for each farrowing room (1 to 6 per farm). They were instructed to place the temperature logger in the middle of the room, at head height of the sow (~1 m above ground) and out of reach of the animals. The temperature was logged at a frequency of 1 h−1 and the data were retrieved by the authors.
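The statistical analysis described below aggregates these hourly logs into a daily heat-load covariate (hours with ambient temperature above 30°C). A minimal R sketch of that aggregation, assuming a data frame hourly_temp with columns room, date, and temp_c (the object and column names are our assumptions, not the study's code):

# Count, per farrowing room and calendar day, the hours logged above 30 C
library(dplyr)
daily_heat <- hourly_temp %>%
  group_by(room, date) %>%
  summarise(hours_above_30 = sum(temp_c > 30), .groups = "drop")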
Recorded events
Based on the farmers' records, piglet losses in the first week after birth were assigned to five categorical events. As it was not possible to compare the producer-recorded causes of PWM with post-mortem diagnoses, we followed recommendation by Vaillancourt et al. (23) and defined events that allow little interpretation (see list below). The first event, piglet loss, represented all piglet losses of live-born piglets in the first week after birth. Following the example of Weber et al. (13,28), we further differentiated the event piglet loss into the events not crushed and crushed based on the farmers' judgement of the cause of death. Vaillancourt et al. (23) reported that farmers consistently were able to identify piglets that had been crushed, but frequently misidentified piglets dying from other causes than crushing. Finally, we differentiated the event crushed into the events low viable crushed and viable crushed based on body weight and signs of weakness, as recorded by the farmers. Christensen and Svensmark (35) observed that the sensitivities of the mortality categories were higher when the clinical signs recorded by the farmers were included in the diagnosis.
Additional criteria for the event low viable crushed: a poor health state due to diseases (e.g., diarrhea) or malformations (e.g., splay legs), as recorded by the farmers, was considered a sign of weakness. Piglets with a body weight of less than 1 kg were defined as absolutely underweight; this is a common rule for breeding purposes in Switzerland (38). Our definition of absolute underweight included underweight piglets at birth (38,39) and absolutely underweight piglets during the whole study period (first week after birth). Dead piglets were defined as relatively underweight if their weight was less than the sum of a minimum normal birth weight of 1 kg plus an average daily weight gain of 200 g. Therefore, relatively underweight piglets either were born absolutely underweight or had an average daily weight gain of less than 200 g, or both (a code sketch of this assignment rule follows the definitions below). These 200 g of average daily weight gain for healthy piglets before weaning are based on the literature (39,40) and on personal experience of the first author in a previous study in free farrowing pens with piglets in their first 5 days after birth (unpublished data).
Definitions of the events: • Piglet loss = A live-born piglet died in the first week of life.
• Not crushed = A live-born piglet died in the first week of life and was judged by the farmer not to be crushed by the sow (died spontaneously or was appropriately killed by farmer). • Crushed = A live-born piglet died in the first week of life and was judged by the farmer to be crushed by the sow. • Low viable crushed = A live-born piglet was judged by the farmer to be crushed by the sow while being absolutely or relatively underweight and/or having signs of weakness (= secondary crushing loss). • Viable crushed = A live-born piglet was judged by the farmer to be crushed by the sow without being underweight and/or having signs of weakness (= primary crushing loss).
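A minimal R sketch of this assignment rule (our illustration, not the study's code; the function and argument names are assumptions):

# Assign a crushed piglet to "low viable crushed" vs "viable crushed"
# using the weight and weakness criteria defined above.
classify_crushed <- function(weight_kg, age_days, signs_of_weakness) {
  # Relative underweight: below the 1 kg minimum birth weight plus 200 g/day
  # expected gain; at age 0 this reduces to the absolute rule (< 1 kg).
  underweight <- weight_kg < 1.0 + 0.2 * age_days
  if (underweight || signs_of_weakness) "low viable crushed" else "viable crushed"
}
classify_crushed(weight_kg = 1.15, age_days = 2, signs_of_weakness = FALSE)
# "low viable crushed": 1.15 kg is below the expected 1.4 kg at day 2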
Statistical analysis
Finally, in the statistical analysis we considered 9,543 live-born piglets out of 740 litters with complete data records with respect to characteristics of the dead piglets (birth state [live-born vs. stillborn], death date, and body weight of crushed piglets), the litter (number of live-born piglets, number of stillborn piglets, number of total piglet losses, and information about cross-fostering), the sow (parity class, gestation length, and farrowing date), and the environment (temperature in farrowing room). In total, 123 litters were excluded from statistical analysis as records were incomplete.
We performed mixed-effects Cox regression survival analysis using R (version 4.2.2; R Core Team 2022) and the R package coxme (41). Separate regression models were fitted to analyze the time to occurrence of each of the five events. Piglets that survived the 7-day study period or died on days 0-7 from a different event than the specific one defined for the respective model were censored, as is appropriate for Cox regression. The random and fixed effect structure was identical across all models. Litter identifier nested in farm identifier were set as random intercepts. The parity class, gestation length, number of live-born piglets, number of stillborn piglets, and the ambient temperature in the farrowing room on the day before death were included as fixed effects. The temperature was calculated as the number of hours with a temperature above 30°C. The approach of aggregating the hourly data to daily temperature data was selected from a large set of candidate methods. Candidate hourly-to-daily temperature aggregation methods included hours with temperature above a certain value (21-32°C), the mean temperature above the upper boundary [mean(max(0, T − 22°C))] of the optimal temperature range (18-22°C) recommended for farrowing rooms in Western Europe (36,42), as well as a large set of statistical measures for central tendency, variability, and distribution. From these candidates, daily hours with T > 30°C was selected because, when temperature was aggregated in this way and used as a fixed effect, it resulted in the best model for the response variable representing time to the event crushed. Interestingly, 30°C is the minimum temperature prescribed in Switzerland for the piglet creep area in the first days after birth (43), and Weber et al. (13) found more crushing losses in summer than in the other seasons.

Results

Table 1 provides information on the number of litters, sow characteristics, and litter performance per farm. In total, 10,567 piglets were born in the 740 litters of the data set, corresponding to 14.3 piglets born per litter on average. Thereof, 1,024 piglets were recorded as stillborn, resulting in an average stillborn rate of 9.7% and an average of 12.9 live-born piglets per litter. Average gestation length was 116.6 days. In total, 1,027 of 9,543 live-born piglets (10.76%) died in the first week after birth. These are henceforth referred to as piglet losses and were assigned to the above defined events as follows: 371
Survival analysis
We used individual mixed-effects Cox regressions to statistically model the instantaneous hazard (probability) of dying at any given time in the first week after birth by one of the five defined events (piglet loss, not crushed, crushed, low viable crushed, and viable crushed). Figure 1B shows the estimated hazard ratios (HRs) for the co-variates (parity class, gestation length, number of live-born piglets, number of stillborn piglets, and daily number of hours with a temperature of >30°C). The HR represents the factor by which an unknown baseline hazard multiplies when the co-variate of interest increases by one unit, i.e., HRs of 1.1 and 0.9 correspond to a 10% increase and a 10% decrease in hazard, respectively, per unit increase of the co-variate.
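For concreteness, a minimal sketch of one of these models (event = crushed) as we read the description above; the data frame and column names are our assumptions, not the authors' published code:

# One row per live-born piglet; crushed = 1 if it died crushed in days 0-7,
# 0 if censored (survived the week or died from a different event).
library(survival)
library(coxme)
fit <- coxme(Surv(time_days, crushed) ~ parity_class + gestation_length +
               n_liveborn + n_stillborn + hours_above_30 +
               (1 | farm/litter),   # litter nested in farm as random intercepts
             data = piglets)
exp(fixef(fit))   # hazard ratios per unit increase of each covariate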
Parity class
With every additional parity of the sow, the hazard for a piglet to die at any given time in the first week after birth (piglet loss) increased by 9%.
Gestation length
With every additional day of gestation (increasing gestation length), the hazard for a piglet to die at any given time in the first week after birth (piglet loss) decreased by 6%.
Number of live-born piglets
With every additional live-born piglet in the litter, the hazard for a piglet to die at any given time in the first week after birth (piglet loss) increased by 3%.
Number of stillborn piglets
With every additional stillborn piglet in the litter, the hazard for a piglet to die by causes other than crushing at any given time in the first week after birth (not crushed) increased by 9%.
Daily number of hours with a temperature of >30°C
With every additional hour with an ambient temperature above 30°C, the hazard for a piglet to die by crushing at any given time in the first week after birth (crushed) increased by 4%.
Parity class
In the present study, a higher parity class was associated with an increased hazard for the piglets to die at any given time in the first week after birth, irrespective of whether death was caused by crushing or by other reasons (not crushed). The association between parity class and general PWM in primiparous versus multiparous sows is well studied (e.g., 12,26,44,45). A lower colostrum yield and quality in primiparous compared with multiparous sows makes piglets of first parity sows more prone to diseases (2,46-48). However, in our study we found a higher PWM with increasing parity of the sow. This finding might be explained by three main factors. First, in general, older sows have a longer farrowing duration (49), which increases the probability of intrapartum hypoxia and subsequently reduces neonatal viability (2). Second, the variability of the litter size and birth weight increases with increasing parity class, leading to a higher probability of more underweight piglets (2,3,50). Third, older sows usually have reduced function and accessibility of teats (2,45,51), increasing the inequality in feeding among the piglets (45).
For piglet mortality due to crushing, the conclusions of available studies on potential effects of parity class are inconsistent. In contrast to our results, Vrbanac et al. (53) reported no association between parity class and crushing risk. Consistent with our results, several other studies reported that a higher parity class was associated with an increased crushing risk (e.g., 3,8,26,29,54). Vieuille et al. (55) described a higher reactivity of piglets in litters of first parity sows; they seemed quicker to move away from the mother when she suddenly changed her position. Olsson et al. (3) gave two additional explanations for higher crushing losses with higher parity class. First, maternal responsiveness might decrease with increased parity class owing to older sows being heavier and clumsier and having more health problems, e.g., claw or leg problems and teat damage (3). Second, older sows have larger litters and more underweight piglets (3). Koketsu et al. (26) combined in their analyses crushed piglets that had died because of trauma and those characterised by low viability and found that piglets from sows of parity 3 to 5 and higher had higher mortality ratios than piglets from sows of parity 1 or 2. In line with this reasoning, we expected that the weaker piglets would be crushed in litters of older sows, which is supported by our results.

(Figure 1B caption) Estimated hazard ratios with 95% confidence interval are shown for the five co-variates (potential risk factors). Significance code: ***p < 0.001, **p < 0.01, *p < 0.05, ^p < 0.1.
Gestation length
We found statistical support that with increasing gestation length the hazards for the events piglet loss, crushed, and viable crushed decreased. Such a decrease in general PWM with longer gestation was shown in the investigations of Hofer (56), based on Swiss genetics with an average number of <12.6 live-born piglets per litter (28,57), and Hales et al. (30), based on an average number of 15.5 live-born piglets per litter. Hales et al. (30) showed that piglets born before day 116 of gestation had an increased risk of dying compared with piglets that were born later. Hanenberg et al. (58) and Rydhmer et al. (59) hypothesized that selection for longer gestation would probably improve piglet survival. Rydhmer et al. (59) showed a high heritability of the gestation length and positive genetic correlations between gestation length and average birth weight. Vice versa, the selection for piglet survival results in a longer gestation length. In Switzerland, breeding goals changed in 2004, when the breeding value 'piglet survival rate' was introduced (60). Since then, it has been given the highest importance in the damlines (61,62), whereas increasing the litter size is no longer a breeding focus (61). Hofer (56) observed a continuous increase in gestation length until 2014 and hypothesized that this high importance of the 'piglet survival rate' resulted in an increase of the gestation length, leading to more mature piglets even in larger litters and finally a decrease in PWM. Furthermore, Rydhmer et al. (59) found positive genetic correlations between gestation length and piglet growth rate during the first 3 weeks. Therefore, it is likely that maturation and growth rate influence not only general PWM but crushing risk in particular. Vallet and Miles (63) hypothesized that the impairment of coordination and reflexes due to reduced brain myelination could decrease the ability of small piglets to avoid the sow when necessary, and, therefore, may contribute to the risk of crushing. This hypothesis is supported by the findings of Amdi et al. (64), who found a tendency towards higher vitality scores in normal (normal birth weight and head morphology) piglets compared to piglets with severe intrauterine growth-restriction (IUGR). IUGR piglets have a higher risk of dying in the first days after birth (30,65), when crushing risk is highest.
Number of live-born piglets
We found that the general hazard to die at any given time in the first week (piglet loss) and to die by a weakness-associated event (not crushed and low viable crushed) increased with increasing number of live-born piglets in the litter. No support was found for an effect of the number of live-born piglets on the hazards for crushed and viable crushed.
Several studies reported higher PWM associated with larger numbers of live-born piglets in the litter (e.g., 3,12,13,26,28,66). An association between litter size and weakness-associated deaths was expected because a large litter size, which corresponds generally to a large number of live-born piglets, is strongly associated with a larger number of underweight (3,50) and IUGR piglets (20, 65). Moreover, in litters with more live-born piglets, each piglet gets less colostrum, as colostrum yield is reported to be independent of litter size (67). Particularly in piglets with a low birth weight, a reduced colostrum intake leads to weakness and consequently a higher PWM risk (39,47,68).
As reviewed by Ward et al. (27) litter size was identified as a contributing factor towards higher crushing incidence across pig breeds (29, 69). Liu et al. (70) hypothesized that larger litter size may cause crowding and leave piglets less space to withdraw while sows are lying down or getting up, which increases the risk of crushing. Additionally, higher crushing losses in larger litters can be explained by the fact that there is more fighting for access to the teats leading to disturbance of the suckling process, more position changes of the sow, and, therefore, a higher risk for crushing (61). In contrast to these results, we did not find statistical support for an effect of the number of live-born piglets on general crushing risk and on crushing risk of viable piglets. Analyzing a large dataset from Swiss commercial farms, Weber et al. (28) reported that with a larger litter size at birth, significantly more losses occurred due to all reasons (total, crushed, others), but while the number of losses other than crushing increased strongly, crushing losses increased only slightly. An explanation might be related to relatively small average litter sizes in Switzerland and to cross-fostering management. To handle larger litters, cross-fostering of heaviest piglets (71,72) between litters is a very important method to equalize litter size, with the aim to secure milk to the piglets (71). Thus, piglets in equalized large litters tend to have better survival chances (72, 73).
Number of stillborn piglets
With every additional stillborn piglet in the litter the hazard for a piglet to die in the first week after birth by other causes than crushing (not crushed) increased. Additionally, weak statistical support for such an increase was also found for low viable crushed. But we found no support for an effect of the number of stillborn piglets on the hazard for the event piglet loss, which is in concordance with the finding of Koketsu et al. (26).
Depending on the time of infection, a combination of stillborn and low viable piglets at birth can be caused by porcine reproductive and respiratory syndrome virus (PRRSV), Aujeszky's disease virus (ADV), classical swine fever virus (CSFV), porcine parvovirus (PPV), porcine circovirus 2 (PCV-2), and leptospira (74). At the time of this study, Switzerland was approved to be free from PRRSV, ADV, and CSFV (75,76), and just a single case of leptospirosis in pigs had been reported, at a distance of at least 100 km from all study farms (77). Moreover, all farms in this study vaccinated the sows against PPV, and cases of PCV-2-induced reproductive failure were described to be relatively rare in Switzerland (78). Therefore, the observed effects of the number of stillborn piglets in the litter on the hazard to die from a cause of death related to weakness (not crushed, low viable crushed) can likely be explained by non-infectious rather than infectious causes. As reviewed by Muns et al. (2), intrapartum hypoxia suffered by piglets at birth is one of the most important causes of stillbirth and early PWM in piglets and is directly related to neonatal viability. A reduction in the oxygenation of prenatal piglets, compromising their viability, can be caused by uterine contractions in sows with a long farrowing duration (2). Factors leading to a longer farrowing duration, i.e., high parity, large litters, and low back fat levels in sows, are associated with a higher stillborn rate (79). Because a prolonged farrowing duration results in an elevated number of weak or stillborn piglets, sows are often treated with oxytocin, which decreases the duration of farrowing (80,81) but also increases the number of stillborn piglets (81). The routine administration of oxytocin immediately after the birth of the first piglet or overdosing of oxytocin can compromise piglet viability [reviewed by Muns et al. (2)] and might explain our results besides long farrowing durations. Unfortunately, we can only speculate about the use of oxytocin in our study, as these data are not available in our records.
Temperature
With every additional hour with an ambient temperature above 30°C, the hazards for crushed and viable crushed increased. Our results are in line with the observations made by Weber et al. (12,13) and with what many farmers report: crushing losses, especially of viable piglets, are generally more frequent in summer than in the other seasons. As mandated by the Swiss Animal Protection Ordinance (33,43), every farm included in this study had a heated piglet creep area integrated into the farrowing pen, to satisfy the completely different thermal demands of the sow and the piglets. In the first 3 days after birth, a minimum temperature of 30°C is prescribed in the piglet creep area independently of the season (43). As shown in several studies (82-84), the acceptance of the heated creep area by the piglets is low in the first days after birth, when the crushing risk is highest (82). The acceptance by the piglets is even lower when the temperature difference between the sow area and the piglet creep area is small (83,85,86). Viable piglets spend less time in the nest away from the sow's body when the room temperature increases toward the nest temperature (13,83), which would elevate the risk of being crushed, as assumed by Weber et al. (13). This was confirmed by Gao et al., who found a higher crushing rate in summer than in other seasons, which they attributed to greater heat stress experienced by the sows. Heat stress can cause alterations in sow behavior, such as higher activity, leading to a reduction in the amount and duration of suckling by the piglets, which might in turn be related to higher piglet mortality due to crushing (87). However, the air temperature has to be relatively high (above 27°C) before it affects the feed intake, milk yield or weight loss of the sow, and consequently the daily weight gain of litters, as reviewed by Bjerg et al. (88).
Summarizing crushing risk for viable versus low viable piglets
We hypothesized that there are differences in risk factors between the events labelled as viable crushed, i.e., being crushed in a viable state, and low viable crushed, i.e., being crushed in a low viable state. Our results support this hypothesis, as we identified two risk factors for viable crushed that were not identified as risk factors for low viable crushed. These were shorter gestation length and higher ambient temperature. Vice versa, we identified two risk factors for low viable crushed that were not identified as risk factors for viable crushed, namely a higher number of live-born piglets and a higher number of stillborn piglets (the latter with only weak statistical support). Additionally, we expected that risk factors for dying from causes other than crushing (not crushed), typically related to weakness, would be more similar to those for low viable crushed than to those for viable crushed. This is supported by our results, as the risk factors identified for not crushed were the same as those identified for low viable crushed (number of stillborn piglets and number of live-born piglets) but differed from the risk factors identified for viable crushed.
Conclusion
This study shows the importance of a differentiation between low viable crushed and viable crushed piglets. A differentiation based on the piglet's body weight and external signs of weakness (e.g., diseases, malformations) can considerably affect the interpretation of risk factors. We conclude that low viable crushed and viable crushed piglets should be handled as two different causes of death, particularly for breeding and research purposes. Recording underweight or weak piglets simply as 'crushed' should be avoided. The results of previous studies not differentiating between low viable and viable crushing losses should be interpreted cautiously. Future studies should differentiate between primary and secondary crushing losses and focus on identifying the risk factors for crushing of viable piglets, because viable piglets are the focus of welfare and economic interests.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The animal study was reviewed and approved by Swiss Cantonal Veterinary Office Thurgau, Frauenfeld, Switzerland. Written informed consent was obtained from the owners for the participation of their animals in this study.
Author contributions
CS-V was responsible for study design and farm acquisition, organised data collection by farmers, digitalised the farmers' records, prepared data for statistical analysis, and drafted all other parts of the manuscript. MSi conducted the statistical analysis, visualised data, and drafted the statistical part of the manuscript. MSc critically reviewed the draft and gave substantial input and constructive criticism on the content of the manuscript. BW edited the manuscript. All authors contributed to the article and approved the submitted version. | 2023-04-23T15:02:52.377Z | 2023-04-21T00:00:00.000 | {
"year": 2023,
"sha1": "7a21ec551bd2ea6666bc8db84e1eda2fb5795faf",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2023.1172446/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01f61b100bfa1a794e1d40ed2f97adf3f96b4f33",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236909151 | pes2o/s2orc | v3-fos-license | Precise Controlled Target Molecule Release through Light-Triggered Charge Reversal Bridged Polysilsesquioxane Nanoparticles
Precise control of target molecule release time, site, and dosage remains a challenge in controlled release systems. We employed a photoresponsive molecule release system via light-triggered charge-reversal nanoparticles to achieve a triggered, stepwise, and precisely controlled release platform. This release system was based on photocleavage-bridged polysilsesquioxane nanoparticles, which acted as nanocarriers of doxorubicin loaded on the surface via electrostatic interaction. The nanoparticles could reverse to a positive charge upon 254 nm light irradiation due to the photocleavage of the o-nitrobenzyl bridged segment. The charge-reversal property of the nanoparticles could trigger the release of the loaded molecules. Doxorubicin was selected as a positively charged model molecule. The as-prepared nanoparticles, with an average size of 124 nm, had an acceptable doxorubicin loading content of up to 12.8%. The surface charge of the nanoparticles could rapidly reverse from negative (−28.20 mV) to positive (+18.9 mV) upon light irradiation for only 10 min. In vitro release experiments showed a cumulative release of up to 96% with continuously increasing irradiation intensity. By regulating irradiation parameters, precisely controlled drug release was carried out. The typical "stepped" profile could be accurately controlled in an on/off irradiation mode. This approach provides an ideal light-triggered molecule release system with control over location, timing, and dosage. An updated controlled release system triggered by near-infrared or infrared light would have even greater potential applications in biomedical technology.
Introduction
A controlled molecule release system holds the ability to trigger molecule release and maintain an effective concentration of molecules at the target site. However, precise control over the timing, location, and quantity of molecules released is still a challenge. Stimuli-responsive drug nanocarriers are expected to overcome this challenge in drug release systems [1-7]. Light, with remote and on-demand control over irradiation parameters, has become a promising stimulus for drug release systems [8,9]. Therefore, a light-controlled drug release system is considered an ideal platform for precise control of drug release behavior [10-13].
A direct method for precisely controlling the dosage of drugs is the covalent combination of photocleavable nanocarriers and target molecules [14]. The target molecules are covalently attached to nanocarriers containing photocleavable groups, such as o-nitrobenzyl and coumarinyl derivatives [15]. The photocleavage of the nanocarriers by light releases the drug molecules, giving precise control over the drugs. Yibing Zhao reported that carboxytetramethylrhodamine, as a drug model, could be covalently caged by o-nitrobenzyl linkage-modified silica nanoparticles through a direct esterification process and released precisely by light [16]. The group of Linyong Zhu provided a mesoporous silica nanoparticle grafted with the phototrigger coumarin modified with the drug chlorambucil, which could precisely regulate drug release upon light manipulation [17]. Magnetic silica nanoparticles fabricated using a coumarin-chlorambucil covalent conjugate were used for the precise control of the photolytic release of chlorambucil [18]. The group of Pradeep Singh developed photoresponsive mesoporous silica nanoparticles for the precisely controlled release of chlorambucil attached covalently to a quinoline phototrigger [19]. However, the covalent method has non-negligible drawbacks: the complicated chemical preparation and multistep operation might affect the effective efficiency of the molecules, so a new strategy is urgently needed to overcome this limitation.
Other methods have been tried to construct light-controlled drug release systems, such as photocleavable polymeric vesicles, micelles, or liposomes [20], in which the drugs are encapsulated into the cores of the nanocarriers via hydrophilic and hydrophobic interactions. Upon irradiation, the encapsulated drugs are continuously released from the disrupted nanocarriers due to photolysis, which causes poor controllability of drug release. Therefore, it is still necessary to develop a stable and efficient methodology for precisely controlled drug release.
Electrostatic interaction is an advanced technology for loading and releasing drugs owing to its generalizability and fast responsivity. At present, stimuli-responsive charge-reversal nanocarriers exhibit significant potential for controlled drug release systems [21-24]. For example, the surface charge of nanocarriers plays an important role in prolonged circulation and cellular uptake in the physiological environment. Negatively charged nanocarriers maintain their characteristics during circulation in the blood and then reverse their surface charge to enhance cellular uptake once stimulated [25]. Although charge-reversal nanocarriers triggered by endogenous stimuli (e.g., pH, redox, enzymes) have been constructed [26-33], the low precision of the release dosage is a problem that has to be faced in drug release systems, because the concentrations of these stimuli in the physiological environment cannot be freely regulated and controlled. Meanwhile, light-responsive charge-reversal nanocarriers hold tremendous potential for the precise control of drug release. If light responsiveness and electrostatic interactions are combined, precisely controlled drug release can be achieved. It should be mentioned that light-responsive charge-reversal nanocarriers have been employed for delivering nucleic acid molecules [34-37]. However, the positive charge of such pristine nanocarriers is not conducive to their circulation in the blood, resulting in the inapplicability of those drug release systems.
Bridged polysilsesquioxane has the general chemical structure [RSiO1.5]n, in which R is an organic bridged group and [SiO1.5] is the inorganic framework of the polymer [38]. The organic bridged group (R) can be designed to endow the bridged polysilsesquioxane with special properties, including stimuli-responsiveness. The inorganic [SiO1.5] framework affords the polysilsesquioxane good thermal stability, solvent resistance, and biocompatibility. Therefore, bridged polysilsesquioxanes can serve as excellent carriers [39] and are potential candidates for widespread applications, including delivery systems [40-42], photoelectric sensors [43,44], molecular recognition [45,46], and adsorbents [47].
In the present work, bridged polysilsesquioxane nanoparticles (BPS) with photocleavable o-nitrobenzyl bridged segments were designed and prepared. The silanol groups give BPS a negative surface charge in an aqueous environment. The negatively charged BPS can attract positively charged drugs via electrostatic interactions. Upon irradiation, the photocleavage reaction of the organic bridged segments produces protonated amine groups, which reverse the surface charge of BPS from negative to positive. The electrostatic equilibrium then becomes unstable, and the electrostatic repulsion force results in the release of the target molecules from the surface of the nanocarriers. Once the light is off, a new electrostatic equilibrium is established. Thus, a light-triggered, stepwise, and precisely controlled molecule release system can be fabricated. Doxorubicin (DOX), a positively charged drug, was used as a target molecule model compound to be loaded onto the negatively charged BPS. By regulating the irradiation intensity, time, and on/off manner, multiple profiles of drug release and precise control of release timing, location, and dosage are reported. The BPS nanocarrier is an excellent candidate for the precisely controlled release of charged target molecules, including food additives, dyes, cosmetics, pesticides, functional ultraviolet absorbents, and drugs in certain circumstances. However, since the use of light at a wavelength of 254 nm has inevitable limitations, such as harm to humans and weak penetration through skin, near-infrared or infrared light should be further considered to trigger the precisely controlled molecule release system.
Synthesis of o-Nitrobenzyl Chloroformate
o-Nitrobenzyl alcohol (7.60 g, 49.6 mmol) and DIPEA (10.0 mL, 57.4 mmol) were dissolved in dry DCM (50.0 mL). Triphosgene (5.68 g, 19.2 mmol) was dissolved in dry DCM (20.0 mL) and added dropwise to the above mixture within 30 min at 0 °C. The reaction mixture was stirred at room temperature for 12 h. The resulting brownish solution was washed with water and extracted three times with DCM. The combined organic extracts were dried over anhydrous MgSO4. The organic solvent was removed under reduced pressure to give a brown crude product. The crude product was purified by column chromatography using DCM/PE (v/v, 3/1) as the eluent to yield o-nitrobenzyl chloroformate as a yellow solid (6.95 g, 32.3 mmol, 65.2% yield). Its chemical structure was confirmed by 1H NMR and 13C NMR spectra (Supplementary Materials, Figures S1 and S2).
Photoreaction of the o-NB
An 80 µg/mL solution of o-NB in THF in a volumetric flask was irradiated at a distance of 10 cm through a bandpass filter (λ = 254 nm, 200 mW/cm²). Aliquots (6.0 mL) were taken out at various irradiation times for UV-vis spectroscopy.
A solution of o-NB in CDCl3 (10.0 mg/mL) was irradiated for various times and analyzed by 1H NMR spectroscopy. The parameters of the laser source, as well as the distance between the laser and the samples, were kept the same for all irradiation processes in this work. For the light-on/off DOX release experiments, the sample suspensions were exposed to a laser power density of 200 mW/cm² for 1 h (light on) and were then shielded for 1 h (light off), respectively. The light on/off cycles were repeated six times over 12 h. The DOX released at each light-on and light-off event was separated by centrifugation (8000 rpm) for 20 min, and the resulting DOX content was determined by the UV-vis method.
All values were measured three times. The rate of released drug (%) was calculated based on the following equation:

Rate of released drug (%) = (Weight of drug in the PBS / Initial weight of drug in the nanoparticles) × 100% (3)
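Applying Equation (3) to a series of supernatant measurements is straightforward; a short R illustration with made-up masses (not data from this study):

# Cumulative release (%) per Equation (3); all masses are illustrative
dox_initial_mg <- 0.50                        # DOX initially loaded on the nanoparticles
dox_in_pbs_mg  <- c(0.06, 0.13, 0.19, 0.26)   # cumulative DOX measured in PBS by UV-vis
release_pct <- dox_in_pbs_mg / dox_initial_mg * 100
release_pct   # 12 26 38 52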
Characterization Techniques
Nuclear magnetic resonance (NMR) spectra were recorded on a Bruker Avance 400 MHz spectrometer by using CDCl 3 as the solvent and tetramethylsilane (TMS) as the internal standard.
Fourier transform infrared (FTIR) spectra were recorded on a Bruker Tensor 27 spectrometer in the wavenumber range of 400-4000 cm -1 . Samples were ground with KBr and pressed to the plates for measurement.
The particle size, size distribution, and zeta potential were measured by light scattering using Malvern Zetasizer Nano ZS90 system. The diameter of NPs was received from the average of three measurement results.
The morphologies of the nanoparticles were observed by scanning electron microscopy (SEM, Hitachi SU8010) after drying and spraying Pt, and by transmission electron microscopy (TEM, Hitachi 2100) at 100.0 kV.
Thermal gravimetry analysis (TGA) was performed at a heating rate of 10 • C/min under a N 2 atmosphere with a Thermo Gravimetric Analyzer (Netzsch STA 449).
Synthesis and Characteristic of Photoresponsive o-NB
As illustrated in Scheme 1, o-nitrobenzyl alcohol reacted with triphosgene at 0 °C via a substitution reaction to obtain o-nitrobenzyl chloroformate. Then the intermediate product and bis(trimethoxysilylpropyl)amine underwent a substitution reaction in an ice-water bath to produce a bridged siloxane with photoresponsive o-nitrobenzyl groups (o-NB). The chemical structures of all compounds were confirmed via nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS) (Figures S1-S6).
The pendent o-nitrobenzyl ester group of the monomer underwent photocleavage into the corresponding o-nitrosobenzaldehyde upon light irradiation at 254 nm, simultaneously releasing CO2 (Figure 1a). 1H NMR spectroscopy was used to prove the photocleavage of o-NB. Figure 1b shows the 1H NMR spectra of the o-NB in CDCl3 before and after light irradiation. The integral of the peak labeled "f" at 3.21 ppm, attributed to H(N-CH2-), was set as 4.0. The integral of the peak labeled "e" at 5.4 ppm, belonging to protons on the benzyl group, gradually decreased from 2.00 to 1.75 with increasing irradiation time, indicating the consumption of benzyl groups after the photocleavage of the o-nitrobenzyl ester groups. Meanwhile, the appearance of a new peak at 10.3 ppm suggested the formation of the aldehyde groups of the photolysis product. No visible changes were observed in the protons attributed to the benzene ring (a~d) and the aliphatic alkyl groups (f~i).
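As a worked reading of these integrals (our arithmetic, not a value reported by the authors): normalizing to the invariant N-CH2 signal, the fraction of cleaved o-nitrobenzyl esters at this irradiation time is (2.00 − 1.75)/2.00 = 12.5%.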
The photoresponsive behavior of o-NB in tetrahydrofuran solution (80 µg/mL) was monitored by UV-vis spectroscopy (Figure 2). The UV-vis spectrum of the unirradiated o-NB in THF (solid green line in Figure 2a) showed the characteristic absorption of the π→π* electron transition at 220 nm and the n→π* electron transition at 294 nm, belonging to the benzyl, nitryl, and ester groups, respectively. After irradiation for 60 min (solid red line in Figure 2a), the absorption spectrum changed, consistent with photocleavage of the o-nitrobenzyl groups.
Synthesis and Characterization of BPS
o-Nitrobenzyl bis-trimethoxysilylpropyl carbamate bridged polysilsesquioxane nanoparticles were synthesized via a sol-gel process using sodium hydroxide as a basic catalyst and i-PrOH as a cosolvent. The hydrolysis-condensation reaction of the o-NB monomer was conducted under vigorous stirring at 80 °C for 2 h. The nanoparticles were collected by centrifugation, washed with deionized water and ethanol, and dried under vacuum for 4 h, giving a white powder. The photoresponsive BPS was carefully characterized by FTIR, UV-vis, and TGA measurements (Figure 3). The FTIR spectrum of BPS (Figure 3a) showed the Si-O stretching vibration modes at 1102 cm−1 and 1030 cm−1. The Si-C stretching vibration mode at 1198 cm−1 indicated that the organic bridged segments were covalently incorporated into the polysilsesquioxanes. The peak at 2938 cm−1 was attributed to the C-H stretching vibration. The presence of -NO2 groups was proven by the typical peaks of antisymmetric stretching vibration at 1540 cm−1 and of symmetric stretching vibration at 1365 cm−1, respectively. The stretching vibrations of C=O at 1705 cm−1 and C-O at 1264 cm−1 revealed the ester groups in the organic bridged segments. In addition, the C-H bending vibration of the phenyl groups appeared at 789 cm−1 and 730 cm−1. The UV-vis absorption spectra (Figure 3b) of BPS dispersed in PBS buffer solution (pH 7.4) showed the characteristic absorption of the π→π* electron transition at 203 nm, attributed to the benzene ring, and that of the n→π* electron transition at 306 nm, attributed to the unsaturated -NO2 and C=O groups, demonstrating the possibility of the photoresponsiveness of BPS with o-nitrobenzyl ester organic groups. The difference between the UV-vis absorption spectra of the BPS and the o-NB may come from the effect of the Si-O-Si network on the o-nitrobenzyl ester groups. The TGA curves were analyzed to quantify the organic content of BPS (Figure 3c). The total weight loss of BPS was 58.0 wt%. The first stage of weight loss, before 370 °C, was caused by the thermal decomposition of the pendant o-nitrobenzyl ester groups. The second stage of weight loss, from 370 °C to 700 °C, arose from the thermal decomposition of the aliphatic bridged chains. The two-stage profile of weight loss was also clearly shown by the DTG curves (blue dotted line in Figure 3c). The dynamic light scattering (DLS) results showed that the average size of the BPS was 124 ± 12 nm (Figure 3d). The spherical shape of the prepared BPS nanoparticles was confirmed by SEM and TEM (Figure 3e,f).
Light-Triggered Charge Reversal of BPS
The photoresponsive behavior of BPS was investigated using zeta potential, DLS, TGA, and UV-vis measurements. Zeta potential measurements on BPS in PBS solution (pH 7.4) revealed a negative surface charge of −28.20 mV (Figure 4a). In a typical experiment, when an aqueous suspension of BPS was irradiated at 254 nm (200 mW/cm2) for 10 min, the surface charge reversed to +18.9 mV (Figure 4a), indicating the light-triggered surface charge reversal feature of BPS. In fact, after irradiating for 4 min, the surface charge of the nanoparticles had already reversed to a positive state (+2.64 mV), suggesting the fast responsiveness of BPS to light. The light-triggered surface charge reversal property resulted from the photocleavage of the o-nitrobenzyl groups. Before irradiation, the existence of silanol groups on the surface of the BPS resulted in a negatively charged surface. Upon irradiation, secondary amine groups were exposed in the bridged segments after the photoreaction of the o-nitrobenzyl groups. The protonation of the produced amine groups reversed the surface charge of the nanoparticles from a negative to a positive state. Figure 4b shows the TGA measurements of the BPS over different irradiation times. The char yield of the BPS was 42.1 wt% under a N2 atmosphere. However, the char yield of the BPS irradiated for 40 min increased to 47.9 wt%. The 5.8 wt% difference in char yield was attributed to the photocleavage of the o-nitrobenzyl groups of the BPS. The supernatant of BPS over different irradiation times was monitored by the UV-vis method (Figure 4c). The increasing absorption peaks at 234 nm and 306 nm corresponded to the π→π* and n→π* electron transitions of the photolysis by-product o-nitrosobenzaldehyde. In addition, the supernatant of BPS turned gradually from colorless to slightly yellow due to the by-product o-nitrosobenzaldehyde being produced upon light irradiation.
Drug Loading and Light-Triggered Drug Release In Vitro
Light-triggered charge-reversal BPS is a promising candidate for the precise control of molecule release. DOX, a clinical anticancer drug, was used to evaluate the light-triggered molecule release system. As shown in Figure 5, the positively charged DOX was loaded onto the negatively charged BPS (DOX@BPS) by electrostatic interaction in PBS buffer solution (pH 7.4). Upon irradiation at 254 nm, the charge reversal of BPS from a negative to a positive state created electrostatic repulsion between DOX and BPS, resulting in the release of DOX.
The SEM and TEM micrographs showed that the surface of DOX@BPS became rough, indicating that the target molecule DOX had been loaded on the BPS (Figure 6a,b). The zeta potential of DOX@BPS was studied during the preparation process (Figure 6c). When the feed ratio of DOX to BPS was 0.5, the zeta potential of the obtained DOX@BPS maintained a negatively charged state of −4.9 mV. With an increasing feed ratio of DOX, the surface charge of DOX@BPS reversed to a positive state at +16.2 mV. Because negatively charged nanocarriers have advantages in a drug release system, the 0.5 feed ratio of DOX to BPS was used to prepare the DOX@BPS. The loading content and entrapment efficiency of DOX@BPS at the 0.5 feed ratio were 12.8% and 67.3%, respectively. Furthermore, the average size of DOX@BPS increased to 196 ± 26 nm according to the DLS results (Figure 6d). The zeta potential, morphology, and increased average size together illustrated that the target molecule DOX had been successfully loaded on the BPS.
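For readers unfamiliar with the two figures of merit quoted above, the sketch below shows the conventional definitions of loading content and entrapment efficiency. The masses are hypothetical placeholders (the paper does not report them), and exact definitions vary between studies, so this is an illustration rather than a reconstruction of the reported 12.8%/67.3% values.

```python
# Sketch of the conventional drug-loading metrics (hypothetical masses).

def loading_content(m_drug_loaded: float, m_loaded_nanoparticles: float) -> float:
    """LC = mass of loaded drug / total mass of the drug-loaded nanoparticles."""
    return m_drug_loaded / m_loaded_nanoparticles

def entrapment_efficiency(m_drug_loaded: float, m_drug_fed: float) -> float:
    """EE = mass of loaded drug / mass of drug initially fed."""
    return m_drug_loaded / m_drug_fed

m_fed, m_loaded, m_total = 5.0, 3.4, 26.5  # mg, illustrative values only
print(f"LC = {loading_content(m_loaded, m_total):.1%}")     # -> 12.8%
print(f"EE = {entrapment_efficiency(m_loaded, m_fed):.1%}")  # -> 68.0%
```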
The DOX release profiles of DOX@BPS over irradiation times of 0 to 16 h at power densities of 60 mW/cm2, 160 mW/cm2, and 200 mW/cm2 were evaluated (Figure S8). DOX release from DOX@BPS under dark conditions after 16 h was only 3.5%, indicating that DOX@BPS had good stability without irradiation. When the power density of the laser was 60 mW/cm2, 160 mW/cm2, and 200 mW/cm2, the amounts of DOX released after 16 h of irradiation were 21.9%, 48.9%, and 82.9%, respectively. To continuously enhance the cumulative release of DOX@BPS, the DOX release behaviors under varied light power densities (60, 160, and 200 mW/cm2) over irradiation times from 0 to 48 h in PBS buffer solution (pH 7.4) at 37 °C were further studied (Figure 7a). UV-vis absorption spectroscopy recorded the absorbance of the supernatant of DOX@BPS upon light irradiation for different durations. The released DOX amounts increased remarkably with increased light irradiation density, suggesting that the target molecule release rate and the cumulative release amount could be regulated in a light-controlled manner by changing the light irradiation intensity and irradiation time. For example, the DOX released in PBS buffer solution (pH 7.4) at a 60 mW/cm2 power density was only 21.8% of the loaded DOX after 16 h of irradiation. By contrast, at a 120 mW/cm2 power density, the amount of released DOX was 51.7% after 16 h. Enhancing the irradiation power density to 200 mW/cm2 induced a 96.2% drug release from the DOX@BPS. Figure 7a clearly shows that a high photoreaction conversion efficiency under high-intensity laser light resulted in a faster DOX release rate. Of note, a two-stage profile with an initially higher release rate that thereafter leveled off was clearly observed at each irradiation power density. No burst release was observed in any irradiation period, because the driving force of the target molecule release was electrostatic interaction rather than a concentration-diffusion mechanism. The average size of the DOX@BPS decreased to 140 ± 31 nm after DOX release in the PBS buffer solution (pH 7.4) (Figure 6d).
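The cumulative-release percentages above are typically obtained by converting supernatant absorbance into concentration through a Beer-Lambert calibration curve. The sketch below shows this conversion under stated assumptions: the calibration slope, medium volume, loaded mass, and absorbance readings are all hypothetical, since the paper reports only the resulting percentages (and sampling-replacement corrections are omitted for brevity).

```python
import numpy as np

slope = 0.018      # absorbance per (ug/mL); hypothetical calibration slope
v_total = 10.0     # mL of release medium (assumed constant; no sampling correction)
m_loaded = 500.0   # ug of DOX loaded onto the nanoparticles, hypothetical

absorbances = np.array([0.05, 0.21, 0.48, 0.71])  # supernatant readings over time
conc = absorbances / slope                        # ug/mL released into supernatant
cumulative_pct = conc * v_total / m_loaded * 100  # % of loaded DOX released

for t, pct in zip([1, 4, 8, 16], cumulative_pct):  # hours, illustrative time points
    print(f"{t:>2} h: {pct:5.1f}% released")
```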
To confirm the capability of precisely controlled release, a turning-on/turning-off mode of the target molecule release behavior of DOX@BPS was performed in PBS buffer solution at 37 °C upon irradiation at a 200 mW/cm2 power intensity. The release process was monitored by UV-vis absorption spectroscopy. During the first laser-on and laser-off unit operation (1 h), 11.1% of DOX was released from DOX@BPS. DOX release from DOX@BPS proceeded only under irradiation; no DOX release was detected under dark conditions. Light-triggered target molecule release behavior was observed when the same irradiation cycle was repeated every hour, and sustained release of DOX up to 60.8% was detected after irradiating DOX@BPS for 6 h. The typical "stepped" profile indicated that the target molecule DOX can be released in a triggered, stepwise, and precisely controlled manner.
Conclusions
In conclusion, a light-triggered charge-reversal bridged polysilsesquioxane nanoparticle was successfully designed and prepared. Upon light irradiation, the negatively charged (−28.20 mV) polysilsesquioxane nanoparticles can reverse their surface charge to a positive state (+18.9 mV) due to the photocleavage of the organic bridged segments. This nanocarrier was considered an ideal platform for precise control of target molecule release. Doxorubicin, a positively charged model molecule, could be loaded on the surface via electrostatic interaction. The charge reversal property of the nanoparticles could then trigger release of the loaded target molecules. The release kinetics of the target molecules depended on the intensities, times, and modes of light irradiation. A nearly complete DOX release (96%) was achieved by regulating light irradiation intensity and time. The typical stepped profile of DOX release under an on/off light irradiation mode provides a light-triggered, stepwise, and precisely controlled approach to target molecule release. This precisely controlled release system has potential applications in the fields of food additives, pesticides, dyes, cosmetics, functional ultraviolet absorbents, and drugs in certain circumstances. Further research should attempt to use NIR or IR light to trigger the precisely controlled molecule release system to overcome the limitation of UV light at 254 nm for biomedical applications.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "f4a60eb8c643153f0eeec7d086990a1e6fdd188d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/13/15/2392/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f4a60eb8c643153f0eeec7d086990a1e6fdd188d",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Development and Implementation of Preoperative Optimization for High-Risk Patients With Abdominal Wall Hernia
IMPORTANCE Real-world surgical practice often lags behind the best scientific evidence. For example, although optimizing comorbidities such as smoking and morbid obesity before ventral and incisional hernia repair improves outcomes, as many as 25% of these patients have a high-risk characteristic at the time of surgery. Implementation strategies may effectively increase use of evidence-based practice. OBJECTIVE To describe current trends in preoperative optimization among patients undergoing ventral hernia repair, identify barriers to optimization, develop interventions to address these barriers, and then pilot these interventions. DESIGN, SETTING, AND PARTICIPANTS This quality improvement study used a retrospective medical record review to identify hospital-level trends in preoperative optimization among patients undergoing ventral and incisional hernia repair. Semistructured interviews with 21 practicing surgeons were conducted to elicit barriers to optimizing high-risk patients before surgery. Next, a task force of experts was convened to develop pragmatic interventions to increase surgeon use of preoperative optimization. Finally, these interventions were piloted at 2 sites to assess acceptability and feasibility. This study was performed from January 1, 2014, to December 31, 2019. MAIN OUTCOMES AND MEASURES The main outcome was rate of referrals for preoperative patient optimization at the 2 pilot sites. RESULTS Among 23 000 patients undergoing ventral hernia repair, the mean (SD) age was 53.9 (14.3) years, and 12 315 (53.5%) were men. Of these, 8786 patients (38.2%) had at least 1 high-risk characteristic at the time of surgery, including 7683 with 1, 1079 with 2, and 24 with 3. At the hospital level, the mean proportion of patients with at least 1 of 3 high-risk characteristics at the time of surgery was 38.2% (95% CI, 38.1%-38.3%). This proportion varied widely from
Introduction
Surgical care in the United States is expensive and is associated with substantial mortality rates. 1,2 Improvement efforts have traditionally focused on the procedure itself and perioperative care. 3 Much less attention has been given to the critical decision of when to operate and on whom, which is inextricably linked to patient outcomes. [15] Surgeons often fail to adopt scientific evidence in their decision-making about which patients are likely to benefit from surgery. 16,17 Abdominal wall hernia repair is an exemplary condition in which inattention to preoperative optimization is associated with increased complications and costs after surgery. Abdominal wall hernia repair is one of the most common operations performed in the US, with more than 500 000 repairs performed annually. 18 Despite the overwhelming evidence of the benefits of preoperative optimization and delaying elective abdominal wall hernia repair until patient health and risk factors can be optimized, a significant practice gap persists with regard to best practices for patient optimization before hernia repair. In a large population-based study, 4 as many as 25% of patients undergoing elective abdominal wall hernia repair did not undergo optimization before surgery. This finding was associated with increased short-term morbidity and an additional US $60 million per annum in episode-of-care payments.
Within this context, we used an implementation science framework to inform a stepwise approach to implementing a multifaceted quality improvement intervention to bridge the gap between recommendation and adoption of best practices for preoperative optimization of patients with ventral hernia. 19 Using the Theoretical Domains Framework, 20 stakeholders (eg, surgeons) were interviewed to identify the most salient barriers and facilitators to practice change. Identified barriers were then mapped to theoretical domains and candidate techniques for evidence-based behavior change. 21 This study explores the selection of evidence-based implementation strategies based on behavior change techniques and their initial piloting and assessment at 2 sites that perform abdominal wall hernia repair.
Methods
This study was designed to engage surgeons and institutional stakeholders in identifying barriers to preoperative optimization for patients with ventral hernia. Using this information, we sought to develop implementation strategies that would enable surgeons to use this evidence-based practice.
We then evaluated referral for optimization after the intervention (Figure 1). This work was performed from January 1, 2014, to December 31, 2019. This clinical quality improvement study was approved by the University of Michigan institutional review board and follows the Standards for Quality Improvement Reporting Excellence (SQUIRE) reporting guideline. 22 The requirement for informed consent was waived by the institutional review board.
Understanding the Practice Gap
A previous report by Howard et al 4 found that 15% to 25% of patients with ventral hernia have high-risk characteristics at the time of surgery. The present study aims to understand hospital-level optimization of these risks. Specifically, to the extent that a given hospital has a high or low proportion of high-risk patients, opportunities for systematic quality improvement may exist. We therefore used data from the Michigan Surgical Quality Collaborative (MSQC) to describe site-level variation in preoperative optimization. The MSQC is a statewide clinical registry comprising 73 hospitals that prospectively collects data on perioperative processes of care and 30-day clinical outcomes after general surgical procedures. Data are reviewed and abstracted by trained nurse abstractors, and a sampling algorithm is used to minimize selection bias. 23 Optimization was defined as the absence of a high-risk characteristic at the time of surgery.
Variation in hospital-level optimization was described by the proportion of patients undergoing ventral hernia repair at each hospital who had a characteristic of tobacco use, morbid obesity, or unhealthy alcohol consumption at the time of surgery. Presence of a high-risk characteristic at the time of surgery was established by review of a patient's complete medical record (eg, history and physical, daily progress notes, and discharge summary) by a trained clinical data abstractor. These data were obtained from January 1, 2014, to December 31, 2018.
Identifying Evidence-Based Implementation Strategies
Our goal was to understand the barriers and facilitators to preoperative optimization of patient risk that could inform implementation strategies, defined as highly specified, theory-based tools or methods designed to sustain quality improvement through implementation of effective practices.
To understand these barriers, we conducted qualitative interviews with a convenience sample of 21 practicing surgeons in Michigan who responded to a survey sent to 31 MSQC-participating surgeons. Surgeons were from community (n = 8) and academic (n = 13) hospitals and were required to be practicing surgeons who performed abdominal wall hernia repair. Interviews were conducted independently by telephone or in person. Some themes elicited from these interviews have been described previously; however, the present study specifically explores the elicited barriers in more detail as part of a stepwise implementation strategy. 27 After these interviews, a multidisciplinary hernia task force was convened consisting of 25 experts in hernia surgery, pain management, anesthesia, quality improvement, data abstraction, and implementation. Participants were recruited based on existing networks within the MSQC with the purpose of improving the care of patients with hernia in Michigan. The study leads (M.E. and D.T.) shared the results of the qualitative interviews and held open discussion to solicit candidate strategies from participants during three 1-hour, in-person sessions. Given the goal of maximizing feasibility and acceptability, a pragmatic framework was used to identify candidate strategies that were most likely to be adopted within MSQC hospitals. 28 Candidate strategies included those that had been used previously within the MSQC to motivate quality improvement initiatives and incorporated value-based reimbursement, pay-for-performance (P4P), case-based billing modifiers (eg, Modifier 22), in-person courses or seminars, and delivery of focused education and reminders to surgeons. Pragmatic strategies were given priority. Delphi and discrete choice models were not used.
Three implementation strategies were ultimately selected based on identified barriers and facilitators, sources of behavior from linkage of theoretical framework domains to behavior change theories, results of prior successful strategies deployed through the MSQC, and established resources available within the MSQC. 25,29,30
Stepwise Implementation of Evidence-Based Interventions at a Pilot Level
Selected implementation strategies were piloted within 2 MSQC health care systems to evaluate feasibility and acceptability. As part of this pilot, surgeons at both sites were willing to participate in brief educational sessions to learn about the proposed interventions and resources. Pilot sites were also willing to host on-site facilitators who were added to increase early adoption of the implementation strategies developed as part of this study. The main outcome was the number of referrals placed for preoperative optimization at both pilot sites to measure any changes in optimization referral after implementation of this intervention.
Statistical Analysis
A descriptive analysis was calculated for high-risk characteristics at the time of surgery across hospitals. Statistical analyses were performed using STATA, version 16.0 (StataCorp LLC).
Hospital-Level Variation in Adherence to Preoperative Optimization
The patient selection for this study is presented in Figure 2. Among 23 000 patients undergoing ventral hernia repair during the study period, the mean (SD) age was 53.9 (14.3) years, and 12 315 (53.5%) were men.
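As a quick arithmetic check, the high-risk counts reported in the abstract are internally consistent; the short sketch below simply verifies the totals.

```python
# Consistency check of the reported cohort counts:
# 7683 patients with 1, 1079 with 2, and 24 with 3 high-risk characteristics.
n_total = 23_000
n_high_risk = 7683 + 1079 + 24
assert n_high_risk == 8786  # matches the reported number of high-risk patients
print(f"{n_high_risk / n_total:.1%} had at least 1 high-risk characteristic")
# -> 38.2%, in line with the reported hospital-level mean
```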
Barriers to Preoperative Optimization
In a recent study, Vitous et al 27 elicited 3 barriers to practice change that are explored herein in more detail as they apply to an overall implementation strategy (Table). The first theme centered on financial concerns: surgeons worried that deferring surgery for preoperative optimization would lead patients to seek care elsewhere, costing them referrals and revenue. Second, surgeons attested to a lack of resources to accomplish optimization among high-risk patients. Although a small number of health systems have stand-alone preoperative optimization programs to which surgeons can refer patients, most surgeons did not practice in such a model.
Therefore, any work to improve a patient's health before surgery fell entirely on the surgeon. This process was seen as burdensome amidst an already busy clinical and operative schedule. Moreover, most surgeons felt that they lacked the expertise in how to effectively counsel patients to improve their health before surgery.
Finally, surgeons identified organizational barriers to deferring surgery until patients had undergone adequate optimization. For example, there was a pervasive belief that health system administration did not view time spent on optimization as high-value patient care. In contrast to already-identified financial barriers in which surgeons raised concerns about losing patient business, surgeons specifically identified expectations to maintain a given volume of operations and asserted that deferral of surgery would decrease this volume.
Identifying Evidence-Based Implementation Strategies
Using this information obtained from practicing surgeons, the expert task force was convened to deliberate potential implementation strategies to encourage better use of preoperative optimization.
Consensus was ultimately reached on 3 implementation strategies to increase surgeon use of preoperative optimization among high-risk patients with hernia.
The first strategy was to conduct educational outreach to address surgeon knowledge gaps and awareness of organizational resources. This strategy was designed to familiarize surgeons with the readily available, statewide prehabilitation Preoperative Patient Optimization Program (PREP). PREP is a structured prehabilitation program that engages patients in 5 domains: physical activity (patients are provided with a pedometer and encouraged to track steps), healthy diet, breathing exercises (patients are provided with an incentive spirometer), mindfulness, and smoking cessation, if applicable. The details of this program have been described previously. 14,15 PREP is available to surgeons at MSQC hospitals; however, the previous surgeon interviews demonstrated that most surgeons were unfamiliar with it. Therefore, this implementation strategy involved providing surgeons with training material familiarizing them with the program and instructions to refer patients. Surgeons were also provided with guidelines pertaining to which patients merit referral to PREP (namely, patients with 1 of 3 high-risk characteristics). The overall cost of PREP training and delivery was $7 per referred patient, including the PREP toolkit, surgeon training, and PREP personnel.
The second implementation strategy was active facilitation. 31 On-site facilitation was developed to encourage surgeon use of preoperative optimization. This strategy was developed based on the recognition that providing passive education alone would likely not achieve the desired outcome at all hospitals. On-site facilitation would provide additional resources to help align the priorities of clinicians and organizational leaders to support adherence to patient optimization. Facilitation is derived from the PARIHS (Promoting Action on Research Implementation in Health Services) framework, which supports clinicians in the adoption of evidence-based practices through enhancement of strategic thinking and leadership skills that enable them to cultivate surgeon champions to overcome organizational and system barriers to adoption. 32 For example, in the Recovery-Oriented Collaborative Care trial, on-site facilitation delivered at community care practices in Michigan and Colorado compared with surgeon training alone increased use of a care management program. 33 The third implementation strategy was to alter the incentive structure surrounding optimization to address financial concerns. 31 This strategy leveraged a P4P program already used by the MSQC to incentivize quality improvement efforts. 34 Under this incentive, hospitals receive a P4P score based on hospital-reported performance regarding preoperative optimization referrals. Hospital payments will then be tiered based on this score, and meeting performance measures enables hospitals to receive an additional fixed 40% financial incentive. Starting in quarter 1 of fiscal year 2021, all hospitals within the MSQC will be eligible for this incentive. Similar financial strategies have been deployed at the statewide level to improve postoperative opioid prescribing. 35
Pilot for Improved Preoperative Optimization Using Implementation Strategies
In 2016, PREP training was deployed at 2 sites, including a large academic health system and an affiliate community hospital. These sites were selected based on geographic convenience and willingness of surgeons to participate in pilot implementation efforts. Baseline optimization referral rate was not known for these sites because previous data were deidentified. A subset of surgeons caring for patients with abdominal wall hernia (n = 7) received PREP training. Referral to PREP increased by 860%, from 10 referrals in 2016 to 96 in 2018.
On-site facilitation was implemented in 2018. The on-site facilitator provided clinical guidance and support strategies for high-risk patients, tracked referrals for preoperative optimization, provided feedback, and reinforced guidelines with surgeons. For example, by reviewing medical records for the presence of a high-risk characteristic such as smoking, the facilitator would provide reminders that referral for optimization and even deferral of surgery would be appropriate. Addition of an on-site facilitator was associated with another 40% increase in referral to PREP, from 96 referrals in 2018 to 134 referrals in 2019. The incremental total cost of providing the on-site facilitator was $6000 during the 12-month period (0.1 full-time equivalent), which was calculated based on a standard salary level for a part-time clinical nurse.
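The reported referral increases are simple percent changes over the baseline counts; the sketch below verifies them (860% from 10 to 96 referrals, and roughly 40% from 96 to 134).

```python
def pct_change(before: int, after: int) -> float:
    """Percent change from a baseline count to a follow-up count."""
    return (after - before) / before * 100

print(f"2016 -> 2018: {pct_change(10, 96):.0f}% increase")   # -> 860%
print(f"2018 -> 2019: {pct_change(96, 134):.0f}% increase")  # -> 40%
```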
Discussion
This study describes the development, implementation, and initial results of a multifaceted, stepwise implementation intervention designed to increase surgeon use of evidence-based preoperative optimization for high-risk patients with abdominal wall hernia. Using qualitative interviews with surgeon stakeholders, the primary barriers to evidence-based practice were identified as lack of financial incentive and unfamiliarity with available preoperative optimization programs. Using these results, a P4P model was developed to financially incentivize use of the PREP program, and surgeon training and on-site facilitators provided information and resource knowledge to increase adoption of preoperative optimization. Within the first 2 years of implementing these interventions, the number of patients with abdominal wall hernia referred for preoperative optimization at no cost to participating surgeons increased substantially.
This study demonstrates the use of implementation science methods to overcome widely recognized but poorly addressed barriers to motivating practice change in surgery. Roughly half of the complications that occur after surgery are considered preventable. 36 Despite abundant evidence regarding the factors that drive these complications and the benefits of preoperative optimization, there is a significant disconnect between published evidence and actual practice. 37 The bridging of this gap has been shown to depend critically on the methodology of implementation science. 38 For example, Jafri et al 39 demonstrated that despite the existence of practice guidelines for the management of abdominal wall hernia in women of childbearing age, wide variation remains in surgeon decision-making. The set of implementation strategies outlined above is an example of an implementation intervention that systematically identifies and overcomes this variation in a stepwise fashion by integrating input from stakeholders. Guided by established implementation frameworks, surgeons identified barriers to optimizing care, which then informed specific strategies to overcome those barriers that can be applied across different sites. To our knowledge, this study is the first to apply these strategies to optimization before a common elective surgical procedure.
In this context, this study also demonstrates the importance of identifying institution- or surgeon-specific barriers to implementation of evidence-based practice. This strategy has been used effectively (eg, when combined with the Theoretical Domains Framework) to map and identify targets for practice change. 19 In a similar quality improvement initiative, excessive postoperative opioid prescribing was found to arise from surgeon concerns that prescribing less will lead to patient dissatisfaction. By recognizing these beliefs, implementation strategies can be developed that address them, such as behavioral modeling, persuasive communication, and social processes of encouragement. 40 Insofar as patient engagement in preoperative optimization has been shown to be associated with postoperative outcomes, a similar strategy can be used to assess facilitators and barriers to maximizing participation of patients in optimization efforts. Identifying concerns about financial losses also allowed us to couple these strategies with P4P incentives, which have similarly been used to motivate adoption of evidence-based opioid prescribing practices after surgery. 35 However, a future state that links P4P to actual preoperative optimization success and not simply referral rate may even further maximize the effect of this intervention.
Limitations
This study has important limitations. Importantly, the purpose of the present study was to evaluate the design and application of an implementation strategy to bridge a known gap in surgical practice.
As such, we did not collect actual outcomes regarding whether optimization referrals led to improvement in risk factors or improved postoperative outcomes. There is a plethora of existing evidence to support the effectiveness of preoperative optimization. 15,41 This intervention was also conducted within the context of a preexisting collaborative hospital network that greatly facilitated its feasibility. Accordingly, the findings may not be generalizable to other institutions where no such network exists. Nevertheless, similar variation has been demonstrated in other populations of patients undergoing abdominal hernia repair, and similar collaborative hospital networks have emerged to address the challenges of adhering to best practice. 42,43 Another limitation is the lack of geographic information regarding the distribution of high-risk health behaviors. Smoking, obesity, and unhealthy alcohol consumption are all associated with socioeconomic status, and therefore the case mix and barriers that hospitals encounter will be unique to their geographic setting. Future iterations of this work should address these variations in local practice so that solutions tailored to a given hospital's patient population can be implemented.
In addition, to the extent that other health care systems and surgical practices may have different barriers to adoption of evidence-based practice, the strategy described herein is generalizable in that it would enable those systems to engage stakeholders in identifying their own unique barriers and design implementation strategies accordingly.
Another limitation is that the overall proportion of high-risk patients was not tracked after implementation of this intervention strategy, so the extent to which additional high-risk patients were not being referred for optimization is unknown. It is similarly unknown how much improvement was due to the addition of an on-site facilitator vs growing awareness of optimization among surgeons. Although we have no reason to believe the patient population at these sites changed substantially during the study period, future efforts should track all patients such that changes in referral rates can be understood in a larger context. Specifically, our intervention did not make use of any automated referral technology or real-time decision support tools, which have been well proven to increase the efficiency and sustainability of similar quality initiatives. There are also limitations in using the Theoretical Domains Framework, including the lack of a formal way in which to apply the framework in the health care setting. 20
Figure 1. Overview of Intervention Development and Implementation Process
Figure 3. Aggregate Variation in Adherence to Preoperative Optimization Across Michigan Surgical Quality Collaborative Sites From 2014 to 2018
Table. Dominant Barriers to Practice Change With Representative Quotations
Financial concerns: "[I'd] love to say that we're sticklers about [optimization], but we're not, you know. In this age and era of patient satisfaction, you send all these patients out and say come back when you quit smoking, they just go find somebody else and then they write you a bad review too, so you have to balance that." Therefore, despite recognizing the benefits of optimization, surgeons were nevertheless willing to perform surgery on high-risk patients for fear of losing referrals and revenue.
Local practice patterns and expectations: "At the big universities, you can kind of draw the line, but in the communities, I think we're sort of stuck with that."
Clinician autonomy: "I'm a very lazy surgeon. You should do what's easiest for you. If it would be easy for you, it's probably easier on the patients." "I don't read guidelines. I just make it up."
Abbreviation: PREP, Preoperative Patient Optimization Program.
"year": 2021,
"sha1": "82d5f7ac8312f03c490ce46c21267578fa27109f",
"oa_license": "CCBY",
"oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2779779/howard_2021_oi_210222_1620141980.74595.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a19107824891adee1c12a9a9206c8e2ba623b612",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Management of intra-abdominal infections: recommendations by the Italian council for the optimization of antimicrobial use
Intra-abdominal infections (IAIs) are common surgical emergencies and are an important cause of morbidity and mortality in hospital settings, particularly if poorly managed. The cornerstones of effective IAIs management include early diagnosis, adequate source control, appropriate antimicrobial therapy, and early physiologic stabilization using intravenous fluids and vasopressor agents in critically ill patients. Adequate empiric antimicrobial therapy in patients with IAIs is of paramount importance because inappropriate antimicrobial therapy is associated with poor outcomes. Optimizing antimicrobial prescriptions improves treatment effectiveness, increases patients' safety, and minimizes the risk of opportunistic infections (such as Clostridioides difficile) and antimicrobial resistance selection. The growing emergence of multi-drug resistant organisms has caused an impending crisis with alarming implications, especially regarding Gram-negative bacteria. The Multidisciplinary and Intersociety Italian Council for the Optimization of Antimicrobial Use promoted a consensus conference on the antimicrobial management of IAIs, including emergency medicine specialists, radiologists, surgeons, intensivists, infectious disease specialists, clinical pharmacologists, hospital pharmacists, microbiologists and public health specialists. Relevant clinical questions were constructed by the Organizational Committee in order to investigate the topic. The expert panel produced recommendation statements based on the best scientific evidence from PubMed and EMBASE Library and experts' opinions. The statements were planned and graded according to the Grading of Recommendations Assessment, Development and Evaluation (GRADE) hierarchy of evidence. On November 10, 2023, the experts met in Mestre (Italy) to debate the statements. After the approval of the statements, the expert panel met via email and virtual meetings to prepare and revise the definitive document. This document represents the executive summary of the consensus conference and comprises three sections. The first section focuses on the general principles of diagnosis and treatment of IAIs. The second section provides twenty-three evidence-based recommendations for the antimicrobial therapy of IAIs. The third section presents eight clinical diagnostic-therapeutic pathways for the most common IAIs. The document has been endorsed by the Italian Society of Surgery.
Background
Intra-abdominal infections (IAIs) are common surgical emergencies and represent an important intra-hospital cause of morbidity and mortality, especially if poorly treated. IAIs represent a notable factor contributing to the loss of both human lives and resources across global hospital settings. The WISS study [1] reported an estimated overall mortality rate of 9.2% among patients affected by complicated intra-abdominal infections (cIAIs) globally. The grading of the clinical severity of patients with cIAIs has been well described by the sepsis definitions. The data from the WISS study showed that mortality was significantly affected by sepsis status when divided into four categories. Mortality rates increase in patients developing organ dysfunction and septic shock. Mortality by sepsis status was as follows: no sepsis, 1.2%; sepsis only, 4.4%; severe sepsis, 27.8%; and septic shock, 67.8%.
Despite still high mortality, short-term survival from sepsis of abdominal origin has improved in recent years [2,3]. As a result, there is a growing population of sepsis survivors, and with rapid implementation of evidence-based care, early mortality has decreased substantially, but many sepsis survivors are now progressing into chronic critical illness with poorly defined long-term outcomes [4]. These patients frequently experience new symptoms, long-term disability [5], worsening of chronic health conditions, and increased risk for death following sepsis hospitalization [6].
Defining the patient with IAIs at high risk for failure is difficult. "High risk" may be attributed to the patient's underlying condition(s), such as age, comorbidity, or the disease severity status on presentation. However, a "low-risk" patient may be converted to "high risk" if the care provider loses the "window of opportunity" to diagnose, resuscitate, and start timely treatment. Thus, there are numerous situations that must be taken into account when addressing high-risk patients and treatment failure.
In general, high-risk IAI is attributed to patient factors (advanced age, immunosuppression, malignant disease, and pre-existing medical comorbidities) or disease factors, represented by high-risk scores (such as ASA, APACHE, SOFA scores), delay in intervention (usually > 24 h), inability to obtain source control, and an IAI that is hospital acquired (rather than community acquired).
The cornerstones of IAIs management encompass timely diagnosis, adequate source control, early and appropriate antimicrobial therapy, and expeditious physiological stabilization through intravenous fluid therapy and vasopressor agents, in critically ill patients.
Diagnosis
In patients with suspected IAIs, a step-up approach for clinical, instrumental, and laboratory diagnosis should be proposed according to the patients' clinical conditions.
Diagnosis of IAIs is based primarily on clinical assessment. Typically, the patient is admitted to the emergency department with abdominal pain that may be associated with signs of systemic inflammatory response syndrome, including fever, tachycardia, tachypnoea, and leucocytosis or leukopenia. Abdominal rigidity suggests the presence of a cIAI, while hypotension, polypnea or dyspnoea, and acute altered mental status may be warning signs of the patient's transition to sepsis. Unfortunately, physical evaluation sometimes can be compromised by a variety of clinical constraints like impaired consciousness or severe underlying disease.
White blood cell count is one of the most common laboratory exams performed when a cIAI is suspected. However, leucocytosis is poorly sensitive and relatively non-specific in these infections. C-reactive protein (CRP) is an acute-phase protein rapidly released during inflammation, increasing especially during bacterial infections because of the induction of an intense inflammatory reaction. It serves as an indirect marker of both inflammation and infection. Conversely, bacterial infections can cause a rapid increase in procalcitonin (PCT), while viral infections or non-infectious inflammation can hardly affect it [7,8]. Several conditions, including trauma,
acute or chronic renal impairment and recent surgery can influence both these markers. Therefore, it is important to include PCT and CRP levels in clinical algorithms and interpret them in the clinical context. An increase in serum lactate is typically observed in conditions where there is an imbalance between oxygen demand and supply. Anaerobic metabolism can result from a reduction in arterial oxygen content (hypoxemia), decreased perfusion pressure (hypotension), misdistribution of flow, and diminished oxygen diffusion across capillary membranes. Hyperlactatemia is associated with an augmented risk of mortality [9]. The lactate level has been proposed as a useful screening marker for suspected sepsis in adult patients [10,11].
Ultrasonography (US) and computed tomography (CT) scans are essential diagnostic tools in managing IAIs. US is the most suitable imaging modality for critically ill, unstable patients who cannot be moved from the intensive care unit (ICU) for additional imaging. US can be performed at the bedside and easily repeated, even though it depends on the operator's skills. Ileus and obesity can limit its value by restricting the sonographic window. US is the preferred initial diagnostic investigation for acute cholecystitis or thoracic fluid collections.
In all the other conditions (stable patients with non-biliary IAIs), a contrast-enhanced triple-phase CT scan is the gold standard in diagnosing and staging IAI patients. It provides a standardised and operator-independent evaluation and can provide a multisectoral body-region evaluation that may facilitate the identification of the source of infection within a few minutes. Contrast-enhanced CT scan offers improved anatomical details of the intestinal wall, allowing for the detection of both primary and secondary pathologies in the surrounding mesentery. Additionally, it can highlight segmental intestinal ischemia and extraluminal air in the peritoneal cavity [12]. It also provides information about the treatment strategy, guiding clinicians in defining the adequate management pathway for each patient. Intravenous contrast medium is sometimes withheld because of the risk of complications in patients undergoing CT scan for abdominal pain. In a multicenter retrospective diagnostic accuracy study of 201 consecutive adult emergency department (ED) patients who underwent a CT scan, unenhanced CT was approximately 30% less accurate than contrast-enhanced CT for evaluating abdominal pain in the ED. In many patients, the risk of withholding iodinated contrast medium may be higher than the risk of administering it [13]. In 2019, a Cochrane systematic review assessed the accuracy of CT scans in diagnosing acute appendicitis in adults [14]. Overall sensitivity was 0.95, and specificity was 0.94. In subgroup analyses, sensitivity was higher with intravenous contrast-enhanced CT (0.96, 95% confidence interval [CI] 0.92-0.98), rectal contrast-enhanced CT (0.97, 95% CI 0.93-0.99), and combined intravenous- and oral-enhanced CT (0.96, 95% CI 0.93-0.98) compared to non-enhanced CT (0.91, 95% CI 0.87-0.93). Although it represents the most sensitive imaging investigation for patients with IAIs, a step-up approach with CT performed after an inconclusive or negative US has been proposed as a safe alternative approach for patients with IAIs, especially in the setting of acute diverticulitis and acute appendicitis [15,16].
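To make the accuracy figures above concrete, the sketch below shows how sensitivity and specificity are computed from a 2x2 confusion matrix, together with a Wilson 95% confidence interval. The counts are hypothetical (chosen only so the point estimates echo the review's 0.95/0.94), not the Cochrane data.

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

tp, fn, tn, fp = 475, 25, 470, 30  # hypothetical 2x2 counts
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {tp / (tp + fn):.2f} (95% CI {lo:.2f}-{hi:.2f})")
lo, hi = wilson_ci(tn, tn + fp)
print(f"specificity {tn / (tn + fp):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```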
According to the current literature, magnetic resonance imaging (MRI) may play a significant role in cIAIs [17]. However, the challenges of performing MRI in emergency settings limit its routine application. MRI provides at least the same sensitivity and specificity as CT, and despite higher costs and potential availability issues in many centres, it should be considered a first-line imaging examination for pregnant women [17].
Source control
Source control is of paramount importance in managing patients with IAIs. The term source control encompasses all the physical procedures used to control a focus of infection and to restore the optimal function of the affected area [18].
Effective source control requires a deep understanding of pathophysiology, the complexities of the infection responses, and surgical and non-surgical options, and often requires striking an adequate balance between therapeutic aggressiveness and judicious decision-making. An adequate source control intervention can rapidly improve the course of IAIs. However, an improper decision may change a difficult clinical challenge into a clinical burden. Adequate source control can also reduce antibiotic use, allowing short courses of antibiotic therapy. Both operative and non-operative techniques can achieve control of the source of infection. Surgery remains the most viable therapeutic strategy for managing surgical infections in critically ill patients. The decision for a specific source control procedure should be defined according to the patient's and infection's characteristics, as well as the availability of technical expertise at the local institution.
Non-operative interventional procedures include percutaneous US- or CT-guided drainages and may represent a less invasive, safe, and effective strategy in the management of intra-abdominal and extra-peritoneal abscesses in selected patients. The principal cause of failure of percutaneous drainage is the misdiagnosis of the magnitude, extent, complexity, and location of the abscess.
In the setting of IAIs, the primary goals of the surgical intervention include: (a) determining the cause of the infection; (b) draining fluid/pus collections; (c) controlling the origin of the infection.
In patients with IAIs, surgical source control can include resection or suture of a diseased or perforated viscus (e.g., diverticular perforation, gastroduodenal perforation), removal of the infected organ (e.g., appendix, gallbladder), debridement of necrotic tissue, resection of ischemic bowel, and repair/resection of traumatic perforations with primary anastomosis or creation of a stoma. Source control not only reduces the bacterial and toxin load by removing the focus of infection and ongoing contamination, but also transforms the local environment, hindering further microbial growth and optimizing host defences [19,20].
Several studies of patients with IAIs have consistently shown that failure to obtain adequate source control is one of the most important factors associated with adverse outcomes, including death [21,22]. However, what constitutes "adequate" source control remains debatable. Data from the CIAO and CIAOW studies, in which part of the enrolled cIAIs patients underwent surgical procedures to guarantee adequate source control, suggest that a delay in surgical treatment (> 24 h) is an independent predictor of mortality [23,24]. However, in cases of uncomplicated IAIs, such as uncomplicated acute appendicitis, scheduling an appendectomy within 24 h from the diagnosis does not pose a higher risk of appendiceal perforation than scheduling the procedure within 8 h [25].
Recently, multi-society guidelines for source control in the emergency setting were published [26]. The authors suggest that initial patient stratification is a crucial first step in controlling the source of infection and should consider the patient's current health condition, comorbidities, ongoing therapies (such as anticoagulants or steroids), and immunological status. Patients can thus be categorized into three classes [26]: (1) healthy patients with no or well-controlled comorbidities and no immunocompromise, in whom the infection is the major problem; (2) patients with major comorbidities and/or moderate immunocompromise but clinically stable, in whom the infection can rapidly worsen the prognosis; (3) patients with important comorbidities in advanced stages and/or severe immunocompromise, in whom the infection deteriorates a pre-existing severe clinical condition.
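For illustration only, the three-class stratification above can be read as a simple decision rule. The sketch below is a hypothetical encoding, assuming categorical inputs for comorbidity and immune status; it is not a validated clinical tool from the consensus document.

```python
# Illustrative sketch of the three consensus classes described above.
# Input encodings are invented for this example; real stratification is a
# clinical judgement, not a lookup.

def stratify_patient(comorbidity: str, immune_status: str) -> int:
    """comorbidity: 'none_or_controlled' | 'major' | 'advanced_stage'
    immune_status: 'competent' | 'moderately_compromised' | 'severely_compromised'
    Returns the consensus class (1, 2 or 3)."""
    if comorbidity == "advanced_stage" or immune_status == "severely_compromised":
        return 3  # infection deteriorates a pre-existing severe condition
    if comorbidity == "major" or immune_status == "moderately_compromised":
        return 2  # clinically stable, but infection can rapidly worsen prognosis
    return 1      # healthy patient: the infection itself is the major problem

print(stratify_patient("major", "competent"))  # -> 2
```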
The level of treatment urgency is determined by the affected organ(s), the rate at which symptoms progress, and the underlying physiological stability of the patient. In patients with IAIs, source control should be achieved rapidly, guided by the patient's clinical condition. The time between admission and initiation of the surgical procedure for source control is a critical determinant of survival in patients with sepsis or septic shock of abdominal origin [27]. According to the current evidence, there is no reason to postpone source control for more than 6 h in patients with sepsis or septic shock of abdominal origin [28-31].
Some patients show persistent signs of infection. In these patients, a single operation may not achieve effective source control, and timely re-laparotomy is an important surgical option that may significantly improve the clinical outcome [32]. In adults with IAIs, on-demand re-laparotomy should represent the first-choice strategy.
Surgical strategies following an initial emergency laparotomy include "re-laparotomy on demand" (when required by the patient's clinical condition) and planned re-laparotomy in the 36-48 h post-operative period. An on-demand re-laparotomy is performed only when the patient's condition deteriorates or does not improve and when CT findings suggest a benefit from additional surgery. Planned re-laparotomy is performed every 36-48 h for inspection, drainage, and peritoneal lavage of the abdominal cavity. A randomized trial published in 2007 by Van Ruler et al. [33] compared on-demand and planned re-laparotomy strategies in patients with severe cIAIs. Patients in the on-demand group did not have a significantly lower rate of adverse outcomes than patients in the planned group, but the on-demand strategy achieved a substantial reduction in re-laparotomies, healthcare utilization, and medical costs.
However, accurate and timely identification of patients needing a re-laparotomy is a very difficult decision-making process. At present, there are no clinical criteria to select patients for a re-laparotomy. Several studies have evaluated clinical variables that may be associated with the need for on-demand re-laparotomy in the immediate postoperative period [34-36], without defining standardized criteria.
The open abdomen (OA) may seem a viable option for treating physiologically deranged patients with ongoing sepsis, facilitating subsequent exploration and control of abdominal contents and preventing abdominal compartment syndrome. However, there is still debate about the role of the OA in the management of patients with cIAIs.
Haemodynamic support
Some patients with IAIs may present with sepsis. Sepsis and septic shock can be time-dependent emergencies. Early treatment with aggressive haemodynamic support can limit the damage of sepsis-induced tissue hypoxia and prevent the overstimulation of endothelial activity. Fluid resuscitation increases cardiac output, especially during the early stages of sepsis [37], and increases microvascular perfusion in patients with septic shock, leading to an improvement in organ function [38].
Starting fluid resuscitation as early as possible in the treatment of sepsis is necessary to compensate for capillary leakage of intravascular fluids and for losses through drains, the gastrointestinal tract and insensible routes [39,40]. Although protocolized resuscitation aimed at normalizing predefined physiological variables is no longer recommended [41-43], 30 mL/kg of crystalloids within the first 3 h of recognizing sepsis or septic shock is still suggested, irrespective of the patient's fluid status, comorbidities (e.g., heart failure, end-stage renal disease) and infection site [44]. Because of the increased permeability of the microcirculation and transcapillary leakage, fluid requirements in septic patients may remain significant even after completion of the initial resuscitation phase. This implies the need for further fluid boluses, in addition to maintenance fluids (including enteral fluids, feeding and parenteral nutrition) [45,46]. In patients requiring large amounts of crystalloids to maintain cardiac output and peripheral perfusion, albumin solutions should be considered [44]. Despite the lack of a clear cut-off value for the total volume of crystalloids infused, data from a randomized controlled trial in septic patients showed that net fluid balance may be lower when 20% albumin is added to maintenance fluids. Albumin infusion may provide a survival benefit in the most severe group of septic patients (those with septic shock) [47,48].
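As a minimal arithmetic sketch of the 30 mL/kg target cited above, the following computes the suggested initial crystalloid volume for a given body weight; the function name and example weight are illustrative.

```python
# Arithmetic sketch of the initial crystalloid target quoted above:
# 30 mL/kg within the first 3 h of recognizing sepsis or septic shock.
# Illustrative only; not a bedside tool.

def initial_crystalloid_target_ml(weight_kg: float, ml_per_kg: float = 30.0) -> float:
    """Suggested initial crystalloid volume in millilitres."""
    return weight_kg * ml_per_kg

volume_ml = initial_crystalloid_target_ml(70.0)  # example 70-kg patient
print(f"{volume_ml:.0f} mL (~{volume_ml / 1000:.1f} L) over the first 3 h")
# -> 2100 mL (~2.1 L) over the first 3 h
```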
In addition to clinical improvement (urine output > 0.5 mL/kg/h, reduction of mottling, normalization of capillary refill time), current evidence supports the use of lactate clearance and serum lactate reduction as reliable indicators of resuscitation adequacy after volume loading and cardiac output restoration in septic shock patients [49].
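Lactate clearance is conventionally computed as the percentage fall between two serial measurements. A minimal sketch, assuming the usual definition (the formula is not spelled out in this document):

```python
# Conventional lactate clearance between two serial measurements,
# expressed as a percentage fall from the initial value.

def lactate_clearance_pct(initial_mmol_l: float, current_mmol_l: float) -> float:
    return (initial_mmol_l - current_mmol_l) / initial_mmol_l * 100.0

print(f"{lactate_clearance_pct(4.0, 3.2):.0f}%")  # -> 20%
```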
In patients with abdominal sepsis, fluid resuscitation should be carefully controlled to avoid fluid overload. A recent meta-analysis of observational studies showed that fluid overload was associated with mortality in patients with acute kidney injury and in surgical patients. Cumulative fluid balance was linked to mortality in patients with sepsis, acute kidney injury, and respiratory failure. The mortality risk increased by a factor of 1.19 (95% CI 1.11-1.28) per litre increase in positive fluid balance [50]. The haemodynamic consequences of intravascular congestion and the adverse effects of tissue oedema explain why fluid overload may lead to worse outcomes. Therefore, rather than infusing predefined amounts of fluids, the goal should be individualized fluid administration for every patient, based on evaluating both the need for fluids and the patient's premorbid conditions [51].
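The pooled estimate quoted above (risk ratio 1.19 per litre of positive fluid balance) can be compounded to illustrate how cumulative balance scales the risk. This is a didactic calculation, not a prognostic model:

```python
# Compounding the pooled estimate quoted above: mortality risk rising by a
# factor of 1.19 per litre of positive fluid balance. Didactic only.

def relative_risk(litres_positive: float, rr_per_litre: float = 1.19) -> float:
    return rr_per_litre ** litres_positive

for litres in (1, 3, 5):
    print(f"{litres} L positive balance -> relative risk x{relative_risk(litres):.2f}")
# 1 L -> x1.19, 3 L -> x1.69, 5 L -> x2.39
```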
Especially in patients with abdominal sepsis requiring urgent surgical intervention, the aggressive fluid resuscitation that improves intravascular volume status and organ perfusion may also increase intra-abdominal pressure (IAP) and worsen the inflammatory response, which is associated with a higher risk of complications. Several factors, including the systemic inflammatory response syndrome, increased vascular permeability and aggressive crystalloid resuscitation predisposing to fluid sequestration, can result in peritoneal fluid formation and bowel oedema. Patients with advanced sepsis commonly develop shock bowel, resulting in excessive bowel oedema. These changes, combined with forced closure of the abdominal wall, may raise the IAP, ultimately leading to intra-abdominal hypertension (IAH) [52]. IAP monitoring is a safe and cost-effective tool for identifying patients at risk of developing IAH and abdominal compartment syndrome (ACS) and can guide resuscitative therapy, reducing the mortality and morbidity associated with IAH and ACS. Intravesical saline instillation is the most common technique for monitoring IAP: a simple closed system that can measure bladder pressure in the ICU.
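As background, measured bladder pressures are usually interpreted against the WSACS consensus thresholds (IAH: sustained IAP ≥ 12 mmHg; ACS: sustained IAP > 20 mmHg with new organ dysfunction). Those cut-offs are not stated in this text, so the sketch below should be read as an assumption-laden illustration:

```python
# Screening sketch for bladder-pressure readings. Thresholds follow the
# widely used WSACS consensus definitions (assumed here, not stated in the
# text): IAH = sustained IAP >= 12 mmHg; ACS = sustained IAP > 20 mmHg with
# new organ dysfunction.

def classify_iap(iap_mmhg: float, new_organ_dysfunction: bool) -> str:
    if iap_mmhg > 20 and new_organ_dysfunction:
        return "abdominal compartment syndrome (ACS)"
    if iap_mmhg >= 12:
        return "intra-abdominal hypertension (IAH)"
    return "below IAH threshold"

print(classify_iap(15, False))  # -> intra-abdominal hypertension (IAH)
print(classify_iap(24, True))   # -> abdominal compartment syndrome (ACS)
```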
Vasopressor agents should be administered as soon as possible to restore organ perfusion if blood pressure is not restored after initial fluid resuscitation. Some evidence shows that early administration of vasopressors significantly increases shock control by 6 h [53,54]. Norepinephrine is considered the first-line vasopressor agent to correct hypotension in patients with septic shock. It is a potent α-1 and β-1 adrenergic receptor agonist, producing vasoconstriction and an increased mean arterial pressure (MAP) with minimal effect on heart rate. Norepinephrine is more efficacious than dopamine and may be more effective in reversing hypotension in patients with septic shock. In a systematic review and meta-analysis [55], norepinephrine resulted in an absolute reduction of 11% in 28-day all-cause mortality compared with dopamine. Dopamine was related to major adverse events, including an increased risk of cardiac arrhythmias. The haemodynamic profile of norepinephrine was also more favourable, with decreased lactate levels and increased central venous pressure and urine output compared to other vasopressors. Although most patients show a significant improvement in haemodynamic parameters after starting norepinephrine infusion, a considerable proportion of septic shock patients respond poorly to catecholamines, e.g. requiring large doses (> 0.5 mcg/kg/min of norepinephrine) to reach a MAP of 65 mmHg, or failing to reach the threshold MAP despite high-dose vasopressors and optimized fluid therapy [56]. In these cases, second-line vasopressors may provide some advantage over further increases in the norepinephrine infusion.
Vasopressin is an endogenous peptide hormone produced in the hypothalamus and stored and released by the posterior pituitary gland. Its vasoconstrictive activity is multifactorial and includes binding to V1 receptors on vascular smooth muscle, resulting in increased arterial blood pressure. Vasopressin concentration is elevated in early septic shock but generally decreases to the normal range between 24 and 48 h as the shock continues. Studies have shown that low-dose vasopressin (0.03-0.06 IU/min continuous infusion) reduces mortality in less severe cases of septic shock and has a "catecholamine-sparing" effect, decreasing the norepinephrine dose when both vasopressors are used [57,58]. Thus, if a patient requires a norepinephrine dose > 0.25 mcg/kg/min, adding vasopressin may be appropriate. When using vasopressin, caution is advised because of the potential for limb ischemia.
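The dose thresholds mentioned above and in the following paragraphs (vasopressin add-on beyond 0.25 mcg/kg/min of norepinephrine, poor catecholamine response beyond 0.5 mcg/kg/min, stress-dose hydrocortisone at moderate-to-high norepinephrine doses) can be collected into one illustrative rule. The function below is a sketch assembled from these figures, not an algorithm endorsed by the consensus:

```python
# Illustrative rule assembled from the thresholds in the text. Not a
# treatment algorithm; real decisions also weigh fluids, source control
# and the whole clinical picture.

def vasopressor_advice(norepi_mcg_kg_min: float, map_mmhg: float) -> list[str]:
    advice = []
    if map_mmhg < 65:
        advice.append("uptitrate norepinephrine toward MAP >= 65 mmHg")
    if norepi_mcg_kg_min > 0.25:
        advice.append("consider adding vasopressin 0.03-0.06 IU/min")
        advice.append("consider stress-dose hydrocortisone (200 mg/day)")
    if norepi_mcg_kg_min > 0.5 and map_mmhg < 65:
        advice.append("poor catecholamine response: reassess fluids and source control")
    return advice or ["continue current norepinephrine infusion"]

print(vasopressor_advice(0.4, 63))
```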
Epinephrine's action is dose-dependent, with potent β-1 adrenergic receptor activity and moderate β-2 and α-1 adrenergic receptor activity. At low doses, epinephrine's activity is primarily driven by its action on β-1 adrenergic receptors, resulting in increased cardiac output, decreased systemic vascular resistance (SVR) and variable effects on MAP. At higher doses, however, epinephrine administration results in increased SVR and cardiac output. Studies have shown that continuous epinephrine infusion can increase serum lactate levels regardless of tissue hypoperfusion [59]. In severe shock, epinephrine can impair splanchnic circulation, carrying a risk of splanchnic vasoconstriction and mesenteric ischemia [60].
Using corticosteroids in septic shock has been debated for decades. Corticosteroids increase vascular tone and catecholamine sensitivity [61], and their use has been advocated in patients with septic shock and relative adrenal insufficiency [62]. More recently, the results of two large randomized controlled trials support stress-dose (200 mg/day) hydrocortisone administration in patients receiving moderate-to-high doses of norepinephrine (> 0.25 mcg/kg/min). Although these studies failed to show a reduction in 28-day mortality, corticosteroids provided significantly faster resolution of shock and more rapid weaning from mechanical ventilation without increasing the infection rate [63,64]. The reduction in vasopressor dose may be particularly appealing in patients with septic shock due to cIAIs, who are at risk of mesenteric ischemia.
Infection prevention and control
Strengthening infection prevention and control practices, along with implementing antimicrobial stewardship, aims to optimize antimicrobial use, improve patient outcomes, reduce antimicrobial resistance (AMR), and decrease the spread of infections caused by multidrug-resistant (MDR) organisms.
Infection prevention and control includes essential measures to reduce the incidence of healthcare-associated infections (HAIs) and prevent the emergence and spread of AMR. Enhancing the safety of hospitalized patients necessitates a systematic approach to preventing HAIs and AMR, because HAIs are frequently caused by MDR organisms. The occurrence of HAIs such as surgical site infections (SSIs), catheter-associated urinary tract infections (CA-UTIs), central line-associated bloodstream infections (CLA-BSIs), ventilator-associated pneumonia (VAP), hospital-acquired pneumonia (HAP), and Clostridioides difficile infection continues to escalate at an alarming rate.
Surgical patients may present several risk factors for the acquisition of HAIs. SSIs are the most prevalent HAIs in the surgical setting, especially in patients with IAIs having contaminated or dirty surgical fields [65]. SSIs can cause adverse patient outcomes, including prolonged hospital stays and higher related morbidity and mortality. Therefore, integrating SSI prevention measures before, during, and after surgery is essential. The management of most critically ill surgical patients usually involves the use of invasive devices (e.g., endotracheal tubes, vascular and urinary catheters), predisposing patients to the development of additional HAIs.
Since a significant proportion of HAIs is considered preventable [66] through simple evidence-based measures, such as hand hygiene, barrier precautions, and bundles targeting specific HAIs like CLA-BSIs and VAP, strengthening infection prevention and control practices is essential.
The pathogenetic role of gut colonization by MDR microorganisms and the value of active surveillance are areas of debate. Screening for carriage of carbapenemase-producing Enterobacterales (CPE) is considered an important infection prevention tool, useful for control strategies [67,68]. Some evidence suggests that a patient's risk of CPE colonization should be personalized and assessed according to the local prevalence, the individual risk of acquiring multidrug-resistant pathogens, and linkages with other healthcare providers. Screening for carriage of CPE at admission is highly recommended for patients who within the last 12 months: (1) have been identified as carriers or have had a CPE-related infection, (2) have been hospitalized, (3) have received multiple antibiotic treatments, (4) have a known epidemiological link to a confirmed carrier of CPE, (5) are admitted to augmented care or high-risk units, or (6) have a planned major abdominal surgical intervention (e.g. solid organ transplantation) and/or have been exposed to immunosuppressive treatment (e.g. inflammatory bowel disease).
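The six look-back criteria above amount to an any-of rule. A minimal sketch, with invented field names, of how an admission screen might encode them:

```python
# Any-of encoding of the six admission screening criteria listed above
# (12-month look-back). Field names are invented for illustration.

def should_screen_for_cpe(history: dict) -> bool:
    criteria = (
        history.get("prior_cpe_carriage_or_infection", False),        # (1)
        history.get("hospitalized", False),                           # (2)
        history.get("multiple_antibiotic_treatments", False),         # (3)
        history.get("link_to_confirmed_cpe_carrier", False),          # (4)
        history.get("admitted_to_high_risk_unit", False),             # (5)
        history.get("major_abdominal_surgery_or_immunosuppression", False),  # (6)
    )
    return any(criteria)

print(should_screen_for_cpe({"hospitalized": True}))  # -> True
```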
Antimicrobial therapy
Empirical antibiotic therapy plays a crucial role in the effective management of IAIs; as inadequate initial antimicrobial treatment is associated with less favourable patient outcomes and with the emergence of AMR, it is crucial to prescribe antibiotics correctly, with the right spectrum of activity, at the right time, for the right duration and with the right dosage. Optimizing antibiotic prescribing in the hospital setting improves treatment effectiveness and patient safety. It minimizes the risk of opportunistic infections such as Clostridioides difficile infection and mitigates the risk of selecting antimicrobial-resistant bacteria. The growing emergence of MDR organisms has caused an impending crisis with alarming implications, particularly concerning Gram-negative bacteria. Antimicrobial treatment should be started when a treatable infection has been recognized or is strongly suspected. Misuse and abuse of antimicrobial agents, combined with the inappropriate application of infection prevention and control measures, are recognized as major drivers of the increasing prevalence of AMR.
AMR has become a global threat to public health systems in recent decades. Italy is ranked among the lowest-performing countries in AMR control in Europe by the European Centre for Disease Prevention and Control (ECDC), primarily due to alarmingly high levels of AMR observed in Italian hospitals [69]. In January 2017, a team of experts in antimicrobial stewardship selected by the ECDC planned a four-day visit to Italy to investigate and evaluate the situation in the country regarding prevention and control of AMR [70]. In the report drafted by the ECDC committee, the experts highlighted the threat represented by AMR and the crucial necessity of designing a national plan of action to address this burden.
In 2022, the ECDC published a report evaluating the health impact of infections caused by antibiotic-resistant bacteria in the EU/EEA. The report, covering the period from 2016 to 2020, showed that the overall burden of infections attributed to AMR pathogens, adjusted for population size, was highest in Greece, Italy, and Romania [71]. In Italy, an alarming pattern of resistance involving MDR and extensively drug-resistant Gram-negative bacteria has emerged in recent years, and multi-resistant Enterobacterales are now a major concern in daily clinical practice. This phenomenon may be partially attributed to the high average age of the population, which predisposes to the development and spread of AMR, but it is also likely influenced by a poor perception of the AMR burden. Hence, there is a critical need to raise awareness among Italian healthcare workers regarding the importance of the management of infections, including IAIs.
Methodology
The Multidisciplinary and Intersociety Italian Council for the Optimization of Antimicrobial Use promoted a consensus conference on the antimicrobial management of IAIs, including emergency medicine specialists, radiologists, surgeons, intensivists, infectious disease specialists, clinical pharmacologists, hospital pharmacists, microbiologists and public health specialists.
Relevant clinical questions were constructed by the Organizational Committee to investigate the topic. The expert panel produced recommendation statements based on the best scientific evidence from the PubMed and EMBASE libraries and on experts' opinions.
The statements were planned and graded according to the Grading of Recommendations Assessment, Development and Evaluation (GRADE) hierarchy of evidence. The quality of the overall body of evidence was defined as high, moderate, low, or very low. The strength of each recommendation was defined as weak or strong. For each statement and each algorithm, a consensus among the panel of experts was reached using a Delphi approach. Statements reaching an agreement of ≥ 80% were approved as strong recommendations. On November 10, 2023, the experts met in Mestre (Italy) to debate the statements. After the approval of the statements, the expert panel met via email and virtual meetings to prepare and revise the definitive document. The manuscript was subsequently reviewed by all members, who approved the present manuscript.
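The ≥ 80% agreement rule lends itself to a one-line check. A minimal sketch, assuming one boolean vote per panellist:

```python
# One-line check of the approval rule above: agreement >= 80% of panellists.

def delphi_approved(votes: list[bool], threshold: float = 0.80) -> bool:
    return sum(votes) / len(votes) >= threshold

panel = [True] * 17 + [False] * 3   # 17/20 = 85% agreement
print(delphi_approved(panel))       # -> True
```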
This document represents the executive summary of the consensus conference and comprises three sections. The first section focuses on the general principles of diagnosis and treatment of IAIs. The second section provides twenty-three evidence-based recommendations for the antimicrobial therapy of IAIs. The third section presents eight clinical diagnostic-therapeutic pathways for the most common IAIs. The document has been endorsed by the Italian Society of Surgery.
Italian multidisciplinary Consensus Conference for the antimicrobial treatment of Intra-abdominal Infections
What is the optimal timing to start antibiotic therapy in patients with IAIs?
1. In patients with IAIs, empiric antibiotic therapy should be started after a treatable infection has been recognized or highly suspected. Timing of administration should be based on the patient's clinical status (Moderate quality of evidence, strong recommendation).
Antibiotic therapy in patients with IAIs is initially empiric because pathogen identification and susceptibility testing by standard microbiological culture typically require 24 h or more.
Timing of administration should be based on the patient's clinical status. In non-critically ill patients, empiric antimicrobial therapy should be started when an IAI is recognized or highly suspected. Survival benefit from adequate empiric antibiotic therapy has not been consistently shown, even in patients with Gram-negative bacteraemia [72]. Conversely, in patients with sepsis or septic shock, the prompt administration of appropriate empiric antibiotic therapy can significantly influence the outcome, regardless of the anatomical site of infection.
According to the current literature, each hour of delay in administering appropriate antibiotic therapy is strongly correlated with mortality in patients with septic shock. This relationship is less pronounced in patients with sepsis who do not experience shock [73,74].
When should microbiological cultures be obtained in patients with IAIs?
In patients with IAIs at risk of resistant pathogens and in critically ill patients, cultures from peritoneal fluid should always be obtained. In critically ill hospitalized patients, a minimum of two sets of blood cultures should always be obtained before initiating antimicrobial therapy (Very low quality of evidence, strong recommendation).
Microbiological testing results are crucial in choosing a therapeutic strategy and in guiding a targeted antibiotic treatment. This testing allows clinicians to individualize the spectrum of the antibiotic regimen, broadening it if the initial choice was too narrow, but more commonly narrowing an empiric regimen whose spectrum was too broad.
While microbiological cultures have limited influence on common community-acquired IAIs (CA-IAIs) such as acute appendicitis [75,76], in the current era, marked by the widespread circulation of MDR bacteria in both hospital and community settings, the burden of AMR cannot be disregarded. Microbiological testing is very important in managing hospital-acquired IAIs, where the risk of resistant pathogens is high. Antibiotic therapy reassessment based on microbiological culture and susceptibility testing is suggested in critically ill patients. In critically ill hospitalized patients, the expert panel recommends obtaining a minimum of two sets of blood cultures before initiating antimicrobial therapy. This can improve diagnostic sensitivity: blood cultures revealed a pathogen in more than half of the patients in whom they were performed [77,78], notably higher than the 6% observed by Montravers et al. [79]. Microbiological testing is of great importance not only to define the therapeutic strategy for patients at risk of AMR but also to allow clinicians to better understand the local epidemiology.
Is antibiotic therapy reassessment based on the results of microbiological culture and susceptibility testing needed in patients with IAIs?
In patients with IAIs, antibiotic therapy should be reassessed as soon as possible when the results of microbiological culture and susceptibility testing are available (Moderate quality of evidence, strong recommendation).
Microbiological diagnosis is crucial, because it may help clinicians to prescribe the most appropriate empiric antimicrobial therapy. Simultaneously, recognising the specific pathogen causing the infection can guide personalized and targeted antimicrobial therapy.
Microbiological cultures represent the gold standard for a correct microbiological diagnosis. Unfortunately, they are burdened by extended turnaround times. Delay in starting an appropriate antimicrobial treatment has been associated with poorer patient outcomes, especially in patients with sepsis and septic shock [80-85]. As emphasized in the meta-analysis by Bassetti et al., the use of an inadequate antibiotic regimen has been strongly associated with unfavourable outcomes in critically ill patients (OR 0.44, 95% CI 0.38-0.50), as well as with increased length of hospital stay and hospital costs [86].
The need to speed up diagnostic testing is a central theme in recent policy initiatives to combat sepsis. Clinical microbiology is currently undergoing radical changes in the diagnosis of bloodstream infections (BSIs). Compared to conventional methods, fast clinical microbiology techniques can analyse microbiological samples accurately and identify pathogens along with the possible presence of major resistance genes, leading to the rapid confirmation of clinical suspicions of sepsis and warranting an early switch to targeted antibiotic therapy. By applying these new methods to the diagnostic workflow of septic patients, appropriate antibiotics can be started promptly, resulting in better clinical outcomes and decreased mortality [87-91].
A promising automated assay has recently been developed for IAIs, which can simultaneously identify a large panel of pathogen species (including both bacteria and fungi), toxin genes, and antibiotic resistance markers directly from intra-abdominal solid and liquid samples [92]. A total of 300 clinical samples were tested with this technique by Ciesielczuk et al., showing an overall sensitivity of 89.3% and specificity of 99.5%. Compared with standard methods, the turnaround times for pathogen identification and full antibiotic susceptibility testing were reduced by an average of about 17 and 41 h, respectively [92].
Although this promising approach requires more evidence, the panel suggests that fast microbiological methods should play a supportive role in acquiring a microbiological diagnosis and should not substitute for the current gold standard (standard culture).
What is the optimal duration of antibiotic therapy in patients with IAIs?
IAIs should represent a not-to-be-missed opportunity for antimicrobial stewardship interventions.
When adequate source control has been achieved, the antibiotic treatment duration can be substantially shortened (Moderate quality of evidence, strong recommendation).
Antibiotic therapy alone can be used in selected patients with uncomplicated acute appendicitis without appendicolith, counselling patients about the possibility of high recurrence rates and of misdiagnosed complicated appendicitis (High quality of evidence, strong recommendation).
7. In patients with uncomplicated acute cholecystitis, laparoscopic cholecystectomy should be performed no later than 7 days from presentation. If source control is adequate, it is unnecessary to administer post-operative antibiotic therapy to patients with uncomplicated acute cholecystitis (Moderate quality of evidence, strong recommendation).
In immunocompetent patients with uncomplicated acute diverticulitis, antibiotic therapy may be withheld (Moderate quality of evidence, weak recommendation).
A simple classification divides IAIs into complicated and uncomplicated. Uncomplicated IAIs are characterized by single-organ involvement and do not extend to the peritoneum. When the source of infection is treated effectively by surgical excision, post-operative antimicrobial therapy is unnecessary, as shown in the management of uncomplicated acute appendicitis or cholecystitis [93-95].
While appendicectomy represents the gold standard treatment for acute appendicitis, in recent years there has been a significant increase in the use of antibiotic therapy as a primary treatment. Antibiotic therapy is a safe and effective primary treatment option for patients with uncomplicated acute appendicitis without an appendicolith. However, the long-term effectiveness of this approach is compromised by notable recurrence rates, and it poses a risk of perforation that may increase with preoperative delay when a precise CT diagnosis of uncomplicated appendicitis has not been made [96,97]. Hence, conservative treatment should be reserved for selected patients, while surgery represents the standard of care.
For acute cholecystitis, treatment is predominantly surgical. Two approaches are available for acute uncomplicated cholecystitis. The early option, within a few days of the onset of symptoms, is laparoscopic cholecystectomy, providing immediate, definitive surgical treatment in the same hospital admission once the diagnosis and the patient's surgical fitness are established. The delayed option is performed in a second hospital admission after 6-12 weeks, during which the acute inflammation settles [98]. In this setting, the role of antibiotic treatment is less well defined than in acute appendicitis; it is short in early surgery and longer in delayed surgery.
A systematic review and meta-analysis, including 15 randomized controlled trials (RCTs) and 1669 patients, was published by Lyu et al. [99]. Early laparoscopic cholecystectomy, performed within 7 days of presentation, proved as safe and effective as delayed cholecystectomy for acute cholecystitis. No significant differences between the two approaches were found regarding bile duct injuries, wound infections, total complications, or conversion to open surgery. However, the pooled results showed that early cholecystectomy was associated with a significantly shorter total duration of hospital stay, without a significant difference in postoperative hospital stay. A meta-analysis published in 2021 did not confirm that immediate cholecystectomy performed within 24 h of admission reduces post-operative complications [100]. Importantly, the analysis of the literature [101] showed that the definition of early cholecystectomy varied, ranging from surgery performed as soon as possible within 24 h of admission to surgery within 1 week of the onset of symptoms. The available evidence has validated a window of 7 days from presentation for performing early cholecystectomy [101].
In recent years, there has been debate about the need for antibiotic therapy in acute uncomplicated diverticulitis. In 2015, the World Society of Emergency Surgery (WSES) Acute Diverticulitis Working Group proposed a straightforward CT-guided classification of left colon acute diverticulitis. This classification aims to guide clinicians in the day-to-day management of acute diverticulitis and is intended to be universally acceptable. The WSES classification divides acute diverticulitis into two groups: uncomplicated and complicated. In uncomplicated acute diverticulitis, the infection does not involve the peritoneum, whilst in complicated acute diverticulitis, the infective process extends beyond the colon [102].
In recent years, several studies have shown that in patients with mild uncomplicated diverticulitis, antibiotic treatment was not superior to conservative treatment without antibiotics in terms of clinical resolution. The current consensus is that, in immunocompetent patients, uncomplicated acute diverticulitis may be a self-limiting condition in which local host defences can eradicate the inflammatory process without antibiotics. In 2012, Chabok et al. published a multicenter randomized trial involving ten surgical departments in Sweden and one in Iceland [103]. A total of 623 patients with CT-confirmed acute uncomplicated left-sided diverticulitis were enrolled and randomized to treatment with (314 patients) or without (309 patients) antibiotics. In this study, antibiotic treatment for acute uncomplicated diverticulitis neither sped up recovery nor prevented complications or recurrence. Antibiotics should therefore be reserved for the treatment of complicated diverticulitis. A prospective cohort study [104] evaluated the safety and efficacy of non-antibiotic treatment for patients with CT-proven uncomplicated acute diverticulitis during a 30-day follow-up period. Overall, 161 patients were enrolled in the study, and 153 (95%) completed the 30-day follow-up. A total of 14 (9%) patients presented at admission with pericolic gas. Altogether, 140 (87%) patients were treated as outpatients, and 4 (3%) of them were admitted to the hospital during the follow-up. None of the patients developed complicated diverticulitis or required surgical intervention, but a median of 2 days after inclusion, antibiotics were given to 14 (9%) patients (6 orally, 8 intravenously). A recent Dutch randomized controlled trial of observational versus systemic antibiotic treatment (the DIABOLO trial) [105] for a first episode of CT-proven acute left-sided colonic diverticulitis (ALCD), Hinchey stages 1a and 1b, confirmed that observational treatment without antibiotics did not prolong recovery and could be appropriate in these patients.
In patients with cIAIs, the duration of antibiotic therapy should be significantly shortened, based on the adequacy of source control (High quality of evidence, strong recommendation).
11. In patients with cIAIs undergoing adequate source control, a 4-day fixed-duration course of antibiotic therapy should be administered. In the setting of complicated acute appendicitis, the duration of antibiotic therapy may be further shortened in selected patients (High quality of evidence, strong recommendation).
Biomarkers such as PCT may guide antibiotic duration in patients with ongoing signs of infection and act as a valuable tool to raise suspicion of an unfavourable evolution and to plan a re-laparotomy (Low quality of evidence, weak recommendation).
In the event of cIAIs, the infectious process proceeds beyond the organ into the peritoneum, causing either localized or diffuse bacterial peritonitis. Treatment of patients with cIAIs involves both source control and antibiotic therapy. Antibiotics can prevent local and hematogenous spread and reduce late complications.
The value of a short duration of treatment in patients with cIAIs is well documented. In the setting of cIAIs, a short course of antibiotic therapy after adequate source control is considered a safe option. The prospective trial (STOP-IT) by Sawyer et al. [106] demonstrated that in patients with cIAIs undergoing adequate source control, outcomes after approximately 4 days of fixed-duration antibiotic therapy were similar to outcomes after a longer course of antibiotics extending until after the resolution of physiological abnormalities.
Probably, in the setting of acute appendicitis, the duration of antibiotic therapy could be further shortened. The results of an open-label, non-inferiority trial investigating the duration of antibiotic therapy in patients with complex appendicitis (aged ≥ 8 years) were published recently. Two days of postoperative intravenous antibiotics for complex appendicitis was non-inferior to 5 days in terms of infectious complications and mortality within 90 days, based on a non-inferiority margin of 7.5% [107].
Short courses of antibiotic therapy have also been shown to be feasible in patients with post-operative peritonitis [108]. A multicenter prospective randomized trial conducted in 21 French ICUs between May 2011 and February 2015 compared the efficacy and safety of 8-day versus 15-day antibiotic therapy in critically ill patients with post-operative IAIs. Patients treated for 8 days had a higher median number of antibiotic-free days than those treated for 15 days (p < 0.0001). Equivalence was established in terms of 45-day mortality (rate difference 0.038, 95% CI 0.013-0.061). The treatments did not differ in terms of ICU and hospital length of stay, emergence of multidrug-resistant bacteria, or re-operation rate. The trial showed that a short course of antibiotic therapy in critically ill ICU patients with post-operative IAIs reduced antibiotic exposure, and continuation of treatment until day 15 was not associated with any clinical benefit.
Recently, a retrospective cohort study of 42 adult surgical ICU patients with BSIs secondary to IAIs was published. Cessation of antibiotic therapy within 7 days of adequate source control was not associated with an increased incidence of recurrence [109].
Considering the lack of generalizable data regarding the optimal duration of therapy for critically ill patients, practices in this setting may vary significantly. Many critically ill surgical patients receive antibiotics for longer than necessary. In patients with evidence of ongoing infection, an individualized approach is mandatory. Clinical judgment, together with monitoring of the patient's inflammatory response through biomarker trends (CRP and PCT), can support decisions about continuing, narrowing, or stopping antibiotic therapy.
Patients who have persistent signs of infection or systemic illness after 7 days of antibiotic therapy warrant a second-level diagnostic re-investigation to determine whether additional surgical intervention or percutaneous drainage is necessary to address an ongoing uncontrolled source of infection or antibiotic treatment failure [110].
Patients enrolled in the STOP-IT trial were evaluated retrospectively to identify risk factors associated with treatment failure [111]. This subgroup analysis identified risk factors for treatment failure, including corticosteroid use, an Acute Physiology and Chronic Health Evaluation II score ≥ 5, HAIs, or a colonic source of IAI [111]. However, among patients with these risk factors, there was no significant difference in treatment failure rates between the randomized groups. These results suggest that individuals at high risk of treatment failure did not benefit from a longer duration of antibiotic therapy.
In recent years, PCT has proven useful in individualizing the duration of antibiotic use. Evidence shows that this pro-inflammatory biomarker safely shortens antibiotic duration in critically ill patients in the ICU [112-116]. Some evidence supports a role for PCT in guiding the duration and/or cessation of antibiotic therapy in cIAIs. Three studies showed that a PCT-based algorithm could decrease antibiotic exposure in patients with cIAIs. In 2014, Huang et al. [117] published a prospective study investigating whether a PCT-based algorithm could safely reduce antibiotic exposure in patients with cIAIs undergoing surgery. PCT levels were obtained pre-operatively, on postoperative days 1, 3, 5, and 7, and on subsequent days if needed. Antibiotic treatment was discontinued if PCT was < 1.0 ng/L or had decreased by 80% versus day 1, with resolution of clinical signs. The PCT algorithm significantly shortened the time to antibiotic discontinuation (p < 0.001, log-rank test): the median duration of antibiotic treatment was 3.4 days (interquartile range [IQR] 2.2 days) in the PCT group versus 6.1 days (IQR 3.2 days) in the control group. In 2015, Maseda et al. [118] published a retrospective study including 121 consecutive patients with cIAIs and a controlled infection source, requiring surgery and at least a 48-h surgical ICU admission. Treatment was shorter in the PCT-guided group (5.1 ± 2.1 versus 10.2 ± 3.7 days, p < 0.001), without differences between patients with and without septic shock. PCT guidance produced a 50% reduction in the duration of antibiotic therapy (p < 0.001, log-rank test). In 2017, Slieker et al. [119] published another study investigating whether PCT levels could tailor postoperative antibiotic therapy in patients with cIAIs undergoing surgery. In the subgroup of patients with cIAIs caused by gastrointestinal perforation, the duration of antibiotic treatment was reduced in the PCT-driven algorithm arm (7 days in the PCT group versus 10 days in the control group; p = 0.065).
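The stopping rule used by Huang et al., as described above, combines an absolute cut-off with a relative decline. A minimal sketch, keeping the units as quoted in the text:

```python
# Stopping rule of Huang et al. as quoted above: discontinue antibiotics
# when PCT < 1.0 ng/L, or when PCT has fallen by >= 80% versus day 1,
# provided clinical signs have resolved.

def may_stop_antibiotics(pct_day1: float, pct_today: float,
                         clinical_signs_resolved: bool) -> bool:
    below_cutoff = pct_today < 1.0
    fell_80_percent = pct_today <= 0.2 * pct_day1
    return clinical_signs_resolved and (below_cutoff or fell_80_percent)

print(may_stop_antibiotics(12.0, 2.0, clinical_signs_resolved=True))
# -> True (an ~83% fall versus day 1)
```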
How should the correct antibiotic be chosen in patients with IAIs?
In the setting of IAIs, an inappropriate choice of initial antibiotic therapy leads to more clinical failures, resulting in a longer hospital stay and higher costs of hospitalization compared with appropriate initial antibiotic therapy [120-123]. Before the causative agent(s) and susceptibilities are known, the optimal choice of antibiotic therapy depends on the local prevalence of resistant bacteria and on the patient's risk factors for them, as well as on available microbiological data (e.g. colonization status).
The major pathogens involved in CA-IAIs derive from the patient's own flora and are generally predictable; they include Enterobacterales such as Escherichia coli and Klebsiella spp., viridans group streptococci, and anaerobes (especially Bacteroides fragilis). In addition, Enterococcus spp. are Gram-positive bacteria frequently isolated in CA-IAIs, even if their pathogenic role remains uncertain [124]. In 2012, the Dutch Peritonitis Study Group analysed all patients from the RELAP trial and found that the isolation of Gram-positive cocci [125], predominantly Enterococcus spp., was associated with worse outcomes, although in cIAIs, microbial profiles did not predict ongoing abdominal infection after the initial emergency laparotomy [126].
Generally, the key factors for predicting the presence of resistant bacteria in patients with cIAIs include acquisition of the infection in a healthcare setting, recent antibiotic therapy, prior infection by MDR bacteria and gut colonization [127]. Patients with post-operative peritonitis have increased mortality due to the severity of the clinical condition, underlying comorbidity, frequent atypical presentation, and a significant incidence of resistant bacteria.
Over the past two decades, AMR has emerged as a global burden. The rise in infections caused by resistant Gram-negative bacteria poses an escalating threat to public health worldwide. These infections are challenging to treat and are associated with elevated morbidity and mortality rates. To identify the risk factors for resistant bacteria in post-operative peritonitis, Augustin et al. reviewed data from 100 patients hospitalized in the ICU. Logistic regression analysis revealed that the use of broad-spectrum antibiotics between the initial intervention and re-operation was the sole significant risk factor for the emergence of resistant bacteria [128]. In a retrospective study, data from 242 patients with cIAIs (88 community-acquired, 154 post-operative cases) treated in the ICU were analysed [129]. Enterococci were isolated in 47.1% of all patients, followed by E. coli (42.6%), other Enterobacterales (33.1%), anaerobes (29.8%), and Candida spp. (28.9%). Susceptibility rates were lower in post-operative than in community-acquired cases.
The epidemiology of intra-abdominal flora in critically ill patients with secondary and tertiary abdominal sepsis was studied in a 5-year prospective observational cohort study [130] performed in patients admitted to the ICU with abdominal sepsis syndrome. Abdominal fluid samples for microbiological analysis were collected from 221 of the 239 patients admitted with abdominal sepsis. Aerobic Gram-negative bacteria (AGNB) were isolated in 53% of the abdominal fluid cultures; 45% of these were E. coli, and in 36% of patients more than one AGNB was found. The highest incidence of AGNB was observed in colorectal perforations (68.6%) and perforated appendicitis (77.8%), while the lowest was observed in gastroduodenal perforations (20.5%). Gram-positive bacteria were found most frequently in colorectal perforations (50.0%). Candida spp. were found in 19.9% of patients, with 59.1% of isolates represented by Candida albicans. The incidence of Candida was higher in gastroduodenal perforations (41%) and lower in colorectal perforations (11.8%). Anaerobic bacteria were cultured in 77.8% of patients with perforated appendicitis. Montravers et al. [131] evaluated the dynamic change of microbial flora in persistent peritonitis and observed a progressive shift of the peritoneal flora with the number of reoperations. The proportion of patients harbouring MDR strains increased from 41% at index surgery to 49% at the first, 54% at the second (p = 0.037), and 76% at the third re-operation (p = 0.003 versus index surgery), highlighting the necessity and utility of collecting intraperitoneal specimens at every re-intervention.
In both community- and hospital-acquired IAIs, the most common resistance threat is posed by extended-spectrum beta-lactamases (ESBLs), which are becoming alarmingly prevalent also in the community setting [133,134]. ESBLs are enzymes capable of hydrolysing and inactivating a wide variety of beta-lactam drugs, including first-, second-, and third-generation cephalosporins, penicillins, and aztreonam. ESBLs do not hydrolyse cephamycins or carbapenems [135,136]. Most ESBLs of clinical interest are encoded by genes located on plasmids; therefore, resistance to multiple other drug classes, including aminoglycosides and fluoroquinolones, may be co-expressed [136].
The main risk factors for infections by ESBL-producing organisms are: (1) hospitalization for at least 48 h within the last 90 days, (2) use of broad-spectrum antibiotics for 5 days within the last 90 days, (3) gut colonization by ESBL producers within 90 days, (4) admission from healthcare settings with a high incidence of MDR bacteria (e.g. elderly people living in long-term facilities) [137]. According to the latest annual report from the EARS-NET network of national surveillance systems across EU/EEA countries, the AMR situation in 2022 varied widely depending on the bacterial species and the geographical area, and a latitude-dependent gradient in the prevalence of AMR was highlighted: countries in northern Europe reported the lowest rates, while countries in southern and eastern Europe reported the highest. In this report, high rates of third-generation cephalosporin-resistant E. coli were found in Greece, Italy, Spain, and the eastern European countries. The highest rates of third-generation cephalosporin-resistant K. pneumoniae were reported in Italy, Greece, Croatia, Poland, Slovakia, Romania and Bulgaria [69]. However, knowledge of the national epidemiology is not sufficient to accurately assess a patient's risk of ESBL-related infections. As highlighted in the latest AR-ISS report published by the Italian Antimicrobial Resistance Surveillance Agency [138], third-generation cephalosporin-resistant E. coli rates reported in 2022 vary significantly among the Italian regions, following a North-South gradient with the highest rates detected in Molise (42.6%), Calabria (39.9%), Campania (38.2%), Basilicata (35.2%), Sicily (35.1%), Puglia (34.9%), and Lazio (31.8%). In all northern regions, a prevalence lower than 26% is observed, with the lowest values in the Autonomous Province of Bolzano (10.9%) and Friuli-Venezia Giulia (11.9%).
Carbapenems are generally considered the empiric agents of choice for treating patients with the most common ESBL-producing Enterobacterales. To avoid excessive carbapenem use, however, de-escalation to other agents, such as piperacillin-tazobactam when a MIC ≤ 8 mg/L (according to the EUCAST breakpoint) is detected, can be considered. Several studies have compared piperacillin-tazobactam with carbapenems in the treatment of infections caused by ESBL-producing Enterobacterales. The MERINO trial [139] compared the efficacy of piperacillin-tazobactam versus meropenem in the treatment of BSIs caused by ceftriaxone-resistant E. coli or K. pneumoniae, showing an overall 30-day mortality rate threefold higher in the piperacillin-tazobactam arm than in the meropenem arm (12.3% versus 3.7%, p = 0.90). Since the low mortality rate in the meropenem group was an unexpected finding, the results of this study have been debated, and several issues may have influenced its outcomes [140,141]. Among the debated biases, the pharmacokinetic/pharmacodynamic (PK/PD) target attainment of piperacillin-tazobactam was not optimized, as the trial favoured intermittent 1-h infusion over a prolonged-infusion protocol. Also, from a microbiological point of view, the use of the E-test to determine piperacillin-tazobactam susceptibility led to an elevated percentage of OXA-1-producing pathogens being incorrectly identified as piperacillin-tazobactam susceptible (the E-test method cannot detect OXA-1). In a second comparative study (the MERINO-2 trial), 72 patients with BSIs due to chromosomal AmpC producers were enrolled in a multicenter randomized controlled trial and assigned 1:1 to receive piperacillin-tazobactam or meropenem. Piperacillin-tazobactam led to more microbiological failures, although fewer microbiological relapses were observed [142]. Although several observational studies showed no significant difference in efficacy and mortality between piperacillin-tazobactam and carbapenems among patients with BSIs due to ESBL producers [143-146], the use of a carbapenem (imipenem or meropenem) for severe infections caused by third-generation cephalosporin-resistant Enterobacterales is generally recommended in critically ill patients [147].
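The de-escalation criterion above (piperacillin-tazobactam when the MIC is ≤ 8 mg/L by the EUCAST breakpoint, while keeping a carbapenem for critically ill patients with severe infections) can be written as a two-input check. Names and the gating on critical illness are paraphrases for illustration:

```python
# Two-input sketch of the carbapenem-sparing criterion described above.
# The hard exclusion of critically ill patients is a paraphrase of the
# text's recommendation, not a universal rule.

def can_deescalate_to_pip_tazo(mic_mg_l: float, critically_ill: bool) -> bool:
    return (not critically_ill) and mic_mg_l <= 8.0  # EUCAST breakpoint

print(can_deescalate_to_pip_tazo(4.0, critically_ill=False))  # -> True
print(can_deescalate_to_pip_tazo(4.0, critically_ill=True))   # -> False
```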
Tigecycline remains a viable treatment option for cIAIs due to its favourable in vitro activity against anaerobic organisms, enterococci and ESBL producers, and to the high concentration achieved in the biliary tract [148]. However, in numerous trials, excess mortality was seen in patients treated with tigecycline compared with other drugs: in 12 of 13 phase 3 and 4 comparative clinical trials [148], all-cause mortality was higher in the tigecycline group than in the comparator group. Study-level and patient-level analyses identified that patients in the hospital-acquired pneumonia trial, particularly those with ventilator-associated pneumonia and baseline bacteraemia, were at a higher risk of clinical failure and mortality. A mortality analysis investigated the association of baseline factors in intra-abdominal infections, including severity of illness at study entry and treatment assignment, with clinical failure and mortality; the factors associated with death did not include tigecycline [148]. Because of its high concentration in the biliary tract, and despite its poor performance in bacteraemic patients, tigecycline could be considered an alternative to beta-lactam agents in the setting of IAIs, as part of a combination antibiotic treatment when a secondary bloodstream infection is suspected.
Aminoglycosides are particularly effective against aerobic Gram-negative bacteria and can act synergistically against certain Gram-positive organisms. They are active against Pseudomonas aeruginosa but ineffective against anaerobic bacteria. Because of their serious toxic side effects, including nephrotoxicity and ototoxicity, and considering their poor penetration into ascitic fluid and the loss of bactericidal activity at acidic pH, most authors do not recommend aminoglycosides for the routine empiric treatment of IAIs.
Fosfomycin is a broad-spectrum antibiotic with a wide therapeutic range and characteristic pharmacological properties [149]. Fosfomycin penetrates excellently into various tissues and, in Europe, is frequently administered in combination with other antibiotics to combat severe bacterial infections. It exerts bactericidal activity under anaerobic conditions, such as within encapsulated purulent lesions, and has negligible protein binding. These characteristics constitute the rationale for choosing fosfomycin to treat abscesses when source control is not feasible.
Finally, ceftolozane-tazobactam and ceftazidime-avibactam (CAZ-AVI) have shown good efficacy in treating patients with IAIs caused by ESBL-producing Enterobacterales, CAZ-AVI in particular owing to the stronger enzymatic inhibition provided by avibactam [146]. They may be useful in patients with infections by ESBL-producing Enterobacterales as part of carbapenem-sparing regimens [148]. In the setting of IAIs, however, CAZ-AVI and ceftolozane-tazobactam should not be used alone, because they are not active against anaerobes or Gram-positive bacteria such as streptococci and methicillin-susceptible Staphylococcus aureus.
Carbapenemase-producing Enterobacterales (CPE), such as K. pneumoniae, are rapidly emerging as a major source of MDR infections worldwide and pose a serious threat in clinical situations where the administration of effective empiric antibiotics is essential to prevent mortality following bacteraemia and infections in neutropenic and immunocompromised patients. According to the latest annual report from EARS-NET, high rates of K. pneumoniae carbapenemase (KPC) have been found in Greece, Italy, Portugal, Croatia, Poland, Romania and Bulgaria [69]. In Italy, the latest AR-ISS report reveals high percentages of KPC in the southern regions, with a marked North-South gradient [highest in Calabria (59.2%), lowest in the Autonomous Province of Bolzano (1.3%)] [138]. In the last five years, several new antibiotics with predominant activity against Gram-negative bacteria have been approved by the U.S. Food and Drug Administration and the European Medicines Agency. Some prospective and several retrospective observational studies support CAZ-AVI in the treatment of BSIs, cIAIs, and complicated urinary tract infections, in settings with ICU admission rates of up to 60% [150-157]. Van Duin et al. [150] prospectively assessed 137 CPE infections (38 treated with CAZ-AVI versus 99 with colistin-based regimens). In patients treated with CAZ-AVI versus colistin, 30-day hospital mortality after starting treatment was 9% versus 32%, respectively. Moreover, at 30 days, patients treated with CAZ-AVI, compared with those treated with colistin, had a 64% probability of a better outcome. Tumbarello et al. demonstrated the effectiveness of CAZ-AVI against KPC-producing K. pneumoniae (KPC-Kp) infections in two retrospective observational studies conducted in Italy [151,152]. The first study enrolled 138 patients starting CAZ-AVI salvage therapy after first-line treatment (median, 7 days) with other antibiotics [151]. CAZ-AVI was administered with at least one other active antibiotic in 109 (78.9%) cases. Thirty-day mortality among the 104 patients with BSIs secondary to KPC-Kp infections was significantly lower than that of a matched cohort whose KPC-Kp bacteraemia had been treated with drugs other than CAZ-AVI (36.5% versus 55.8%, p = 0.005). The second study analysed 577 adult patients with BSIs (391) or non-bacteraemic infections, involving mainly the urinary tract, the lower respiratory tract and intra-abdominal structures [152]. All received CAZ-AVI either alone (165) or with ≥ 1 other active antibiotic (412). The all-cause mortality rate 30 days after infection onset was 25% (146/577). There was no significant difference in mortality between patients managed with CAZ-AVI alone and those treated with combination regimens (26.1% versus 25.0%, p = 0.79). Only 35 of the 577 BSIs were associated with IAIs; this does not give sufficient information about the role of the drug in this specific setting, but it confirms how often IAIs are compartmentalized. CAZ-AVI is active against most KPC- and OXA-48-like-producing CPE and currently represents the preferred treatment option for infections by OXA-48-like producers [158].
Meropenem-vaborbactam is another agent active against KPC-producing CPE. A phase III RCT (TANGO II) [159] assessed 47 patients affected by CPE infections. Of these, 32 were treated with meropenem-vaborbactam and the other 15 with the best available therapy (including mono- or combination therapy with colistin, carbapenems, aminoglycosides, tigecycline, or ceftazidime-avibactam alone). Meropenem-vaborbactam showed a better clinical cure rate (65.6% versus 33.3%; p = 0.03) and a lower mortality rate (15.6% versus 33.3%; p = 0.20) compared to the best available therapy. However, only 15.6% of the patients enrolled in this RCT required ICU admission. Other evidence for meropenem-vaborbactam as targeted therapy for CPE infections in critically ill patients comes from observational studies, in which ICU admission rates ranged from 65.4% to 70% [160,161].
Imipenem-relebactam, another agent active against KPC, was compared to imipenem plus colistin in the treatment of hospital-acquired and ventilator-associated bacterial pneumonia, cIAIs, and complicated urinary tract infections caused by imipenem-non-susceptible bacteria (RESTORE-IMI 1 trial) [162]. A favourable overall response was observed in 71% of imipenem-relebactam and 70% of colistin + imipenem patients. A favourable day-28 clinical response was observed in 71% of imipenem-relebactam and 40% of colistin + imipenem patients, and 28-day mortality was 10% with imipenem-relebactam and 30% with colistin + imipenem. Unfortunately, only 2 patients per arm with cIAIs were enrolled. Serious adverse events, as well as nephrotoxicity, occurred more often in patients treated with imipenem and colistin. Imipenem-relebactam also retains good in vitro activity against P. aeruginosa. Of 1912 P. aeruginosa isolates recovered as part of a multicenter Canadian surveillance study [163], 166 (8.7%) and 495 (25.9%) demonstrated difficult-to-treat resistance and MDR phenotypes, respectively. Several antibiotics were tested against these isolates. Imipenem-relebactam susceptibility was 47.0% for difficult-to-treat-resistant isolates and 71.5% for MDR isolates, second only to ceftolozane-tazobactam and better than CAZ-AVI.
However, the emergence of resistance to CAZ-AVI in CPE has been repeatedly reported. Several KPC variants have emerged, with changes (substitutions, insertions, or deletions) in the amino acid sequence compared to wild-type KPC enzymes (e.g. KPC-2 and KPC-3), conferring reduced susceptibility or resistance to CAZ-AVI [166]. So far, more than 200 KPC variants have been reported worldwide, with several reports of resistance extending also to meropenem-vaborbactam and sometimes to imipenem-relebactam. This evidence increases the difficulty of optimizing infection treatment and of preventing the emergence of new resistant phenotypes/genotypes [167,168].
Eravacycline, a broad-spectrum fluorocycline tetracycline antibiotic, was investigated for the treatment of cIAIs in two prospective randomized clinical trials, in which the clinical cure rate at the test-of-cure visit in the eravacycline population was non-inferior to that of ertapenem and meropenem (IGNITE 1 trial: 87.0% for eravacycline versus 88.8% for ertapenem; IGNITE 4 trial: 90.8% versus 91.2%) [169,170]. A very low risk of C. difficile infection after eravacycline treatment was also observed [170,171]. In a recent meta-analysis, Meng et al. reviewed the results of 25 randomized clinical trials to evaluate the efficacy and safety of eravacycline compared with seven other regimens commonly used for cIAI treatment (tigecycline, meropenem, ertapenem, ceftazidime/avibactam + metronidazole, piperacillin/tazobactam, imipenem/cilastatin, and ceftriaxone + metronidazole). In terms of microbiological response rate, eravacycline was significantly better than tigecycline [tigecycline versus eravacycline: RR = 0.82, 95% CI (0.65, 0.99)], and there was no significant difference between the other six regimens and eravacycline (p > 0.05). In terms of safety, the incidence of serious adverse events, the discontinuation rate, and the all-cause mortality of eravacycline were not significantly different from those of the other seven regimens (p > 0.05) [172]. Both the European Society of Clinical Microbiology and Infectious Diseases (ESCMID) and the Infectious Diseases Society of America (IDSA) guidance suggest using eravacycline as an alternative option for the treatment of infections secondary to ESBL-producing strains and CPE (including KPC-, metallo-beta-lactamase [MBL]- and OXA-48-producing strains), except for the treatment of bloodstream or urinary tract infections [173]. Finally, in the real-world experience reported by Hobbs et al., a possible clinical efficacy of eravacycline in treating infections due to carbapenem-resistant Acinetobacter baumannii was hypothesized, even though further evidence is needed [171]. Like tigecycline, eravacycline presents a large volume of distribution with excellent tissue penetration, which is supposed to limit its use in primary BSIs [174], and high biliary secretion. However, in a post-hoc analysis conducted by Lawrence et al., eravacycline demonstrated a microbiological eradication rate similar to that of comparator agents in patients with cIAIs associated with secondary bacteraemia [175].
Cefiderocol is a siderophore cephalosporin. It has shown excellent broad-spectrum antibacterial activity, in part due to its innovative mechanism of cell entry. However, some mechanisms of resistance to cefiderocol have already been identified, and reduced susceptibility has developed during patient treatment. Therefore, the clinical use of cefiderocol should be rational. In a phase 3 RCT, 150 patients affected by carbapenem-resistant Gram-negative infections were randomized to cefiderocol (n = 101) or the best available therapy, including combinations of aminoglycosides, carbapenems, colistin, fosfomycin or tigecycline (n = 49) [176]. Clinical and microbiological cure rates did not significantly differ between the two groups. However, the number of documented CPE infections was limited and the number of patients with IAIs enrolled was low (5 in the cefiderocol arm and 4 in the best available therapy arm). Moreover, more deaths occurred in the cefiderocol group, primarily in the patient subset with Acinetobacter spp. infections. Several non-controlled studies on cefiderocol were published afterwards [177-180]. Although cefiderocol represents a very useful therapeutic option, further clinical data are needed to better understand the role of this novel agent in the treatment of extensively drug-resistant infections [181,182].
MBLs differ structurally from the other beta-lactamases in their requirement for a zinc ion at the active site. They are all capable of hydrolysing carbapenems. In contrast to the serine beta-lactamases, MBLs have poor affinity or hydrolytic capability for monobactams and are not inhibited by clavulanic acid, tazobactam, avibactam, relebactam or vaborbactam. The most common metallo-beta-lactamase families encountered in Gram-negative bacilli include IMP (active-on-imipenem beta-lactamase), VIM (Verona integron-encoded metallo-beta-lactamase) and NDM (New Delhi metallo-beta-lactamase) [148].
The lack of in vitro activity of ceftazidime-avibactam against MBLs, and the fact that many MBL producers also co-produce other beta-lactamases (such as ESBLs, AmpC, OXA-48, etc.), have led to the hypothesis that combining ceftazidime-avibactam with aztreonam, which is not hydrolysed by MBLs, could be effective [183-186]. A real potential breakthrough in the treatment of MBL-producing organisms could be represented by the recent development of a new antibiotic combining the monobactam aztreonam with the non-beta-lactam beta-lactamase inhibitor avibactam.
Aztreonam-avibactam is currently under clinical development for the treatment of serious infections caused by MBL-producing Enterobacterales [181].
Alarming rates of resistance to many antibiotics in hospitals worldwide have been reported for non-fermenting Gram-negative bacteria, including Pseudomonas aeruginosa, Stenotrophomonas maltophilia, and Acinetobacter baumannii. These bacteria are intrinsically resistant to many antibiotics; moreover, they can acquire additional resistance to other important antibiotic agents.
Among Gram-positive bacteria involved in IAIs, the impact of Enterococcus spp. on mortality remains uncertain [187]. While the role of Enterococcus spp. in determining breakthrough infections or superinfections in high-risk patients is well documented, their pathogenic impact on IAIs in low-risk patients is still debated [188]. Since Enterococcus spp. are believed to possess few virulence factors, it may be supposed that an increase in virulence is obtained through a synergistic effect with other bacteria such as E. coli and anaerobes [187,189]. Various observational studies have highlighted that treatment failure in patients infected by Enterococcus spp. relates to a poorer outcome; however, there is no consistent evidence that routine use of adequate anti-enterococcal coverage improves the survival rate, suggesting that Enterococcus spp. isolation represents a negative prognostic marker rather than playing a causative role in the infection [190,191]. In their meta-analysis, Zhang et al. [126] found that enterococci-covering antibiotic regimens provided no improvement in treatment success compared with control regimens (RR, 0.99; 95% CI, 0.97-1.00; p = 0.15), with similar mortality and adverse effects in both arms. Baseline characteristic analysis revealed that most of the patients with IAIs enrolled in RCTs were young, lower-risk CA-IAI patients with relatively low APACHE II scores. Interestingly, malignancy, corticosteroid use, surgical intervention, antibiotic treatment, admission to the ICU, and indwelling urinary catheters could predispose patients with IAIs to a substantially higher risk of enterococcal infection. Also, the acquisition of an IAI in the hospital setting seemed to represent a risk factor for enterococcal infection (OR, 2.81; 95% CI, 2.34-3.39; p < 0.001) [126]. In the study by Dupont et al. [189], patients older than 75 years admitted to the ICU with Enterococcus spp. isolation presented higher SAPS2 and SOFA scores compared with the enterococci-negative control group (p < 0.001 for both scores), confirming that the identification of these bacteria represents an independent risk factor for ICU mortality. Similarly, in the study by Morvan et al., ICU patients had higher 30-day mortality when Enterococcus spp. was isolated, especially when species other than Enterococcus faecalis (mostly E. faecium) or a polymicrobial infection were detected [187]. Consistent with the above-described evidence, Sanders et al. concluded that the only parameter able to predict Enterococcus spp. isolation was the APACHE II score (unadjusted odds ratio [OR] 1.07; p < 0.01) [192]. In the multicentre study by Fabre et al., in which approximately 65% of patients in both groups had CA-IAIs, there was no difference in the 30-day composite outcome between cIAI patients with E. faecalis isolated from intra-abdominal cultures who were treated with piperacillin/tazobactam and those receiving ertapenem therapy (which cannot guarantee adequate anti-enterococcal coverage) (OR 0.80; 95% CI 0.39-1.63) [193].
An extensive literature review found some evidence in favour of empirical therapy with enterococcal coverage for IAIs in the following cases: 1) immunocompromised patients or patients with hospital-acquired/post-operative cIAIs; 2) patients with cIAIs who have previously received cephalosporins or other broad-spectrum antibiotics selecting for Enterococcus spp.; and 3) patients with cIAIs and valvular heart disease or prosthetic intravascular material, who are at high risk of endocarditis. The ideal therapeutic regimen for these high-risk patients remains to be determined, but empirical therapy directed against enterococci should be considered [194].
Nearly all strains of E. faecalis, including some strains of vancomycin-resistant E. faecalis, are susceptible to ampicillin. In patients with IAIs, E. faecium is increasingly encountered, particularly in patients with hospital-acquired IAIs. In contrast to E. faecalis, nearly all strains of E. faecium are resistant to ampicillin, and the growing prevalence of vancomycin-resistant strains is an area of concern, although the main clinical problem seems strictly related to BSIs. Indeed, a recent meta-analysis showed higher mortality for vancomycin-resistant E. faecium BSIs compared with vancomycin-sensitive E. faecium BSIs (RR = 1.46; 95% CI 1.17-1.82) [195]. For patients with VanA-type vancomycin-resistant E. faecium, linezolid or daptomycin are the preferred agents. Both linezolid and daptomycin have good in vitro activity against vancomycin-resistant E. faecium, although higher daily doses are needed [196]. For VanB-type resistant strains, teicoplanin should be considered the preferred drug [197].
What are the optimal daily doses and modality of administration of antibiotics in patients with IAIs?
19. In patients with IAIs and sepsis or septic shock, the appropriate dose and administration mode of antibiotics should include: 1) a proper loading dose, 2) extended or continuous infusion for beta-lactams, and 3) knowledge of the peritoneal/biliary penetration rate (Low quality of evidence, strong recommendation).
20. Beta-lactam antibiotics exhibit a good penetration rate into the peritoneal exudate fluid. In critically ill patients, continuous infusion should be implemented to ensure optimal PK/PD target attainment at the infection site (Low quality of evidence, strong recommendation).
Adequate antibiotic dosing should be based on the intrinsic pharmacokinetic (PK) and pharmacodynamic (PD) characteristics of each antibiotic class, the specific agent, and the specific pathophysiologic characteristics of the patient.
Antibiotic PD refers to the relationship between drug exposure and its capability to inhibit bacterial growth. The minimal inhibitory concentration (MIC) is the primary in vitro parameter used to assess the effectiveness of an antibiotic against its target bacteria. To obtain a therapeutic effect with time-dependent antibiotics, namely the beta-lactams, the concentration at the infection site should exceed the MIC of the target bacteria for at least 40% of the dosing interval, and ideally for longer and at higher multiples of the MIC (e.g. 100% fT > 4 × MIC); with concentration-dependent antibiotics, namely the aminoglycosides, the maximum concentration should be 8- to 10-fold higher than the MIC (Cmax/MIC ratio > 8-10) [198]. Antibiotic PK describes how antibiotics are absorbed, distributed, metabolized, and eliminated from the body, determining the time course and concentration of antibiotics in serum, in tissues, and at the site of infection. Suboptimal concentrations at the target site may have important clinical consequences, such as therapeutic failure and promotion of AMR development, especially when clinical isolates have borderline in vitro susceptibility [198]. Tissue distribution is an important feature because high concentrations at the infection site may prevent resistance development. Generally, tissue distribution is higher for lipophilic agents than for hydrophilic ones, but disease-related factors may contribute to differential tissue distribution [199]. In patients with severe IAIs, increased doses may be needed to attain adequate concentrations of ceftazidime, meropenem, and imipenem [200-202]. The findings of an observational prospective study including critically ill patients with suspected IAIs needing surgery and empirical therapy with a beta-lactam were published in 2020 [203]. High doses of beta-lactams attained 100% serum fT > 4 × MIC within the first 24 h in as many as 78% of critically ill patients with severe IAIs. To define optimal beta-lactam dosing, the PK/PD target should consider both the tissue penetration rate and local antimicrobial susceptibility.
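To make these exposure targets concrete, the sketch below computes the fraction of a dosing interval during which the concentration stays above a multiple of the MIC, and the Cmax/MIC ratio. It is purely illustrative: the monoexponential concentration profile, the elimination rate, and the MIC are all invented values, and nothing here should be read as dosing guidance.

```python
# Illustrative PK/PD sketch -- not clinical guidance. The concentration
# profile, elimination rate, and MIC below are hypothetical values chosen
# only to demonstrate the fT > MIC and Cmax/MIC calculations in the text.
import numpy as np

def fraction_time_above(conc, times, mic, multiple=4.0):
    """Fraction of the dosing interval with concentration > multiple * MIC."""
    above = (conc > multiple * mic).astype(float)
    return np.trapz(above, times) / (times[-1] - times[0])

times = np.linspace(0.0, 8.0, 400)          # one 8-h dosing interval (hours)
conc = 64.0 * np.exp(-0.35 * times)         # hypothetical free conc. (mg/L)
mic = 4.0                                   # hypothetical isolate MIC (mg/L)

ft = fraction_time_above(conc, times, mic)  # beta-lactam target: 100%
print(f"fT > 4xMIC: {ft:.0%} of the interval (ideal target 100%)")
print(f"Cmax/MIC:   {conc.max() / mic:.1f} (aminoglycoside target > 8-10)")
```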
When treating abdominal sepsis, clinicians must be aware that drug pharmacokinetics may significantly vary between patients due to the changeable pathophysiology of sepsis and must also consider the pathophysiological and immunological status of the patient [204].
The "dilution effect", also called as "third spacing" phenomenon, must be considered when administering the loading dose of hydrophilic antibacterial agents such as beta-lactams, aminoglycosides, and glycopeptides, which distribution is limited to the extracellular space.Otherwise, the dilution effect may cause underexposure at the infection site, namely in the peritoneal fluid, and may cause treatment failure and/or resistance development.
Generally, to ensure prompt achievement of adequate, therapeutically effective drug exposure at the infection site in patients with sepsis or septic shock, the loading dose of beta-lactams or glycopeptides should be 1.5-fold higher than the standard one used in clinically stable patients [204]. Afterwards, the maintenance doses should be based on the degree of the patient's renal function and should be reassessed daily, because the changeable pathophysiological conditions of critically ill patients may significantly affect drug disposition. Maintenance doses of renally excreted drugs must be decreased in patients with impaired renal function and increased in patients with augmented renal clearance (a creatinine clearance > 130 mL/min) [204,205]. Serum creatinine is an unreliable marker of renal function in critically ill patients; urinary creatinine clearance should be measured to properly assess renal function [206].
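A minimal sketch of the dosing heuristics just described: a 1.5-fold loading dose in sepsis or septic shock for hydrophilic agents, and daily reassessment of the maintenance dose against renal function. Only the 130 mL/min augmented-clearance threshold comes from the text; the 60 mL/min reduction cutoff is an assumption added for illustration, and none of this is a clinical dosing tool.

```python
# Illustrative only -- not a dosing tool. Encodes the stated heuristics:
# loading dose x1.5 in sepsis/septic shock, maintenance adjusted to renal
# function (CrCl > 130 mL/min = augmented renal clearance, per the text;
# the < 60 mL/min reduction threshold is an assumed example value).
def loading_dose_mg(standard_dose_mg: float, septic: bool) -> float:
    return standard_dose_mg * 1.5 if septic else standard_dose_mg

def maintenance_advice(crcl_ml_min: float) -> str:
    if crcl_ml_min > 130:
        return "augmented renal clearance: consider increasing the dose"
    if crcl_ml_min < 60:  # assumed threshold for illustration
        return "impaired renal function: decrease the dose"
    return "standard dose; reassess daily"

print(loading_dose_mg(2000.0, septic=True))   # hypothetical 2 g standard dose
print(maintenance_advice(145.0))
```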
Dosing regimens should depend on the time-dependent or concentration-dependent antibacterial activity of the selected agent. Beta-lactams exhibit time-dependent activity, which is optimal when trough concentrations (Cmin) persist above the MIC, namely Cmin > MIC [204]. Intensified dosing frequency, prolonged infusions and/or continuous infusions may improve the likelihood of achieving this target [204]. At the same daily dose, prolonged or continuous infusions maximize the attainment of Cmin > MIC. Large randomized controlled trials comparing continuous versus intermittent infusion of piperacillin/tazobactam in patients with cIAIs [204], as well as of piperacillin/tazobactam, ticarcillin/clavulanate or meropenem in patients with severe sepsis [204], did not find improvements in clinical outcomes. However, these findings should not be generalized to patients with high severity of illness and/or infections caused by borderline-susceptible pathogens with high MICs, for whom the clinically relevant benefit predicted by PK/PD theory should be greatest. This has been supported by some retrospective studies [207-209]. Consequently, prolonged or continuous infusions of beta-lactam agents should be considered beneficial for treating severely critically ill patients with abdominal sepsis, especially in settings with a high prevalence of MDR pathogens.
A recent multicentre randomized trial [210] showed that, in critically ill patients with sepsis, continuous infusion of meropenem was not superior to intermittent infusion in reducing mortality and emergence of pandrug-resistant or extensively drug-resistant bacteria at day 28.However, it has been argued that some bias might have affected the findings and limited the possibility of drawing definitive conclusions, namely a relatively long duration of hospitalization before randomization, the baseline severity of the underlying condition and the relatively small sample size [211].Overall, prolonged or continuous infusions of beta-lactam agents should therefore be considered as an added value when treating critically ill patients with abdominal sepsis.Conversely, antibiotics with concentration-dependent activity may maximize their effect when attaining a peak plasma concentration (C max ) to MIC (C max /MIC) ratio > 8-10, so once-daily pulse dosing should be the preferred method of administration [204].
Regarding aminoglycosides, once-daily dosing is also beneficial in terms of decreasing the nephrotoxicity risk compared to multiple daily dosing, because the carrier-mediated accumulation in the renal cortex is saturable and is saturated more effectively with once-daily administration [212].
21. Antibiotics with good biliary excretion should be used for treating biliary tract infections, even if supportive clinical studies are currently lacking (Very low quality of evidence, strong recommendation).
22. Data concerning PK/PD target attainment of novel beta-lactam antibiotics in biliary tract infections are currently unavailable (No recommendation).
The bacterial species most often isolated in biliary tract infections are the same as those isolated in IAIs and include Gram-negative aerobes, such as E. coli and K. pneumoniae, and anaerobes, especially B. fragilis [213]. The efficacy of antibiotic treatment of biliary tract infections may depend on effective biliary concentrations, even if clinical data supporting this contention are currently lacking [214]. An interesting review of the pharmacokinetics of antibiotics penetrating the bile and the gallbladder wall was published in 2020 [214]. The efficacy and pharmacokinetics of 50 antibiotics were analysed, and overall, most of them exhibited valuable biliary penetration translating into clinical efficacy. Only seven antibiotics (namely amoxicillin, cefadroxil, cefoxitin, ertapenem, gentamicin, amikacin, and trimethoprim/sulfamethoxazole) had poor biliary penetration rates. Three antibiotics (namely ceftibuten, ceftolozane/tazobactam, and doripenem) showed favourable clinical outcomes despite the unavailability of pharmacokinetic studies assessing their biliary penetration rate. Conflicting efficacy was reported for ampicillin despite adequate biliary penetration, whereas conflicting pharmacokinetic data were reported for cefaclor and moxifloxacin. Even in the absence of supportive clinical studies, the authors concluded that antibiotics with good biliary penetration profiles may have a place in the treatment of biliary tract infections.
What is the optimal antifungal treatment in patients with IAIs?
In patients with septic shock and multi-organ failure, empiric antifungal therapy for Candida species should be considered in patients with hospital-acquired cIAIs, especially those with recent abdominal surgery or a proximal gastrointestinal anastomotic leak, and in patients with community-acquired infections at high risk (Low quality of evidence, strong recommendation).
The epidemiological role of Candida spp. in patients with IAIs has not yet been conclusively defined [215]. Empirical antifungal therapy for Candida spp. is typically not recommended for patients with CA-IAIs, with the exception of critically ill patients with septic shock, multiple organ failure and risk factors for Candida spp. infections, or immunocompromised patients (due to neutropenia or concurrent administration of immunosuppressive agents, such as glucocorticosteroids, chemotherapeutic agents, and immunomodulators). In 2016, the IDSA guidelines for the treatment of invasive candidiasis were developed and addressed intra-abdominal candidiasis (IAC) [216]. The IDSA guidelines suggest considering empiric antifungal therapy for patients with clinical evidence of IAIs and significant risk factors for candidiasis, including recent abdominal surgery, anastomotic leaks, or necrotizing pancreatitis, who are doing poorly despite treatment for bacterial infections. While patients with abdominal sepsis may not in general benefit from empiric antifungal agents, some patients with particular risk factors for fungal infection who fail to improve after some days of broad-spectrum antibiotic therapy are at increased risk of having invasive candidiasis. The most recent ESICM/ESCMID guideline recommendations state that empiric antifungal treatment may be considered in patients with sepsis or septic shock at high risk for Candida spp. infections [217]. These recommendations are mainly based on the results of several observational studies, in which early antifungal therapy was associated with a better outcome. However, no evidence favouring early empirical antifungal therapy comes from prospective studies.
In a recent German trial aiming to evaluate the diagnostic value of 1,3-beta-D-glucan in invasive candidiasis compared with standard cultures, empiric treatment was discouraged in both arms, confirming the increasing doubts about the role of extended empiric treatment against Candida spp. [218]. In any IAI associated with sepsis or septic shock, extended empiric treatment, generating excess exposure to antifungal agents, is probably an important driver of the selection of resistant non-albicans species [219]. There is a critical need for more robust clinical trials and surveillance of antifungal resistance to enhance patient care and optimise treatment outcomes. Such evidence will help to improve the existing guidelines and contribute to a more personalised and effective approach to treating this serious medical condition. A possible solution to the conundrum of empiric antifungal therapy in IAIs could be the application in clinical practice of the high negative predictive value of 1,3-beta-D-glucan, used when negative for early withdrawal of antifungal agents [220].
24. The use of echinocandins for the management of intra-abdominal candidiasis may be affected by suboptimal PK/PD target attainment. In critically ill patients, dose adjustments may be needed if echinocandins are used (Very low quality of evidence, strong recommendation).
23. In patients at high risk for intra-abdominal candidiasis, liposomal amphotericin B may be used as pre-emptive therapy while awaiting the result of the 1,3-beta-D-glucan test (if the test is available) (Very low quality of evidence, weak recommendation).
Although traditionally recommended by guidelines, the role of echinocandins has been debated in recent years [221]. Indeed, there is mounting evidence in the literature showing that echinocandin exposure in the abdomen could be suboptimal in critically ill patients and that dose increases guided by therapeutic drug monitoring may be needed.
In 2021, Welte et al. assessed the pharmacokinetic behaviour and the antifungal activity of anidulafungin, micafungin, and caspofungin in the ascites fluid and plasma of critically ill adults treated for suspected or proven invasive candidiasis [222]. The study demonstrated that standard daily doses of anidulafungin, micafungin, or caspofungin resulted in ascites fluid concentrations preventing relevant proliferation of C. albicans and C. glabrata, but did not guarantee reliable eradication. Another recent study showed moderate penetration of echinocandins into the peritoneal fluid. These levels were below the threshold of resistance-mutant selection published by other authors, justifying a potential risk of resistance in patients with prolonged echinocandin treatment and suboptimal control of abdominal infection [223]. Controlled clinical studies on the treatment of IAC are currently lacking. Recent reports suggest dose adjustments in patients with reduced albumin levels, increased weight, and severe infections. Most studies report 20% lower exposures in critically ill patients. Therefore, concentrations might be insufficient, although no PK/PD target in the peritoneum has been defined for echinocandins [224].
While azoles are no longer considered the first choice in critically ill patients, due to the high level of resistance and/or the high risk of drug-drug interactions, a qualified alternative could be liposomal amphotericin B, a lipid-based formulation of amphotericin B [225]. The liposomal formulation allows an increase in doses compared with the deoxycholate formulation. It has improved antifungal efficacy and an acceptably low risk of side effects and nephrotoxicity. The potential for drug interactions is negligible, as is the risk of resistance selection. It has concentration-dependent fungicidal activity and a prolonged half-life, and it showed an extended post-antifungal effect in time-kill studies [226]. Moreover, due to its lipophilic characteristics, it might be less affected by pathophysiological changes than hydrophilic drugs like echinocandins. Clinical data about the use of liposomal amphotericin B in the setting of IAIs are quite limited. However, its use may be rational, and it may be considered a potential first-line option, also because its acceptable safety has been reported in the ICU setting [227]. Very recently, a monocentric experience showed that a single 5 mg/kg pulse dose of liposomal amphotericin B as pre-emptive therapy in patients at high risk for IAC was safe and cost-effective while awaiting the result of the 1,3-beta-D-glucan test [228].
Conclusions
An effective antimicrobial strategy for managing IAIs requires a correct balance between optimizing empiric therapy to improve clinical outcomes and curbing excessive antimicrobial use to mitigate the emergence of multidrug-resistant strains.Shared pathways for the most common IAIs are illustrated in appendices 1-9.
To prevent the selection and spread of AMR and to treat infections correctly, we need culture, methods, experience, honesty, organization, and a multidisciplinary approach.
The following principles should be the basis of ethical in-hospital antimicrobial stewardship:
1. Choose antimicrobials using a risk assessment-based approach.
2. Do not be impulsive in starting antimicrobial therapy.
3. Use appropriate microbiology resources.
4. Avoid redundant prescriptions and useless combinations.
5. Be aware of PK/PD features.
6. Rethink early how antibiotics are prescribed.
7. Shorten therapy duration.
8. Define the right indications of therapy for new drugs.
9. Work together.
Clinical signs and symptoms
• Abdominal pain: usually has a gradual onset and increases in intensity over time. It is usually relieved in the supine position and aggravated by coughing or abdominal movements. Typically, there may be a short history of migration of the pain from the peri-umbilical region to the right lower quadrant.
• Nausea and/or vomiting soon after abdominal pain begins.
• Fever.
• Tenderness localized in the right lower quadrant (typically in complicated acute appendicitis).
Laboratory markers
• Increased white blood cell count.
Imaging
• CT with IV contrast.
Imaging findings
• Diameter of the appendix > 6 mm.
Laboratory markers
• Increased white blood cell count.
Imaging
• US (investigation of choice in patients with suspected acute cholecystitis).
• CT with IV contrast.
• Murphy's sign can be elicited on ultrasound examination.
Uncomplicated cholecystitis
• EARLY TREATMENT: Early (within 7-10 days of the onset of symptoms) laparoscopic/open cholecystectomy. One-shot prophylaxis if early intervention.
No post-operative antibiotics.
• DELAYED TREATMENT: Antibiotic therapy and planned delayed cholecystectomy (second option) (not in immunocompromised patients). Antibiotic therapy for no more than 7 days.
Complicated cholecystitis
Laparoscopic cholecystectomy, with open cholecystectomy as an alternative, + antibiotic therapy for 4 days in immunocompetent and non-critically ill patients if source control is adequate.
Antibiotic therapy up to 7 days based on clinical conditions and inflammation indices if source control is adequate in immunocompromised or critically ill patients.
Patients who have ongoing signs of infection or systemic illness beyond 7 days of antibiotic treatment warrant a diagnostic investigation.
Cholecystostomy may be an option for acute cholecystitis in patients with multiple comorbidities who are unfit for surgery and who do not show clinical improvement after some days of antibiotic therapy. Antibiotic therapy for 4 days. Cholecystostomy is inferior to cholecystectomy in terms of major complications for critically ill patients, and it should not be performed in critically ill or immunocompromised patients.
Normal renal function
Clinical signs and symptoms
• Intermittent fever with rigors.
• Right upper quadrant abdominal pain.
Laboratory markers
• Increased white blood cell count.
Imaging
• CT with IV contrast.
Imaging findings
• Dilatation of intra- and extra-hepatic bile ducts.
• Thickening of the bile duct wall.
Biliary drainage
+ Antibiotic therapy for 4 days in immunocompetent and non-critically ill patients if source control is adequate.
Antibiotic therapy up to 7 days based on clinical conditions and inflammation indices if source control is adequate in immunocompromised or critically ill patients.
Patients who have ongoing signs of infection or systemic illness beyond 7 days of antibiotic treatment warrant a diagnostic investigation.
The type and timing of biliary drainage should be based on the severity of the clinical presentation, and the availability and feasibility of drainage techniques, such as endoscopic retrograde cholangiopancreatography (ERCP), percutaneous transhepatic cholangiography (PTC), and open surgical drainage.
Normal renal function
If percutaneous drainage of the abscess is not feasible or not available, antibiotics alone could be considered the primary treatment in non-critically ill and immunocompetent patients.
If percutaneous drainage of the abscess is not feasible or not available, surgical intervention could be considered the primary treatment in critically ill or immunocompromised patients.
Diffuse peritonitis
• Primary resection and anastomosis with or without a diverting stoma (in clinically stable patients with no co-morbidities).
• Hartmann's procedure (in critically ill patients and/or in patients with multiple major comorbidities).
• Laparoscopic peritoneal lavage and drainage, suitable only for patients with purulent (but not fecal) peritonitis due to complicated diverticulitis. Very controversial.
+
Antibiotic therapy for 4 days in immunocompetent and non-critically ill patients if source control is adequate.
Antibiotic therapy up to 7 days based on clinical conditions and inflammation indices if source control is adequate in immunocompromised or critically ill patients.
Patients who have ongoing signs of infection or systemic illness beyond 7 days of antibiotic treatment warrant a diagnostic investigation.
Normal Renal Function
Non-critically ill and immunocompetent patients with adequate source control
Amoxicillin/clavulanate 2 g/0.2 g q8h.
In patients with documented beta-lactam allergy:
Patients who have ongoing signs of infection or systemic illness beyond 7 days of antibiotic treatment warrant a diagnostic investigation.
Patients with inadequate/delayed source control or at high risk of infection with community-acquired ESBLs-producing Enterobacterales
Ertapenem 1 g q24h or Eravacycline 1 mg/kg q12h.
If septic shock
One of the following antibiotics:
• Meropenem 1 g q6h by extended or continuous infusion, or
• Doripenem 500 mg q8h by extended or continuous infusion, or
• Imipenem/cilastatin 500 mg q6h by extended infusion, or
• Eravacycline 1 mg/kg q12h.
Treatment
• Conservative treatment without surgery only in patients not eligible for surgical repair because of severe comorbidities.
• Laparoscopic/open simple or double-layer suture with or without an omental patch is a safe and effective procedure to address small perforated ulcers (standard procedure).
• Distal gastrectomy (large perforations near the pylorus; suspicion of malignancy).
+
Antibiotic therapy for 4 days in immunocompetent and non-critically ill patients if source control is adequate.
Antibiotic therapy up to 7 days based on clinical conditions and inflammation indices if source control is adequate in immunocompromised or critically ill patients.
Patients who have ongoing signs of infection or systemic illness beyond 7 days of antibiotic treatment warrant a diagnostic investigation.
Laboratory markers
• Increased white blood cell count.
Imaging
• CT with IV contrast.
Imaging findings
• Signs of intestinal perforation, such as extraluminal air bubbles and intra-abdominal fluid.
• Post-operative abscess.
Treatment
Localized abscess
Percutaneous drainage + antibiotic therapy for 4 days in immunocompetent and non-critically ill patients if source control is adequate.
Antibiotic therapy up to 7 days based on clinical conditions and inflammation indices if source control is adequate in immunocompromised patients.
Patients who have ongoing signs of infection or systemic illness beyond 7 days of antibiotic treatment warrant a diagnostic investigation and a multidisciplinary re-evaluation.
Diffuse peritonitis
Early surgical source control and maximal/broad spectrum antibiotic therapy.The inability to control the septic source is associated with an intolerably high mortality rate.
+ Antibiotic therapy up to 7 days based on clinical conditions and inflammation indices if source control is adequate in immunocompromised or critically ill patients.
Patients who have ongoing signs of infection or systemic illness beyond 7 days of antibiotic treatment warrant a diagnostic investigation.
Empiric Antibiotic Regimens; Normal Renal Function
Patients without gut colonization by MDR and immunocompetent patients
One of the following antibiotics:
• Meropenem 1 g q6h by extended or continuous infusion, or
• Doripenem 500 mg q8h by extended or continuous infusion, or
• Imipenem/cilastatin 500 mg q6h by extended infusion, or
• Eravacycline 1 mg/kg q12h.
Patients with suspected MDR etiology based on epidemiological data and or gut colonization data and or specific risk factors
Imipenem/cilastatin/relebactam 1.25 g q6h by extended infusion, or one of the following antibiotics:
• Meropenem/vaborbactam 2 g/2 g q8h by extended or continuous infusion, or
• Ceftazidime/avibactam 2.5 g q8h by extended or continuous infusion + Metronidazole 500 mg q8h,
plus one of the following antibiotics:
• Linezolid 600 mg q12h, or
• Teicoplanin 12 mg/kg q12h for 3 loading doses, then 6 mg/kg q12h.
In patients with documented beta-lactam allergy: Eravacycline 1 mg/kg q12h.
+ In patients at high risk for intra-abdominal candidiasis: Liposomal amphotericin B 5 mg/kg pulse dose as pre-emptive therapy while awaiting the result of the 1,3-beta-D-glucan test (if the test is available), or one of the following echinocandins (considering PK/PD principles):
• Caspofungin 70 mg LD, then 50 mg q24h, or
• Anidulafungin 200 mg LD, then 100 mg q24h, or
• Micafungin 100 mg q24h.
Clinical signs and symptoms
• Fever.
Laboratory markers
• Increased white blood cell count.
• PCT (the most sensitive laboratory test for the detection of pancreatic infection; low serum values appear to be strong negative predictors of infected necrosis).
Imaging
• CT with IV contrast.
Treatment
Mild acute pancreatitis:
• General (regular) diet, advancing as tolerated.
• Pain control with oral medications.
• Routine vital signs monitoring.
Moderately severe acute pancreatitis: no specific pharmacological treatment should be given except for organ support and nutrition.
Prophylactic antibiotics in patients with acute pancreatitis are not associated with a significant decrease in mortality or morbidity.Thus, routine prophylactic antibiotics are no longer recommended for all patients with acute pancreatitis.
Antibiotics are always recommended to treat infected severe acute pancreatitis.However, the diagnosis is challenging due to the clinical picture that cannot be distinguished from other infectious complications or from the inflammatory status caused by acute pancreatitis.
• PCT.
• A CT- or EUS-guided fine-needle aspiration (FNA) for Gram stain and culture.
• Endoscopic retrograde cholangiopancreatography (ERCP) in patients with acute biliary pancreatitis and common bile duct obstruction should be performed as soon as possible.
• Laparoscopic cholecystectomy should be considered carefully in patients with moderately severe and severe acute biliary pancreatitis, as it may be associated with increased postoperative mortality and morbidity.
• Clinical deterioration with signs or strong suspicion of infected necrotizing pancreatitis is an indication to perform intervention.
• When a patient deteriorates, a step-up approach starting with percutaneous or endoscopic drainage may be indicated.
14. In patients with IAIs, empiric antibiotic therapy should be based on the local microbiological epidemiology, clinical severity, and individual patient risk factors for resistant bacteria (Low quality of evidence, strong recommendation).
15. In most patients with IAIs, agents with a narrow spectrum of activity should be preferred. In community-acquired IAIs, the most common resistance problem is posed by alarmingly prevalent extended-spectrum beta-lactamases (ESBLs) (Moderate quality of evidence, strong recommendation).
16. The following risk factors for ESBL-producing Enterobacterales infections should always be considered: (a) hospitalization for 48 h within the last 90 days, (b) use of broad-spectrum antibiotics for 5 days within the last 90 days, (c) gut colonization by ESBLs within 90 days, and (d) patients coming from healthcare settings with a high incidence of MDR bacteria (Low quality of evidence, strong recommendation).
17. Antibiotic therapy aimed at enterococcal coverage should not be routinely prescribed in patients with community-acquired IAIs unless they are immunocompromised (Moderate quality of evidence, strong recommendation).
18. Empirical antibiotic therapy covering MDR Gram-negative bacteria should be considered only in specific settings, based on countrywide epidemiological conditions, clinical severity, immunological impairment, knowledge of colonization status, and prolonged exposure to carbapenems and/or quinolones (Moderate quality of evidence, strong recommendation).
• Enteral nutrition (oral, NG or NJ); if not tolerated, it is possible to use parenteral nutrition.
• IV pain medications.
• IV fluids to maintain hydration.
• Monitoring of hematocrit, blood urea nitrogen, and creatinine.
• Continuous vital signs monitoring.
Severe acute pancreatitis:
• Enteral nutrition (oral, NG or NJ); if not tolerated, it is possible to use parenteral nutrition.
• IV pain medications.
• Early fluid resuscitation.
• Mechanical ventilation.
Management of non-traumatic small bowel perforation
Diagnosis
Clinical signs and symptoms
Antibiotic therapy up to 7 days based on clinical conditions and inflammation indices if source control is adequate in immunocompromised or critically ill patients.
Antibiotic therapy
Routine prophylactic antibiotics should not be prescribed for patients with acute pancreatitis. Antibiotic therapy should be administered only to treat infected acute pancreatitis.
Empiric Antibiotic Regimens; Normal Renal Function
Patients without gut colonization by MDR and immunocompetent patients
• The following are indications for surgical intervention:
• as a continuum in a step-up approach after a percutaneous/endoscopic procedure with the same indications;
• abdominal compartment syndrome;
• acute ongoing bleeding when an endovascular approach is unsuccessful;
• bowel ischaemia or acute necrotizing cholecystitis during acute pancreatitis;
• bowel fistula extending into a peripancreatic collection.
• Postponing surgical interventions for more than 4 weeks after the onset of the disease results in less mortality.
• In patients with severe acute pancreatitis unresponsive to conservative management of IAH/ACS, surgical decompression and use of the open abdomen are effective in treating the abdominal compartment syndrome. However, the open abdomen should be avoided if other strategies can be used to mitigate or treat severe IAH in patients with acute pancreatitis.
Patients with suspected MDR etiology based on epidemiological data and/or gut colonization data and/or specific risk factors
In patients with documented beta-lactam allergy: Eravacycline 1 mg/kg q12h.
+ In patients at high risk for intra-abdominal candidiasis: Liposomal amphotericin B 5 mg/kg pulse dose as pre-emptive therapy while awaiting the result of the 1,3-beta-D-glucan test (if the test is available), or one of the following echinocandins (considering PK/PD principles):
• Caspofungin 70 mg LD, then 50 mg q24h, or
• Anidulafungin 200 mg LD, then 100 mg q24h, or
• Micafungin 100 mg q24h.
"year": 2024,
"sha1": "28eee5d8355b649fffe3439ee10d65bc939ab9d5",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7902aea5c12db31e441fc6b219300f84713b4ba5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
GloVe: Global Vectors for Word Representation
Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word co-occurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful sub-structure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.
Introduction
Semantic vector space models of language represent each word with a real-valued vector. These vectors can be used as features in a variety of applications, such as information retrieval (Manning et al., 2008), document classification (Sebastiani, 2002), question answering (Tellex et al., 2003), named entity recognition (Turian et al., 2010), and parsing (Socher et al., 2013).
Most word vector methods rely on the distance or angle between pairs of word vectors as the primary method for evaluating the intrinsic quality of such a set of word representations. Recently, Mikolov et al. (2013c) introduced a new evaluation scheme based on word analogies that probes the finer structure of the word vector space by examining not the scalar distance between word vectors, but rather their various dimensions of difference. For example, the analogy "king is to queen as man is to woman" should be encoded in the vector space by the vector equation king − queen = man − woman. This evaluation scheme favors models that produce dimensions of meaning, thereby capturing the multi-clustering idea of distributed representations (Bengio, 2009). The two main model families for learning word vectors are: 1) global matrix factorization methods, such as latent semantic analysis (LSA) (Deerwester et al., 1990) and 2) local context window methods, such as the skip-gram model of Mikolov et al. (2013c). Currently, both families suffer significant drawbacks. While methods like LSA efficiently leverage statistical information, they do relatively poorly on the word analogy task, indicating a sub-optimal vector space structure. Methods like skip-gram may do better on the analogy task, but they poorly utilize the statistics of the corpus since they train on separate local context windows instead of on global co-occurrence counts.
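To make the analogy evaluation concrete, here is a minimal sketch of how "a is to b as c is to ?" is answered with vector arithmetic: take $w_b - w_a + w_c$ and return the nearest word vector by cosine similarity, excluding the query words. The 2-d vectors below are invented placeholders constructed so the arithmetic works out; real evaluations use learned vectors over a full vocabulary.

```python
# Sketch of the vector-arithmetic analogy test; toy vectors only.
import numpy as np

def analogy(vectors: dict, a: str, b: str, c: str) -> str:
    """Answer 'a is to b as c is to ?' by nearest cosine similarity."""
    target = vectors[b] - vectors[a] + vectors[c]
    best, best_sim = None, -np.inf
    for word, v in vectors.items():
        if word in (a, b, c):          # exclude the query words themselves
            continue
        sim = v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Invented 2-d vectors arranged so that king - man + woman lands on queen.
vecs = {"king": np.array([1.0, 1.0]), "queen": np.array([1.0, -1.0]),
        "man":  np.array([0.2, 1.0]), "woman": np.array([0.2, -1.0])}
print(analogy(vecs, "man", "king", "woman"))   # -> "queen"
```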
In this work, we analyze the model properties necessary to produce linear directions of meaning and argue that global log-bilinear regression models are appropriate for doing so. We propose a specific weighted least squares model that trains on global word-word co-occurrence counts and thus makes efficient use of statistics. The model produces a word vector space with meaningful substructure, as evidenced by its state-of-the-art performance of 75% accuracy on the word analogy dataset. We also demonstrate that our methods outperform other current methods on several word similarity tasks, and also on a common named entity recognition (NER) benchmark.
We provide the source code for the model as well as trained word vectors at http://nlp.stanford.edu/projects/glove/.
Related Work
Matrix Factorization Methods. Matrix factorization methods for generating low-dimensional word representations have roots stretching as far back as LSA. These methods utilize low-rank approximations to decompose large matrices that capture statistical information about a corpus. The particular type of information captured by such matrices varies by application. In LSA, the matrices are of "term-document" type, i.e., the rows correspond to words or terms, and the columns correspond to different documents in the corpus. In contrast, the Hyperspace Analogue to Language (HAL) (Lund and Burgess, 1996), for example, utilizes matrices of "term-term" type, i.e., the rows and columns correspond to words and the entries correspond to the number of times a given word occurs in the context of another given word.
A main problem with HAL and related methods is that the most frequent words contribute a disproportionate amount to the similarity measure: the number of times two words co-occur with "the" or "and", for example, will have a large effect on their similarity despite conveying relatively little about their semantic relatedness. A number of techniques exist that address this shortcoming of HAL, such as the COALS method (Rohde et al., 2006), in which the co-occurrence matrix is first transformed by an entropy- or correlation-based normalization. An advantage of this type of transformation is that the raw co-occurrence counts, which for a reasonably sized corpus might span 8 or 9 orders of magnitude, are compressed so as to be distributed more evenly in a smaller interval. A variety of newer models also pursue this approach, including a study (Bullinaria and Levy, 2007) that indicates that positive pointwise mutual information (PPMI) is a good transformation. More recently, a square root type transformation in the form of Hellinger PCA (HPCA) (Lebret and Collobert, 2014) has been suggested as an effective way of learning word representations.
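As an aside, the PPMI transformation mentioned above is simple to state: compute the pointwise mutual information $\log\big(P(i,j)/(P(i)P(j))\big)$ and clip it at zero. The sketch below applies it to an invented count matrix; it reflects the standard PPMI definition and is not tied to any particular study's preprocessing.

```python
# A minimal PPMI sketch on an invented 3x3 co-occurrence count matrix.
import numpy as np

X = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 5.0],
              [1.0, 5.0, 0.0]])                # toy co-occurrence counts

total = X.sum()
p_ij = X / total                               # joint probabilities
p_i = X.sum(axis=1, keepdims=True) / total     # row marginals
p_j = X.sum(axis=0, keepdims=True) / total     # column marginals

with np.errstate(divide="ignore"):             # log(0) -> -inf for zero counts
    pmi = np.log(p_ij / (p_i * p_j))
ppmi = np.maximum(pmi, 0.0)                    # clip negatives (and -inf) to 0
print(ppmi)
```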
Shallow Window-Based Methods. Another approach is to learn word representations that aid in making predictions within local context windows. For example, Bengio et al. (2003) introduced a model that learns word vector representations as part of a simple neural network architecture for language modeling. Collobert and Weston (2008) decoupled the word vector training from the downstream training objectives, which paved the way for Collobert et al. (2011) to use the full context of a word for learning the word representations, rather than just the preceding context as is the case with language models.
Recently, the importance of the full neural network structure for learning useful word representations has been called into question. The skip-gram and continuous bag-of-words (CBOW) models of Mikolov et al. (2013a) propose a simple single-layer architecture based on the inner product between two word vectors. Mnih and Kavukcuoglu (2013) also proposed closely-related vector log-bilinear models, vLBL and ivLBL, and Levy et al. (2014) proposed explicit word embeddings based on a PPMI metric.
In the skip-gram and ivLBL models, the objective is to predict a word's context given the word itself, whereas the objective in the CBOW and vLBL models is to predict a word given its context. Through evaluation on a word analogy task, these models demonstrated the capacity to learn linguistic patterns as linear relationships between the word vectors.
Unlike the matrix factorization methods, the shallow window-based methods suffer from the disadvantage that they do not operate directly on the co-occurrence statistics of the corpus. Instead, these models scan context windows across the entire corpus, which fails to take advantage of the vast amount of repetition in the data.
The GloVe Model
The statistics of word occurrences in a corpus is the primary source of information available to all unsupervised methods for learning word representations, and although many such methods now exist, the question still remains as to how meaning is generated from these statistics, and how the resulting word vectors might represent that meaning. In this section, we shed some light on this question. We use our insights to construct a new model for word representation which we call GloVe, for Global Vectors, because the global corpus statistics are captured directly by the model.

First we establish some notation. Let the matrix of word-word co-occurrence counts be denoted by $X$, whose entries $X_{ij}$ tabulate the number of times word $j$ occurs in the context of word $i$. Let $X_i = \sum_k X_{ik}$ be the number of times any word appears in the context of word $i$. Finally, let $P_{ij} = P(j|i) = X_{ij}/X_i$ be the probability that word $j$ appears in the context of word $i$.

We begin with a simple example that showcases how certain aspects of meaning can be extracted directly from co-occurrence probabilities. Consider two words $i$ and $j$ that exhibit a particular aspect of interest; for concreteness, suppose we are interested in the concept of thermodynamic phase, for which we might take $i$ = ice and $j$ = steam. The relationship of these words can be examined by studying the ratio of their co-occurrence probabilities with various probe words, $k$. For words $k$ related to ice but not steam, say $k$ = solid, we expect the ratio $P_{ik}/P_{jk}$ will be large. Similarly, for words $k$ related to steam but not ice, say $k$ = gas, the ratio should be small. For words $k$ like water or fashion, that are either related to both ice and steam, or to neither, the ratio should be close to one.

Table 1: Co-occurrence probabilities for target words ice and steam with selected context words from a 6 billion token corpus. Only in the ratio does noise from non-discriminative words like water and fashion cancel out, so that large values (much greater than 1) correlate well with properties specific to ice, and small values (much less than 1) correlate well with properties specific to steam.

    Probability and Ratio    k = solid      k = gas        k = water      k = fashion
    P(k|ice)                 1.9 x 10^-4    6.6 x 10^-5    3.0 x 10^-3    1.7 x 10^-5
    P(k|steam)               2.2 x 10^-5    7.8 x 10^-4    2.2 x 10^-3    1.8 x 10^-5
    P(k|ice)/P(k|steam)      8.9            8.5 x 10^-2    1.36           0.96

Table 1 shows these probabilities and their ratios for a large corpus, and the numbers confirm these expectations. Compared to the raw probabilities, the ratio is better able to distinguish relevant words (solid and gas) from irrelevant words (water and fashion), and it is also better able to discriminate between the two relevant words.
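The last row of Table 1 follows directly from the two probability rows. The sketch below recomputes the ratios from the displayed probabilities; small deviations from the printed ratio row (e.g. 8.6 versus 8.9 for solid) are just rounding in the displayed values.

```python
# Recomputing the ratio row of Table 1 from its displayed probabilities.
import numpy as np

probes  = ["solid", "gas", "water", "fashion"]
P_ice   = np.array([1.9e-4, 6.6e-5, 3.0e-3, 1.7e-5])   # P(k|ice), Table 1
P_steam = np.array([2.2e-5, 7.8e-4, 2.2e-3, 1.8e-5])   # P(k|steam), Table 1

for k, r in zip(probes, P_ice / P_steam):
    # >> 1: property of ice; << 1: property of steam; ~1: non-discriminative
    print(f"P({k}|ice) / P({k}|steam) = {r:.2f}")
```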
The above argument suggests that the appropriate starting point for word vector learning should be with ratios of co-occurrence probabilities rather than the probabilities themselves. Noting that the ratio P ik /P j k depends on three words i, j, and k, the most general model takes the form, where w ∈ R d are word vectors andw ∈ R d are separate context word vectors whose role will be discussed in Section 4.2. In this equation, the right-hand side is extracted from the corpus, and F may depend on some as-of-yet unspecified parameters. The number of possibilities for F is vast, but by enforcing a few desiderata we can select a unique choice. First, we would like F to encode the information present the ratio P ik /P j k in the word vector space. Since vector spaces are inherently linear structures, the most natural way to do this is with vector differences. With this aim, we can restrict our consideration to those functions F that depend only on the difference of the two target words, modifying Eqn.
(1) to, Next, we note that the arguments of F in Eqn. (2) are vectors while the right-hand side is a scalar. While F could be taken to be a complicated function parameterized by, e.g., a neural network, doing so would obfuscate the linear structure we are trying to capture. To avoid this issue, we can first take the dot product of the arguments, which prevents F from mixing the vector dimensions in undesirable ways. Next, note that for word-word co-occurrence matrices, the distinction between a word and a context word is arbitrary and that we are free to exchange the two roles. To do so consistently, we must not only exchange w ↔w but also X ↔ X T . Our final model should be invariant under this relabeling, but Eqn. (3) is not. However, the symmetry can be restored in two steps. First, we require that F be a homomorphism between the groups (R, +) and (R >0 , ×), i.e., which, by Eqn.
(3), is solved by, The solution to Eqn. (4) is F = exp, or, Next, we note that Eqn. (6) would exhibit the exchange symmetry if not for the log(X i ) on the right-hand side. However, this term is independent of k so it can be absorbed into a bias b i for w i . Finally, adding an additional biasb k forw k restores the symmetry, Eqn. (7) is a drastic simplification over Eqn.
Eqn. (7) is a drastic simplification over Eqn. (1), but it is actually ill-defined, since the logarithm diverges whenever its argument is zero. One resolution of this issue is to include an additive shift in the logarithm, log(X_ik) → log(1 + X_ik), which maintains the sparsity of X while avoiding the divergences. The idea of factorizing the log of the co-occurrence matrix is closely related to LSA, and we will use the resulting model as a baseline in our experiments. A main drawback of this model is that it weighs all co-occurrences equally, even those that happen rarely or never. Such rare co-occurrences are noisy and carry less information than the more frequent ones; yet even just the zero entries account for 75-95% of the data in X, depending on the vocabulary size and corpus.
We propose a new weighted least squares regression model that addresses these problems. Casting Eqn. (7) as a least squares problem and introducing a weighting function f(X_ij) into the cost function gives us the model

J = Σ_{i,j=1}^{V} f(X_ij) (w_i^T w̃_j + b_i + b̃_j − log X_ij)^2 ,   (8)

where V is the size of the vocabulary. The weighting function should obey the following properties:

1. f(0) = 0. If f is viewed as a continuous function, it should vanish as x → 0 fast enough that lim_{x→0} f(x) log^2 x is finite.

2. f(x) should be non-decreasing so that rare co-occurrences are not overweighted.
3. f(x) should be relatively small for large values of x, so that frequent co-occurrences are not overweighted.
Of course a large number of functions satisfy these properties, but one class of functions that we found to work well can be parameterized as

f(x) = (x/x_max)^α if x < x_max, and 1 otherwise.   (9)

The performance of the model depends weakly on the cutoff, which we fix to x_max = 100 for all our experiments. We found that α = 3/4 gives a modest improvement over a linear version with α = 1.
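The weighting function of Eqn. (9) is straightforward to implement; the vectorized form below and its default arguments are our choices:

```python
import numpy as np

def f(x, x_max=100.0, alpha=0.75):
    """Eqn. (9): f(x) = (x / x_max)^alpha for x < x_max, and 1 otherwise.
    It vanishes at x = 0, is non-decreasing, and caps the weight of very
    frequent co-occurrences, satisfying the three properties above."""
    x = np.asarray(x, dtype=float)
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

print(f([0, 1, 10, 100, 10_000]))   # approx. [0, 0.032, 0.178, 1, 1]
```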
Although we offer only empirical motivation for choosing the value 3/4, it is interesting that a similar fractional power scaling was found to give the best performance in (Mikolov et al., 2013a).
Relationship to Other Models
Because all unsupervised methods for learning word vectors are ultimately based on the occurrence statistics of a corpus, there should be commonalities between the models. Nevertheless, certain models remain somewhat opaque in this regard, particularly the recent window-based methods like skip-gram and ivLBL. Therefore, in this subsection we show how these models are related to our proposed model, as defined in Eqn. (8).
The starting point for the skip-gram or ivLBL methods is a model Q_ij for the probability that word j appears in the context of word i. For concreteness, let us assume that Q_ij is a softmax,

Q_ij = exp(w_i^T w̃_j) / Σ_{k=1}^{V} exp(w_i^T w̃_k) .   (10)

Most of the details of these models are irrelevant for our purposes, aside from the fact that they attempt to maximize the log probability as a context window scans over the corpus. Training proceeds in an on-line, stochastic fashion, but the implied global objective function can be written as

J = − Σ_{i∈corpus, j∈context(i)} log Q_ij .   (11)

Evaluating the normalization factor of the softmax for each term in this sum is costly. To allow for efficient training, the skip-gram and ivLBL models introduce approximations to Q_ij. However, the sum in Eqn. (11) can be evaluated much more efficiently if we first group together those terms that have the same values for i and j,

J = − Σ_{i=1}^{V} Σ_{j=1}^{V} X_ij log Q_ij ,   (12)

where we have used the fact that the number of like terms is given by the co-occurrence matrix X. Recalling our notation for X_i = Σ_k X_ik and P_ij = X_ij/X_i, we can rewrite J as

J = − Σ_i X_i Σ_j P_ij log Q_ij = Σ_i X_i H(P_i, Q_i) ,   (13)

where H(P_i, Q_i) is the cross entropy of the distributions P_i and Q_i, which we define in analogy to X_i. As a weighted sum of cross-entropy error, this objective bears some formal resemblance to the weighted least squares objective of Eqn. (8). In fact, it is possible to optimize Eqn. (13) directly, as opposed to the on-line training methods used in the skip-gram and ivLBL models. One could interpret this objective as a "global skip-gram" model, and it might be interesting to investigate further. On the other hand, Eqn. (13) exhibits a number of undesirable properties that ought to be addressed before adopting it as a model for learning word vectors.
To begin, cross entropy error is just one among many possible distance measures between probability distributions, and it has the unfortunate property that distributions with long tails are often modeled poorly, with too much weight given to the unlikely events. Furthermore, for the measure to be bounded it requires that the model distribution Q be properly normalized. This presents a computational bottleneck owing to the sum over the whole vocabulary in Eqn. (10), and it would be desirable to consider a different distance measure that did not require this property of Q. A natural choice would be a least squares objective in which normalization factors in Q and P are discarded,

Ĵ = Σ_{i,j} X_i (P̂_ij − Q̂_ij)^2 ,   (14)

where P̂_ij = X_ij and Q̂_ij = exp(w_i^T w̃_j) are the unnormalized distributions. At this stage another problem emerges, namely that X_ij often takes very large values, which can complicate the optimization. An effective remedy is to minimize the squared error of the logarithms of P̂ and Q̂ instead,

Ĵ = Σ_{i,j} X_i (log P̂_ij − log Q̂_ij)^2 = Σ_{i,j} X_i (w_i^T w̃_j − log X_ij)^2 .   (15)

Finally, we observe that while the weighting factor X_i is preordained by the on-line training method inherent to the skip-gram and ivLBL models, it is by no means guaranteed to be optimal. In fact, Mikolov et al. (2013a) observe that performance can be increased by filtering the data so as to reduce the effective value of the weighting factor for frequent words. With this in mind, we introduce a more general weighting function, which we are free to take to depend on the context word as well.
The result is

Ĵ = Σ_{i,j} f(X_ij) (w_i^T w̃_j − log X_ij)^2 ,   (16)

which is equivalent¹ to the cost function of Eqn. (8), which we derived previously.
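Collecting the pieces, the full cost of Eqn. (8)/(16) can be evaluated over the nonzero entries of X alone, since zero entries contribute nothing because f(0) = 0. A minimal sketch with our own function and variable names:

```python
import numpy as np

def glove_cost(W, W_tilde, b, b_tilde, X, x_max=100.0, alpha=0.75):
    """Weighted least squares cost of Eqn. (8): sum over nonzero X_ij of
    f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2, for dense X."""
    i, j = np.nonzero(X)
    weight = np.minimum((X[i, j] / x_max) ** alpha, 1.0)       # f(X_ij)
    pred = np.sum(W[i] * W_tilde[j], axis=1) + b[i] + b_tilde[j]
    return float(np.sum(weight * (pred - np.log(X[i, j])) ** 2))
```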
Complexity of the model
As can be seen from Eqn. (8) and the explicit form of the weighting function f(X), the computational complexity of the model depends on the number of nonzero elements in the matrix X. As this number is always less than the total number of entries of the matrix, the model scales no worse than O(|V|^2). At first glance this might seem like a substantial improvement over the shallow window-based approaches, which scale with the corpus size, |C|. However, typical vocabularies have hundreds of thousands of words, so that |V|^2 can be in the hundreds of billions, which is actually much larger than most corpora. For this reason it is important to determine whether a tighter bound can be placed on the number of nonzero elements of X.
In order to make any concrete statements about the number of nonzero elements in X, it is necessary to make some assumptions about the distribution of word co-occurrences. In particular, we will assume that the number of co-occurrences of word i with word j, X_ij, can be modeled as a power-law function of the frequency rank of that word pair, r_ij:

X_ij = k / (r_ij)^α .   (17)

The total number of words in the corpus is proportional to the sum over all elements of the co-occurrence matrix X,

|C| ∼ Σ_{ij} X_ij = Σ_{r=1}^{|X|} k/r^α = k H_{|X|,α} ,   (18)

where we have rewritten the last sum in terms of the generalized harmonic number H_{n,m}. The upper limit of the sum, |X|, is the maximum frequency rank, which coincides with the number of nonzero elements in the matrix X. This number is also equal to the maximum value of r in Eqn. (17) such that X_ij ≥ 1, i.e., |X| = k^{1/α}. Therefore we can write Eqn. (18) as

|C| ∼ |X|^α H_{|X|,α} .   (19)

We are interested in how |X| is related to |C| when both numbers are large; therefore we are free to expand the right hand side of the equation for large |X|. For this purpose we use the expansion of generalized harmonic numbers (Apostol, 1976),

H_{x,s} = x^{1−s}/(1−s) + ζ(s) + O(x^{−s})  if s > 0, s ≠ 1,   (20)

giving

|C| ∼ |X|/(1−α) + ζ(α) |X|^α + O(1) ,   (21)

where ζ(s) is the Riemann zeta function. In the limit that X is large, only one of the two terms on the right hand side of Eqn. (21) will be relevant, and which term that is depends on whether α > 1:

|X| = O(|C|)  if α < 1;   |X| = O(|C|^{1/α})  if α > 1.   (22)

For the corpora studied in this article, we observe that X_ij is well-modeled by Eqn. (17) with α = 1.25. In this case we have that |X| = O(|C|^0.8). Therefore we conclude that the complexity of the model is much better than the worst case O(V^2), and in fact it does somewhat better than the on-line window-based methods, which scale like O(|C|).
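To put numbers on Eqn. (22): with α = 1.25 the nonzero entries grow like |C|^{1/α} = |C|^0.8, which for corpora of the sizes used here is far below both the corpus size and |V|^2. A quick back-of-the-envelope check (the constant factor is ignored, so these are scaling estimates only):

```python
alpha = 1.25
V = 400_000
print(f"|V|^2 = {V ** 2:.1e}")          # worst-case bound: 1.6e+11
for C in (1e9, 6e9, 42e9):              # corpus sizes in tokens
    # Eqn. (22) with alpha > 1: |X| grows like |C|^(1/alpha) = |C|^0.8.
    print(f"|C| = {C:.0e}  ->  |X| ~ {C ** (1 / alpha):.1e}")
```

Even for the 42 billion token corpus, the estimate is a few times 10^8 nonzero entries, orders of magnitude below both |C| and |V|^2.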
Evaluation methods
We conduct experiments on the word analogy task of Mikolov et al. (2013a), a variety of word similarity tasks, as described in (Luong et al., 2013), and on the CoNLL-2003 shared benchmark dataset for NER (Tjong Kim Sang and De Meulder, 2003). Word analogies. The word analogy task consists of questions like, "a is to b as c is to ?" The dataset contains 19,544 such questions, divided into a semantic subset and a syntactic subset. The semantic questions are typically analogies about people or places, like "Athens is to Greece as Berlin is to ?". The syntactic questions are typically analogies about verb tenses or forms of adjectives, for example "dance is to dancing as fly is to ?". To correctly answer the question, the model should uniquely identify the missing term, with only an exact correspondence counted as a correct match. We answer the question "a is to b as c is to ?" by finding the word d whose representation w_d is closest to w_b − w_a + w_c according to the cosine similarity. (A multiplicative variant, 3COSMUL, performed worse than cosine similarity in almost all of our experiments.)

Figure 2: Accuracy on the analogy task as a function of vector size and window size/type. All models are trained on the 6 billion token corpus. In (a), the window size is 10. In (b) and (c), the vector size is 100.
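The analogy rule described above can be sketched in a few lines, assuming W holds one trained vector per row of vocab; the function name and the choice to normalize rows before forming the target are ours:

```python
import numpy as np

def answer_analogy(a, b, c, W, vocab):
    """Answer "a is to b as c is to ?": return the word whose vector is
    closest in cosine similarity to w_b - w_a + w_c, excluding a, b, c."""
    idx = {w: n for n, w in enumerate(vocab)}
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)   # unit rows
    target = Wn[idx[b]] - Wn[idx[a]] + Wn[idx[c]]
    sims = Wn @ (target / np.linalg.norm(target))       # cosine similarities
    for w in (a, b, c):
        sims[idx[w]] = -np.inf                          # exclude query words
    return vocab[int(np.argmax(sims))]
```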
Word similarity. While the analogy task is our primary focus since it tests for interesting vector space substructures, we also evaluate our model on a variety of word similarity tasks in Table 3. These include WordSim-353 (Finkelstein et al., 2001), MC (Miller and Charles, 1991), RG (Rubenstein and Goodenough, 1965), SCWS (Huang et al., 2012), and RW (Luong et al., 2013). Named entity recognition. The CoNLL-2003 English benchmark dataset for NER is a collection of documents from Reuters newswire articles, annotated with four entity types: person, location, organization, and miscellaneous. We train models on the CoNLL-03 training data and test on three datasets: 1) the CoNLL-03 testing data, 2) ACE Phase 2 (2001-02) and ACE-2003 data, and 3) the MUC7 Formal Run test set. We adopt the BIO2 annotation standard, as well as all the preprocessing steps described in (Wang and Manning, 2013). We use a comprehensive set of discrete features that comes with the standard distribution of the Stanford NER model (Finkel et al., 2005). A total of 437,905 discrete features were generated for the CoNLL-2003 training dataset. In addition, 50-dimensional vectors for each word of a five-word context are added and used as continuous features. With these features as input, we trained a conditional random field (CRF) with exactly the same setup as the CRF join model of (Wang and Manning, 2013).
Corpora and training details
We trained our model on five corpora of varying sizes: a 2010 Wikipedia dump with 1 billion tokens; a 2014 Wikipedia dump with 1.6 billion tokens; Gigaword 5, which has 4.3 billion tokens; the combination Gigaword5 + Wikipedia2014, which has 6 billion tokens; and 42 billion tokens of web data from Common Crawl. We tokenize and lowercase each corpus with the Stanford tokenizer, build a vocabulary of the 400,000 most frequent words, and then construct a matrix of co-occurrence counts X. In constructing X, we must choose how large the context window should be and whether to distinguish left context from right context. We explore the effect of these choices below. In all cases we use a decreasing weighting function, so that word pairs that are d words apart contribute 1/d to the total count. This is one way to account for the fact that very distant word pairs are expected to contain less relevant information about the words' relationship to one another.
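The construction of X with the 1/d distance weighting can be sketched as a single pass over a tokenized corpus; the sparse-dictionary representation and function name are ours, and a production version would stream the corpus rather than hold it in memory:

```python
from collections import defaultdict

def build_cooccurrence(tokens, vocab, window=10):
    """Count co-occurrences within a symmetric window of `window` words,
    with a pair that is d words apart contributing 1/d to the count."""
    idx = {w: n for n, w in enumerate(vocab)}
    ids = [idx[t] for t in tokens if t in idx]   # drop out-of-vocabulary tokens
    X = defaultdict(float)                       # sparse {(i, j): weighted count}
    for pos, i in enumerate(ids):
        for d in range(1, window + 1):
            if pos + d >= len(ids):
                break
            j = ids[pos + d]
            X[(i, j)] += 1.0 / d    # j appears in the right context of i
            X[(j, i)] += 1.0 / d    # i appears in the left context of j
    return X
```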
For all our experiments, we set x_max = 100, α = 3/4, and train the model using AdaGrad (Duchi et al., 2011), stochastically sampling nonzero elements from X, with an initial learning rate of 0.05. We run 50 iterations for vectors smaller than 300 dimensions, and 100 iterations otherwise (see Section 4.6 for more details about the convergence rate). Unless otherwise noted, we use a context of ten words to the left and ten words to the right.
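One way to realize this training recipe is the AdaGrad loop sketched below over the sparse X produced above; the initialization scale and the exact AdaGrad bookkeeping are our assumptions rather than details taken from the text:

```python
import numpy as np
import random

def train_glove(X, V, d=100, iters=50, lr=0.05, x_max=100.0, alpha=0.75, seed=0):
    """Minimize the cost of Eqn. (8) by AdaGrad over the nonzero entries
    of X, given as a dict {(i, j): count}. Returns the summed vectors
    W + W~ (see the discussion of the two vector sets below)."""
    rng = np.random.default_rng(seed)
    W, Wt = rng.uniform(-0.5, 0.5, (2, V, d)) / d          # word / context vectors
    b, bt = np.zeros(V), np.zeros(V)                       # biases
    gW, gWt = np.ones((V, d)), np.ones((V, d))             # AdaGrad accumulators
    gb, gbt = np.ones(V), np.ones(V)

    pairs = list(X.items())
    shuffle = random.Random(seed).shuffle
    for _ in range(iters):
        shuffle(pairs)                                     # stochastic sampling
        for (i, j), x in pairs:
            fx = min((x / x_max) ** alpha, 1.0)            # f(X_ij), Eqn. (9)
            diff = W[i] @ Wt[j] + b[i] + bt[j] - np.log(x) # residual of Eqn. (7)
            g = 2.0 * fx * diff
            dW, dWt = g * Wt[j], g * W[i]                  # gradients
            W[i]  -= lr * dW  / np.sqrt(gW[i]);  gW[i]  += dW ** 2
            Wt[j] -= lr * dWt / np.sqrt(gWt[j]); gWt[j] += dWt ** 2
            b[i]  -= lr * g / np.sqrt(gb[i]);    gb[i]  += g ** 2
            bt[j] -= lr * g / np.sqrt(gbt[j]);   gbt[j] += g ** 2
    return W + Wt
```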
The model generates two sets of word vectors, W and W̃. When X is symmetric, W and W̃ are equivalent and differ only as a result of their random initializations; the two sets of vectors should perform equivalently. On the other hand, there is evidence that for certain types of neural networks, training multiple instances of the network and then combining the results can help reduce overfitting and noise and generally improve results (Ciresan et al., 2012). With this in mind, we choose to use the sum W + W̃ as our word vectors. Doing so typically gives a small boost in performance, with the biggest increase in the semantic analogy task.
We compare with the published results of a variety of state-of-the-art models, as well as with our own results produced using the word2vec tool and with several baselines using SVDs. With word2vec, we train the skip-gram (SG†) and continuous bag-of-words (CBOW†) models on the 6 billion token corpus (Wikipedia 2014 + Gigaword 5) with a vocabulary of the top 400,000 most frequent words and a context window size of 10. We used 10 negative samples, which we show in Section 4.6 to be a good choice for this corpus.
For the SVD baselines, we generate a truncated matrix X_trunc which retains the information of how frequently each word occurs with only the top 10,000 most frequent words. This step is typical of many matrix-factorization-based methods as the extra columns can contribute a disproportionate number of zero entries and the methods are otherwise computationally expensive.
The singular vectors of this matrix constitute the baseline "SVD". We also evaluate two related baselines: "SVD-S", in which we take the SVD of √X_trunc, and "SVD-L", in which we take the SVD of log(1 + X_trunc). Both methods help compress the otherwise large range of values in X. (We also investigated several other weighting schemes for transforming X; what we report here performed best. Many weighting schemes like PPMI destroy the sparsity of X and therefore cannot feasibly be used with large vocabularies. With smaller vocabularies, these information-theoretic transformations do indeed work well on word similarity measures, but they perform very poorly on the word analogy task.)
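The three SVD baselines can be sketched in a few lines; treating X_trunc as a dense array and scaling the left singular vectors by the singular values are our simplifications:

```python
import numpy as np

def svd_baseline(X_trunc, d=300, variant="SVD-L"):
    """Baseline word vectors from a truncated count matrix X_trunc
    (vocabulary x 10,000 most frequent words): plain counts ("SVD"),
    square-rooted counts ("SVD-S"), or log(1 + counts) ("SVD-L")."""
    M = {"SVD": X_trunc,
         "SVD-S": np.sqrt(X_trunc),
         "SVD-L": np.log1p(X_trunc)}[variant]
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :d] * S[:d]        # top-d singular directions as vectors
```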
Results
We present results on the word analogy task in Table 2. The GloVe model performs significantly better than the other baselines, often with smaller vector sizes and smaller corpora. Our results using the word2vec tool are somewhat better than most of the previously published results. This is due to a number of factors, including our choice to use negative sampling (which typically works better than the hierarchical softmax), the number of negative samples, and the choice of the corpus.
We demonstrate that the model can easily be trained on a large 42 billion token corpus, with a substantial corresponding performance boost. We note that increasing the corpus size does not guarantee improved results for other models, as can be seen by the decreased performance of the SVD- 7 We also investigated several other weighting schemes for transforming X; what we report here performed best. Many weighting schemes like PPMI destroy the sparsity of X and therefore cannot feasibly be used with large vocabularies. With smaller vocabularies, these information-theoretic transformations do indeed work well on word similarity measures, but they perform very poorly on the word analogy task. L model on this larger corpus. The fact that this basic SVD model does not scale well to large corpora lends further evidence to the necessity of the type of weighting scheme proposed in our model. Table 3 shows results on five different word similarity datasets. A similarity score is obtained from the word vectors by first normalizing each feature across the vocabulary and then calculating the cosine similarity. We compute Spearman's rank correlation coefficient between this score and the human judgments. CBOW * denotes the vectors available on the word2vec website that are trained with word and phrase vectors on 100B words of news data. GloVe outperforms it while using a corpus less than half the size. Table 4 shows results on the NER task with the CRF-based model. The L-BFGS training terminates when no improvement has been achieved on the dev set for 25 iterations. Otherwise all configurations are identical to those used by Wang and Manning (2013). The model labeled Discrete is the baseline using a comprehensive set of discrete features that comes with the standard distribution of the Stanford NER model, but with no word vector features. In addition to the HPCA and SVD models discussed previously, we also compare to the models of Huang et al. (2012) (HSMN) and Collobert and Weston (2008) (CW). We trained the CBOW model using the word2vec tool 8 . The GloVe model outperforms all other methods on all evaluation metrics, except for the CoNLL test set, on which the HPCA method does slightly better. We conclude that the GloVe vectors are useful in downstream NLP tasks, as was first shown for neural vectors in (Turian et al., 2010).
Model Analysis: Vector Length and Context Size
In Fig. 2, we show the results of experiments that vary vector length and context window. A context window that extends to the left and right of a target word will be called symmetric, and one which extends only to the left will be called asymmetric. In (a), we observe diminishing returns for vectors larger than about 200 dimensions. In (b) and (c), we examine the effect of varying the window size for symmetric and asymmetric context windows. Performance is better on the syntactic subtask for small and asymmetric context windows, which aligns with the intuition that syntactic information is mostly drawn from the immediate context and can depend strongly on word order. Semantic information, on the other hand, is more frequently non-local, and more of it is captured with larger window sizes.
Model Analysis: Corpus Size
In Fig. 3, we show performance on the word analogy task for 300-dimensional vectors trained on different corpora. On the syntactic subtask, there is a monotonic increase in performance as the corpus size increases. This is to be expected since larger corpora typically produce better statistics. Interestingly, the same trend is not true for the semantic subtask, where the models trained on the smaller Wikipedia corpora do better than those trained on the larger Gigaword corpus. This is likely due to the large number of city- and country-based analogies in the analogy dataset and the fact that Wikipedia has fairly comprehensive articles for most such locations. Moreover, Wikipedia's entries are updated to assimilate new knowledge, whereas Gigaword is a fixed news repository with outdated and possibly incorrect information.
Figure 3: Accuracy on the analogy task for 300-dimensional vectors trained on different corpora.
Model Analysis: Run-time
The total run-time is split between populating X and training the model. The former depends on many factors, including window size, vocabulary size, and corpus size. Though we did not do so, this step could easily be parallelized across multiple machines (see, e.g., Lebret and Collobert (2014) for some benchmarks). Using a single thread of a dual 2.1GHz Intel Xeon E5-2658 machine, populating X with a 10 word symmetric context window, a 400,000 word vocabulary, and a 6 billion token corpus takes about 85 minutes. Given X, the time it takes to train the model depends on the vector size and the number of iterations. For 300-dimensional vectors with the above settings (and using all 32 cores of the above machine), a single iteration takes 14 minutes. See Fig. 4 for a plot of the learning curve.
Model Analysis: Comparison with word2vec
A rigorous quantitative comparison of GloVe with word2vec is complicated by the existence of many parameters that have a strong effect on performance. We control for the main sources of variation that we identified in Sections 4.4 and 4.5 by setting the vector length, context window size, corpus, and vocabulary size to the configuration mentioned in the previous subsection. The most important remaining variable to control for is training time. For GloVe, the relevant parameter is the number of training iterations. For word2vec, the obvious choice would be the number of training epochs. Unfortunately, the code is currently designed for only a single epoch: it specifies a learning schedule specific to a single pass through the data, making a modification for multiple passes a non-trivial task. Another choice is to vary the number of negative samples. Adding negative samples effectively increases the number of training words seen by the model, so in some ways it is analogous to extra epochs. We set any unspecified parameters to their default values, assuming that they are close to optimal, though we acknowledge that this simplification should be relaxed in a more thorough analysis.
In Fig. 4, we plot the overall performance on the analogy task as a function of training time.
The two x-axes at the bottom indicate the corresponding number of training iterations for GloVe and negative samples for word2vec. We note that word2vec's performance actually decreases if the number of negative samples increases beyond about 10. Presumably this is because the negative sampling method does not approximate the target probability distribution well. For the same corpus, vocabulary, window size, and training time, GloVe consistently outperforms word2vec. It achieves better results faster, and also obtains the best results irrespective of speed. | 2015-03-07T18:39:34.000Z | 2014-10-01T00:00:00.000 | {
"year": 2014,
"sha1": "f37e1b62a767a307c046404ca96bc140b3e68cb5",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "ACL",
"pdf_hash": "1baa3f4fda7c92600a5c192adaed80a834d13ff9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
15707292 | pes2o/s2orc | v3-fos-license | Brief Review of Articles in 'Endocrinology and Metabolism' in 2013
In 2013, we published many excellent reviews and original articles in the various fields of endocrinology and metabolism. I believe that these publications have enhanced our scientific knowledge and set higher standards for medical care. Readers have access to updated reviews and new, original articles in this journal. I will briefly summarize a number of the excellent articles published in 2013 in 'Endocrinology and Metabolism' (Endocrinol Metab).
INTRODUCTION
In 2013, we published many excellent reviews and original articles in the various fields of endocrinology and metabolism. I believe that these publications have enhanced our scientific knowledge and set higher standards for medical care. Readers have access to updated reviews and new, original articles in this journal. I will briefly summarize a number of the excellent articles published in 2013 in 'Endocrinology and Metabolism' (Endocrinol Metab). Kwak [1] from Yonsei University wrote a review titled "Indications for fine needle aspiration in thyroid nodules." This review is based on several published recommendations and helps physicians easily understand the factors favoring fine-needle aspiration (FNA) [1]. Chung [2] from the Sungkyunkwan University School of Medicine wrote that "It is very difficult to maintain a stringent low iodine diet (LID) for a longer period of time. A nonstringent, simple LID for only 1 week might be enough for preparation for radioactive iodine (RAI) therapy" in his review entitled "Low iodine diet for preparation for RAI therapy in differentiated thyroid carcinoma in Korea" [2]. The diagnosis and treatment of hyperthyroidism are different according to geographical area. The Korean Thyroid Association (KTA) reported a consensus on the management of hyperthyroidism. Moon and Yi [3] summarized the KTA report on the contemporary practice patterns in the diagnosis and management of hyperthyroidism, and compared this report with guidelines from other countries. Lim et al. [4] reported that FNA-proven benign thyroid nodules can experience changes in ultrasonographic features or volume as a natural course. When using the American Thyroid Association recommendation as the criteria for nodule growth, 9% of the nodules increased in volume, 83% were unchanged, and 8% decreased in volume. The shape and echogenicity of the nodules rarely changed. The authors suggested that frequent follow-up ultrasonography is needed for cases with suspicious ultrasonographic findings because of the low malignancy detection rate [4]. The authors of an original article entitled "Expression of thyroid stimulating hormone receptor mRNA in mouse C2C12 skeletal muscle cells" found that the TSH receptor is expressed in a mouse skeletal muscle cell line, but they suggested that the role of TSH receptor signaling in skeletal muscle needs further investigation [5]. Kim et al. [6] analyzed 36 patients who underwent surgery after being diagnosed with papillary thyroid carcinoma (PTC) with lymph node metastasis. They stained primary tumor tissues immunohistochemically with an anti-CD68 antibody and evaluated clinical characteristics according to tumor-associated macrophage (TAM) density. They found TAMs in primary PTC tumors with lymph node metastasis.
ARTICLES ON DIABETES AND OBESITY
Quan and Lee [16] reviewed the role of autophagy in the pathogenesis of diabetes. In their review, "Role of autophagy in the control of body metabolism," they reported that mice with deficiencies in β-cell-specific autophagy show reduced β-cell mass and an insulin secretion defect resulting in hypoinsulinemia and hyperglycemia, but not diabetes. However, these mice developed diabetes when bred with ob/ob mice, implying that autophagy-deficient β-cells have defects in coping with elevated metabolic stress due to obesity. These results suggest that autophagy has a role in protecting β-cells and preventing the progression from obesity to diabetes [16]. Choi [17] wrote an excellent review called "Sarcopenia and sarcopenic obesity." Both sarcopenia and obesity are becoming major threats to the aging society. The concept of sarcopenic obesity may help to elucidate the interrelationships between physical disability, metabolic disorders, and mortality in the elderly population [17]. A review article on the relationships between lipoproteins and diabetes was presented by Prof. Barter of the University of New South Wales, Australia. Currently, high density lipoprotein cholesterol (HDL-C) is a topic of great interest. Barter [18] summarized the role of HDL-C in reducing cardiovascular risk and improving glycemic control in type 2 diabetes in his paper "High density lipoprotein: a therapeutic target in type 2 diabetes." Cho et al. [19] reviewed current clinical data related to each class of glucagon-like peptide 1 analogs and highlighted several important efficacy and safety issues. Bae et al. [20] reported a cross-sectional study in 9,029 subjects without diabetes, which showed that serum albumin level can be related to insulin resistance index after adjustment for multiple factors. The title of their original article, published in March 2013, is "Association between serum albumin, insulin resistance, and incident diabetes in nondiabetic subjects" [20]. The authors did not study the mechanism, but recent data show that anti-oxidant and anti-inflammatory properties of serum albumin may have an independent protective effect on incident diabetes, as observed in the association with carotid atherosclerosis and cardiovascular mortality. In the original article titled "The relationship of body composition and coronary artery calcification in apparently healthy Korean adults," Yu et al. [21] studied the relationship between waist-to-hip ratio (WHR) and coronary artery calcium score (CACS) assessed by multidetector computed tomography in 945 participants in a medical check-up program. In logistic regression analyses with coronary artery calcification as the dependent factor, the highest WHR quartile showed a 3.1-fold-increased odds ratio for coronary artery calcification compared with the lowest quartile after adjustment for confounding variables. This is a new finding showing a close relationship between WHR and CACS [21]. Seo et al. [22] reported a longitudinal study entitled "Tumor necrosis factor-α as a predictor for the development of nonalcoholic fatty liver disease: a 4-year follow-up study." They reported that higher serum tumor necrosis factor-α levels in subjects without nonalcoholic fatty liver disease (NAFLD) at baseline were associated with development of NAFLD 4 years later. This suggests a pathologic role of inflammation in NAFLD [22].
In 2013, there were two interesting original articles about the association between hemoglobin A1c and cardiovascular disease: "Hemoglobin A1c is positively correlated with Framingham risk score in older, apparently healthy nondiabetic Korean adults" [23] and "A1c variability can predict coronary artery disease in patients with type 2 diabetes with mean A1c levels greater than 7" [24]. The relationship between vitamin D status, obesity, and insulin resistance was investigated by Kang et al. [25] using data from 2,710 individuals aged ≥50 years based on national data from a representative sample of Korea National Health and Nutrition Examination Survey IV-2 in 2008. The author of the original article entitled "The impact of different anthropometric measures on sustained normotension, white coat hypertension, masked hypertension, and sustained hypertension in patients with type 2 diabetes" showed the relationships between anthropometric parameters and hypertension in patients with newly diagnosed diabetes. Even in patients with white coat hypertension or masked hypertension, waist circumference and WHR were higher than in normotensive patients [26]. Serum creatinine is a breakdown product of creatine phosphate in muscle. Skeletal muscle is one of the major target organs of insulin action. In the original article entitled "Variation in serum creatinine level is correlated to risk of type 2 diabetes," the authors report an association between serum creatinine levels and an increased risk of type 2 diabetes [27]. Adipocyte-specific fatty acid-binding protein (A-FABP) is a cytoplasmic protein expressed in macrophages and adipocytes. An association of serum A-FABP with fatty liver index as a predictor of NAFLD was covered in the paper entitled "Association of serum adipocyte-specific fatty acid binding protein with fatty liver index as a predictive indicator of nonalcoholic fatty liver disease" [28]. Cardiac autonomic neuropathy (CAN) and diabetic retinopathy (DR) are the most common complications of diabetes, and represent significant causes of morbidity and mortality in diabetes patients. In the study entitled "Association between cardiac autonomic neuropathy, diabetic retinopathy and carotid atherosclerosis in patients with type 2 diabetes," the authors argue that CAN or DR may be determinants of subclinical atherosclerosis in patients with type 2 diabetes by showing the associations between CAN, DR and mean carotid intima-media thickness [29]. A novel mutation in the von Hippel-Lindau tumor suppressor gene was identified in a patient presenting with gestational diabetes [30]. An interesting case report entitled "Recurrent insulin autoimmune syndrome caused by α-lipoic acid in type 2 diabetes" was included in the fourth issue of 2013 [31].
ARTICLES ON BONE METABOLISM
In the review "Vitamin D status in Korea," Choi [32] reported that the prevalence of vitamin D insufficiency, defined as a serum 25-hydroxyvitamin D [25(OH)D] level below 50 nmol/L, was 47.3% in males and 64.5% in females from The Korea National Health and Nutrition Examination Survey IV 2008. Only 13.2% of males and 6.7% of females had a serum 25(OH)D level greater than 75 nmol/L. In Korea, vitamin D insufficiency was more prevalent in young adults than in elderly people, which is likely to be due to the indoor lifestyle of younger people [32]. Lee et al. [33] wrote a review titled "Epidemiology of osteoporosis and osteoporotic fractures in South Korea." The Korean Nationwide-database Osteoporosis Study (KNOS) was performed through collaboration between the Korean Society of Bone and Mineral Research and the Health Insurance Review and Assessment Service. This review of the KNOS is helpful for the estimation of osteoporosis and osteoporosis-related fracture rates in Korea [33]. Divieti Pajevic [34] summarized novel findings in osteocyte biology in his paper titled "Recent progress in osteocyte research" and discussed future avenues of research. "Age-related changes in the prevalence of osteoporosis according to gender and skeletal site: the Korea National Health and Nutrition Examination Survey 2008-2010" is an interesting original article. In this study, the authors showed the age-related changes in the prevalence of osteoporosis according to gender and skeletal site [35]. An interesting case report entitled "Delayed surgery for parathyroid adenoma misdiagnosed as a thyroid nodule and treated with radiofrequency ablation" was published in 2013 [36].
ARTICLES ON PITUITARY AND OTHER ENDOCRINE DISEASES
Recent data show that reactive oxygen species (ROS) generation is a by-product of substrate oxidation and has a crucial role in modulating cellular responses involved in energy metabolism. Diano [37] elegantly summarized the effect of ROS levels on hypothalamic neuronal function in the modulation of food intake. In her review titled "Role of reactive oxygen species in hypothalamic regulation of energy metabolism" she reported that the increased ROS level in pro-opiomelanocortin neurons is likely to be an important regulator of neuronal activation leading to cessation of feeding, increased energy expenditure supported by increased sympathetic tone in brown fat, and decreased gluconeogenesis and glucose output in the liver [37]. Hong et al. [38] from Yonsei University College of Medicine gave us detailed clinical data on the epidemiology, clinical characteristics, and treatment of acromegaly in Korea with a thorough literature review. Jiang and Zhang [39] from Massachusetts General Hospital and Harvard Medical School wrote an updated review of the molecular pathogenesis of pituitary adenoma, focusing on the role of tumor suppressor genes, oncogenes, and microRNAs. Kim et al. [40] undertook a retrospective study examining several clinical factors, including age, sex, size, location, function, and histological findings, in 348 patients with an adrenal mass found incidentally on computed tomography undertaken in health examinations or for nonadrenal disease. This study is described in a paper titled "Clinical characteristics for 348 patients with adrenal incidentaloma" [40]. An experimental study was reported in a paper entitled "Herpes virus entry mediator signaling in the brain is imperative in acute inflammation-induced anorexia and body weight loss." In this paper, the authors argue that activation of brain herpes virus entry mediator signaling, well known for its role in the development of various inflammatory diseases, was responsible for inflammation-induced anorexia and body weight loss [41]. In the paper entitled "Effects of chronic restraint stress on body weight, food intake, and hypothalamic gene expressions in mice," it was reported that stress can affect body weight and food intake by initially modifying canonical food intake-related genes and then later modifying other genes involved in energy metabolism [42]. The authors
"year": 2014,
"sha1": "5f968a2176579b6dd7ce6c8434c84bafc1f67d20",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3803/enm.2014.29.3.251",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f968a2176579b6dd7ce6c8434c84bafc1f67d20",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17693392 | pes2o/s2orc | v3-fos-license | Ceramide galactosyltransferase (UGT8) is a molecular marker of breast cancer malignancy and lung metastases
Background: It was shown recently on the level of gene expression that UGT8, coding UDP-galactose:ceramide galactosyltransferase, is one of six genes whose elevated expression correlated with a significantly increased risk of lung metastases in breast cancer patients. In this study primary tumours and their lung metastases as well as breast cancer cell lines were analysed for UGT8 expression at the protein level. Methods: Expression of UGT8 in breast cancer tissue specimens and breast cancer cell lines was analysed using IHC, real-time PCR and Western blotting. Results: Comparison of the average values of the reaction intensities (IRS scale) showed a significant difference in UGT8 expression between (1) primary and metastatic tumours (Mann-Whitney U, P<0.05), (2) tumours of malignancy grades G3 and G2 (Mann-Whitney U, P<0.01) as well as G3 and G1 (Mann-Whitney U, P<0.001) and (3) node-positive and node-negative tumours (Mann-Whitney U, P<0.001). The predictive ability of increased expression of UGT8 was validated at the mRNA level in three independent cohorts of breast cancer patients (721 patients). Similarly, breast cancer cell lines with the 'luminal epithelial-like' phenotype did not express or weakly expressed UGT8, in contrast to malignant, 'mesenchymal-like' cells forming metastases in nude mice. Conclusion: Our data suggest that UGT8 is a significant index of tumour aggressiveness and a potential marker for the prognostic evaluation of lung metastases in breast cancer.
The endoplasmic reticulum-localised enzyme UDP-galactose:ceramide galactosyltransferase (UGT8, CGT, E.C. 2.4.2.62) (Schulte and Stoffel, 1993; Kapitonov and Yu, 1997; Sprong et al, 1998) is responsible for the synthesis of galactosylceramide (GalCer), which is the major glycosphingolipid of myelin produced by oligodendrocytes in the central nervous system (CNS) and Schwann cells in the peripheral nervous system (Marcus and Popko, 2002). The exact role of GalCer in myelin sheath development and function is poorly understood; however this glycolipid is well recognised as a specific marker for the differentiation of these cells (Pfeiffer et al, 1993). Studies using antibodies suggested that GalCer may participate in signal transduction by regulating the intracellular calcium level and in this way mediating cytoskeletal rearrangements (Dyer and Benjamins, 1990, 1991). On the basis of a knockout mice model lacking UGT8, it was proposed that GalCer, together with sulphatide, is involved in myelin function and stability but not in its biogenesis (Bosio et al, 1996; Coetzee et al, 1996; Dupree et al, 1999). In addition to myelin, GalCer was also found in normal kidney (Ariga et al, 1980; Kodama et al, 1982), small intestine and colon (Natomi et al, 1993), liver (Nilsson and Svennerholm, 1982), testis (Vos et al, 1994), and milk (Bouhours and Bouhours, 1979).
Since the pioneering work of Hakomori and Murakami (1968), it was firmly established that neoplastic transformation and tumour progression are almost invariably associated with changes in the expression profiles of surface glycosphingolipids (Hakomori, 1996). However, there is very little information available on GalCer expression in human tumours. Only in studies on molecular markers in human astrocytomas and oligodendrogliomas was it found that high amounts of GalCer were present more frequently in oligodendrogliomas than in astrocytomas (Sung et al, 1996;Popko et al, 2002). Interestingly, more is known about the expression of UGT8 in cancerous tissues. Transcriptome profiling of prostate cancer cell lines showed that cells with metastatic properties express a much higher level of UGT8 mRNA in comparison with non-metastatic cells (Oudes et al, 2005). Using the same approach, we recently showed that UGT8 is one of six genes whose elevated expression correlated with a significantly increased risk of lung metastases in breast cancer patients (Landemaine et al, 2008). It was also found that elevated expression of UGT8 in breast cancer was significantly associated with ER-negativity, and therefore with a more malignant phenotype (Yang et al, 2006;Ruckhäberle et al, 2008).
As all the available information on the presence of UGT8 in breast cancer tissues was obtained only at the level of mRNA expression, primary tumours of different malignancy grades and their lung metastases were analysed for UGT8 expression at the protein level. In addition, presence of UGT8 and GalCer was determined in breast cancer cell lines representing different tumour phenotypes.
Tissue specimens and cell lines
Tissue blocks from 10 patients with primary breast cancer (invasive ductal carcinoma, IDC) were obtained from the Department of Pathology, Lower Silesian Center of Oncology (Wrocław, Poland) (eight cases) and Department of Pathology, Centre René Huguenin (Saint-Cloud, France) (two cases). Their corresponding lung metastases were collected also as tissue blocks from the Department of Thoracic Surgery, Wrocław Medical University (Poland), and Department of Pathology, Centre René Huguenin (Saint-Cloud, France) (Table 1). Thirty tissue specimens were obtained from patients who underwent resections of primary IDC at the Lower Silesian Center of Oncology, Wrocław (Poland) (Table 1). For mRNA analysis they were frozen at −80 °C and for immunohistochemical staining they were fixed in 10% neutral-buffered formalin and embedded in paraffin. Paraffin sections, mounted on Superfrost Plus slides (Menzel Glaser, Braunschweig, Germany), were dehydrated and stained with haematoxylin and eosin. Malignant tumours were graded according to the classification of Bloom-Richardson with the modification of Elston and Ellis (1991). The study was approved by the Bioethical Committee of the Wrocław Medical University (no. KB-87/2005).
Three breast tumour series, the 'MSK', 'EMS', and 'NKI' cohorts (van de Vijver et al, 2002; Wang et al, 2005; Minn et al, 2005a, b, 2007), for which microarray data are freely available, were also included in this study. The following breast cancer cell lines were used in this study: MCF7, T47D, SKBR-3, BT-474, MDA-MB-231 (Cell Lines Collection of the Ludwik Hirszfeld Institute of Immunology and Experimental Therapy, Wrocław, Poland), MCF10CA1a.cl1 (provided by Dr S Santner, Karmanos Cancer Institute, Detroit, USA) and BO2 as a derivative of the MDA-MB-231 cell line (provided by Dr Philippe Clezardin, INSERM U664, France). The cells were cultured in α-minimum essential medium (αMEM) supplemented with 10% foetal calf serum (FCS; Invitrogen, Carlsbad, CA, USA), 2 mM L-glutamine, and antibiotics.
Immunohistochemistry
For immunohistochemical staining, 4-μm-thick paraffin sections were deparaffinised in xylene and gradually rehydrated using ethanol. Endogenous peroxidase activity was blocked by a 5-min exposure to 3% H2O2. The cultured cells in eight-well culture slides (Becton Dickinson, Franklin Lakes, NJ, USA) were fixed in 4% neutral-buffered formalin for 15 min. Antigen retrieval was performed by exposure of the tissue sections and breast cancer cell lines to boiling Antigen Retrieval Solution (Dako, Glostrup, Denmark) in a microwave oven (250 W) for 15 min. Rabbit polyclonal antibodies directed against UGT8 were purchased from Atlas Prestige Antibodies (Stockholm, Sweden). The antibodies were diluted with Background Reducing Antibody Diluent (Dako). The sections were incubated with primary antibodies for 1 h at room temperature. Goat secondary antibodies (EnVision/HRP; Dako) directed against rabbit immunoglobulins were bound to a dextran framework conjugated with peroxidase. The reaction was developed using 3,3′-diaminobenzidine tetrahydrochloride (DAB). Primary Negative Control (Dako) was used as the negative control. All tissue sections were counterstained with Mayer's haematoxylin.
The obtained photomicrographs were subjected to computer-assisted image analysis using a computer coupled to an Olympus BX-41 light microscope (Olympus, Tokyo, Japan) using the AnalySis software (Olympus). The degree of UGT8 expression was ranked using the modified semi-quantitative Immunoreactive Remmele Score (IRS) according to Remmele and Stegner (1987). The method takes into account both the proportion of stained cells and the intensity of the reaction, while its final result represents the product of the two parameters, with values ranging from 0 to 12 points (no reaction = 0 points, weak reaction = 1-2 points, moderate reaction = 3-4 points, intense reaction = 6-12 points). The results were subjected to statistical analysis using the Statistica 7.1 software (StatSoft, Kraków, Poland). When groups of data were compared which failed to satisfy assumptions of the parametric test, the Mann-Whitney U-test, the non-parametric equivalent of Student's t-test, was used. For matched samples of primary tumours and their metastases, the Wilcoxon signed rank sum test, the non-parametric version of a paired-samples t-test, was used. Correlations were tested by Spearman's correlation analysis. Results were considered statistically significant with P<0.05 in all analyses. Survival times were determined by the Kaplan-Meier method and the significance of differences was determined by the log-rank test.
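For readers unfamiliar with the IRS, the arithmetic can be illustrated with a short sketch. The bin edges used for the proportion sub-score below follow the classical Remmele-Stegner scheme and are our assumption; the paper states that it used a modified version of the scale, whose exact bins are not given here.

```python
def irs(percent_positive_cells, staining_intensity):
    """Immunoreactive Score: product of a proportion sub-score (0-4) and
    an intensity sub-score (0-3), giving 0-12 points overall.
    Bin edges below are the classical Remmele-Stegner ones (assumption)."""
    if staining_intensity not in (0, 1, 2, 3):
        raise ValueError("intensity must be 0 (none) to 3 (strong)")
    p = percent_positive_cells
    proportion = 0 if p == 0 else 1 if p < 10 else 2 if p <= 50 else 3 if p <= 80 else 4
    return proportion * staining_intensity

# Moderate staining (2) in ~60% of cells -> 3 * 2 = 6, which falls in the
# "intense reaction" band (6-12 points) of the 0-12 scale described above.
print(irs(60, 2))
```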
RT-PCR and real-time PCR
Total RNA was isolated from the tissue samples using the RNeasy Fibrous Tissue Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. The protocol included on-column DNAse digestion to remove the genomic DNA. First-strand cDNA was synthesised using the SuperScript III First-Strand Synthesis System (Invitrogen, Carlsbad, CA, USA). The relative amounts of UGT8 mRNA were determined by quantitative real-time PCR with an iQ5 Optical System and iQ SYBR Green Supermix (Bio-Rad, Hercules, CA, USA) according to the manufacturer's protocols. GAPDH was used as a reference gene. The primers used were: realUGT8f/realUGT8r for UGT8 and SGAPDH/ASGAPDH for GAPDH (Table 2). The reactions were performed under the following conditions: initial denaturation at 94 °C for 120 s, followed by 35 cycles of denaturation at 94 °C for 30 s, annealing at 58 °C for 30 s, and elongation at 72 °C for 60 s. The specificity of the PCR was determined by melt-curve analysis for each reaction.
SDS-PAGE and Western blotting
Cell lysates were obtained by treating the cells with RIPA lysis buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 0.1% SDS, 1% IGEPAL CA-630, 0.5% sodium deoxycholate). Proteins were quantified using a bicinchoninic acid protein assay kit (Sigma-Aldrich, St Louis, MO, USA) and subjected to electrophoresis on 8% SDS-PAGE gel according to Laemmli. After electrophoresis, the proteins were transferred to a nitrocellulose membrane (Bio-Rad). UGT8 was detected with rabbit polyclonal antibodies (Atlas Antibodies, Stockholm, Sweden) and horseradish peroxidase-conjugated donkey anti-rabbit immunoglobulins (Jackson ImmunoResearch, West Grove, PA, USA). The bound antibodies were visualised with the Lumi-Light PLUS Luminol/Enhancer Solution and Lumi-Light PLUS Stable Peroxidase Solution (Roche, Basel, Switzerland). The light-sensitive membrane was then developed by incubating with the Kodak Developer and Kodak Fixer according to the kit's protocol (Kodak, Rochester, NY, USA).
Expression of UGT8 in primary breast cancer tumours and their metastases to the lung
On the basis of histological examination, 10 primary breast carcinomas and their matched metastases were included in this study. The primary carcinomas were classified according to the Bloom-Richardson scale in the modification of Elston and Ellis (1991) as G1 (1 case), G2 (3 cases), and G3 (6 cases). Expression of UGT8 in the paraffin sections of the cancer tissue specimens was analysed using rabbit polyclonal antibodies. All primary tumours except one stained more weakly with anti-UGT8 antibodies than did the lung metastases (Figure 1). Comparison of the average values of the reaction intensities (IRS scale) showed a significant difference (Wilcoxon t-test, P<0.017) in UGT8 expression between primary and metastatic tumours (Table 3). Using IHC, UGT8 expression was also studied in the primary tumours according to their malignancy grades. For staining with anti-UGT8 antibodies, additional paraffin sections from 8 tumours of grade G1, seven tumours of grade G2, and 15 tumours of grade G3 were included.

Figure 1: Immunohistochemical staining of primary breast carcinoma (A) and paired lung metastasis (B) with anti-UGT8 rabbit polyclonal antibodies.
It was found that the expression of UGT8 in the tumour cells increased with increasing malignancy grade, reaching the highest values in the weakly differentiated cells (G3) (Figure 2A-C). When the average values of the reaction intensities (IRS scale) for tumours of different histological differentiation were compared by the Mann-Whitney U-test, significant differences in UGT8 expression between malignancy grades G3 and G2 (P<0.01) as well as G3 and G1 (P<0.001) were found (Figure 2D). The results obtained at the protein level were confirmed when 30 primary tumours available as tissue specimens were also analysed by real-time PCR (Figure 3A). A significant positive correlation (r = 0.58, P<0.05) between the expression of UGT8 protein and UGT8 mRNA was found using Spearman's correlation analysis (Figure 3B). The intensity of UGT8 staining in node-positive breast cancer tumours, according to the IRS scale, amounted on average to 4.7 ± 1.53, and in node-negative tumours to 2.41 ± 1.24. This difference also proved to be significant (P<0.001) (Figure 4).
To validate the predictive ability of the elevated expression of UGT8 in primary breast tumours, the level of mRNA for UGT8 was analysed in three independent cohorts of breast cancer patients whose microarray data were available (Wang et al, 2005; Minn et al, 2005a, b, 2007) (Figure 5). In all three analysed cohorts, patients assigned to the high-risk group had significantly shorter lung metastasis-free survival.
Expression of UGT8 in established breast cancer cell lines
Figure 2 legend (continued): Reaction intensities with rabbit polyclonal antibodies for UGT8 were calculated on the basis of the semi-quantitative IRS scale of Remmele and Stegner (1987) and are represented as means; *P<0.01 for primary breast tumours of grade G3 as compared with primary breast tumours of grade G2, and #P<0.001 for primary breast tumours of grade G3 as compared with primary breast tumours of grade G1 (Mann-Whitney U-test).

The following breast cancer cell lines were used to analyse the expression of UGT8 with rabbit polyclonal antibodies: MCF-7, T47D, SKBR-3, BT-474, MDA-MB-231, MCF10CA1a.cl1, and BO2 (see Tissue specimens and cell lines).
Detection of GalCer in established breast cancer cell lines
Figure 4 legend: P<0.05 for UGT8-expressing, node-negative primary breast tumours as compared with node-positive primary tumours (Mann-Whitney U-test). Reaction intensities with rabbit polyclonal antibodies for UGT8 were calculated on the basis of the semi-quantitative IRS scale of Remmele and Stegner (1987) and are represented as means.

Neutral glycolipids purified from the breast cancer cell lines were separated on HP-TLC plates and immunostained with rabbit polyclonal
antibodies directed against GalCer. Staining of GalCer, migrating as bands of appropriate mobility, was seen only in neutral glycolipid fractions isolated from metastasising breast cancer MCF10CA1a.cl1, MDA-MB-231, and BO2 cells (Figure 7). No bands were detected when using neutral glycolipids isolated from the rest of the analysed cell lines (BT474, SKBR-3, T47D, and MCF-7).
DISCUSSION
Breast carcinoma is the leading cause of mortality due to malignancy in Europe and North America. Death from breast cancer is mainly due to distant metastases, which are organ-specific and localise to bones, liver, lung, and brain (Fidler, 2003). On the basis of transcriptome analysis of primary tumours and distant metastases using genome-wide microarray techniques, it was proposed that the risk for developing metastases can be predicted by a 'metastatic-gene signature' expressed by subsets of primary tumours (Ramaswamy et al, 2003; Weigelt et al, 2003; Driouch et al, 2007). With the use of a nude mice model it was further shown that clones of MDA-MB-231 breast cancer cells developing organ-specific metastases in bone or lung are characterised by the expression of specific subsets of non-overlapping genes (Kang et al, 2003; Minn et al, 2005a, b). This finding suggested the existence of specific gene signatures determining the localisation of metastases in specific organs, which was confirmed in three independent series of breast tumours showing that the 'lung metastasis signature' is predictive of high risk for the development of lung metastases (Minn et al, 2005a, b, 2007). Recently, a six-gene signature predicting breast cancer lung metastases obtained for metastatic human tissue specimens was published (Landemaine et al, 2008), which correlated with increased risk of lung metastasis in a series of 72 lymph node-negative breast tumours. These data were further validated on a larger series of samples (Driouch et al, 2009). Among six genes highly overexpressed in lung metastases as compared with that in other breast cancer metastases, UGT8, which encodes an enzyme responsible for the synthesis of galactosylceramide, was found (Landemaine et al, 2008). In other studies on breast cancer, it was shown that elevated expression of UGT8 was significantly associated with ER-negativity and therefore with a more malignant phenotype (Yang et al, 2006; Ruckhäberle et al, 2008). These studies indicating an elevated level of UGT8 in more malignant breast cancer cells were performed only at the level of mRNA expression using microarray analysis and quantitative RT-PCR. In our study we evaluated UGT8 protein expression in primary breast cancer tumours and their matched lung metastases using IHC. Significantly stronger staining with rabbit polyclonal antibodies directed against UGT8 was observed in the specimens from lung metastases than in paired primary tumours, confirming earlier results obtained at the mRNA level. These data suggested that UGT8 is associated in some way with tumour progression, and that its elevated level could be important in the development of lung metastases (see below). Therefore, to study the changes in UGT8 expression during increasing malignancy of breast cancer cells in more detail, samples of breast tumours having different malignancy grades were analysed for the presence of UGT8 at the protein as well as mRNA level. It was found that the amounts of UGT8 protein and mRNA increased with tumour malignancy grades, and highly significant differences in UGT8 expression were found in G3 tumours vs G2 tumours. Interestingly, highly increased expression of UGT8 was also observed in primary node-positive tumours as compared with that in node-negative primary tumours.
When the predictive value of UGT8 expression was further analysed at the mRNA level in primary tumours of the 721 breast cancer patients of the three independent cohorts, the patients assigned to the high-risk group had significantly shorter lung metastasis-free survival. Therefore, our data suggest that UGT8 is a significant index of tumour aggressiveness and potential marker for the prognostic evaluation of lung metastases in breast cancer.
Figure 7: Immunostaining of neutral glycolipids from human breast cancer cell lines (lanes: BT-474, SKBR-3, T47D, MCF-7, standard), separated by HP-TLC, with anti-GalCer rabbit polyclonal antibodies. For the analysed cell lines, an aliquot of total neutral glycolipids corresponding to 1 × 10^7 cells was applied to the HP-TLC plate.

According to Lacroix and Leclercq (2004), breast cancer cell lines can be classified into three groups on the basis of their phenotype and invasiveness. The first group, including BT-483, MCF-7, T-47D, and ZR-75 cells, was named 'luminal epithelial-like' because the cells highly express such genes as ER, CDH1 (E-cadherin), TJP1 (zonula occludens-1), and DSP (desmoplakin-I/II), typical of the epithelial phenotype of breast cells. All these cells are weakly invasive. The second group, called 'weakly luminal epithelial-like', represented by SKBR-3 cells and BT-474, is similar
to the first group, expressing the same epithelial markers, although at lower levels. The cells belonging to the third group are quite different as they express proteins found in mesenchymal cells, for example vimentin, and are highly invasive in vitro. They were named 'mesenchymal-like' ('stromal-like') and are represented by MDA-MB-231 and MCF10CA1a.cl1 cells. As the first two groups probably correspond to tumours of grades G1 and G2, and the 'mesenchymal-like' group could represent G3 tumours, we analysed the expression of UGT8 in different breast cancer cell lines. Expression of UGT8 at the mRNA and protein level in the established breast cancer cell lines correlated well with the results obtained for the clinical samples. Cells with the 'luminal epithelial-like' phenotype (MCF-7, T47D, SKBR-3, and BT-474) did not express or weakly expressed UGT8, in contrast to the malignant, 'mesenchymal-like' cells (MCF10CA1a.cl1, MDA-MB-231, and BO2) forming metastases in the nude mice model.
UGT8 is responsible for the synthesis of galactosylceramide (GalCer), which is the major glycosphingolipid of myelin in the CNS and peripheral nervous system (Marcus and Popko, 2002). There is very little information available on GalCer expression in human tumours, except for human astrocytomas and oligodendrogliomas (Sung et al, 1996; Popko et al, 2002). Very little is also known about the possible functions of GalCer in tumour cells, which is in striking contrast to glucosylceramide (GlcCer), the other simple glycosphingolipid, consisting only of ceramide and a glucose residue. It is widely accepted that GlcCer is a mitogenic molecule, as stimulation of its synthesis decreases the intracellular pool of ceramide, which has an important function in programmed cell death as a proapoptotic agent (Radin, 2001; Taha et al, 2006). Interestingly, several lines of evidence suggest that overexpression of glucosylceramide synthase and accumulation of GlcCer can lead to the development of drug resistance in cancer cells (Lavie et al, 1996; Okazaki et al, 1998; Radin, 2001). Therefore we analysed the presence of GalCer in breast cancer cells and found that the 'mesenchymal-like' cells MDA-MB-231, BO2, and MCF10CA1a.cl1, each forming metastases in nude mice, are the only cell lines synthesising this glycolipid. This finding is in agreement with the hypothesis of Beier and Gorogh (2005), who proposed that accumulation of GalCer in tumour cells inhibits apoptosis, which enables metastatic cells to survive in the hostile microenvironment of the target organ. However, further functional studies are necessary to confirm this hypothesis.
In summary, we have shown for the first time that (1) expression of UGT8 is higher in breast cancer metastases to the lung than in matched primary tumours and that increased amounts of this enzyme in cancerous tissue are associated with progression to a more malignant phenotype, and (2) expression of UGT8 and GalCer is limited to breast cancer cell lines forming metastases in a nude mouse model. | 2014-10-01T00:00:00.000Z | 2010-07-20T00:00:00.000 | {
"year": 2010,
"sha1": "7f63bc9adaa8cfbac045cd9804c1c2a1ac6ab926",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/6605750.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f63bc9adaa8cfbac045cd9804c1c2a1ac6ab926",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
244817016 | pes2o/s2orc | v3-fos-license | Development of Resource-Saving Technologies in the Use of Sedimentation Inhibitors for Reverse Osmosis Installations
The processes of desalination of weakly mineralized waters using a reverse osmosis membrane were studied. The operational efficiency of membranes is limited mainly by membrane contamination. It was shown that the preliminary mechanical water purification helps to increase the productivity and selectivity of the membrane. One of the main causes of membrane contamination is the formation of carbonate deposits on their surface. One way to prevent membrane contamination is to dose antiscalants. It was established that the use of hydrolyzed polyacrylonitrile (HPAN) and hydrolyzed polyacrylamide (HPAA) as a stabilizer of scale formation is effective for concentrates of reverse osmosis desalination of water.
INTRODUCTION
Recently, in industrial densely populated areas, the problem of water pollution and a sharp increase in the mineralization of surface waters has become more acute (Biggs et al.). At present, in water-deficient industrial regions, highly mineralized waters with high hardness are used in cooling systems (Filloux et al.). In order to reduce the scale of gypsum and silicon dioxide, it is necessary to use antiscalants with different functionalities, so when cleaning wastewater containing different types of scale, it is important to select a reagent that provides the maximum effect (Yin et al. 2021, Pramanik et al. 2017, Rashed et al. 2016, Mi and Elimelech 2013). Phosphonate antiscalants are widely used in the processes of reverse osmosis desalination of water to prevent scale formation and improve the quality of purified water. In reverse osmosis desalination of groundwater at a permeate selection rate of 85% (Ca²⁺ = 765 mg/L, PO₄³⁻ = 13-15 mg/L and pH = 7.6), various antiscalants were used to inhibit the formation of calcium phosphate (Mangal et al. 2021). However, it is desirable to remove them before disposing of the RO concentrate, as the presence of phosphonate antiscalants can prevent the removal of hardness from the concentrates and affect the ecosystem. A highly effective magnetic adsorbent (magnetic La/Zn/Fe3O4@PAC composite) can be used to remove the phosphonate antiscalant (Li, C. et al. 2021).
In (He et al. 2009) it was shown that the K752 antiscalant can significantly extend the induction period for gypsum nucleation, while the GHR antiscalant extends the induction period for calcite nucleation; even at a dosage of only 0.6 mg/L, they slow down the rate of crystal deposition. In (Qiang et al. 2013) it was shown that a scale inhibitor prepared from collagen obtained by hydrolyzing modified chrome shavings had a good ability to inhibit calcium carbonate scale.
The efficacy of scale inhibitors for a reverse osmosis desalination plant has been developed and evaluated (Chesters et al.). Despite the significant amount of research and publications, the development of effective scale stabilizers remains very important and relevant. Thus, the development of methods for effective stabilizing water treatment will reduce corrosion and scale formation in heat exchange equipment and will also allow switching to closed systems of water consumption and the rational use of water. The development of the scientific bases of resource-saving technologies will increase the level of ecological safety of facilities, the region and the country.
The process of reverse osmosis desalination of water was carried out using 10 dm³ of model solution. By means of a pump, water was fed into a cassette with a reverse osmosis membrane. The concentrate was returned to the container with the initial solution, and the permeate was collected in a separate container. The pressure in the system was maintained by a valve that regulates the withdrawal of concentrate. After sampling each dm³ of purified solution, the permeate and concentrate were analyzed for chlorides, sulfates and hardness ions, and the alkalinity was determined. The degree of permeate selection was varied from 10 to 90%. Sulfates were determined using the photometric method, chlorides with the Mohr method, whereas alkalinity and hardness were determined by standard methods.
Membrane performance (transmembrane flow rate) was determined by the formula:

J = ΔV / (S · Δt)

where ΔV is the volume of permeate (dm³) that passed through the membrane with area S (m²) during the sampling time Δt (h).
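As a quick sanity check of this definition, the flux can be evaluated directly. The following minimal Python sketch is illustrative only; the variable names and the example membrane area and sampling interval are our assumptions, not values from the paper:

```python
def transmembrane_flux(delta_v_dm3: float, area_m2: float, delta_t_h: float) -> float:
    """Transmembrane flux J = dV / (S * dt), in dm3/(m2*h)."""
    return delta_v_dm3 / (area_m2 * delta_t_h)

# Hypothetical example: 1 dm3 of permeate collected over 0.5 h on a 0.04 m2 membrane
print(transmembrane_flux(1.0, 0.04, 0.5))  # 50.0 dm3/(m2*h)
```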
When conducting the studies to assess the effectiveness of scale stabilizers a thermostat was used. The samples of 100 ml were kept at a temperature of 60 °C for 6 hours. The choice of temperature is due to the fact that real water circulation systems operate at a temperature of 40-60 °C. As inhibitors were used hydrolyzed polyacrylamide (HPAA) after ozonation of its 5% solution for 1 hour, hydrolyzed polyacrylonitrile (HPAN) after ozonation of 5% solution for 1 hour and HPAN after sonication of 1% solution for 20 minutes. Improvement of the efficiency of reagents is achieved by ozonation or physical modification with low-frequency sound waves (USB). Doses of these reagents were 0.5-15.0 mg/dm 3 . After cooling, the samples were filtered, and the residual water hardness was determined.
The stabilizing effect was calculated by the formula:

SE = ((ΔH − ΔH_i) / ΔH) · 100% (4)

The anti-scale effect was calculated by the formula:

ASE = ((H_i − H_res) / ΔH) · 100% (5)

where: SE is the stabilizing effect, %; ASE is the anti-scale effect, %; ΔH is the reduction of hardness in the blank experiment, mg-eq/dm³; ΔH_i is the reduction of hardness in the experiment with a scale inhibitor, mg-eq/dm³; H_res is the residual hardness in the blank, mg-eq/dm³; H_i is the residual hardness in the sample with a stabilizer, mg-eq/dm³.
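Since SE and ASE are simple ratios of hardness values, a small numerical check is useful. The Python sketch below uses the blank experiment reported later in this paper (hardness dropping from 50 to 38 mg-eq/dm³, i.e., ΔH = 12) and a hypothetical inhibitor run with residual hardness 49.9 mg-eq/dm³:

```python
def stabilizing_effect(dH_blank: float, dH_inhibitor: float) -> float:
    """SE = (dH - dH_i) / dH * 100, from the reductions of hardness."""
    return (dH_blank - dH_inhibitor) / dH_blank * 100.0

def anti_scale_effect(h_res_inhibitor: float, h_res_blank: float, dH_blank: float) -> float:
    """ASE = (H_i - H_res) / dH * 100, from the residual hardness values."""
    return (h_res_inhibitor - h_res_blank) / dH_blank * 100.0

# Blank: hardness drops from 50 to 38 mg-eq/dm3 (dH = 12).
# With inhibitor (hypothetical): residual hardness 49.9, so dH_i = 0.1.
print(stabilizing_effect(12.0, 0.1))        # ~99.2 %
print(anti_scale_effect(49.9, 38.0, 12.0))  # ~99.2 %
```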
RESULTS AND DISCUSSION
Improving the quality of the water used reduces the volume of purge (blowdown) water discharged from circulation systems, which reduces the water intake from natural sources and the discharge of mineralized water into reservoirs, which leads to their pollution. The development of effective scale stabilizers allows the development of resource-saving technologies of water use.
The methods of combating salt deposits are aimed either at preventing salt deposition or at removing the salt deposits already formed (Amjad and Demadis 2015, Chauhan et al. 2015). The classification of these methods is presented in Figure 1.
There are several types of impurities that significantly reduce the service life of membranes (Ruengruehan et al. 2020, Gomelya et al. 2014). The first group includes insoluble solids, suspended and colloidal particles. The second group includes the compounds whose presence in water leads to the formation of solid inclusions (Kassymbekov et al. 2021, Sevostianov et al. 2021). The substances belonging to the first group can be removed by mechanical methods; moreover, preliminary clarification of water on a filter increases the productivity of a membrane (Polyakov et al. 2019). When filtering the model solution at a pressure of 0.3 MPa, the productivity of the installation decreases gradually (Fig. 2, Table 1) as the mineralization of the concentrate increases with an increasing degree of permeate selection. Pre-clarification of water (curve 2) leads to an increase in installation productivity and has little effect on its selectivity for chlorides, sulfates, hardness ions and bicarbonates.
In this case, the decrease in membrane productivity with an increasing degree of permeate selection is due to the increase in osmotic pressure along with the salt content in the concentrate. However, in general, the decrease in the productivity of membrane installations is determined by 95-97% by the contamination of the membrane surface and only by 3-5% by the compaction of their capillary-porous structure. The principally dangerous compounds that promote sedimentation on membranes include hardness salts, i.e., compounds of calcium and magnesium in the form of carbonates, bicarbonates and sulfates (Sharma et al. 2020). One of the promising ways to prevent membrane contamination is the dosing of an antiscalant (sedimentation inhibitor).
Since under actual conditions at room temperature the carbonate deposits on the membrane form rather slowly, the effectiveness of antiscalants was assessed using an express method based on determining the stability of the concentrates of reverse osmosis water purification. The use of highly mineralized waters leads to intensive sedimentation, especially at elevated temperatures, resulting in the failure of pipelines and equipment. Therefore, it is necessary to use sediment inhibitors.
The effectiveness of scale inhibitors depends on the quality of the source water. This water is unstable with respect to scaling, because when heated to a temperature of 60 °C, the hardness decreases from 50 to 38 mg-eq/dm³.
If polyphosphates are used long-term, hydrolysis to orthophosphates occurs, and as a result polyphosphates lose their activity. In addition, phosphorus is a biogenic element that leads to increased biofouling in buildings and communications. HPAN is a stable substance, so it does not decompose chemically in water at temperatures in the range of 0-100 °C. This reagent is an effective inhibitor of scaling of highly mineralized waters at temperatures up to 100 °C when used in concentrations of 0.5-15 mg/dm³.
Evaluation of the effectiveness of the sediment inhibitors was carried out by following the change in the hardness of mineralized water when heated to a temperature of 60 °C in the presence of the inhibitors. The results are shown in Tables 2 and 3.
The stabilizing effect of HPAN and HPAA without treatment in highly mineralized waters at a dose of 1-15 mg/dm³ is 39-43%. Ultrasonic or ozonation treatment of these reagents can increase the stabilizing and anti-scale effects.
As can be seen from Table 3, HPAN in concentrations of 0.5-5.0 mg/dm³ appeared ineffective; however, an increase of the dose to 5-15 mg/dm³ provided rather high stability of the water with respect to scaling.
The stabilizing effect when using HPAA reached 16.7% at a dose of 1 mg/dm³ and 45.8% at a dose of 2 mg/dm³. When the dose of HPAA was increased to 5.0 mg/dm³, this inhibitor provided 100% water stability (Fig. 3).
HPAN at a concentration of 5 mg/dm³ in highly mineralized waters provides a stabilizing effect of 99.8% and an anti-scale effect of 99.2% (Fig. 4). Therefore, the obtained results indicate the potential of the selected reagents as stabilizers of scale formation.
CONCLUSIONS
It was established that pre-purification of the model solution before reverse osmosis desalination helps increase the productivity of the membrane and has little effect on its selectivity for chlorides, sulfates, hardness ions and bicarbonates. Evaluation of the efficiency of using HPAN and HPAA as stabilizers of scale formation for concentrates of reverse osmosis desalination of water (highly mineralized waters) was performed. HPAA at a concentration of 5 mg/dm³ for waters with H = 50 mg-eq/dm³ at T = 60 °C and t = 6 h provides a stabilizing effect at the level of 100%.

[Table 3. The dependence of the stability of the concentrate on the dose of HPAN at 60 °C; columns: Dose (mg/dm³), H (mg-eq/dm³), H_res (mg-eq/dm³), ΔH (mg-eq/dm³), SE (%).] | 2021-12-03T16:24:07.183Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "e2459538a524fe8f7eb5bc8ccb77ddfda5b7223a",
"oa_license": "CCBY",
"oa_url": "http://www.jeeng.net/pdf-144075-70043?filename=Development%20of.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "be2e383d326f6987610e7c00c9100f6fb26e969a",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
261516099 | pes2o/s2orc | v3-fos-license | Causal associations between thyroid cancer and IgA nephropathy: a Mendelian randomization study
Background The incidence of kidney disease caused by thyroid cancer is rising worldwide. Observational studies cannot establish whether thyroid cancer is independently associated with kidney disease. We performed the Mendelian randomization (MR) approach to genetically investigate the causality of thyroid cancer on immunoglobulin A nephropathy (IgAN). Methods and results We explored the causal effect of thyroid cancer on IgAN by MR analysis. Fifty-two single nucleotide polymorphisms at independent genetic loci were related to thyroid cancer. The primary approach in this MR analysis was the inverse variance weighted (IVW) method, and MR‒Egger was the secondary method. Weighted mode and penalized weighted median were used to analyze the sensitivity. In this study, the random-effect IVW models showed the causal impact of genetically predicted thyroid cancer across the IgAN risk (OR, 1.191; 95% CI, 1.131–1.253, P < 0.001). Similar results were also obtained in the weighted mode method (OR, 1.048; 95% CI, 0.980–1.120, P = 0.179) and penalized weighted median (OR, 1.185; 95% CI, 1.110–1.264, P < 0.001). However, the MR‒Egger method revealed that thyroid cancer decreased the risk of IgAN, but this difference was not significant (OR, 0.948; 95% CI, 0.855–1.051, P = 0.316). The leave-one-out sensitivity analysis did not reveal the driving influence of any individual SNP on the association between thyroid cancer and IgAN. Conclusion The IVW model indicated a significant causality of thyroid cancer with IgAN. However, MR‒Egger had a point estimation in the opposite direction. According to the MR principle, the evidence of this study did not support a stable significant causal association between thyroid cancer and IgAN. The results still need to be confirmed by future studies. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-023-09633-6.
Introduction
Thyroid cancer is the most common endocrine malignancy. The incidence of thyroid cancer is 15.7 per 100,000 population per year in the US [1,2]. Worldwide, there are an estimated 567,233 new cases of thyroid cancer and 41,071 deaths per year [3]. Thyroid hormones influence renal development, the glomerular filtration rate, renal transport systems, and water/electrolyte homeostasis [4]. Thyroid dysfunction also causes kidney disease [5,6]. Thyroid dysfunction is associated with nephrotic syndrome, including IgAN, membranoproliferative glomerulonephritis, and minimal change disease [7][8][9][10].
The relationship between thyroid dysfunction and kidney disease has been investigated for many years. However, the impact of thyroid cancer on kidney injury is less explored. Observational studies have demonstrated the increased risk of renal disease from thyroid cancer [11][12][13]. This risk is probably related to thyroid cancer treatment and genetic factors [6]. Although some cases reported nephrotic syndrome resulting from thyroid cancer [14], the causal association between thyroid cancer and nephrotic syndrome has not been elucidated in observational studies. IgAN is the most common form of primary glomerular disease [15]. More than 30% of patients with IgAN will develop end-stage kidney disease within twenty years. IgAN is challenging to treat, and the prognosis varies from patient to patient. Consequently, identifying the causal association between thyroid cancer and IgAN has important and practical implications for kidney protection.
MR is a research approach that evaluates the causal link between an exposure and a disease outcome by analyzing genetic variants related to the exposure. The principle of MR is similar to that of randomized controlled trials (RCTs); in MR, the randomization variables are genetic variants. MR studies evaluate the relationships discovered by clinical observational studies and search for novel associations. Disease conditions cannot alter the sequences of germline DNA; therefore, MR analysis reasonably avoids reverse causation [16]. No observational evidence shows the potential causal relationship between thyroid cancer and IgAN. To test this association, we performed MR analysis [17,18] to investigate the causal impact of thyroid cancer on IgAN.
Study design
This MR study estimated the causal influence of thyroid cancer on IgAN risk using GWAS summary statistics (Fig. 1). The authors declare that all data in this study are available. MR is a method to test the causal impact of an exposure on disease development, with genetic variations as the instrumental variables. This method overcomes unmeasured confounding to make some causal inferences more precise [19]. The MR design depends on three assumptions: (1) the genetic variants are strongly correlated with thyroid cancer; (2) the genetic variants are not related to any confounders of the thyroid cancer–outcome association; and (3) the genetic variants are only related to IgAN via thyroid cancer [18].
GWAS summary statistics for thyroid cancer and IgAN
We searched for thyroid cancer-related traits in a large-scale genome-wide association study (GWAS) database (https://gwas.mrcieu.ac.uk/datasets/ieu-a-1082/) for available GWAS summary statistics. Before conducting MR analysis, we strictly screened single nucleotide polymorphisms (SNPs) to guarantee quality. The thyroid cancer dataset came from the Italian population; the included population was 43 to 56 years old. The IgAN dataset was a cohort of IgAN patients selected from the UK Glomerulonephritis DNA Bank. Thus, the two population cohorts were not similar in clinical and demographic characteristics (such as age, gender, race, and education), but they are genetically comparable because they share a common European ancestry, so we assume they share a similar genetic profile. There was no sample overlap between the exposure and outcome datasets. First, we selected SNPs associated with the exposure at the genome-wide significance threshold (p < 5 × 10⁻⁸). Second, we aggregated SNPs by linkage disequilibrium clumping (r² < 0.01 within 1000 kb windows for variants in the locus). Third, we calculated the F statistics of the SNPs selected above. To avoid the bias of weak instrumental variables on the final results, we excluded SNPs with F statistics less than 10. We comprehensively searched the risk factors for IgAN in previously published literature [20]. According to the previously mentioned MR assumptions, we excluded SNPs associated with IgAN or the risk factors for IgAN by searching SNP information in PhenoScanner V2 (http://www.phenoscanner.medschl.cam.ac.uk/). The summary statistics on the association between thyroid cancer-related SNPs and IgAN were derived from the GWAS database (https://gwas.mrcieu.ac.uk/datasets/ieu-a-1081/; ID: "ieu-a-1081"). Ethical approval was obtained from the relevant institutional review boards for the study data contributing to these GWAS meta-analyses. In the present study, we only summarized data from these studies; therefore, additional ethics approval was not required.
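The instrument-selection steps above (genome-wide significance, LD clumping, and the F-statistic cut-off) are easy to express in code. The following Python sketch is illustrative only; the column names, and the assumption that LD clumping has already been performed against a reference panel, are ours rather than details from the paper:

```python
import pandas as pd

P_GENOME_WIDE = 5e-8   # genome-wide significance threshold
F_MIN = 10             # weak-instrument cut-off

def select_instruments(gwas: pd.DataFrame) -> pd.DataFrame:
    """Filter a clumped exposure GWAS (hypothetical columns: snp, beta, se, pval)."""
    iv = gwas[gwas["pval"] < P_GENOME_WIDE].copy()
    iv["F"] = (iv["beta"] / iv["se"]) ** 2   # per-SNP F statistic
    return iv[iv["F"] > F_MIN]
```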
Statistical analysis
The IVW method can provide a consistent assessment of the causality of the exposure when each variant satisfies all three assumptions of valid instrumental variables. An IVW estimate can be obtained by calculating the slope of the weighted linear regression. We recognized IVW as the primary approach. Two other methods (the weighted median estimator and MR-Egger) served as sensitivity analyses, because in the IVW method all instrumental variables must correspond to the MR assumptions. The weighted median estimator can supply a consistent causal assessment when more than half of the instrumental variables are valid. The MR-Egger estimate is unbiased if the genetic instrument is independent of the pleiotropic effects. We used the penalized weighted median, weighted mode and MR-Egger methods for additional sensitivity analyses, which make differing assumptions for the triangulation of evidence. The penalized weighted median estimator provides a consistent causal assessment when more than half of the instrumental variables are valid. Furthermore, the pleiotropy and heterogeneity of SNPs were individually assessed by the MR-Egger intercept and Cochran's Q statistics. We calculated the F statistics of the selected SNPs to detect the strength of the IVs at a threshold of F > 10, which is a typical approach in MR analysis. The R² and F statistics of each SNP in the included exposure group (instrumental variable) were calculated based on previously published literature [21]. The R²-specific calculation formula is as follows: R² = 2 × MAF × (1 − MAF) × beta.exposure², where MAF is the minor allele frequency and R² is the proportion of variance explained by the instrument. The F statistic is derived from the formula F = beta.exposure² / standard error.exposure². The effect estimates of genetically predicted thyroid cancer on IgAN are presented as odds ratios (ORs) with their 95% CIs per 1-unit-higher log-odds of thyroid cancer. The association of each SNP with thyroid cancer was further plotted against its effect on the risk for IgAN. A nonsignificant difference between the intercept and zero (p > 0.05) indicates the absence of pleiotropic effects. The value of Cochran's Q was used to evaluate the heterogeneity. If the p value of Cochran's Q was less than 0.05, the primary outcome was the IVW method with a random-effects model; otherwise, the fixed-effects model was the primary outcome. In addition, we applied the leave-one-out analysis to estimate the robustness of the results in MR analysis against any outlier SNP. The causal relationship was considered significant if the results met the following three conditions: (1) the p value of IVW was less than 0.05; (2) the direction of the estimates among the IVW, MR-Egger, and weighted median methods was consistent; and (3) the p value of the MR-Egger intercept test was more than 0.05. We performed all statistical analyses with the "TwoSampleMR" package in R version 3.4.2 (R Foundation for Statistical Computing, Vienna, Austria). A two-tailed p value < 0.05 indicated statistical significance.
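For readers who want to reproduce the core calculations, the fixed-effect IVW estimator and the per-SNP variance explained can be written in a few lines. This is a minimal sketch in Python (the paper itself used the TwoSampleMR package in R), assuming arrays of harmonized per-SNP effect sizes:

```python
import numpy as np

def ivw_fixed(b_exp, b_out, se_out):
    """Fixed-effect IVW: weighted regression of outcome betas on exposure betas
    through the origin, with weights 1 / se_out^2. Returns (beta, se, OR)."""
    b_exp, b_out, se_out = map(np.asarray, (b_exp, b_out, se_out))
    w = 1.0 / se_out ** 2
    beta = np.sum(w * b_exp * b_out) / np.sum(w * b_exp ** 2)
    se = np.sqrt(1.0 / np.sum(w * b_exp ** 2))
    return beta, se, np.exp(beta)

def variance_explained(maf, beta_exp):
    """R^2 = 2 * MAF * (1 - MAF) * beta^2, as defined in the Methods."""
    return 2.0 * maf * (1.0 - maf) * beta_exp ** 2
```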
Instrumental variables for thyroid cancer on IgAN
Table 1 presents the essential characteristics of the dataset included in this study.
A total of 52 available SNPs independently associated with the genetic risk for thyroid cancer were screened using the previously defined SNP screening criteria (r² < 0.01 within 1000 kb windows for variants in the locus and a p value < 5 × 10⁻⁸). We calculated the F statistics of each selected SNP to exclude weak instrumental variable bias. None of these 52 SNPs were identified as having weak instrumental variable bias, and all had F statistic values greater than 10. After analyzing the 52 SNPs in PhenoScanner, we found that no SNP was associated with IgAN or the potential risk factors for IgAN. Thus, a total of 52 SNPs served as instrumental variables for thyroid cancer and IgAN, as shown in Table 2. The SNP characteristics and F statistic for each association with thyroid cancer and IgAN are shown in Supplementary Table 1. The strength of each SNP for thyroid cancer has an F statistic value between 30.836 and 320.326, eliminating the bias of weak instrumental variables. Figure 2 shows the overall design and summarizes the results of this study.
Evaluation of assumptions and sensitivity analyses
In addition, IVW analysis (Q = 105.414, P = 0.020) and MR-Egger analysis (Q = 72.567, P < 0.001) were performed to determine heterogeneity. MR-Egger regression revealed a directional pleiotropic effect across the genetic variants (intercept, 0.186; P < 0.001). After deleting each SNP in turn, the pooled estimate of the remaining SNPs remained essentially unchanged. Thus, no single SNP significantly impacted the MR estimation results based on the leave-one-out analysis (as shown in Fig. 3C), with all significant estimates ranging from 0.16 to 0.24. Furthermore, asymmetry in the funnel plot can bias MR methods by indicating directional horizontal pleiotropy. We therefore examined the funnel plot for asymmetry but found no such evidence in our study (as shown in Fig. 3D).
Discussion
This MR study is the first to explore the causality of thyroid cancer on IgAN risk. Although IVW, the primary method, demonstrated an impact of thyroid cancer on IgAN, the contrary direction of the MR-Egger estimate did not support this association. Therefore, the evidence in this study does not sufficiently demonstrate that thyroid cancer increases the genetic risk for IgAN.
Onconephrology is a new field that has appeared during the last few years [22]. Renal disease emerges on a large scale in patients with cancer. Kidney injury may result from cancer treatment, chemotherapeutic drugs, and the malignancy itself. Currently, nephrologists encounter an increasing number of cancer patients with clinical features manifesting as nephropathy syndrome, and they confront barriers in the treatment of these patients.
Identifying whether nephropathy syndrome is derived from the malignancy or from chemotherapeutic drugs is complex. This uncertainty creates hesitation in discontinuing chemotherapeutic drugs or starting kidney treatment [23][24][25][26]. Nephropathy syndrome treatment always requires glucocorticoid or immunosuppressant administration [27,28]. However, these pharmaceuticals induce many adverse drug reactions, including infection, osteoporosis, diabetes, and gastrointestinal reactions [29]. If a cancer patient suffers from kidney disease caused by the malignancy, withdrawal of chemotherapeutics or the initiation of glucocorticoids/immunosuppressants is ineffective and worsens the patient's condition. Antineoplastic therapy is much more crucial than kidney protection, and stopping antitumor treatment may threaten survival. Therefore, identifying kidney injury induced by cancer is essential [30].
Recently, some cases of nephropathy syndrome induced by solid tumors have been reported. Most solid tumors associated with membranous nephropathy are lung and gastric cancers, followed by prostate cancer, thymoma, and so on [31]. Minimal change disease is frequently observed in lung cancer, colorectal cancer, and thymoma, and rarely in pancreatic, bladder, breast, and ovarian cancers [32]. For IgAN, the association involves tumors of the respiratory tract, buccal mucosa, and nasopharynx [33]. These findings indicate various pathological types of nephropathy syndrome induced by solid tumors. However, whether thyroid cancer increases the risk of nephropathy syndrome has not been confirmed. Thyroid function has a vital effect on kidney development, and thyroid dysfunction directly worsens kidney function and leads to the development of kidney disease. A 52-year-old woman was diagnosed with nephropathy syndrome resulting from medullary thyroid carcinoma [14]. Clinicians discovered diffuse glomerular deposition of amyloid by kidney biopsy. Medullary thyroid carcinoma releases the calcitonin hormone, forming amyloid deposits in the kidney. This case indicates that thyroid cancer probably causes injury to the glomerulus. This MR study did not ultimately confirm a causal link between thyroid cancer and IgAN. Despite insufficient evidence supporting the notion that thyroid cancer patients tend to develop a risk of IgAN, we cannot ignore the probability that thyroid cancer has a potential influence on the kidney. We discovered some connections between thyroid cancer and IgAN in previous reports, and we discuss how these associations could act through the underlying mechanisms of circulating immune complexes and complement activation.
First, thyroid antigens have been found in glomerular deposits. Thyroid antigen–antibody complexes probably cause the deposition of circulating immune complexes in the kidney. Immune complexes thicken the glomerular basement membrane, alter podocyte function, and activate the classic pathway of the complement system. Complement system activation accelerates the inflammatory process through the chemotactic factors C3a and C5a. According to previous evidence, glomerular inflammation is one of the mechanisms involved in IgAN [34]. We speculate that the deposition of thyroid antigen–antibody circulating immune complexes increases the risk of IgAN. Furthermore, this assumption is consistent with previous evidence [35]. According to Santoro et al., autoantibodies directed against the epitopes of thyroglobulin, thyroperoxidase, and glomerular antigens likely cause immune-mediated glomerular disease. Additionally, epitope spreading has not yet been studied in patients but has been shown to occur in experimental immunization with an immunogenic thyroglobulin peptide. Second, previous studies established that thyroid cancer cells can produce IgG. IgG is positively associated with the growth and metastasis of thyroid cancer cells. In thyroid cancer tissues, the colocalization of IgG with C1q, C3c, and C4c was observed [36]. IgG was also detected in glomerular immune deposits of all IgAN patients [37]. IgAN is an autoimmune disease characterized by the glomerular deposition of immune complexes [38][39][40]. In patients with IgAN, the circulating immune complexes consist of IgG, IgA, IgM, and complement C3 [41,42]. C3 is biologically capable of activating the complement pathway [43] and is present in kidney biopsy specimens of patients with IgAN. In thyroid cancer tissues, researchers discovered C3c, a fragment of complement component C3. Further work is required to validate the hypothesis that IgG and C3 released by thyroid cancer cells expose patients to IgAN risk. Finally, we found that IgG and TgAb IgG in thyroid cancer activate the alternative pathway (AP) and lectin pathway (LP), respectively. Some evidence suggests AP and LP activation as pathogenic mechanisms in IgAN through the promotion of complement activation. AP and LP activity produce a pathogenic link between glomerular IgA deposition and glomerular inflammation and injury [34]. Patients with thyroid cancer likely have a risk of AP and LP action, which stimulates complement to induce kidney injury in IgAN. However, this assumption will have to be demonstrated by future studies, and animal experiments remain an important method for investigating and validating these assumptions.
Conclusion
In conclusion, according to the MR principle, the evidence of this study did not confirm a stable significant causal association between thyroid cancer and IgAN. However, based on previous studies, we cannot neglect the potential connection between thyroid cancer and IgAN from the perspective of circulating immune complex deposition and complement activation. This assumption and the underlying impact deserve more investigation and exploration.
Limitations
This study still has several limitations. First, the enrolled participants were of European ancestry; thus, the results are not applicable to other ethnic groups. Second, MR analyses establish causal hypotheses by the random distribution of genetic variants, and it is not easy to differentiate mediation from pleiotropy with the MR technique; in our genome, many variants probably influence one or more phenotypes. Third, additional mediator methods and observational approaches cannot validate the metabolic pathways underlying the link between thyroid cancer and IgAN. Future research will examine the underlying process, since the UK Biobank data have limitations, and will provide valuable recommendations for clinical practice.
Fig. 1
Fig. 1 Mendelian randomization model of thyroid cancer and risk of IgA nephropathy. The design is under the assumption that the genetic variants are associated with thyroid cancer, but not with confounders, and the genetic variants influence IgA nephropathy only through thyroid cancer. SNP indicates single nucleotide polymorphism
Fig. 2
Fig. 2 Mendelian randomization model of thyroid cancer and risk of IgA nephropathy. The overall design and abstract of the results of this study
Fig. 3 A
Fig. 3 A Scatter plot to visualize the causal effect of thyroid cancer on IgA nephropathy. Scatter plots of genetic associations with thyroid cancer against the genetic associations with IgA nephropathy. The slope of each line represents the causal association for each method; the blue line represents the inverse-variance weighted estimate, and the red line represents the weighted median estimate. B Fixed-effect IVW analysis of the causal association of thyroid cancer with IgA nephropathy. The black dots and bars indicate the causal estimate and 95% CI using each SNP. The red dot and bar indicate the overall estimate and 95% CI meta-analyzed by the MR-Egger and fixed-effect inverse variance weighted methods. C MR leave-one-out sensitivity analysis for thyroid cancer on IgA nephropathy. Circles indicate MR estimates for thyroid cancer on IgA nephropathy using the inverse-variance weighted fixed-effect method if each SNP was omitted in turn. D Funnel plot of genetic associations with IgA nephropathy against causal estimates based on each genetic variant individually, where the causal effect is expressed in log odds ratio of IgA nephropathy for each unit increase in thyroid cancer. The overall causal estimates (β coefficients) of thyroid cancer on IgA nephropathy estimated by the inverse-variance weighted (light blue line) and MR-Egger (navy blue line) methods are shown
Table 1
Descriptions for data sources and assessment of the instrumental variables strength
Table 2
The characteristics of 52 SNPs and their genetic associations with thyroid cancer and IgA nephropathy. EAF, effect allele frequency; EA, effect allele; OA, other allele
Table 3
The association of thyroid cancer with IgA nephropathy risk using various methods. OR, odds ratio; CI, confidence interval; IVW, inverse variance-weighted; MR, Mendelian randomization | 2023-09-05T13:52:54.514Z | 2023-09-05T00:00:00.000 | {
"year": 2023,
"sha1": "67517a091595f42f4989e963a0dd453c72b37994",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/counter/pdf/10.1186/s12864-023-09633-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3f49331747d44a7d9340a390b60d649ed976ca1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260235753 | pes2o/s2orc | v3-fos-license | An automatic system for extracting figure-caption pair from medical documents: a six-fold approach
Background Figures and captions in medical documentation contain important information. As a result, researchers are becoming more interested in obtaining published medical figures from medical papers and utilizing the captions as a knowledge source. Methods This work introduces a unique and successful six-fold methodology for extracting figure-caption pairs. The a-trous wavelet transform is first used to retrieve the edges from the scanned page. Then, using the maximally stable extremal regions connected component feature, text and graphical contents are isolated from the edge document, and a multi-layer perceptron is used to detect and retrieve figures and captions from medical records. The figure-caption pair is then extracted using the bounding box approach. The files that contain the figures and captions are saved separately and supplied to the end user as the output of any investigation. The proposed approach is evaluated using a self-created database based on the pages collected from five open access books: Sergey Makarov, Gregory Noetscher and Aapo Nummenmaa's book "Brain and Human Body Modelling 2021", "Healthcare and Disease Burden in Africa" by Ilha Niohuru, "All-Optical Methods to Study Neuronal Function" by Eirini Papagiakoumou, "RNA, the Epicenter of Genetic Information" by John Mattick and Paulo Amaral and "Illustrated Manual of Pediatric Dermatology" by Susan Bayliss Mallory, Alanna Bree and Peggy Chern. Results Experiments and findings comparing the new method to earlier systems reveal a significant increase in efficiency, demonstrating the suggested technique's robustness and effectiveness.
INTRODUCTION
The utilization of digital health recording systems is slowly increasing, particularly amongst major hospitals, and in the coming years the e-health platform will be fully available. A significant amount of paper-based health records is still stored in hospital health record repositories. These records hold the long history of each patient's medical assessment and are therefore reliable sources for medical research and enhanced patient care (Beck et al., 2021).
Due to the significance of these paper records, a few hospitals have begun to digitize them as image files where the patient ID is the only reliable key to accessing them, but most hospitals have preserved them as they are. This is because of the high digitization costs and also the comparatively low benefit of records that can only be accessed by patient identification. However, the real purpose of computerizing paper documents is to provide them with functionality so that they can be utilized in medical research, such as extracting similar cases from them. If it proves difficult to identify realistic solutions to this problem, large amounts of these paper-based clinical records will soon be lost in book vaults and may be destroyed in the coming years. Therefore, we face the challenge of designing a better system that is easy to run and can easily integrate the large paper-based history of clinical records into the e-health climate.
For clinical or medical records, medical images and captions deliver essential information. Medical data include medical papers, dissertations, and theses, as well as information on a variety of clinical trials and life sciences studies. Typically, this information is available as scanned data, and these scanned medical images cannot be read manually for all their valuable information. Manually analyzing and extracting information from all the scanned files available in a medical data repository would take years. This is among the main reasons why large-scale data analysis and visualization with the utilization of machine learning are needed. Much information is unavailable because it is still in the form of scanned data, and these formats cannot be efficiently interpreted. Using the internet, researchers may access medical data in a variety of formats; some of these involve single-column versus double-column layouts, layout variance, etc. Extracting all the necessary information from medical scanned images by hand becomes almost impossible. Although machine learning can manage vast amounts of data in conjunction with other AI approaches, the complexities of extracting data from scanned images go back generations (Wu et al., 2021). Medical image extraction, that is, distinguishing the pixels of organs or diseases from the context of the medical record, is one of the most difficult challenges of medical image processing, as it must provide crucial information on the structures and sizes of these organs. Thus, an important new technique has been implemented in this study to extract medical figures and captions from medical reports. Because of the complex and varied nature of medical publications and the differences in figure form, texture, and material, such extraction is not a simple process. Because medical figures frequently contain several image panels, research has been done to classify compound figures and their constituent panels.
Several researchers have proposed different systems (Li, Jiang & Shatkay, 2018; Li et al., 2017) to automatically extract graphics from medical scanned images by applying different available technologies. Previous systems have been based on traditional techniques like filters for edge detection (Singh et al., 2012; Li, Jiang & Shatkay, 2019; Somkantha, Theera-Umpon & Auephanwiriyakul, 2010; Naiman, Williams & Goodman, 2022; Trabucco et al., 2021) and mathematical techniques (Senthilkumaran & Vaithegi, 2016; Piórkowski, 2016; Rajinikanth et al., 2018; Li et al., 2021). Then, for a long time, machine learning techniques using hand-crafted attributes were the dominant methodology for extracting medical figures from scanned medical documents (Jiang et al., 2020; Xia et al., 2017; Espanha et al., 2018). The design and implementation of these features have always been the main concern for the development of such programs, and the complexity of these methods has been considered a major limitation for their deployment. Later, deep learning techniques came into the picture as a result of hardware improvements and began to show their significant abilities in image processing tasks. The impressive skill of deep learning methods has made them a preferred choice for extracting medical graphics from medical scanned documents (Zhou, Greenspan & Shen, 2017; Pekala et al., 2019; Fritscher et al., 2016; Dalmış et al., 2017; Moeskops et al., 2016). The extraction of medical figures based on deep learning methods has attracted considerable attention, particularly in the previous few years, which highlights the need for a comprehensive review. Various platforms, like the Yale Image Finder, BioText, Open-I, askHermes, and the GXD database, aim to allow users to find suitable medical figures and captions from medical documents (Demner-Fushman et al., 2012; Sanyal, Chattopadhyay & Chatterjee, 2019; Xu, McCusker & Krauthammer, 2008; Yu, Liu & Ramesh, 2010). Some researchers have proposed methods to extract medical figures along with their captions from medical documents (Li, Jiang & Shatkay, 2018; Demner-Fushman, Antani & Thoma, 2007; Pavlopoulos, Kougia & Androutsopoulos, 2019; Lopez et al., 2011). However, the primary step toward this aim, viz., extracting figure-caption pairs from medical documents, is neither well-studied nor yet well-addressed. Thus, an efficient new technique is introduced here to extract figures and captions from medical publications.
The contributions of this study are as follows: • In this article, an effective and new six-fold approach is presented to extract figures and related captions from scanned medical documents. In contrast to previous methods, the raw graphical objects stored in the scanned files are not examined directly by the proposed method.
• The edges are extracted from the scanned document using the a-trous wavelet transform. Then, the text element is distinguished from the graphical element of the scanned file, and maximally stable extremal regions connected component analysis is applied to the text and graphical elements to identify individual figures and captions. Text and graphics are separated using a multi-layer perceptron, which is trained to recognize the text parts of the scanned file so that the graphical content parts are identified easily. The bounding box concept is used to create separate individual blocks for every figure-caption pair.
The rest of the article presents the details of the method and validates its efficiency through a sequence of experiments. 'Materials & Methods' discusses the related work and presents the proposed methodology; 'Results' presents the experiments used to measure the performance of the proposed technique, together with the results attained by the approach and by other previously established methods used for comparison; 'Discussion' contains the discussion; while 'Conclusions' concludes and summarizes directions for future work.
MATERIALS & METHODS
This study aims to extract figures and captions from medical documents. Figure 1 summarizes the full framework of the proposed six-fold method. Throughout this research, the classification of text and graphics is done at the connected component (CC) level within a single framework. After binarization, edge extraction, and CC extraction, the processing flow includes two main stages: text block classification and graphics block classification. The methodology contains six basic steps: pre-processing, edge extraction using the a-trous wavelet transform, CC extraction using maximally stable extremal regions (MSER), identification of graphics blocks, detection of captions, and identification of figure-caption pairs.
Pre-processing
The pages collected from medical documents (in PDF format) are first converted to images. These images may be in grayscale or color format. In this step, if the input scanned image is a color image, a weighted sum of the red, green, and blue components is used to convert it to grayscale. Eq. (1) depicts the conversion process:

g(x, y) = 0.299 R(x, y) + 0.587 G(x, y) + 0.114 B(x, y) (1)

where R, G, and B are the red, green, and blue color channels of the color image, respectively. The next step of pre-processing is to enhance the grayscale image by reducing the noise. Different grayscale image enhancement methods have been published in the literature. In the proposed method, a smoothing filter is applied to improve the converted grayscale document image g(x, y), followed by unsharp filtering.
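A minimal Python sketch of this conversion follows. Note that the 0.299/0.587/0.114 weights in Eq. (1) above are the standard ITU-R BT.601 luminance weights and are an assumption here, since the paper only states that a weighted channel sum is used:

```python
import numpy as np

BT601 = np.array([0.299, 0.587, 0.114])  # assumed standard luminance weights

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale by a weighted channel sum."""
    return rgb[..., :3].astype(np.float64) @ BT601
```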
The grayscale image is passed through an appropriate filter to boost the image quality. The peak signal-to-noise ratio (PSNR) is utilized as a parameter to determine the appropriateness of the filter. PSNR is a measure of image quality: the higher the PSNR, the better the quality of the image. The filter that offers the highest PSNR is therefore selected in this step. In this study, three filters are utilized for smoothing, namely: the Wiener filter, the median filter, and a low pass filter.
Wiener filter: This filter is designed to restore the image in such a way that the mean square error between the original and restored image is minimized.
Median filter: This filter replaces each pixel with the median of its neighborhood pixels. This significantly reduces salt-and-pepper noise.
Low pass filter: This filter replaces each image pixel with the mean of its neighborhood pixels. The mask displayed in Fig. 2 is used to get an unsharp-filtered image from the smoothed image s(x, y). This leads to the image u(x, y), which can be used to extract enhanced edge information.
The PSNR for an image can be calculated by utilizing Eq. (2):

PSNR = 10 log₁₀ (G² / MSE) (2)

where "G" is the image's overall gray level value and MSE is the mean square error between the original and the enhanced image. Let g(x, y) and u(x, y) be of size P × Q; the MSE can be calculated by using Eq. (3):

MSE = (1 / (P × Q)) Σₓ Σ_y [g(x, y) − u(x, y)]² (3)
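The two quantities above translate directly into code. A short Python sketch follows, assuming 8-bit images (G = 255):

```python
import numpy as np

def mse(g: np.ndarray, u: np.ndarray) -> float:
    """Mean square error between original g and enhanced u, Eq. (3)."""
    return float(np.mean((g.astype(np.float64) - u.astype(np.float64)) ** 2))

def psnr(g: np.ndarray, u: np.ndarray, G: int = 255) -> float:
    """PSNR = 10 * log10(G^2 / MSE), Eq. (2)."""
    m = mse(g, u)
    return float("inf") if m == 0 else 10.0 * np.log10(G ** 2 / m)
```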
Edge detection with the undecimated wavelet transform
After enhancing the scanned image, the next step is to detect edges from u(x, y) using the a-trous undecimated wavelet transform.
A-trous undecimated wavelet transform
The fast multi-scale edge detector of Mallat [33] is implemented utilizing an undecimated form of the wavelet transform called the a-trous algorithm. The discrete approximation to the wavelet transform may be accomplished using a modified version of the a-trous algorithm. The technique decomposes an image (or a signal) into an approximation signal (A) and a detail signal (D) at each scale; the detail signal is referred to as a wavelet plane, and it has the same dimension as the original image. With every decomposition step in the a-trous algorithm, the low-pass and high-pass wavelet filters are dilated, and the convolution is done with no sub-sampling. The a-trous algorithm generates N + 1 images of the same size (N detail images plus a single approximation image), where N is the number of levels of decomposition. This algorithm is very efficient for several reasons: (1) it is translation-invariant (a shift in the input simply shifts the coefficients), (2) the discrete transform values are precisely determined at each pixel position without any interpolation, and (3) the correlation between scales is easily manipulated because of its inherent structure. In this study, the detail coefficients are determined as differences between consecutive approximation coefficients. With this concept, reconstruction is simply the sum of all the detail coefficient matrices and the final approximation coefficient matrix. For an N-level decomposition, the reconstruction formula is then expressed by Eq. (4):

I(x, y) = A_N(x, y) + Σ_{j=1}^{N} D_j(x, y) (4)
Edge detection
There are several choices of wavelet basis functions, and it is critical to choose a basis suitable for edge detection. The mother wavelet, in particular, should be symmetrical. Symmetry is significant since in this study a smoothed image is differentiated, and thus a lack of symmetry means that the edge position will shift as the image is successively smoothed and differentiated. With the correct wavelet selection, at a specified scale, the edge positions correspond to the modulus maxima of the wavelet transform.
The cubic B-spline wavelet (B3) perfectly fits this proposal as it is symmetrical and easy to derive. A spline of degree N − 1 is, in particular, the convolution of N box functions. In this study, (1/2, 1/2) is convolved with itself four times to produce the low-pass filter coefficients of the B3 spline, as shown in Eq. (5):

h = (1/16) (1, 4, 6, 4, 1) (5)
The above-mentioned Eq. (5) is extended to two dimensions, by the outer product of the one-dimensional filter with itself, to produce a low-pass filter kernel (K_LPF) suitable for use with the a-trous algorithm, as shown in Eq. (6):

K_LPF = (1/256) ×
[ 1  4  6  4  1
  4 16 24 16  4
  6 24 36 24  6
  4 16 24 16  4
  1  4  6  4  1 ] (6)
This kernel is interesting because the costly division operations can be replaced by inexpensive bit-shifts, as the denominators are all powers of two. This edge detection algorithm also offers the ability to control the behavior of the edge detection scheme by integrating scale into the overall system. There is no concept of scale in traditional edge detectors, but with a wavelet model the scale can be adjusted as desired by regulating the number of wavelet decompositions. Scale determines the significance of the edges that are observed. Different balances between image resolution and edge scale result in different edge-detected images: high resolution with small scale (a minimal number of wavelet decompositions) results in relatively noisier and more discontinuous edges, while low resolution combined with large scale can lead to undetected edges. Sharper edges are more likely to be maintained when the wavelet transform proceeds to subsequent scales, while weaker edges are attenuated as the scale rises.
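To make the decomposition concrete, here is a minimal Python sketch of the a-trous scheme with the B3 kernel of Eq. (6); the use of scipy.ndimage and reflective boundary handling are our assumptions, not details from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

B3 = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0  # Eq. (5)
K_LPF = np.outer(B3, B3)                                  # 5x5 kernel, Eq. (6)

def atrous_decompose(image: np.ndarray, levels: int):
    """Undecimated (a trous) decomposition into detail planes D_j and
    a final approximation A_N, so that image = A_N + sum(D_j), Eq. (4)."""
    approx = image.astype(np.float64)
    details = []
    for j in range(levels):
        step = 2 ** j                       # dilate the kernel at each level
        k = np.zeros((4 * step + 1, 4 * step + 1))
        k[::step, ::step] = K_LPF
        smooth = convolve(approx, k, mode="reflect")
        details.append(approx - smooth)     # D_{j+1} = A_j - A_{j+1}
        approx = smooth
    return details, approx

# An edge map at scale j can then be taken as the local modulus maxima of details[j].
```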
Connected Component (CC) extraction using MSER
A novel CC-based region detection algorithm that uses MSER is used in this article. The MSER algorithm extracts several co-variant regions, named MSERs, from an image: an MSER is a stable connected image component. MSER is built on the concept of taking regions that remain substantially constant across a wide range of thresholds. All pixels below a certain threshold are white, whereas all pixels above or equal to that threshold are black. Consider a succession of thresholded images I_t, with frame t corresponding to threshold t: a black image is observed at first, then white dots corresponding to local intensity minima form and grow larger. These white dots ultimately merge, resulting in a white image. The set of all extremal regions is the set of all connected components in the sequence. The term extremal refers to the fact that all pixels within an MSER are either brighter (bright extremal regions) or darker (dark extremal regions) than all pixels on its outer boundary.
This study aims to find the graphical elements along with the associated captions in the medical document. The bounding box region property is used to filter the connected components of text and graphics elements. The bounding boxes are expanded to the left, right, up, and down so that some overlapping boxes are formed. Then the overlapping boxes are merged to form a new bounding box, thus reducing the number of bounding boxes. Also, a threshold is set to eliminate small regions, as the area of medical graphics elements is not very tiny. Regions with an area less than the threshold are ignored in further processing.
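A compact Python sketch of this stage follows. The use of OpenCV's MSER detector is our choice of implementation, and the default expansion amount (5 px) and minimum area (7,000 px) are the values reported later in the Results:

```python
import cv2

def mser_boxes(gray):
    """Detect MSER regions in a grayscale image; return boxes as (x, y, w, h)."""
    mser = cv2.MSER_create()
    _, boxes = mser.detectRegions(gray)
    return [tuple(b) for b in boxes]

def merge_boxes(boxes, expand=5, min_area=7000):
    """Expand boxes, merge any that overlap, and drop regions below min_area."""
    rects = [(x - expand, y - expand, x + w + expand, y + h + expand)
             for x, y, w, h in boxes]
    merged = True
    while merged:                       # repeat until no pair overlaps
        merged, out = False, []
        while rects:
            a = rects.pop()
            for i, b in enumerate(rects):
                if a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]:
                    rects[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    merged = True       # a is absorbed into the union
                    break
            else:
                out.append(a)
        rects = out
    return [r for r in rects if (r[2] - r[0]) * (r[3] - r[1]) >= min_area]
```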
Graphic objects detection
The contents of each bounding box are extracted, and the percentage of text with respect to the bounding box size is measured. For this purpose, some morphological operations like dilation and filling are used to enhance the characters in each bounding box so that they can be recognized by the text classifier. After that, some important features are computed from each enhanced character, and a multi-layer perceptron (MLP) is utilized to recognize the text. The extracted features are numerical values and are stored in arrays, with which the MLP can be trained. The most significant features for character recognition used in this proposed approach are described below. Before the features are extracted, the sub-images must be cropped close to the border of the character in order to standardize their size. This standardization is achieved by locating the extreme rows and columns that contain foreground pixels (1s) and moving the crop boundary inward until a blank row or column (all 0s) is reached. This method is demonstrated in Fig. 3, where a character "P" is being cropped and resized.
The aforementioned 16 features (input of MLP) are utilized to identify the character from the bounding box.
Then the percentage of the bounding box area occupied by text is computed by dividing the total area of the character set or text inside the bounding box by the total area of the bounding box. A threshold is set to distinguish between text and graphics blocks. If the percentage of text is less than the threshold, the bounding box object is considered a graphics element of the document page.
Detection of caption
Captions that can assist in locating the related figures are first identified utilizing the caption header prefixes such as "Fig", "FIG" and "Figure".
Figure-caption pair extraction
After identifying the caption and figure separately, the next step is to combine the bounding boxes of the figure and its associated caption so that they can be extracted as a single element. For that, four parameters are first extracted from the bounding boxes of the figure and caption: the x-position, y-position, width, and height of the bounding box. Three individual situations are observed, as mentioned in Table 1. After merging the bounding boxes of the figure and associated caption, the full figure and its related caption are then preserved as part of the output files together with their bounding boxes on the associated document page, which indicate their respective locations.
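Whatever the relative placement (the three cases of Table 1, which is not reproduced here), merging the pair reduces to taking the union of the two bounding boxes; a small Python sketch under that assumption:

```python
def merge_figure_caption(fig, cap):
    """Union of figure and caption boxes, each given as (x, y, w, h)."""
    x1, y1 = min(fig[0], cap[0]), min(fig[1], cap[1])
    x2 = max(fig[0] + fig[2], cap[0] + cap[2])
    y2 = max(fig[1] + fig[3], cap[1] + cap[3])
    return (x1, y1, x2 - x1, y2 - y1)

# Example: caption directly below the figure
print(merge_figure_caption((100, 50, 300, 200), (100, 260, 300, 40)))
# -> (100, 50, 300, 250)
```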
RESULTS
This section presents the output attained by the proposed method in recognizing the graphic elements along with the captions in the medical documents. The proposed technique was implemented using MatLab R2017a on an Intel Core i7 CPU. The dataset pages were collected from the five open access books listed in the abstract: "Brain and Human Body Modelling 2021" by Sergey Makarov, Gregory Noetscher and Aapo Nummenmaa, "Healthcare and Disease Burden in Africa" by Ilha Niohuru, "All-Optical Methods to Study Neuronal Function" by Eirini Papagiakoumou, "RNA, the Epicenter of Genetic Information" by John Mattick and Paulo Amaral, and "Illustrated Manual of Pediatric Dermatology" by Susan Bayliss Mallory, Alanna Bree and Peggy Chern. These books were selected as they are a rich source of figure-caption pairs. Table 2 provides the statistics of the dataset. There is a total of 912 figure-caption pairs observed in the dataset.
Pre-processing
In this step, the freely accessible medical books are first downloaded in PDF format and each page of those books is converted to an image. The images are then converted to grayscale (Fig. 4). The output of smoothing the grayscale image with the three filters mentioned in 'Pre-processing' is shown in Fig. 5. After several experiments using the first dataset, the size of the filter mask was set to 7 × 7 pixels. Figure 6 shows the enhanced image outputs after applying the unsharp mask as described in 5B and 5C. For each test image, the PSNR is calculated between the grayscale image and the three types of enhanced, sharpened images. The rounded average PSNR values using the three filters are as follows: Wiener = 65, Median = 32, and Low pass = 41. Hence, the Wiener filter is used in this study to enhance the grayscale image.
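The filter comparison can be reproduced in spirit with SciPy, computing the PSNR of each 7 × 7-smoothed image against the grayscale original; this sketch omits the unsharp-masking step and the paper's exact PSNR protocol, so the numbers it produces are only indicative.

```python
import numpy as np
from scipy.signal import wiener, medfilt2d
from scipy.ndimage import uniform_filter

def psnr(ref, img):
    """Peak signal-to-noise ratio, assuming an 8-bit intensity range."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def compare_filters(gray, k=7):
    """Smooth the grayscale page with the three candidate filters
    (7 x 7 masks, as tuned in the paper) and report PSNR values."""
    g = gray.astype(float)
    candidates = {
        "wiener": wiener(g, (k, k)),
        "median": medfilt2d(g, k),
        "lowpass": uniform_filter(g, k),
    }
    return {name: psnr(gray, out) for name, out in candidates.items()}
```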
Edge detection
After enhancing the image, the next step is to detect the edges using the à trous undecimated wavelet transform, as described in 'Edge Detection with the Undecimated Wavelet Transform'. The output of the detected edges of sample Fig. 6A is shown in Fig. 7.
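PyWavelets provides a stationary (undecimated) 2-D wavelet transform that can stand in for the à trous scheme; the sketch below thresholds the first-level detail magnitudes to obtain a binary edge map, with the wavelet choice and threshold as illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def atrous_edges(gray, wavelet="haar", level=1, thresh=0.1):
    """Edge map from the detail coefficients of an undecimated wavelet
    transform. swt2 requires dimensions divisible by 2**level, so the
    image is cropped accordingly."""
    h, w = (np.array(gray.shape[:2]) // 2 ** level) * 2 ** level
    coeffs = pywt.swt2(gray[:h, :w].astype(float), wavelet, level=level)
    _, (cH, cV, cD) = coeffs[0]           # first decomposition level
    mag = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)
    return mag > thresh * mag.max()       # binary edge map
```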
Text and graphics bounding box detection using MSER connected component
Bounding boxes are detected for both text and graphics elements from the edge image using the MSER connected-component feature, as described in 'Connected Component (CC) Extraction using MSER'. After several experiments using the first dataset, the expansion amount was set to 5 pixels to merge the small bounding boxes into one and, after merging, the threshold to ignore small bounding-box content was set to 7,000 pixels. The detected bounding boxes are shown in Fig. 8.
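OpenCV ships an MSER detector whose bounding boxes can feed the expand-and-merge step sketched earlier; the snippet below assumes an 8-bit single-channel input, so a binary edge map is rescaled first.

```python
import numpy as np
import cv2

def mser_boxes(edge_map):
    """Detect MSER connected components on a binary edge map and
    return their bounding boxes as (x, y, w, h) tuples."""
    img = edge_map.astype(np.uint8) * 255   # rescale {0,1} map to 8-bit
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(img)
    return [tuple(int(v) for v in b) for b in bboxes]
```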
Detection of graphic element
From the detected bounding boxes, only the graphics elements are extracted using the MLP approach mentioned in 'Detection of Graphic Element'. After several experiments using the first dataset, the threshold used to distinguish between text and graphics bounding-box content was set to 20%. Figure 9 depicts the output of the detected graphics blocks.
Figure-caption pair extraction
The caption of the associated figure is detected (Fig. 10A) using the locations of both the figure and the caption on the scanned page, as described in 'Detection of Caption' and 'Figure-caption Pair Extraction'. After detecting the associated caption of the graphic element, both bounding boxes are finally merged so as to be detected as a single element, as shown in Fig. 10B. The results obtained with the created dataset and the proposed method are presented in Table 3 (precision, recall, and F-score) and are compared with the methods used by previous researchers for (A) figure extraction, (B) caption extraction, and (C) figure-caption pair extraction.
Precision is a measure of how many correct positive predictions are produced (true positives). Precision is an appropriate emphasis when the goal is to reduce false positives. In this study, as few mistakes in the extracted figure-caption pairs as feasible are expected, while no crucial figure-caption pair should be overlooked; in such instances, it is reasonable to strive for maximum precision. With respect to the figure-caption pair, it can be represented by Eq. (7).
For only the figure extraction, only the caption extraction and the figure-caption pair extraction from the dataset, the precision values are 96.73%, 92.87% and 91.76% respectively.
Recall is a measure of how many positive cases the classifier predicts correctly out of all the positive cases in the data. Recall is critical in fields like healthcare, where the possibility of missing positive instances (predicting false negatives) must be limited. In many circumstances, missing a positive case has a considerably higher cost than incorrectly classifying something as positive. With respect to the figure-caption pair, it can be represented by Eq. (8). The F1-score is a metric that combines precision and recall, and is commonly referred to as their harmonic mean. The harmonic mean is just another approach to calculating an ''average'' of numbers, and it is typically considered more suitable for ratios (such as precision and recall) than the standard arithmetic mean. It can be represented by Eq. (9).
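In code, the three metrics of Eqs. (7)-(9) reduce to a few lines over the true-positive, false-positive, and false-negative counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and their harmonic mean (the F1-score)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```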
For only the figure extraction, only the caption extraction, and the figure-caption pair extraction from the dataset, the recall values are 94.21%, 87.14% and 88.12%, respectively.
For only the figure extraction, only the caption extraction, and the figure-caption pair extraction from the dataset, the F-score values are 95.86%, 89.94% and 90.17%, respectively.
These results demonstrate that the proposed method offers an accurate and reliable means of extracting figures from image documents; it is especially suited to the digitization of medical documents, where related articles can span a wide range of publication years. For caption extraction, the efficiency of the proposed technique decreases compared with figure extraction. The reason is the conversion of the pages from PDF to image, which often introduces inconsistencies that make it harder to identify captions. Thus, for caption extraction, the approach used in Li, Jiang & Shatkay (2018) shows a recall of 84.74%, the approach used in Clark & Divvala (2015) produces a recall of 40.34%, and the approach used in Choudhury et al. (2013) shows a recall of 76.95%. In contrast, the proposed method displays a meaningfully higher recall in the face of these hurdles, namely 88.55% on the same dataset. The proposed system is therefore more robust and effective in recognizing captions across a wide range of scanned files than the existing schemes.

Finally, the results are analyzed for the combined task of extracting the figure-caption pair. In this task, all three approaches show lower efficiency, as it involves the proper extraction of both the figure and its associated caption. Once again, these results confirm and support the proposed system as an efficient and reliable method for extracting figure-caption pairs. Figures 11A-11C demonstrate some examples of figures and captions extracted by the proposed method. The proposed system properly extracts the figure and its related caption both in simpler situations, where only one figure with a caption is present on a page (Fig. 11C), and in more difficult situations, where several figures with many boundaries and texts are placed vertically as a single figure with one caption (Fig. 11A), or where different figures are placed horizontally as a single figure with a single caption (Fig. 11B). On the other hand, Figs. 11A1-11C1, 11A2-11C2, and 11A3-11C3 show the (inappropriate) extraction achieved on the same scanned pages by the other three approaches (Li, Jiang & Shatkay, 2018; Clark & Divvala, 2015; Choudhury et al., 2013). Those approaches directly handle the figure objects in the scanned file to extract figures, and as such misinterpret some of the figures encoded in a complex document structure, even when the document structure is simple (e.g., Figs. 11C2 and 11C3). It should be noted that, while the proposed approach extracts most of the figures and captions properly, there are still some cases where the extraction is not correct. The figure-extraction task is particularly challenging when small graphical objects are present in the figure, making it tough to determine whether the boundaries of the figure include or exclude the contents of the next figure. Figure 12 shows two such situations, where the proposed technique misidentifies the figure region. A better and more satisfactory metric should take into consideration the actual retrieved contents rather than the coordinates of the bounding box; we plan to create such a metric as part of future work.
DISCUSSION
This study was mainly conducted to extract figure-caption pairs from documents. Only biomedical documents are used here, but the approach can be applied to other documents as well. The methodology is six-fold. The first step is pre-processing, where the PDF pages are converted to images and each image is converted to grayscale. After that, the image is enhanced by reducing the noise; for this, a Wiener filter is used. The next step is to detect the edges using the à trous undecimated wavelet transform. In the third step, a novel connected-component-based region detection algorithm is used that relies on MSER. Several co-variant regions, named MSERs, are extracted from an image by the MSER algorithm; an MSER is a stable connected image component. The next step is graphic-object detection: the contents of each bounding box (created from the connected components) are extracted and the percentage of text relative to the bounding-box size is measured. For this purpose, a multi-layer perceptron is used, and a threshold is set to classify between text and graphics blocks; if the percentage of text is below the threshold, the bounding-box object is considered a graphics element of the document page. The fifth step is caption detection. The figure area naturally lies near the caption area; therefore, the figure position is used to recognize possible caption positions. The next and final step is to extract the figure-caption pair: the detected bounding boxes of the figure and caption are merged and then preserved as the output file. This method is compared with other methods in the literature to prove its efficacy. A self-created dataset is used for the experiments, with pages collected from five freely accessible (open access) medical documents. For only the figure extraction from the dataset, the precision, recall and F-score values are 96.73%, 94.21% and 95.86%, respectively. For only the caption extraction, the precision, recall and F-score values are 92.87%, 87.14% and 89.94%, respectively. For the figure-caption pair extraction, the precision, recall and F-score values are 91.76%, 88.12% and 90.17%, respectively.
CONCLUSIONS
In this study, a new and efficient six-fold method is presented for extracting figures, captions, and figure-caption pairs from freely accessible medical documents. Previous techniques generally perceive figures by directly locating the contents present in the scanned file and handling the graphical objects encoded in it. When the figure and the document structure are complex, this approach sometimes leads to inappropriate extraction, which is a common phenomenon within medical publications. In contrast, the proposed approach first fully separates the text contents from the graphical contents of the image file using a multi-layer perceptron and aims to extract figure-caption pairs using the bounding-box concept. It applies maximally-stable-extremal-regions connected-component analysis to the image to detect figures, and separately finds the text portions for captions that lie in the neighborhood of the identified figures. To test the system and compare it with state-of-the-art approaches, a self-created dataset is used: pages converted to images from five open-access medical books serve as the testing dataset. Extensive experiments and results validate that the proposed system is highly efficient with respect to precision, recall, and F-score. Furthermore, the proposed system maintains its good performance over documents that vary broadly in style, topic, overall organization, and publication year. Thus, it is ready to be useful in practice.
As part of future work, a new assessment metric is planned that will account for the actual retrieved contents of both figures and captions, instead of merely correctly identifying bounding-box locations. Deep learning-based methods may also be integrated in the future to segment the figure and caption from the document.
ADDITIONAL INFORMATION AND DECLARATIONS Funding
The authors received no funding for this work. | 2023-07-28T15:18:01.461Z | 2023-07-26T00:00:00.000 | {
"year": 2023,
"sha1": "a49f66ab8fc4e8c44ce946157750722774257110",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj-cs.1452",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "481abc1ffddaff65a19c110c2a50f42635a0db08",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
15066593 | pes2o/s2orc | v3-fos-license | Renal Failure Prevalence in Poisoned Patients
Background: Renal failure is an important adverse effect of drug poisoning. Determining the prevalence and etiology of this serious side-effect could help us find appropriate strategies for the prevention of renal failure in most affected patients. Objectives: The present study aimed to identify drugs that induce renal failure and also to determine the prevalence of renal failure in patients referred to emergency departments with the chief complaint of drug poisoning, in order to plan better therapeutic strategies to minimize the mortality associated with drug-poisoning-induced renal failure. Patients and Methods: This cross-sectional study surveyed 1,500 poisoned patients referred to the Emergency Department of Baharloo Hospital in Tehran during 2010. Demographic data, including age and gender, as well as clinical data, including type of medication, duration of hospital stay, and presence of renal failure, were recorded. The Mann-Whitney U test and chi-squared statistics (Mantel–Haenszel) were used to analyze the results. Results: A total of 435 patients were poisoned with several drugs, 118 patients were intoxicated with sedative-hypnotic drugs, 279 patients were exposed to opium, and 478 patients were poisoned with other drugs. The routes of intoxication were oral (84.3%), injection (9%), inhalation (4.3%) and, finally, a combination of routes (2.3%). Laboratory results revealed that 134 cases had renal failure and 242 had rhabdomyolysis. The incidence of rhabdomyolysis and renal failure increased significantly with age, and also with the time to admission to the hospital. Renal failure was reported in 25.1% of patients exposed to opium, vs. 18.2% of patients poisoned with aluminum phosphide, 16.7% of those with organophosphates, 8% with multiple drugs, 6.7% with alcohol, heavy metals and acids, and 1.7% with sedative-hypnotics. Conclusions: Based on the findings of this study, there is a high probability of renal failure for patients poisoned with drugs such as opium, aluminum phosphide, and multiple drugs, as well as for patients with delayed admission to the hospital, and it is necessary to seek appropriate treatment to prevent this significant side-effect.
Background
Since a significant percentage of poisoned patients (15-20%) are referred to hospital emergency departments, immediate attention to drug toxicity and its clinical signs and symptoms is required as the first step. Previous studies have shown that drug and chemical poisoning is a common and serious clinical problem worldwide. In the United States of America, accidental poisoning with chemicals leads to approximately 5,000 deaths annually (1). Acute renal failure is characterized by a sharp decline in glomerular filtration rate (GFR) within a few hours to a full day. Depending on the precise definition of renal failure, nearly 5% to 7% of hospitalized cases and 30% of patients admitted to hospital intensive care units are affected by it. Acute renal failure can be a complication of exposure to many pharmacological agents and substances, such as X-ray contrast agents, cyclosporine, tacrolimus, anticancer drugs, aminoglycoside and other antibiotics, opium, and organophosphates (2).
If reliable criteria to determine the risk factors are available, an early prognosis for the patient can be made. By knowing which drugs are associated with a high risk of causing renal failure, a reduction in renal failure and its associated mortality rates will be possible. Derakhshan and Moadab, in a prospective study, reported that gentamicin, amikacin, and nephrotoxic drugs like vancomycin are the most common causes of acute tubular necrosis in pediatric poisoning, and children on these drugs must receive extra attention (3). In a recent study conducted by Toth and Dayspring, 120 patients diagnosed with any type of renal disease were treated with rosuvastatin (4). In another study by Hall et al., it was concluded that if a patient ingested less than 6 grams of ibuprofen, renal testing is not required (5). Also, Roy et al. reported that among 2,500 patients who were chronic users of lithium, 20% had diabetes insipidus (6). Similarly, Shahbazian reported that vitamin D can lead to acute renal failure (7).
Objectives
The present study aimed to identify drugs that induce renal failure and also to determine the prevalence of renal failure in patients referred to emergency departments with the chief complaint of drug poisoning, in order to plan better therapeutic strategies to minimize the mortality associated with drug-poisoning-induced renal failure.
Patients and Methods
A total of 1,500 patients referred to the Emergency Department of Baharloo Hospital, Tehran University of Medical Sciences, during 2010 were enrolled in this descriptive cross-sectional study. Renal function test results were evaluated for all poisoned patients. A detailed medical history was obtained from the patients or their accompanying relatives, and a physical examination was performed by a medical doctor. Laboratory tests, including urine or serum drug screens, were studied as well. Laboratory tests for rhabdomyolysis, including creatine phosphokinase (CPK) and lactate dehydrogenase (LDH), were also measured. Necessary information, such as the type of drug, drug dosage, time elapsed from drug ingestion to admission of the patient, the treatment process, and the presence or absence of renal failure and rhabdomyolysis, was registered. Patients with previous definite renal failure, and those lacking all the laboratory results required to detect renal failure, were excluded from further analyses.
In this study, patients exposed to drug toxicity were evaluated. Renal failure was defined as a creatinine (Cr) level greater than 1.4 mg/dL, and rhabdomyolysis was defined as CPK and LDH results at least five times higher than the normal levels for the same gender and age (8, 9). Patients who showed strong evidence of pre-existing renal failure were excluded from the study. This study was approved by the research deputyship of Tehran University of Medical Sciences regarding ethical and methodological issues. Written consent was obtained from each individual or their parents after a full description of the aims of the study. The data were registered and analyzed using SPSS version 18.0. The Mann-Whitney U test and chi-squared statistics were employed to analyze the results, and related tables were produced to discuss them. P values ≤ 0.05 were considered statistically significant.
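For illustration, the two tests named above can be run with SciPy; the age arrays below are synthetic stand-ins, while the 2 × 2 table roughly reproduces the rhabdomyolysis/renal-failure proportions reported in the Discussion (28.5% vs. 5.2%).

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

age_rf = np.array([45, 52, 61, 38, 70])      # ages, renal failure group
age_no_rf = np.array([24, 31, 28, 35, 29])   # ages, no renal failure
u_stat, p_age = mannwhitneyu(age_rf, age_no_rf, alternative="two-sided")

# Rows: rhabdomyolysis yes/no; columns: renal failure yes/no
table = np.array([[69, 173],
                  [65, 1193]])
chi2, p_chi, dof, expected = chi2_contingency(table)
print(f"Mann-Whitney p = {p_age:.3f}, chi-square p = {p_chi:.3g}")
```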
Results
The Mann-Whitney U test showed a significant correlation between age and renal failure, implying that the prevalence of renal failure increases with age (Table 1). Moreover, there was a statistically significant relationship between gender and renal failure: a greater proportion of males were diagnosed with renal failure (Table 2). Among the 17 groups of medications, multiple drugs (29.0%, n = 435 patients) had the highest frequency, followed by sedative-hypnotic drugs (20.5%, n = 308 patients) and opium (18.6%, n = 279 patients). There was also a definite relationship between rhabdomyolysis and renal failure: patients with rhabdomyolysis generally had a higher risk of developing renal failure (Table 2). In addition, there was a significant correlation between the time interval from drug poisoning to arrival at the hospital and the presence of renal failure and rhabdomyolysis: the later the patient was admitted to the hospital, the higher the chances of renal failure and rhabdomyolysis (P value < 0.001) (Tables 2 and 3). Most patients (860 patients, 57%) were hospitalized between 1-5 hours after drug consumption, while 9.7% (n = 145) were hospitalized in less than one hour, 31% (n = 475) between 5-24 hours, and only 1.3% (n = 20) were admitted after 24 hours. Besides, 7.64% of the patients were treated with gastrointestinal decontamination along with the use of activated charcoal and appropriate antidotes, and 3.35% did not receive these treatments on time.
Discussion
This study showed that the number of poisoned male patients was slightly greater than that of females (780 males vs. 720 females). The higher prevalence of addiction and suicide in men may be directly correlated with drug abuse, which may indicate that men need more preventive education. Shadnia et al., in Loqman Hospital, reported that among 10,206 hospitalized poisoned patients, 51% were male and 49% were female (10). In another study, in 2009 in Shanghai, among 374 patients with drug-induced renal failure, 65% were male and 35% were female (11). The mean age of patients in this study was 30.55 years, with a standard deviation of 11.95. In a study conducted in London, 38% of patients were 21 to 30 years old (10), while in Shanghai 51% were over 60 years old (11). Poisoning with multiple drugs, sedative-hypnotic drugs, and opium formed the majority of cases; therefore, improving the knowledge of medical personnel about the different aspects of these drugs through proper education seems imperative.
In the present study, 860 patients (57%) were hospitalized between 1-5 hours after drug consumption, while only 9.7% (n = 145) were hospitalized in less than one hour. Similar studies reported the time interval between accidental drug consumption and admission to hospital as less than one hour (12). The prevalence of drug-induced renal failure in the patients of this study was 8.9% (n = 134), which is relatively high compared with similar studies. In a study conducted in the United States of America, 15.6% of the patients showed drug-induced renal failure (13). Also, in another study, in India, anti-TB drug poisoning led to renal failure in 10% of patients (14). It is therefore important that, during treatment of drug toxicity, medical personnel pay particular attention to the probability of developing renal failure and follow up with proper treatment.

In this study, the incidence of rhabdomyolysis was 16.1% (242 people), which is relatively high, and an urgent need for an effective prevention strategy is felt. In one study in 2006, the incidence of rhabdomyolysis in olanzapine poisoning was reported as 17% (15). The present study showed that the incidence of renal failure increases with age (P < 0.001), indicating that age is an important risk factor for renal failure. Older individuals need more attention, especially during fluid therapy in conditions with volume depletion. In a study conducted in 2011 in Brussels, age over 60 years was determined to be a risk factor for renal failure (16). The incidence of renal failure in patients with rhabdomyolysis was 28.5%, versus 5.2% in patients without it, which shows a high risk of renal failure in patients with rhabdomyolysis (P < 0.001). Therefore, rhabdomyolysis is a strong risk factor for renal failure, and prompt and proper treatment to prevent rhabdomyolysis is important in the further prevention of renal failure.

The types of medications causing drug-induced renal failure were also determined: opium was the most common cause of renal failure, followed by aluminum phosphide, organophosphates, and multiple-drug poisoning. In sum, it is important to treat these patients with special attention due to the higher risk of renal failure. The incidence of renal failure in aluminum phosphide poisoning ranked second, which may be due to its high mortality before any laboratory tests can be taken and to its rapid and fatal course. It should be considered that other contributing factors, such as diabetes, hypertension, and chronic renal failure, can also affect renal function (17-19). If the patient is admitted as soon as possible, the prevalence of renal failure and rhabdomyolysis will decrease.

Briefly, in this study the prevalence of renal failure in 1,500 poisoned patients was 8.9%, with a higher incidence for opium, aluminum phosphide, multiple medications, and organophosphates. The incidence of renal failure increased with age, in patients with rhabdomyolysis, in those admitted to the hospital later, and when appropriate medical treatment was not administered. It is essential to pay attention to a poisoned patient with these risk factors and find a proper solution to reduce the incidence of renal failure. | 2016-05-04T20:20:58.661Z | 2014-03-01T00:00:00.000 | {
"year": 2014,
"sha1": "4fdbf28054a0da0d7dc789476ae6b46fe25a7b5b",
"oa_license": "CCBY",
"oa_url": "https://brief.land/num/cdn/dl/bda2770a-503a-11e7-ae47-4b02a6cbee66",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4fdbf28054a0da0d7dc789476ae6b46fe25a7b5b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266475382 | pes2o/s2orc | v3-fos-license | Brain morphological variability between whites and African Americans: the importance of racial identity in brain imaging research
In a segregated society, marked by a historical background of inequalities, there is a consistent under-representation of ethnic and racial minorities in biomedical research, causing disparities in understanding genetic and acquired diseases as well as in the effectiveness of clinical treatments affecting different groups. The repeated inclusion of small and non-representative samples of the population in neuroimaging research has led to generalization bias in the morphological characterization of the human brain. A few brain morphometric studies between Whites and African Americans have reported differences in orbitofrontal volumetry and insula cortical thickness. Nevertheless, these studies are mostly conducted in small samples and populations with cognitive impairment. For this reason, this study aimed to identify brain morphological variability due to racial identity in representative samples. We hypothesized that, in neurotypical young adults, there are differences in brain morphometry between participants with distinct racial identities. We analyzed the Human Connectome Project (HCP) database to test this hypothesis. Brain volumetry, cortical thickness, and cortical surface area measures of participants identified as Whites (n = 338) or African Americans (n = 56) were analyzed. Non-parametric permutation analysis of covariance between these racial identity groups, adjusting for age, sex, education, and economic income, was implemented. Results indicated volumetric differences in choroid plexus, supratentorial, white matter, and subcortical brain structures. Moreover, differences in cortical thickness and surface area in frontal, parietal, temporal, and occipital brain regions were identified between groups. In this regard, the inclusion of under-represented minorities in neuroimaging research, such as African American persons, is fundamental for the comprehension of human brain morphometric diversity and for designing personalized clinical brain treatments for this population.
Introduction
Human population studies are contributing to the understanding of variability in the prevalence of diseases, treatment response, risk factors, and relationships between genetic and environmental outcomes across diverse societal groups (Falk et al., 2013; Batai et al., 2021). Accordingly, human brain morphological variability has been robustly associated with individual genetic ancestry (Fan et al., 2015) and sociocultural influences (Holz et al., 2014; Noble et al., 2015). One of the methodologies used for the characterization of the human brain has been morphological neuroimaging analysis, which consists of the implementation of computational analysis methods for brain magnetic resonance imaging (MRI), aimed at identifying the structural characteristics of the brain, highlighting analyses of volume and area such as cortical surface area and cortical thickness (Mietchen and Gaser, 2009). Brain volumetry is a measure that includes surface area and cortical thickness (Panizzon et al., 2009), the former being a parameter of cortical folding and gyrification (Rakic, 2009) and the latter a parameter of density and dendritic arborization (Huttenlocher, 1990).
Neuroimaging studies have been implemented to identify brain morphometric differences due to educational level (Ho et al., 2011), socioeconomic status (Farah, 2017), gender, and age (Smith et al., 2007; Takahashi et al., 2011). Nevertheless, few neuroimaging studies are designed to explore brain morphometric differences related to racial identity. In this sense, it has been reported that African American persons diagnosed with hypertension and cognitive impairment, commonly referring to a decline in memory and cognitive performance, have lower insular thickness compared with White persons with the same diagnosis (Chand et al., 2017). Moreover, Isamah et al. (2010) implemented a volumetric analysis using magnetic resonance imaging (MRI) in neurotypical White and African American persons. After controlling for variables such as age, sex, years of education, and total brain volume, they reported that African American participants had a greater brain volume of the left orbitofrontal cortex than White participants. These authors agree that morphometric studies in populations with diverse racial identifications will reduce the under-representation of ethnic minorities and improve the comprehension of the influence of these variables on the differentiation of specific brain structures and on the prevalence of neuropsychiatric diseases among different populations.
Racial identity has generally been used as a demographic variable and not as a variable of interest in neuroimaging research, which contributes to a generalization bias of brain findings based on persons with high educational and socioeconomic status belonging to majority racial groups (Falk et al., 2013; Rouan et al., 2021). Furthermore, studies including minority racial groups have mainly been implemented in small samples and in populations with cognitive impairment (Isamah et al., 2010; Chand et al., 2017). Thus, our study aimed to identify morphological brain variability among distinct racial identities in a representative sample of neurotypical young adults. We analyzed brain morphometric data from the Human Connectome Project (HCP) (van Essen et al., 2012). According to our selection criteria, White and African American racial identities were the most representative samples in the HCP database. In this regard, we expected to identify differences in brain morphometry between people identified as White or African American.
Methods
In order to access participants' racial identity information, all authors accepted the terms of data use to access restricted data of the HCP database. After the request was accepted by the WU-Minn HCP Consortium, the database of 1,206 participants was downloaded from the ConnectomeDB, a web-based user interface of the HCP (Hodge et al., 2015). Apart from racial identity information, the restricted data included demographic, clinical, psychiatric, and morphometric brain information for each participant. The data were filtered to exclude participants with psychiatric symptoms, substance use and abuse disorders, endocrine disorders, irregular menstrual cycles, neurological abnormalities, and technical issues in the acquisition or preprocessing of their structural brain images. In the filtered database, participants identified as Hispanic were discarded due to unbalanced sample representation between the selected racial identity groups (Hispanic-Whites n = 22, Hispanic-African Americans n = 1). Beyond this classification, ethnic identity was not considered for further analysis. The racial identity categories referring to Whites and African Americans were taken from the HCP demographic data based on the NIH Toolbox and U.S. Census classification.¹ Three hundred thirty-eight participants identified as White [M age(y) = 29.12, SD = ±3.60; M education(y) = 15.15, SD = ±1.69] and 56 participants identified as African American [M age(y) = 29.25, SD = ±3.62; M education(y) = 14.41, SD = ±1.90] satisfied the inclusion criteria resulting from the filtering of the original HCP database. Although age [t(392) = −0.2533, p = 0.800] was not significantly different, years of education differed significantly between groups [t(70.96) = 2.760, p = 0.007]. Moreover, three participants identified as White were excluded from the permutation analysis because of missing education and economic income information (see Table 1).
Summary statistics of FreeSurfer morphometric measures (volume, cortical surface area, and cortical thickness) from the HCP database, previously processed by the HCP investigators, were analyzed (Glasser et al., 2013). These preprocessing methods consist of a PreFreeSurfer pipeline, implemented to preprocess high-resolution T1w and T2w (weighted) brain images (0.7 mm thickness) for each participant to produce an undistorted "native" structural volume space. The pipeline aligned the T1w and T2w brain images, executed a B1 (bias field) correction for each volume, and co-registered each participant's undistorted structural volume space to MNI space. Subsequently, a FreeSurfer pipeline was executed to divide the native volume into cortical and subcortical parcels, reconstruct the white and pial cortical surfaces, and perform the standardized FreeSurfer folding-based surface mapping to the surface atlas (fsaverage) (Glasser et al., 2013). Volumetric, cortical thickness, and surface area brain measures were grouped by participants' racial identity, Whites or African Americans. Before applying statistical analysis, the volumetric results for each participant were standardized by dividing the raw volumetric scores by the intracranial volume (ICV). Due to the unbalanced samples between groups, ANCOVA permutation analyses adjusting for age, sex, education, and economic income were implemented to identify differences between groups for each brain morphometric measure. The estimation of p values was based on the criterion by which iteration stops when the estimated standard error of the estimated proportion of the p value is less than one-thousandth of the estimated p value (Anscombe, 1953). A maximum of 5,000 iterations was selected for the analysis. Adjustment of p values for multiple comparisons was implemented by the family-wise error (FWE) rate method (Holm, 1979). In addition, a subsample of participants identified as White (n = 56), paired with the African American participants (n = 56) by age, sex, education, and economic income, was selected (see Supplementary Tables S1-S4). ANCOVA permutation analyses corrected for multiple comparisons (FWE) on the same morphological parameters described above were implemented for this subsample.
The ordering of the database, data filtering, and statistical analysis were carried out in the R programming language, version 3.6.3, within RStudio version 1.2.5033. The ANCOVA permutation analysis was implemented with the aovp function of the lmPerm package in R (Wheeler and Torchiano, 2016). The pipeline used for the statistical analysis can be consulted at https://github.com/Danielatilano/HCP_structural_analysis.git.
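The published pipeline is in R (lmPerm's aovp); purely for illustration, the core idea of permuting group labels on covariate-adjusted outcomes, followed by Holm family-wise correction across regions, can be approximated in Python as follows. This is a simplified sketch, not the authors' procedure.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

def perm_group_pvalue(y, group, covars, n_perm=5000):
    """Permutation p-value for a two-group effect on y (e.g., an
    ICV-normalized regional volume) after residualizing y on the
    covariates (age, sex, education, income); group is a 0/1 array.
    A simplification of lmPerm's strategy."""
    X = np.column_stack([np.ones(len(y)), covars])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ry = y - X @ beta                       # covariate-adjusted outcome
    obs = abs(ry[group == 1].mean() - ry[group == 0].mean())
    hits = 0
    for _ in range(n_perm):
        g = rng.permutation(group)
        if abs(ry[g == 1].mean() - ry[g == 0].mean()) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# One p-value per brain region, then Holm (FWE) correction:
# pvals = [perm_group_pvalue(vol[:, j] / icv, group, covars)
#          for j in range(n_regions)]
# reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
```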
Brain volumetry differences between groups
Volume comparisons resulted in significant differences in cortical and subcortical brain structures (see Table 2 and Figure 1).
Volumetric measures were obtained from a volume-based stream in which MRI volumes are labeled to classify subcortical and cortical tissues based on a subject-independent probabilistic atlas and subject-specific measured voxel values. Anatomical visualization of the brain regions with statistically significant volumetric differences is presented in Figure 2.
Similar results were found for the volumetric measures of the paired subsample; nevertheless, after correction for multiple comparisons (FWER), the bilateral and total cortical white matter, the left cerebellar white matter, the bilateral thalamus, and the anterior section of the corpus callosum maintained significant differences (see Supplementary Table S5).
Differences in cortical thickness between groups
Cortical thickness results indicated significant differences in frontal, temporal, parietal, and occipital brain regions (see Table 3 and Figure 3).
Cortical thickness measures were obtained from the mean distance between the white and pial surfaces of the cortex. Anatomical visualization of the brain regions with statistically significant cortical thickness differences is presented in Figure 4.
Similar results were found for the cortical thickness measures of the paired subsample; nevertheless, after correction for multiple comparisons (FWER), the right banks of the superior temporal sulcus, the left cuneus cortex, the right middle temporal gyrus, the right supramarginal gyrus, and the right lateral occipital cortex maintained significant differences (see Supplementary Table S6).
Differences in cortical surface areas between groups
Cortical surface results indicated significant differences in frontal, temporal, parietal, and occipital brain regions (see Table 4 and Figure 5).
Cortical surface measures were obtained from the sum of the areas of the triangles in the tessellation of the brain surface. Anatomical visualization of the brain regions with statistically significant cortical surface area differences is presented in Figure 6.
Similar results were found for the cortical surface area measures of the paired subsample; nevertheless, none of the brain regions presented significant differences after correction for multiple comparisons (FWER) (see Supplementary Table S7).
Discussion
Social, educational, and economic inequalities have impacted the health and human rights of ethnic and racial minorities, causing their under-representation in biomedical studies and leading to bias in the effectiveness of clinical treatments and to misconceptions about genetic and environmental diseases affecting these groups (Konkel, 2015). According to some estimates, reducing such disparities would have saved the United States more than $1.2 billion in direct and indirect medical costs (Laveist et al., 2011). Even though the White non-Hispanic population has been steadily declining in recent years, African Americans and Hispanic/Latinos represent only 5 and 1% of participation in human research, respectively, while Whites represent over 70% (Ricard et al., 2022). In this regard, racial/ethnic identity is essential to contextualize neurophysiological and neuroimaging results on structural inequities in society (Harnett et al., 2023). In neuroimaging research, this under-representation bias may be responsible for the reproducibility, generalizability, external validity, and inference crisis in brain research, which exacerbates the disparities and inequalities of minorities in neuroscience (Falk et al., 2013; Dotson et al., 2020). Data sharing and open access to multimodal brain imaging in consortium repositories have been proposed as research opportunities to diminish racial disparities and methodological bias (Falk et al., 2013; Weinberger et al., 2020). Consequently, some advantages of using the HCP database are its public accessibility, its large ethnically/racially diverse sample, its preprocessing methods, its high-resolution structural brain imaging, and the demographic and clinical information of participants (Glasser et al., 2016).
Based on the HCP database, our results indicate volumetric brain differences in white matter structures, subcortical regions, the choroid plexus, and total subcortical grey matter between participants identified as African American and White. Differences in subcortical brain volumetric regions were identified in the bilateral caudate, left thalamus, right globus pallidus, and right ventral diencephalon. Moreover, differences were identified in other brain structures, such as the optic chiasm, the white matter of the right cerebellum, and the anterior and posterior portions of the corpus callosum. In contrast with the study of Isamah et al. (2010), where differences in the bilateral amygdala and total cerebral volume between persons identified as African American and White were found, we identified volumetric differences in the bilateral caudate and total cortical white matter. Differences in regional brain volumes in cortical and subcortical structures, such as the bilateral caudate, have been identified between White and Chinese populations (Tang et al., 2010). Moreover, brain differences in total cortical gray matter volume, total cortical white matter volume, total gray matter volume, estimated intracranial volume, and cortical regional volumes have been reported between Indian and White persons (Rao et al., 2017). Furthermore, our results indicate surface area differences in frontal, parietal, temporal, and occipital brain regions between African American and White racial identities. Specifically, cortical thickness differences were identified in the bilateral cuneus cortex, left fusiform gyrus, bilateral occipital cortex, left pericalcarine cortex, bilateral lingual gyrus, bilateral postcentral gyrus, right superior temporal sulcus, right rostral anterior cingulate cortex, right supramarginal gyrus, right entorhinal cortex, right middle temporal gyrus, and right transverse temporal cortex. Moreover, cortical surface area differences were identified in the bilateral cuneus cortex, left entorhinal cortex, left inferior temporal gyrus, bilateral occipital cortex, left lateral orbitofrontal cortex, left lingual gyrus, bilateral pars opercularis, right pars orbitalis, right caudal middle frontal gyrus, right frontal pole, right fusiform gyrus, bilateral superior frontal gyrus, and bilateral superior parietal cortex.
FIGURE 1 Permutational ANCOVA brain volumetric results between Whites and African Americans with significant differences after applying the multiple comparison correction test (FWER). Brain volumetry is standardized as the ratio of cubic millimeters/intracranial volume (mm³/ICV). CC anterior: anterior subregion of the corpus callosum. CC posterior: posterior subregion of the corpus callosum. WM, white matter. GM, gray matter. Asterisks (***) indicate significant results at p < 0.001 and (*) at p < 0.05.
FIGURE 2 Brain regions representing volumetric differences between Whites and African Americans. CC, corpus callosum; WM, white matter; GM, gray matter. Brain images were created with BrainPainter software (Marinescu et al., 2019).
There are few studies that have reported differences in brain cortical thickness and surface area due to ethnic or racial identity. Accordingly, Jha et al. (2019) identified cortical thickness differences in the bilateral postcentral gyrus, superior parietal lobules, precuneus, supramarginal gyrus, right precentral gyrus, insula, inferior parietal lobule, supplementary motor area, and rolandic operculum in a large cohort of neonates of African American and White mothers. Furthermore, in middle-aged cognitively impaired hypertensive persons, differences in insular cortical thickness were identified between African Americans and White people (Chand et al., 2017). Similar to our results, Kang et al. (2020) identified differences in surface area and cortical thickness in frontal, parietal, temporal, and occipital subregions; however, these results were based on an analysis of brain surface morphometry between older Chinese and White adults.
The U.S. Census has created racial categories that include White and African American people, allowing the self-identification of individuals in groups that represent their community and cultural background (Anderson et al., 2004). In a segregated society, racial identity has emerged as a sense of collective identity based on a perceived common heritage with a racial group (Helms, 1995), promoting well-being and protection against racism in African Americans (Hughes et al., 2015). In this sense, Afro-American identity is constituted by an African consciousness that establishes behaviors, spirituality, and ancestral knowledge affecting self-concept, self-esteem, and self-image. Moreover, racism and oppression, rooted in a historical background of environmental and interpersonal adversity, have caused a mental and physical pathologization of this identity (Toldson and Toldson, 2001). In contrast, White American identity is rooted in social and economic privileges (McDermott and Samson, 2005) that establish racial attitudes, beliefs, behaviors, and experiences in a racially hierarchical society (Schooley et al., 2019). From this perspective, racial identity is defined and addressed as a social construct through which racial groups are socially created to attach differences between groups (Anderson et al., 2004). In this sense, the descriptive results in our sample related to years of education indicate that participants identified as African American reported fewer years of education than White participants; moreover, Whites tended to report higher economic income than African Americans. These results may reflect the inequalities in education (Johnson, 2014; Hill et al., 2017) and socioeconomic status (Hardaway and McLoyd, 2008) between White and African American people. Low socioeconomic status has been associated with reduced cortical gray matter thickness in middle-aged persons (Chan et al., 2018). In addition, diverse studies have indicated that socioeconomic status and parental education strongly influence cerebral cortical thickness, surface area, and volume during childhood (Noble et al., 2015; Farah, 2017), particularly the average cortical thickness in neonates of African American mothers (Jha et al., 2019). Although our analysis was adjusted for economic income and education, these are only dimensions of socioeconomic status, which also implies prenatal and postnatal factors such as biological risks (e.g., nutrition and toxin exposure), psychosocial stress, variability in cognitive and linguistic stimulation, and parenting practices during childhood (Farah, 2017). Our results, referring to differences in volume, cortical thickness, and surface area in diverse brain regions between distinct racial identities, may be due to these prenatal and postnatal factors anchored in racial inequalities.
FIGURE 4 Brain regions representing cortical thickness differences between Whites and African Americans. Brain images were created with BrainPainter software (Marinescu et al., 2019).
In this regard, it has been reported that African Americans, compared to the White population, have a higher risk of developing Alzheimer's disease due to exposure to air pollutants (Younan et al., 2021), access to healthcare (Cooper et al., 2010), and educational disparities (Peterson et al., 2020). Moreover, racism and discrimination have been related to higher levels of blood pressure (Lewis et al., 2009), preterm infant birth (Collins et al., 2011; Dominguez, 2011), and stressful life experiences (Williams, 2018). Furthermore, the recent study by Fani et al. (2021) identified that racial discrimination experiences of Afro-American women were associated with functional activation of the middle occipital gyrus, ventromedial frontal cortex, middle and superior temporal gyrus, and cerebellum. Assari and Mincy (2021) have reported that racism may impact the brain volume growth of African American children. In line with these studies, the morphological variability identified between White and African American identities in our study may also be related to racism and oppression, mostly affecting the African American community, due to historical racial segregation (Toldson and Toldson, 2001; Grigoryeva and Ruef, 2015). In this regard, acknowledging inequalities in education (Johnson, 2014; Hill et al., 2017), health (Monk, 2015; Yearby, 2018), justice (Hetey and Eberhardt, 2018), and socioeconomic status (Hardaway and McLoyd, 2008) between White and African American people is fundamental to recognizing that racial identity implies social and environmental factors that can impact human development (Huston and Bentley, 2009) and brain morphology (Ho et al., 2011; Holz et al., 2014; Noble et al., 2015; Gur et al., 2019).
Most studies in human cognitive neuroscience come from majority identities, such as the White population, in contrast to Hispanics, Asians, and African Americans, who have been markedly under-represented (Dotson et al., 2020). In this sense, our results suggest brain morphological variability between over-represented and under-represented samples, supporting the urgency of avoiding the extrapolation and generalization of brain findings based on WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations (Chiao and Cheon, 2010; Falk et al., 2013). Accordingly, it is important to consider the human brain as a multilevel ecological system shaped by social and biological factors, for which it is necessary to develop cross-cultural sampling methods and multidisciplinary collaboration to improve the generalizability of neuroscience studies and the comprehension of individual differences in the human brain (Falk et al., 2013). Neuroimaging research groups have developed structural MRI brain atlases and templates based on specific populations, owing to differences in brain morphology when contrasted with WEIRD samples (Tang et al., 2010; Gu and Kanai, 2014).
The African Ancestry Neuroscience Research program has emerged as an initiative to reduce health disparities in the African American community and to promote focused brain research in this population to treat brain disorders by developing personalized therapies and treatments (Weinberger et al., 2020). The evidence of morphological brain variability in our study could contribute to understanding brain disorders and psychological factors affecting African Americans and to the prospect of developing brain templates for this population.
Although our study was based on a large sample from the HCP database, some limitations must be considered. First, the sample is unbalanced due to the over-representation of persons identified as White (n = 877) compared to persons identified as African American (n = 193) in the original HCP database.² Even though the HCP project is focused on neurotypical young adults, this database includes participants with heavy consumption of tobacco, alcohol, and recreational drugs (van Essen et al., 2012). Moreover, we identified participants with psychiatric symptoms, endocrine disorders, irregular menstrual cycles, and neurological abnormalities, as well as technical issues in the acquisition and preprocessing of their structural brain images. In this sense, we implemented exclusion criteria to discard these confounding variables, which could affect morphological brain results in large neuroimaging data (Smith and Nichols, 2018). Nevertheless, these considerations maintain the imbalance of our sample between White (n = 338) and African American (n = 56) persons, which reduces the possibility of applying parametric statistical analysis (Kaur and Kumar, 2015). In this regard, we implemented a method of sub-selection of persons identified as White (n = 56) and African American (n = 56), paired by age, sex, economic income, and education, to overcome the confounding bias. Finally, racial identity was defined from the self-identification of participants. However, genetic ancestry information could have contributed to a more careful characterization of the sample, from which specific genetic sequences and gene/environment interactions could be analyzed to further interpret the brain morphological results (Fan et al., 2015).
Conclusion
The human brain is constituted in a unique genetic, social, and experiential domain that is embedded in global hardships such as poverty and discrimination (White and Gonsalves, 2021). In this regard, morphological brain differences between persons identified as African American and White may be embedded in the historical inequalities, oppression, and racism of American society, which may impact brain structure. In this study, white matter, forebrain, midbrain, and hindbrain structures displayed morphological variability between racial groups, which could be relevant for understanding neurological or psychiatric disorders differentially affecting these populations.
FIGURE 3
Permutational ANCOVA cortical thickness (mm) results between Whites and African Americans with significant differences after applying the multiple comparison correction test (FWER). Asterisks (***) indicate significant results at p < 0.001 and (*) at p < 0.05.
FIGURE 5
Permutational ANCOVA cortical surface area (mm²) results between Whites and African Americans with significant differences after applying the multiple comparison correction test (FWER). Asterisks (***) indicate significant results at p < 0.001 and (*) at p < 0.05.
FIGURE 6
Brain regions representing cortical surface area differences between Whites and African Americans. Brain images were created with BrainPainter software (Marinescu et al., 2019).
TABLE 1
Descriptive results between African Americans and Whites.
¹ Mean (SD) of age and education in years. Frequencies (n) and percentages (%) of economic income ranges in US dollars. ² Welch two-sample t-test of age and education between African Americans and Whites (p < 0.05).
TABLE 2
ANCOVA permutation volumetric brain results between African Americans and Whites adjusting for age, sex, education, and economic income.
(***) Significant results at p < 0.001 and (*) at p < 0.05 when the multiple comparison correction test (FWER) was applied. R, right hemisphere; L, left hemisphere; MSS, mean sum of squares. Brain volumetry is standardized as the ratio of cubic millimeters/intracranial volume (mm³/ICV).
TABLE 3
Permutational ANCOVA cortical thickness results between African Americans and Whites adjusting for age, sex, education, and economic income.
TABLE 4
Permutational ANCOVA cortical surface area results between African Americans and Whites adjusting for age, sex, education, and economic income. | 2023-12-23T16:13:36.025Z | 2023-12-21T00:00:00.000 | {
"year": 2023,
"sha1": "cb3366e9a4465872ec5211a7a463d607825eb0ae",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnint.2023.1027382/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6f8cb81e6174187998ce4a60d8b20e5aaa55f8c6",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Sociology"
],
"extfieldsofstudy": []
} |
44186426 | pes2o/s2orc | v3-fos-license | Predictors of Hepatitis B Surface Antigen Titers two decades after vaccination in a cohort of students and post-graduates of the Medical School at the University of Palermo, Italy
Introduction and objective. The introduction of a vaccine against hepatitis B virus (HBV) for newborn babies in Italy in 1991, extended to 12-year-old children for the first 12 years of application, has been a major achievement in terms of the prevention of HBV infection. The objective of this study was to analyse the long-term immunogenicity and effectiveness of HBV vaccination among healthcare students with different working seniorities. Materials and method. A cross-sectional observational study of undergraduate and postgraduate students attending the Medical School of the University of Palermo was conducted from January 2014 – July 2016. HBV serum markers were measured with commercial chemiluminescence assays. Categorical variables were analyzed using the chi-square test (Mantel–Haenszel), whereas means were compared using Student's t test. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were also calculated by multivariable logistic regression, using a model constructed to examine predictors of an anti-HBs titer above 10 mIU/mL, assumed as protective. Results. Of the 2,114 subjects evaluated – all of whom had been vaccinated in infancy or at the age of 12 years and were HBsAg/anti-HBc negative – 806 (38.1%) had an anti-HBs titre <10 IU/L. The latter were younger, more likely to be attending a healthcare profession school (i.e., nursing and midwifery) than a postgraduate medical school, and more likely to have been vaccinated in infancy (p <0.001, 95% CI 2.63–5.26, adjusted OR 3.70). Conclusion. The results of the study suggest that assessment of HBV serum markers in workers potentially exposed to hospital infections is useful for identifying small numbers of unvaccinated subjects, or vaccinated subjects with a low antibody titre, all of whom should be referred to a booster series of vaccinations.
INTRODUCTION
Hepatitis B infection has a variety of clinical courses, including self-limited acute hepatitis, fulminant hepatic failure, chronic hepatitis, and progression to cirrhosis and hepatocellular carcinoma. HBV is transmitted by percutaneous or mucosal exposure to infected blood or other body fluids. In July 2016, the World Health Organization (WHO) estimated that 240 million people are chronically infected with HBV, and that more than 686,000 people die every year due to complications of hepatitis B, including cirrhosis and liver cancer [1]. WHO data report that hepatitis B prevalence is highest in sub-Saharan Africa and East Asia, where between 5-10% of the adult population is chronically infected. High rates of chronic infection are also found in the Amazon and the southern parts of Eastern and Central Europe. In the Middle East and the Indian subcontinent, an estimated 2-5% of the general population is chronically infected. Less than 1% of the population of Western Europe and North America is chronically infected [1].
Worldwide, two billion people have been infected with HBV (one-third of the world's population), and there are four million new cases of acute hepatitis per year, with almost 400 million chronic carriers [2].
In countries where large-scale vaccination efforts were made, the epidemiology of hepatitis B has been transformed. In Italy, the epidemiology of this infection changed after the introduction in 1991 of vaccination of newborn babies, which was extended to 12-year-old children for the first 12 years of application.
From 1985–2014, there was a reduction in newly notified cases in Italy, from 12 per 100,000 inhabitants to <1, as reported by the Integrated Epidemiological System for Acute Viral Hepatitis (SEIEVA) [2]. The Italian recommended adult/adolescent immunization schedule involves HBV vaccine doses at months 0, 1, and 6, whereas infant vaccination starts from the third month of life, with 2nd and 3rd doses at 5 and 11 months. The vaccine is 95% effective in preventing infection and the development of chronic disease and liver cancer due to hepatitis B. In Italy, HBV vaccination is also recommended for people at risk of acquiring HBV infection, including those with an important occupational risk, such as health workers [3][4][5]. In Italy, according to national law, students are also considered workers, and therefore, if they are exposed to physical, chemical, biological or psychological risks, they are evaluated by an occupational health physician.
Of the population undergoing HBV vaccination, 95% develop an effective immune response, evaluated as a level of anti-HBs antibodies > 10 mIU/mL. Several studies confirm that the acquired immunity persists for at least 10 years after vaccination with antibody levels > 10 mIU/mL, but probably not longer if vaccination was performed at neonatal age [6][7]. In several countries of the world, students of faculties of medicine are examined to establish whether the vaccination performed in infancy is still protective several decades after HBV vaccination, because this is a population occupationally exposed to a higher risk of acquiring HBV infection [8][9][10][11][12][13][14][15][16][17][18][19][20].
OBJECTIVE
The main aim of this study was to evaluate the persistence of long-term immunogenicity of HBV vaccination in students of the School of Medicine at the University of Palermo. A second aim was to identify possible predictive factors of long-term immunogenicity, such as the age of individuals when they were vaccinated, gender, and race.
MATERIALS AND METHOD
In this cross-sectional observational study, the levels of serum HBsAg, anti-HBs, and anti-HBc were evaluated in students attending schools of the healthcare professions, or postgraduate medical schools, of the University of Palermo, Italy, who were examined for professional risks from January 2014 – July 2016. For each student, a standardized medical record was compiled, including socio-demographic (age, gender, country of origin) and clinical information (relatives' diseases and personal past and recent pathologies). A physical examination was additionally conducted for each subject before blood sampling.
Excluded from the study were subjects who met at least one of the following exclusion criteria: a) HBsAg personal or maternal positivity, chronic diseases or immunosuppression; b) absence of primary documentation of vaccination for HBV; c) a recent booster dose of HBV vaccine.
According to Italian law, the subjects were requested to provide written informed consent to the processing of their data. Moreover, although it is not required in Italy for observational studies, approval of the Local Ethics Committee was also obtained [21].
Serological tests. Serological analyses were performed with commercial chemiluminescence assays (VITROS anti-HBs assay on the Vitros ECI Immunodiagnostic system, Ortho-Clinical Diagnostics, UK). In particular, antibody to the hepatitis B surface antigen (anti-HBs) levels were expressed in mIU/mL. The dynamic range of quantification is 10–1000 mIU/mL. A level of anti-HBs above 10 mIU/mL was considered protective against HBV infection. Statistical analysis. Statistical analysis was performed with R software version 3.3.2 (October 2016). The significance level chosen for all analyses was 0.05, 2-tailed. Absolute and relative frequencies were calculated for qualitative variables, whereas normally distributed quantitative variables were summarized as mean (standard deviation). Normality was verified by the Shapiro–Wilk test. Categorical variables were analyzed using the chi-square test (Mantel–Haenszel), and means were compared using the Student's t test. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were also calculated by a multivariable logistic regression model constructed to examine predictors of an anti-HBs titer above 10 mIU/mL, assumed to be protective. All variables found to have a statistically significant association (p < 0.05) with an anti-HBs titer > 10 mIU/mL were entered into the multivariable logistic regression model in order to check for independence. In the multivariable analysis, age was included as a continuous variable.
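As a rough illustration of the analysis pipeline just described, the following R sketch reproduces the logic of the univariable screening and the multivariable logistic regression. The data frame and column names (dat, anti_hbs, vacc_at_12, school, age, sex) are hypothetical placeholders, not the authors' actual dataset:

    # Hypothetical sketch; 'dat' and its columns are illustrative placeholders.
    dat$protected <- as.integer(dat$anti_hbs >= 10)   # 1 if titre >= 10 mIU/mL

    # Univariable screening: chi-square for categorical predictors,
    # Student's t test for means, as described above.
    chisq.test(table(dat$protected, dat$vacc_at_12))
    t.test(age ~ protected, data = dat)

    # Multivariable logistic regression; age entered as a continuous variable.
    fit <- glm(protected ~ age + sex + vacc_at_12 + school,
               data = dat, family = binomial)
    exp(cbind(OR = coef(fit), confint(fit)))          # adjusted ORs with 95% CIs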
RESULTS
The main characteristics of the 2,114 subjects included in the study are shown in Table 1.
All enrolled subjects were vaccinated for HBV and were HBsAg/anti-HBc negative. 870 (41.1%) students had received a course of 3 paediatric doses (10 μg) of recombinant hepatitis B vaccine at the 3rd, 5th and 11th month of postnatal life, and 1,244 (58.9%) had received a course of 3 adult doses (20 μg) of the same vaccine when they were 12 years old, as required by the law current in Italy at the time [22]. The majority (61.9%) of the students had an anti-HBs titre >10 mIU/mL (Tab. 2).
Students with a protective anti-HBs titre were statistically significantly older (27.9 vs. 24.4 years, p<0.001), had fewer years elapsed since HBV vaccination (19.2 vs. 20.1, p<0.001), were more frequently vaccinated at the age of 12 years (76.0% with a protective titre vs. 41.7% among those vaccinated in infancy, p<0.001), and more frequently attended a postgraduate medical school (77.4% vs. 47.4% among healthcare profession school students, p<0.001). No statistically significant differences in antibody titer were observed between males and females. The multivariable logistic regression model (Tab. 3) shows that, after controlling for confounding, HBV vaccination at the age of 12 years and attending a postgraduate medical school were significantly associated with increased odds of having protective hepatitis B surface antibody titers (OR=3.70, 95% CI=2.63–5.26 and OR=1.40, 95% CI=1.02–1.94, respectively). In particular, a protective anti-HBs titer was about 4-fold more frequent among subjects vaccinated during adolescence than among those vaccinated in infancy.
DISCUSSION AND CONCLUSIONS
Healthcare-related transmission has long been recognized as an important source of new HBV infections worldwide. It is estimated that in the United States, 12,000 healthcare workers were infected per year in the pre-vaccine era. A healthcare worker's risk of infection has been shown to correlate with the level of blood and needle exposure [22].
It is internationally recognized that healthcare profession and postgraduate medical students have a high occupational risk of HBV infection, even in countries with a low incidence of the disease [10][11]14]. In Italy, several studies have demonstrated a low endemic level (prevalence of HBsAg in the general population <2%), with an incidence of HBV infection of about 1 per 100,000 individuals, suggesting that healthcare workers and students could have a risk that is low but not negligible. Despite such evidence, to date there is no vaccination obligation for health professionals, students, or volunteers; vaccination is only recommended [23,24]. Fortunately, a large majority of young Italian students have been vaccinated according to the national immunization programme that, since 1991, has included HBV vaccination as compulsory for infants and adolescents aged 12 years. Adolescent vaccination was restricted to the first 12 years of the implementation of the vaccination law; thus, in 2004, vaccination of 12-year-olds was stopped, but was retained for infants.
As demonstrated by several studies, administration of the HBV vaccine as part of a combination vaccine or as a monovalent vaccine induces long-lasting immune memory against HBV, with long-term antibody persistence. Several studies have reported that 85–90% of those vaccinated as adolescents have anti-HBs levels >10 mIU/mL when measured 10 years after vaccination. This percentage was 40–60% for those vaccinated as infants, as measured 15–20 years after vaccination [25][26]. Both of these figures coincide with the results obtained by the authors of the presented study, since more than 60% of their students had anti-HBs titers above 10 mIU/mL, even more than 20 years after vaccination. Moreover, it should be pointed out that none of them had received a booster dose after the primary vaccination programme, and no booster administration date was indicated in the primary documentation of vaccination for HBV. In the authors' experience, this habit is also common among the general population.
Despite declining serum levels of antibody, international evidence shows that vaccine-induced immunity continues to prevent clinical disease or detectable viremic HBV infection. The long-term efficacy of HBV vaccination is confirmed when one considers that none of the vaccinated subjects in the current study was found to be HBsAg/anti-HBc positive.
As already reported by several authors, including those of the presented study, a relatively high percentage of students (about 40%) was found to have anti-HBs titers below 10 mIU/mL.
The CDC recommends pre-exposure assessment of current or past anti-HBs results upon matriculation, followed, if necessary, by one or more additional doses of HBV vaccine for subjects with anti-HBs <10 mIU/mL, to help ensure HBV protection after contact with blood or body fluids [27].
The administration of an HBV challenge dose after the primary schedule induces strong anamnestic responses and is well tolerated [28]. In Italy, it is necessary for medical school students to assess the anti-HBs titre before undertaking internships in hospital, in order to identify subjects with levels <10 mIU/mL. In fact, although the current WHO view is that subjects with an anti-HBs titre <10 mIU/mL still retain memory immunity, and no booster dose is necessary as part of a routine immunization programme, it could be beneficial to take a more protective approach for healthcare workers, who are at significantly higher risk of exposure to HBV, administering a booster dose and rechecking the titre after 1–2 months to verify whether or not they are responders [29]. Screening in the healthcare categories is also fundamental for identifying students and workers older than 35 years who have not been vaccinated, either at birth or at 12 years of age, as well as immigrants from countries without universal immunization. This approach would allow identification of non-responders to the primary vaccination cycle or subjects with an incomplete vaccination cycle. Moreover, two variables were found that could help in predicting subjects at higher risk of having an anti-HBs titre <10 mIU/mL: HBV vaccination in infancy and attendance at healthcare professional courses.
A possible cause of the low anti-HBs titres in adult subjects vaccinated in infancy could be the immaturity of the immune system in infants, at an age when there may be less interaction between B and T lymphocytes. On the other hand, the reason why students attending healthcare professional courses could have a higher risk of non-protective anti-HBs titers, even after adjustment for confounding due to age, years after vaccination, and vaccination period, is less clear. Further analyses may be needed to answer this intriguing question, and some variables, such as socio-demographic characteristics, could play a role in this association.
In this sense, the lack of fuller information about the students (e.g., the anti-HBV vaccine used for the primary cycle, socio-demographic characteristics, immunological status, previous blood exposure, etc.) represents a major limitation of the presented study. Despite these limitations, the presented study enriches the literature on HBV vaccination, highlighting that although anti-HBV vaccination is associated with long persistence of protective titers, several students and post-graduates of the Medical School could benefit from a booster dose.
Moreover, some factors were found that could contribute to identifying susceptible subjects, and that could be associated with patterns of immune regulation. Long-term follow-up studies several decades after vaccine administration will be needed to confirm the duration and persistence of immune memory.
Table 1. General characteristics of subjects (N=2,114) included in the study.
Table 2. Variables associated with persistence of protective Hepatitis B surface antibody titers (≥ 10 mIU/mL) | 2018-04-03T05:52:22.102Z | 2017-06-12T00:00:00.000 | {
"year": 2017,
"sha1": "383e9f1c78f77afa388ee6da0f74aa544c9f241e",
"oa_license": "CCBYNC",
"oa_url": "http://www.aaem.pl/pdf-74716-12549?filename=Predictors%20of%20Hepatitis%20B.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "383e9f1c78f77afa388ee6da0f74aa544c9f241e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
169738841 | pes2o/s2orc | v3-fos-license | An Analysis of Domestic Tourism Consumption Based on R Software
R is a software system that can be used for data processing, calculation, and graphics. Its syntax is superficially similar to C, but semantically it is a functional programming language, and it is widely used in statistical analysis. This paper therefore used R to analyze China's domestic tourism consumption data for 1999-2015, examining the main factors affecting the level of domestic tourism consumption in China: residents' disposable income, GDP, per capita tourist consumption, the number of tourists, railway mileage, and the number of travel agencies. Finally, it established and solved a multiple linear regression model taking domestic tourism consumption as the dependent variable and GDP, per capita tourist consumption, the number of tourists, and railway mileage as independent variables. The results show that there is a significant positive correlation between domestic tourism consumption and GDP, per capita tourist consumption, the number of tourists, and railway mileage, and no significant relationship with the number of travel agencies.
Introduction
Liu Shenzhen's "Analysis of Domestic Tourism Consumption Based on Multiple Linear Regression Model", published in the Journal of Chongqing University of Technology (Natural Sciences) in June 2016, selected data on domestic tourism consumption in China from 2003 to 2012 and used a multiple linear regression model, with domestic tourism consumption as the dependent variable and per capita disposable income, per capita tourist consumption, and the number of travel agencies as independent variables, to analyze the factors affecting China's domestic tourism consumption. That paper concluded that domestic tourism consumption is positively correlated with residents' disposable income and per capita tourist consumption, and negatively related to the number of travel agencies [1]. However, those results are inconsistent with the conclusions: residents' disposable income and the number of travel agencies are not significant in that regression equation. Meanwhile, the article only performs a simple regression analysis of the variables, without considering problems such as multicollinearity and the time-series nature of the variables. This paper sets out to verify those results: it extends the time span over which the variables are selected, reviews the relevant literature of previous scholars, and increases the number of independent variables while preserving the author's original variables, and then re-runs the regression analysis.
Research review
In recent years, with the continuous improvement of people's living standards, people are no longer satisfied with basic consumption and pay more attention to consumption at the spiritual level, which has led to an increase in domestic tourism consumption. Analyzing the major factors affecting tourism consumption is of great significance for the development of tourism: it can not only provide a reference for tourism management departments when formulating tourism development plans, but can also be used to make short- and medium-term predictions of domestic tourism consumption and promote the healthy development of the tourism industry [2][3][4][5][6]. In studies related to domestic tourism consumption, Li Yunpeng (2005) found that the domestic tourism consumption of urban residents in China was mainly affected by current income and prices [7]. Sun Gennian and Xue Jia (2009) pointed out that per capita disposable income was the basic factor affecting and restricting domestic tourism consumption. The view that income has a decisive effect on urban residents' domestic tourism consumption has been extensively tested empirically [8]. In addition, Wu Chunlai and Gu Huimin (2003) believed that changes in the structure of residents' income distribution had transformed the tourism market from a mass tourism industry into a multi-faceted, heterogeneous structure [3]; Teng found that urban residents' tourism consumption differed in the type of travel and per capita tourism spending, and that this difference was related to per capita income [9]. From the above, we can see that residents' per capita income plays an important role in residents' tourism consumption, so this article selected it as one of the explanatory variables. At the same time, with the continuous improvement of living standards, people are pursuing more spiritual and cultural consumption, and because of convenient transportation, more people are beginning to go abroad to experience different scenery and different customs. Weng Gangmin et al. (2007) found that urban residents' travel rate was most closely related to living-standard factors, and that per capita tourist consumption was an important factor affecting tourism consumption income [10]. Wang Yu and Qin Yuanhao (2011) discussed the relationship between per capita tourism expenditure and GDP, the freely disposable income of urban residents, the average labor compensation of urban residents, the total amount of RMB deposits, railway length, and free time [11]. They put these factors into a model and used stepwise regression, finally concluding that discretionary income, railway length, and free time had a significant impact on tourism consumption. In domestic research, most scholars have used empirical analysis methods to select major factors for quantitative analysis from among factors such as people's savings, per capita income, per capita disposable time, railway length, the number of travel agencies, and national policies [12].
Based on the above literature review, we can identify the main factors and models affecting tourism consumption. This article established and solved a multiple linear regression model with domestic tourism consumption (Y) as the dependent variable, and residents' per capita disposable income (X1), per capita tourist consumption (X2), the number of travel agencies (X3), the number of domestic tourists (X4), gross domestic product (GDP) (X5), and total railway kilometers (X6) as independent variables.
Data sources
According to the principles of availability, authority, and unified statistics, the data in this paper are all taken from the "China Statistical Yearbook" from 1999 to 2015. Based on the review of previous literature, this paper selects residents' per capita disposable income (X1), per capita tourist consumption (X2), the number of travel agencies (X3), the number of domestic tourists (X4), gross domestic product (GDP) (X5), and total railway kilometers (X6) as the independent variables of the multiple linear regression model.
Research model and theoretical hypothesis
In selecting the variables, the influence relationships among the different variables were examined. To keep the model simple and clear, we use a multiple linear regression model. Combining economic theory with the operating laws of tourism itself, the following equation is constructed: Y = β0 + β1*X1 + β2*X2 + β3*X3 + β4*X4 + β5*X5 + β6*X6 + μ, where Y is the dependent variable, X1~X6 are the independent variables, β0~β6 are the regression coefficients, and μ is the random disturbance term.
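A minimal R sketch of estimating this equation follows; the data frame df with columns Y and X1-X6 is a stand-in for the 1999-2015 yearbook series, not the paper's actual file:

    # Illustrative only: 'df' stands in for the 1999-2015 statistical yearbook data.
    model <- lm(Y ~ X1 + X2 + X3 + X4 + X5 + X6, data = df)
    summary(model)   # estimates of beta_0..beta_6, t values, R-squared, F statistic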
Stability analysis
The empirical judgment method shows that the ADF test value (T value) > 0.05(critical value), does not pass the test. That means, the unit root is not stable, otherwise it is stable. According to the above results, the t value of the original data is more than 0.05, so the time series is not stable. After the first-order difference, only the value of the variable X3 and X4 is a stationary sequence, but because the other four variables are not stable, this paper carries out the second-order difference for all data and the second-order difference results are less than 0.05. The two order difference sequence is stationary.
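For reference, a unit-root check of this kind can be sketched in R with the tseries package (series names are illustrative); a series is treated as non-stationary when the ADF p-value exceeds 0.05:

    library(tseries)                          # provides adf.test()
    adf.test(df$Y)                            # level series: non-stationary here
    adf.test(diff(df$Y))                      # first difference
    adf.test(diff(df$Y, differences = 2))     # second difference: stationary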
Co-integration test
The six independent variables in this paper are all non-stationary in levels, but all of them are integrated of order two. A cointegration test is therefore used to determine whether the regression equation composed of these variables is stable; if it is, there is a stable long-run relationship among the variables and the analysis is meaningful. The first step is to run the regression and generate the residuals. The second step is to perform a unit root test on the residual series. The test gave a p-value of 0.001452 < 0.05, so the residual series is stationary.
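These two steps correspond to the Engle-Granger procedure, which might be sketched in R as follows (again with the hypothetical df; strictly speaking, the residual-based test uses its own critical values rather than the standard ADF table):

    # Step 1: long-run regression, keep the residuals.
    longrun <- lm(Y ~ X1 + X2 + X3 + X4 + X5 + X6, data = df)
    res     <- residuals(longrun)

    # Step 2: unit-root test on the residuals; a stationary residual
    # series indicates cointegration among the variables.
    library(tseries)
    adf.test(res)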
Multicollinearity analysis
In general, because of the limitations of economic data, some correlation among the explanatory variables exists in the design matrix. By the correlation coefficient criterion, when the absolute value of a correlation coefficient is > 0.8, there is a significant linear correlation between the two variables; when the absolute value is less than 0.3, the correlation is low; other cases are moderately correlated. According to the calculation results, the correlation coefficients between the factors in the equation all exceed 0.9, showing a highly correlated pattern. In order to guarantee the accuracy and goodness of fit of the model, factors should be eliminated and the equation re-estimated step by step. According to the calculation results, the p-values of X3 (0.063317) and X5 (0.090355) are significant at the 0.1 level but not at the 0.05 level. Usually, the closer the DW value is to 2, the less likely the equation is to suffer from autocorrelation; the DW value here is 1.5971, so the equation may be autocorrelated. The other diagnostics show that the regression results are otherwise good. Since relatively many independent variables were selected in this paper, we continue to eliminate variables and re-estimate; the equation then becomes Y ~ X2 + X4 + X5 + X6. According to the results, the p-value and t-value of each variable are significant, R2 is close to 1, indicating good fit, as does the F value, and according to the look-up table the DW value is close to 2 and lies in the acceptable interval, so this paper holds that there is no autocorrelation in the equation. On the basis of these results, no further independent variables are eliminated, and the final expression of the equation is: Y = -25930 + 31.33*X2 + 0.1072*X4 - 0.0446*X5 + 1749*X6
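The elimination steps above can be checked with a correlation matrix and variance inflation factors; a sketch (car::vif is one common choice, and the variable names follow the paper's X2, X4, X5, X6):

    cor(df[, c("X1","X2","X3","X4","X5","X6")])      # |r| > 0.8 flags strong collinearity
    library(car)                                     # provides vif()
    reduced <- lm(Y ~ X2 + X4 + X5 + X6, data = df)  # model retained in the paper
    vif(reduced)
    summary(reduced)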
Heteroscedasticity test
In order to ensure that the estimators of the regression parameters have good statistical properties, an important assumption of the classical linear regression model is that the random error terms are homoscedastic, that is, they all have the same variance. If this assumption is not satisfied, the random error terms have different variances and the linear regression model exhibits heteroscedasticity. According to the White test, the p-value = 0.5416 > 0.05, so there is no heteroscedasticity in the model.
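The White test itself is easy to compute by hand in R: regress the squared residuals on the regressors, their squares, and their cross-products, and compare n*R-squared against a chi-square distribution (a sketch continuing the hypothetical reduced model from above):

    u2  <- residuals(reduced)^2
    aux <- lm(u2 ~ (X2 + X4 + X5 + X6)^2              # main effects + cross-products
                   + I(X2^2) + I(X4^2) + I(X5^2) + I(X6^2),
              data = df)
    lm_stat <- length(u2) * summary(aux)$r.squared    # White's LM statistic
    df_aux  <- length(coef(aux)) - 1                  # number of auxiliary regressors
    pchisq(lm_stat, df = df_aux, lower.tail = FALSE)  # p > 0.05: no heteroscedasticity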
Sequence correlation test
If the random error terms are correlated with one another across observations, the model is said to exhibit autocorrelation or serial correlation, which is commonly seen in time series. Separately, the results show strong multicollinearity between X2 and X6, and serious multicollinearity between X4 and X5. Because of the high multicollinearity among the variables of the equation, the corresponding variables were removed; there is no multicollinearity between X2 and X3, but the results for X3 show p = 0.565 and t = -0.589, so the variable X3 is not significant and an equation containing it is not informative. Considering the significance of the variables and the better regression fit, and since only multicollinearity remains among the variables and X3 is not in the equation, this paper chooses not to eliminate any further variables, and adopts principal component analysis to further refine the regression equation.
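The Durbin-Watson check reported above can be reproduced with the lmtest package (a sketch, again continuing the hypothetical reduced model; values near 2 indicate no first-order autocorrelation):

    library(lmtest)    # provides dwtest()
    dwtest(reduced)    # DW statistic and p-value for first-order autocorrelation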
Principal component analysis
By an orthogonal transformation, a set of possibly correlated variables is transformed into a set of linearly uncorrelated variables, which are called principal components.
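In R, principal components of the standardized regressors can be obtained with prcomp (a sketch under the same hypothetical df; which components to retain is then read off the variance shares):

    pc <- prcomp(df[, c("X2","X4","X5","X6")], center = TRUE, scale. = TRUE)
    summary(pc)       # proportion of variance explained by each component
    pc$rotation       # loadings: the coefficients of each principal component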
According to the results of the RStudio calculation, one principal component is finally retained. The main conclusions are as follows: (1) With the aid of the R statistical analysis software, a regression equation was established with per capita tourist consumption, the number of domestic tourists, gross domestic product (GDP), and total railway kilometres. The results show that China's domestic tourism consumption is positively related to per capita tourist consumption, the number of domestic tourists, GDP, and total railway kilometres, and has no significant relationship with the number of travel agencies, which contradicts the conclusion of Liu Zhenzhong's article.
(2) According to the established model and its results, it is not hard to see that GDP is highly correlated with residents' disposable income. Although the article excludes the variable of residents' disposable income, GDP can partly reflect the size of residents' disposable income. In recent years, with the rapid growth of China's economy, residents' disposable income and GDP have both been increasing, which has led to improvements in transportation, communication and accommodation. It is more convenient for people to travel, and tourism is no longer the exclusive preserve of rich people.
(3) There is no significant correlation between the number of travel agencies and domestic tourism consumption. With convenient transportation, the rapid development of OTA and UGC platforms, and negative news about travel agencies, more and more tourists prefer independent travel to package tours. Tourists who join package tours mainly go abroad, so the number of travel agencies has no significant impact on domestic tourism consumption. From an analysis of the present development situation, travel agencies need to break away from the models of traditional low-price groups, zero-fee groups and shopping groups, and seek cooperation between online OTAs and offline entities in order to find a path for development.
(4) According to the analysis of previous surveys, tourists spend about 30% of their budget on transportation, which is enough to explain the importance of transportation and its impact on domestic tourism consumption. To develop domestic tourism, we should first develop the transportation industry, especially by increasing railway and road mileage. Many scenic spots are located in poor areas, where the lack of transportation facilities and other infrastructure has restricted the development of tourism and the economy, so speeding up the construction of railways in these underdeveloped areas is of great importance. At the same time, investment in highways and other transportation facilities should be increased to form a more unified national network, which is conducive to the development of domestic tourism.
(5) Relevant policies and institutions need to be improved so as to raise the overall service level of the tourism industry. With the improvement of people's living standards, domestic tourism consumption has been constantly increasing, and tourism is more and more important. However, the overcharging problems of tour guides and the sky-high prices of scenic spots have made tourists take a negative attitude towards some destinations. Tourism, a green industry, can not only promote the economy but also create jobs; it is a way for resource-poor areas to develop and prosper. Therefore, the country needs a stronger supervision system and improved laws and regulations to promote the sustainable and healthy development of the tourism industry. | 2019-05-30T23:46:06.402Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "6f269ba79d8fdbd640bb783994e17c886a8f34bd",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/87/matecconf_cas2018_05006.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b0c6ebe8e9690c4c8aba08c4d3428cbfd37ac778",
"s2fieldsofstudy": [
"Computer Science",
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
249298924 | pes2o/s2orc | v3-fos-license | Solution-Processed Silicon Doped Tin Oxide Thin Films and Thin-Film Transistors Based on Tetraethyl Orthosilicate
Recently, tin oxide (SnO2) has become a preferred thin-film material for semiconductor devices such as thin-film transistors (TFTs) due to its low cost, non-toxicity, and superior electrical performance. However, a high oxygen vacancy (VO) concentration leads to poor performance of SnO2 thin films and devices. In this paper, with tetraethyl orthosilicate (TEOS) as the Si source, which can decompose to release heat and supply energy during annealing, Si doped SnO2 (STO) films and inverted staggered STO TFTs were successfully fabricated by a solution method. XPS analysis showed that Si doping can effectively inhibit the formation of VO, thus reducing the carrier concentration and improving the quality of the SnO2 films. In addition, the heat released from TEOS can modestly lower the preparation temperature of STO films. By optimizing the annealing temperature and Si doping content, 350 °C annealed STO TFTs with 5 at.% Si exhibited the best device performance: Ioff was as low as 10^-10 A, Ion/Ioff reached a magnitude of 10^4, and Von was 1.51 V. Utilizing TEOS as a Si source offers a useful reference for solution-processed metal oxide thin films in the future.
Introduction
In recent years, due to their high mobility, low-temperature preparation, and compatibility with flexible processes, metal oxide semiconductor (MOS) materials, represented by indium gallium zinc oxide (IGZO), have been extensively applied in flat-panel displays driven by TFTs, such as AMLCDs and AMOLEDs [1][2][3][4][5]. However, the scarce reserves of indium in the earth's crust (0.25 ppm) lead to its high market price (~$750/kg) [6]. Furthermore, it is toxic, which is incompatible with the consumer electronics market's trend toward low cost and environmental benignity. The development of an alternative, indium-free oxide semiconductor material system is imperative. Notably, the electronic structure of Sn4+ (4d10 5s0) is similar to that of In3+ (4d10 5s0), with a spherically symmetric s orbital, leading to the high mobility of SnO2 and In2O3 even in the amorphous state [7,8]. In addition, Sn is abundant (2.2 ppm) and relatively inexpensive (~$15/kg) [6]. SnO2 is also non-toxic, environmentally friendly, and chemically stable, making it the most promising candidate to replace In-based MOS materials in semiconductor devices such as TFTs.
SnO2-based TFTs have generally been fabricated by magnetron sputtering and other vacuum technologies [9][10][11][12], but these involve expensive, complex processes dependent on a vacuum environment. In contrast, the solution method has broad development prospects in modern electronic device processing [13][14][15], with the advantages of low cost, simple processing, and easy manipulation of composition by doping. As a result, solution processing has increasingly become a preferred route to SnO2-TFTs.
Oxygen vacancies (VO) play a significant role in setting the carrier concentration and thereby affect the properties of the material [16,17]. In 2010, Tsay et al. [18] prepared crystalline SnO2 thin films at 500 °C by spin coating, with an O/Sn ratio of only 1.69 and a carrier concentration of 7.5 × 10^18 cm^-3 due to the existence of VO. An excess of carriers caused by a high concentration of VO in SnO2 leads to TFT performance deterioration, including a large off current (Ioff) and difficulty in turning off [19,20]. Many studies have been conducted to suppress the VO concentration by doping. In 2020, Zhang et al. [21] prepared Ga doped SnO2-TFTs (GTO-TFTs) at 450 °C by spin coating, and found that as the Ga content rose from 20% to 60%, the VO fraction decreased from 30.24% to 17.18%, while the Ioff of the TFTs correspondingly decreased from 10^-3 A to 10^-11 A. In addition, other commonly used dopants such as Sb, Cr, Zr, and Y [22][23][24][25][26] can also reduce the VO concentration, but low reserves and a certain toxicity limit their practical application.
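For orientation, the donor action of an oxygen vacancy can be written in standard Kröger-Vink notation (a textbook relation, not an equation from the works cited here):

    O_O^{\times} \;\longrightarrow\; \tfrac{1}{2}\,O_2(g) \;+\; V_O^{\bullet\bullet} \;+\; 2e^{-}

Each doubly ionized vacancy donates up to two electrons to the conduction band, which is why a high VO density raises the carrier concentration and, in a TFT channel, the off current.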
However, Si is environmentally friendly, non-toxic, and resource-rich. Si4+ has the same valence state as Sn4+ and will not introduce new charges into SnO2. In addition, the binding energy of Si-O (799.6 kJ/mol) is higher than that of Sn-O (531.8 kJ/mol), and the Lewis acid strength of Si (8.096) is also significantly higher than that of Sn (1.617), which makes Si a superior oxygen binder for suppressing the formation of VO [27][28][29]. Liu et al. [30] fabricated silicon doped SnO2-TFTs (STO-TFTs) by sputtering, controlling the VO concentration with Si, and the best device performance was obtained with 1 wt.% Si: the saturation mobility (µsat) was 6.38 cm^2/(V·s), the on/off current ratio (Ion/Ioff) was 1.44 × 10^7, and the subthreshold swing (SS) was 0.77 V/Dec. Therefore, incorporating Si has the potential to lower the carrier concentration of SnO2 films and improve device performance. However, there are few studies of Si doping of SnO2 by the solution method, and most of them require a high processing temperature (>450 °C) [26,31].
Considering the above problems, this paper utilized tetraethyl orthosilicate (TEOS) and tin chloride dihydrate (SnCl2·2H2O) to prepare STO thin films, and the effects of the Si doping content on the chemical composition, microstructure, and electrical properties of SnO2 were investigated. It was found that TEOS can not only act as a Si dopant to diminish the VO and carrier concentrations, but can also modestly reduce the preparation temperature of SnO2 thin films through its decomposition and heat release during annealing. In a previous study, it was demonstrated that an AlOx:Nd film is a suitable dielectric in oxide TFTs due to its high dielectric constant and low leakage current density [32]. Based on this, bottom-gate, top-contact STO TFTs were successfully fabricated.
Materials and Methods
0.1 mol/L SnO2 precursor solutions were synthesized by dissolving SnCl2·2H2O in 2-methoxyethanol (2-ME), followed by stirring for 0.5 h to mix well. TEOS was added at atomic ratios (Si/Sn) of 2.5, 5, 10, and 15 at.%. Before spin coating, the precursor solutions were stirred for 12 h in air. Figure 1 shows the preparation process of the STO films. The alkali-free glass substrates were treated with oxygen plasma at a power of 60 W for 10 min. 40 µL of solution filtered through a 0.22 µm syringe filter was added dropwise to the glass substrate and then spun by a spin coater at 5000 rpm for 30 s to prepare SnO2 and STO wet films. The resulting films were transferred to a hot plate heated at 100 °C for 10 min to evaporate the organic solvent, followed by annealing at 300 °C for 1 h to obtain dense films.
The TFTs were fabricated with a bottom-gate and top-contact configuration, as shown in Figure 2. The preparation process for the active layer was essentially the same as that shown in Figure 1, except that the substrates were composed of Al:Nd/Al2O3:Nd (the thickness of the Al:Nd gate electrode was 200 nm and that of the Al2O3:Nd insulator was 300 nm, with a capacitance per unit area of 38 nF/cm^2), the Si doping concentrations were 0, 2.5, and 5 at.%, the spin speed was 8000 rpm, and the annealing temperatures were 300 °C and 350 °C. The S/D electrodes were deposited on the surface of the STO films by direct current (DC) sputtering of an Al target with a purity of 99.99%. The sputtering power was 100 W, with a deposition pressure of 1 mTorr and a time of 1200 s. The patterning of the electrodes was realized by masking the non-S/D electrode area, giving a channel width of 800 µm and length of 200 µm.
The thermal characteristics of the precursors were measured with a thermogravimetric analyzer (TG) (DZ-TGA101, Nanjing Shelley biology, Nanjing, China) and a differential scanning calorimeter (DSC) (DZ-DSC300C, Nanjing Shelley biology, Nanjing, China) at a heating rate of 10 °C/min from room temperature to 500 °C under ambient conditions. The contact angles of the solutions were tested with an optical contact angle meter (Biolin, Theta Lite 200, Gothenburg, Sweden). The surface morphology of the STO films was observed with laser scanning confocal microscopy (LSCM) (OLS50-CB, Tokyo, Japan) and an atomic force microscope (AFM) (BY 3000, Being Nano-Instruments, Guangzhou, China). The microstructure of the STO thin films was characterized by an X-ray diffractometer (XRD) (PANalytical Empyrean, Almelo, The Netherlands). Microwave photoconductivity decay (µ-PCD) (KOBELCO, LTA-1620SP, Kobe, Japan) was performed to clarify the distribution of internal defects in the films. The electrical parameters of the STO films were obtained by Hall measurement (ECOPIA, HMS 5300, Seoul, Korea). The chemical compositions were analyzed by X-ray photoelectron spectroscopy (XPS) (Thermo Fisher Scientific, Nexsa, MA, USA). A semiconductor parameter analyzer (Primarius FS-Pro, Shanghai, China) was employed to measure the electrical characteristics of the TFTs.
Results
Figure 3 shows the STO precursors with varying Si doping content after stirring for 12 h. The pure SnO2 precursor is colorless and transparent without precipitation, indicating that SnCl2·2H2O had fully dissolved in 2-ME, which is conducive to improving the uniformity of the film. After adding TEOS, the precursor displays no obvious change, implying that TEOS has good solubility in the solvent and that Si is evenly dispersed in the precursor.
Figure 4a shows the DSC-TG curves of SnO2 precursors with 0, 2.5, and 5 at.% Si. For 0 at.% Si, the mass ratio declines rapidly from 99% to 14% during 20~147 °C, with a significant endothermic peak at 133.5 °C. The main processes in this stage are the large-scale evaporation of 2-ME (boiling point: 124.5 °C) and the sol-gel reaction of Sn2+ [33]. As the temperature continues to rise, the mass decreases slowly, corresponding to the gradual removal of impurities and the conversion to SnO2. After 341.7 °C, no obvious weight loss was observed, suggesting that SnO2 had been completely formed. Equations (1)-(3) show the reaction process [34,35]. The thermal behavior of the STO precursor with 5 at.% Si is similar to that of 0 at.% Si, but its endothermic peak of solvent evaporation shifts to 114.4 °C. Figure 4b displays a locally enlarged view of the TG curves for further comparison; it was found that TEOS can markedly reduce the conversion temperature of SnO2.
Results
In order to study the wettability of STO precursors on the substrate surface, the contact angle of precursors on the alkali free glass was tested, with the results shown in Figure 6. It was found that the contact angle of a pure SnO 2 precursor on glass substrate is relatively low (16.15 • ), indicating decent contact on the substrates. After adding Si, the contact angle of an STO solution on the substrate decreases, as low as 9.82 • when doping 10 at.% Si. This demonstrates that Si doping can improve the wettability of SnO 2 precursor solution on the substrate surface, which is conducive to improving the quality of films. Good wettability can reduce the interface defects between the film and the substrate surface, and ensure the successful progress of spin coating preparation and device manufacturing. [36,37]. Absorbing the extra energy from TEOS prompts the endothermic pe orating solvent to shift toward a lower temperature, and promotes the form O, as shown in Figure 5, which can modestly reduce the preparation tempe films. In order to study the wettability of STO precursors on the substrate surface, the contact angle of precursors on the alkali free glass was tested, with the results shown in Figure 6. It was found that the contact angle of a pure SnO2 precursor on glass substrate is relatively low (16.15°), indicating decent contact on the substrates. After adding Si, the contact angle of an STO solution on the substrate decreases, as low as 9.82° when doping 10 at.% Si. This demonstrates that Si doping can improve the wettability of SnO2 precursor solution on the substrate surface, which is conducive to improving the quality of films. Good wettability can reduce the interface defects between the film and the substrate surface, and ensure the successful progress of spin coating preparation and device manufacturing. LSCM was employed to obtain the surface morphology of 300 °C annealed STO films, and the captured microphotographs are displayed in Figure 7a. It can be observed that all STO films are flat and uniform in large scale without physical defects such as holes and cracks, while white particles appear on the surface of pure SnO2 film, indicating that adding Si is beneficial for improving the film quality.
The surface roughness of thin films affects the interface contact and the device performance. Figure 7b shows the AFM 3D images of STO films with a scanning area of 10 × 10 μm 2 . The root mean square (Sq) of STO films is generally lower than that of pure SnO2 film, indicating that Si can reduce the surface roughness, which is consistent with the LSCM. The Sq of the STO film with 2.5 at.% Si is as low as 0.23 nm, and, with the rising Si content, the Sq slightly increases to 0.34 nm. Its smooth surface is conducive to decreasing LSCM was employed to obtain the surface morphology of 300 • C annealed STO films, and the captured microphotographs are displayed in Figure 7a. It can be observed that all STO films are flat and uniform in large scale without physical defects such as holes and cracks, while white particles appear on the surface of pure SnO 2 film, indicating that adding Si is beneficial for improving the film quality.
The surface roughness of thin films affects the interface contact and the device performance. Figure 7b shows the AFM 3D images of STO films with a scanning area of 10 × 10 µm 2 . The root mean square (Sq) of STO films is generally lower than that of pure SnO 2 film, indicating that Si can reduce the surface roughness, which is consistent with the LSCM. The Sq of the STO film with 2.5 at.% Si is as low as 0.23 nm, and, with the rising Si content, the Sq slightly increases to 0.34 nm. Its smooth surface is conducive to decreasing the density of interface defects and subsequently improving the device performance. Figure 8 shows the XRD patterns of STO films with different Si concentrations. It was found that the STO films with 0 at.% and 2.5 at.% Si are amorphous. When the Si concentration increases to 5 at.%, crystallization peaks occur at 26.63°, 33.83°, and 52.13°, respectively corresponding to the diffraction peaks of SnO2 on the (110), (101), and (211) crystal planes [22]. Furthermore, XRD patterns reveal no Si element-related diffraction peaks even with 10 at.% Si, implying that there is no obvious second phase in the films and SnO2 remains the main component. In addition, as Si increases from 5 at.% to 10 at.%, the dif- [22]. Furthermore, XRD patterns reveal no Si element-related diffraction peaks even with 10 at.% Si, implying that there is no obvious second phase in the films and SnO 2 remains the main component. In addition, as Si increases from 5 at.% to 10 at.%, the diffraction peaks of SnO 2 become sharper, representing enhanced crystallinity. This can be attributed to the increased exothermic heat and energy supply with the rising of TEOS content. However, for 15 at.% Si, the diffraction peaks disappear completely, which may be explained by a large amount of Si entering into the SnO 2 crystal, destroying its normal lattice structure, and, thus, suppressing the crystallization of SnO 2 . The internal defects of the film significantly affect the carrier conc and the performance of devices. Figure 9 shows the results of a μ-PC related to the recombination rate of photogenerated carriers in the fil fects can trap photogenerated carriers, thus reducing the recombina the mean peak and τ2, the higher the shallow level defect density ris shows that, compared with 0 at.% Si , the mean peak value of the ST Si declines markedly from 26.10 mV to 6.70 mV, and τ2 value decre 0.42 μs. This suggests that 2.5 at.% Si doping can effectively diminish low level defects in SnO2 films, which is conducive to lowering the c of the films. However, as Si content increases from 2.5 at.% to 15 at.% τ2 increase gradually, revealing that a high content of Si can increase th level defects in SnO2. Singhal et al. [41] reported the same trend that defect content in TiO2. The variation of defects in the semiconductor to the shift of Fermi level when doping, which can result in spontane compensating charged defects [42]. The internal defects of the film significantly affect the carrier concentration of the film and the performance of devices. Figure 9 shows the results of a µ-PCD test. The τ 2 is correlated to the recombination rate of photogenerated carriers in the film. Shallow level defects can trap photogenerated carriers, thus reducing the recombination rate. The larger the mean peak and τ 2 , the higher the shallow level defect density rises [38][39][40]. 
Figure 9 shows that, compared with 0 at.% Si, the mean peak value of the STO film with 2.5 at.% Si declines markedly from 26.10 mV to 6.70 mV, and τ 2 value decreases from 2.04 µs to 0.42 µs. This suggests that 2.5 at.% Si doping can effectively diminish the density of shallow level defects in SnO 2 films, which is conducive to lowering the carrier concentration of the films. However, as Si content increases from 2.5 at.% to 15 at.%, the peak value and τ 2 increase gradually, revealing that a high content of Si can increase the density of shallow level defects in SnO 2 . Singhal et al. [41] reported the same trend that doping Co increases defect content in TiO 2 . The variation of defects in the semiconductor material is ascribed to the shift of Fermi level when doping, which can result in spontaneous formation of the compensating charged defects [42]. In particular, the area under the V O peak is proportional to the concentration of oxygen vacancy, which acts as defects as well as electron donors [16,17,44]. Compared with 0 at.% Si, the V O ratio of STO film with 2.5 at.% Si decreases remarkably from 29.78% to 16.69%, as seen in Figure 10, indicating that Si can effectively suppress V O and reduce the carrier concentration. Meanwhile, the L O ratio increases substantially from 59.38% to 83.31%, implying that the addition of Si can induce the formation of O-Sn-O and preserve its structure [45]. However, as the Si concentration rises from 2.5 at.% to 15 at.%, the V O ratio in STO films slightly increases, but is still lower than 0 at.% Si. This may be due to a disordered structure whereby a large amount of Si is intercalated in the lattice [40], as indicated by the L O ratio (Figure 10f). Consequently, the density of V O can be regulated by varying the Si doping content, and the control of carrier concentration in the SnO 2 film can be realized. fects can trap photogenerated carriers, thus reducing the recombination r the mean peak and τ2, the higher the shallow level defect density rises [3 shows that, compared with 0 at.% Si , the mean peak value of the STO film Si declines markedly from 26.10 mV to 6.70 mV, and τ2 value decreases f 0.42 μs. This suggests that 2.5 at.% Si doping can effectively diminish the d low level defects in SnO2 films, which is conducive to lowering the carrie of the films. However, as Si content increases from 2.5 at.% to 15 at.%, the τ2 increase gradually, revealing that a high content of Si can increase the den level defects in SnO2. Singhal et al. [41] reported the same trend that dopin defect content in TiO2. The variation of defects in the semiconductor mate to the shift of Fermi level when doping, which can result in spontaneous fo compensating charged defects [42]. In particular, the area under the VO peak is proportional to the concentration of oxygen vacancy, which acts as defects as well as electron donors [16,17,44]. Compared with 0 at.% Si, the VO ratio of STO film with 2.5 at.% Si decreases remarkably from 29.78% to 16.69%, as seen in Figure 10, indicating that Si can effectively suppress VO and reduce the carrier concentration. Meanwhile, the LO ratio increases substantially from 59.38% to 83.31%, implying that the addition of Si can induce the formation of O-Sn-O and preserve its structure [45]. However, as the Si concentration rises from 2.5 at.% to 15 at.%, the VO ratio in STO films slightly increases, but is still lower than 0 at.% Si. 
This may be due to a disordered structure whereby a large amount of Si is intercalated in the lattice [40], as indicated by the LO ratio ( Figure 10f). Consequently, the density of VO can be regulated by varying the Si doping content, and the control of carrier concentration in the SnO2 film can be realized. The electrical properties of the active layer are critical factors for TFT performance. Figure 11a shows the Hall test results of STO films with different Si concentrations. With the increase in Si content, sheet carrier concentration first decreases first and then in- The electrical properties of the active layer are critical factors for TFT performance. Figure 11a shows the Hall test results of STO films with different Si concentrations. With the increase in Si content, sheet carrier concentration first decreases and then increases, which is in line with the variation trend of the peak value, τ 2 , and V O ratio with Si concentration. This indicates that the addition of Si affects the electrical properties of STO films by regulating internal defect density such as V O . Compared with 0 at.% Si, the sheet carrier concentration of the STO film with 5 at.% Si declines from 2.19 × 10 14 cm −2 to 5.84 × 10 13 cm −2 , implying that Si doping can effectively diminish the carrier concentration of SnO 2 . In addition, it was observed that with the increased content of Si, although the sheet carrier concentration of STO films is lower than that of pure SnO 2 film, the hall mobility of STO films gradually decreases, which can most likely be attributed to the scattering caused by the enhanced crystallization, as concluded in the XRD analysis. Based on previous analyses, it was found that STO films with 2.5 at.% Si showed better properties, such as the lowest VO ratio of 16.69% and a response current of 3.76 × 10 −10 A at 5 V. Therefore, STO TFTs with 2.5 at.% Si were further fabricated with an annealing temperature of 300 °C and 350 °C. Their transfer characteristics were measured under the conditions of VGS = ±30 V and VDS = 20.1 V, as shown in Figure 12a. The following performance parameters of corresponding STO TFTs were extracted: on/off current ratio (Ioff/Ioff), off current (Ioff), the subthreshold swing (SS) of 300°C annealed TFT of 3.46 × 10 3 , 7.74 × 10 −9 A, and 5.50 V/Dec, respectively; and the SS of 350°C annealed TFT of 7.43 × 10 3 , 1.19 × 10 −9 A, and 4.24 V/Dec, respectively. Compared with 300 °C annealing, the STO TFT fabricated at 350 °C has a higher Ioff/Ioff, a lower Ioff, and a smaller SS. The decrease of Ioff is probably a result of the increasing temperature that promotes the compensation of VO in the films, and then reduces carrier concentration, as analyzed in Figure 4c. Simultaneously, the rising temperature allows SnO2 to obtain enough energy for the internal structure to reorganize and diminish the defect density at the interface between the STO film and the Al2O3: Nd dielectric layer, leading to the reduction of the SS. However, the mobility (μsat) of 350 °C annealed STO TFTSTO TFT (0.32 cm 2 /(V·s)) is lower than that at 300 °C (0.81 cm 2 /(V·s)), which may be attributed to the enhanced crystallinity of STO films, and, thus, the μsat degrades with the increased scattering caused by the grain boundary [46]. In order to devise a suitable Si concentration range for the preparation of TFTs, I-V curves of STO films with 0, 2.5, and 5 at.% Si were investigated under the condition of a 5 V working voltage, as shown in Figure 11b. 
The response currents of STO films with 0, 2.5, and 5 at.% Si were 3.49 × 10 −9 A, 3.76 × 10 −10 A, and 2.34 × 10 −9 A, respectively. This phenomenon shows that STO films with 2.5 at.% Si have the potential to reduce the I off of TFTs.
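For reference, the sheet quantities quoted from the Hall measurements in Figure 11a are connected by the standard single-carrier Hall relations; these are textbook expressions, not formulas taken from this paper:

V_H = \frac{I B}{q\, n_s}, \qquad n_s = \frac{I B}{q\,|V_H|}, \qquad \mu_H = \frac{1}{q\, n_s R_s},

where I is the drive current, B the magnetic flux density, q the elementary charge, n_s the sheet carrier concentration, and R_s the sheet resistance. For a film of thickness t, the volume carrier concentration follows as n = n_s / t.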
Based on previous analyses, it was found that STO films with 2.5 at.% Si showed better properties, such as the lowest V O ratio of 16.69% and a response current of 3.76 × 10 −10 A at 5 V. Therefore, STO TFTs with 2.5 at.% Si were further fabricated with annealing temperatures of 300 °C and 350 °C. Their transfer characteristics were measured under the conditions of V GS = ±30 V and V DS = 20.1 V, as shown in Figure 12a. The following performance parameters of the corresponding STO TFTs were extracted: the on/off current ratio (I on /I off ), off current (I off ), and subthreshold swing (SS) of the 300 °C annealed TFT were 3.46 × 10 3 , 7.74 × 10 −9 A, and 5.50 V/Dec, respectively, and those of the 350 °C annealed TFT were 7.43 × 10 3 , 1.19 × 10 −9 A, and 4.24 V/Dec, respectively. Compared with 300 °C annealing, the STO TFT fabricated at 350 °C has a higher I on /I off , a lower I off , and a smaller SS. The decrease of I off is probably a result of the increasing temperature promoting the compensation of V O in the films, which then reduces the carrier concentration, as analyzed in Figure 4c. Simultaneously, the rising temperature allows SnO 2 to obtain enough energy for the internal structure to reorganize and diminish the defect density at the interface between the STO film and the Al 2 O 3 : Nd dielectric layer, leading to the reduction of the SS. However, the mobility (µ sat ) of the 350 °C annealed STO TFT (0.32 cm 2 /(V·s)) is lower than that at 300 °C (0.81 cm 2 /(V·s)), which may be attributed to the enhanced crystallinity of the STO films; thus, the µ sat degrades with the increased scattering caused by the grain boundaries [46]. Since the device prepared at 350 °C shows better performance, 350 °C annealed STO TFTs with 0, 2.5, and 5 at.% Si were further fabricated. The transfer characteristics obtained are shown in Figure 12b, and all devices exhibit good switching characteristics. Table 1 shows the extracted performance parameters of the corresponding TFTs. As the Si content rises from 0 at.% to 5 at.%, it was found that (1) I off gradually declines while I on /I off gradually increases, indicating that Si doping can effectively suppress the formation of V O , thus reducing the carrier concentration of the active layers of the STO TFT; (2) the voltage corresponding to the TFT switching from an off state to an on state (V on ) gradually decreases, which is conducive to lowering power consumption in practical applications; and (3) the SS gradually reduces, probably due to the increased heat release caused by the rising concentration of TEOS, which is conducive to the reorganization of SnO 2 and a subsequent reduction in internal defect states. After optimization, the 350 °C annealed STO TFT with 5 at.% Si exhibits the best performance, with a µ sat of 0.13 cm 2 /(V·s), I off of 2.01 × 10 −10 A, I on /I off of 1.04 × 10 4 , V on of 1.51 V, and SS of 3.48 V/Dec.
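The performance parameters discussed above can be extracted from a measured transfer curve with a few lines of analysis code. The Python sketch below is a simplified illustration, not the authors' extraction procedure: the channel geometry (w, l), the dielectric capacitance per area (ci), and the fitting window for the square-law region are all hypothetical, and real measurements would need smoothing before numerical differentiation.

import numpy as np

def tft_parameters(vgs, ids, w, l, ci):
    # vgs: gate voltages (V, increasing); ids: drain currents (A).
    ids = np.abs(np.asarray(ids, dtype=float))
    vgs = np.asarray(vgs, dtype=float)

    on_off = ids.max() / ids.min()  # I_on / I_off

    # Subthreshold swing SS = dVGS / dlog10(IDS); its smallest positive
    # value occurs on the steep subthreshold rise of the transfer curve.
    dlog = np.gradient(np.log10(ids))
    ss_all = np.gradient(vgs) / np.where(dlog != 0, dlog, np.nan)
    ss = np.min(ss_all[ss_all > 0])

    # Saturation mobility from IDS = (W*Ci*mu_sat / (2L)) * (VGS - VT)^2:
    # the slope of sqrt(IDS) vs VGS equals sqrt(W*Ci*mu_sat / (2L)).
    slope, _ = np.polyfit(vgs[-8:], np.sqrt(ids[-8:]), 1)
    # In SI units mu_sat comes out in m^2/(V*s); multiply by 1e4 for cm^2/(V*s).
    mu_sat = 2.0 * l * slope**2 / (w * ci)
    return on_off, ss, mu_sat

With consistent input units, this yields SS in V/Dec and, after conversion, a mobility in cm 2 /(V·s) directly comparable with the values quoted in Table 1.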
Conclusions
In this paper, STO TFTs were fabricated by spin coating with TEOS as the Si dopant, and the effects of Si doping concentration on the properties of SnO 2 were explored. During annealing, TEOS can decompose to release heat and supply energy for film formation, which helps to reduce the preparation temperature of the film and improve its quality. As the Si content rose, the increased exothermic heat of TEOS led to enhanced crystallization of the STO films, while excessive Si destroyed the lattice and degraded the crystallinity. In addition, Si doping effectively suppressed the V O concentration, and the V O ratio of the 2.5 at.% Si doped STO film was as low as 16.69%. The shallow level defect density, V O ratio, and carrier concentration followed the same trend with Si concentration, first decreasing and then increasing, indicating that Si doping regulates the electrical properties of the film by controlling defect states such as V O . Following optimization, it was confirmed that the 350 °C annealed and 5 at.% Si doped STO TFT showed the best performance, as I off , I on /I off , and V on were | 2022-06-03T15:24:49.410Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "63dec3ad208f8726876cbcb49455fcfb07f558f8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0375/12/6/590/pdf?version=1654077859",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6015608dbf1acf19cad7f358671999fbe385d169",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34447856 | pes2o/s2orc | v3-fos-license | Impact of prolonged fraction dose-delivery time modeling intensity-modulated radiation therapy on hepatocellular carcinoma cell killing
AIM: To explore the impact of prolonged fraction dose-delivery time modeling intensity-modulated radiation therapy (IMRT) on cell killing of human hepatocellular carcinoma (HCC) HepG2 and Hep3B cell lines. METHODS: The radiobiological characteristics of human HCC HepG2 and Hep3b cell lines were studied with standard clonogenic assays, using the standard linear-quadratic model and the incomplete repair model to fit the dose-survival curves. The identical methods were also employed to investigate the biological effectiveness of irradiation protocols modeling clinical conventional fractionated external beam radiotherapy (EBRT, fraction delivery time 3 min) and IMRT with different prolonged fraction delivery times (15, 30, and 45 min). The differences in cell surviving fraction irradiated with different fraction delivery times were tested with the paired t-test. Factors determining the impact of prolonged fraction delivery time on cell killing were analyzed. RESULTS: The α/β ratios and repair half-times (T 1/2 ) of HepG2 and Hep3b were 3.1 and 7.4 Gy, and 22 and 19 min, respectively. The surviving fraction of HepG2 irradiated modeling IMRT with different fraction delivery times was significantly higher than that irradiated modeling EBRT, and cell survival increased more pronouncedly as the fraction delivery time was prolonged from 15 to 45 min, while no significant differences in cell survival in Hep3b were found between the different fraction delivery time protocols. CONCLUSION: The prolonged fraction delivery time modeling IMRT significantly decreased cell killing in HepG2 but not in Hep3b. The capability of sub-lethal damage repair was the predominant factor determining the decrease in cell killing. These effects, if confirmed by clinical studies, should be considered in designing IMRT treatments.
INTRODUCTION
With the popularization of intensity-modulated radiation therapy (IMRT), an irradiation technique developed to improve target dose conformity and normal tissue sparing [1][2][3][4] , more and more patients with hepatocellular carcinoma (HCC) receive radiotherapy [5] . IMRT optimizes the physical dose distribution of radiotherapy, which can thereby enhance local tumor control and lower radiation-induced hepatitis. However, the radiobiological effectiveness of IMRT might differ from that of conventional external beam radiation therapy (EBRT), especially considering the prolonged fraction delivery time in IMRT. IMRT delivers dose, either dynamically or statically (e.g., step-and-shoot), using many beam apertures (segments) that are shaped with a multileaf collimator [1,4,6] . It takes a much longer time to deliver a single fraction dose with IMRT than with EBRT. Generally, EBRT takes about 2-5 min to deliver a single fractional dose, whereas IMRT with static delivery requires 15-45 min to deliver the same fractional dose. According to radiobiological theory, cell killing tends to decrease as fraction delivery time increases because of ongoing sublethal damage repair (SLDR) processes during dose delivery. Wang et al [7] calculated the cell-killing efficiency of simulated and clinical IMRT plans with the generalized linear-quadratic (LQ) model, which indicated that fraction delivery times in the range of 15-45 min may significantly decrease tumor cell killing and may have a significant impact on treatment outcome for tumors with a low α/β ratio and short repair half-time (T 1/2 ). However, such calculations lack confirmation by in vitro studies. To clarify the impact of prolonged fraction delivery time in IMRT on tumor cell killing, more detailed studies in vitro are required.
In this study, we attempt to ascertain the impact of prolonged fraction delivery time modeling IMRT on survival of human HCC cell lines HepG2 and Hep3B, so as to provide radiobiological basis for optimizing IMRT plans for this disease.
Cell culture
Human HCC cell lines HepG2 and Hep3b were employed in this study. Both cell lines were cultured in plastic flasks at 37 ℃ in a humidified atmosphere of 50 mL/L CO 2 and 95% air in 1640 medium containing 10-15% fetal calf serum with 100 U/mL penicillin and 100 µg/mL streptomycin. Results of regular tests for mycoplasma contamination were negative. When cells became confluent, they were sub-cultured (1:4 dilution). Exponentially growing cells were used for experiments.
Immediately prior to irradiation, a single-cell suspension was prepared by trypsinization and the cell number was counted using a hemocytometer. Cells were then seeded in varying amounts onto 6-well tissue culture dishes with 1640 medium. Three parallel samples were set up for each radiation dose in the various irradiation schedules.
Irradiation scheme
Irradiation was carried out at room temperature using 6-MeV X-rays. To determine the radiobiological characteristics, doses of 0, 1, 2, 4, 6, 8 and 10 Gy were given as single, continuous doses at a dose rate of 3.2 Gy/min to generate standard dose-survival curves and obtain the mathematical model parameters. To obtain the T 1/2 , the same doses were given as single, continuous doses at a dose rate of 0.066 Gy/min. To compare the cell-killing effectiveness of fraction delivery times modeling EBRT and IMRT, fractionated irradiation of 0 Gy, 1 Gy×1, 2 Gy×1, 2 Gy×2, 2 Gy×3, 2 Gy×4 and 2 Gy×5 was given at one fraction per day, following the clinical dose-time-fractionation pattern. In irradiation modeling EBRT, the fraction delivery time was 3 min, with two 1.18-min intervals modeling three-portal irradiation. In irradiation modeling IMRT with different fraction delivery times, each fraction dose was given in seven equal sub-fractions; the total fraction delivery time was 15, 30 or 45 min.
Clonogenic assays
Standard clonogenic assays were used to acquire the standard dose-survival curves of HepG2 and Hep3b and to determine the effect of irradiation modeling EBRT and IMRT with fraction delivery times of 15, 30 and 45 min. Cells plated in 6-well tissue culture dishes were incubated in an undisturbed state for 10 d. Cells were fixed and stained with methanol and 0.5% crystal violet in deionized water, and colony counts were performed by visual inspection. A colony was defined as 50 or more cells. Colony plating efficiency was calculated as the number of colonies divided by the number of viable nucleated cells plated, expressed as a percentage. The surviving fraction at each dose of each irradiation protocol was determined by dividing the plating efficiency of the irradiated cells by that of the untreated control. All data points were the mean results of experiments.
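In code, the plating-efficiency and surviving-fraction arithmetic described above reduces to two small functions. The colony counts used below are invented purely for illustration.

def plating_efficiency(colonies, cells_seeded):
    # Fraction of seeded cells forming colonies of >= 50 cells.
    return colonies / cells_seeded

def surviving_fraction(colonies, cells_seeded, pe_control):
    # Plating efficiency of irradiated cells relative to the control.
    return plating_efficiency(colonies, cells_seeded) / pe_control

# Hypothetical counts: 180 colonies from 250 control cells,
# 60 colonies from 400 cells after 2 Gy.
pe0 = plating_efficiency(180, 250)      # 0.72
sf2 = surviving_fraction(60, 400, pe0)  # about 0.21
print(f"PE(control) = {pe0:.2f}, SF(2 Gy) = {sf2:.2f}")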
Survival curve fitting and calculation
Dose-survival curves for each experiment were constructed by semi-logarithmically plotting the mean surviving fractions as a function of irradiation dose. The data were analyzed, and survival curves were plotted following the standard linear-quadratic model [S = exp(−αD − βD²)] or the incomplete repair model [S = exp(−αD − gβD²)] [8] using GraphPad Prism 4.0 software (GraphPad Software, Inc., USA). α and β resulted from the best-fitting survival curves and were used to calculate the surviving fraction at 2 Gy (SF 2 ).
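A minimal fitting sketch equivalent to what the text describes doing in GraphPad Prism is shown below, using Python with scipy. The surviving fractions are hypothetical, and the protraction factor g is written in one common (Lea-Catcheside) form for continuous irradiation with mono-exponential repair; the exact parametrization of the incomplete repair model in reference [8] may differ in detail.

import numpy as np
from scipy.optimize import curve_fit

def lq_survival(dose, alpha, beta):
    # Standard linear-quadratic model: S = exp(-alpha*D - beta*D^2).
    return np.exp(-alpha * dose - beta * dose**2)

def lea_catcheside_g(t_min, t_half_min):
    # Protraction factor for a continuous exposure of duration t with
    # mono-exponential repair half-time t_half; g -> 1 for short exposures.
    x = np.log(2.0) / t_half_min * t_min
    return 2.0 * (x - 1.0 + np.exp(-x)) / x**2

# Hypothetical mean surviving fractions at the doses used in this study.
dose = np.array([0, 1, 2, 4, 6, 8, 10], dtype=float)
sf = np.array([1.0, 0.75, 0.52, 0.21, 0.07, 0.02, 0.005])

(alpha, beta), _ = curve_fit(lq_survival, dose, sf, p0=(0.3, 0.03))
print(f"alpha/beta = {alpha / beta:.1f} Gy, "
      f"SF2 = {lq_survival(2.0, alpha, beta):.2f}, "
      f"g(45 min, T1/2 = 22 min) = {lea_catcheside_g(45.0, 22.0):.2f}")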
Differences of surviving fraction treated with different irradiation protocols were tested with paired t-test. Statistical significance was assumed when P<0.05.
Radiobiological characteristics of HepG2 and Hep3b
Standard dose-survival curves of HepG2 and Hep3b fitted with the standard LQ model are shown in Figure 1. The survival curves for irradiation at a low dose rate of 0.066 Gy/min, fitted with the incomplete repair model to obtain the T 1/2 of both cell lines, are shown in Figure 2. The radiobiological characteristics of both cell lines, described with the parameters of the mathematical models, are listed in Table 1. Survival data for the protocols modeling EBRT and IMRT are shown in Figure 3. The surviving fractions of HepG2 irradiated modeling IMRT with different fraction delivery times were significantly higher than those irradiated modeling EBRT (P<0.05), and cells irradiated modeling IMRT with a longer fraction delivery time had significantly higher survival than those with a shorter fraction delivery time (P<0.05). No significant survival differences of Hep3b were found between the different fraction delivery time protocols (P>0.05) (Table 3).
DISCUSSION
This study demonstrated that the prolonged fraction delivery time modeling IMRT decreased the cell killing of HepG2; cell killing decreased more markedly as the fraction delivery time was prolonged from 15 min to 45 min. These phenomena, however, were not observed in Hep3b.
Essence of the impact of prolonged fraction delivery time on cell killing
The intrinsic reason for the increased survival of cells treated with IMRT-like protocols is the ongoing SLDR process during dose delivery. Irradiated tumor cells may be lethally or non-lethally damaged. Cells that are not lethally damaged may undergo repair. SLDR is an important type of damage repair, defined as the enhancement in survival when a dose of radiation is separated over a period of time. Generally, SLDR experiments divide a single dose into two relatively equal doses spaced at variable time intervals. Elkind et al. investigated this phenomenon in great detail [9,10]. An enhancement in survival after two doses separated in time was observed; this enhancement in survival was due to SLDR.
Factors determining the impact degree of prolonged fraction delivery time on cell killing
The increase in survival of cells irradiated with a prolonged fraction delivery time is mainly associated with the capacity and rate of SLDR during dose delivery, as well as with the fraction dose delivery time itself.
The capacity of a cell to undergo SLDR, which is associated with the intrinsic radiobiological characteristics of the cell, may be represented by the quadratic term in the LQ model. A cell with a small α/β is considered to have a large ability to undergo SLDR. Most human tumor cell lines studied in vitro have a relatively small ability to undergo SLDR [11][12][13][14][15]. Yet, a large capacity for SLDR has been reported for some human tumor cell lines [12,[16][17][18][19][20]. According to this study, HepG2 has a relatively large capacity for SLDR with an α/β of 3.1, which is much lower than expected, while Hep3b has a smaller capacity for SLDR with an α/β of 7.4.
The rate of SLDR can be represented by T 1/2 . Apparently, cells with a short T 1/2 undergo more repair during a given fraction delivery time. For human tumor cell lines, the characteristic T 1/2 ranges from a few minutes to several hours [21][22][23]. In a review article, Steel et al [21] pointed out that the repair time for many tumors appears different when measured from a split-dose experiment vs a low-dose-rate exposure. They attributed this difference to the presence of two or more repair components. Others have confirmed that non-exponential or multi-exponential SLDR kinetics are involved in cell killing [24][25][26][27][28]. In split-dose survival experiments, the fast and slow rates of SLDR kinetics can be reasonably approximated by a single (average) first-order repair term. In low-dose-rate experiments, cell killing is more sensitive to the fast repair component. For fraction delivery times in the range of 15 to 45 min (i.e., comparable to IMRT treatment times), the fast repair component is important and the slow repair component has little impact on cell killing. Brenner and Hall [22] have compiled in vitro data on the T 1/2 of human cancer cell lines under low-dose-rate exposure conditions. They found that the most probable T 1/2 is approximately 20 min. For prostate cancer, Wang et al [17] used clinical data to derive a T 1/2 of 16 min. In this study, the T 1/2 of HepG2 and Hep3b were 22 and 19 min, respectively, both within the general fraction delivery time range of IMRT (15-45 min). Although the cell line with the shorter T 1/2 (Hep3b) could not be shown to undergo more SLDR than the one with the longer T 1/2 (HepG2) in this study, because the SLDR capacity of HepG2 is much larger than that of Hep3b, the importance of the SLDR rate should not be neglected when the T 1/2 values of the cell lines considered are similar, as in this study and in that of Brenner and Hall [22].
The fraction dose delivery time is another factor that impacts the effect of cell survival. For HepG2 in this study, the differences of surviving fraction in each group irradiated with different prolonged fraction delivery time were small but significant (P<0.05).
These factors jointly determine the effect of prolonged fraction delivery time on cell killing. According to the results of this study, HepG2 and Hep3b have similar T 1/2 ; the predominant factor determining the effect of prolonged fraction delivery time on cell killing therefore appears to be the SLDR capability of the cells.
In conclusion, the prolonged fraction delivery time modeling IMRT significantly decreased the cell killing in HepG2 but not in Hep3b. The capability of SLDR was the predominant factor determining the cell killing decrease. | 2018-04-03T02:38:06.899Z | 2005-03-14T00:00:00.000 | {
"year": 2005,
"sha1": "856da95a4da82a20268eb4c4bdbcb4263145affd",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v11.i10.1452",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "a6c0fbe1d53698fbec50f1e6a508061f11d68e1e",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266344121 | pes2o/s2orc | v3-fos-license | Gα12 signaling regulates transcriptional and phenotypic responses that promote glioblastoma tumor invasion
In silico interrogation of glioblastoma (GBM) in The Cancer Genome Atlas (TCGA) revealed upregulation of GNA12 (Gα12), encoding the alpha subunit of the heterotrimeric G-protein G12, concomitant with overexpression of multiple G-protein coupled receptors (GPCRs) that signal through Gα12. Glioma stem cell lines from patient-derived xenografts also showed elevated levels of Gα12. Knockdown (KD) of Gα12 was carried out in two different human GBM stem cell (GSC) lines. Tumors generated in vivo by orthotopic injection of Gα12KD GSC cells showed reduced invasiveness, without apparent changes in tumor size or survival relative to control GSC tumor-bearing mice. Transcriptional profiling of GSC-23 cell tumors revealed significant differences between WT and Gα12KD tumors including reduced expression of genes associated with the extracellular matrix, as well as decreased expression of stem cell genes and increased expression of several proneural genes. Thrombospondin-1 (THBS1), one of the genes most repressed by Gα12 knockdown, was shown to be required for Gα12-mediated cell migration in vitro and for in vivo tumor invasion. Chemogenetic activation of GSC-23 cells harboring a Gα12-coupled DREADD also increased THBS1 expression and in vitro invasion. Collectively, our findings implicate Gα12 signaling in regulation of transcriptional reprogramming that promotes invasiveness, highlighting this as a potential signaling node for therapeutic intervention.
Orthotopic GSC injections
1.5 or 5 × 10 5 control or knocked down GSC23 cells tagged with near infrared IRFP720 were intracranially injected into the mouse brain (6 mice per group), using a stereotactic system as previously described 8 . Survival experiments were performed twice. Tumor size was estimated by fluorescence emission detection by FMT 2500 Fluorescence Tomography (Perkin Elmer) at 720 nm. The onset of neurologic sequelae in the control group was used to determine the time of euthanasia. Mice were euthanized by CO2 inhalation in accordance with our institutional guidelines for animal welfare and experimental conduct at the University of California at San Diego. Brain samples were collected, and tissue samples were processed for histological examination by H&E and anti-human nuclei IHC at UCSD CALM and MCC Biorepository and Tissue Technology Core.
RNA analysis
Total RNA was isolated using Trizol reagent according to the manufacturer's protocol, followed by RT-qPCR for relative quantification. RNA sequencing of tumors from mice injected with Gα12 KD or control GSC23 cells (4 mouse tumors as biological replicates in each group) was preceded by RNA integrity analysis (Agilent Bioanalyzer; TapeStation RINe > 8.5), ribodepleted library preparation, and sequencing using Illumina NovaSeq 6000 (run set PE100 and 25 M reads). Gene-set enrichment analysis was performed using GSEA software.
Migration/invasion assay
Uncoated or Geltrex-coated membranes of Transwell 24-well plates (8 µm pore size, Corning, Cat#3422) were used to assess migration and invasion, respectively, as detailed in Supplementary Material.
Statistics
Statistical differences were analyzed using GraphPad Prism software version 8. Analysis of variance (ANOVA) followed by Tukey's multiple comparison test was applied for groups with several features. One-way ANOVA was used to analyze data from experiments with one independent variable, and two-way ANOVA for two independent factors. Data are presented as mean ± SEM, with significance based on calculated probability values (*p < 0.05; **p < 0.01).
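For readers who want to reproduce this style of analysis outside of GraphPad Prism, the sketch below runs a one-way ANOVA followed by Tukey's test using scipy and statsmodels. The three groups and their values are invented for illustration and do not correspond to any dataset in this paper.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-well counts for a control group and two knockdown groups.
control = np.array([210, 195, 230, 205])
kd1 = np.array([120, 105, 132, 118])
kd2 = np.array([98, 110, 92, 104])

f_stat, p = f_oneway(control, kd1, kd2)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

# Tukey's post-hoc test on the pooled observations with group labels.
values = np.concatenate([control, kd1, kd2])
groups = ["control"] * 4 + ["kd1"] * 4 + ["kd2"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))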
Ethical approval
In vivo experiments were executed under an animal protocol approved by the University of California San Diego Institutional Animal Care and Use Committee (IACUC), Office #S00192M, and were performed following protocols complying with federal regulations of the USDA, APHIS, and CFR Title 9, Parts 1, 2, and 3. The study was reported in accordance with the recommendations of the ARRIVE guidelines.
GBMs overexpress GNA12 and G⍺12-coupled GPCRs
We interrogated the TCGA database generated from patient GBM surgical specimens and determined that GNA12 mRNA expression was elevated in 30% of 160 GBM patient samples profiled in TCGA, while its homolog, G⍺13, was far less frequently overexpressed (Fig. 1A). We also interrogated the TCGA for a series of GPCRs recently established to couple efficiently to G⍺12 (Supplemental Fig. 1). GPCRs that couple to G⍺12 and are altered in ≥ 5% of patients are shown in Fig. 1A. Notably, most GPCRs were overexpressed in fewer patients and in subsets of patients distinct from those with elevated GNA12. Overall expression of GNA12 mRNA in GBM based on TCGA analysis by GlioVis was 2.2-fold higher than in normal brain (Fig. 1B). We examined expression of GNA12 in relation to the molecular classification and phenotypic characteristics of glioma samples, including consideration of their IDH1/p53/PTEN mutational status, tumor grade, patient age, and survival. GBMs have also been classified by transcriptional signatures into proneural, classical, and mesenchymal subtypes 19 . GNA12, but not GNA13, was highly expressed in the classical and mesenchymal GBM subtypes and enriched in elderly patients and those with the worst performance status (Fig. 1C). Expression clustered with YAP1, a downstream transcriptional co-activator regulated through G⍺12-RhoA signaling 7,20,21 .
RNA-seq data from 40 patient-derived GSCs generated by the Rich laboratory and compiled and stored in a database described previously 22,23 were also analyzed; all GSCs were found to have levels of GNA12 expression at least one SD above that of neural stem cells (Fig. 1D). The GSC23 cell line, established from a patient-derived xenograft of a recurrent and aggressive tumor 24 , and also used in our previous work 8 , was intermediate in its expression of GNA12, providing a representative model to examine the role of G⍺12 signaling in GBM growth.
GSCs were transduced by lentiviral-directed short hairpin RNAs (shRNAs) encoding either a control sequence not found in the mammalian genome or one of two non-overlapping G⍺12 sequences. G⍺12 mRNA levels were reduced by over 75% relative to control cells without significant compensatory changes in G⍺13 (Supplemental Fig. 2). Westerns on whole cell lysates also demonstrated an approximately 50% decrease in G⍺12 protein in the knockdowns compared to control cells (Supplemental Fig. 2).
Depletion of G⍺12 does not affect GBM lethality or tumor size
To determine whether G⍺12 protein signals are critical for tumorigenesis in the brain microenvironment, we intracranially implanted mice with GSC23 control or G⍺12 shRNA transduced cells labeled with IRFP720. The experiment was repeated twice with 6 animals per group in each experiment. Inhibition of G⍺12 did not change overall survival of the tumor-bearing mice over a period of approximately 30 days (Fig. 2A). We used an additional GBM patient-derived glioma stem cell line, HK281, which our previous work established to show elevated G⍺12 mRNA (to an extent approximately double that of GSC23 cells) 8 . Knockdown of G⍺12 in HK281 was highly effective and occurred without concomitant changes in G⍺13 expression (Supplemental Fig. 2). Survival of mice implanted with G⍺12 KD HK281 cells was not altered (Fig. 2B).
Although alterations in overall survival were not observed when G⍺12 expression was inhibited, we carried out additional analysis on GSC23 cell-implanted mice. Tumor size was assessed using fluorescence molecular tomography (FMT) of IRFP720-expressing WT and G⍺12 knockdown tumors as shown in Fig. 2C. Tumor size, assessed longitudinally, did not differ significantly between the mice bearing control and G⍺12 knockdown cells (Fig. 2D). In an additional series of orthotopic injections, we engrafted approximately half the number of GSC23 cells to minimize potential deleterious effects of massive tumor development and associated lethality. Here, again, there was no difference in the survival of the two groups of tumor-bearing mice followed until the time of sacrifice, and serial FMT imaging of tumor-bearing brains revealed insignificant differences in fluorescence distributions between groups (Fig. 2E). We also confirmed that G⍺12 mRNA levels remained downregulated in the KD tumors (Supplemental Fig. 2A,B).
GNA12 is essential for in vivo tumor invasion
Tumor-bearing brains were harvested, sectioned, and stained with hematoxylin and eosin (H&E) or analyzed by immunohistochemistry (IHC) (Fig. 3A-C). GSC23 control cells generated tumors with typical irregular invasive GBM borders (Fig. 3A). In contrast, the tumor mass in mice injected with G⍺12 KD GSC23 cells was largely confined to the injection site and the tumor border areas were clearly defined and compact. Analysis of tissue samples from two additional experiments confirmed that GSC23 control cells developed tumors with irregular borders and finger-like projections into the mouse brain (Fig. 3C), while GSC23 cells with G⍺12 knockdown formed tumors with smoother and more defined borders. Quantitative analysis of the shape of the tumor border from sections shown in (Fig. 2A) was carried out using digital pathology QuPath software. While only two representative images were used to provide quantitative data, the multiforme tumor-stroma interfaces were significantly decreased in virtually all G⍺12 KD compared to the control tumors, indicative of diminished invasiveness (Fig. 3B). To further verify these histological observations, we visualized the GSC cells in the tumor by IHC, staining for a human nucleolar antigen protein; this further revealed micrometastases along the tumor borders in control but not in KD tumors (Fig. 3C). We also demonstrated that G⍺12 knockdown in HK281 cells diminished the invasiveness of tumors formed in vivo (Fig. 3D). These data support the hypothesis that GPCR ligands in the tumor microenvironment utilize G⍺12 to trigger GSC invasion.
RNA-seq analysis of differentially expressed genes in G⍺12 knockdown and control GSC23 tumors
To explore molecular pathways downstream of GNA12, we performed a comprehensive analysis of gene expression profiles in tumors derived from control and G⍺12-depleted GSCs. We harvested tumors at 21 days and submitted 8 GSC23 tumor samples (4 WT and 4 G⍺12 knockdown) for RNA sequencing (RNA-seq). Data generated from the RNA-seq analysis identified 22,247 expressed genes with high confidence. Of these, 272 genes were upregulated and 558 were downregulated in G⍺12-depleted tumors (p-adjusted < 0.05), as shown in the heatmap and volcano plot (Fig. 4A). A more stringent cut-off value of p-adjusted < 0.01 was used to rank the most significantly differentially regulated genes annotated in the volcano plot. Gene ontology (GO) and Gene Set Enrichment Analysis (GSEA) were used to assess the pathways that were influenced by G⍺12 (Fig. 4B).
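Selecting differentially expressed genes at these thresholds amounts to a simple filter on the differential-expression results table (the Discussion notes that DESeq2 was used). The Python sketch below assumes a results file with the standard DESeq2 columns log2FoldChange and padj; the file name is hypothetical, and whether positive fold changes mean "up in KD" depends on how the contrast was specified.

import pandas as pd

# Hypothetical export of the DESeq2 results table.
res = pd.read_csv("deseq2_results.csv")  # columns: gene, log2FoldChange, padj

sig = res[res["padj"] < 0.05]
up = sig[sig["log2FoldChange"] > 0]
down = sig[sig["log2FoldChange"] < 0]
print(f"{len(up)} genes up and {len(down)} genes down at padj < 0.05")

# Stricter cut-off used to annotate the volcano plot.
top = res[res["padj"] < 0.01].sort_values("log2FoldChange")
print(top.head(10))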
Genes involved in the regulation of ECM components and organization, matrix adhesion and lamellipodia dynamics, as well as stem cell properties and epithelial-mesenchymal transition, were differentially expressed as shown by RNA-seq (Fig. 4B). The list includes several markers indicative of a decreased mesenchymal phenotype; for example, expression of CHI3L1 (encoding YKL-40) was downregulated over fivefold in 3 of 4 G⍺12 KD tumor samples. THBS1, which encodes a matricellular protein and has been proposed as a robust clinical marker of the mesenchymal phenotype and GBM prognosis 25,26 , was downregulated by 80% in the G⍺12 KD tumors. Cadherin-11, a cell-cell adhesion molecule that is associated with EMT/PMT 27 , was also significantly downregulated in G⍺12 KD tumors; this gene is also highly expressed in GBM patient samples in TCGA data analysis (Supplemental Fig. 5). The categorization includes many genes that fall into several GO categories, since the biological processes of EMT, stemness and cell migration are interrelated.
A proneural-to-mesenchymal transition (PMT), similar to EMT, has been described for GBM 24,28 . To more specifically assess genes associated with the proneural/mesenchymal transition (PMT) characteristic of glioblastoma, we looked for changes in known PMT-associated genes in the GSC23 tumors by qPCR (Fig. 5A).
We observed increases in well-known proneural genes in G⍺12 KD tumors (Fig. 5A), specifically up-regulation of CD133, OLIG2, and TAZ along with a trend towards an increase in PATZ. The NF1 gene, disputably proneural 29 , was also increased, as was YKL40, which was, however, significantly down-regulated by RNA-seq. Overall, these data are consistent with deletion of G⍺12 reducing in vivo proneural-to-mesenchymal transition and leading to attenuated mesenchymal tumor cell properties. We also demonstrated in an in vitro analysis that the typical plasticity of cancer stem cells seen with prolonged culture (21 days) on Matrigel-coated plates was altered by G⍺12 knockdown (Fig. 5B). Control GSC23 cells formed large and stable spheres, with adherent cells migrating out of the spheres, while G⍺12 KD cells formed less stable spheres that yielded loose rounded-shape cells, consistent with an altered mesenchymal/proneural dynamic state. We also analyzed mRNA levels of core cancer-associated stem cell genes in G⍺12 KD GSC23 (Fig. 5C) and in HK281 tumors (Supplemental Fig. 4B,C). We observed reduced mRNA levels for seven stem cell genes (CCND1, MYC, NANOG, NESTIN, OCT4, PAX6 and SOX2) in the G⍺12 KD GSC23 tumors. To demonstrate that there were functional differences associated with these genetic changes, we analyzed GSC self-renewal by in vitro sphere formation, comparing control cells and two lentiviral G⍺12 shRNA knockdown constructs (Fig. 5D). Decreasing G⍺12 expression in GSC23 cells led to diminished stem cell frequency as assessed by extreme limiting dilution analysis (ELDA). GSC23 control cells showed an average of one stem cell for every 35 cells, while the two shRNA knockdown lines averaged one for every ~50-80 cells, i.e., the knockdown of G⍺12 protein decreased the ability to generate new spheres by an average of 60% (Fig. 5D). Diminishing G⍺12 mRNA levels in HK281 cells, as in the GSC23 cells, decreased stem cell properties as assessed by alterations in stem cell gene mRNA levels and growth in the extreme limiting dilution assay (Supplemental Fig. 4).
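The stem cell frequencies quoted from ELDA rest on the single-hit Poisson model; this is the standard statistical basis of the assay rather than a formula stated in this paper. If a fraction f of plated cells can initiate a sphere, a well seeded with d cells remains sphere-free with probability

P_0(d) = e^{-f d} \qquad\Longrightarrow\qquad \hat{f} = -\frac{\ln P_0(d)}{d},

and ELDA estimates f jointly across all seeding doses. A frequency of 1 in 35, for example, predicts that roughly e^{-d/35} of wells seeded with d cells will remain negative.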
G⍺12 promotes a mesenchymal-like invasive phenotype through THBS1 signaling
To further investigate the role of G⍺12 in GBM tumor invasion we examined the effect of GNA12 knockdown on GSC migration and invasion in vitro (Fig. 6A,B).We used sphingosine-1-phosphate (S1P) to activate GPCRs coupled to G⍺12 and effected a 2.5-fold increase in cell migration and a fivefold increase in invasion.Migration and invasion were significantly attenuated by G⍺12 KD supporting the role for G⍺12 signaling in GSC23 cell migration and invasion.There was no significant effect of S1P on proliferation of either control or G12 KD cells over this time period nor throughout 5 consecutive days, as assessed by Cyquant analysis (Supplemental Fig. 4D).
THBS1 was the one of the most highly down regulated genes in the RNA seq analysis and it has been functionally associated with cell migration, EMT and stemness.We confirmed the decrease in THBS1 expression demonstrated in the RNA-seq data of G⍺12 KD tumors by qPCR on GSC23 tumor samples (Supplemental Fig. 2E).Accordingly, we focused on THBS1 to interrogate the role that downstream transcriptionally regulated targets of G⍺12 play in GSC23 migration.S1P treatment of GSC23 cells increased THBS1 mRNA and this increase was fully abrogated by G⍺12 KD (Fig. 6C).Notably basal levels of THBS1 were also downregulated in the G⍺12knockdown cells.To investigate the role of THBS1 in GSC23 cell migration we generated THBS1 KD GSC23 cells using lentiviral shRNA (Supplemental Fig. 2E).Migration stimulated by S1P was attenuated by nearly 80% in THBS1-depleted GSC23 cells (Fig. 6D).A gain of function approach was then used to further establish that pharmacological and specific activation of G⍺12 can regulate THBS1 expression and cell migration.GSC23 cells were engineered to express a G⍺12-coupled designer receptor (DREADD) 9 .Activation of the DREADDexpressing GSC23 cells with CNO (the synthetic ligand for the DREADD) was confirmed to be effective based on robust expression of two canonical G⍺12 and RhoA regulated targets genes, CYR61/CCN1 and CTGF/CCN2 7 (Fig. 6E).We further demonstrated that CNO treatment increased THBS1 mRNA in GSC23 cells expressing the G⍺12-coupled DREADD (Fig. 6F) and concomitantly increased cell migration (Fig. 6G).
Analysis of TCGA confirmed that THBS1 is highly upregulated in GBM (Fig. 7A).Notably THBS1 mRNA levels also correlated with those of G⍺12 for mesenchymal tumors in TCGA and the more extensive Chinese Glioma Gene Atlas (CGGA) (Fig. 7B).To evaluate involvement of THBS1 in GBM growth in vivo we implanted mice with GSC23 cells in which either of two shRNA constructs were used to knockdown THBS1.(shTHBS1#1 and #3; Supplemental Fig. 2).There were no significant differences in survival or onset of neurologic sequealae compared to controls after 3 weeks mirroring the lack of effect of G⍺12 knockdown on in vivo tumor growth of GSC23.However, brain sections of mice bearing GSC23 THBS1 KD cells revealed tumors that were less invasive (Fig. 7C).Taken together, these data indicate that THBS1 is a G⍺12-regulated gene critical for cell migration and GBM invasion.
Discussion
In this study, we demonstrate a unique and critical role for the heterotrimeric G-protein, Gα12, in GBM. Signaling through heterotrimeric G-proteins depends on their activation by GPCRs; thus, our findings implicate endogenous GPCRs and their locally generated ligands in driving GBM tumor progression. G-protein coupled receptors are upregulated and implicated in growth and invasion of numerous cancer types 15,18 . Recently, more than 30 GPCRs were demonstrated to couple to Gα12/13 9 and, based on their downstream signaling, are likely to regulate cancer progression. We established that many of these receptors had altered profiles in GBM samples included in the TCGA PanCancer dataset, including upregulation of S1PR2, LPAR4, EDNRA, FFAR4, HTR7, and OXGR1. Most strikingly, however, GNA12 was altered in almost one third of the profiled samples, a higher rate than that observed for any of the Gα12-coupled GPCRs. GNA12 was notably upregulated in tumor samples largely distinct from those showing overexpression of these GPCRs. Accordingly, even tumors in which GPCRs are not upregulated would be stimulated through Gα12-regulated pathways in a microenvironment in which their ligands (e.g., thrombin, S1P and LPA) are generated.
Knockdown of G⍺12 decreased G⍺12 mRNA without compensatory increases in G⍺13 mRNA, and with an associated decrease in G⍺12 protein. Tumors observed 2 to 3 weeks after implantation of G⍺12 KD GSC23 or HK281 cells did not differ in size, nor was there a difference in survival of tumor-bearing mice compared to WT controls. The lack of effect on tumor size likely results from the presence of multiple potential stimuli in the tumor microenvironment that could act independently of G⍺12-coupled receptors to stimulate tumor cell growth 30 , and indeed there are multiple pathways for YAP activation and YAP-mediated cell proliferation that would remain intact in the KD cells 31,32 . Consistent with these findings, knockdown of G⍺12 did not alter in vitro GSC23 cell proliferation; in addition, G⍺12 or G⍺13 deletion has also been reported to have no effect on in vivo growth of pancreatic, breast, or oral cancer cell-derived tumors 21,33,34 .
On the other hand, tumor cell invasion appears to be highly dependent on G⍺12, as it was significantly diminished in the G⍺12 KD tumors, demonstrated in three separate experiments. This was shown in experiments using both GSC23 and HK281 cells, suggesting that it is a generalizable feature of G⍺12 signaling. Our in vitro experiments confirmed this observation, demonstrating that GSC23 cell migration and invasion were enhanced by S1P in control cells, but not in cells in which G⍺12 was knocked down. Our G⍺12 knockdown studies were complemented by gain-of-function experiments using GSC23 cells expressing a DREADD coupled to G⍺12, in which we demonstrated ligand-induced activation of cell migration. Defects in cell migration and invasion were also seen in pancreatic and breast cancer cell-derived tumors in which G⍺12 and/or G⍺13 were deleted 33,34 , a finding extended by our in vivo orthotopic observations. G⍺12 and G⍺13 couple to RhoGEFs; thus their primary effect is the activation of RhoA. Actin cytoskeletal changes could acutely alter cell shape, motility and migration, well-established responses induced through RhoA signaling 3,35 . It is now evident, however, that RhoA activation also leads to transcriptional responses 6,7,11,13,20 . Our findings with Gα12 KD glioma stem cells suggest that signaling through Gα12 induces transcriptional responses, including genes that characterize mesenchymal-like and stem cell-like states and which would contribute to their invasive phenotype. Of related interest, studies examining DNA copy number alterations in GBM using large-scale network modeling identified GNA12 as a major hub correlated with disease-relevant transcriptional effects 36 . Together, these findings support the hypothesis that G⍺12 activity regulates tumor cell migration and invasion through chronic and sustained transcriptional alterations.
A proneural-to-mesenchymal transition (PMT) has been described for GBM 24,28 . This resembles the epithelial-to-mesenchymal transition (EMT) observed in other solid cancers, which has been associated with increased stemness and metastasis 37 . Suggestive evidence for a role of the G12 signaling pathway in EMT has been reviewed 38 , but transcriptional responses mediated through G⍺12/13 signaling have not been previously linked to the proneural-mesenchymal transition (PMT) in glioma cells. Notably, however, glioma stem cells in early passage culture tend to revert to a less aggressive phenotype with a different molecular signature than that of their parental GBM 39 , and the addition of serum, which contains activators of GPCRs coupled to G⍺12/13 and to RhoA-mediated gene expression 6 , stimulates their transition to a more mesenchymal phenotype 24 . In addition, expression of the GBM-associated gene RPHN2 (rhophilin) activates RhoA and was also reported to lead to mesenchymal transition of GBM cells 40 . The possibility that transcriptional responses regulated by G⍺12 promote the process of PMT is further supported by our finding that knockdown of G⍺12 in GSCs leads to increases in several genes reflective of a more proneural signature, in particular OLIG2, PATZ1, and TAZ 28,41,42 . In addition, expression of YKL40, considered by most to be an early marker of a mesenchymal shift in recurrent GBM 43,44 , was reduced.
Our data demonstrate that stem cell frequency and expression of canonical stem cell genes, including key transcription factors like NANOG, SOX and NES, were decreased in the G⍺12 knockdown cells. In addition, classical EMT protein families that modulate cell communication processes, e.g., cadherins (CDH11), collagens, and focal adhesion components (integrins), were highly differentially expressed in the tumor samples. Thus, while our data do not conclusively support a PMT shift associated with G⍺12 deletion, as defined by GSEA using the Verhaak classification method 19 , or demonstrate all of the phenotypic changes associated with altered stemness, this is not unexpected since GBM presents a high degree of phenotypic variability due to its inter- and intra-tumor heterogeneity. Overall, however, the genomic changes we observed, along with the decrease in the tumor's aggressive features, are compatible with transcriptional reprogramming through altered expression of proneural-mesenchymal and stem cell genes.
We identified thrombospondin-1 (THBS1), associated with the most aggressive and invasive GBM tumors, as one of the most highly downregulated G⍺12-dependent genes in the tumor samples analyzed by DESeq2, which demonstrated that THBS1 expression was decreased by 90% in G⍺12 KD tumors. We extended our analysis using GSCs in vitro, demonstrating that THBS1 expression was induced through G⍺12 signaling by S1P, as well as by direct activation of G⍺12 through ligand-stimulated DREADD activation. Previous work has linked TGFβ/STAT3 signaling to THBS1 expression 25 , but to our knowledge, the data we present are the first to implicate GPCR and G⍺12 signaling in expression of thrombospondin-1. Notably, our analysis of the promoter of the THBS1 gene sequence using prediction tools for transcription factor binding sites (e.g., the TRANSFAC-based public tools PROMO or MATCH, and TFBSPred, https://www.michalopoulos.net/tfbspred/) revealed a variety of binding sites, including those for MRTFA/SRF and YAP/TAZ/TEADs, transcriptional effectors robustly regulated through G⍺12 signaling.
Our finding that THBS1 knockdown phenocopies that of G⍺12, with little effect on tumor size but a clear change in invasiveness at the tumor border, suggests that upregulation of THBS1 through G⍺12 signaling is one of the transcriptional targets that mediate GBM tumor invasiveness. We show here that its expression also correlated with that of G⍺12 in TCGA tumor samples. While the role of THBS-1 as a regulator of cell invasion is less well established than its role in angiogenesis 26 , it has been shown to regulate the tumor microenvironment, bind to integrins, and activate several protein kinase pathways involved in cell migration including ERK, p38MAPK, and FAK 25,45,46 . Interestingly, our groups have shown that FAK is activated through integrin signaling downstream of RhoA and ligand-induced G⍺12 activation 47 and through RhoA signaling in uveal melanoma 48 . Thus, it will be of interest to determine if FAK serves as a downstream effector of THBS-1 to mediate invasiveness of GBM.
Taken together, our in vivo and in vitro data suggest that GBM tumor cells respond to endogenous GPCR agonists in the tumor environment to engage G⍺12 transcriptional signaling that alters the molecular programming of GSCs. Our findings support the notion that activation of G⍺12 signaling contributes to a phenotypic shift towards a more invasive and mesenchymal tumor growth pattern. Thus, downregulation of this signaling pathway could be efficacious in treating GBM by decreasing therapeutic resistance and the tumor infiltration that contributes to recurrence.
Figure 2 .
Figure 2. G⍺12 knockdown does not alter mouse survival or tumor size observed following orthotopic intracranial injection of GSC tumor cells. shRNA control or shG⍺12 knockdown GSC23 and HK281 cells labeled with IRFP720 were intracranially injected into nu/nu mice. (A, B) Kaplan-Meier survival curves for GSC23 and HK281 control and shG⍺12 KD tumor-bearing mice (6 animals per group, in 2 independent experiments for GSC23, and 4 animals per group for HK281 in one experiment). Survival curves were not significantly different as determined using the log-rank test. (C, D) Brain tumor growth of GSC23-engrafted mice as monitored using Fluorescence Molecular Tomography (FMT) emission at 720 nm. Representative FMT scan images at 17 days post intracranial injection and relative fluorescence quantification. (E) An additional set of animals injected with approximately 3 × 10 5 GSC23 cells and analyzed for tumor growth monitored by FMT did not show significant differences in tumor size (p = 0.1; n = 6).
Figure 7 .
Figure 7. Association of G⍺12 and thrombospondin-1 expression in mesenchymal GBM and requirement of THBS1 for GSC tumor invasion. (A) THBS1 is elevated in GBM patients in the TCGA database as assessed by GlioVis. (B) The CGGA dataset indicates a positive correlation between GNA12 and THBS1 in GBM patient tumor samples classified as mesenchymal. Pearson's correlation, HSD p < 0.01. (C) Mouse brain cross sections showing the effect of shTHBS1#1 and shTHBS1#3 KD compared with shControl at 17 days post intracranial injection (H&E or IHC for human nuclei). (HPM, × 40).
"year": 2023,
"sha1": "e7253ad5502162388e53f1a4073f36112449e944",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "602cbc74df214c05d78e4eb8e3f9dfe4a303145d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204122001 | pes2o/s2orc | v3-fos-license | Is There a Difference in Terms of Perinatal Outcomes Between Fresh and Frozen Embryo Transfers?
OBJECTIVE: Nowadays, fresh embryo transfers and frozen embryo transfers are frequently employed in in vitro fertilization treatment. This study aims to compare the pregnancy outcomes in patients who underwent fresh embryo transfers and frozen embryo transfers. STUDY DESIGN: All patients who underwent fresh embryo transfers and frozen embryo transfers at the in vitro fertilization center, Ondokuz Mayis University between 2010 and 2017 were screened retrospectively and the pregnancy results were evaluated at one-year follow-up. The study included a total of 912 transfers, 679 of which were fresh embryo transfers and 233 were frozen embryo transfers, in 756 patients. Comparisons were made in terms of biochemical pregnancy, clinical pregnancy rate, ongoing pregnancy, and live birth rate. RESULTS: Ectopic pregnancy, biochemical pregnancy, and abortus were significantly more frequent in fresh embryo transfers than in frozen embryo transfers (p=0.001). However, no statistically significant difference in terms of clinical or ongoing pregnancy rate or live birth rate was observed. Birth weight was significantly lower in fresh embryo transfers than in frozen embryo transfers (p=0.001, p=0.031). Multiple pregnancies, preeclampsia, preterm labor, and placental abruption did not show a statistically significant difference between fresh embryo transfers and frozen embryo transfers. Yet, gestational diabetes was significantly more frequent in frozen embryo transfers (p=0.011). CONCLUSIONS: Early pregnancy complications are more frequent in fresh embryo transfers than in frozen embryo transfers. In terms of neonatal results, higher birth weight and gestational diabetes are more prevalent in frozen embryo transfers. This study has shown that fresh embryo transfers are more often associated with negative pregnancy outcomes; frozen embryo transfers may yield better pregnancy results.
Introduction
The transfer of frozen embryos after thawing has led to a new era in the history of in vitro fertilization (IVF). At present, embryos can be frozen at all stages, from zygote to blastocyst, and can be stored for years (1). Fresh embryo transfers have been associated with some poor perinatal outcomes (6,7). Supraphysiological steroid levels may be another reason that would explain these perinatal outcomes (6,7).
This study aimed to compare the perinatal outcomes of patients who underwent fresh embryo transfer (frET) and frozen embryo transfer (fzET).
Material and Method
All patients who underwent frET and fzET in the IVF center of Ondokuz Mayis University between 2010 and 2017 were screened retrospectively and the pregnancy results were evaluated at one-year follow-up. The study was approved by the Ethics Committee, Ondokuz Mayis University. The study was subject to local ethics committee approval (No:11/02/2019-E.3962) and consent for using data. All authors and the study protocol have complied with the World Medical Association Declaration of Helsinki regarding the ethical conduct of research involving human subjects.
A total of 912 transfers, out of which 233 were fzET and 679 were frET, involving 756 patients, were considered. Comparisons were made in terms of biochemical pregnancy, clinical and ongoing pregnancy, and live birth rates. The perinatal outcomes included preterm labor, preeclampsia, placental abruption, and gestational diabetes. Patients with either three or more unsuccessful transfers or those with polycystic ovary syndrome, endometriosis, and known endocrine diseases were excluded from the study.
Babies born before 37 weeks of gestation were considered preterm labor. Pregnancies ending before the 20th gestational week were considered as abortus. Patients with no gestational sac observed despite a positive beta-human chorionic gonadotropin (β-hCG) test were classified as biochemical pregnancy. Pregnancies in which no fetal heartbeat was detected after 20 weeks of gestation were recorded as intrauterine fetal demise. The diagnosis of ectopic pregnancy was made by laparoscopy or ultrasonography. A live birth was recorded as the birth of a live baby after 20 weeks of gestation. Ongoing pregnancy was considered a pregnancy that continued after the 12th week. Clinical pregnancy was confirmed by monitoring of the fetal heartbeat on ultrasound.
Statistical analysis
This study was conducted to determine the effect of fresh or frozen IVF cycles on the measured variables and parameters. Descriptive statistics for continuous (numerical) variables were expressed as mean and standard deviation, while those for categorical variables were expressed as number (n) and percentage (%). To determine the sample size of the study, power was set to at least 0.80 and the Type 1 error to 0.05. The independent t-test was used to compare the means of continuous variables in the groups. The chi-square test was used to determine the relationship between categorical variables. The statistical significance level (α) was set at 5% in the calculations. The SPSS (IBM SPSS for Windows, Ver. 24) statistical package program was used for carrying out the statistical analysis in the study.
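As an illustration of the tests described, the Python sketch below (scipy standing in for SPSS) applies a chi-square test to the ectopic pregnancy counts reported in the Results (13 of 679 frET vs. 0 of 233 fzET) and an independent t-test to simulated birth weights. The birth-weight standard deviations and sample sizes are invented, so the printed p-values are illustrative only.

import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# 2x2 table: ectopic pregnancy (yes/no) by transfer type.
table = np.array([[13, 666],   # frET: 13 ectopic out of 679
                  [0, 233]])   # fzET: 0 out of 233
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Simulated birth weights (g) centered on the reported group means.
rng = np.random.default_rng(0)
bw_fret = rng.normal(2838, 600, size=80)
bw_fzet = rng.normal(3396, 550, size=40)
t, p = ttest_ind(bw_fret, bw_fzet)
print(f"independent t-test: t = {t:.2f}, p = {p:.4f}")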
Results
No statistically significant relationship was found between the frET and fzET groups in terms of age (p>0.05) or infertility period (p>0.05). Indications did not differ significantly between the frET and fzET groups (p>0.05) (Table I).
Thirteen patients in the frET group had ectopic pregnancies, while no ectopic pregnancy was detected in the fzET group; ectopic pregnancy was thus significantly higher in the frET group (p=0.001). Also, abortus and biochemical pregnancy were significantly higher in frET than in the fzET group (p=0.001). There was no significant difference in terms of intrauterine fetal demise, clinical pregnancy, ongoing pregnancy or live birth rate between the two groups (p=1.000, p=0.900, p=0.696, p=0.630) (Table II).
There was no significant difference in preeclampsia, preterm labor or placental abruption between the two groups (p=0.440, p=0.706, p=0.865). However, gestational diabetes was significantly more frequent in fzET than in frET (3.9% vs. 0.7%) (p=0.011) (Table III). In terms of gestational age, no significant difference was observed between the frET and fzET groups (p=0.944, p=0.666). Average birth weight was 2838 grams in the frET group, while it was 3396 grams in fzET (Figure 1).
Birth weight was found to be significantly lower in the frET group than in the fzET group (p=0.001, p=0.031) (Table IV). There was no statistically significant difference in terms of multiple pregnancies between frET and fzET (p=0.389) (Table V). In terms of gender and major congenital anomalies, there was no significant difference between the two groups (p=0.446) (Tables VI and VII).
Discussion
Fresh embryo transfer (frET) and frozen embryo transfer (fzET) are practiced at many centers today. However, differing results related to the neonatal outcomes of these transfers have been obtained from various studies (2,4,5).
It was observed in this study that abortus, biochemical pregnancy, and ectopic pregnancy were significantly higher in frET. However, no significant difference was observed between the two groups when the neonatal outcomes were evaluated, except that birth weight was higher and gestational diabetes more frequent in the fzET group. The authors believe that these results reflect either the asynchrony between the endometrium and the embryo in the frET procedure or the hormonal environment created by controlled ovarian hyperstimulation (COH). Another reason that could explain these results is that embryos exposed to the freeze-thaw process are stronger. However, according to the results of the present study, application of frET or fzET did not change the perinatal outcomes in the later gestational weeks. Also, the clinical pregnancy rate, live birth rate, and ongoing pregnancy rate were not different.
For better perinatal results, not only a high-quality embryo but also a hormonally and biochemically suitable endometrium is needed. Studies have shown that the high estrogen levels caused by COH may affect implantation and the placenta (8,9). No difference was found in terms of perinatal outcomes between frET and fzET patients using donor oocytes. Since similar levels of progesterone and estrogen were found in frET and fzET patients, no endometrial hyperstimulation-induced effects were observed (10,11).
In their study, while evaluating patients using autologous oocytes and donor oocytes, Mar Vidal et al. did not find any difference between the frET and fzET in patients using donor oocytes, yet they found poor perinatal outcomes in patients using autologous oocytes because of COH administration (12).
The literature does not report any significant difference between frET and fzET in terms of clinical pregnancy and ongoing pregnancy (12). In terms of live birth, there are studies indicating that the live birth rate is higher in fzET (13,14), although this result could not be reproduced in other studies (15,16). In the present study, there was no difference in terms of clinical pregnancy, ongoing pregnancy, or live birth rate.
The literature also reports differing results on abortus. In some studies, abortus has been found to be higher in fzET than in frET; however, there are also studies stating that there is no difference in terms of abortus between fzET and frET (17)(18)(19). Also, the rates of biochemical pregnancy did not differ between frET and fzET according to the outcomes of some studies (17). Nevertheless, the results of the present study clearly show that abortus and biochemical pregnancy were significantly higher in frET.
Earlier studies on ectopic pregnancy have shown that the frequency of ectopic pregnancy is lower in fzET (20,21), which has also been observed in the present study. The presence of high contractility and impaired endometrial receptivity in frET cycles may lead to ectopic pregnancy (21,22).
In terms of neonatal results, the meta-analysis of 37,703 singleton pregnancies by Maheshwari et al. demonstrated that SGA, low birth weight, preterm labor, perinatal mortality, and postpartum bleeding were less frequent in fzET than in frET (4).
In another study, Maheshwari et al. evaluated 112,432 singleton pregnancies and reported that while low and very low birth weight pregnancies were less frequent in the fzET group, there was no difference between frET and fzET in terms of preterm rate and anomaly rate (23). The findings of the present study in terms of neonatal outcomes appear to be consistent with these results. No other neonatal differences were identified other than the birth weight being higher in fzET. Also, gestational diabetes was more frequent in the fzET group; however, the reason is not completely understood. Inadequate growth resulting from a lack of synchronization between the endometrium and the embryo in frET cycles may be one of the reasons. Furthermore, it is also thought that some changes in the early embryo due to the freeze-thaw process may cause macrosomia in frozen embryos (24).
The authors of the present study did not find any difference between frET and fzET in terms of gestational age.
However, studies indicating that preterm labor is more common in fzET and that there is no significant difference in gestational age between frET and fzET have been reported in the literature (5,25,26).
Although the present study is a retrospective one, it is important as it reveals that early pregnancy complications are more frequent in frET.
The most important shortcoming of this study was that the implantation rate was not considered, because the number of embryos transferred was not reliably recorded. The present study has shown that the results of early pregnancy are better in fzET than in frET cycles. Deterioration of the endometrial environment with COH may also have an impact on these results. As for neonatal results, there were no differences except for birth weight and gestational diabetes. Prospective studies are warranted to better understand this issue. | 2019-09-26T09:02:04.839Z | 2019-09-25T00:00:00.000 | {
"year": 2020,
"sha1": "993346c410eed45ca09daf902b56bfc0395cb5e4",
"oa_license": "CCBY",
"oa_url": "https://gorm.com.tr/index.php/GORM/article/download/932/814",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3ab278c5a36d31f152316254cb6db5a8540f7db6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
108687277 | pes2o/s2orc | v3-fos-license | Perfmon2: A leap forward in Performance Monitoring
This paper describes the software component, perfmon2, that is about to be added to the Linux kernel as the standard interface to the Performance Monitoring Unit (PMU) on common processors, including x86 (AMD and Intel), Sun SPARC, MIPS, IBM Power and Intel Itanium. It also describes a set of tools for doing performance monitoring in practice and details how the CERN openlab team has participated in the testing and development of these tools.
Introduction: Justifying performance monitoring
There are multiple reasons why performance tuning of an application or a subsystem is still worth the effort, even in an era where hardware is seen as "cheap" and manpower is considered expensive. The first case is when computers are purchased for tens of millions of euros, so that even economies of just a few percent can compensate for the salaries of the people doing performance work.
A second case, which is fairly recent, is when computer centres fill up to their power and thermal limits with the consequence that no more servers can be installed. If additional capacity is nevertheless needed, one may either be forced to exchange some of the equipment with more thermally-efficient hardware (if it exists) or turn to performance tuning in order to squeeze out more performance.
A third case is when high-cost personnel wait for computers to complete their calculations. Percentage gains in turn-around time can then be translated directly into more efficient manpower.
An additional incentive is no doubt the personal pride of the software designer/programmer. Ideally, one wants performance analysis to be performed throughout the entire development cycle of an application, so that the application does not exhibit "inefficient" behaviour or excessive consumption of computing resources.
The initial Itanium development
When the Itanium processor was designed (jointly by HP and Intel) a Performance Monitoring Unit (PMU) was added to the processor architecture as a complete and consistent facility. The PMU presented a well-defined interface to the operating system for both the programming and the corresponding data collection. For counting events, a vast number of counters were added, so that, for instance, every cycle in the execution pipelines could be accounted for or every action in the cache hierarchy could be explained. In addition, several advanced features, such as Branch Trace Buffers, were introduced.
When Linux was initially ported to the Itanium in the late nineties [1], Stéphane Eranian from HP Labs took on the task of developing the required software to exploit the PMU in order to monitor the performance of applications or even the kernel itself. The tool became an important instrument in the effort to port applications to the Itanium platform.
A bright idea comes along
As a natural extension to the work he had done on the Itanium, Stéphane then decided to extend the Linux kernel support to cover all modern processor families. He called the successor product perfmon2 [2]. In addition to providing broad support for hardware variants, he negotiated with the Linux kernel maintainers to get the patch added to the main kernel tree. In the beginning the maintainers were sceptical, primarily because the kernel hooks for perfmon2 sit in very sensitive areas of the kernel, such as the dispatcher and the interrupt handler. The community did not want any unnecessary overhead to be added to such time-critical kernel areas, and it took several kernel releases to introduce the appropriate infrastructure for perfmon2 without upsetting the community. At the time of writing it seems that perfmon2 will be entirely added as of kernel version 2.6.24 later this autumn.
Detailed description of perfmon2/pfmon
Perfmon2 aims to be a portable interface across all modern processors [3,4]. It is designed to give full access to a given PMU and all the corresponding hardware performance counters. Typically the PMU hardware implementations use a different number of registers, counters with different lengths and possibly other unique features, a complexity that the software has to cope with. Although processors have different PMU implementations, they usually use configuration registers and data registers. Perfmon2 provides a uniform abstract model of these registers and exports read/write operations accordingly. The software supports a wide variety of processor architectures, including Intel Itanium, Intel P6, P4, P2, Pentium M, Core and Core 2 processors, the AMD Opteron (Dual and Quad-core), the IBM Cell processor and a range of MIPS processors [5]. The interface is implemented using system calls in order to support per-thread monitoring, implying a costly context switch by the kernel. The interface also provides support for system-wide measurements. Multiple per-thread perfmon2 contexts can coexist at the same time on a system. Multiple system-wide sessions can co-exist as long as they monitor different processors. Per-thread mode and system-wide mode cannot exist at the same time. For each mode, it is possible to collect simple counts or create full sampling measurements, in both cases using 64-bit counters, in either user or kernel mode. Perfmon2 uses the number of events in order to determine the sampling period. In sampling mode, the interface uses a kernel sampling buffer in order to minimize the overhead, since this reduces the communication between user and kernel levels on each sampling counter overflow.
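To make the register model above concrete, the sketch below shows the shape of a per-thread counting session against the perfmon2 system-call interface. It is a hedged illustration only: the call and structure names (pfm_create_context, pfarg_pmc_t, and so on) follow the perfmon2 documentation of the era, but exact signatures and fields varied across kernel patch versions, so treat it as a schematic rather than a definitive implementation.

/* Schematic per-thread counting session with perfmon2.
 * Call/structure names follow the perfmon2 papers; exact
 * signatures are assumptions and varied between patch versions. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <perfmon/perfmon.h>   /* perfmon2 user-visible definitions */

int main(void)
{
    pfarg_ctx_t  ctx;    /* monitoring context, one per thread  */
    pfarg_pmc_t  pmc;    /* one abstract configuration register */
    pfarg_pmd_t  pmd;    /* one abstract 64-bit data register   */
    pfarg_load_t load;
    int fd;

    memset(&ctx, 0, sizeof(ctx));
    fd = pfm_create_context(&ctx, NULL, NULL, 0);  /* returns a file descriptor */

    memset(&pmc, 0, sizeof(pmc));
    pmc.reg_num   = 0;
    pmc.reg_value = 0;               /* event encoding, normally obtained from libpfm */
    pfm_write_pmcs(fd, &pmc, 1);

    memset(&pmd, 0, sizeof(pmd));
    pmd.reg_num = 0;                 /* counter paired with PMC0  */
    pfm_write_pmds(fd, &pmd, 1);     /* start the counter at zero */

    memset(&load, 0, sizeof(load));
    load.load_pid = getpid();        /* attach the context to this thread */
    pfm_load_context(fd, &load);

    pfm_start(fd, NULL);
    /* ... workload to be measured ... */
    pfm_stop(fd);

    pfm_read_pmds(fd, &pmd, 1);      /* 64-bit count regardless of hardware counter width */
    printf("counter 0 = %llu\n", (unsigned long long)pmd.reg_value);
    return 0;
}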
Figure 2. What is where regarding pfmon
For the cases where the performance units have too few counters, or when some events cannot be measured together, perfmon2 supports event sets and multiplexing.
The perfmon2 interface is complemented by the libpfm library as well as by the pfmon tool. The library provides the actual access to the interface for user applications since it contains a set of functions, adapted to each processor, for read/write operations to the registers exported by perfmon2. The library also provides the API needed to take advantage of more complex PMU features, like the Branch Trace Buffer (BTB) and "opcode matching" on Itanium.
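As an illustration of the division of labour between libpfm and the kernel interface, the fragment below sketches how a symbolic event name might be translated into register programming. The function and field names (pfm_initialize, pfm_find_event, pfm_dispatch_events, pfp_*) are assumptions based on libpfm-3.x documentation and may differ between library versions.

/* Hypothetical libpfm-3.x style lookup: symbolic event name ->
 * PMC programming chosen by the library for the host PMU.
 * Names are assumptions and may differ across libpfm versions. */
#include <stdio.h>
#include <string.h>
#include <perfmon/pfmlib.h>

int main(void)
{
    pfmlib_input_param_t  inp;
    pfmlib_output_param_t outp;
    unsigned int idx;

    if (pfm_initialize() != PFMLIB_SUCCESS)
        return 1;
    if (pfm_find_event("CPU_CYCLES", &idx) != PFMLIB_SUCCESS)
        return 1;                        /* event names are PMU-specific */

    memset(&inp, 0, sizeof(inp));
    memset(&outp, 0, sizeof(outp));
    inp.pfp_event_count     = 1;
    inp.pfp_events[0].event = idx;
    inp.pfp_dfl_plm         = PFM_PLM3;  /* count at user privilege level */

    /* let libpfm pick compatible configuration registers for this PMU */
    if (pfm_dispatch_events(&inp, NULL, &outp, NULL) != PFMLIB_SUCCESS)
        return 1;

    printf("PMC%u <- 0x%llx\n", outp.pfp_pmcs[0].reg_num,
           (unsigned long long)outp.pfp_pmcs[0].reg_value);
    return 0;
}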
Pfmon [6] is a stand-alone command-line tool which takes advantage of perfmon2 and libpfm. It was initially designed in order to test the features of the interface and the library, but since users found it to be a very helpful tool, it accumulated features and matured over the years.
Figure 3. Profiling example
Pfmon can do system-wide or per-thread measurements, by monitoring across pthread_create, fork and exec function calls. Such measurements can be done for a new process or for an existing one by attaching dynamically to the process. The measurements can be triggered at a specific code location if desired. This can be useful if, for instance, one wants to skip the initialization phase of a program and only monitor the rest. In addition to counting PMU events, pfmon supports profiling without requiring a recompilation of the application. It can report which addresses from the application contribute to the overall number of cycles, instructions, cache misses, etc. With the CERN openlab [7] extensions to pfmon, it can map these addresses to symbol names across multiple processes and shared libraries, dynamically loaded or linked against the application. For C++ and Java symbols pfmon can demangle symbols and produce more user-friendly names. In order to avoid pathological patterns in sampling mode, pfmon provides the possibility to randomize sampling periods. All results can be aggregated across different measurement domains and can be either printed on the screen or saved into a file for further analysis.
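The period randomization mentioned above can be pictured with a few lines of C. This is a minimal sketch of the idea only, not pfmon's actual implementation: the next counter overflow threshold is jittered by a user-configured random mask so that sampling cannot lock onto periodic program behaviour.

/* Concept sketch of randomized sampling periods (not pfmon's code):
 * jitter each period so samples do not align with loops in the
 * monitored program. 'base' and 'mask' mirror the idea of a base
 * period plus a user-chosen randomization range. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint64_t next_sampling_period(uint64_t base, uint64_t mask)
{
    /* add a random offset in [0, mask]; mask is of the form 2^k - 1 */
    return base + ((uint64_t)rand() & mask);
}

int main(void)
{
    srand(42);  /* fixed seed so the example is deterministic */
    for (int i = 0; i < 4; i++)
        printf("%llu\n",
               (unsigned long long)next_sampling_period(100000, 0xFFF));
    return 0;
}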
Complexity of CERN benchmarks -Shared library support in perfmon2.
The High Energy Physics (HEP) community develops a great deal of in-house software. There is a wide spectrum of components involved in the programming process, including programming languages such as C/C++, Java and even FORTRAN, in addition to scripting languages, such as python or bash. The applications are run on top of multiple hardware configurations and set-ups. The global development effort produces software that has a wide variety of functionality and complexity, from simple applications to huge simulation frameworks built with hundreds of shared libraries. These complex frameworks are based on a multitude of components developed by different teams inside HEP or by the open source community. As already mentioned in the introduction, there are multiple reasons for wanting to tune the performance of such frameworks.
Both pfmon and the underlying perfmon2 offer many features that should be useful in the HEP computing environment in terms of performance monitoring and profiling. The portability and scalability across different hardware platforms is of particular value. However, pfmon was also found to have some limitations, especially related to dynamic libraries [8]. The HEP applications make extensive use of such libraries, and because they spend most of their execution time in these libraries, the weaknesses in pfmon become problematic. In addition, if dynamic libraries are loaded and unloaded in various ways (e.g., from a C/C++ program or from a python script) in different parts of the application, pfmon's raw addresses can be misleading. Further problems are caused by the fact that a lot of HEP applications run as a consecutive series of smaller processes, which is reported as a series of fork/exec events.
Bugs and other issues dealt with by CERN openlab
As already described, pfmon does not handle dynamic libraries correctly. It reports raw addresses from dynamic libraries instead of function names, which makes the analysis difficult, almost impossible. There is also the issue of multiple processes (created via fork or exec) that causes problems when we want to observe symbols instead of virtual addresses, since pfmon initially could perform symbol resolution only from the main executable. Given that pfmon, nevertheless, has a lot of other useful features, we have contributed some extensions in order to meet our requirements.
We have extended pfmon's functionality [9], allowing it to handle all libraries, both those that were linked against applications as well as those that were dynamically loaded during the execution. As a result our extended pfmon is able to deal with all symbols from such libraries and report profiling results with function names. Thanks to our improvement we are also able to see these symbols across all processes started from the main executable. Our extensions are fully portable and run on IA-32, Intel-64, and AMD64 as well as on IA-64 platforms. In the process of working on these improvements, we also solved a few additional issues related to pfmon. Since the HEP applications' set-up and resource requirements are very demanding, our tests helped us to find and solve memory leak problems. We have also solved a "dangling file handler" issue when we reached more than five thousand open files while running profiling. During various tests we have discovered a few bugs in verbose mode, where in some cases pfmon tries to print data which is not accessible. We have also found an issue with automatic inheritance of debug registers on fork calls, which is acceptable for the x86 architecture, but not for IA-64.
GPFMON
Gpfmon [10] is a graphical front-end to pfmon, written in Python and GTK-2, running on Linux systems. The concept for this application, developed by CERN openlab, stems from the need to provide a convenient and user-friendly way to launch pfmon/perfmon2 monitoring sessions. The front-end, nearing the beta phase as of this writing, not only does this but also brings additional value on top of the original program. Most notably, it provides an advantage to less advanced users, as well as to advanced users requiring visualization capabilities.
Apart from the fact that gpfmon relieves users from writing 250-character long command lines, the tool provides some aid in event selection, by visualizing available PMU events, their descriptions, dependencies and counters independently of the architecture. The event selection process is assisted according to the amount and availability of counters in the PMU. Moreover, scenarios consisting of event ratios may be selected, such as cycles per instruction or the percentage of missed branch predictions. Gpfmon automatically selects the events needed to produce the selected ratios, in order to eventually produce either a single figure or time-dependent data. In effect, it is easy to gain a more complex insight into the monitored program and see how some ratios change over time. In addition, gpfmon supports remote monitoring sessions via plain SSH, without the need to install any additional software. In this scenario, the client (gpfmon) uses the SSH protocol to connect to a remote machine running perfmon2 and pfmon, and shows the results locally on the user's computer. This solution not only lifts the burden of supporting the GUI from the monitored machine, but also enables monitoring in GUI mode on less robust network links, which might not be able to support the GUI displayed via X-forwarding or VNC. Gpfmon also generates several types of plots for flat profile and sampling data, enabling users to see the characteristics of their applications at a glance.
Conclusions
Perfmon2 is an exciting development that is about to be included in the Linux kernel as the standard interface to the PMU on modern microprocessors. In our opinion it is likely that performance tuning will remain an important activity in the years to come, due to problems such as the capping of processor frequency because of power leakage issues or the saturation of cooling capacity in computer centres for the same reason. Once perfmon2 has been integrated with all the required hooks, tools like pfmon and gpfmon can be used without difficulty to monitor the performance of software applications. Thanks to the development in CERN openlab, the demands of the complex LHC applications will be fully covered.
Figure 4. A gpfmon session and a produced graph
The tool will continue to evolve towards a robust user interface, adding even further value to pfmon, and including such features as advanced scenario support, results and profile management and enhanced plot generation. A fully stable 1.0 version of the program is expected in the fourth quarter of 2007. | 2019-04-12T13:55:59.792Z | 2008-07-01T00:00:00.000 | {
"year": 2008,
"sha1": "5a6778f5fcad97b30dec005e961eb81186f31ab6",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/119/4/042017/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e23b5ff38e59243cfb59caba5d9186c8cbe8fdbb",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Physics"
]
} |
53037731 | pes2o/s2orc | v3-fos-license | Perceptions of parents and healthcare professionals regarding minimal invasive tissue sampling to identify the cause of death in stillbirths and neonates: a qualitative study protocol.
BACKGROUND
Globally, around 2.6 million neonatal deaths occur every year, and the number of stillbirths is almost as high. Pakistan is ranked among the highest countries in the world for neonatal mortality. In 2016, for every 1000 babies born in Pakistan, 46 died before the end of the first month of life. Also, Pakistan had the highest rate of stillbirths (43.1/1000 births) in 2015. To meet Sustainable Development Goal (SDG) targets of reducing neonatal mortality and stillbirths, it is essential to gain understanding about the causes of neonatal deaths and stillbirths. In Pakistan, full autopsies are conducted only in medico-legal cases and are very rarely performed to identify a definitive cause of death (CoD); because of cost and insufficient staff, they are generally not feasible. Recently, minimally invasive tissue sampling (MITS) has been used to determine CoD in neonates and stillbirths, as it addresses some of the socio-cultural and religious barriers to autopsy. However, it is not known how families and communities will perceive this procedure; therefore, exploring family and healthcare professionals' perceptions regarding MITS is essential in determining acceptable and feasible approaches for Pakistan.
METHODS
The study will employ an exploratory qualitative research design. The study will be conducted at the National Institute of Child Health (NICH) hospital of Karachi. The data collection method will consist of key-informant interviews (KIIs) and focus group discussions (FGDs). FGDs will be conducted with the families and relatives of newborns who are visiting the outpatient department (OPD) and well-baby clinics of NICH hospital. KIIs will be conducted with the NICH medical director, healthcare providers, professionals involved in proceedings related to death and dying, religious leaders, health sector representatives from the government, public health experts, maternal and child health (MCH) specialists, obstetricians and neonatologists, and experts from the bioethics committee. Study data will be analyzed using NVivo 10 software.
DISCUSSION
The research will help explore specific cultural, religious and socio-behavioral factors that may increase or decrease the acceptability of MITS for identifying CoD in neonates and stillbirths. The findings of the qualitative study will provide a better understanding of parents' and healthcare professionals' attitudes towards the use of MITS in cases of neonatal death and stillbirth.
Plain English summary
Pakistan is ranked among the highest countries in the world for neonatal mortality. In 2016, for every 1000 babies born in Pakistan, 46 died before the end of the first month of life. Also, Pakistan had the highest rate of stillbirths (43.1/1000 births) in 2015. To meet Sustainable Development Goal (SDG) targets of reducing neonatal mortality and stillbirths, it is essential to gain understanding about the causes of neonatal deaths and stillbirths. Recently, MITS has been used to determine CoD in neonates and stillbirths, as it addresses some of the socio-cultural and religious barriers to autopsy. However, it is not known how families and communities will perceive this procedure. Therefore, the purpose of this formative research is to explore the facilitating and inhibitory factors perceived by communities and healthcare professionals for the implementation of the MITS procedure.
The study will employ an exploratory qualitative research design. The study will be conducted at the National Institute of Child Health (NICH) hospital of Karachi. The data collection method will consist of key-informant interviews (KIIs) and focus group discussions (FGDs). Study data will be analyzed using NVivo 10 software.
This qualitative study will provide a thorough insight into the views of families and healthcare professionals towards the use of MITS. The study will also highlight how a less invasive autopsy can address some of the barriers, such as delay in carrying out funeral practices and concerns regarding organ and tissue removal, which are particularly significant for some cultural and religious groups.
Background
There are approximately 2.6 million neonatal deaths worldwide every year, and the number of stillbirths is almost as high. The number of neonatal deaths worldwide has declined from 5.1 million in 1990 to 2.6 million in 2016. This decline has been slower than that of the under-5 mortality rate, i.e. 49% compared with 62%. Nearly half of all neonatal deaths take place in the first 24 h of life, and up to 75% of all neonatal deaths occur in the first week of life [1]. As per the recent UNICEF report, Pakistan is ranked among the highest countries in the world with regard to neonatal mortality. In 2016, for every 1000 babies born in Pakistan, 46 died before the end of the first month of their life. Also, Pakistan had the highest rate of stillbirths (43.1/1000 births) in 2015, making it the worst performer amongst 186 countries [2]. Therefore, Pakistan is considered to be one of the riskiest places in the world for childbirth. In order to reach the Sustainable Development Goal (SDG) targets for neonatal mortality and stillbirth rates of 12 deaths per 1000 live births by 2030 [3], it is very important to identify the causes of death.
In low- and middle-income countries (LMICs), infants often die without being cared for by a skilled healthcare professional and without any documentation from a medical examiner, and are usually buried without any cause of death (CoD) investigation being conducted [4]. Determining the causes of neonatal death and stillbirth in healthcare facilities and in the communities is crucial for several reasons. First, CoD determination reveals the actual cause of neonatal death and stillbirth. Second, CoD investigations are important to resolve uncertainties in global disease estimations. Lastly, the right information about the cause of neonatal deaths and stillbirths will help develop effective public health programs and will allow public health policymakers to make informed decisions for allocating health care resources [5-7].
In LMICs, full autopsies on neonates and stillbirths are rarely conducted to identify a definitive CoD and are unlikely to be feasible due to both resource constraints and acceptability issues. In countries like Pakistan, full autopsies are not performed due to cultural, financial, religious, and physical barriers, except in medico-legal cases [7,8]. The minimally invasive tissue sampling (MITS) procedure is now being used to address the socio-cultural and religious barriers [9]. The MITS procedure involves extracting tissue specimens from a predefined set of organs and using that tissue for histopathologic examinations and organism identification. The methodology offers the possibility of gathering critical missing data to determine the causes of death in neonates and stillbirths. MITS is potentially quicker, less expensive, more acceptable and markedly less invasive, and has great potential to determine the CoD nearly as accurately as a full autopsy [10]. However, it is not known how families, communities and healthcare professionals will perceive this procedure or how they will decide whether or not to consent to a post-mortem needle biopsy, as it still involves technical and cultural challenges [5].
Implementation of MITS procedures in areas where post-mortem procedures have been seldom utilized, such as in a Muslim country like Pakistan, requires an understanding of what is culturally and religiously acceptable and feasible [6,11]. Furthermore, understanding how, when, by whom, and in which context grieving relatives of a deceased neonate or stillbirth should be approached to seek permission to perform such procedures is critical. Very few studies have explored facilitators and barriers to the MITS procedure [12]. Exploring family and healthcare professionals' perceptions and views regarding the MITS procedure is essential in defining best practice and determining acceptable and feasible approaches [6].
Rationale
As the MITS procedure is performed on the body of a recently deceased neonate or stillbirth, a number of complex factors including religious beliefs, cultural norms, financial limitations and ethical constraints inevitably arise and may make MITS difficult to execute in LMICs, and Pakistan is no different. Widespread uptake and acceptability of MITS will require a thorough understanding of the ethical issues and the cultural and religious norms and practices to determine the feasibility of MITS prior to implementation [4]. For example, beliefs about death and the afterlife, opposition to and concerns about body disfigurement, difficulties in obtaining consent from grieving families, inadequate involvement/endorsement of community leaders, lack of community awareness, suspicion of researchers, and burial practices are some of the factors underlying MITS refusal. Understanding these kinds of family perceptions and healthcare professional concerns regarding MITS will be essential to increase community participation and acceptability.
Study purpose
The purpose of this formative research is to explore the facilitating and inhibitory factors perceived by communities and healthcare professionals for the implementation of the MITS procedure.
Study objectives
To explore and understand families' perceptions and views regarding premature births, stillbirths and neonatal deaths and their related causes.
To explore cultural, social and religious norms and conduct around deaths (neonatal deaths/stillbirths).
To explore families' and healthcare professionals' willingness to know the cause of death for neonates and stillbirths.
To examine families' and healthcare providers' attitudes towards the MITS procedure.
To identify perceived facilitators and barriers for the implementation of the MITS procedure among families and healthcare professionals.
Study design
This formative research will employ an exploratory qualitative research design using semi-structured interviews and a purposive sampling approach. The data collection methods will involve key-informant interviews (KIIs) and focus group discussions (FGDs). The aim of the FGDs and KIIs is to explore and understand the acceptability of the MITS procedure among public health experts, healthcare providers, clinicians, parents, families, patient advocates, and diverse social, ethnic and religious groups.
Study setting
The study will be conducted at one of the sentinel hospitals of Karachi, the National Institute of Child Health (NICH), because of its well-established pediatric care protocols and willingness to participate in the study. NICH is a 320-bed tertiary care public sector hospital providing care to infants and children in Karachi. The FGDs will be conducted at the outpatient department (OPD) and well-baby clinics of the NICH hospital, which families and relatives visit.
Study participants Key-informant interviews (KIIs)
We will invite 'key informants' such as the NICH medical director, healthcare providers (doctors, nurses/midwives), religious leaders, health sector representatives from the government, public health experts, obstetricians, neonatologists, members of the ethics review committee and professionals involved in proceedings related to death and dying (mortuary attendants/body preparers) to understand their views and the acceptability of the MITS procedure (Table 1). Key informants will be sent/emailed a letter inviting them to participate in the qualitative study. A few KIIs will be arranged at NICH and others will take place at locations preferred by the interviewees. Key informants will be requested to sign consent forms before the interview begins, in which they will agree that the interview can be audio-recorded and written notes can be taken by a note-taker to record interviewee expressions and statements. The key-informant interviews will later be transcribed in the local language. However, no identifying characteristics will be included in the transcription. Initially, the KII will involve discussion around the health status of pregnant women and their perceptions about neonatal deaths/stillbirths. Later, the discussion will move towards exploring views about causes of neonatal deaths/stillbirths and the acceptability of the MITS procedure among healthcare professionals. Finally, the interview will explore perceived facilitators and barriers for implementation of the MITS procedure and health system requirements for implementing the new method to determine CoD. We anticipate that 13-15 participants will be recruited for KIIs, but we will cease interviews once data saturation has been achieved.
Focus group discussions (FGDs)
FGDs will be conducted with the families and relatives of newborns who are visiting the OPD and well-baby clinics of NICH hospital for regular growth monitoring, post-natal check-ups and vaccinations. A few FGDs will be conducted with the relatives of families who have experienced a recent neonatal death/stillbirth. Considering the cultural and ethical sensitivity, the research will not have focus groups with parents who have experienced a recent neonatal death/stillbirth. Additionally, FGDs will not be conducted with the parents of admitted newborns who are waiting at the in-patient areas (Table 1). FGDs will be arranged in one of the meeting rooms at NICH. Focus groups will be facilitated by a trained moderator who is experienced in this area. Focus group participants will be requested to sign a consent form before the dialogue begins, in which they will agree that the discussion can be noted and audio-recorded for transcription purposes. Participants will be assured that their anonymity will be maintained and no identifying features will be mentioned in the transcript. The major themes will include a general discussion about the health status of pregnant women, perceptions about neonatal deaths/stillbirths and related practices, views about causes of neonatal deaths and stillbirths, acceptability of MITS among parents and families, perceived facilitators and barriers for implementation of MITS, and exploring perceptions of parents/families who have experienced a prior loss or relatives of families who have been affected by a recent neonatal death/stillbirth. We anticipate that 8-10 FGDs will be conducted, with at least 6 participants in each one. However, FGDs will be ceased once data saturation has been reached.
Eligibility criteria
The inclusion and exclusion criteria for study participants are provided below:
Inclusion criteria
Parents and relatives of newborns who are visiting the OPD and well-baby clinics of NICH hospital for regular growth monitoring, post-natal check-ups and vaccinations.
Relatives of families who have experienced a recent neonatal death/stillbirth. Key informants such as the NICH medical director, healthcare providers (doctors, nurses/midwives), religious leaders, health sector representatives from the government, public health experts, obstetricians, neonatologists, members of the ethics review committee and professionals involved in proceedings related to death and dying (mortuary attendants/body preparers) who are willing to give consent to participate in the study.
Exclusion criteria
We will not interview parents who have experienced a recent neonatal death/stillbirth. Considering the cultural and ethical sensitivity, this qualitative research will also not interview parents of admitted newborns who are waiting in the in-patient areas. Participants (parents/families/key informants) who are not willing to take part in this study will be excluded.
Ethical considerations
Study participants will be asked to provide informed, written consent prior to participation in this study. Participants who are unable to write their names will be asked to provide a thumbprint to symbolize their consent to participate. Ethical approval from NICH and the Aga Khan University Ethical Review Committee (AKU-ERC) was obtained prior to initiating this study.
Data collection
Separate semi-structured interview guides have been developed for KIIs and FGDs. The interview guide will help explore participants' views towards full autopsy and MITS, perceived benefits, potential limitations or concerns, and implementation into clinical practice. At the start of the interview, participants will be provided with a standardized overview of autopsy and MITS (Table 2).
Data analysis
The data will be transcribed from the audio recordings and will be analyzed with the qualitative data analysis software NVivo 10. Written transcripts will be uploaded into the NVivo 10 software to offer easy and organized retrieval of data for analysis. Thematic analysis will be carried out to analyze the transcribed data collected through KIIs and FGDs. This involves an iterative process where data is coded, compared, contrasted and refined to generate emergent themes. Transcripts will be read several times to develop an interpretation of the participants' perceptions regarding the acceptability of MITS. The transcribed text will be divided into 'meaning units', which will be later shortened and labeled with a 'code' without losing the study context. Codes will then be analyzed and grouped into categories. In the final step, similar categories will be assembled under main themes. Two independent investigators will perform the coding, category creation, and thematic analyses, and discrepancies will be resolved to reduce researcher bias. To ensure the credibility of the research, study data will be triangulated by data sources (parents, mothers, fathers, relatives, healthcare providers, clinicians, public health experts, bioethics experts) and data collection methods (FGDs and KIIs), to compare alternative perspectives and reveal any inconsistencies [13].
Discussion
This qualitative study will provide a unique opportunity for thorough insight into the views of families and healthcare professionals towards the use of MITS, a less invasive autopsy procedure. Such in-depth insights will be crucial to develop an understanding of the cultural, religious and socio-behavioral factors that may facilitate or hinder the acceptability of the MITS procedure. The study will describe how parents view the MITS procedure, and how these views relate to socio-demographic factors such as culture, religion and socio-economic status. The study will also highlight how a less invasive autopsy can address some of the barriers, such as delay in carrying out funeral practices and concerns regarding organ and tissue removal, which are particularly significant for some cultural and religious groups. Finally, the findings will have significant implications for future practice and policy regarding the provision of post-mortem services.
Table 2 Overview of open autopsy and MITS
Full autopsy
-Most comprehensive and complete method to estimate CoD
-Rarely undertaken in such resource-poor environments due to cultural, financial, religious, and physical barriers
-Very extensive examination of internal organs, beginning with the creation of a Y- or U-shaped incision from both shoulders, joining over the sternum and continuing down to the pubic bone
MITS
-The MITS procedure involves body inspection and recording of basic anthropometric data: body weight, height/length, mid-upper arm circumference, head circumference, lower leg length and foot length
-The procedure involves body palpation by a MITS specialist
-The procedure involves imaging/photography by a MITS technician
-The procedure uses biopsy needles to obtain samples of lung, brain, liver and other organs for histopathologic and microbiologic examination to help determine CoD | 2018-10-27T14:19:15.440Z | 2018-10-22T00:00:00.000 | {
"year": 2018,
"sha1": "3749d52dfcce8fd17fd8f10d8f7137a75071d74c",
"oa_license": "CCBY",
"oa_url": "https://reproductive-health-journal.biomedcentral.com/track/pdf/10.1186/s12978-018-0626-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3749d52dfcce8fd17fd8f10d8f7137a75071d74c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11648878 | pes2o/s2orc | v3-fos-license | Plant Extracts: A Potential Tool for Controlling Animal Parasitic Nematodes
Introduction
Many plants play a crucial role in maintaining animal and human life in a natural balance, with a tendency to establish an environmental armory among the different biosphere inhabitants. During the evolution of living organisms in the biosphere, biological interactions with other organisms are established and they affect each other in many ways. Different types of relationships are involved among organisms, including parasitism. Heritable strategies of biological adaptation are developed by living organisms to overcome adverse environmental conditions. Plants have developed biochemical mechanisms to defend themselves from biological antagonists that act as their natural enemies (Ryan and Jagendorft, 1995). This principle has led scientists to search for bio-active compounds produced by plants against pathogens (Sheludko, 2010). For a long time, a number of plants and their metabolites have been evaluated against diseases of importance not only in public health (Shah et al., 1987), but also in animal and agricultural production (Githiori et al., 2006). In the present chapter, the importance of using plant extracts as an alternative method of control of animal parasitic nematodes is reviewed from a broad perspective.
Use of plants as a source of phyto-medicines
Ancestral cultures worldwide developed, over many centuries, several cures and remedies from plants and plant extracts against many diseases affecting human populations, and a traditional medicinal system based on empiric knowledge was established and improved through time (Hillier and Jewel, 1983). Some devastating infectious diseases, e.g., malaria, responsible for the deaths of thousands of people, can be overcome with traditional herbal anti-malarial drugs obtained from South America, Africa and Asia, i.e., Cinchona (Cinchona sp.), Qing hao (Artemisia annua), Changshan (Dichroa febrifuga), Neem (Azadirachta indica), Cryptolepis sanguinolenta and other plants (Willcox et al., 2005). Researchers around the world have scientifically explored the real effect of many plants used as medicines.
Parasites of veterinary importance
The livestock industry worldwide is severely affected by a number of infectious diseases caused by different kinds of parasites. The present chapter focuses on the use of plant extracts against the group of internal parasites, and particularly on the helminths known as Gastrointestinal Parasitic Nematodes (GIN), considered to be one of the most economically important groups of parasites affecting animal productivity around the world. In this group of parasites the nematodes have a remarkable status as the main pathogens causing severe damage to their hosts. Haemonchus contortus and other genera/species of nematodes belonging to the group of trichostrongylids are of major concern because their blood-sucking feeding habits cause anemia that can be severe enough to result in the death of the animals (Macedo Barragán et al., 2009). This group of parasites is widespread in almost all tropical and sub-tropical countries and is considered responsible for deteriorating animal health and productivity.
Chemotherapy as the unique method of control
The most common method used to control ruminant helminthiasis is the use of chemical compounds commercially available as anthelmintic drugs that are regularly administered to animals for deworming; the method is considered simple, safe and cheap (Jackson, 2009). There are several disadvantages in the use of such products, such as their adverse effect on beneficial microorganisms in the soil once they are eliminated with the feces (Martínez and Cruz, 2009). On the other hand, some anthelmintic compounds can remain as contaminants in animal products destined for human consumption, e.g., meat, milk, etc. (FAO, 2002). One of the main concerns in the use of anthelmintic drugs for controlling ruminant parasites is the development of anthelmintic resistance in the parasites, which decreases the efficacy of the drugs (Sutherland and Leathwick, 2011; Torres-Acosta et al., 2011) and threatens the economic sustainability of sheep production (Sargison, 2011). Anthelmintic resistance can reach enormous proportions when parasites develop mutations in their genome against different groups of anthelmintic drugs. Such a phenomenon is known as "multiple anthelmintic resistance" and it is a real threat to the efficacy of commercially available anthelmintics (Taylor et al., 2009; Saeed et al., 2010). This situation has motivated workers around the world to look for alternatives to control these parasites. Searching for plant bio-active compounds with medicinal properties against parasites has gained great interest in order to at least partially replace the use of chemical drugs (Tables 1 and 2). Some forages have been evaluated in the search for potential bio-active compounds against sheep and goat parasitic nematodes, with variable results. However, studies must be intensified, since some limitations in application have been noticed; e.g., toxicity, metabolic disorders and inappropriate applications can cause severe damage and even the death of treated animals (Rahmann and Seip, 2008). Other plants are being investigated as bio-active forages in the control of Haemonchus contortus in lambs, with good/moderate results. For instance, when Wormwood (Artemisia absinthium) was offered to lambs for voluntary intake, the parasitic burden was reduced by almost 50%. Additionally, faecal egg excretion expressed on a dry matter basis was also reduced by 73% in animals fed with the selected plant (Valderrábano et al., 2010). On the other hand, other plants/plant extracts, e.g., Melia azedarach (Chinaberry tree, Indian Lilac), have shown promising results in trials that confirmed not only a very good anthelmintic efficiency, but also no side-effects (Akhtar and Riffat, 1984; Lorimer et al., 1996); plants also contain bio-active enzymes such as cysteine proteases and secondary metabolites such as alkaloids, glycosides and tannins (Athanasiadou and Kyriazakis, 2004). Further in-depth studies need to be undertaken since, even though anti-parasitic properties are being demonstrated, negative effects such as reduction in food intake by animals have been identified, and this should be considered before establishing their use as an alternative method of control (Githiori et al., 2006).
Condensed tannin-rich plants
In recent studies, researchers are reaching beyond the general knowledge about the lethal in vitro activity of plants and of bio-active compounds derived from selected plants against the most important nematode parasites of ruminants. New efforts are being carried out to find practical applications of plants or plant products in the control of ruminant parasitic nematodes, including ways and means of overcoming limitations in their application to animals (Rahmann and Seipa, 2007). Recently in Laos, a reduction in the appearance of nematode eggs in goat feces with cassava foliage supplementation has been demonstrated (Phengvichith and Preston, 2011).
Conclusions
The use of chemical anthelmintic drugs for controlling animal parasitic nematodes is rapidly losing popularity due to a number of disadvantages. Anthelmintic resistance in the parasites is spreading, and the inefficacy of chemical anti-parasitic compounds is threatening animal health. New plants with medicinal properties against parasites of ruminants are being investigated around the world with promising results. In the near future, natural products obtained from plant extracts will likely become a viable alternative for the control of parasites of veterinary importance. When plants/plant extracts are selected for use as anti-parasitic drugs in sheep, particular attention should be given to the fact that the bio-active compound could be found in stems, roots, leaves, flowers, fruits or even in the entire plant. This means that obtaining plant extracts is a laborious and complex process. Also, the mode of extraction and the solvent used can determine the success in isolating the expected bio-active compounds, since a wide variety of compounds can be hidden in the structural parts of the plants, and the only way they can be isolated is through exploring the use of a range of organic solvents. On the other hand, a rigorous effort to identify possible side effects due to the administration of plant extracts should be made before carrying out in vivo assays. It is remarkably important to consider that using plants/plant extracts as the sole method of control is insufficient to control parasitosis in the animals. Therefore, alternating or combining them with other methods of control should be considered as an integrated approach, which would reduce the use of chemical anthelmintic drugs.
Acknowledgments
The authors wish to express their gratitude to Dr. Felipe Torres Acosta (Autonomous University of Yucatan, Mexico) for his valuable comments on this chapter.
"year": 2012,
"sha1": "878c83f81239655b865f3d7d557effb4a02298b6",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/31343",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "008ab82f24848958a0feb65c5b1a732fba7657ee",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
221769689 | pes2o/s2orc | v3-fos-license | An Anatomic Study on the Maxillary Sinus Mucosal Thickness and the Distance between the Maxillary Sinus Ostium and Sinus Floor for the Maxillary Sinus Augmentation
Background and objectives: The average rate of chronic sinusitis after maxillary implantation is approximately 5.1%. However, evidence on predictive risk factors for sinusitis after implantation is lacking. The aim of this study was to perform an anatomic study on the maxillary sinus mucosal thickness (MSMT), the distance between the maxillary sinus ostium and sinus floor (MOD), and the MSMT/MOD ratio as a preoperative risk indicator for sinusitis after maxillary dental implantation. Materials and Methods: Between October 2008 and October 2019, all patients referred to the otolaryngology department were included in this study. A total of 120 patients were enrolled. The 95 patients who received no treatment prior to implantation were classified into Group A, the 16 patients who used antibiotics before implantation were classified into Group B, and the patients who had implants inserted after functional endoscopic sinus surgery were classified into Group C. The MSMT, MOD, MSMT/MOD ratio, anatomical factors associated with ostial obstruction, and the occurrence of postoperative sinusitis were reviewed. Results: There were significant group differences in MSMT (Group A vs. Group B, p = 0.001; Group B vs. Group C, p = 0.003; Group C vs. Group A, p < 0.0001). The MOD showed no significant difference among the three groups. The MSMT/MOD ratio showed significant differences between Groups A and B (p = 0.001), B and C (p < 0.0001), and C and A (p < 0.0001). Conclusions: It is important to check not only the proportion of the maxillary sinus occupied by the lesion, but also the status of the maxillary sinus osteomeatal complex when making therapeutic decisions. In addition, collaboration between dentists and otolaryngologists could improve outcomes in patients with maxillary sinus lesions.
Introduction
The use of dental implants has rapidly increased, and they are being used in almost all dentistry units in the country, resulting in a greater overall number of complications. In particular, low bone density and thin alveolar ridges can lead to various complications in the maxillary sinus [1]. The average rate of chronic sinusitis after maxillary implantation, among 25 studies, was approximately 5.1% [2]. Inflammatory edema in the maxillary sinus, membrane perforation, migration of graft material secondary to membrane perforation, and preexisting chronic sinusitis are known risk factors for postoperative sinusitis after dental implantation [2][3][4][5][6]. However, the risk factors are difficult to assess prior to implant placement in the maxillary sinus, apart from radiologically suspicious sinusitis. Some dentists have been exploring whether implants are appropriate in cases where abnormalities are detected in the maxillary sinus after panoramic radiography or cone beam computed tomography (CBCT). However, the evidence required to establish management guidelines after maxillary dental implantation is lacking. Therefore, we performed an anatomic study on the maxillary sinus mucosal thickness (MSMT), the distance between the maxillary sinus ostium and sinus floor (MOD), and the MSMT/MOD ratio as preoperative risk indicator for sinusitis after maxillary dental implantation, based on 11 years of experience of dentists and otolaryngologists at our hospital.
Materials and Methods
This study and the associated chart review were approved by the institutional review board of our hospital (approval no. KC19RESI0517). All patients referred from our periodontology department to the otolaryngology department between October 2008 and October 2019 for dental implants were included. Before the implant was placed, all patients had a CBCT scan taken. A total of 120 patients were enrolled in this study. Diabetes mellitus, asthma, past history of endoscopic sinus surgery, MSMT (including any cysts or solitary polyps), MOD, MSMT/MOD ratio, obstruction of the maxillary sinus ostium, anatomical factors potentially associated with ostial obstruction (such as paradoxical middle turbinate or Haller cells), and the occurrence of postoperative sinusitis were reviewed.
The patients were divided into three groups. The 95 patients who received no antibiotic treatment prior to implantation were classified into Group A, and the 16 patients who were using antibiotics before implantation were classified into Group B. The remaining nine patients, in whom implants were inserted after functional endoscopic sinus surgery (FESS), were classified into Group C.
All measured parameters are expressed as means ± standard deviation. A normality test was performed, and differences between two groups were analyzed by using Student's t-test or the Mann-Whitney test. Pre- and post-treatment differences were analyzed by using the paired t-test or the Wilcoxon signed rank test. Differences among the groups were analyzed by using the Kruskal-Wallis test. The post hoc Mann-Whitney test was applied, with the significance level established by using Bonferroni's method. A p-value < 0.05 was considered to indicate statistical significance. All statistical analyses were conducted by using SAS software (ver. 9.4; SAS Institute, Cary, NC, USA).
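For readers who want to picture the post hoc scheme outside of SAS, the sketch below illustrates three pairwise Mann-Whitney comparisons judged against a Bonferroni-adjusted significance level. It uses the standard normal approximation of the Mann-Whitney U statistic without tie correction; the group sizes are taken from this study, but the U value shown is a made-up placeholder, not a result of the study.

/* Illustrative sketch (not the authors' SAS code) of pairwise
 * Mann-Whitney tests with a Bonferroni-adjusted alpha, using the
 * normal approximation of U (no tie correction). */
#include <math.h>
#include <stdio.h>

static double mann_whitney_z(double U, double n1, double n2)
{
    double mu    = n1 * n2 / 2.0;                          /* E[U] under H0 */
    double sigma = sqrt(n1 * n2 * (n1 + n2 + 1.0) / 12.0); /* SD[U] under H0 */
    return (U - mu) / sigma;
}

int main(void)
{
    const int    k     = 3;          /* pairwise comparisons A-B, B-C, C-A */
    const double alpha = 0.05 / k;   /* Bonferroni-adjusted level, ~0.0167 */

    printf("per-comparison alpha = %.4f\n", alpha);
    /* U comes from ranking the pooled values of each pair; the value
     * below is a placeholder for Groups A (n=95) and B (n=16). */
    printf("z = %.3f\n", mann_whitney_z(400.0, 95.0, 16.0));
    return 0;
}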
Results
The mean MSMT was 5.68 ± 5.74 mm in Group A, 11.84 ± 8.22 mm in Group B, and 25.3 ± 11.1 mm in Group C. There were significant differences between the groups (Group A vs. Group B, p = 0.001; Group B vs. Group C, p = 0.003; Group C vs. Group A, p < 0.0001). The mean MOD was measured to be 30.33 ± 5.08 mm in Group A, 29.98 ± 7.31 mm in Group B, and 31.35 ± 6.81 mm in Group C; there were no significant differences in the between-group comparisons (Group A vs. Group B, p = 0.804; Group B vs. Group C, p = 0.502; Group C vs. Group A, p = 0.637). The mean MSMT/MOD ratio was calculated to be 0.19 ± 0.19 in Group A, 0.39 ± 0.25 in Group B, and 0.78 ± 0.24 in Group C; there were significant differences between Groups A and B (p = 0.001), B and C (p < 0.0001), and C and A (p < 0.0001) (Figure 1). Sinusitis after maxillary dental implantation was detected in 2 out of 120 cases. Details of the cases with sinusitis complications are provided in Table 1. Anatomical variations in the nasal cavity among the 120 cases were analyzed based on CBCT. Concha bullosa (n = 30, 25.0%), Haller cells (n = 13, 11.0%), and paradoxical curvature of the middle concha (n = 1, 0.8%) were seen in some cases. There were no anatomical variation-related cases of postoperative sinusitis. However, obstruction of the maxillary sinus ostium, which was another risk factor identified by CBCT, was observed in two cases (1.7%). In one case, which did not respond to antibiotics, the MSMT/MOD ratio was 0.6; therefore, FESS was performed, followed by implantation. In another case, the MSMT/MOD ratio was 0.23; therefore, we applied dental implants, but postoperative sinusitis nevertheless occurred.
Discussion
There have been several reports on the relationship between maxillary sinus mucosal thickness and sinusitis after receiving maxillary dental implants [7,8]. Pneumatization of the maxillary sinuses, which shows variation among patients, affects the selection of implant methods. Pneumatization of the maxillary sinus in the edentulous maxillary ridge limits the volume of bone available for implant placement; maxillary sinus augmentation has been used to address this issue [9]. Maxillary sinus augmentation can be performed via a lateral or crestal approach [10,11]. Intraoperative complications associated with the lateral approach include sinus membrane perforation and bleeding, while postoperative complications include sinus graft infections, sinus infections, and sinusitis [12]. The following methods can be used to reduce or minimize complications in the posterior maxilla. With the crestal approach, which is considered less invasive, the application of hydraulic pressure may decrease perforations and increase survival rates [10]. It was previously believed that at least 5 mm of bone below the maxillary sinuses was required for application of the crestal approach [12].
However, in a more recent report, the crestal approach was applied with only 2 mm of alveolar bone available [13]. Sinus augmentation has been performed without graft materials, and a previous study reported no significant difference in short-term implant survival in maxillary sinus augmentation with versus without grafts [14]. The application of short implants may minimize complications. Previous reports compared outcomes between ≤6-mm- and ≥10-mm-long implants, placed after both lateral and transcrestal sinus augmentation, and suggested that placement of short implants may be a more reliable option [15]. With the use of these methods, the success rates of maxillary sinus augmentation and maxillary dental implantation have improved.
However, the risk factors associated with maxillary sinusitis have not been evaluated thoroughly. Ostial obstruction is believed to be one such risk factor, and it is important both to check for maxillary sinus pneumatization and to measure the thickness of the mucous membrane. Chen et al. suggested that, when the height of a polyp, cyst, or mucosal thickening reaches half or more of the maxillary sinus, treatment of sinusitis and preventive FESS are needed [16]. Chan et al. reported that it is necessary to consult an otolaryngologist if mucosal thickening occupies more than one-third of the maxillary sinus [17]. However, there is a lack of clinical evidence supporting these assertions. Therefore, we conducted this pilot study to provide such evidence.
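Read as a screening rule, the two cited thresholds could be encoded as below; this is purely a hypothetical helper of ours, and applying the cut-offs to the MSMT/MOD ratio (a thickness-to-depth ratio) rather than to lesion height directly is a simplification.

```python
# Hypothetical triage helper based on the thresholds cited above:
# Chen et al. [16]: lesion >= 1/2 of the sinus   -> treat / preventive FESS;
# Chan et al. [17]: thickening > 1/3 of the sinus -> otolaryngology consult.
def triage(msmt_mod_ratio: float) -> str:
    if msmt_mod_ratio >= 0.5:
        return "treat sinusitis / consider preventive FESS before implantation"
    if msmt_mod_ratio > 1 / 3:
        return "refer to an otolaryngologist for assessment"
    return "proceed with implantation; monitor ostium patency"

print(triage(0.60))   # the FESS-then-implant case in this study
print(triage(0.23))   # the case that nevertheless developed sinusitis
```

Notably, the second case in this study shows the limit of any ratio-only rule: with a ratio of 0.23 it would pass triage, yet sinusitis occurred because the ostium was obstructed.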
In this study, only 1.6% of patients developed chronic sinusitis after maxillary implantation, compared with an average rate of 5.1% across 25 other studies. We attribute this to close consultation between dentists and otolaryngologists. In all cases where a lesion in the maxillary sinus is identified on preoperative CBCT, dentists refer the patient to an otolaryngologist for assessment. Thus, the occurrence of complications may be lowered by reducing the risk factors before implantation. Moreover, FESS is used if antibiotics confer no therapeutic benefit and the risk of developing chronic sinusitis after implantation is considered high. Our institute appears stricter with respect to the application of surgical procedures (mean MSMT/MOD ratio: 0.78 ± 0.24) than previous reports [16,17]. However, one case with ostium obstruction showed postoperative sinus complications even though the MSMT/MOD ratio was low (0.23). Based on these results, not only the MSMT and MSMT/MOD ratio but also the osteomeatal status may be associated with postoperative sinusitis after dental implantation. Therefore, for patients with maxillary sinus lesions who are unresponsive to antibiotics, FESS may be appropriate depending on the osteomeatal status and MSMT/MOD ratio. Previous studies have focused mainly on the state of the maxillary sinus floor in relation to maxillary implants. The key point of this study is therefore that space-occupying lesions, such as mucosal thickening, cysts, or solitary polyps, should be assessed together with the position of the maxillary sinus ostium and the depth of the maxillary sinus. This also implies that cone-beam computed tomography should be used to image and evaluate the entire maxillary sinus, including the ostium, as an important criterion in surgical planning.
There were some limitations to this study. First, its retrospective nature limits the power of the results, compared to randomized controlled studies. Second, we could not perform routine postoperative CT in all enrolled patients, in consideration of the risk-benefit ratio of radiation exposure and cost of the examination. Third, all patients were seen at a single institute, i.e., our tertiary medical center, so selection bias is possible. Nevertheless, this study, which was characterized by close collaboration between two departments to confirm each patient's prognosis, provides important data that could facilitate future assessments of dental-implantation patients.
Conclusions
In this study, MSMT, MOD, the MSMT/MOD ratio, obstruction of the maxillary sinus ostium, anatomical factors potentially associated with ostial obstruction, and past history of endoscopic sinus surgery were analyzed; significant group differences were noted in MSMT and the MSMT/MOD ratio. These findings suggest that it is important to evaluate the status of the maxillary sinus complex during therapeutic decision-making prior to maxillary dental implantation, to decrease the likelihood of postoperative sinusitis. In addition, collaboration between dentists and otolaryngologists could improve outcomes in patients with maxillary sinus lesions.
"year": 2020,
"sha1": "82e0310321477c19b03259452703862e26a413c9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/56/9/470/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc4ab17f8dea7323c01a098dcfe25e41ebb56051",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119266387 | pes2o/s2orc | v3-fos-license | Temperature and phase-space density of cold atom cloud in a quadrupole magnetic trap
We present studies on the modifications in temperature, number density and phase-space density when a laser-cooled atom cloud from the optical molasses is trapped in a quadrupole magnetic trap. Theoretically, it is shown that, for a given temperature and size of the cloud from the molasses, the phase-space density in the magnetic trap first increases with the magnetic field gradient and then decreases, after attaining a maximum value at an optimum value of the field gradient. The experimentally measured variation in phase-space density in the magnetic trap with the magnetic field gradient shows a similar trend. However, the experimentally measured values of number density and phase-space density are much lower than their theoretically predicted values. This is attributed to the higher experimentally observed temperature in the magnetic trap compared with the theoretically predicted temperature. Nevertheless, these studies can be used to obtain a higher phase-space density in the trap by setting the optimum value of the field gradient of the quadrupole magnetic trap.
I. INTRODUCTION
The first demonstration of magnetic trapping of neutral atoms [1] proved to be instrumental in achieving Bose-Einstein condensation (BEC) of rubidium atoms in a magnetic trap after evaporative cooling [2][3][4][5]. The trapping and cooling of atoms to very low temperature has various applications in atom-interferometry-based precision measurements (of time, acceleration, rotation and force), atom lithography, quantum information, etc. [6][7][8][9][10][11]. For the realization of BEC in neutral atoms, laser cooling is the first stage of cooling, followed by the second stage, i.e. evaporative cooling. For evaporative cooling, various traps such as the magnetic trap [12,13], the optical dipole trap [14,15], or a hybrid of magnetic and optical traps [16] are commonly used. For the trapping of cold neutral atoms, several designs of magnetic traps have been proposed and demonstrated [5,[17][18][19]. In the magnetic trap approach, the higher-energy atoms are evaporated from the trap by the application of radio-frequency radiation, leading to a decrease in the temperature and an increase in the phase-space density. The phase-space density ρ is given as $\rho = n\lambda_{dB}^{3}$, where n is the number density and $\lambda_{dB}$ is the thermal de Broglie wavelength of the atom. After evaporative cooling, the final phase-space density required to achieve BEC must satisfy the condition ρ > 1.202 (in a harmonic trap) [20]. Thus, the initial phase-space density of atoms in the magnetic trap is of significant importance, and it depends upon various parameters during the transfer of the laser-cooled atom cloud from the molasses to the quadrupole magnetic trap. To obtain the maximum value of ρ in the magnetic trap after the atom cloud is transferred from the molasses, parameters such as the temperature and size of the atom cloud in the molasses govern the optimum magnetic field gradient of the trap.
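As a quick numeric illustration of this quantity (our example; the density and temperature below are assumed values, not numbers from this paper), the phase-space density for 87Rb can be evaluated directly:

```python
# Direct evaluation of rho = n * lambda_dB^3 for 87Rb; the density and
# temperature below are assumed values typical of laser-cooled clouds.
import numpy as np

h  = 6.626e-34       # Planck constant (J s)
kB = 1.381e-23       # Boltzmann constant (J/K)
m  = 87 * 1.661e-27  # 87Rb mass (kg)

def lambda_dB(T):
    """Thermal de Broglie wavelength h / sqrt(2*pi*m*kB*T)."""
    return h / np.sqrt(2 * np.pi * m * kB * T)

n, T = 1e16, 100e-6  # assumed density (m^-3) and temperature (100 uK)
rho = n * lambda_dB(T)**3
print(f"lambda_dB = {lambda_dB(T)*1e9:.1f} nm, rho = {rho:.1e}")
```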
In this work, we have studied the changes in the phase-space density of a laser-cooled atom cloud after its transfer from the molasses to a quadrupole magnetic trap. Theoretically, it is shown that, for a given temperature and size of the laser-cooled atom cloud, there is an optimum value of the magnetic field gradient of the quadrupole trap that maximizes the phase-space density of the atom cloud after its capture in the magnetic trap. The experimentally measured number density and phase-space density of the atoms in the magnetic trap show a similar trend in their variation with the magnetic field gradient of the trap. However, the experimentally measured phase-space density values are much lower than the theoretically estimated values. This is attributed to the higher measured temperatures in the magnetic trap compared with the theoretically predicted values. This study is useful for setting a higher initial phase-space density of the cloud in the quadrupole magnetic trap before evaporative cooling. Since adiabatic compression of the quadrupole trap conserves the phase-space density ρ, the optimized value of ρ obtained during quadrupole trap loading (by switching on the optimum field gradient) can be preserved during the adiabatic compression [21]. A higher initial phase-space density in the quadrupole trap can be helpful in reaching BEC degeneracy.
II. THEORY
We first theoretically calculate the phase-space density of the cold atom cloud (87Rb atoms) after it is transferred from the molasses to a quadrupole magnetic trap. The atom cloud in the molasses is assumed to have N atoms at a temperature T_i and size σ_i (i.e. the root-mean-square (r.m.s.) radius of the number density profile of the cloud). The phase-space density of this initial cloud after molasses is given as [5]

$$\rho_i = n_i \left(\frac{h^2}{2\pi m k_B T_i}\right)^{3/2}, \qquad (1)$$

where n_i is the number density in the molasses, m is the mass of the atom, h is Planck's constant and k_B is Boltzmann's constant. The number density profile of the atom cloud in the molasses is assumed to be

$$n_i(r) = n_i(0)\exp\!\left(-\frac{r^2}{2\sigma_i^2}\right), \qquad (2)$$

which results in the relation between the total number of atoms N and the peak number density n_i(0) in the molasses,

$$N = (2\pi)^{3/2}\,\sigma_i^{3}\,n_i(0). \qquad (3)$$

Experimentally, it is standard practice that, after cooling of the atoms in the molasses, the cooling laser beams are switched off and the atoms are optically pumped to a trappable state before capture in the magnetic trap. It is assumed that the rise in the temperature of the atom cloud during the optical pumping is negligible. However, when the atom cloud is captured in the quadrupole magnetic trap, both the temperature and the number density of the trapped cloud are modified by the interaction of the atoms with the field of the trap. This finally leads to modified values of the temperature, number density and phase-space density ρ of the cloud in the magnetic trap as compared with the initial cloud obtained from the molasses. These modified values of the atom cloud parameters in the magnetic trap can be calculated as follows.
We consider a quadrupole magnetic trap whose configuration is shown in Fig. 1, with the gravitational field directed along the negative y-axis. The magnetic field near the center of this trap can be approximately written as B = b[−x/2, −y/2, z] (for the current direction shown in Fig. 1), where b is the field gradient in the axial (z) direction. When this magnetic trap is switched on instantaneously, the atom cloud from the molasses (assumed to be in the trappable state |F, m_F⟩) gains a potential energy from the magnetic trap given as [22,23]

$$U(x,y,z) = g_F m_F \mu_B\, b\sqrt{\frac{x^2}{4}+\frac{y^2}{4}+z^2}, \qquad (4)$$

where m_F is the magnetic hyperfine angular momentum quantum number, g_F is the Landé g-factor and μ_B is the Bohr magneton. On using n_i from equation (2) in equation (4) and integrating over the variables (x, y, z), the total potential energy PE gained by the cloud is found to be proportional to N g_F m_F μ_B b σ_i (equation (5)). After the molasses and optical pumping, once the cold atom cloud is trapped in the magnetic trap and has reached equilibrium with it, the total energy of the atoms in the trap is the sum of the initial kinetic energy and the potential energy gained from the magnetic trap (equations (4) and (5)). In such an equilibrium state, the kinetic energy of the atoms is one third of the total energy of the atom cloud in the quadrupole trap, as predicted by the virial theorem [24]. This gives the temperature T_f of the atom cloud in the trap as

$$\frac{3}{2}k_B T_f = \frac{1}{3}\left(\frac{3}{2}k_B T_i + \frac{PE}{N}\right), \qquad (6)$$

which gives

$$T_f = \frac{T_i}{3} + \frac{\kappa\,\mu_B\, b\,\sigma_i}{k_B}, \qquad (7)$$

where κ = 0.24. Equation (7) is obtained after using g_F m_F = 1 for the state |F = 2, m_F = 2⟩ of 87Rb atoms. Assuming that all the atoms in the molasses are captured in the magnetic trap, the final peak number density n_f(0) in the magnetic trap is related to the number of atoms in the trap by

$$N = n_f(0)\int \exp\!\left(-\frac{g_F m_F \mu_B b\sqrt{x^2/4+y^2/4+z^2} + mgy}{k_B T_f}\right)\mathrm{d}^3r. \qquad (8)$$

For (g_F m_F μ_B b/2) > mg, the above equation is analytically integrable, yielding the expression for the final peak number density for the state |F = 2, m_F = 2⟩ of 87Rb atoms (equation (9)). During actual experiments, due to the finite switching-on time of the current in the magnetic trap coils, a fraction of the atoms in the cloud from the molasses may be lost through fall under gravity as well as through expansion of the cloud. Inefficient optical pumping also contributes to the reduction in the number of atoms in the magnetic trap. Thus, the number of atoms actually trapped in the magnetic trap, N′, is always less than the number of atoms in the molasses, N. Considering these loss processes, and defining ε as the overall efficiency for the transfer of atoms from the molasses to the magnetic trap, one can relate N′ and N as N′ = εN. The peak number density in the magnetic trap can then be re-written with N replaced by N′ = εN (equation (10)). It is important to note that the number density becomes independent of gravity when b ≫ 2mg/μ_B (i.e. ∼31 G/cm for 87Rb). Using the final peak density given by equation (10) and the final temperature T_f given by equation (7), the final peak phase-space density of the atom cloud in the quadrupole magnetic trap is

$$\rho_f(0) = n'_f(0)\left(\frac{h^2}{2\pi m k_B T_f}\right)^{3/2}. \qquad (11)$$

The aim of these calculations is to find the value of ρ_f(0) in the quadrupole magnetic trap and the value of the field gradient b which results in the maximum value of ρ_f(0), when the quadrupole magnetic field is switched on to trap the atom cloud from the molasses at temperature T_i and size σ_i.
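Where κ comes from is not spelled out here, but the virial step can be traced as follows (our reading, assuming g_F m_F = 1, with c the dimensionless Gaussian average of the trap potential over the initial cloud):

```latex
% Origin of kappa in equation (7) -- our reading, assuming g_F m_F = 1:
% the Gaussian average of the trap potential is of order mu_B * b * sigma_i.
\begin{align*}
  \langle U \rangle
    &= \mu_B b \,\Big\langle \sqrt{x^2/4 + y^2/4 + z^2} \Big\rangle
     = c\,\mu_B b\,\sigma_i , \\
  \tfrac{3}{2}k_B T_f
    &= \tfrac{1}{3}\Big(\tfrac{3}{2}k_B T_i + \langle U \rangle\Big)
  \;\Longrightarrow\;
  T_f = \frac{T_i}{3} + \frac{2c}{9}\,\frac{\mu_B b\,\sigma_i}{k_B},
  \qquad \kappa \equiv \tfrac{2c}{9} = 0.24 .
\end{align*}
```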
Figure 2 shows the calculated variation in the temperature T_f (equation (7)), the final peak number density n'_f(0) (equation (10)) and the final peak phase-space density ρ_f(0) (equation (11)) in the magnetic trap with the quadrupole field gradient b, for different values of the temperature T_i and size σ_i of the atom cloud in the molasses.
With increase in b, ρ_f(0) first increases and then decreases, after attaining a maximum value at a certain field gradient. The reduction in ρ_f(0) beyond the maximum is due to the increase in the temperature of the cloud in the magnetic trap and the saturation of the number density with b. The temperature T_f increases with b because of the increase in the potential energy gained by the cloud from the trap. For lower values of b, the number density in the magnetic trap is low, which results in lower values of ρ_f(0). Therefore, a maximum in ρ_f(0) is obtained in its variation with the field gradient b.
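This competition can be sketched numerically as below. The gravity-free Boltzmann estimate n_f(0) ≈ N′(μ_B b)³/[32π(k_B T_f)³] is our own simplification (valid well above ~31 G/cm where gravity is negligible), and the cloud parameters are illustrative, so only the shape of the curve, not the absolute scale, should be trusted:

```python
# Qualitative sketch of rho_f(0) vs gradient b: T_f from equation (7) and
# a gravity-free Boltzmann density estimate (our simplification).
import numpy as np

h, kB, muB = 6.626e-34, 1.381e-23, 9.274e-24
m = 87 * 1.661e-27                 # 87Rb mass (kg)

N_trap  = 1e8                      # assumed trapped atom number
T_i     = 188e-6                   # molasses temperature (K), illustrative
sigma_i = 0.91e-3                  # molasses size (m), illustrative
kappa   = 0.24

b = np.linspace(5, 300, 500) * 1e-2        # gradient: G/cm converted to T/m
T_f   = T_i / 3 + kappa * muB * b * sigma_i / kB
n_f   = N_trap * (muB * b)**3 / (32 * np.pi * (kB * T_f)**3)
rho_f = n_f * (h**2 / (2 * np.pi * m * kB * T_f))**1.5

i = int(np.argmax(rho_f))
print(f"optimum b ~ {b[i]*1e2:.0f} G/cm, T_f ~ {T_f[i]*1e6:.0f} uK, "
      f"rho_f ~ {rho_f[i]:.1e}")
```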
III. EXPERIMENTAL
Experiments have been performed to verify the theoretical predictions discussed in the previous section. A schematic diagram of the setup is shown in Fig. 3(a), and more details are described in ref. [25]. This setup consists of a vapor-cell MOT (VC-MOT), which is formed in a chamber at ∼1 × 10⁻⁸ Torr pressure (with Rb vapor), and an ultra-high-vacuum MOT (UHV-MOT), which is formed in a glass cell at ∼6 × 10⁻¹¹ Torr pressure. The UHV-MOT is loaded by transferring atoms from the VC-MOT using a red-detuned push laser beam. The atom cloud in the UHV-MOT is used for the magnetic trapping, as the UHV environment in the glass cell is well suited for a long lifetime of atoms in the magnetic trap. To transfer atoms from the VC-MOT to the UHV-MOT, a red-detuned push laser beam (detuning δ/2π = −1.0 GHz with respect to the peak of the (5S₁/₂, F = 2) → (5P₃/₂, F′ = 3) transition of the 87Rb atom) is focused on the VC-MOT. The duration and sequence of the various stages from VC-MOT formation to magnetic trapping and detection are shown schematically in Fig. 3(b). Typically, the numbers of 87Rb atoms that can be obtained in the VC-MOT and UHV-MOT in this setup are ∼1 × 10⁸ and ∼2 × 10⁸, respectively, after setting appropriate values of the various parameters of the setup. Atoms in the UHV-MOT are trapped in the magnetic trap after they are cooled in the compressed-MOT and molasses stages. The atoms in the UHV-MOT are kept in a compressed UHV-MOT for a ∼20 ms duration, implemented by increasing the detuning of the UHV-MOT cooling laser beams to ∼50 MHz to the red side of the cooling transition peak. The compressed-MOT stage lowers the temperature as well as increases the density of the atom cloud in the UHV-MOT, which is useful for loading the magnetic trap. After the compressed UHV-MOT, these cold atoms are kept in the molasses for a variable duration (3–9 ms) to further lower the temperature of the atom cloud for magnetic trapping. The atoms cooled in the optical molasses are then optically pumped to the 5S₁/₂ |F = 2, m_F = 2⟩ state for trapping in the quadrupole magnetic trap. The sequence of stages shown in Fig. 3(b) is experimentally accomplished using electronic pulses generated by a Controller system to control several acousto-optic modulators (AOMs), power supplies and switching circuits. This Controller system is operated through a PC and a LabVIEW program. The electronic pulses from the Controller system are also used to generate probe laser pulses (using AOMs) and to trigger detectors such as the CCD camera for the characterization of the cold atom cloud in the MOTs and the magnetic trap.
To perform the optical pumping of atoms to the 5S₁/₂ |F = 2, m_F = 2⟩ state, small parts of the cooling and re-pumping laser beams (with powers of ∼2 mW in each part) are mixed, and the combined beam is passed through an AOM in the double-pass configuration. The output of this AOM (∼500 µW; peak beam intensity ∼1.6 mW/cm²), called the optical pumping beam, is aligned along one of the UHV-MOT beams. The polarization of the optical pumping beam is made circular using a quarter-wave plate. This circularly polarized optical pumping beam (500 µs duration), in the presence of a small bias field (∼2 G, ∼1.5 ms duration), transfers the laser-cooled atoms from the molasses to the 5S₁/₂ |F = 2, m_F = 2⟩ state. The optical pumping beam parameters (power and duration) and the bias field parameters (strength and duration) were varied and the corresponding variation in the number of atoms in the trap recorded; finally, these parameters were set to obtain the maximum number of atoms in the quadrupole magnetic trap. A pair of water-cooled quadrupole coils (UMQC in Fig. 3(a)) has been used both for the UHV-MOT and for magnetic trapping. To switch on the current in these quadrupole coils, an IGBT (insulated-gate bipolar transistor) based switching circuit has been used, which gives a current rise-time in the coils of ∼2.5 ms. The current switch-off time of this circuit is much shorter (∼100 µs) than the rise-time. The switching circuit receives the trigger pulse from the Controller (shown in Fig. 3(b)) to switch the current in the coils for UHV-MOT formation as well as for magnetic trapping. For magnetic trapping, a much higher current in these quadrupole coils is used in the experiments. The number of atoms in the magnetic trap has been estimated using the well-known fluorescence imaging method [26], which uses a resonant probe laser pulse of short duration (∼100 µs) to illuminate the trapped atom cloud and collect the emitted fluorescence on a CCD camera through imaging optics. The temperature measurement for the atom cloud in the MOT or magnetic trap involves a similar imaging method, but the imaging is done during free expansion of the cloud [27] after its release from the MOT or magnetic trap. This free-expansion method has been used by us to estimate the temperature of the cloud in the molasses and in the magnetic trap. The temperature of the atom cloud in the magnetic trap has also been obtained from the size of the cloud in the trap. The temperature values obtained by these two methods show reasonable agreement, as discussed in the next section of this paper.
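The free-expansion analysis reduces to a straight-line fit of σ² against t²; the sketch below (our illustration with hypothetical image sizes, not the measured data) recovers the temperature from σ(t)² = σ₀² + (k_B T/m)t²:

```python
# Free-expansion thermometry sketch: sigma(t)^2 = sigma0^2 + (kB*T/m)*t^2,
# so the slope of sigma^2 vs t^2 gives kB*T/m. Values below are hypothetical.
import numpy as np

kB = 1.381e-23          # Boltzmann constant (J/K)
m  = 87 * 1.661e-27     # 87Rb mass (kg)

t     = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) * 1e-3           # times (s)
sigma = np.array([0.965, 1.008, 1.077, 1.167, 1.272]) * 1e-3  # rms sizes (m)

slope, intercept = np.polyfit(t**2, sigma**2, 1)
T = m * slope / kB
print(f"T = {T*1e6:.0f} uK")   # ~300 uK for these illustrative numbers
```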
IV. RESULTS AND DISCUSSION
In the experimental studies, the temperature and number of atoms in the cloud in the quadrupole magnetic trap have been estimated only after the atom cloud from the molasses and optical-pumping stages is trapped in the quadrupole trap and reaches a nearly steady-state equilibrium with it. Fig. 4 shows the measured temporal variation in the CCD counts (proportional to the number of atoms) of the cloud image taken by collecting the probe-induced fluorescence from atoms in the quadrupole magnetic trap. The time shown in the graph is the delay in recording the image of the cloud after switching on the current in the quadrupole magnetic trap coils (current ∼13 A). It is evident from the figure that, after an initial sharp fall, the CCD counts become nearly constant for durations > 50 ms. Thus, the number of atoms measured after a ∼50 ms duration corresponds to the number actually trapped in the magnetic trap. The sharp initial fall in Fig. 4 is due to the fast removal of untrappable atoms from the trap. Fig. 5(a), which shows the temperature of the atoms in the magnetic trap as a function of time spent in the trap, also reveals that after a ∼50 ms duration the temperature of the atom cloud does not change considerably. These results thus suggest that, after the atoms have spent > 50 ms in the magnetic trap, measurements of the number, temperature and density of the atom cloud are appropriate for characterizing the cloud in the trap. The temperature of the atom cloud in the magnetic trap has been measured by two independent methods. The first method is based on the measurement of the full width at half maximum (FWHM) size R_FWHM of the atom cloud in the magnetic trap for a given field gradient b, using the known relation between cloud size and temperature in a quadrupole trap [28,29] (equation (12)); here T^S_MT denotes the temperature of the atom cloud in the magnetic trap as inferred from the size of the cloud. The temperature measured by this method is shown by squares in Fig. 5. The second method is the free-expansion method [27], in which images of the atom cloud released from the magnetic trap are recorded at different times during the expansion of the cloud, and the temperature T^FE_MT is estimated from the rate of change of the size of the cloud. The temperature measured by this method is shown by crosses in Fig. 5. Both temperature measurement methods were tested on a cloud having the same initial temperature and size after the molasses (T_i ∼ 188 µK and σ_i ∼ 0.91 mm), and the temperatures obtained from the two methods agree reasonably well. Fig. 5(a) shows the variation in the temperature of the atom cloud in the magnetic trap with time, whereas Fig. 5(b) shows its variation with the field gradient b. Fig. 5(a) shows that the temperature does not change considerably with time spent in the magnetic trap. However, from Fig. 5(b), it can be noted that the measured temperature of the cloud in the magnetic trap is higher than the temperature expected from equation (7) of the theory (shown by the straight line in Fig. 5(b)). These results are similar to those of Stuhler et al. [24], who also reported measured temperatures in a magnetic trap higher than theoretically expected. This difference between the experimentally measured and theoretically expected temperatures could be due to several reasons.
One of these could be a mismatch between the center of the atom cloud from the molasses and the center of the magnetic trap. Fig. 6 shows the measured variation in the number density and phase-space density of the atom cloud in the magnetic trap with the quadrupole field gradient b, for different values of the initial temperature T_i and size σ_i of the cloud in the molasses. Here, the peak number density has been estimated by measuring the number of atoms and the size of the atom cloud in the magnetic trap, with both obtained from the fluorescence image of the trapped cloud. The temperature was obtained from the size of the cloud in order to determine the peak phase-space density of the trapped cloud shown in Fig. 6. It is evident from these results that the measured value of ρ in the magnetic trap also first increases with b and then decreases, after attaining a maximum value at an optimum b. Thus, the measured number density and phase-space density show a similar trend in their variation with b to that predicted by theory (Fig. 2). However, as one can note from the data in Fig. 2 and Fig. 6, the measured values of the number density and phase-space density in the magnetic trap are significantly lower than their theoretically predicted values. We attribute this difference in both parameters (n and ρ) to the higher measured temperature compared with the theoretically predicted temperature. Nevertheless, there exists an optimum field gradient at which the phase-space density in the magnetic trap is experimentally maximized.
In the theory, equations (10) and (11), we have assumed that the number actually trapped in the magnetic trap, N′, is independent of the field gradient b. In practice, however, N′ changes with b, since ε depends upon the gradient b and on the switching time of the magnetic trap needed to reach the set value of b; this is due to the escape of more energetic atoms from the trap at low values of b, and has been observed experimentally during the measurements (as shown in Fig. 7). However, this variation of N′ with b cannot account for the difference between the theoretical and experimental values of the phase-space density (as well as the number density). Thus, we believe that the pronounced difference between the theoretical and experimental values of the phase-space density (and number density) is due to the difference between the theoretical and experimentally measured temperatures of the atom cloud in the magnetic trap. This is because both the number density and the phase-space density are highly non-linear functions of the temperature.
V. CONCLUSION
We have studied the modifications in the temperature, number density and phase-space density of a cold atom cloud when it is transferred from the molasses to a quadrupole magnetic trap. It has been shown theoretically that, for a given temperature and size of the atom cloud in the molasses, there is an optimum value of the magnetic field gradient which results in the maximum phase-space density of the atom cloud in the quadrupole magnetic trap when the molasses cloud is trapped. The experimentally measured variation in the phase-space density in the magnetic trap with the magnetic field gradient showed a similar trend, with a maximum value of phase-space density observed at an optimum field gradient. However, the experimentally measured values of number density and phase-space density are much lower than their theoretically predicted values. This has been attributed to the higher values of experimentally observed temperature in the magnetic trap compared with the theoretically predicted temperature. These results nonetheless guide us in choosing the appropriate value of the magnetic field gradient to be switched on for magnetic trapping of the cold atom cloud from the molasses. Further, the results of this study are useful for setting a higher initial phase-space density of the cloud in the magnetic trap, which may be helpful in evaporative cooling towards BEC.
It can be noted that the phase-space density in the quadrupole trap in our setup (∼10⁻⁹) is much lower than the phase-space density in the magnetic trap (∼10⁻⁷) of some recent BEC experiments [30,31]. This is due to the much larger number of atoms (∼10¹⁰) used by these groups as compared with the number of atoms used by us. By using a larger number of atoms and lowering the temperature in the quadrupole trap, it will be possible to improve the phase-space density in our quadrupole trap.
"year": 2014,
"sha1": "6e66c40a220a318233e1035f0628a59259fd22cc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1401.7165",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6e66c40a220a318233e1035f0628a59259fd22cc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
258735081 | pes2o/s2orc | v3-fos-license | Quasi-monoenergetic carbon ions generation from a double-layer target driven by extreme laser pulses
High-quality energetic carbon ions produced via laser-plasma interaction have many applications in tumor therapy, fast ignition and warm dense matter generation. However, the beams achieved in current experiments are still limited by either a large energy spread or a low peak energy. In this paper, a hybrid scheme for the generation of quasi-monoenergetic carbon ions is proposed, in which an ultra-intense laser pulse irradiates a double-layer target. Multi-dimensional particle-in-cell (PIC) simulations show that the carbon ions are first accelerated via the laser-piston mechanism in the front carbon layer and then further accelerated by the Coulomb repulsion force in the attached neon target. Since the electrons are bunched synchronously in the longitudinal and transverse directions by radiation reaction during the whole acceleration process, a quasi-monoenergetic carbon ion beam is eventually produced. In the following stage, the neon target provides the Coulomb field required for the continuous acceleration of the carbon ions, which helps to prevent the carbon ion layer from diffusing. It is demonstrated that quasi-monoenergetic carbon ions with a peak energy of 465 MeV u⁻¹, an energy spread of ∼13%, a divergence of ∼15°, and a laser-to-ion energy conversion of 20% can be achieved using a laser pulse with an intensity of 1.23 × 10²³ W cm⁻². An analytical model is also proposed to interpret the carbon ion acceleration, which is fairly consistent with the PIC simulations.
Introduction
Laser-driven quasi-monoenergetic ion beams have important applications in high-energy-density physics [1], medical therapy [2,3] and inertial confinement fusion [4,5], owing to their unique properties of ultrashort duration and localized energy deposition. For example, it has been found that quasi-monoenergetic carbon ions can lower ignition energies by about 25% compared with protons with a Maxwellian distribution [6]. In particular, a laser-driven quasi-monoenergetic ion beam can solve the problem of nonuniform heating of warm dense matter (WDM) samples caused by an exponential energy spectrum [7]. In addition, a high-quality ion beam is an effective way to kill cancer cells because of the Bragg peak. Notably, tumor therapy requires ion energies of more than 400 MeV u⁻¹ for carbon ions [8], which have a higher relative biological effectiveness than protons. At present, a novel radiotherapy concept, FLASH, which can reduce side effects on healthy cells through very high dose rates, is receiving increasing attention [9,10], and FLASH radiotherapy with carbon ions has been demonstrated in vivo [11]. Thus, the generation of high-quality carbon ions is in high demand.
In the past few decades, various acceleration mechanisms have been put forward for quasi-monoenergetic ion acceleration from the interaction of a laser pulse with solid targets. The main regimes include radiation pressure dominant (RPD) acceleration, relativistically induced transparency (RIT) acceleration and Coulomb repulsion (CR) acceleration. In RIT [12][13][14][15], although the ion beams are accelerated quasi-monoenergetically in the early evolution, the energy spectrum usually broadens quickly [16]. Moreover, a double-layer target in the CR regime has been demonstrated to be an effective way to obtain monoenergetic ion beams as well [17][18][19][20]; however, due to the radial distribution of the Coulomb field produced by the repulsion of the ion core, the ion divergence is generally large. The RPD regime also promises the generation of high-energy quasi-monoenergetic ion beams with a high conversion efficiency [21][22][23][24], although it places stringent requirements on the matching conditions between laser pulse and target. In recent years, hybrid ion acceleration has attracted great attention and has been demonstrated in theory and experiments [18,[25][26][27]. For instance, a quasi-monoenergetic proton beam can be generated by combining RPD and laser wakefield acceleration [28]. Experimentally, a near-100 MeV proton beam has been obtained by a hybrid mechanism combining RPD and target normal sheath acceleration using a linearly polarized laser pulse [29][30][31]. However, the laser-to-ion energy conversion efficiency is typically only a few percent, and the measured energy spectra are typically Boltzmann-like. Moreover, the maximum energy achieved in experiments [32] is limited to ∼48 MeV u⁻¹ for carbon ions. These values are still far from the demands of the particular applications.
With the rapid development of laser technology [33,34], laser-plasma interaction has entered the extreme-laser regime. Nowadays, peak laser intensities of 10²³ W cm⁻² have been achieved in the laboratory [35]. Meanwhile, high-power laser facilities such as ELI [36], XCELS [37] and SEL [38] are expected to deliver laser powers above one hundred PW. With such extremely intense laser pulses, the influence of quantum electrodynamics effects must be considered, including pair production and radiation reaction (RR) [39][40][41][42], which can strongly influence the motion of electrons and hence the ion acceleration. A key question is how these effects influence laser-driven ion acceleration and how quasi-monoenergetic ion beams can be obtained with extreme laser pulses. Over the past years, RR has been studied extensively in the laser-plasma community [43][44][45][46]. It has been found that RR is quite significant in near-critical-density plasma [47], and has a larger impact on proton acceleration for a linearly polarized laser pulse than for a circularly polarized one, because the RR force is increased by four orders of magnitude by the J × B-driven longitudinal oscillations [48][49][50]. Recently, it has been reported that when a circularly polarized drive laser with an intensity of 8.56 × 10²³ W cm⁻² collides with another intense linearly polarized laser pulse of the same intensity, a GeV monoenergetic proton beam can be produced via RPD and a quantum radiative compression method [51].
In this paper, we propose a hybrid acceleration regime that combines the laser piston (LP) and CR. In this regime, a linearly polarized laser pulse with an intensity of about 10²³ W cm⁻² is incident on a carbon-neon double-layer target. At first, the carbon ions are accelerated to high energies through the LP, while the neon target, acting as a buffer layer together with RR, prevents the spatial diffusion of the carbon ions and electrons. Subsequently, the carbon ions are further accelerated by the CR while the electrons are bunched by the radiation reaction force, so that a higher-density carbon and electron layer can propagate over a longer distance. During the whole interaction, the carbon ions stay in the acceleration phase, and the radiation reaction effects play a positive role in the acceleration and confinement of the ion beam. Finally, a quasi-monoenergetic carbon ion beam can be generated with a peak energy of ∼450 MeV u⁻¹, which may find wide applications in tumor therapy, fast ignition and WDM generation.
Theoretical model
The schematic diagram of the scheme is shown in figure 1(a). Figures 1(b)-(d) show the number density evolution of the carbon and neon ions in the double-layer target (the neon ions (orange) are shown at t = 15T₀, 27T₀ and 30T₀, respectively; the blue curve is the on-axis carbon density and the black curve the on-axis electron density). A one-dimensional model is developed to describe the whole acceleration process, which can be divided into two stages. The first stage is described by the LP: a carbon foil irradiated by a linearly polarized laser is pushed forward by the radiation pressure, because the laser pressure is much greater than the thermal pressure when the laser intensity is significantly above the relativistic threshold [52], as shown by the green dots in figures 1(b) and (c). The second stage is described by a double-layer CR model [53,54]. The CR arises from the excess of positive charge near the end of the laser pulse, when the carbon ions pass through the neon target, inducing a space-charge separation field. Meanwhile, the electrons are bunched transversely and longitudinally by the radiation reaction force, which helps to lower both the ion divergence and the energy spread. When the ultra-intense laser pulse expels the electrons from the neon target, the remaining neon ions produce a strong CR electric field that can accelerate the carbon ions. The presence of the second, neon target prevents the relativistic transparency of the ultra-thin carbon target and prolongs the carbon ion acceleration. Besides, the spatial diffusion of the carbon ions can also be suppressed, forming a tight ion layer moving forward with the laser pulse. During this stage, the drive laser pulse pushes the electrons out of the neon target, while the neon ions receive less impetus and remain inside the target, as shown in figures 1(b)-(d).
In order to achieve stable acceleration of the carbon ions, an appropriate thickness of the neon target is required. On the one hand, the thickness cannot be too small, so as to ensure sufficient time for the laser to interact with the neon target; otherwise, the interaction enters the relativistic transparent regime, in which electrons and ions diffuse in space rapidly. Therefore, the minimum thickness of the neon target should satisfy L ⩾ ct₁, where t₁ is the time of relativistic transparency for the single-layer carbon target [55] (equation (1)); here N = n_e/n_c is the normalized electron density, n_c = m_eω₀²/(4πe²) is the critical density, a₀ = eE₀/m_ecω₀ is the normalized laser amplitude, ω₀ is the laser frequency, τ is the laser pulse duration, l is the target thickness, C_s ≅ (Zm_ec²a₀/M_i)^(1/2), and M_i is the ion mass. On the other hand, the thickness cannot be too large, which requires L ⩽ 20λ in the simulations, guaranteeing that the carbon layer passes through the neon target and is further accelerated by the CR force. Moreover, in the double-layer CR model, the optimal thickness of the accelerated ion layer satisfies the condition of [53] (equation (2)). In the LP stage, the radiation pressure depends on the laser reflectance [56]: denoting the relative amplitude of the reflected wave by ρ and the frequencies of the reflected and incident laser pulses by ω′ and ω, respectively, the radiation pressure P follows from the momentum balance of the incident and reflected waves. Assuming complete reflection of the laser light by the foil, the foil motion is governed by the light-sail equation of [52], where p is the momentum of the ions and c is the speed of light in vacuum. In addition, the ion energy in the CR stage can be estimated by ε_CR = ∫eE_c dx, where the CR electric field E_c [57] is

$$E_c = \frac{a_1}{1 + a_2 t^2},\qquad a_1 = 2\pi Z_i e n_i L,\qquad a_2 = \frac{2\pi n_i Z_i^2 e^2}{M_i},$$

with Z_i and n_i the ion charge and density, respectively. In our scheme, the energetic carbon ions cross the second layer of the neon target, and the CR is mainly provided by the neon ions behind the carbon layer. It should be noted that a portion of the electrons pass through the second layer together with the laser pulse, while some remain inside the target, as shown in figure 1(d), which diminishes the CR effect. Here, we let σ denote the fraction of electrons expelled, and the ion energy obtained after the two acceleration stages is approximated by the sum of the LP and CR contributions (equation (5)). We know that under extremely intense laser fields a large number of photons are radiated during the acceleration, so the RR effects can no longer be ignored. Simulations show that the electrons are bunched longitudinally and transversely by the RR, so that n_e increases instead of decreasing, which slows down the electron and ion diffusion. On the one hand, the incident laser interacts with the electrons in the target and pushes the opaque carbon layer forward in stage I. On the other hand, the laser pulse passes through and moves away from the electrons remaining inside the target in stage II, so the incident laser no longer interacts with them. Thus the RR effects mainly affect the electrons remaining in the target in stage I and the electrons moving forward outside the target with the carbon ions in stage II.
With the RR effects considered, the growth trend of the maximum carbon ion energy remains the same, but the energy spectrum is compressed, resulting in the generation of a quasi-monoenergetic carbon ion beam.
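Before turning to the simulations, one closed-form consequence of the CR field above is worth noting (our estimate, not from the paper, assuming the carbon ions are already relativistic in stage II, so that dx ≈ c dt):

```latex
% Asymptotic CR energy gain implied by E_c = a_1 / (1 + a_2 t^2),
% assuming v ~ c throughout stage II (our estimate, not from the paper).
\varepsilon_{CR} = \int_0^{\infty} e E_c \,\mathrm{d}x
  \approx e a_1 c \int_0^{\infty} \frac{\mathrm{d}t}{1 + a_2 t^{2}}
  = \frac{\pi\, e\, a_1\, c}{2\sqrt{a_2}} .
```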
Results and discussion
To investigate the carbon ion acceleration process and the electron dynamics, we carry out two-dimensional particle-in-cell (PIC) simulations with the open-source code EPOCH [58]. The simulation box is x × y = 50 µm × 15 µm, sampled by 10 000 × 750 grid cells with 25 macro-particles per cell for all species. The boundary conditions are absorbing for both particles and fields. A p-polarized laser pulse is incident normally from the left side of the simulation box and has a Gaussian profile a₀exp(−y²/w₀²), where w₀ = 6.5 µm is the laser focal spot size and a₀ = 300 is the normalized laser amplitude, corresponding to a laser intensity of 1.23 × 10²³ W cm⁻². The laser central wavelength is λ₀ = 1 µm. The temporal profile of the laser follows a trapezoidal distribution (rising-plateau-falling) with a duration of 17T₀ (1T₀−15T₀−1T₀), where T₀ ≈ 3.33 fs is the laser period. The target is a composite one consisting of a front layer of carbon ions and a following layer of neon ions. The carbon layer is located at x = 10λ₀, with an initial electron density of n_e1 = 660n_c and a thickness of d₁ = 100 nm. The parameter setting satisfies the condition E_L > 2πen_e l, ensuring that the carbon ions can be accelerated effectively by the laser beam and that the laser pulse can interact with the neon target. The density of a gas target depends mainly on the gas jet pressure and temperature. In our scheme, the underdense neon target has a length of 10λ₀, greater than the minimum target thickness L_min ≈ 3.7λ₀ from equation (1), and a density of n_e2 = 10n_c, which can be realized by lowering the temperature or increasing the pressure of the gas target in the laboratory.
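The quoted intensity and densities follow from textbook conversions (nothing below is specific to the EPOCH setup):

```python
# Textbook conversions for a linearly polarized pulse at lambda = 1 um:
# I[W/cm^2] ~ 1.37e18 * a0^2 / lambda_um^2, n_c[cm^-3] ~ 1.1e21 / lambda_um^2.
lam_um, a0 = 1.0, 300
I   = 1.37e18 * a0**2 / lam_um**2
n_c = 1.1e21 / lam_um**2
print(f"I   = {I:.2e} W/cm^2")      # ~1.23e23, matching the quoted intensity
print(f"n_c = {n_c:.2e} cm^-3")     # so n_e2 = 10*n_c ~ 1.1e22 cm^-3
```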
The red curve in figure 2(a) shows the time evolution of the carbon ion energy spread Δε/ε_peak, where ε_peak is the peak energy and Δε is the full width at half maximum of the quasi-monoenergetic spectrum. In stage I, the carbon ions are first accelerated in a quasi-monoenergetic manner. Besides, we notice a small peak between 10T₀ and 20T₀ in figure 2(a), which originates from the development of transverse instabilities. As the relativistic laser pulse irradiates the thin foil, its strong radiation pressure pushes the foil forward at t = 15T₀. Meanwhile, the Rayleigh-Taylor-like (RT-like) instability develops very rapidly, since the laser pulse has a Gaussian transverse intensity distribution in our scheme. At the later time t = 18T₀, the foil damage from the development of transverse instabilities leads to a deterioration of the energy spread; Δε becomes larger and Δε/ε_peak is correspondingly increased. As time goes on, Δε improves only slightly, but ε_peak increases rapidly under the action of the laser radiation pressure, so Δε/ε_peak soon falls again, producing the small spike in the temporal evolution of Δε/ε_peak. Furthermore, the subsequent acceleration by CR in stage II leads to ion diffusion. Such a deterioration is suppressed when the effects of RR are considered. As the RR force increases in stage II, RR effects become important, and the ion energy spread is determined by the competition between the CR and the RR, as displayed by the curve near the dashed line in figure 2(a). Finally, the carbon ions can propagate stably forward as a quasi-monoenergetic beam, since the RR dominates over the CR force. We assume that the electrons of the neon target are completely expelled when the carbon ions just pass through the neon target at the end of the laser pulse, and take σ = 0.34 from our simulations. Figure 2(b) shows the comparison between the theoretical model (equation (5)) and the simulations. As expected, the maximum energy of the carbon ions agrees well with the theoretical predictions. The evolution of the energy conversion efficiency η is illustrated in figure 2(c): it increases rapidly in stage I and gradually saturates at a maximum of over 20% in stage II. Correspondingly, the energy spread is reduced significantly from 100% to 13.7%, with a final central energy of approximately 5.4 GeV, as seen in figure 2(d). In addition, we calculated the divergence angle of the carbon ions within the laser focal spot, defined by arctan(p_y/p_x), indicating a carbon ion beam with a divergence angle of ∼15°.
In order to explore the ion acceleration process, we perform additional simulations with a single-layer carbon target (SLT) using the same laser parameters. Figure 3 presents the spatial distributions of the background electrons and ions after irradiation by the laser beam. As seen from figures 3(a) and (b), the carbon target remains opaque in the double-layer target (DLT) case, and the LP dominates in the first stage. The electron diffusion is also prevented by the buffering effect of the neon target and by RR. In particular, the spatial distribution of the carbon ions exactly follows that of the high-energy electrons, forming a compact layer of small thickness. The reconstructed phase space (p_y−p_x) illustrates that the carbon ion layer has a small spatial divergence and is stably accelerated in the first stage (dashed circle marked A in figure 3(d)). When the laser pulse ends at t = 27T₀, the carbon layer has just passed through the neon target. At this moment, the electron density and size of the carbon layer are 236.3n_c and 0.5λ₀, respectively, satisfying the optimal thickness condition for CR (equation (2)). Hence, the carbon ions are continuously accelerated in the second stage, as shown by the red curve in figure 3(f). As the CR dominates, some high-energy electrons move forward with the carbon ions, while the remaining low-energy electrons are focused in the neon target. These electrons are bunched and trapped in the laser field by RR and radiate high-energy photons, as seen in figure 3(c). Meanwhile, the electric field is enhanced considerably and the carbon layer is therefore tightened. In the SLT case, although the carbon ions are accelerated in a quasi-monoenergetic manner resembling the DLT case, they undergo spatial diffusion earlier and become relativistically transparent to the incident laser, as shown by the blue curves in figures 3(a), (e) and (f). Finally, the energy spectrum of the carbon ions takes a Boltzmann-like distribution [16]. By comparison, the energy spread of the carbon ions in the DLT case is significantly reduced. Figure 4(a) shows the evolution of the carbon ions (red) and neon ions (black) over time. Because of the extremely high velocity gained in the first stage, a large number of high-energy carbon ions pass through the underdense neon target. When the laser pulse passes through the neon target, CR plays an important role due to the excess of positive charge. The carbon ions are divided into forward and backward parts, and the forward carbon ions are compressed into an over-dense layer of small longitudinal size. Meanwhile, the electrons are also divided into a high-energy part outside the target and a low-energy part inside the target.
When the laser intensity is over 10²³ W cm⁻², the effects of RR on the electrons must be considered. Here, the equation of motion of an electron can be written in principle as dp/dt = F_L + F_RR, with F_L the Lorentz force and F_RR the radiation reaction force. The importance of quantum RR can be characterized by the quantum nonlinearity parameter χ_e = (γ_e/E_s)[(E + β × B)² − (β · E)²]^(1/2), where E_s = m_e²c³/(eħ) is the Schwinger field. It has been demonstrated that the quantum corrections cannot be ignored when χ_e > 0.1 [59,60]. In order to identify the role of RR effects in the ion acceleration process in our scheme, we switch off the RR artificially in additional simulations for comparison. In particular, we trace the electrons in the two cases with and without RR, as shown in figures 5(a) and (b). The electrons remain in the laser field for a longer time when RR is considered, while they diffuse in space quickly when RR is ignored. As a result, a tighter electron bunch is formed in both the longitudinal and transverse directions. Meanwhile, the RR effects outside the target become more and more significant. This trend is also clearly reflected in the RR force, as shown in figure 5(e). To explain the effects of RR, we also calculate the CR force F_CR = 2πZ_ie²n_iL/(1 + 2πn_iZ_i²e²t²/M_i). Figure 5(e) shows that F_L, F_CR and F_RR are of the same order of magnitude, which demonstrates that the RR plays an important role. Here, F_RR outside the target increases while F_CR decreases in stage II, especially after t = 32T₀, which is reflected in the energy spread shown in figure 2(a). In addition, the spectrum of the carbon ions is optimized remarkably by the RR effects: as seen from figure 5(g), the carbon ions are quasi-monoenergetic when the RR is switched on, as already shown in figure 2(d).
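An order-of-magnitude check of the χ_e > 0.1 criterion for these parameters (our estimate, assuming a head-on electron-pulse geometry and γ ∼ a₀) is sketched below:

```python
# Order-of-magnitude check of chi_e (ours, not the paper's), assuming a
# head-on electron-pulse geometry: chi_e ~ 2*gamma*a0*hbar*omega0/(m_e c^2).
hbar_omega0_eV = 1.24      # photon energy at lambda = 1 um (eV)
mec2_eV = 511e3            # electron rest energy (eV)
a0, gamma = 300, 300       # gamma ~ a0 is a crude assumption

chi_e = 2 * gamma * a0 * hbar_omega0_eV / mec2_eV
print(f"chi_e ~ {chi_e:.2f}")   # ~0.4 > 0.1, so quantum RR matters here
```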
To check the robustness of the scheme, three-dimensional (3D) PIC simulations are performed with a reduced laser focal spot size of w₀ = 3λ₀. The simulation window is x × y × z = 50 µm × 5 µm × 5 µm, divided into 10 000 × 50 × 50 grid cells with 8 macro-particles per cell for all species. Other parameters are the same as in the 2D simulations. The results are consistent with those obtained in the 2D case. Figure 6(a) shows the average electric field and the carbon ion number density on the axis at t = 32T₀. It is clearly seen that the carbon ions are divided into backward and forward parts during the interaction. Among them, the forward part moves in the accelerating phase of the CR electric field. Figure 6(b) presents the energy spectrum of the carbon ions: a quasi-monoenergetic carbon ion beam is obtained in the 3D simulation, with a peak energy of 2.4 GeV and an energy spread of 19%. Note that the peak energy of the carbon ions is much lower than in the 2D simulation because of multi-dimensional effects, which have been extensively investigated [21,24,40].
Effects of laser and attached target parameters
In order to study the influence of target parameters and laser intensity on the carbon ion energy, we scan the simulation parameters, including the thickness d and density n of the neon target. The thickness of the second target ranges from 0 to 14 µm and its density from 0 to 20n_c. The simulation results in figure 7(a) show that a minimal target thickness L_min ⩾ 3.5λ₀ is required for efficient carbon ion acceleration, which agrees well with the minimum target thickness 3.7λ₀ calculated from equation (1). In addition, figure 7(b) shows the relationship between the laser-to-carbon-ion energy conversion efficiency η, the laser intensity a₀ (black curve) and the charge-to-mass ratio of the second target (blue curve in the inset). With increasing laser intensity, the conversion efficiency first increases sharply and then decreases. This may be because the acceleration process is dominated by different mechanisms when the laser intensity a₀ < π(n₀/n_c)(l/λ). In addition, the red curve in figure 7(b) shows that the maximum carbon ion energy increases as the drive laser intensity increases, providing a direct way to further enhance the carbon ion energies. Moreover, the inset shows that the influence on the conversion efficiency is not severe when the charge-to-mass ratio is 1:1. The reason is that when the second target is a proton target, the high-energy carbon layer cannot pass through the proton layer completely but pushes part of the protons forward, so the CR described above decreases significantly. For radiation pressure acceleration, it is favorable to use a circularly polarized laser pulse with a sharp rising front and a transversely uniform intensity distribution, in order to suppress the rapid development of transverse instabilities [26,61]. To demonstrate the robustness and tolerance of our scheme, we performed additional PIC simulations using temporally Gaussian and super-Gaussian laser pulses, keeping the other parameters the same as for the trapezoidal pulse, as shown in figure 8. We find that a quasi-monoenergetic ion energy peak is still obtained in the super-Gaussian case, comparable to that with the trapezoidal time profile. For the Gaussian laser pulse, the peak energy is lower owing to the smaller laser energy; higher ion energies can also be obtained with a Gaussian-shaped pulse of the same laser energy as the trapezoidal one. This indicates that our scheme remains applicable in more general cases. For a high-quality ion beam, it is important to maintain the beam quality and charge over long transport distances. For instance, a carbon ion beam with a small divergence angle can achieve more localized energy deposition inside the cancerous region, avoiding damage to the surrounding healthy cells. Several optimization methods, such as novel target designs, laser-driven micro-lenses and external electrostatic fields, can be used to control the beam emittance so that the ion beam can travel longer distances [1,62,63].
Conclusion
In summary, we have provided an efficient hybrid acceleration scheme to produce quasi-monoenergetic carbon ions using an ultra-intense laser pulse driving a carbon-neon double-layer target. The carbon ions are significantly accelerated through the combined LP and CR. The resulting peak energy and energy spread reach 464 MeV u⁻¹ and 13.7%, respectively, and the maximum ion energy agrees well with the theoretical model. Compared with a simple single planar target, a much more intense CR electric field and stronger RR effects occur, leading to over 20% laser-to-ion energy conversion efficiency. In addition, a parameter scan over different neon target thicknesses and laser intensities demonstrates that there exists a minimal neon thickness for the hybrid scheme. This method offers the possibility of obtaining quasi-monoenergetic carbon-ion beams of more than 400 MeV u⁻¹ with future 100 PW laser facilities, with prospective applications in fast ignition and cancer therapy.
Data availability statement
The data cannot be made publicly available upon publication because no suitable repository exists for hosting data in this field of study. The data that support the findings of this study are available upon reasonable request from the authors.
"year": 2023,
"sha1": "e29e7fa3c15523c4b950115ed497bfb6c2d9043e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/acd572",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "20e7eed8eff761b3b3aa47ab45771702e2f18afb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Regulation of Transforming Growth Factor-β Signaling and PDK1 Kinase Activity by Physical Interaction between PDK1 and Serine-Threonine Kinase Receptor-associated Protein
To gain more insights about the biological roles of PDK1, we have used the yeast two-hybrid system and in vivo binding assay to identify interacting molecules that associate with PDK1. As a result, serine-threonine kinase receptor-associated protein (STRAP), a transforming growth factor-β (TGF-β) receptor-interacting protein, was identified as an interacting partner of PDK1. STRAP was found to form in vivo complexes with PDK1 in intact cells. Mapping analysis revealed that this binding was mediated only by the catalytic domain of PDK1 and not by the pleckstrin homology domain. Insulin enhanced a physical association between PDK1 and STRAP in intact cells, but this insulin-induced association was prevented by wortmannin, a phosphatidylinositol 3-kinase inhibitor. In addition, the association between PDK1 and STRAP was decreased by TGF-β treatment. Analysis of the activities of the interacting proteins showed that PDK1 kinase activity was significantly increased by coexpression of STRAP, probably through the inhibition of the binding of 14-3-3, a negative regulator, to PDK1. Consistently, knockdown of the endogenous STRAP by transfection of small interfering RNA resulted in a decrease of PDK1 kinase activity. PDK1 also exhibited an inhibition of TGF-β signaling with STRAP by contributing to the stable association between the TGF-β receptor and Smad7. Moreover, confocal microscopic study and immunostaining results demonstrated that PDK1 prevented the nuclear translocation of Smad3 in response to TGF-β. Knockdown of endogenous PDK1 with small interfering RNA had the opposite effect. Taken together, these results suggest that STRAP acts as an intermediate signaling molecule linking the phosphatidylinositol 3-kinase/PDK1 and TGF-β signaling pathways.
Transforming growth factor-β (TGF-β) is involved in the regulation of many cellular responses, including cell proliferation, differentiation, apoptosis, migration, extracellular matrix formation, tissue repair, and immune homeostasis (1-3). TGF-β signals through heteromeric complexes of transmembrane type I (TβR-I) and type II (TβR-II) serine-threonine kinase receptors (4-6). TGF-β receptors subsequently propagate signals downstream through direct interaction with cytoplasmic Smads, and possibly other proteins as well (7-9). Smad proteins are subdivided into three classes: the receptor-regulated Smads (R-Smads), the common Smads (Co-Smads), and the inhibitory Smads (I-Smads). Once phosphorylated, R-Smads, including Smads 1, 2, 3, 5, and 8, dissociate from the type I TGF-β receptor, physically associate with Co-Smads such as Smad4, translocate into the nucleus, and regulate transcription from specific gene promoters (10). In addition to Smads, several proteins interacting with TβR-I or TβR-II have been identified (4, 5, 11). Among them, serine-threonine kinase receptor-associated protein (STRAP), a novel WD40 domain-containing protein, was shown to interact with both TβR-I and TβR-II and to inhibit TGF-β signaling (12). Moreover, STRAP was shown to stabilize the complex formation between Smad7, an inhibitory Smad, and activated TβR-I in the inhibition of TGF-β signaling, preventing Smad2 and Smad3 from gaining access to the receptor (13).
PDK1 (3-phosphoinositide-dependent protein kinase-1) has been demonstrated to phosphorylate and activate many members of the AGC (protein kinase A, G, and C) subfamily of protein kinases, including PKB, p70 S6 kinase, protein kinase A, serum- and glucocorticoid-induced kinase (SGK), and a variety of protein kinase C isoforms (14). PDK1 is broadly expressed and has been described as constitutively active (15-17). However, it was shown recently that PDK1 activity can be stimulated by phosphatidylinositol 3,4,5-trisphosphate in the presence of sphingosine (18) and modulated by several other PDK1-interacting partners (19-21). These observations strongly suggest that other regulatory mechanisms of PDK1 activity may exist. In addition, recent studies have shown that insulin inhibits TGF-β-mediated apoptosis and that PI3K signaling can be activated by TGF-β, suggesting a functional link between the PI3K and TGF-β signaling pathways (22-25). Therefore, it is necessary to investigate how these two signaling pathways are physically and functionally associated with each other.
In this study, we show that STRAP interacts with PDK1 and that this interaction is important for the modulation of PDK1 activity. Moreover, coexpression of PDK1 results in the enhancement of the STRAP-dependent inhibition of TGF-β-induced transcription. These findings provide insights into the cross-talk between the PI3K/PDK1 and TGF-β signaling pathways.
Preparation of STRAP-specific Antiserum-The anti-STRAP polyclonal rabbit antiserum was raised against GST-tagged STRAP protein produced in Escherichia coli. The recombinant STRAP protein was purified with glutathione-Sepharose 4B beads and used to immunize rabbits. The animals were boosted four times at weekly intervals and bled from the ear vein 10 days after the last injection. The titer of the antiserum was measured by Western blotting and enzyme-linked immunosorbent assay. The antiserum was cleared by centrifugation after incubation at 37°C for 3 h.
Yeast Two-hybrid Specificity Test-A fish plasmid, pJG4-5 harboring PDK1, was transformed back into EGY48 cells along with either the bait plasmid, pEG202 harboring STRAP, or several other bait plasmids available in our laboratory, as described (28).
Transient Transfection and in Vivo Interaction Assay-Each plasmid DNA indicated under "Results" was transfected into 293T or HepG2 cells with a calcium phosphate precipitation method or Lipofectamine Plus (Invitrogen), according to the manufacturer's instructions. After overnight incubation, the transfected cells were incubated in the presence or absence of TGF-β1 (100 pM) for 20 h. Cells were then washed and solubilized with lysis buffer containing 0.1% Nonidet P-40 as described (28). Detergent-insoluble material was removed by centrifugation, the cleared lysates were mixed with glutathione-Sepharose beads (Amersham Biosciences), and the beads were washed three times with the lysis buffer. For immunoblotting, coprecipitates or whole cell extracts were resolved by SDS-PAGE and then transferred to PVDF membranes. The membranes were immunoblotted with the indicated antibodies and then developed using an enhanced chemiluminescence (ECL) detection system (Amersham Biosciences).
Assay of PDK1 Kinase Activity-Transiently transfected cells were washed three times with ice-cold PBS and solubilized with 100 μl of lysis buffer (20 mM Hepes, pH 7.9, 10 mM EDTA, 0.1 M KCl, and 0.3 M NaCl). The cleared lysates were mixed with glutathione-Sepharose beads and rotated for 2 h at 4°C. After washing the precipitate three times with lysis buffer and then twice with kinase buffer (50 mM Hepes, pH 7.4, 1 mM dithiothreitol, and 10 mM MgCl2), the precipitate was incubated with 5 μCi of [γ-32P]ATP at 37°C for 15 min in kinase buffer containing 500 ng of recombinant SGK (Upstate), separated by 8% SDS-PAGE, transferred to PVDF membranes, and detected by autoradiography.
Luciferase Reporter Assay-HepG2 cells were transfected according to the Lipofectamine Plus method with the p3TP-Lux reporter plasmid, along with each expression vector as indicated. After 48 h, the cells were harvested, and luciferase activity was monitored with a luciferase assay kit (Promega) following the manufacturer's instructions. Light emission was determined with a Berthold luminometer (Microlumat LB96P). Cell extracts containing equal amounts of PDK1 and STRAP, determined by Western blot analysis, were used for the luciferase assay, and the total DNA concentration was kept constant by supplementing with empty vector DNA. The values were normalized to the expression level of a cotransfected β-galactosidase reporter control, and experiments were repeated at least three times.
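The normalization step above is a per-well ratio followed by a fold-change against the control condition. A minimal Python sketch, with all readings hypothetical placeholders rather than data from this study:

```python
# Each luciferase reading is divided by the beta-galactosidase reading of
# the same well to correct for transfection efficiency, then expressed as
# fold change relative to the control condition. All numbers are invented.
luciferase = [12500.0, 4300.0, 3900.0]  # raw luminometer counts per condition
beta_gal = [1.10, 0.95, 1.02]           # beta-gal activity of the same wells

normalized = [luc / bgal for luc, bgal in zip(luciferase, beta_gal)]
fold_change = [value / normalized[0] for value in normalized]
print(fold_change)  # e.g. [1.0, 0.40, 0.34]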
Assays for Cell Death and Cell Survival-293T cells undergoing apoptosis, after treatment with TNF-α (20 ng/ml) and cycloheximide (10 μg/ml), were quantitated by staining with fluorescein isothiocyanate-conjugated annexin V and the fluorescent dye propidium iodide according to the manufacturer's instructions (Roche Applied Science). Cells in 6-cm dishes treated with TNF-α and cycloheximide for 14 h were harvested and incubated with annexin V- and propidium iodide-containing buffer for 10 min at room temperature and then washed with PBS as described (28). 10,000 events were analyzed per sample using a FACSCalibur-S system (BD Biosciences). For a cell death experiment using the GFP system, 293T cells grown on sterile coverslips were transfected with pEGFP, an expression vector encoding GFP, together with expression vectors as indicated. After 24 h of transfection, the cells were treated with TNF-α and cycloheximide. The cells were fixed with ice-cold 100% methanol, washed three times with PBS, and then stained with bisbenzimide (Hoechst 33258). The coverslips were washed with PBS, then mounted on glass slides using Gelvatol, and visualized under a fluorescence microscope as described previously (28). The percentage of apoptotic cells was calculated as the number of GFP-positive cells with apoptotic nuclei divided by the total number of GFP-positive cells. For a cell survival assay, 293T cells transfected with a STRAP-specific siRNA (515) for 12 h were seeded in 24-well plates at a concentration of 4-5 × 10^4 cells per well and allowed to grow in Dulbecco's modified Eagle's medium supplemented with 1% FBS. The cell number was counted with a hemocytometer at the indicated times. Using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay-based cell counting kit-8 (Dojindo), the percentage of cell survival was determined by setting the value of parental cells at the indicated times as 100%.
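Both quantifications described above reduce to simple ratios; a minimal Python sketch with hypothetical counts (none of the numbers are from this study):

```python
def percent_apoptotic(gfp_with_apoptotic_nuclei: int, gfp_total: int) -> float:
    """Apoptotic cells as a percentage of all GFP-positive cells."""
    return 100.0 * gfp_with_apoptotic_nuclei / gfp_total

def percent_survival(sample_reading: float, parental_reading: float) -> float:
    """Cell survival relative to parental cells (set to 100%) in the
    CCK-8 assay, using background-corrected readings."""
    return 100.0 * sample_reading / parental_reading

print(percent_apoptotic(42, 150))    # e.g. 28.0 (%)
print(percent_survival(0.61, 0.82))  # e.g. ~74.4 (%)
```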
Immunofluorescence and Confocal Microscopy-Hep3B cells were plated and transfected with FLAG-Smad3 and/or MYC-tagged PDK1 constructs on sterile coverslips, placed on ice, and washed three times with ice-cold PBS prior to fixation with 4% paraformaldehyde for 10 min at room temperature. Cells were then washed with PBS, treated with 0.2% Triton X-100, and rewashed with PBS. The mouse anti-FLAG (M2) antibody diluted 1000-fold in PBS or rabbit anti-MYC antibody diluted 200-fold in PBS was applied for 2 h at 37°C. The cells were then washed three times with PBS and incubated with 1000-fold diluted Alexa Fluor-594 anti-mouse secondary antibody or Alexa Fluor-488 anti-rabbit secondary antibody (Molecular Probes) at 37°C for 1 h. The coverslips were washed three times with PBS and then mounted on glass slides using Gelvatol. Confocal laser scanning microscopy observations were done on a Bio-Rad MRC 1024 15-milliwatt argon-krypton laser as described (28).
RESULTS
PDK1 Interacts with STRAP-To explore the mechanism by which PDK1 activity is regulated, we employed a yeast two-hybrid specificity test to search for PDK1-interacting proteins using more than 30 different baits available in our laboratory, together with the fish plasmid pJG4-5 harboring the full-length PDK1 cDNA, as described previously (28). From this random screening, we found that the PDK1 cDNA interacted with a tested bait plasmid, pEG202 harboring STRAP (results not shown). Based on this, to investigate a possible cross-talk between the PDK1 and TGF-β signaling pathways, we first performed coimmunoprecipitation experiments and in vitro kinase assays using the human embryonic kidney carcinoma cell line 293T. Endogenous PDK1 immunoprecipitated from cell lysates treated with TGF-β showed an approximately 3.0-fold reduction in kinase activity compared with control PDK1 immunoprecipitated from untreated cells (Fig. 1), suggesting that a functional link between the PDK1 and TGF-β signaling pathways may exist.
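The ~3.0-fold figure above comes from densitometry of the autoradiogram normalized to the amount of immunoprecipitated PDK1 (see the Fig. 1 legend). A minimal Python sketch of that arithmetic; all band intensities are hypothetical placeholders, not measurements from the paper:

```python
# Band intensities in arbitrary densitometry units (invented values);
# P-SGK is normalized to the relative amount of immunoprecipitated PDK1
# before computing the fold reduction in kinase activity.
psgk_untreated, psgk_treated = 9000.0, 3100.0   # P-SGK band intensities
pdk1_untreated, pdk1_treated = 1.00, 1.03       # relative PDK1 loading

activity_untreated = psgk_untreated / pdk1_untreated
activity_treated = psgk_treated / pdk1_treated
print(f"fold reduction ≈ {activity_untreated / activity_treated:.1f}")  # ≈ 3.0
```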
We then speculated that PDK1 might interact with STRAP in intact cells, and we performed cotransfection experiments using GST- and FLAG-tagged eukaryotic expression vectors. In these experiments, wild-type PDK1 and STRAP were coexpressed in cells as a GST fusion protein and a FLAG-tagged protein, respectively. The interaction of FLAG-tagged STRAP with the GST-PDK1 fusion protein was analyzed by immunoblotting with an anti-FLAG antibody. As shown in Fig. 2A, STRAP was detected in the coprecipitate only when coexpressed with GST-PDK1 but not with the control GST alone; likewise, the control vector alone (CMV) failed to bind GST-PDK1, demonstrating that PDK1 physically interacts with STRAP. To confirm the interaction between PDK1 and STRAP, we next carried out coimmunoprecipitation experiments using FLAG-STRAP-transfected 293T cell extract and endogenous PDK1 (Fig. 2B). Endogenous PDK1 was immunoprecipitated from cell lysates, and Western blot analysis showed that PDK1 was precipitated (Fig. 2B, lower panel). The binding of STRAP was subsequently analyzed by immunoblotting with an anti-FLAG antibody. STRAP was present in the PDK1 immunoprecipitate only when FLAG-STRAP was coexpressed (Fig. 2B, upper panel) but not in the control transfected with the vector alone (CMV). In addition, to examine the interaction between the two endogenous proteins, we produced antiserum specific to STRAP as described under "Materials and Methods." The specificity of the antiserum was confirmed by immunoblotting using cell extracts from 293T cells transfected with FLAG-STRAP (results not shown). As shown in Fig. 2C, immunoprecipitation of endogenous PDK1 with an anti-PDK1 antibody followed by immunoblotting with STRAP-specific antiserum showed the interaction of the two endogenous proteins, regardless of the cell type used. This interaction was also confirmed by reciprocal coimmunoprecipitation experiments in which the STRAP-specific antiserum, instead of the anti-PDK1 antibody, was used for immunoprecipitation (results not shown). Taken together, our results demonstrate that PDK1 associates with STRAP in vivo.
Catalytic Domain of PDK1 Specifically Binds to STRAP in Vivo-To determine which region of PDK1 was necessary for the association with STRAP, we generated two PDK1 deletion constructs, including FLAG-PDK1(PH), comprising the carboxyl-terminal PH domain (amino acids 411-556), …

FIGURE 1. Regulation of PDK1 activity by TGF-β. 293T cells were incubated for 20 h in the presence (+) or absence (−) of TGF-β1 (100 pM). Endogenous PDK1 was immunoprecipitated (IP) with an anti-PDK1 antibody, and the immunoprecipitates were subjected to in vitro kinase assay using SGK as a substrate, followed by SDS-PAGE and autoradiography (top panel). The circled P-SGK indicates the position of the phosphorylated SGK. The amounts of SGK used in the assay and immunoprecipitated PDK1 were analyzed with anti-His and anti-PDK1 antibodies using the same blot (2nd and 3rd panels). Band intensities were quantitated using NIH Image software, and the fold increase relative to TGF-β1-treated samples was calculated (bottom panel). The data shown are the mean ± S.D. of duplicate assays and are representative of at least three independent experiments. IB, immunoblot.

TGF-β and Insulin Modulate PDK1-STRAP Complex Formation-It has been demonstrated previously that STRAP phosphorylation is slightly modulated when cells are stimulated by TGF-β and that the TGF-β signaling pathway is regulated by insulin stimulation (13, 25). Therefore, we assessed whether TGF-β and insulin can influence PDK1-STRAP complex formation in cells following TGF-β and insulin treatment. Twenty-four hours after transfection, cells were incubated in media with or without 100 pM TGF-β1 for 20 h. PDK1 was precipitated, and the coprecipitation of STRAP was determined by anti-FLAG immunoblot assay. As illustrated in Fig. 4A, upon TGF-β treatment the association between PDK1 and STRAP was significantly decreased, ~2.5-fold (top panel, 1st and 2nd lanes). We next examined the effect of insulin on the physical interaction between PDK1 and STRAP in 293T cells transfected with plasmid vectors expressing GST-PDK1 and FLAG-STRAP. As shown in Fig. 4B (top panel, 3rd to 5th lanes), exposure of the cells to insulin resulted in an increase in PDK1-STRAP complex formation (about 2.2-fold), but this effect was inhibited by wortmannin, a PI3K inhibitor, implying that STRAP can be involved in the PI3K/PDK1 signaling pathway.

FIGURE 2 legend (continued). …were determined by anti-FLAG antibody immunoblot (IB). The same blot was stripped and re-probed with an anti-GST antibody (middle panel) to confirm the expression of GST-PDK1 and a GST control (GST). B, 293T cells were transiently transfected with the vector alone (CMV), as a negative control, or FLAG-STRAP, and lysed and immunoprecipitated (IP) with an anti-PDK1 antibody. The PDK1 immunoprecipitate was analyzed for the presence of STRAP by immunoblot analysis using anti-FLAG antibody (upper panel). The amount of immunoprecipitated PDK1 was analyzed using anti-PDK1 antibody (lower panel). C, cell lysates from the parental 293T, Hep3B, and SK-N-BE(2)C cells were immunoprecipitated with rabbit preimmune serum (preimm.) or rabbit anti-PDK1 antibody (α-PDK1). The immunoprecipitates were subjected to SDS-PAGE and immunoblotted with STRAP-specific antiserum. These experiments were performed in duplicate at least four times with similar results.
STRAP Is a Positive Regulator of PDK1 Kinase Activity-In order to establish the physiological role of the PDK1-STRAP association, we first examined the effect of STRAP on PDK1 kinase activity. PDK1 was precipitated from the transfected cells, and its activity was measured by in vitro kinase assay using SGK as a substrate (29). As shown in Fig. 5A, coexpression of PDK1 with STRAP resulted in an ~4-fold increase in PDK1 activity (top panel, 2nd and 4th lanes). As a control, expression levels of the transiently expressed PDK1 protein were analyzed in GST pull-down precipitates, and the amount of PDK1 was similar in all lanes (Fig. 5A, 4th panel), indicating that the observed differences in phosphorylated SGK were not due to differences in PDK1 expression levels. We next tested whether STRAP could enhance PDK1 autophosphorylation. PDK1 was immunoprecipitated from cell extracts expressing PDK1 alone or PDK1 together with STRAP, and the effect of STRAP on PDK1 activity in the immunocomplex was determined by autophosphorylation assays. The results show that PDK1 is more autophosphorylated in the presence of STRAP than in its absence (Fig. 5B). These findings suggest that STRAP may be a positive regulator of PDK1 activity. PKB/Akt has been implicated in contributing to the sequestration of Bad away from the pro-apoptotic signaling pathway through Bad phosphorylation (30). To examine whether the downstream targets of PDK1 are affected by the overexpression of STRAP, we performed cotransfection experiments with plasmid vectors expressing GST-Bad, MYC-PDK1, and FLAG-STRAP, and cell lysates were precipitated with glutathione-Sepharose beads and immunoblotted with an anti-phospho-Bad antibody. As shown in Fig. 5C, coexpression of STRAP with PDK1 significantly induced Bad phosphorylation compared with PDK1 expression alone (top panel, 2nd and 3rd lanes). In addition, as expected, the PDK1-induced Akt phosphorylation was also increased by STRAP coexpression (Fig. 5C, bottom panel). To clearly confirm the physiological role of STRAP in the regulation of the PI3K/PDK1 signaling pathway, in vitro kinase assays (Fig. 6A) or immunoblot analyses (Fig. 6, B and C) were performed in 293T cells transfected with STRAP-specific siRNAs, using SGK as a substrate or anti-phospho antibodies as indicated, respectively. Reducing the amount of endogenous STRAP in cells with sequence-specific siRNAs resulted in a significant decrease of PDK1 kinase activity (Fig. 6A), PKB/Akt phosphorylation (Fig. 6B), and Bad phosphorylation (Fig. 6C). Thus, it is evident from these experiments that STRAP physically interacts with PDK1 and positively modulates PDK1 as well as its downstream targets, such as PKB/Akt and Bad, in vivo.
Interaction between PDK1 and STRAP Attenuates TNF-α-induced Apoptosis-Because STRAP associates with PDK1 and enhances PDK1 kinase activity (see Fig. 5), we next analyzed the effect of STRAP on TNF-α-induced apoptosis. To this end, we chose 293T cells, which are susceptible to TNF-α-induced apoptosis (28, 31), and we performed dual annexin V/propidium iodide staining as described under "Materials and Methods." As illustrated in Fig. 7A (white bars), the transfection of 293T cells with a vector encoding PDK1 resulted in a slight decrease in apoptotic cell death, as expected, and this suppression was potentiated when PDK1 was coexpressed with STRAP, indicating that STRAP plays an important role in the modulation of the PDK1-mediated survival signaling pathway by direct binding to PDK1.

FIGURE 4 legend (continued). …The amount of precipitated PDK1 (GST purification) and the expression level of GST-tagged PDK1 proteins in total cell lysates (Lysate) were analyzed using anti-GST antibody, respectively (2nd and 4th panels). The level of PDK1-STRAP complex formation is shown as a bar graph, after densitometric reading of the autoradiograms and normalization for PDK1 and STRAP protein levels, and the fold increase relative to TGF-β1-treated samples was calculated (bottom panel). B, increase of the association between PDK1 and STRAP by insulin. 293T cells were transfected with expression vectors encoding GST-PDK1 and FLAG-STRAP as indicated. After 48 h of transfection, the cells were incubated for 30 min with or without 100 nM wortmannin and then treated with 100 nM insulin for 20 min. The cell lysates were subjected to precipitation with glutathione-Sepharose beads (GST purification). The resulting precipitates were examined by immunoblot analysis with an anti-FLAG antibody (top panel). The equivalent amounts of GST-PDK1 and FLAG-STRAP were assessed by immunoblot analysis of total cell lysates (2nd and 3rd panels, Lysate). The relative level of PDK1-STRAP complex formation was quantitated by densitometric analysis, and the fold increase relative to untreated samples was calculated (bottom panel). The data shown are the mean ± S.D. of duplicate assays and are representative of at least five independent experiments.
To confirm further the involvement of STRAP in the suppression of TNF-α-induced apoptosis, 293T cells were transiently transfected with GFP alone and with GFP and PDK1. In addition, cells were cotransfected with PDK1 and STRAP, together with GFP. Apoptotic cells were scored by a change in nuclear morphology among GFP-positive cells after inducing apoptosis by TNF-α treatment as described under "Materials and Methods." As shown in Fig. 7A (black bars), ~28% of 293T cells expressing PDK1 were apoptotic following TNF-α treatment. Cells cotransfected with PDK1 and STRAP expression plasmids showed greater apoptotic suppression (about 25% inhibition) than cells transfected with the PDK1 expression plasmid alone. However, this inhibitory effect was not due to STRAP itself, because the amount of STRAP used in these apoptosis analyses did not by itself alter apoptosis (see Fig. 7A, 4th bar). The effect of STRAP on TNF-α-induced apoptosis was further assessed by small interfering RNA experiments using a STRAP-specific siRNA (515), because this siRNA showed a slightly stronger effect on the down-regulation of endogenous STRAP than the other STRAP-specific siRNA (334). As shown in Fig. 7B, apoptotic cell death was increased by transfection of the STRAP-specific siRNA (515) in a manner dependent on the amount of siRNA. Next, to provide more evidence that STRAP is directly involved in the modulation of PI3K/PDK1 signaling, which is crucial for cellular responses such as cell survival, cell growth, and protein synthesis, we examined the effect of STRAP on cell growth under conditions of down-regulated endogenous STRAP using trypan blue exclusion (Fig. 7C) and the cell counting kit-8 method (Fig. 7D). Both experiments confirmed, in a dose-dependent manner, the ability of STRAP to enhance cell survival. To examine further the roles of STRAP in cell growth, a human neuroblastoma stable cell line expressing STRAP (Strap) was established and used for proliferation assays. As shown in Fig. 7, E and F, compared with parental SK-N-BE(2)C cells (Control) and pcDNA3-His vector transfectants (Vector), as negative controls, the growth rate under normal serum conditions was increased by the ectopic expression of STRAP. Taken together, these data clearly indicate that STRAP positively regulates PI3K/PDK1-mediated protection against TNF-α-induced apoptosis and cell survival.
STRAP Regulates PDK1 Kinase Activity by 14-3-3 Dissociation-Recently, it was reported that the binding of 14-3-3 to PDK1 suppresses PDK1 kinase activity (20). Based on this, we speculated that STRAP, a potential positive regulator of PDK1, might modulate PDK1-14-3-3 complex formation. To determine whether the mechanism of the stimulation of PDK1 activity by STRAP is correlated with the dissociation of 14-3-3 from the PDK1-14-3-3 complex, we examined the effect of STRAP on the PDK1-14-3-3 association using an in vivo binding assay as described under "Materials and Methods." Transfected cells were precipitated with glutathione-Sepharose beads, and the binding of 14-3-3 to PDK1 was estimated by immunoblot analyses using anti-FLAG antibody. As shown in Fig. 8A, coexpression of STRAP significantly decreased the PDK1-14-3-3 association by ~54% compared with the control (top panel, 3rd and 4th lanes). This result indicates that STRAP may enhance PDK1 kinase activity by stimulating the dissociation of 14-3-3, a potential negative regulator of PDK1, from the PDK1-14-3-3 complex.
Because insulin increased the association between PDK1 and STRAP, and this effect was blocked by wortmannin treatment (see Fig. 4B), we next investigated whether insulin has a similar effect on PDK1-14-3-3 complex formation in intact cells. 293T cells were transfected with FLAG-14-3-3 or a vector alone (CMV). Immunoblot analysis of the PDK1 immunoprecipitates using anti-FLAG antibody revealed that, as expected, the interaction between PDK1 and 14-3-3 was significantly decreased, by about 50%, in cells treated with insulin compared with the control without insulin treatment (Fig. 8B). Taken together, these results suggest that STRAP, like 14-3-3, can also be a modulator of downstream targets of the PI3K signaling pathway activated by insulin, in addition to its original role as an intracellular signal mediator in TGF-β signal transduction.
PDK1 Negatively Regulates TGF-β-mediated Transcription-STRAP is known to negatively regulate TGF-β-induced transcription in a dose-dependent manner and to exist as a homo- or hetero-oligomer (12, 13). To determine whether PDK1 also regulates STRAP activity using the same approach, we cotransfected the p3TP-Lux reporter plasmid, containing elements from the plasminogen activator inhibitor-1 promoter (32), with expression vectors encoding PDK1 and/or STRAP into HepG2 cells, which are highly responsive to TGF-β. We first examined the effect of increasing amounts of PDK1 on TGF-β-induced transcription. Overexpression of PDK1, like STRAP, suppressed the TGF-β-induced increase in luciferase activity in a dose-dependent manner (Fig. 9A). To further establish whether the activity of PDK1 is involved in the suppression of TGF-β-induced transcription, we next analyzed the effect of the catalytically inactive PDK1 KD mutant (where KD indicates kinase-dead) on TGF-β-induced transcription. As shown in Fig. 9A, both wild-type (black bars) and catalytically inactive PDK1 KD (white bars) decreased TGF-β-induced transcription to a similar extent, suggesting that the suppression of TGF-β-induced transcription by PDK1 is independent of its kinase activity. We then attempted to determine whether the interaction of PDK1 with STRAP can influence TGF-β-induced transcription. The results showed that the addition of PDK1 to STRAP led to an enhancement of the inhibitory effect of STRAP on TGF-β signaling in a dose-dependent manner (Fig. 9B). Once again, these results suggest that PDK1 activity is not required for the STRAP-induced suppression of TGF-β signaling. In addition, we examined whether PDK1 has a similar inhibitory effect on another TGF-β-responsive reporter, (CAGA)9 MLP-Luc, containing multiple Smad3/Smad4-binding CAGA boxes (33). Similar results were also observed with the (CAGA)9 MLP-Luc reporter in HepG2 cells (results not shown). To confirm further the inhibitory effect of PDK1 on TGF-β signaling, we also used a constitutively active form of the type I TGF-β receptor, instead of TGF-β, to initiate TGF-β signaling. The results obtained in these experiments were almost identical to those of TGF-β treatment (results not shown). Finally, we determined the roles of PDK1 in TGF-β-induced transcription using the PDK1 knockdown system. As shown in Fig. 9C, transfection of PDK1 siRNA resulted in a significant increase in TGF-β-induced transcription that was proportional to the amount of PDK1 siRNA transfected. As a control, the cells transfected with PDK1 siRNA showed a significant reduction of endogenous PDK1 compared with the nontransfected cells (Fig. 9D, upper panel), indicating that the PDK1 siRNA used in this experiment is an effective siRNA duplex for the suppression of endogenous PDK1 expression. Taken together, these findings strongly suggest that PDK1 associates with STRAP and enhances the STRAP-induced inhibition of TGF-β signaling.

FIGURE 6. Effect of STRAP siRNA duplexes on the PI3K/PDK1 signaling pathway. 293T cells were cultured in 6-well plates as described under "Materials and Methods" and transfected with two kinds of STRAP siRNA duplexes (334 and 515). Total cell lysates were immunoprecipitated (IP) with the indicated antibodies. The PDK1 immunoprecipitate was analyzed for PDK1 activity by in vitro kinase assay using SGK as a substrate (A, top panel). The amounts of immunoprecipitated PDK1 and SGK used in the assay were analyzed with anti-PDK1 and anti-His antibodies using the same blot (A, 2nd and 3rd panels). The PKB/Akt and Bad immunoprecipitates were also analyzed for Akt activity and Bad phosphorylation by immunoblot (IB) analyses using anti-phospho-Akt(Thr-308) and anti-phospho-Bad(Ser-136) antibodies (B and C, top panels). The amounts of immunoprecipitated Akt and Bad used in this assay were analyzed with anti-Akt and anti-Bad antibodies using the same blot, respectively (B and C, middle panels). The circled P-SGK, P-Akt, and P-Bad indicate the positions of the phosphorylated SGK, Akt, and Bad, respectively. Expression levels of endogenous STRAP and β-actin mRNA in each sample were monitored by reverse transcription (RT)-PCR using STRAP-specific (forward, 5'-AACCATCAATTCTGCATCTC-3'; reverse, 5'-AAGGAAAGATGCAATCTGAA-3') and β-actin-specific primers (forward, 5'-CAAGAGATGGCCACGGCTGCT-3'; reverse, 5'-TCCTTCTGCATCCTGTCGGCA-3'). These experiments were performed in duplicate at least four times with similar results.

FIGURE 7. STRAP enhances PDK1-mediated inhibition of apoptosis. A, effect of STRAP on TNF-α-induced apoptosis. 293T cells were transiently transfected with an expression vector encoding PDK1 (2 μg) as indicated in the presence or absence of an expression vector encoding STRAP (2 μg). Transfected or parental cells were incubated for 24 h and treated with TNF-α (20 ng/ml) and cycloheximide (10 μg/ml) for 14 h to induce apoptosis. Apoptotic cell death was determined by flow cytometry for annexin V and propidium iodide (annexin V). Results shown are the average of duplicate samples and are representative of three independent experiments. For a cell death experiment using the GFP system, 293T cells were transiently transfected with expression vectors encoding PDK1 (3 μg) and STRAP (3 μg) along with an expression vector encoding GFP (3 μg) as indicated. To induce apoptosis, transfected cells were treated with TNF-α and cycloheximide as described above. GFP-positive cells were analyzed for the presence of apoptotic nuclei with a fluorescence microscope (GFP). The data shown are the means ± S.D. of triplicate assays and are representative of at least three independent experiments. B, effect of STRAP-specific siRNA on TNF-α-induced apoptosis. 293T cells were transiently transfected with a STRAP-siRNA duplex (515) along with an expression vector encoding GFP (3 μg), and the transfected or parental untransfected cells (−) were incubated for 24 h and treated with TNF-α as described above to induce apoptosis. A fluorescence microscope was used to analyze the presence…
PDK1 Potentiates the Association between the Activated Type I TGF-β Receptor and Smad7-To explore how PDK1 negatively regulates TGF-β-mediated transcription, we examined the effect of PDK1 on the association between TβR-I(TD), an activated type I TGF-β receptor, and Smad7, because STRAP is known to inhibit TGF-β signaling by stabilizing the complex formation between TβR-I(TD) and Smad7 (13, 34). 293T cells were transiently transfected with GST-TβR-I(TD), FLAG-Smad7, FLAG-STRAP, and MYC-PDK1. As shown in Fig. 10A, compared with control cells expressing GST-TβR-I(TD) and FLAG-Smad7, the coexpression of STRAP significantly stimulated the association between TβR-I(TD) and Smad7, about 3-fold (Fig. 10A, top panel, 3rd and 4th lanes), consistent with the previous observations of Datta and Moses (13). Furthermore, the addition of PDK1, together with STRAP, caused a stronger association between TβR-I(TD) and Smad7 (about 4-fold increase), indicating that PDK1, like STRAP, plays a key role in the stabilization of the TβR-I(TD)-Smad7 complex and contributes to the inhibition of TGF-β signaling (top panel, 4th and 5th lanes). We then examined whether the coexpression of PDK1 affects the association between STRAP and Smad7, because STRAP contributes to the stable association between TβR-I(TD) and Smad7 for inhibiting TGF-β signaling (13). As shown in Fig. 10B, the interaction of Smad7 with STRAP was increased about 2-fold in transfected cells expressing PDK1 compared with the control cells without PDK1 (top panel, 2nd and 3rd lanes), indicating that PDK1 may enhance the inhibitory TGF-β signaling by assisting STRAP in recruiting Smad7 to the TGF-β receptor and stabilizing the association between Smad7 and the TGF-β receptor. Collectively, these results suggest that PDK1, together with STRAP, might stabilize the TβR-Smad7 complex formation to block TGF-β signaling.
PDK1 Prevents the Nuclear Translocation of Smad3-Because Smad3 is able to strongly reverse the synergistic inhibition of TGF-β-dependent transcription by STRAP and Smad7 (13), we next examined whether PDK1 could modify Smad3 movement. The cells were transfected with MYC-PDK1 or FLAG-Smad3 in the presence of Smad3 or PDK1, respectively. As shown in Fig. 11, Smad3-expressing cells in the absence of TGF-β exhibit only cytosolic staining, but TGF-β treatment increased its nuclear translocation, as expected (upper panel, 1st and 2nd lanes). However, upon TGF-β treatment the coexpression of PDK1 inhibited the nuclear translocation of Smad3 (upper panel, 2nd and 4th lanes), whereas the coexpression did not influence Smad3 movement in untreated cells (upper panel, 1st and 3rd lanes). On the other hand, no change in PDK1 localization by Smad3 was observed in the presence or absence of TGF-β (lower panel). Taken together, these results suggest that PDK1 coexpression prevents the normal translocation of Smad3 from the cytoplasm to the nucleus in response to TGF-β.

FIGURE 8 legend (continued). …The abundance of GST-tagged proteins in GST precipitates was determined by anti-GST antibody immunoblot (2nd panel). B, insulin effect on PDK1-14-3-3 complex formation. After 48 h of transfection with or without FLAG-14-3-3, the cells were incubated for 20 min in the presence (+) or absence (−) of 100 nM insulin. The cells were immunoprecipitated (IP) with an anti-PDK1 antibody and then immunoblotted with an anti-FLAG antibody to determine the complex formation between PDK1 and 14-3-3 (top panel). As a negative control, 293T cells were transfected with the vector alone (CMV). Expression levels of transfected FLAG-14-3-3 (2nd panel) and endogenous PDK1 (3rd panel) were confirmed by immunoblot analyses with the indicated antibodies using total cell lysates (Lysate). The level of the complex formation between PDK1 and 14-3-3 was quantified and shown as a bar graph using densitometric analyses as described, and the fold increase relative to STRAP-transfected and insulin-treated samples was calculated (A and B, bottom panels). These experiments were performed in duplicate at least four times with similar results.
DISCUSSION
Several reports suggest the involvement of cellular proteins in controlling PDK1 kinase activity. RSK2 has been shown to interact with PDK1 and to potentiate PDK1 activity (35). In addition, PDK1 activity is controlled by PDK1-interacting proteins such as Hsp90 (19), 14-3-3 (20), and protein kinase C-related kinase 2 (21). Moreover, recent studies have shown that Src kinases regulate PDK1 activity by PDK1 phosphorylation at tyrosine residues (36, 37). These observations raise the possibility that additional proteins may be involved in the regulation of PDK1 activity, even though PDK1 activity has been thought to be constitutive in cells (15, 17). To address this question, we sought to identify cellular proteins that directly associate with PDK1. Here we report the isolation of STRAP as a PDK1-interacting protein.
STRAP is known to be a WD40 domain-containing protein, which interacts with TβR-I and TβR-II and negatively regulates TGF-β signaling (12). In addition, STRAP was found to interact with Smad7 for a synergistic effect on the inhibition of TGF-β signaling (13). Recent studies have shown that the PI3K pathway may be associated with the TGF-β signaling pathway. For example, TGF-β potentiated PI3K activation and Akt phosphorylation in Swiss 3T3 cells (25), and LY294002, a PI3K inhibitor, blocked the Smad2 phosphorylation induced by TGF-β (38). Runyan et al. (39) also demonstrated that TGF-β could activate PDK1 in human mesangial cells, resulting in the enhancement of Smad3-mediated collagen I expression. In this respect, the association of PDK1 with STRAP provides an interesting aspect to the regulation of PDK1 activity and TGF-β-induced transcription.
In the case of the regulation of PDK1 activity, as shown in Fig. 5, a significant increase in PDK1 activity was observed upon direct binding of STRAP. This, together with the previous observations that PDK1 activity is controlled by its interacting proteins (19-21, 35), strongly suggests that the physical association of PDK1 and STRAP also plays an important role in the modulation of the PI3K/PDK1 signaling pathway. In addition, as shown in Fig. 8, our present results demonstrate that STRAP coexpression significantly reduces the association of PDK1 with 14-3-3, a known negative regulator of PDK1 (20). Thus, the most likely mechanism by which STRAP stimulates PDK1 activity would be through the removal of 14-3-3 from the PDK1-14-3-3 complex, probably by competing with 14-3-3 under cellular stimulation. This notion is further supported by the fact that the binding affinity of 14-3-3 for endogenous PDK1 was decreased by insulin, which can increase the physical association between PDK1 and STRAP (Fig. 4B).
In this study, to see whether PDK1-STRAP complex formation can influence a biological function of STRAP, we analyzed the effect of PDK1 on the inhibitory TGF-β signaling induced by STRAP. Coexpression of PDK1 apparently potentiated the TGF-β-mediated transcriptional inhibition induced by STRAP (Fig. 9). This would be analogous to the previous observation in which STRAP and Smad7 synergized the inhibition of TGF-β signaling by direct binding (13). In order to define the detailed mechanism of PDK1 function in the inhibition of TGF-β signaling, we further analyzed the relative strength of binding of Smad7 and the activated type I TGF-β receptor in 293T cells where these proteins were coexpressed with STRAP in the presence or absence of PDK1, because the association between Smad7 and the type I TGF-β receptor was shown previously to play a critical role in the inhibition of TGF-β signaling (34). As shown in Fig. 10A, a significant increase in the physical association between Smad7 and the activated type I TGF-β receptor was observed in the transfected cells expressing PDK1 compared with the control cells without PDK1, suggesting that PDK1, like STRAP, could contribute to the stabilization of the physical association between Smad7 and the type I TGF-β receptor for the inhibitory TGF-β signaling. This effect of PDK1 can be explained either by a direct physical binding of PDK1 to STRAP or by an enzymatic function of PDK1 on an interacting partner such as STRAP. However, our observation that the kinase-dead mutant of PDK1 did not influence the formation of the PDK1-STRAP complex, compared with the control containing the wild-type PDK1, does not favor the second model, which invokes the catalytic function of PDK1, probably through the phosphorylation of STRAP, in the modulation of the stable association between Smad7 and the type I TGF-β receptor. In support of this notion, we could not observe a direct phosphorylation of STRAP by PDK1 using in vitro kinase assay, either when cells were cotransfected with PDK1 and STRAP or when recombinant STRAP was used as a substrate (results not shown). However, we do not rule out the possibility that PDK1 itself, through physical interaction rather than through STRAP, directly functions to recruit Smad7 to the type I TGF-β receptor, because PDK1 also interacts with Smad7 (results not shown).

FIGURE 10 legend (continued). …The relative amount of the complex formation between TβR-I(TD) and Smad7 was quantified by densitometric analysis as described above, and the fold increase relative to control samples transfected without PDK1 and STRAP was calculated (bottom panel). B, 293T cells transfected with the indicated plasmid vectors were precipitated with glutathione-Sepharose beads (GST purification), and the cell lysates were immunoblotted with the indicated antibodies (top panel for anti-FLAG, 2nd panel for anti-MYC, and 3rd panel for anti-GST). Expression level of FLAG-Smad7 in total cell lysates was confirmed by immunoblot analysis with an anti-FLAG antibody (4th panel, Lysate). The relative amount of the complex formation between STRAP and Smad7 was quantified and shown as a bar graph using densitometric analysis as described, and the fold increase relative to control samples transfected without PDK1 was calculated (bottom panel). These experiments were performed in duplicate at least five times with similar results.
In summary, our findings demonstrate that STRAP may be a positive regulator of the PI3K/PDK1 signaling pathway in addition to its negative role in the TGF-β-mediated signaling pathway. Furthermore, our results suggest that PDK1 potentiates the inhibition of TGF-β signaling induced by STRAP through the stronger stabilization of the physical association between Smad7 and the TGF-β receptor and the prevention of the nuclear translocation of Smad3. Based on our results, we propose that the relative level of association between PDK1 and STRAP, controlled by stimuli such as insulin, growth factors, and TGF-β, may be an important factor in determining whether STRAP functions as a modulator of the PI3K/PDK1 or the TGF-β signaling pathway. In this regard, the more detailed mechanism of the cross-regulation between the PI3K/PDK1 and TGF-β signaling pathways will be of interest for future study.
"year": 2005,
"sha1": "fee35e7a69e2c70e5ea0f30f511a9af68929d113",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/280/52/42897.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "ff46c6542cce9b7ded55e42a383c415f9f614824",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Influence of peritoneal dialysis catheter type on dislocations and laxative use: a retrospective observational study
Background: There is currently no consensus regarding the optimal type of peritoneal dialysis (PD) catheter. Although a few studies have shown that weighted catheters result in lower complication rates and superior long-term outcomes compared with non-weighted catheters, there are no studies on the use of laxatives linked to catheter malfunction, a patient-related outcome potentially affecting quality of life. Thus, we compared the burden of acute and chronic laxative use in a cohort of PD patients having either weighted or non-weighted catheters.
Methods: We performed a single-center, retrospective, observational study in two renal units, comparing acute and chronic laxative therapy related to catheter drainage failure in a cohort of 74 PD patients, divided by peritoneal dialysis catheter type. In addition, we evaluated the number of patients who experienced minor and major dislocations, the catheter-related infection rate, hospitalization for catheter malfunction, episodes of catheter repositioning, and dropout from PD.
Results: Laxative use was significantly more common among patients in the non-weighted catheter group (acute: 30.3% vs. 9.8%, p = 0.03; chronic: 36.4% vs. 12.2%, p = 0.02). Furthermore, weighted catheters were superior to non-weighted catheters for all the secondary outcomes (dislocations: 12.2% vs. 45.5%; p = 0.001).
Conclusions: Weighted self-locating catheters have lower drainage failure rates, thus reducing the need for and burden of acute and chronic laxative use among PD patients.
Supplementary Information: The online version contains supplementary material available at 10.1007/s40620-022-01329-6.
Introduction
It is established that well-functioning catheters are associated with a lower incidence of peritonitis and better efficiency of dialysis, thereby representing an essential tool to guarantee optimal Peritoneal Dialysis (PD) delivery. However, currently there is no consensus regarding the best catheter to perform PD [1].
Ideally, well-functioning catheters should ensure a good quality of treatment, significantly reducing mechanical complications. Indeed, mechanical complications represent one of the leading causes of dropout among PD patients, accounting for almost 40% of PD-to-hemodialysis shifts in the first three months and almost 25% of dropout overall [2].
There are various types of PD catheters available. They differ in design, including the number of cuffs and the shape of the subcutaneous tract (straight vs. swan neck) and of the intra-peritoneal tract (straight vs. coiled) [3]. Moreover, there is a particular category of PD catheters known as "self-locating catheters", designed by Di Paolo in 1992. Self-locating catheters, also known as weighted catheters, are provided with a 12 g tungsten-weighted tip at the end of a classic Tenckhoff catheter to avoid catheter migration [4].
Some observational studies [5, 6] and some small randomized controlled trials (RCTs) [7, 8] showed beneficial effects of self-locating catheters on mechanical complications. According to these studies, self-locating catheters are less prone to tip migration, resulting in fewer drainage failure events and eventually reducing catheter-related infections. However, despite these promising findings, the use of self-locating catheters remains limited [9], and an adequately powered RCT is still lacking.
While awaiting more robust evidence, acute and chronic laxative use has become widespread among PD patients. Indeed, laxatives induce peristalsis, which prevents tip migration [10]. Although helpful in preventing catheter migration, chronic laxative use can easily turn into laxative abuse in this subgroup of patients, who are affected by chronic constipation and frequent mechanical complications. Laxative abuse can lead to electrolyte and acid-base disturbances, patient discomfort, abdominal cramps, and dehydration due to factitious diarrhea [11]. In addition, laxative use may provoke transmural migration of enteric microflora into the peritoneal cavity, predisposing to peritonitis [12].
Our study aimed to compare the mechanical complications related to PD catheter type, as well as the burden of laxative use, in a cohort of PD patients with either weighted or non-weighted catheters.
Methods
We conducted a single-center, retrospective, observational study among all incident PD patients followed up in two renal units of the same institution in Milan, Italy, from 2014 to 2020.
We included all patients above 18 years of age and on dialysis for at least three months, excluding those with shorter follow-up to avoid confounding factors such as early mechanical complications and unknown adhesion syndrome.
We divided patients into two cohorts based on catheter type: weighted, with self-locating properties (Care-Cath® B.Braun Avitum, Mirandola, Italy), and non-weighted, standard straight Tenckhoff catheters. Weighted catheters progressively replaced the Tenckhoff catheters and were widely adopted in the two units in 2019.
We collected data on catheter tip migration (highlighted by abdominal x-ray), acute and chronic laxative use, PD dropout linked to catheter malfunction, and catheterrelated infections (peritonitis and exit-site infections).
The primary endpoint of the study was to evaluate laxative use in PD patients to either prevent or treat drainage failure.
Secondary outcomes included episodes of dislocations, HD shift due to catheter malfunction (PD dropout rate), hospitalization for malfunctioning, catheter repositioning, catheter-related infections, and cuff-shavings.
Chronic laxative therapy (for which a standard definition is lacking) was recorded in patients who chronically used laxatives at least three times a week. Chronic laxative use in otherwise non-constipated patients was considered a marker of catheter malfunction. Patients on chronic laxative therapy for constipation were excluded from the final analysis.
We evaluated the percentage of patients who experienced drainage failure and needed acute laxative use, and the percentage of patients with at least one abdominal x-ray showing tip migration. We distinguished between minor dislocations, in which acute laxative use was followed by complete resolution, and major dislocations, in which catheter tip migration was demonstrated by abdominal x-ray after acute laxative use. Patients already on chronic laxative treatment were excluded from the acute laxative use group.
PD dropout is expressed as the percentage of patients who had to change renal replacement therapy, shifting to hemodialysis (HD), because of PD catheter malfunction. Episodes of hospitalization and catheter repositioning are reported as the number of patients hospitalized for catheter-related mechanical issues.
The chi-square test, Student's t-test, and the Mann-Whitney U test were used for the statistical analysis of baseline data and of the primary and secondary endpoints. The mid-p exact test was used to compare infection rates, while multiple logistic regression was used to analyze the correlation between independent variables.
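For illustration, the incidence rate ratio and the mid-p exact test can be sketched as below, treating event counts as Poisson conditional on follow-up time; the counts and person-times are hypothetical placeholders chosen only to mirror the reported peritonitis rates, since the actual event counts are not given in this excerpt:

```python
from scipy.stats import binom

def incidence_rate_ratio(x1: int, t1: float, x2: int, t2: float) -> float:
    """IRR of group 1 vs group 2: (x1/t1) / (x2/t2)."""
    return (x1 / t1) / (x2 / t2)

def midp_exact_test(x1: int, t1: float, x2: int, t2: float) -> float:
    """Two-sided mid-p exact test for equality of two Poisson rates.
    Conditional on n = x1 + x2, x1 ~ Binomial(n, p0) under H0,
    with p0 = t1 / (t1 + t2); mid-p gives half weight to the
    observed count in each tail."""
    n, p0 = x1 + x2, t1 / (t1 + t2)
    lower = binom.cdf(x1, n, p0) - 0.5 * binom.pmf(x1, n, p0)
    upper = binom.sf(x1 - 1, n, p0) - 0.5 * binom.pmf(x1, n, p0)
    return min(1.0, 2.0 * min(lower, upper))

# Hypothetical: 5 vs 18 episodes over 70 patient-years per group.
print(incidence_rate_ratio(5, 70.0, 18, 70.0))  # ≈ 0.28
print(midp_exact_test(5, 70.0, 18, 70.0))
```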
The Medical Ethics Committee of the "ASST-Fatebenefratelli-Sacco" approved the protocol. All patients provided written informed consent to participate in the study. The study was conducted in accordance with the Declaration of Helsinki.
Results
We identified 82 eligible patients and excluded 8 of them: 4 due to early mechanical complications and 4 due to chronic constipation (Fig. 1). Of note, 2 of the 4 patients excluded because of chronic constipation had a non-weighted catheter. Finally, 41 and 33 patients were included in the non-weighted catheter and weighted catheter groups, respectively.
The two study groups were balanced with regard to age, BMI, prevalence of diabetes and diverticulosis, and the percentage of patients who had undergone previous abdominal surgery. The major differences between the two populations were the more widespread use of automated peritoneal dialysis (APD) in the weighted catheter group (80.5% vs 60.6%; p = 0.07) and the older catheters in the non-weighted catheter group (median: 851 days vs 516 days; p = 0.07), reflecting the more recent introduction of the weighted catheter in the two PD units. Furthermore, there was a non-significant difference in the type of laxative used between the two groups, with a higher percentage of patients using lactulose rather than Movicol® in the weighted catheter group compared with the non-weighted catheter group (80% vs 41.7%; p = 0.44). All catheters were implanted with the mini-laparotomy approach.
Baseline characteristics of the study population are provided in Table 1.
Regarding the primary endpoint, acute and chronic laxative use was more common among non-weighted catheter patients: 30.3% vs 9.8% (p = 0.03) and 36.4% vs 12.2% (p = 0.02), respectively. Dislocations were also more frequent among patients with a non-weighted catheter, with a significant difference in radiologically proven catheter tip migration: 36.4% vs 2.4% (p < 0.0001). The number of patients who experienced either a clinically diagnosed dislocation or a radiologically proven catheter tip migration was much higher in the non-weighted catheter group (45.5% vs 12.2%; p = 0.001).
Multiple logistic regression analysis showed that, among the analyzed independent variables (Table 2), only catheter type was significantly related to the primary outcome (OR 4.22; 95% CI 1.349 to 15.09; p = 0.018).
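A hedged sketch of such a multiple logistic regression using statsmodels; the file name, column names, and covariate set are illustrative assumptions, not the study's actual variables (Table 2 is not reproduced here):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch only: "pd_cohort.csv" and all column names are hypothetical.
# The binary outcome (laxative use) is regressed on catheter type plus
# candidate covariates, mirroring the analysis described above.
df = pd.read_csv("pd_cohort.csv")
X = sm.add_constant(df[["non_weighted_catheter", "age", "bmi", "diabetes"]])
model = sm.Logit(df["laxative_use"], X).fit(disp=0)

# Odds ratios and 95% confidence intervals from the fitted coefficients:
print(np.exp(model.params))
print(np.exp(model.conf_int(alpha=0.05)))
```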
Catheter-related infections also differed between the two groups, with a significant reduction in peritonitis incidence among patients in the weighted catheter group: 0.07/365 days vs 0.26/365 days (p = 0.002). The incidence rate ratio (IRR) of peritonitis was 0.27 (73% reduction), whereas no differences were observed regarding exit-site infections.
Hospitalization and catheter repositioning were also significantly more frequent among patients in the non-weighted group: 21.2% vs 2.4% (p = 0.01).
The dropout rate for mechanical complications was higher in the non-weighted catheter group, although the number of events was too small for a statistically reliable comparison (9% vs 0%; p = NS). Detailed results are provided in Table 3.
Discussion
In this single-center, retrospective, observational study, we compared the frequency of mechanical complications between two types of peritoneal catheters, as well as their infection rates. Our results show that weighted catheters were associated with reduced rates of mechanical complications (such as catheter dislocations), peritonitis, and hospitalization for catheter malfunctioning and repositioning. These findings are consistent with previously reported studies [5][6][7][8].
Furthermore, we analyzed two new parameters: acute and chronic laxative use. We report the widespread use of laxatives in otherwise non-constipated patients within the non-weighted catheter group. To our knowledge, this is the first study to evaluate these parameters in a comparison between two PD catheter types.
Catheter tip migration is a common complication with non-weighted catheters, with an incidence as high as 24% [14]. When migration occurs, catheter functioning is affected, leading to drainage failure [15]. Restoration of the proper catheter position can be achieved through non-invasive or minimally invasive techniques, such as laxative use or repositioning with a metal wire, respectively. However, refractory cases often require surgical revision or removal and replacement of the catheter [16]. On the other hand, weighted catheter dislocation can often be reversed more easily and non-invasively by positional changes under radioscopic control.
Avoiding catheter tip migration should be one of the main goals in PD. Indeed, in our cohorts, avoiding tip migration reduced the hospitalization rate, laxative use, and catheter manipulation.
Patients on PD are prone to chronic constipation because of aging, hypothyroidism, hypercalcemia, diabetes, and autonomic nervous system dysfunction. Moreover, most PD patients receive treatments that can potentiate constipation, such as phosphate and potassium binders, calcium channel blockers, opioids, and iron preparations [17,18]. Constipation in patients receiving PD is associated with increased risk of mechanical and infectious complications, thus affecting catheter drainage and promoting laxative use [19]. However, treatment of constipation with laxatives may predispose to bacterial translocation and peritonitis in PD patients (Fig. 2) [12,17]. In our study, the weighted catheter reduced laxative use, both in the acute and chronic settings. Moreover, laxative use was an indirect index to determine the drainage failure rate and appeared to be related to the peritonitis rate, the risk of which was reduced in the weighted catheter group (IRR 0.27).
The consistency of both the primary and the secondary outcomes represents the major strength of this study. These findings suggest a link between the number of dislocations, laxative use, and the peritonitis rate. Reducing the incidence of dislocations safeguards the quality of peritoneal dialysis and lowers patients' incidence of infections related to catheter manipulation and laxative use (Fig. 2).
The recent ISPD practice recommendations on the prescription of high-quality, goal-directed PD [20] underscored the concept that the well-being of the person on PD involves much more than just the removal of toxins. Rather, the healthcare system should focus on the person undergoing PD, beyond the medical perspective of the "patient" status, with goal-directed dialysis delivery. Avoiding or at least limiting the use of laxatives and their consequences should be a goal of PD care that can be achieved by adopting weighted, self-locating catheters.
There are some limitations to this study. Firstly, its retrospective nature introduces some biases, such as selection and information bias. Although the participants represent all incident patients in our center, the small number of patients included could expose the study to a certain degree of variability. However, the two groups were well balanced, except for PD technique and catheter vintage, which presumably are not factors affecting the dislocation rate or laxative use.
A recent systematic review and meta-analysis suggested that weighted catheters result in lower complication rates and superior long-term outcomes compared to non-weighted catheters [21]. This study adds observational evidence of the weighted catheter benefits. A randomized controlled trial should confirm the superiority of the weighted catheter over the non-weighted catheter.
Conclusions
We confirmed the lower incidence of catheter dislocation with the weighted catheters. What this study adds is the evidence of reduced laxative burden in PD patients with a weighted catheter, a relevant patient-related outcome, which in addition seems to be related to a reduction in peritonitis rate.
Our study is another proof-of-concept suggesting the need for a well-designed, sufficiently powered, large study to compare the two types of catheters.
Funding Open access funding provided by Università degli Studi di Milano within the CRUI-CARE Agreement.
Declarations
Conflicts of interest MG's institution, the University of Milano, received consulting fees from B. Braun Avitum Italy, the company manufacturing the weighted catheter. The other authors have nothing to disclose.
Ethical statement
The Medical Ethics Committee of the "ASST-Fatebenefratelli-Sacco" approved the study protocol.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2022-05-06T13:37:52.455Z | 2022-05-06T00:00:00.000 | {
"year": 2022,
"sha1": "2350fd36116cea45e1ea397619e407cc00ddb007",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40620-022-01329-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2350fd36116cea45e1ea397619e407cc00ddb007",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53225155 | pes2o/s2orc | v3-fos-license | FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks
There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference. However, few methods target a specific number of floating-point operations (FLOPs) as part of the optimization objective, despite many reporting FLOPs as part of the results. Furthermore, a one-size-fits-all approach ignores realistic system constraints, which differ significantly between, say, a GPU and a mobile phone -- FLOPs on the former incur less latency than on the latter; thus, it is important for practitioners to be able to specify a target number of FLOPs during model compression. In this work, we extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective and show that, given a desired FLOPs requirement, different neural networks can be successfully trained for image classification.
Introduction
Neural networks are a class of parametric models that achieve the state of the art across a broad range of tasks, but their heavy computational requirements hinder practical deployment on resource-constrained devices, such as mobile phones, Internet-of-things (IoT) devices, and offline embedded systems. Many recent works focus on alleviating these computational burdens, mainly falling under two non-mutually exclusive categories: manually designing resource-efficient models, and automatically compressing popular architectures. In the latter, increasingly sophisticated techniques have emerged [4,5,6], which have achieved respectable accuracy-efficiency operating points, some even Pareto-better than that of the original network; for example, network slimming [4] reaches an error rate of 6.20% on CIFAR-10 using VGGNet [10] with a 51% FLOPs reduction, an error decrease of 0.14% over the original.
However, few techniques impose a FLOPs constraint as part of a single optimization objective. Budgeted super networks [14] are closely related to this work, incorporating FLOPs and memory usage objectives as part of a policy gradient-based algorithm for learning sparse neural architectures. MorphNets [1] apply an L1 norm, shrinkage-based relaxation of a FLOPs objective, but for the purpose of searching and training multiple models to find good network architectures; in this work, we learn a sparse neural network in a single training run. Other papers directly target device-specific metrics, such as energy usage [17], but the pruning procedure does not explicitly include the metrics of interest as part of the optimization objective, instead using them as heuristics. Falling short of continuously deploying a model candidate and measuring actual inference time, as in time-consuming neural architecture search [12], we believe that the number of FLOPs is reasonable to use as a proxy measure for actual latency and energy usage; across variants of the same architecture, Tang et al. suggest that the number of FLOPs is a stronger predictor of energy usage and latency than the number of parameters [13].
Indeed, there are compelling reasons to optimize for the number of FLOPs as part of the training objective: First, it would permit FLOPs-guided compression in a more principled manner. Second, practitioners can directly specify a desired target of FLOPs, which is important in deployment. Thus, our main contribution is to present a novel extension of the prior state of the art [7] to incorporate the number of FLOPs as part of the optimization objective, furthermore allowing practitioners to set and meet a desired compression target.
FLOPs Objective
Formally, we define the FLOPs objective $\mathcal{L}_{\mathrm{flops}}: \mathcal{F} \times \mathbb{R}^m \to \mathbb{N}_0$ as
$$\mathcal{L}_{\mathrm{flops}}(h, \boldsymbol{\theta}) = g\big(\mathbb{I}(\theta_1 \neq 0), \dots, \mathbb{I}(\theta_m \neq 0)\big),$$
where $\mathcal{L}_{\mathrm{flops}}$ is the number of FLOPs associated with the hypothesis $h(\cdot; \boldsymbol{\theta}) := p(\cdot \mid \boldsymbol{\theta})$, $g(\cdot)$ is a function with the explicit dependencies, and $\mathbb{I}$ is the indicator function. We assume $\mathcal{L}_{\mathrm{flops}}$ to depend only on whether parameters are non-zero, such as the number of neurons in a neural network. For a dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$, our empirical risk thus becomes
$$\mathcal{R}(\boldsymbol{\theta}) = \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}\big(h(x_i; \boldsymbol{\theta}), y_i\big) + \lambda_f \max\big(0,\, \mathcal{L}_{\mathrm{flops}}(h, \boldsymbol{\theta}) - T\big).$$
Hyperparameters $\lambda_f \in \mathbb{R}^+_0$ and $T \in \mathbb{N}_0$ control the strength of the FLOPs objective and the target, respectively. The second term is a black-box function whose combinatorial nature prevents gradient-based optimization; thus, using the same procedure as prior art [7], we relax the objective to a surrogate of the evidence lower bound with a fully-factorized spike-and-slab posterior as the variational distribution, where the addition of the clipped FLOPs objective can be interpreted as a sparsity-inducing prior $p(\boldsymbol{\theta}) \propto \exp\big(-\lambda_f \max(0, \mathcal{L}_{\mathrm{flops}}(h, \boldsymbol{\theta}) - T)\big)$. Let $z \sim p(z \mid \boldsymbol{\pi})$ be Bernoulli random variables parameterized by $\boldsymbol{\pi}$, gating the parameters as $\boldsymbol{\theta} \odot z$, where $\odot$ denotes the Hadamard product. To allow for efficient reparameterization and exact zeros, Louizos et al. [7] propose to use a hard concrete distribution as the approximation, which is a stretched and clipped version of the binary Concrete distribution [8]: if $\hat{z} \sim \mathrm{BinaryConcrete}(\alpha, \beta)$, then $\bar{z} := \max(0, \min(1, (\zeta - \gamma)\hat{z} + \gamma))$ is said to be a hard concrete r.v., given $\zeta > 1$ and $\gamma < 0$. Define $\boldsymbol{\phi} := (\boldsymbol{\alpha}, \beta)$, and let $\psi(\boldsymbol{\phi}) = \mathrm{Sigmoid}\big(\log \boldsymbol{\alpha} - \beta \log \tfrac{-\gamma}{\zeta}\big)$ and $z \sim \mathrm{Bernoulli}(\psi(\boldsymbol{\phi}))$. The approximation then becomes the sum of a data term under hard concrete gates and the clipped FLOPs penalty under the equivalent Bernoulli gates, where $\psi(\cdot)$ is the probability of a gate being non-zero under the hard concrete distribution. For the second expectation, it is more efficient to sample from the equivalent Bernoulli parameterization than from the hard concrete distribution, which is more computationally expensive to sample multiple times. The first term now allows for efficient optimization via the reparameterization trick [3]; for the second, we apply the score function estimator (REINFORCE) [16], since the FLOPs objective is, in general, non-differentiable and thus precludes the reparameterization trick. High variance is a non-issue because the number of FLOPs is fast to compute, allowing many samples to be drawn. At inference time, the deterministic estimator is $\hat{\boldsymbol{\theta}} := \boldsymbol{\theta} \odot \max(0, \min(1, \mathrm{Sigmoid}(\log \boldsymbol{\alpha})(\zeta - \gamma) + \gamma))$ for the final parameters $\hat{\boldsymbol{\theta}}$.
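To make these two estimators concrete, below is a minimal NumPy sketch (not the authors' code; all names, shapes, and values are illustrative) of hard concrete gate sampling, the gate-activity probability $\psi$, and the score-function gradient of the clipped FLOPs penalty with respect to $\log \boldsymbol{\alpha}$:

import numpy as np

BETA, GAMMA, ZETA = 2 / 3, -0.1, 1.1  # values used later in the experiments

def hard_concrete_sample(log_alpha, rng):
    # Stretched-and-clipped binary Concrete sample; allows exact zeros.
    u = rng.uniform(1e-6, 1 - 1e-6, size=log_alpha.shape)
    s = 1 / (1 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / BETA))
    return np.clip(s * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def gate_active_prob(log_alpha):
    # psi(phi): probability that a hard concrete gate is non-zero.
    return 1 / (1 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA))))

def flops_penalty_grad(log_alpha, flops_fn, lam_f, target, n_samples=1000, rng=None):
    # Score-function (REINFORCE) estimate of the gradient of
    # E_z[lam_f * max(0, FLOPs(z) - T)], z ~ Bernoulli(psi), w.r.t. log_alpha.
    rng = rng or np.random.default_rng(0)
    psi = gate_active_prob(log_alpha)
    grad = np.zeros_like(log_alpha)
    for _ in range(n_samples):
        z = (rng.uniform(size=psi.shape) < psi).astype(float)
        f = lam_f * max(0.0, flops_fn(z) - target)
        grad += f * (z - psi)  # d log p(z) / d log_alpha = z - psi for Bernoulli
    return grad / n_samples

# e.g., gates over 100 output filters of a layer costing 26 * 576 FLOPs each:
g = flops_penalty_grad(np.zeros(100), lambda z: 26 * 576 * z.sum(),
                       lam_f=1e-6, target=500_000)

The identity in the comment holds because $\psi$ is a logistic function of $\log \alpha$, so each sample contributes only the clipped penalty times $(z - \psi)$; cheap FLOPs evaluation is what makes averaging 1000 samples per step practical, as noted below.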
FLOPs under group sparsity. In practice, computational savings are achieved only if the model is sparse across "regular" groups of parameters, e.g., each filter in a convolutional layer. Thus, each computational group uses one hard concrete r.v. [7]: in fully-connected layers, one per input neuron; in 2D convolution layers, one per output filter. Under the convention in the literature where one addition and one multiplication each count as a FLOP, the FLOPs for a 2D convolution layer $h_{\mathrm{conv}}(\cdot; \boldsymbol{\theta})$ given a random draw $z$ is then defined as
$$\mathcal{L}_{\mathrm{flops}}(h_{\mathrm{conv}}, z) = (K_w K_h C_{in} + 1)(I_w - K_w + P_w + 1)(I_h - K_h + P_h + 1)\, \|z\|_0$$
for kernel width and height $(K_w, K_h)$, input width and height $(I_w, I_h)$, padding width and height $(P_w, P_h)$, and number of input channels $C_{in}$. The number of FLOPs for a fully-connected layer $h_{\mathrm{fc}}(\cdot; \boldsymbol{\theta})$ is $\mathcal{L}_{\mathrm{flops}}(h_{\mathrm{fc}}, z) = (I_n + 1)\, \|z\|_0$, where $I_n$ is the number of input neurons. Note that these are conventional definitions in neural network compression papers; the objective can easily use instead a number of FLOPs incurred by other device-specific algorithms. Thus, at each training step, we compute the FLOPs objective by sampling from the Bernoulli r.v.'s and using the aforementioned definitions, e.g., $\mathcal{L}_{\mathrm{flops}}(h_{\mathrm{conv}}, \cdot)$ for convolution layers. Then, we apply the score function estimator to the FLOPs objective as a black-box estimator.
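As a hedged illustration of these per-layer counts (pure Python with invented argument names, not the paper's repository code), the two formulas translate directly; and because each count is linear in the number of active gates, the "expected FLOPs" reported in the Results section follow by substituting the gate-activity probabilities for the binary draws:

# Per-layer FLOPs counts as defined above (one multiply-add = one FLOP;
# one gate per conv output filter / fc input neuron). Illustrative sketch.
def l0_norm(z):
    return sum(1 for zi in z if zi != 0)

def conv2d_flops(z, k_w, k_h, c_in, i_w, i_h, p_w, p_h):
    out_w = i_w - k_w + p_w + 1
    out_h = i_h - k_h + p_h + 1
    return (k_w * k_h * c_in + 1) * out_w * out_h * l0_norm(z)

def fc_flops(z, i_n):
    return (i_n + 1) * l0_norm(z)

# Expected (training-time) FLOPs under stochastic gates: substitute the
# gate-activity probabilities psi for the binary z's in the linear count.
def expected_conv2d_flops(psi, k_w, k_h, c_in, i_w, i_h, p_w, p_h):
    out_w = i_w - k_w + p_w + 1
    out_h = i_h - k_h + p_h + 1
    return (k_w * k_h * c_in + 1) * out_w * out_h * sum(psi)

print(conv2d_flops([1, 1, 0, 1, 0], 5, 5, 1, 28, 28, 0, 0))  # 3 active 5x5 filters

For example, three active 5x5 filters on a 28x28 single-channel input with no padding yield (5*5*1 + 1) * 24 * 24 * 3 = 44,928 FLOPs under this convention.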
Experimental Results
We report results on MNIST, CIFAR-10, and CIFAR-100, training multiple models on each dataset corresponding to different FLOPs targets. We follow the same initialization and hyperparameters as Louizos et al. [7], using Adam [2] with temporal averaging for optimization, a weight decay of $5 \times 10^{-4}$, and an initial $\alpha$ that corresponds to the original dropout rate of that layer. We similarly choose $\beta = 2/3$, $\gamma = -0.1$, and $\zeta = 1.1$. For brevity, we direct the interested reader to their repository for specifics. In all of our experiments, we replace the original $L_0$ penalty with our FLOPs objective, and we train all models to 200 epochs; at epoch 190, we prune the network by removing weights associated with zeroed gates and replace the r.v.'s with their deterministic estimators, then finetune for 10 more epochs. For the score function estimator, we draw 1000 samples at each optimization step; this procedure is fast and has no visible effect on training time.

Table 1 (fragment recovered from extraction; columns inferred as method, pruned architecture, error rate, and FLOPs; the method label of the first row was lost):

Method       Architecture   Error  FLOPs
(unknown)    3-12-192-500   1.0%   205K
GD [11]      7-13-208-16    1.1%   254K
SBP [9]      3-18-284-283   0.9%   217K
BC-GNJ [6]   8-13-88-13     1.0%   290K
BC-GHS [6]   5-10-76-16     1.0%   158K
L0 [7]       20-25-45-462   0.9%   1.3M
L0-sep [7]   9-18-65-25     1.0%   403K

We choose $\lambda_f = 10^{-6}$ in all of the experiments for LeNet-5-Caffe, the Caffe variant of LeNet-5. We observe that our methods (Table 1, bottom three rows) achieve accuracy comparable to that of previous approaches while using fewer FLOPs, with the added benefit of providing a tunable "knob" for adjusting the FLOPs. Note that the convolution layers are the most aggressively compressed, since they are responsible for most of the FLOPs in this model. "Orig." in Table 2 denotes the original WRN-28-10 model [18], and L0-* refers to the $L_0$-regularized models [7]; likewise, we augment CIFAR-10 and CIFAR-100 with standard random cropping and horizontal flipping. For each of our results (last two rows), we report the median error rate of five different runs, executing a total of 20 runs across two models for each of the two datasets; we use $\lambda_f = 3 \times 10^{-9}$ in all of these experiments. We also report both the expected FLOPs and actual FLOPs, the former denoting the number of FLOPs, on average, at training time under stochastic gates and the latter denoting the number of FLOPs at inference time. We restrict the FLOPs calculations to the penalized non-residual convolution layers only. For CIFAR-10, our approaches result in Pareto-better models with decreases in both error rate and the actual number of inference-time FLOPs. For CIFAR-100, we do not achieve a Pareto-better model, since our approach trades accuracy for improved efficiency. The acceptability of the tradeoff depends on the end application. | 2018-10-20T06:08:30.610Z | 2018-10-19T00:00:00.000 | {
"year": 2018,
"sha1": "309e4a8d1ac0a32832447ac0ffb09df8da9894c7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "309e4a8d1ac0a32832447ac0ffb09df8da9894c7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
266465363 | pes2o/s2orc | v3-fos-license | RELATING THE ORGANIZATIONAL CULTURE ASSESSMENT INSTRUMENT TO ADHERENCE IN A PRAGMATIC TRIAL OF A MUSIC INTERVENTION
Abstract Embedded pragmatic trials encourage the translation of evidence-based interventions to "real-world" settings. Most pragmatic trials of behavioral interventions for people with dementia suffer from low adherence. Understanding how organizational values and structure may increase adherence is important. We report findings from an embedded, pragmatic trial (ePCT) of a personalized music intervention for managing behaviors in residents with dementia, conducted in 54 nursing homes (NHs) from four corporations between June 2019 and February 2020. Before the trial began, the administrator and a nursing staff member from each NH completed the Organizational Culture Assessment Instrument (OCAI). Using the OCAI, respondents rated their organizational culture by allocating a total of 100 points across four competing domains: Clan, Adhocracy, Hierarchy, and Market. Results were aggregated to understand how differences in culture impacted corporate-level adoption of the intervention. All four corporations allocated the majority of their points to Clan culture, which is focused on collaboration and staff engagement. However, corporations differed in their scoring of the secondary culture type. The two corporations that rated Hierarchical culture (which prioritizes consistency and efficiency) highly were more likely to adhere to the intervention protocols. The corporation that rated Market culture highly had the lowest adherence to the protocols. After controlling for other corporate characteristics, including for-profit status, size, and overall quality, hierarchical culture was associated with greater numbers of exposed residents and a higher dose of the music, compared to other culture types. Understanding the role of organizational culture in pragmatic implementation is an understudied area for research.
cognition, depressive symptoms, amount of support received, and satisfaction with support with life space mobility. The study included 247 older adults aged 65 and above from the University of Alabama at Birmingham (UAB) Diabetes and Aging Study of Health (DASH). Average age was 73, 45% of the sample were Black/African American, 53% were female, and 47% were married. Results from multiple covariate-adjusted regression analyses revealed that being Black/African American, older, female, and having higher depressive symptoms significantly predicted lower life space mobility (all p's < .05), while being married, educated, and reporting better health significantly predicted greater life space mobility. Similarly, higher cognitive function was a significant predictor of greater life space mobility (B = .140, p < .05). Results remained significant even when adjusted for covariates. Amount of support received and satisfaction with support did not predict life space mobility. Findings from this investigation identify individuals who are at risk for restricted life space mobility and suggest protective factors. Establishing these associations with life space mobility within a health disparities framework would be important, as it would draw attention to functioning in later life for socially disadvantaged groups and help inform interventions.
Abstract citation ID: igad104.2133
Music offers a promising non-pharmacological alternative for managing behavioral dysregulation in people with Alzheimer's disease and other dementias (ADRD). Using data from an embedded, pragmatic trial (ePCT) of a personalized music intervention for nursing home (NH) residents with ADRD, we examined resident and NH characteristics associated with exposure to the intervention and the dose of music received. Participants were enrolled from 54 NHs (27 treatment, 27 control) between June 2019 and February 2020. The intervention was resident-preferred music delivered at early signs of agitation. Intervention dose was calculated by multiplying song duration by the number of plays, averaged over days exposed. Facility- and resident-level characteristics were identified using the Minimum Data Set and the Certification and Survey Provider Enhanced Reports. A mixed-effects hurdle model was used. 483 residents participated (67.7% female, mean age 79.8 ± 12.2 years). Female residents (p = 0.04) taking antipsychotic medications (p = 0.06) were more likely to receive the intervention, as were residents from NHs with greater nursing involvement (p = 0.02). Residents with greater health instability received a greater dose (p = 0.04). In this ePCT of a personalized music intervention, NHs with more nursing engagement had greater use of the intervention and appropriately chose residents with antipsychotic use to participate. After adjusting for initial selection, staff used the intervention more frequently with residents who had a higher likelihood of death in the next six months, potentially indicating beneficial use for comfort at the end of life. Our findings offer insights into future tailoring of personalized music interventions to increase the likelihood of successful implementation.
FACTORS ASSOCIATED WITH THE USE OF A PERSONALIZED MUSIC INTERVENTION FOR NURSING HOME RESIDENTS WITH DEMENTIA
Abstract citation ID: igad104.2134
Yochai Shavit 1, Brett Anderson 2, and Laura Carstensen 3, 1. Stanford University, Stanford, California, United States, 2. MemorialCare Long Beach Medical Center, Anaheim, California, United States, 3. Stanford, Stanford, California, United States
Age differences in temporal discounting have long puzzled researchers. Although older adults tend to prioritize the present over the future due to more limited time horizons compared to younger adults, there is no evidence for an age association with temporal discounting of monetary rewards. Socioemotional selectivity theory (SST) posits that as time horizons become more limited, goals related to emotional meaning are prioritized over future-oriented goals because they are realized in the present, leading older adults to place more value on experiences than younger adults. A small body of evidence showing age-related preferences for small, immediate rewards in emotionally meaningful domains is consistent with SST. However, prior studies used hypothetical tasks with vague trade-offs, limiting their interpretability and generalizability. In the present study, we developed a novel paradigm to examine age differences in temporal discounting of rewards related to emotional experience in a controlled environment. 120 participants, aged 22-96 years, came to the lab and made a series of choices indicating whether they would prefer to replace 5 minutes of boring tasks with an emotionally meaningful variant of the task today, or a larger amount of time at their next visit in six months. Participants then made similar choices about the timing of receiving comparable monetary rewards. We hypothesized that, in contrast to monetary rewards, older adults are less likely than younger adults to wait for rewards related to emotional experiences because time becomes increasingly valued as it grows scarce. Findings support the hypothesis and highlight the role of perceived time horizons.
OLDER AGE IS ASSOCIATED WITH TEMPORAL DISCOUNTING OF TIME USE BUT NOT MONETARY REWARDS
Abstract citation ID: igad104.2135
RELATING THE ORGANIZATIONAL CULTURE ASSESSMENT INSTRUMENT TO ADHERENCE IN A PRAGMATIC TRIAL OF A MUSIC INTERVENTION
Enya Zhu, Ellen McCreedy, Laura Dionne, and Vincent Mor, Brown University School of Public Health, Providence, Rhode Island, United States
Embedded pragmatic trials encourage the translation of evidence-based interventions to "real-world" settings. Most pragmatic trials of behavioral interventions for people with dementia suffer from low adherence. Understanding how organizational values and structure may increase adherence is important. We report findings from an embedded, pragmatic trial (ePCT) of a personalized music intervention for managing behaviors in residents with dementia, conducted in 54 nursing homes (NHs) from four corporations between June 2019 and February 2020. Before the trial began, the administrator and a nursing staff member from each NH completed the Organizational Culture Assessment Instrument (OCAI). Using the OCAI, respondents rated their organizational culture by allocating a total of 100 points across four competing domains: Clan, Adhocracy, Hierarchy, and Market. Results were aggregated to understand how differences in culture impacted corporate-level adoption of the intervention. All four corporations allocated the majority of their points to Clan culture, which is focused on collaboration and staff engagement. However, corporations differed in their scoring of the secondary culture type. The two corporations that rated Hierarchical culture (which prioritizes consistency and efficiency) highly were more likely to adhere to the intervention protocols. The corporation that rated Market culture highly had the lowest adherence to the protocols. After controlling for other corporate characteristics, including for-profit status, size, and overall quality, hierarchical culture was associated with greater numbers of exposed residents and a higher dose of the music, compared to other culture types. Understanding the role of organizational culture in pragmatic implementation is an understudied area for research.
THE ASSOCIATION BETWEEN COGNITION AND UPPER EXTREMITY MOTOR REACTION TIME IN OLDER ADULTS: A NARRATIVE REVIEW
Alexandria Jones, Natalie Weaver, Mardon So, Abbis Jaffri, and Rosalind Heckman, Creighton University, Omaha, Nebraska, United States
Response timing is essential to optimal sensorimotor control across the lifespan. While it is broadly assumed that reaction time increases as cognition declines with age, it is unclear whether this assumption is supported by the literature. The purpose of this narrative review was to determine the association between cognition and upper extremity reaction time in older adults. The cognitive domains of sensation and perception, motor construction, perceptual motor function, executive function, attention, learning and memory, and language were considered. We conducted a systematic search using the Scopus database. The search strategy was designed to meet four inclusion criteria: 1) community-dwelling adults >60 years; 2) an upper extremity motor task; 3) at least one cognitive assessment; 4) a simple reaction time measure. 1154 articles were screened. Two articles met the full inclusion criteria, but those studies did not associate the cognitive assessment and simple reaction time measures. Nine articles that met three inclusion criteria were reviewed. We found that executive function and learning and memory have been associated with complex and choice reaction time measures. Language, perceptual motor function, and attention have been studied with mixed evidence for an association with reaction time, whereas sensation and perception and motor construction have not been assessed. Overall, limited research has compared cognitive domain function and simple reaction time to determine whether age-related changes are associated. While the complex interplay between cognition and motor function is of substantial interest, these measures are often interdependent, and additional knowledge is needed to understand their influence on sensorimotor control with age.
THE EFFECT OF DUAL-TASKS ON WALKING SPEED IN HEALTHY OLDER ADULTS: A SYSTEMATIC REVIEW AND META-ANALYSIS
Hyeon Jung Heselton 1, Adam Rosen 2, and Julie Blaskewicz Boron 3, 1. University of Nebraska Omaha, Omaha, Nebraska, United States, 2. University of Nebraska at Omaha, Omaha, Nebraska, United States, 3. UNO, Omaha, Nebraska, United States
Normative age-related changes occur in both walking and cognitive performance; non-normative changes could lead to decreased mobility, loss of autonomy in daily life, and increased fall risk. This systematic review and meta-analysis aimed to assess the effects of completing a dual-task on overground walking speed in healthy older adults. Database searches were carried out in three electronic databases (PubMed, PsycINFO, ProQuest) from inception through October 2022 to find studies that assessed healthy older adults who completed a walking test and a walking task combined with a neurocognitive assessment (dual-task). Methodological quality and risk of bias were assessed via the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist. Cohen's d effect sizes with 95% confidence intervals, Cochran's Q, I², and fail-safe N statistics were calculated between the single- and dual-task walking speeds. The initial search yielded 459 articles; six were included in the final review after completing all screening procedures. Included studies had relatively high STROBE scores (19.97 ± 0.82). Eight individual effects across the six articles (total sample size = 5627) were calculated. Walking speed significantly decreased in dual-task conditions with cognitive tasks (walking speed Δ = 1.03; 95% CI 0.65 to 1.40; p < .001); individual effects were higher during high-load cognitive tasks (i.e., serial 7s subtraction, TMT part B). Dual-tasking with a cognitive component clearly affected walking speed in aging adults. Studying different age groups will help create a model of the aging process to understand when changes begin, as well as the rate of change; this could be useful for earlier fall-prevention interventions.
THE LONG-TERM RELATIONSHIP BETWEEN BMI AND COGNITIVE PERFORMANCE: A CROSS-LAGGED PANEL ANALYSIS
Andrew Fiscella 1, and Ross Andel 2, 1. University of South Florida, Tampa, Florida, United States, 2. Arizona State University, Tempe, Arizona, United States
While obesity has traditionally been associated with negative outcomes, an obesity paradox has been observed which suggests that older adults may show some benefit from having a higher weight. This longitudinal study examined the association between body mass index (BMI) and episodic memory performance. Data from 14,639 participants collected as part of the Health and Retirement Study were used in a 10-year random intercept cross-lagged panel analysis. BMI was used to measure obesity, while T-scores from tests of immediate and delayed recall were used to represent episodic memory performance. Initially, a higher BMI was associated with lower episodic memory scores in the following wave (β = -.132, p < .001), but this association became positive over time (β = .142, p < .001), with higher BMI related to better episodic memory scores, even after accounting for demographic and health covariates. The association between BMI and episodic memory over time was stronger in older adults (β = .068, p < .05) than middle-aged adults (β = .020, p = .533). | 2023-12-23T05:13:32.853Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "842c076c4edd6474a06dfcc757162e89e3d59934",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "842c076c4edd6474a06dfcc757162e89e3d59934",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
119360925 | pes2o/s2orc | v3-fos-license | On the Symmetry of Low-Field Ordered Phase of PrFe4P12 : 31P NMR
We have performed 31P nuclear magnetic resonance (NMR) experiments on the filled skutterudite compound PrFe4P12 to investigate the exotic ordered phase at low temperatures and low magnetic fields. Analysis of the NMR line splitting in the ordered phase indicates that totally symmetric (\Gamma_1-type) staggered magnetic multipoles are induced by magnetic fields. This is incompatible with any type of quadrupole order at zero field. We conclude that the ordered phase has broken translational symmetry with the wave vector Q=(1,0,0) but the Th point symmetry at the Pr sites is not broken by the non-magnetic \Gamma_1-type order parameter.
Intermetallic compounds with the filled-skutterudite structure RT$_4$X$_{12}$ (R = rare earth, T = transition metal, X = pnictogen) have attracted strong recent attention because of a variety of intriguing phenomena occurring in a common crystal structure, such as metal-insulator transitions, 1) multipole ordering, 2) exotic superconductivity, 3) and anomalous phonons. 4) Among them, PrFe$_4$P$_{12}$ shows a peculiar phase transition at $T_A = 6.5$ K. 5) The low-temperature phase has no spontaneous magnetic moment at zero field 6) and is suppressed by a magnetic field of 4-7 T, depending on the field direction, 5) resulting in a heavy-fermion state with a large cyclotron effective mass $m^*_c = 81\,m_0$. 7) The low-temperature phase has a structural modulation with the wave vector $Q = (1, 0, 0)$, indicating loss of the (1/2, 1/2, 1/2) translation. 8) Spatial ordering of the distinct electronic states of the two Pr$^{3+}$ ions in the bcc unit cell was also observed by resonant X-ray scattering, 9) and field-induced staggered magnetization was observed by neutron scattering. 10) Although these experiments and the elastic measurements 11) suggest an antiferro-quadrupole order likely to be of $\Gamma_{23}$-type, direct identification of the order parameter has not yet been made.
In this letter, we report results of nuclear magnetic resonance (NMR) experiments on $^{31}$P nuclei (spin 1/2) in a single crystal of PrFe$_4$P$_{12}$, focusing on the ordered phase. We observed field-induced splitting of the $^{31}$P NMR lines below $T_A$. The results for various field directions revealed that the splitting is due to totally-symmetric staggered magnetic multipoles (octupole as well as dipole) belonging to the $\Gamma_1$ representation. We conclude that the order parameter at zero field has $\Gamma_1$ symmetry, excluding any type of quadrupole order.
In the filled skutterudite structure (the space group $Im\bar{3}$, $T_h^5$), Pr atoms are surrounded by an icosahedral cage of P atoms and form a body-centered cubic lattice, as shown in Fig. 1. 6,12) Iron atoms sit at the midpoint between the corner and the body-centered Pr atoms. Although all the P sites are crystallographically equivalent, they have different NMR frequencies in a magnetic field because of the anisotropic hyperfine interaction. For later discussion, we define six types of P sites giving distinct NMR frequencies for general field directions as P1$(0, u, v)$, P2$(0, \bar{u}, v)$, P3$(v, 0, u)$, P4$(v, 0, \bar{u})$, P5$(u, v, 0)$ and P6$(\bar{u}, v, 0)$, where $u$ and $v$ are the asymmetry parameters (Fig. 1). 13) If the direction of the external field $H$ is invariant under a symmetry operation which transforms one P site to another, those two P sites must have the same NMR frequency, consistent with the line assignment shown in Fig. 2(a). The small splitting of the two high-frequency lines for $H \parallel [001]$ is probably due to nuclear spin-spin coupling. On crossing $T_A$ from above, each line splits into a pair of lines, as shown in Figs. 2(b) and 3. The splitting grows continuously near $T_A$ at low fields (0.4 T), as shown in Fig. 3. At higher fields above about 1.5 T, however, the splitting develops discontinuously, and there is a narrow temperature range in the vicinity of $T_A$ where the spectrum has both split and unsplit lines. This indicates that the second-order phase transition at low fields changes to first order at higher fields, consistent with the results of specific-heat measurements. 5) We define the splitting of the hyperfine field $\Delta H$ as the frequency interval of the splitting divided by the nuclear gyromagnetic ratio $\gamma_N$. The field dependence of $\Delta H$ is plotted in Fig. 4 for various field directions at $T = 2$ K. A remarkable result is that $\Delta H$ extrapolates to zero at $H = 0$. Such a feature has also been observed in polycrystals by Ishida et al. 14) The field dependences of $\Delta H$ are different for different sites. This is most evident for $H \parallel [001]$, where $\Delta H$ for P3 and P4 increases monotonically with increasing field while $\Delta H$ for the other sites shows a maximum. The field variation of $\Delta H$ is smooth in all cases, without any jump or kink, indicating the absence of additional phase transitions up to the boundary with the high-field heavy-fermion (HF) state. A first-order transition to the HF phase is apparent from the sudden vanishing of $\Delta H$ at the phase boundary.

Table I. Symmetries of multipoles up to hexadecapoles in the $T_h$ crystal field. The + and − signs show the parity under time reversal. The multipoles are defined in terms of the dipole $J$ as in refs. 16,17) [the explicit definitions, beginning with the quadrupoles, were lost in extraction]. Here $\xi, \eta, \zeta$ represent $x, y, z$ and their cyclic permutations, and the bars on products denote summation over all permutations of their subscripts. Since $(T^\alpha_x, T^\alpha_y, T^\alpha_z)$ and $(T^\beta_x, T^\beta_y, T^\beta_z)$ (likewise $(H^\alpha_x, H^\alpha_y, H^\alpha_z)$ and $(H^\beta_x, H^\beta_y, H^\beta_z)$) have the same symmetry $\Gamma^-_4$ ($\Gamma^+_4$) in the $T_h$ group, they can be mixed. [Of the tabulated rows, only the column headers "Symmetry", "Magnetic multipoles", "Nonmagnetic multipoles" and the $\Gamma^-_4$ row ($J_x, J_y, J_z, T_x, T_y, T_z$; no nonmagnetic entries) are recoverable.]

Table II. Symmetries of multipoles in the $T_h$ crystal field in magnetic fields. The numbers in parentheses in the fifth column indicate the equivalent P sites, and the last column shows the number of NMR lines for a single P$_{12}$ cage in the presence of the magnetic multipoles. They should be multiplied by two to obtain the total number of lines in the antiferro-multipole ordered phase with $Q = (1, 0, 0)$. [Tabulated rows lost in extraction.]

Detailed examination of the angle dependence of the spectra below $T_A$, reported in ref. 15, revealed that all of the NMR lines above $T_A$ always split into two lines and that $\Delta H$ never vanishes for any field direction. Obviously, the doubling of the NMR lines should be ascribed to the loss of the (1/2, 1/2, 1/2) translation and the distinct electronic states of the two Pr ions at the corner (PrI) and the body center (PrII) of the original bcc lattice. 8,9) This should divide each of the P1-P6 sites into two sublattices, P1(I)-P6(I) and P1(II)-P6(II). The former (latter) belongs to the P$_{12}$ cage surrounding the PrI (PrII) sites. We should stress that there is no additional line splitting, i.e., the number of NMR lines from each of these cages is exactly the same as the number of lines above $T_A$. This means that the $T_h$ point symmetry at both Pr sites is preserved below $T_A$, which is not compatible with any type of quadrupole ordering. In the following, we provide more precise arguments on these points.
The hyperfine field at the P nuclei is determined by the spin-density distribution of the Pr 4f electrons through the dipolar and transferred hyperfine interactions. The very local nature of the latter interaction causes the nuclei to couple not only to dipole moments but also to octupoles and higher-order magnetic multipoles. This is because the local spin density near a P nucleus can be nonzero when a Pr ion has a finite expectation value of a high-order magnetic multipole, even if the dipole moment, i.e., the spatial integral of the spin density, is zero. Such a possibility was first recognized by Sakai et al. 18) in their analysis of the NMR data on CeB$_6$. 19) Magnetic and nonmagnetic multipoles up to hexadecapoles are presented in Table I as bases of the irreducible representations of the $T_h$ group. In magnetic fields, they are decomposed into sets with a smaller number of basis elements for the reduced symmetries, as shown in Table II. The NMR line splitting $\Delta H$ is due to some staggered magnetic multipoles with $Q = (1, 0, 0)$. The vanishing of $\Delta H$ at zero field indicates that these staggered multipoles also disappear at zero field. Therefore, the order parameter (OP) at zero field must be a nonmagnetic staggered multipole, which is even with respect to time reversal and does not couple to nuclear magnetic moments. As discussed by Shiina et al., 16) however, the OP at zero field and the field-induced multipoles must belong to the same irreducible representation of the symmetry group reduced by the magnetic field. Thus, identification of the field-induced magnetic multipoles allows us to determine the symmetry of the OP at zero field.
Using the invariant form of the hyperfine coupling at the P sites in the filled skutterudite structure, 15,20) the difference of the hyperfine fields at Pn(I) and Pn(II) ($n = 1$-6), $\Delta H(n)$, can be written in terms of the staggered octupole moments $T^s_{xyz}$ and $T^s_\xi$, the staggered dipole moments $J^s_\xi$, and the hyperfine coupling constants $c_{ij}$ [eq. (1); the displayed equation did not survive extraction]. For the terms with $\pm$, the + (−) sign should be taken for P1 (P2). Expressions for P3 and P4 (P5 and P6) are obtained by applying once (twice) the simultaneous cyclic permutations $x \to y \to z \to x$ in the subscripts of the multipoles and $\Delta H_x \to \Delta H_y \to \Delta H_z \to \Delta H_x$. Since the external field is much larger than the hyperfine field, $\Delta H$ is equal to $\Delta \boldsymbol{H} \cdot \boldsymbol{h}$, where $\boldsymbol{h}$ is the unit vector along the field direction. From eq. (1), we can determine the equivalent sites and the number of NMR lines in the presence of these field-induced magnetic multipoles, as shown in the last two columns of Table II. For example, if a staggered component of $J_y$ or $T_y$ were induced by the field along [001], eq. (1) tells us that $\Delta H$ [the remainder of this example was lost in extraction]. From Table II, we can conclude that the field-induced magnetic multipoles must have the $\Gamma_1$ symmetry for all field directions. In other words, only the totally-symmetric multipoles can avoid additional line splitting. For dipoles, this means that the induced dipole moment is always parallel to the external field, which is consistent with the neutron scattering measurements. 10) This can be intuitively understood as illustrated in Fig. 5. [A passage referring to Table III was lost in extraction.] It is apparent that only the $\Gamma_1$ representation at zero field always has a $\Gamma_1$ component in magnetic fields, irrespective of the field direction.
At present, we cannot identify the detailed form of the OP, because the number of NMR lines depends only on the symmetry of the OP in magnetic fields. The OP may thus include various multipoles of different ranks in general, insofar as they are nonmagnetic and have the $\Gamma_1$ symmetry. Such a multipole order can be caused, for example, by alternate breathing of the icosahedral cages, which preserves the local $T_h$ symmetry at the Pr sites. This type of lattice distortion has been observed in the insulating phase of PrRu$_4$P$_{12}$, 21) where antiferro order of the hexadecapole $H^0 \propto J_x^4 + J_y^4 + J_z^4$ has been proposed. 22) In fact, this is the only totally-symmetric nonmagnetic multipole among those listed in Table I. Quantitative analyses of the splitting $\Delta H$ as a function of the direction and magnitude of the field will give further information about the OP, such as the rank of the dominant multipole. Some phenomenological approaches are in progress, 15,20) although a microscopic theory is needed to uncover the mechanism of this interesting phase transition.
In conclusion, we have presented $^{31}$P NMR data on a single crystal of PrFe$_4$P$_{12}$, confirming antiferro order of nonmagnetic multipoles at low fields. There exist field-induced staggered magnetic multipoles, which are compatible only with a totally-symmetric order parameter at zero field. The $T_h$ point symmetry at the Pr sites is thus preserved in the ordered phase, excluding any type of quadrupole order.
"year": 2007,
"sha1": "19da50a723b5679b7aec8ac70b25a41fbae279ee",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0701510",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "19da50a723b5679b7aec8ac70b25a41fbae279ee",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
231835172 | pes2o/s2orc | v3-fos-license | Melatonin Maintains Macrophage M1 Phenotype to Reverse LPS-stimulated Tumor Immune Tolerance
Background: Lipopolysaccharide (LPS) is a potent trigger of macrophage-mediated inflammation, and its repeated stimulation results in immune tolerance. This study explores the cellular mechanisms of LPS-mediated tumor immune tolerance and investigates whether melatonin can reverse this tolerance. Methods: The effect of melatonin and LPS on macrophages was assessed by cell proliferation, morphological changes, phagocytosis and autophagy in vitro. The tumor-preventing effects of melatonin and LPS were evaluated in the urethane-induced lung carcinoma model and in the H22 liver cancer allograft model. Immunofluorescence, immunohistochemistry and ELISA were used to examine protein expression. The related targets and pathways of melatonin were predicted by comprehensive bioinformatics, and the clinical association of bacterial infections and survival was evaluated in cancer patients by meta-analysis. Results: In vitro, Raw264.7 macrophages were polarized toward the M1 phenotype by single LPS administration but toward the M2 phenotype by repeated LPS administration. Interestingly, combination treatment with repeated LPS and 10 µM melatonin prevented macrophage polarization toward the M2-like phenotype and exerted lasting antitumor efficacy. In the urethane-induced lung carcinoma model, repeated LPS administration stimulated macrophage polarization toward the M2 phenotype and promoted lung carcinogenesis, which was abrogated by macrophage depletion, while melatonin alone or in combination with repeated LPS challenge restored M1-like macrophages and prevented carcinogenesis. In the H22 liver cancer allograft model, melatonin maintained the macrophage phenotype and promoted the tumor-suppressing effect of repeated LPS challenge. Furthermore, we found that macrophages repeatedly stimulated with LPS had a high level of surface lipid rafts that mediated PI3K/AKT and JAK2/STAT3 signaling and prevented both LPS sensitivity and the immune response through self-expression of PD-L1 and surface expression of the PD-1 receptor on NK cells, whereas melatonin decreased surface lipid rafts and PI3K/AKT and JAK2/STAT3 signaling. Finally, we conducted a comprehensive bioinformatics analysis of melatonin-relevant targets and pathways involved in M2 macrophage polarization and evaluated the clinical associations of bacterial infections and survival in cancer patients. Conclusions: This study suggests a function of melatonin in regulating macrophage polarization to maintain LPS-stimulated tumor immune surveillance.
cells. [8] The 10-year datasets from first-line anti-CTLA4 therapy show unprecedented long-term survival in 20% of terminal metastatic melanoma patients, a result never seen before with other approaches to cancer treatment, indicating an exciting success in cancer immunotherapy that can offer incurable cancer patients the hope of becoming "super-survivors" [9,10]. Nevertheless, response rates with the most promising immunotherapies, such as PD-1 inhibitors, which represent immune-checkpoint-blockade (ICB)-mediated rejuvenation of exhausted T cells, show only a modest 20% overall survival benefit in many solid tumors, whereas CAR-T cell therapy was reported to cause cerebral edema and cytokine release storms [7,11]. In addition, a more recent clinical report showed that PD-1 inhibitors could accelerate tumor growth and promote tumor hyperprogression in 9% of cancer patients, indicating a need to optimize cancer immunotherapeutic approaches. [12] Current immunotherapies mainly activate tumor-infiltrating T lymphocytes and natural killer cells without regard to environmental changes that can cause various states of T cell dysfunction, such as anergy, tolerance, exhaustion, and senescence. [12,13] In fact, antitumor T cell immunotherapies originally elicit responses but subsequently become resistant during prolonged antigen exposure, owing to "immune exhaustion" induced by the immunosuppressive microenvironment shaped by tumor-associated inflammatory cells. [14] Obviously, effective tumor immune rejection relies not only on antigen exposure-induced adaptive immune responses but also on the innate immune surveillance-regulated microenvironment. [15] Tumor-infiltrated macrophages (TIMs) are the major inflammatory cells infiltrating the tumor microenvironment and are responsible for the immunosuppressive microenvironment and tumor progression. [16] Recent studies have shown that inhibition of macrophage-mediated phagocytosis is an essential mechanism of tumor immune evasion. [17] In the clinic, TIMs are positively associated with high tumor grade and poor prognosis in various cancers. In mouse cancer models, TIM depletion or reeducation can reverse their tumor-promoting functions. [18,19] In addition, some recent reports have shown that innate macrophages also play important roles in the intratumoral infiltration of CD8 cytotoxic T cells and the establishment of long-lived memory lymphocytes. Therefore, targeting TIMs to reawaken innate immunity has emerged as a new cancer immunotherapy strategy. [20] It is well known that activated macrophages are divided into antitumor M1 and protumor M2 phenotypes. [21] LPS can polarize macrophages toward the M1 phenotype, but repeated stimulation results in immune tolerance. [22] Melatonin is a neurohormone secreted by the pineal gland. [23] This study explored the cellular mechanisms of LPS-mediated tumor immune tolerance and investigated whether melatonin can reverse this tolerance. Our results are the first to indicate a vital role of macrophage polarization in LPS tolerance and also suggest a new mechanism by which bacterial infections increase the risk of carcinogenesis.
41000100002406. Liposome-encapsulated clodronate (LEC) was prepared as described previously. [24]

Cell Culture and Assay

Raw264.7 macrophages and Lewis lung carcinoma (LLC) cells were from ATCC, purchased through the Chinese Academy of Sciences, and grown in RPMI 1640 medium supplemented with 10% (v/v) fetal bovine serum (FBS) in a humidified atmosphere containing 5% CO2 and 95% air at 37 °C. Raw264.7 macrophages were seeded in 24-well plates and stimulated with 10 ng/ml LPS or 10 ng/ml IL-10 for 24 h to obtain M1-like (M1) and M2-like (M2) macrophages, respectively. To collect cell-conditioned media, M1-like or M2-like cells were cultured in serum-free medium for 24 h; the media were centrifuged to remove the cells and further filtered to remove debris, and the supernatants were collected as M1 and M2 cell-conditioned media (M1-CM and M2-CM, respectively). The supernatant levels of IFN-γ, TNF-α, NO, PD-L1, IL-10 and TGF-β1 were determined with ELISA kits, according to the manufacturers' protocols. [25] The results were calculated from linear curves obtained using the Quantikine kit standards.
For proliferation analysis, LLC cells at 1 × 10^5 cells/mL were seeded in a 96-well plate and treated with M1 or M2 cell-conditioned media for 48 h; M1-like or M2-like cells were also treated with LPS or melatonin, alone or in combination, for 7 d (changing the medium every 2 days), and living cells were examined by the MTT reduction assay, according to our previous method. [24] For morphological assessment, the cells were analyzed with a laser holographic cell imaging and analysis system (HoloMonitor M4, Phiab, Sweden). [25] For assessment of phagocytic ability, neutral red phagocytosis was detected. For autophagy analysis, the cells were stained using PE-conjugated anti-LC3-B or anti-p62 antibodies. For apoptosis analysis, the binding of ANXV-FITC to phosphatidylserine was measured with an automated cell counter.

LLC cell immune clearance

LLC cell immune clearance was assessed using a calcein-release assay, according to our previous method. [24] Briefly, NK cells (DX5+) were purified from ICR mouse spleens using the MACS separation system (Miltenyi Biotec, Bergisch Gladbach, Germany), stimulated with IL-2 (10 ng/ml) for 24 h in the presence of M1-like or M2-like cell-conditioned media, and harvested as effector cells. Mitomycin C-treated LLC cells were labeled with 10 µM calcein-AM as target cells and were placed into a 96-well plate with the effector cells at 100:1, 50:1, and 25:1 (NK cells:LLC cells) ratios for 6 h at 37 °C. The supernatants were transferred from each well to another 96-well plate, and the fluorescence was measured using a Synergy2 multimode microplate reader (BIO-TEK). Maximum release was obtained from detergent-lysed LLC cells, and spontaneous release was obtained from LLC cells incubated in the absence of effector cells (n = 5).
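The maximum- and spontaneous-release readings collected here feed the conventional calcein-release calculation, percent specific lysis = 100 × (experimental − spontaneous) / (maximum − spontaneous); this formula is the standard convention rather than stated explicitly in the text, and the values in the sketch below are invented.

# Standard calcein-release cytotoxicity calculation (illustrative values only).
def percent_specific_lysis(experimental, spontaneous, maximum):
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

# hypothetical fluorescence readings for one effector:target ratio (n = 5 wells)
wells = [5200, 5350, 5100, 5420, 5280]
spontaneous, maximum = 3100, 9800
for f in wells:
    print(round(percent_specific_lysis(f, spontaneous, maximum), 1))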
To examine how the different macrophage phenotypes affect NK cells, NK cells were cultured in the lower chamber at a concentration of 5 × 10^6 cells/ml and stimulated with IL-2, and M1-like or M2-like cells were added to the upper chamber at 2 × 10^6 cells/ml in the presence or absence of anti-IL-10, anti-TGF-β1, MβCD, WP1066 or LY294002. After coincubation for 24 h at 37 °C, the supernatant was centrifuged for the PD-L1 assay, adherent cells in the upper compartment were removed with a cotton swab, and the filter inserts were incubated in medium supplemented with 5 mg/ml DAPI for 30 min at 37 °C and analyzed for cell migration using an inverted fluorescence microscope. NK cells in the lower chamber were collected for the surface PD-1 receptor assay using a PE-conjugated anti-PD-1 antibody.
Cytolytic assay
Cytolytic activity toward LLC cells was assessed by CFSE/7-AAD staining. Briefly, LLC cells were incubated with CFSE-labeled M0, M1-like or M2-like cells at 20:1 and 10:1 (M0, M1 or M2 cells:LLC cells) ratios for 6 h. Then, 7-AAD was added to the cell suspensions, which were incubated on ice for 15 min. The percentages of 7-AAD+ cells among CFSE+ cells were analyzed using an automated cell counter and analysis system.
Urethane-induced lung carcinogenesis model
Urethane (600 mg/kg body weight), alone or in combination with liposome-encapsulated clodronate (LEC, 4 mg/mouse), was injected intraperitoneally (i.p.) into ICR mice once a week for eight weeks, according to our previous protocol. [26] Following the first urethane injection, mice received melatonin (20 mg/kg/day) via intragastric administration once a day or LPS (1 mg/kg) via tail vein injection once a week, alone or in combination, for twelve weeks. At thirteen weeks after the first urethane injection, orbital venous blood was collected for serum assays of IFN-γ, IL-2, TNF-α, PD-L1, IL-10 and TGF-β1 using ELISA kits. The mice were sacrificed, and cell-free alveolar fluid was collected by inserting a cannula into the trachea with three sequential injections of 1 mL PBS, followed by centrifugation, for cytokine assays (IFN-γ, IL-2, TNF-α, ROS, IL-10 and TGF-β1), while the separated cells were resuspended in 0.9% sterile saline for total cell counts. Macrophages in the suspensions were enriched by magnetic cell sorting with anti-F4/80-coated beads, and macrophage immunophenotypes were analyzed by FITC-conjugated anti-mouse CD86 and CD163 staining using an automated cell counter and analysis system (Nexcelom Cellometer X2, Nexcelom, USA).
Spleen NK cells (DX5+) were separated using the autoMACS separation system for assays of the surface PD-1 receptor and the memory NK cell rate (NKG2c+NKG2a−), analyzed with an automated cell counter and analysis system.
The average number of lung carcinomas per mouse was calculated. A portion of each lung was preserved in 10% buffered formalin and routinely embedded in paraffin. Lung sections were stained by immunohistochemistry and immunofluorescence according to our previous method. [27] After overnight incubation with the primary antibodies (anti-PD1, anti-iNOS, and anti-CD31), the slides were incubated with FITC-conjugated goat anti-mouse IgG for 30 minutes. Total immunohistochemical and immunofluorescence scores were calculated from the intensity score and the proportion score, with primary-antibody omission and IgG-matched serum used as controls.
In addition, lung vascular integrity was assayed by the Evans blue dye extravasation technique, according to our previous method. [28]

Tumor allograft model

H22 cells were used for tumor allograft experiments. Two hundred microliters of saline containing 1 × 10^6 cells were injected subcutaneously into the lateral axilla of mice to establish tumor allografts. One day after tumor inoculation, in vitro LPS-induced M1 or IL-10-induced M2 cells (2 × 10^6 cells in 200 µL saline) were injected intravenously into mice once a week for three weeks; simultaneously, mice received melatonin (20 mg/kg) via intragastric administration once a day and LPS (1 mg/kg) or LEC (4 mg/mouse) via tail vein injection once a week, alone or in combination, for 3 weeks. Tumor size was monitored twice a week with calipers and calculated as length × width² / 2. On the twenty-second day after tumor inoculation, orbital venous blood was collected for serum assays of IFN-γ, IL-2, TNF-α, PD-L1, IL-10 and TGF-β1. The mice were euthanized, the tumors were extracted and weighed, peritoneal macrophages were enriched by magnetic cell sorting with anti-F4/80-coated beads, and macrophage immunophenotypes were analyzed. Spleen NK cells were separated using the autoMACS separation system for assays of the surface PD-1 receptor and the memory NK cell rate. The complete assay procedure was similar to the methods used in the urethane-induced lung carcinogenesis model.
In addition, tumor vascular integrity was assayed with the Evans blue dye extravasation technique according to our previous method [25]. For the immune rechallenge study, the tails of half of the 40 mice were injected subcutaneously with 5 × 10⁵ H22 cells suspended in 50 µL saline. One day after tumor inoculation, the mice received melatonin (20 mg/kg) via intragastric administration once a day and LPS (1 mg/kg) or LEC (4 mg/mouse) via intravenous tail injection once a week, alone or in combination, for 2 weeks. Fifteen days after tumor implantation, the tumor-bearing tail was cut off to remove the primary tumor, and the mice were rechallenged with subcutaneous injections of 1 × 10⁶ H22 cells in 200 µL saline in the flanks, while 10 normal mice were challenged with identical H22 cells. Tumor size was monitored twice a week with calipers. On day 36, the same assays as above were carried out.
The regulatory mechanism of melatonin on macrophages
The gene expression profile GSE5099 was obtained from the Gene Expression Omnibus (GEO) database. Up- and downregulated genes related to tumor-associated macrophages were identified using GEO2R, and the human structures of these differential proteins were collected from the Protein Data Bank (PDB) for docking analysis. The chemical structure of melatonin was obtained from PubChem, and the docking exercise was conducted using the online software systemsDock with automatic removal of unspecified protein structures. Docking scores over 5 were regarded as indicating potential targets of melatonin. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were performed for the potential targets using the Database for Annotation, Visualization and Integrated Discovery (DAVID) and the online software Omicshare. The protein-protein interactions (PPIs) among these potential targets were constructed using the STRING database, and the hub genes were identified using Cytoscape.
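A minimal sketch of the screening logic described above is given below; the file names and column labels (logFC, P.Value, docking_score) are assumptions about the export format, not the actual files used in this study, while the thresholds follow the text.

```python
import pandas as pd

# Hypothetical GEO2R export for GSE5099; column names are assumed.
deg = pd.read_csv("GSE5099_geo2r.csv")  # columns: Gene.symbol, logFC, P.Value

# Keep significantly up- or downregulated genes.
sig = deg[(deg["P.Value"] < 0.05) & (deg["logFC"].abs() >= 1.5)]

# Merge with docking results (hypothetical file) and apply the
# docking-score cutoff of 5 used in the text.
dock = pd.read_csv("systemsdock_scores.csv")  # columns: Gene.symbol, docking_score
targets = sig.merge(dock, on="Gene.symbol")
targets = targets[targets["docking_score"] > 5]

print(sorted(targets["Gene.symbol"].unique()))
```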
Network Meta-analysis
We systematically searched PubMed and Web of Science to identify eligible studies published from Jan 1, 1990 to Apr 1, 2019. Odds ratios (ORs) and hazard ratios (HRs) with 95% confidence intervals (CIs) were used to evaluate the risk of bacterial infections in carcinogenesis and cancer survival. Fixed- and random-effects meta-analyses were conducted based on the heterogeneity of the included studies. To minimize case selection bias, sensitivity analyses were performed.
Statistical analyses
The data were statistically analyzed using GraphPad Prism, Version 5.0 (San Diego, CA, USA) and are presented as the mean ± SD. Differences between two groups were evaluated using a t-test. A P value of less than 0.05 was considered statistically significant. Meta-analyses were performed using the fixed-effects inverse-variance method in RevMan 5.3. Heterogeneity was assessed using the I² statistic and the chi-squared Q test; a heterogeneity P-value < 0.10 or I² > 50% indicated significant heterogeneity. A fixed-effects model was generally used, and a random-effects model was adopted when heterogeneity was significant.
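To make the model-selection rule concrete, here is a minimal sketch of the heterogeneity test described above; the study effect sizes and variances are purely illustrative.

```python
import numpy as np
from scipy import stats

def heterogeneity(effects, variances):
    """Cochran's Q and I^2 for inverse-variance weighted studies."""
    w = 1.0 / np.asarray(variances, dtype=float)
    theta = np.asarray(effects, dtype=float)
    pooled = np.sum(w * theta) / np.sum(w)     # fixed-effects pooled estimate
    q = np.sum(w * (theta - pooled) ** 2)      # Cochran's Q
    df = len(theta) - 1
    p_q = stats.chi2.sf(q, df)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, q, p_q, i2

# Illustrative log-odds ratios and variances from three studies
pooled, q, p_q, i2 = heterogeneity([0.4, 0.6, 0.5], [0.04, 0.09, 0.05])
model = "random-effects" if (p_q < 0.10 or i2 > 50.0) else "fixed-effects"
print(f"pooled={pooled:.3f}, Q={q:.2f}, P={p_q:.3f}, I2={i2:.1f}% -> use {model}")
```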
M2 macrophages had an effect on LLC cells opposite to that of M1 macrophages
It has been reported that cancer-associated macrophages have M2-like characteristics and exert tumor-promoting actions, unlike M1-like macrophages, which show antitumor functions [29]. To explore these properties, we induced the polarization of Raw264.7 macrophages into M1-like and M2-like cells. As expected, compared to M1-like cells, M2-like cells had high levels of surface CD163 expression (Fig. 1A, 1B) with morphological changes indicated by the cell distribution (Fig. 1D, 1E). In neutral red phagocytosis, there was no difference between M1 and M2 cells (Fig. 1C); however, M2-like cells showed a reduction in ROS, indicated by intracellular fluorescence of DCFH-DA (Fig. 1F), and an increase in autophagy, indicated by LC3-B and p62 immunofluorescence (Fig. 1H). When cocultured with M1 cell-conditioned media (M1-CM), LLC cells showed reduced proliferation (Fig. 2B) and increased apoptotic rates (Fig. 2C), neither of which was affected by M2 cell-conditioned media (M2-CM). Consistent with these results, M1-like cells led to LLC cell lysis (Fig. 2D), which was not influenced by M2-like cells, and M1 cell-conditioned media promoted LLC cell immune clearance (Fig. 2D), which was prevented by M2-CM.
M2 macrophages had more lipid rafts and suppressed NK cells
It is well known that high TIMs and low NK cells may be associated with poor survival in cancer patients [30]. To explore how macrophages affect NK cells, we examined differences between M1-like and M2-like macrophages and their influence on NK cells. M2 cells had more lipid rafts, as indicated by membrane cholesterol (Fig. 2I), and showed greater PD-L1 production (Fig. 2G) and JAK2/STAT3 activation (Fig. 2G) compared to M1 cells. M1-CM promoted immune clearance of LLC cells by NK cells (Fig. 2E), which was prevented by M2-CM. Consistent with these results, M2 cells promoted NK cell expression of the PD-1 surface receptor (Fig. 2F), and MβCD and anti-TGF-β1 antibody, but not anti-IL-10 antibody, prevented M2 cells from inducing PD-L1 production and NK cells from expressing PD-1, whereas JAK2/STAT3 signal blockade by WP1066, but not PI3K/AKT signal blockade, decreased M2 cell lipid rafts and PD-L1 production (Fig. 2G, 2H).
Melatonin reverses macrophage tolerance to LPS
Macrophages may become tolerant to LPS, an effect that lasts for a period of time [31]. To explore how macrophages become tolerant to LPS, we observed the effects of melatonin and LPS on macrophage polarization. As expected, repeated LPS treatment led to macrophage tolerance to LPS (Fig. 1A), accompanied by a shift from the M1 to the M2 phenotype, increased lipid rafts and JAK2/STAT3 activation (Fig. 2G, 2H), while no more than three stimulations induced macrophage polarization toward the M1 phenotype (Fig. 1A). Unexpectedly, when administered at a dose (10 µM) that had little effect on the viability of M1 cells (Fig. 2A), melatonin improved the morphological changes indicated by cell distribution (Fig. 1D, 1E) and induced apoptosis in M2 cells (Fig. 2A) with increased ROS (Fig. 1F) and decreased autophagy (Fig. 1H), accompanied by suppression of lipid rafts and JAK2/STAT3 signaling. Importantly, the combination treatment with LPS and melatonin resulted in the polarization of macrophages toward M1 cells (Fig. 1A). Furthermore, we found that both lipid raft depletion by MβCD and JAK2/STAT3 signal blockade by WP1066 attenuated the effect of melatonin on M2 cells (Fig. 2F, 2H).
Melatonin maintains the macrophage M1 phenotype to reverse LPS-mediated carcinogenesis in a urethane-induced lung cancer model
To confirm the roles of macrophages in carcinogenesis, we investigated macrophage phenotypes in a urethane-induced mouse lung cancer model, which is commonly used for studying basic lung tumor biology and finding new tumor intervention strategies. In this study, the mice received melatonin or LPS for twelve weeks. At thirteen weeks, lung cancer nodes were visible to the naked eye (Fig. 3A). The number of lung cancer nodes was 26.2 ± 4.1, regardless of the heterogeneity of tumor histology in the control group (Fig. 3B). As expected, macrophage infiltration in alveolar cavities was positively correlated with lung carcinogenesis in the control group (Fig. 3C, 3D, 3E, 3F), and both melatonin treatment and macrophage depletion induced by LEC prevented lung carcinogenesis (Fig. 3A, 3B). Unexpectedly, LPS alone did not prevent lung carcinogenesis but instead promoted it (Fig. 3A, 3B). Immunophenotyping showed that the infiltrated macrophages were similar to IL-10-treated Raw264.7 macrophages and expressed more surface CD163 in the control group (Fig. 3E, 3F), indicating an M2 phenotype. ELISA showed that the levels of IFN-γ, IL-2, and TNF-α decreased, while the levels of IL-10 and TGF-β1 increased in serum and in alveolar cavities in control mice compared to normal mice (Fig. 3G, 3H), indicating immune tolerance. These changes were promoted by repeated LPS administration and attenuated by melatonin. Importantly, the combination of LPS and melatonin maintained the M1 phenotype (Fig. 3D, 3F) and reversed the lung carcinogenesis-promoting effect of LPS (Fig. 3A, 3B), accompanied by reductions in serum PD-L1 levels (Fig. 3G), spleen NK cell surface PD-1 expression (Fig. 4C), lung tissue immunohistochemical staining of PD-1 and iNOS, immunofluorescence staining of CD31 (Fig. 4A, 4B), and permeability to Evans blue dye (Fig. 4E), as well as an increase in spleen memory NK cells (Fig. 4D), indicating better immune restoration and lung vascular integrity.
Melatonin maintains the macrophage M1 phenotype to promote LPS-induced tumor suppression and to reverse repeated LPS-induced immunosuppression in the H22 liver cancer allograft model
The tumor-suppressing efficacy of LPS (Fig. 5A, 5B) was promoted by melatonin, macrophage depletion and M1 cell injection, which decreased the intratumor permeability of Evans blue dye (Fig. 5C), but was attenuated by M2 cell injection (Fig. 5A, 5B), which increased the intratumor permeability of Evans blue dye. Immunophenotyping showed that the infiltrated macrophages were similar to IL-10-treated Raw264.7 macrophages and expressed more surface CD163 in the control group, indicating an M2 phenotype (Fig. 5D). The results of serum cytokine (Fig. 5G) and spleen NK cell analyses (Fig. 5E, 5F) were similar to those in the urethane-induced lung cancer model, indicating an M2 macrophage-polarizing effect of repeated LPS. The immunosuppressive efficacy of repeated LPS-induced macrophages was further confirmed in an H22 cell rechallenge immune study, in which tumor immune rejection was suppressed by repeated LPS stimulation but promoted by melatonin. Importantly, the combination of LPS and melatonin synergistically stimulated immune rejection of H22 cells (Fig. 6A), accompanied by reductions in peritoneal M2 macrophages (Fig. 6B), serum levels of PD-L1, IL-10 and TGF-β1 (Fig. 6E), and spleen NK cell surface PD-1 expression (Fig. 6C), and by increases in peritoneal M1 macrophages (Fig. 6B), serum levels of IFN-γ, IL-2 and TNF-α (Fig. 6E), and the spleen memory NK cell rate (Fig. 6D), indicating a sustained effect of melatonin on the M1 macrophage phenotype. The immune-rejecting efficacy of melatonin on tumor rechallenge could be abrogated by macrophage depletion, indicating an important role of macrophages in secondary immunity and immune memory.
Melatonin regulates macrophages by targeting a multiprotein network
We queried 590 upregulated genes (logFC ≥ 1.5, P < 0.05) and 994 downregulated genes (logFC ≤ −1.5, P < 0.05) related to M1-associated macrophages and obtained 181 targets. A total of 136 potential targets with a docking score > 6.0 (pKd/pKi) were selected for GO and KEGG analyses (Fig. 8, Fig. 7A). A hypergeometric distribution count > 4 and P < 0.05 were set as threshold criteria to identify the functional gene ontology terms and pathways. GO enrichment analysis indicated that the potential targets of melatonin were primarily associated with the "signal transduction", "innate immune response", "cell proliferation", "protein phosphorylation" and "apoptotic process" terms (Fig. 8). KEGG enrichment analysis revealed that the potential targets of melatonin were significantly enriched in the "Pathways in cancer", "TNF signaling pathway" and "JAK/STAT signaling pathway" terms (Fig. 7C). The PPI network identified 4 key genes (JAK2, STAT3, PIK3CA, and AKT1) as hub genes for melatonin (Fig. 7B). These results were confirmed by Western blot analysis (Fig. 2F, 2G, 2H).
Bacterial infection increases the risk of carcinogenesis and is adversely associated with survival times in cancer patients
Discussion
Cancer may arise from alterations in different physiological processes and is refractory to cure due to unknown etiology and genetic heterogeneity. Building on the body's self-healing ability, immunotherapy has raised great expectations against distinct types of cancer. [32][33][34][35][36][37] Current immunotherapies targeting different cellular checkpoint controllers have encountered either innate or acquired resistance due to the immunosuppressive tumor environment, guiding the direction of cancer immunotherapy development. [38] The objective of cancer immunotherapy is to stimulate long-lasting immunosurveillance that maintains antitumor immunity. [39] Macrophages, as the first-line immune responders of the innate immune system, are responsible for nonresolving inflammation in tumors. Understanding the roles of macrophages in the tumor immune response may help develop new immunotherapeutic strategies and enhance the response rate of immunotherapy. [40] Macrophages can exhibit both pro- and antitumorigenic properties, depending on their phenotype. Recent studies have demonstrated that targeting TIMs can reverse the immunosuppressive tumor microenvironment and stimulate robust tumor-specific immune responses, consistent with the fact that immunosuppressive TIMs are abundant in the tumor microenvironment and are positively correlated with poor prognosis. Therefore, maintaining the antitumorigenic phenotype of macrophages rather than completely depleting TIMs represents a new cancer immunotherapy strategy. [41,42] LPS is a potent trigger of macrophage-mediated inflammation and has been recognized as a potent antitumor agent in animal tumor models. [42] However, its use in human cancer therapy has not been very successful due to LPS-induced tolerance, a state of altered responsiveness in macrophages, which results in poor tumor response and is a major cause of secondary hospital infections. Previously, Boris et al. reported that β-glucan could reverse the epigenetic state of LPS-induced immunological tolerance to reduce overall sepsis mortality. [22,43] In this study, we first showed that the combination of LPS and melatonin could prevent macrophage polarization toward M2-like cells and therefore exert a lasting antitumor efficacy, suggesting a novel effective strategy for reversing LPS tolerance.
Macrophages can be activated by different agents to become M1 or M2 macrophages and thus exert different functions. It is well known that LPS can polarize macrophages toward the M1 phenotype, which exerts proinflammatory and antitumor effects. [44] Consistent with these functions, in an animal model, LPS demonstrated a therapeutic effect on transplanted tumors, inhibiting tumor size and growth. In small clinical trials, LPS also led to cancer remission and disease stabilization in cancer patients. [45,46] However, LPS is responsible for the biological properties of bacterial endotoxins, which are potent inflammagens and cause fever, septic shock, toxic pneumonitis, and respiratory symptoms; nevertheless, a cohort study found that long-term exposure to endotoxin was associated with a reduced risk of lung cancer. [46,47] In goats, LPS treatment triggered an excessive inflammatory response and elevated the body temperature to 40 °C. [48] In clinical trials, even pretreatment with ibuprofen did not avoid LPS-mediated clinical toxicities. [49] In addition, LPS-induced macrophage resistance following activation was another obstacle preventing a durable macrophage-mediated tumor response.
Despite intense investigation of various epigenetic and genetic changes in tolerant macrophages over many years, a unifying mechanism responsible for LPS tolerance remains elusive. [50][51][52] Jörg et al. used genome-wide transcriptional profiling to show the hyporesponsiveness of most LPS target genes in tolerant macrophages. [53] In the present study, Raw264.7 macrophages were polarized toward M1-like cells by a single LPS challenge but toward M2-like cells by repeated LPS challenge, suggesting a critical process of LPS tolerance in macrophages. The meta-analysis showed that bacterial infection increases the risk of carcinogenesis and is adversely associated with cancer survival, suggesting a wide role of LPS tolerance in cancer progression. A mechanistic understanding of the role of LPS in tumor progression will provide unique therapeutic alternatives.
Melatonin is a pleiotropic molecule with numerous physiological and pharmacological actions. [54,55] Melatonin generally plays a check-and-balance role in immunity and inflammation. It acts as an immunostimulator that drives immunocytes into an activated state favoring effective pathogen clearance under normal or immunosuppressive conditions, while it acts as an immunosuppressor that urges immunocytes into an inactivated state suppressing inflammatory reactions under excessively inflammatory conditions. [56,57] Several studies have found that melatonin has exciting potential to override the immunosuppressive tumor environment. [58] Various reports have shown that melatonin promotes 80% survival in lethally LPS-treated mice and significantly reduces LPS-induced mortality in mice and rats by correcting the LPS-induced inflammatory imbalance, with decreased levels of NO and lipid peroxidation. [59,60] In addition, a recent study demonstrated that melatonin could suppress indoleamine 2,3-dioxygenase-1 (IDO1), a key immunomodulatory enzyme associated with cancer immune escape, to overcome tumor-mediated immunosuppression. [61,62] Based on these results, we believe that the combination of LPS and melatonin can maintain macrophage sensitivity to LPS and simultaneously limit excessive pathogenic stimuli to avoid a "macrophage exhaustion" phenotype. Consistent with our hypothesis, in this study, the combination of LPS and melatonin resulted in optimal cancer prevention in the urethane-induced lung carcinogenesis model and the H22 liver cancer allograft model without significant side effects. Furthermore, we found that macrophages repeatedly challenged with LPS had high levels of surface lipid rafts and JAK2/STAT3 activation, which prevented both M1-like polarization and immune responses, whereas melatonin decreased surface lipid rafts and JAK2/STAT3 signaling. Bioinformatics analysis found that the potential targets of melatonin regulation of macrophages were primarily associated with the "inflammatory response", "signal transduction", "cell proliferation", "innate immune response" and "negative regulation of apoptotic process" terms, indicating that melatonin targets multiprotein networks, such as the "Jak-STAT signaling pathway", "Toll-like receptor signaling pathway" and "Chemokine signaling pathway", whereby melatonin maintains M1 macrophage phenotypes to reverse LPS-stimulated immune tolerance. We used a pharmacological blocker to confirm the important role of Jak-STAT signaling in the melatonin-maintained macrophage phenotype: JAK2/STAT3 signal blockade prevented M2-like macrophages from producing PD-L1 and NK cells from expressing PD-1, whereas lipid raft depletion decreased M2-like macrophage JAK2/STAT3 signaling, indicating an association between M2 macrophage functions and the lipid raft-JAK2/STAT3-PD-L1 feedback loop.
Certainly, this is only one of the important feedback loops.
Historically, bacterial therapy with oncolytic agents has been recognized for malignant brain tumors, with extended survival times reported for patients who developed infections at the site of resection of malignant gliomas, and cancer vaccines were assumed to be based on immunotoxins of bacterial origin. [63] It was previously reported that occupational exposure to endotoxin in organic material reduced the risk of lung cancer among workers employed ≤ 35 years but increased the risk among those employed > 50 years, implying a cancer-promoting action of endotoxin tolerance. [64,65] In fact, LPS (also referred to as endotoxin), a cell wall component of gram-negative bacteria, has been used for tumor destruction for many years; for example, Chicoine et al. reported that intratumor injection of LPS could cause the regression of subcutaneously implanted mouse glioblastoma multiforme in animal tumor models. [66] Goto et al. also reported that three of five evaluable human cancers showed a significant response to intradermal LPS administration, suggesting that LPS is a potent antitumor agent. [67] However, one trial of LPS use in human cancer therapy reported a poor tumor response and unbearable side effects. In addition, LPS inhalation can produce both systemic and bronchial inflammatory responses. [68,69] Therefore, for LPS to be used as an antitumor agent, its inflammatory response and tolerance must be balanced. Our study provides a safe and effective strategy for the clinical application of LPS, and this therapeutic strategy is worth investigating.
Conclusions
In summary, melatonin regulation of immune balance plays an important role in the LPS-stimulated macrophage response, and our findings suggest several potential clinical implications. First, M2 macrophage polarization correlates with LPS-stimulated "macrophage exhaustion", supporting therapeutic targeting of TIMs in an immunosuppressive environment. Second, altering the macrophage phenotype is superior to depleting TIMs for reversing LPS tolerance and tumor immunosuppression, indicating the potential of reeducating these cells to convert their protumor functions into antitumor properties. Third, macrophages, as the first-line responders of innate immune surveillance, are needed not only for the initiation of tumor innate immune responses but also for the long-lasting preventive functions of NK cells against tumors, indicating a first priority for the development of efficient immunotherapies. Certainly, whether reeducating TIMs by the combination of melatonin and LPS is the most effective approach for restoring antitumor immune responses needs to be fully evaluated, and the dose-effect and exposure-timing relationships also need to be further investigated. [Figure legends: Data are presented as mean ± SD; the experiments were repeated 3 times, and statistical significance was determined by a t-test; *P < 0.05, **P < 0.01; Mel: melatonin. Serum cytokine levels (n = 6).] | 2020-07-02T10:34:33.532Z | 2020-06-30T00:00:00.000 | {
"year": 2020,
"sha1": "54acc6c8f20cce05a4b82491233f8dff026b0b21",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-37928/v1.pdf?c=1631859401000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "1e36a62a8492c07602759471f2fddc419ec79d02",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
96221269 | pes2o/s2orc | v3-fos-license | Negative electric field dependence of mobility in TPD doped Polystyrene
A total negative field dependence of hole mobility down to low temperature was observed in N,N'-diphenyl-N,N'-bis(3-methylphenyl)-(1,1'-biphenyl)-4,4'-diamine (TPD) doped in Polystyrene. The observed field dependence of mobility is explained on the basis of the low values of energetic and positional disorder present in the sample. The low value of disorder is attributed to a different sample morphology arising from aggregation/crystallization of TPD. Monte Carlo simulations were also performed to understand the influence of aggregates on charge transport in a disordered medium with correlated site energies. The simulations support our experimental observations and their explanation on the basis of low values of the disorder parameters.
Introduction
Films of organic semiconducting materials are widely used in developing various optoelectronic devices such as organic light emitting diodes (OLEDs), organic field effect transistors (OFETs), organic solar cells, etc. [1,2]. Spin-cast films of molecularly doped polymers (MDPs) are most widely used as the active layer in these devices because of the simple fabrication technique. These MDP films are mostly amorphous/disordered and have low carrier mobility. To improve mobility and transport properties, various methods, such as annealing and irradiation, are employed to reduce the disorder in the active layer (i.e., change film morphology) [3][4][5][6]. These methods improve structural order in the organic thin film by creating more ordered regions in the otherwise amorphous film. Often, depending on the processing conditions during film casting, ordered regions also form unintentionally due to molecular aggregation/crystallization of dopants in MDPs or aggregation of polymer chains in the case of conjugated polymers [7,8]. Therefore, in most cases where the active layers are MDPs or pure polymers, the films are not purely disordered or amorphous; rather, they are partially ordered. Charge transport in partially ordered films therefore demands a deeper understanding of the transport mechanism. The assumptions made in hopping-based charge transport models, for example the Gaussian Disorder Model (GDM) [9], developed for a completely isotropic and disordered medium, are probably not sufficient [10]. Moreover, these models of charge transport do not consider the morphology of the active layers in finer detail. The presence of ordered regions can in fact reduce the overall energetic disorder in the material [5,7], which enhances the mobility but substantially influences the charge transport mechanism [7,11,12]. The field and temperature dependence of mobility in such partially ordered samples shows wide variation with sample morphology and is a subject of intense research. In general, the field dependence of the mobility in polymer films follows Poole-Frenkel type behavior. However, in certain cases, at low temperature, either a very weak field dependence or even a negative field dependence of mobility has been reported [13][14][15][16]. Reports also suggest a 1/T as well as a 1/T² dependence of mobility on temperature. In some cases it is hard to distinguish between the two power-law behaviors of the mobility [13,14,17].
In Poole-Frenkel type behavior, the mobility increases with electric field in a log µ vs. E^(1/2) fashion. According to the GDM, the increase of mobility in this fashion in disordered molecular solids is due to the tilting of the density of states by the applied potential, which decreases the energetic barrier seen by the charge carriers in transit [9]. At higher electric field strengths the energetic barrier seen by the carrier is negligibly small, and this results in the saturation of the drift velocity of the carrier. Once the drift velocity saturates, the mobility decreases with further increase of electric field, i.e., a negative field dependence of mobility [18]. At intermediate field strengths, when the temperature is low, the field dependence of mobility remains positive. When the temperature increases, the carrier gains more thermal energy to overcome the energetic barrier. Thus the mobility increases and the field dependence of mobility weakens. At higher temperature the energetic barrier seen by the carrier is negligibly small, which results in a negative field dependence of mobility even at intermediate field strengths. The lower the energetic disorder inside the sample, the lower the temperature at which the negative field dependence can be observed [7a,11,13b]. In principle, if the disorder is very low, one can observe a negative field dependence of mobility at lower temperature. The above reasoning is also justified on the basis of the GDM, which predicts the variation of the mobility as [9]

µ(E, T) = µ₀ exp[−(2σ/3kT)²] exp{C[(σ/kT)² − Σ²] E^(1/2)},  (1)

where µ₀ is a prefactor mobility, σ is the measure of energetic disorder (the width of the Gaussian distribution of site energies), Σ is the measure of positional (geometrical) disorder, k is the Boltzmann constant, T is the temperature in Kelvin and C is an empirical constant. From Equation (1), the term (σ/kT)² − Σ² decides the slope of the field dependence of mobility in the intermediate field regime, where the mobility shows a log µ vs. E^(1/2) dependence. When the temperature increases, this term becomes negative. Also, if the value of σ is low, the term becomes negative at lower temperatures. This suggests that if the overall energetic disorder of the film is very low, or if the charge transport occurs through regions of very low energetic disorder, then the slope of the field dependence of mobility can remain negative down to lower temperatures. Similarly, if (σ/kT)² < Σ², the mobility shows a negative field dependence. This generally happens only when the positional disorder in the sample is remarkably high. It has also been shown using Monte Carlo simulations that high positional disorder can lead to a negative field dependence of mobility at lower electric field strengths [9]. In MDPs the negative field dependence has generally been observed at very high temperatures and also when the concentration of the dopant is very low [3].
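To make the sign argument concrete, the following minimal Python sketch evaluates Equation (1); the values of µ₀, C and Σ are illustrative assumptions, not parameters fitted to this sample. It shows that at a fixed low temperature the slope of log µ vs. E^(1/2) changes sign when σ is reduced from a typical host value toward a very low value.

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant in eV/K

def gdm_mobility(E, T, sigma, Sigma=2.0, mu0=1e-2, C=2.9e-4):
    """Eq. (1): GDM mobility. E in V/cm, T in K, sigma in eV.
    mu0 (cm^2/Vs) and C ((cm/V)^0.5) are illustrative values."""
    s = sigma / (KB * T)
    return mu0 * np.exp(-(2.0 * s / 3.0) ** 2) * np.exp(C * (s ** 2 - Sigma ** 2) * np.sqrt(E))

# Sign of the slope of log(mu) vs. sqrt(E) at 150 K for two disorder values:
for sigma in (0.075, 0.015):  # typical host disorder vs. very low disorder
    s = sigma / (KB * 150.0)
    print(f"sigma={sigma*1e3:.0f} meV: slope sign = {np.sign(s**2 - 2.0**2):+.0f}")
# sigma = 75 meV -> positive slope; sigma = 15 meV -> negative slope at 150 K
```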
In this paper we show a negative field dependence of mobility down to low temperatures in an MDP. We have investigated the field and temperature dependence of mobility in films of the N,N'-diphenyl-N,N'-bis(3-methylphenyl)-(1,1'-biphenyl)-4,4'-diamine (TPD) dye. TPD is a well-known blue-emitting laser dye and also a hole-transporting dye. Films of TPD have been used widely in fabricating various organic devices [2,3]. We used TPD doped in Polystyrene (PS) at 40:60 proportions by weight (TPD:PS) and measured hole mobility using the time-of-flight (TOF) transient photoconductivity technique. A total negative field dependence of mobility was observed down to a low temperature of ∼150K. This was attributed to the low value of disorder in the sample due to the presence of aggregation/crystallization of TPD molecules. The study highlights the drastic change in sample morphology and the resulting reduction in the overall disorder of a molecularly doped polymer upon aggregation/crystallization of the dopant. A Monte Carlo simulation was performed to understand the influence of aggregates on charge transport with correlated site energies. The simulation supports our experimental observations as well as the explanation provided on the basis of the low disorder present in the sample.
Experimental details
Solutions of TPD-doped Polystyrene at 40:60 proportions by weight were made by dissolving the required amounts of TPD and PS in chloroform. Films were spin-cast onto cleaned fluorine-doped tin oxide (FTO) coated glass substrates. Samples were kept in vacuum for 24 hours at ambient temperature to remove the residual solvent. A thin layer of amorphous selenium (a-Se) and an aluminum top electrode were then deposited by thermal evaporation. All coatings were done at a base pressure of 10⁻⁵ mbar. The thickness and capacitance of the samples were ∼4µm and ∼10pF, respectively. The field and temperature dependence of mobility in these samples was determined using the conventional small-signal time-of-flight (TOF) transient photoconductivity technique [3]. Samples were mounted on a homemade cryostat for temperature-dependent studies. A variable DC potential was applied across the device such that no injection occurs from the electrodes into the sample. A 15ns laser pulse from the second harmonic of a Nd:YAG laser (532nm) was used to generate a thin sheet of charge in the a-Se at the a-Se/TPD:PS interface. The laser intensity was adjusted so that the total charge generated was less than 0.05CV, where C is the capacitance of the device and V is the voltage applied across it. The time-resolved photocurrent was acquired using an oscilloscope as a voltage across a load resistance. The transit time, τ, was obtained from the photocurrent signal and the mobility was calculated using µ = L²/(Vτ), where L is the thickness of the sample. The morphology of the samples was characterized using scanning electron microscope (SEM) images.
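A one-line sketch of the mobility extraction, with illustrative numbers of the same order as the device parameters quoted above:

```python
def tof_mobility(thickness_cm: float, voltage_V: float, transit_time_s: float) -> float:
    """Drift mobility from a TOF transient: mu = L^2 / (V * tau), in cm^2/(V s)."""
    return thickness_cm ** 2 / (voltage_V * transit_time_s)

L = 4e-4  # a ~4 um film expressed in cm
print(tof_mobility(L, 40.0, 2e-6))  # illustrative 40 V bias, 2 us transit -> 2e-3 cm^2/(V s)
```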
Details of Monte Carlo Simulation
Monte Carlo simulations were performed to support our experimental observations and explore the influence of aggregates on the field and temperature dependence of mobility. The Monte Carlo simulation is based on the commonly used algorithm reported by Schönherr et al. [19]. A lattice of 70x70x70 sites, along the x, y and z directions, with lattice constant a = 6Å was used for computation. The z direction is taken as the direction of the applied field. The size of the lattice was judged by taking into account the available computational resources. The site energies of the lattice were initially taken randomly from a Gaussian distribution of mean ∼5.1eV and standard deviation σ = 75meV, which gives the energetic disorder parameter σ̂ = σ/kT. The value of σ was chosen close to the experimental value observed in TPD-based MDPs [3]. The site energies were made correlated by taking the energy of a site as an average of the energies of neighboring sites, defined as follows [20]:

E_i = (1/N) Σ_j K(R_i − R_j) ε_j,  (2)

where the variable ε_j denotes the uncorrelated energies on the neighboring sites, N is the normalization factor that results in the required standard deviation, and K is the kernel that provides a degree of correlation among the sites [20]. In our simulation, the kernel K is taken as unity within a sphere of radius a (a is the intersite distance) and zero outside this sphere. The simulation was performed on this energetically disordered lattice with the assumption that hopping among the lattice sites is controlled by the Miller-Abrahams equation [21], in which the jump rate ν_ij of the charge carrier from site i to site j is given by

ν_ij = ν₀ exp(−2γa |ΔR_ij|/a) exp[−(E_j − E_i − eE·ΔR_ij)/kT] for E_j − E_i − eE·ΔR_ij > 0, and ν_ij = ν₀ exp(−2γa |ΔR_ij|/a) otherwise,  (3)

where E is the applied electric field, a is the intersite distance, k is the Boltzmann constant, T is the temperature in Kelvin, ΔR_ij = R_i − R_j is the distance between sites i and j, and 2γa is the wave function overlap parameter which controls the electronic exchange interaction between sites. Throughout the simulation we assume 2γa = 10 [9,19].
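The following minimal Python sketch implements the two ingredients above, Eqs. (2) and (3); the attempt frequency ν₀, the random seed, the rescaling step and the restriction to nearest-neighbor hops are simplifying assumptions rather than details taken from Ref. [19].

```python
import numpy as np

KB = 8.617e-5   # Boltzmann constant, eV/K
A = 6e-8        # lattice constant a = 6 Angstrom, in cm
rng = np.random.default_rng(0)

def correlated_energies(shape, sigma, radius=1):
    """Eq. (2): average uncorrelated Gaussian energies over neighbors
    within a sphere of `radius` sites, then rescale to width sigma."""
    eps = rng.normal(0.0, sigma, shape)
    out = np.zeros_like(eps)
    offsets = [(i, j, k)
               for i in range(-radius, radius + 1)
               for j in range(-radius, radius + 1)
               for k in range(-radius, radius + 1)
               if i * i + j * j + k * k <= radius * radius]
    for off in offsets:
        out += np.roll(eps, off, axis=(0, 1, 2))
    out /= len(offsets)                # kernel K = 1 inside the sphere
    return out * (sigma / out.std())   # normalization N fixes the width

def miller_abrahams(dE_eV, dz_sites, field_V_cm, T, two_gamma_a=10.0, nu0=1.0):
    """Eq. (3): jump rate for a nearest-neighbor hop with energy change dE_eV
    and displacement dz_sites lattice sites along the field direction."""
    delta = dE_eV - field_V_cm * A * dz_sites   # the field lowers downfield hops
    boltz = np.exp(-delta / (KB * T)) if delta > 0 else 1.0
    return nu0 * np.exp(-two_gamma_a) * boltz

E_site = correlated_energies((70, 70, 70), sigma=0.075)
print(E_site.std())  # ~0.075 eV by construction
```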
The film morphology was varied by incorporating cuboids of so-called ordered regions (representing molecular aggregates/microcrystallites in MDPs) of varying size, placed randomly inside the otherwise disordered host lattice. The sizes of the ordered regions were limited to a maximum of 25x25x40 sites along the x, y and z directions. The energetic disorder inside the ordered regions was kept low compared to that of the lattice. This is justified because aggregates are more ordered regions, and hence, to simulate the charge transport, the cuboids must have lower energetic disorder than the host lattice. The site energies inside the ordered regions were also taken randomly from another Gaussian distribution, of standard deviation ∼15meV (we chose a value 5 times smaller than that of the host lattice). Earlier reports have even suggested a tenfold reduction of energetic disorder inside aggregates [22]. Further details of the simulation with varying film morphology are provided in Ref. [11]. The site energies of the ordered regions were also correlated as explained above. The mean energies of the ordered regions were chosen such that their difference from the mean energy of the host lattice is on the order of kT. This is justified by the fact that aggregation of dopants can also lead to a change in the energy gap (a shift of the HOMO and LUMO levels) and hence in the mean energy of the Gaussian distribution. Simulations were performed by varying the concentration of such ordered regions (the percentage of the lattice volume occupied by ordered regions), the temperature and the electric field so as to simulate the field and temperature dependence of mobility.
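A minimal sketch of how such a partially ordered lattice might be constructed is shown below; the block-size limits, the ∼kT mean offset and the 5x-narrower DOS follow the text, while the minimum block size, the random seed and the handling of overlapping cuboids are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 70  # 70 x 70 x 70 lattice

def build_lattice(frac_ordered, sigma_host=0.075, sigma_ord=0.015,
                  mean_host=5.1, d_mean=0.025, max_size=(25, 25, 40)):
    """Host lattice with randomly placed low-disorder cuboids ('aggregates');
    frac_ordered is the target volume fraction of ordered regions."""
    site_E = rng.normal(mean_host, sigma_host, (N, N, N))
    ordered = np.zeros((N, N, N), dtype=bool)
    while ordered.mean() < frac_ordered:
        sx, sy, sz = (int(rng.integers(3, m + 1)) for m in max_size)
        x, y, z = (int(rng.integers(0, N - s + 1)) for s in (sx, sy, sz))
        block = (slice(x, x + sx), slice(y, y + sy), slice(z, z + sz))
        ordered[block] = True
        # ordered region: narrower DOS, mean offset by ~kT at room temperature
        site_E[block] = rng.normal(mean_host - d_mean, sigma_ord, (sx, sy, sz))
    return site_E, ordered

E_site, mask = build_lattice(0.6)
print(f"ordered volume fraction: {mask.mean():.2f}")
```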
Experimental Results
Fig. 1 shows a typical time-of-flight transient signal obtained for TPD:PS (40:60 wt%) at 290K. The transient showed a plateau, suggesting non-dispersive transport. We observed dispersive transport at low electric field strengths and low temperatures. The transit time was always determined from the double logarithmic plot, as shown in the inset of Fig. 1. Fig. 2 shows the field dependence of mobility parametric with temperature. A negative field dependence of mobility was observed down to low temperature and throughout the range of electric fields studied. The mobility becomes almost field independent at 150K. There is no remarkable increase in mobility above room temperature compared to lower temperatures. Data at low electric field strengths and low temperatures were not recorded due to the very low magnitude of the photocurrent transient signal.
At high temperatures, in the low field regime, a slight increase in mobility was observed with decreasing electric field. The temperature dependence of the zero-field mobility µ(E = 0) and of the slope (β) of the intermediate field region of the log µ vs. E^(1/2) plot follows T⁻², as predicted by the GDM (Figure 2) [3,9]. Hence the observed negative field dependence of mobility down to low temperature (∼150K) in TPD:PS can be attributed to the presence of a very low value of energetic disorder in the sample studied. As explained above (see Introduction), a low value of energetic disorder can lead to a negative field dependence of mobility down to low temperature. When the disorder is very low, the barrier offered to the carrier is also very low. Thus the drift velocity of the carrier saturates at lower electric field strengths, which leads to the decrease of mobility with increasing field strength [9,16].
The morphology of TPD:PS in our study was investigated using SEM images, shown in Fig. 3. The SEM images showed that TPD has undergone aggregation. Aggregation of TPD resulted in the formation of crystalline regions a few microns in size, and also chains of such microcrystals spread all over the sample. Earlier reports of crystallization/aggregation of TPD in TPD-doped Polystyrene systems, even at low concentration and without annealing [23], also support our observation. Earlier reports of mobility measurements in TPD-doped polymers, which apparently report only a positive field dependence, assert that their studies were performed on totally amorphous samples with no signs of aggregation of dopants, at least during the time of the experiment [2,3,24]. The presence of more ordered regions can drastically change the entire morphology of the sample and can lead to a reduction in the effective energetic and positional disorder. This is consistent with our observation of low energetic and positional disorder in TPD:PS. Due to aggregation, the charge transport therefore occurs through a combination of ordered and less ordered regions. In such cases the charge transport is highly influenced by the packing and orientation of the ordered regions. If the charge transport occurs mostly through these ordered regions, regions of very low disorder, then a negative field dependence of mobility down to low temperature can be expected. So the presence of these microcrystallites can drastically change the morphology, lowering the energetic and positional disorder, and change the behavior of charge transport in these samples.
Simulation Results
In order to justify and support the above explanation, a Monte Carlo simulation of charge transport in a disordered lattice was also performed. The aim of the simulation was to understand the influence of aggregates on charge transport, in particular the negative field dependence. The simulated field and temperature dependence of mobility for a pure host lattice having a DOS with standard deviation ∼75meV (without any ordered regions) has been discussed in detail in our earlier report [11]. The simulation results were as predicted by the GDM. To study the influence of embedded microcrystals/aggregates of dopants on charge transport, the simulation was performed after incorporating ordered regions inside the host lattice (as explained in the simulation procedure above). Fig. 4 shows the field dependence of mobility, at ∼248K, parametric with the concentration of ordered regions having a DOS with standard deviation ∼15meV and mean energy lower by ∼kT than the mean energy of the host lattice. The magnitude of the mobility in all regimes of electric field increases with increasing concentration of ordered regions, concomitant with a decrease of the slope in the intermediate field regime. The saturation of mobility, and the decrease of mobility with further increase of electric field in the high field regime, was observed when the concentration of ordered regions was higher than 60%.
With increasing concentration of ordered regions, the saturation of mobility occurs at lower electric field strengths. For concentrations of ordered regions less than 60%, the mobility was not completely saturated even at the maximum electric field strength used in the simulation. When the concentration of ordered regions inside the host lattice is very high (∼98%), the field dependence of mobility becomes totally negative even at this low temperature (∼248K). The mobility decreases with increasing electric field right from the lowest electric field strengths used in the simulation.
The observed features in the field dependence of the mobility, after incorporating the ordered regions inside the host lattice, can be explained on the basis of the decrease of the effective energetic disorder of the host lattice when ordered regions are embedded in it. If the energetic disorder is small, then the energetic barrier seen by the carriers due to disorder will be small. This results in higher mobility but a weaker field dependence. This explains the increase in the magnitude of mobility and the decrease in the slope of the semilog µ vs. E^(1/2) curve in the intermediate field regime with increasing concentration of ordered regions in the host lattice. A low value of energetic disorder can also lead to saturation of the drift velocity/mobility at lower electric field strengths. When the concentration of ordered regions inside the host lattice is very high, the overall energetic disorder will be very low. At such high concentrations the carrier travels mostly through ordered regions only. Since the energetic disorder inside the ordered regions is small (only ∼15meV in our study), the mobility saturates at low electric field strengths and even at low temperatures. In such a case one can observe a total negative field dependence of mobility down to low temperatures. Moreover, for a very low value of energetic disorder the term (σ/kT)² − Σ² remains negative down to low temperature. It is also possible to have a total negative field dependence down to low temperature when the charge transport occurs entirely through ordered regions, i.e., when the whole host lattice is completely occupied by the ordered region (Fig. 4, 100%). Fig. 5 shows the field dependence of mobility parametric with temperature when the host lattice is completely occupied by the ordered region. A total negative field dependence of mobility down to low temperature is observed, but the mobility in this case decreases with increasing temperature. A higher magnitude of mobility was observed at lower temperature (∼150K) and a lower magnitude at higher temperature (∼390K). This kind of behavior is possible when the overall energetic disorder is very low. When the energetic disorder is very low and the thermal energy is comparable to it, the thermal energy dominates, which forces the carriers to move along longer paths. Effectively, the transit time increases and the mobility decreases with increasing temperature [16].
From the simulation it is inferred that charge transport in TPD:PS occurs mostly, but not completely, through aggregates. Hence the effective disorder seen by the carrier is low, which results in a total negative field dependence of mobility. Experimentally we observed that the mobility decreases with decreasing temperature, which rules out the possibility that charge transport occurs completely through aggregates. The influence of the small disordered regions present in the trajectory of each carrier (the major part of the trajectory being occupied by ordered regions) becomes prominent only at low temperature. When the thermal energy of the carrier is low, the carrier feels the effect of disorder, and that results in a positive field dependence of mobility at low temperature. Fig. 6(a) shows the field dependence of mobility parametric with temperature when 98% of the host lattice is occupied by ordered regions having a DOS with standard deviation ∼15meV and mean energy lower by ∼kT than the mean energy of the host lattice. The mobility decreases with decreasing temperature, and a positive field dependence of mobility was observed at lower temperature (200K). A negative field dependence of mobility was observed down to 248K. At temperatures around 300K, there is no variation of mobility with increasing temperature for all electric field strengths used in the simulation. This observation is similar to what was observed experimentally in TPD:PS at higher temperatures. The temperature dependence of the zero-field mobility and of the slope of the intermediate field region of the log µ vs. E^(1/2) plot followed a 1/T² dependence, as predicted by the GDM (data not shown).
Conclusion
Time-of-flight mobility measurements were carried out in TPD:PS. A total negative field dependence of mobility down to low temperature was observed. This was explained on the basis of the low value of energetic disorder in TPD:PS. The low value of energetic disorder was attributed to aggregation of TPD, which results in the formation of crystalline regions a few microns in size and chains of such crystals. The presence of aggregates drastically changes the morphology of the sample and reduces its overall disorder. The Monte Carlo simulation study clearly showed the influence of aggregates on charge transport, which supports our experimental observations. The simulation suggests that if charge transport occurs mostly through aggregates (crystalline regions of very low energetic disorder), the mobility saturates at low electric field strengths, which can lead to a total negative field dependence of mobility down to low temperature. From the simulation studies it is inferred that the charge transport in TPD:PS in the present study does not occur totally through aggregates but through a combination of aggregates and disordered medium. Thus our experimental and simulation studies showed the influence of sample morphology on the field and temperature dependence of mobility. The study also highlights the need to consider film morphology in the various models used for analyzing charge transport in polymer films. Figure 4. Simulated field dependence of mobility for the host lattice (σ = 75meV), at 248K, embedded with ordered regions of various concentrations having a DOS of standard deviation ∼15meV. Figure 5. Field dependence of mobility of a host lattice with σ ∼15meV, parametric with temperature; the case is similar when the host lattice is completely occupied by ordered regions. | 2008-07-24T10:20:57.000Z | 2008-07-24T00:00:00.000 | {
"year": 2008,
"sha1": "a9cd29e3acbedefae64e770a4a6afcfd23f2e666",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0807.3843",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a9cd29e3acbedefae64e770a4a6afcfd23f2e666",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
235284124 | pes2o/s2orc | v3-fos-license | Research Progress on the Law of Nitrogen Transfer and Transformation in Sediment
The eutrophication of closed and semi-closed landscape water bodies such as lakes and rivers is one of the typical environmental problems in cities. The bottom sludge formed under long-term eutrophic water is prone to re-releasing nitrogen and phosphorus and causing secondary pollution. While effectively intercepting and controlling external pollution, attention should be paid to the secondary release of internal pollutants. Analyzing the nitrogen exchange of eutrophic sediments in the sediment-overlying water-plant system and the release of internal nitrogen from sediments is conducive to the wider application and promotion of plant ecological restoration technology in sewage treatment projects.
Introduction
The continuous eutrophication of enclosed and semi-enclosed landscape water bodies in urban wetlands such as lakes and rivers is one of the typical ecological environmental problems in current cities. It directly affects citizens' quality of life, physical health and living environment. The main reason for the eutrophication of closed landscape water is that pollutants such as nitrogen and phosphorus in the water exceed standards. When such enclosed and semi-enclosed landscape water bodies are seriously polluted, the nitrogen and phosphorus nutrients in the water can accumulate in the bottom mud through physical processes such as sedimentation or particle adsorption, and may again release large amounts of nitrogen and phosphorus nutrients to the overlying water through resuspension [1][2][3]. Therefore, even if the external pollution sources of a black and odorous eutrophic water body are effectively controlled in time, the concentrations of various pollutants in the water can still remain extremely high for a long time. An important reason is the bottom mud. Bottom mud is an important storage reservoir for pollutants in lakes, rivers and ponds. It is also a place for decomposition and digestion, material circulation, and energy flow and exchange in the water ecosystem. Large amounts of nitrogen, phosphorus, nutrients, organic matter, etc. are deposited in the silt at the bottom of the river through various physical processes. The decomposition of these substances consumes a large amount of dissolved oxygen, leaving the water body and the bottom mud in a hypoxic or anaerobic environment. As a result, large amounts of toxic, harmful and irritating gases are produced, including ammonia nitrogen, hydrogen sulfide, methane and so on. Especially in a long-term anaerobic environment, the oxygen demand of the deposited sludge is overloaded, forming secondary endogenous pollution that is released to the water body a second time. At the same time, these regenerated secondary pollutants enhance the resorption of exogenous nitrogen, phosphorus and heavy metal elements, forming a vicious circle [4][5].
Research progress at home and abroad
At present, many experts and scholars have shown through numerous studies that the migration and transformation of nitrogen and phosphorus between the sediment and the overlying water is affected by many factors, mainly environmental conditions such as water temperature, pH, and dissolved oxygen [6][7][8] and physical conditions such as water fluctuations. Significant studies on the distribution of phosphorus in shallow lakes in Denmark [9], the loading of phosphorus in sediments of Lake Pontchartrain in the United States [10], the release of nitrogen and phosphorus in China's Yinfu Reservoir [11], and the nitrogen and phosphorus fluxes between the water body and sediment in Dianchi Lake [12] have all documented the re-release of excess pollutants from sediments, causing secondary pollution. Some aquatic plants themselves have a large demand for nutrients. Nitrogen and phosphorus can be absorbed and utilized through roots, stems and leaves. In particular, the well-developed root systems of plants can take root in the bottom mud, effectively absorbing nutrients and preventing the bottom mud from floating up. This helps control the release of nutrients from the bottom mud and reduces the probability of secondary pollution in the water body. This control technology has been widely used in engineering practice [13][14][15]. The absorption and enrichment of nitrogen and phosphorus nutrients by aquatic plants is an effective way to repair, regulate and control the eutrophication of water bodies [16][17][18][19]. Different plant species differ greatly in their nitrogen and phosphorus removal effects; the removal rates of nitrogen and phosphorus among studied plant species range from 20% to 98%. Among water body restoration methods, plant ecological restoration is an efficient, simple, economical and sustainable approach. It has been widely used in the restoration and treatment of various eutrophic water bodies [20][21]. Some scholars have studied the restoration effects of the aquatic plant Eichhornia crassipes on water bodies with different levels of eutrophication and found that this plant can efficiently remove excess N and P from water [17]. The research of Liu Pan et al. [22] showed that the floating water plant Hydrangea japonicus can significantly reduce the concentrations of nitrogen and phosphorus in eutrophic water by absorbing and utilizing them. Wu Juan et al. [23] found that the submerged plant Hydrilla verticillata can absorb large amounts of nitrogen and phosphorus through rapid growth, effectively reducing the nitrogen and phosphorus content in water bodies and sediments. Zhu Huabing et al. [24] compared the effects of Eichhornia crassipes and cattails on the removal of nitrogen and phosphorus in eutrophic water bodies and found that nutrients such as nitrogen and phosphorus in the water declined rapidly.
Restoring the cleanliness of eutrophic water bodies mainly requires reducing the concentrations of the main nutrients, nitrogen and phosphorus, that cause eutrophication. The effect of different aquatic plants on the absorption of nitrogen and phosphorus in water bodies and sediments has become an important indicator for judging the purification ability of aquatic plants. Most existing research focuses on the efficient purification of eutrophic water bodies by different aquatic plant species, while there are relatively few studies on the purification of nitrogen in sediments formed by water bodies with different eutrophication levels and on the laws of nitrogen migration and transformation. Nitrogen and phosphorus are the most important limiting factors for the eutrophication of water bodies. Research on the release boundary and amount of nitrogen in sediments formed by water bodies with different eutrophication levels is very necessary for preventing sediments from causing secondary pollution in practical engineering applications. Research on the nitrogen release laws of the bottom mud of closed landscape water can be applied to the control of internal pollution of landscape water and provide favorable conditions for ecological restoration to improve the self-purification ability of water bodies.
The transfer and transformation process of nitrogen from bottom mud to overlying water
The main form of nitrogen exchange between sediment and overlying water is ammonia nitrogen, and the main pathway for this exchange is the adsorption and desorption of ammonia nitrogen [25]. Studies have shown that changes in the concentration of ammonia nitrogen in the overlying water can effectively reflect the adsorption-desorption process of ammonia nitrogen in the sediment [26]. When the concentration of ammonia nitrogen in the overlying water is low or high, the adsorption-desorption dynamics of ammonia nitrogen in the bottom sludge show opposite behavior, but after a period of fluctuation both tend toward a stable state. At the same time, the concentration of ammonia nitrogen in the bottom sludge is inversely proportional to the adsorption efficiency of the bottom sludge, whereas the concentration of ammonia nitrogen in the overlying water is directly proportional to it. It is precisely because of this dynamic adsorption-desorption exchange between the sediment and the overlying water that, after controlling the entry of exogenous pollutants, the stability of the sediment or water body can be altered to eliminate endogenous pollution and improve water quality.
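The convergence to a stable state described above can be pictured with a simple two-compartment, first-order exchange model; the rate constants below are purely illustrative and are not taken from the cited studies.

```python
def simulate_exchange(c_water, c_sed, k_ads=0.3, k_des=0.1, days=60):
    """First-order ammonia-nitrogen exchange between overlying water and
    sediment; rate constants in 1/day (illustrative), explicit 1-day steps."""
    for _ in range(days):
        flux = k_ads * c_water - k_des * c_sed  # net adsorption when positive
        c_water -= flux
        c_sed += flux
    return c_water, c_sed

# High overlying-water concentration: sediment acts as a "sink" (net adsorption)
print(simulate_exchange(5.0, 1.0))
# Low overlying-water concentration: sediment acts as a "source" (net desorption)
print(simulate_exchange(0.2, 1.0))
```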
The environmental conditions of the overlying water promote the release of nitrogen from the sediment
The nutrients in the bottom mud behave as "sinks" or "sources" in closed and semi-enclosed landscape water bodies such as ponds, lakes, and reservoirs, which is mainly determined by the physical conditions and chemical composition of the overlying water. When the water body is seriously polluted, large amounts of excess nitrogen and phosphorus nutrients accumulate in the bottom mud through physical processes including sedimentation and particle adsorption, and under certain conditions these nutrients are released back into the water body, causing secondary pollution [13,15]. The main factors that affect the release of nitrogen and phosphorus nutrients from the bottom mud to the overlying water include pH, temperature, dissolved oxygen, and water flow.
Alternate dry and wet conditions promote nitrogen release from sediments
When the bottom sludge is in a "wet" state, mineralization occurs first: a large population of ammonifying microorganisms produces a large amount of ammonia nitrogen through their own metabolism. When the accumulated ammonia reaches its peak, the microbial enzymatic reactions become inhibited, resulting in a sharp drop in the number of ammonifying microorganisms; other types of microorganisms then multiply using the ammonia nitrogen in the bottom sludge as an energy source, so that the amount of ammonia nitrogen rapidly decreases. Conversely, when the bottom mud is in a "dry" state, oxygen accumulates on its surface, which is conducive to the growth and metabolism of nitrogen-fixing microorganisms; at the same time, the evaporation of water drives soluble nitrogen toward the surface, so that the total nitrogen content increases. Studies have shown that adding sand particles to bottom mud subjected to alternating wet and dry conditions makes the sediment sandy, which helps reduce the activated nitrogen content in the bottom mud [27]. Yang Bin et al. [26] found that, in the water layer overlying the sediment, the proportions of the nitrogen species in descending order are nitrate nitrogen, ammonia nitrogen, and nitrite nitrogen, whereas the nitrogen in the sediment is mainly organic nitrogen, with the inorganic species in the order ammonia nitrogen, nitrate nitrogen, and nitrite nitrogen.
Methods of controlling the release of sediment
The re-release of nutrient elements from bottom mud has become an important source of water pollution. Therefore, in the process of water pollution control, not only must external source pollution be controlled, but the secondary release from internal sources also requires more research and attention. Effectively controlling the secondary release of nutrient elements from the bottom sludge is of great significance to the treatment of water pollution. At present, the main methods for controlling release from sediment are in-situ remediation and ex-situ remediation.
Study on plant species that absorb nitrogen in bottom mud
In the process of sediment restoration, phytoremediation is a long-term, environmentally friendly and sustainable restoration method, especially for sediment pollution in landscape water bodies. Phytoremediation can not only continuously and effectively remove excess nitrogen and phosphorus from sediments and water bodies, but also play an aesthetic role in landscape design. A large number of studies have identified aquatic plants that can efficiently absorb nitrogen and phosphorus, including submerged, floating and emergent plants. The main mechanisms of phytoremediation technology include direct absorption by the plant itself, the release of special secretions and enzymes by the plant root system, and the joint action of the plant and the microbes in its root zone.
Outlook
Excess nitrogen and phosphorus are the key substances that cause eutrophication of water bodies and the resulting sediment pollution. The nitrogen and phosphorus accumulated in the bottom sludge can be released again, causing serious secondary pollution of the water body. Therefore, research on the transfer and transformation of nitrogen and phosphorus in sediments, and on controlling their release, is very important. Future work should further explore the underlying principles of nitrogen and phosphorus migration and transformation, and develop new technologies, methods and materials to control their release from sediments, providing more scientific, effective and low-cost approaches to water pollution control. | 2021-06-03T00:37:16.905Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "decc587e7fac22ba0f8606fb50eadf6f9f3c8aa4",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/781/5/052026",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "decc587e7fac22ba0f8606fb50eadf6f9f3c8aa4",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
3079842 | pes2o/s2orc | v3-fos-license | Correlation between Corneal Topographic Indices and Higher-Order Aberrations in Keratoconus.
Purpose To compare corneal higher-order aberrations (HOAs) between normal and keratoconic eyes, and to investigate the association between elevation-based corneal topographic indices and corneal wavefront data in the latter group. Methods In this cross-sectional comparative study, 77 normal right eyes of 77 control subjects and 66 eyes of 36 keratoconic patients were included. In each eye, elevation-based corneal topographic indices including mean keratometry readings, best-fit sphere, maximum elevation, and 3-mm and 5-mm zone irregularity indices were measured using Orbscan II. The Galilei Scheimpflug analyzer was used to measure HOAs of the corneal surface. The independent Student's t-test was used to compare HOAs between the study groups. Spearman correlation was used to investigate possible associations between Orbscan and Galilei data in the keratoconus group. Results All Zernike coefficients up to the 4th order except for horizontal trefoil, and vertical and horizontal tetrafoil were significantly greater in the keratoconus group than normal eyes (P<0.05). Root mean square (RMS) of HOAs up to the 6th order and total HOAs were significantly higher in the keratoconus group (P<0.05). In the keratoconus group, the strongest association was observed between vertical coma (r=-0.71, P<0.01) and total RMS of HOAs (r=0.94, P<0.01) with irregularity in the 3-mm zone. Spherical and vertical coma aberrations were significantly correlated with mean keratometry (P<0.05 for both comparisons). Conclusion Centrally located corneal HOAs are significantly greater in keratoconic eyes than normal controls. Anterior and inferior displacement of the cornea causes the majority of higher-order aberrations observed in keratoconus.
INTRODUCTION
Keratoconus is a non-inflammatory, usually progressive disease of unknown etiology. It leads to thinning and bulging of the cornea and consequently produces irregular astigmatism and myopia. Detection of keratoconus among refractive surgery candidates is important because the prevalence of keratoconus is higher in such eyes as compared to the general population 1,2 and operating on an undetected keratoconic cornea is a major cause of post-refractive surgery ectasia. 3 The diagnosis of keratoconus is based on biomicroscopic findings and additional tests such as pachymetry, keratometry and corneal topography. 4 Placido disk-based videokeratography and central corneal thickness measurement are widely used methods in the diagnosis of keratoconus. 5-7 However, Placido disk-based corneal topography only examines the central 7-8 mm of the anterior corneal surface and its results are sensitive to alterations in reference point or viewing angle. 8,9 Additionally, the accuracy of contact ultrasonic pachymetry is operator dependent. 10 Other available diagnostic modalities, in particular corneal elevation-based topography and wavefront analysis, have become more widely used in an effort to supplement videokeratographic data. The Orbscan II (Bausch & Lomb, Rochester, NY, USA) is a 3-dimensional scanning-slit topography system employed for analysis of the anterior and posterior corneal surfaces as well as pachymetry. It utilizes a scanning-slit system to measure 18,000 data points and also uses a Placido-based system to make necessary adjustments to yield topographic data.
The Galilei dual Scheimpflug system (Ziemer Ophthalmic Systems AG, Zurich, Switzerland) is a noninvasive diagnostic system designed for evaluating the anterior segment by analyzing corneal shape and thickness, pupil size, and anterior chamber size, volume and angle. It combines both technologies: Placido imaging, which provides curvature data, and Scheimpflug imaging, which is optimal for precise elevation data. The Galilei system can extrapolate corneal wavefront data from analysis of the anterior and posterior corneal surfaces using mathematical analysis of height data. 11 Some investigators have used higher-order aberrations (HOAs) to distinguish early keratoconus from normal corneas, and others have used them to grade the severity of keratoconus. 12,13 The purpose of this study was to compare corneal HOAs, measured with the Galilei Scheimpflug analyzer, between normal and keratoconic corneas, and to evaluate the correlation between these data and topographic findings obtained with the Orbscan II in keratoconic corneas.
METHODS
In this prospective comparative study, 36 consecutive patients (17 male subjects) with keratoconus and 77 normal age-matched controls (34 male subjects) who were scheduled for refractive surgery were included. Only right eyes of the normal group were enrolled in the study.
The diagnosis of keratoconus was based on slit lamp findings (stromal thinning, conical protrusion, Fleischer ring and Vogt striae), and associated Placido-based topographic patterns described by Rabinowitz. 5 Eyes with previous acute corneal hydrops or history of ocular surgery were excluded. In the normal group, the only ocular abnormality was refractive errors and subjects with any ocular pathology such as dry eye, keratoconus, glaucoma, retinal disease, ocular surgery, or systemic conditions such as diabetes mellitus and connective tissue disorders were excluded. All participants were asked to stop wearing soft contact lenses for at least two weeks and rigid gas-permeable contact lenses for at least four weeks before obtaining measurements.
The Ethics Committee of the Ophthalmic Research Center approved the study and written informed consent was obtained from all participants after explaining the purpose of the study.
A complete ocular examination including slit lamp biomicroscopy, cycloplegic refraction, best spectacle-corrected visual acuity (BSCVA) using a Snellen chart, intraocular pressure measurement and dilated fundus examination was performed.
Data including mean keratometry readings, anterior and posterior elevation best-fit sphere, maximum anterior and posterior elevation, and 3-mm and 5-mm zone irregularity indices were obtained from Orbscan II measurements.
For measurements with the Galilei dual Scheimpflug analyzer, appropriate alignment of the scan center with the corneal apex was checked using an initial Scheimpflug image formed on the monitor. The measurement results were checked under a quality-specification window; only measurements with an "OK" reading were included.
HOAs were calculated for a corneal diameter of 6 mm up to the eighth order in terms of Zernike polynomials. Zernike coefficients were transformed to the standard form as recommended by the Optical Society of America. 14 Since both eyes were included in the keratoconus group, enantiomorphism was neutralized by inverting the sign of mirror-symmetric coefficients of left eyes as follows: 15,16 for all Z_n^m, if n is even and m < 0, then Z_n^m → −(Z_n^m); if n is odd and m > 0, then Z_n^m → −(Z_n^m). All measurements were obtained by an experienced operator using the same machines and procedures.
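To make the sign convention concrete, the following is a minimal Python sketch of the mirror-symmetry correction; the dictionary layout keyed by (n, m) and the sample coefficient values are illustrative assumptions, not the study's data format.

```python
# Minimal sketch of the enantiomorphism correction described above.
# Assumes (hypothetically) that left-eye Zernike coefficients are stored
# in a dict keyed by (n, m) radial order / angular frequency pairs.

def mirror_left_eye(coeffs):
    """Flip the signs of mirror-antisymmetric terms so left eyes can be
    pooled with right eyes (OSA double-index convention)."""
    out = {}
    for (n, m), z in coeffs.items():
        if (n % 2 == 0 and m < 0) or (n % 2 == 1 and m > 0):
            out[(n, m)] = -z  # sign inverted per the rule above
        else:
            out[(n, m)] = z
    return out

left_eye = {(3, -1): 0.42, (3, 1): -0.10, (4, 0): 0.15, (2, -2): 0.30}
print(mirror_left_eye(left_eye))
# -> {(3, -1): 0.42, (3, 1): 0.10, (4, 0): 0.15, (2, -2): -0.30}
```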
Statistical Analysis
Data including age, spherical equivalent (SE) refraction, keratometry readings, Zernike coefficients, and root mean square (RMS) were expressed as mean ± standard deviation using SPSS 17.0 (SPSS Inc., Chicago, IL, USA). The independent Student's t-test was used to compare HOAs between the study groups. In the keratoconus group, Spearman's correlation was used to investigate associations between Orbscan-derived data including keratometry readings, anterior and posterior elevation best-fit sphere, maximum anterior and posterior elevation, and 3- and 5-mm zone irregularity indices against HOAs as determined by the Galilei. P values less than 0.05 were considered statistically significant.

RESULTS

All Zernike coefficients up to the 4th order except for horizontal trefoil, and vertical and horizontal tetrafoil, were significantly greater in the keratoconus group as compared to the normal group. The RMS of HOAs up to the 6th order and total HOAs were also significantly higher in the keratoconus group (P<0.05). However, there was no statistically significant difference in terms of 7th or 8th order RMS between the study groups (Table 2). Spearman correlation indicated the strongest association to be between total RMS of HOAs and irregularity at the 3-mm zone. Among individual Zernike coefficients, the association between vertical coma and irregularity of the 3-mm zone was strongest (Table 3). Additionally, spherical and vertical coma aberrations were significantly correlated with mean keratometry readings (P<0.05). However, the correlation of spherical aberration was stronger (r=0.67, P<0.01). No aberration was significantly correlated with the anterior elevation best-fit sphere and only horizontal coma demonstrated a significant but weak association with the posterior elevation best-fit sphere (Table 3).
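For illustration, the sketch below runs the same two tests used for these comparisons (an independent Student's t-test and a Spearman correlation) with SciPy; all arrays are hypothetical placeholders rather than the study's measurements.

```python
# Minimal sketch of the statistical comparisons described above,
# using hypothetical data arrays; SciPy is assumed to be available.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hoa_normal = rng.normal(0.35, 0.10, 77)      # e.g., total RMS HOA, normal eyes
hoa_kc = rng.normal(1.80, 0.60, 66)          # keratoconic eyes
irregularity_3mm = rng.normal(3.0, 1.0, 66)  # Orbscan 3-mm zone index

# independent Student's t-test between the study groups
t, p = stats.ttest_ind(hoa_kc, hoa_normal)
print(f"t = {t:.2f}, p = {p:.3g}")

# Spearman correlation between a topographic index and an HOA term
rho, p_rho = stats.spearmanr(irregularity_3mm, hoa_kc)
print(f"rho = {rho:.2f}, p = {p_rho:.3g}")
```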
DISCUSSION
This study reports the characteristics of individual Zernike coefficients in keratoconic eyes in comparison to normal controls. To the best of our knowledge, it is the first to investigate the correlation between height data and corneal aberrations. The Orbscan II quantitatively depicts abnormalities of both the anterior and posterior corneal surfaces, while the corneal wavefront determines the extent to which an abnormal cornea can distort the retinal image and also predicts potential visual performance.
Although height map data could have been obtained with the Galilei Scheimpflug analyzer and this instrument has previously been found to have high repeatability for corneal wavefront measurement in normal unoperated eyes, 11,17 it utilizes a mathematical method to convert height data to corneal aberrations. Therefore, since these two variables are mathematically correlated, statistical analysis for association does not make sense. For this reason, we utilized independent data obtained by the Orbscan machine.
The results of our study demonstrated that all aberrations except for horizontal trefoil, and vertical and horizontal tetrafoil, were significantly higher in the keratoconus group than in normal eyes. Interestingly, these findings were supported by the results of Spearman correlation, which demonstrated a significant association between spherical, coma and trefoil aberrations and topographic indices reflecting the severity of keratoconus, such as mean keratometry and 3- and 5-mm zone irregularity.
Additionally, the RMS of HOAs up to the sixth order was significantly higher in the keratoconus group. However, the RMS of the remaining orders (7th and 8th) was comparable between the study groups. It is possible that these aberrations are too coarse to represent subtle changes in irregular corneas. Previous studies have reported that aberrations (especially coma-like aberrations) in keratoconic eyes are significantly different from those in normal eyes. 13,18-21 Maeda et al 18 compared corneal aberrations in normal eyes and eyes with forme fruste or mild keratoconus and found that coma-like and spherical-like aberrations were significantly higher in the latter group. Additionally, they reported the dominance of coma-like aberrations over spherical-like aberrations in the keratoconus group. This observation indicates that the earliest manifestation of corneal asymmetry in keratoconus is vertical. 22 In the current study, which included more severe cases, both vertical and horizontal coma were significantly greater in the keratoconus group than in controls. However, vertical coma had a stronger association with irregularity in the 3-mm and 5-mm zones in keratoconic eyes, as compared to horizontal coma. In keratoconus, the cone is displaced inferiorly in most cases, resulting in an increase in topographic indices such as the inferior-superior (I-S) index or surface asymmetry index (SAI). 23 This explains why vertical aberrations were more prominent and why the Zernike coefficient of horizontal trefoil did not differ between the study groups.
The present study revealed that spherical aberration in keratoconic eyes had a stronger association with mean keratometry readings than did coma. In addition to inferior displacement, a keratoconic cornea also displaces anteriorly resulting in a hyperprolate profile, leading to an increase in spherical aberration.
The results of our study also demonstrated that both horizontal and vertical tetrafoil aberrations in keratoconic eyes were not significantly higher than in controls. Furthermore, except for a weak association between vertical tetrafoil and irregularity in the 3-mm zone, neither vertical nor horizontal tetrafoil was significantly correlated with any topographic parameter determined by Orbscan in the keratoconus group. These observations indicate that the peripheral portion of the corneal wavefront may be less affected in keratoconic eyes.
In conclusion, centrally located corneal higher-order aberrations are significantly greater in keratoconic eyes as compared to controls. However, the peripheral portion of the corneal wavefront remains relatively spared in keratoconus. Characteristics of HOAs may be predicted from abnormalities in corneal shape as determined by height data. | 2018-04-03T04:59:15.193Z | 2013-04-01T00:00:00.000 | {
"year": 2013,
"sha1": "daeaf3ad61c624ce3968a1e9d39e4326ed56cfe3",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "daeaf3ad61c624ce3968a1e9d39e4326ed56cfe3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56304027 | pes2o/s2orc | v3-fos-license | Using machine learning to identify factors that govern amorphization of irradiated pyrochlores
Structure-property relationships are a key materials science concept that enables the design of new materials. In the case of materials for application in radiation environments, correlating radiation tolerance with fundamental structural features of a material enables materials discovery. Here, we use a machine learning model to examine the factors that govern amorphization resistance in the complex oxide pyrochlore ($A_2B_2$O$_7$). We examine the fidelity of predictions based on cation radii and electronegativities, the oxygen positional parameter, and the energetics of disordering and amorphizing the material. No one factor alone adequately predicts amorphization resistance. We find that, when multiple families of pyrochlores (with different B cations) are considered, radii and electronegativities provide the best prediction but when the machine learning model is restricted to only the $B$=Ti pyrochlores, the energetics of disordering and amorphization are optimal. This work provides new insight into the factors that govern amorphization susceptibility and highlights the ability of machine learning approaches to generate that insight.
Designing materials for advanced or next-generation applications requires understanding how properties are related to structure, that is, identifying so-called structure-property relationships. Having such relationships guides the search for new materials with enhanced performance by identifying regions of structure and composition space that exhibit superior properties. For nuclear energy materials, a key performance metric is tolerance against radiation damage. Pyrochlores (A 2 B 2 O 7 ) have been extensively studied for their potential application as nuclear waste forms [1-10] and have been incorporated into some compositions of the SYNROC waste form [11]. In this context, significant effort has been directed toward understanding how the chemistry of the pyrochlore - the nature of the A and B cations - dictates the amorphization susceptibility of the compound. In particular, several experimental efforts [12-16] have been focused on determining the critical amorphization temperature, T C , the temperature at which the material recovery rate is equal to or faster than the rate of damage, as summarized in Fig. 1. Typically, these experiments were performed in an electron microscope equipped with an ion source, such that samples were simultaneously irradiated with electrons and 1 MeV Kr ions. Though the value of T C is expected to vary depending on ion irradiation conditions [17], 1 MeV Kr ion irradiation results should be comparable.
As a consequence, a number of "features" - or basic structural and energetic properties - have been identified that provide insight into the radiation response of pyrochlores. These include the radii and electronegativities of the A and B cations [8,13]; the x parameter, which describes how the oxygen sublattice deviates from ideality [4,8,13]; the enthalpy of formation of the pyrochlore [6,18]; and the energy to disorder the pyrochlore to a disordered fluorite structure [1,19]. Further, there has been discussion on the extent of the disordered phase field in the phase diagram and its relationship to amorphization resistance [7]. Most of these features have been only heuristically correlated with amorphization resistance or only applied to a subset of pyrochlore chemistries. We are only aware of one attempt to quantify the relationship between these types of features and a prediction of T C . In that work, Lumpkin and co-workers established a relationship between T C and lattice constants, electronegativities, disordering energetics, and the oxygen positional parameter [8]. While their model provided a significant advance in describing the structure-property relationships of pyrochlores, here we demonstrate how, through the use of machine learning, greater insight can be extracted. In particular, while they considered the disordering energy as one of their features, they used data from atomistic potentials that do not adequately describe all of the chemistries in the experiments. Further, they did not have access to data describing the amorphous state of these compounds. Finally, modern machine learning methods, applied to materials science, offer new avenues to examine the structure-property relationships in these types of systems.

Figure 1. Experimentally measured values of T C , ordered as a function of A cation radius, for several different pyrochlores.
Here, we use machine learning methods to demonstrate how a set of features, for a range of pyrochlore chemistries, can be used to predict T C . We use both structural parameters such as cation radius and electronegativity supplemented by energetics calculated with density functional theory (DFT) to build a database of features as a function of pyrochlore chemistry.
We analyze this database, building machine learning models that predict T C as a function of pyrochlore chemistry based on a systematic collection of features. We consider pyrochlore chemistries for which experimental data exists for T C , which includes pyrochlores where B=Ti, Zr, Hf, and Sn. We find that, when considering the full range of chemistries, the two features that best predict T C are the ratio of the radii and the difference in electronegativities of the A and B cations. However, to predict more subtle dependencies of T C with pyrochlore chemistry characteristic of a given B chemistry, the energies to disorder and amorphize the compound provide a better prediction of T C .
As compared to Ti, Hf, or Zr, Sn is a chemically very different element. Like Ti, it is multivalent, but unlike Ti, it has a much stronger propensity to adopt a charge state other than 4+. Further, as discussed below, it has a significantly higher electronegativity than the other B cations, producing a more covalent bond. This implies that Sn pyrochlores should be less amorphization resistant [20]. However, experiments have shown Sn pyrochlores to be more amorphization resistant than other pyrochlores [5]. This all suggests that Sn pyrochlores are electronically much more complex than the other pyrochlore families, which is one reason that we use DFT to determine the energetics of disordering and amorphization, as DFT can account for the varied valence of the Sn cations. Further, the inclusion of Sn pyrochlores in this analysis, precisely because their behavior is counter-intuitive, provides a more stringent test of the methodology.
DFT Energetics
Figure 2a provides the energetics of disorder and amorphization of a given pyrochlore, as found using DFT, as a function of the chemistry of the pyrochlore. These are ordered by A cation radius. Focusing first on the energetics of disorder, there is a general trend that as the A cation radius increases, the energy associated with disordering the pyrochlore to a disordered fluorite also increases, consistent with previous results using DFT [21]. This is particularly true of the B=Zr, Hf and Sn families of pyrochlores. For the B=Ti family, there is a peak in the disorder energy for the A=Gd composition, again consistent with previous DFT and empirical potential calculations [19,21]. Considering the disordering energetics alone would suggest that Zr and Hf pyrochlores should be more resistant to amorphization than Ti pyrochlores (which they are) but also that Sn pyrochlores would be less resistant to amorphization than Ti pyrochlores, which they are not. Thus, other factors must also be important. We propose that the energy of the amorphous phase is one of those factors.
The energy differences between ordered pyrochlore and an amorphous structure are also provided in Fig. 2a. In the case of the B=Hf and Zr families, these are again relatively monotonic with increasing A cation radius. However, the behavior of the B=Ti and Sn families is more complex. In particular, for the B=Ti family, the amorphous energy is non-monotonic with A cation radius, but the peak is for a different chemistry than was the disordering energy. In the B=Ti family, the amorphous energy is greatest for A=Y and generally is high for A=Dy and Tb. The B=Sn family exhibits even more complicated behavior. There is a peak in the amorphous energy for A=Gd and a minimum for A=Ho.
Finally, the shaded regions in Fig. 2a highlight the differences between the disordered and amorphous structures.
Correlation of Features with Amorphization Resistance
The DFT results reveal that there are significant differences in the energetics of disorder and amorphization in pyrochlores as a function of both A and B chemistry. We use a machine learning approach to quantify the correlations between these energetics, as well as other features associated with pyrochlores, and the amorphization resistance, as characterized by T C . The features considered here are r A /r B , the ratio of the ionic radii of the A and B cations; ∆X = X B − X A , the difference in electronegativity of the A and B neutral metal atoms (X A and X B , respectively); x, the oxygen positional parameter, which measures the deviation of the oxygen sublattice from an ideal (fluorite-like) simple cubic sublattice; E O→D , the energy difference between the disordered and ordered phases; and E D→A , the energy difference between the amorphous and disordered phases. These features were chosen because (a) they have been shown to correlate to some degree in previous studies and (b) our DFT results indicate that the energetics depend strongly on the A and B chemistry of the pyrochlore, suggesting they may provide a strong descriptor of each compound. We did not consider the enthalpy of formation, proposed by other authors as a factor in radiation tolerance [6,18], as a feature because data were not available for all compounds.
However, before we examine the results of the machine learning model, it is instructive to examine how the selected features correlate with T C . Figure 3 provides simple plots of each feature against T C . The values for T C , summarized in Table S1, are taken from Refs. [12,13,15,16]. The ratio r A /r B shows an overall correlation with T C , but again the details are lost. E O→D , on the other hand, seems to correlate reasonably well for pyrochlores within a given family but does not describe variations of T C between families. Finally, E O→A , similar to x and ∆X, seems to generally correlate separately for B=Sn pyrochlores and for the other families of pyrochlores.
Thus, while there are rough trends indicating some insight from each of these features, there is certainly not enough of a correlation in any case for a quantitative prediction. However, this suggests, as noted by other authors [8], that combinations of these features may provide predictive capability. Hence, we use a machine learning approach to quantify this.
Results of the Machine Learning Model
We use a machine learning (ML) approach to quantify the correlations between the five features described in the previous section and T C . More specifically, we employed kernel ridge regression (KRR) [22-24] - an algorithm that works on the principle of similarity and is capable of extracting complex non-linear relationships from data in an efficient manner - with a Gaussian kernel to learn and quantify trends exhibited by T C in the feature space discussed above. A randomly selected 90%/10% training/test split of the available data was used for statistical learning and for testing the performance of the trained model on previously unseen data. Leave-one-out cross validation was used to determine the model hyper-parameters, to avoid any overfitting of the training data that might lead to poor generalizability. The trained model can subsequently be used to make an interpolative prediction of T C for a new pyrochlore chemistry. Next, within the KRR ML model, we aim to identify the best feature combination, that is, the one exhibiting the highest prediction performance as quantified by its ability to accurately predict T C for the test set compounds. We do this in a comprehensive manner by building KRR ML models using all possible combinations of Ω features with Ω ∈ [2,5]. The performance of each of these models was evaluated separately on the entire dataset as well as on a reduced set that included only the Ti pyrochlores. The root mean square (rms) errors of the T C predictions on the training and test sets for the various models are presented in Fig. 4. In order to account for the model prediction variability associated with randomly selected training/test splits, Fig. 4 reports the rms errors averaged over 100 different randomly selected training/test splits for each of the models. The 2D models that lead to the lowest rms errors on the test set have been marked with a star in Fig. 4a (for the entire dataset) and Fig. 4b (for the Ti pyrochlores). Given comparable prediction performance, a simpler model (i.e., one built on a lower-dimensional feature set) should always be preferred over a more complex one. Therefore, henceforth we focus our attention on the best performing 2D models.
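As a concrete illustration of this procedure, the sketch below reproduces its skeleton with scikit-learn; the compound data, T C values, hyper-parameter grids, and random seeds are placeholders rather than the study's actual inputs.

```python
# Sketch of the KRR feature-combination search described above; the data
# and hyper-parameter grids are placeholders, not the paper's inputs.
from itertools import combinations
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, LeaveOneOut, train_test_split

features = ["rA_rB", "dX", "x", "E_OD", "E_DA"]
rng = np.random.default_rng(0)
X = rng.normal(size=(26, 5))                              # 26 hypothetical compounds
y = 700.0 + 300.0 * X[:, 0] + 50.0 * rng.normal(size=26)  # placeholder T_C (K)

best = None
for k in range(2, 6):                                     # feature subsets of size 2..5
    for subset in combinations(range(len(features)), k):
        cols = list(subset)
        Xtr, Xte, ytr, yte = train_test_split(
            X[:, cols], y, test_size=0.1, random_state=0)
        # leave-one-out CV on the training set fixes the hyper-parameters
        model = GridSearchCV(
            KernelRidge(kernel="rbf"),
            {"alpha": np.logspace(-4, 0, 5), "gamma": np.logspace(-2, 1, 4)},
            cv=LeaveOneOut(), scoring="neg_mean_squared_error",
        ).fit(Xtr, ytr)
        rms = float(np.sqrt(np.mean((model.predict(Xte) - yte) ** 2)))
        if best is None or rms < best[0]:
            best = (rms, [features[i] for i in cols])

print(f"lowest test rms = {best[0]:.1f} K using features {best[1]}")
```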
The superior performance exhibited by the (r A /r B , ∆X) feature pair is not entirely unexpected and can be understood by looking at Fig. 3b and e. As alluded to previously, while r A /r B helps capture the overall T C trends among different chemistries, ∆X allows for an effective separation between different chemistries (especially between the Sn-based compounds and the rest of the dataset), while still capturing relative T C trends between these subgroups. The best performing feature pair for the titanate pyrochlore dataset, however, is constituted by E O→D and E D→A . While the (r A /r B , ∆X) feature pair performs much more poorly on this subset than on the overall dataset, the performance of the (r A /r B , E O→D ) feature pair is found to be comparable to that of the best 2D feature pair.
While Fig. 4 captures the average performance and variability (taken over 100 different runs) of our best performing 2D models (marked with a star), in Fig. 5a-b we present parity plots comparing the experimental T C with the ML predictions using the best 2D descriptors found for the entire dataset (Fig. 5a) and for the titanates (Fig. 5b). The trained models can also be used to chart trends in T C versus the feature values and to make predictions of T C for new chemistries. Figure 5c shows the best two-feature descriptor for the entire set of pyrochlores considered.
Again, in this case, the two features that best correlate with T C are r A /r B and ∆X. This combination of features is able to distinguish the different T C behavior exhibited by the B=Sn pyrochlores and the other families of pyrochlores, by virtue of the properties of ∆X.
However, as discussed above, this combination of features has an effective uncertainty of ∼ 100 K, indicating that it cannot describe the fine features exhibited by the B=Ti family of pyrochlores. For example, T C is not monotonic with A cation radius (see Fig. 1). As discussed, limiting the model to just the B=Ti pyrochlores results in a different optimal two-feature set, namely E O→D and E O→A , as shown in Fig. 4b. In particular, as shown in Fig. 5d, this set of features can describe the subtle behavior in which the A=Gd compound has the highest value of T C , correlating with the fact that it has the highest value of E O→D , while the A=Y compound, which has values of E O→D similar to the neighboring compounds, exhibits an anomalously low value of T C . This is a consequence of its rather high value of E D→A , reflecting the fact that Y is not a rare earth and thus the bonding associated with it is subtly different from that of the other elements around it.
DISCUSSION AND CONCLUSIONS
Combining experimental results for T C for various pyrochlore compounds, DFT calculations of the energetics of disordering and amorphization, and a machine learning model, we conclude that (a) basic ionic properties such as r A /r B and ∆X have the qualitative capability of predicting trends in T C over a wide range of pyrochlore compounds but that (b) the energetics of disordering and amorphization are required to capture the finer variations in T C within a given family such as the titanates. While the feature set of ∆X and r A /r B has the best predictive capability for distinguishing between the various families of pyrochlores, the reason why Sn pyrochlores are radiation tolerant while exhibiting such high disordering energies is found by examining the amorphization energetics. The gap between the disordering and amorphization energies for the Sn pyrochlores is typically quite large, and even if, during the course of irradiation, enough energy is deposited into the lattice that the structure becomes disordered, it is not enough to amorphize the material. The gap between the disordering and amorphization energies is much larger in the Sn pyrochlores than it is in the Ti family and, for some A cations, larger than for the Hf and Zr families as well. Thus, the origin of the radiation tolerance of some of the Sn pyrochlores comes from the fact that they are extremely difficult to amorphize.
The insights gained by the machine learning model apply specifically to pyrochlores and, because of the interpolative nature of these models, to the families of pyrochlores considered here. That said, the features identified as being best able to predict T C can be justified physically and thus may be applicable to other classes of complex oxides, such as the δ-phase [25], although some of the structural accommodation available in those phases is not possible in pyrochlore [26]. Further, other factors, such as short-range order, which is known to occur in complex oxides [27,28], may also play a role. However, we suspect that treating the disordered state as truly random captures much of the behavior of these materials, given the ability of the disordered fluorite structure to predict order-disorder temperatures in these systems [21,29].
In this work, we have used T C as a metric for relative amorphization resistance. In reality, the value of T C encompasses not only thermodynamic properties such as disordering and amorphization energetics, but also kinetic processes of defect annihilation and defect production. Thus, actually predicting T C from fundamental defect behavior would be a daunting task. However, it does provide a metric to compare the susceptibility of amorphization that has been measured for a range of pyrochlore chemistries.
Finally, this work highlights the utility of machine learning approaches in materials science. In this case, the ML model elucidates the features which provide predictive capability, giving insight into the factors which dictate amorphization resistance in pyrochlores. The model also shows that sets of two features result in optimal predictions; higher-order feature sets do not add significant value. The fact that different combinations of features are optimal for predictions for the entire set of pyrochlores (r A /r B and ∆X) versus the Ti family (E O→D and E O→A ) reinforces the point that the best set of features depends on the level of detail (here, the error in the predicted T C ) required in the prediction.
Density Functional Theory
Density functional theory (DFT) calculations were performed using the all-electron projector augmented wave method [30] within the generalized gradient approximation (PBE) with the VASP code [31]. A plane-wave cutoff of 400 eV and dense k-point meshes were used to ensure convergence. The lattice parameters and all atomic positions were allowed to relax, though the cells were constrained to be cubic. The disordered fluorite structure was modeled using the special quasirandom structures (SQS) approach [32]. The SQS structures were generated as described in Ref. [21]. The amorphous structures were created by performing ab initio molecular dynamics at a very high temperature and then quenching the structures to 0 K.
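For orientation, the following is a minimal sketch of such a relaxation setup via ASE's VASP interface; the input structure file and the k-point mesh are illustrative assumptions, since the text specifies only the PAW/PBE setup, the 400 eV cutoff, and "dense" meshes.

```python
# Minimal sketch of the relaxation setup described above, via ASE's VASP
# interface. The input structure and k-point mesh are illustrative.
from ase.io import read
from ase.calculators.vasp import Vasp

atoms = read("Gd2Ti2O7_pyrochlore.cif")  # hypothetical input structure

atoms.calc = Vasp(
    xc="PBE",        # PAW-PBE setup
    encut=400,       # plane-wave cutoff (eV), as quoted in the text
    kpts=(4, 4, 4),  # assumed mesh; convergence must be checked
    ibrion=2,        # conjugate-gradient ionic relaxation
    isif=3,          # relax ions and cell (the cubic constraint of the
                     # text would need to be enforced separately)
    ediff=1e-6,      # electronic convergence criterion
)
print(f"relaxed total energy: {atoms.get_potential_energy():.3f} eV")
```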
For the B=Zr and Hf families, there is a deviation from true monotonic behavior at A=Tb, in contrast with previous DFT calculations [21] that used the same methodology (pseudopotentials, functional, k-point mesh, and energy cutoff). We assume that the differences from previously published results are due to changes in different versions of VASP.
Machine Learning Model
We used Kernel ridge regression (KRR) with a Gaussian kernel for machine learning.
KRR is a similarity-based learning algorithm, where the ML estimate of a target property (in our case the critical temperature T C ) of a new system j is given by a sum of weighted kernel functions (i.e., Gaussians) over the entire training set, as $T_C^{ML}(j) = \sum_i w_i \exp(-|d_{ij}|^2/2\sigma^2)$, where i runs over the systems in the training dataset, and $|d_{ij}|^2 = ||d_i - d_j||_2^2$ is the squared Euclidean distance between the feature vectors $d_i$ and $d_j$. The coefficients $w_i$ are obtained from the training (or learning) process, built on minimizing the regularized squared-error expression $\sum_i [T_C^{ML}(i) - T_C(i)]^2 + \lambda \sum_i w_i^2$, with the kernel width $\sigma$ and the regularization strength $\lambda$ fixed by leave-one-out cross validation as described above. | 2016-07-22T18:57:00.000Z | 2016-07-22T00:00:00.000 | {
"year": 2017,
"sha1": "aadc5d6fd6542e22048550b4fe3423281458214e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1021/acs.chemmater.6b04666",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "aadc5d6fd6542e22048550b4fe3423281458214e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
256959118 | pes2o/s2orc | v3-fos-license | Identification of Differentially Expressed Non-coding RNA in Porcine Alveolar Macrophages from Tongcheng and Large White Pigs Responded to PRRSV
Porcine reproductive and respiratory syndrome (PRRS) is one of the most ruinous diseases in pig production. Our previous work showed that Tongcheng pigs (TC) were less susceptible to PRRS virus (PRRSV) than Large White (LW) pigs. To elucidate the difference in PRRSV resistance between the two breeds, small RNA-seq and ribo-zero RNA-seq were used to identify differentially expressed non-coding RNAs (including miRNAs and lincRNAs) responded to PRRSV in porcine alveolar macrophages (PAMs) from TC and LW pigs. Totally, 250 known mature miRNAs were detected. For LW pigs, there were 44 down-regulated and 67 up-regulated miRNAs in infection group; while for TC pigs, 12 down-regulated and 23 up-regulated miRNAs in TC infection group were identified. The target genes of the common differentially expressed miRNAs (DEmiRNAs) in these two breeds were enriched in immune-related processes, including apoptosis process, inflammatory response, T cell receptor signaling pathway and so on. In addition, 5 shared DEmiRNAs (miR-181, miR-1343, miR-296-3p, miR-199a-3p and miR-34c) were predicted to target PRRSV receptors, of which miR-199a-3p was validated to inhibit the expression of CD151. Interestingly, miR-378 and miR-10a-5p, which could inhibit PRRSV replication, displayed higher expression level in TC control group than that in LW control group. Contrarily, miR-145-5p and miR-328, which were specifically down-regulated in LW pigs, could target inhibitory immunoreceptors and may involve in immunosuppression caused by PRRSV. This indicates that DEmiRNAs are involved in the regulation of the immunosuppression and immune escape of the two breeds. Furthermore, we identified 616 lincRNA transcripts, of which 48 and 30 lincRNAs were differentially expressed in LW and TC pigs, respectively. LincRNA TCONS_00125566 may play an important role in the entire regulatory network, and was predicted to regulate the expression of immune-related genes through binding with miR-1343 competitively. In conclusion, this study provides an important resource for further revealing the interaction between host and virus, which will specify a new direction for anti-PRRSV research.
As an RNA virus, PRRSV mutates at a high rate. Due to the poor cross-protection of traditional vaccines against PRRSV variants, clinical prevention of PRRS is quite difficult; thus, host genetic improvement of PRRSV resistance would be a better choice. As early as 1998, Harbul et al. reported that genetic differences of the host could affect susceptibility to PRRSV and that clinical symptoms under PRRSV infection differed among breeds 8 . In the last decade, more artificial infection experiments have been conducted within different pig breeds or populations with different backgrounds, providing strong evidence and support for the genetic contribution to PRRSV resistance.
In 2006, a highly pathogenic PRRSV (HP-PRRSV) broke out in China, and the epidemic persisted for a long time 9 , leaving the pig industry in a serious deficit. During the outbreak of HP-PRRSV, Tongcheng (TC) pigs, a fine local breed from central China, displayed extremely strong resistance to PRRSV 10 . Our previous study with artificial infection showed that TC pigs were less susceptible to PRRSV than LW pigs, manifesting as fewer tissue lesions, lower virus load in serum, a lower level of IL-10 but a higher level of the anti-viral cytokine interferon-gamma (IFN-γ) 11 . With RNA sequencing, we compared the transcriptome differences of PAMs between TC and LW pigs, which revealed that TC pigs may promote the extravasation and migration of leukocytes to defend against PRRSV infection 12 .
With the development and widespread application of high-throughput sequencing technology, non-coding RNAs, including microRNAs (miRNAs) and long intergenic non-coding RNAs (lincRNAs), have been gradually recognized and found to participate in numerous biological processes. Some miRNAs have been reported to regulate PRRSV proliferation. For example, miR-181 could strongly inhibit PRRSV replication through binding to ORF4 and the PRRSV receptor CD163 13,14 ; miR-23, miR-378 and miR-505 were verified to directly target PRRSV genomic and subgenomic RNA 15 ; miR-26a could up-regulate the expression levels of IFN-I and ISGs 16,17 ; miR-125b suppressed PRRSV replication through down-regulating the NF-κB pathway 18 ; while miR-373 facilitated the replication of PRRSV by negatively regulating IFN-β 19 . Apart from miRNAs, growing evidence suggests that lincRNAs can serve as ceRNAs by competing with mRNAs for binding to miRNAs, and an increasing number of lincRNAs have been confirmed to function as regulators of the immune system 20-22 . Despite the important roles of non-coding RNAs in the regulation of immune responses, our previous work, as well as other reported PRRSV-related transcriptome studies, focused on the function of protein-coding genes in PRRSV infection; less is known about the expression patterns of miRNAs and lincRNAs during PRRSV infection. In this study, RNA sequencing of small RNA and of ribosomal-RNA-depleted total RNA from PAMs of both TC and LW pigs was performed to obtain profiles of miRNAs and lincRNAs, which we hope and believe will provide another perspective for revealing the mechanism of PRRSV resistance.
Materials and Methods
Sample preparation and RNA isolation. Pigs used in this study were selected from our previous research 11 . The HP-PRRSV strain was PRRSV WUH3, administered by intramuscular challenge at a dose of 10 −5 TCID 50 , 3 mL per 15 kg of body weight. Twelve 5-week-old piglets of Tongcheng pigs (TC, n = 6) and Large White pigs (LW, n = 6) were slaughtered at 7 days post infection and then sampled, three for control and the other three for HP-PRRSV infection per breed. PAMs were collected by bronchoalveolar lavage from lungs, lysed with TRIzol reagent (Thermo Fisher Scientific, Waltham, MA, USA), frozen in liquid nitrogen and stored at −80 °C until RNA extraction. Tissues including heart, liver, spleen, lung, kidney, brain, testis, mesenteric lymph nodes (MLN) and inguinal lymph nodes (ILN) were also collected for RNA isolation and analysis. All animal procedures were approved by the Ethical Committee for Animal Experiments at Huazhong Agricultural University, Wuhan, China (animal experiment approval No. HZAUSW-2013-005, 08/27/2013). The animal experiments were performed at the Laboratory Animal Center of Huazhong Agricultural University.
Total RNA was extracted according to the manufacturer's instructions. RNA degradation and contamination were initially monitored on 1% agarose gels. RNA purity was determined using the NanoPhotometer ® spectrophotometer (IMPLEN, CA, USA). Then, the quantity of RNA was measured through Qubit ® RNA Assay Kit in Qubit ® 2.0 Flurometer (Life Technologies, CA, USA). RNA integrity was assessed using the RNA Nano 6000 Assay Kit of the Agilent Bioanalyzer 2100 system (Agilent Technologies, CA, USA).
Library construction for small RNA sequencing and data analysis. Library construction and sequencing were conducted at Beijing Novogene Technology Co. Ltd. (Beijing, China). A total of 3 μg RNA per sample was used to generate a small RNA library using the NEBNext® Multiplex Small RNA Library Prep Set for Illumina® (NEB, USA) following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. Library quality was assessed on the Agilent Bioanalyzer 2100 system. In total, twelve libraries were constructed. After clustering of the index-coded samples on a cBot Cluster Generation System using the TruSeq SR Cluster Kit v3-cBot-HS (Illumina), the libraries were sequenced on an Illumina HiSeq 2000 platform and 50 bp single-end reads were generated.
To guarantee the quality of subsequent analysis, raw reads were processed to obtain clean reads as follows: (1) discard reads with poly-N > 10%, or poly-A/T/C/G, and low-quality reads; (2) remove reads containing 5′ adapter contaminants; (3) filter out reads without 3′ adapters or insert tags; (4) trim 3′ adaptor sequences. Length-filtered clean reads were mapped to the reference pig genome (Sscrofa 10.2) by Bowtie. The aligned reads were then compared with pig miRNA precursors in miRBase 21.0 to identify known miRNAs. Custom scripts were used to obtain miRNA counts, which were then normalized as TPM (transcripts per million): (mapped read count × 10^6)/total read count 23 . MiRNAs with at least five reads of coverage in any one of the twelve libraries were considered to be expressed and used for the following analysis. Differential expression analysis was performed with the SARTools package 24 , setting p value < 0.05 and fold change (abbreviated as FC) ≥ 2 as thresholds.
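As a concrete illustration of this normalization and expression filter, the sketch below applies them to a toy count table in Python; the miRNA names and counts are placeholders, and pandas is assumed available.

```python
# Minimal sketch of the TPM-style normalization and expression filter
# described above; all counts are hypothetical placeholders.
import pandas as pd

# rows = miRNAs, columns = libraries (two of the twelve shown)
counts = pd.DataFrame(
    {"TC_CON_1": [1200, 30, 0], "LW_INF_1": [400, 90, 7]},
    index=["miR-181a", "miR-199a-3p", "miR-582"],
)

# TPM normalization: mapped read count x 10^6 / total mapped reads per library
tpm = counts * 1e6 / counts.sum(axis=0)

# keep miRNAs with at least five reads in any one library
expressed = counts[(counts >= 5).any(axis=1)]
print(tpm.round(1))
print(expressed.index.tolist())  # all three toy miRNAs pass the filter
```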
Target genes of miRNAs were predicted by software miRanda 25 . Genome sequence of PRRSV-WUH3 were downloaded from NCBI (accession number: HM853673.2) and used for miRNA binding sites prediction. Meanwhile, 3′UTRs of pig mRNA from Ensembl database were extracted to find underlying host target genes of expressed miRNAs. Also, an integrated analysis between differentially expressed (DE) miRNAs and DEmRNAs 12 was performed.
Library construction for ribo-zero RNA sequencing and data analysis. The detailed procedure of library construction for ribo-zero RNA sequencing is described in a previous publication 12 . In brief, after removal of rRNA, twelve strand-specific cDNA libraries were constructed and 100 bp paired-end raw reads were generated. Clean data were obtained by removing reads containing adapters, reads containing poly-N and low-quality reads. Clean reads were mapped to the reference pig genome (Sscrofa 10.2) by TopHat v2.0.9 26 . The aligned reads were assembled into transcripts with Cufflinks and merged together with Cuffmerge 27 . To identify lincRNAs, four filtering steps were taken: (1) transcripts with a single exon or shorter than 200 nt were removed; (2) transcripts with a coding potential score, generated with the CPC software, lower than −1 were considered non-coding 28 ; (3) transcripts with FPKM values less than 0.01 in all samples were discarded; (4) transcripts overlapping with known genes or falling within 1 kb of any protein-coding gene were also removed 29 . Cuffdiff was used to identify DElincRNAs between groups, setting p value < 0.05 and |FC| ≥ 2 as thresholds. DEmRNAs 12 within 500 kb of a lincRNA were regarded as cis target genes, while DEmRNAs co-expressed with DElincRNAs were considered trans target genes. For the predicted target genes of DEmiRNAs and DElincRNAs, gene ontology (GO) and KEGG pathway analyses were performed using DAVID (https://david.ncifcrf.gov/), considering p value < 0.05 as the threshold for significant enrichment.
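The four lincRNA filtering steps can be written out as in the minimal sketch below; the transcript records, field names, and threshold encodings are illustrative assumptions, not the pipeline's actual data structures.

```python
# Minimal sketch of the four-step lincRNA filter described above, applied
# to hypothetical assembled-transcript records.
from dataclasses import dataclass

@dataclass
class Transcript:
    tid: str
    length: int             # transcript length (nt)
    exons: int
    cpc_score: float        # CPC coding potential score
    max_fpkm: float         # highest FPKM across the twelve samples
    near_coding_gene: bool  # overlaps or lies within 1 kb of a coding gene

def is_lincrna(t: Transcript) -> bool:
    return (
        t.exons >= 2 and t.length >= 200  # step 1: multi-exon, >= 200 nt
        and t.cpc_score < -1              # step 2: non-coding by CPC
        and t.max_fpkm >= 0.01            # step 3: minimally expressed
        and not t.near_coding_gene        # step 4: intergenic
    )

candidates = [
    Transcript("TCONS_00125566", 2700, 2, -1.8, 3.2, False),
    Transcript("TCONS_00000001", 150, 1, -2.0, 0.5, False),
]
print([t.tid for t in candidates if is_lincrna(t)])  # ['TCONS_00125566']
```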
Construction of lincRNA-miRNA-mRNA regulatory network. Based on the sequence complementarity and negative correlation between miRNAs and their target mRNAs or target lincRNAs, as well as the co-expression relationship between lincRNAs and mRNAs, the regulatory network of lincRNA-miRNA-mRNA was analyzed.
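A minimal sketch of how such a network can be assembled is given below; the node names, binding pairs, and correlation values are illustrative placeholders, and networkx is assumed available.

```python
# Minimal sketch of assembling the lincRNA-miRNA-mRNA (ceRNA) network from
# predicted binding pairs and expression correlations; all pairs and
# correlation values here are hypothetical.
import networkx as nx

# (miRNA, predicted target, Pearson r): kept only if negatively correlated
mirna_edges = [("miR-1343", "TCONS_00125566", -0.8), ("miR-1343", "CD151", -0.6)]
coexpr_edges = [("TCONS_00125566", "CD151", 0.7)]  # lincRNA-mRNA co-expression

g = nx.Graph()
for mirna, target, r in mirna_edges:
    if r < 0:  # sequence complementarity plus negative correlation
        g.add_edge(mirna, target, weight=r, kind="miRNA-target")
for linc, mrna, r in coexpr_edges:
    g.add_edge(linc, mrna, weight=r, kind="co-expression")

# rank nodes by degree to find hubs of the regulatory network
print(sorted(g.degree, key=lambda kv: -kv[1]))
```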
qRT-PCR validation of differentially expressed miRNAs and lincRNAs. The First Strand cDNA Synthesis Kit (Takara, Dalian, China) was used for reverse transcription, and conventional qRT-PCR and stem-loop qRT-PCR were conducted to determine the relative expression of lincRNAs and miRNAs, respectively. Pig U6 snRNA and GAPDH were used as internal controls for miRNAs and lincRNAs, respectively. Primers used for qRT-PCR are listed in Table S1. The qRT-PCR was performed in a total volume of 10 μL containing 5 μL 2 × SYBR Green Real-time PCR Master Mix, 0.2 μM of each primer, 1 μL cDNA and 3.6 μL ddH 2 O. The qRT-PCR reaction was conducted at 95 °C for 3 min, followed by 40 cycles of 95 °C for 15 s, 60 °C for 15 s, and 72 °C for 20 s. All reactions were run in triplicate. Relative gene expression levels were calculated using the 2 −ΔΔCT method 30 . Student's t-test was performed for data analysis; p value < 0.05 and p value < 0.01 were considered the statistical thresholds for significant and highly significant differences, respectively.
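For clarity, the 2^−ΔΔCT calculation can be written out as in the short sketch below; the Ct values are hypothetical, with U6 standing in as the miRNA reference.

```python
# Minimal sketch of the 2^-ddCt relative-expression calculation described
# above, with hypothetical Ct values (U6 snRNA as the miRNA reference).
def rel_expr(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct = ct_target - ct_ref                 # normalize to the reference
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # same for the control group
    return 2 ** -(d_ct - d_ct_ctrl)           # fold change vs control

# infection group vs control group for one miRNA (hypothetical Ct values)
print(round(rel_expr(24.1, 18.0, 26.3, 18.2), 2))  # -> 4.0-fold up-regulation
```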
Results
Overview of miRNA sequencing. To reveal the expression differences of non-coding RNAs in PAMs from PRRSV-infected TC and LW pigs, twelve small RNA libraries and twelve cDNA libraries were constructed and sequenced. The experiment was divided into four groups: TC_CON, TC_INF, LW_CON and LW_INF. For small RNA sequencing, the numbers of raw reads, which covered at least 51.6 million bases, ranged from 10M to 13M (Table 1). Q20 was above 97.89% and Q30 was above 94.70%, which indicated a high accuracy of the sequencing data. Clean reads accounted for 96.92% to 98.53% of the raw reads. The length of the clean reads was mainly from 20 nt to 24 nt, which accords with the general length range of miRNAs. Of the 411 annotated mature miRNAs, 208 were transcribed in all sequenced individuals. We analyzed the first-nucleotide bias of the identified miRNAs, which revealed a tendency toward uracil in most cases (Fig. S1).
Identification of DEmiRNAs and qRT-PCR validation.
Setting p value < 0.05 and |FC| ≥ 2 as the criteria for screening DEmiRNAs, 67 up-regulated and 44 down-regulated miRNAs were identified after PRRSV infection in LW pigs. Also, compared with the TC_Control group, 23 up-regulated and 12 down-regulated miRNAs were identified in the TC_Infection group (Table S2). Additionally, 31 and 30 miRNAs were differentially expressed between the two breeds in the control and infection groups, respectively (Table S2; Fig. 1a). It is worth noting that miR-451, miR-486, miR-199b-3p, miR-199a-3p, miR-199b-5p and miR-31 were differentially expressed in all four paired comparisons (Table S2): CON_TC vs CON_LW, INF_TC vs INF_LW, LW_CON vs LW_INF and TC_CON vs TC_INF. A heatmap of all DEmiRNAs in the four groups revealed that TC_CON and LW_CON displayed similar expression patterns and clustered together, and the two infection groups clustered together. However, within one cluster, whether control or infection, some miRNAs displayed extreme expression differences between the two breeds (Fig. 1b). miR-378, miR-378b-3p, miR-582 and miR-7135-3p were specifically down-regulated in PAMs of TC pigs, while there were 80 specific DEmiRNAs in PAMs of LW pigs. To validate the analysis results, six DEmiRNAs (miR-146b, miR-335, miR-378, miR-451, miR-532-5p and miR-9-1) were randomly selected for stem-loop qRT-PCR assay. The results indicated that the trends of relative expression by qRT-PCR were consistent with the small RNA-seq results (Fig. 2).

Functional analysis of the target genes of DEmiRNAs. miRanda was used to predict target genes of DEmiRNAs against the PRRSV genome, PRRSV receptors and other host genes. Of the shared DEmiRNAs before and after infection in the two breeds, 23 miRNAs could bind to the PRRSV genome (Table S3). Of these, miR-1343, miR-296-3p, miR-199a-3p and miR-34c could target the PRRSV receptor CD151; miR-181a and miR-181b could target CD163; and miR-34c could target CD209. They were differentially expressed between the control and infection groups in both breeds (Table S2), which indicates that these common DEmiRNAs might play a key role in the regulation of PRRSV infection in pigs. In addition, four DEmiRNAs specific to TC pigs (miR-378, miR-378b-3p, miR-7135-3p and miR-582) were predicted to have ten binding sites on the PRRSV genome. miR-199a-3p was then randomly selected to validate the results of the bioinformatics analysis. As shown in Fig. 3, over-expression of miR-199a-3p by mimics down-regulated the expression of CD151.
Identification and qRT-PCR validation of differentially expressed lincRNAs.
An overview of the ribo-zero RNA sequencing data was given in another publication 12 . After a series of rigorous screening steps, 616 lincRNA transcripts were eventually identified, distributed on each chromosome except chromosome Y. The average length of these lincRNA transcripts is 2709 nt and more than 60% of the lincRNAs contain only two exons. Compared with protein-coding genes, the identified lincRNAs have fewer exons, lower coding potential and lower expression levels (Fig. 5). Compared with the TC control group, there were 11 significantly down-regulated and 19 up-regulated lincRNAs in the TC infection group. Also, 29 lincRNAs and 19 lincRNAs were found to be down-regulated and up-regulated in the LW infection group, respectively (Table S5). Eleven DElincRNAs were randomly selected for validation by qRT-PCR. The results showed that the relative expression of the lincRNAs was consistent with the RNA-seq analysis (Fig. 6a). Additionally, the expression levels of 5 DElincRNAs were examined in ten tissues by RT-PCR, which revealed that the selected DElincRNAs were highly expressed in immune tissues, except TCONS_00146873 (Fig. 6b).
Functional annotation of the target genes of DElincRNAs.
To explore potential cis-regulatory lincRNAs, DEmRNAs within 500 kb upstream and downstream of DElincRNAs were analysed 31 . In total, 150 DEmRNAs were found, and they were enriched in the regulation of signal transduction and reproduction-associated biological processes (Table S6). For trans-acting regulation, Pearson correlation coefficients between DElincRNAs and DEmRNAs were calculated. This identified 915 DEmRNAs co-expressed with DElincRNAs, which were significantly enriched in multiple immune-related biological processes (Table 3). The target genes of DElincRNAs between the control and infection groups in both breeds were enriched in similar biological processes and KEGG pathways.
Construction of lincRNA-miRNA-mRNA regulatory network. As the down-regulated miRNAs were closely related to immunity, they were used for the following analysis. Based on the sequence complementarity and negative correlation between miRNAs and their targets, as well as the co-expression relationship between lincRNAs and mRNAs, the regulatory network of lincRNA-miRNA-mRNA was analysed (Fig. 7). The results showed that lincRNA TCONS_00125566 occupied a central position in the network and was predicted to regulate the expression of immune-related genes through competitively binding miR-1343.
Discussion
PRRS has been pandemic for over 30 years, but it remains one of the main threats to large-scale pig farms. Clinical prevention is difficult because of the high variability of the virus and the pressure of immune selection. It is therefore particularly important to enhance genetic resistance to PRRSV and to improve the genetic makeup of pig populations. In this study, small RNA-seq and ribo-zero RNA-seq were performed to study the roles of miRNAs and lincRNAs in the interaction between virus and host, which not only provides a new perspective for investigating the difference in disease resistance between TC and LW pigs, but also lays a foundation for further study of the antiviral functions of miRNAs and lincRNAs. We identified miRNAs and lincRNAs that may play important roles in the defense against PRRSV in the two breeds, including miR-181, miR-296-3p, miR-744, miR-185, let-7c, miR-145-5p and miR-328, as well as TCONS_00074289, TCONS_00037786 and TCONS_00125566. As important regulatory factors, miRNAs participate in diverse biological processes, such as development, metabolism, cell proliferation and apoptosis, as well as in the intricate networks of host-pathogen interactions and innate immunity. In accordance with previous research, miR-181, validated to inhibit PRRSV replication13,14, was up-regulated post infection compared with the control groups in both breeds. miR-296-3p and miR-744 were decreased more than 5-fold in both breeds, and both could target LDOC1 (up-regulated post-infection), which serves as a negative regulator of NF-κB32 and is involved in the anti-inflammatory response in both breeds. Interestingly, miR-451 was up-regulated in both breeds, but it was expressed at a higher level in TC pigs, up to 17-fold that of the LW infection group. Previous research indicated that miR-451 can inhibit the expression of pro-inflammatory factors33; its higher expression level therefore probably contributed to the relatively weaker tissue inflammation in TC pigs. PRRS is also an immunosuppressive disease, and IL-10 is a key immunosuppressive factor during PRRSV infection34. In the current study, miR-185 and let-7c, which were specifically down-regulated in LW pigs, targeted SIGLEC5, which was up-regulated during PRRSV infection. A previous study showed that mouse SIGLEC5 enhanced IL-10 production while inhibiting TNF-α production in macrophages35. As a member of the Siglec family, SIGLEC5 can facilitate the escape of pathogenic organisms from the control of the innate immune system36. Thus, LW pigs, with higher expression of SIGLEC5 and IL10, manifested a state of immunosuppression. In addition, miR-145-5p, specifically down-regulated in LW pigs, could target cytotoxic T-lymphocyte associated protein 4 (CTLA-4), which had significantly lower expression in the TC infection group than in the LW infection group. CTLA-4, a negative regulator of T-cell activation, can reduce the response to antigen37. Besides, miR-328, specifically up-regulated in LW pigs, could target programmed death ligand-1 (PD-L1), which was up-regulated in the LW infection group compared with the TC infection group. Research has shown that increased PD-L1 expression on antigen presenting cells (APCs) can aid virus survival and decrease T-cell activity38. TIM-3, which can inhibit the immune activity of Th1 cells by mediating apoptosis and thus inducing immune suppression, might be targeted by miR-362 and miR-365-3p.
[Table 3 fragment: Graft-versus-host disease, 6; Allograft rejection, 6; Autoimmune thyroid disease, 6; Primary immunodeficiency, 5]
To sum up, the above-mentioned DEmiRNAs were specifically down-regulated in LW pigs, while all of their target genes were up-regulated in LW pigs. Interestingly, these target genes, together with IL-10, participate in immunosuppression, which indicates that some DEmiRNAs were involved in the regulation of immunosuppression in LW pigs. As a DEmiRNA common to both breeds, miR-199 has been shown to be effective in targeting and regulating HBV (hepatitis B virus), offering a potential treatment for HBV-induced disease39. In this study, we found that miR-199a-3p could inhibit CD151 expression at the protein level. Since CD151 serves as a receptor for PRRSV, modulating such target genes with miRNAs could be an appropriate way to prevent PRRSV infection. Apart from miRNAs, lincRNAs are also an indispensable part of the transcriptome; however, lincRNAs of the pig immune system have rarely been reported. In the current study, we identified 616 lincRNA transcripts in PAMs and examined their differential expression between the control and infection groups within each pig breed and between the control groups of the two breeds.
Co-expression analysis based on the Pearson correlation coefficient identified many DEmRNAs involved in immune-related processes and pathways, including the inflammatory response, apoptosis, cytokine-cytokine receptor interaction and so on. The regulation of these DEmRNAs by lincRNAs was mostly mediated by trans-acting, consistent with previous research40. Interestingly, one DElincRNA, TCONS_00074289, was expressed only in PAMs and lung, suggesting a role in the development of the lung and PAMs. Recent reports have suggested that lincRNAs can interact with other classes of non-coding RNAs, including miRNAs, and thereby regulate the expression of mRNAs41.
In this study, we systematically analyzed the complex interactions between miRNAs and their target genes and constructed lincRNA-miRNA-mRNA networks. TCONS_00037786 and TCONS_00125566 were recognised as important competing endogenous RNAs. Of the target genes, GADD45B is an anti-apoptotic factor42; SLAMF7 can inhibit the production of proinflammatory cytokines43; CXCR3 and CCR5, which belong to the chemokine receptor family, are involved in the inflammatory response; and DUSP4 is closely related to cell proliferation, differentiation and apoptosis through negative regulation of the MAPK pathway44. We therefore speculate that these two lincRNAs, together with miR-296-3p and miR-1343, take part in the regulation of inflammation and apoptosis in pigs, which of course requires further experimental validation.
In summary, miRNAs were involved in the defense against PRRSV; owing to differences in genetic background, some of them displayed breed-specific expression patterns and also combined with lincRNAs to modulate physiological processes such as antiviral defense and regulation of host immunity, which probably provides new evidence for the genetic contributions to PRRSV resistance.
Conclusions
In this study, the expression and regulation of miRNAs and lincRNAs were analyzed in TC and LW pigs in response to PRRSV. Down-regulated miRNAs were involved in the regulation of PRRSV proliferation through participation in immune-related biological processes and pathways. miRNAs could also target immunosuppressive receptor and ligand genes, leading to a stronger degree of immunosuppression in Large White pigs. Moreover, network interaction analysis was performed and several functional non-coding RNAs were identified. This study lays the foundation for exploring the interaction mechanism between host and PRRSV and for further revealing the disease resistance mechanisms of Tongcheng pigs.
"year": 2018,
"sha1": "0bfd317fd42c7b677638cab0335a6664544a33a4",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-33891-0.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "0bfd317fd42c7b677638cab0335a6664544a33a4",
"s2fieldsofstudy": [
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
Widespread subclinical cellular changes revealed across a neural-epithelial-vascular complex in choroideremia using adaptive optics
Choroideremia is an X-linked, blinding retinal degeneration with progressive loss of photoreceptors, retinal pigment epithelial (RPE) cells, and choriocapillaris. To study the extent to which these layers are disrupted in affected males and female carriers, we performed multimodal adaptive optics imaging to better visualize the in vivo pathogenesis of choroideremia in the living human eye. We demonstrate the presence of subclinical, widespread enlarged RPE cells present in all subjects imaged. In the fovea, the last area to be affected in choroideremia, we found greater disruption to the RPE than to either the photoreceptor or choriocapillaris layers. The unexpected finding of patches of photoreceptors that were fluorescently-labeled, but structurally and functionally normal, suggests that the RPE blood barrier function may be altered in choroideremia. Finally, we introduce a strategy for detecting enlarged cells using conventional ophthalmic imaging instrumentation. These findings establish that there is subclinical polymegathism of RPE cells in choroideremia.
Choroideremia1 is a rare X-linked retinal degeneration caused by loss-of-function mutations in the CHM gene that results in nyctalopia, progressive visual field loss, and ultimately, blindness2-4. The name of the disease suggests primary involvement of the choroid5, a vascularized support structure for the outer retina. For example, choriocapillaris degeneration has been linked with declining visual function in choroideremia6. However, several studies have suggested that choroideremia primarily affects the retinal pigment epithelial (RPE) cells7-11. Other studies report photoreceptor degeneration as a driver of the disease12,13 or that there are multiple interacting factors14-19. Given the various factors proposed, and the possibility of discrepancy between cell and animal models compared to the actual clinical manifestation of choroideremia, visualization of the in vivo pathogenesis of the disease across the outer retinal layers is of critical importance for developing future treatments for these patients.
Over two dozen clinical trials are currently underway for choroideremia, many involving gene augmentation20-23. A common aim is the preservation of central vision provided by the macular photoreceptors, which often remain intact until the late stages of disease, typically around the fifth decade of life2. Although there are numerous structural (e.g., optical coherence tomography (OCT), autofluorescence) and functional (e.g., visual fields, electroretinography) measures for assessing disease progression or treatment efficacy, the majority of these approaches are better suited for assessing photoreceptor structure/function than RPE status. Clinically available techniques (e.g., OCT, autofluorescence) provide information about whether the RPE monolayer is intact, but to date, these methods are unable to provide cellular-level information about the RPE. In the current study, we sought to evaluate the RPE layer, along with the photoreceptor and choriocapillaris layers, to determine the extent to which each layer is affected in choroideremia and to help inform future clinical trials.
Female carriers of choroideremia are typically regarded to be mostly unaffected. Although it is recognized that there is a wide spectrum of disease severity affecting female carriers, severe symptoms such as night blindness and visual impairment similar to those observed in affected males are thought to affect only a minority of the female carriers 24 . In a previous study, abnormal full-field electroretinography responses were detected in only 15% of female carriers, despite 96% of these carriers presenting with ophthalmoscopic signs such as patchy atrophy and coarse granularity in the mid and far periphery 25 . It has been hypothesized that the degree to which female carriers are affected depends on lyonization (random X-inactivation) 12,16,26,27 within the RPE mosaic, which would result in patches of both mutant and normal RPE cells in which the mutant RPE cells could degenerate over the course of five or more decades of life. In particular, lyonization could explain the patchy pigmentary changes such as melanin clumping due to pigment migration or areas of RPE atrophy reported in female carriers 7,12,[28][29][30] . To our knowledge, this theory has not been conclusively demonstrated to date due in part to the lack of tools for imaging the RPE at a cellular level in patients over large enough areas within the eye to detect mutant and normal patches of RPE.
Adaptive optics (AO) retinal imaging31 has provided cellular-level resolution of the living human retina in patients with choroideremia, particularly for assessing the status of cone photoreceptors6,9,32,33. Structurally, well-preserved cone photoreceptors have been reported at the margins of RPE atrophy16,33, but areas of reduced cone density elsewhere have also been observed9,16,32. Functionally, retinal sensitivity assessments showed a sharp drop-off in cone function near areas of atrophy, consistent with the sharp loss of cone photoreceptors at or near the atrophic edge6,33. In female carriers of choroideremia, cones seem to be relatively well-preserved32, but with the possibility of patchy loss of photoreceptors in some eyes16. Although these studies have suggested that RPE damage precedes cone photoreceptor degeneration, limited assessment of the RPE mosaic at the cellular level has been performed. Recently, adaptive optics enhanced indocyanine green (AO-ICG) imaging has been introduced as a method for visualizing the RPE mosaic34, and it has revealed the presence of enlarged RPE cells in Bietti Crystalline Dystrophy35. We have also shown that AO-ICG can be used to visualize the choriocapillaris36. AO-ICG in combination with other modalities, particularly adaptive optics optical coherence tomography (AO-OCT), provides a means to visualize the RPE mosaic37-39. Taken together, these techniques enable in vivo multimodal assessment of the photoreceptor, RPE, and choriocapillaris complex at the cellular level in both affected males and female carriers of choroideremia.
RPE cells are enlarged in both affected males and female carriers.
The characteristic heterogeneous pattern of fluorescence34,35 was observed in all affected males and female carriers imaged using AO-ICG. However, unlike the mosaic of healthy eyes, in choroideremia eyes the RPE cells were substantially enlarged, by up to ~5×, especially in the affected males (Fig. 1). In many of the enlarged cells, hypofluorescent RPE nuclei could be observed (arrows, Fig. 1c), consistent with our previous findings revealing nuclei in some larger RPE cells35. Evidence of these enlarged RPE cells could not be detected based on fundus autofluorescence or clinical OCT. The AO-ICG fluorescence pattern was observed across the retina in female carriers as well as within the areas where RPE was intact in affected males. Measurements of RPE density were below the 99.9% confidence interval for expected normal RPE density40 at all measured eccentricities for both affected males (filled symbols) and female carriers (unfilled symbols). Measurements in affected males were performed up to the edge of atrophy, generally <3 mm (Fig. 1f). Across all eccentricities, the average RPE density in the female carriers (1981 cells/mm²) was in between that of healthy eyes (5947 cells/mm²) and affected males (844 cells/mm²), but there was considerable variability in the female carriers, especially near the foveal center (eccentricity 0.0 mm). For the female carriers, the presence of interspersed enlarged RPE cells among less affected RPE cells (Fig. 1d) is consistent with the hypothesis that there are patches of mutant RPE cells distributed throughout areas with normal RPE cells in which the X-chromosome that harbors the loss-of-function mutation has been inactivated during the lyonization process.
Multimodal AO imaging incorporating co-registered AO-OCT and AO-ICG images also showed that RPE cells were enlarged in choroideremia (Fig. 2). For a subset of subjects (three affected males and five female carriers), AO-OCT images of RPE cells were also obtained to evaluate whether RPE cells could be detected. Although AO-OCT imaging in healthy eyes has been successfully demonstrated, to our knowledge there are limited examples of AO-OCT imaging performed in patients with advanced disease, which can be challenging due to the longer acquisition time needed to obtain averaged volumes for RPE cell imaging (125-150 volumes per location; see Methods: Adaptive Optics Imaging). As previously demonstrated, side-by-side comparison of the same RPE cells imaged using AO-OCT and AO-ICG helps to establish how to interpret images of disrupted RPE37, which can appear strikingly different compared to healthy RPE. Whereas RPE cells can be identified on the basis of differences in overall fluorescence intensity in AO-ICG images41, in AO-OCT images the darker RPE cell centers can be used to infer cell-to-cell spacing, most easily visible when the RPE cells are in close proximity to each other. In healthy subjects (Fig. 2i, k) the cell centers are tightly packed, and in female carriers (Fig. 2e, g) they are sparsely distributed. In the affected male, it is difficult to distinguish the boundaries of individual RPE cells in the AO-OCT images, but comparison of the AO-ICG images alongside AO-OCT images suggests that the dark spots correspond to cell centers within enlarged RPE cells (Fig. 2a-d). In general, the sources of image contrast are not yet well understood when the cellular structure itself is as dramatically altered as it is in the case of these enlarged RPE cells, which introduces challenges for image interpretation. Nonetheless, these multimodal AO images provide an initial glimpse into the characteristics of AO-OCT RPE imaging in diseased eyes and help to confirm that the RPE monolayer is intact even in areas of low cell contrast (AO-OCT shows the presence of RPE cells in hypofluorescent areas of the AO-ICG image, and conversely, AO-ICG shows the presence of RPE cells in low-contrast areas of the AO-OCT image). Taken together, these data suggest the possibility for substantial RPE enlargement to occur in the setting of an intact RPE monolayer as revealed by clinical OCT scans.

Fig. 1 (caption fragment): … Table 1). White rectangles indicate areas where high resolution adaptive optics (AO) images were obtained. Scale bar, 1 mm. c, d Late phase adaptive optics enhanced indocyanine green (AO-ICG) images of the RPE mosaic obtained in the white rectangles from (a and b). e For comparison, an AO-ICG image obtained from a similar area in a healthy right eye is shown. Green hexagons denote the approximate size of single RPE cells in the immediate vicinity of the hexagon (barely visible in the healthy eyes). The nuclei of RPE cells are visible in some of the enlarged cells (arrows point to examples of nuclei visible as hypofluorescent spots). In the healthy eye, RPE cell size is nearly constant across the image; in contrast, the RPE cells in the carrier eye are variable in size, with many of the cells enlarged. The RPE cells in the affected eye are consistently enlarged across the entire image. Scale bar for (c-e), 100 µm. f Quantitative measurements of RPE cell density obtained at different locations (eccentricities) from the fovea out to ~5.0 mm in the temporal direction (female carriers, unfilled symbols; affected males, filled symbols; triangles, left eye; squares, right eye). Measurements of RPE cell density in the affected eyes were possible only up to the edge of atrophy, which was generally <3.0 mm. RPE density in all subjects was significantly lower than expected normal values (gray dots; the gray band represents the 99.9% confidence interval)40.
However, whether this RPE disruption occurs concomitantly with photoreceptor or choriocapillaris degeneration remains to be shown.
RPE cells are disrupted to a greater extent than surrounding neural and vascular layers.
We performed AO-ICG imaging to visualize the photoreceptor, RPE, choriocapillaris complex in vivo in 5 eyes from 3 affected males and 11 eyes from 6 female carriers (Fig. 3). Currently, AO-ICG images of the choriocapillaris can only be generated during the transit phase immediately after dye injection, which limits choriocapillaris imaging to a small field of view (0.6 mm × 0.6 mm). As central vision is typically well-preserved until the late stages of the disease, we opted to acquire the choriocapillaris images at the foveal center of each subject. At this location, the photoreceptor, RPE, choriocapillaris complex was intact in all subjects imaged, and the relative size scale of each layer was quantified (foveal cone spacing, RPE cell spacing, and choriocapillaris flow void effective diameter)36 (Fig. 3b). The degree to which the RPE was enlarged was significantly increased compared to both the photoreceptors (P < 0.001) and choriocapillaris (P < 0.001), but there was no significant difference between the photoreceptors and choriocapillaris (P = 0.63). In general, these observations applied to both affected males and female carriers.
To further explore whether there were differences in the rate of progression of each of the three layers, longitudinal imaging of the same region was performed in a subset of subjects who were able to return for a second visit (Supplementary Table 1). In this subset of 2 eyes from 1 affected male and 5 eyes from 3 female carriers, the visit-to-visit change was greatest in the RPE layer, followed by the choriocapillaris, and then the photoreceptors (average foveal cone spacing, RPE cell spacing, and choriocapillaris flow void effective diameter increase from visit 1 to visit 2: cone photoreceptors, 1% [6%]; RPE, 24% [15%]; choriocapillaris, 4% [9%]; mean [SD]) (Fig. 3c). The visit-to-visit increase in size scale observed in the RPE was significantly greater than those observed in both the photoreceptor (P < 0.01) and choriocapillaris (P < 0.05) layers; there was no significant difference between the photoreceptors and choriocapillaris (P = 0.49). These findings suggest a faster rate of progression in the RPE compared to the surrounding layers and are consistent with our overall observation that the RPE is severely disrupted in choroideremia. Interestingly, in two of the carrier eyes (C3L and C5R), the RPE mosaic (AO-ICG fluorescence pattern) appears to shift slightly from visit to visit above the more stable choriocapillaris, even over a relatively short follow-up period of 2-6 months (Supplementary Movie 1), which contrasts with the mosaic stability observed in healthy subjects across years35. This might be due to the dropout of individual RPE cells within and near the imaging field of view, resulting in a local, compensatory rearrangement of cells that preserves the continuity of the RPE monolayer.
Disruption of outer retinal barrier function of the RPE.
Patches of fluorescently labeled photoreceptors were observed in choroideremia. To further characterize this phenomenon, we modeled the uptake of ICG dye into the RPE cells using mice. Building upon our previous studies showing that the murine RPE layer is selectively labeled following intraperitoneal administration of ICG34,35, we performed additional experiments to examine the status of the photoreceptors in mice (Fig. 4). Here, we confirmed that the overlying photoreceptors were not labeled with ICG under normal physiologic conditions (Fig. 4a). The presence of ICG within the RPE confirmed that the dye was successfully delivered to the mouse eye. Even though the photoreceptors and RPE cells are in close apposition, after carefully separating the retina from the RPE layers through dissection, we confirmed that the photoreceptors remained unlabeled after systemic administration of ICG. However, photoreceptors were readily labeled with ICG following ex vivo incubation of the separated retina with ICG, suggesting that the tight junctions between RPE cells, which comprise the outer blood retinal barrier, effectively prevent the photoreceptors from being labeled with ICG under normal conditions. We also repeated this condition with imaging performed using en face AO microscopy42 (same excitation and emission spectra used for human AO-ICG imaging), further corroborating this finding. These results demonstrate that photoreceptors are readily labeled when exposed to ICG.
Upon close examination of the AO-ICG images in human participants, patches of fluorescently labeled photoreceptors were observed in 10 eyes from 5 female carriers as well as in 1 eye from 1 affected male (Fig. 4c and Supplementary Fig. 1). A total of 11 distinct patches of ICG labeled cone photoreceptors were selected from the female carriers for further analysis. All ICG labeled cones identified in the AO-ICG images were colocalized with cones identified in simultaneously acquired, co-registered non-confocal split detection images (n = 2317 cones), supporting our claim that these fluorescently labeled structures are indeed cone photoreceptors. Although we did detect a small subset of non-labeled cones (392 cones visible on non-confocal split detection but not on AO-ICG), we found that most of the cones within these patches were labeled (86%). The presence of fluorescently labeled cones overlying hypofluorescent RPE cells suggests that this finding is not due to the cone imprinting phenomenon34. Quantification of the density of the cone mosaic within these patches (including both fluorescently labeled and non-labeled cones) yielded values comparable to normative histologic data (Fig. 4d)43. In addition, from the AO-OCT images, measurements of outer retinal length (ORL) were performed, defined as the distance between the inner segment/outer segment junction (ellipsoid zone) and the RPE bands44. ORL measurements in both labeled and non-labeled photoreceptor regions from five female carriers further showed no statistically significant differences between these two groups (P = 0.45). Overall, the ORL in female carriers was similar to normative data from healthy eyes. Finally, fundus-guided retinal sensitivity measurements performed during the same visit as AO-ICG in 2 eyes from 1 female carrier revealed that both ICG labeled and non-ICG labeled photoreceptors had normal retinal sensitivity values (normal range: 29-31 dB)45. There were no significant differences in retinal sensitivity observed between labeled and non-labeled patches (ICG labeled, 30.3 ± 0.9 dB; non-labeled, 30.2 ± 1.1 dB, P = 0.81; n = 10 ICG labeled and 21 non-labeled patches, mean ± SD). Altogether, these results suggest that ICG labeled cone photoreceptors in choroideremia are neither structurally nor functionally deficient (i.e., the labeling of photoreceptors with ICG is a result of disruption to the blood barrier function of the RPE and does not represent a defect in the photoreceptors themselves).

Fig. 3 (caption fragment): … showing foveal cone photoreceptors (PR), RPE cells, and the choriocapillaris (CC) microvasculature. Images from an affected male, a female carrier, and a healthy eye are shown. Note that subject A4L has a relatively well-preserved island of RPE cells at the fovea in which RPE cells are less enlarged compared to other affected males; outside of the fovea, RPE cells still form a contiguous monolayer but are dramatically enlarged (see Fig. 5a). Scale bars: PR, 10 µm; RPE, 50 µm; CC, 100 µm. b Box plots of photoreceptor spacing, RPE spacing, and choriocapillaris flow void diameters show that the RPE is the most severely affected of these three layers (center line: median; box limits: upper/lower quartiles; whiskers: 1.5× interquartile range; points beyond the whiskers: outliers). Data corresponding to the subjects shown in (a) can be determined using the legend. Measurements of photoreceptors, RPE, and choriocapillaris performed in choroideremia were compared to normative histologic data43, normative in vivo RPE data40, and normative in vivo choriocapillaris data36, respectively. For subjects who had two visits, only data from the first visit was used for this analysis. c Longitudinal imaging acquired at the same location and co-registered across visits revealed the degree to which photoreceptors, RPE, and choriocapillaris changed from one visit to the next (time between visits varied between 2 and 12 months; Supplementary Table 1). The largest changes were observed in the RPE layer, further corroborating our finding that the RPE layer is the most disrupted of these three layers.

Fig. 4 (caption fragment): …34. The overlying photoreceptors (PR) remain unlabeled under normal conditions due to the tight junctions between RPE cells, which together establish the outer blood retinal barrier. The PR outer segments (OS) and outer nuclear layer (ONL) can be discerned using autofluorescence imaging (430 nm). A faint infrared autofluorescence can be observed in the choroid due to melanin. Scale bar, 50 µm. b No detectable ICG signal was observed in the photoreceptor layer after systemic injection of ICG. The presence of photoreceptor profiles was confirmed using differential interference contrast (DIC) microscopy. However, following an ex vivo incubation with ICG, the photoreceptors were readily labeled with ICG. These results were further corroborated using a custom-assembled adaptive optics microscope capable of simultaneous acquisition of en face confocal reflectance (displayed in log scale), non-confocal split detection, and AO-ICG images of the photoreceptor mosaic. Scale bar (all images in b), 10 µm. c ICG labeled photoreceptors observed in a female carrier. The heterogeneous RPE fluorescence pattern can be observed in the background. Zooms of the white box show that the fluorescently labeled photoreceptors are consistent with photoreceptors imaged using non-confocal split detection. Scale bars: 50 µm (C6L larger image), 10 µm (zooms, AO-ICG and Split Det). d The densities of cone photoreceptors in areas of ICG labeling are similar to normative histologic values43. e Outer retinal length (ORL) measurements were performed using AO-OCT images acquired in areas of ICG labeling and in areas without ICG labeling. There was no apparent difference in ORL between labeled or non-labeled photoreceptors, and both were similar to ORL measurements performed in healthy eyes. f Retinal sensitivity measurements (microperimetry) performed within patches of ICG labeled photoreceptors were within normal limits and similar to measurements obtained in neighboring, non-labeled photoreceptors.
It is possible that these patches arise transiently during the initial remodeling of the RPE layer following focal dropout of neighboring RPE cells, and that they eventually resolve following re-establishment of tight junctions between remodeled RPE cells. Unfortunately, it was difficult to assess RPE cells underlying patches of fluorescently labeled photoreceptors because the fluorescent signal from the photoreceptors obscured the RPE mosaic (Supplementary Fig. 1). However, in one patient, simultaneously-acquired darkfield images of the RPE mosaic46 showed apparent RPE enlargement underlying a patch of fluorescently-labeled photoreceptors (Supplementary Fig. 2). This degree of RPE enlargement cannot be explained by eccentricity-dependent differences in RPE morphology40, which would support the theory that these patches arise due to defects in the RPE. Further examination of OCT scans and fundus autofluorescence in patches of fluorescently labeled photoreceptors revealed that half (50%) of the patches were associated with a neighboring vitelliform-like lesion. RPE underlying vitelliform lesions is expected to be hypocyanescent due to blocking of the ICG fluorescence47. These lesions appeared to be of adult onset when comparisons were made with previous multimodal imaging from the same patients. We therefore hypothesize that these vitelliform-like lesions are secondary to the underlying retinal degeneration (choroideremia) and are a further indication of RPE dysfunction.
Although we observed many examples of patches of ICG labeled photoreceptors, especially in female carriers (Supplementary Figs. 1 and 3), collectively, these still represent a very small portion of the overall retinal area imaged (examples of areas without fluorescently-labeled photoreceptors can be seen in Supplementary Fig. 4). The relatively uniform labeling of photoreceptors in patches overlying even hypofluorescent RPE cells is similar to the ICG labeled photoreceptors that we labeled ex vivo in mice, suggesting that high resolution ICG imaging could lead to insights about the integrity of the outer blood retinal barrier function of the RPE.
Enlarged RPE cells can be detected using late phase ICG imaging even without adaptive optics.
Alongside AO-ICG imaging, late phase ICG imaging using a commercially available scanning laser ophthalmoscope (SLO) (Spectralis, Heidelberg Engineering) revealed that individual enlarged RPE cells could be identified even without the use of AO (see Fig. 5 and Supplementary Movie 2 for a visualization of how the heterogeneous pattern of fluorescence develops in the time between the mid-late and "true" late phases). Although ICG imaging is routinely performed in clinical practice, it is not common to capture the late phase (in this study, defined as 45 min or longer after intravenous injection). While one affected male (subject A4) had a smaller island of better-preserved RPE cells within the larger island of intact RPE cells, the other three affected males had uniformly enlarged RPE cells throughout the entire remaining retina, which could be readily determined using late phase ICG, but not mid-late phase ICG (Supplementary Movie 2). Identifying such patients might be critical for assessing the results of possible therapeutic interventions.
AO imaging typically uses smaller fields of view (<0.6 mm × 0.6 mm) and is more time- and resource-intensive than current clinical testing. Hence, the ability to quickly capture a snapshot of the heterogeneous RPE pattern with an SLO across a wider field of view (9 mm × 9 mm) could be of value for choroideremia, and in particular for the female carriers (Fig. 5d and Supplementary Figs. 3-5). In one female carrier (subject C4), who had RPE density measurements similar to those of affected males (Fig. 1f), the dramatically enlarged RPE cells were similar in appearance to those of affected males (Supplementary Fig. 5). Across all female carriers, small, scattered hyperfluorescent spots were observed using conventional imaging. We confirmed that these corresponded to enlarged RPE cells seen in AO-ICG images obtained in overlapping areas. Likewise, in addition to the hyperfluorescent enlarged RPE cells, larger than expected patches of hypofluorescent RPE cells could also be observed. Together these findings illustrate the extent to which carrier retinas are affected and provide a rare glimpse into the pattern of lyonization in an adult eye. Considering the relatively long time-course of this disease (~decades), and the possibility of maintaining well-preserved photoreceptors despite enlarged RPE cells (e.g., affected males), longitudinal monitoring of the late phase ICG signal (Supplementary Movie 1) in larger cohorts of female carriers may be justified for understanding how mutant colonies of RPE cells give rise to the milder pigmentary changes reported for female carriers.
In summary, the comparisons between conventional SLO and AO-ICG images suggest the possibility of routine assessment of the RPE mosaic at the cellular level in choroideremia and other diseases affecting the RPE, a potentially transformative tool that can be readily implemented in clinical practice without the need for new or complex AO instrumentation.
Discussion
Our data are in agreement with the view that choroideremia is primarily an RPE disease10,33, revealing enlarged RPE cells (Fig. 1) and increased distance between RPE cell centers (Fig. 2). These disruptions to the otherwise contiguous RPE monolayer (as assessed using fundus autofluorescence and OCT) occur earlier than is clinically evident. To our knowledge, this is the first in vivo evidence of polymegathism of the RPE cells in choroideremia, and it is similar to the RPE polymegathism reported in age-related macular degeneration48-50. Some of the enlarged RPE cells may be multinucleated (e.g., the two arrows on the far right of Fig. 1c and arrows in Supplementary Fig. 4), which has also been suggested as a protective mechanism that helps the RPE maintain homeostasis51.
Whereas progressive RPE changes were observed over a period as short as 2 months, the cone photoreceptors and choriocapillaris at the fovea remained relatively stable during the same period. Although there may be differences in the preservation of these layers at the fovea versus eccentric locations, our observation of normal structure and function even in fluorescently labeled photoreceptors is in agreement with previous studies showing well-preserved cone photoreceptor structure up to16,33 or beyond6 the edge of RPE atrophy, as well as preserved retinal sensitivity6,33. However, as we are unable to assess the state of rod photoreceptors from our imaging, it is unclear when and to what degree rods are affected relative to the cones, RPE, and choriocapillaris. Rods have also been reported to be a primary site of degeneration in choroideremia12, and future studies comparing rods and RPE cells in vivo are warranted.
Despite the relatively milder disease observed clinically in female carriers, our imaging clearly shows widespread enlarged RPE cells present in these patients. Our data are consistent with histological findings of polymegathism of the RPE observed in a female carrier 52 . In a subset of the female carriers, we also explored whether disease severity was related to skewed X-inactivation. Here, we evaluated X-inactivation ratio in five affected heterozygous females by digest analysis at the AR and RP2 loci on peripheral blood-derived DNA. Segregation analysis was possible in three female carriers who had a male sample to determine the expression of the pathogenic CHM allele (Supplementary Table 2). Fundus autofluorescence imaging was used to assess disease severity. Four female carriers were considered to have mild disease, and two intermediate; for the affected males, two were considered intermediate, two severe, and one mild (Supplementary Table 1). There was no clear relationship between disease severity and X-inactivation. Nonetheless, the scattered hyper-and hypo-fluorescent regions visible in the late phase ICG images (Supplementary Figs. 3-5) provide a glimpse into the in vivo lyonization pattern within the RPE mosaic that ultimately serve as loci of RPE degeneration within these eyes. This late phase ICG pattern of patchy cyanescence appears to be a characteristic pattern, observed in 100% of the female carriers imaged.
The observation of ICG labeled photoreceptors in the majority of female carriers imaged is notable, as it suggests a defect in RPE integrity. Even though most of the fluorescently labeled photoreceptors were found in female carriers (mild or intermediate severity), we did find patches of these photoreceptors in one affected male (subject A4, intermediate severity) (Supplementary Fig. 1). However, since these cone photoreceptors appeared to have both normal structure and function, we concluded that this phenomenon likely represents a defect in the RPE. The fact that some of these patches of labeled photoreceptors appeared in different locations at different times (2-6 months) in the subset of female carriers that underwent longitudinal imaging also supports the hypothesis that subclinical remodeling of RPE cells occurs within the intact monolayer and can be revealed using late phase ICG.
Fluorescence ICG imaging is routinely used for visualizing the integrity of retinal and choroidal vasculature. Here, we show that late-phase ICG imaging is also valuable for visualizing enlarged RPE cells in patients using ophthalmoscopes with or without AO.
AO-ICG imaging, however, provides higher resolution and cellular detail, but it is inherently more time and resource intensive; a larger montage constructed from approximately one hundred overlapping AO images (Supplementary Figs. 3-5) requires 30-45 min to acquire (not including the waiting time after injection). In contrast, acquisition of conventional late phase ICG images that cover a much larger area of the retina requires only a few minutes using conventional SLO. We also demonstrated that SLO imaging is particularly useful for identifying areas where the RPE mosaic is disrupted in female carriers, who have patches of enlarged RPE cells distributed throughout the retina. For affected males, these images reveal the extent to which the remaining RPE is affected and may be particularly relevant for selecting candidate patients for gene therapy or other therapeutic interventions, especially if they have an area of better-preserved RPE within the larger preserved area (Fig. 5a). Our findings also help to explain the smooth and mottled pattern of autofluorescence that has been previously reported in some affected males 53 . This nonangiographic application using ICG is already available in many clinics and can be readily translated to clinical practice.
Overall, affected males showed substantial RPE disruption across their remaining retina, but our data suggest that female carriers should also be considered part of the continuum of disease. The patches of enlarged RPE cells within female carriers might be useful as an in vivo model for studying the early stages of RPE disruption in affected males. Moreover, when viewed at the level of individual RPE cells, the commonality and similarity of clusters of enlarged RPE cells suggest a common disease mechanism between the affected males and female carriers. These findings warrant increased efforts toward developing treatments for symptomatic female carriers as well as exploring the use of RPE-based endpoint measures for monitoring treatment efficacy in affected males20-23,54. We expect that visualization of the RPE at the cellular level will have a positive impact on current and future gene therapy trials, as a means to better inform patients of their status and eligibility for treatment, to monitor possible risks as treatment trials progress, and also as a potential outcome measure in clinical evaluations for choroideremia55. Ultimately, including female carriers who exhibit signs of RPE disease alongside affected males in such trials24 could help to accelerate progress toward realizing a cure for the disease.

Fig. 5 (caption fragment): … g) showing that the heterogeneous pattern of fluorescence captured using the Heidelberg SLO matches the pattern imaged using AO-ICG. These side-by-side comparisons illustrate that individual enlarged RPE cells can be detected using conventional imaging, even without AO. a, d The green 'x' denotes the fovea. Scale bar, 1 mm. b, c, e-h Scale bar, 500 µm.
In conclusion, there is subclinical, widespread polymegathism of RPE cells in choroideremia, present in both affected males and female carriers. Progressive RPE changes are observed in the fovea in choroideremia, but the overlying photoreceptors and underlying choriocapillaris remain intact and relatively stable. We demonstrate the possibility of detecting enlarged RPE cells using commercially available, conventional ophthalmic imaging in combination with late phase ICG. This approach may be useful as a clinical measure for tracking the progression of choroideremia or for monitoring the integrity of the outer blood retinal barrier function of the RPE in this blinding retinal disease.
Methods
Clinical evaluation of patients. Six female carriers and five affected males of choroideremia were recruited from the National Eye Institute eye clinic for this study (NCT02317328; https://clinicaltrials.gov). All patients had molecularly confirmed pathogenic or likely pathogenic variants in CHM, and no known allergies to shellfish, iodine, or ICG. Participants who were willing to return for a follow-up visit were invited for follow-up adaptive optics (AO) imaging. All patients underwent best corrected visual acuity testing, dilated funduscopic examination, multimodal AO imaging, ocular biometry (IOL Master, Carl Zeiss Meditec), color fundus photography (Topcon), OCT (Spectralis, Heidelberg Engineering), and autofluorescence imaging (Spectralis, Heidelberg Engineering). Autofluorescence images relative to the patient's age were used to determine disease severity 24 . ICG was only administered in patients 18 years and older. Mid-late and late phase ICG images were acquired (Spectralis, Heidelberg Engineering), where mid-late phase was defined as 15-30 min after administration of the first dose of ICG and late phase as >45 min.
Adaptive optics imaging. Eyes were dilated using 2.5% phenylephrine hydrochloride (Akorn Inc) and 1% tropicamide (Sandoz, A Novartis Division). A custom-assembled multimodal AO instrument incorporating confocal reflectance 56 , non-confocal split detection imaging 57 , AO-ICG 34-36 and AO-OCT 37 was used for imaging. AO-OCT and AO-ICG imaging were performed sequentially. During imaging, subjects were advised to blink naturally, and frequent breaks were taken between video acquisitions. Prior to each imaging session, light power levels were measured at the corneal plane and maintained <135 µW for the 790 nm light source (reflectance and AO-ICG), <45 µW for 880 nm (wavefront sensing), and <1.45 mW for 1080 nm (AO-OCT).
For AO-ICG imaging (performed in 6 female carriers and 4 affected males), a field of view ranging from 1.5 to 2 degrees (750 × 605 pixels) was used. Confocal reflectance and non-confocal split detection images were simultaneously recorded alongside AO-ICG. Imaging was performed in three parts: before, during, and after i.v. administration of ICG at a dose of 25 mg distributed in two doses (1 + 2 mL). In part 1 (~10-15 min per eye, including breaks), reflectance images were acquired at overlapping retinal locations between the fovea and ~5.0 mm temporal (female carriers) or up to the atrophic border (affected males). Simultaneously-captured AO-ICG images confirmed the absence of ICG signal. In part 2 (~5-10 min per eye), the fovea of each eye was imaged for the first few minutes after administration of each ICG dose. The second dose (2 mL) was administered 10 min after the first dose (1 mL). In part 3 (~30-45 min per eye), a larger area covering the fovea, parafovea, and a temporal strip of retina was imaged. For AO-ICG, part 3 imaging was typically performed 30-45 min after the initial injection of dose 1 (refs. 34,35). In total, 50-150 video locations were acquired per eye, depending on the amount of imaging time available. Videos were acquired at a speed of 17 frames per second (~9 s videos were used for parts 1 and 3; for part 2, a 2 min video was used, followed by a 45 s video). Following acquisition, AO-ICG images from parts 1 and 3 were processed to correct for eye motion, and then overlapping images were assembled into montages using Photoshop (Adobe, San Jose, CA, USA)34,58. Images of the choriocapillaris were obtained from part 2 and co-registered to the images from parts 1 and 3 (ref. 36).
For AO-OCT imaging (performed in 5 female carriers and 3 affected males), a 1.5 degree field of view (300 × 300 pixels) was used. Repeated AO-OCT volumes were collected in order to reveal the RPE37-39. For photoreceptor imaging, 50 volumes were obtained for each retinal location, and for RPE imaging, 125-150 volumes were obtained for each retinal location. To ensure sufficient time for decorrelation of the speckle pattern observed in the RPE layer between video acquisitions, retinal locations were paired, and acquisition was alternated between the two locations (acquisition of 5 volumes at a time per location). The spectral domain AO-OCT system had a speed of 147 kHz, which enabled volumes to be acquired at a speed of 1.6 volumes per second. In total, ~450 volumes were acquired per eye (~20-30 min). Following acquisition, retinal volumes were digitally flattened based on the outer retinal layers, and then a 2D en face projection of the photoreceptor layers59 was used to compute eye motion correction in the lateral direction, following the same registration procedures used for AO-ICG images58. RPE images were extracted from a single en face slice of the averaged OCT volume.
The retinal scaling factor for conversion from degrees to mm was computed using a paraxial ray trace on a three-surfaced simplified model eye 60 updated with the subject's biometry information obtained after dilation (axial length, corneal curvature, and anterior chamber depth) 61 .
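The study derives a subject-specific scaling from a paraxial ray trace through a three-surface model eye; a much cruder first-order approximation, given here only for orientation, assumes roughly 0.291 mm of retina per degree for an emmetropic (~24 mm) eye and scales linearly with axial length. The sketch below implements that approximation, not the model-eye trace actually used.

```python
# First-order degrees-to-mm conversion on the retina (rough approximation only;
# the study used a paraxial ray trace through a three-surface model eye).
EMMETROPIC_MM_PER_DEG = 0.291    # ~291 um/deg for a ~24 mm emmetropic eye
EMMETROPIC_AXIAL_MM = 24.0

def deg_to_mm(deg, axial_length_mm=EMMETROPIC_AXIAL_MM):
    return deg * EMMETROPIC_MM_PER_DEG * (axial_length_mm / EMMETROPIC_AXIAL_MM)

print(deg_to_mm(1.5))            # ~0.44 mm: width of a 1.5 degree imaging field
print(deg_to_mm(0.43) * 1000)    # ~125 um: Goldmann III stimulus, cf. ~126 um in the text
```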
RPE density measurements. Using AO-ICG images, 11 eyes from 6 female carriers and 7 eyes from 4 affected males were used for RPE quantification. For each eye, regions of interest (ROIs) were selected from the fovea (0 mm) out to 5 mm eccentricity in the temporal direction (female carriers) or up to the atrophic border (affected males). For each ROI, RPE cells were manually identified by an expert grader. A second expert grader then reviewed all ROIs and performed additional manual corrections, which were flagged for review by the first expert grader. This iterative process continued until either full agreement between the two graders was reached or the image was discarded. Out of a total of 125 ROIs selected, 7 containing large areas of hypofluorescence that made grading difficult were discarded (6% of the ROIs). The identified cell centers were used to construct Voronoi diagrams from which cell densities were calculated. Any Voronoi neighborhood that exceeded the boundary of the ROI was discarded, and the number of remaining cells was divided by the total area of the remaining Voronoi neighborhoods.
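A minimal sketch of the Voronoi-based density calculation described above, using scipy: neighborhoods that are unbounded or extend beyond the ROI are discarded, and density is the count of remaining cells divided by the summed area of their Voronoi polygons. The cell-center coordinates are random placeholders standing in for manually graded centers.

```python
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(vertices):
    """Shoelace formula for a polygon given as an (n, 2) vertex array."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def cell_density(centers, roi_min, roi_max):
    """Cells per unit area from marked cell centers inside a square ROI."""
    vor = Voronoi(centers)
    kept, total_area = 0, 0.0
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:      # unbounded neighborhood: discard
            continue
        verts = vor.vertices[region]
        if (verts < roi_min).any() or (verts > roi_max).any():
            continue                              # neighborhood exceeds the ROI: discard
        kept += 1
        total_area += polygon_area(verts)
    return kept / total_area if total_area > 0 else float("nan")

rng = np.random.default_rng(1)
centers = rng.uniform(0.0, 0.2, size=(60, 2))     # placeholder centers in a 0.2 mm ROI
print(cell_density(centers, roi_min=0.0, roi_max=0.2))  # cells per mm^2
```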
Photoreceptor, RPE, and choriocapillaris measurements. For the cross-sectional analysis, 11 eyes from 6 female carriers and 5 eyes from 3 affected males were evaluated (one subject was excluded due to unstable fixation during the part 2 AO-ICG imaging). Longitudinal evaluation was performed in 5 eyes from 3 female carriers and 2 eyes from 1 affected male. For each eye, a staircase of ROIs was selected36: a 300 µm square ROI was selected from the choriocapillaris image, followed by a 200 µm square ROI from an AO-ICG image of RPE, selected within the larger choriocapillaris ROI; finally, a 50 µm square ROI was selected from a confocal reflectance image of photoreceptors, selected within the RPE ROI. Cones and RPE cells were manually identified by two expert graders, and iterative corrections to the markings were performed until mutual agreement was reached. Similarly, flow voids were manually segmented from the choriocapillaris image by two expert graders until consensus was reached. Cell spacing was quantified based on the density recovery profile62,63, and the average effective diameter of choriocapillaris flow voids (the diameter of the circle with equivalent area) was quantified36.
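The "effective diameter" used above is defined as the diameter of the circle whose area equals the segmented flow-void area, which reduces to a one-line computation (the areas below are placeholders):

```python
import math

def effective_diameter(area):
    """Diameter of the circle with area equal to the input (same length units)."""
    return 2.0 * math.sqrt(area / math.pi)

areas_um2 = [450.0, 1200.0, 310.0]               # placeholder flow-void areas, um^2
print([round(effective_diameter(a), 1) for a in areas_um2])
```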
Quantification of ICG labeled cones using AO-ICG. In select image regions where ICG labeled photoreceptors were observed, the correspondence between photoreceptors in AO-ICG and non-confocal split detection images was qualitatively assessed, and cell centers were manually identified separately for each modality by an expert grader. Two additional expert graders reviewed all ROIs and performed additional manual corrections until full agreement between the three graders was achieved. The identified cell centers were used to construct Voronoi diagrams from which cell densities were calculated.
Outer retinal length measurements. ORL was measured in AO-OCT as the distance between the inner segment/outer segment junction (ellipsoid zone) and the RPE bands 44 .
Retinal sensitivity measurements. Retinal sensitivity measurements were performed on 2 eyes from 1 female carrier (subject C4) 2 days after AO-ICG imaging. Immediately after AO-ICG imaging was completed on the first day, AO-ICG images were processed and montaged to determine locations at which ICG labeled photoreceptors were present. Test points located above ICG labeled photoreceptors and control points above non-labeled photoreceptors (within ~1° of test points) were determined based on the AO-ICG imaging data and coordinates transferred to a color fundus photo. Fundus-controlled perimetry was performed using a COMPASS device (iCare, Finland) controlled with an external laptop using the open perimetry interface (OPI)64. Specifically, we built an interactive web application in the software environment R, with the add-on libraries Shiny and OPI64,65. This web application allowed us to acquire an infrared reflection (IR) reference image with the COMPASS device. A color fundus photograph (with labeled test points) was then registered to this IR reference image using the affine transformation from the 'Landmark Correspondences' function in ImageJ. The coordinates of all test and control points (in terms of the COMPASS IR reference image) were then extracted, converted from pixels to degrees, and transferred to the web application. At each test point, a Goldmann III (0.43° diameter, ~126 μm in an emmetropic eye) stimulus was presented against a photopic background (10 cd/m²)66. The threshold at each test point was obtained using a Bayesian ZEST threshold strategy with a uniform prior and a stop criterion of SD < 0.5 dB for the probability density function67. For the given test points (average eccentricity for OD of 9.62° [8.48°, 11.2°] and for OS of 8.67° [6.85°, 9.9°]), a sensitivity of 29-31 dB can be considered normal given the age of the carrier45.
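For reference, ZEST maintains a probability density over candidate thresholds, presents each stimulus at the current posterior mean, and multiplies the density by the response likelihood after each trial. The uniform prior and the SD < 0.5 dB stop rule below come from the text; the threshold domain, psychometric slope, and simulated observer are illustrative assumptions.

```python
import numpy as np

def zest_threshold(true_threshold_db, sd_stop=0.5, rng=None):
    """Minimal ZEST sketch with a uniform prior and an SD-based stop criterion."""
    rng = np.random.default_rng(3) if rng is None else rng
    domain = np.arange(0.0, 40.0, 0.1)            # candidate thresholds in dB (assumed)
    pdf = np.ones_like(domain) / domain.size      # uniform prior (as in the text)

    def p_seen(stim_db, thresh_db, slope=1.0):    # P("seen"): dimmer (higher dB) -> less seen
        return 1.0 / (1.0 + np.exp((stim_db - thresh_db) / slope))

    def mean_sd(pdf):
        mu = np.sum(pdf * domain)
        return mu, np.sqrt(np.sum(pdf * (domain - mu) ** 2))

    mu, sd = mean_sd(pdf)
    while sd >= sd_stop:
        seen = rng.random() < p_seen(mu, true_threshold_db)   # simulated observer
        likelihood = p_seen(mu, domain) if seen else 1.0 - p_seen(mu, domain)
        pdf = pdf * likelihood
        pdf /= pdf.sum()                          # renormalize the posterior
        mu, sd = mean_sd(pdf)
    return mu

print(round(zest_threshold(30.0), 1))             # converges near the true 30 dB
```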
Mouse histology. All mice were housed in the NIH animal facilities. Two female mice (C57BL/6 J, 3-month-old; BALB/cJ, 4-month-old) were used for this study to replicate previously-published procedures described in pigmented34 and albino35 mice. Intraperitoneal injection of ICG (200 µL of 5 mg/mL) was performed. 16 h later, mice were euthanized by CO2 inhalation and then the eyes were enucleated. For the pigmented mouse, immediately after enucleation, the eye was embedded in optical cutting temperature compound, rapidly frozen using acetone (cooled to approximately −70 °C by addition of dry ice), and then cryosectioned through the center of the eye (8-10 µm thick). A single cryosection was collected on a Superfrost Plus microscope slide (Fisher Scientific), vacuum dried, and mounted in Immu-Mount (ThermoFisher Scientific) immediately before imaging. For the albino mouse, after enucleation, the eye was hemisected using McPherson-Vannas scissors 14124 (World Precision Instruments) while immersed in media (Gibco FluroBrite DMEM, ThermoFisher Scientific) cooled to ~0 °C by ice surrounding the dissection chamber. After removing the lens and anterior segment, the neural retina was gently detached from the RPE with fine forceps and divided into four equal pieces. One piece served as a control. The other pieces were incubated with 2 µM ICG dissolved in media for 1 h (37 °C, 5% CO2), and then washed five times with the Gibco FluroBrite DMEM (ThermoFisher Scientific) media.
For the albino sample, microscopy was performed using either a custom-modified confocal microscope (SP8, Leica) or a custom-assembled AO microscope42. For imaging using the SP8, samples were transferred to glass slides and covered with a 0.17 mm thick coverslip with the photoreceptor layer facing the coverslip. Silicone grease was applied surrounding the sample to control the gap between the slide and the coverslip, in addition to anchoring the coverslip. A high NA objective (HC PL APO CS2 40×/1.30 OIL, Leica) was used. 3D differential interference contrast (DIC) and ICG fluorescence images were sequentially acquired. DIC images were acquired using a 488 nm CW laser diode and a transmitted light photomultiplier tube detector. ICG images were excited using a 730 nm CW laser diode and detected using an avalanche photodiode with an 810/90 nm bandpass filter. For imaging using the custom-assembled AO microscope, we placed the sample in a chamber anchored by a mesh instead of a glass coverslip, to avoid reflection of the light at the glass-water interface. This microscope was constructed using Thorlabs CERNA components and a Nikon 0.8 NA water-dipping objective coupled with an AO scanning light ophthalmoscope outfitted with confocal reflectance56, non-confocal split detection57, and AO-ICG34-36 capabilities.
X-chromosome inactivation assay. The X-chromosome inactivation (XCI) assay was performed using the human RP2 GAAA repeat primers and the human AR CAG repeat internal primers reported previously69. Briefly, the HpaII (NEB) digested and undigested DNA samples were amplified using the RP2 and AR primers with the NEBNext Ultra II Q5 master mix (NEB) in a multiplex reaction, in which the forward primers were FAM-labeled. The PCR products were then mixed with a size standard and analyzed on an ABI 3500 automated genetic analyzer (Applied Biosystems). The peak areas were called using the GeneMapper software (Applied Biosystems), and the X-inactivation for each allele was calculated using this formula: (d1/u1)/[(d1/u1) + (d2/u2)], where d1 and d2 refer to the peak areas of allele 1 and allele 2 in the digested samples, and u1 and u2 refer to those of allele 1 and allele 2 in the undigested samples, respectively. The XCI assay was performed on samples from five heterozygous females, of which segregation analysis was performed using a sample from a first-degree affected male relative to determine the extent of inactivation of the variant allele.
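The X-inactivation formula quoted above translates directly into code; the peak areas below are placeholders, not measured electropherogram values.

```python
def xci_fraction(d1, d2, u1, u2):
    """X-inactivation fraction for allele 1: (d1/u1) / [(d1/u1) + (d2/u2)],
    with d* = digested and u* = undigested peak areas for alleles 1 and 2."""
    r1, r2 = d1 / u1, d2 / u2
    return r1 / (r1 + r2)

# Placeholder peak areas:
print(xci_fraction(d1=1500.0, d2=900.0, u1=2000.0, u2=1800.0))  # 0.6 -> ~60:40 skew
```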
Study approval. For animals, experiments were conducted according to protocols approved by the NIH IACUC. For human subjects, research procedures adhered to the tenets of the Declaration of Helsinki. The study was approved by Institutional Review Board of the National Institutes of Health (NCT02317328). Written, informed consent was obtained from all patients after the nature of the research and possible consequences of the study were explained.
Statistics and reproducibility. Two-tailed unpooled t-tests were used to compare groups. For the pair-wise comparisons between three groups (e.g., photoreceptor, RPE, and choriocapillaris layer), the Bonferroni correction for multiple tests was applied. Statistically significant values are displayed as (P < 0.001), (P < 0.01), and (P < 0.05). For non-significant differences, the corresponding "P =" values are shown in the methods section. The data used to perform the t-test are displayed as mean [SD] or mean ± SD. For Fig. 1f, the 99.9% confidence interval is displayed as a gray band. For Fig. 3b, box plots are displayed for the three groups (center line: median; box limits: upper/lower quartiles; whiskers: 1.5× interquartile range; points beyond the whiskers: outliers). The individual data points and sample sizes corresponding to the plots in the figures and used in the statistical analysis are provided in Supplementary Data. Quantitative image analysis was carried out with multiple expert graders (for more details, please refer to the "RPE Density Measurements", "Photoreceptor, RPE, and Choriocapillaris Measurements", and "Quantification of ICG labeled cones using AO-ICG" sections above).
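A hedged sketch of the testing scheme described above, using Welch's (unpooled) two-tailed t-test from SciPy with a Bonferroni correction over the three pair-wise comparisons. The arrays are placeholder data, not the study's measurements.

```python
# Pair-wise Welch t-tests with Bonferroni correction, as described above.
from itertools import combinations
from scipy import stats

groups = {
    "photoreceptor": [1.2, 1.4, 1.3, 1.5],
    "RPE": [2.1, 2.3, 2.2, 2.4],
    "choriocapillaris": [1.8, 1.7, 1.9, 2.0],
}

pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b], equal_var=False)  # unpooled
    p_adj = min(1.0, p * len(pairs))  # Bonferroni: multiply by number of tests
    print(f"{a} vs {b}: t = {t:.2f}, adjusted P = {p_adj:.4f}")
```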
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
All data generated or analyzed during this study are included in this published paper (and its supplementary information and supplementary data files). Source data is located in Supplementary Data 1.
"year": 2022,
"sha1": "4ea40690163ebacb03a1b5c5886816d3c223a8a9",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s42003-022-03842-7.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "052c0147fcbfda9b9640b96341a0195d6066f440",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Convection physics and tidal synchronization of the subdwarf binary NY Virginis
Asteroseismological analysis of NY Vir suggests that at least the outer 55 per cent of the star (in radius) rotates as a solid body and is tidally synchronized to the orbit. Detailed calculation of tidal dissipation rates in NY Vir fails to account for this synchronization. Recent observations of He core burning stars suggest that the extent of the convective core may be substantially larger than that predicted with theoretical models. We conduct a parametric investigation of sdB models generated with the Cambridge STARS code to artificially extend the radial extent of the convective core. These models with extended cores still fail to account for the synchronization. Tidal synchronization may be achievable with a non-MLT treatment of convection.
INTRODUCTION
Hot subdwarf B (sdB) stars are core-helium burning stars which have had their hydrogen-rich envelopes stripped, most likely in a binary interaction. The stars are typically slow rotators. However, those in close binaries are somewhat spun up. The sdB stars in close binaries, with orbital periods less than 10 d, have either low-mass main-sequence or white dwarf companions. The companions are unseen so it is not possible to measure the inclination of the observed systems unless they are eclipsing. If the system is tidally locked then the spin period and orbital period of the binaries should be the same and an observed rotation velocity would allow the inclination to be measured. Several observed sdB systems challenge this assumption (Pablo et al. 2012a,b; Schaffenroth et al. 2014). Theoretical calculations of tidal synchronization time-scales for sdB stars fail to account for synchronization via either the equilibrium or dynamical dissipation mechanisms (Preece et al. 2018).
Of the observed pulsating sdB binaries, the eclipsing HW Vir type binary NY Vir (PG 1336−018) is the only object whose outer layers show evidence of synchronous rotation with the binary orbit (Charpinet et al. 2008). The star oscillates with p-modes in its outer 55 per cent. Rotation in the deep interior is not constrained owing to the lack of sensitivity of p-modes to these regions. According to Charpinet et al. (2008), the companion mass M2 is either 0.11 or 0.12 M⊙ and the orbital period of the binary is P_orb = 0.101016 d.
The tidal synchronization time-scale is inversely proportional to the ratio of the radius of the dissipative region to the binary separation to the sixth power (Darwin 1879; Eggleton 2006). Increasing the radius of the convective region reduces the tidal synchronization time. Observational asteroseismic data suggest the radial extent of the He burning core, as illustrated in Fig. 1, is substantially underestimated in stellar models (Van Grootel et al. 2010a,b; Charpinet et al. 2011; Giammichele et al. 2018). We investigate whether increasing the radius of the convective zone could reduce synchronization times sufficiently to account for the observed synchronization of NY Vir. We examine the effect that increasing the extent of the convective region has on all the quantities which go into the tidal synchronization calculations.
STELLAR MODELS
The evolutionary models used in this study were all constructed with the Cambridge STARS code as first described by Eggleton (1971) and subsequently updated by Pols et al. (1995) and Stancliffe & Eldridge (2009). Three classes of model were created: one with overshoot (labelled δ_ov = 0.12), one without overshoot (δ_ov = 0) and one without overshooting but with a modified Schwarzschild criterion (Δ∇ + 0.15), where Δ∇ ≡ ∇_r − ∇_a is the difference between the radiative and adiabatic thermodynamic gradients d ln T/d ln P.
Under the standard Schwarzschild criterion, the convective region is defined as that where Δ∇ > 0 and hence includes the semiconvective region. The super-adiabaticity of the convective region that develops as the star evolves is very low, so the region is in fact more likely semi-convective. For each of these classes, an early and a late model were compared. The early model was defined as the model obtained when the fractional core He abundance by mass drops to 0.9. The late model was defined to be the model where the convective core reached its maximal radial extent. The initial models were constructed without a modified Schwarzschild criterion by the same method used by Preece et al. (2018). We introduce several mechanisms for artificially increasing the convective region.
Modifying the Schwarzschild Criterion
For the models labelled ∆∇ + 0.15, the extent of the convective region was artificially extended by modifying the Schwarzschild criterion for stability against convection from ∇ r − ∇ a > 0 to ∇ r − ∇ a + 0.15 > 0. This has the effect of forcing convection to occur in regions near to convective boundaries which would otherwise be radiative. The increment 0.15 was chosen because this was the largest which produced stable evolutionary models.
Semi-convection and Overshooting
Eggleton (1972) implemented semiconvection in STARS as a diffusive process which follows the prescription of Schwarzschild & Härm (1958). It assumes that the energy transport by convection in the semiconvective region is borderline negligible but that there is substantial chemical mixing which avoids any discontinuity in the chemical profile. Semiconvective regions then have ∇_r ≈ ∇_a. For convective overshooting we introduce a parameter δ such that convection occurs when

∇_r − ∇_a > −δ,

where ∇_r and ∇_a correspond to the radiative and adiabatic thermodynamic gradients ∂ ln T/∂ ln P, respectively, and where the overshooting parameter δ is

δ = δ_ov / (2.5 + 20ζ + 16ζ²).

Here ζ is the ratio of radiation pressure to gas pressure and δ_ov is a user-defined parameter calibrated to observations. Typically δ_ov = 0.12 gives the best results (Schröder et al. 1997). It is calibrated for stars with initial masses between 2.5 and 7 M⊙. For He-core burning models the inclusion of overshooting suppresses the growth of the semiconvective zone. This ultimately leads to the formation of a smaller combined convective and semiconvective region than if overshooting were not used. For the models with the modified Schwarzschild criterion, overshooting occurs where ∇_r − ∇_a + 0.15 > −δ. By contrast, the stellar evolution code MESA defines semiconvective regions as those which are unstable to convection according to the Schwarzschild criterion but stable according to the Ledoux criterion (Paxton et al. 2011, 2013). The MESA overshooting region is defined as l_ov H_P, where l_ov is user defined and H_P is the pressure scale height. Because H_P → ∞ as r → 0, the overshooting length can become very large for stars with small convective cores.
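To make the three stability tests concrete, here is a small Python sketch (not the STARS code itself) of how they compare at a single mesh point. The gradient values in the example are illustrative.

```python
# Illustrative comparison of the convective stability tests above.
# grad_r and grad_a are the radiative and adiabatic gradients d ln T / d ln P.

def is_convective(grad_r, grad_a, delta=0.0, offset=0.0):
    """offset=0, delta=0: standard Schwarzschild test, grad_r - grad_a > 0.
    offset=0.15: modified criterion, grad_r - grad_a + 0.15 > 0.
    delta > 0: additionally allows overshooting, grad_r - grad_a + offset > -delta."""
    return grad_r - grad_a + offset > -delta

print(is_convective(0.38, 0.40))                # False: radiative under standard test
print(is_convective(0.38, 0.40, offset=0.15))   # True: modified Schwarzschild criterion
print(is_convective(0.38, 0.40, delta=0.05))    # True: overshooting region
```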
Mixing Length Theory
In the STARS code mixing length theory as described by Böhm-Vitense (1958) is used. Near the core the pressure scale height H P → ∞. The mixing length l is defined to be αH P and sets the average distance travelled by convective elements. The mixing length parameter α is a user defined constant. Physically, convective elements cannot travel an infinite distance. As suggested by Eggleton (1972), the mixing length is modified such that it cannot exceed the distance to the edge of the convective zone. This also has a modest effect on the mixing velocity w.
CONVECTIVE TIDAL DISSIPATION
The most efficient mechanism for tidal dissipation in sdB stars in close binaries is convective dissipation. Convection implies the bulk movement of material over large distances within the star. Turbulent viscosity in the convective region causes the tidal bulge to move away from the line connecting the centres of mass of the two stars. Fig. 2 is a schematic diagram illustrating the dissipation.
Figure 2. Schematic diagram of tidal interactions. The first panel shows a single, unperturbed star. The second panel shows a tidally distorted star which is either tidally synchronized or has no dissipation. The tidal bulge lies along the line connecting the centres of mass of the binary system. The final panel shows a non-synchronously rotating, tidally distorted star with a dissipation mechanism. The tidal bulge lags or leads the line connecting the centres of mass of the stars. This produces a torque which spins the star either up or down until it is rotating synchronously.

The tidal synchronization time-scale τ_sync, owing to convective dissipation, as described by Eggleton (2006) and Eggleton et al. (1998), is of the form

τ_sync = (τ_visc/9) [I/(M1 R1²)] (1 − Q)⁻² (M1/M2)² (a/R1)⁶ f(ω, Ω0, Ω),   (3)

where I is the moment of inertia of the star, τ_visc is the viscous time, Q is the dimensionless quadrupole moment, a is the binary separation radius, R1 is the radius of the dissipative region, M1 is the mass of the dissipative region, M2 is the mass of the companion, ω is the angular frequency of the binary, Ω0 is the initial spin angular frequency of the primary, Ω is the final spin angular frequency and the factor f carries the dependence on the spin frequencies. The viscous time τ_visc (Eq. 4) is an integral over the dissipative region weighted by γ(r), a dimensionless structural property related to the coupling of the tides. The tides are described as fast when the orbital period is shorter than the convective turnover time. In this circumstance the dissipation of the tides is damped in a way that depends on the turbulent spectrum of convective cells. Zahn (1966) and Hurley et al. (2002) use a damping factor Ψ1(r) that scales linearly with the ratio of the tidal period to the convective turnover time, whereas Goldreich & Nicholson (1989) have used a quadratically damped factor Ψ2(r). Penev et al. (2007) revisited the problem with 3D hydrodynamical simulations and found better agreement between theory and observation with Ψ1(r).
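Purely to make the scalings concrete, here is a small Python sketch that mirrors the reconstructed form of Eq. (3) above. The prefactor, the grouping of terms and the spin factor are assumptions inherited from that reconstruction rather than a verbatim transcription of the paper's equation, and all input values are placeholders.

```python
# Rough numerical sketch of the R1**-6 sensitivity of the synchronization
# time-scale, mirroring the reconstructed Eq. (3). Inputs are placeholders.

def tau_sync(tau_visc, Q, I_term, M1, M2, a, R1, spin_factor=1.0):
    """I_term = I/(M1*R1**2); spin_factor stands in for f(omega, Omega0, Omega)."""
    return (tau_visc / 9.0) * I_term / (1.0 - Q)**2 \
           * (M1 / M2)**2 * (a / R1)**6 * spin_factor

base = tau_sync(tau_visc=10.0, Q=0.02, I_term=0.3, M1=0.5, M2=0.12, a=5.0, R1=0.5)
wide = tau_sync(tau_visc=10.0, Q=0.02, I_term=0.3, M1=0.5, M2=0.12, a=5.0, R1=1.25)
print(base / wide)  # factor 2.5 in R1 alone would shorten tau_sync by 2.5**6 ~ 244
```

Taken at face value, a factor 2.5 increase in R1 alone would shorten τ_sync by 2.5⁶ ≈ 244; the point made in Section 4 is that the accompanying changes in τ_visc, M1, I and Q claw most of this back.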
The Tidal Synchronization Time-scale
As can be seen in Table 1, tidal synchronization time-scales for NY Vir predicted from standard models of sdB stars are close to or longer than the Hubble time. Tidal synchronization cannot occur before these models exhaust their core helium supplies, move off the EHB and on to a white dwarf cooling track. Because τ sync is inversely proportional to the radius of the dissipative region to the sixth power, simple calculations suggest that increasing the convective radius r conv should substantially decrease the synchronization time.
Somewhat surprisingly, increasing r conv by a factor of 2.5 by modifying the Schwarzschild criterion only reduces the synchronization time by about an order of magnitude. The changes in the structural properties of the star affect the quadrupole tensor and so too the tides. Increases in the viscous time and mass of the convective region and decreases in the quadrupole moment and moment of inertia term counteract the effect of increasing the fractional convective radius.
The equation for tidal synchronization has multiple terms, all of which have physically constrained allowed ranges. The dependence of the individual terms on the radial extent of the convective zone, and their allowed ranges, is now examined.
The Dimensionless Quadrupole Moment
The mass quadrupole tensor of an object describes the spatial distribution of the matter. If the object is a point source the quadrupole tensor vanishes. The dimensionless quadrupole moment Q is given in Table 1. Varying r_conv/R_sdB does not particularly change Q because the early and late models with convective overshooting have the same fractional convective radius. However Q is sensitive to the total radius and density of the star, and Q does not change appreciably for the evolving models with no overshooting and a modified Schwarzschild criterion (Δ∇ + 0.15). For the models tested (1 − Q)² is between 0.97 and 0.99. Because (1 − Q)² is close to unity in all cases considered it does not have a substantial influence on the tidal synchronization time-scale.
The Mass and Radius of the Convective Region
The sdB star He cores are small but dense. The H-rich envelope is radially extended but accounts for a small amount of the mass. The mass as a function of radius can be seen in Fig. 3. The outer regions of the star expand as the star evolves. In addition the high internal density means a small increase in the convective radius substantially increases the convective mass. When convective overshooting is used the radius of the convective region stays approximately the same but the mass increases by half. It is worth noting that whether a region is convective or radiative has little impact on the density profile. The models with the modified Schwarzschild criterion are denser than the standard models. Furthermore, the mass and radius term in Eq. 3 can be plotted as in Fig. 4. From this the mass and radius term can be constrained to be between 10 8 and 7 × 10 9 g 2 cm −6 .
Moment of Inertia Term
For tidal calculations the ratio of the moment of inertia at the edge of the convective core to the moment of inertia if the mass were confined to a shell at the same radius is required. The overall dependence of the moment of inertia term on the fractional convective radius is displayed in Fig. 5. This term lies between 0.15 and 0.37.
The Viscous Time
Tidal interactions convert kinetic energy from tidal distortions into heat by dissipative processes whilst conserving angular momentum. This dissipation can be calculated from the square of the variation in the quadrupole tensor over time. As the companion moves around the sdB star the gravitational potential through the star changes cyclically. This changes the matter distribution and so affects the quadrupole tensor. If not synchronized, the tidal bulge moves around the star following the companion. This introduces a time dependent velocity field in the dissipative regions. The dissipation has a time-scale of τ_visc.

Figure 4. The mass to the second power over the radius to the sixth power as a function of fractional radius for the same models as in Fig. 3.

The viscous time as calculated by Eq. 4 with Ψ1 is shown in Fig. 6 for convective regions modified to extend throughout the star. The mixing length l used is the distance to the edge of the convective region such that at r = 0, l = R1 and at r = R1, l = 0. The time-scales at the models' convective boundaries are also plotted. The δ_ov = 0.12 models are again omitted because they are almost identical to the δ_ov = 0 models. The mixing velocity w is not well defined for regions which would be radiative. If r < R1 we use w from the models. If r > R1, w was given the same distribution but over the extended region. The dip in the Late Δ∇ + 0.15 models is due to a large peak in γ(r) at the convective boundary. This is most likely an artifact of our modification to the Schwarzschild criterion. Without this peak the τ_visc profile is almost identical to the Early Δ∇ + 0.15 profile. Overall, larger convective cores have longer viscous times.
Critical Viscous Time
All terms on the right hand side of Eq. 3 are known for any given mass and radius. The equation can be rearranged for the desired synchronization time. The maximum τ_visc to synchronize the system in this time as a function of convective radius can then be derived. The upper limit to the viscous time for synchronization within the EHB lifetime τ_EHB = 10⁸ yr is plotted in Fig. 7. The viscous time as calculated with Eq. 4 for each of the evolutionary models in Table 1 is also plotted. The τ_visc calculated for the models with Eq. 4 and the usual mixing length theory estimates for the convective velocity are above the upper limits even when the damping factor is ignored. The τ_visc increases for all models as the convective core grows in radius, owing to the high density of the material in the helium mantle. The damping factor is the most influential parameter. The choice of Ψ stratifies τ_visc by orders of magnitude. When the damping factor is excluded τ_visc does not vary much between the models and is about 10 yr. Fig. 7 shows that doubling the radial extent of the convective region increases the viscous time-scale by approximately an order of magnitude. If the mixing velocity is increased such that w = l/P_orb, the tides are no longer considered fast and so are not damped. The convective velocity is driven by the heat flux. Increasing the velocity would cause a substantial increase in the heat flux which would then change the temperature gradient and hence the structure in other significant ways. If the convective cells turn over without releasing all of their energy to the surroundings, higher velocities can be reached without changing the overall heat flux. Some 3D hydrodynamical simulations of convective regions in stars have typical velocities which are much larger than those predicted by mixing length theory (Arnett et al. 2009; Gilkis & Soker 2016).
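A sketch of this rearrangement follows, again based on the reconstructed form of Eq. (3) given earlier and with placeholder inputs: fixing the synchronization time at the EHB lifetime and solving for the maximum permissible viscous time.

```python
# Maximum viscous time that still permits synchronization within tau_EHB,
# obtained by inverting the reconstructed Eq. (3). Inputs are placeholders.

def tau_visc_max(tau_EHB, Q, I_term, M1, M2, a, R1, spin_factor=1.0):
    """Invert tau_sync(...) = tau_EHB for tau_visc; I_term = I/(M1*R1**2)."""
    return 9.0 * tau_EHB * (1.0 - Q)**2 / I_term \
           * (M2 / M1)**2 * (R1 / a)**6 / spin_factor

print(tau_visc_max(tau_EHB=1e8, Q=0.02, I_term=0.3, M1=0.5, M2=0.12, a=5.0, R1=2.5))
```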
The derived upper limits are all within an order of magnitude of each other. If the radius of the dissipative region is small the viscous time must be less than a year for synchronization to be achieved. For convective regions which take up more than half of the sdB star by radius the critical viscous time is less than 10 yr. This is more than the viscous time predicted when no damping is included.
DISCUSSION
J162256+473051 is another HW Vir type system with a lower mass companion in a shorter-period orbit than NY Vir. Observations show that this star is rotating sub-synchronously. Calculations of the tidal synchronization time indicate that this system should synchronize more rapidly than NY Vir owing to its substantially smaller orbital separation. If J162256 + 473051 is neither expected nor observed to be synchronized, why should NY Vir appear to be synchronized?
HW Vir type systems are most likely formed via a common-envelope interaction (Han et al. 2002). The spin of the outer regions of the sdB star that subsequently forms is affected during this process. NY Vir has p-modes which propagate through the outer 55 per cent of the star. These p-modes are consistent with synchronization. The presence of p-modes means that the region must be radiative. However, synchronization could have been achieved during the common-envelope phase. NY Vir's companion is more massive and more radially extended than that of J162256+473051, so it should have had more of an effect during this phase.

Figure 7. The upper limit to the viscous time, for each convective radius, to achieve tidal synchronization during the EHB lifetime. Each line represents a single evolutionary model. The points plotted and connected with dotted lines are the viscous times predicted by the stellar models. The dotted lines connect the end points of evolutionary sequences with the same input physics (δ_ov = 0, δ_ov = 0.12 and Δ∇ + 0.15). For completeness the evolutionary models with convective overshoot are also plotted. The early models without a modified Schwarzschild criterion both start at almost identical places. The spread in the points is due to different damping factors. The squares use Ψ1, the triangles Ψ2 and the circles Ψ = 1. The diamonds have the convective velocity w = l/P_orb, the minimum velocity required for the tides not to be considered fast. Increasing the fractional convective region increases the viscous time-scale.
CONCLUSIONS
Asteroseismic evidence for rotational and orbital synchronization in the hot subdwarf binary NY Vir is at variance with our previous theoretical predictions of tidal synchronization in such stars. Because the tidal synchronization time-scale is inversely proportional to the radius of the convective region to the sixth power, artificial extensions to the convection boundary have been examined to see whether a larger convective region could account for the observed synchronization. Increasing the radius of the convective region by a factor of 2.5 decreases tidal synchronization times by less than one order of magnitude, insufficient to bring NY Vir close to synchronization within its core-helium burning (or extended horizontal-branch) lifetime. The individual terms of Eq. 3 were examined to test how much each contributes to the synchronization time and what constraints may be placed on the quantities contained in them. The boundary of the convective core was moved outwards to see if there was a radius at which tidal synchronization could occur without any modifications to the theory. It was found that even making the stars fully convective would not be sufficient because the orbital periods are shorter than the convective turnover time and consequently dissipation of the tides is damped. The damping factor and the choice of mixing length theory are the areas of largest uncertainty. If the convective mixing velocity is increased such that w > l/P_orb, all the models predict tidal synchronization within the EHB lifetime. Some 3D hydrodynamical simulations of convective regions predict velocities substantially larger than those calculated with MLT. If these calculations prove correct, tidal synchronization might result from invocation of non-classical convection physics.
In seeking an alternative explanation for the synchronization of NY Vir, we note that the common-envelope phase is not well understood. If the tides do not synchronize on the EHB it is possible that at least the outer layers of the sdB star were synchronized during the common-envelope phase.
"year": 2019,
"sha1": "0f449e80c221cb60881f41e5856dc32db1b9a378",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1903.06176",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0f449e80c221cb60881f41e5856dc32db1b9a378",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Web Service Composition Framework using Petrinet and Web Service Data Cache in MANET
A Mobile Ad Hoc Network (MANET) is characterized by multi-hop wireless links and frequent node mobility. Since neighboring nodes in a MANET are likely to have similar tasks and interests, several nodes might need to access the same web service at different times. So, by caching the repeatedly accessed web service data within the MANET, it is possible to reduce the cost of accessing the same service details from the UDDI and also from external providers. Composition of web services is a better alternative when, at times, a candidate web service cannot completely serve the need of the customer. An effective Data Cache Mechanism (DCM) has been proposed in [6] using the Distributed Spanning Tree (DST) as a communication structure in mobile networks to improve scalability and lessen network overload. As an enhancement, the Ant Colony Optimization (ACO) technique has been applied on the DST to cope with the fragile nature of the MANET and to improve the network fault tolerance [1]. From this perspective, an efficient Web Service Cache Mechanism (WSCM) can be modeled to improve the performance of web service operations in MANET. In this paper, a fine-grained theoretical model has been formulated to assess various performance factors such as the Cooperative Cache and Mobility Handoff. In addition, the performance improvement of WSCM using the DST and ACO optimized DST techniques in MANET has been demonstrated experimentally, in terms of Precision and Data Reliability, using appropriate simulation.
Introduction
A Mobile Ad-Hoc Network (MANET) is an autonomous collection of mobile nodes that communicate over relatively bandwidth-constrained wireless links. Since the nodes are mobile, the network topology may change rapidly and unpredictably over time. The network is decentralized: all network activity, including discovering the topology and delivering messages, must be executed by the nodes themselves, i.e., routing functionality is incorporated into the mobile nodes [18]. The nodes in a MANET would probably work on tasks with a similar goal (common interest). So, most of the nodes would try to access the same web service data at different times through their corresponding Access Point (AP). The Access Points may be located at the boundaries of the MANET, where reaching them could be costly in terms of delay, power consumption, and bandwidth utilization. Moreover, the access points may be connected to a highly overloaded resource (e.g., a satellite) or to an external network that is susceptible to intrusion, which affects the response time, security and availability of the system. For such reasons, it is recommended to cache the frequently accessed service information within the nodes in the MANET, and the search application should check for the availability of the required service data within the network before requesting the external service registry [2].
Caching of a service refers to the technique of caching the service method invocation information (WSDL) from the registry or the service response from the corresponding service providers. The cached responses of service methods can be utilized only if future requests use similar arguments to those of the cached responses. Caching the WSDL information saves significant time and resources because subsequent requests for similar service methods will not be required to download WSDL files in a repetitive fashion. At this juncture, a service item refers to the cached WSDL description or/and the cached web response of the corresponding service. So, MANET applications should check for the existence of the desired service item within the network before attempting to request the external service source [2,6]. This scenario can reduce the overload on the system from accessing an external source for the same service and also avoid possible intrusion threats.
A set of proxy nodes is introduced in the MANET to provide information about the mobile nodes in the network and the services invoked by them. The proxy nodes are configured with a domain ontology and Petri net modelling. The domain ontology enhances the selection of an appropriate web service (by using semantics) from the service registry or peer agent nodes. The Petri net modelling aims to provide a composite, value-added service to the service requester. Service data caching within the MANET can be discussed under two methods, based on the diversity of the cached service information. In most works, the decision to cache a service is made locally in the proxies [3,14,16,17], that is, without taking into account all the peers within the network. In that case, it may happen that multiple copies of the same data are cached in the proxy nodes. This redundancy of cached service data reduces the possibility of caching different data that are also of interest, which affects the overall performance of the cache system. On the other hand, nodes can be made to decide on the cached data cooperatively among the proxies [13,15,18], which can improve the cache diversity and also the overall data cache performance of the system. Various opportunities and challenges, like load balancing and mobility, which arise on caching web data within mobile networks, are theoretically discussed in [12].
Lan Wang [3] proposed clustering in large-scale MANETs as a means of achieving scalability through a hierarchical approach in which every node in a cluster is one hop away from every other node, that is, each cluster is a diameter-1 graph. But a static cached data item manager may easily become the traffic bottleneck and a single point of failure for the cluster [4]. Hassan Artail et al. [2] proposed the Minimum Distance Packet Forwarding technique for search applications within MANET, which is based on the concept of selecting the nearest node from the designated nodes. The cache techniques in the studied works suffer from problems such as large hop count, message density and single point failure because they do not follow an efficient communication structure within the MANET. To cope with this problem, the Distributed Spanning Tree (DST) has been used as a communication structure in MANET for the effective data cache technique proposed in our previous work [5]. DST is a recent and formally proved communication structure in MANET that lessens the node isolation problem, reduces the number of hops required to reach the nodes and makes the network scalable [7,8,10].
Another important performance factor in MANET is finding and maintaining routes, since node mobility causes topology changes which need to be tracked for effective communication.
In [1], Ant Colony Optimization was used to deal with the fragile nature of the MANET; it dynamically identifies the optimal path between the nodes in the DST on demand. It was also justified that applying ACO on the DST enhances the effective routing of messages (at low cost) in the MANET, which in turn reduces the number of message hops required for communication and achieves excellent efficiency in DCM applications. Although an effective WSCM in MANET using DST and ACO techniques has been formally proposed in [1,6], experimental analysis of the work was performed for very few performance factors, such as hit ratio and message passes. Thus, in this paper, it is intended to conduct an extensive analysis of several other critical performance factors such as the Cooperative Cache model, Mobility Hand-off, Precision and Data Reliability.
Background information needed
In this section, we discuss the techniques proposed in [1,6] which are necessary to understand the performance assessments performed in the following sections of the paper.
Distributed Spanning Tree (DST)
The Distributed Spanning Tree (DST) [10] is the interconnection formation we follow, as in [5,11,21], which improves the routing and reduces the number of message passes required for any communication in MANET. The DST systematizes the MANET into a hierarchy of groups of nodes. The DST is an overlay structure designed to be scalable [11]. It supports growth from a small number of nodes to a large one. A comprehensive algorithm for the formulation of the DST in MANET has been proposed and exemplified in [9]. Consequently, the MANET can be logically converted into DSTs, and each DST has a root node, named the Head Node (HN), and possible Leaf Nodes (LNs). Every HN holds the complete details regarding its LNs and vice versa. These HNs are generated dynamically and hold the service cache details, which are accessed by their corresponding LNs and indeed by other HNs as well. In addition to the cache details, a domain ontology and Petri net formalism are included in the HNs to deliver prominent service compositions.
The DST formulated MANET can be represented as G_m in equation (1):

G_m = {DST_1, DST_2, ..., DST_i}, with DST_v = (HN_v, LN_v1, LN_v2, ..., LN_v(jv−1))   (1)

Where,
• DST_v is a Distributed Spanning Tree, 'i' is the total number of DSTs formed in the network, and 0 < v ≤ i
• HN_v is the Head Node (HN); 'i' is the total number of HNs in the peer network, equal to the number of DSTs, and 0 < v ≤ i
• LN refers to the Leaf Node(s). In LN_vz, v refers to the corresponding HN_v and 0 < z ≤ j_v − 1, where 'j_v − 1' is the total number of LNs in the corresponding DST.
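A minimal data-structure sketch of this representation, in Python, is given below. The node names are invented for illustration.

```python
# Eq. (1) as a data structure: the MANET as a set of DSTs, each a head
# node (HN) with its leaf nodes (LNs).

class DST:
    def __init__(self, head, leaves):
        self.head = head           # HN_v: holds cache index, ontology, Petri net
        self.leaves = set(leaves)  # LN_vz: nodes that cache the service items

# G_m is then simply a collection of DSTs:
manet = [
    DST("node06", ["node03", "node09", "node17"]),
    DST("node11", ["node01", "node02"]),
]
```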
Ant colony optimization for DST
Ant colony optimization (ACO) [19,21] is one of the most recent techniques for approximate optimization. The inspiring source of ACO algorithms is real ant colonies. More specifically, ACO is inspired by the ants' foraging behavior. By applying ACO over the formulated DST [1], we can obtain the optimal path in terms of a reduced number of message passes among the nodes in the network. ACO is also capable of re-forming a new optimal path in case of any problem with the current optimal path. In this paper the Ant Colony Optimization algorithm has been modified and proposed for finding an optimal path in the DST of the MANET. By optimizing every DST and the connections among all the other DSTs through their HNs, it can be argued that the entire network is optimized with the ACO technique for improved efficiency.
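The following is a generic, textbook-style sketch of the two ACO primitives involved (probabilistic next-hop selection and pheromone reinforcement), not the exact algorithm of [1]; the edge names and parameter values are illustrative.

```python
# Generic ACO step: choose a next hop in proportion to pheromone level,
# then evaporate and reinforce trails along the path actually used.
import random

pheromone = {("HN1", "HN2"): 1.0, ("HN1", "HN3"): 1.0}  # initial trail levels

def next_hop(current, neighbours, alpha=1.0):
    weights = [pheromone[(current, n)] ** alpha for n in neighbours]
    return random.choices(neighbours, weights=weights)[0]

def reinforce(path, rho=0.1, deposit=1.0):
    for edge in pheromone:                    # evaporation on all known edges
        pheromone[edge] *= (1.0 - rho)
    for u, v in zip(path, path[1:]):          # deposit along the used path
        pheromone[(u, v)] = pheromone.get((u, v), 0.0) + deposit / len(path)
```

Repeatedly applying these two steps lets frequently successful HN-to-HN routes accumulate pheromone, so the optimal path emerges and can be re-formed on demand when the topology changes.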
The complexity of the ACO technique depends on the method by which it is implemented in the MANET. In the DST structure, the computational complexity of the ACO technique is a function of two quantities:
• N_DST, the number of DSTs (equivalently, the number of HNs) formed in the network
• N_LN, the number of LNs under an HN (theoretically taken to be the same for every HN)
Web Service Data Cache Mechanism (WSCM) with DST and ACO techniques
An efficient WSCM system in MANET with DST and ACO techniques has been formulated in [1,6] with the necessary algorithms. The proposed system has the capability to cope with the fragile, dynamically changing topology of the MANET environment, and the system structure can be viewed as four-layered, as shown in Figure 1.
MANET Network Layer - is a network-level layer consisting of a wireless network with highly dynamic topology.
DST Formulation Layer - is simple and converts the graph-structured MP2P network into a collection of DSTs. This DST structure provides the features that are necessary for a dynamic network, such as a reduced routing table size, minimized routing overhead, easy network management, reduced message passes, load balancing and fault tolerance. This layer supports dynamic node insertion into the network and exit from the network in both normal and abnormal manners. This layer also makes the system highly scalable.

ACO Optimization Layer - enables the system to manage the dynamic nature of the MANET. This layer works in a simple, effective and on-demand way, which makes the system operate on a fragile network with asymmetric links and constantly changing topology.
Web Service Data Cache Management and Service Composition Layer - is an application-level layer with a specified protocol for an effective web service information cache management system in MANET. Using the Petri net formalism, web services are composed together to enhance the service quality. Thus, using this four-layer system structure, the WSCM application can be efficiently performed in the highly fragile MANET environment using DST and ACO techniques.
Experimental analyses
In this section, an extensive analysis of the DCM system using DST and ACO techniques in MANET is performed based on critical performance factors such as the Cooperative Cache model, Mobility and Hand-off, Availability, Routing technique, Cache Replacement method, Precision and Data Integrity of the system.
Simulation Setup
A MANET environment with 30 nodes and 70 service items has been simulated using the OMNeT++ tool, which is an object-oriented modular discrete event network simulator. Table 1 shows a partial view of the HNs and their neighboring nodes, and the hop distance between them. Simulation parameters follow [1]. In the simulated network, the HNs formed are node05, node11, node17, node23 & node29, and the other nodes act as LNs to one of these HNs. For our simulation, equation (1) can be rewritten as

G_m = {DST_1, DST_2, DST_3, DST_4, DST_5}

Where,
• The node numbers are preceded by the term node; for example, DST_1 should be interpreted as DST_1 = (node06, node03, ..., node17), in which the first node, node06, is the HN and all other nodes are LNs of DST_1.
To simulate the web service cache mechanism, we created 50 different service items and stored them in the service registry, to be accessed by the proxy nodes in the MANET through the access point. When a node requires any service, it sends the request to its corresponding HN (proxy node). The HN, upon receiving the request, extracts the keywords and matches them with the cache entries to identify whether the service has already been executed from that node within a cut-off time. The cut-off time is the time span for which service information in a cache remains valid, after which the WSDL information is deleted from the cache (leading to a cache miss on the next attempt). This deletion ensures up-to-date information in the cache. The HNs advertise their presence in the network for the LNs to identify them and request a service. On the very first access of any service, the LN saves a copy of the accessed service item and informs its corresponding HN to save the type of the service item and the details of the LN which holds it. When any node requests a web service of a similar nature, the requesting node is served with the service item by the LN which holds the cache, identified through its HN. A keyword extracted from a request may or may not match the cache entries. If it matches, the WSDL information is provided to the corresponding requesting LN using standard message passing. If the keyword does not match the cache table, the domain ontology is used at the first level to find related services (rather than exact keyword matching) from the cache. At the next stage, if the related services also give a cache miss, the peer HNs or the external service registry is contacted for the service configurations. If, after reaching all the HNs, it is found that the service does not exist within the network, it is obtained from the external UDDI/service providers.
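A sketch of this lookup protocol is given below. The function and attribute names (ontology.related, hn.query, registry.fetch) and the cut-off value are illustrative assumptions, not the protocol's actual interface; the ordering of the four stages follows the description above.

```python
# HN lookup: exact keyword match, then ontology-based related services,
# then peer HNs, then the external registry as a last resort.
import time

CUTOFF = 300.0  # seconds a cached WSDL entry remains valid (illustrative)

def lookup(hn_cache, keyword, ontology, peer_hns, registry):
    entry = hn_cache.get(keyword)
    if entry and time.time() - entry["cached_at"] < CUTOFF:
        return entry["holder"]                    # LN that holds the item
    for related in ontology.related(keyword):     # semantic widening
        entry = hn_cache.get(related)
        if entry and time.time() - entry["cached_at"] < CUTOFF:
            return entry["holder"]
    for hn in peer_hns:                           # ask the other head nodes
        holder = hn.query(keyword)
        if holder:
            return holder
    return registry.fetch(keyword)                # external UDDI/provider
```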
A service request initiated by an LN can be fulfilled by an atomic service or a composite service. At times, an atomic service may not be available for a given service request, in which case several compatible services are identified, composed and executed. To identify the compatible services for composing a value-added service, the Petri net formalism is used in the HNs. The Petri net is mainly used to identify the reachability of the compatible web services within the domain.
Definition 1: The Best and Worst case analysis for accessing the cached service item based on the message hops can be modeled as follows; a direct evaluation of both bounds is sketched after the definitions.
Best Case: The total number of message hops required to access the cached service item is minimal when the service item is cached in an LN which is under the HN of the requesting node. This can be expressed as, n(access_messagehops) + n(dictionary_search) ≤ 4N where,
• n(dictionary search ) is the total time to search the semantic of the keyword in the search.
• N is the total number of message hops between HN and its LN (consider equal for every LNs).
Worst Case: The total number of message hops required to access the cached service item is maximal when the HN of the node that holds the requested service item is at distance 'k' from the HN of the requester node, where 'k' is the total number of HNs in the MANET. This can be expressed as, n(access_messagehops) + n(dictionary_search) ≥ (k × M) + 4N where, • n(access_messagehops) is the total number of message passes required to access the cached item • n(dictionary_search) is the total time to search the semantic of the keyword in the search • M is the total number of message passes between two HNs (considered equal between every pair of HNs).
• N is the total number of message passes between an HN and its LN (considered equal for every LN).
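The two bounds of Definition 1 can be evaluated directly, as in the following sketch; the values of N, M and k are illustrative and assumed uniform, as the definition does.

```python
# Best- and worst-case bounds of Definition 1 on the cost of reaching a
# cached service item (access hops plus dictionary search).

def best_case_bound(N):
    return 4 * N               # item cached under the requester's own HN

def worst_case_bound(k, M, N):
    return k * M + 4 * N       # item under the k-th distant HN

print(best_case_bound(N=2))              # 8
print(worst_case_bound(k=5, M=3, N=2))   # 23
```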
Extensive analyses to model the various performance factors, such as the Cooperative Web Service Cache model, Mobility and Hand-off, Availability, Routing technique, Cache Replacement method, Precision and Consistency, for the WSCM in DST and ACO optimized DST MANET are performed in the following sections.
Cooperative cache model
To improve service information accessibility, mobile nodes should cache service items different from those of their neighbor nodes [1]. Every LN should cooperatively cache different services to avoid replicated caching of the same service within the network. Caching the same service on different LNs may reduce the access delay, but, considering the size of the cache memory in the LNs, it would block the caching of other frequently accessed (different) services. Definition 2: Let LN_i be a mobile node which caches service items. To show that each LN caches different services,

S_id(LN_m) ∩ S_id(LN_p) = ∅, 0 < m, p ≤ k, m ≠ p   (6)

S_id_x(LN_m) ≠ S_id_y(LN_m), 0 < x, y ≤ n, x ≠ y   (7)

where,
• k is the number of LNs which cache data items in the MANET.
• n be the number of service items cached in m th LN and 0 < m < k.
• S id is web service index entry in LN_TABLE of a LN.
Eq. 6 states that no service is cached by two different LNs, and Eq. 7 states that no service is cached twice within an LN. Thus, there is no repetition in the web service index entries in the LN_TABLE of any LN, so every service is cached only once in the MANET.
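The invariant expressed by Eqs. 6 and 7 can be checked directly, as in the following Python sketch with illustrative service identifiers.

```python
# Cooperative-cache invariant of Definition 2: service index sets of all
# caching LNs are pairwise disjoint (Eq. 6) and duplicate-free (Eq. 7).
from itertools import combinations

def caches_are_cooperative(ln_tables):
    """ln_tables: list of service-id lists, one per caching LN."""
    if any(len(t) != len(set(t)) for t in ln_tables):  # Eq. (7)
        return False
    return all(set(a).isdisjoint(b)                    # Eq. (6)
               for a, b in combinations(ln_tables, 2))

print(caches_are_cooperative([["s1", "s2"], ["s3"], ["s4", "s5"]]))  # True
```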
Mobility and hand-off model
Due to the fragile nature of the network, mobile nodes are free to move in the MANET, which makes the node tracking task complex. This tracking task can be handled in two scenarios: exit or removal from the MANET [5], and switching among the DSTs within the MANET.
Scenario 1: A node can exit/be removed from the network dynamically in the following fashion: the node that wants to be removed from the network should send an inform message to its HN, so that the corresponding HN removes the node details and its cached web service identity from its LN list, which removes the node from the spanning tree.

Figure 4: Comparison on % of utilization of consistent service item performance of three different schemes

Scenario 2: Switching of a node from one DST to another is configured automatically by the HN by passing messages with the node. Any LN can voluntarily hand off itself from an HN, say HN_1, if the number of hops between the LN and HN_1 exceeds the number of LNs under HN_2, and the LN can join HN_2 if the number of hops between the LN and HN_2 is less than the number of LNs under HN_2. This hand-off should be intimated to both HN_1 and HN_2. Every LN must be under some HN. If a request arises in the meantime between handovers, HN_1 will transmit a "Binding Warning" and intimate that the LN is now under HN_2.

Table 2: Comparison on precision for HNs involved in serving the nodes in all three different scenarios

Definition 3: Let v_i be a mobile node in the MANET which is under HN_i and let HN_j be another HN in the MANET. Then, v_i can decide to hand off from HN_i and to join under HN_j if it satisfies the rule of Eq. 9 (a small sketch of this rule follows the definitions below):

nhops(v_i, HN_i) > n(LN_j) and nhops(v_i, HN_j) < n(LN_j), 0 < j ≤ n, j ≠ i   (9)

where,
• nhops(v, HN ) be the number of hops required to reach node v from HN.
• n(LN i ) be the total number of LNs under the Head Node HN i .
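The sketch below evaluates the hand-off rule of Definition 3 / Eq. (9) as reconstructed above; the hop counts and LN count are illustrative values.

```python
# Hand-off decision of Definition 3: LN v leaves HN_i for HN_j when its hop
# distance to HN_i exceeds the LN count of HN_j and its hop distance to
# HN_j is below that count.

def should_handoff(nhops_to_current, nhops_to_new, n_ln_new):
    return nhops_to_current > n_ln_new and nhops_to_new < n_ln_new

print(should_handoff(nhops_to_current=6, nhops_to_new=2, n_ln_new=4))  # True
```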
Figure 5: Comparison on data items cached and served using three different schemes.
Thus, any LN that satisfies the condition of Eq. 9 with respect to another HN can perform hand-off from its existing HN and join the new one. The Mobility and Hand-off model works under the assumption that the HNs will always be in access range within the MANET. This assumption is made because LNs under an HN should not be disconnected from the spanning tree and thereby from the network.
Precision
Precision refers to the ratio of the total number of requested service items found in the MANET to the total number of service requests received. The performance data observed from the simulation in the first 100 seconds are tabulated in Table 2, which contains the HNs created, the number of services cached by the LNs of each HN and the number of service item requests served by each HN, either as an atomic service or as a composition. Composition of web services is carried out using the Petri net modelling in the following phases: identifying similar web services using the domain ontology, classifying them as compatible and non-compatible, and executing the compatible web services pertaining to the goal of the service request. To illustrate, consider the first entry in Table 2: the mobile node node06 is an HN, and the total number of data items cached in its LNs, the total number of data item requests served using the cache and the total number of requests received for service items are 11, 31 and 95, respectively. Thus the Precision for HN node06 is 32.62%.
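A worked check of the figure quoted for node06:

```python
# Precision = requests served from the cache / requests received.
served, received = 31, 95
print(f"{100 * served / received:.2f}%")  # 32.63%, i.e. the ~32.6% in Table 2
```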
Table 2 shows that the precision percentage of HNs in the MANET is improved using the DST and ACO techniques. The maximum precision values recorded for the MANET, MANET with DST and MANET with ACO optimized DST schemes are 41.7%, 58.94% and 75.63%, respectively, as illustrated in Figure 2. This is because the DST and ACO techniques reduce the number of message hops required for any operation, such as cache request and cache reply, by discovering an optimal path between the nodes in the MANET. So operations are performed faster and more requests are served from the nodes' local caches within the stipulated time period.
Figure 3 illustrates the comparison of service requests received and served using the three different schemes. From Figures 3 and 4, it can be observed that the ACO optimized DST scheme outperforms the other two schemes. Thus, the precision performance of the system improves under the DST scheme from 30.9% to 53.7%, and is further improved to 66.1% using the ACO optimization technique.
Service reliability
Service reliability or consistency refers to the correctness of the cached service at the time of access within the MANET. Although the consistency technique followed is time-based, it is not required that the clocks of the mobile nodes be synchronized. Every HN which stores the service item type also stores metadata about the service item S_i, which contains the time at which S_i was cached. This information is used to check the validity of the service item. 'T' is a constant time value in the MANET which can be varied based on how often the service items are updated in the outside network. The factor used to measure the consistency of the system is the percentage of consistent service item usage.
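A minimal sketch of this time-based validity test follows; the value of T and the metadata field name are illustrative assumptions. Because each HN compares against its own clock and the times it recorded itself, no clock synchronization between nodes is needed.

```python
# Time-based consistency check: cached item S_i is consistent if less than
# T has elapsed since the HN cached it.
import time

T = 600.0  # network-wide constant, tuned to the item update rate (illustrative)

def is_consistent(metadata):
    return time.time() - metadata["cached_at"] < T
```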
Table 3 shows the comparison of the percentage of utilization of consistent service items by LNs in all three different scenarios for the first 100 seconds of the simulation run. From these statistics, it can be observed that the utilization percentage of consistent service items is much improved in the DST MANET and ACO optimized DST MANET schemes. The same is illustrated in Figures 4 and 5, which confirm the improved performance of the DST and ACO optimized DST schemes over the MANET scheme.
Table 4 shows that utilization increases further when Petri nets are used to deliver compositions. Under the assumption that the mobile agents tender similar service requests, a service composition held in the cache will be a suitable candidate for most requests, rather than a single atomic service. Without loss of generality, we can say that the ratio of requests served by an atomic service relative to a composite service will be lower. And it is trivial that if an atomic service can satisfy a customer request, a composite service (which includes compatible atomic services) will also satisfy the request more efficiently.
Discussion
To summarize, the DST structure offers the capabilities that are necessary for a dynamic network, such as a reduced routing table size, minimized routing overhead, easy network management, reduced message hops, load balancing and fault tolerance. The ACO optimized DST structure enables the system to manage the highly fragile nature of the MANET and to find the optimal route between HNs, and between an HN and its LNs, in an on-demand fashion. This layered approach works in a simple, effective and on-demand way, which makes the system operate in a fragile environment with asymmetric links and constantly changing topology. Thus, the performance of the service cache technique for Web Service Composition in MANET has been analysed for the Cooperative Cache, Mobility Hand-off, Precision and Data Reliability methods. An extensive experimentation has been performed on the precision and service reliability of the system, which confirms that the ACO optimized DST scheme improves the efficiency of the service cache technique in the MANET environment.
Conclusion
The work presented in this paper described modeling and assessing the various performance factors for the Service Cache Mechanism for composing web services using the DST and ACO techniques in MANET proposed in our previous works. A comprehensive theoretical model has been developed for performance factors such as Cooperative Cache, Mobility Hand-off, Precision and Data Reliability. In addition, the performance improvement of the Web Service Cache Mechanism (WSCM) has been demonstrated experimentally, based on the Precision and Service Integrity factors, using three different schemes: MANET, DST MANET and ACO optimized DST MANET. The simulation results show that the precision performance of the WSCM system is improved using the DST scheme from 30.6% to 53.6%, and is further improved to 68.1% using the ACO optimization scheme. The service reliability performance is enhanced from 50.3% to 67.0% and to 88.0% using the DST and ACO optimization schemes, respectively.
Figure 1: The layered architecture of WSCM in ACO optimized DST MANET
Figure 2: Comparison on Precision performance of three different schemes
Table 1: HNs formulated and its hop distance from the nearest two HNs in the simulated MANET
Table 3: Comparison on % of utilization of consistent service item by LNs in all three different scenarios
Table 4: Comparison on % of utilization of consistent service item Vs a composition by LNs
"year": 2015,
"sha1": "5ea4ee04de156a6a873b53643ce27c8c288d1d4f",
"oa_license": "CCBYNC",
"oa_url": "http://univagora.ro/jour/index.php/ijccc/article/download/1751/491",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5ea4ee04de156a6a873b53643ce27c8c288d1d4f",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The parable of the Feast (Lk 14:16b-23): Breaking down boundaries and discerning a theological-spatial justice agenda
Description: This research is part of two different research projects. It is part of the project 'Socio-Cultural Readings', directed by Prof. Dr Ernest van Eck, Department of New Testament Studies, Faculty of Theology, University of Pretoria, and part of the research project 'Spatial Justice/Spirituality and Health', directed by Dr Stephan de Beer, Director of the Centre for Contextual Ministry and member of the Department of Practical Theology, Faculty of Theology, University of Pretoria.
Introduction
Walking through the streets of Pretoria, or driving from one end to the other, you are confronted with the starkly contrasting realities in which people find themselves. You can move past the seat of power of the Republic of South Africa (the Union Buildings) and at the same time be confronted by the extreme poverty of homeless individuals occupying the sidewalks. Standing at the Hatherley dumpsite, in Mamelodi East, extreme opposites can be noted: big, expensive machinery bringing in waste from the city whilst poor, vulnerable people dig through it to seek survival.
This can be translated into the age-old theodicy question: 'Why does God allow bad things to happen?' Rephrasing the question into a more contemporary one: 'In a country such as South Africa, with our inclusive constitution, how can this be happening?' How can this be allowed where almost 80% of the country's population define themselves as Christian (Census 2011)? Harvey (2012) asks the question of what: are we to make of the immense concentrations of wealth, privilege, and consumerism in almost all the cities of the world in the midst of what even the United Nations depicts as an exploding 'planet of slums'? (p. 4) There exists a tension between extreme wealth and extreme poverty. This tension creates boundaries which define access to housing, education, medical care and the right to the city.
What type of city we want to live in is intertwined with the 'kind of people we want to be', the 'kind of social relations we seek' and 'what style of life we desire' (Harvey 2012:4). Do we want to live in a society that segregates the haves and the have-nots? Do we want to perpetuate the apartheid city by moving the poor to the outskirts, the periphery of the city, without adequate access to services, whilst building the rich better, more secure, more exclusive environments? Do we let wealth dictate who has a right to the city, or do we consider that the right to the city means that all can participate in a reinvention of the city for the common good (Harvey 2012:4)?
In this article, we explore two contesting spaces: the Christian Revival Church (CRC) and the Hatherley dumpsite. These two spaces are within walking distance of each other. The rationale for choosing the above-mentioned spaces rests on two levels: firstly, because they were part of the discussions during the Spatial Justice Conference (see below), and, secondly, theological reasoning. CRC is a megachurch with immense wealth, and Hatherley is a landfill site where people live from the city's waste. Hatherley exposes the 'poverty of church' and the 'impotency of public policy and city planning' (De Beer 2014a:7); it exposes the segregation in South Africa; it exposes the disregard for human life.
The parable of the Feast (Lk 14:16b-23) is perhaps the example par excellence in the New Testament that addresses spatial justice and reconciliation. In the parable, Jesus advocates for the eradication of all boundaries linked to the socio-economic status of the marginalised. The parable argues, from a social justice perspective, that there is no such thing as privileged space; privileged space, on the contrary, builds boundaries. The reading of the parable presented here critically engages with real-life experiences of marginalised people living on the periphery of the city and the boundaries that are created by megachurches in their close surroundings. This article will also explore the pre-industrial city of Luke 14:16b-23 and how it relates to the modern city. The parable will be interpreted in its socio-cultural, political, economic and religious context. The boundaries that are created by honour, reciprocity and physical walls in the parable will be explored. The parable will be used to interpret the boundaries that exist between the dumpsite and the church.
Moving between (physical) boundary lines
Boundaries are used for different reasons. They can indicate physical boundaries such as walls or borders between countries. Boundaries in an abstract sense can help people navigate their relations to other people. Boundaries have the ability not only to protect but also to divide and marginalise people. The apartheid city used boundaries to segregate people. In modern-day cities (as in the pre-industrial city), boundaries are used as a clear indication of where people are allowed to be, based on their social or financial standing. Is there a boundary that separates the Hatherley dumpsite and the CRC from one another? Or is there complementary interaction?
Cities interact complementary and more intensively when the one supplies the demand of the other and vice versa (Soja 1971:3).The interaction of the CRC and the Hatherley dumpsite (Mamelodi East) is not so much complementary.
One creates waste and the other lives in it.The first has tremendous privilege and wealth, whilst the second tries to survive amongst the discarded, leftover waste of the first.What do we make of these discrepancies?It is easy to accuse the ruling party of South Africa that they only seek the interest of a few, whilst churches in our midst are doing the same.The 2015 conference took place on the Mamelodi campus of the University of Pretoria.Part of the programme was an immersion exercise.One cannot be truly immersed in any place in just 2 days, but the aim of the immersion exercise was to become aware of the contested spaces that exist in the east of Pretoria.This was done by means of a few site visits.Two of the sites that the conference visited were the Hatherley dumpsite and the CRC, Pretoria.
Spatial Justice Conference
In this part of the article, we will discuss various encounters at the two sites. We engaged with the two sites not only as conference attendees, but also visited them after the conference to gain a better understanding of these spaces.
Megachurches
In traditional Protestant churches, a church with more than 2000 regular attendees is defined as a megachurch. Some scholars suggest that churches with more than 10 000 regular attendees are classified as 'gigachurches' (Maddox 2012:147). Entering the premises of the church, it becomes clear that CRC's building fits into this upper-market, suburban, elitist area. '[M]egachurches generally are an urban phenomenon located in the suburbs of very large cities' (James 2007:193). Why did they choose to build CRC in the east of Pretoria, in one of the most expensive suburbs, and not in Mamelodi or the lower-income suburbs of Pretoria west? The sheer size and architecture of the building demand attention. Entering the building, one finds a massive, clean foyer. Everything is well maintained and nothing is out of place. The church plays into the notion that pervades our society: 'big is better and stronger' (James 2007:191).
Requesting to speak with someone about the church leads to an interrogation of sorts. The first questions we were asked were the following: 'are you a member of the church?' and 'with whom do you have an appointment?' If you cannot positively answer the first round of questions, it is more difficult to meet with someone. During the conference visit, we were initially denied permission just to view the auditorium. On a follow-up visit, we were again initially denied, only to be reluctantly and partly accommodated in the end.
1. For a more in-depth study on megachurches in South Africa, Genevieve James's PhD thesis can be considered.
2. See http://www.silverlakes.co.za/hoa/about-us
The auditorium of CRC can seat up to 7000 people. The erection of the church building started in 2012 and was completed at the beginning of 2014 (Bester 2014:26). During the conference visit, we were informed that the building alone cost over R200 million and was debt free. And it was mentioned: 'Our pastor, he is the man who holds the vision for the church, he does not believe in bank loans, we do not owe anyone!' Maddox (2012) notes the universal rhetoric of growing megachurches: A tiny group began in the pastor's lounge room; then borrowed, leased and finally bought a warehouse. Once sufficiently established, they commissioned purpose-built premises, completing the main auditorium, conference facilities, television studio, gymnasium and school, before 'planting' offshoots in nearby cities and foreign lands. (p. 153) CRC's story fits squarely into the same rhetoric, starting as a small church in 1994 and now having congregations all over South Africa and internationally, with over 53 000 members. 3 On a typical Sunday, there are three services at the Pretoria congregation of the CRC. You are struck by the number of expensive cars that are parked on the premises. Cars are parked outside on the sidewalk as there is not enough parking. There are also about 15 buses parked outside. People are bussed in from various areas of the city.
In the parking area, it might seem for a moment as if the rainbow nation ideals have been realised. People from different races and cultures, old and young, are all gathered here with the same end in mind, namely, to hear the message of the senior pastor, At Boshoff.
CRC is dominated by two senior pastors: At Boshoff and Nyretta Boshoff. Not only at CRC; megachurches 'are often dominated by a single, senior pastor' (Yip & Ainsworth 2013:508). Megachurch pastors tend to have some sort of celebrity status (cf. Paparazzi pastors 2011; Yip & Ainsworth 2013). During our visits, many people placed an emphasis on the importance of the senior pastor. To be part of the church, it is of utmost importance to understand his vision and to follow suit. At Boshoff not only represents but also embodies the CRC brand. Yip and Ainsworth (2013:508) note that in most cases megachurches are made up of 'attendees' rather than 'members'. Church attendees, unlike members, are not involved in decision-making about the church's operations, structure or practices. However, church attendees are involved through participation in small 'care groups,' which meet regularly outside the Sunday service and further reinforce the vision and teachings of the senior pastor. (Yip & Ainsworth 2013:508) It seems that it is no different at CRC, as they state: 'Here we grow and thrive with other believers to discover our potential and purpose through sharing the word received by Ps. At on Sundays'. 4 Entering the building on a Sunday, you are met with contemporary-style modern music. There are doors that are bigger than life and lights everywhere. The lights, music and sound create an effect that is distinctly 'unchurchlike' (Yip & Ainsworth 2013:510). You can enter the bookshop to your left to buy the senior pastor's books, sermons or the band's music. On your right, you can sit at the coffee shop. Yip and Ainsworth (2013:508) note that the senior pastor of a megachurch, who 'constitutes a human brand', is not only 'transferred to the production of merchandise (music, books, audio recordings) but also used to co-enact the identity of the church'.
3. See http://www.crc.org.za/
Seated in the auditorium, you are bombarded by an 8064 mm × 4608 mm larger-than-life LED screen (Bester 2014:29) and state-of-the-art speakers, with advertisements and 'TV' presenters who are dressed by boutique stores.
They advertise a strong focus on 'The Pastor' and his apparent life-changing sermons that you can buy at the bookshop.
From the moment you drive into the premises, there are red-shirted ushers to ensure you park in the right parking space and walk on the right walkway. Everywhere you walk in the building, there is a smiling red-shirted usher to remind you to 'enjoy the service' and herd you in the right direction.

Nkosi (2014:9) has indicated that the Hatherley dumpsite was established without following international standards for a municipal solid waste management system, as articulated in the National Environmental Management: Waste Act 59 of 2008, hence the dramatic impact on the environment.
Standing amidst the chaos and unidentifiable smells, one is confronted with an uncomfortable feeling, a feeling of uneasiness. How is this injustice possible? How can people be discarded like this? How can government allow this? How is the church of Jesus Christ allowing this to happen? Am I contributing to this injustice in some way?
During the 2015 conference, attendees were introduced to the dumpsite by the site manager.He spoke about the struggles the poor and vulnerable people are faced with on a daily basis, about the babies that are born in this harsh, unhealthy place.
During another visit, 6 we met with two young men living on the dumpsite. From the garbage, they have constructed a temporary shelter. Cooking (rotten) meat, chicken and other vegetables that they found digging through the garbage, they welcomed us into their world. The following is an attempt to reconstruct our conversation on their life as experienced on the dumpsite:

Q1. Where do you get food from?
A. We get food from the rubbish offloaded by trucks. For instance, we collected this meat that we are now cooking from the rubbish that a truck dumped this afternoon.

Q2. Where do you get water to drink from?
A. We go to ask for water outside the dumping site. It is quite far from where we get the water.

Q3. Where do you go to use ablution and toilet facilities?
A. We use the bush because there are no toilet facilities here on site.

Q4. Where did you come from and why did you choose to stay here?
A. I came from Limpopo and decided to live here to get recycling material to sell for money. My brother is from Mozambique but has been here in South Africa for more than 10 years. I do not have an ID book or birth certificate and therefore cannot find a proper job. I have done piece jobs, but you are treated badly and do not always get paid. It is better to be here because I know what to expect. Without my ID I also cannot get the medical treatment from the clinic that I require.

Q5. Are you looking for a job?
A. We have skills like plumbing, painting, roofing, electrical work, carpentry and car repairing, but because of lack of identity documents we cannot find a job.
6. For this specific visit, two of the article's authors, Ezekiel Ntakirutimana and Wayne Renkin, visited Hatherley with the objective of engaging in conversation with people who are living on the dumping site and trying to understand their daily struggles and how they see the role of the church at the dumping site.
These two men are not the only people who choose the dumping site over informal work. For instance, Modipa (2014) reported in a local newspaper how a 30-year-old woman lost her job as a domestic worker. The report also reveals how she made much more money at the dumpsite than as a domestic worker or even selling fruit. But it does not come without its own challenges, as it is not easy working on the dumping site.
Daily Maverick (Health-E News 2015) describes another story of a different kind, related to a 20-year-old man who lived on the Hatherley dumpsite, 'sorting through rubbish for recycling by day to make money to score hits of nyaope 7 by night'. The Rekord newspaper reported the story of two young boys, 6 and 11 years old, who drowned whilst swimming in dirty water at Hatherley. The news article further reported that the Tshwane Metro 'supported the family through their time of bereavement', but with no mention of further steps taken to prevent the same problem from happening again in the future (Funeral for drowned Mamelodi boys 2015).
A feast with no boundaries and no privileged space (Lk 14:16b-23)
'Hidden' cultural scripts
There are two main ways to approach the parables when it comes to their interpretation. One can read the parables in their literary contexts, that is, the way in which the parables of Jesus were redactionally applied by the evangelists, or one can interpret the parables in the socio-cultural, political, economic and religious context in which they were told by Jesus (ca 27-30 CE). The reading of the Feast below takes the second approach.
7. Nyaope is also known as whoonga or wunga. It is a fine white powder with a cocktail of ingredients. 'The ingredients of nyaope are not always known, and in fact the recipe may vary from place to place' (Solomons & Moipolai 2014:302). It has been reported that possible ingredients of nyaope include heroin, strychnine/rat poison, detergent powder, anti-retroviral drugs (ARVs) and efavirenz (Grelotti et al. 2014:512; Solomons & Moipolai 2014:302).
For the second approach, it is important to take seriously the cultural scripts (social values) that were part of the world of Jesus. What do we find in a specific parable that is 'hidden' to the modern reader? What social values were part of the repertoire of Jesus and his audience - their shared cultural world of references - that resonate in a parable (see Scott 2001:109-117)? In other words, what cultural scripts are embedded in the parable of the Feast that the modern reader should take cognizance of? What did the hearers of the parable know that the modern reader does not know, the so-called native's point of view? Also, if there are 'hidden' cultural scripts embedded in a parable, how can they be brought to the surface? Social-scientific criticism, as an exegetical approach, consciously addresses these questions by using reading scenarios to identify and interpret social values embedded in ancient stories like the parables. These reading scenarios enable the modern reader to hear (read) the parable as the first hearers did, to value what they valued, and to understand what they understood (Neyrey 1996:115).
What are the social values that are 'hidden' in the parable of the Feast? 8 Firstly, the social setting of the parable of the Feast, namely a city, would have evoked certain physical and social aspects that were typical of pre-industrial cities in the time of Jesus. In pre-industrial cities, location, social status and location of dwelling went hand in hand. Walls physically demarcated who belonged where, and gates controlled the interaction between the different social groups that inhabited the city. The political and religious elite (those with honour, status, power and privilege) occupied the walled-off centre of the city, and the non-elites occupied the outlying area of the city, located between the inner and outer walls of the city (Rohrbaugh 1991:133-146). Occupation of the outlying area normally was organised in terms of particular families, income groups, guilds, ethnicity and occupation. The elite and non-elite thus were physically and socially isolated from each other. The pre-industrial city also 'housed' the socially ostracised (e.g. prostitutes, beggars, tanners and lepers). These people lived outside the outer walls of the city and were only allowed to enter the city during the day, for example, to look for work as day labourers. The important fact for the understanding of the parable is that social contact between the different groups, especially the elite and the non-elite, was nearly non-existent. 9
8. For a detailed discussion of the social values embedded in the parable, see Van Eck (2013:7-9).
9. 'A member of the urban elite took significant steps to avoid contact with other groups except to obtain goods and services. Such a person would experience a serious loss of status if found to be socializing with groups other than his own. Thus social and geographical distancing, enforced and communicated by interior walls, characterized both internal city relations and those between city and country' (Rohrbaugh 1991:136).
Secondly, the fact that the feast consisted of a meal immediately would have been understood by the hearers of the parable as a ceremony which included aspects such as boundary making, purity concerns and status. Also, the extension of an invitation would have evoked aspects such as gossip, honour, patronage and reciprocity. In the Mediterranean world (the world of Jesus), shared meals were seen as a ceremony that confirmed shared values, structures, status and honour rating. 10 Likes, therefore, only ate with likes (persons with the same social standing, status and honour rating). The elite, who occupied the walled-off centre of the city, only ate with other elite within the inner wall, and not with non-elites occupying the outlying area of the city or the impure and marginalised living outside the city walls. Meals also had to do with what is known as reciprocity.
Accepting an invitation to a meal was to be followed up by the same kind of invitation to the host. A guest who did not reciprocate by becoming a host to the initial host was seen as someone without any honour, as was someone who ate with persons of a lower honour status.

As a rule of thumb, persons who were invited to a meal normally received two invitations. The first invitation, which informed guests that a feast was going to take place, was in essence an honour challenge: would the invited guests consider my honour rating and status such that they would attend, and would they be willing to abide by the reciprocal implications of their acceptance of my invitation? The answer to this question was given implicitly when the second invitation was extended on the day and time of the meal. In the time interval between the first and the second invitations, first-century Mediterraneans normally practised what can be called gossip as a social game. In oral and non-literate societies, such as first-century Palestine, gossip was an institutionalised means of informal communication, interwoven in the daily affairs and interactions between people, and everybody partook in it (Andreassen 1998:41). As a controlled cultural form, gossip had several social functions, such as consensus building, the reaffirmation and enforcement of group values, boundary maintenance and the moral assessment of individuals (Rohrbaugh 1991:251-256). Gossip, and status and honour, thus were two sides of the same coin, gossip being 'one of the chief weapons which those who consider themselves higher in status use to put those whom they consider lower in their proper place' (Gluckman 1963:309). 11
Reading the parable 12
The parable of the Feast is a short story told by Jesus about a man who prepared a feast to which he invited many guests.
10. 'When people gathered for meals in first-century Mediterranean cultures, the event was laden with meaning. Meals were highly stylized occasions that carried significant social coding, identity formation, and meaning making. Participating in a meal entailed entering into a social dynamic that confirmed, challenged, and negotiated both who the group as a whole was and who the individuals within it were' (Taussig 2009:22).
11. For a more extensive description of gossip as a necessary social game in inter alia the first-century Mediterranean world, see Van Eck (2012:2-9).
That the man most probably was one of the rich elite can be deduced from the parable; he had the means to entertain many. The double invitation in the parable also illustrates the man's wealth, because the double invitation was a special sign of courtesy practised by the wealthy (Scott 1989:169). The host most probably was part of the urban elite, who lived in the walled-off centre of the city.

Because only people with the same social standing, status and honour rating ate together, his invitees most probably also were from the elite who lived in the walled-off centre of the city. That this was the case is clear from the parable, at least for the first two invitees. The first invitee had the means to acquire a piece of land, and the second had bought five yokes of oxen, indicating that he owned a large estate. Because likes only ate with likes, the other invited guests, like the one who had recently married, most probably were of the same or even higher status than the host.
As stated above, the (first) invitation extended to the guests was, in essence, an honour challenge to the invited. Did the invited consider the host as one of their peers? Was his honour rating high enough for the invited to accept the invitation? Were they willing to reciprocate after accepting the invitation? Would their attendance enhance their respective honour ratings? Or would attendance shame them?
Almost immediately after the first invitation, the gossip network amongst the elite would have kicked in, before the second invitation was received. As put by Rohrbaugh: Initially the potential guest would have to decide if this was a social obligation he could afford to return in kind. Reciprocity in regard to meals was expected…. But more importantly, the time between the invitations would allow opportunity for potential guests to find out what the festive occasion might be, who is coming, and whether all had been done appropriately in arranging the dinner. Only then would the discerning guest be comfortable showing up. The nearly complete social stratification of pre-industrial cities required keeping social contacts across class lines to a minimum and elaborate networks of informal communication monitored such contacts to enforce rigidly the social code. (Rohrbaugh 1991:141; emphasis added) When the second invitation is extended, it is clear that the host's honour challenge is turned down. This is clear from the three excuses in Luke 14:18-20. Important here is not the content of the excuses, but what lies behind them. The host is shunned, not only by the three guests who make excuses but by all of the invited. Everybody who was invited (the many) turns down the invitation. Nobody shows up, because of the gossip network of the community. The host was morally assessed, and boundary maintenance took place. Something was wrong with the feast. What it was, the parable does not say. It was, however, a good enough reason not to attend.
Receiving this news, the host got angry. He did not make it amongst his peers. Boundaries were drawn and he was rejected and shamed. What could he do to save face? This is the surprising element of the parable. The host decides to be a different kind of host, a host not interested in honour ratings or balanced reciprocity (what he can get out of inviting people to a feast). He, therefore, sends his slave to invite people living in the wider streets and squares and the narrow streets and alleys (Lk 14:21) - those who live in the city between the inner and outer walls. And when there is still room for more, he sends his slave to invite those in the roads and country lanes or hedges (Lk 14:23) - the socially impure (expendables) living outside the city walls.
Whilst the urban elite who were first invited took significant steps to avoid contact with those living outside the inner and outer walls of the city, the host socialises and eats with them. He abandons the ever-present competition for acquired honour in the first-century Mediterranean world, replaces balanced reciprocity (quid pro quo) with generalised reciprocity (giving without expecting anything back) and declares null and void the purity system which deems some as socially and ritually (culturally) impure. All walls have been broken down, privileged space has been erased and the world has been turned upside down.
Not, however, from the perspective of the kingdom of God, the point Jesus wanted to make with the parable. In the kingdom, elite hosts are real hosts when they act like the host in the parable: giving to those who cannot give back, breaking down physical (walls) and man-made boundaries (purity and pollution) and treating everybody as family (generalised reciprocity), without being afraid of being shamed. This was the kingdom of God, a kingdom in which the pivotal value of honour that organised and stratified society had no role, a kingdom in which purity did not ostracise and marginalise the so-called unclean or expendables. In the kingdom, there are no boundaries between people, and no space is deemed privileged.
Conclusion
The physical design of the pre-industrial city was a determinant of social status. People were separated by walls; the political and social elites were walled off, and gates controlled the interaction between the different social groups that inhabited the city. The socially ostracised were 'housed' outside the walls and were only allowed to enter during the day for very specific purposes, such as working as day labourers.
Not much has changed in the post-apartheid city. Today the inner city no longer hosts all of the elite, as the elite have moved to the periphery of the city and created new walled-off, privileged spaces. If you are not part of the elite, you may not freely enter the new walled-off places. Entry is limited to invitation and permission. The CRC contributes to the creation of walls and privileged spaces. Physical and metaphorical walls are created. People from Mamelodi and other places who fall outside the social class of the elite are welcome to enter the church, but only when they are invited, given permission and only during certain times. They are bussed in and out again to limit contact between the elite and the non-elite, as in the pre-industrial city.
People who are invited to the walled-off CRC are people who can reciprocate. They are not just encouraged, but reminded that they are obliged to give money abundantly to the church; otherwise, God will not bless them. You must be able to take part in the quid pro quo system.
In this parable, Jesus is inviting us to be like the host. We must break down the walls and barriers that uphold the status quo of society, where the vulnerable are ostracised to the margins, to live out a life on a dumpsite, digging through the trash, eating rotten food - all for the sake of survival.
In the church, Christ is the host who invites those on the other side of the wall, those whom society does not welcome. We as the church are challenged by this parable to critically ask the question: Who do we invite and allow to come to the table of Christ? Is there any person that we are excluding? Are we creating boundaries and privileged spaces on the basis of people's socio-economic status? Or are we actively working against the injustice of boundaries? How do we justify spending millions on an exclusive worshipping space whilst there are people within walking distance who are living on a landfill site?
When Jesus and his disciples left the Hulene dump and came out in the beautiful city with all its cars, He stopped and told his disciples that those who they just met at the dump will inherit the kingdom of God.(De Beer 2014a:1, quoting Father Juliao Mutemba on his reflection on the Hulene Dumpsite in Maputo, Mozambique)
On 21-22 September 2015, a conference on spatial justice was hosted by the Unit for Social Cohesion and Reconciliation, located in the Centre for Contextual Ministry, University of Pretoria. The conference was held in conjunction with the Religious Cluster of the Ubuntu Research Project, which involves different key role players. The theme of this conference was prompted by a conference 2 years earlier, in 2013, with the theme Rainbow: premise or promise? Consultation on social cohesion and reconciliation. That conference was also hosted by the Unit for Social Cohesion and Reconciliation. During the 2013 conference, it became clear that there was a need to reflect on theology and space, as well as to discern a theological spatial agenda.
Encounters at Christian Revival Church, Pretoria
Entering the enormous, modern building, it is not immediately clear what you will get in this place: is it a concert hall; is this a night club or rock concert 'minus the scents of sin, smoke and alcohol' (Maddox 2012:147); or can this be a church? It has all the elements of a modern-day building, including live-stream capabilities; however, it lacks the traditional Christian symbols that you will find in a traditional Christian church (cf. Yip & Ainsworth 2013:511). They herd you until you are seated, and strongly discourage you from choosing your own seat; you are to be content with the one allocated to you.

Money is a strong focus of the church. You are reminded to tithe and, above that, to give abundantly. Our various information sessions informed us that CRC focuses on the rich rather than the poor. Without the rich, the poor cannot be bussed in and out. The church needs more rich people than poor people.

Mamelodi is located in Region Six of the Metropolitan Municipality and is divided into western and eastern parts. Mamelodi is one of the biggest townships in the City of Tshwane, 5 with an almost 98% representation of black communities who speak Northern Sotho, Ndebele, Zulu and Tsonga (Nkosi 2014:14). The dumpsite is located in the eastern part, with a new Reconstruction and Development Programme development in the vicinity. Through various forced removals and constant fighting for survival, Mamelodi continues to be a fragmented community. The people of Phomolong, Mamelodi EXT 6 were removed from Marabastad to 'make way for new developments. Twelve years later, the proposed developments never occurred' (cf. Council moves Avondzon squatters to a rubbish dump 2008; De Beer 2014b:222; Funeral for drowned Mamelodi boys 2015; Landman 2010; Marabastad protests relocation 2002; Ndlazi 2016; Selaluke 2012). Hatherley dumpsite serves as a reminder of the fractured, neglected, abused and vulnerable communities of Tshwane. Social boundaries are drawn around communities such as these. This is evident in the lack of interaction from churches at the dumpsite, especially the CRC.

Entering the dumpsite, you will have the feeling that you are in a discarded place, a place outside people's daily thoughts. You know that you are at the periphery, where vulnerable and poor people are disconnected from normal social networks. The unidentifiable smells challenge your sense of smell. Hundreds of people, caught in the vicious jaws of poverty, inhabit the dumpsite. They run after the dump trucks, seeking some form of livelihood in other people's garbage. They compete with the heavy machinery to collect potentially valuable items before they are destroyed or buried underneath a mountain of garbage. One person's trash is another's treasure. Here are people who live on the margins of society, where their basic human rights and dignity are not affirmed. Dumping sites are prophetic signs of what is wrong with our society - they are places where humans are discarded as waste together with toxic materials, dirty needles and wasted food; and a sign of the grossest possible failure of creation in its most vulnerable state - unsustainable both ecologically and in terms of human well-being. (p. 1)

4. See http://www.crc.org.za/
5. Sources vary on the population of Mamelodi. According to Statistics South Africa's 2011 Census, Mamelodi has a population of 334 577 counted persons. Other sources estimate that this is a conservative figure. Wireless Africa (see http://wirelessafrica.meraka.org.za/wiki/index.php/Mamelodi_Mesh) estimates the population close to one million.
| 2018-12-11T10:11:45.815Z | 2016-10-31T00:00:00.000 | {
"year": 2016,
"sha1": "d081b35dc31492b8ad870fa77bb374b2dcb5a509",
"oa_license": "CCBY",
"oa_url": "https://hts.org.za/index.php/hts/article/download/3512/8556",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d081b35dc31492b8ad870fa77bb374b2dcb5a509",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Sociology"
]
} |
125613853 | pes2o/s2orc | v3-fos-license | Energy acceptance and on momentum aperture optimization for the Sirius project
A fast objective function to calculate Touschek lifetime and on momentum aperture is essential to explore the vast search space of strength of quadrupole and sextupole families in Sirius. Touschek lifetime is estimated by using the energy aperture (dynamic and physical), RF system parameters and driving terms. Non-linear induced betatron oscillations are considered to determine the energy aperture. On momentum aperture is estimated by using a chaos indicator and resonance crossing considerations. Touschek lifetime and on momentum aperture constitute the objective function, which was used in a multi-objective genetic algorithm to perform an optimization for Sirius.
Introduction
The adopted approach to optimizing the nonlinear optics of the Sirius storage ring is to use a Multi-Objective Genetic Algorithm, where the objective function is an estimator of the on-momentum aperture area and the Touschek lifetime. Details about the Sirius storage ring lattice can be found in [1] and [2]. On-momentum aperture calculation is performed in key regions using chaos indicators, and energy acceptance calculation is performed using the concept of energy aperture (dynamic and physical). The Touschek lifetime is calculated through the energy acceptance. The idea is to have a fast algorithm to estimate the on-momentum aperture area and Touschek lifetime, instead of performing a long and precise calculation, which in the case of the detailed lattice of Sirius takes around 7 hours. The goal is to take between 2 and 10 minutes per ring model. Therefore, we do not want to simulate lattice errors and correct them for each storage ring model tested, since that takes a considerable amount of time. Furthermore, for a lattice without errors, it is possible to take advantage of the five-fold symmetry of the Sirius optics by simply using one superperiod for tracking. Regarding this strategy, care should be taken in 6D tracking not to change the synchrotron frequency or the RF bucket. Our approach was to adjust the RF cavity frequency f_RF so that f_RF = k·f₀, where k is the harmonic number and f₀ is the revolution frequency in a superperiod.
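As an illustration of this superperiod strategy, the sketch below recomputes the RF frequency from the revolution frequency of a single superperiod, following the relation f_RF = k·f₀ stated above. It is a minimal Python sketch; the circumference, superperiod count and harmonic number are illustrative placeholders, not values taken from the paper.

```python
# Minimal sketch of the RF adjustment f_RF = k * f0 for single-superperiod
# tracking. All numeric inputs are illustrative assumptions.

C_LIGHT = 2.99792458e8  # speed of light [m/s]

def rf_frequency_for_superperiod(circumference_m, n_superperiods,
                                 harmonic_number, beta=1.0):
    """Return f_RF = k * f0, where f0 is the revolution frequency of one
    superperiod (an ultra-relativistic beam is assumed by default)."""
    c_sp = circumference_m / n_superperiods   # length of one superperiod
    f0 = beta * C_LIGHT / c_sp                # superperiod revolution frequency
    return harmonic_number * f0

# Example with placeholder numbers (not Sirius design values):
print(rf_frequency_for_superperiod(500.0, 5, 176))
```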
To rigorously test a ring model, a set of twenty lattices with different configuration errors is generated. For each lattice, orbit, tune, coupling and symmetry are corrected. Then, on and off momentum apertures are carefully determined for each one of them. Throughout the paper, we always refer to this test when introducing lattice errors; the average of the results represents the estimation.
See Table 1 for definitions of the notation and symbols used in the paper. A 4D map implies that cavity and radiation are not considered. When not explicitly stated, derivatives are taken with respect to the longitudinal position.

Table 1. Notation and symbols.
- s ∈ [0, L): longitudinal position
- δ: energy deviation
- τ: delay with respect to the synchronous particle
- β: energy-dependent betatron function
- ν: horizontal and vertical tunes
- x = (x, x′, y, y′, δ, τ): space coordinates
- x_co^XD: 4D or 6D closed-orbit coordinates
- Φ_N^XD(x): N-turn 4D or 6D map of a particle, where x represents the initial coordinates
- ‖·‖₂: the Euclidean norm
Energy Acceptance Calculation
Let us define the physical aperture A_phys as the smallest invariant betatron amplitude for which a particle with energy deviation δ collides with the vacuum chamber; it is computed from X_VC, the vacuum-chamber half-width. The dynamic aperture represents the smallest invariant betatron amplitude for which the particle will eventually be lost. In the case of Sirius, where we do not have an x-plane symmetry, the calculation of the dynamic aperture A_dyn uses Ω(δ), the greatest interval containing x_co^4D(s₀, δ) within which particles survive tracking, for a fixed longitudinal position s₀, which in our case was chosen in a straight section. All the above 4D closed-orbit coordinates depend on s₀ and δ. ‖Φ_∞^6D(x)‖₂ < ∞ means that the particle trajectory does not diverge. In practice, we used N = 131 turns for a rough estimation and N = 900 for a more precise determination of A_dyn. The next step is to calculate the invariant betatron amplitude a_ind induced by Touschek scattering. A more detailed explanation of the equation used to calculate a_ind is given in [3]. For each s*, we solve Equation (3) for δ; it has two solutions, δ_t⁺(s*) and δ_t⁻(s*), which are the positive and negative local transverse acceptances, respectively. The overall energy acceptance is then the most restrictive of the local transverse acceptance, δ_RF, the acceptance of the RF system, and δ_w^±, the energy deviation (positive and negative) for which the tune crosses a resonance. An analogous approach to calculating the energy acceptance is also used in [4]. Finally, the Touschek lifetime is calculated by the method described in [5]. Figure 1 shows that the limiting aperture for Sirius is the dynamic aperture. Black dots represent the corresponding invariant betatron amplitude of particles lost during tracking in the rigorous test with errors. In this case, the acceptance estimation is a good approximation of the Touschek acceptance. However, the method presented to calculate the Touschek lifetime only guarantees an upper bound, which is mostly a satisfactory approximation.
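A minimal sketch of how the overall-acceptance logic above can be assembled once the local transverse acceptances are known is given below. The function name, the mock acceptance profile and the numerical values of δ_RF and δ_w are assumptions for illustration; the actual δ_t(s) values come from solving Equation (3) along the lattice.

```python
import numpy as np

# Hedged sketch: the overall energy acceptance at each scattering position s
# is the most restrictive of the local transverse acceptance delta_t(s), the
# RF acceptance delta_RF and the resonance-crossing limit delta_w.

def overall_energy_acceptance(delta_t, delta_rf, delta_w):
    """delta_t: array of positive local transverse acceptances along s."""
    return np.minimum(delta_t, min(delta_rf, delta_w))

# Mock positive-side acceptance profile along one superperiod (placeholder):
s = np.linspace(0.0, 100.0, 200)
delta_t_plus = 0.05 - 0.01 * np.sin(2 * np.pi * s / 100.0) ** 2
delta_acc_plus = overall_energy_acceptance(delta_t_plus, 0.045, 0.06)
print(delta_acc_plus.min(), delta_acc_plus.max())
```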
On Momentum Aperture Calculation
For the Sirius project, the horizontal-plane aperture on the negative side is important, since injection will occur at x ≈ -8 mm [6]. We use chaos indicators to estimate this aperture. When lattice errors are introduced, particles in regions with a significant trace of chaos are usually lost. In this section, we compare the performance of two chaos indicators: the well-known Diffusion and the proposed ASDR (Average Square Distance Ratio).
Diffusion
Diffusion is calculated using NAFF (Numerical Analysis of Fundamental Frequencies) [7], a fast and precise algorithm to calculate the fundamental frequencies of a motion. As explained in [8], let us denote NAFF by the operator ν_N(x₀), which calculates the fractional parts of the horizontal and vertical tunes, based on 4D tracking of N turns in the ring, starting from coordinate x₀. The diffusion vector D is defined as the change in these tunes between the first N turns and the following N turns. A chaos indicator is then given by ‖D‖₂. We observed that ‖D‖₂ > 10⁻⁴ represents a significant probability of losing the particle when lattice errors are introduced.
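A sketch of the diffusion indicator follows. NAFF itself is not reproduced; as a stand-in, the fractional tune of each half of the tracking data is estimated from the dominant FFT line, which is coarser than NAFF but exposes the structure of the indicator: the tune change between the first and second N turns.

```python
import numpy as np

def tune_fft(u):
    """Rough fractional-tune estimate from the dominant FFT line of a
    turn-by-turn signal (a coarse stand-in for NAFF)."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    spectrum = np.abs(np.fft.rfft(u))
    k = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    return k / len(u)

def diffusion_norm(x_turns, y_turns):
    """||D||_2 from 2N turns of horizontal/vertical turn-by-turn data."""
    n = len(x_turns) // 2
    d = np.array([tune_fft(x_turns[n:2 * n]) - tune_fft(x_turns[:n]),
                  tune_fft(y_turns[n:2 * n]) - tune_fft(y_turns[:n])])
    return np.linalg.norm(d)

# A value above ~1e-4 flags a significant probability of particle loss
# once lattice errors are introduced (see text).
```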
ASDR
Let {x_{i,0}}, 1 ≤ i ≤ M, be a set of initial conditions and x_{i,n} = Φ_n^4D(x_{i,0}). For 1 ≤ i < M we define, for each coordinate ζ in R = {x, x′, y, y′}, a squared-distance ratio between the trajectories of neighbouring initial conditions; for example, ζ = x gives x_{i,n}, the horizontal position coordinate of the vector x_{i,n}. The ASDR indicator is then the average of these ratios over the #(R) = 4 coordinates. To simplify the analysis, let us consider only the horizontal dynamics. Notice that we must have successive initial conditions close to each other, in particular Δω_i ≪ 1/N ≪ 1; and β_x must be small, in order to obtain a π/2 phase shift between x and x′, which allows us to cancel oscillating terms. Furthermore, ω_i/(2π) must be far from any integer, in order for the oscillating terms to remain small. It is important to note that, if we are in a region where the frequency shift Δω_i ≪ (ΔA_i/A_i)/N, then ASDR_i² ≈ 1/4. Analogous reasoning can be applied when the vertical dynamics is added. For a well-behaved motion, it is expected that 1/4 < ASDR_i² < 1. Therefore, for sufficiently close initial conditions, ASDR_i² > 1 indicates a motion different from the one proposed, which suggests chaotic behaviour.
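The sketch below is one plausible reading of the ASDR construction: for each pair of neighbouring initial conditions, the squared distance between the two trajectories is averaged over the tracked turns, normalised by the initial squared distance coordinate by coordinate, and then averaged over the four coordinates of R. The exact normalisation is an assumption, since the display equations did not survive extraction; only the interpretation of the thresholds (≈1/4 for regular motion, >1 suggesting chaos) is taken from the text.

```python
import numpy as np

def asdr_squared(traj_a, traj_b):
    """ASDR_i^2 for two neighbouring trajectories.

    traj_a, traj_b: arrays of shape (N_turns, 4) holding (x, x', y, y')
    turn by turn. Assumes the two initial conditions differ in every
    coordinate, so the initial squared distances are nonzero.
    """
    d2 = (np.asarray(traj_b) - np.asarray(traj_a)) ** 2
    d2_initial = d2[0]                  # squared distance per coordinate, turn 0
    ratios = d2.mean(axis=0) / d2_initial
    return ratios.mean()                # > 1 suggests chaotic behaviour

# Usage: scan a line of initial conditions, track each for N turns with the
# 4D map, and evaluate asdr_squared on consecutive pairs; compare with the
# threshold ~1.05 used in the Comparison section.
```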
Comparison
We have compared the performance of the chaos indicators over 62 ring models for which we had already performed the rigorous test including errors. Figure 2 shows the Relative Mean Square Error (RMSE) of each indicator as a function of the chosen threshold. As expected, if the chaos indicator threshold is too small or too large, the prediction fails.
For both indicators, the number of turns per initial condition was N = 130; the step between initial conditions was 20 µm for x⁻ and 8.5 × 10⁻⁵ for δ⁻.
For this comparison, the ASDR indicator, with a relative error of approximately 10% at a threshold around 1.05, performs better than Diffusion, with a relative error of approximately 15% at a threshold around 10⁻⁴. In fact, we observed that at these thresholds ASDR generally predicts a smaller aperture, which is more accurate than the Diffusion prediction.
Optimization Results
The described methods for calculating the energy acceptance and the on-momentum aperture are sufficient to form an objective function, which computes the Touschek lifetime τ_TC and the on-momentum aperture area A_x×y as a function of the ring model. This function takes between 2 and 10 minutes per model, while the standard technique takes around 7 hours. Let us represent the objective function as f(S, ν) = (τ_TC, A_x×y), where S is the vector of strengths of the sextupole families, N_S = dim(S) (in the case of Sirius, N_S = 21), and the quadrupole strengths are calculated in order to reach the specified tunes ν with minimal modification. Singular Value Decomposition (SVD) was used to perform this task. This objective function is used in a Multi-Objective Genetic Algorithm (MOGA), which in our case was NSGA-II. A detailed explanation of this algorithm is given in [9]. To enhance search time, some restrictions are imposed which are much faster to calculate than the objective function. The feasible region was restricted to ring models with small positive chromaticity and a minimum upper bound for the Touschek lifetime, which is calculated using only the physical aperture; see Eq. (3). The gain is a fast way to avoid rings which do not have the minimum lifetime required.
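The SVD step mentioned above admits a compact sketch: if J is the tune-response matrix ∂ν/∂K of the quadrupole families, the minimum-norm strength change reaching the target tunes is obtained from its pseudo-inverse. The matrix entries and tune values below are placeholders, not Sirius data.

```python
import numpy as np

def match_tunes(J, nu_target, nu_current):
    """Minimum-norm quadrupole-strength change dK such that J @ dK ~ d_nu.

    numpy's pinv performs the SVD-based least-squares inversion."""
    d_nu = np.asarray(nu_target, float) - np.asarray(nu_current, float)
    return np.linalg.pinv(J) @ d_nu

# Mock 2 x 3 response of (nu_x, nu_y) to three quadrupole families:
J = np.array([[0.80, 0.30, -0.10],
              [-0.20, 0.60, 0.40]])
dK = match_tunes(J, nu_target=(0.160, 0.220), nu_current=(0.150, 0.210))
print(dK)
```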
At a certain point in the optimization of Sirius, NSGA-II was exploring solutions in which some driving terms (see [10]) were being increased to enhance the on-momentum aperture (h_21000, h_30000) or the energy acceptance (h_10110); however, when lattice errors were added, the particles did not survive. Therefore, these driving terms were added as constraints, required to remain smaller than a certain quantity, thus avoiding this kind of problem. Another strategic constraint we added was on a specific fourth-order tune resonance in the x-plane, which, along with the chaos indicators, aided the prediction of the on-momentum aperture.
The best results of the optimization, which were submitted to the rigorous test with errors, are listed in Table 2, where we used a multi-bunch uniform-filling operation mode, with 1% coupling and a total current of 100 mA. Despite model R11G70M024 having a gain of 2.3 hours in Touschek lifetime, the loss in on-momentum aperture may be risky for injection. The safest result is model R11G16M019, where we have a significant gain in every aspect and a small standard deviation.
Conclusion
Using the energy acceptance and on-momentum aperture methods alone, we only had an upper-bound estimation of the parameters. However, adding constraints related to driving terms and resonances made the methods reasonable approximations.
The best-ranked machines from the optimization were selected for the rigorous test with lattice errors. We could then identify the best configuration models among them. These results are shown in Table 2.
The optimization method still needs improvement. The next step is to study the possibility of using another optimization algorithm with faster convergence than NSGA-II, for example the algorithm presented in [11]. | 2019-04-22T13:07:19.818Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "de2e0e0bc9dccf080026742b11fd6f46fda29fd0",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/874/1/012068/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "d8822bdf4126198e5721783d89d288b0a9fee0b8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
54662942 | pes2o/s2orc | v3-fos-license | Lipid peroxide , glutathione and glutathione-dependent enzyme ( GST ) in mixed zooplankton from the North-West Coast of India : Implication for the use of environmental monitoring
This work deals with an experiment on mixed zooplanktonic organisms collected from the shore water of the Diu coast. The study analyzed and measured lipid peroxidation (LPX), as a marker of oxidative stress, and glutathione-S-transferase (GST) activity, as a marker of organic pollution. Both LPX and GST activities were highest at the stations close to the shore and decreased marginally along the transect. The reduced glutathione (GSH) levels were highly variable. The results are discussed in relation to the biomarker application of mixed zooplankton, lipid peroxidation (LPX), glutathione-S-transferase (GST) and reduced glutathione (GSH). The results indicated that the three potential markers (LPX, GST and GSH) could be used as a measure for bio-monitoring the coastal ecosystem, using mixed zooplankton as suitable organisms.
INTRODUCTION
Marine environments are subjected to several forms of disturbance related to anthropogenic activities such as shipbuilding, transport, oil refinery spills, industrial and urban effluents, and dredging and dumping of sediments, often extending their influence through diffusion of pollutants. The waterborne pollutants which generate reactive oxygen species (ROS) are a source of toxicity for aquatic organisms living in the polluted environment and are practically responsible for the disruption of physiological functions (Livingstone, 2001).
ROS such as the superoxide radical (O₂•⁻), the hydroxyl radical (OH•) and hydrogen peroxide (H₂O₂) are highly reactive compounds which, if not neutralized efficiently, will attack almost all biomolecules of cells, including proteins and DNA. Membrane lipids are most susceptible to their attack, generating lipid peroxidation (Elstner, 1991). To protect themselves from these highly reactive compounds, cells have their own interdependent antioxidative defense system, composed of protective proteins that remove ROS/RNS. These chemical radicals are normally metabolized by enzymatic antioxidants, such as superoxide dismutase (SOD), which converts the superoxide anion (O₂•⁻) to H₂O₂ in all aerobic organisms. Catalase degrades H₂O₂ to water and oxygen, and GPx detoxifies both H₂O₂ and hydroperoxides (ROOH), using reduced glutathione (GSH) as a cofactor. Intracellular redox homeostasis is regulated by antioxidants, mainly by thiol-containing molecules such as GSH. Glutathione reductase (GR) has an important role in maintaining the intracellular status of GSH. Glutathione S-transferase (GST) catalyzes conjugation reactions with GSH. Apart from this, cells possess other antioxidant defense mechanisms to neutralize the effects of toxic compounds. Metallothionein (MT), a protein with a high content of cysteine (SH) groups, detoxifies heavy metals (Halliwell et al., 1986a; Romero-Isart et al., 2002).
Zooplankton are small heterotrophic animals inhabiting almost every type of aquatic environment. Numerous studies have demonstrated the suitability of zooplankton for biomarker studies (Fossi et al., 2001a; Petres et al., 2001b). Marine zooplankton have relatively high lipid reserves and can accumulate hydrophobic compounds directly from the sea or by ingestion of contaminated prey. Several studies have focused on the distribution of conservative contaminants in zooplankton (Harding, 1986; Shailaja et al., 1991). Kureishy et al. (1978) and Kanan and Sen Gupta (1987) also reported values of total DDT in zooplankton from the Arabian Sea of 0.3 to 3.2 µg/g, but very little has been concluded about their toxic effects in marine organisms, especially in zooplankton. Zooplankton are close to the base of the marine food web and can therefore play a vital role in the transfer of pollutants through several trophic levels. Ecotoxicological risks to zooplankton can be used as an early warning signal of risk to the health of the marine ecosystem. In this study, the potential use of certain biomarkers in mixed zooplankton was investigated. Attempts were made to measure the level of LPX, the GST activity and the GSH level in mixed zooplankton.
MATERIALS AND METHODS
Zooplankton samples were collected from 12 different stations along and off the Diu coast on the 5th and 7th of October 2009, covering the geographical limits of latitude 20° 39' 53.3'' to 20° 42' 02.3'' N and longitude 71° 00' 26.0'' to 71° 04' 59.7'' E (Figure 1). For this purpose, horizontal hauls were taken using a Heron Traton net of mesh size 200 µm. A trawler was used to operate the net (tow speed 1.5 knots, haul time 5 min). The total water filtered was computed with the help of the current speed, the mouth area of the sampling net and the towing time. 50% of the collected sample was preserved in 5% neutralized formaldehyde for identification and quantitative analysis, and the remaining 50% was filtered, washed in distilled water and cryopreserved in liquid nitrogen cans for biochemical estimation.
TBARS assay
The whole animals were processed and assayed according to the method of Ohkawa et al. (1979) with minor modifications. In brief, the reaction mixture, which contained approximately 1 mg of protein, 1.5 ml of 0.8% aqueous solution of thiobarbituric acid (TBA), 1.5 ml of 20% acetic acid (CH₃COOH) adjusted to pH 3.5 with NaOH, 0.2 ml of 8.1% sodium dodecyl sulfate (SDS), 0.2 ml of double-distilled water and 0.1 ml of 0.76% butylated hydroxytoluene (BHT), was heated at 95°C for 60 min, then cooled to room temperature and centrifuged at 2000 × g for 10 min. The absorbance of the supernatant was read at 532 nm. The amount of thiobarbituric acid reactive substances (TBARS) formed was calculated using an extinction coefficient of 1.56 × 10⁵ M⁻¹ cm⁻¹ and expressed as nmol TBARS per mg protein.
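A minimal sketch of the TBARS quantification via the Beer-Lambert law, using the extinction coefficient stated above, is given below. The absorbance, reaction volume and protein mass used in the example call are illustrative values, not data from the study.

```python
# Beer-Lambert sketch: concentration = A / (epsilon * path length).

EPSILON_TBARS = 1.56e5   # M^-1 cm^-1, from the text

def tbars_nmol_per_mg(a532, protein_mg, volume_ml, path_cm=1.0):
    """nmol TBARS per mg protein in the assayed reaction mixture."""
    conc_molar = a532 / (EPSILON_TBARS * path_cm)      # mol/L of TBARS
    total_nmol = conc_molar * (volume_ml / 1000.0) * 1e9
    return total_nmol / protein_mg

# Illustrative call (values assumed, not measured):
print(tbars_nmol_per_mg(a532=0.25, protein_mg=1.0, volume_ml=4.5))
```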
Measurement of GST activity
The mixed zooplankton was carefully surface dried with filter paper, thoroughly washed with phosphate buffer (pH 7.4) and homogenized with 50 mM phosphate buffer (pH 7.4) containing 1 mM EDTA, 1 mM DTT, 0.15 M KCl and 0.01% PMSF. Homogenization was carried out at 4°C using 12 to 15 strokes of a motor-driven Teflon Potter homogenizer, and the homogenate was centrifuged at 10,000 × g. The supernatant was taken for the enzyme assay. The GST activity was measured at 340 nm according to Habig et al. (1974) using 0.3 mM CDNB as substrate.
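A hedged sketch of how a specific GST activity could be derived from the CDNB assay at 340 nm follows. The paper does not state the extinction coefficient; 9.6 mM⁻¹ cm⁻¹, the value commonly used for the GS-DNB conjugate in the Habig et al. (1974) method, is assumed here, as are all numeric inputs in the example call.

```python
EPSILON_GS_DNB = 9.6   # mM^-1 cm^-1, assumed (standard for the Habig method)

def gst_specific_activity(dA340_per_min, assay_volume_ml,
                          sample_volume_ml, protein_mg_per_ml, path_cm=1.0):
    """Specific activity in umol CDNB conjugated per min per mg protein."""
    rate_mM_per_min = dA340_per_min / (EPSILON_GS_DNB * path_cm)
    umol_per_min = rate_mM_per_min * assay_volume_ml   # mM x mL = umol
    protein_mg = protein_mg_per_ml * sample_volume_ml
    return umol_per_min / protein_mg

# Illustrative call (values assumed):
print(gst_specific_activity(0.048, assay_volume_ml=3.0,
                            sample_volume_ml=0.1, protein_mg_per_ml=5.0))
```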
Measurement of reduced glutathione (GSH)
A crude homogenate (20%) was prepared with 5% trichloroacetic acid (TCA) and centrifuged at 2000 × g for 10 min. The deproteinised supernatant was used for the assay of GSH using 0.6 mM DTNB in 0.1 M phosphate buffer (pH 8.0), and the formation of the thiol anion was measured at 412 nm.
Estimation of proteins
Protein content was estimated by the Folin-phenol reaction as described by Lowry et al., using bovine serum albumin (BSA) as a standard.
DISCUSSION
The aim of this study was to evaluate the potential use of certain biomarkers (LPX, GST and GSH) in mixed zooplanktonic organisms. LPX, GST and GSH levels were measured in the sub-cellular fraction of mixed zooplankton from the Arabian Sea. Attack by molecular oxygen produces a lipid peroxyl radical that can abstract a hydrogen atom from an adjacent lipid to form a lipid hydroperoxide. The LPX process affects membrane fluidity and the integrity of biomolecules associated with the membrane and can be quantified by the TBARS assay. The formation of TBARS is commonly used as a marker of biomembrane lipid peroxidation, which is an indicator of oxidative stress (Oakes et al., 2003). Therefore, lipid peroxidation (LPX) has been taken as an index of oxidative stress. The highest LPX levels were observed at stations 2, 3, 5, 7, 8, 9, 10, 11 and 12 (Figure 2a), closer to the outflow of the river Ghogla (Figure 1). Since TBARS is primarily an outcome of the generation of free radicals, it is suggested that these organisms faced an oxidative challenge. This result agrees with results obtained with mussels exposed to copper (Cu), aluminum (Al), lead (Pb) and cadmium (Cd) and in field studies (Viarengo et al., 1990; Torres et al., 2002). Earlier laboratory findings suggested that exposure of Perna viridis to mercury (Hg), the water-soluble fractions (WSF) of petrol and diesel, naphthalene and pyrene generated LPX. GSH is the most abundant intracellular thiol and, through its antioxidant properties, is able to protect cells from ROS formed during normal mitochondrial respiration and the metabolism of foreign chemicals (Hammond et al., 2004). Glutathione exerts its antioxidant function by reaction with the superoxide radical (O₂•⁻), peroxyl radicals and singlet oxygen (¹O₂), followed by the formation of oxidized glutathione and other sulfides. A decrease in the tissue concentration of available GSH (Figure 2c) may be due to increased consumption of GSH via GST, or GSH may have reacted directly with ROS, in turn leading to an alteration in the redox balance. Decreased GSH levels were found in mussels exposed to copper (Cu). De Giulio et al. (1995) suggested that trace metals can bind to GSH and thereby limit its availability to counteract lipid peroxidation and DNA damage. Further, when the GSH values were plotted versus LPX, they showed a correlation (r = -0.596, p < 0.05) (Figure 3a). This suggests that GSH might play an important role in protecting the organisms from ROS.
Potential sources of contaminant-mediated ROS production by marine organisms include interactions with metals, petroleum aromatic hydrocarbons (PAH) and other organic chemicals such as nitroaromatics (Lemaire et al., 1994). The study also focused on the phase II GST enzymes at the different stations. GSTs are a multigene family of enzymes (isoforms) grouped into seven classes: alpha, mu, pi, theta, sigma, zeta and omega (Lenartova et al., 1997a; Mennervick, 1985b). The enzyme catalyzes the conjugation of a large variety of compounds bearing an electrophilic site with reduced glutathione (Edwards et al., 2000). GST activity varied between stations (Figure 2b). Interestingly, a positive correlation was obtained with the LPX level (Figure 3b). Edwards et al. (2000) reported that glutathione transferases play an important role in the detoxification of ROS in cells, protecting the lipids from peroxidation. The induction of GST activity in mussels by toxicants in field studies has been reported by several authors (Prakash, 1995; Torres, 2002; Canesi, 1999; Bainy, 2000; Strmac and Braunbeck, 2002; Lau, 2003). The increase in GST activity may be a strategy to prepare for oxidative stress in an effort to defend cells against oxidative damage (r = 0.56, p < 0.05) (Figure 3b).
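The correlations reported here (r = -0.596 for GSH vs. LPX, r = 0.56 for GST vs. LPX) correspond to ordinary Pearson coefficients over the station-wise measurements. The sketch below shows the calculation; the arrays are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder station-wise values (12 stations), not the measured data:
lpx = np.array([2.1, 2.8, 2.5, 1.9, 2.9, 2.2, 2.7, 2.6, 2.4, 2.8, 2.6, 2.5])
gsh = np.array([1.4, 0.9, 1.1, 1.6, 0.8, 1.3, 1.0, 1.0, 1.2, 0.9, 1.0, 1.1])

r, p = pearsonr(lpx, gsh)   # expected: negative r, as in Figure 3a
print(f"r = {r:.3f}, p = {p:.4f}")
```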
From these results, it can be suggested that zooplankton effectively use the glutathione metabolism pathway as an adaptive response against ROS-generating pollutants.
Conclusions
In this study, variations of LPX, GSH and GST activity in mixed zooplankton were reported. An increase in LPX and GST activity was shown in mixed zooplankton collected in the near-shore area, marginally decreasing until it levels off. This study permits a better characterization of the impact of pollution on marine organisms taken together. The present results indicate that these three potential markers (LPX, GST and GSH) could be used as a measure for bio-monitoring the coastal ecosystem, using mixed zooplankton as a suitable organism.
Figure 1. Sampling site of zooplankton collection near the shore water of Diu Island.
Table 1. Measurement of LPX, GST and GSH per gram protein by the TBARS and CDNB methods. | 2018-12-05T15:13:09.769Z | 2011-08-01T00:00:00.000 | {
"year": 2011,
"sha1": "223cf642ba63d56fe503bc9cd67a2a04f8f347a6",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/63A557424751.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "1894fd55bd092fbd2cfcd54fa393d66bc8032858",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
134741335 | pes2o/s2orc | v3-fos-license | Soil Water Regime Evaluation after Biochar Amendment
In this paper, we evaluate the soil water regime in the top soil layer based on the agronomic classification. The study site was located in Malanta (near the city of Nitra, Slovakia). The whole site was divided into plots of size 6 × 4 m separated by 0.5 m bands. Our field experiment began in March 2014, when a certificated biochar was applied to the 0-15 cm depth of the soil profile at different rates. We compared two plots: one with biochar applied at 20 t/ha (B20) and a second plot without biochar amendment (Control). The soil water content at 0-15 cm depth was monitored by 5TM sensors at 5-minute intervals and stored using EM50 data loggers. Two sensors were installed at each plot and, based on the good correlation coefficient between them, their average value was used. The monitored period was from 12.8. to 22.10.2015, and the experimental area was cultivated with maize. Average daily values of soil water content and soil water storage were used for the soil water regime evaluation. The results showed that 1) the soil water content was higher at the Control plot (based on published studies, we had expected higher values of soil water content at the B20 plot); 2) the year 2015 was extremely hot, and the vegetation period, as well as the monitored period, was very dry. Therefore, the soil water content was below the wilting point hydrolimit (θWP) during a dominant part of the monitored period. These results were also reflected in the soil water regime evaluation, with a deficit of soil water for plants during a long part of the monitored period. Optimal soil water storage for plants occurred on only 13 days at the Control plot and 3 days at the B20 plot. Our hypothesis that this type of biochar (with its specific characteristics) would improve the soil water regime was not confirmed.
Introduction
Soil water is the part of the hydrosphere located in the soil profile; together with groundwater and surface water, it is the third water source directly involved not only in the hydrological cycle but also in the production and biological cycles. This water source has no alternative in the agricultural crop production process and responds sensitively not only to climate change but also to anthropogenic activity [1]. Changes in land use influence runoff processes, the soil water regime and the water balance [2]. Soil water content is one of the basic elements of the water balance. One of the modern technologies for studying the water balance in the soil-groundwater-plant-atmosphere system is the weighing lysimeter [3]. Soil water content can be influenced by surface runoff, soil erosion, the infiltration process or the management of the area [4]. The relationship between the plant and the soil water content is used to characterize the hydrolimits. Hydrolimits are specific soil water contents defined for certain values of water potential. Closer attention is paid to three hydrolimits: field water capacity (θFC), point of decreased availability (θPDA) and wilting point (θWP) [5]. The soil water content between field water capacity and wilting point is the interval at which the water in the soil is available to the plants at a given site. However, each plant has a different ability to take up water through its root system, so the critical soil water content (wilting point) is affected by the plant during the year. The soil water regime is the summary of all changes in the content, physical state and relocation of water in the soil over a certain time period, and is the typical long-term behaviour of the average values of these characteristics [6]. It is highly dependent on the grain size composition of the soil, which is, among other things, a stable feature of soil fertility. It is perhaps one of the most important soil regimes. In terms of surplus and deficit of soil water, knowledge and assessment of the soil water regime is particularly important from the production and environmental points of view [7].
The soil water regime can be assessed according to three basic criteria, which determine the name of the classification [1]:
1. according to the direction and intensity of water circulation in the soil in the hydrological year - hydrologic classification;
2. according to the predominant soil water content over a longer period of time - ecologic classification;
3. according to the relation between real and potentially accessible water for plants in the active root zone of grown agricultural crops - agronomic classification.
We chose the agronomic classification because it is the most suitable classification for evaluating the soil water regime from the viewpoint of an optimal crop state.
During the past decade, numerous articles focusing on the use of biochar have been published, but they have shown inconsistent results. Reactions in the soil after the addition of biochar depend on the characteristics of the biochar, the soil, the climate and the soil-inhabiting organisms. Due to the high variability in the quality of biochar, its effects on soils and plants are likely to differ [8]. Biochar, a carbon-rich product of the thermal degradation of biomass, may alter the physical properties of the soil [9]. Moreover, most trials with biochar have been carried out in the laboratory over short time periods, making translation of these results to field conditions difficult. Another aspect of these studies is that most have tended to focus on problematic soils (e.g. with excessive soil acidity or salinity, severe nutrient imbalances, or critically low soil organic matter), where the responses to biochar addition can often be dramatic. However, these soils are not representative of fertile agricultural areas, where the likelihood of biochar application from a practical and economic perspective may be greatest [10]. Our experiment is distinctive because it was carried out in field conditions under normal agricultural management. We aimed to study the direct impact of biochar on soil water content and soil water storage.
Studied area and used biochar
The study site was located at the experimental area in Malanta, approximately 5 km northeast of the city of Nitra in the western part of Slovakia (N 48°19'00''; E 18°09'00'') (figure 1). The site lies at an altitude of 175 m a.s.l. [11]. The soil is classified as a Haplic Luvisol [12], with 15.2% sand, 59.9% silt and 24.9% clay - a silt loam.
The whole site was divided into plots of 6 × 4 m separated by 0.5 m bands. Our field experiment began in March 2014, when a certified biochar was applied at different rates to the 0-15 cm depth of the soil profile. The biochar used for the field experiment was produced from paper fibre sludge and grain husks (1:1 by weight; Sonnenerde Company, Austria) by pyrolysis at 550 °C for 30 minutes in a Pyreg reactor. Table 1 shows the basic biochar characteristics. In our analyses we compared two plots: one with biochar applied at 20 t/ha (B20) and a second plot without biochar amendment (Control). The monitoring period lasted from 12 August to 22 October 2015.
Soil water content and soil water storage determination
The measurements of soil water content were performed with 5TM dielectric sensors (Decagon Devices, USA) (figure 2). Two sensors were installed at 5-10 cm depth at each experimental plot. The correlation coefficient between the two sensors at the same plot was 0.95 or 0.98, respectively [13]. The soil water content data were collected at 5-minute intervals and stored using EM 50 data loggers (Decagon Devices, USA). Based on the good correlation coefficients, the average value of soil water content for each plot was analysed. Soil water storage in the top soil layer (0-15 cm depth) was then calculated from the measured soil water content.
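A minimal sketch of this conversion, assuming uniform water content over the 0-15 cm layer; the variable names and values are illustrative, not measured data:

```python
# Minimal sketch: volumetric soil water content (m^3.m^-3) to soil water
# storage of the 0-15 cm top soil layer. Values below are hypothetical
# daily means of the two averaged 5TM sensor readings.

LAYER_DEPTH_M = 0.15  # thickness of the evaluated top soil layer (m)

def water_storage(theta: float, depth_m: float = LAYER_DEPTH_M) -> float:
    """Water storage (m) of a layer with uniform volumetric content theta."""
    return theta * depth_m

daily_theta_control = [0.21, 0.19, 0.18]  # hypothetical daily means
print([water_storage(t) for t in daily_theta_control])
# -> [0.0315, 0.0285, 0.027], i.e. about 31.5, 28.5 and 27 mm of water
```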
Soil water regime evaluation
An equation (1) to determine the soil water regime based on soil water content and hydrolimits was published in 1988 [14]. The original equation was modified in 2007 [15] to use soil water storage and the corresponding values of the hydrolimits (2). The agronomic classification distinguishes 10 types of soil water regime in the active root zone (table 2). Although a longer time period is normally used to determine the soil water regime, we decided to apply this calculation in a daily step in order to analyse the impact of biochar on the soil water regime. Average daily values of soil water storage were used for the daily determination of the soil water regime based on equation (2).
A = (1/n) · Σ_{i=1..n} (θi − θWP) / (θFC − θWP)   (1)

A = (1/n) · Σ_{i=1..n} (Wi − WWP) / (WFC − WWP)   (2)

where: A - agronomic coefficient for soil water regime evaluation (-); θi - average soil water content of the active root zone on the i-th day of the monitored period (m3.m-3); θWP - hydrolimit wilting point (m3.m-3); θFC - hydrolimit field water capacity (m3.m-3); Wi - soil water storage of the active root zone of the soil profile on the i-th day of the monitored period (m.m-1); WWP - soil water storage corresponding to the soil water content at the wilting point θWP (m.m-1); WFC - soil water storage corresponding to the soil water content at field capacity θFC (m.m-1); n - number of days in the monitored period. A value A > 1 (overflow of water in the soil profile) is obtained when θi > θFC, and then also Wi > WFC. A deficit of soil water content (A < 0.01) occurs when θi is near θWP, and consequently Wi is near WWP. If θi = θWP, then also Wi = WWP and the value of A equals 0. If Wi < WWP, the value of A is negative. Both of the latter cases indicate a total deficit of soil water available to plants, which signals a critical situation for vegetation. For that reason, it is useful to reflect these situations in the agronomic classification of soil water regime types, as shown in table 2 [15].
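A minimal sketch of the daily calculation, assuming a daily step (n = 1); the regime labels below are illustrative simplifications of table 2, which is not reproduced here:

```python
# Minimal sketch of the daily agronomic coefficient A (equation (2)),
# assuming a daily step (n = 1). The regime labels are illustrative:
# table 2 of the paper defines 10 types, which are not reproduced here.

def agronomic_coefficient(w_i: float, w_wp: float, w_fc: float) -> float:
    """A for one day, from storage W_i and the hydrolimit storages."""
    return (w_i - w_wp) / (w_fc - w_wp)

def regime_label(a: float) -> str:
    """Coarse labels following the boundary behaviour described above."""
    if a <= 0.0:
        return "total deficit"   # W_i <= W_WP
    if a < 0.01:
        return "deficit"         # W_i just above W_WP
    if a > 1.0:
        return "overflow"        # W_i > W_FC
    return "within available-water range"  # finer types come from table 2

# Hypothetical storages (m.m^-1) for one day:
a = agronomic_coefficient(w_i=0.028, w_wp=0.030, w_fc=0.048)
print(a, regime_label(a))  # negative A -> "total deficit"
```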
Results and discussions
The methods of evaluating the soil water regime presented above are intended for long-term measurements of soil water content. In our study we applied them to evaluate the daily soil water regime at plots with and without biochar amendment. The monitoring period (12 August - 22 October 2015) fell at the end of the maize vegetation period. Based on previous research on biochar application (e.g. [16]), we had expected higher values of soil water content, and thus also of soil water storage, at the B20 plot. The results, however, were the opposite. Soil water content was higher at the Control plot, and consequently the soil water storage at 0-15 cm depth was also higher at the Control plot (figure 3). The year 2015 was extremely hot, and both the vegetation period and the monitoring period were very dry. The deficit of soil water was caused mainly by very small precipitation totals, high air temperatures and high evapotranspiration from the top soil layer during the monitoring period. Consequently, soil water storage was below the hydrolimit wilting point (WWP) from the middle of August 2015 and during a dominant part of September 2015. These results were also reflected in the soil water regime evaluation, which showed a deficit or total deficit of soil water for plants during much of the monitoring period (figure 4). Soil water content was optimal for plants on only 13 days at the Control plot and 3 days at the B20 plot during the monitoring period.
Conclusions
Soil water regime evaluation is important for developing prognoses of soil water regime changes caused by technical interventions in the landscape and by the water balance of a given area. With respect to keeping optimal conditions for plants, the agronomic classification applied in this paper appears to be the most suitable. We used this classification for daily soil water regime evaluation, even though it is normally applied over longer time periods. Our priority was to find out whether the biochar amendment would improve soil water content and soil water storage. The results showed that soil water content and soil water storage were higher at the Control plot during the whole monitoring period and that they were below the hydrolimit wilting point during a dominant part of it. For this reason, a deficit or total deficit of soil water storage was calculated for the Malanta area based on the agronomic classification. It must be noted that a deficit or overflow of soil water is an extreme that creates critical conditions for the growth of vegetation. However, the monitoring period fell at the end of the maize vegetation period, so this situation did not affect the root system. In addition, we analysed the top soil layer at a time when the root system of the maize was deep enough to avoid damage to the crop. Our hypothesis that this type of biochar (with the characteristics described) would improve the soil water regime was not confirmed.
"year": 2019,
"sha1": "f3351294752af7b69f34fa47d964ea68f8e6bfa9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/221/1/012110",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3ea8a1e413c76c30ca5dc9aea0af9cf95840b93a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
BRCA1 Interacting Protein COBRA1 Facilitates Adaptation to Castrate-Resistant Growth Conditions
COBRA1 (co-factor of BRCA1) is one of the four subunits of the negative elongation factor, originally identified as a BRCA1-interacting protein. Here, we provide first-time evidence for an oncogenic role of COBRA1 in prostate pathogenesis. COBRA1 is aberrantly expressed in prostate tumors. It positively influences androgen receptor (AR) target gene expression and promoter activity. Depletion of COBRA1 leads to decreased cell viability, proliferation, and anchorage-independent growth in prostate cancer cell lines. Conversely, overexpression of COBRA1 significantly increases cell viability, proliferation, and anchorage-independent growth over the higher basal levels. Remarkably, AR-positive androgen-dependent (LNCaP) cells overexpressing COBRA1 survive under androgen-deprivation conditions. Notably, treatment of prostate cancer cells with the well-studied antitumorigenic agent 2-methoxyestradiol (2-ME2) caused significant DNA methylation changes in 3255 genes, including COBRA1. Furthermore, treatment of prostate cancer cells with 2-ME2 downregulates COBRA1, and the inhibition of prostate tumors by 2-ME2 in TRAMP (transgenic adenocarcinoma of the mouse prostate) animals was also associated with decreased COBRA1 levels. These observations implicate a novel role for COBRA1 in progression to CRPC and suggest that COBRA1 downregulation has therapeutic potential.
Introduction
Prostate cancer (PCA) continues to be the second leading cause of cancer-related deaths in men, with the overwhelming majority of deaths due to castration-resistant prostate cancer (CRPC) [1]. Given that early stage PCA is dependent on androgen receptor (AR) signaling, androgen-deprivation therapy (ADT) is the standard therapeutic approach for clinical management of PCA. While ADT is effective in regressing tumor growth, the response is transient (12-18 months), leading to cancer relapse. Relapse of cancer following ADT (known as CRPC) occurs because the recurring tumors grow either in the absence of or at low concentrations of androgens [2,3]. CRPC is fatal, as no effective durable systemic therapy currently exists. Despite castrate levels of androgens, AR signaling remains active under these conditions, and human prostate tumors express AR. Reactivation of AR signaling occurs through numerous mechanisms, such as AR amplification, mutation, splice variants, coregulators, inflammatory cytokines, and receptor tyrosine kinases, which contribute to the development of CRPC [4]. These data suggest that development and progression of CRPC is complex and involves compensatory signaling networks. Thus, understanding the molecular factors that contribute to progression to CRPC is critical for successful clinical management of PCA. Towards achieving this goal, we identified an unexpected role for activation of cofactor of BRCA1 (COBRA1), a protein traditionally known to be involved in transcription pausing, as yet another mechanism potentially contributing to progression to aggressive prostate cancer. COBRA1 (aka NELF-B) is one of the four subunits of the negative elongation factor (NELF) complex, originally identified as a BRCA1-interacting protein. COBRA1 prevents transcriptional elongation by stalling RNA polymerase II (RNAPII) at the proximal promoter region [5,6]. Given its ability to repress the transcriptional activity of multiple oncogenes such as estrogen receptor alpha (ERα), earlier studies suggested that COBRA1 may play a tumor-suppressor role [7,8]. Emerging evidence suggests paradoxical oncogenic and tumor-suppressive roles for COBRA1. For example, human gastrointestinal adenocarcinomas show increased COBRA1 expression and protein levels compared to the normal upper gastrointestinal tract, implicating a potential oncogenic role for COBRA1 [9]. Recent studies also indicate a developmental role, since mouse embryonic fibroblasts from COBRA1-KO animals show reduced proliferation and elevated apoptosis [10,11]. Interestingly, COBRA1 functions as an AR co-activator by virtue of its ability to interact with the AR ligand-binding domain (LBD) [12]. Clinically, patients carrying BRCA mutations are at significantly elevated risk of developing metastatic disease and death from PCA [13]. Furthermore, recent studies showed that nearly 20% of prostate cancer patients who carry a biallelic BRCA1 mutation are at risk of developing castrate-resistant prostate cancer. More importantly, patients carrying biallelic inactivation of BRCA2 are responsive to PARP-1 inhibitors, further emphasizing the clinical relevance [14]. Recent mouse genetic studies strongly suggest a mutually antagonistic role of COBRA1 and BRCA1 in both mammary gland development and mammary tumorigenesis [15,16]. However, the role of COBRA1 in prostate cancer is largely unknown. Based on this evidence, we tested whether COBRA1 plays a role in prostate pathogenesis either directly or through its regulatory effects on gene expression.
Here, we provide first-time evidence for an oncogenic role of COBRA1 in human prostate cancer and its potential as a therapeutic target.
Results and Discussion
Basal level and expression of COBRA1 were analyzed in a panel of human prostate cancer cell lines, a human prostate tumor array comprising low (<7) and high (≥7) Gleason score (GS) tumors, and a commercial cDNA prostate tissue array. We observed (i) elevated mRNA expression of COBRA1 with increasing tumor aggressiveness (Figure 1a); (ii) significantly increased COBRA1 protein levels in high GS tumors compared with low GS tumors (Figure 1b); and (iii) elevated levels of COBRA1 in an advanced mesenchymal phenotype cell line compared with its isogenic epithelial counterpart (Figure 1c). In silico analysis of Oncomine data showed significantly elevated mRNA expression in prostate tumors compared to normal prostate gland (Figure 1d). These data taken together suggest a potential role for COBRA1 in prostate cancer progression. Figure 1. Expression of COBRA1 in human prostate tumors: (a) Expression of COBRA1 as assessed by qRT-PCR using the Tissue Scan Prostate Cancer Tissue qPCR Panel III (Origene, Rockville, MD, USA) comprising low (GS < 7; n = 11) and high (GS ≥ 7; n = 37) GS tumors. Data were analyzed using an unpaired two-tailed t-test with Welch's correction (p = 0.0003); (b) Immunohistochemical evaluation of COBRA1 in a human prostate tumor microarray comprising low (GS < 7; n = 11) and high (GS ≥ 7; n = 13) GS tumors. Cumulative analysis of these data is presented as a box plot. Statistical analysis of the data was performed using an unpaired two-tailed t-test with Welch's correction; (c) COBRA1 mRNA expression was measured by qRT-PCR in ARCaP (E) and ARCaP (M) cells. Error bars indicate ± S.E.M. (n = 3). * p < 0.05; (d) Box plots of COBRA1 expression in normal prostate gland (NPG) and prostate carcinoma (PC) from the Oncomine database (http://www.oncomine.org, accessed on 19 May 2015). Data sets are log transformed and illustrated as median-centered box plots of the differences in mRNA expression within cohorts. Statistical significance was determined by a two-tailed Mann-Whitney test. IHC pictures shown are at 100 and 500 microns (low magnification images) and 100 and 20 microns (high magnification images) for low and high GS tumors, respectively.
It was previously shown that COBRA1 interacts with the AR LBD and can function as a coactivator of AR [12]. This evidence led to the hypothesis that COBRA1 facilitates androgen independency. To investigate this proposition, AR-expressing androgen-responsive LNCaP and castrate-resistant C4-2B cells with COBRA1 knockdown and overexpression were grown under androgen-deprived conditions. Vector-transfected (NTC) and COBRA1-silenced LNCaP cells (shCOBRA1) failed to thrive under these conditions, while COBRA1-overexpressing (pCOBRA1) cells formed large colonies and thrived under androgen-deprived conditions (Figure 2a, left panel). Surprisingly, overexpression or knockdown of COBRA1 had no effect on growth of C4-2B cells under these experimental conditions (Figure 2a, right panel). Stable knockdown of COBRA1 in LNCaP and C4-2B cells was accompanied by a small but significant reduction in proliferation under hormone-replete conditions (Figure 2b), and COBRA1 overexpression enhanced proliferation in LNCaP but not in C4-2B cells (Figure 2b). These results taken together suggest that COBRA1 may be involved in cellular adaptation under castrated conditions but may not be an important player after cells have adapted to grow in the absence of androgens. To investigate whether COBRA1 activates AR signaling, we analyzed AR reporter activity and mRNA expression changes in AR and its bona fide target genes, PSA and TMPRSS2. COBRA1-silenced cells had significantly reduced AR-reporter activity in both LNCaP and C4-2B cells (Figure 2c). While silencing COBRA1 significantly reduced AR message levels in LNCaP and C4-2B cells, the AR target genes affected differed between the two cell lines. In LNCaP cells TMPRSS2 was significantly reduced, while in C4-2B cells PSA (prostate-specific antigen) was significantly reduced (Figure 2d), suggesting differential participation of COBRA1 in AR-mediated transcriptional regulation between androgen-responsive and castrate-resistant cells. However, whether subtle differences in the amount of COBRA1 knockdown contribute to the observed differences cannot be ruled out. We interpret these observations to suggest that COBRA1 expression facilitates progression to castrate-resistant disease by affecting AR signaling. Our results do not rule out a role for other nuclear receptors in mediating these effects.
Further, COBRA1 can physically interact with other transcription factors including Sp1 or Sp3, as there is precedence for the interaction of COBRA1 with c-Fos and AP-1 [17]. Our study sets the stage for additional work to understand the mechanism(s) of COBRA1 involvement in prostate cancer progression. There was no significant difference in COBRA1 message among nontransformed and various prostate cancer cell lines, although the protein level was higher in cancer cells compared with nontransformed cells (Figure 3a,b). Based on protein levels, we chose to examine the biological effects of COBRA1 modulation using BPH1 (overexpression), LNCaP, and DU145 (silencing) cells. We observed consistent overexpression (~2 fold) in BPH1-C (BPH1-COBRA1) cells and ~0.5 fold knockdown of COBRA1 in LNCaP and DU145 cells (Figure S1a). Overexpression of COBRA1 resulted in enhanced anchorage-independent growth in BPH1 cells, while silencing COBRA1 resulted in decreased anchorage-independent growth in DU145 cells (highest basal COBRA1 level) and no significant change in LNCaP cells (Figure 3ci-ciii). It is noteworthy that although BPH1 cells exhibited a significant increase in anchorage-independent growth, these cells grew slower than the cancer cells, perhaps an indication of their nontumorigenic nature. Similar effects were observed on cell viability with COBRA1 modulation (Figure S1b).
Examination of the morphology of DU145-shCOBRA1 cells showed distinct changes suggestive of an epithelial phenotype compared with the non-targeted shRNA-transfected (NTC) cells, which appeared to have a mix of mesenchymal and epithelial phenotypes (Figure 3d). This observation prompted us to examine proteins that are well-established markers of epithelial-mesenchymal transition (EMT). We found increased levels of E-cadherin and β-catenin with no changes in vimentin (Figure 3e). These observations are consistent with the data presented in Figure 1c showing that ARCaP-M (mesenchymal) cells have significantly higher expression of COBRA1 than ARCaP-E (epithelial) cells. These data lead us to believe that high levels of COBRA1 in DU145 cells may be associated with cell plasticity due to the lack of the E-cadherin/β-catenin complex, which plays an important role in the epithelial barrier. Since gain of cell migration and loss of cell adhesion are characteristics of mesenchymal cells, we used a real-time cell imaging migration assay to test whether COBRA1 silencing would affect migratory capability. We found significantly decreased migration of shCOBRA1-DU145 cells as a function of time compared with the NTC cells (Figure 3f).
The data presented thus far show that COBRA1 is overexpressed in prostate tumors and contributes to the adaptation and survival of prostate cancer cells under castrate conditions. To examine whether COBRA1 could serve as a therapeutic target, we analyzed protein changes in prostate and tumor samples obtained from a retrospective 2-ME2 intervention study conducted in the transgenic adenocarcinoma of the mouse prostate (TRAMP) model. We previously demonstrated that 2-ME2 intervention (i) regressed prostate tumor growth in this model and (ii) downregulates c-FLIP [18-20]. Analyses of COBRA1 protein levels showed a significant decrease in the prostate from the 2-ME2 intervention group compared to the vehicle control (Figure 4a). Consistent with these in vivo observations, treatment with 2-ME2 decreased COBRA1 protein levels in DU145 cells in a dose-dependent manner (Figure 4b). Treatment with 5 µM 2-ME2 decreased migration of DU145 cells significantly as a function of time (Figure 4c). These results suggest that 2-ME2 could suppress the migratory ability of prostate cancer cells in part via inhibition of COBRA1. Furthermore, treating castrate-resistant C4-2B cells with 2-ME2 (3 µM) caused significant (p < 0.05) DNA methylation changes in 3255 genes (n = 91 hypermethylated and n = 3164 hypomethylated), including COBRA1, according to results obtained with an Infinium HumanMethylation450 BeadChip Kit. Functional annotation charts generated with the Database for Annotation, Visualization and Integrated Discovery (DAVID) on the 3000 most hypomethylated (negative fold change) genes revealed pathways associated with transcription and transcriptional regulation (Figure 4d). For the 1 µM treatment, only the genes exhibiting hypermethylation were run in DAVID, whereas for the 3 µM treatment, only the genes exhibiting hypomethylation were run. Bar charts indicate the level of significance of the association of the DAVID ontology terms with each treatment group's list of differentially methylated genes (Figure 4d). Of note, hypermethylated genes produced the chart for cells treated with 1 µM 2-ME2 because almost all of the methylation changes observed were positive fold changes. Conversely, treatment with 3 µM 2-ME2 produced hypomethylated genes because most of the methylation changes observed in this treatment group were negative fold changes. Interestingly, we identified COBRA1 as one of the hypomethylated genes in these pathways. Although comprehensive investigations are necessary to conclude whether the 2-ME2-mediated decrease in COBRA1 expression is indeed due to changes in its methylation status, these results nonetheless suggest that 2-ME2 suppresses prostate tumorigenesis possibly by altering the methylation status of COBRA1. This could explain many of the previous observations regarding changes in gene expression in response to 2-ME2 reported by various groups.
While this study does not demonstrate the involvement of 2-ME2 in transcriptional pausing, it would be interesting to test the hypothesis that 2-ME2 inhibits COBRA1-mediated RNA Pol II transcriptional activity to prevent prostate pathogenesis. While this manuscript was under preparation, an oncogenic role for the negative elongation factor E (NELFE) was identified in hepatocellular carcinoma [21]. Although localized prostate cancer can be effectively treated, options for treatment of metastatic castrate-resistant disease (CRPC) are mostly palliative with no cure, and CRPC is therefore a major clinical challenge. Although the treatment landscape for management of CRPC has changed significantly over the past decade, the pathways that activate AR signaling in the absence of or at low levels of androgens remain poorly defined. These observations underscore the need to understand the cellular, biochemical, and molecular alterations associated with pathological progression to castrate resistance. Along these lines, the data presented in this manuscript showing COBRA1 as a potential factor contributing to progression to castrate resistance are significant. To the best of our knowledge, these data for the first time implicate an oncogenic role of COBRA1 in prostate cancer progression through its ability to allow adaptation to castrate-resistant growth conditions and the loss of epithelial barrier integrity. We also provide evidence that COBRA1 may be a novel therapeutic target in prostate cancer management, since treatment with anti-estrogenic compound(s) inhibits COBRA1-related effects observed in prostate cancer. Furthermore, emerging evidence links germline and somatic mutations in DNA repair genes including BRCA1 with castrate resistance [22]. Given that COBRA1 is a BRCA1-interacting protein, we speculate that therapeutic targeting of COBRA1 could provide an additional option for patients with DNA repair aberrations.
COBRA1 Stable Cell Generation
COBRA1 stable knockdown cells were generated with shRNA targeting COBRA1 using the pSUPER-retro-neo retroviral shRNA expression plasmid (Oligoengine, Seattle, WA, USA). In parallel, control cells were generated using a scrambled shRNA. The optimal concentration of G418 (neomycin) for selection and maintenance of COBRA1 stable cells was established by performing a kill curve using a range of G418 concentrations (0.1-2.0 mg/mL). Cells were seeded at a density of 1 × 10^5 cells/mL in complete media in a T75 flask. Following their attachment, cells were transfected with 10 µg of total plasmid DNA per flask using Lipofectamine 2000 reagent (15 µL; Invitrogen, Grand Island, NY, USA). G418 selection of transfected cells was started 48-72 h post-transfection (BPH1, 1000 µg/mL; LNCaP, C4-2B, DU145, 500 µg/mL). G418-containing media were replaced every 2-3 days, and cells were examined visually for toxicity daily. Cells were maintained in media containing G418 and collected as a polyclonal line. The polyclonal cells were plated sparsely at a very low density (~10 cells/well) in a 6-well plate and allowed to form individual colonies. The individual colonies were trypsinized and transferred to 10 cm dishes for monoclonal expansion. The efficiency of COBRA1 overexpression or knockdown was verified using western blotting and qRT-PCR. However, we noted that the knockdown efficiency decreases with time. For ectopic expression, cells were transfected with pcDNA3.1-based expression vectors for COBRA1 or mock-transfected with the empty vector.
Luciferase Assay
For transfections, human prostate cancer cells were plated in triplicate at a density of 1 × 10^5 cells per well in 24-well plates. Following their attachment, cells were transfected with ARE reporter plasmids (0.5 µg) along with Renilla luciferase (10 ng) using Lipofectamine 2000 reagent (Invitrogen). Luciferase activity was determined 36 h after transfection using the Dual-Luciferase Reporter Assay system (Promega, Madison, WI, USA), essentially as described previously [20].
Cell Growth and Proliferation
Trypan blue, soft agar growth, and MTT assays were used to determine growth and survival. For the trypan blue assay, cells were plated at a density of 1 × 10^4 cells/well in 24-well plates for 2-3 days, then trypsinized, combined with the Trypan blue reagent (Sigma, St. Louis, MO, USA), and counted. For the soft agar assay, cells were seeded at a density of 10,000 cells/well in a 96-well plate containing semisolid agar media. The transformation ability of these cells was measured using the CytoSelect™ 96-well Cell Transformation Assay (Cell Biolabs, San Diego, CA, USA) following 6-8 days of incubation. Fluorescence was read on a SpectraMax M5 plate reader (Molecular Devices, San Jose, CA, USA) at 485/520 nm. Briefly, cells growing in semisolid agar media were solubilized, lysed, and incubated with CyQuant GR Dye (Cell Biolabs) for measuring fluorescence. For cell proliferation, cells were seeded in triplicate at a density of 4 × 10^3 per well in 96-well plates. Cell proliferation was detected following 72 h of incubation, essentially as described previously [20], by measuring absorbance at 570/650 nm.
Migration Assay
The migration rate of androgen-independent prostate cancer cells was assessed using a real-time cell imaging system (IncuCyte™ live-cell imaging; ESSEN BioScience Inc., Ann Arbor, MI, USA). A scratch was made using the 96-pin WoundMaker™ (ESSEN BioScience Inc.) in cells growing in a 96-well plate. Cell migration was monitored in real time over a period of 14 h, and images were automatically acquired and analyzed using the IncuCyte™ 96-well Cell Migration Software Application Module (ESSEN BioScience Inc.). Data are represented as the Relative Wound Density (RWD), which represents the spatial cell density in the wound area relative to the spatial cell density outside of the wound area at each time point (time curve).
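A minimal sketch of this metric, read directly from the verbal definition above (the vendor's exact formula may differ); the densities and function names are illustrative:

```python
# Minimal sketch of the Relative Wound Density (RWD) metric as described
# in the text: cell density inside the wound relative to the density
# outside it at each time point. This is a simplified illustration, not
# the vendor's exact implementation.

def relative_wound_density(wound_density: float, outside_density: float) -> float:
    """RWD (%) at one time point; densities are confluence fractions."""
    if outside_density == 0:
        raise ValueError("outside-wound density must be non-zero")
    return 100.0 * wound_density / outside_density

# Hypothetical time course (confluence fractions inside/outside the wound):
timepoints = [(0.02, 0.80), (0.25, 0.82), (0.55, 0.85)]
rwd_curve = [relative_wound_density(w, o) for w, o in timepoints]
print(rwd_curve)  # rises toward 100% as the wound closes
```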
DNA Methylation Array
Global DNA methylation levels in androgen-independent C4-2B prostate cancer cells were measured following treatment with 2-ME2 (1 µM and 3 µM) for 5 days. Using a MethylMiner Methylated DNA Enrichment Kit (Invitrogen), methylated DNA was isolated from fragmented whole genomic DNA via binding to the methyl-CpG binding domain of human MBD protein coupled to paramagnetic Dynabeads M-280 Streptavidin through a biotin linker. Samples were then subjected to DNA methylation analysis on the Illumina HumanMethylation450 BeadChip Kit (BASIC core facility at UTHSA). This analysis produced a list of genes with significant changes in DNA methylation (hyper- or hypomethylation; p < 0.05). For each gene list, up to 3000 genes were selected and run in the Database for Annotation, Visualization and Integrated Discovery (DAVID), which produced Functional Annotation Charts.
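A minimal sketch of this selection step, assuming a per-gene results table; the input file and column names are hypothetical:

```python
# Minimal sketch of the gene-list selection step described above,
# assuming a table with per-gene p-values and fold changes in methylation.
# The input file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("methylation_results.csv")  # columns: gene, pvalue, fold_change

significant = df[df["pvalue"] < 0.05]

# Hypermethylated (positive fold change) vs hypomethylated (negative):
hyper = significant[significant["fold_change"] > 0]
hypo = significant[significant["fold_change"] < 0]

# Up to 3000 most hypomethylated genes for DAVID functional annotation:
top_hypo = hypo.nsmallest(3000, "fold_change")["gene"].tolist()
print(len(hyper), len(hypo), len(top_hypo))
```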
Animal Experiments
COBRA1 levels were examined by Western blot analysis in TRAMP (transgenic adenocarcinoma of the mouse prostate) prostate tumors and tissues obtained from a repository available from previous studies in our laboratory [18,19]. Tissues were procured from a study testing the potential of 2-ME2 (50 mg/kg body weight, administered through drinking water) in 22-25-week-old TRAMP mice for an additional 25 weeks [18,19]. Proteins were extracted using RIPA buffer from the tumors and tissues in the control group (30, 38, and 42 weeks) and in the treatment group (38 and 42 weeks).
Immunohistochemistry
A COBRA1 rabbit polyclonal antibody (from Dr. Rong Li, Department of Molecular Medicine, University of Texas Health San Antonio, San Antonio, TX, USA) was used. Sections from paraffin-embedded tissues were heat-cleared and rehydrated. Antigen retrieval was performed with citrate buffer at pH 6.0 in a 121 °C pressure chamber. Endogenous peroxidase was quenched with a TBS buffer containing 3% hydrogen peroxide, followed by incubation in protein blocking buffer. Each step was carried out at room temperature. The sections were incubated for 1 h at room temperature with the antibody. The negative control sections were incubated with a Universal Rabbit negative control Ig fraction (DAKO Corp., Carpinteria, CA, USA). The ancillary and visualization systems were Rabbit HRP polymer (BioCare Medical, Concord, CA, USA) and the DAB Chromogen System (DAKO Corp.). IHC slides were evaluated and graded by a pathologist (R.R.) in a blinded fashion. The total COBRA1 staining was scored as the sum of the staining intensity (on a scale of 0-3) and the percentage of cells stained (on a scale of 0-5), resulting in a scale of 0-8. Staining intensity was scored as follows: 0, none of the cells stained positively; 1, weak staining; 2, moderate staining intensity; and 3, strong staining intensity. Percent staining was scored as follows: 1, 20%; 2, 30%; 3, 60%; 4, 80%; and 5, 100% of cells stained. Low (100 µm) and high (50 µm) magnification images were taken using a Nikon Eclipse Ci microscope equipped with a camera (DS-Fi2).
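A minimal sketch of this composite (Allred-style) score; the proportion cut-offs below are one reading of the categories listed, and the function names are illustrative:

```python
# Minimal sketch of the composite IHC score described above
# (intensity 0-3 plus proportion 0-5, total range 0-8).
# The proportion cut-offs are one reading of the categories in the text.

def proportion_score(percent_stained: float) -> int:
    """Map percent of stained cells to the 1-5 category (0 if none)."""
    if percent_stained <= 0:
        return 0
    for score, cutoff in ((1, 20), (2, 30), (3, 60), (4, 80), (5, 100)):
        if percent_stained <= cutoff:
            return score
    return 5

def total_score(intensity: int, percent_stained: float) -> int:
    """Composite score on the 0-8 scale."""
    assert 0 <= intensity <= 3
    return intensity + proportion_score(percent_stained)

print(total_score(intensity=2, percent_stained=60))  # -> 5
```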
Oncomine Data
COBRA1 expression in normal prostate gland and prostate carcinoma was obtained from two independent studies in the Oncomine database. The primary sources are the microarray data of the different groups indicated in the graph (http://www.oncomine.org). Data sets are log transformed and illustrated as median-centered box plots of the differences in mRNA expression within cohorts. Statistical significance was determined by a two-tailed Mann-Whitney test. Detailed information on the standardized normalization and statistical calculations is available on the Oncomine website.
Statistical Analysis
All numerical results are expressed as mean ± S.D. or S.E.M. derived from three independent experiments, unless otherwise stated. Statistical analyses were conducted using Student's t-test, and statistically significant differences were established at p < 0.05. The statistical significance of the IHC data was calculated using an unpaired two-tailed t-test with Welch's correction.
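For reference, both tests used in this study are available in scipy; a minimal sketch with hypothetical inputs:

```python
# Minimal sketch of the two statistical tests used in this study:
# Welch's t-test (unequal variances) and the two-tailed Mann-Whitney U
# test. The input arrays are hypothetical IHC scores, not study data.
from scipy import stats

low_gs = [2, 3, 3, 4, 2, 3]    # hypothetical scores, low Gleason group
high_gs = [5, 6, 7, 5, 8, 6]   # hypothetical scores, high Gleason group

# Unpaired two-tailed t-test with Welch's correction:
t_stat, t_p = stats.ttest_ind(low_gs, high_gs, equal_var=False)

# Two-tailed Mann-Whitney test (used for the Oncomine comparisons):
u_stat, u_p = stats.mannwhitneyu(low_gs, high_gs, alternative="two-sided")

print(f"Welch t-test p = {t_p:.4g}; Mann-Whitney p = {u_p:.4g}")
```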
Conflicts of Interest:
The authors declare no conflicts of interest.
"year": 2018,
"sha1": "454ec3058990c2f3ad415a572a0ea34cf9afb1f5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/19/7/2104/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "454ec3058990c2f3ad415a572a0ea34cf9afb1f5",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Radial multipliers and restriction to surfaces of the Fourier transform in mixed-norm spaces
In this article we revisit some classical conjectures in harmonic analysis in the setting of mixed norm spaces $L^p_{rad} L^2_{ang} (\mathbb{R}^n)$. We produce sharp bounds for the restriction of the Fourier transform to compact hypersurfaces of revolution in the mixed norm setting and study an extension of the disc multiplier. We also present some results for the discrete restriction conjecture and state an intriguing open problem.
Introduction
The well-known restriction conjecture, first proposed by E. M. Stein, asserts that the restriction of the Fourier transform of a given integrable function $f$ to the unit sphere, $\hat{f}|_{S^{n-1}}$, yields a bounded operator from $L^p(\mathbb{R}^n)$, $n \geq 2$, to $L^q(S^{n-1})$ so long as
$$1 \leq p < \frac{2n}{n+1}, \qquad q \leq \frac{n-1}{n+1}\,p'.$$
This conjecture has been fully proved only in dimension $n = 2$ by C. Fefferman [7] (see also [4] for an alternative geometrical proof). In higher dimensions, the best known result is the particular case $q = 2$ and $1 \leq p \leq \frac{2(n+1)}{n+3}$, whose proof was obtained independently by P. Tomas and E. M. Stein [13].
The periodic analogue, i.e. for Fourier series, was observed by A. Zygmund [16], again in two dimensions. It asserts that for any trigonometric polynomial $P(x) = \sum_{|\nu|=R} a_\nu e^{2\pi i \nu\cdot x}$, $\nu \in \mathbb{Z}^2$, the following inequality holds:
$$\Big(\int_Q |P(x)|^4\,dx\Big)^{1/4} \leq C \Big(\sum_{|\nu|=R} |a_\nu|^2\Big)^{1/2},$$
uniformly in $R > 0$, where $Q$ is any unit square in the plane. The alternative proof given in [4] allows us to connect both the periodic and the nonperiodic restriction theorems, explaining the reason for the apparently different numerologies of the corresponding $(p,q)$ exponent ranges. It also raises an interesting question about the location of lattice points in small arcs of circles [3].
The first result in this paper goes further in that direction. Given a finite set of points $\{\xi_j\}$ on the circle $\{|\xi| = R\}$ of the plane, with $|\xi_j - \xi_k| \geq M$ for $j \neq k$, let us consider the exponential sum
$$f(x) = \sum_j a_j\, e^{2\pi i \xi_j\cdot x}.$$
We have:

Theorem 1. The following inequality holds:
$$\sup_Q \int_Q |f(x)|^4\, d\mu(x) \leq C \Big(\sum_j |a_j|^2\Big)^2, \qquad (1.1)$$
where the supremum is taken over all unit squares of $\mathbb{R}^2$ and $\mu$ corresponds to the Lebesgue measure.
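As a quick illustration, the following numerical sketch samples the ratio between the two sides of an estimate of this type for a family of roughly 1-separated points on a large circle; the radius, separation and coefficients are illustrative choices, and the computation is a sanity check rather than evidence for the theorem.

```python
# Minimal numerical illustration of the L^4 bound (1.1): for roughly
# 1-separated points xi_j on a circle of radius R, compare the mean of
# |f|^4 over the unit square [0,1]^2 with (sum |a_j|^2)^2. All
# parameters below are illustrative.
import numpy as np

R = 50.0
angles = np.arange(0.0, 2 * np.pi, 2.5 / R)  # chord distance ~2.5 > 1
xi = R * np.stack([np.cos(angles), np.sin(angles)], axis=1)
a = np.ones(len(xi))  # coefficients a_j

# Sample f(x) = sum_j a_j exp(2 pi i xi_j . x) on a grid over [0,1]^2
# (200 points per unit resolve frequencies of modulus 50):
grid = np.linspace(0.0, 1.0, 200)
X, Y = np.meshgrid(grid, grid)
pts = np.stack([X.ravel(), Y.ravel()], axis=1)
f = (a * np.exp(2j * np.pi * pts @ xi.T)).sum(axis=1)

lhs = np.mean(np.abs(f) ** 4)          # approximates the integral over Q
rhs = np.sum(np.abs(a) ** 2) ** 2
print(lhs / rhs)  # the ratio remains of order one as R grows
```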
The corresponding result in higher dimensions ($n \geq 3$) is an interesting open problem. Although there are many interesting publications by several authors throwing some light on the restriction conjecture, its proof remains open in dimension $n \geq 3$. One of the more remarkable improvements was B. Barcelo's thesis [12]: he proved that Fefferman's result also holds for the cone in $\mathbb{R}^3$. Another interesting result was given by L. Vega in his Ph.D. thesis [14], where he obtained the best result in the Stein-Tomas restriction inequality when the space $L^p(\mathbb{R}^n)$ is replaced by $L^p_{rad}L^2_{ang}(\mathbb{R}^n)$. Here we shall consider the restriction of the Fourier transform to other surfaces of revolution in these mixed-norm spaces. Several special cases have already been treated [8,9], but we present a more general and unified proof for "all" compact surfaces of revolution. That is, in $\mathbb{R}^{n+1}$, $n \geq 2$, we consider cylindrical coordinates $(r, \theta, z)$, where the first components $(r, \theta)$ correspond to the standard polar coordinates in $\mathbb{R}^n$, $0 < r < \infty$, $\theta \in S^{n-1}$, and $z \in \mathbb{R}$ denotes the zenithal coordinate. In this coordinate system, the $L^p_{rad}L^2_{zen}L^2_{ang}(\mathbb{R}^{n+1})$ norm is given by
$$\|F\|_{L^{p,2,2}} = \left( \int_0^\infty \left( \int_{\mathbb{R}} \int_{S^{n-1}} |F(r,\theta,z)|^2\, d\sigma(\theta)\, dz \right)^{p/2} r^{n-1}\, dr \right)^{1/p}.$$
We can now state our result.
Theorem 3. Let $\Gamma$ be a compact surface of revolution. Then the restriction of the Fourier transform to $\Gamma$ is a bounded operator from $L^p_{rad}L^2_{zen}L^2_{ang}(\mathbb{R}^{n+1})$ to $L^2(\Gamma)$, i.e. there exists a finite constant $C_p$ such that
$$\big\|\hat{f}\,\big|_\Gamma\big\|_{L^2(\Gamma)} \leq C_p\, \|f\|_{L^p_{rad}L^2_{zen}L^2_{ang}(\mathbb{R}^{n+1})} \qquad (3)$$
so long as $1 \leq p < \frac{2n}{n+1}$.

A central point in this area is C. Fefferman's observation that the disc multiplier in $\mathbb{R}^n$, $n \geq 2$, given by the formula
$$\widehat{T_0 f}(\xi) = \chi_{B(0,1)}(\xi)\,\hat{f}(\xi),$$
is bounded on $L^p(\mathbb{R}^n)$ only in the trivial case $p = 2$. However, it was later proved (see refs. [6] and [10]) that $T_0$ is bounded on the mixed-norm spaces $L^p_{rad}L^2_{ang}(\mathbb{R}^n)$ if and only if $\frac{2n}{n+1} < p < \frac{2n}{n-1}$. Here we extend that result to a more general class of radial multipliers $T_m$, given by
$$\widehat{T_m f}(\xi) = m(|\xi|)\,\hat{f}(\xi)$$
for all rapidly decreasing smooth functions $f$, where $m$ satisfies a suitable hypothesis; $T_m$ is then bounded in $L^p_{rad}L^2_{ang}(\mathbb{R}^n)$ so long as $\frac{2n}{n+1} < p < \frac{2n}{n-1}$. Finally, let us observe that Theorem 4 admits different extensions taking Littlewood-Paley theory into account. Some vector-valued and weighted inequalities are satisfied by $T_0$ and by the so-called universal Kakeya maximal function acting on radial functions.
Restriction in the discrete setting
Proof of Theorem 1. First let us observe that, by an easy argument, we can assume $M = 1$ without loss of generality. Next we take a smooth cut-off $\varphi$ adapted to this separation and write $f$ in terms of the corresponding bumps $\varphi_j$, where $q$ denotes a point in $\mathbb{R}^2$. Note that the $L^4$ norm of $\hat{f}$ majorizes the left-hand side of (1.1). On the other hand, because the supports of the $\varphi_k$ and $\varphi_j$ have finite overlapping, the resulting bound holds uniformly in the radius. q.e.d.
Using similar arguments we can obtain the following analogous result: in $\mathbb{R}^2$, consider the parabola $\gamma(t) = (t, t^2)$ and a set of real numbers $\{\xi_j\}$ satisfying the analogous separation condition; then the corresponding $L^4$ inequality holds for the exponential sum with frequencies $\gamma(\xi_j)$. An interesting open question is to decide whether the $L^4$ norm can be replaced by an $L^p$ norm ($p > 4$) in the inequality above. It is known that $p = 6$ fails, but for $4 < p < 6$ it is, as far as we know, an interesting open problem [2].
The restriction conjecture in mixed norm spaces
Recall that in $\mathbb{R}^{n+1}$ we establish cylindrical coordinates $(r, \theta, z)$, where $(r, \theta)$ corresponds to the usual spherical coordinates in $\mathbb{R}^n$ and $z \in \mathbb{R}$ denotes the zenithal component. We will also use the notation $(\rho, \phi, \zeta)$ to refer to the same coordinate system.
The $L^p_{rad}L^2_{zen}L^2_{ang}(\mathbb{R}^{n+1})$ norm is therefore the one given in the Introduction. Let $g$ be a continuous positive function supported on a compact interval $I$ of the real line that is almost everywhere differentiable, and consider the surface of revolution in $\mathbb{R}^{n+1}$ given by
$$\Gamma = \{(g(z)\,\theta,\, z) : \theta \in S^{n-1},\ z \in I\}.$$
We are interested in the restriction to $\Gamma$ of the Fourier transform of functions in the Schwartz class $S(\mathbb{R}^{n+1})$. The restriction inequality for $1 \leq p < \frac{2n}{n+1}$ is, by duality, equivalent to the extension estimate
$$\|\widehat{f\,d\Gamma}\|_{L^{q,2,2}} \leq C\,\|f\|_{L^2(\Gamma)}$$
for $q > \frac{2n}{n-1}$.
To compute $\widehat{f\,d\Gamma}$, we use the spherical harmonic expansion of $f$, where $\{Y^j_k\}$ is an orthonormal basis of the spherical harmonics of degree $k$; in the resulting expression $J_\nu$ denotes Bessel's function of order $\nu$ (see ref. [15]). Taking into account the orthogonality of the elements of the basis $Y^j_k$, together with Plancherel's theorem in the $z$-variable, we obtain that the mixed norm $\|\widehat{f\,d\Gamma}\|^q_{L^{q,2,2}}$ is, up to a constant, given by a sum over the orders $\nu_k = k + \frac{n-2}{2}$. Therefore our theorem will be a consequence of the following fact: for all $j$ and Schwartz functions $a_j$, the corresponding inequality holds for $q > \frac{2n}{n-1}$.

Remark 6. Taking into account the hypothesis about $g$, we will look for estimates depending upon $A = \sup_{x\in I} |g(x)|$ and $B = \sup_{x\in I} |g'(x)|$, where $I$ is the compact support of $g$. It is also easy to see that we can reduce ourselves to considering the sums over the family of indices $\{\nu_j\}_{j=1}^{\infty}$ such that $\nu_j \geq \frac{n-2}{2}$. Therefore it is enough to show the inequality (3.8) for a family of smooth functions $\{b_j\}_j$ and indices $\nu_j \geq \frac{n-2}{2}$. In order to show (3.8) we will need sharp control of the decay of Bessel functions, namely the estimates of the following lemma.

Lemma 7. The following estimates hold for $\nu \geq 1$.
These asymptotics follow from the stationary phase method, as shown in [15], [1] and [5].
Proof of Lemma 5. To prove (3.8), we shall first decompose the $\rho$-integration into dyadic parts (3.9), where $M = 2^m$, $m = 0, 1, \ldots$ For the lower integrand we use a further splitting. In order to bound the first part, $I$, we invoke Minkowski's inequality and property 5 of Lemma 7.
where $A = \|g\|_\infty$. Since the sum is taken over all $\nu_j \geq \frac{n-2}{2}$, the inner integrand is well defined and we can bound it as in (3.10). The second part is similarly bounded, as in (3.11). Lemma 5 will then be a consequence of the following claim.

Claim 8. For all $q > 4$, the following inequality holds true.

Indeed, if $q > 4$ we need only invoke our claim and sum over all the dyadic intervals in (3.9); it is then a simple matter to check that the resulting exponent is negative for $q > \frac{2n}{n-1}$. If the exponent $q$ is smaller, $\frac{2n}{n-1} < q \leq 4$, we need an extra trick. Note that (3.12) holds for all $q_1 > 4$; then, using Hölder's inequality together with the previous inequality and summing over all intervals, we obtain a bound whose exponent $-q\,\frac{n-1}{2} + n$ is negative for all $q > \frac{2n}{n-1}$.
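The sign condition on this exponent is a one-line computation, recorded here for convenience:
$$-q\,\frac{n-1}{2} + n < 0 \;\Longleftrightarrow\; q\,\frac{n-1}{2} > n \;\Longleftrightarrow\; q > \frac{2n}{n-1}.$$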
To prove Claim 8, let us split each dyadic integrand in (3.9) into three parts, corresponding to the different ranges of control of the Bessel functions; the first part is bounded directly, giving (3.14). Similarly, $I^\infty_M$ is also easily bounded: if $k > 2r$ then $|J_k(r)| \leq k^{-1}$, and in $I^\infty_M$ we have $k > 4Mg(z) > 2\rho g(z)$. Furthermore, since $\rho g(z) > 1$, we have $(\rho g(z))^{-2} < (\rho g(z))^{-1}$ and, in $I^\infty_M$, $|J_k(\rho g(z))|^2 \leq (\rho g(z))^{-1}$. This shows that the same bound again holds. Finally, we need to work a little harder than in the previous cases to obtain a suitable estimate for $I^c_M$. First of all, note that Minkowski's inequality yields (3.16). In $I^c_M$ we want to use estimate (3) of Lemma 7; we thus need to split the inner integral so that $\rho g(z) \sim \nu_j + \alpha\nu_j^{1/3}$ in the corresponding range of $\alpha$. Consider the family of sets $G_\alpha$, for $\alpha = 0, 1, 2, \ldots, (Mg(z))^{2/3}$, so that $\bigcup_\alpha G_\alpha \supseteq [M, 2M]$ and in each interval $\rho g(z) \sim \nu_j + \alpha\nu_j^{1/3}$, and split (3.16) accordingly. We can then invoke Lemma 7 and rearrange the sums to bound $I^c_M$. Note that the second sum is easier to control than the first; we shall therefore focus on the first term, $I^{c,1}_M$. Using the length of the intervals $G_\alpha$ and Young's inequality, since $q > 4$, taking $\frac{2}{q} = \frac{1}{s} - \frac{1}{2}$, we obtain the required estimate. We have thus shown that the central integrand $I^c_M$ can also be bounded in the desired way (3.17). q.e.d.
Generalized Disc Multiplier
In the late 80's it was proved independently in [6,10] that the disc multiplier operator is bounded in the mixed-norm spaces $L^p_{rad}L^2_{ang}(\mathbb{R}^n)$ for all $\frac{2n}{n+1} < p < \frac{2n}{n-1}$. Let us here explore further the theory of radial Fourier multipliers following the ideas presented in the aforementioned articles.
Let $m$ be a radial function and consider the Fourier multiplier $\widehat{T_m f}(\xi) = m(|\xi|)\hat{f}(\xi)$. Once again, recall the expansion of a given function $f$ in terms of its spherical harmonics. The classical formula relating the Fourier transform and the spherical harmonics expansion [11] then yields the expression of $T_m$ in terms of this expansion. Therefore, computing once more the Fourier transform of a radial function, we arrive at the kernel
$$K_\nu(t,r) = \sqrt{rt}\int_a^b m(s)\,J_\nu(2\pi t s)\,J_\nu(2\pi r s)\,s\,ds.$$
In order to simplify the notation, note that
$$T_m f(r\theta) = \sum_k \sum_j Y^j_k(\theta)\, T^{k,j}_m f(r), \qquad (4.1)$$
with $T^{k,j}_m$ defined as before, but with
$$K_\nu(t,r) = \sqrt{rt}\int_a^b m(s)\,J_\nu(ts)\,J_\nu(rs)\,s\,ds.$$
Let us take a closer look at the kernel
$$K_\alpha(t,r) = \sqrt{rt}\int_a^b m(s)\,J_\alpha(ts)\,J_\alpha(rs)\,s\,ds.$$
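For readers who wish to experiment with this kernel numerically, the sketch below evaluates $K_\alpha(t,r)$ by direct quadrature; the multiplier $m$ and the interval $[a,b]$ are illustrative placeholders, not the hypothesis of Theorem 4.

```python
# Minimal numerical sketch of the kernel K_alpha(t, r) above, evaluated
# by direct quadrature. The multiplier m and the interval [a, b] are
# illustrative choices only.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def kernel(alpha: float, t: float, r: float, a: float = 1.0, b: float = 2.0,
           m=lambda s: 1.0) -> float:
    """K_alpha(t, r) = sqrt(rt) * int_a^b m(s) J_alpha(ts) J_alpha(rs) s ds."""
    integrand = lambda s: m(s) * jv(alpha, t * s) * jv(alpha, r * s) * s
    value, _err = quad(integrand, a, b)
    return np.sqrt(r * t) * value

print(kernel(alpha=0.5, t=3.0, r=4.0))
```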
"year": 2017,
"sha1": "0ff34581631392b1b4faa61d7159abf7c25b10a4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1601.03870",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0ff34581631392b1b4faa61d7159abf7c25b10a4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
CHALLENGES OF SERVICE LEARNING PRACTICES: STUDENT AND FACULTY PERSPECTIVES FROM MALAYSIA
Purpose – The primary aim of service learning is to produce holistically developed students. Despite the mandate from the Ministry of Higher Education, Malaysia, to infuse service learning into programs of study since 2015, service learning in the country remains in its infancy. Critical insights concerning contextual compatibility are still missing in the Malaysian context. In this regard, the current paper aims to investigate the perspectives of lecturers and students on the challenges they have encountered while participating in service learning.
Methodology – The study employed a qualitative approach, and the principles of the Scholarship of Teaching and Learning (SoTL) guided the collection of data. The students and lecturers who participated in the study were selected using purposive sampling techniques. Data from the students were collected using focus group interviews, while in-depth face-to-face interviews were used to collect data from the lecturers. These two sources of data were then analysed using a thematic analysis method.
Findings – From the perspective of the students, the challenges encountered were as follows: 1) a gap between theory and practice, and 2) a lack of cognitive autonomy; from the perspective of the lecturers, the challenge was a lack of structural support. A further theme, common to both groups of participants, concerned the relationship and rapport with the community.
Significance – The findings provide insights into the challenges faced by lecturers and students in a public university where service learning is practised. These insights may have implications for academic developers providing training workshops on service learning and for lecturers involved in the design and implementation of service-learning projects.
INTRODUCTION
Service learning is experiential learning that integrates practical experiences into the academic curriculum. Its theoretical and practical foundations stem from experiential education and constructivism; these two fields of study have helped frame service learning as an opportunity for students to apply their knowledge within the community (Furco, 2001), one that includes community engagement and the educational benefits of experiential learning (Parker et al., 2009).
Service learning encourages students to be creative when applying the knowledge and skills learned in the classroom to resolve issues and challenges encountered in the community. Students are guided by their lecturers when undertaking service learning in a selected community. They begin by gaining an understanding of the needs of the community and then identify the problems related to those needs. Subsequently, the students offer practical solutions and work together with the community to solve the problems. They learn to identify the issues within the community and engage to rectify the problems faced by the community, often with the cooperation of co-partners such as the community itself, authority bodies or industrial participants (Felten et al., 2016). It is a rich and comprehensive learning experience in which knowledge transfer happens during implementation. The main purpose of service learning is to produce holistically developed students who are able to think, act and reflect based on empirical evidence and human values (Furco, 1996). It is an immersive learning experience that promotes high impact practices in a curriculum that caters for the development of critical thinking skills, people skills, innovativeness, an entrepreneurial mindset, resilience with cognitive flexibility, emotional and contextual intelligence, and a passion for lifelong learning (Kilgo, Ezell Sheets & Pascarella, 2015; Awang-Hashim, Kaur & Valdez, 2019).
In recent years, there has been a surge in the implementation of high-impact educational practices, including service-learning approaches, around the world (Conway, Amel, & Gerwien, 2009; Celio, Durlak, & Dymnicki, 2011; Xu, Li & McDougle, 2018; Salam et al., 2019; Bringle & Clayton, 2020). However, to yield similar beneficial outcomes, academic systems around the world have consistently reviewed such practices to examine contextual compatibility and to identify the challenges that might limit their applicability, or the conditions under which similar benefits can be realised, in different contexts (Butin, 2006; Taylor, 2017).
In Malaysia, the current educational blueprint includes initiatives toward enhancing service learning, as this pedagogical approach is seen as a means to achieve the national educational goal of producing graduates with the necessary skills for employability (Malaysia Education Blueprint 2015-2025). The implementation of service learning is at a stage where it needs to be incorporated into academic programmes in a more structured and systematic manner (Ministry of Education Malaysia, 2015).
This endeavour is now recognised as the 'third mission' of universities across the nation. The Ministry of Higher Education has therefore devised a set of national guidelines on service-learning implementation in Malaysian universities, known as 'SULAM' (Service Learning Malaysia - University for Society) (Department of Higher Education, 2019). Moreover, some universities in the country have created their own service-learning guidelines, deemed better suited to their specific contexts. This is in line with the study conducted by Mackenzie, Hinchey and Cornforth (2019), which asserts that service-learning strategies should aim to create a sustainable environment for cooperation between the university and the community. Hence, significant efforts should be geared towards the ideal implementation of service learning within the academic curriculum.
However, as service learning is a fairly new pedagogical approach, its practical implementation has faced significant challenges, particularly within the Malaysian context, and these contextual issues may have impeded its impact. The main objective of this paper is to review the common challenges surrounding service learning around the world and to apply these insights as an analytical framework to highlight the specific issues faced by service-learning practitioners in the country. These findings are expected to contribute towards enhancing the effectiveness of this approach in the Malaysian tertiary education system.
The Outcomes of Service Learning
Research on service-learning implementation has shown a wide range of outcomes associated with students' engagement with their communities. In a survey of 1,066 alumni from 30 campuses conducted by Richard et al. (2018), the findings revealed that dialogue across different cultures was the strongest predictor of sustained civic engagement after the students had graduated from college. The study suggested that, through service learning, students had the opportunity to engage with individuals who were different from them, which subsequently created the impetus for students to continue serving the community after their university years. Additionally, the study examined the relationship between students' participation in academic and co-curricular programs and their post-college civic engagement.
Other valuable outcomes relating to real work-life experiences include resolving conflicts through critical and deep-thinking abilities (Jelinek, 2016). A study by Wang et al. (2012) on leadership and service learning among first-year female engineering students suggested that leadership skills could be acquired through service learning. Service learning thus offers opportunities to develop life skills and to enhance academic attainment and civic responsibility among students (Sax & Astin, 1997; Hoxmeier & Lenk, 2020). Students who participate in service learning tend to acquire life skills such as deep reflection and problem-solving (Ahmad, Said & Mohamad Nor, 2019). They are likely to be better prepared and to review course materials to help them resolve issues within the community. In addition, these students tend to be more open-minded and empathetic towards the needs of the community, which leads to a better understanding of the world around them. Furthermore, students who participated in service learning became more open to diversity and multiple perspectives (Pike, Kuh, & McCormick, 2010; Nishimura & Yokote, 2020), as the experiential learning provided opportunities to interact with people of different backgrounds and needs. This enabled students to develop their sense of civic responsibility and promoted ethical and respectful engagement with the communities they served.
The Challenges of Service Learning
Despite its many advantages, service learning has its fair share of challenges when it comes to execution. Service-learning implementation in a university may be hampered if the university's ecosystem fails to work synergistically to support the institution, instructors, students, and the community (Salam et al., 2019b; Nishimura & Yokote, 2020). Karasik (2004) highlighted five main areas of challenge faced by educational institutions when implementing service learning: pedagogy, community, students, faculty, and the university. Pedagogy is closely related to the faculty's role in developing and delivering a service-learning-based curriculum to students. Students are an important entity in service learning as they drive service-learning activities, engaging with the community to solve problems. Thus, faculty and students may encounter challenges when collaborating with the community, as the two sides can hold different understandings or perspectives on the issues that require specific solutions.
Therefore, a university that hosts a service-learning curriculum must be prepared, both structurally and intentionally, to implement its projects. Conversely, problems arise if a university administration that has embraced service-learning programs does not implement well-planned procedures and support, or lacks pedagogical preparedness, particularly with regard to curriculum design and delivery. Hence, the university administration and the institutional pedagogical approach must be compatible in order to create an impactful service-learning environment for students (Chng, Leibowitz & Mårtensson, 2020).
In a study by Ziegert and McGoldrick (2008), the authors highlighted the perspectives of instructors and listed areas of concern when carrying out service learning. These included the challenges of integrating service learning with course content, the role of the instructor, preparation time, and assessment. Some instructors believed that incorporating service learning into their courses would lead to a loss of student focus on the academic content, while others feared losing control over students' learning while they were away for community engagement. A number of instructors experienced distress over the challenge of matching students' skills with the specific needs of the community and found it time-consuming; they also suffered significant stress when supervising their students' service-learning activities in the field.

Bennett (2016) addressed the significance of institutional commitment, which includes structure, process and funding, and the stakeholders' resolve to engage in service-learning projects. In the absence of these factors, all parties involved in a service-learning project would have wasted their efforts, time and energy. Additionally, clear communication is vital for the successful implementation of service learning. Morin (2009) asserted that the major pitfall among students was the lack of communication with their peers, instructors or clients. These issues were likely consequences of the students' inexperience in dealing with other people over technical matters, the time constraints of setting up meetings with friends, lecturers or clients, and the students' lack of expertise or skills to finish the project successfully. Yusop and Correia (2013) further supported the notion of mental stress among students completing service-learning projects; students would sometimes exhibit emotional outbursts resulting from the intense cognitive and physical labour expected of them.

With regard to the community, it has been suggested that a short-term service-learning program is inadequate to achieve the twofold benefit of meeting students' educational purposes and satisfying the community's needs (Tyron et al., 2008). Borgerding and Canigla (2017) asserted that, despite the benefits of service learning, there is a need to consider students' readiness and the crucial support required from institutions. The students in their study were pre-service teachers; however, upon graduation, they had lost the motivation to practise service learning. The lecturers, on the other hand, complained of the constant need to supervise students during service-learning projects. It was further suggested that the lack of institutional resources hampered the implementation of service-learning projects.
The Current Context
The service-learning program in higher education was primarily led by academics in the West (see Bringle & Hatcher, 1996, 2000; Butin, 2003). The implementation of high-impact practices in higher education, including service learning in Western countries, has shown a positive impact on students' development. However, universities across the globe have come under scrutiny from various stakeholders regarding their role and ability to produce graduates who are employable and equipped to contribute as responsible citizens to local communities and the nation (Kagan & Diamond, 2019). Hence, the incorporation of service learning into the curriculum is conceptualised as the third mission of the university (Department of Higher Education Malaysia, 2019). In other words, Malaysian Higher Education Institutions (HEIs) seek answers to questions such as: Have universities done enough to create impact and develop the local community? Is the existence of the university felt by the community? How could universities play better roles in developing the community together with other stakeholders? Within the context of Malaysia, service learning stems from the university-wide third mission, i.e., the need for HEIs to achieve a significant impact on community transformation by developing sustainable service-learning relationships with selected communities (OECD, 1996).
Service learning in Malaysia has continued to remain in its infancy, although all higher education institutions have been mandated to incorporate service learning into their study programmes since 2015 (Ministry of Education Malaysia, 2015; see letter from JPT). Within Malaysia, a thorough review of the literature points to a limited number of studies that have measured the impact of service-learning practice and the challenges associated with it. An overview of those studies is summarised in Table 1.

Table 1
Sampling of Service-Learning Studies in Malaysia

Khan and Jacob (2015), "Service learning for pharmacy students: Experience of a homegrown community engagement elective unit"
Findings: Engaging with the selected community was an enriching experience for the students. The tasks and assessments incorporated service-learning activities that improved the students' communication skills. In addition, the students benefitted through the development of empathy and leadership abilities that are valuable for their future careers and lives.

Jacob, Palanisamy and Chung (2017), "Perception of a privilege walk activity and its impact on pharmacy students' views on social justice in a service learning elective: A pilot study"
Findings: Students understood the differences in privilege among their peers. Several students acknowledged that the session enhanced their reflective skills and made them less judgemental towards the underprivileged.

Musa et al. (2017), "A methodology for implementation of service learning in higher education institution: A case study from faculty of computer science and information technology, Unimas"
Findings: The study proposed a working methodology for service-learning implementation in Unimas involving three phases: Phase 1 (Planning, Analysis and Design), Phase 2 (Delivery) and Phase 3 (Evaluation, Reflection and Monitoring).

Huda et al. (2018), "Transmitting leadership based civic responsibility: Insights from service learning"
Findings: A systematic literature review of civic-based leadership from a service-learning perspective. The findings suggested three cores: 1) strategic planning of community engagement projects; 2) creative thinking and professional skills with experiential leadership; and 3) rational problem-solving using leadership skills and knowledge.

Salam et al. (2019a), "Technology integration in service learning pedagogy: A holistic framework"
Findings: Institutional readiness is vital in providing a reliable technological platform for service-learning assessment. Problems arose when lecturers needed to assess reflection tasks for large classes, owing to the lack of technological support and of knowledge of the platforms.

Salam et al. (2019b), "Service learning in higher education: A systematic literature review"
Findings: The study provides a comprehensive review of the service-learning literature in higher education. The findings revealed that service learning benefited the relevant stakeholders, as most disciplines have incorporated it into their pedagogical strategies. The findings also suggest that technological integration was lacking during the implementation of service learning across disciplines.
The overview of these studies suggests that evidence on the implementation of service learning is currently limited, as most investigations focused on measuring its effectiveness and learning outcomes. Given the novelty of this practice in the Malaysian context, it is crucial to understand the mechanics of its implementation with regard to the challenges and opportunities encountered by both lecturers and students. Nevertheless, the literature on service learning, both within Malaysia and abroad, suggests that its key aspect is the integration of community service into academic learning, whereby parallel development and partnership between the community and the students occur in a natural ecosystem (Bringle et al., 2016). It also highlights the role of pedagogical innovation and effectiveness through the incorporation of service-learning components to achieve the intended learning outcomes. Thus, reflective practice is key to the successful implementation of service learning. This study focuses on critical reflections from lecturers and students and seeks to understand the mechanics of service-learning implementation, in relation to its challenges and opportunities, in a Malaysian context.
The role of critical reflection by both learners and instructors/facilitators is central to identifying effective outcomes during the implementation of service learning. This reflective approach enables a better understanding of the conceptualisation and development of effective service-learning practices. Therefore, by employing the Scholarship of Teaching and Learning (SoTL) method to investigate service-learning practices, the issues and challenges that result from its implementation can be examined. SoTL is defined as a systematic investigation of teaching and learning, employing validated criteria of scholarship, to understand the factors that can enhance learning outcomes or to develop an accurate understanding of learning that is shared with the academic community (Hutchings & Shulman, 1999; Shulman, 2001). Hubbal and Clarke (2010) offered a heuristic model for framing potential SoTL research questions. Based on this framework, this study focuses on SoTL process questions that facilitate formative assessment of educational initiatives in a particular context. Placing the key stakeholders within the framework provides an alternative vantage point from a teaching and learning perspective in SoTL studies, as it can help determine better guidelines for the management of service learning in the higher education system. The participants engaged in systematic reflective practice, and their experiences of the implementation of service learning were documented. More specifically, the research was guided by the following question: What challenges do faculty and students face in implementing service learning in the context of higher education in Malaysia?
Research Design
This study employed a qualitative approach, collecting data through in-depth face-to-face interviews and focus group sessions. The interview and focus group protocols were developed from the literature review and verified by experts. The methodology incorporated the principles of SoTL, which encourage systematic academic inquiry into teaching and learning practices within classrooms and the sharing of findings with other academics and practitioners for the benefit of all (Felten, 2013). Specifically, this study utilised Gibbs' Reflective Cycle (1988), based on Kolb's experiential learning, to examine the impact of reflection on service-learning practices in terms of the lecturers' (participants') knowledge and their pedagogical practices. The study was designed in the following phases:
Phase-1
The researchers reflected on plausible issues related to service-learning implementation (misconceptions about service learning, haphazard practices, and limited references on local best practices were among the issues examined). In the first meeting, training was conducted to reflect on issues faced by the participants in their previous experiences of service-learning implementation. The participants were exposed to standard practices of service learning both locally and internationally, and they discussed and provided feedback on the standards and procedures of service learning to be implemented in the forthcoming semester.
Phase-2
In this phase, the participants implemented the standard service-learning procedures and practices. They reflected on their experiences during the second meeting. The focus of this phase was on the enhancement of service-learning practices and conceptions (reflection-on-action).
Phase-3
The researchers reflected on issues related to service-learning practices and on the effectiveness of service-learning implementation achieved through proper guidance. The five-step guideline for reflection (Gibbs, 1994) was discussed in depth during the sessions and served as guidance for the participants' reflective writing. Samples of reflection questions based on the guidelines by Gibbs (1988) are shown below:
Experience
Describe your experience by drawing your attention to the facts or order of events. Put yourself back in the situation and try to relive the experience.
Ask yourself, what did you see? What did you hear? How did you feel? Etc.
Reaction
Describe how you reacted physically, mentally or emotionally. The distinction between your physical, mental and emotional reactions is that of a hand, head and heart response, or in other words: What did you do physically? (physical movements) What were your logical and reasoned thoughts? (mental) What was your emotional response? (emotional)
Analysis
When analysing, consider the component parts that made up the experience. For example, if you were managing a situation, who was involved? What issues, problems or topics existed? Was the time of the day significant?
Interpretation
Ask yourself, what does this mean to you? Where did you fit in the big picture? Are you happy with this?
Action Plan
What will you do differently in the future?
Participants
This study was conducted at a university in the north of Malaysia. The researchers involved had conducted a series of training sessions on service-learning approaches throughout 2019. A purposive sampling method was utilised to select the respondents. The participants were selected based on two main criteria: 1) they had undergone at least one cycle of service-learning training organised at the university level, and 2) they had applied the service-learning pedagogical approach in their courses.
Based on these two criteria, eight lecturers were selected. The lecturers were from diverse fields of study, namely Business Management, Creative Industry Management, Communication, Social Work Management, and Computing Studies. In total, the eight lecturers had a pool of 39 students who took part in their respective service-learning projects. Most of the students were in their final year of studies. Students were selected based on the lecturers' recommendations concerning their active involvement in the respective service-learning projects. The number of students varied owing to programme allocation for the student intake in each semester.
Data Collection Procedure
Institutional permission was first obtained to conduct this study. The selected lecturers and students were informed that their participation was voluntary. All respondents understood that they were free to leave the study or decline to participate if they felt uncomfortable with the questions or experienced any discomfort during the course of their participation. No compensation was provided for the lecturers' participation, while the student participants were given a monetary token of appreciation for their contribution to the focus groups; it has been suggested that incentives compensating students for their time and contribution are linked to genuine participation in research studies (Kelly et al., 2017).

Data from the students were collected through focus group interviews, with each focus group comprising five to seven participants. Focus group discussions were used to analyse the perspectives of the students involved. Following the guidelines of Krueger and Casey (2015) on focus groups, the researchers conducted the sessions with the aim of obtaining the participants' authentic viewpoints on their experiences in service-learning activities or projects. Each focus group session lasted between 45 minutes and one and a half hours. Examples of the questions asked in the focus groups included: 'Can you elaborate on your course? Can you tell me what course you are taking and what kind of activities you have participated in during the service-learning lessons?' Data from the lecturers were collected through in-depth face-to-face interviews following open-ended protocols, conducted to ensure that the researchers obtained as much information as possible from the respondents (Lindlof & Taylor, 2002). Each session lasted between 40 minutes and one hour. Examples of interview questions for the lecturers included: 'Can you elaborate on your course and service-learning activity/project? What issues did you have in integrating the course contents into your service-learning project?' All sessions were recorded and later transcribed.

Following the guidelines presented by Braun and Clarke (2013), a thematic analysis approach was employed in this study. The two data sources were analysed concurrently while taking into account the research question posed. The thematic analysis comprised six main phases. First, the researchers examined the data thoroughly; at this stage, reviewing the data against the theoretical framework helps facilitate interpretation. Second, initial codes were established from the data to provide a better understanding of it. Third, themes were identified: the initial codes were combined and cross-referenced to develop meaningful themes. Fourth, the themes were reviewed against the initial work of searching for themes in the literature. Fifth, the themes were finalised and named. In the final stage, a comprehensive report based on the outcomes of the coding and themes was produced.
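To make the coding-to-themes step (phases two and three above) concrete, the sketch below illustrates in Python how initial codes attached to transcript segments might be cross-referenced into candidate themes. The code labels, theme names and segments here are hypothetical illustrations, not the study's actual codebook.

```python
from collections import defaultdict

# Hypothetical mapping of initial codes to candidate themes; these labels
# are illustrative only, not the study's actual codebook.
CODE_TO_THEME = {
    "cannot apply theory in field": "Gap in theory and practice",
    "relies on lecturer for decisions": "Lack of cognitive autonomy",
    "no funding or guidelines": "Lack of structural support",
    "community refuses cooperation": "Relationship and rapport with community",
}

def group_into_themes(coded_segments):
    """Cross-reference coded transcript segments into candidate themes."""
    themes = defaultdict(list)
    for segment, code in coded_segments:
        themes[CODE_TO_THEME.get(code, "Uncategorised")].append(segment)
    return dict(themes)

segments = [
    ("We learn everything in theory...", "cannot apply theory in field"),
    ("We consult our lecturers most of the times.", "relies on lecturer for decisions"),
]
print(group_into_themes(segments))
```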
Coding was essentially a team effort: the research team members, with the help of a research assistant, individually coded the lecturer interviews and the focus group transcripts and later came together to compare the coding outputs. The team then reviewed the coding, assigned codes to their respective emerging themes, and ensured that the patterns in the data were consistent with the interpretation agreed by the members of the research team. The data were then triangulated across the two sources to ensure that the themes aligned with the group's consensus.
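The study compared coding outputs through discussion rather than through a statistical index, but inter-coder agreement of the kind described here is often quantified with Cohen's kappa. The following is a minimal sketch of that calculation under this assumption; the two coders' label sequences are invented for illustration.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' categorical labels on the same segments."""
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                     for c in categories)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Invented code labels from two coders for the same ten transcript segments.
coder1 = ["gap", "autonomy", "gap", "support", "gap",
          "community", "autonomy", "support", "gap", "community"]
coder2 = ["gap", "autonomy", "support", "support", "gap",
          "community", "autonomy", "gap", "gap", "community"]
print(f"Cohen's kappa = {cohens_kappa(coder1, coder2):.2f}")  # ~0.72 here
```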
FINDINGS
This study investigated the perspectives of lecturers and students on the challenges they encountered while participating in a service-learning program. Thematic analysis of the data yielded four themes across all participants. The challenges from the students' perspective involved two themes: 1) a gap between theory and practice, and 2) a lack of cognitive autonomy. The challenge from the lecturers' perspective was a lack of structural support. In addition, a theme was identified from the common challenges faced by both lecturers and students: relationship and rapport with the community.
Gap in theory and practice
The main challenge identified from the students' experiences was the lack of a close association between what had been learnt in the classroom and what needed to be implemented in a practical situation. When asked about the effectiveness of carrying out their service-learning projects, most students responded that they did not know the strategies, or lacked sufficient skills, to apply their theoretical knowledge in a practical setting.
"(My)(Knowledge) in theory is adequate. But how can I apply it?
We learn everything in theory. So, when we faced the real situation, we would have to take time to digest (the situation) and adjust ourselves (to practice it) because we're too used to learning only the theoretical bit. Theoretical knowledge is more structured. When we're in the field, things come in different forms; thus it takes time for us to understand what we need to do." [Student 10]

The students constantly struggled to apply the theoretical knowledge they had gained in the classroom to real-life situations. As one student remarked, "We are trying to think of ways to use the knowledge gained in the class, to apply in our field allocation".
[Student 9]
The students also mentioned the difficulties of finding appropriate field contexts that are related to the contents learnt in the classroom.
"When we were informed about the task, we were told that we needed to be a consultant for a company's strategic planning. However, there was a misunderstanding between us and the company, even though we had explained about our task objective. They asked us to distribute flyers to the potential customers. Later, we tried to explain about our goal again. It is indeed hard to understand and materialised service-learning project." [Student 35] Thus, it was not always certain that students will obtain near perfect field conditions to solve problems or contribute with the limited knowledge that they have gained in their classrooms.
Lack of cognitive autonomy
While the literature has pointed to the benefits of service learning in developing higher-order cognitive skills, the students in this study expressed an inability to operate independently when undertaking service-learning projects. They stated that they were highly reliant on their lecturers for guidance; for example, one student said, "In terms of knowledge of the programme (service learning project), we rely 100% on our lecturer to explain what volunteerism is because most of us did not have any experience to carry it out."
[Student 02]
The students displayed a lack of confidence in independent decision-making and evaluative judgement, and were constantly dependent on guidance. For example, one student said, "… as I said, when we had to apply the knowledge (in the real setting), we need to have additional knowledge from the Professor (who taught us), for example his own experience… so that we can gain some ideas on how to plan (the service learning project) ahead." [Student 11] Additionally, the students showed a lack of behavioural and decision-making ability. They emphasised the need for modelling that provided a clear and concise set of guidelines for carrying out the project independently.
"For those (students) who have zero experience in conducting community work or volunteering, they wouldn't know how to handle service learning projects with any clear guidelines. They must be willing to participate and cooperate. Furthermore, they must make decisions and not rely on others too much." [Student 03] Some students spoke about their apprehension of taking independent decisions without consultation with their lecturers. "We consult our lecturers most of the times." [Student 31]. They expressed the need to seek constant guidance from their lecturers, which is contrary to the purpose of the service learning's expected outcomes. "Our lecturer stays with us throughout the whole event" [Student 30]
Lack of structural support
Another significant challenge identified was the lack of structural support, in relation to extra manpower, time, money and planning. The lecturers believed that such support would have made the implementation of service learning more effective. Most lecturers voiced concern over the practical realities of implementing service learning; for example, some expressed concern about the financial constraints and other structural challenges they faced.
"Many academicians are reluctant to conduct Service Learning activities as it takes a lot of time in planning and delivering the project. Also it involves money. But to me, service learning activities are very important in building and polishing students' soft skills." [Lecturer A] Furthermore, the lecturers found it difficult to incorporate service learning into scheduled classrooms as it involved the participation of outside communities. A sample extract from a lecturer illustrates this issue: "The current syllabus is already packed and not easy to incorporate service learning in each programme. Some topics should be dropped to accommodate service-learning approach. We are struggling to finish the syllabus and students have a tight schedule. We need to manage the suitability of the service learning projects so that it will not interfere with the class time." [Lecturer G] Another issue faced by the lecturers was the lack of management/ administrative support. They suggested that time constraint and the number of students per class were among the factors that made it difficult for the implementation of service learning projects.
"A smaller class is preferable. It will make the students to be more critical where they can foresee things. So rather than 40 -50 students, the maximum should be 30 students only. This is because the students have many ideas for us to discuss. Thus, with one-anda-half-hour class, our discussion time is limited. We can only do surface discussion, a bit here and there. However, even with this touch and go concept, we could manage it as we had been consistently doing our discussion." [Lecturer E] Apart from that, the lecturers shared their concern on the lack of guidelines. To date, there is no specific set of institutional guidelines on service learning. Therefore, the lecturers were unsure on the amount of time that was needed for the project to be effective and the extent of the project within the community. They expressed the need for more structured guidelines for assessment that would ensure the appropriate learning outcomes are satisfied by the students, and to simultaneously provide the necessary service to the community.
"We also are puzzled with this matter; to what extent is the rigour of the task assigned for service-learning implementation we need to know how to assess the projects. There was no standard rubric for assessment; however, reflections are valuable evidence. Thus, it is important to know the alignment of the course syllabus and the SULAM projects. Moreover, balancing between course requirements, community needs, and self-satisfaction is vital. Thus, we need to justify the need to review the curriculum to fit SULAM." [Lecturer H]
Relationship and rapport with community
The last theme identified from the data involved another important group of stakeholders in the implementation of service-learning projects: the communities or companies that benefit directly from them. Some students encountered problems when they were assigned to the same community or company to carry out their service-learning course requirements every semester.
"Some companies refuse to give their cooperation. We are facing a lack of community cooperation. In addition, arranging a visit to an organization is not an easy task. We have an issue in terms of lack of networking with the Small Medium Enterprises. Moreover, dealing with community leaders also poses a challenge. In certain cases, we have difficulty getting cooperation from the industry and community". [Student 31] Moreover, the lecturers also shared similar concerns when trying to establish a cooperative relationship with the community or industry as there were constant contrains to the schedules of both parties involved.
"It is not easy to get collaboration from the targeted community; We (I and the students) have to work smart and hard to get potential communities to participate in our service-learning project. And sometimes, the date execution of the projects cannot be changed due to community constraints. And it clashed with student other academic commitment. The students and I are in dilemma. However, we chose to go on with the project as it is the only date that the community is available." [Lecturer D] The projects that required funding for service learning to be implemented raised further problems in some cases, as can be seen in the viewpoints expressed by the participants: "It is tough to get sponsor especially when the project involves rural community and the students are racing with time to secure enough fund(ing) to organize the program." [Student 04] "It is hard. We have to initiate contact on our own. Moreover, we have to ensure things are smooth running. We did most stuff on our own. I hope the students learned a good lesson from this project.
DISCUSSION
The findings from the analysis of the students' perspective suggest that their lecturers played an integral role in planning and executing service-learning projects. It was clear that the lecturers needed thorough knowledge of the basic principles of service learning, regardless of whether they were teaching a course that incorporated service learning in theory or in practice. Furthermore, in attempting to facilitate students' journey from theory to practice, lecturers themselves are required to understand the philosophy of service-learning pedagogy and the importance of scaffolding in experiential learning (Vygotsky, 1987), especially when challenging students to identify and solve community needs and issues through the service-learning approach. Additionally, to get students more committed and engaged in service-learning projects, lecturers must nurture students' self-confidence in conducting them. The study by Shephard, Brown and Guiney (2017) highlighted the need to equip lecturers, as course instructors, with the right mindset to be fully engaged with the community in any service-learning project.
According to scholars, the success of service learning relies on careful planning and clear implementation guidelines (Wurdinger & Allison, 2017; Gerholz, Liszt & Klingsieck, 2017). Instructors must be ready to support students facing challenges, as this is a key factor in the success of service learning (Wilson & Devereux, 2014). The students in the current context come from a teacher-centred education system and a collectivist social set-up, in which young individuals have limited opportunities to exercise independent judgement and autonomous functioning. Therefore, conducting themselves in an unfamiliar environment and contributing meaningfully in the context of service learning posed a serious challenge. Scaffolding was a vital component in ensuring the smooth implementation of service-learning projects. Scaffolding can be applied to combine the knowledge and skills of students and lecturers, or of students and their peers (Sleeter, Torres & Laughlin, 2004; Lim et al., 2020). In the study by Sleeter (2004), the facilitators intentionally scaffolded the students throughout the process of managing their service-learning experiences by providing face-to-face lectures, developing analytical skills and creating simulations for every activity. These facilitators claimed that, through planned scaffolding activities, students were able to achieve the goals of their service-learning projects. Given the uncertainties in the planning and implementation of service learning, scaffolding has been shown to be useful, but it needs to be well structured, because service learning is closely linked to the course learning outcomes, assessment practices and duration of the service. It is therefore crucial for lecturers to know structured scaffolding techniques, especially for advanced and technical courses such as mathematics (Ivars, Fenandez & Llinares, 2020) and engineering (Zheng, Wang & Yin, 2013), as these disciplines require a fundamental understanding of basic disciplinary concepts before students can pursue their service-learning projects.
The challenges from the lecturers' perspective revealed that they were overwhelmed by the planning and implementation of service learning, owing to the lack of structural support for undertaking a service-learning program. Higher education institutions that adopt service learning as a teaching and learning approach must design courses that incorporate it purposefully, not only within the delivery process but also in relation to community issues, and must assess the methods practised by participants to gauge the impact of service learning.
Furthermore, the lecturers shared their concerns about the relevance of structural support. Fostering an institutional culture has been highlighted as beneficial for service learning (Shrader et al., 2008). An optimal institutional culture would provide training and professional development for staff, as this is one of the core human resource functions for building the necessary knowledge, skills and attitudes among course instructors and other support staff (Bender & Jordaan, 2007). Moreover, those who extend their efforts to the success of a service-learning project should be acknowledged and rewarded with some form of incentive (Vogel, Seifer & Gelmon, 2010). A cooperative workplace culture can create passionate members of the institution and inspire them to contribute more in the future. Additionally, the changing environment in the higher education system requires upgrades in infrastructure and the assurance of funds to support service learning (Bennet et al., 2016). Although most institutions face financial constraints, top management must find alternative means of persuading other stakeholders and industry players to contribute to the success of service-learning implementation.
To further extend accessibility and cooperation from the community, students and lecturers must establish a trusting relationship with it. Ideally, a university could establish a centre or department to act as a liaison between the university and the targeted community. Jenkins and Sheehey (2011) suggested that, during the planning stage of a service-learning project, students should identify the community's needs, analyse the community's resources and establish effective communication channels with it. This would enable students to determine the suitability of the project and give due consideration to the needs of the community in a specifically designed service-learning project. Kropp, Arrington and Shankar (2015) highlighted that, to carry out a successful service-learning project, students need to develop and plan sustainable projects that attend to the needs of the community. This gains commitment from the community, ensures that the project can solve the pressing issues it faces, and keeps the project within the capacity of the students and lecturers to achieve its goals. This is imperative for building complete trust with the community.
CONCLUSION AND RECOMMENDATIONS
The transformation of teaching and learning from traditional methods to more contemporary, innovative ones that engage students in experiential learning, preparing them for future careers in the era of Industrial Revolution 4.0, is an exciting challenge for both students and instructors. Service learning is a creative teaching and learning approach that allows students to cooperate with the community and help solve the issues it faces. Most importantly, students gain a better understanding of the various issues faced by society and learn from this invaluable opportunity for community engagement.
Certain challenges have impeded the implementation of service learning. However, several methods were identified to ensure that service learning can be conducted successfully when certain factors are properly addressed through informed decision-making, especially before the implementation of projects. The findings of this study highlight the specific issues that lecturers and students faced during the implementation of service learning. It is important to consider both lecturers' and students' perspectives in order to understand how to conduct successful service-learning projects that allow students to fully reap the benefits of such experiential learning. Participation in service learning can help build leadership skills among students; when it is conducted correctly, students become more confident and proud of their contributions to the community they served (Barnett, Jeandron & Patton, 2009). However, students must be supported and guided from their early years in the university to prepare them for the challenges they may face during service-learning implementation.
The implementation of service learning requires total commitment and support from the whole institution (Bennet et al., 2016). Researchers have highlighted the essential requirements for service-learning initiatives to succeed in an institution. These include subject and course design, assessment and evaluation (Polin & Keene, 2010), institutional culture (Shrader et al., 2008), staff training and development (Bender & Jordaan, 2007), acknowledgement and incentives (Vogel, Seifer & Gelmon, 2010), and infrastructure that supports service learning in higher institutions (Bennet et al., 2016). Furthermore, when selecting a course for service learning, it should be offered to third- or fourth-year students, as they are more mature and better equipped with conceptual knowledge of the subject matter. With regard to assessment, lecturers should be given flexibility in deciding the kinds of assessment suitable for service-learning projects, as this would open opportunities for instructors to explore various plausible methods of assessment. However, for a more structured and standardised reference, it is suggested that the assessment allocated to service learning should be between 30% and 100%, with a minimum of 20 hours of student participation in each semester (Department of Higher Education Malaysia, 2019).
There are many initiatives on community and industry engagement in higher education, such as the Public-Private Research Network (PPRN), the Centre for University-Industry Collaboration (CUIC) and the University Community Transformation Centre (UCTC). The initial focus of these collaborations has been on research partnerships for innovation, with little emphasis on learning and teaching. Recently, under the former Minister of Education, the Ministry of Education (now known as MoHE) launched Service Learning Malaysia-University for Society (SULAM) and garnered support from industry players, such as the Khind Foundation, to collaborate in solving various community issues. The Khind Foundation has continued to support SULAM projects nationwide through generous funding to undergraduate students via a competitive grant application. The Ministry of Higher Education (MoHE) has recently released a SULAM Playbook, prepared by a national-level taskforce, to spearhead the SULAM movement within the country (Department of Higher Education, 2019). However, apart from such centralised MoHE efforts, each higher education institution should have a clear framework to drive effective initiatives supporting service-learning implementation on its campuses. Nevertheless, this study does present some limitations. For example, the perspectives of university administrators should also be taken into consideration, as they would provide important insights that could improve the understanding of service-learning implementation and identify other challenges that may arise from the association between university and community projects.
"year": 2020,
"sha1": "70ef4d89fff4e77091bcb6801d7050a5e2c57348",
"oa_license": "CCBY",
"oa_url": "http://e-journal.uum.edu.my/index.php/mjli/article/download/mjli2020.17.2.10/2527/",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "8806338951d6b4f9d7cbba4cfedb1f622bf3dbf1",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
cGAS-STING Pathway Activation and Systemic Anti-Tumor Immunity Induction via Photodynamic Nanoparticles with Potent Toxic Platinum DNA Intercalator Against Uveal Melanoma
Abstract

The cGAS-STING pathway, as a vital innate immune signaling pathway, has attracted considerable attention in tumor immunotherapy research. However, STING agonists are generally incapable of targeting tumors, thus limiting their clinical applications. Here, a photodynamic polymer (P1) is designed to electrostatically couple with 56MESS, a cationic platinum(II) agent, to form NPPDT-56MESS. The accumulation of NPPDT-56MESS in tumors increases the efficacy and decreases the systemic toxicity of the drugs. Moreover, NPPDT-56MESS generates reactive oxygen species (ROS) under excitation with an 808 nm laser, which then results in the disintegration of NPPDT-56MESS. Indeed, the ROS and 56MESS act synergistically to damage DNA and mitochondria, leading to a surge of cytoplasmic double-stranded DNA (dsDNA). In this way, the cGAS-STING pathway is activated to induce anti-tumor immune responses and ultimately enhance anti-cancer activity. Additionally, the administration of NPPDT-56MESS to mice induces an immune memory effect, thus improving the survival rate of the mice. Collectively, these findings indicate that NPPDT-56MESS functions as both a chemotherapeutic agent and a cGAS-STING pathway agonist, representing a combined chemotherapy and immunotherapy strategy that provides novel modalities for the treatment of uveal melanoma.
Introduction
Uveal melanoma (UM), as the most common primary intraocular malignancy, is prone to metastasis, resulting in a one-year survival rate of < 50%.[1] Although immunotherapy has the potential to improve patient survival,[2] patients with UM have exhibited poor responses to immunotherapy, e.g., immune checkpoint blockade (ICB).[3,1a,4] Moreover, the low bioavailability and systemic toxicity of agents decrease the efficiency of immunotherapy.[5] Therefore, optimized immunotherapy strategies for uveal melanoma still need to be explored.
The cGAS-STING pathway is a natural immune signaling pathway whose activation triggers the secretion of type I interferons (IFNs) to promote dendritic cell (DC) maturation and subsequently increases immune cell infiltration into the tumor microenvironment (TME) (e.g., natural killer cells [NKs] and cytotoxic T cells [CTLs]), thus eliciting an anti-tumor immune response in vivo.[6] Therefore, immunotherapy based on activation of the cGAS-STING pathway may prove effective in the treatment of UM. However, STING agonists typically do not target tumors and are prone to degradation in vivo, thus limiting their clinical application.[7] As a number of novel strategies have been applied to activate the cGAS-STING pathway,[8] a systematic combination of immunotherapy and chemotherapy seems especially promising.
Platinum drugs, such as cisplatin (Cis) and carboplatin, activate the cGAS-STING signaling pathway to some extent via double-stranded DNA (dsDNA) leaked into the cytoplasm.[9] 56MESS ([5,6-dimethyl-1,10-phenanthroline][1S,2S-diaminocyclohexane]platinum[II]) is also a platinum-based chemotherapeutic agent with potent anti-cancer activity.[10] More specifically, 56MESS intercalates directly into DNA, thereby damaging the structure of DNA and causing nuclear DNA (nDNA) leakage into the cytoplasm.[11] 56MESS also has the potential to cause mitochondrial perforation, leading to the release of mitochondrial DNA (mtDNA).[12,6b] However, the therapeutic applications of 56MESS are hindered by its intense systemic toxicity[11a,13]; hence, a safe and efficient delivery system is needed to deliver 56MESS directly to tumors.
Here, an amphiphilic polymer (P1) was synthesized via the condensation polymerization of a photodynamic monomer (M1), a thioketal-containing monomer (M2), and anhydride monomers (M3), and was further end-capped with methoxypolyethylene glycol (mPEG5000-OH)[14] (Scheme 1A). M1 can be excited by an 808 nm laser to generate reactive oxygen species (ROS),[15] which immediately break the thioketal bonds in M2 and result in the rapid degradation of P1.[16] Subsequently, P1, which contains pendant carboxylic acids, was used to encapsulate 56MESS via electrostatic interactions to form nanoparticles, designated NPPDT-56MESS (Scheme 1B). Injection of NPPDT-56MESS into tumor-bearing mice with UM resulted in the accumulation of NPPDT-56MESS at the tumor site. NPPDT-56MESS produces a large amount of ROS upon irradiation with an 808 nm laser (NPPDT-56MESS + L), which disintegrates the nanoparticles and subsequently releases 56MESS. The ROS and 56MESS can severely damage DNA and mitochondria and thus release dsDNA to activate the cGAS-STING pathway and induce an anti-tumor effect (Scheme 1C). Hence, NPPDT-56MESS + L not only kills cancer cells through ROS and 56MESS but also effectively activates the cGAS-STING pathway to elicit anti-tumor immunity, representing a combined chemotherapy and immunotherapy strategy for the treatment of UM.
Transmission electron microscopy (TEM) revealed that NPPDT-56MESS has a uniform, spherical shape with an approximate diameter of 100 nm (Figure 1A). Further characterization by dynamic light scattering (DLS) indicated that the average particle sizes of NPPDT and NPPDT-56MESS were 108.1 and 109.4 nm (Figure 1B; Figure S3A, Supporting Information), with polydispersity index (PDI) values of 0.12 and 0.14 (Figure 1C) and zeta potentials of −17.4 and −13.9 mV, respectively (Figure 1D). In addition, the particle sizes of NPPDT and NPPDT-56MESS remained relatively unchanged throughout four weeks of storage in phosphate-buffered saline (PBS) (Figure S3B,C, Supporting Information) and remained almost constant over 5 days in DMEM containing 10% FBS (Figure S3D,F, Supporting Information), indicating that both nanoparticles had good stability.
The photophysical properties of the nanoparticles were then investigated. UV-vis-NIR spectroscopy showed that the absorption spectrum of NPPDT-56MESS spanned 400 to 900 nm, with an absorption peak at 620 nm. Moreover, the emission spectrum of NPPDT-56MESS fell within the second near-infrared window (NIR-II, 900-1700 nm), with an emission peak at 931 nm under 808 nm laser excitation (Figure 1E). Notably, the absorption and emission spectra of NPPDT-56MESS were essentially the same as those of P1,[17] confirming that NPPDT-56MESS was formed from P1.
Furthermore, the ability of NPPDT-56MESS to generate ROS was explored. 1,3-Diphenylisobenzofuran (DPBF) is a fluorescent probe that is decomposed by ROS, resulting in a decrease in its OD value at 415 nm. The results showed that the OD values decreased by 50.00% upon irradiation of NPPDT-56MESS with an 808 nm laser for 60 s and by 65.09% after 90 s (Figure 1F). Additionally, TEM indicated that NPPDT-56MESS disintegrated into flocculent fragments following 180 s of excitation (Figure S4A, Supporting Information). Moreover, 75.54% of the Pt content was released within 24 h upon irradiation of NPPDT-56MESS, whereas the nanoparticles remained intact in the dark (Figure S4B, Supporting Information). Taken together, these findings indicate that NPPDT-56MESS + L rapidly produces a large amount of ROS that rapidly disintegrates the nanoparticles.
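As a worked illustration of the DPBF readout, the decomposition percentage is simply the relative drop in absorbance at 415 nm from the pre-irradiation baseline. The short Python sketch below shows the arithmetic; the OD values are illustrative placeholders chosen to mirror the reported 50.00% and 65.09% decreases, not the measured data.

```python
# Relative drop in DPBF absorbance at 415 nm as a proxy for ROS generation.
# Illustrative OD readings (not the measured values from this study).
od_at_415nm = {0: 1.0000, 60: 0.5000, 90: 0.3491}  # irradiation time (s) -> OD

baseline = od_at_415nm[0]
for t in sorted(od_at_415nm):
    decrease_pct = (baseline - od_at_415nm[t]) / baseline * 100.0
    print(f"{t:>3} s irradiation: OD = {od_at_415nm[t]:.4f}, "
          f"DPBF decomposed = {decrease_pct:.2f}%")
```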
In Vitro Study of NPPDT-56MESS
To assess cellular uptake, OCM-1 cells were treated with NPPDT-56MESS@Cy5.5 (NPPDT-56MESS labeled with Cy5.5) for different durations. Confocal laser scanning microscopy (CLSM) showed a gradual, time-dependent increase in the intracellular fluorescence intensity of Cy5.5 (Figure 1G; Figure S5, Supporting Information). Similarly, flow cytometry (FCM) indicated that the mean fluorescence intensity (MFI) had increased by ≈20-fold after 7 h of treatment compared with that at 1 h (Figure S6, Supporting Information). These results suggested that NPPDT-56MESS was taken up by OCM-1 cells in a time-dependent manner.
Next, the ability of NPPDT or NPPDT-56MESS to generate ROS under 808 nm laser irradiation (NPPDT + L or NPPDT-56MESS + L) was investigated with DCFH-DA probes by CLSM and FCM. CLSM revealed a marked increase in DCF fluorescence within OCM-1 cells treated with NPPDT + L or NPPDT-56MESS + L (Figure 1H; Figure S7, Supporting Information). Meanwhile, by FCM, the MFI of DCF in cells treated with NPPDT-56MESS + L or NPPDT + L was nearly 30-fold higher than that in cells treated with either NPPDT-56MESS or NPPDT alone (Figure S8, Supporting Information). These results indicated that NPPDT + L and NPPDT-56MESS + L produced abundant intracellular ROS.
Anti-Tumor Effect Elicited by NPPDT-56MESS + L In Vitro
The ROS produced by NPPDT-56MESS + L immediately degrade NPPDT-56MESS and release 56MESS. Subsequently, the ROS and 56MESS damage cellular DNA and mitochondria to induce anti-cancer effects in tumor cells (Figure 2A). The results showed that the IC50 of NPPDT-56MESS + L was 1.46 μM in OCM-1 cells, whereas that of 56MESS was 2.23 μM (Figure 2B). Meanwhile, the maximum inhibition rate in NPPDT + L-treated OCM-1 cells approached 30% (Figure S9A, Supporting Information). Similar results were observed in B16-F10 cells (Figure S9B,C, Supporting Information). These findings indicated that NPPDT-56MESS + L exerted the most potent anti-cancer activity, which may be attributed to synergistic effects between 56MESS and ROS.
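IC50 values such as those above are typically obtained by fitting a sigmoidal dose-response model to viability data. Below is a minimal sketch of such a fit using a four-parameter logistic (Hill) function with SciPy; the concentration-viability pairs are invented for illustration, and the paper does not specify which fitting model was actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_curve(c, top, bottom, ic50, n):
    """Four-parameter logistic: viability (%) vs. drug concentration c (uM)."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** n)

# Invented dose-response data for illustration only (uM vs. % viability).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
viability = np.array([96.0, 88.0, 62.0, 28.0, 8.0])

params, _ = curve_fit(hill_curve, conc, viability,
                      p0=[100.0, 0.0, 1.0, 1.0], maxfev=10000)
top, bottom, ic50, n = params
print(f"Fitted IC50 = {ic50:.2f} uM (Hill slope = {n:.2f})")
```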
Subsequently, the effect of NP PDT -56MESS + L on cellular DNA was explored. γ-H2AX is a marker of DNA damage. [16a,18] CLSM analysis revealed that the fluorescence of γ-H2AX was most intense within cells treated with NP PDT -56MESS + L (Figure 2C; Figure S10A, Supporting Information), and by FCM the MFI of γ-H2AX in cells treated with NP PDT -56MESS + L was 5.71-, 2.60-, and 1.54-fold higher than that in cells treated with PBS, NP PDT + L, and 56MESS, respectively (Figure S10B, Supporting Information). Hence, NP PDT -56MESS + L induced remarkable DNA damage via a synergistic effect elicited by 56MESS and ROS.
Next, the effects of NP PDT -56MESS + L on mitochondria were explored. Firstly, mitochondria-specific ROS was marked with the MitoSOX Red probe. [19] The fluorescence of MitoSOX was particularly evident in cells treated with NP PDT -56MESS + L (Figure S11, Supporting Information), indicating that mitochondrial stress occurred in these cells. Mitochondrial dysfunction was further reflected by a decreased JC-1 aggregate/JC-1 monomer ratio. [20] The JC-1 aggregate/JC-1 monomer ratio in cells treated with NP PDT -56MESS + L (10.55%) was significantly decreased compared with that of cells treated with PBS (81.30%), indicating prominent mitochondrial dysfunction in NP PDT -56MESS + L-treated cells (Figure S12, Supporting Information). In addition, significant changes in mitochondrial morphology and structure were observed by TEM within cells treated with NP PDT -56MESS + L, e.g., mitochondrial swelling and membrane rupture (Figure 2D). Therefore, NP PDT -56MESS + L inflicted significant damage on mitochondria.
Activation of the cGAS-STING Pathway by NP PDT -56MESS + L In Vitro
The efficacy of NP PDT -56MESS + L in activating the cGAS-STING pathway was further evaluated.
First, mitochondria and dsDNA were stained with an anti-TOMM20 antibody and a dsDNA marker, respectively. [21] CLSM images showed remarkable red fluorescence of dsDNA outside the mitochondria in cells treated with NP PDT -56MESS + L (Figure 2E; Figure S13A, Supporting Information). Moreover, qPCR analysis showed that cytosolic mtDNA increased by 3.45-fold after treatment with NP PDT -56MESS + L (Figure S13B, Supporting Information). These results indicated that NP PDT -56MESS + L induced remarkable accumulation of dsDNA in the cytoplasm. Second, western blotting (WB) analysis revealed that the expression levels of p-STING, p-TBK1, and p-IRF-3 were significantly increased in NP PDT -56MESS + L-treated cells (Figure 2F; Figures S14 and S15, Supporting Information). Third, by FCM, the MFI of p-IRF-3 in cells treated with NP PDT -56MESS + L was 1.38-, 5.58-, and 84.61-fold higher than that of cells treated with 56MESS, NP PDT + L, and PBS, respectively (Figure 2G). Taken together, these results indicated that NP PDT -56MESS + L resulted in the accumulation of dsDNA in the cytoplasm and activated the cGAS-STING pathway via cascade activation of its target proteins.
Activation of the cGAS-STING pathway leads to increased production of type I IFNs and pro-inflammatory cytokines, which then induce an anti-tumor immune response, including the maturation of DCs (Figure 2A). [22] ELISA results showed that the abundance of IFN-β and IL-6 was 4.90 and 4.93 times higher in the supernatants of OCM-1 cells treated with NP PDT -56MESS + L than in those of cells treated with PBS (Figure S16, Supporting Information). Subsequently, bone-marrow-derived dendritic cells (BMDCs) were co-incubated with B16-F10 cells that had been treated with the different study drugs. FCM analysis showed that the proportion of mature DCs (CD80 + CD86 + ) was 1.40, 1.71, and 58.51 times higher in cells treated with NP PDT -56MESS + L than in cells treated with 56MESS, NP PDT + L, or PBS, respectively (Figure 2H; Figure S17, Supporting Information). Hence, NP PDT -56MESS + L exhibited the strongest potential for inducing anti-tumor immunity.
Metabolomic Analysis of OCM-1 Cells Treated with NP PDT -56MESS + L
To further investigate the effects of NP PDT -56MESS + L on tumor cells, metabolites in OCM-1 cells treated with PBS, Cis, 56MESS, or NP PDT -56MESS + L were analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). First, principal component analysis (PCA) revealed that intracellular metabolites were clearly separated between cells treated with the different drugs, with PC1 and PC2 explaining 71.6% and 14.5% of the variance, respectively (Figure S18, Supporting Information). Second, different metabolites appeared in the heat map, indicating the different effects elicited by the various treatments (Figure 3A; Figure S19, Supporting Information). Third, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis revealed that metabolites related to the tricarboxylic acid (TCA) cycle were significantly enriched in cells treated with NP PDT -56MESS + L compared with cells treated with PBS, cisplatin, or 56MESS (Figure 3B-D), indicating significant mitochondrial dysfunction in NP PDT -56MESS + L-treated cells. Furthermore, purine or pyrimidine metabolism was enriched in cells treated with 56MESS or NP PDT -56MESS + L (Figure 3B,C; Figure S20, Supporting Information), highlighting the presence of DNA damage within these cells. [23] In addition, the levels of citric acid, L-aspartic acid, and D/L-glutamic acid in cells treated with NP PDT -56MESS + L were ≈406.8, 3687.5, and 5.0 times higher than those in cells treated with PBS (Figure 3E-G), further suggesting impaired energy metabolism and dysfunctional nucleotide synthesis in cells treated with NP PDT -56MESS + L.
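For readers unfamiliar with the workflow, the PCA step above reduces the metabolite-intensity matrix to a few components whose explained-variance ratios correspond to the PC1/PC2 percentages quoted; the sketch below illustrates the computation on a randomly generated matrix that stands in for the real LC-MS/MS intensities.

```python
# Minimal sketch: PCA of a metabolite-intensity matrix (samples x metabolites).
# The random matrix stands in for real LC-MS/MS intensities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=5.0, sigma=1.0, size=(12, 200))  # 12 samples, 200 metabolites

scaled = StandardScaler().fit_transform(np.log2(intensities + 1))
pca = PCA(n_components=2).fit(scaled)
scores = pca.transform(scaled)  # per-sample coordinates on PC1/PC2

print("Explained variance (PC1, PC2):",
      [f"{v:.1%}" for v in pca.explained_variance_ratio_])
```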
Biosafety and Bio-Distribution of NP PDT -56MESS In Vivo
Biosafety was evaluated before the application of NP PDT -56MESS for treatment in vivo. First, PBS, NP PDT , 56MESS, and NP PDT -56MESS were administered to healthy KM mice via the tail vein (Figure S21, Supporting Information). The results showed that body weight fluctuated by < 10% following administration of NP PDT -56MESS or PBS (Figure S22, Supporting Information), indicating that the systemic toxicity of NP PDT -56MESS at this dose was negligible. Furthermore, hematoxylin-eosin (H&E) staining revealed no significant organ damage in mice treated with NP PDT -56MESS (Figure S23, Supporting Information). Additionally, the serum biochemical indices of mice treated with NP PDT -56MESS did not significantly differ from those of PBS-treated mice, while the levels of AST and ALT were significantly increased in mice treated with 56MESS (Figure S24A-I, Supporting Information). Therefore, these results indicated that NP PDT -56MESS exhibited good biosafety in vivo.
Subsequently, the biodistribution of NP PDT -56MESS was assessed in a subcutaneous model in BALB/c nude mice (Figure 4A). First, mice were injected with NP PDT -56MESS@Cy7.5 (NP PDT -56MESS labeled with Cy7.5) via the caudal vein and monitored with an in vivo imaging system (IVIS) via fluorescence imaging. The fluorescence intensity of tumors continuously increased from 1 to 24 h, peaking at 8.823 × 10 9 p s −1 cm −2 sr −1 at 24 h. Thereafter, it gradually decreased to 8.08 × 10 9 p s −1 cm −2 sr −1 at 48 h (Figure 4B; Figure S25, Supporting Information). Nevertheless, the fluorescence intensity in tumors remained 5.67, 3.15, 7.46, 5.53, and 1.61 times higher than that in the heart, liver, spleen, lung, and kidney at 48 h, respectively (Figure 4C,D). These results suggested that NP PDT -56MESS targeted tumors and accumulated there for extended periods of time.
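The tumor-to-organ ratios quoted above are simple quotients of mean radiance values; the sketch below shows that bookkeeping with hypothetical radiance numbers standing in for the measured ones.

```python
# Minimal sketch: tumor-to-organ ratios from mean radiance values
# (p s^-1 cm^-2 sr^-1). All numbers are hypothetical placeholders.
radiance = {
    "tumor": 8.08e9,
    "heart": 1.42e9, "liver": 2.56e9, "spleen": 1.08e9,
    "lung": 1.46e9, "kidney": 5.02e9,
}

for organ, value in radiance.items():
    if organ != "tumor":
        print(f"tumor/{organ}: {radiance['tumor'] / value:.2f}")
```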
Anti-Tumor Effect of the NP PDT -56MESS + L In Vivo
Next, the anti-tumor effects of NP PDT -56MESS + L were evaluated in BALB/c nude mice bearing OCM-1 tumors (Figure 4A). Firstly, no significant difference in body weight was observed among mice treated with the different drugs (Figure 4E). Secondly, NP PDT -56MESS + L-treated mice had the smallest tumors on day 15 (Figure 4F,G; Figure S26, Supporting Information); e.g., the tumor weight in mice treated with NP PDT -56MESS + L was only 1/10 of that in mice treated with PBS. In addition, H&E staining showed remarkable nuclear fragmentation and nucleolysis in tumors of mice treated with NP PDT -56MESS + L (Figure 4H, top panel). Similarly, TUNEL staining revealed significant DNA damage in tumors of mice treated with NP PDT -56MESS + L (Figure 4H, lower panel). These results suggested that NP PDT -56MESS + L exhibited a remarkable tumor-suppressive effect in vivo.
NP PDT -56MESS + L Activates the cGAS-STING Pathway to Enhance In Vivo Anti-Tumor Effects
NP PDT -56MESS + L was found to activate the cGAS-STING pathway in vitro. Next, the ability of NP PDT -56MESS + L to induce anti-tumor immunity was evaluated in vivo. To this end, we constructed a B16-F10 tumor-bearing C57BL/6 mouse model (Figure 5A).
First, the activation of the cGAS-STING pathway was investigated in vivo. No significant differences in body weight were observed between mice treated with the various agents (Figure 5B; Figure S27, Supporting Information), and tumors in mice treated with NP PDT -56MESS + L had ceased growing or even regressed by day 15 (Figure 5C,D; Figure S27, Supporting Information). Additionally, the levels of IFN-β, IL-6, and IFN-γ were 8.45, 9.53, and 4.42 times higher in the peripheral blood of mice treated with NP PDT -56MESS + L than in that of mice treated with PBS (Figure 5E,F; Figure S28, Supporting Information). Moreover, the fluorescence of p-STING was especially noticeable in tumors of mice treated with NP PDT -56MESS + L (Figure 5G). These results suggested that NP PDT -56MESS + L efficiently activated the cGAS-STING pathway in vivo.
Next, we investigated the anti-tumor immune response in vivo. FCM results showed that the relative proportion of mature DCs (CD80 + CD86 + ) in tumor tissues of mice treated with NP PDT -56MESS + L (61.57%) was 43.76 percentage points higher than that of mice treated with PBS (17.80%) (Figure 5H; Figure S29, Supporting Information). Furthermore, the proportion of mature DCs within tumor-draining lymph nodes (TDLNs) of mice treated with NP PDT -56MESS + L (41.00%) was 1.90 times that in mice treated with PBS (21.63%; Figure S30, Supporting Information). Collectively, these results indicated that NP PDT -56MESS + L induced DC maturation in vivo.
Mature DCs, as the most potent antigen-presenting cells, interact with myriad immune cells, resulting in the activation of NKs, downregulation of Tregs, and differentiation of naive T cells into CD8 + T cells. [24] FCM analysis showed that: i) the proportion of NKs (CD69 + NK1.1 + ) infiltrating the tumor tissues of mice treated with NP PDT -56MESS + L was 65.36%, representing an increase of 32.87 percentage points over that in mice treated with PBS (32.50%) (Figure 5I; Figure S31, Supporting Information); ii) the proportion of Tregs (5.26%) in the tumor tissues of mice treated with NP PDT -56MESS + L was only 15.51% of that in mice treated with PBS (33.93%) (Figure 5J; Figure S32, Supporting Information); iii) the proportion of CD8 + T cells infiltrating the tumor tissues of mice treated with NP PDT -56MESS + L was 47.97%, which was 33.50 percentage points higher than that of mice treated with PBS (14.47%; Figure 5K; Figure S33, Supporting Information); iv) the proportion of infiltrating CD8 + T cells in the spleen of mice treated with NP PDT -56MESS + L was 1.18 times that of mice treated with PBS (Figure S34, Supporting Information). In addition, to visualize the infiltration of immune cells into tumor tissues, immunofluorescence staining was performed, revealing that the fluorescence intensity of CD8 + T cells was highest in the tumor tissues of mice treated with NP PDT -56MESS + L (Figure S35, Supporting Information). These results suggested that NP PDT -56MESS + L effectively promoted the infiltration of anti-tumor immune cells via activation of the cGAS-STING pathway, thereby inducing a powerful anti-tumor immune response in vivo.
NP PDT -56MESS + L Protects Against Tumor Recurrence and Metastasis by Inducing Systemic Immunity
Mature DCs, NKs, and CD8 + T cells can further interact with each other and with other immune cells, e.g., naïve T cells, which contributes to the development of systemic anti-tumor immunity. [25] Differentiated from naive T cells, memory T cells serve as key components of the systemic anti-tumor immune response, preventing tumor recurrence and metastasis. [26] Hence, we further investigated the long-term effects of NP PDT -56MESS + L in the B16-F10 tumor-bearing C57BL/6 mouse model according to the workflow presented in Figure 6A. [27] FCM analysis revealed that the proportion of central memory T cells (T CM ; CD44 + CD62L + ) infiltrating the spleen of mice treated with NP PDT -56MESS + L was 16.63%, which was 8.91 times that of mice treated with PBS (1.87%) (Figure 6B,C; Figure S36, Supporting Information). Furthermore, the proportion of T CM (CD44 + CD62L + ) infiltrating the TDLNs of mice treated with NP PDT -56MESS + L was 24.63%, which was 6.84 times that of mice treated with PBS (3.60%; Figure 6D; Figure S37, Supporting Information). Meanwhile, no signs of primary tumor recurrence were observed in mice treated with NP PDT -56MESS + L, whereas the recurrence rate reached 40% in mice treated with PBS by day 20 (Figure 6E). In the rechallenge tumor model, established via subcutaneous injection of B16-F10 cells into the opposite flank, the incidence of a second tumor by day 30 was 20% in mice treated with NP PDT -56MESS + L versus 60% in mice treated with PBS (Figure 6E). Moreover, mice treated with NP PDT -56MESS + L had the highest survival rate (80%), even at day 60 (Figure 6F). Taken together, NP PDT -56MESS + L induced a systemic anti-tumor immune response that inhibited the recurrence and metastasis of B16-F10 tumors and prolonged the survival of mice.
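Survival outcomes like the 80% rate at day 60 are conventionally summarized with a Kaplan-Meier estimator; the sketch below uses the lifelines package with hypothetical follow-up data, since the per-animal event times are not reproduced here.

```python
# Minimal sketch: Kaplan-Meier survival estimate with the `lifelines` package.
# Durations (days) and event flags (1 = death) are hypothetical placeholders.
from lifelines import KaplanMeierFitter

durations = [60, 60, 60, 60, 41]   # 4 mice alive at day 60, 1 death at day 41
events    = [0, 0, 0, 0, 1]        # 0 = censored (alive), 1 = event (death)

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="NP_PDT-56MESS + L")
print(kmf.survival_function_)      # survival probability over time (~0.80 at day 60)
```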
Conclusion
In this study, nanoparticles named NP PDT -56MESS were synthesized; upon excitation with an 808 nm laser, they generate ROS and subsequently release 56MESS. In vitro, the ROS and 56MESS synergistically damaged cellular DNA and mitochondria, resulting in a significant increase in cytoplasmic dsDNA, which subsequently activated the cGAS-STING pathway. In vivo, NP PDT -56MESS selectively accumulated at the tumor sites of mice, with reduced systemic toxicity. NP PDT -56MESS + L further induced anti-tumor immune responses via activation of the cGAS-STING pathway, enhancing the anti-tumor effect. NP PDT -56MESS + L also induced systemic anti-tumor immune memory in mice by increasing the proportion of infiltrating T CM cells in the spleen, effectively inhibiting the recurrence and metastasis of melanoma. Importantly, the survival rate of mice treated with NP PDT -56MESS + L remained as high as 80% even at day 60. Hence, NP PDT -56MESS + L is not only an effective chemotherapeutic agent, but also a "STING agonist" capable of inducing anti-tumor immunity. Taken together, this study provides an effective combinatorial chemotherapy and immunotherapy strategy for the treatment of UM, with great potential for clinical application.
Scheme 1. Schematic illustration of cGAS-STING pathway activation and anti-tumor immunity induction by NP PDT -56MESS + L. A) Preparation of NP PDT -56MESS via self-assembly of P1 with 56MESS. B) NP PDT -56MESS produces ROS and releases 56MESS upon excitation by an NIR 808 nm laser. C) NP PDT -56MESS + L increases dsDNA in the cytoplasm, which activates the cGAS-STING pathway. Subsequently, IFN-β is released by tumor cells to promote DC maturation, inducing an anti-tumor immune response in vivo.
Figure 2. Anti-tumor effect of NP PDT -56MESS + L in vitro. A) Schematic illustration of cGAS-STING pathway activation inducing an anti-tumor effect in cells treated with NP PDT -56MESS + L (Green cells, tumor cells; Blue cells, BMDCs; Purple cells, mature DCs). B) Relative cell viability of OCM-1 cells treated with various drugs for 24 h via MTT assay. C) Representative CLSM images of the DNA damage marker γ-H2AX in OCM-1 cells treated with various drugs for 12 h (Blue, DAPI; Red, γ-H2AX). Scale bar: 40 μm. D) Representative TEM images of mitochondria in OCM-1 cells treated with PBS or NP PDT -56MESS + L for 12 h (Red arrow, mitochondrial cristae; Orange arrow, mitochondrial vacuolation; Green arrow, rupture of mitochondrial membrane). Scale bar: 500 nm. E) Representative CLSM images of dsDNA in the cytoplasm of OCM-1 cells after various treatments for 12 h (Blue, DAPI; Green, mitochondria; Red, dsDNA; dsDNA within mitochondria appears yellow while dsDNA outside the mitochondria appears red in the merged images). Scale bar: 20 μm. F) Expression of cGAS-STING pathway proteins in OCM-1 cells after various treatments for 24 h by WB; β-tubulin was used as the internal reference protein. G) FCM quantification of p-IRF-3 in OCM-1 cells after various treatments for 24 h. H) FCM quantification of mature bone-marrow-derived dendritic cells (BMDCs). Data are presented as mean ± SD. Statistical significance between every two groups was calculated via one-way ANOVA. * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001.
Figure 3. Metabolomic analysis of OCM-1 cells treated with NP PDT -56MESS + L. A) Heat map of the differential metabolites between cells treated with PBS and NP PDT -56MESS + L. B-D) Dot plots depicting the differential KEGG pathways between cells treated with PBS and NP PDT -56MESS + L (B), Cis and NP PDT -56MESS + L (C), and 56MESS and NP PDT -56MESS + L (D). Dot size corresponds to the enrichment ratio; dot color corresponds to the p-value. E-G) Fold changes of typical metabolites such as citric acid, L-histidine, and DL-glutamine in the different groups. n = 3. Data are presented as mean ± SD. Statistical significance between every two groups was calculated via one-way ANOVA. * p < 0.05, ** p < 0.01, and *** p < 0.001.
Figure 4. Biodistribution of NP PDT -56MESS and anti-tumor effect of NP PDT -56MESS + L in vivo. A) Schematic illustration of the in vivo study. B,C) Biodistribution of Cy7.5-labeled NP PDT -56MESS in OCM-1-bearing mice via fluorescence imaging in vivo and in excised tissues. T, H, Li, S, L, and K represent tumor, heart, liver, spleen, lung, and kidney, respectively. D) MFI of Cy7.5-labeled NP PDT -56MESS in major tissues at 48 h after intravenous injection. E-G) Body weight changes, tumor growth inhibition curves, and tumor weights of mice treated with various drugs (Pt at 0.4 mg kg −1 , n = 5). H) H&E and TUNEL staining of tumor tissues in mice treated with various agents. Scale bar: 200 μm. Data are presented as mean ± SD. Statistical significance between every two groups was calculated via one-way ANOVA. * p < 0.05 and **** p < 0.0001.
Figure 5. NP PDT -56MESS + L activates the cGAS-STING pathway to induce anti-tumor effects in vivo. A) Schematic illustration of the in vivo treatment schedule. B,C) Body weight changes and tumor growth inhibition curves of mice treated with various drugs (Pt at 0.4 mg kg −1 , n = 5). D) Representative images of mice after treatment at various time points. E,F) Concentrations of IFN-β and IL-6 in mouse serum following the various treatments. G) Immunofluorescence imaging of p-STING in B16-F10 tumors after different treatments (Blue, DAPI; Red, p-STING). Scale bar: 50 μm. H-K) Proportions of mature DC (CD80 + CD86 + ), NK (CD69 + NK1.1 + ), Treg (CD4 + Foxp3 + ), and CD8 + T cell (CD3 + CD8 + ) populations within tumors. Data are presented as mean ± SD. Statistical significance between groups was calculated via an unpaired two-sided t-test in (B and C), and one-way ANOVA in (E, F, and H-K). **** p < 0.0001.
Figure 6. NP PDT -56MESS + L protects against tumor recurrence or metastasis by inducing systemic immunity. A) Schematic illustration of the in vivo treatment schedule. B,C) Representative FCM images and populations of T CM (CD44 + CD62L + ) in the spleen. D) Populations of T CM (CD44 + CD62L + ) in TDLNs. E) Tumor incidence after resection of the primary tumor (n = 8 mice per group). F) Survival analysis of mice up to day 60. Data are presented as mean ± SD. Statistical significance between groups was calculated via one-way ANOVA in (C and D), and an unpaired two-sided t-test in (F). **** p < 0.0001. | 2023-10-10T06:16:55.279Z | 2023-10-09T00:00:00.000 | {
"year": 2023,
"sha1": "4337b5d0e3b3f747a4720317e54e3399e9b5fe5b",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.202302895",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "04818acd2715a8921b529406043a5a8de626702b",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
113198142 | pes2o/s2orc | v3-fos-license | EFFECTIVENESS OF ELITE FEMALE BASKETBALL PLAYERS' TECHNICAL-TACTIC ACTIONS AND WAYS FOR THEIR IMPROVEMENT AT STAGE OF MAXIMAL REALIZATION OF INDIVIDUAL POTENTIALS
Purpose: to study the effectiveness of elite female basketball players' technical-tactic actions and to determine ways to improve them at the stage of maximal realization of individual potential. Material: the authors analyzed the competition-functioning indicators of female basketball players of the national combined team of Ukraine and their age characteristics. Results: the effectiveness of technical-tactic actions in the structure of the Ukrainian national women's team's competition functioning at the European championship is presented. The authors report: indicators of team composition; roles in the team; won and lost games; the quantity of scored and skipped points; technical-tactic actions; and the age of the sportswomen. Age indicators of elite female basketball players at the stage of maximal realization are given. Conclusions: we compiled a list of the most important technical-tactic actions in competition functioning and outlined ways to perfect them at the stage of maximal realization of the individual potential of elite female basketball players in different game roles.
sportsmen's technical-tactic actions through highly specialized training means and consideration of the peculiarities of the technical-tactic actions of players in different roles. However, the absence of elite competition practice can hardly be compensated for even by the most effective and highly specialized training means, owing to the influence of psychological factors on a player.
In research by I. Losieva and M. Pityn (2010), the authors attempted to determine the factors that negatively influence young basketball players' adaptation to the training process and the competition practice of teams [8]. They showed that the effectiveness of young players' technical-tactic actions is influenced by psychological factors connected with a player's adaptation to the sport collective. The influence of these factors is more noticeable in competition practice and less so in the training process. The authors stress the need for complex approaches based on the use of highly specialized means in the training process and a sufficient amount of competition practice in official competitions.
Foreign authors stress this specific problem and possible ways of overcoming it [14,21]. For example, the dissertation of T. Khutsynskiy (2004) is devoted to the influence of sex-related factors on the long-term training of female basketball players [13].
The author shows that the effectiveness of elite female basketball players is influenced by the phases of the ovarian-menstrual cycle and other specificities intrinsic to the female organism. That is why determining the orientation of female basketball players' training, and planning the micro-, meso-, and macro-cycles of training, requires consideration of these factors in order to avoid negative effects in the training process.
Determining the effectiveness of the technical-tactic actions of qualified female basketball players at the stage of maximal realization, taking into account the age indicators of sportswomen in different game roles, will permit specifying the age limits of this stage and the orientation of training in the different structural formations of the macro-cycle.
Purpose, tasks of the work, material and methods
The purpose is to study the effectiveness of elite female basketball players' technical-tactic actions and to determine ways to improve them at the stage of maximal realization of individual potential.
The object of the research is the technical-tactic functioning of elite female basketball players at the stage of maximal realization of their individual potential.
The subject of the research is the technical-tactic and age indicators in the training and competition functioning of elite female basketball players at the stage of maximal realization of their individual potential.
Material and methods of the research
The authors analyzed the results of competition functioning and the age indicators of female basketball players of the national combined team of Ukraine in European championship games for the period from 1995 to 2013. In the research we used the following methods: analysis of scientific literature, retrospective analysis of Internet data, pedagogic observations, analysis of advanced pedagogic experience and of competition-functioning results, pedagogic experiment, and methods of mathematical statistics.
Results of the research
The study of the effectiveness of technical-tactic actions involves a complex of problems. In the competition process a player is influenced by a number of factors which complicate the realization of his or her technical-tactic potential. Besides, we should consider the influence of a group of factors related to the coaching staff: the character of the chief coach and his technical-tactic preferences, which are connected with the strategy of the team as determined in the training process and in competitions, etc. In the final tournaments of the European championships, the women's national combined team of Ukraine was trained by several different specialists. The results of the performances of the women's national combined team of Ukraine are given in Table 1. Pedagogic analysis of the Table 1 data shows that in most final tournaments our sportswomen took 10th-13th places. The exception was the final tournament of 1995, when the women's combined team of Ukraine took first place under the guidance of Volodymyr Ryzhov.
Results of women's national combined team of Ukraine (basketball) in finals of European championships 1995-2013, n=6
Analysis of team composition by players' game roles reflects the tactical preferences of the coaching staff of the women's national combined team of Ukraine during preparation for official competitions and in the management of competition functioning. The data on team composition by player roles are presented in Table 2.
Quantitative staff of national combined basketball team of Ukraine (women) in finals of European championships 1995-2013, n=6
Pedagogic analysis of Table 3 shows that the most effective team composition was that at the 1995 European championship: 3 playmakers and attacking backs; 6 "light" and "heavy" forwards; and 3 center players.
The correlation of won and lost games reflects the general level of sport fitness of the women's national basketball combined team of Ukraine in the finals of the European championships 1995-2013. The received results are given in Table 3. The data presented in Table 4 show that the indicators of positive or negative difference and the ratio of scored to skipped points are among the most important components in the analysis of the effectiveness of technical-tactic actions in the competition process. Most specialists are of the opinion that high effectiveness of technical-tactic actions, given a proper level of functional fitness and leadership features in a sportswoman's psycho-type, to a large extent determines the success and efficiency of competition functioning. On the other hand, the higher the level of sport competitiveness and sportsmanship of individual players and teams, the greater the role that secondary factors of competition functioning can play in the achievement of positive results. Indicators of the effectiveness of elite female basketball players' competition functioning are given in Table 5. The use of these indicators for pedagogic analysis of the effectiveness of technical-tactic actions is conditioned by their wide use in practice and in scientific research. This list of indicators is used to determine the effectiveness of technical-tactic actions and to assess the effectiveness of competition functioning when forming the statistical material of official competitions under the auspices of the International Basketball Federation (FIBA), the Basketball Federation of Ukraine, coaches, specialists of complex scientific groups, and others. In Table 6 we present the averaged indicators of the technical-tactic actions of female basketball players of the national combined team of Ukraine in the finals of the European championships. In the process of analyzing the literature and generalizing the experimental results, we determined the following ways to perfect technical-tactic actions and increase their effectiveness:
Table 6 Averaged indicators of competition functioning of basketball combined team of Ukraine (women) in finals of European championships 1995-2013, n=6
- Application of highly specialized means of training technical-tactic fitness which, by the structure and character of their loads, are close to competition exercises; this permits female basketball players to realize their available technical-tactic potential in the competition process to a significant degree;
- application of means of technical-tactic orientation that take account of the game role, which permits optimizing the training process and increasing the effectiveness of competition functioning.
Discussion
At the beginning of the research we hoped to obtain a set of indicators of technical-tactic actions which determine the results of competition functioning to the largest extent and are used by scientists and coaches to assess its effectiveness. The basis for such assumptions was the works by M. Bezmylova and O. Shynkaruk (2010) [3] and (2011) [2], in which the technical-tactic actions used in different systems of assessment of technical-tactic actions are described. Besides, an assessment of technical-tactic actions that considers the psycho-emotional condition of qualified female basketball players is given in the dissertation of T. Khutsynskiy (2004) [13]. The author stresses the importance of the influence of motivational components and psychic condition on the final result of the competition process.
We consider that the purpose formulated in this research has been achieved, as the obtained list of indicators of technical-tactic actions is in fact a basic one and, with certain modifications, is used in most systems of assessment of technical-tactic actions. We noted that the analysis and generalization of the scientific-methodic literature (V. Platonov, 2004 [9]; 2008 [11]; 2013 [10]; Zh. Kozina, 2009 [6]; V. Koriagina [7]; S. Yelevych, 2008 [4]; 2009 [5]; et al.) and of the experimental results permits outlining the main directions for perfecting elite female basketball players' technical-tactic actions and the age limits of the stage of maximal realization. The indicators of elite female basketball players' technical-tactic actions are the basis for optimizing sportswomen's training: correction of training loads, determination of the scope of competition functioning minimally required for the further perfection of sportsmanship, etc. The above permits regarding technical-tactic actions as a leading component which, to a certain extent, determines the effectiveness of the competition process and the realization of the available technical-tactic potential of female basketball players of different roles in a given game. The data obtained about methodic approaches to the perfection of technical-tactic actions should be regarded as main factors which permit optimizing training processes and rationally determining the orientation of training sessions. The generalization of age ranges for elite female basketball players at the stage of maximal realization of individual potential will require further specification and research, for example on the sportswomen of national teams that are the leaders of European and world basketball. Conclusions: 1. The effectiveness of the technical-tactic actions of elite female basketball players in different game roles is, to a large extent, determined by the character of competition functioning and the peculiarities of sport training. The technical-tactic actions used in the analysis of competition functioning and the determination of its efficiency (aggregated in the sketch below) include: ball throws (2-point and 3-point; penalty throws), pick-ups of the ball (in attack, in defense, total), captures of the ball, efficient passes, blocked shots, losses of the ball, and fouls (by a player and against a player). 2. The main means of increasing the effectiveness of elite female basketball players' technical-tactic actions are the following: application of highly specialized means of technical-tactic orientation which, by the structure and character of their loads, are close to competition exercises; application of means of technical-tactic orientation based on the specificities of the game role; application of the most optimal and rational tactical constructions of game conduct against a given opponent. 3. The approximate age indicators of elite female basketball players at the stage of maximal realization of individual potential can be considered the following: averaged indicators, ≈25-27 years; age range, 18-38 years.
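As an illustration of how the indicators listed in conclusion 1 can be aggregated, the sketch below computes a few per-game effectiveness figures from a box score; the statistic names follow the list above (including the paper's term "skipped" for conceded points), and all numbers are invented.

```python
# Minimal sketch: per-game effectiveness indicators from a box score.
# All numbers are hypothetical; statistic names follow the list above.
box = {
    "2pt_made": 18, "2pt_att": 40,
    "3pt_made": 6,  "3pt_att": 20,
    "ft_made": 12,  "ft_att": 16,
    "points_scored": 72, "points_skipped": 65,   # "skipped" = conceded
}

fg2_pct = box["2pt_made"] / box["2pt_att"] * 100
fg3_pct = box["3pt_made"] / box["3pt_att"] * 100
ft_pct  = box["ft_made"]  / box["ft_att"]  * 100
diff    = box["points_scored"] - box["points_skipped"]

print(f"2-point: {fg2_pct:.1f}%  3-point: {fg3_pct:.1f}%  FT: {ft_pct:.1f}%")
print(f"Scored-skipped difference: {diff:+d}")
```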
The prospects for further research in this direction are based on constant monitoring of the effectiveness indicators of the technical-tactic actions of elite female basketball players participating in official international championships, in order to identify trends in the perfection of the technical-tactic level of sportswomen in different game roles.
Table 3
It is well known that the effectiveness of elite female basketball players' competition functioning is influenced by the ratio of scored to skipped points. Such information permits forming an assessment of the effectiveness of the team's technical-tactic actions in the competition process and an expert assessment of the sportswomen's complex fitness, and determining the strong and weak points of the attacking and defensive actions of individual basketball players, of links, and of the team in general (see Table 4).
Table 4
Scored and lost points of basketball combined team of Ukraine (women) in finals of European championships 1995-2013, n=6
Table 5
Indicators of effectiveness of basketball combined team of Ukraine (women) in finals of European championships 1995-2013, n=6. In Table 7 we give the age indicators of the elite female basketball players who participated in the finals of the European championships 1995-2013. The purpose is to specify the age limits of the stage of maximal realization of individual potential in the process of many years' training. Pedagogic analysis of the age indicators of the female basketball players shows that the age of most sportswomen was ≈25-27 years. The age range of the elite basketball players is within 18-38 years. | 2019-04-14T13:03:13.686Z | 2015-08-15T00:00:00.000 | {
"year": 2015,
"sha1": "b8be54cfbf1ba7c8b5c8f51050600dd4a2b3bc18",
"oa_license": "CCBY",
"oa_url": "http://www.sportpedagogy.org.ua/html/journal/2015-08/pdf-en/15srarip.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b8be54cfbf1ba7c8b5c8f51050600dd4a2b3bc18",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Engineering"
]
} |
119587996 | pes2o/s2orc | v3-fos-license | Model Structures for Correspondences and Bifibrations
We study the notion of a bifibration in simplicial sets which generalizes the classical notion of two-sided discrete fibration studied in category theory. If $A$ and $B$ are simplicial sets we equip the category of simplicial sets over $A\times B$ with the structure of a model category for which the fibrant objects are the bifibrations from $A$ to $B$. We also equip the category of correspondences of simplicial sets from $A$ to $B$ with the structure of a model category. We describe several Quillen equivalences relating these model structures with the covariant model structure on the category of simplicial sets over $B^{\mathrm{op}}\times A$.
Introduction
A useful concept from ordinary category theory is the notion of profunctor. This has several incarnations. If A and B are categories, then a profunctor from A to B may be viewed as a functor F : B op × A → Set, or equivalently as a colimit preserving functor P(A) → P(B) between the categories of presheaves on A and B respectively. There is an equivalence of categories (1) [B op × A, Set] ≃ Corr(A, B) between the category of profunctors from A to B and the category Corr(A, B) of correspondences from A to B, and a further equivalence of categories (2) Corr(A, B) ≃ DFib(A, B) between the category of correspondences from A to B and the category DFib(A, B) of two-sided discrete fibrations from A to B. A two-sided discrete fibration (p, q) : X → A × B is, roughly speaking, a functor whose fibers X(a, b) are covariant in a ∈ A and contravariant in b ∈ B. The concept was exploited by Street in [19,20]. There are analogues for the notions of profunctor, correspondence and two-sided discrete fibration at the level of simplicial sets. These notions have been studied in [2,4,9,13,10,15]. If A and B are simplicial sets, then a profunctor from A to B may be thought of as a simplicial map B op × A → S, where S denotes the ∞-category of spaces (Definition 1.2.16.1 of [13]); alternatively we may replace such a map with the left fibration over B op × A that it classifies.
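To make the classical picture concrete, the display below sketches the standard two-sided Grothendieck construction underlying the equivalence (2); this is the usual recipe from ordinary category theory, included as an illustration rather than a quotation from this paper.

```latex
% Sketch: the two-sided Grothendieck construction sending a profunctor
% F : B^op x A -> Set to a two-sided discrete fibration (p,q) : El(F) -> A x B.
\[
  \operatorname{El}(F) = \{(a,b,s) : a \in A,\; b \in B,\; s \in F(b,a)\},
\]
\[
  \operatorname{El}(F)\bigl((a,b,s),(a',b',s')\bigr)
  = \{(\alpha,\beta) : \alpha\colon a \to a',\; \beta\colon b \to b',\;
     F(\beta,a')(s') = F(b,\alpha)(s)\}.
\]
% The fiber of (p,q) over (a,b) is the set F(b,a), covariant in a and
% contravariant in b, which is exactly the two-sided discreteness condition.
```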
The notion of correspondence has a straightforward interpretation at this level also: if A and B are simplicial sets we shall say that a simplicial map p : X → ∆ 1 is a correspondence from A to B if there are isomorphisms p −1 (0) ≃ B and p −1 (1) ≃ A (see Definition 3.1). The correspondences from A to B form the objects of a category Corr(A, B), which is a certain subcategory of the category (Set ∆ ) /∆ 1 of simplicial sets over ∆ 1 (see Remark 3.3). Correspondences of simplicial sets feature prominently in Lurie's discussion of adjoint functors in [13]; they also play a role in [4].
The notion of two-sided discrete fibration also extends to the context of simplicial sets. In [13], Lurie introduced the notion of a bifibration (f, g) : X → A × B which is an inner fibration together with a condition which encodes the idea that the fibers X(a, b) of the map (f, g) depend covariantly on a and contravariantly on b (this notion is also considered in [10]). Bifibrations are the analog for simplicial sets of the notion of twosided discrete fibration in category theory. For brevity we shall use Lurie's terminology of 'bifibration' rather than 'two-sided discrete fibration'. In the paper [15] Riehl and Verity refer to bifibrations as modules; they play a key role in their study of the formal category theory of ∞-categories.
One of our aims in this paper is to exhibit the bifibrations in simplicial sets from A to B as the fibrant objects of a model category. In Section 4.4 we shall prove the following result (see Theorem 4.19): Theorem A. Let A and B be simplicial sets. There is the structure of a left proper, combinatorial model category on (Set ∆ ) /(A×B) for which the cofibrations are the monomorphisms and the fibrant objects are the bifibrations from A to B.
The existence of this model structure was known to Joyal (see [10]) but a construction of it has not appeared in the literature to date. Following Joyal we call this model structure the bivariant model structure to reflect the covariant and contravariant nature of bifibrations.
To establish the existence of this model structure we study bifibrations in some detail, replicating many properties of left and right fibrations established by Joyal and Lurie. For instance we study the behaviour of bifibrations under exponentiation (Section 4.3), and we introduce the concept of bivariant anodyne map in (Set ∆ ) /(A×B) (see Section 4.2). We introduce the notion of bivariant equivalence (Section 4.6) and prove that a map X → Y between bifibrations in (Set ∆ ) /(A×B) is a bivariant equivalence if and only if it is a fiberwise homotopy equivalence, generalizing the corresponding facts for left and right fibrations (Remark 2.2.3.3 of [13]). We also prove that a bifibration X → A × B is a trivial Kan fibration if and only if its fibers are contractible Kan complexes. Again, this is a generalization of the corresponding facts for left and right fibrations (see Lemma 2.1.3.4 of [13]).
In addition to the model structure for bifibrations, we also construct a model structure for correspondences. In Section 3.2 we prove the following result (see Theorem 3.9): Theorem B. Let A and B be ∞-categories. There is the structure of a left proper, combinatorial model category on Corr(A, B) for which the cofibrations are the monomorphisms and the fibrant objects are the correspondences X → ∆ 1 in Corr(A, B) for which X is an ∞-category.
The model structure for correspondences is left induced (in the sense of [5]) from the Joyal model structure on the slice category (Set ∆ ) /B⋆A . Its existence is well-known to experts -it is stated, but not proved, in [10] and it is alluded to in [13] for instance.
Our other objective in this paper is to describe a series of Quillen equivalences linking the covariant model structure on (Set ∆ ) /(B op ×A) , the correspondence model structure on Corr(A, B), and the bivariant model structure on (Set ∆ ) /(A×B) , which generalize the equivalences (1) and (2). Such a description has recently been given by Ayala and Francis in [2] at the level of ∞-categories. We shall refine the equivalences between ∞-categories that are established in [2] to Quillen equivalences between the model categories above. In fact, we shall describe some additional Quillen equivalences, one of which is of a rather surprising nature.
The twisted arrow category construction (see Construction 5.2.1.1 of [14]) associates to a simplicial set X a new simplicial set Tw(X), equipped with a canonical map Tw(X) → X op × X which is a left fibration if X is an ∞-category. If X is a correspondence from A to B, then base change along the map B op × A → X op × X induced by the inclusions A ⊆ X and B ⊆ X induces a functor a ∗ : Corr(A, B) → (Set ∆ ) /(B op ×A) . In Section 3 we prove the following result (see Theorem 3.23 and Theorem 3.25): Theorem C. Let A and B be ∞-categories. The functor a ∗ : Corr(A, B) → (Set ∆ ) /(B op ×A) is both a right Quillen equivalence and a left Quillen equivalence relating the model structure for correspondences on Corr(A, B) and the covariant model structure on (Set ∆ ) /(B op ×A) . Of note is the fact that the functor a ∗ thus appears as both a left and a right Quillen equivalence. There is a similar series of adjunctions connecting the categories (Set ∆ ) /(A×B) and the category Corr(A, B), which is described in terms of the edgewise subdivision functor sd 2 from [6]. In Section 4.7 we prove the following result: Theorem D. Let A and B be simplicial sets. There is a Quillen equivalence, induced by the edgewise subdivision, between the model structure for correspondences on Corr(A, B) and the bivariant model structure on (Set ∆ ) /(A×B) . The adjoint pair (d ! , d ∗ ) is not a Quillen pair for these model structures; there is however another Quillen equivalence relating these model categories (see Theorem 4.42).
In summary then the contents of this paper are as follows. In Section 2 we review some facts about the covariant model structure and Joyal's notion of dominant map that we will need later in the paper. In Section 3 we describe the model structure for correspondences and prove the existence of the Quillen equivalences from Theorem C above. In Section 4 we study the notion of a bifibration in simplicial sets; we introduce the concept of a bivariant anodyne map and bivariant equivalence. We describe the bivariant model structure on (Set ∆ ) /(A×B) and prove the existence of the Quillen equivalence from Theorem D above.
Finally, we point out that several of the results in this paper seem to be known to experts, but equally proofs of them are missing from the literature; in this paper we fill these gaps. Notation: for the most part we use the notation and terminology from Lurie's books [13] and [14], except where we have indicated. Thus Set ∆ denotes the category of simplicial sets, h(S) denotes the homotopy category of a simplicial set S, etc. Following the convention in [14], we will say that a left cofinal map of simplicial sets is what is called a cofinal map in [13] and that a map of simplicial sets is right cofinal if and only if its opposite is left cofinal.
The covariant model structure
Let S be a simplicial set. We recall some features of the covariant model structure on the category (Set ∆ ) /S of simplicial sets over S from [10] and [13].
Notation 2.1. Recall that the category (Set ∆ ) /S is canonically enriched over Set ∆ . If X → S and Y → S are objects of (Set ∆ ) /S then the simplicial mapping space map S (X, Y ) is the simplicial set defined by the pullback map S (X, Y ) = Map(X, Y ) × Map(X,S) ∆ 0 , where the map ∆ 0 → Map(X, S) corresponds to the structure map X → S.
Covariant equivalences.
Recall that a map f : X → Y in (Set ∆ ) /S is a covariant equivalence if the induced map map S (Y, L) → map S (X, L) is a weak homotopy equivalence for every left fibration L → S. The covariant equivalences are the weak equivalences for the covariant model structure on (Set ∆ ) /S introduced by Joyal and Lurie. Dually there is the contravariant model structure on (Set ∆ ) /S , described in terms of right fibrations on S.
The following theorem from [17] gives a very useful criterion for recognizing covariant equivalences. Theorem 2.2 ([17]). Let S be a simplicial set and let f : X → Y be a map in (Set ∆ ) /S . The following statements are equivalent: (1) f is a covariant equivalence; (2) the induced map X × S R → Y × S R is a weak homotopy equivalence for every right fibration R → S; (3) for every vertex s of S, and for every factorization ∆ 0 → Rs → S of the map s : ∆ 0 → S into a right anodyne map followed by a right fibration, the induced map X × S Rs → Y × S Rs is a weak homotopy equivalence.
2.2. The right cancellation property. Recall that a class of monomorphisms A in a category C is said to satisfy the right cancellation property if the following condition is satisfied: if u and v are composable morphisms in C such that u ∈ A and vu ∈ A, then v ∈ A also. Left anodyne maps are an important example of a class of maps with this property.
Proposition 2.4 (Joyal). The class of left anodyne maps satisfies the right cancellation property.
The following result from [17] gives a useful criterion for detecting when a given class of monomorphisms in Set ∆ satisfying the right cancellation property contains the class of left anodyne maps. Proposition 2.5 ([17]). Let A be a saturated class of monomorphisms in Set ∆ which satisfies the right cancellation property. If the initial vertex maps ∆ 0 → ∆ n are contained in A for all n ≥ 0, then A contains the class of left anodyne maps.
From [18] we have another very useful example of a class of monomorphisms satisfying the right cancellation property. Proposition 2.7 ([18]). The class of inner anodyne maps in Set ∆ satisfies the right cancellation property.
We will make use of this fact in the proof of Proposition 3.17.
Dominant maps.
In this section we recall some facts about the notion of dominant maps of simplicial sets introduced by Joyal (we shall need some of the results from this section in the proof of Lemma 4.29 in Section 4.6). The following result is due to Joyal; we give a proof since we have not been able to find one in the literature to date.
Recall first that a map u : A → B of simplicial sets is said to be dominant if the right derived functor Ru ∗ : Ho((Set ∆ ) /B ) → Ho((Set ∆ ) /A ) is fully faithful, where the slice categories are equipped with the contravariant model structures; equivalently, u is dominant if the counit Lu ! Ru ∗ → id is an isomorphism. Proposition 2.9 (Joyal). Let u : A → B be a dominant map and let p : R → B be a right fibration. Then the pullback v : A × B R → R of u along p is dominant. Proof. Suppose given a dominant map u : A → B and suppose that p : R → B is a right fibration. Form the pullback diagram in which v : A × B R → R denotes the pullback of u along p, and q : A × B R → A denotes the pullback of p along u. We need to prove that the right derived functor Rv ∗ : Ho((Set ∆ ) /R ) → Ho((Set ∆ ) /(A× B R) ) is fully faithful, where (Set ∆ ) /(A× B R) and (Set ∆ ) /R are equipped with the contravariant model structures. We will prove that the counit ε : Lv ! Rv ∗ → id is an isomorphism. Since p : R → B is a right fibration, the left derived functor Lp ! : Ho((Set ∆ ) /R ) → Ho((Set ∆ ) /B ) is conservative (Corollary 10.15 of [10]), and hence it suffices to prove that the image Lp ! ε is an isomorphism in Ho((Set ∆ ) /B ). We have a natural isomorphism Lp ! Lv ! ≃ Lu ! Lq ! . A straightforward argument, using the fact that p is a right fibration, shows that the canonical natural transformation Lq ! Rv ∗ → Ru ∗ Lp ! is a natural isomorphism. Therefore Lp ! ε is isomorphic to the natural transformation Lu ! Ru ∗ Lp ! → Lp ! obtained by applying the counit for the pair (Lu ! , Ru ∗ ) to Lp ! , and this is an isomorphism since u is dominant. We state the following result which appears in [10] and [7]. We first need some notation.
Remark 2.10. The class of dominant maps is closed under retracts and is invariant under categorical equivalence. Notation 2.12. Suppose that b and b′ are vertices of a simplicial set B, and write B b//b′ for the simplicial set B b/ × B B /b′ . If B is the nerve of a category, then the simplicial set B b//b′ is the nerve of the category of factorizations of the arrow f : b → b′. Lemma 2.13 (Joyal). Let u : A → B be a map between ∞-categories. Then u is dominant if and only if for every pair of vertices b, b′ of B the canonical map A × B B b//b′ → B b//b′ is a weak homotopy equivalence. The proof of this statement is reasonably straightforward and is left to the reader. We note the following consequences.
Lemma 2.16. The class of dominant maps is closed under finite products. Proof. It suffices to prove that if u : A → B is dominant then u × id : A × C → B × C is dominant for any simplicial set C. Since dominant maps are invariant under categorical equivalence, we may suppose without loss of generality that A, B and C are ∞-categories. The statement then follows immediately from Lemma 2.13, using the fact that for any vertices b ∈ B and c ∈ C we have a pullback diagram involving the undercategories B b/ and C c/ , and similarly for overcategories.
We conclude this section with the following useful example of a dominant map.
Lemma 2.17. For every n ≥ 0 the diagonal map ∆ n → ∆ n × ∆ n is dominant.
Proof. The diagonal map ∆ n → ∆ n × ∆ n is a retract of the diagonal map (∆ 1 ) n → (∆ 1 ) n ×(∆ 1 ) n . Therefore, since dominant maps are closed under retracts (Remark 2.10) and products (Lemma 2.16) we are reduced to proving that ∆ 1 → ∆ 1 × ∆ 1 is dominant. This can be proven using Lemma 2.13 and a case by case analysis.
2.4. Inner anodyne maps and inner fibrations. We close this section by recording a couple of straightforward results about inner fibrations and inner anodyne maps that we will need later in the paper. Lemma 2.18. Let p : S → T be an inner fibration between Kan complexes which has the right lifting property with respect to the inclusion ∆ { 0 } → ∆ 1 . Then p is a Kan fibration. Proof. It suffices to prove that p is a left fibration, since T is a Kan complex. Every edge of S is an equivalence and hence is p-cocartesian (Proposition 2.4.1.5 of [13]). Therefore p has the right lifting property against every horn inclusion of the form Λ n 0 ⊆ ∆ n , n ≥ 2 (Remark 2.4.1.4 of [13]). Therefore, invoking the assumption that p has the right lifting property against the map ∆ { 0 } → ∆ 1 , it follows that p is a left fibration. Lemma 2.19. Let i : A → B be a monomorphism of simplicial sets which is a categorical equivalence and a bijection on 0-simplices, and suppose that B is an ∞-category. Then i is inner anodyne. Proof. Factor i as i = pj, where j : A → B ′ is inner anodyne and p : B ′ → B is an inner fibration. Then p is a categorical fibration, since p is bijective on objects and B is an ∞-category. Since i and j are categorical equivalences, so is p; therefore p is a trivial Kan fibration and hence has a section s : B → B ′ , which exhibits i as a retract of j. Hence i is inner anodyne.
Correspondences
3.1. The category of correspondences from A to B. We recall the notion of a correspondence between simplicial sets from Section 2.3.1 and Section 5.2.1 of [13]. Definition 3.1. Let A and B be simplicial sets. A correspondence from A to B is a simplicial set X together with a map p : X → ∆ 1 and isomorphisms p −1 (0) ≃ B and p −1 (1) ≃ A. Remark 3.2. We do not require that the map p in the above definition is an inner fibration; we will reserve the term fibrant correspondence to describe such a map (see Section 3.2 below). Note also that what we call a correspondence from A to B is what is called a correspondence from B to A in [13]. Remark 3.3. We write Corr(A, B) for the subcategory of (Set ∆ ) /∆ 1 whose objects are the correspondences from A to B and where a map f : X → Y is a map of simplicial sets over ∆ 1 which restricts to the identity on A and on B. Remark 3.5. If X is a correspondence from A to B with structure map p, and u : ∆ n → X is a simplex, then the composite map pu : ∆ n → ∆ 1 has a unique decomposition pu = i ⋆ f , where i : ∆ k → ∆ 0 and f : ∆ n−k−1 → ∆ 0 . It follows that u|∆ k factors through B, and u|∆ n−k−1 factors through A. Therefore (u|∆ k ) ⋆ (u|∆ n−k−1 ) is an n-simplex of B ⋆ A. This defines a unique map X → B ⋆ A, from which it follows that B ⋆ A is a terminal object of Corr(A, B). Remark 3.6. Via the canonical maps X → B ⋆ A of Remark 3.5, Corr(A, B) may be regarded as a full reflective subcategory of (Set ∆ ) /B⋆A ; in particular Corr(A, B) is a presentable category. The reflector L is defined on objects as follows: if X ∈ (Set ∆ ) /B⋆A with structure map p, then L(X) is the correspondence defined by the pushout which glues B ⊔ A onto X along the preimages of B and A under p. Lemma 3.7. Let X be a correspondence from A to B, where A and B are ∞-categories. The following statements are equivalent: (1) X is fibrant as an object of (Set ∆ ) /B⋆A equipped with the Joyal model structure; (2) the canonical map p : X → B ⋆ A is an inner fibration; (3) the canonical map X → ∆ 1 is an inner fibration; (4) X is an ∞-category.
Proof. The equivalence of statements (3) and (4) follows from the fact that a map of simplicial sets whose codomain is the nerve of a category is an inner fibration if and only if its domain is an ∞-category; the equivalence of (1) and (2) follows from the observation that p : X → B ⋆ A is bijective on vertices, so that p is a categorical fibration if and only if it is an inner fibration (compare the proof of Lemma 2.19). Finally, to complete the proof, we shall prove that (2) ⇐⇒ (4). The implication (2) ⇒ (4) is immediate from the fact that B ⋆ A is an ∞-category. To prove the converse, assume that X is an ∞-category and consider a commutative diagram in which u : Λ n i → X, with 0 < i < n, lies over a simplex v : ∆ n → B ⋆ A. Since X is an ∞-category we may extend u along the inner horn inclusion; we will prove that this extension is compatible with the projection to B ⋆ A. The map v : ∆ n → B ⋆ A decomposes as v = x ⋆ y, where x : ∆ k → B and y : ∆ n−1−k → A. If k = −1 or k = n then we can find a diagonal filler for the diagram above since both A and B are ∞-categories. Otherwise, we have ∆ k ⊆ Λ n i and ∆ n−1−k ⊆ Λ n i . Since X is an ∞-category, we may extend the map u along the inner horn inclusion Λ n i ⊆ ∆ n to obtain a map w : ∆ n → X. It follows that pw|∆ k = x and pw|∆ n−1−k = y, and hence pw = v.
More generally, we have the following theorem, which establishes the model structure for correspondences. Theorem 3.9. Let A and B be ∞-categories. There is the structure of a left proper, combinatorial model category on Corr(A, B) in which a map is a • cofibration if the underlying map of simplicial sets is a monomorphism; • weak equivalence if the underlying map of simplicial sets is a categorical equivalence; and for which the fibrant objects are the correspondences X whose underlying simplicial set is an ∞-category.
Proof. We use Proposition A.2.6.13 from [13]. To begin with, as observed in Remark 3.6 above, Corr(A, B) is presentable. We verify the three conditions (1), (2) and (3) from op. cit. Let C denote the class of cofibrations in Corr(A, B) and let W denote the class of weak equivalences in Corr(A, B). The weakly saturated class of monomorphisms in (Set ∆ ) /B⋆A is generated by the set of boundary inclusions ∂∆ n ⊆ ∆ n in (Set ∆ ) /B⋆A for n ≥ 0. The simplices in B ⋆ A are of the following three types: simplices which factor through B, simplices which factor through A, and mixed simplices of the form x ⋆ y, where x is a simplex of B and y is a simplex of A. It follows that C is generated as a weakly saturated class by the set C 0 of monomorphisms in Corr(A, B) of the form L(∂∆ n ) → L(∆ n ), where L is the reflector of Remark 3.6 and ∆ n ranges over the simplices of B ⋆ A. For (1), observe that the class W is the inverse image of the class of categorical equivalences of simplicial sets under the forgetful functor Corr(A, B) → Set ∆ . It follows that W is perfect by Corollary A.2.6.12 of [13].
(2) follows immediately from the fact that the Joyal model structure on Set ∆ is left proper.
For (3), observe that if f : X → Y is a map in Corr(A, B) which has the right lifting property with respect to every morphism in C 0 , then f is a trivial Kan fibration: indeed, f then has the right lifting property with respect to every monomorphism in Set ∆ .
For the characterization of the fibrant objects, observe that X → B ⋆ A has the right lifting property with respect to all maps in C ∩ W if and only if the underlying map of simplicial sets is a categorical fibration. We then apply Lemma 3.7. Remark 3.11. The category (Set ∆ ) /B⋆A has a natural structure as a simplicial category which is tensored and cotensored over Set ∆ . This structure induces on Corr(A, B) the structure of a simplicial category which is tensored and cotensored over Set ∆ . If X ∈ Corr(A, B) and K is a simplicial set, then the tensor X ⊗ K is defined by the pushout which glues B ⊔ A onto X × K along (B × K) ⊔ (A × K), collapsed via the projections. If Y is again a correspondence in Corr(A, B) and K is a simplicial set, then the cotensor K Y is defined by the pullback K Y = Y K × (B⋆A) K (B ⋆ A), where the map B ⋆ A → (B ⋆ A) K is conjugate to the canonical projection (B ⋆ A) × K → B ⋆ A. Note that K Y so defined is a correspondence: its fibers over the two vertices of ∆ 1 are canonically isomorphic to B and A. In this way Corr(A, B) has the structure of a simplicial category, tensored and cotensored over Set ∆ . 3.3. Distributors to correspondences, and back again. Recall that the edgewise subdivision of a simplicial set X (in the sense of Segal [16]) is defined by composing the functor X : ∆ op → Set with the opposite of the 'doubling functor' σ : ∆ → ∆ sending [n] to [n] op ⋆ [n] ≅ [2n+1], so that sd(X) n = X 2n+1 . This construction can be used to relate the category (Set ∆ ) /(B op ×A) with the category of correspondences from A to B. The relation is as follows.
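For concreteness, the display below records the standard formulas for the doubling functor and the edgewise subdivision; the orientation convention [n] ↦ [n]^op ⋆ [n] is assumed here for illustration, chosen so that the restriction maps produce the map to X^op × X used elsewhere in the paper.

```latex
% Sketch of the standard edgewise subdivision formulas (after Segal).
% The orientation [n] |-> [n]^op * [n] is an assumed convention.
\[
  \sigma \colon \Delta \to \Delta, \qquad
  \sigma([n]) = [n]^{\mathrm{op}} \star [n] \cong [2n+1],
\]
\[
  \operatorname{sd}(X)_n = X(\sigma([n])) = X_{2n+1}.
\]
% Restriction along the inclusions [n]^op -> sigma([n]) and
% [n] -> sigma([n]) assembles into the canonical map sd(X) -> X^op x X.
```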
Observe that the doubling functor above induces a functor between the corresponding simplex categories. The functor σ induces an adjunction (σ ! , σ ∗ ), and in fact the functor σ ∗ admits a further right adjoint. We make the following observations about the functors σ ! and σ ∗ . Lemma 3.13. The functors σ ! and σ ∗ send monomorphisms to monomorphisms. Proof.
Suppose that f : X → Y is a monomorphism. A map in these slice categories is a monomorphism if and only if the underlying map of simplicial sets is a monomorphism; therefore it suffices to check that for every n ≥ 0 the induced map σ ! (f ) n : σ ! (X) n → σ ! (Y ) n is a monomorphism of sets. But σ ! (f ) n is easily seen to be the map f 2n+1 : X 2n+1 → Y 2n+1 , which is a monomorphism by hypothesis.
Lemma 3.14. If f : C → A and g : D → B are maps determining objects D ⋆ C of (Set ∆ ) /B⋆A and D op × C of (Set ∆ ) /(B op ×A) , then there is a natural isomorphism σ ∗ (D ⋆ C) ≃ D op × C. The key point here is the canonical functor ⋆ : ∆ × ∆ → ∆ underlying the join construction. Remark 3.15. It follows that the endo-functor a ∗ a ! of (Set ∆ ) /(B op ×A) is isomorphic to the endo-functor σ ∗ σ ! . Remark 3.16. It follows that a ∗ a ! preserves all colimits and hence is determined by its value on the n-simplices ∆ n → B op × A for n ≥ 0. A short calculation, using Lemma 3.14, shows that in fact the unit map ∆ n → a ∗ a ! (∆ n ) is isomorphic to the diagonal map ∆ n → ∆ n × ∆ n , where ∆ n × ∆ n is regarded as an object of (Set ∆ ) /(B op ×A) in the evident way. To see this, it suffices to prove that the functor a ∗ : Corr(A, B) → (Set ∆ ) /(B op ×A) preserves colimits. This follows from the fact that σ ∗ preserves colimits and the fact that σ ∗ iL = σ ∗ (this last fact can easily be seen using the observation made in Remark 3.15). Proposition 3.17. If v is a left anodyne morphism in (Set ∆ ) /(B op ×A) , then the underlying map of simplicial sets a ! (v) is inner anodyne. Proof. From Lemma 3.13 we have that a ! sends monomorphisms to monomorphisms. Let A denote the class of all monomorphisms v in (Set ∆ ) /(B op ×A) such that the underlying map of simplicial sets a ! (v) is inner anodyne. We need to prove that every left anodyne morphism in (Set ∆ ) /(B op ×A) is contained in A. By Proposition 2.5 it is sufficient to prove that A is saturated, satisfies the right cancellation property, and that the initial vertex maps i n : ∆ 0 → ∆ n are contained in A for all n ≥ 0. By Proposition 2.7, the class of inner anodyne maps in Set ∆ has the right cancellation property; the functoriality of a ! then implies that A also has the right cancellation property. Likewise it is clear that A is a saturated class of monomorphisms, since the inner anodyne maps in Set ∆ form a saturated class and a ! is a left adjoint.
Let n ≥ 0; we show that i n : ∆ 0 → ∆ n is contained in A. Unwinding the definitions, a ! (i n ) may be identified with an explicit map of simplicial sets, and hence it suffices to prove that this last map is inner anodyne. This map factors as a composite of two maps: the first map in this composite is a pushout of an inner anodyne map supplied by Lemma 2.1.2.3 of [13], and the second map in this composite is inner anodyne by another application of this lemma. Hence A contains the left anodyne morphisms in Set ∆ , which completes the proof of the proposition.
The following corollary is straightforward.

Corollary 3.18. If X is a fibrant object of Corr(A, B), then a * X → B op × A is a left fibration.

Remark 3.19. Recall the twisted arrow category Tw(A) of an ∞-category A from [14]; note also that Tw(A) is precisely the Segal edge-wise subdivision of A from [16]. Thus Corollary 3.18 gives an alternative proof that the canonical map Tw(A) → A op × A is a left fibration (we hasten to point out that this proof proceeds along similar lines to the proof of Proposition 1.1 in [3]).
Remark 3.20. Suppose that X is an ∞-category and that x is an object of X. Observe that Tw(X)|( { x } × X) may be described as the diagonal of a certain bisimplicial set built from the slices of X under x. This bisimplicial set has a canonical augmentation over the constant bisimplicial set on X x/ , given in each degree n by a canonical map. In particular it follows that there is a homotopy equivalence between the fiber Tw(X)(x, y) and Hom L X (x, y) for all objects x and y of X.
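In display form (our reconstruction; the identification of n-simplices below uses the convention ∆ 2n+1 ≅ ∆ n ⋆ (∆ n ) op , whose join order is convention-dependent):

\[ \mathrm{Tw}(X)_n \;=\; \mathrm{Hom}\big(\Delta^n \star (\Delta^n)^{\mathrm{op}},\, X\big) \;=\; X_{2n+1}, \qquad \mathrm{Tw}(X)(x,y) \;\simeq\; \mathrm{Hom}^L_X(x,y). \]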
Remark 3.21. For a correspondence X ∈ Corr(A, B), the object a * X fits into a pullback square against the twisted arrow category Tw(X), where the lower horizontal map B op × A → X op × X is induced by the inclusions A ⊆ X and B ⊆ X.

Proposition 3.22. The adjunction (a ! , a * ) is a Quillen adjunction between the covariant model structure on (Set ∆ ) /(B op ×A) and the model structure for correspondences on Corr(A, B).

Proof. The functor a ! sends monomorphisms to monomorphisms, and hence a * sends trivial fibrations to trivial fibrations. We prove that a * sends fibrations between fibrant objects to fibrations.

In [2] Ayala and Francis prove that there is a categorical equivalence between the ∞-category Fun(B op × A, S) and an ∞-category of correspondences from A to B. The following theorem refines their result to a statement at the level of model categories (this latter statement is also certainly well-known; it is stated without proof in [10] and it is also stated as Remark 2.3.1.4 in [13]). We shall give a proof, since one has not appeared in the literature so far, and since we shall need some results obtained in the course of the proof for the proof of Theorem 3.25.

Theorem 3.23. The adjunction (a ! , a * ) is a Quillen equivalence between the covariant model structure on (Set ∆ ) /(B op ×A) and the model structure for correspondences on Corr(A, B).

Proof. We prove that (i) a * reflects weak equivalences between fibrant objects, and (ii) for every object X the composite X → a * a ! X → a * R a ! X is a covariant equivalence, where R a ! X denotes a fibrant replacement of a ! X in the model structure for correspondences on Corr(A, B).
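For the reader's convenience, the criterion being used is the standard one for a Quillen adjunction to be a Quillen equivalence (our gloss on the elided item (ii)): in addition to (i), the derived unit

\[ X \;\longrightarrow\; a^*(a_! X) \;\longrightarrow\; a^*(R\, a_! X) \]

should be a covariant equivalence for every object X (every object of (Set ∆ ) /(B op ×A) is cofibrant).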
We prove (i). Suppose that f : X → Y is a map between fibrant objects in Corr(A, B) such that a * X → a * Y is a covariant equivalence in (Set ∆ ) /(B op ×A) . We need to prove that f is a categorical equivalence. Therefore, we need to prove that f is essentially surjective and fully faithful. The essential surjectivity is immediate since f is a map between correspondences in Corr(A, B). To prove that f is fully faithful it suffices to prove that the induced map on mapping spaces Hom L X (a, b) → Hom L Y (a, b) is a weak homotopy equivalence for each pair of objects a ∈ A and b ∈ B. This follows immediately from Remark 3.20 and Remark 3.21.
We prove (ii). Factor the map a ! X → B ⋆ A as a composite a ! X → R a ! X → B ⋆ A, in which the first map is a categorical equivalence and R a ! X → B ⋆ A is an inner fibration. Thus R a ! X is a fibrant replacement of a ! X in the model structure for correspondences on Corr(A, B) (Theorem 3.9). We claim that a * sends categorical equivalences in Corr(A, B) to covariant equivalences in (Set ∆ ) /(B op ×A) . To see this we argue as follows: suppose that f : X → Y is a categorical equivalence in Corr(A, B). Factor the map Y → ∆ 0 so as to obtain an ∞-category Y ′ together with a map j : Y → Y ′ which is an acyclic cofibration for the Joyal model structure and a bijection on objects. It follows that j is inner anodyne (Lemma 2.19). We may factor the composite map jf in (Set ∆ ) /B⋆A as jf = f ′ j ′ , where f ′ : X ′ → Y ′ is an inner fibration, and where j ′ : X → X ′ is inner anodyne. We observe that the underlying simplicial map f ′ is a bijection on vertices, and hence is a categorical fibration. Therefore, since f ′ is a categorical equivalence, f ′ is a trivial Kan fibration. Hence σ * (f ′ ) is a trivial Kan fibration, since σ ! preserves monomorphisms (Lemma 3.13). It now suffices to prove the following claim: the functor σ * : (Set ∆ ) /B⋆A → (Set ∆ ) /(B op ×A) sends inner anodyne maps to left anodyne maps. The inner anodyne maps in (Set ∆ ) /B⋆A are a saturated class of monomorphisms, generated by the inner horn inclusions Λ n k → ∆ n in (Set ∆ ) /B⋆A . We need to calculate the image σ * (Λ n k ) → σ * (∆ n ) of such a horn inclusion under the functor σ * . The simplices of B ⋆ A are of the following three types: simplices lying in B = B ⋆ ∅, simplices lying in A = ∅ ⋆ A, and mixed simplices ∆ j ⋆ ∆ k → B ⋆ A. It follows that the inner horn inclusions in (Set ∆ ) /B⋆A are of the following types: Λ n k → ∆ n ⋆ ∅ with 0 < k < n; inner horns of mixed simplices whose missing vertex lies in the first join factor; inner horns of mixed simplices whose missing vertex lies in the second join factor; and Λ n k → ∅ ⋆ ∆ n with 0 < k < n. By Lemma 3.14 the image under σ * of each of the first and last of these types of morphism is the empty map, while the images under σ * of the second and third types of maps are left anodyne morphisms. This completes the proof of the claim.
To complete the proof of the theorem it suffices to prove that X → a * a ! X is a covariant equivalence in (Set ∆ ) /(B op ×A) . Equivalently, by Remark 3.15, it suffices to prove that X → σ * σ ! X is a covariant equivalence in (Set ∆ ) /(B op ×A) . We will prove that in fact this map is a left anodyne map in (Set ∆ ) /(B op ×A) .
Using the skeletal filtration of X, we see that by an induction argument we are reduced to the case where X is obtained from X ′ by adjoining a single n-simplex along an attaching map ∂∆ n → X ′ . We have a commutative diagram comparing the pushout decomposition of X with that of σ * σ ! X, in which the two right hand vertical maps are left anodyne by the inductive hypothesis, and where the left hand vertical map is the diagonal inclusion ∆ n → ∆ n × ∆ n (see Remark 3.15) and hence is left anodyne. Therefore it suffices by Lemma 3.24 below to prove that for any n ≥ 0 the square

∂∆ n → σ * σ ! ∂∆ n
↓              ↓
∆ n → σ * σ ! ∆ n

is a pullback. From Remark 3.15, the map ∆ n → σ * σ ! ∆ n is the diagonal inclusion ∆ n → ∆ n × ∆ n . Let us write δ n : ∆ n → ∆ n × ∆ n for this map. Clearly the square (3) displayed below, whose vertical maps are induced by the i-th face inclusion, is a pullback for any 0 ≤ i ≤ n. It follows that the square formed by the subobjects ∂ i ∆ n−1 ⊆ ∆ n and ∂ i ∆ n−1 × ∂ i ∆ n−1 ⊆ ∆ n × ∆ n is a pullback for any 0 ≤ i ≤ n. Since ∂∆ n is a union of the subobjects ∂ i ∆ n−1 of ∆ n and the functor σ * σ ! is a left adjoint, it follows that σ * σ ! ∂∆ n is a union of the subobjects ∂ i ∆ n−1 × ∂ i ∆ n−1 of ∆ n × ∆ n . The result then follows from the fact that the square (3) is a pullback for every 0 ≤ i ≤ n.
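Here is the square (3) referred to above, rendered explicitly (a reconstruction of the lost display; the vertical maps are the i-th face inclusion and its square, and δ denotes the diagonal):

\[ \begin{array}{ccc} \Delta^{n-1} & \xrightarrow{\ \delta_{n-1}\ } & \Delta^{n-1}\times\Delta^{n-1} \\ \downarrow & & \downarrow \\ \Delta^{n} & \xrightarrow{\ \delta_{n}\ } & \Delta^{n}\times\Delta^{n} \end{array} \tag{3} \]

It is a pullback because a simplex of ∆ n whose two diagonal components both factor through the i-th face must itself factor through the i-th face.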
Lemma 3.24. Suppose given a commutative diagram of simplicial sets, pictured below, consisting of maps f : A → A ′ , g : C → C ′ and h : B → B ′ together with maps C → A, C → B, C ′ → A ′ and C ′ → B ′ , in which the left hand square is a pullback and in which the maps f , g and h are left anodyne. Then the induced map A ∪ C B → A ′ ∪ C ′ B ′ is also left anodyne.
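The hypothesis of Lemma 3.24 can be pictured as follows (a reconstruction of the lost display):

\[ \begin{array}{ccccc} A & \longleftarrow & C & \longrightarrow & B \\ \downarrow f & & \downarrow g & & \downarrow h \\ A' & \longleftarrow & C' & \longrightarrow & B' \end{array} \qquad\leadsto\qquad A \cup_C B \;\longrightarrow\; A' \cup_{C'} B' . \]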
Proof. The induced map factors as A ∪ C B → A ′ ∪ C B → A ′ ∪ C ′ B ′ . The first map in this factorization is a pushout of the left anodyne map f : A → A ′ and, since left anodyne maps are preserved under pushouts, we see that it suffices to prove that the second map is left anodyne. This follows from the right cancellation property of left anodyne maps in Set ∆ (Corollary 4.1.2.2 of [13]), since A → A ′ is left anodyne by hypothesis and the left hand square is a pullback.

The functor a * has the distinction of being simultaneously a left and a right Quillen equivalence. Recall the adjoint pair (a * , a * ) (see Remark 3.16). We have the following theorem.

Theorem 3.25. The adjoint pair (a * , a * ) is a Quillen equivalence from the model structure for correspondences on Corr(A, B) to the covariant model structure on (Set ∆ ) /(B op ×A) .

Proof. We show first that the pair (a * , a * ) is a Quillen adjunction. Clearly a * sends monomorphisms to monomorphisms; and we have proved above (see the proof of Theorem 3.23) that a * sends categorical equivalences to covariant equivalences.
To prove that the Quillen pair (a * , a * ) is a Quillen equivalence it suffices to prove that the Quillen pair (a * a ! , a * a * ) is a Quillen equivalence. We have proven above (see the proof of Theorem 3.23) that the natural transformation X → a * a ! X is left anodyne for every X ∈ (Set ∆ ) /(B op ×A) . It follows easily that a * a ! reflects covariant equivalences. To complete the proof it suffices to prove that a * a * X → X is a covariant equivalence for every left fibration X → B op × A. We will prove that in fact this map is a trivial Kan fibration, by solving the evident lifting problem.

Recall from [13] that a map (p, q) : X → A × B is a bifibration if: (1) (p, q) is an inner fibration; and (2) for every n ≥ 1 certain outer horn lifting conditions hold. In particular the fiber X (a,b) of (p, q) over a pair of vertices (a, b) is a Kan complex.

Proof. Clearly bifibrations over A × B are stable under base change along maps of the form f × g : A ′ × B ′ → A × B. Therefore, pulling back along the map a × b : ∆ 0 → A × B, we see that the fiber X (a,b) is a Kan complex.
Bivariant anodyne maps.
In this section we introduce the concept of bivariant anodyne maps and study some of their properties. These maps are generated by three classes of horn inclusions in (Set ∆ ) /(A×B) , displayed below: the inner horns, the horns Λ n 0 → ∆ n whose structure map restricts to a degenerate edge on ∆ { 0,1 } in one factor, and the horns Λ n n → ∆ n where the restriction g : ∆ { n−1,n } → A of the structure map to the final edge is a degenerate edge.
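A plausible rendering of the three generating classes (the pairing of each outer horn with a factor of A × B is our assumption, inferred from the proofs below; (f, g) : ∆ n → A × B denotes the structure map):

\[ (1)\ \Lambda^n_i \subseteq \Delta^n\ (0<i<n); \qquad (2)\ \Lambda^n_0 \subseteq \Delta^n,\ g|_{\Delta^{\{0,1\}}}\ \text{degenerate in}\ B; \qquad (3)\ \Lambda^n_n \subseteq \Delta^n,\ f|_{\Delta^{\{n-1,n\}}}\ \text{degenerate in}\ A. \]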
We extend the original usage of the term bifibration in [13] to cover the following more general situation: a map p : X → Y in (Set ∆ ) /(A×B) is called a bifibration if it satisfies the lifting conditions spelled out in Remark 4.6.

Proposition 4.9. Consider the following classes of maps in (Set ∆ ) /(A×B) : (1) all inner horn inclusions Λ n i → ∆ n for 0 < i < n; (2) all horn inclusions Λ n 0 → ∆ n satisfying the degeneracy condition of Definition 4.4; (2 ′ ) all inclusions of the form Λ n 0 × ∆ 1 ∪ ∆ n × { 0 } → ∆ n × ∆ 1 such that g| { i } × ∆ 1 is a degenerate edge of B for every vertex i of ∆ n ; and (2 ′′ ) all inclusions of the same form such that g| { 0 } × ∆ 1 is a degenerate edge of B. Then the weakly saturated classes of morphisms in (Set ∆ ) /(A×B) generated by the classes (1) and (2), by the classes (1) and (2 ′ ), and by the classes (1) and (2 ′′ ) coincide.

Proof. The proof of this proposition is essentially the same as the proof of Proposition 3.1.1.5 of [13]. We give the details. To begin with, the weakly saturated class generated by (1) and (2 ′ ) is clearly contained in the weakly saturated class generated by (1) and (2 ′′ ). As in the proof of op. cit., one easily proves that the weakly saturated class generated by (1) and (2 ′′ ) is contained in the weakly saturated class generated by (1) and (2 ′ ). It follows that the weakly saturated class generated by (1) and (2 ′ ) is equal to the weakly saturated class generated by (1) and (2 ′′ ). We prove that every map in (2) is a retract of a map in (2 ′′ ). Suppose given a map Λ n 0 → ∆ n in the class (2). Observe that the composite map gr : ∆ n × ∆ 1 → B restricts to a degenerate edge gr| { i } × ∆ 1 of B for every vertex i of ∆ n ; this is clear from the definition of r if i ≥ 1, and follows from the assumption that g|∆ { 0,1 } is degenerate when i = 0. The maps j and r exhibit the inclusion Λ n 0 ֒→ ∆ n as a retract of the map Λ n 0 × ∆ 1 ∪ ∆ n × { 0 } ֒→ ∆ n × ∆ 1 in (Set ∆ ) /(A×B) , with structure map (f, g)r : ∆ n × ∆ 1 → A × B. From the discussion above, this map belongs to the class of maps (2 ′′ ). It follows that the weakly saturated class generated by (1) and (2) is contained in the weakly saturated class generated by (1) and (2 ′′ ).
We now prove that the weakly saturated class generated by (1) and (2 ′ ) is contained in the weakly saturated class generated by (1) and (2). Suppose given a map in (2 ′ ) of the form Λ n 0 × ∆ 1 ∪ ∆ n × { 0 } → ∆ n × ∆ 1 , in which the structure map (f, g) : ∆ n × ∆ 1 → A × B satisfies the condition that g| { i } × ∆ 1 is a degenerate edge of B for every vertex i of ∆ n . We have the standard filtration of ∆ n × ∆ 1 by subobjects X i for every i = 0, 1, . . . , n. The (n + 1)-simplex of X n obtained from X n−1 via the attaching map Λ n+1 0 → X n−1 corresponds to the (n + 1)-chain (0, 0) (0, 1) (1, 1) · · · (n, 1) of [n] × [1]. By assumption the edge g| { 0 } × ∆ 1 of B is degenerate. It follows easily that the map above belongs to the weakly saturated class generated by the maps (1) and (2).

Lemma 4.11. Let p : X → Y be a bifibration in (Set ∆ ) /(A×B) and let u : K → L be a monomorphism. Then the induced map map A×B (L, X) → map A×B (K, X) × map A×B (K,Y ) map A×B (L, Y ) is a Kan fibration between Kan complexes.

Proof. The induced map X N → X M × Y M Y N is an inner fibration by Corollary 2.3.2.5 of [13]. We prove that the induced map has the right lifting property against the class of maps (2) from Definition 4.4. By Proposition 4.9 it suffices to prove that the indicated diagonal filler exists in every commutative diagram of the relevant form; if u denotes the composite map, then π B N u|∆ 1 × { i } is a degenerate edge in B N for every vertex i of ∆ n . By adjointness, it is sufficient to prove that the indicated diagonal filler exists in the transposed commutative diagram. For every vertex i of ∆ n and for every vertex v of N , the resulting edge of B is degenerate, and so the required filler exists. Next we prove the assertion in the special case that p is a bifibration X → A × B. Suppose that u : K → L is a monomorphism in (Set ∆ ) /(A×B) . The induced map is an inner fibration. Since it is an inner fibration between Kan complexes it suffices by Lemma 2.18 to prove that it has the right lifting property with respect to the inclusion ∆ { 0 } ⊆ ∆ 1 . By adjointness, the indicated diagonal filler exists in the first diagram if and only if the indicated diagonal filler exists in the transposed diagram. The indicated diagonal fillers therefore exist by Proposition 4.9. It follows by Lemma 2.18 that the induced map above is a Kan fibration between Kan complexes. Finally, we prove the general form of the assertion. Suppose that p : X → Y is a bifibration and that u : K → L is a monomorphism. We use Lemma 2.18 again. The map map A×B (L, X) → map A×B (K, X) × map A×B (K,Y ) map A×B (L, Y ) is an inner fibration between Kan complexes by the results of the preceding paragraphs. Therefore we are reduced to proving that the indicated diagonal filler exists in any commutative diagram of the relevant form. By adjointness, this is equivalent to proving that the indicated diagonal filler exists in the induced diagram; this again follows from Proposition 4.9.

Let A denote the class of monomorphisms u : K → L in (Set ∆ ) /(A×B) such that the induced map map A×B (L, X) → map A×B (K, X) is a homotopy equivalence for all bifibrations X → A × B. By Lemma 4.11, the class A is equivalently the class of monomorphisms u : K → L in (Set ∆ ) /(A×B) such that the induced map above is a trivial Kan fibration for every bifibration X → A × B. It follows easily that A is weakly saturated.
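Spelled out, the equivalence just used reads as follows (with X ranging over bifibrations over A × B, and using the standard fact that a Kan fibration between Kan complexes is a homotopy equivalence if and only if it is a trivial Kan fibration):

\[ u \in \mathcal{A} \iff \mathrm{map}_{A\times B}(L,X) \to \mathrm{map}_{A\times B}(K,X)\ \text{is a homotopy equivalence} \iff \text{it is a trivial Kan fibration}. \]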
To complete the proof we will prove that A contains the classes (1), (2) and (3) from Definition 4.4. To show that the indicated diagonal filler in the relevant diagram exists, it suffices, by adjointness, to prove that X → A × B has the right lifting property against a canonical map (4), where N → A × B is the given structure map of the object N of (Set ∆ ) /(A×B) . It follows easily that the map (4) belongs to the class of maps (2 ′′ ) from Proposition 4.9, and hence the indicated diagonal filler can be found.
Recall that a fiberwise homotopy between maps f, g : X → Y in (Set ∆ ) /(A×B) is an edge in the simplicial set map A×B (X, Y ) (see Notation 2.1) between the vertices f and g. Recall that a map h : X → Y in (Set ∆ ) /(A×B) is said to be a fiberwise homotopy equivalence if there exists a map k : Y → X in (Set ∆ ) /(A×B) such that the maps hk and 1 Y , and likewise the maps kh and 1 X , are fiberwise homotopic.
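Equivalently, unwinding Notation 2.1 (the endpoint convention below is ours):

\[ h : X \times \Delta^1 \to Y\ \text{over}\ A\times B, \qquad h|_{X\times\{0\}} = f, \qquad h|_{X\times\{1\}} = g, \]

where X × ∆ 1 is regarded as an object of (Set ∆ ) /(A×B) via the projection to X.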
We have the following result.

Lemma 4.14. A fiberwise homotopy equivalence in (Set ∆ ) /(A×B) is a bivariant equivalence; conversely, a bivariant equivalence between bifibrations is a fiberwise homotopy equivalence.

Proof. To prove the first statement it suffices to prove that if h : X × ∆ 1 → Y is a fiberwise homotopy between maps f, g : X → Y in (Set ∆ ) /(A×B) , then h induces a homotopy between the maps f * , g * : map A×B (Y, Z) → map A×B (X, Z) for any bifibration Z → A × B. This follows easily from the fact that map A×B (M × ∆ 1 , Z) = map A×B (M, Z) ∆ 1 for any object M in (Set ∆ ) /(A×B) . We prove the second statement. Suppose that f : X → Y is a bivariant equivalence between bifibrations. Observe that the map f * : map A×B (Y, X) → map A×B (X, X) is a homotopy equivalence between Kan complexes, and hence there exists a map g : Y → X in (Set ∆ ) /(A×B) and an edge h in map A×B (X, X) between gf and 1 X . Hence gf and 1 X are fiberwise homotopic. The vertices f gf and f belong to the same path component in map A×B (X, Y ). Therefore, by the assumption on f , there exists an edge k in map A×B (Y, Y ) between the vertices f g and 1 Y . Hence f g and 1 Y are fiberwise homotopic. Hence f is a fiberwise homotopy equivalence.

Lemma 4.15. Every trivial fibration in (Set ∆ ) /(A×B) is a bivariant equivalence.

Proof. This follows immediately from Lemma 4.14, using the fact that a trivial fibration in (Set ∆ ) /(A×B) is a fiberwise homotopy equivalence.

Proposition 4.17. Let f : X → Y be a map between bifibrations X → A × B and Y → A × B. Then f is a bivariant fibration if and only if it has the right lifting property against every bivariant anodyne map.

Proof. If f : X → Y is a bivariant fibration then it has the right lifting property against every bivariant anodyne map in (Set ∆ ) /(A×B) , since a bivariant anodyne map is a bivariant equivalence (Lemma 4.13).
We prove the converse. Suppose that f : X → Y has the right lifting property against every bivariant anodyne map and that X → A × B, Y → A × B are bifibrations. Let M → N be a monic bivariant equivalence. We need to show that we can find the indicated diagonal filler in any commutative diagram of the form displayed below. From such a diagram we obtain a commutative diagram of mapping spaces in which each of the horizontal and vertical maps is a Kan fibration between Kan complexes by Lemma 4.11. Since M → N is a bivariant equivalence, the vertical maps are in fact trivial Kan fibrations. It follows from Lemma 4.11 that the induced map map A×B (N, X) → map A×B (M, X) × map A×B (M,Y ) map A×B (N, Y ) is a trivial Kan fibration. In particular it is surjective on vertices, which implies the existence of the sought-after diagonal filler in the diagram above.
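Schematically (our rendering of the lifting problem described above):

\[ \begin{array}{ccc} M & \longrightarrow & X \\ \downarrow & & \downarrow f \\ N & \longrightarrow & Y \end{array} \]

The filler N → X corresponds to a vertex of the fiber of the trivial Kan fibration above.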
Proposition 4.18. Let A and B be simplicial sets. The subcategory of bivariant equivalences in the category of morphisms ((Set ∆ ) /(A×B) ) [1] is an accessible subcategory.
Proof. We first prove that if a map f : X → Y in (Set ∆ ) /(A×B) is a bivariant fibration and a bivariant equivalence then it is a trivial fibration. Given such a map f , we factor it as f = pi, where i : X → X ′ is a monomorphism and where p : X ′ → Y is a trivial fibration in (Set ∆ ) /(A×B) . Then we have a commutative square exhibiting a lifting problem of i against f . By Lemma 4.15, the map p is a bivariant equivalence, hence i is a bivariant equivalence by 2-out-of-3. Therefore the indicated diagonal filler exists, and hence f is a retract of the trivial fibration p. It follows that f is a trivial fibration. The remainder of the proof proceeds in exactly the same fashion as the proof of Corollary A.2.6.6 of [13]; the small object argument shows the existence of a functor T : ((Set ∆ ) /(A×B) ) [1] → ((Set ∆ ) /(A×B) ) [1] together with a natural transformation 1 → T such that a morphism f is a bivariant equivalence if and only if T (f ) is.

Theorem 4.19. There is a left proper combinatorial model structure on (Set ∆ ) /(A×B) — the bivariant model structure — in which the cofibrations are the monomorphisms and the weak equivalences are the bivariant equivalences.

Proof. We use Proposition A.2.6.8 from [13]. The category (Set ∆ ) /(A×B) is presentable, so therefore we need to verify the conditions (1)-(5) from the statement of that proposition. The conditions (1)-(5) follow from the results established above.

Proposition 4.20. (1) The adjunction ((π B ) ! , π * B ) is a Quillen adjunction between the bivariant model structure on (Set ∆ ) /(A×B) and the contravariant model structure on (Set ∆ ) /B . (2) Likewise, ((π A ) ! , π * A ) is a Quillen adjunction between the bivariant model structure and the covariant model structure on (Set ∆ ) /A .

Proof. We prove statement (1); the proof of statement (2) follows by duality. It is clear that π * B sends trivial Kan fibrations in (Set ∆ ) /B to trivial Kan fibrations in (Set ∆ ) /(A×B) . Therefore it suffices by Proposition 4.17 and Remark 4.2 to prove that if X → Y is a right fibration in (Set ∆ ) /B between right fibrations X → B and Y → B, then A × X → A × Y is a bifibration in (Set ∆ ) /(A×B) . Clearly A × X → A × Y satisfies (1) and (3) of Remark 4.6, and therefore it suffices to prove that the indicated diagonal filler exists in every diagram of the relevant horn-filling form.

Definition 4.21. If X → A × B is an object of (Set ∆ ) /(A×B) and T → B is an object of (Set ∆ ) /B , then X/T is defined by the pullback of the canonical map X T → A T × B T along the map A → A T × B T , where the latter map is isomorphic to the product of the diagonal map A → A T and the constant map ∆ 0 → B T given by the structure map T → B.
Remark 4.22.
Suppose that X → Y is a map in (Set ∆ ) /(A×B) and that S → T is a map in (Set ∆ ) /B . There is a canonical commutative square in (Set ∆ ) /A with vertices X/T , X/S, Y /T and Y /S, and an induced map X/T → X/S × Y /S Y /T . The following lemma gives a sufficient criterion for the induced map from Remark 4.22 to be a left fibration.

Lemma 4.23. If X → Y is a bifibration in (Set ∆ ) /(A×B) and S → T is a monomorphism in (Set ∆ ) /B , then the induced map X/T → X/S × Y /S Y /T is a left fibration in (Set ∆ ) /A .
Proof. This follows from Remark 4.8, using the definitions above. As an application of this lemma we have the following useful proposition.

Proposition 4.24. For any object T → B of (Set ∆ ) /B , the functor (−)/T : (Set ∆ ) /(A×B) → (Set ∆ ) /A is right Quillen for the bivariant model structure and the covariant model structure on (Set ∆ ) /A .

Proof. It is clear that the functor (−)/T preserves trivial fibrations. Therefore it suffices to prove that (−)/T preserves fibrations between fibrant objects. Hence it suffices to prove that if X → Y is a bifibration in (Set ∆ ) /(A×B) , then X/T → Y /T is a left fibration in (Set ∆ ) /A . This follows immediately from Lemma 4.23, taking S = ∅.
In particular, taking T → B to be the identity map id B : B → B, we see that the functor π * A : (Set ∆ ) /A → (Set ∆ ) /(A×B) is left Quillen for the covariant model structure on (Set ∆ ) /A and the bivariant model structure on (Set ∆ ) /(A×B) . An analogous statement is true for the functor π * B . We record this observation in the following proposition.

Proposition 4.25. The functors π * A : (Set ∆ ) /A → (Set ∆ ) /(A×B) and π * B : (Set ∆ ) /B → (Set ∆ ) /(A×B) are left Quillen for the covariant and contravariant model structures respectively, and the bivariant model structure on (Set ∆ ) /(A×B) .

Proposition 4.26. Let f : X → Y be a bivariant fibration in (Set ∆ ) /(A×B) whose fibers are contractible Kan complexes. Then f is a trivial fibration.

Proof. We need to prove that the map f has the right lifting property against the inclusion ∂∆ n ⊆ ∆ n for all n ≥ 0. This is clear when n = 0, since the fibers of f are non-empty. Suppose n > 0. Consider a commutative diagram with horizontal maps ∂∆ n → X and ∆ n → Y . We want to show that the dotted arrow exists making the diagram commute. By a base change we may suppose that A = B = ∆ n and that Y = ∆ n . Let h : ∆ n × ∆ 1 → ∆ n denote the canonical projection. Let k : ∆ n × ∆ 1 → ∆ n be the canonical contraction of ∆ n onto its final vertex, so that k|∆ n × { 0 } = id ∆ n and k|∆ n × { 1 } is the constant map on the final vertex.
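On vertices, the maps h and k just introduced are given by the following (this display merely records the two conditions stated above):

\[ h(i,\varepsilon) = i, \qquad k(i,0) = i, \qquad k(i,1) = n \qquad (0 \le i \le n,\ \varepsilon \in \{0,1\}); \]

both assignments are order-preserving on [n] × [1] and hence define maps of simplicial sets.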
Proposition 4.27. A map f : X → Y between bifibrations over A × B is a bivariant equivalence if and only if it is a pointwise weak homotopy equivalence.

Proof. (⇒) If f is a bivariant equivalence between bifibrations then f is a fiberwise homotopy equivalence (Lemma 4.14), and hence induces a homotopy equivalence on each fiber.

(⇐) Let f : X → Y be a pointwise weak homotopy equivalence. Suppose first that f : X → Y is a bivariant fibration. Then f is a trivial fibration by Proposition 4.26, since the fibers of f are contractible. Hence f is a bivariant equivalence (Lemma 4.15). Now suppose that f is an arbitrary pointwise weak homotopy equivalence between bifibrations. Via the small object argument, we may factor f as f = hg, where h : X ′ → Y is a bifibration and where g : X → X ′ is a bivariant anodyne map. Then g is a bivariant equivalence by Lemma 4.13; since it is a bivariant equivalence between bifibrations it is a pointwise weak homotopy equivalence by the forward implication proved above. Hence h is a pointwise weak homotopy equivalence. By Proposition 4.17 we see that h is a bivariant fibration. Hence it is a bivariant equivalence by the special case we have proven above, and therefore f = hg is a bivariant equivalence by 2-out-of-3.
The following characterization of bivariant equivalences is anticipated by Theorem 2.3. This characterization is due to Joyal.
Theorem 4.28 (Joyal). Let A and B be simplicial sets and let f : X → Y be a map in (Set ∆ ) /(A×B) . The following statements are equivalent:
(i) f is a bivariant equivalence;

(ii) for every right fibration R → A and every left fibration L → B (for instance one obtained by factoring a map of simplicial sets into a left anodyne map followed by a left fibration), the induced map R × A X × B L → R × A Y × B L is a weak homotopy equivalence;

(iii) for every pair of vertices a ∈ A and b ∈ B, the induced map X × A×B (A /a × B b/ ) → Y × A×B (A /a × B b/ ) is a weak homotopy equivalence.

Proof. The proof that (ii) implies (iii) is trivial. We prove that (i) implies (ii). Suppose that f : X → Y is a bivariant equivalence in (Set ∆ ) /(A×B) . As in the proof of Proposition 4.18 above, we can find a commutative diagram in (Set ∆ ) /(A×B) in which the vertical arrows are bivariant anodyne maps and f ′ : X ′ → Y ′ is a trivial Kan fibration. It follows that, without loss of generality, we may suppose that f : X → Y is a bivariant anodyne map. Therefore we will prove that if f : X → Y is a bivariant anodyne map then the induced map R × A X × B L → R × A Y × B L is a weak homotopy equivalence. The class of all maps X → Y in (Set ∆ ) /(A×B) with this property is weakly saturated. Therefore it suffices to show that this class contains all maps of the form (1), (2) and (3) from Definition 4.4. By a base-change argument we may suppose that A = ∆ n . Let R → A be a right fibration. We will prove the following statement: if Λ n 0 → ∆ n is a map of the form (2) in (Set ∆ ) /(A×B) from Definition 4.4, then the image of the induced map R × A Λ n 0 → R × A ∆ n under the functor (π B ) ! is a contravariant equivalence in (Set ∆ ) /B . We use the theory of mapping simplexes (see Section 3.2.2 of [13]). There is a sequence φ : A n → · · · → A 1 → A 0 of composable morphisms between Kan complexes and a quasiequivalence M (φ) → R (see Definition 3.2.2.6 of [13]). We have a pullback diagram in which the horizontal maps are categorical equivalences by Proposition 3.2.2.10 of [13]. Therefore, it suffices to prove that (π B ) ! M (φ)|Λ n 0 → (π B ) ! M (φ) is a contravariant equivalence in (Set ∆ ) /B , since every categorical equivalence is a contravariant equivalence. But the map M (φ)|Λ n 0 → M (φ) forms part of a pushout diagram and hence is bivariant anodyne, since it is the pushout of the bivariant anodyne map A n × Λ n 0 → A n × ∆ n . This suffices to complete the proof, by (1) of Proposition 4.20. To see that the diagram (7) above is a pushout, observe that from the proof of Proposition 3.2.2.10 from [13] we have a pushout diagram involving M (φ ′ ), where φ ′ denotes the composable sequence φ ′ : A n−1 → · · · → A 1 → A 0 . It follows that the top square and the outer square in the composite diagram are pushouts, and hence so is the diagram (7).
Finally, suppose that (iii) holds; we will prove that (i) holds, i.e. f is a bivariant equivalence. Via the small object argument, we may find a commutative diagram in (Set ∆ ) /(A×B) in which the vertical maps are bivariant anodyne maps and X ′ , Y ′ are bifibrations with structure maps (p X ′ , q X ′ ) : X ′ → A × B and (p Y ′ , q Y ′ ) : Y ′ → A × B. Factor the maps ∆ 0 → A and ∆ 0 → B determined by vertices a and b into a right anodyne map followed by a right fibration, and a left anodyne map followed by a left fibration, respectively. We claim that the canonical comparison maps between the resulting pullbacks are weak homotopy equivalences. The first map above factors as a composite in which the map induced by the left anodyne map ∆ 0 → L b is a pullback of a left anodyne map (and hence is a weak homotopy equivalence). It follows that the first canonical map above is a weak homotopy equivalence. The proof that the second canonical map above is a weak homotopy equivalence is completely analogous. Therefore, under the hypothesis that (iii) holds, we see that X ′ → Y ′ is a pointwise weak homotopy equivalence (and hence f is a bivariant equivalence) if and only if the two vertical maps induce weak homotopy equivalences. This follows from the implication (i) implies (ii), which we have already proven.
The following is a very useful example of a bivariant equivalence: for every n-simplex ∆ n → A × B, the unit map ∆ n → d * d ! ∆ n — for the adjunction (d ! , d * ) introduced below — is a bivariant equivalence (Lemma 4.29).

Proof. We use the characterization of bivariant equivalences from Theorem 4.28. Let a ∈ A and b ∈ B be vertices. We have a commutative diagram comparing the two relevant pullbacks, from which the claim follows.

Remark 4.34. We could have obtained the bivariant model structure by taking a left Bousfield localization of the Joyal model structure on (Set ∆ ) /(A×B) at the set of horn inclusions Λ n 0 ⊆ ∆ n and Λ n n ⊆ ∆ n in (Set ∆ ) /(A×B) of the form described in Definition 4.1. However, this approach would require us to prove that every bifibration is a categorical fibration. This is straightforward to prove if A and B are ∞-categories, but it is not a priori obvious for arbitrary simplicial sets A and B.

Just as in the earlier case for Segal's subdivision functor in Section 3.3, the functor sd 2 can be used to relate the category (Set ∆ ) /(A×B) with the category of correspondences from A to B. The doubling functor δ induces a functor between the corresponding simplex categories, and hence an adjunction (δ ! , δ * ). Similarly we have an adjoint pair (d ! , d * ), where d * is the functor which sends an object X → A × B in (Set ∆ ) /(A×B) to the correspondence d * X whose set of n-simplices over an n-simplex ∆ n → B ⋆ A is computed using the subdivision sd 2 .
Remark 4.35. Analogous to Lemma 3.14, if f : C → A and g : D → B are maps of simplicial sets determining objects D ⋆ C and C × D of (Set ∆ ) /B⋆A and (Set ∆ ) /(A×B) respectively, then we have δ * (D⋆C) = C ×D. Note also that, analogous to Remark 3.15, the functor δ * : (Set ∆ ) /B⋆A → (Set ∆ ) /(A×B) sends the object D ⊔ C to the initial object ∅ of (Set ∆ ) /(A×B) . It follows that there is a natural isomorphism of functors d * d ! ≃ δ * δ ! .
Remark 4.36. The relationship between the subdivision functor sd 2 from [6] and the functor d * is as follows. If X ∈ Corr(A, B) then there is a pullback diagram exhibiting d * X as the pullback of sd 2 X → X × X along a map A × B → X × X, where the map A × B → B × A is the switch map which interchanges the two factors and where the map B × A → X × X is induced by the inclusions B ⊆ X and A ⊆ X. This is analogous to the relationship between a * X and the twisted arrow category, or Segal edgewise subdivision, of X (see Remark 3.21).
Remark 4.37. Recall (see Lemma 1.1 of [6] and Proposition (A.1) of [16]) that for any simplicial set X there are natural isomorphisms | sd 2 X| ≃ |X| and | Tw(X)| ≃ |X| on geometric realizations. In particular there is an isomorphism | Tw(X)| ≃ | sd 2 X|, natural in X. Recall also that there is a canonical isomorphism |X op | ≃ |X| between the geometric realization of a simplicial set and the geometric realization of the opposite simplicial set. We claim that the diagram displayed below commutes, where the left hand vertical map is the isomorphism mentioned above, and the right hand vertical map is the product of the canonical isomorphism |X op | ≃ |X| and the identity map on |X|. Since all of the functors involved commute with colimits, it suffices by naturality to prove the claim in the special case when X = ∆ n . Since all of the functors involved also commute with finite products, and ∆ n is a retract of (∆ 1 ) n , it suffices to prove the statement when X = ∆ 1 .
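Here is the square in question (our reconstruction of the lost display):

\[ \begin{array}{ccc} |\mathrm{Tw}(X)| & \longrightarrow & |X^{\mathrm{op}}| \times |X| \\ \downarrow & & \downarrow \\ |\,\mathrm{sd}_2 X\,| & \longrightarrow & |X| \times |X| \end{array} \]

where the left vertical map is the natural isomorphism |Tw(X)| ≅ |sd 2 X| and the right vertical map is the product of |X op | ≅ |X| with the identity.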
Proposition 4.38. The adjunction (d ! , d * ) is a Quillen adjunction between the bivariant model structure on (Set ∆ ) /(A×B) and the model structure for correspondences on Corr(A, B).

Our next aim is to prove that the Quillen adjunction from Proposition 4.38 is in fact a Quillen equivalence. We first need a preliminary result.

Lemma 4.39. If X → A × B is a bifibration, then the counit map d * d * X → X is a trivial Kan fibration.

Proof. By adjointness it suffices to prove that the induced pushout-corner map is an acyclic cofibration in the bivariant model structure for every boundary inclusion ∂∆ n ⊆ ∆ n in (Set ∆ ) /(A×B) .
The unit map ∆ n → d * d ! ∆ n is a bivariant equivalence (Lemma 4.29). Therefore, by a 2-out-of-3 argument, it suffices to prove that the unit map S → d * d ! S is a monic bivariant equivalence for every object S → A×B in (Set ∆ ) /(A×B) . Recall (Remark 4.35) that there is an isomorphism d * d ! S = δ * δ ! S.
Using the skeletal filtration of S we see that by an induction argument we are reduced to the case where S is obtained from S ′ by adjoining a single n-simplex along an attaching map ∂∆ n → S ′ in (Set ∆ ) /(A×B) . We have a commutative diagram in (Set ∆ ) /(A×B) of the form

∆ n ← ∂∆ n → S ′
↓           ↓           ↓
δ * δ ! ∆ n ← δ * δ ! ∂∆ n → δ * δ ! S ′

in which the middle and right hand vertical maps are monic bivariant equivalences by the induction hypothesis. A straightforward argument, using the fact that the bivariant model structure is left proper, shows that the induced map S = ∆ n ∪ ∂∆ n S ′ → δ * δ ! S = δ * δ ! ∆ n ∪ δ * δ ! ∂∆ n δ * δ ! S ′ is a bivariant equivalence. To close the inductive loop we need to prove that S → d * d ! S is monic. For this it suffices to prove that for any n ≥ 0 the square

∂∆ n → δ * δ ! ∂∆ n
↓                 ↓
∆ n → δ * δ ! ∆ n

is a pullback in (Set ∆ ) /(A×B) . The proof of this is completely analogous to the proof of the corresponding fact in the proof of Theorem 3.23 and is omitted.

Theorem 4.40. The Quillen adjunction (d ! , d * ) of Proposition 4.38 is a Quillen equivalence.

Proof. From Lemma 4.39 we have that the counit map d * d * X → X is a trivial Kan fibration whenever X → A × B is a bifibration. Therefore it suffices to prove that d * reflects weak equivalences. Suppose then that X → Y is a map in Corr(A, B) such that the image d * X → d * Y is a bivariant equivalence. Then, by Theorem 4.28, we have that the induced map d * X × A×B (A /a × B b/ ) → d * Y × A×B (A /a × B b/ ) is a weak homotopy equivalence for all vertices a ∈ A and b ∈ B. From Remark 4.36 we see that there is an isomorphism relating these pullbacks to pullbacks of sd 2 X. Using Remark 4.37 together with the fact that there is an isomorphism (B b/ ) op ≃ B op /b , we see that there is an isomorphism |d * X × A×B (A /a × B b/ )| ≃ |a * X| × |B op |×|A| (|B op /b | × |A /a |), natural in the correspondence X. It follows that for any vertices a ∈ A and b ∈ B, the induced map
|d * X × A×B (A /a × B b/ )| → |d * Y × A×B (A /a × B b/ )| is a weak homotopy equivalence if and only if the induced map |a * X × B op ×A (B op /b × A /a )| → |a * Y × B op ×A (B op /b × A /a )| is a weak homotopy equivalence. Therefore, d * X → d * Y is a bivariant equivalence if and only if a * X → a * Y is a covariant equivalence, using Theorem 2.3. Therefore X → Y is a weak equivalence in the correspondence model structure, since a * reflects weak equivalences (Theorem 3.23).

Finally, there is a functor Γ : Corr(A, B) → (Set ∆ ) /(A×B) which sends a correspondence X ∈ Corr(A, B) to its simplicial set of sections Γ(X) = map ∆ 1 (∆ 1 , X). The structure map Γ(X) → A × B is induced by the inclusion ∂∆ 1 ⊆ ∆ 1 . The functor Γ has a left adjoint C : (Set ∆ ) /(A×B) → Corr(A, B) which sends an object (f, g) : X → A × B in (Set ∆ ) /(A×B) to the correspondence obtained by gluing B ⊔ A onto the cylinder X × ∆ 1 , where the map X × ∂∆ 1 → B ⊔ A restricts to g on X × { 0 } and restricts to f on X × { 1 }.
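A plausible rendering of the elided formula for C(X) (our reconstruction, with the gluing of the two ends of the cylinder as indicated by the surrounding text):

\[ C(X) \;=\; (B \sqcup A) \,\cup_{X \times \partial\Delta^1}\, \big(X \times \Delta^1\big), \]

so that C(X) contains B and A as the images of the two ends of the cylinder X × ∆ 1 .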
We then have the following result from [2]: the adjunction (C, Γ) is a Quillen equivalence between the bivariant model structure on (Set ∆ ) /(A×B) and the model structure for correspondences on Corr(A, B). We only sketch the proof, since an ∞-categorical version can be found in [2].
Sketch of proof. The proof that Γ is a right Quillen functor is a straightforward modification of the proof of Proposition 2.4.7.10 of [13]. It can be shown that the functor C reflects weak equivalences (see [2]). The result then follows from Proposition B.3.17 of [14].
"year": 2018,
"sha1": "e02f89d88bca6a2feb943f6a7b0f1501240b47d9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e02f89d88bca6a2feb943f6a7b0f1501240b47d9",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.